*Role: Principal Data Engineer*
*Location: Delhi*
*Mode: Hybrid*
*Type: Contract*

*Job Description:*

We are seeking a *Principal Data Engineer* with strong enterprise-level *data architecture and engineering* experience, specifically within the *logistics and manufacturing domains*. This role is heavily focused on *architecting and developing* scalable data solutions using the *Azure ecosystem*, including *Databricks*, *ADF*, *Synapse*, and *PySpark*.

*Key Responsibilities:*

* 90% focus on data architecture and hands-on development
* 10% involvement in stakeholder collaboration
* Design and build scalable data pipelines and models in Azure
* Implement best practices in data engineering and architecture

*Required Skills:*

* Strong experience in *Azure Data Engineering* (ADF, Databricks, Synapse)
* Proficient in *PySpark*, *Python*, and *SQL*
* Solid understanding of *data modeling* (conceptual/logical/physical)
* Experience with tools such as *Erwin*, *Toad*, and *Snowflake* is a plus
* Domain expertise in *logistics/supply chain or the HR domain*

Job Type: Permanent

Pay: ₹3,000,000.00 - ₹4,000,000.00 per year

Experience:

* Databricks: 4 years (Required)
* Total: 10 years (Required)
* ADF: 3 years (Required)
* PySpark: 2 years (Required)
* Python: 2 years (Required)
* Data Modelling: 3 years (Required)
* Snowflake tools: 3 years (Required)
* HR domain: 2 years (Required)

Work Location: In person