End-to-End Data Engineering
We’re looking for a hands-on Cloud Data Engineer who’s an expert in Python, PySpark, and SQL — with proven experience building end-to-end data pipelines in Azure using Data Factory, Synapse, and Databricks.
This role blends strong technical skills with sharp business understanding — ideal for someone who loves solving problems, designing scalable data solutions, and working closely with business teams.
Please note: Candidates must be ready to join within 2 weeks of completing the interview process.
Key Responsibilities:
End-to-End Data Engineering
- Build and optimize data pipelines and ETL processes using ADF, Synapse, and Databricks.
- Develop high-performance data transformations using Python, PySpark, and advanced SQL.
- Design and implement Lakehouse / Medallion architecture on Azure.
- Create data models and Lakehouse structures to support analytics and BI initiatives.
- Work directly with business stakeholders to gather requirements and translate them into scalable technical solutions.
- Ensure data quality, governance, and performance optimization across large-scale datasets.
Customer Interaction & Technical Documentation
- Interact with clients and business stakeholders to gather and analyze data requirements for building customized solutions.
- Create clear and concise technical specification documents, detailing the architecture, data flow, and integration plans for project delivery.
CI/CD & Automation
- Implement and manage CI/CD pipelines for data engineering projects, ensuring continuous integration and delivery of data processing and ETL jobs.
- Automate data workflows and operationalize data processes, ensuring high performance and reliability.
Leadership & Mentorship
- Lead and mentor junior data engineers, fostering a collaborative environment for learning and development.
- Provide technical leadership and guidance throughout the project lifecycle, ensuring best practices are adhered to in all stages.
Required Skills & Experience:
- Experience: 3–5 years in data engineering roles, preferably with at least 2 years in a hands-on role.
- Azure Ecosystem: In-depth experience with Azure Data Factory, Azure Databricks, Azure Synapse Analytics (formerly Azure SQL Data Warehouse), and Data Lake Storage.
- Data Engineering Concepts: Strong understanding of end-to-end data engineering concepts, including ETL pipelines, data integration, and real-time data processing.
- Dimensional Modeling & Data Warehousing: Solid experience with dimensional modeling and designing scalable data warehousing solutions.
- Lakehouse Architecture & Medallion Architecture: Practical experience with implementing Lakehouse architecture and Medallion architecture patterns on Azure.
- Security & Governance: Experience designing data governance frameworks, ensuring data security and compliance with industry standards.
- CI/CD: Proficiency in setting up and maintaining CI/CD pipelines, automating deployment processes for data engineering.
- On-prem & Cloud Databases: Experience managing both on-premises and cloud-based large-scale databases, ensuring performance, security, and scalability.
- Customer Interaction: Excellent communication skills with the ability to gather business requirements, create technical specs, and ensure stakeholder satisfaction.
Preferred Skills:
- Certifications: Azure/AWS Data Engineer or similar certifications are strongly preferred.
Personal Attributes:
- Problem-Solving: Strong analytical and troubleshooting skills.
- Collaborative: Ability to work effectively with cross-functional teams and mentor junior engineers.
- Detail-Oriented: Strong attention to detail with a practical approach to complex data engineering challenges.
Salary Range: Negotiable, depending on experience and interview performance.
Time Preferred: Night shift until 3:00 AM IST (mandatory, no exceptions).
PTO: 18 days/year plus 10 public holidays.
Excellent conversational English skills are essential.