Job Title: Data Engineer with Python, Azure, SQL
Location: Remote (PAN India)
Duration: 6 Months with possible extension
Employment Type: C2C
Working Hours: 2:00 PM to 11:00 PM IST
Interview Process:
- 1st round: 30-minute internal screening
- 2nd round: 30 minutes with an SME
- 3rd round: 45 to 60-minute final interview
Responsibilities:
- Develop and maintain applications using Python.
- Utilize SQL to retrieve, manipulate, cleanse, and store data.
- Leverage PySpark for data processing tasks.
- Define, implement, debug, and optimize data integration mappings and scripts from various data sources.
- Conduct unit testing, system integration testing, regression testing, and assist with user acceptance testing.
- Collaborate with business stakeholders to develop documentation and communication materials that support accurate data usage and interpretation.
- Work independently or as part of a team to deliver data warehouse projects.
- Adhere to established standards and best practices and provide input for process improvement.
- Maintain a high-volume data warehouse application.
- Handle multiple tasks simultaneously and manage conflicting priorities effectively.
- Adapt to a quickly changing environment with minimal supervision.
Technical Skills & Competencies:
- Strong Python and PySpark development experience.
- Proficient in Azure Services/Architecture, with extensive experience in Azure Data Factory and Azure Databricks.
- Strong experience in data ingestion and data transformation.
- Proficient in SQL.
- Excellent communication and presentation skills.
If you are a self-motivated team player who thrives in a fast-paced environment and meets the above requirements, we would love to hear from you. Apply now by sharing your CV at [Confidential Information] or via WhatsApp at 9109436045.