Applicants can submit their CV directly by email to Sunil Chandran: [Confidential Information]
Experience : 6 - 12 Years
Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines to extract, transform, and load (ETL) data from various sources into data lakes and data warehouses.
- Design and develop microservices.
- Collaborate with data scientists, analysts, and cross-functional teams to design data models, database schemas, and data storage solutions.
- Implement data integration and data quality processes to ensure the accuracy and reliability of data for analytics and reporting.
- Optimize data storage, processing, and querying performance for large-scale datasets.
- Enable advanced analytics and machine learning capabilities on the data platform.
- Continuously monitor and improve data quality and data governance practices.
- Stay up to date with the latest data engineering trends, technologies, and best practices.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8+ years of proven experience in data engineering, data warehousing, and ETL processes.
- Proficiency in data engineering tools and technologies such as SQL, Python, Spark, Hadoop, DBT, Airflow, Apache Kafka, and Presto.
- Solid experience with table formats such as Delta Lake or Apache Iceberg.
- Design and development experience with batch and real-time streaming infrastructure and workloads.
- Solid experience implementing data lineage, data quality, and data observability for big data workflows.
- Solid experience designing and developing microservices and distributed architecture.
- Hands-on experience with GCP ecosystem and data lakehouse architectures.
- Strong experience with implementing Databricks.
- Strong experience with container technologies such as Docker, Kubernetes.
- Strong understanding of data modeling, data architecture, and data governance principles.
- Excellent experience with DataOps principles and test automation.
- Familiarity with data processing and querying using distributed systems and NoSQL databases.
- Ability to optimize and tune data processing and storage for performance and cost-efficiency.
- Excellent problem-solving skills and the ability to work on complex data engineering challenges.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
- Previous experience mentoring and guiding junior data engineers is a plus.
- Relevant certifications in data engineering or cloud technologies are desirable.
Nice to have:
- Experience working in a Data Mesh architecture.
- Supply Chain domain experience.
Special requirements:
- Support US Data platform teams
- Monitor and manage platform infrastructure
- Assist with platform deployment and maintenance
- Support Insights, Control Tower, and other production needs
- Automate data pipelines and enable efficient data integration
Skills required: Strong experience with platform support and operations, and with cloud infrastructure management. Strong understanding of Insights, Control Tower, and other products that the data platform supports. Expertise in data engineering.