We are seeking a seasoned Data Engineer to join our team. As a pivotal member of the data engineering group, you will design, implement, and optimize data pipelines and ensure seamless integration with cloud platforms. The ideal candidate has a strong command of Python, a solid grounding in data engineering principles, and a proven track record of delivering scalable solutions in cloud environments.
Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines using Python and cloud-based technologies.
- Implement Extract, Transform, Load (ETL) processes to move data from diverse sources into our cloud-based data warehouse (a minimal illustrative sketch follows this list).
- Utilize cloud platforms (e.g., Google Cloud, AWS, Azure) to deploy, manage, and optimize data engineering solutions.
- Leverage cloud-native services for storage, processing, and analysis of large datasets.
- Collaborate with data scientists, analysts, and other stakeholders to design effective data models that align with business requirements.
- Ensure the scalability, reliability, and performance of the overall data infrastructure on cloud platforms.
- Continuously optimize data processes for improved performance, scalability, and cost-effectiveness in a cloud environment.
- Monitor and troubleshoot issues, ensuring timely resolution and minimal impact on data availability.
- Work closely with data scientists, analysts, and other cross-functional teams to understand data requirements, identify and address data quality issues, and provide technical support.
- Create and maintain comprehensive documentation for data engineering processes, cloud architecture, and pipelines.
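To illustrate the kind of pipeline work this role involves, here is a minimal ETL sketch in Python. It assumes a hypothetical CSV source, a placeholder connection string, and invented table and column names; it is a sketch of the pattern, not a description of our actual stack.

```python
# Minimal ETL sketch: extract from a CSV source, apply a simple
# transformation, and load into a warehouse table via SQLAlchemy.
# File path, connection string, and table name are hypothetical.
import pandas as pd
from sqlalchemy import create_engine


def extract(path: str) -> pd.DataFrame:
    """Read raw records from a source file (stand-in for any source)."""
    return pd.read_csv(path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize column names and drop rows missing a primary key."""
    df.columns = [c.strip().lower() for c in df.columns]
    return df.dropna(subset=["id"])


def load(df: pd.DataFrame, conn_str: str, table: str) -> None:
    """Append the transformed records to a warehouse table."""
    engine = create_engine(conn_str)
    df.to_sql(table, engine, if_exists="append", index=False)


if __name__ == "__main__":
    records = transform(extract("raw_orders.csv"))
    load(records, "sqlite:///warehouse.db", "orders")
```

In a production pipeline the same three stages would typically be orchestrated by a scheduler and pointed at a cloud warehouse rather than a local database.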
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- At least 1 year of experience in data engineering, including end-to-end implementation of data pipelines on cloud-based data platforms.
- Proven expertise in designing and implementing Extract, Transform, Load (ETL) processes.
- Experience with Snowflake and knowledge of transforming data using dbt (Data Build Tool).
- Strong programming skills in Python and PySpark; Java and/or Scala preferred.
- Strong understanding of data modeling principles and experience in designing effective data models.
- Experience in implementing data quality checks and validation processes (an illustrative sketch follows this list).
- Strong problem-solving and critical-thinking abilities.
- Excellent communication skills, both written and verbal.
- Ability to work collaboratively in a cross-functional team environment.
- Attention to detail and commitment to delivering high-quality solutions.
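As an illustration of the data quality checks mentioned above, the following Python sketch runs a few simple validations on a pandas DataFrame. The column names, rules, and the 1% null tolerance are invented for illustration only.

```python
# Illustrative data quality checks on a pandas DataFrame.
# Column names and thresholds are hypothetical examples.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means all checks pass."""
    failures = []
    if df["id"].duplicated().any():
        failures.append("duplicate primary keys found in 'id'")
    if df["amount"].lt(0).any():
        failures.append("negative values found in 'amount'")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # hypothetical 1% tolerance
        failures.append(f"'customer_id' null rate {null_rate:.1%} exceeds 1%")
    return failures


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"id": [1, 2, 2], "amount": [10.0, -5.0, 7.5], "customer_id": [101, None, 103]}
    )
    for failure in run_quality_checks(sample):
        print("CHECK FAILED:", failure)
```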
Location: Kochi
If you meet these qualifications and are interested in joining our team, apply with your resume to [Confidential Information].