We are seeking an experienced Senior Data Engineer with a background in data integration, ETL processes, and data warehousing. The ideal candidate will have 5-7 years of experience in data engineering, with advanced knowledge of data architecture, pipeline creation, and big data technologies.
This role demands a proactive individual skilled in designing, building, and maintaining data systems, collaborating with cross-functional teams, and ensuring the highest standards of data quality and performance.
Job Responsibilities
- Design, develop, and maintain scalable data pipelines for data ingestion, processing, and storage.
- Build and optimize data architectures and data models for efficient data storage and retrieval.
- Develop complex ETL processes to transform and load data from various sources into data warehouses and data lakes.
- Ensure data integrity, quality, and security across all data systems.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
- Monitor, troubleshoot, and optimize data pipelines and workflows to ensure high availability and performance.
- Document data processes and architectures, including data flow diagrams.
Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5-7 years of experience in data engineering and data architecture.
- Proficiency in SQL and at least one programming language (e.g., Python, Java, Scala).
- Advanced experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data services.
- Strong knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
- Expertise in data modeling, data structures, and database design.
- Strong analytical and problem-solving skills, with the ability to handle complex data challenges.
- Excellent communication and collaboration skills, with the ability to work independently and as part of a team.
- Experience with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
- Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).
- Knowledge of data governance and best practices in data management.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Experience with data visualization tools (e.g., Tableau, Power BI).
Benefits: This role offers the flexibility of fully remote work within India.