Job Title: Senior Data Engineer
Location: Bangalore / Mumbai
Company: Saviynt
About Us: Saviynt is a leading provider of Cloud Security and Identity Governance solutions.
Saviynt enables enterprises to secure applications, data, and infrastructure in a single platform
for Cloud (Office 365, AWS, Azure, Salesforce, Workday) and Enterprise (SAP, Oracle EBS) environments.
The company is pioneering Identity Governance and Administration (IGA) by integrating advanced
risk analytics and intelligence with fine-grained privilege management.
Job Description: As a Senior Data Engineer, you will be responsible for designing, developing,
and maintaining scalable data pipelines and infrastructure for both batch and real-time
processing. You will collaborate closely with cross-functional teams to understand complex data
needs, implement robust data solutions, and ensure data quality and reliability. The ideal
candidate will have a strong background in software engineering and data management, with a
passion for building data-driven applications and systems.
Responsibilities
- Design and implement end-to-end data pipelines that are scalable, reliable, and efficient.
- Develop data architectures for streaming and batch data processing.
- Optimize data infrastructure and platform performance.
- Ensure data quality and reliability through automated testing and monitoring.
- Collaborate with data scientists and analysts to understand data requirements and
deliver actionable insights.
- Mentor junior engineers and promote best practices in software development and data
management.
- Stay current with the latest technologies and trends in data engineering and recommend
promising ones for adoption.
Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a similar role.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with cloud platforms (AWS, Azure, GCP) and their data services (e.g., S3,
Redshift, BigQuery).
- Experience working with Apache Iceberg.
- Strong knowledge of distributed computing frameworks (e.g., Spark, Hadoop).
- Solid understanding of relational and NoSQL databases.
- Experience with data pipeline orchestration tools (e.g., Airflow, Luigi).
- Excellent problem-solving skills and the ability to work independently as well as part of a
team.
- Strong communication skills and ability to collaborate effectively with cross-functional
teams.