Role Title: Software Engineering Advisor
Position Summary:
Data Engineer on the Data Integration team
Job Description & Responsibilities:
- Work with business and technical leadership to understand requirements.
- Design solutions to meet the requirements and document the designs.
- Write production-grade, performant code for data extraction, transformation, and loading using Spark and PySpark.
- Perform data modeling as needed to meet the requirements.
- Write performant queries in Teradata SQL, Hive SQL, and Spark SQL against Teradata and Hive.
- Implement DevOps pipelines to deploy code artifacts onto designated platforms/servers, such as AWS.
- Troubleshoot issues, provide effective solutions, and monitor jobs in the production environment.
- Participate in sprint planning sessions, refinement/story-grooming sessions, daily scrums, demos, and retrospectives.
Experience Required:
- 8-10 years of overall experience
Experience Desired:
- Strong development experience in Spark, PySpark, shell scripting, and Teradata.
- Strong experience writing complex, effective SQL (Teradata SQL, Hive SQL, and Spark SQL) and stored procedures.
- Healthcare domain knowledge is a plus
Education and Training Required:
Primary Skills:
- Extensive hands-on experience with Databricks for data lake implementations.
- Experience with Agile and working knowledge of DevOps tools (Git, Jenkins, Artifactory).
- AWS (S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch)
- Databricks (Delta Lake, Notebooks, Pipelines, cluster management, Azure/AWS integration)
Additional Skills:
- Experience with Jira and Confluence
- Exercises considerable creativity, foresight, and judgment in conceiving, planning, and delivering initiatives.