Responsibilities
As an experienced member of the team, you will:
- Contribute to the evolving technical direction of analytical systems and play a critical role in their design and development.
- Research, design, code, troubleshoot, and support; what you create is also what you own.
- Focus on the performance, cost efficiency, reliability, security, and high availability of the products, modules, and features you own.
- Develop the next generation of automation tools for monitoring and measuring data quality, with associated user interfaces.
- Broaden your technical skills in an environment that thrives on creativity, efficient execution, and product innovation.
Basic Qualifications
- Bachelor's degree or higher in an analytical area such as Computer Science, Physics, Mathematics, Statistics, Engineering or similar.
- 5+ years of relevant professional experience in Data Engineering and Business Intelligence.
- 5+ years with advanced SQL (analytical functions, window functions, rollups/cubes, complex joins, and complex scan and join methods), ETL, and data warehousing.
- Strong knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures, data modeling, and performance tuning.
- Ability to effectively communicate with both business and technical teams.
- Excellent coding skills in Java, Python, C++, or an equivalent object-oriented programming language.
- Understanding of relational and non-relational databases and data stores.
- Proficiency with at least one scripting language: Perl, Python, Ruby, or shell script.
Preferred Qualifications
- Experience with building data pipelines from application databases.
- Experience with AWS services: S3, Redshift Spectrum, EMR, Glue, Athena, ELK, AWS Lambda, and Step Functions.
- Experience working with Data Lakes & Data Mesh.
- Experience providing technical leadership and mentoring other engineers on data engineering best practices.
- Sharp problem-solving skills and ability to resolve ambiguous requirements.
- Experience working with both structured and unstructured data at big-data volumes and internet scale.
- Knowledge of and experience working with Hive, Spark, Kafka, and the Hadoop ecosystem.
- Knowledge of and experience coding with PySpark and SparQL.
- Experience working with Data Science teams.