Role: Sr. AWS Data Engineer
Total Experience: 5-7 years
Job Location: Gurgaon
Job Purpose:
- Create project technical documentation
- Design solution architecture; work on data ingestion, preparation, and transformation; debug production failures and identify solutions
- Build efficient frameworks for development and testing (AWS DynamoDB, EKS, Kafka, Kinesis/Spark/Streaming/Python) to enable seamless data ingestion and processing on the Hadoop platform
- Drive data governance and data discovery on the cloud platform
- Build data processing frameworks using Spark, Glue, PySpark, and Kinesis (see the sketch after this list)
- Implement a data security framework on the AWS cloud
- Automate data pipelines using DevOps (CI/CD) tools or CDK/CloudFormation
- Set up a job monitoring framework on AWS using CloudWatch/ELK
- Handle structured, semi-structured, unstructured, and streaming data
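
For illustration, a streaming ingestion job of the kind described above might look like the minimal PySpark sketch below. It assumes Spark Structured Streaming with the Kafka connector (the spark-sql-kafka-0-10 package); the broker address, topic name, event schema, and S3 paths are hypothetical placeholders, not details of this role.

    # Minimal sketch: consume JSON events from Kafka and land them as Parquet.
    # Broker, topic, schema, and paths below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

    # Assumed event contract; replace with the real schema.
    schema = StructType([
        StructField("order_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("ts", StringType()),
    ])

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")  # placeholder endpoint
           .option("subscribe", "orders")                     # placeholder topic
           .load())

    # Kafka values arrive as bytes; cast, parse the JSON, and flatten.
    events = (raw.selectExpr("CAST(value AS STRING) AS json")
              .select(F.from_json("json", schema).alias("e"))
              .select("e.*"))

    # Land parsed events as Parquet for downstream Glue/EMR jobs.
    (events.writeStream
     .format("parquet")
     .option("path", "s3a://example-bucket/orders/")             # placeholder
     .option("checkpointLocation", "s3a://example-bucket/chk/")  # placeholder
     .start()
     .awaitTermination())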
Must-Have Technical & Soft Skills:
- Hands-on working experience with data processing at scale in event-driven systems and message queues (Kafka/Flink/Spark Streaming)
- Experience with data ingestion frameworks/tools, e.g. Apache Airflow, Sqoop, Glue, Informatica
- Hands-on data solutioning experience with big-data technologies (AWS preferred)
- Hands-on experience with AWS DynamoDB/RDS, EKS, Kafka, Kinesis, Glue, EMR, Redshift
- Experience with a programming language such as Python with Spark
- Hands-on working experience with AWS Athena
- Data warehouse exposure to AWS Redshift Spectrum
- Knowledge of ML models on AWS data pipelines
- Data engineering/data processing for model development
- Experience gathering and processing raw data at scale, including writing scripts, web scraping, calling APIs, and writing SQL queries (see the sketch after this list)
- Experience building data pipelines for structured/unstructured, real-time/batch, and event/synchronous/asynchronous data
- Working experience analyzing source-system data and data flows, and working with structured and unstructured data
- Should be very strong in writing SQL queries
- Strong technical, analytical, and problem-solving skills
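
Purely as an illustration of the raw-data gathering listed above (calling an API, staging records, querying with SQL), a minimal Python sketch follows; the endpoint URL, field names, and table name are hypothetical placeholders.

    # Minimal sketch: pull records from an API, stage them, query with SQL.
    # The URL, fields, and table below are hypothetical placeholders.
    import sqlite3
    import requests

    resp = requests.get("https://api.example.com/v1/orders", timeout=30)
    resp.raise_for_status()
    records = resp.json()  # assumed shape: list of {"order_id": ..., "amount": ...}

    conn = sqlite3.connect(":memory:")  # stand-in for a real warehouse target
    conn.execute("CREATE TABLE orders (order_id TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders (order_id, amount) VALUES (:order_id, :amount)",
        records,
    )

    # Typical aggregation over the staged data.
    for order_id, total in conn.execute(
        "SELECT order_id, SUM(amount) FROM orders GROUP BY order_id"
    ):
        print(order_id, total)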
If interested, please share your resume at [Confidential Information].