Job Title: AWS Data Engineer
Location: Hyderabad
Mode of Interview: L1 - Virtual, L2 - Face-to-Face (F2F)
Job Description
Design, develop, and maintain scalable data pipelines using AWS services (e.g., AWS Glue, Amazon Kinesis) and PySpark for efficient data processing.
Create and manage ETL processes using Python and SQL to ingest, transform, and load data into data lakes and warehouses.
Design and optimize AWS data architecture (e.g., Amazon S3, Amazon Redshift) for effective data storage, retrieval, and analysis.
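The ingest-transform-load pattern described above can be sketched in plain Python. This is an illustrative example only, not part of the role requirements: the staging and fact table names are hypothetical, and the stdlib sqlite3 module stands in for a warehouse such as Amazon Redshift (in production this pattern would typically run in AWS Glue with PySpark).

```python
import sqlite3

# Hypothetical raw feed: (order_date, product, qty, unit_price) records
# as they might land from an upstream source before transformation.
raw_orders = [
    ("2024-01-01", "widget", 3, 9.99),
    ("2024-01-01", "gadget", 1, 24.50),
    ("2024-01-02", "widget", 2, 9.99),
]

conn = sqlite3.connect(":memory:")  # in-memory stand-in for the warehouse

# Extract: land the raw rows unchanged in a staging table.
conn.execute(
    "CREATE TABLE stg_orders "
    "(order_date TEXT, product TEXT, qty INTEGER, unit_price REAL)"
)
conn.executemany("INSERT INTO stg_orders VALUES (?, ?, ?, ?)", raw_orders)

# Transform + Load: aggregate daily revenue per product with SQL and
# write the result into a fact table.
conn.execute(
    "CREATE TABLE fct_daily_revenue "
    "(order_date TEXT, product TEXT, revenue REAL)"
)
conn.execute(
    """
    INSERT INTO fct_daily_revenue
    SELECT order_date, product, SUM(qty * unit_price)
    FROM stg_orders
    GROUP BY order_date, product
    """
)

rows = conn.execute(
    "SELECT order_date, product, revenue FROM fct_daily_revenue "
    "ORDER BY order_date, product"
).fetchall()
print(rows)
```

The same staging-then-aggregate structure carries over to PySpark, where the staging table becomes a DataFrame and the SQL runs via Spark SQL against much larger datasets.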
Skills
- Proficiency in PySpark for big data processing and transformation.
- Strong programming skills in Python, with experience in data manipulation and ETL processes.
- Solid experience with SQL and familiarity with database technologies (e.g., PostgreSQL, MySQL, Amazon Redshift).
- Understanding of data modeling and data warehousing concepts.