- Design, develop, and maintain scalable data pipelines to ingest, process, and transform large volumes of data from various sources using AWS services such as AWS Glue, AWS Lambda, and AWS Step Functions.
- Implement data modeling techniques to structure and organize data for efficient analysis.
- Build and maintain data warehouses and data lakes on AWS platforms such as Amazon Redshift, Amazon S3, and Amazon Athena, optimizing storage and query performance.
- Design and implement ETL processes to extract data from source systems, transform it into usable formats, and load it into target systems, ensuring data quality and reliability.
- Align architecture with business requirements and provide solutions that best solve the business problems.
- Monitor data pipelines and infrastructure performance, troubleshoot issues, and optimize processes for improved efficiency, scalability, and cost-effectiveness.
- Collaborate with the team to identify opportunities for performance optimization.
Education level: Bachelor's degree (B.E. / B.Tech) in Computer Science or equivalent from a reputed institute
Experience:
1 to 3 years of hands-on experience as a Data Engineer.
Financial services industry experience or familiarity with the stock exchange/capital market domain will be an added advantage.