Job Description
We are looking for an experienced and passionate Data Engineer with 6+ years of experience building scalable, high-performance distributed systems that handle large data volumes. You will be responsible for development work across all aspects of Big Data, including data provisioning, modeling, performance tuning, and optimization. Responsibilities:
- Work closely with business and development teams to translate business and functional requirements into technical specifications that drive Big Data solutions.
- Participate in software design meetings and write technical design documents.
- Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Maintain application stability and data integrity by monitoring key metrics and improving the code base accordingly.
- Understand and maintain the existing codebase through regular refactoring and by applying requested fixes and features.
- Create data tools for the analytics and data science teams that help them build and optimize our product into an innovative industry leader.
- Be flexible in learning new technologies and required frameworks.
To apply for this position, please upload your resume at:
We prefer to select professionals who are certified in their domain by official certification companies such as RedHat, Chef Software Inc., Puppet, Docker, Google, Amazon, etc., or by authorized training partners such as scmGalaxy Inc. and DevOpsCertification.co.