At Cloud Destinations, we are leaders in developing enterprise SaaS applications for Fortune 500 clients across AWS, Azure, and Google Cloud in the United States. If this opportunity interests you, please get in touch with us and share your contact details and availability.
Cloud Destinations is a well-established technology organization headquartered in Silicon Valley, specializing in digital transformation, enterprise application development, infrastructure projects, and professional services for large-scale cloud migrations, multi-cloud operations, DevOps, RNOC, security operations, data centers, UC collaboration, and IoT. We pride ourselves on deep domain expertise in retail, healthcare, finance, travel, and high technology. With strong technical and leadership teams and offices around the globe, we believe our success stems from teamwork and mutual respect for each other's talents and unique perspectives.
Job Description
Responsibilities include, but are not limited to:
- Build, test, scale, and maintain highly reliable data pipelines from a variety of batch data sources and real-time streams
- Contribute to the data infrastructure and platform used to build our data pipelines
- Serve as a core member of the data engineering team and help the business understand its data attributes
- Design and present recommendations to guide future business and research directions
- Build and maintain well-validated data marts, ensuring the clarity and correctness of key business metrics for BI reporting
- Collaborate with other Data Engineers, Data Scientists, and BI Engineers to architect and implement a shared technical vision
- Follow agile processes with a focus on delivering production-ready, testable deliverables in an iterative fashion
- Serve as a senior technical contact for the data solutions engineering team
- Perform code reviews and in-depth technical reviews of system design architectures for junior engineers
- Participate in the entire software development lifecycle, from concept to release
Minimum Qualifications
- BS, MS, Ph.D., or equivalent industry experience in Computer Science, Software Engineering, or other related Science/Technology/Engineering/Math fields.
- 3+ years of experience developing near-real-time (streaming) and batch data pipelines in a large-scale organization
- 7.5+ years of total relevant software development experience
- Experience writing reusable, efficient code to automate analyses and data processes
- 2+ years of business/marketing analytics experience, preferably in a consumer-focused organization
- Experience successfully delivering independent projects with minimal supervision
- Experience processing structured and unstructured data into forms suitable for analysis and reporting, integrating with a variety of data and metrics providers spanning web analytics, consumer analytics, and advertising
- Strong experience with data modeling and with batch data pipeline design and implementation
- Strong grounding in software development and engineering principles
- Experience implementing scalable, distributed, and highly available systems using AWS services such as Kinesis, DynamoDB, and S3
- Exceptional communication skills, particularly in communicating and visualizing quantitative findings in a compelling and actionable manner for business stakeholders
- Experience mentoring and supporting junior members of the team
- High proficiency in Python/PySpark, Scala, or Java
- High proficiency in SQL
- Experience with Databricks/Spark
- Experience with orchestration tools such as Airflow (we use Astronomer)
- Comfortable with CI/CD (we use GitHub Actions) Pipelines
- Experience with Git version control and other adjacent software tools
- Experience using Terraform as an Infrastructure-as-Code tool