Job role: AWS Big Data Engineer (Remote)
About the company:
We provide enterprise-grade platforms that accelerate the adoption of Kubernetes and data.
Gravity, our flagship platform, provides a simplified Kubernetes experience for developers by removing all the underlying complexities. Developers can use tailor-made workflows to deploy their Microservices, Workers, Data, and MLOps workloads to Kubernetes environments across multiple cloud providers. Gravity takes care of all the Kubernetes-related orchestration, such as cluster provisioning, workload deployments, configuration and secret management, scaling, and provisioning of cloud services. Gravity also provides out-of-the-box observability for workloads, helping developers get started with Day 2 operations quickly.
Dark Matter provides a unified data platform for enterprises to extract value from their data lakes. Data Engineers and Data Analysts can discover datasets in enterprise data lakes through an Augmented Data Catalog. Data Profile, Data Quality, and Data Privacy are deeply integrated within the catalog to provide an immediate snapshot of datasets in data lakes. Organizations can maintain data quality by defining quality rules that automatically monitor the Accuracy, Validity, and Consistency of data to meet their data governance standards. The built-in Data Privacy engine can discover sensitive data in their data lakes and take automated actions (such as redactions) through an integrated Policy and Governance engine.
Job Requirements:
5+ years of experience working with high-volume data infrastructure.
Experience with AWS and/or Databricks, Kubernetes, ETL, and job orchestration tooling.
Extensive experience programming in one of the following languages: Python / Java.
Experience in data modeling, optimizing SQL queries, and system performance tuning.
Knowledge of and proficiency in current open-source data frameworks, modern data platform tech stacks, and tools.
You are proficient with SQL, AWS, databases, Apache Spark, Spark Streaming, EMR, Kubernetes, and Kinesis/Kafka.
You delight in wrangling messy, unstructured data and producing clean, usable, quality data.
You are always learning and staying up to speed with the fast-moving data world.
You have good communication skills and can work independently.
BS in Computer Science, Software Engineering, Mathematics, or equivalent experience.
This role offers the benefit of working from home.