MongoDB's mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build anywhere: on the edge, on premises, or across cloud providers. With offices worldwide and over 175,000 developers joining MongoDB every month, it's no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications.
Headquartered in New York, with offices across North America, Europe, and Asia-Pacific, MongoDB has more than 29,000 customers in over 100 countries, including some of the largest and most sophisticated businesses in nearly every vertical industry.
MongoDB is growing rapidly and seeking a Data Engineer to be a key contributor to our internal data platform. You will build data-driven solutions that help drive MongoDB's growth as a product and as a company, taking on complex data-related problems using highly diverse data sets.
We are looking to speak to candidates who are based in Gurugram for our hybrid working model.
Our ideal candidate has experience with
Several programming languages (Python, Scala, Java, etc.)
Data processing frameworks like Spark
Streaming data processing frameworks like Kafka, KSQL, and Spark Streaming
A diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
Different storage formats like Parquet, Avro, Arrow, and JSON
AWS services such as EMR, Lambda, S3, Athena, Glue, IAM, RDS, etc.
Orchestration tools such as Airflow, Luigi, Azkaban, Cask, etc.
Git and GitHub
CI/CD pipelines
You might be an especially great fit if you
Enjoy wrangling huge amounts of data and exploring new data sets
Value code simplicity and performance
Obsess over data: everything needs to be accounted for and be thoroughly tested
Plan effective data storage, security, sharing and publishing within an organization
Constantly think of ways to squeeze better performance out of data pipelines
Nice-to-haves
You are deeply familiar with Spark and/or Hive
You have expert-level experience with Airflow
You understand the differences between different storage formats like Parquet, Avro, Arrow, and JSON
You understand the tradeoffs between different schema designs like normalization vs. denormalization
In addition to data pipelines, you're also quite good with Kubernetes, Drone, and Terraform
You've built an end-to-end production-grade data solution that runs on AWS
Responsibilities
As a Data Engineer, you will
Build large-scale batch and real-time data pipelines with data processing frameworks like Spark on AWS
Help drive best practices in continuous integration and delivery
Help drive optimization, testing, and tooling to improve data quality
Collaborate with other software engineers, machine learning experts, and stakeholders, taking advantage of the learning and leadership opportunities that arise every single day
Success Measures
In 3 months
you will have familiarized yourself with much of our data platform, be making regular contributions to our codebase, and be collaborating regularly with stakeholders to widen your knowledge, while helping to resolve incidents and respond to user requests
In 6 months
you will have successfully investigated, scoped, executed, and documented a small- to medium-sized project, and worked with stakeholders to make sure their data needs are satisfied by implementing improvements to our platform
In 12 months
you will have become the key person for several projects within the team and will have contributed to the data platform's roadmap. You will have made several sizable contributions and will be regularly looking to improve the overall stability and scalability of the architecture
Do you know why MongoDB is a fantastic place to work and build your career?
Disrupting a $64 billion market
Top NoSQL database in the world
Largest ecosystem and the fastest-growing database in the world
Close to 29,000 customers in over 100 countries and more than 200 million downloads
>120% net ARR expansion rate over each of the last twenty quarters
Sequoia Capital and a number of other top VC firms have invested in MongoDB. Sequoia Capital calls us out as one of their flagship portfolio companies; Sequoia has also invested in Apple, Google, YouTube, and WhatsApp
9-figure revenue company, with very high double-digit growth rates
Be a part of the company that's reinventing the database, focused on innovation and speed
Enjoy a fun, inspiring culture that is engineering focused
Work with talented people around the globe
Learn, contribute, and make an impact on the product and community
To drive the personal growth and business impact of our employees, we're committed to developing a supportive and enriching culture for everyone. From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees' wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it's like to work at MongoDB, and help us make an impact on the world!
MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter.
*MongoDB is an equal opportunities employer.*