Who We Are
Delhivery is India's largest fully integrated logistics services provider. With its nationwide network covering over 18,500 pin codes, the company provides a full suite of logistics services such as express parcel transportation, PTL freight, TL freight, cross-border, supply chain, and technology services. Delhivery has successfully fulfilled over 2 billion shipments since inception and today works with over 26,500 customers, including large & small e-commerce participants, SMEs, and other enterprises & brands. At Delhivery, we use technology to drive supply chain innovation. Our mission is to fulfill all of India's online and offline consumption demand through best-in-class industry solutions, domain expertise, and pan-India operations.
About Team
The Data Platform & Engineering group is focused on using the latest technologies to deliver data as a product. We own every aspect of the data lake, data warehousing, and data pipelines. As part of the team, you will own a considerable slice of this platform and will develop solutions to some of the world's toughest data management problems. If you like solving data engineering problems, this is an ideal place to work.
Requirements:
7+ years of professional experience with Big Data systems, pipelines, and data processing.
Strong algorithmic and object-oriented skills.
Provide operational excellence through root cause analysis and continuous improvement of Big Data technologies and processes.
Proven experience with distributed computing frameworks and event streaming technologies such as Hadoop, Spark, distributed SQL, Kafka, and NoSQL query engines.
Able to set up large-scale data pipelines and monitoring systems to ensure overall pipeline health.
Willing to take ownership of the data platform, build frameworks, and modernize it.
Can communicate concisely and persuasively to a varied audience, including data providers, engineers, and analysts.
Ability to identify, prioritize, and answer the most critical areas where analytics and modeling will have a material impact.
Experience building data pipelines and tools using Java/Scala.
Understanding of the design and development of large-scale, high-throughput, low-latency applications is a plus.
Aptitude to independently learn new technologies.
Excellent verbal and written communication skills are required.
Ideate and innovate to address big data challenges and perform proof of concepts to demonstrate the ideas.
Embrace data-driven culture in day-to-day work.
Knowledge of database modeling and design. Able to perform data analysis, ingestion, and integration.
Architectural experience designing large-scale, data-intensive applications.
Experience with at least one major cloud infrastructure environment (e.g., AWS, GCP, Azure, Kubernetes).
Experience with event sourcing (e.g., Kafka) and CQRS-oriented architectures.
Experience with data lakehouse frameworks such as Hudi, Iceberg, etc.
Experience in logistics, supply chain management, or groceries is a strong plus.
Date Posted: 17/11/2024
Job ID: 100609697