Remote Work: Hybrid
Overview:
At Zebra, we are a community of innovators who come together to create new ways of working to make everyday life better. United by curiosity and care, we develop dynamic solutions that anticipate our customers' and partners' needs and solve their challenges.
Being a part of Zebra Nation means being seen, heard, valued, and respected. Drawing from our diverse perspectives, we collaborate to deliver on our purpose. Here you are a part of a team pushing boundaries to redefine the work of tomorrow for organizations, their employees, and those they serve.
You have opportunities to learn and lead at a forward-thinking company, defining your path to a fulfilling career while channeling your skills toward causes that you care about locally and globally. We've only begun reimagining the future for our people, our customers, and the world.
Let's create tomorrow together.
We are looking for a Data Science professional with expertise in PySpark/Databricks and experience working across the stages of a Data Science project life cycle. The incumbent is expected to build and optimize data pipelines, tune and enhance models, and explain output to business stakeholders. The team primarily works on Demand Forecasting, Promotion Modelling, and Inventory Optimization problems for CPG/Retail customers; prior experience in CPG/Retail is strongly preferred.
Responsibilities:
Essential Duties and Responsibilities:
- Design, optimize, and maintain scalable ETL pipelines using PySpark and Databricks on cloud platforms (Azure/GCP).
- Develop automated data validation processes to proactively perform data quality checks (an illustrative sketch of both follows this list).
- Allocate cloud resources optimally to control cloud costs.
- Create and schedule jobs on the Databricks platform. Work with GitHub repositories and ensure that best practices are followed.
- Work on Supply Chain Optimization models, such as Demand Forecasting, Price Elasticity of Demand, and Inventory Optimization.
- Build and tune forecast models, identify improvement opportunities, and perform experiments to prove value.
- Hold frequent conversations with business stakeholders to explain data deficiencies, forecast variances, and the role of different forecast drivers.
- Follow best practices in Architecture, Coding, and BAU operations.
- Collaborate with cross-functional teams, such as Business Stakeholders, Engagement Managers, Data Ops/Job Monitoring, Product Engineering/UI, Data Science, and Data Engineering.
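Below is a minimal, illustrative sketch of the kind of PySpark ETL pipeline and proactive data quality check described above. All table and column names (sales_raw, sales_clean, store_id, sku, qty, sale_date) are hypothetical placeholders, not details of this role's actual environment.

```python
# Minimal PySpark ETL sketch with a basic proactive data quality check.
# All table/column names here are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("demand-etl-sketch").getOrCreate()

# Extract: read raw sales data registered in the metastore
raw = spark.read.table("sales_raw")

# Validate: flag null keys and negative quantities before loading
null_keys = raw.filter(F.col("store_id").isNull()).count()
bad_qty = raw.filter(F.col("qty") < 0).count()
if null_keys or bad_qty:
    raise ValueError(
        f"Data quality check failed: {null_keys} null keys, {bad_qty} negative quantities"
    )

# Transform: aggregate to weekly store/SKU demand
weekly = (
    raw.withColumn("week", F.date_trunc("week", F.col("sale_date")))
       .groupBy("store_id", "sku", "week")
       .agg(F.sum("qty").alias("units_sold"))
)

# Load: write back as a managed table for downstream forecasting
weekly.write.mode("overwrite").saveAsTable("sales_clean")
```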
Qualifications:
- Preferred Education: a bachelor's degree in Computer Science/IT or a similar field with strong programming exposure, and a master's degree in Statistics/Operations Research/Mathematics/Data Science.
- 3-8 years of experience in Data Science/Data Engineering. Exposure to Demand Forecasting and Inventory Optimization in CPG/Retail will be a big plus.
- Proven experience building and optimizing data pipelines/ETL processes in PySpark, Databricks, Python (Pandas/NumPy), and SQL. Experience working with Git as a collaboration tool.
- Good understanding of cloud platforms, preferably Azure.
- Exposure to conventional time series forecasting (ESM, ARIMA) and Machine Learning models (GBM, ANN, Random Forests); see the illustrative forecasting sketch after this list.
- Ability to work independently with minimal supervision.
- Good communication skills, with the ability to present output to business stakeholders and convey data deficiencies.
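As an illustrative example only, the sketch below fits a simple exponential smoothing (ESM/Holt-Winters) baseline of the kind referenced above using statsmodels; the demand series is synthetic and every name in it is a placeholder, not part of this role's actual tooling.

```python
# Illustrative ESM (Holt-Winters) baseline on a synthetic weekly demand series.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic weekly demand with trend and rough yearly seasonality (52 weeks)
rng = np.random.default_rng(42)
t = np.arange(156)
weeks = pd.date_range("2022-01-03", periods=156, freq="W-MON")
demand = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 5, 156)
series = pd.Series(demand, index=weeks)

# Fit additive trend + seasonality and forecast the next 12 weeks
model = ExponentialSmoothing(
    series, trend="add", seasonal="add", seasonal_periods=52
).fit()
print(model.forecast(12).round(1))
```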
To protect candidates from falling victim to online fraudulent activity involving fake job postings and employment offers, please be aware that our recruiters will always connect with you via @zebra.com email accounts. Applications are only accepted through our applicant tracking system, and we only accept personal identifying information through that system. Our Talent Acquisition team will not ask you to provide personal identifying information via e-mail or outside of the system. If you are a victim of identity theft, contact your local police department.