This role is for one of Weekday's clients.
Key Responsibilities
- Automate ML Pipelines: Design and implement automated pipelines for developing, testing, deploying, and monitoring machine learning models.
- Collaborate with Teams: Work closely with Data Science and Engineering teams to integrate models into production environments.
- Infrastructure Development: Develop infrastructure for model versioning, scaling, and serving to ensure high availability and low latency.
- Establish CI/CD Processes: Set up continuous integration and deployment processes for models and data pipelines, ensuring reproducibility and consistency.
- Monitor Model Performance: Implement logging and alerting systems to track the performance of ML models in production.
- Optimize ML Workloads: Enhance performance and cost-efficiency of ML workloads in cloud environments like AWS, GCP, or Azure.
- Ensure Data Integrity and Security: Maintain data integrity, compliance, and security standards, especially for sensitive agricultural data.
- Promote Sustainability: Participate in green computing initiatives to minimize the carbon footprint of ML operations.
- Maintain Data Platforms: Assist in creating and maintaining a central data platform for collaborative model development, ensuring thorough documentation of data pipelines and models.
Ideal Profile
- Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 6-7 years in IT roles focusing on MLOps, DevOps, or Data Engineering.
- Cloud Expertise: Proficient with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code tools like Terraform or CloudFormation.
- Containerization: Hands-on experience with Kubernetes or other container orchestration technologies for scaling ML models.
- CI/CD Proficiency: Skilled with CI/CD tools (e.g., Jenkins, GitLab CI, ArgoCD) and familiar with version control systems like Git.
- Data Pipeline Tools: Experience with data pipeline orchestration tools such as Apache Airflow or Kubeflow.
- Monitoring Skills: Understanding of monitoring tools and techniques for tracking model performance (e.g., Prometheus, Grafana, ELK stack).
- Data Management: Knowledgeable in data management and ETL processes, especially with agricultural and environmental data.
- Programming Skills: Proficient in Python with exposure to ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Industry Experience: Experience in the agriculture sector or climate tech, with knowledge of carbon/GHG emissions projects.
- Geospatial Knowledge: Familiarity with geospatial data and remote sensing tools (e.g., Sentinel-2, Google Earth Engine) is a plus.
- Commitment to Excellence: A demonstrated dedication to automation, efficiency, and sustainability in ML operations.
Skills: Jenkins, Geospatial Data, Git, Infrastructure, MLOps, CI/CD, GCP, ELK Stack, Kubeflow, DevOps, Scikit-learn, Grafana, TensorFlow, AWS, Data Engineering, Apache Airflow, Python, PyTorch, CloudFormation, Azure, Terraform, ML, ETL, GitLab CI, Remote Sensing, Cloud, Prometheus, Kubernetes, Pipelines, ArgoCD