Developing and deploying innovative AI solutions using the AWS technology stack
You will be responsible for helping automate the entire AI lifecycle, from data staging, model training, and development through deployment, monitoring, and application integration.
Responsibilities
Design, develop, and deploy machine learning models using AWS services such as Bedrock, Titan, Comprehend, SageMaker, Rekognition, and Transcribe
Integrate, automate, and optimize data pipelines for model training and inference using AWS Glue, S3, and Lambda
Implement effective AWS infrastructure for training, testing, and deploying machine learning models, including deep learning models such as large language models (LLMs) and integrations with foundation models
Monitor and maintain deployed models in production, ensuring high performance and accuracy
Collaborate with data engineers, data scientists, and software engineers to integrate AI models into existing applications and systems
Stay up-to-date on the latest advancements in AI and AWS technologies
Benchmark and optimize computational performance of AI modeling processes
Document and communicate technical findings and solutions effectively
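As a rough illustration of the pipeline work described above, the sketch below shows a minimal AWS Lambda handler that reacts to an S3 object upload and hands the object's location to a downstream step. The event shape follows the standard S3 event-notification format; the bucket name, key, and the commented Glue call are illustrative assumptions, not part of this posting.

```python
import json

def handler(event, context):
    """Minimal Lambda handler: extract bucket/key from an S3 put event
    so a downstream step (e.g. a Glue job or SageMaker batch transform)
    knows which object to process."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # In a real pipeline, this is where you might kick off the next stage
    # via boto3, for example (hypothetical job name):
    #   boto3.client("glue").start_job_run(JobName="clean-training-data",
    #                                      Arguments={"--input": key})
    return {"statusCode": 200,
            "body": json.dumps({"bucket": bucket, "key": key})}

# Example invocation with a hand-built S3 event:
event = {"Records": [{"s3": {"bucket": {"name": "training-data"},
                             "object": {"key": "raw/batch-01.csv"}}}]}
print(handler(event, None))
```

In production the handler would be wired to the bucket through an S3 event-notification trigger rather than invoked directly.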
Qualifications
Master's degree in Computer Science, Data Science, or a related field
3 to 8 years of experience in machine learning engineering or a related field
Experience in deploying and scaling machine learning models in AWS production environments
Experience with AWS services such as SageMaker, Bedrock, Titan, Comprehend, Rekognition, or Transcribe
Experience working with LLMs and foundation models, such as those provided through AWS Bedrock or Titan
Experience with AWS Lambda and Glue
Experience with MLOps and continuous integration/continuous delivery (CI/CD) pipelines
Understanding of machine learning concepts and algorithms
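To make the Bedrock/Titan experience concrete, here is a small sketch of building a request body for a Titan text model. The field names (`inputText`, `textGenerationConfig`) follow the Titan Text request schema as commonly documented for `amazon.titan-text-*` models and should be treated as assumptions to verify against the current Bedrock docs; the model ID in the comment is likewise illustrative.

```python
import json

def titan_request(prompt, max_tokens=256, temperature=0.2):
    """Build a JSON request body for an Amazon Titan text model.
    Field names assume the Titan Text schema; verify against the
    current Bedrock documentation before use."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

# A real call would go through the Bedrock runtime client, e.g.:
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(modelId="amazon.titan-text-express-v1",
#                              body=titan_request("Summarize ..."))
print(titan_request("Summarize this support ticket."))
```

Keeping the payload construction in a plain function like this makes it easy to unit-test the request shape without AWS credentials.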