Responsibilities:
- Develop new techniques and technical tools to embed Responsible AI in all AI projects.
- Deploy technical guardrails for all internal and external AI projects.
- Undertake research and experimentation to design methodologies and best practices that improve transparency, fairness, security, safety, and reproducibility, and that mitigate bias and hallucinations, across AI models and use cases.
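To make the notion of a "technical guardrail" concrete, below is a minimal sketch of one common fairness check, the demographic parity difference. The function names, the metric choice, and the 0.1 threshold are illustrative assumptions, not part of any specific framework this role would use.

```python
# Minimal sketch of a fairness guardrail: demographic parity difference.
# All names and the 0.1 threshold are illustrative assumptions.

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate across groups (0 = parity)."""
    rates = {}
    for pred, group in zip(predictions, groups):
        n_pos, n_total = rates.get(group, (0, 0))
        rates[group] = (n_pos + (1 if pred == 1 else 0), n_total + 1)
    per_group = [pos / total for pos, total in rates.values()]
    return max(per_group) - min(per_group)

def fairness_guardrail(predictions, groups, threshold=0.1):
    """Flag a model whose group-wise positive rates diverge too far."""
    gap = demographic_parity_difference(predictions, groups)
    return {"parity_gap": gap, "passed": gap <= threshold}
```

For example, predictions `[1, 0, 1, 1, 0, 0]` over groups `["a", "a", "a", "b", "b", "b"]` give rates of 2/3 and 1/3, a gap of 1/3, so the guardrail fails. In practice such a check would run automatically in the model validation pipeline before deployment.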
Technical and Professional Requirements:
- Develop and build platforms and tooling for Responsible AI tenets such as fairness, security, and explainability, across model types (ML/DL/LLMs), data types, and lifecycle stages.
- Deploy solutions and techniques in existing AI projects that act as guardrails against model vulnerabilities such as toxicity and adversarial attacks.
- Engage with business analysts, engineers, and other stakeholders to align data science initiatives with Responsible AI principles.
- Aid in the design and implementation of rigorous adversarial testing of AI models to ensure alignment with Responsible AI standards.
- Research the latest techniques, architectures, and methods that automate adherence to Responsible AI practices throughout the AI lifecycle, from data preparation to model deployment and inference.
- Establish systems for monitoring model performance over time and implement mechanisms for regular model updates and maintenance.
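As an illustration of the monitoring responsibility above, here is a minimal sketch of drift detection using the Population Stability Index (PSI), a widely used metric for comparing a live score distribution against a baseline. The binning scheme, the 0.2 alert threshold (a common rule of thumb), and all names are illustrative assumptions.

```python
# Minimal sketch of model monitoring via the Population Stability Index.
# Bin edges, the 0.2 threshold, and all names are illustrative assumptions.
import math

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample (higher = more drift)."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # bin index from edge comparisons
            counts[idx] += 1
        total = len(values)
        # small floor avoids log(0) when a bin is empty
        return [max(c / total, 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

def drift_alert(baseline_scores, live_scores, threshold=0.2):
    """Rule of thumb: PSI above ~0.2 suggests significant distribution shift."""
    return psi(baseline_scores, live_scores) > threshold
```

A monitoring system would compute this periodically over recent inference traffic and trigger retraining or review when the alert fires; identical distributions yield a PSI of zero, while a shifted live distribution pushes it well above the threshold.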