In this role, you'll help shape the future of AI/ML hardware acceleration, driving the cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You'll be part of a diverse team developing the custom silicon behind the next generation of Google's TPUs. You'll contribute to the innovation behind products loved by millions worldwide, applying your design and verification expertise to verify complex digital designs, with a specific focus on TPU architecture and its integration within AI/ML-driven systems.
In this role, you will be responsible for developing functional and/or performance models for ML compute IPs and integrating them with the Cloud TPU SoC model. You will work closely with the ML and SoC architecture teams to understand the instruction set and architecture of the ML IPs in detail. You will also work closely with the pre-silicon (e.g., DV, emulation), post-silicon, and software teams who use these models as part of their validation flows, helping deliver high-quality designs for next-generation data center accelerators.
Behind everything our users see online is the architecture built by the Technical Infrastructure team to keep it running. From developing and maintaining our data centers to building the next generation of Google platforms, we make Google's product portfolio possible. We're proud to be our engineers' engineers and love voiding warranties by taking things apart so we can rebuild them. We keep our networks up and running, ensuring our users have the best and fastest experience possible.
Function: Technology
Job Type: Permanent Job
Date Posted: 29/10/2024
Job ID: 98421083