NVIDIA is seeking an AI DevOps Engineering Manager to lead the development of next-gen inference operations infrastructure. This role involves overseeing a team to enhance AI inference delivery using advanced CI/CD practices and Infrastructure as Code.
NVIDIA is looking for an outstanding AI DevOps Engineering Manager to lead and expand our next-generation inference operations infrastructure. Join us in transforming AI inference delivery, supporting NVIDIA products such as Dynamo, Triton, NIXL, and our rapidly growing range of AI inference solutions. This role is central to our GitHub First initiative, enabling public CI/CD infrastructure with GPU and Kubernetes capabilities to deliver high-throughput, low-latency inference solutions in distributed environments. You will lead a team that ensures our AI products achieve outstanding performance and reliability worldwide.

What you'll be doing:
• Manage a team of DevOps engineers with expertise in AI inference infrastructure, test automation (SDET), and Infrastructure as Code (IaC)
• Architect and implement scalable test automation strategies for AI inference workloads, including performance benchmarking and automated quality gates (an illustrative sketch appears after the sections below)
• Lead the maintenance of our GitHub First public CI infrastructure, focusing on single/multi-GPU testing, Kubernetes multi-node GPU testing, and CSP validation
• Drive Infrastructure as Code efforts with Terraform, Ansible, and Kubernetes to support scaling across multiple clouds and manage GPU clusters effectively
• Establish operational excellence, including 24x7 on-call rotations, SRE practices, automated monitoring, and self-healing systems to maintain uptime above 99.9%
• Lead release coordination, cost optimization, and management of multi-cloud deployments

What we need to see:
• Bachelor's/Master's degree in Computer Science, Engineering, or equivalent experience
• 4+ years leading DevOps/SRE organizations with direct SDET leadership experience
• 8+ years of hands-on experience in software development, test automation, or infrastructure engineering with AI/ML or GPU-intensive workloads
• Proficiency with Infrastructure as Code (IaC) platforms such as Terraform, Ansible, or CloudFormation, with exposure to multiple cloud environments (AWS, Google Cloud Platform, Azure, OCI)
• Strong technical leadership in test automation frameworks, CI/CD pipeline development, and quality engineering practices
• Familiarity with containerization and orchestration tools such as Docker and Kubernetes for managing AI/ML workloads and GPU resources
• Proven success building and scaling teams in fast-paced, high-growth environments
• Strong interpersonal skills to collaborate with remote teams and build consensus
• Proficiency in Python, Rust, or related programming languages, and the ability to contribute to architecture discussions
• Demonstrated track record of operational excellence, including 24x7 on-call management, SRE practices, and high-availability infrastructure

Ways to stand out from the crowd:
• Experience with CI/CD (particularly GitHub Actions) and releasing open-source AI software
• Deep AI/ML infrastructure expertise with NVIDIA technologies such as CUDA, TensorRT, Dynamo, and Triton Inference Server, including GPU cluster operations and GPU workload performance benchmarking
• Background in DevOps and system software testing, with prior experience leading teams working on inference engines, model serving platforms, or AI acceleration frameworks
• Track record with monitoring tools (Prometheus, Grafana), security scanning, static/dynamic analysis tools, and license compliance automation for critical AI inference frameworks
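For illustration only (not part of the official posting): the "automated quality gates" responsibility above could look something like the following minimal Python sketch, which compares hypothetical inference benchmark results against latency and throughput thresholds and fails a CI step when they regress. The file format, metric names, and thresholds are assumptions made for this example, not an NVIDIA implementation.

# Illustrative quality-gate sketch (hypothetical; not NVIDIA code).
# Reads benchmark results and exits non-zero so a CI job
# (e.g. a GitHub Actions step) would fail on a performance regression.
import json
import sys

# Assumed thresholds for this example only.
MAX_P99_LATENCY_MS = 50.0
MIN_THROUGHPUT_RPS = 1000.0

def check_quality_gate(results: dict) -> list[str]:
    """Return a list of threshold violations for the given benchmark results."""
    violations = []
    if results["p99_latency_ms"] > MAX_P99_LATENCY_MS:
        violations.append(
            f"p99 latency {results['p99_latency_ms']:.1f} ms exceeds "
            f"{MAX_P99_LATENCY_MS} ms"
        )
    if results["throughput_rps"] < MIN_THROUGHPUT_RPS:
        violations.append(
            f"throughput {results['throughput_rps']:.0f} req/s below "
            f"{MIN_THROUGHPUT_RPS} req/s"
        )
    return violations

if __name__ == "__main__":
    # In CI this file would be produced by a prior benchmark run; the path is assumed.
    path = sys.argv[1] if len(sys.argv) > 1 else "benchmark_results.json"
    with open(path) as f:
        results = json.load(f)
    problems = check_quality_gate(results)
    for p in problems:
        print(f"QUALITY GATE FAILED: {p}")
    sys.exit(1 if problems else 0)

In a real pipeline, a check of this kind would run as a post-benchmark step and block merges whenever measured performance falls outside the agreed thresholds.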
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 224,000 USD - 356,500 USD for Level 3, and 272,000 USD - 425,500 USD for Level 4. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until September 29, 2025. NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.