NVIDIA is seeking an AI/ML Infrastructure Software Engineer to enhance productivity for researchers by optimizing GPU cluster infrastructure. The role involves collaboration with diverse teams to implement scalable AI/ML solutions.
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It's a unique legacy of innovation that's fueled by great technology and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

We are currently hiring an AI/ML Infrastructure Software Engineer to join our Hardware Infrastructure team. In this role, you will play a crucial part in boosting productivity for our researchers by implementing advancements across the entire stack. Your primary responsibility will be working closely with customers to identify and resolve infrastructure gaps, enabling innovative AI and ML research on GPU clusters. Together, we can create powerful, efficient, and scalable solutions as we shape the future of AI/ML technology!

What You Will Be Doing

• Collaborate closely with our AI and ML research teams to understand their infrastructure needs and obstacles, translating those observations into actionable improvements.
• Monitor and optimize the performance of our infrastructure, ensuring high availability, scalability, and efficient resource utilization.
• Help define and improve key measures of AI researcher efficiency, ensuring that our actions align with measurable results.
• Collaborate with diverse teams, including researchers, data engineers, and DevOps professionals, to build a seamless and coordinated AI/ML infrastructure ecosystem.
• Stay on top of the latest advancements in AI/ML technologies, frameworks, and effective strategies, and promote their adoption within the company.

What We Need To See

• BS or equivalent experience in Computer Science or a related field, with 8+ years of proven experience in AI/ML and HPC workloads and infrastructure.
• Hands-on experience using or operating High Performance Computing (HPC)-grade infrastructure, as well as in-depth knowledge of accelerated computing (e.g., GPUs, custom silicon), storage (e.g., Lustre, GPFS, BeeGFS), scheduling and orchestration (e.g., Slurm, Kubernetes, LSF), high-speed networking (e.g., InfiniBand, RoCE, Amazon EFA), and container technologies (e.g., Docker, Enroot).
• Expertise in running and optimizing large-scale distributed training workloads using PyTorch (DDP, FSDP), NeMo, or JAX, plus a deep understanding of AI/ML workflows encompassing data processing, model training, and inference pipelines.
• Proficiency in programming and scripting languages such as Python, Go, and Bash; familiarity with cloud computing platforms (e.g., AWS, GCP, Azure); and experience with parallel computing frameworks and paradigms.
• Passion for continual learning and keeping abreast of new technologies and effective approaches in the AI/ML infrastructure field.
• Excellent communication and collaboration skills, with the ability to work effectively with teams and individuals of different backgrounds.

NVIDIA provides competitive salaries and a comprehensive benefits package. Our engineering teams are expanding rapidly due to exceptional growth. If you're a passionate and independent engineer with a love for technology, we want to hear from you.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4 and 224,000 USD - 356,500 USD for Level 5. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until September 12, 2025.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

JR1997842