CVPR 2025 Career Opportunities
Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2025.
Cambridge, MA
At Toyota Research Institute (TRI), we’re on a mission to improve the quality of human life. We’re developing new tools and capabilities to amplify the human experience. To lead this ground-breaking shift in mobility, we’ve built an extraordinary team in Energy & Materials, Human-Centered AI, Human Interactive Driving, and Robotics.
The Mission
Make general-purpose robots a reality.
The Challenge
We envision a future where robots assist with household chores and cooking, aid older individuals in maintaining their independence, and enable people to spend more time on the activities they enjoy most. To achieve this, robots need to be able to operate successfully in complex, unstructured environments. Our mission is to answer the question “What will it take to create truly general-purpose robots that can accomplish a wide variety of tasks in settings like human homes with minimal human supervision?” We believe that the answer lies in cultivating large-scale datasets of physical interaction from a variety of sources and building on the latest advances in machine learning to learn general-purpose robot behaviors from this data.
The Team
Our goal is to redefine the field of robotic manipulation, enabling long-horizon dexterous behaviors to be efficiently taught, learned, and improved over time in diverse, real-world environments.
We have deep cross-functional expertise across simulation, perception, controls, and machine learning, and we measure our success in terms of fundamental capabilities development as well as research impact via open-source software and publications. Our north star is a fundamental technological advancement in building robots that can flexibly perform a wide variety of tasks in diverse environments with minimal human supervision. Join us and let’s make general-purpose robots a reality.
We operate a fleet of robots, and robot-embodied teaching and deployment are key parts of our strategy.
The Opportunity
We’re looking for a driven engineer with a “make it happen” mentality. The ideal candidate is able to operate independently when needed, but works well as part of a larger integrated group at the forefront of pioneering robotics and machine learning. If our mission of revolutionizing robotics through machine learning resonates with you, get in touch and let’s talk about how we can create the next generation of AI-powered capable robots together!
Responsibilities
- Research and contribute to the design of novel robotic systems through software development, including control, perception, planning, and their interactions with learned policies.
- Develop tooling, drivers, and controllers to enable stable and performant robotic platforms.
- Enable research into robot foundation models by working with mechanical/electrical engineers, technicians, and researchers to build and integrate new enabling robotics technologies.
- Help robotics research scientists apply and integrate their research toward more robust, perceptive, and scalable systems.
- Design and integrate imaginative system solutions combining actuation, structure, and sensing, as well as new mechanisms and sensors for human-scale manipulation.
Qualifications
- B.S. or equivalent experience in an engineering-related field and 4+ years of relevant industry experience.
- Strong software engineering skills; very familiar with working in a mixed C++ and Python codebase.
- Cross-functional understanding of robotics software stacks.
- Experience with cloud-scalable workflows, for example on AWS.
- Experience with inter- and in-process communication, parallelism, logging, networking, and data systems, and common methods for working with them.
Location USA, WA, Seattle
Description Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads.
Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating a best-in-class digital video experience.
As a Prime Video team member, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people.
We’ll look for you to bring your diverse perspectives, ideas, and skill sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career in Prime Video Tech takes you!
Location Pittsburgh, PA
Description Who we are
Aurora’s mission is to deliver the benefits of self-driving technology safely, quickly, and broadly.
The Aurora Driver will create a new era in mobility and logistics, one that will bring a safer, more efficient, and more accessible future to everyone.
At Aurora, you will tackle massively complex problems alongside other passionate, intelligent individuals, growing as an expert while expanding your knowledge. For the latest news from Aurora, visit aurora.tech or follow us on LinkedIn.
Aurora hires talented people with diverse backgrounds who are ready to help build a transportation ecosystem that will make our roads safer, get crucial goods where they need to go, and make mobility more efficient and accessible for all. We’re searching for a Perception Platform Backend Engineer. The ideal candidate will enjoy the challenge of developing large-scale systems that track the autonomy development process and produce interactive metrics and insights. You’ll be writing analysis tools, generating and managing large datasets, and scaling out interconnected systems and services. Your work will enable fast, reliable ML model evaluation and involve tight collaboration within and across teams, including Simulation, Planning, Cloud, Tools, and more.
In this role, you will
- Architect powerful backend systems that flow into dynamic web services and visualizations
- Build scalable services and specialized systems for Perception evaluations
- Improve the automation and reliability of large data processing pipelines
- Integrate backend data pipelines with human workflows in an efficient manner
- Take significant ownership, independence, and leadership working across systems and teams
Required Qualifications
- BS / MS / PhD degree in Computer Science or a related field
- Excellent programming and software design skills in Python or C++/C
- Strong industry software experience (5+ years)
- Experience with backend design and data modeling
- Comfort working on large codebases and software systems
- Great collaboration and communication skills
Desirable Qualifications
- Experience with containerized services in datacenter or cloud environments (e.g., AWS)
- Familiarity with rich sensor data (e.g., LIDAR, RADAR, camera)
- Knowledge of Perception and Autonomous Driving / ML Systems / ML Infrastructure
Location Mountain View, CA
Description Who we are
Aurora’s mission is to deliver the benefits of self-driving technology safely, quickly, and broadly.
The Aurora Driver will create a new era in mobility and logistics, one that will bring a safer, more efficient, and more accessible future to everyone.
At Aurora, you will tackle massively complex problems alongside other passionate, intelligent individuals, growing as an expert while expanding your knowledge. For the latest news from Aurora, visit aurora.tech or follow us on LinkedIn.
Aurora hires talented people with diverse backgrounds who are ready to help build a transportation ecosystem that will make our roads safer, get crucial goods where they need to go, and make mobility more efficient and accessible for all. We’re searching for a Tech Lead Manager for the Perception Team.
In this role, you will
- Be responsible for leading a team that owns a deployed ML subcomponent of the perception system. This will include defining technical direction for the team, setting and executing against milestones, allocating resources, and owning all aspects of a deployed production ML model - from metrics and ML research to on-road execution and performance.
- Be responsible for managing a team of engineers and researchers and growing their careers.
Required Qualifications
- Excellent software engineering skills in Python and/or C++ and contemporary ML frameworks (e.g., PyTorch)
- Extensive experience in Computer Vision, Machine Learning, Deep Learning, or other relevant areas of Artificial Intelligence (e.g., as evidenced by industry experience or publication record)
- 3+ years of industry experience directly managing a team of engineers/researchers
- Strong problem-solving skills and ability to innovate
Desirable Qualifications
- Relevant industry experience (prior work on self-driving vehicles, autonomy, and/or robotics projects)
- Contributions to open-source project(s)
- Strong track record in any field related to machine learning, as evidenced by top-tier publications. Examples of relevant fields and conferences: computer vision (e.g., CVPR, ECCV, ICCV, IJCV), machine learning (e.g., ICML, NeurIPS, JMLR, PAMI), robotics (e.g., RSS, IJRR)
Location: Los Altos
Description
At Toyota Research Institute (TRI), we’re on a mission to improve the quality of human life. We’re developing new tools and capabilities to amplify the human experience. To lead this transformative shift in mobility, we’ve built a world-class team in Energy & Materials, Human-Centered AI, Human Interactive Driving, Large Behavioral Models, and Robotics.
TRI is assembling a world-class team to develop and integrate innovative solutions that enable a robot to perform complex, human-level mobile manipulation tasks, navigate with and among people, and learn and adapt over time. The team will develop, deploy, and validate systems in real-world environments, in and around homes.
The team will focus heavily on leveraging machine learning to marry perception, prediction, and action to produce robust, reactive, coordinated robot behaviors: bootstrapping from simulation, leveraging large amounts of data, and adapting in real-world scenarios.
TRI has the runway, roadmap, and expertise to transition the technology development to a product that impacts the lives of millions of people. Apply to join a fast moving team that demands high-risk innovation and learning from failures, using rigorous processes to identify key technologies, develop a robust, high quality system, and quantitatively evaluate performance. As part of the team, you will be surrounded and supported by the significant core ML, cloud, software, and hardware expertise at TRI, and be a part of TRI's positive and diverse culture.
Responsibilities
- Develop, integrate, and deploy algorithms linking perception to autonomous robot actions, including manipulation, navigation, and human-robot interaction.
- Invent and deploy innovative solutions at the intersection of machine learning, mobility, manipulation, human interaction, and simulation for performing useful, human-level tasks, in and around homes.
- Invent novel ways to engineer and learn robust, real-world behaviors, including using optimization, planning, reactive control, self-supervision, active learning, learning from demonstration, simulation and transfer learning, and real-world adaptation.
- Be part of a team that fields systems, performs failure analysis, and iterates on improving performance and capabilities.
- Follow software practices that produce maintainable code, including automated testing, continuous integration, code style conformity, and code review.
Location USA, WA, Seattle USA, NY, New York USA, CA, Palo Alto
Description Amazon Sponsored Products is investing heavily in building a world class advertising business and we are responsible for defining and delivering a collection of GenAI/LLM powered self-service performance advertising products that drive discovery and sales. Our products are strategically important to Amazon’s Selling Partners and key to driving their long-term growth. We deliver billions of ad impressions and clicks daily and are breaking fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit and bias for action. With a broad mandate to experiment and innovate, we are growing at an unprecedented rate with a seemingly endless range of new opportunities.
This position will be part of the Advertiser Growth organization within Sponsored Products. Our team focuses on launching GenAI initiatives that help advertisers create and manage performant SP campaigns. We develop AI-powered experiences to help advertisers use advertising effectively and develop state-of-the-art ML-based optimization services to represent our advertisers in the SP marketplace. We strive to make it easier for advertisers to help shoppers discover relevant products on customer search, browse, and detail pages.
We are seeking a Senior Software Engineer, Robotics & AI, with a primary focus on enabling humanoid loco-manipulation skills. You will be responsible for building pioneering models and data generation pipelines that enable robots to perform complex manipulation tasks while navigating dynamic environments. You will work closely with a multidisciplinary team of researchers and engineers to create robust, efficient, and scalable solutions that advance the field of robotics.
What You Will Be Doing:
- Work alongside top NVIDIA researchers and engineers to develop advanced AI solutions for versatile humanoid robots and embodied agents.
- Build reliable and scalable data processing workflows.
- Develop and deploy innovative AI algorithms and models.
- Create large-scale infrastructure for AI training and inference of foundational models.
- Systematically test and analyze AI models in simulation environments and on robotic hardware.
- Engage with research and engineering teams across NVIDIA to ensure seamless integration with upstream and downstream stacks.
- Collaborate with research and engineering teams across all of NVIDIA to transfer research to products and services.
What we need to see:
- MS or PhD in Robotics, Computer Science, Electrical Engineering, Mechanical Engineering, or a related field (or equivalent experience).
- 3+ years of experience in the relevant field.
- Strong background in robotics or autonomous driving systems.
- Experience with robot learning for locomotion, navigation, or manipulation.
- Hands-on experience in working with large-scale machine learning/AI systems and compute infrastructure.
- Proficiency in Python or C++.
- Outstanding engineering skills in rapid prototyping and product development.
- Familiarity with model training frameworks (e.g., PyTorch, JAX, TensorFlow).
- Background in deep learning techniques, especially imitation learning or reinforcement learning.
- Hands-on experience with robotic hardware.
Ways to Stand out from The Crowd:
- Proven track record of publications in top robotics or AI conferences/journals.
- Experience developing robust, scalable data pipelines and working with humanoid or mobile-manipulator robots.
- Familiarity with physics simulation frameworks such as Isaac Lab, Isaac Sim or MuJoCo.
- Knowledge of control methods, including PID, model predictive control, and whole-body control.
- Experience building multimodal foundation models.
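To make the control methods named above concrete, here is a minimal discrete-time PID controller. This is a generic, illustrative sketch only (not any particular company's stack); the class name, gains, and plant model are hypothetical choices for demonstration.

```python
class PID:
    """Minimal discrete-time PID controller (illustrative sketch; gains are made up)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # accumulated error for the I term
        self.prev_error = None    # previous error for the D term

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # No derivative on the very first step (no previous error yet).
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Drive a simple integrator plant toward a setpoint of 1.0.
pid = PID(kp=1.0, ki=0.5, kd=0.0, dt=0.1)
x = 0.0
for _ in range(200):
    x += pid.update(1.0, x) * 0.1
```

More sophisticated schemes such as model predictive control or whole-body control replace this reactive loop with an optimization over a prediction horizon, but a PID loop of this shape is still a common inner layer.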
With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our special engineering teams are growing fast. If you're a creative and autonomous engineer with a genuine passion for technology, we want to hear from you!
The base salary range is 148,000 USD - 287,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Redwood City, CA
We help make autonomous technologies more efficient, safer, and accessible.
Helm.ai builds AI software for autonomous driving and robotics. Our Deep Teaching™ methodology is uniquely data and capital efficient, allowing us to surpass traditional approaches. Our unsupervised learning software can train neural networks without the need for human annotation or simulation and is hardware-agnostic. We work with some of the world's largest automotive manufacturers and we've raised over $100M from Honda, Goodyear Ventures, Mando, and others to help us scale.
Our team is made up of people with a diverse set of experiences in software and academia. We work together towards one common goal: to integrate the software you'll help us build into hundreds of millions of vehicles.
You will:
You will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who'd enjoy applying their skills to deeply complex and novel AI problems. Here, you will:
- Apply and extend the Helm proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
- Carefully execute development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole
- Work closely with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms
You have:
- A sense of practical optimism: not all experiments are successful, but the ones that are more than make up for it!
- Comfort operating in a fast-paced environment to deliver customer projects
- Introspection, thoughtfulness, and detail-orientation
- Experience working with neural networks, TensorFlow and/or PyTorch
- Fluency in Python and working knowledge of C/C++ programming
- A strong interest in unsupervised learning, computer vision, and/or the autonomous vehicle industry
- Master’s or Ph.D. in a related field and/or 5+ years of experience in a related field
We offer:
- Competitive health insurance options
- 401K plan management
- Remote-friendly and flexible team culture
- Free lunch and fully-stocked kitchen in our South Bay office
- Additional perks: monthly wellness stipend, office set up allowance, company retreats, and more to come as we scale
- The opportunity to work on one of the most interesting, impactful problems of the decade
Helm.ai is proud to be an equal opportunity employer building a diverse and inclusive workforce. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.
Redwood City, CA
We help make autonomous technologies more efficient, safer, and accessible.
Helm.ai builds AI software for autonomous driving and robotics. Our Deep Teaching™ methodology is uniquely data and capital efficient, allowing us to surpass traditional approaches. Our unsupervised learning software can train neural networks without the need for human annotation or simulation and is hardware-agnostic. We work with some of the world's largest automotive manufacturers and we've raised over $100M from Honda, Goodyear Ventures, Mando, and others to help us scale.
Our team is made up of people with a diverse set of experiences in software and academia. We work together towards one common goal: to integrate the software you'll help us build into hundreds of millions of vehicles.
You will:
Join a pioneering team that's redefining autonomous vehicle development through the power of unsupervised learning. At Helm, we've established ourselves as industry leaders by successfully developing OEM-grade perception models using an AI-first approach. Now, we're embarking on an ambitious new chapter: developing a comprehensive ML-centric autonomous vehicle stack for urban environments.
As a member of our AV Controls team, you'll be at the center of our autonomous driving initiative, spearheading vehicle control systems while collaborating closely with our perception and planning teams to create a cohesive, state-of-the-art AV stack. While trajectory tracking and execution are core responsibilities, you'll have the opportunity to influence and contribute across the entire autonomous driving pipeline. This is a unique opportunity to shape the future of autonomous driving alongside a proven team that combines deep technical expertise with cutting-edge AI methodologies.
Main Responsibilities:
- Partner with perception and planning teams to architect, build, and test an AI-first autonomous driving platform for urban environments
- Design and implement robust vehicle control systems for trajectory tracking and execution, with a focus on real-time performance and reliability
- Develop and validate vehicle dynamics models through system identification and data analysis, driving continuous improvement in control system performance
- Create and execute comprehensive validation strategies across simulation and real-world testing environments
You have:
- 4+ years of hands-on experience developing vehicle control systems in the autonomous vehicle or ADAS industry
- Demonstrated expertise implementing real-time optimal control strategies (MPC or iLQR) for vehicle trajectory tracking, with proven on-vehicle implementation experience
- Experience addressing real-world controls challenges, including sensor noise, system latencies, and state estimation uncertainty
- Track record of developing and validating vehicle dynamics models through system identification and data-driven approaches
- Strong theoretical foundation in control theory, optimization, and vehicle dynamics, with practical application experience
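The MPC/iLQR trajectory trackers mentioned above share their core machinery with the finite-horizon LQR (a backward Riccati recursion over a linearized model). As a purely illustrative sketch, not Helm.ai's stack, here is that recursion stabilizing a 1-D double-integrator tracking error; the dynamics, weights, and horizon are hypothetical values chosen for the example.

```python
import numpy as np

def lqr_gains(A, B, Q, R, N):
    """Finite-horizon discrete LQR via backward Riccati recursion.

    Returns time-ordered feedback gains K_0 ... K_{N-1} for u_t = -K_t x_t.
    """
    P = Q.copy()
    gains = []
    for _ in range(N):
        # K = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])  # double integrator: position, velocity
B = np.array([[0.0], [dt]])            # acceleration input
Q = np.diag([10.0, 1.0])               # penalize position error most
R = np.array([[0.1]])                  # control effort penalty

gains = lqr_gains(A, B, Q, R, N=50)

# Roll out the closed loop from a 1 m position error at rest.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x + B @ (-K @ x)
```

An MPC tracker repeats a recursion like this at every control tick over a receding horizon, adding state and input constraints; iLQR additionally re-linearizes nonlinear vehicle dynamics around the current trajectory on each iteration.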
We offer:
- Competitive health insurance options
- 401K plan management
- Remote-friendly and flexible team culture
- Free lunch and fully-stocked kitchen in our South Bay office
- Additional perks: monthly wellness stipend, office set up allowance, company retreats, and more to come as we scale
- The opportunity to work on one of the most interesting, impactful problems of the decade
Helm.ai is proud to be an equal opportunity employer building a diverse and inclusive workforce. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.
Cambridge, MA
At Toyota Research Institute (TRI), we’re on a mission to improve the quality of human life. We’re developing new tools and capabilities to amplify the human experience. To lead this transformative shift in mobility, we’ve built a world-class team in Energy & Materials, Human-Centered AI, Human Interactive Driving, Large Behavioral Models, and Robotics.
The Team
The Human Aware Interactions and Learning team uses approaches from machine learning, robotics, and computer vision, along with insights from the human factors literature, to devise techniques that advance the state of the art in machine understanding, prediction, and interaction with people in the driving domain, both in and around the vehicle.
We work with computational and cognitive researchers, using a variety of data sources and human-in-the-loop experiments, to test our methods and devise ML approaches that work with the driver.
The Opportunity
We are seeking a Research Scientist to lead groundbreaking research at the intersection of machine learning, computer vision, and human factors. This role focuses on understanding, detecting, and developing intervention strategies for driver impairments, such as cognitive distraction and intoxication. The ideal candidate will contribute to fundamental research, publish in top-tier venues, and build machine learning models and prototypes that integrate human-in-the-loop data towards novel approaches for understanding and assisting drivers under diverse situations.
This is an opportunity to work on innovative research in human-robot interaction and intelligent vehicle systems in a collaborative and interdisciplinary team of experts in robotics, AI, and human factors. You will have access to innovative robotic platforms and simulation tools with the potential to contribute to academic publications and impactful real-world applications.
Responsibilities
- Conduct original research on driver impairment detection and intervention (e.g., warning, coaching, actuation) using machine learning and computer vision.
- Develop algorithms and models to analyze driver behavior, physiological signals, and other multimodal inputs, as well as perform ML-based interactions with the driver.
- Design, implement, and conduct human-in-the-loop behavioral studies, ensuring robustness and real-world applicability.
- Publish findings in high-impact conferences and journals.
- Collaborate with interdisciplinary teams, including human factors experts, cognitive scientists, and engineers.
- Prototype and validate ML-based intervention strategies to enhance driver safety and performance.
Qualifications
- PhD in Computer Vision, Machine Learning, Human-Centered AI, or a related field.
- Research experience in human and machine vision, behavior analysis, or multimodal learning.
- Strong publication record (e.g., CVPR, NeurIPS, ICCV, ICLR).
- Experience working with human-in-the-loop data: data collection, annotation strategies, and model training.
- Proficiency in deep learning frameworks (e.g., PyTorch, JAX, Hugging Face) and data analysis tools.
- Ability to work both independently and as part of an interdisciplinary team.
Bonus Qualifications
- Experience in developing real-time AI systems for human monitoring.
- Familiarity with physiological and cognitive state estimation (e.g., eye tracking, EEG, heart rate variability).
- Background in human factors, cognitive psychology, or related fields.
- Experience deploying machine learning models in real-world environments.
- Knowledge of software development industry practices (version control, CI/CD, documentation).
Please submit a brief cover letter and include a link to your Google Scholar profile, with a full list of publications, when submitting your CV for this position.