CVPR 2025 Career Opportunities
Here we highlight career opportunities submitted by our exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2025.
Search Opportunities
Our Autonomous Vehicles Platform team is searching for engineers to develop NVIDIA's automotive platform and bring it to the world. You will participate in a focused effort to develop and productize ground-breaking solutions that will revolutionize the world of transportation and the growing field of self-driving cars. You will work with hardworking and dedicated multi-functional engineering development teams across various vehicle subsystems to integrate their work into our autonomous driving DRIVE SW platform, while achieving or exceeding all relevant NVIDIA and automotive standards and guidelines. You'll find the work is exciting, fun, and very meaningful. We have deadlines, customers, and competition.
What you'll be doing:
- Develop embedded real-time system software responsible for providing safety services for Advanced Driver Assistance Systems
- Optimize solutions to meet all product, safety, and system-level requirements and satisfy key performance indicators
- Adhere to development rigor that ensures the overall product safety goals are achieved, and develop supporting work products that align with the functional safety process
- Collaborate with software, hardware, and integration teams to derive and drive system level architecture and requirements
- Regularly engage with customer teams directly to customize, integrate, and productize
- Analyze sophisticated technical issues and independently drive resolution across multiple teams
- Work in an environment which involves Hypervisor, Linux, QNX RTOS, Classic AUTOSAR
What we need to see:
- BS, MS in CS/CE/EE, or equivalent experience
- 8+ years in a related field
- Must have detailed experience with the Classic AUTOSAR software stack and associated development tools
- Excellent C coding skills along with proficiency in scripting languages
- Mastery of software debugging tools: software debuggers, analyzers, trace loggers
- Deep understanding of SoC principles, general systems architectures, operating systems, device drivers, memory management, multithreading, and real-time scheduling
- Excellent communication and organization skills, with a logical approach to problem solving, good time management and task prioritization
Ways to stand out from the crowd:
- Prior experience in the automotive field
- Background with QNX RTOS and tools
- Prior experience working with CAN and associated tools, RADAR, or LiDAR is a plus
- Experience with onsite and offsite customer support
- Prior experience writing Network Switch Configuration Firmware
The base salary range is 184,000 USD - 356,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
University of Surrey, Guildford, UK
Warehouses represent complex, dynamic environments requiring efficient navigation and task execution by autonomous robots, with minimal downtime and human intervention. Modern SLAM (Simultaneous Localization and Mapping) techniques can provide a robust mapping of the warehouse itself. However, there are also a huge number of dynamically interacting entities (including other robots, human operators, vehicles, etc.). This, coupled with significant variations across regions and customer requirements, can lead to complex emergent situations at deployment that were not considered during development. This project seeks to address these challenges by developing flexible generative world models which are capable of rapidly simulating diverse warehouse environments using varying sensor loadouts and metadata.
By leveraging generative diffusion models, multimodal generative models, and large language models, the project will create adaptive, scalable world models that can simulate and predict warehouse dynamics. This enables robots to train, test, and adapt in virtual environments before deployment, significantly reducing real-world testing requirements and improving robustness. This digital twinning capability also makes it possible to safely test resilience to rare emergency scenarios, such as hardware failures or human obstacles.
Academic Institution
This is an opportunity to join the Centre for Vision, Speech and Signal Processing at the University of Surrey. This is the largest such UK institute, and is ranked 1st for computer vision research in the UK and 3rd in Europe (csrankings.org). The supervisory team includes award-winning and world-renowned academics, with the applicant joining a large and tightly knit research team of 10+ peers working on various related topics (http://personalpages.surrey.ac.uk/s.hadfield/).
Industrial Sponsorship and Research Impact
This project is funded at a 50% rate by Locus Robotics. Locus are world-leaders in Warehouse Automation. They have a fleet of more than 50,000 active robots in daily operation around the world, and supply warehouse logistics systems for more than 50 internationally recognisable customers including DHL, UPS and Boots. In addition to funding studentship costs, Locus are also providing access to their hardware, warehouses, and engineer time to support the project. There is also an opportunity to undertake a paid internship in industry, helping build impact for your research activities and getting real-world research experience.
Eligibility criteria: Open to all UK and international candidates. Up to 30% of our UKRI-funded studentships can be awarded to candidates paying international-rate fees.
You will need to meet the minimum entry requirements for our PhD programme.
San Diego
Qualcomm's Multimedia R&D Computer Vision Group is seeking passionate candidates with expertise in computer vision and computer graphics. The team is developing advanced AI and deep learning algorithms, architectures, and systems for multimedia applications, enriching embedded systems for smartphone, IoT, compute, and extended reality products.
The candidate should have a deep understanding of digital signal processing and data-driven approaches in computer vision or computer graphics, with excellent analytical and algorithm development skills and proficiency in Python and C/C++ programming. The selected candidate will have responsibilities in the following areas: system architecture design and development of computer vision applications on Android/Windows/Linux platforms; research and development of efficient deep learning architectures and models for AI-enabled camera, video analytics, biometrics, and extended reality.
Research and development of multimedia applications using generative AI, including LLMs, VLMs, LVMs, and other multimodal models. This includes, but is not limited to, video semantic analytics, 3D analysis of the human face and body, biometrics with forgery detection, face and body synthesis and editing with GenAI, and 3D body/face reconstruction and editing from 2D images.
Must-have Qualifications:
- Master's or PhD degree in Electrical Engineering, Computer Science, or a closely related field.
- 3+ years of professional experience in deep learning software development for applications such as computer vision or computer graphics.
- 2+ years of experience developing practical deep learning algorithms using PyTorch, TensorFlow, or other deep learning frameworks.
- Solid background in software development, digital signal processing, computer vision, or computer graphics.
- Solid background in machine/deep learning fundamentals and mathematics.
- Proficiency in C/C++ and Python programming.
Preferred Qualifications:
- Excellent knowledge of C++ and object-oriented programming; ability to design and implement robust, high-performance, and flexible system software.
- Expertise in various network architectures such as VAE, GAN, diffusion, transformer, Gaussian Splatting, NeRF, U-Net, and ResNet models.
- Experience with deep learning model pruning, compression, and quantization for execution on edge devices without performance degradation.
- Excellent written and verbal communication skills.
- Track record of driving ideas from design through commercialization.
- Experience in computer vision algorithm design and development, and in integrating machine learning algorithms into camera systems.
- Self-motivated, with a strong desire to learn new technologies, design novel techniques, and propose them for commercialization. Team player.
Minimum Qualifications: • Master's degree in Computer Engineering, Computer Science, Electrical Engineering, or related field and 2+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience. OR PhD in Computer Engineering, Computer Science, Electrical Engineering, or related field.
For more positions and to register: https://qualcomm.eightfold.ai/events/candidate/landing?plannedEventId=bkyZbLR19
San Diego, CA
Netradyne harnesses the power of Computer Vision and Edge Computing to revolutionize the modern-day transportation ecosystem. We are a leader in fleet safety solutions. With growth exceeding 4x year over year, our solution is quickly being recognized as a significant disruptive technology. Our team is growing, and we need forward-thinking, uncompromising, competitive team members to continue to facilitate our growth.
Reporting to the Vice President of Engineering, Analytics, the Senior DL/AI Research Engineer will be responsible for developing and implementing advanced deep learning and artificial intelligence models to solve complex problems across various domains. This role requires a strong foundation in machine learning algorithms and deep learning frameworks, and the ability to translate theoretical concepts into practical applications. The AI Research Engineer will work closely with cross-functional teams to drive innovation and advance the state of the art in AI research and development.
Responsibilities:
To perform this job successfully, an individual must be able to perform each essential function satisfactorily.
- Research and Development
- Conduct original research and develop DL/AI models under the guidance of senior researchers.
- Develop novel algorithms and models to address challenging problems
- Contribute to research papers for conferences and journals
- Stay updated with the latest advancements in the field and apply them to current projects
- Model Development and Evaluation
- Design and execute experiments to validate DL/AI models
- Optimize model performance and scalability
- Perform rigorous testing and validation to ensure robustness and accuracy
- Data Management
- Preprocess and analyze large datasets to extract meaningful insights
- Develop and maintain data pipelines to support model training and evaluation
- Perform other duties as assigned
- Enhance professional growth and development through participation in educational programs, current literature, and training
- Documentation and Reporting
- Maintain detailed research documentation, technical reports, and code for reproducibility and transparency
- Prepare technical reports and presentations to share results with internal and external stakeholders
- Code Development
- Write and maintain research code
Background Required:
- PhD in Computer Science, Machine Learning, Artificial Intelligence, or a related field. 0 – 2 years of experience.
- Strong publication record in top-tier AI, machine learning, and related conferences and journals.
- Proven experience in developing and implementing machine learning and deep learning algorithms.
- Proficiency in programming languages such as Python or C++, and familiarity with ML frameworks such as TensorFlow, JAX or PyTorch.
- Strong communication skills, both written and verbal, with the ability to present complex technical information to a diverse audience.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Location: San Diego, CA
Other Essential Abilities and Skills:
- Excellent communication skills (verbal & written)
- Establishes & maintains effective, collaborative relationships (internally & externally)
- Computer literate (Office Suite, PowerPoint, Word, Excel)
- Team player
- Self-motivated
Based in the Georgetown, Kentucky area
TRI is assembling a world-class team to develop and integrate innovative solutions that enable a robot to perform complex, human-level mobile manipulation tasks, navigate with and among people, and learn and adapt over time. The team will develop, deploy, and validate systems in real-world environments, in and around homes.
The team will be focused on heavily leveraging machine learning to marry perception, prediction, and action to produce robust, reactive, coordinated robot behaviors, bootstrapping from simulation, leveraging large amounts of data, and adapting in real world scenarios.
TRI has the runway, roadmap, and expertise to transition the technology development to a product that impacts the lives of millions of people. Apply to join a fast-paced team that demands high-risk innovation and learning from failures, using thorough processes to identify key technologies, develop a robust, high-quality system, and quantitatively evaluate performance. As part of the team, you will be surrounded and supported by the significant core ML, cloud, software, and hardware expertise at TRI, and be a part of TRI's positive and diverse culture.
Responsibilities
- Support TRI robot deployments in Toyota factories in Kentucky, Indiana, and potentially other sites.
- Independently operate and test advanced autonomous robots, including inspection systems, AMRs, industrial arms, cobot arms, and mobile manipulators.
- Perform failure analysis and troubleshooting of complex autonomous robots, including mechanical and electrical hardware repair, Linux system debugging, custom tool usage, software debugging, and detailed, accurate bug reporting.
- Integrate these robots with existing systems in the factories, including interfacing with PLCs, Wi-Fi/networks, fleet management systems, etc.
- Work with the factory shops to safely commission these robots.
- Write robot operation procedures tailored to individual robots and individual tasks/processes.
- Work closely with TMNA factory team members to collect and document system and user interface feedback.
- Work closely with TRI product and software/hardware development teams to iteratively prioritize new features, design improvements, and bug fixes.
- Develop and maintain adaptable and interactive web applications, updating them based on feedback from TMNA factory team members.
- Train appropriate factory team members on robot operations.
- Ensure clear communication and alignment across multidisciplinary teams from very different organizational cultures (TMNA and TRI).
Qualifications
- BS in an engineering-related field and 5+ years of relevant industry experience.
- Experience with advanced robot development, deployment, failure analysis, and debugging.
- Experience troubleshooting complex electromechanical systems.
- Familiarity with actuators (including motor controllers, gear trains, encoders, etc.) and sensors (F/T sensors, cameras, lidars, IMUs, etc.).
- Familiarity with various buses and communication protocols: USB, CAN, RS-485, EtherCAT, EtherNet/IP, etc.
- Proficiency in one or more of the following languages: C, C++, or Python.
- Proficiency in one or more of the following languages: HTML, CSS, and JavaScript/TypeScript.
- Software experience: ROS or other robot system software.
- Experience with Linux (kernel, networking, CLI, etc.).
- Ability to travel up to 75% of the time.
- A passion for taking on technical challenges and an independent working style with strong problem-solving abilities.
Seattle, WA, USA
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads.
Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience.
As a Prime Video team member, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people.
We’ll look for you to bring your diverse perspectives, ideas, and skill sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career in Prime Video Tech takes you!
We are looking for a scientist happy to work in a multicultural and multi-disciplinary group, where junior and senior scientists collaborate, each with their expertise, to carry out a scientific activity with shared research goals.
The Artificial Intelligence for Good (AIGO) Research Unit is coordinated by Prof. Vittorio Murino. It focuses on fundamental AI topics from methodological and theoretical perspectives, while remaining applicable to a range of applications and actual case studies in domains such as biomedicine and healthcare, ultimately contributing to people's wellbeing. Specifically, AIGO aims to study learning paradigms in the presence of imperfect data, especially in multimodal scenarios, tackling unsupervised, semi-supervised, and self-supervised settings as well as weakly or noisily labeled, scarce, class-imbalanced, or biased data. Domain adaptation and generalization, few/zero-shot learning, learning with biased data, and continual learning, also extended to multimodal scenarios, are among the major areas to be investigated, given their relevance to practical, real-world applications.
AIGO will also consider generative AI models, especially related to the most recent trend regarding multimodal foundation models, including large language models (LLMs) and vision and language models (VLMs).
Main applications will involve biomedicine, biology, neuroscience, and healthcare in general. Investigation of the brain (and its diseases) is identified as the main (but not exclusive) area of interest. Ultimately, AIGO will seek to develop models that are readily applicable to IIT interdisciplinary research, ranging from neuroscience to robotics, in particular by leveraging our in-house robotics platforms (iCub, ErgoCub, R1, and others), IIT neuroscience teams, and HPC computational facilities.
Within the research team, your main responsibilities will be:
• Pursuing research in some of the above-mentioned topics addressed by the AIGO Research Line, at both the individual and collaborative level. Interdisciplinary research along IIT Flagship programs is also an important aspect of AIGO and of IIT in general;
• Supervising the research activities of PhD students;
• Publishing in major conferences and top journals;
• Searching for and preparing funding opportunities, e.g., project proposals for national and international grants, as well as acquiring funds from industrial partners.
See the webpage for the essential requirements, additional skills, compensation package, and other information.
Deadline is July 31st, 2025.
Location of the work is Genova, Italy.
I'm at CVPR until Sunday, June 15; feel free to contact me in case of interest!
Lecturer / Senior Lecturer
The Department of Computer Science at the University of Bath invites applications for up to seven faculty positions at various ranks from candidates who are passionate about research and teaching in artificial intelligence and machine learning. These are permanent positions with no tenure process. The start date is flexible.
The University of Bath is based on an attractive, single-site campus that facilitates interdisciplinary research. It is located on the edge of the World Heritage City of Bath and offers the lifestyle advantages of working and living in one of the most beautiful areas in the United Kingdom.
For more information and to apply, please visit: https://www.bath.ac.uk/campaigns/join-the-department-of-computer-science/
RI, Carnegie Mellon University, Pittsburgh
Seattle, WA, USA
As part of the AWS Solutions organization, we have a vision to provide business applications, leveraging Amazon’s unique experience and expertise, that are used by millions of companies worldwide to manage day-to-day operations. We will accomplish this by accelerating our customers’ businesses through delivery of intuitive and differentiated technology solutions that solve enduring business challenges. We blend vision with curiosity and Amazon’s real-world experience to build opinionated, turnkey solutions. Where customers prefer to buy over build, we become their trusted partner with solutions that are no-brainers to buy and easy to use.
Are you excited about developing state-of-the-art Deep Learning, Computer Vision and GenAI models using large data sets to solve real world problems? Do you have proven analytical capabilities and can multi-task and thrive in a fast-paced environment? You enjoy the prospect of solving real-world problems that, quite frankly, have not been solved at scale anywhere before. Along the way, you’ll get opportunities to be a fearless disruptor, prolific innovator, and a reputed problem solver—someone who truly enables machine learning to create significant impacts.
We're seeking a Principal Applied Scientist with the ability to apply deep learning and generative AI techniques to conceptualize, promote, and execute cutting-edge solutions for previously unsolved challenges. As an Applied Scientist at Amazon One, you will bring AI advancements to build foundational models for customer-facing identity/biometrics solutions in complex industrial settings. You will be working in a fast-paced, cross-disciplinary team of researchers who are leaders in the field. You will take on challenging problems, distill real requirements, and then deliver solutions that either leverage existing academic and industrial research or utilize your own out-of-the-box pragmatic thinking. In addition to coming up with novel solutions and prototypes, you may even need to deliver these to production in customer-facing products.