



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer, you will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who would enjoy applying their skills to deeply complex and novel AI problems. Specifically, you will:

  • Apply and extend the Helm proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
  • Carefully execute the development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole
  • Work closely with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms

Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior of other agents and our own vehicle’s actions, both on-vehicle and for offline applications. To solve these problems, we develop deep learning algorithms that learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior, and offline to provide learned models for autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team to advance our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IOT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proof-of-concepts in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and converted leading research into best-in-class user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second after performing our full-stack AI optimizations on the model.

Role responsibilities can include both applied and fundamental research in the field of machine learning, with a development focus in one or more of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g. large language models, deep generative models (VAE, Normalizing-Flow, ARM, etc.), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimization, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling based, back-propagation based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design.

  • Design, develop, and test software for machine learning frameworks that optimize models to run efficiently on edge devices. Candidates are expected to have a strong interest in and deep passion for making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Research, design, develop, enhance, and implement different components of machine learning compiler for HW Accelerators.

  • Design, implement and train DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).


Apply

Location Bellevue, WA


Description Are you excited about developing generative AI and foundation models to revolutionize automation, robotics and computer vision? Are you looking for opportunities to build and deploy them on real problems at truly vast scale? At Amazon Fulfillment Technologies and Robotics we are on a mission to build high-performance autonomous systems that perceive and act to further improve our world-class customer experience - at Amazon scale.

This role is for the AFT AI team, which has deep expertise developing cutting-edge AI solutions at scale and successfully applying them to business problems in the Amazon Fulfillment Network. These solutions typically utilize machine learning and computer vision techniques, applied to text, sequences of events, images, or video from existing or new hardware. The team is comprised of scientists, who develop machine learning and computer vision solutions; analysts, who evaluate the expected business impact for a project and the performance of these solutions; and software engineers, who provide necessary support such as annotation pipelines and machine learning library development.

We are looking for an Applied Scientist with expertise in computer vision. You will work alongside other CV scientists, engineers, product managers and various stakeholders to deploy vision models at scale across a diverse set of initiatives. If you are a self-motivated individual with a zeal for customer obsession and ownership, and are passionate about applying computer vision for real world problems - this is the team for you.


Apply

Natick, MA, United States


The Company: Cognex is a global leader in the exciting and growing field of machine vision. This position is a hybrid role in our Natick, MA corporate HQ.

The Team: This position is for an experienced Software Engineer in the Core Vision Technology team at Cognex, focused on architecting and productizing the best-in-class computer vision algorithms and AI models that power Cognex’s industrial barcode readers and 2D vision tools with a mission to innovate on behalf of customers and make this technology accessible to a broad range of users and platforms. Our products combine custom hardware, specialized lighting and optics, and world-class vision algorithms/models to create embedded systems that can find and read high-density symbols on package labels or marked directly on a variety of industrial parts, including aircraft engines, electronics substrates, and pharmaceutical test equipment. Our devices need to read hundreds of codes per second, so speed-optimized hardware and software work together to create best in class technology. Companies around the world rely on Cognex vision tools and technology to guide assembly, automate inspection, and speed up production and distribution.

Job Summary: The Core Vision Technology team is seeking an experienced developer with deep knowledge of the software development life cycle, creative problem solving skills, and solid design thinking, with a focus on productization of AI technology on embedded platforms. You will play the critical role of a chief architect, leading the development and productization of computer vision AI models and algorithms on multiple Cognex products, with the goal of making the technology modular and available to a broad range of users and platforms. In this role, you will interface with machine vision experts in R&D, product, hardware, and other software engineering teams at Cognex. A successful individual will lead design discussions, make sound architectural choices for the future on different embedded platforms, advocate for engineering excellence, mentor junior engineers, and extend technical influence across teams. Prior experience with productization of AI technology is essential for this position.

Essential Functions:
  • Develop and productize innovative vision algorithms, including AI models developed by the R&D team for detecting and reading challenging 1D and 2D barcodes, and vision tools for gauging, inspection, guiding, and identifying industrial parts.
  • Lead software and API design discussions and make scalable technology choices meeting current and future business needs.
  • More details in the link below.

Minimum education and work experience required:
  • MS or PhD from a top engineering school in EE, CS, or equivalent
  • 7+ years of relevant, high-tech work experience

If you would like to meet the hiring manager at CVPR to discuss this opportunity, please email ahmed.elbarkouky@cognex.com


Apply

Location New York, NY Seattle, WA Boston, MA


Description Climate Pledge Friendly helps customers discover and shop for products that are more sustainable. We partner with trusted sustainability certifications to highlight products that meet strict standards and help preserve the natural world. By shifting customer demand towards more sustainable products, we incentivize selling partners to build better selection, creating a virtuous cycle that yields significant environmental benefit at scale.

We are seeking a Senior Applied Scientist who is not just adept in the theoretical aspects of Machine Learning (ML), Artificial Intelligence (AI), and Large Language Models (LLMs) but also possesses a pragmatic, hands-on approach to navigating the complexities of innovation. You will take the lead in conceptualizing, building, and launching models that significantly improve our shopping experience. A successful applicant will display a comprehensive skill set across machine learning model development, implementation, and optimization. This includes a strong foundation in data management, software engineering best practices, and a keen awareness of the latest developments in distributed systems technology.

You will work with business leaders, scientists, and engineers to translate business and functional requirements into concrete deliverables, including the design, development, testing, and deployment of highly scalable distributed ML models and services. The types of initiatives you can expect to work on include a) personalized recommendations that help our customers find the right sustainable products on each shopping journey, b) automated solutions that combine ML/LLM and data mining to identify products that align with our sustainability goals and resonate with our customers' values, and c) models to measure the environmental and econometric impacts of sustainable shopping.


Apply

San Jose, CA

B GARAGE was founded in 2017 by a Ph.D. graduate from Stanford University. After spending over five years researching robotics, computer vision, aeronautics, and drone autonomy, the founder and team set their minds on building a future where aerial robots become an integral part of our daily lives without anyone necessarily piloting them. Our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

Roles and Responsibilities

Design and develop perception for aerial robots and inventory recognition for warehouses by leveraging computer vision and deep learning techniques

Aid the computer vision team in delivering prototypes and products in a timely manner

Collaborate with other teams within the company

Minimum Qualifications

M.S. degree in computer science, robotics, electrical engineering, or other engineering disciplines

10+ years of experience with computer vision and machine learning

Proficient in image processing algorithms and multi-view geometry with cameras

Experience with machine learning architectures for object detection, segmentation, text recognition etc.

Proficient with ROS, C++, and Python

Experience with popular computer vision and GPU frameworks/libraries (e.g., OpenCV, TensorFlow, PyTorch, CUDA, cuDNN, etc.)

Proficient in containerization (Docker) and container orchestration (Kubernetes) technologies

Experience in cloud computing platforms (AWS, GCP, etc.)

Experience with robots operating on real-time onboard processing

Self-motivated person who thrives in a fast-paced environment

Good problem solving and troubleshooting skills

Legally authorized to work in the United States

Optional Qualifications

Ph.D. degree in computer science, robotics, electrical engineering, or other engineering disciplines

Experience with scene reconstruction, bundle adjustment and factor graph optimization libraries

Experience with Javascript and massively parallel cloud computing technologies involving Kafka, Spark, MapReduce

Published research papers in CVPR, ICCV, ECCV, ICRA, IROS, etc.

Company Benefits

Competitive compensation packages

Medical, dental, vision, life insurance, and 401(k)

Flexible vacation and paid holidays

Complimentary lunches and snacks

Professional development reimbursement (online courses, conference, exhibit, etc.)

B GARAGE stands for an open and respectful corporate culture because we believe diversity helps us to find new perspectives.

B GARAGE ensures that all our members have equal opportunities – regardless of age, ethnic origin and nationality, gender and gender identity, physical and mental abilities, religion and belief, sexual orientation, and social background. We ensure diversity right from the recruitment stage and therefore make hiring decisions based on a candidate’s actual competencies, qualifications, and business needs at the time.


Apply

You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional, and life-like synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions that are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing and release, and you have a passion for large data, large model training and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses at a rapid pace across the globe.


Apply

London


Who we are Established in 2017, Wayve is a leader in autonomous vehicle technology, driven by breakthroughs in Embodied AI. Our intelligent, mapless, and hardware-agnostic technologies empower vehicles to navigate complex environments effortlessly.

Supported by prominent investors, Wayve is advancing the transition from assisted to fully automated driving, making transportation safer, more efficient, and universally accessible. Join our world-class, multinational team of engineers and researchers as we push the boundaries of frontier AI and autonomous driving, creating impactful technologies and products on a global scale.

Where you will have an impact We're looking for an experienced Applied Scientist with expertise in Neural Radiance Fields (NeRFs) and Gaussian Splatting to join our Vision & Graphics team and advance our innovative neural simulator, Ghost Gym. This role is central to improving Ghost Gym's capabilities, utilizing state-of-the-art neural rendering techniques to craft photorealistic 4D worlds. You'll be at the forefront of developing and applying groundbreaking research to generate thousands of simulated scenarios. These scenarios are critical for training, testing, and debugging our end-to-end AI driving models, contributing significantly to the creation of safe and reliable AI driving technology. Your work will focus on improving the efficiency, realism, and dynamism of our simulations, especially for dynamic and outdoor environments, pushing the limits of current photorealistic visualization technologies.

Challenges you will own
  • Conducting cutting-edge research in NeRFs, Gaussian splatting, and related technologies, with a focus on solving real-world challenges in 3D rendering
  • Developing and implementing algorithms for efficient, high-quality 3D scene reconstruction and rendering, particularly for dynamic and outdoor environments
  • Collaborating with cross-functional teams to integrate research findings into scalable, production-level solutions
  • Staying abreast of the latest developments in the field, evaluating and incorporating state-of-the-art techniques into our workflows
  • Potentially finding opportunities to publish research findings in top-tier journals and conferences, contributing to the scientific community and establishing Wayve as a leader in the field

What you will bring to Wayve

Essential
  • Proven track record of research in NeRFs, Gaussian splatting, or closely related areas, demonstrated through publications or deployed applications
  • Strong programming skills in Python with experience in deep learning frameworks such as PyTorch
  • Solid foundation in mathematics and physics underlying 3D graphics and rendering techniques
  • Excellent problem-solving skills and the ability to work independently as well as in a team environment
  • Demonstrated ability to work collaboratively in a fast-paced, innovative, interdisciplinary team environment

Desirable
  • Experience with dynamic scene reconstruction and rendering, particularly in outdoor environments
  • Familiarity with parallel computing, GPU programming, and optimization techniques
  • PhD or MSc in Computer Science, Computer Engineering, or a related field, with a focus on computer graphics, computer vision, or machine learning

What we offer you
  • The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact
  • Competitive compensation and benefits
  • A dynamic and fast-paced work environment in which you will grow every day - learning on the job, from the brightest minds in our space, and with support for more formal learning opportunities too
  • A culture that is ego-free, respectful and welcoming (of you and your dog) - we even eat lunch together every day


Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. This project will take place in close collaboration between the Inria THOTH team and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others. Therefore, anticipating the partner's actions is crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project we are in particular interested in estimating human intentions to enable collaborative tasks between humans and robots such as human-to-robot and robot-to-human handovers.

Contact: pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda-Pineda (RobotLearn).


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Sr. Fullstack Engineer, you will work on our platform engineering team playing a crucial role in enabling our research engineers to fine-tune our foundation models and streamline the machine learning process for our autonomous technology. You will work on developing products that empower our internal teams to maximize efficiency and innovation in our product. Specifically, you will:

  • Build mission-critical tools for improving observability and scaling the entire machine-learning process.
  • Use modern technologies to serve huge amounts of data, visualize key metrics, manage our data inventory, trigger backend data processing pipelines, and more.
  • Work closely with people across the company to create a seamless UI experience.

Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:
  • Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments
  • Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
  • Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
  • Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
  • Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
  • Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

Please click on the Apply link below to see the full job description and apply.


Apply

Open to Seattle, WA; Costa Mesa, CA; or Washington, DC

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

WHY WE’RE HERE The Mission Software Engineering team builds, deploys, integrates, extends, and scales Anduril's software to deliver mission-critical capabilities to our customers. As the software engineers closest to Anduril customers and end-users, Mission Software Engineers solve technical challenges of operational scenarios while owning the end-to-end delivery of winning capabilities such as Counter Intrusion, Joint All Domain Command & Control, and Counter-Unmanned Aircraft Systems.

As a Mission Software Engineer, you will solve a wide variety of problems involving networking, autonomy, systems integration, robotics, and more, while making pragmatic engineering tradeoffs along the way. Your efforts will ensure that Anduril products seamlessly work together to achieve a variety of critical outcomes. Above all, Mission Software Engineers are driven by a “Whatever It Takes” mindset—executing in an expedient, scalable, and pragmatic way while keeping the mission top-of-mind and making sound engineering decisions to deliver successful outcomes correctly, on-time, and with high quality.

WHAT YOU’LL DO
  • Own the software solutions that are deployed to customers
  • Write code to improve products and scale the mission capability to more customers
  • Collaborate across multiple teams to plan, build, and test complex functionality
  • Create and analyze metrics that are leveraged for debugging and monitoring
  • Triage issues, root cause failures, and coordinate next steps
  • Partner with end-users to turn needs into features while balancing user experience with engineering constraints
  • Travel up to 30% of time to build, test, and deploy capabilities in the real world

CORE REQUIREMENTS
  • Strong engineering background from industry or school, ideally in areas/fields such as Computer Science, Software Engineering, Mathematics, or Physics
  • At least 2-5+ years working with a variety of programming languages such as Java, Python, Rust, Go, JavaScript, etc. (We encourage all levels to apply)
  • Experience building software solutions involving significant amounts of data processing and analysis
  • Ability to quickly understand and navigate complex systems and established code bases
  • A desire to work on critical software that has a real-world impact
  • Must be eligible to obtain and maintain a U.S. TS clearance

DESIRED REQUIREMENTS
  • Strong background with a focus in Physics, Mathematics, and/or Motion Planning to inform modeling & simulation (M&S) and physical systems
  • Developing and testing multi-agent autonomous systems and deploying them in real-world environments
  • Feature and algorithm development with an understanding of behavior trees
  • Developing software/hardware for flight systems and safety-critical functionality
  • Distributed communication networks and message standards
  • Knowledge of military systems and operational tactics

WHAT WE VALUE IN MISSION SOFTWARE
Customer Facing - Mission Software Engineers are the software engineers closest to Anduril customers, end-users, and the technical challenges of operational scenarios.
Mission First - Above all, MSEs execute their mission in an expedient, scalable, and pragmatic way, keeping the mission top-of-mind.


Apply
