



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

We’re looking for engineers with a Machine Learning and Artificial Intelligence background to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • Drive fundamental and applied research in ML/AI, exploring the boundaries of what is possible with current technology.
  • Combine industry best practices and a first-principles approach to design and build ML models.
  • Work in concert with product and infrastructure engineers to improve Figma’s design and collaboration tool through ML-powered product features.

We'd love to hear from you if you have:
  • 5+ years of experience in programming languages (Python, C++, Java or R)
  • 3+ years of experience in one or more of the following areas: machine learning, natural language processing/understanding, computer vision, generative models.
  • Proven experience researching, building and/or fine-tuning ML models in production environments
  • Experience communicating and working across functions to drive solutions

While not required, it’s an added plus if you also have:

  • Proven track record of planning multi-year roadmap in which shorter-term projects ladder to the long-term vision.
  • Experience in mentoring/influencing senior engineers across organizations.

Apply

Location Sunnyvale, CA; Seattle, WA; New York, NY; Cambridge, MA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to help build industry-leading technology with multimodal systems.

As an Applied Scientist with the AGI team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (Gen AI) in Computer Vision.


Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. This project will take place in close collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others. Therefore, anticipating the partner's actions is crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project we are in particular interested in estimating human intentions to enable collaborative tasks between humans and robots such as human-to-robot and robot-to-human handovers.

Contact: pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda-Pineda (RobotLearn).


Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:

  • Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments
  • Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
  • Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
  • Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
  • Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
  • Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

Please click on the Apply link below to see the full job description and apply.


Apply

Location Madrid, ESP


Description Amazon's International Technology org in the EU (EU INTech) is creating new ways for Amazon customers to discover the Amazon catalog through new and innovative customer experiences. Our vision is to provide the most relevant content and CX for their shopping mission. We are responsible for building the software and machine learning models that surface high-quality, relevant content to Amazon customers worldwide across the site.

The team, mainly located in the Madrid Technical Hub, London, and Luxembourg, comprises Software Development and ML Engineers, Applied Scientists, Product Managers, Technical Product Managers, and UX Designers who are experts in several areas of ranking, computer vision, recommendation systems, and search, as well as CX. Are you interested in how the experiences that fuel Catalog and Search are built to scale to customers worldwide? Are you interested in how we use state-of-the-art AI to generate and provide the most relevant content?

We are looking for Applied Scientists who are passionate about solving highly ambiguous and challenging problems at global scale. You will be responsible for major science challenges for our team, including working with state-of-the-art text-to-image and image-to-text models to enable new customer experiences worldwide. You will design, develop, deliver, and support a variety of models in collaboration with a variety of roles and partner teams around the world. You will influence scientific direction and best practices and maintain quality on team deliverables.


Apply

Seattle, US


Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences. We’re passionate about empowering people to craft beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.

We’re on a mission to hire the very best and are committed to building exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity: Photoshop ART is seeking a Research Scientist to join our inpainting R&D team focused on making significant progress in image generation/restoration, low-level vision, and image editing, with an eventual posture toward productization. Individuals in this role are expected to be experts in identified research areas such as artificial intelligence, machine learning, computer vision, and image processing. The ideal candidate will have a keen interest in producing new science to advance Adobe products.

What you'll do:

  • Work towards long-term, results-oriented research goals while identifying intermediate achievements.
  • Contribute to research that can be applied to Adobe product development.
  • Help integrate novel research work into Adobe’s products.
  • Lead and collaborate on research projects across different Adobe divisions.

What you need to succeed:

  • Ph.D. and solid publications in machine learning, AI, computer science, statistics, or scene semantic understanding.
  • Experience communicating research to audiences of peers.
  • Experience working in teams.
  • Knowledge of a programming language.

Preferred qualifications:

  • 2 years of professional full-time experience preferred, but not required.
  • 2+ years of internship experience with a primary emphasis on AI research in image generation, low-level vision, image restoration, and segmentation.
  • Experience collaborating with a team with varied strengths.
  • 4+ first-author publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ECCV, ICML, ICLR, ICCV, and ACL).
  • Experience developing and debugging in Python.

At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach, where ongoing feedback flows freely.

If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer.

Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, or veteran status.

Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $129,400 -- $242,200 annually. Pay within this range varies by work location and may also depend on job-related factors.


Apply

ASML US, including its affiliates and subsidiaries, brings together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market, and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, Netherlands, and we have 18 office locations around the United States, including main offices in Chandler, Arizona; San Jose and San Diego, California; Wilton, Connecticut; and Hillsboro, Oregon.

The Advanced Development Center at ASML in Wilton, Connecticut is seeking an Optical Data Analyst with expertise in image processing for metrology process development of ultra-high-precision optics and ceramics. The Advanced Development Center (ADC) is a multi-disciplinary group of engineers and scientists focused on developing learning-loop solutions, prototyping next-generation wafer and reticle clamping systems, and industrializing prototypes that meet the system performance requirements.

Role and Responsibilities: The main job function is to develop image processing, data analysis, and machine learning algorithms and software to aid in the development of wafer and reticle clamping systems, solving challenging engineering problems associated with achieving nanometer (nm) scale precision. You will be part of the larger Development and Engineering (DE) sector, where the design and engineering of ASML products happens.

As an Optical Data Analyst, you will:

  • Develop and improve image processing algorithms to extract nm-level information from scientific imaging equipment (e.g. interferometer, SEM, AFM)
  • Integrate algorithms into an image processing software package for analysis and process development cycles for engineering and manufacturing users
  • Maintain a version-controlled software package for multiple product generations
  • Perform software testing to identify application, algorithm, and software bugs
  • Validate, verify, regression test, and unit test software to ensure it meets the business and technical requirements
  • Use machine learning models to predict trends and behaviors relating to lifetime and manufacturing improvements of the product
  • Execute a plan of analysis, software, and systems to mitigate product and process risk and prevent software performance issues
  • Collaborate with the design team on software analysis tool development to find efficient solutions to difficult technical problems
  • Work with database structures and utilize their capabilities
  • Write software scripts to search, analyze, and plot data from databases
  • Support query code to interrogate data for manufacturing and engineering needs
  • Support image analysis on data and derive conclusions
  • Travel (up to 10%) to Europe, Asia, and within the US can be expected


Apply

Redmond, Washington, United States


Overview: Within the AI Platform, the Cognitive Services team empowers developers and data scientists of all skill levels around the world to easily add AI capabilities to their apps. #aiplatform

We are looking for a Research Scientist with a background in Computer Vision, Natural Language Processing and/or Artificial Intelligence, including topics like layout analysis, chart understanding, multi-page multi-document question answering, novel ways of leveraging large language models for document understanding and solving problems inherent to large language models (grounding, retrieval-based generation, etc.). Familiarity with modern large language models is a plus, but not required.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities: Your responsibilities will include:

  • Conduct pioneering research to propel the state of the art in various document understanding tasks.
  • Work closely with fellow Research Scientists and Product Engineering teams to translate research outcomes into practical solutions.
  • Provide expertise and support to the engineering team on various challenges, fostering collaboration between research and practical application.
  • Take charge of the research agenda, from problem definition to algorithm and model development.


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior for other agents, our own vehicle’s actions, and for offline applications. To solve these problems we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior and offline to provide learned models to autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team in the advancement of our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

Vancouver

Who we are: Established in 2017, Wayve is a leader in autonomous vehicle technology, driven by breakthroughs in Embodied AI. Our intelligent, mapless, and hardware-agnostic technologies empower vehicles to navigate complex environments effortlessly. Supported by prominent investors, Wayve is advancing the transition from assisted to fully automated driving, making transportation safer, more efficient, and universally accessible. Join our world-class, multinational team of engineers and researchers as we push the boundaries of frontier AI and autonomous driving, creating impactful technologies and products on a global scale.

We are seeking an experienced researcher to be a founding member of our Vancouver team! We are prioritising someone with experience leading projects in AI applied to autonomous driving or similar robotics or decision-making domains, including, but not limited to, the following areas:

  • Foundation models for robotics or embodied AI
  • Model-free and model-based reinforcement learning
  • Offline reinforcement learning
  • Large language models
  • Planning with learned models, model predictive control, and tree search
  • Imitation learning, inverse reinforcement learning, and causal inference
  • Learned agent models: behavioural and physical models of cars, people, and other dynamic agents

Challenges you will own You'll be working on some of the world's hardest problems, and able to attack them in new ways. You'll be a technical leader within our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

  • Actively contributing to the Science team’s technical leadership community, including proposing new projects, organising their work, and delivering substantial impact across Wayve
  • Leveraging our large, rich, and diverse sources of real-world driving data
  • Architecting our models to best employ the latest advances in foundation models, transformers, world models, etc., evaluating and incorporating state-of-the-art techniques into our workflows
  • Investigating which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
  • Leveraging simulation for controlled experimental insight, training data augmentation, and re-simulation
  • Scaling models efficiently across data, model size, and compute, while maintaining efficient deployment on the car
  • Collaborating with cross-functional, international teams to integrate research findings into scalable, production-level solutions
  • Potentially contributing to academic publications for top-tier conferences like NeurIPS, CVPR, ICRA, ICLR, and CoRL, working in a world-class team, contributing to the scientific community, and establishing Wayve as a leader in the field

What you will bring to Wayve:

  • Proven track record of research in one or more of the topics above, demonstrated through deployed applications or publications
  • Experience leading a research agenda aligned with larger organisation or company goals
  • Strong programming skills in Python, with experience in deep learning frameworks and libraries such as PyTorch, numpy, and pandas
  • Experience bringing a machine learning research concept through the full ML development cycle
  • Excellent problem-solving skills and the ability to work independently as well as in a team environment
  • Demonstrated ability to work collaboratively in a fast-paced, innovative, interdisciplinary team environment
  • Experience bringing an ML research concept through to production and at scale
  • PhD in Computer Science, Computer Engineering, or a related field

What we offer you: The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact.


Apply

Vancouver

Who we are: Established in 2017, Wayve is a leader in autonomous vehicle technology, driven by breakthroughs in Embodied AI. Our intelligent, mapless, and hardware-agnostic technologies empower vehicles to navigate complex environments effortlessly.

Supported by prominent investors, Wayve is advancing the transition from assisted to fully automated driving, making transportation safer, more efficient, and universally accessible. Join our world-class, multinational team of engineers and researchers as we push the boundaries of frontier AI and autonomous driving, creating impactful technologies and products on a global scale.

Where you will have an impact: We are seeking an experienced researcher to be a founding member of our Vancouver team! We are prioritising someone with experience actively participating in AI projects applied to autonomous driving or similar robotics or decision-making domains, including, but not limited to, the following areas:

  • Foundation models for robotics or embodied AI
  • Model-free and model-based reinforcement learning
  • Offline reinforcement learning
  • Large language models
  • Planning with learned models, model predictive control, and tree search
  • Imitation learning, inverse reinforcement learning, and causal inference
  • Learned agent models: behavioural and physical models of cars, people, and other dynamic agents

Challenges you will own: You'll be working on some of the world's hardest problems, and able to attack them in new ways. You'll be a key member of our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

  • How to leverage our large, rich, and diverse sources of real-world driving data
  • How to architect our models to best employ the latest advances in foundation models, transformers, world models, etc., evaluating and incorporating state-of-the-art techniques into our workflows
  • Which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
  • How to leverage simulation for controlled experimental insight, training data augmentation, and re-simulation
  • How to scale models efficiently across data, model size, and compute, while maintaining efficient deployment on the car
  • Collaborating with cross-functional teams to integrate research findings into scalable, production-level solutions

You also have the potential to contribute to academic publications for top-tier conferences like NeurIPS, CVPR, ICRA, ICLR, and CoRL, working in a world-class team, contributing to the scientific community, and establishing Wayve as a leader in the field.

What you will bring to Wayve:

  • Proven track record of research in one or more of the topics above, demonstrated through deployed applications or publications
  • Strong programming skills in Python, with experience in deep learning frameworks and libraries such as PyTorch, numpy, and pandas
  • Experience bringing a machine learning research concept through the full ML development cycle
  • Excellent problem-solving skills and the ability to work independently as well as in a team environment
  • Demonstrated ability to work collaboratively in a fast-paced, innovative, interdisciplinary team environment
  • Desirable: experience bringing an ML research concept through to production and at scale
  • PhD in Computer Science, Computer Engineering, or a related field

What we offer you:

  • The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact
  • Competitive compensation and benefits
  • A dynamic and fast-paced work environment in which you will grow every day, learning on the job from the brightest minds in our space, with support for more formal learning opportunities too


Apply

Zoox is looking for a software engineer to join our Perception team and help us build novel architectures for classifying and understanding the complex and dynamic environments in our cities. In this role, you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms. We are creating new algorithms for segmentation, tracking, classification, and high-level scene understanding, and you could work on any (or all!) of these components.

We're looking for engineers with advanced degrees and experience building perception pipelines that work with real data in rapidly changing and uncertain environments.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As an MLE, you will collaborate with researchers to perform research operations using existing infrastructure. You will use your judgment in complex scenarios and help apply standard techniques to various technical problems. Specifically, you will:

  • Characterize neural network quality, failure modes, and edge cases based on research data
  • Maintain awareness of current trends in relevant areas of research and technology
  • Coordinate with researchers and accurately convey the status of experiments
  • Manage a large number of concurrent experiments and make accurate time estimates for deadlines
  • Review experimental results and suggest theoretical or process improvements for future iterations
  • Write technical reports indicating qualitative and quantitative results to external parties

Apply

Location Seattle, WA; Palo Alto, CA


Description Amazon’s product search engine is one of the most heavily used services in the world: it indexes billions of products and serves hundreds of millions of customers worldwide. We are working on an AI-first initiative to continue to improve the way we do search through the use of large-scale, next-generation deep learning techniques. Our goal is to make step-function improvements in the use of advanced multi-modal deep learning models on very large-scale datasets, specifically through the use of advanced systems engineering and hardware accelerators. This is a rare opportunity to develop cutting-edge Computer Vision and Deep Learning technologies and apply them to a problem of this magnitude. Some exciting questions that we expect to answer over the next few years include:

  • How can multi-modal inputs in deep learning models help us deliver delightful shopping experiences to millions of Amazon customers?
  • Can combining multi-modal data and very large-scale deep learning models provide a step-function improvement in overall model understanding and reasoning capabilities?

We are looking for exceptional scientists who are passionate about innovation and impact, and want to work in a team with a startup culture within a larger organization.


Apply

Location San Francisco, CA


Description Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

You will be managing a team within the Music Machine Learning and Personalization organization that is responsible for developing, training, serving and iterating on models used for personalized candidate generation for both Music and Podcasts.


Apply