



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category or location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

San Jose, CA

B GARAGE was founded in 2017 by a Ph.D. graduate from Stanford University. After having spent over five years researching robotics, computer vision, aeronautics, and drone autonomy, the founder and team set their minds on building a future where aerial robots would become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

Roles and Responsibilities

Design and develop perception for aerial robots and inventory recognition for warehouses by leveraging computer vision and deep learning techniques

Aid the computer vision team in delivering prototypes and products in a timely manner

Collaborate with other teams within the company

Minimum Qualifications

M.S. degree in computer science, robotics, electrical engineering, or other engineering disciplines

10+ years of experience with computer vision and machine learning

Proficient in image processing algorithms and multiple-view geometry using cameras

Experience with machine learning architectures for object detection, segmentation, text recognition, etc.

Proficient with ROS, C++, and Python

Experience with popular computer vision and GPU frameworks/libraries (e.g., OpenCV, TensorFlow, PyTorch, CUDA, cuDNN, etc.)

Proficient in containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) technologies

Experience in cloud computing platforms (AWS, GCP, etc.)

Experience with robots that rely on real-time onboard processing

Self-motivated person who thrives in a fast-paced environment

Good problem solving and troubleshooting skills

Legally authorized to work in the United States

Optional Qualifications

Ph.D. degree in computer science, robotics, electrical engineering, or other engineering disciplines

Experience with scene reconstruction, bundle adjustment and factor graph optimization libraries

Experience with JavaScript and massively parallel cloud computing technologies such as Kafka, Spark, and MapReduce

Published research papers in CVPR, ICCV, ECCV, ICRA, IROS, etc.

Company Benefits

Competitive compensation packages

Medical, dental, vision, life insurance, and 401(k)

Flexible vacation and paid holidays

Complimentary lunches and snacks

Professional development reimbursement (online courses, conference, exhibit, etc.)

B GARAGE stands for an open and respectful corporate culture because we believe diversity helps us to find new perspectives.

B GARAGE ensures that all our members have equal opportunities – regardless of age, ethnic origin and nationality, gender and gender identity, physical and mental abilities, religion and belief, sexual orientation, and social background. We always ensure diversity right from the recruitment stage and therefore make hiring decisions based on a candidate’s actual competencies, qualifications, and business needs at the time of hiring.


Apply

Vancouver, British Columbia, Canada


Overview

Microsoft Research (MSR) is a leading industrial research laboratory comprising over 1,000 computer scientists working across the United States, United Kingdom, China, India, Canada, and the Netherlands.

We are currently seeking a Senior Researcher in the area of Artificial Specialized Intelligence located in Vancouver, British Columbia, with a keen interest in developing cutting-edge large foundation models and post-training techniques for different domains and scenarios. This is an opportunity to drive an ambitious research agenda while collaborating with diverse teams to push for novel applications of those areas.

Over the past 30 years, our scientists have not only conducted world-class computer science research but also integrated advanced technologies into our products and services, positively impacting millions of lives and propelling Microsoft to the forefront of digital transformation.

Responsibilities

Conduct cutting-edge research in large foundation models, focusing on applying them in specific domains. Collaborate with cross-functional teams to integrate solutions into Artificial Intelligence (AI)-driven systems. Develop and maintain research prototypes and software tools, ensuring that they are well documented and adhere to best practices in software development. Publish research findings in top-tier conferences and journals and present your work at industry events. Collaborate with other AI researchers and engineers, sharing knowledge and expertise to foster a culture of innovation and continuous learning within the team.


Apply

About the role

As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible for both annotating in-house datasets and ensuring the quality assurance of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will have to be comfortable working with standard tools and workflows for data annotation and possess the ability to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note, this role is office-based. You will be working at our modern, friendly office in the very heart of London.


Apply

Location Multiple Locations


Description

Qualcomm's Multimedia R&D and Standards Group is seeking candidates for Video Compression Research Engineer positions. You will be part of a world-renowned team of video compression experts. The team develops algorithms, hardware architectures, and systems for state-of-the-art applications of classical and machine learning methods in video compression, video processing, point cloud coding and processing, AR/VR, and computer vision use cases. The successful candidate for this position will be a highly self-directed individual with strong creative and analytic skills and a passion for video compression technology. You will work on, but not be limited to, developing new applications of classical and machine learning methods in video compression that improve state-of-the-art video codecs.

We are considering candidates with various levels of experience. We are flexible on location and open to hiring anywhere; preferred locations are the USA, Germany, and Taiwan.

Responsibilities:

  • Contribute to the conception, development, implementation, and optimization of new algorithms extending existing techniques and systems, allowing improved video compression
  • Initiate ideas, and design and implement algorithms for superior hardware encoder performance, including perceptually based bit allocation
  • Develop new algorithms for deep learning-based video compression solutions
  • Represent Qualcomm in the related standardization forums: JVET, MPEG Video, and ITU-T/VCEG
  • Document and present new algorithms and implementations in various forms, including standards contributions, patent applications, conference and journal publications, presentations, etc.

The ideal candidate would have the skills/experience below:

  • Expert knowledge of the theory, algorithms, and techniques used in video and image coding
  • Knowledge and experience of video codecs and their test models, such as ECM, VVC, HEVC, and AV1
  • Experience with deep learning structures (CNN, RNN, autoencoder, etc.) and frameworks like TensorFlow/PyTorch
  • Track record of successful research accomplishments demonstrated through published papers and/or patent applications in the fields of video coding or video processing
  • Solid programming and debugging skills in C/C++
  • Strong written and verbal English communication skills, great work ethic, and ability to work in a team environment to accomplish common goals
  • PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field, or equivalent practical experience

Qualifications: PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field. 1+ years of experience with programming languages such as C, C++, MATLAB, etc.


Apply

ASML US, including its affiliates and subsidiaries, brings together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market, and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, Netherlands, and we have 18 office locations around the United States, including main offices in Chandler, Arizona; San Jose and San Diego, California; Wilton, Connecticut; and Hillsboro, Oregon.

ASML’s Optical Sensing (Wafer Alignment Sensor and YieldStar) department in Wilton, Connecticut is seeking a Design Engineer to support and develop complex optical/photonic sensor systems used within ASML’s photolithography tools. These systems typically include light sources, detectors, optical/electro-optical components, fiber optics, electronics and signal processing software functioning in close collaboration with the rest of the lithography system. As a design engineer, you will design, develop, build and integrate optical sensor systems.

Role and Responsibilities

  • Use general physics, optics, and software knowledge and an understanding of the sensor systems and tools to develop optical alignment sensors in lithography machines
  • Hands-on skills in building optical systems (e.g., imaging, testing, alignment, detector systems, etc.)
  • Strong data analysis skills to evaluate sensor performance and troubleshoot issues

Leadership:
  • Lead activities for determining problem root cause, execute complex tests, gather data, and effectively communicate results at different levels of abstraction (from technical colleagues to high-level managers)
  • Lead engineers in various competencies (e.g., software, electronics, equipment engineering, manufacturing engineering, etc.) in support of feature delivery for alignment sensors

Problem Solving:
  • Troubleshoot complex technical problems
  • Develop/debug data signal processing algorithms
  • Develop and execute test plans in order to determine problem root cause

Communications/Teamwork:
  • Draw conclusions based on input from different stakeholders
  • Clearly communicate information at different levels of abstraction

Programming:
  • Implement data analysis techniques into functioning MATLAB code
  • Optimization skills
  • GUI building experience
  • Familiarity with LabVIEW and Python

Some travel (up to 10%) to Europe, Asia, and within the US can be expected.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As an MLE, you will collaborate with researchers to perform research operations using existing infrastructure. You will use your judgment in complex scenarios and help apply standard techniques to various technical problems. Specifically, you will:

  • Characterize neural network quality, failure modes, and edge cases based on research data
  • Maintain awareness of current trends in relevant areas of research and technology
  • Coordinate with researchers and accurately convey the status of experiments
  • Manage a large number of concurrent experiments and make accurate time estimates for deadlines
  • Review experimental results and suggest theoretical or process improvements for future iterations
  • Write technical reports indicating qualitative and quantitative results to external parties

Apply

Redmond, Washington, United States


Overview

Microsoft Research (MSR) AI Frontiers lab is seeking applications for the position of Senior Research Engineer – Generative AI to join its team in Redmond, WA and New York City, NY.

The mission of the AI Frontiers lab is to expand the Pareto frontier of AI capabilities, efficiency, and safety through innovations in foundation models and learning agent platforms. Some of our projects include work on Small Language Models (e.g., Phi, Orca), foundation models for actions (e.g., in gaming, robotics, and Office productivity tools), and Multi-Agent AI (e.g., AutoGen).

We are seeking Senior Research Engineers to join our team and contribute to the advancement of Generative AI and Large Language Model (LLM) technologies. As a Research Engineer, you will play a crucial role in developing, improving, and exploring the capabilities of Generative AI models. Your work will have a significant impact on the development of cutting-edge technologies, advancing the state of the art and providing practical solutions to real-world problems.

Our ongoing research areas encompass but are not limited to:

  • Pre-training: especially of language models, action models, and multimodal models
  • Alignment and post-training: e.g., instruction tuning and reinforcement learning from feedback
  • Continual learning: enabling LLMs to evolve and adapt over time and learn from previous experiences and human interactions
  • Specialization: tailoring models to meet application-specific requirements
  • Orchestration and multi-agent systems: automated orchestration between multiple agents, incorporating human feedback and oversight

Microsoft Research (MSR) offers a vibrant environment for cutting-edge, multidisciplinary research, including access to diverse, real-world problems and data, opportunities for experimentation and real-world impact, an open publication policy, and close links to top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Embody our Culture and Values

Responsibilities

As a Senior Research Engineer in AI Frontiers, you will design, develop, execute, and implement technology research projects in collaboration with other researchers, engineers, and product groups.

As a member of a world-class research organization, you will be a part of research breakthroughs in the field and will be given an opportunity to realize your ideas in products and services used worldwide.


Apply

Tokyo, Tokyo-to, Japan


Overview

As one of the world's leading industrial research laboratories, Microsoft Research (MSR) has more than 1,000 researchers and engineers working across the globe. In the past 30 years, Microsoft scientists have not only carried out world-class computer science research, but also transferred advanced technologies into our products and services, changing millions of people’s lives and ensuring that Microsoft is at the forefront of digital transformation.

Part of Microsoft Research, Microsoft Research Asia (MSR Asia), established in 1998, is a leading research lab with major sites in Beijing, Shanghai, and Vancouver. Over the years, technologies developed by MSR Asia have made a significant impact within Microsoft and around the world, and new, innovative technologies are constantly being born from the lab. As a world-class research lab, MSRA offers an exhilarating, supportive, open, and inclusive environment for top talent to create the future through their disruptive and cutting-edge research.

Along with business growth, Microsoft Research Asia (MSRA) is increasing its presence in Japan and is looking for a Principal Research Manager who specializes in AI, with an emphasis on Embodied AI and Robotics, AI Model innovations (NLP, vision, multi-modality), Societal AI, Wireless sensing, and Wellbeing. This is a unique opportunity to lead an ambitious research agenda and work with various teams to explore new applications of those research areas.

Responsibilities

  • As a leading and accomplished expert in a broad research area (e.g., Embodied AI and Robotics, AI Model, Multimedia and Vision), has a comprehensive understanding of the relevant literature, research methods, and business and academic context
  • Defines and articulates a clear long-term research vision that is in line with MSRA's strategic focus, and drives the research agenda to land on the planned schedule
  • As a local representative, fosters cooperative relationships with local governments, academic communities, industry partners, and business groups within Microsoft to establish MSRA's presence locally and support future growth
  • Creates synergy among MSRA research groups in multiple locations to enable collaboration and creativity
  • As a people manager, hires and retains top talent, and delivers success through empowerment and accountability


Apply

Seattle, WA or Costa Mesa, CA

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

The Vehicle Autonomy (Robotics) team at Anduril develops aerial and ground-based robotic systems. The team is responsible for taking products like Ghost, Anvil, and our Sentry Tower from paper sketches to operational systems. We work in close coordination with specialist teams like Perception, Autonomy, and Manufacturing to solve some of the hardest problems facing our customers. We are looking for software engineers and roboticists excited about creating a powerful robotics stack that includes computer vision, motion planning, SLAM, controls, estimation, and secure communications.

WHAT YOU'LL DO

  • Write and maintain core libraries (frame transformations, targeting and guidance, etc.) that all robotics platforms at Anduril will use
  • Own feature development and rollout for our products. Recent examples include building a Software-in-the-Loop simulator for our Tower product, writing an autofocus control system for cameras, creating a distributed-over-IPC coordinate frame library, and redesigning the Pan-Tilt controls to accurately move heavy loads
  • Design, evaluate, and implement sensor integrations that support operation by both human and autonomous planning agents
  • Work closely with our hardware and manufacturing teams during product development, providing quick feedback that contributes to the final hardware design

REQUIRED QUALIFICATIONS

  • Strong engineering background from industry or school, ideally in areas/fields such as Robotics, Computer Science, Software Engineering, Mechatronics, Electrical Engineering, Mathematics, or Physics
  • 5+ years of C++ or Rust experience in a Linux development environment
  • Experience building software solutions involving significant amounts of data processing and analysis
  • Ability to quickly understand and navigate complex systems and established code bases
  • Must be eligible to obtain and hold a US DoD Security Clearance

PREFERRED QUALIFICATIONS

  • Experience in one or more of the following: motion planning, perception, localization, mapping, controls, and related system performance metrics
  • Understanding of systems software (kernel, device drivers, system calls) and performance analysis


Apply

Seoul / Pangyo, South Korea


Description

1) Deep learning compression and optimization
  • Development of algorithms for compression and optimization of deep learning networks
  • Perform deep learning network embedding (requires understanding of the HW platform)

2) AD vision recognition SW
  • Development of deep learning recognition technology based on sensors such as cameras
  • Development of pre- and post-processing algorithms and function output
  • Development and optimization of image recognition algorithms

3) AD decision/control SW
  • Development of map generation technology based on information recognized by multiple vehicles
  • Development of a learning-based behavior prediction model for nearby objects
  • Development of driving mode determination and collision prevention functions for a Level 3 autonomous driving system


Apply

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

The AI Platform team at Figma is working on an exciting mission of expanding the frontiers of AI for creativity and developing magical experiences in Figma products. This involves making existing features like search smarter, and incorporating new features using cutting-edge Generative AI and deep learning techniques. We’re looking for engineers with a background in Machine Learning and Artificial Intelligence to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • Drive fundamental and applied research in ML/AI using Generative AI, deep learning, and classical machine learning, with Figma product use cases in mind.
  • Formulate and implement new modeling approaches both to improve the effectiveness of Figma’s current models as well as enable the launch of entirely new AI-powered product features.
  • Work in concert with other ML researchers, as well as product and infrastructure engineers to productionize new models and systems to power features in Figma’s design and collaboration tool.
  • Expand the boundaries of what is possible with the current technology set and experiment with novel ideas.
  • Publish scientific work on problems relevant to Figma in leading conferences like ICML, NeurIPS, CVPR etc.

We'd love to hear from you if you have:

  • Have recently obtained or are in the process of obtaining a PhD in AI, Computer Science, or a related field. Degree must be completed prior to starting at Figma.
  • Demonstrated expertise in machine learning with a publication record in relevant conferences, or a track record in applying machine learning techniques to products.
  • Experience in Python and machine learning frameworks (such as PyTorch, TensorFlow or JAX).
  • Experience building systems based on deep learning, natural language processing, computer vision, and/or generative models.
  • Experience solving sophisticated problems and comparing alternative solutions, trade-offs, and diverse points of view to determine a path forward.
  • Experience communicating and working across functions to drive solutions.

While not required, it’s an added plus if you also have:

  • Experience working in industry on relevant AI projects through internships or past full time work.
  • Publications in recent advances in AI like Large language models (LLMs), Vision language Models (VLMs) or diffusion models.

Apply

Location San Francisco, CA


Description Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

You will be managing a team within the Music Machine Learning and Personalization organization that is responsible for developing, training, serving and iterating on models used for personalized candidate generation for both Music and Podcasts.


Apply

Mountain View


Who we are

Our team is the first in the world to deploy autonomous vehicles on public roads using end-to-end deep learning. With our multi-national, world-class technical team, we’re building things differently.

We don’t think it’s scalable to tell an algorithm how to drive through hand-coded rules and expensive HD maps. Instead, we believe that using experience and data will allow our algorithms to be more intelligent: capable of easily adapting to new environments. Our aim is to be the future of self-driving cars: the first to deploy in 100 cities across the world bringing autonomy to everyone, everywhere.

Where you will have an impact: Science is the team that is advancing our end-to-end autonomous driving research. The team’s mission is to accelerate our journey to AV2.0 and ensure the future success of Wayve by incubating and investing in new ideas that have the potential to become game-changing technological advances for the company.

As the first Research Manager in our Mountain View office, you will be responsible for managing & scaling a strong Science team which is building our Wayve Foundational Model in collaboration with other Wayve science teams in London and Vancouver. You will provide coaching and guidance to each of the researchers and engineers within your team and work with leaders across the company to ensure sustainable career growth for your team during a period of growth in the company. You will participate in our project-based operating model where your focus will be unlocking the potential of your team and its technical leaders to drive industry-leading impact. As part of your work, you will help identify the right projects to invest in, ensure the right allocation of resources to those projects, keep the team in good health, provide technical feedback to your team, share progress to build momentum, and build alignment and strong collaboration across the wider Science organisation. We are actively hiring and aim to substantially grow our research team over the next two years and you will be at the heart of this.

What you’ll bring to Wayve

Essential:

  • Prior experience as a manager of research teams (10-15+ people) with a clear career interest in management
  • Passion for fostering personal and professional growth in individual team members
  • Experience with roadmap planning, stakeholder management, requirements gathering, and alignment with peers towards milestones and deliverables
  • Strong knowledge of Machine Learning and related areas, such as Deep Learning, Natural Language Processing, Computer Vision, etc.
  • Industry experience with machine learning technology development that has had real-world product impact
  • Experience driving a team and technical project through the full lifecycle, ideally within the language, vision, or multimodal space
  • Passion for bringing research concepts through to product
  • Research and engineering fundamentals
  • MS or PhD in Computer Science, Engineering, or similar experience

Desirable:

  • Experience managing the execution of a technical product
  • Experience working in a project-based (“matrix”) operating environment
  • Proven track record of successfully delivering research projects and publications
  • Experience working with robotics, self-driving, AR/VR, or LLMs

Our offer

  • Competitive compensation, on-site chef and bar, lots of fun socials, workplace nursery scheme, comprehensive private health insurance, and more!
  • Immersion in a team of world-class researchers, engineers, and entrepreneurs
  • A position to shape the future of autonomous driving, and thus bring about a real-world deployment of a breakthrough technology
  • Help relocating/travelling to London, with visa sponsorship
  • Flexible working hours - we trust you to do your job well, at times that suit you and your team


Apply

The Perception team at Zoox is responsible for developing the eyes and ears of our self-driving car. Navigating safely and competently in the world requires us to detect, classify, track, and understand several different attributes of all the objects around us that we might interact with, all in real time and with very high precision.

As a member of the Perception team at Zoox, you will be responsible for developing and improving state-of-the-art machine learning techniques for everything from 2D/3D object detection, panoptic segmentation, and tracking to attribute classification. You will work not just with our team of talented engineers and researchers in perception, but cross-functionally with several teams including sensors, prediction, and planning, and you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer on our Motion Planning team, you will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who would enjoy applying their skills to deeply complex and novel AI problems. Specifically, you will:

  • Apply and extend the Helm proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
  • Develop our planner behavior and trajectories in collaboration with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms
  • Carefully execute the development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole

Apply