



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category or location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Sr. Fullstack Engineer, you will work on our platform engineering team playing a crucial role in enabling our research engineers to fine-tune our foundation models and streamline the machine learning process for our autonomous technology. You will work on developing products that empower our internal teams to maximize efficiency and innovation in our product. Specifically, you will:

  • Build mission-critical tools for improving observability and scaling the entire machine-learning process.
  • Use modern technologies to serve huge amounts of data, visualize key metrics, manage our data inventory, trigger backend data processing pipelines, and more.
  • Work closely with people across the company to create a seamless UI experience.

Apply

Location Madrid, ESP


Description At Amazon, we are committed to being the Earth’s most customer-centric company. The International Technology group (InTech) owns the enhancement and delivery of Amazon’s cutting-edge engineering to all the varied customers and cultures of the world. We do this through a combination of partnerships with other Amazon technical teams and our own innovative new projects.

You will be joining the Tools and Machine Learning (Tamale) team. As part of InTech, Tamale strives to solve complex catalog quality problems using challenging machine learning and data analysis solutions. You will be exposed to cutting-edge big data and machine learning technologies, along with the entire Amazon catalog technology stack, and you'll be part of a key effort to improve our customers' experience by tackling and preventing defects in items in Amazon's catalog.

We are looking for a passionate, talented, and inventive Scientist with a strong machine learning background to help build industry-leading machine learning solutions. We strongly value your hard work and obsession with solving complex problems on behalf of Amazon customers.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.

Preferred Qualifications:

PhD in computer science or related field or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

About the role As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible both for annotating in-house datasets and for ensuring the quality of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will have to be comfortable working with standard tools and workflows for data annotation and possess the ability to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note, this role is office-based. You will be working at our modern, friendly office in the very heart of London.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative computer vision technologies, with a focus on creating highly realistic digital faces, bodies, and avatars.

Strive to set new standards in the realism of 3D digital human appearance, movement, and personality, ensuring that generated content closely resembles real-life scenarios.

Implement techniques to achieve high-quality results in zero-shot or few-shot settings, as well as customized avatars for different use cases while maintaining speed and accuracy.

Develop innovative solutions to enable comprehensive customization of video content, including the creation of digital people, modifying scenes, and manipulating actions and speech within videos.

Preferred Qualifications:

PhD in computer science (or related field) and/or 5+ years of industry experience.

Strong academic background with a focus on computer vision and transformers, specializing in NeRFs, Gaussian Splatting, Diffusion, GANs or related areas.

Publication Record: Highly relevant publication history, with a focus on generating or manipulating realistic digital faces, bodies, expressions, body movements, etc. Ideal candidates will have served as the primary author on these publications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for multimodal tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Location Madrid, ESP


Description Amazon's International Technology org in the EU (EU INTech) is creating new ways for Amazon customers to discover the Amazon catalog through new and innovative customer experiences. Our vision is to provide the most relevant content and CX for each shopping mission. We are responsible for building the software and machine learning models that surface high-quality, relevant content to Amazon customers worldwide across the site.

The team, mainly located in the Madrid Technical Hub, London, and Luxembourg, comprises Software Development and ML Engineers, Applied Scientists, Product Managers, Technical Product Managers, and UX Designers who are experts in several areas of ranking, computer vision, recommendation systems, and search, as well as CX. Are you interested in how the experiences that fuel Catalog and Search are built to scale to customers worldwide? Are you interested in how we use state-of-the-art AI to generate and provide the most relevant content?

We are looking for Applied Scientists who are passionate about solving highly ambiguous and challenging problems at global scale. You will be responsible for major science challenges for our team, including working with state-of-the-art text-to-image and image-to-text models at scale to enable new customer experiences worldwide. You will design, develop, deliver, and support a variety of models in collaboration with a variety of roles and partner teams around the world. You will influence scientific direction and best practices and maintain quality on team deliverables.


Apply

Redmond, Washington, United States


Overview We are seeking a highly skilled and passionate Research Scientist to join our Responsible & OpenAI Research (ROAR) team in Azure Cognitive Services.

As a Research Scientist, you will play a key role in advancing the field of Responsible Artificial Intelligence (AI) to ensure safe releases of the rapidly advancing AI technologies, such as GPT-4, GPT-4V, DALL-E 3 and beyond, as well as to expand and enhance our standalone Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities:

  • Conduct cutting-edge research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of AI risks.
  • Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.
  • Contribute to the development of Responsible AI policies, guidelines, and best practices, and ensure their practical implementation within various AI technology stacks across Microsoft, promoting a consistent approach to Responsible AI.
  • Enable the safe release of new Azure OpenAI Service features, and expand and enhance the Azure AI Content Safety Service with new detection technologies.
  • Develop innovative approaches to address AI safety challenges for diverse customer scenarios.
  • Other: Embody our Culture and Values.


Apply

Location Niskayuna, NY


Description At GE Aerospace Research, our team develops advanced embedded systems technology for the future of flight. Our technology will enable sustainable air travel and next-generation aviation systems for use in commercial as well as military applications. As a Lead Embedded Software Engineer, you will architect and develop state-of-the-art embedded systems for real-time controls and communication applications. You will lead and contribute to advanced research and development programs for GE Aerospace as well as with U.S. Government agencies. You will collaborate with fellow researchers from a range of technology disciplines, contributing to projects across the breadth of GE Aerospace programs.

Essential Responsibilities: As a Lead Embedded Software Engineer, you will:

  • Work independently as well as with a team to develop and apply advanced software technologies for embedded controls and communication systems for GE Aerospace products
  • Interact with hardware suppliers and engineering tool providers to identify the best solutions for the most challenging applications
  • Lead small to medium-sized projects or tasks
  • Document technology and results through patent applications, technical reports, and publications
  • Expand your expertise by staying current with advances in embedded software, seeking out new ideas and applications
  • Collaborate in a team environment with colleagues across GE Aerospace and government agencies

Qualifications/Requirements:

  • Bachelor’s degree in Electrical Engineering, Computer Science, or a related discipline with a minimum of 7 years of industry experience; OR a master’s degree in Electrical Engineering, Computer Science, or a related discipline with a minimum of 5 years of industry experience; OR a Ph.D. in Electrical Engineering, Computer Science, or a related discipline with a minimum of 3 years of industry experience
  • Strong background in software development for embedded systems (e.g., x86, ARM)
  • Strong embedded programming skills in languages such as C/C++, Python, and Rust
  • Familiarity with CNSA and NIST cryptographic algorithms
  • Willingness to travel a minimum of 2 weeks per year
  • Ability to obtain and maintain a US Government Security Clearance
  • US citizenship required
  • Must be willing to work out of an office located in Niskayuna, NY
  • You must submit your application for employment on the careers page at www.gecareers.com to be considered

Ideal Candidate Characteristics:

  • Coding experience with Bash, Python, C#, MATLAB, ARMv8 assembly, or RISC-V assembly
  • Experience with embedded devices from Intel, AMD, Xilinx, NXP, etc.
  • Experience with hardware-based security (e.g., UEFI, TPM, ARM TrustZone, Secure Boot)
  • Understanding of embedded system security requirements and security techniques
  • Experience with Linux OS and Linux security
  • Experience with OpenSSL and/or wolfSSL
  • Experience with wired and wireless networking protocols or network security
  • Knowledge of the 802.1, 802.3, and/or 802.11 standards
  • Experience with software-defined networking (SDN) and relevant software such as OpenFlow, Open vSwitch, or Mininet
  • Hands-on experience with embedded hardware (such as protoboards) or networking equipment (such as switches and analyzers) in a laboratory setting
  • Experience with embedded development in an RTOS environment (e.g., VxWorks, FreeRTOS)
  • Demonstrated ability to take an innovative idea from concept to product
  • Experience with the Agile methodology of program management

The base pay range for this position is 90,000 - 175,000 USD annually. The specific pay offered may be influenced by a variety of factors, including the candidate’s experience, education, and skill set. This position is also eligible for an annual discretionary bonus based on a percentage of your base salary. This posting is expected to close on June 16, 2024.


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior for other agents, our own vehicle’s actions, and for offline applications. To solve these problems we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior and offline to provide learned models to autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team in the advancement of our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

At Zoox, you will collaborate with a team of world-class engineers with diverse backgrounds in areas such as AI, robotics, mechatronics, planning, control, localization, computer vision, rendering, simulation, distributed computing, design, and automated testing. You’ll master new technologies while working on code, algorithms, and research in your area of expertise to create and refine key systems and move Zoox forward.

Working at a startup gives you the chance to manifest your creativity and highly impact the final product.


Apply

San Jose, CA

The Media Analytics team at NEC Labs America is seeking outstanding researchers with backgrounds in computer vision or machine learning. Candidates must possess an exceptional track record of original research and a passion for creating high-impact products. Our key research areas include autonomous driving, open-vocabulary perception, prediction and planning, simulation, neural rendering, agentic LLMs, and foundational vision-language models. We have a strong internship program and active collaborations with academia. The Media Analytics team publishes extensively at top-tier venues such as CVPR, ICCV, and ECCV.

To check out our latest work, please visit: https://www.nec-labs.com/research/media-analytics/

Qualifications:

  1. PhD in Computer Science (or equivalent)
  2. Strong publication record at top-tier computer vision or machine learning venues
  3. Motivation to conduct independent research from conception to implementation


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior for other agents, our own vehicle’s actions, and for offline applications. To solve these problems we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior and offline to provide learned models to autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team in the advancement of our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

Redmond, Washington, United States


Overview Do you want to shape the future of Artificial Intelligence (AI)? Do you have a passion for solving real-world problems with cutting-edge technologies? Do you enjoy working in a diverse and collaborative team?

The Microsoft Research AI Frontiers group is looking for a Principal Research Software Engineer with demonstrated machine learning experience to advance the state-of-the-art in foundational model-based technologies. Areas of focus on our team include, but are not limited to:

  • Human-AI interaction, collaboration, and experiences
  • Applications of foundation models and model-based technologies
  • Multi-agent systems and agent platform technologies
  • Model, agent, and AI systems evaluation

As a Principal Research Software Engineer on our team, you will need:

  • A drive for real-world impact, demonstrated by a passion to build and deploy applications, prototypes, or open-source technologies.
  • Demonstrated experience working with large foundation models and state-of-the-art ML frameworks and toolkits.
  • A team-player mindset, characterized by effective communication, collaboration, and feedback skills.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities:

  • Leverage full-stack software engineering skills to build, test, and deploy robust and intuitive AI-based technologies.
  • Work closely with researchers and engineers to rapidly develop and test research ideas and drive a high-impact agenda.
  • Collaborate with product partners to integrate and test new ideas within existing frameworks and toolchains.
  • Embody our culture and values.


Apply

Location Sunnyvale, CA; Bellevue, WA; Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional, and life-like synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas, led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions, which are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing and release, and you have a passion for large data, large model training and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses at a rapid pace across the globe.


Apply