



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category and location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.


We are seeking a highly motivated candidate for a fully funded PhD position to work in 3D computer graphics and 3D computer vision.

The successful candidate will join the 3D Graphics and Vision research group led by Prof. Binh-Son Hua at the School of Computer Science and Statistics, Trinity College Dublin, Ireland to work on topics related to generative AI in the 3D domain.

The School of Computer Science and Statistics at Trinity College Dublin is a collegiate, friendly, and research-intensive centre for academic study and research excellence. The School has been ranked #1 in Ireland, top 25 in Europe, and top 100 Worldwide (QS Subject Rankings 2018, 2019, 2020, 2021).

The PhD student is expected to conduct fundamental research and publish in top-tier computer vision and computer graphics conferences (CVPR, ECCV, ICCV, SIGGRAPH) and journals (TPAMI, IJCV).

The start date of the position is September 01, 2024. The position is fully funded for 4 years by Science Foundation Ireland.

The successful candidate will require the following skills and knowledge:

• Bachelor's or Master's degree in Computer Science or related fields;
• Strong competence in computer graphics and computer vision;
• Solid experience in academic research and publications is an advantage;
• Additional background in math, statistics, or physics is an advantage;
• Hands-on experience in training deep models;
• Hands-on experience in computer graphics and computer vision application development, e.g., OpenGL, OpenCV, CUDA, Blender;
• Strong programming skills in C++ and Python; capability in implementing systems based on open-source software.

Applicants should provide the following information:

• A comprehensive CV;
• Academic transcripts of Bachelor's and Master's degrees;
• The names and contact details of two referees.

Interested candidates can email Binh-Son Hua (https://sonhua.github.io) for an informal discussion of the position. Applications will be reviewed on a rolling basis until the position has been filled.


Apply

Redmond, Washington, United States


Overview

We are seeking highly skilled and passionate research scientists to join Responsible & Open AI Research (ROAR) in Azure Cognitive Services in Redmond, WA.

As a Principal Research Scientist, you will play a key role in advancing Responsible AI approaches to ensure safe releases of GenAI models such as GPT-4o, DALL-E, Sora, and beyond, as well as to expand and enhance the capability of Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

Conduct cutting-edge, deployment-driven research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of textual and multimodal AI risks. Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.

Enable the safe release of multimodal models from OpenAI in Azure OpenAI Service, and expand and enhance the Azure AI Content Safety Service with new detection/mitigation technologies for text and multimodal content. Develop innovative approaches to address AI safety challenges for diverse customer scenarios.

Review business and product requirements and incorporate state-of-the-art research to formulate plans that will meet business goals. Identify gaps and determine which tools, technologies, and methods to incorporate to ensure quality and scientific rigor. Proactively provide mentorship and coaching to less experienced and mid-level team members.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack, and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.
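The super-resolution work mentioned above is usually evaluated against a simple interpolation baseline. Below is a minimal, hedged sketch of such a baseline using per-frame bicubic upscaling with OpenCV; the file paths and 2x scale factor are illustrative assumptions, and this is not Captions' implementation.

    import cv2

    def upscale_video(in_path: str, out_path: str, scale: int = 2) -> None:
        """Per-frame bicubic upscaling: a trivial baseline for learned video super-resolution."""
        cap = cv2.VideoCapture(in_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) * scale
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) * scale
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Bicubic interpolation; a learned super-resolution model would replace this call.
            writer.write(cv2.resize(frame, (width, height), interpolation=cv2.INTER_CUBIC))
        cap.release()
        writer.release()

    # Hypothetical usage: double the resolution of a clip.
    upscale_video("input.mp4", "output_2x.mp4", scale=2)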

Preferred Qualifications:

PhD in computer science or a related field, or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

The Perception team at Zoox is responsible for developing the eyes and ears of our self-driving car. Navigating the world safely and competently requires us to detect, classify, track, and understand several different attributes of all the objects around us that we might interact with, all in real time and with very high precision.

As a member of the Perception team at Zoox, you will be responsible for developing and improving state-of-the-art machine learning techniques for everything from 2D/3D object detection, panoptic segmentation, and tracking to attribute classification. You will work not just with our team of talented perception engineers and researchers, but cross-functionally with several teams, including sensors, prediction, and planning, and you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms.
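Zoox's production stack is proprietary and multi-sensor, but as a rough illustration of the 2D detection piece of this kind of work, here is a minimal sketch that runs a pretrained Faster R-CNN from torchvision on a single camera frame. The image path and score threshold are assumptions, and this is not Zoox's system.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Load a detector pretrained on COCO (the string weights API requires torchvision >= 0.13).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    # A single camera frame; the path is hypothetical.
    image = to_tensor(Image.open("camera_frame.jpg").convert("RGB"))

    with torch.no_grad():
        (prediction,) = model([image])

    # Keep confident detections only; 0.5 is an arbitrary threshold.
    keep = prediction["scores"] > 0.5
    print(prediction["boxes"][keep])
    print(prediction["labels"][keep])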


Apply

Location: Madrid, ESP


Description

Amazon's International Technology org in the EU (EU INTech) is creating new ways for Amazon customers to discover the Amazon catalog through new and innovative customer experiences. Our vision is to provide the most relevant content and CX for their shopping mission. We are responsible for building the software and machine learning models to surface high-quality, relevant content to Amazon customers worldwide across the site.

The team, mainly located in the Madrid Technical Hub, London, and Luxembourg, comprises Software Developers and ML Engineers, Applied Scientists, Product Managers, Technical Product Managers, and UX Designers who are experts in several areas of ranking, computer vision, recommendation systems, and Search, as well as CX. Are you interested in how the experiences that fuel Catalog and Search are built to scale to customers worldwide? Are you interested in how we use state-of-the-art AI to generate and provide the most relevant content?

We are looking for Applied Scientists who are passionate about solving highly ambiguous and challenging problems at global scale. You will be responsible for major science challenges for our team, including working with state-of-the-art text-to-image and image-to-text models at scale to enable new customer experiences worldwide. You will design, develop, deliver, and support a variety of models in collaboration with many roles and partner teams around the world. You will influence scientific direction and best practices and maintain quality on team deliverables.


Apply

Location: Madrid, ESP


Description

At Amazon, we are committed to being the Earth’s most customer-centric company. The International Technology group (InTech) owns the enhancement and delivery of Amazon’s cutting-edge engineering to all the varied customers and cultures of the world. We do this through a combination of partnerships with other Amazon technical teams and our own innovative new projects.

You will be joining the Tools and Machine Learning (Tamale) team. As part of InTech, Tamale strives to solve complex catalog quality problems using challenging machine learning and data analysis solutions. You will be exposed to cutting-edge big data and machine learning technologies, along with the entire Amazon catalog technology stack, and you'll be part of a key effort to improve our customers' experience by tackling and preventing defects in items in Amazon's catalog.

We are looking for a passionate, talented, and inventive Scientist with a strong machine learning background to help build industry-leading machine learning solutions. We strongly value your hard work and obsession with solving complex problems on behalf of Amazon customers.


Apply

Redmond, Washington, United States


Overview

In Mixed Reality, people—not devices—are at the center of everything we do. Our tech moves beyond screens and pixels, creating a new reality aimed at bringing us closer together—whether that’s scientists “meeting” on the surface of a virtual Mars or some yet undreamt-of possibility. To get there, we’re incorporating diverse groundbreaking technologies, from the revolutionary Holographic Processing Unit to computer vision, machine learning, human-computer interaction, and more. We’re a growing team of talented engineers and artists putting technology on a human path across all Windows devices, including Microsoft HoloLens, the Internet of Things, phones, tablets, desktops, and Xbox. We believe there is a better way. If you do too, we need you!

You are drawn to work on the latest and most innovative products in the world. You seek projects that will transform how people interact with technology. You have a drive to grow your skillset by finding unique challenges that have yet to be solved. We are looking for a Senior Software Engineer to come and join us in delivering the next wave of holographic experiences in an exciting new project.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

Incorporate the latest Artificial Intelligence, Machine Learning, Computer Vision, and Sensor Fusion capabilities into the design of our products and services.

Characterize various sensors, such as inertial measurement units, magnetometers, visible-light cameras, depth cameras, and GPS, to understand their properties and how they relate to the accuracy of tracking, mapping, and localization.

Implement and design innovative measurement solutions to quantify the accuracy and reliability of Mixed Reality devices against industry gold-standard ground-truth systems.

Partner with engineers, designers, and program managers to deliver solid technical designs.

Embody our Culture and Values.
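As a hedged illustration of the kind of accuracy quantification described above (not Microsoft's tooling), the sketch below computes a simple absolute trajectory error between an estimated device trajectory and a ground-truth trajectory. The alignment is translation-only for brevity, and the data is synthetic.

    import numpy as np

    def absolute_trajectory_error(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
        """RMSE of positional error between two (N, 3) trajectories.

        Alignment here removes only the mean offset; a full evaluation would also
        solve for rotation and scale (e.g., via the Umeyama method).
        """
        est = estimated - estimated.mean(axis=0)
        gt = ground_truth - ground_truth.mean(axis=0)
        errors = np.linalg.norm(est - gt, axis=1)
        return float(np.sqrt((errors ** 2).mean()))

    # Synthetic example: a noisy estimate of a random-walk ground-truth trajectory.
    rng = np.random.default_rng(0)
    gt = np.cumsum(rng.normal(size=(1000, 3)), axis=0)
    est = gt + rng.normal(scale=0.01, size=gt.shape)
    print(f"ATE: {absolute_trajectory_error(est, gt):.4f} (in the trajectory's units)")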


Apply

B GARAGE was founded in 2017 by two PhD graduates from Stanford University. After having spent over five years researching robotics, computer vision, aeronautics, and drone autonomy, the co-founders set their minds on building a future where aerial robots would become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

The B GARAGE team is always looking for enthusiastic, proactive, and collaborative Robotics and Automation Engineers to support the launch of intelligent aerial robots and autonomously sustainable ecosystems.

If you're interested in joining the B Garage team but don't see a role open that fits your background, apply to the general application and we'll reach out to discuss your career goals.


Apply

Location: Seattle, WA; Arlington, VA; New York, NY; San Francisco, CA


Description

Join us at the cutting edge of Amazon's sustainability initiatives to work on environmental and social advancements to support Amazon's long-term worldwide sustainability strategy. At Amazon, we're working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people.

The Worldwide Sustainability (WWS) organization capitalizes on Amazon’s scale & speed to build a more resilient and sustainable company. We manage our social and environmental impacts globally, driving solutions that enable our customers, businesses, and the world around us to become more sustainable.

Sustainability Science and Innovation (SSI) is a multi-disciplinary team within the WW Sustainability organization that combines science, analytics, economics, statistics, machine learning, product development, and engineering expertise. We use this expertise and skills to identify, develop and evaluate the science and innovations necessary for Amazon, customers and partners to meet their long-term sustainability goals and commitments.

We’re seeking a Senior Principal Scientist for Sustainability and Climate AI to drive technical strategy and innovation for our long-term sustainability and climate commitments through AI & ML. You will serve as the strategic technical advisor to science, emerging tech, and climate pledge partners operating at the Director, VP, and SVP level. You will set the next-generation modeling standards for the team and tackle the most immature/complex modeling problems following the latest sustainability/climate science. Staying hyper-current with emergent sustainability/climate science and machine learning trends, you'll be trusted to translate recommendations to leadership and be the voice of our interpretation. You will nurture a continuous delivery culture to embed informed, science-based decision-making into existing mechanisms, such as decarbonization strategies, ESG compliance, and risk management. You will also have the opportunity to collaborate with the Climate Pledge team to define strategies based on emergent science/tech trends and influence investment strategy. As a leader on this team, you'll play a key role in worldwide sustainability organizational planning, hiring, mentorship, and leadership development.

If you see yourself as a thought leader and innovator at the intersection of climate science and tech, we’d like to connect with you.


Apply

San Jose, CA

B GARAGE was founded in 2017 by a Ph.D. graduate from Stanford University. After having spent over five years researching robotics, computer vision, aeronautics, and drone autonomy, the founder and team set their minds on building a future where aerial robots would become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

Roles and Responsibilities

Design and develop perception for aerial robots and inventory recognition for warehouses by leveraging computer vision and deep learning techniques

Aid the computer vision team in delivering prototypes and products in a timely manner

Collaborate with other teams within the company

Minimum Qualifications

M.S. degree in computer science, robotics, electrical engineering, or other engineering disciplines

10+ years of experience with computer vision and machine learning

Proficient in image processing algorithms and multiple-view geometry with cameras

Experience with machine learning architectures for object detection, segmentation, text recognition, etc.

Proficient with ROS, C++, and Python

Experience with popular computer vision and GPU frameworks/libraries (e.g., OpenCV, TensorFlow, PyTorch, CUDA, cuDNN, etc.)

Proficient in containerization and container orchestration technologies (e.g., Docker, Kubernetes)

Experience in cloud computing platforms (AWS, GCP, etc.)

Experience with robots that rely on real-time onboard processing

Self-motivated person who thrives in a fast-paced environment

Good problem-solving and troubleshooting skills

Legally authorized to work in the United States

Optional Qualifications

Ph.D. degree in computer science, robotics, electrical engineering, or other engineering disciplines

Experience with scene reconstruction, bundle adjustment and factor graph optimization libraries

Experience with JavaScript and massively parallel cloud computing technologies such as Kafka, Spark, and MapReduce

Published research papers in CVPR, ICCV, ECCV, ICRA, IROS, etc.

Company Benefits

Competitive compensation packages

Medical, dental, vision, life insurance, and 401(k)

Flexible vacation and paid holidays

Complimentary lunches and snacks

Professional development reimbursement (online courses, conferences, exhibits, etc.)

B GARAGE stands for an open and respectful corporate culture because we believe diversity helps us to find new perspectives.

B GARAGE ensures that all our members have equal opportunities – regardless of age, ethnic origin and nationality, gender and gender identity, physical and mental abilities, religion and belief, sexual orientation, and social background. We ensure diversity right from the recruitment stage and therefore make hiring decisions based on a candidate’s actual competencies, qualifications, and business needs at that point in time.


Apply

Location: Mountain View, CA


Description

Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role: We are seeking passionate Senior/Staff Software Engineers who have strong fundamentals in software development practices and are experts in the C++ language in a production-oriented environment. The ideal candidate is a highly experienced C++ developer with a passion for enabling the world's first safe, reliable & efficient network of autonomous vehicles. You will partner with the research and software engineers to design, develop, test, and validate AV features for our autonomous fleet.

This role will be onsite at our Mountain View office.

What you'll do:

Design, implement, integrate, and support real-time mission-critical software for Gatik’s autonomy stack

Work with the research engineers to develop maintainable, testable, and robust software designs

Architect and implement solutions to complex issues between components partitioned across the large software stack

Be at the forefront of guiding and ensuring best SDLC practices while contributing to improving safety in the core autonomy stack

Collaborate with the Infrastructure and DevOps teams for efficient, secure, and scalable software delivery to a network of Gatik’s autonomous fleet

Guide and mentor autonomy researchers and algorithm developers to make sure their components run efficiently and with optimal compute and memory usage

Review and refine technical requirements and translate them into high-level designs & plans to support the development of safe AV technology

Conduct code and design reviews and advise on technical matters

Click the apply button below to see the full job description and apply


Apply

About the role: As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible for both annotating in-house datasets and ensuring the quality assurance of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will need to be comfortable working with standard tools and workflows for data annotation and possess the ability to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note that this role is office-based. You will be working at our modern, friendly office in the very heart of London.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer on our Motion Planning team, you will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who would enjoy applying their skills to deeply complex and novel AI problems. Specifically, you will:

  • Apply and extend the Helm proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
  • Develop our planner behavior and trajectories in collaboration with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms
  • Carefully execute the development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole

Apply

Location: Seattle, WA


Description

To help a growing organization quickly deliver more efficient features to Prime Video customers, Prime Video’s READI organization is innovating on behalf of our global software development team consisting of thousands of engineers. The READI organization is building a team specialized in forecasting and recommendations. This team will apply supervised learning algorithms for forecasting multi-dimensional related time series using recurrent neural networks. The team will develop forecasts on key business dimensions and recommendations on performance and efficiency opportunities across our global software environment.

As a member of the team, you will apply your deep knowledge of machine learning to concrete problems that have broad cross-organizational, global, and technology impact. Your work will focus on retrieving, cleansing, and preparing large-scale datasets, training and evaluating models, and deploying them for customers, where we continuously monitor and evaluate. You will work on large engineering efforts that solve significantly complex problems facing global customers. You will be trusted to operate with complete independence and are often assigned to focus on areas where the business and/or architectural strategy has not yet been defined. You must be equally comfortable digging into business requirements as you are drilling into designs with development teams and developing ready-to-use learning models. You consistently bring strong, data-driven business and technical judgment to decisions.
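To make the forecasting setup above concrete, here is a minimal, hedged sketch of an LSTM that maps a window of multi-dimensional time-series observations to a one-step-ahead forecast in PyTorch. The feature count, window length, and hidden size are illustrative assumptions, not details of the team's production models.

    import torch
    import torch.nn as nn

    class Forecaster(nn.Module):
        """LSTM that predicts the next value of every series from a window of past values."""

        def __init__(self, n_features: int = 8, hidden_size: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, n_features)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, features) -> one-step-ahead forecast of shape (batch, features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])

    # Illustrative training step on random data: 32 windows of 48 steps over 8 related series.
    model = Forecaster()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    window = torch.randn(32, 48, 8)
    target = torch.randn(32, 8)
    loss = nn.functional.mse_loss(model(window), target)
    loss.backward()
    optimizer.step()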


Apply