



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Location Palo Alto, CA


Description Amazon is looking for talented Postdoctoral Scientists to join our Stores Foundational AI team for a one-year, full-time research position.

The Stores Foundational AI team builds foundation models for multiple Amazon entities, such as ASIN, customer, seller, and brand. These foundation models are used in downstream applications by various partner teams in Stores. Our team also invests in building a foundation model for image generation, optimized for product image generation. We leverage the latest developments to create our solutions and innovate to push the state of the art.

The Postdoc is expected to conduct research and build state-of-the-art algorithms in video understanding and representation learning in the era of LLMs. Specifically, this includes designing efficient algorithms to learn accurate representations for videos; building extensive video understanding capabilities, including various content classification tasks; designing algorithms that can generate high-quality videos from a set of product images; and improving the quality of our foundation models along the following dimensions: robustness, interpretability, fairness, sustainability, and privacy.


Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. This project will take place in close collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others. Anticipating the partner's actions is therefore crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project, we are particularly interested in estimating human intentions to enable collaborative tasks between humans and robots, such as human-to-robot and robot-to-human handovers.

Contact: pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda-Pineda (RobotLearn).


Apply

San Jose, CA

B GARAGE was founded in 2017 by a Ph.D. graduate from Stanford University. After having spent over five years researching robotics, computer vision, aeronautics, and drone autonomy, the founder and team set their minds on building a future where aerial robots would become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

Roles and Responsibilities

Design and develop perception for aerial robots and inventory recognition for warehouses by leveraging computer vision and deep learning techniques

Aid the computer vision team in delivering prototypes and products in a timely manner

Collaborate with other teams within the company

Minimum Qualifications

M.S. degree in computer science, robotics, electrical engineering, or other engineering disciplines

10+ years of experience with computer vision and machine learning

Proficient in image processing algorithms and multiple-view geometry using cameras

Experience with machine learning architectures for object detection, segmentation, text recognition, etc.

Proficient with ROS, C++, and Python

Experience with popular computer vision and GPU frameworks/libraries (e.g., OpenCV, TensorFlow, PyTorch, CUDA, cuDNN)

Proficient in containerization (Docker) and container orchestration (Kubernetes) technologies

Experience in cloud computing platforms (AWS, GCP, etc.)

Experience with robots that rely on real-time onboard processing

Self-motivated person who thrives in a fast-paced environment

Good problem-solving and troubleshooting skills

Legally authorized to work in the United States

Optional Qualifications

Ph.D. degree in computer science, robotics, electrical engineering, or other engineering disciplines

Experience with scene reconstruction, bundle adjustment and factor graph optimization libraries

Experience with JavaScript and massively parallel cloud computing technologies involving Kafka, Spark, and MapReduce

Published research papers in CVPR, ICCV, ECCV, ICRA, IROS, etc.

Company Benefits

Competitive compensation packages

Medical, dental, vision, life insurance, and 401(k)

Flexible vacation and paid holidays

Complimentary lunches and snacks

Professional development reimbursement (online courses, conference, exhibit, etc.)

B GARAGE stands for an open and respectful corporate culture because we believe diversity helps us to find new perspectives.

B GARAGE ensures that all our members have equal opportunities – regardless of age, ethnic origin and nationality, gender and gender identity, physical and mental abilities, religion and belief, sexual orientation, and social background. We ensure diversity right from the recruitment stage and therefore make hiring decisions based on a candidate’s actual competencies, qualifications, and business needs at the time of hiring.


Apply

Location San Diego


Description

At Qualcomm, we are transforming the automotive industry with our Snapdragon Digital Chassis and building the next generation of software-defined vehicles (SDVs).

Snapdragon Ride is an integral pillar of our Snapdragon Digital Chassis, and since its launch it has gained momentum with a growing number of global automakers and Tier 1 suppliers. Snapdragon Ride aims to address the complexity of autonomous driving and ADAS by leveraging its high-performance, power-efficient SoC, industry-leading artificial intelligence (AI) technologies, and pioneering vision and drive-policy stack to deliver a comprehensive, cost- and energy-efficient systems solution.

Enabling safe, comfortable, and affordable autonomous driving includes solving some of the most demanding and challenging technological problems. From centimeter-level localization to multimodal sensor perception, sensor fusion, behavior prediction, maneuver planning, and trajectory planning and control, each one of these functions introduces its own unique challenges to solve, verify, test, and deploy on the road.

We are looking for smart, innovative, and motivated individuals with a strong theoretical background in deep learning, advanced signal processing, probability, and algorithms, and good implementation skills in Python/C++. Job responsibilities include the design and development of novel algorithms for solving complex problems related to behavior prediction for autonomous driving, including trajectory and intention prediction; developing novel deep learning models to predict trajectories for road users and optimizing them to run in real-time systems; working closely with the sensor fusion and planning teams on defining requirements and KPIs; and working closely with test engineers to develop test plans for validating performance in simulation and real-world testing.

Minimum Qualifications: Bachelor's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 6+ years of Systems Engineering or related work experience; OR Master's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 5+ years of Systems Engineering or related work experience; OR PhD in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 4+ years of Systems Engineering or related work experience.

Preferred Qualifications:

  • Ph.D. plus 2 years of industry experience in behavior and trajectory prediction
  • Proficiency with a variety of deep learning models, such as CNNs, Transformers, RNNs, LSTMs, VAEs, graph CNNs, etc.
  • Experience working with NLP deep learning networks
  • Proficiency with state-of-the-art machine learning tools (PyTorch, TensorFlow)
  • 3+ years of experience with programming languages such as C, C++, Python, etc.
  • 3+ years of Systems Engineering or related work experience in behavior and trajectory prediction
  • Experience working with, modifying, and creating advanced algorithms
  • An analytical and scientific mindset, with the ability to solve complex problems
  • Experience in autonomous driving, robotics, or XR/AR/VR
  • Experience with robust software design for safety-critical systems
  • Excellent written and verbal communication skills and the ability to work with a cross-functional team


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Sr. Fullstack Engineer, you will work on our platform engineering team playing a crucial role in enabling our research engineers to fine-tune our foundation models and streamline the machine learning process for our autonomous technology. You will work on developing products that empower our internal teams to maximize efficiency and innovation in our product. Specifically, you will:

  • Build mission-critical tools for improving observability and scaling the entire machine-learning process.
  • Use modern technologies to serve huge amounts of data, visualize key metrics, manage our data inventory, trigger backend data processing pipelines, and more.
  • Work closely with people across the company to create a seamless UI experience.

Apply

Location Santa Clara, CA


Description Amazon is looking for passionate, talented, and inventive Applied Scientists with a strong machine learning background to help build industry-leading speech, vision, and language technology.

AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Automatic Speech Recognition (ASR), Machine Translation (MT), Natural Language Understanding (NLU), Machine Learning (ML) and Computer Vision (CV).

As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state of the art in human language technology. Your work will directly impact millions of our customers in the form of products and services that make use of speech and language technology. You will gain hands-on experience with Amazon’s heterogeneous speech, text, and structured data sources, as well as large-scale computing resources, to accelerate advances in spoken language understanding.

We are hiring in all areas of human language technology: ASR, MT, NLU, text-to-speech (TTS), and Dialog Management, in addition to Computer Vision.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack, and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent features on the TODAY show and in the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.

Preferred Qualifications:

PhD in computer science or a related field, or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Location San Francisco, CA


Description Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

You will be managing a team within the Music Machine Learning and Personalization organization that is responsible for developing, training, serving and iterating on models used for personalized candidate generation for both Music and Podcasts.


Apply

Canberra/Australia


We are looking for outstanding new PhD students for the upcoming scholarship round (applications are due on 31st August 2024) at the Australian National University (ANU is ranked #30 in the QS Ranking 2025) or possibly at another Australian university.

We are looking for new PhD students to work on new problems that may span (but are not limited to): "clever" adaptation of foundation models, LLMs, and diffusion models (LoRAs, etc.); NeRF; the design of Graph Neural Networks; the design of new (multi-modal) self-supervised and contrastive learning models (masked models, images, videos, text, graphs, time series, sequences, etc.); adversarial and/or federated learning; or other contemporary fundamental/applied problems (e.g., learning without backprop, adapting FMs to be less resource-hungry, planning and reasoning, hyperbolic geometry, protein property prediction, structured-output generative models, visual relation inference, incremental/learning-to-learn problems, low-shot learning, etc.).

To succeed, you need an outstanding publication record, e.g., one or more first-author papers in venues such as CVPR, ICCV, ECCV, AAAI, ICLR, NeurIPS, ICML, IJCAI, ACM KDD, ACCV, BMVC, ACM MM, IEEE Trans. on Image Processing, CVIU, IEEE TPAMI, or similar (the list is non-exhaustive). Non-first-author papers will also help if they are in the mix. Patents and/or professional experience in computer vision, machine learning, or AI are a bonus. You also need a good GPA to succeed.

We are open to discussing your interests and topics; if you reach out, we can discuss what is possible. Yes, we have GPUs.

If you are interested, reach out for an informal chat with Dr. Koniusz (I am at CVPR if you want to chat): piotr.koniusz@data61.csiro.au (or piotr.koniusz@anu.edu.au, www.koniusz.com).


Apply

Natick, MA, United States


The Company Cognex is a global leader in the exciting and growing field of machine vision.

The Team: Vision Algorithms, Advanced Vision Technology. This position is in the Vision Algorithms Team of the Advanced Vision Technology group, which is responsible for designing and developing the most sophisticated machine vision tools in the world. We combine custom hardware, specialized lighting, optics, and world-class vision algorithms to create software systems that analyze imagery (intensity, color, density, Z-data, ID barcodes, etc.), detect, identify, and localize objects, make measurements, inspect for defects, and read encoded data. Technology development is critical to the overall business: it expands areas of application, improves performance, discovers new algorithms, and makes use of new hardware and processing power. Engineers in this group typically have experience with image analysis, machine vision, or signal processing.

Job Summary: The Vision Algorithms team is looking for a well-rounded, intelligent, creative, and motivated summer or fall intern with a passion for results! You will work with our senior engineers and technical leads on projects that advance our software development infrastructure and enhance our key technologies and customer experience. You will get mentorship on tackling technical challenges and opportunities to build a solid foundation for your career in Software Engineering, or Computer Vision and Artificial Intelligence.

Essential Functions:

  • Prototype and develop vision (2D and ID) applications on top of Cognex products and technology.
  • Build internal tools or automated tests that can be used in software development or testing.
  • Understand our products and contribute to creating optimal solutions for customer applications in the automation industry.

Desired Qualifications:

  • A high-energy, motivated learner; creative and looking to work hard for a fast-moving company.
  • Strong analytical and problem-solving skills.
  • Strong programming skills in both C/C++ and Python are required.
  • A solid understanding of machine learning (ML) fundamentals and experience with ML frameworks like TensorFlow or PyTorch are required.
  • Demonstrated projects or internships in the AI/ML domain during academic or professional tenure are highly desirable.
  • Experience with embedded systems, Linux systems, vision/image processing, and optics is valued.
  • A background in 2D vision, 3D camera calibration, and multi-camera systems is preferred.

Minimum education and work experience required: Pursuing a MS, or Ph.D. from a top engineering school in EE, CS, or equivalent.

If you would like to meet the hiring manager at CVPR to discuss this opportunity, please email ahmed.elbarkouky@cognex.com


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As the MLE, you will collaborate with researchers to perform research operations using existing infrastructure. You will use your judgment in complex scenarios and help apply standard techniques to various technical problems. Specifically, you will:

  • Characterize neural network quality, failure modes, and edge cases based on research data
  • Maintain awareness of current trends in relevant areas of research and technology
  • Coordinate with researchers and accurately convey the status of experiments
  • Manage a large number of concurrent experiments and make accurate time estimates for deadlines
  • Review experimental results and suggest theoretical or process improvements for future iterations
  • Write technical reports indicating qualitative and quantitative results to external parties

Apply

Redmond, Washington, United States


Overview We are seeking a highly skilled and passionate Research Scientist to join our Responsible & OpenAI Research (ROAR) team in Azure Cognitive Services.

As a Research Scientist, you will play a key role in advancing the field of Responsible Artificial Intelligence (AI) to ensure safe releases of the rapidly advancing AI technologies, such as GPT-4, GPT-4V, DALL-E 3 and beyond, as well as to expand and enhance our standalone Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Conduct cutting-edge research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of AI risks.
  • Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.
  • Contribute to the development of Responsible AI policies, guidelines, and best practices, and ensure their practical implementation within AI technology stacks across Microsoft, promoting a consistent approach to Responsible AI.
  • Enable the safe release of new Azure OpenAI Service features; expand and enhance the Azure AI Content Safety Service with new detection technologies.
  • Develop innovative approaches to address AI safety challenges for diverse customer scenarios.
  • Other: embody our Culture and Values.


Apply

About the role As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible for both annotating in-house datasets and ensuring the quality assurance of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will have to be comfortable working with standard tools and workflows for data annotation and possess the ability to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note, this role is office-based. You will be working at our modern, friendly office in the very heart of London.


Apply

Location Sunnyvale, CA; Bellevue, WA; Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

Redmond, Washington, United States


Overview The Azure AI Platform (AIP) provides organizations across the world with the tooling and infrastructure needed to build and host AI workloads. The AI Platform organization is scaling rapidly, and we are establishing a world-class data analytics platform to support data-driven decision making throughout the organization.

We are looking to hire a Senior Data Scientist to join the newly formed AI Platform Analytics team. This individual will be responsible for collaborating with teams across AI Platform to establish trustworthy data sets and provide actionable insights and analysis.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Apply your knowledge of quantitative analysis, data mining, and the presentation of data to inform decision-making.
  • Build data manipulation, processing, and visualization tools, and share these tools and your knowledge across the team, Cloud and AI, and Microsoft.
  • Handle large amounts of data using various tools, including your own.
  • Ensure high-quality and reliable data.
  • Drive end-to-end projects by applying and analyzing data against the associated business problems.
  • Engage with upper-level management on key business decisions.
  • Mentor other team members.
  • Contribute to a data-driven culture by collaborating with product and engineering teams across Azure to establish and share best practices.
  • Embody our culture and values.


Apply