



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category and location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Location Seattle, WA


Description Amazon's Compliance Shared Services (CoSS) is looking for a smart, energetic, and creative Sr. Applied Scientist to join the Applied Research Science team in Seattle and to extend and invent state-of-the-art research in multi-modal architectures and large language models across federated and continuous learning paradigms spanning multiple systems. At Amazon, we are working to be the most customer-centric company on earth. Millions of customers trust us to ensure a safe shopping experience. This is an exciting and challenging position in which you will deliver scientific innovations into production systems at Amazon scale, increasing automation accuracy and coverage, and extend and invent new research as a key author to deliver reusable foundational capabilities for automation.

You will analyze and process large amounts of image, text and tabular data from product detail pages, combine them with additional external and internal sources of multi-modal data, evaluate state-of-the-art algorithms and frameworks, and develop new algorithms in federated and continuous learning modes that can be integrated and launched across multiple systems. You will partner with engineers and product managers across multiple Amazon teams to design new ML solutions implemented across worldwide Amazon stores for the entire Amazon product catalog.


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior of other agents and our own vehicle's actions, both on-vehicle and for offline applications. To solve these problems, we develop deep learning algorithms that learn behaviors from data and apply them on-vehicle to influence our vehicle's driving behavior, and offline to provide learned models for autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team to advance our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

The Perception team at Zoox is responsible for developing the eyes and ears of our self-driving car. Navigating safely and competently in the world requires us to detect, classify, track, and understand many different attributes of all the objects around us that we might interact with, all in real time and with very high precision.

As a member of the Perception team at Zoox, you will be responsible for developing and improving state-of-the-art machine learning techniques for everything from 2D/3D object detection, panoptic segmentation, and tracking to attribute classification. You will work not only with our team of talented engineers and researchers in perception, but cross-functionally with several teams, including sensors, prediction, and planning, and you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms.


Apply

We are seeking a highly motivated candidate for a fully funded PhD position to work in 3D computer graphics and 3D computer vision.

The successful candidate will join the 3D Graphics and Vision research group led by Prof. Binh-Son Hua at the School of Computer Science and Statistics, Trinity College Dublin, Ireland to work on topics related to generative AI in the 3D domain.

The School of Computer Science and Statistics at Trinity College Dublin is a collegiate, friendly, and research-intensive centre for academic study and research excellence. The School has been ranked #1 in Ireland, top 25 in Europe, and top 100 Worldwide (QS Subject Rankings 2018, 2019, 2020, 2021).

The PhD student is expected to conduct fundamental research and publish in top-tier computer vision and computer graphics conferences (CVPR, ECCV, ICCV, SIGGRAPH) and journals (TPAMI, IJCV).

The start date of the position is September 01, 2024. The position is fully funded for 4 years by Science Foundation Ireland.

The successful candidate will require the following skills and knowledge:
  • Bachelor's or Master's degree in Computer Science or a related field;
  • Strong competence in computer graphics and computer vision;
  • Solid experience in academic research and publications is an advantage;
  • Additional background in math, statistics, or physics is an advantage;
  • Hands-on experience in training deep models;
  • Hands-on experience in computer graphics and computer vision application development, e.g., OpenGL, OpenCV, CUDA, Blender;
  • Strong programming skills in C++ and Python, and the capability to implement systems based on open-source software.

Applicants should provide the following information:
  • A comprehensive CV;
  • Academic transcripts of Bachelor's and Master's degrees;
  • The names and contact details of two referees.

Interested candidates can email Binh-Son Hua (https://sonhua.github.io) for an informal discussion of the position. Applications will be reviewed on a rolling basis until the position has been filled.


Apply

Location Multiple Locations


Description

Qualcomm's Multimedia R&D and Standards Group is seeking candidates for Video Compression Research Engineer positions. You will be part of a world-renowned team of video compression experts. The team develops algorithms, hardware architectures, and systems for state-of-the-art applications of classical and machine learning methods in video compression, video processing, point cloud coding and processing, AR/VR, and computer vision use cases. The successful candidate for this position will be a highly self-directed individual with strong creative and analytic skills and a passion for video compression technology. Your work will include, but not be limited to, developing new applications of classical and machine learning methods in video compression that improve state-of-the-art video codecs.

We are considering candidates with various levels of experience. We are flexible on location and open to hiring anywhere; preferred locations are the USA, Germany, and Taiwan.

Responsibilities:
  • Contribute to the conception, development, implementation, and optimization of new algorithms that extend existing techniques and systems to improve video compression.
  • Initiate ideas, and design and implement algorithms for superior hardware encoder performance, including perceptually based bit allocation.
  • Develop new algorithms for deep learning-based video compression solutions.
  • Represent Qualcomm in the related standardization forums: JVET, MPEG Video, and ITU-T/VCEG.
  • Document and present new algorithms and implementations in various forms, including standards contributions, patent applications, conference and journal publications, and presentations.

The ideal candidate would have the skills and experience below:
  • Expert knowledge of the theory, algorithms, and techniques used in video and image coding.
  • Knowledge and experience of video codecs and their test models, such as ECM, VVC, HEVC, and AV1.
  • Experience with deep learning structures (CNN, RNN, autoencoder, etc.) and frameworks such as TensorFlow/PyTorch.
  • A track record of successful research accomplishments demonstrated through published papers and/or patent applications in the fields of video coding or video processing.
  • Solid programming and debugging skills in C/C++.
  • Strong written and verbal English communication skills, a great work ethic, and the ability to work in a team environment to accomplish common goals.
  • PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field, or equivalent practical experience.

Qualifications: PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field. 1+ years of experience with a programming language such as C, C++, MATLAB, etc.


Apply

Location Multiple Locations


Description Today, more intelligence is moving to end devices, and mobile is becoming the pervasive AI platform. Building on the smartphone foundation and the scale of mobile, Qualcomm envisions making AI ubiquitous—expanding beyond mobile and powering other end devices, machines, vehicles, and things. We are inventing, developing, and commercializing power-efficient on-device AI, edge cloud AI, and 5G to make this a reality.

Job Purpose & Responsibilities As a member of Qualcomm's ML Systems Team, you will participate in two activities:
  • Development and evolution of ML/AI compilers (production and exploratory versions) for efficient mappings of ML/AI algorithms onto existing and future HW
  • Analysis of ML/AI algorithms and workloads to drive future features in Qualcomm's ML HW/SW offerings

Key Responsibilities:
  • Contributing to the development and evolution of ML/AI compilers within Qualcomm
  • Defining and implementing algorithms for mapping ML/AI workloads to Qualcomm HW
  • Understanding trends in ML network design, through customer engagements and the latest academic research, and how these affect both SW and HW design
  • Creating performance-driven simulation components (using C++, Python) for the analysis and design of high-performance HW/SW algorithms on future SoCs
  • Exploring and analyzing performance/area/power trade-offs for future HW and SW ML algorithms
  • Pre-silicon prediction of performance for various ML algorithms
  • Running, debugging, and analyzing performance simulations to suggest enhancements to Qualcomm hardware and software that tackle compute and system memory-related bottlenecks

Successful applicants will work in cross-site, cross-functional teams.

Requirements:
  • Demonstrated ability to learn, think, and adapt in a fast-changing environment
  • Detail-oriented with strong problem-solving, analytical, and debugging skills
  • Strong communication skills (written and verbal)
  • A strong background in algorithm development and performance analysis is essential

The following experiences would be significant assets:
  • Strong object-oriented design principles
  • Strong knowledge of C++
  • Strong knowledge of Python
  • Experience in compiler design and development
  • Knowledge of network model formats/platforms (e.g., PyTorch, TensorFlow, ONNX)
  • On-silicon debugging of high-performance compute algorithms
  • Knowledge of algorithms and data structures
  • Knowledge of software development processes (revision control, CI/CD, etc.)
  • Familiarity with tools such as git, Jenkins, Docker, and clang/MSVC
  • Knowledge of computer architecture, digital circuits, and event-driven transactional models/simulators


Apply

Location Sunnyvale, CA Bellevue, WA Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

Location Seattle, WA


Description Futures Design is the advanced concept design and incubation team within Amazon’s Device and Services Design Group (DDG). We are responsible for exploring and defining think (very) big opportunities globally and locally — so that we can better understand how new products and services might enrich the lives of our customers and so that product teams and leaders can align on where we're going and why we're going there. We focus on a 3–10+ year time frame, with the runway to invent and design category-defining products and transformational customer experiences. Working with Amazon business and technology partners, we use research, design, and prototyping to guide early product development, bring greater clarity to engineering goals, and develop a UX-grounded point of view.

We're looking for a Principal Design Technologist to join the growing DDG Futures Design team. You thrive in ambiguity and paradigm shifts, remaking assumptions of how customers engage, devices operate, and builders create. You apply deep expertise that spans design, technology, and product, grounding state-of-the-art emerging technologies through storytelling and a maker mindset. You learn and adapt technology trends to enduring customer problems through customer empathy, code, and iterative experimentation.

You will wear multiple hats to quickly assimilate customer problems, convert them to hypotheses, and test them using efficient technologies and design methods to build stakeholder buy-in. You’ll help your peers unlock challenging scenarios and mature the design studio’s ability to deliver design at scale across a breadth of devices and interaction modalities. You will work around limitations and push capabilities through your work. Your curiosity will inspire those around you and facilitate team growth, while your hands-on, collaborative nature will build trust with your peers and studio partners.


Apply

Location Bellevue, WA


Description Are you excited about developing generative AI and foundation models to revolutionize automation, robotics, and computer vision? Are you looking for opportunities to build and deploy them on real problems at truly vast scale? At Amazon Fulfillment Technologies and Robotics, we are on a mission to build high-performance autonomous systems that perceive and act to further improve our world-class customer experience, at Amazon scale.

This role is on the AFT AI team, which has deep expertise in developing cutting-edge AI solutions at scale and successfully applying them to business problems in the Amazon Fulfillment Network. These solutions typically apply machine learning and computer vision techniques to text, sequences of events, and images or video from existing or new hardware. The team comprises scientists, who develop machine learning and computer vision solutions; analysts, who evaluate the expected business impact of a project and the performance of these solutions; and software engineers, who provide necessary support such as annotation pipelines and machine learning library development.

We are looking for an Applied Scientist with expertise in computer vision. You will work alongside other CV scientists, engineers, product managers and various stakeholders to deploy vision models at scale across a diverse set of initiatives. If you are a self-motivated individual with a zeal for customer obsession and ownership, and are passionate about applying computer vision for real world problems - this is the team for you.


Apply

Location Seattle, WA Palo Alto, CA


Description Amazon's product search engine is one of the most heavily used services in the world: it indexes billions of products and serves hundreds of millions of customers worldwide. We are working on an AI-first initiative to continue improving the way we do search through large-scale, next-generation deep learning techniques. Our goal is to make step-function improvements in the use of advanced multi-modal deep learning models on very large-scale datasets, specifically through advanced systems engineering and hardware accelerators. This is a rare opportunity to develop cutting-edge Computer Vision and Deep Learning technologies and apply them to a problem of this magnitude. Some exciting questions that we expect to answer over the next few years include:
  • How can multi-modal inputs in deep learning models help us deliver delightful shopping experiences to millions of Amazon customers?
  • Can combining multi-modal data and very large-scale deep learning models provide a step-function improvement in overall model understanding and reasoning capabilities?

We are looking for exceptional scientists who are passionate about innovation and impact, and who want to work in a team with a startup culture within a larger organization.


Apply

Location Palo Alto, CA


Description Amazon is looking for talented Postdoctoral Scientists to join our Stores Foundational AI team for a one-year, full-time research position.

The Stores Foundational AI team builds foundation models for multiple Amazon entities, such as ASIN, customer, seller, and brand. These foundation models are used in downstream applications by various partner teams in Stores. Our team also invests in building a foundation model for image generation, optimized for product image generation. We leverage the latest developments to create our solutions and innovate to push the state of the art.

The Postdoc is expected to conduct research and build state-of-the-art algorithms for video understanding and representation learning in the era of LLMs. Specifically:
  • Designing efficient algorithms to learn accurate representations for videos.
  • Building extensive video understanding capabilities, including various content classification tasks.
  • Designing algorithms that can generate high-quality videos from a set of product images.
  • Improving the quality of our foundation models along the following dimensions: robustness, interpretability, fairness, sustainability, and privacy.


Apply

Location Sunnyvale, CA Bellevue, WA


Description Are you a passionate scientist in the computer vision area who aspires to apply your skills to bring value to millions of customers? Here at Ring, we have a unique opportunity to innovate and see how the results of our work improve the lives of millions of people and make neighborhoods safer. You will be part of a team committed to pushing the frontier of computer vision and machine learning technology to deliver the best experience for our neighbors. This is a great opportunity for you to innovate in this space by developing highly optimized algorithms that work at scale. This position requires experience with developing multi-modal LLMs and Vision Language Models. You will collaborate with different Amazon teams to make informed decisions on the best practices in machine learning for building highly optimized, integrated hardware and software platforms.


Apply

Location Sunnyvale, CA Seattle, WA New York, NY Cambridge, MA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to help build industry-leading technology with multimodal systems.

As an Applied Scientist with the AGI team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (Gen AI) in Computer Vision.


Apply

Location San Diego


Description

At Qualcomm, we are transforming the automotive industry with our Snapdragon Digital Chassis and building the next-generation software-defined vehicle (SDV).

Snapdragon Ride is an integral pillar of our Snapdragon Digital Chassis, and since its launch it has gained momentum with a growing number of global automakers and Tier 1 suppliers. Snapdragon Ride aims to address the complexity of autonomous driving and ADAS by leveraging a high-performance, power-efficient SoC, industry-leading artificial intelligence (AI) technologies, and a pioneering vision and drive-policy stack to deliver a comprehensive, cost- and energy-efficient systems solution.

Enabling safe, comfortable, and affordable autonomous driving includes solving some of the most demanding and challenging technological problems. From centimeter-level localization to multimodal sensor perception, sensor fusion, behavior prediction, maneuver planning, and trajectory planning and control, each one of these functions introduces its own unique challenges to solve, verify, test, and deploy on the road.

We are looking for smart, innovative, and motivated individuals with a strong theoretical background in deep learning, advanced signal processing, probability, and algorithms, and good implementation skills in Python/C++. Job responsibilities include:
  • Design and development of novel algorithms for solving complex problems related to behavior prediction for autonomous driving, including trajectory and intention prediction.
  • Developing novel deep learning models to predict trajectories for road users and optimizing them to run in real-time systems.
  • Working closely with the sensor fusion and planning teams on defining requirements and KPIs.
  • Working closely with test engineers to develop test plans for validating performance in simulations and real-world testing.

Minimum Qualifications:
  • Bachelor's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 6+ years of Systems Engineering or related work experience; OR
  • Master's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 5+ years of Systems Engineering or related work experience; OR
  • PhD in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 4+ years of Systems Engineering or related work experience.

Preferred Qualifications:
  • PhD plus 2 years of industry experience in behavior and trajectory prediction
  • Proficiency with a variety of deep learning models such as CNN, Transformer, RNN, LSTM, VAE, GraphCNN, etc.
  • Experience working with NLP deep learning networks
  • Proficiency with state-of-the-art machine learning tools (PyTorch, TensorFlow)
  • 3+ years of experience with a programming language such as C, C++, Python, etc.
  • 3+ years of Systems Engineering or related work experience in the area of behavior and trajectory prediction
  • Experience working with, modifying, and creating advanced algorithms
  • An analytical and scientific mindset, with the ability to solve complex problems
  • Experience in autonomous driving, robotics, or XR/AR/VR
  • Experience with robust software design for safety-critical systems
  • Excellent written and verbal communication skills, and the ability to work with a cross-functional team


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer on our Motion Planning team, you will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who would enjoy applying their skills to deeply complex and novel AI problems. Specifically, you will:

  • Apply and extend the Helm proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
  • Develop our planner behavior and trajectories in collaboration with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms
  • Carefully execute the development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole

Apply