



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category and location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Location Seattle, WA


Description Futures Design is the advanced concept design and incubation team within Amazon’s Device and Services Design Group (DDG). We are responsible for exploring and defining think (very) big opportunities globally and locally — so that we can better understand how new products and services might enrich the lives of our customers and so that product teams and leaders can align on where we're going and why we're going there. We focus on a 3–10+ year time frame, with the runway to invent and design category-defining products and transformational customer experiences. Working with Amazon business and technology partners, we use research, design, and prototyping to guide early product development, bring greater clarity to engineering goals, and develop a UX-grounded point of view.

We're looking for a Principal Design Technologist to join the growing DDG Futures Design team. You thrive in ambiguity and paradigm shifts, remaking assumptions about how customers engage, devices operate, and builders create. You apply deep expertise that spans design, technology, and product, grounding state-of-the-art emerging technologies through storytelling and a maker mindset. You learn and adapt technology trends to enduring customer problems through customer empathy, code, and iterative experimentation.

You will wear multiple hats to quickly assimilate customer problems, convert them to hypotheses, and test them using efficient technologies and design methods to build stakeholder buy-in. You’ll help your peers unlock challenging scenarios and mature the design studio’s ability to deliver design at scale across a breadth of devices and interaction modalities. You will work around limitations and push capabilities through your work. Your curiosity will inspire those around you and facilitate team growth, while your hands-on, collaborative nature will build trust with your peers and studio partners.


Apply

Location Santa Clara, CA


Description Amazon is looking for passionate, talented, and inventive Applied Scientists with a strong machine learning background to help build industry-leading Speech, Vision, and Language technology.

AWS Utility Computing (UC) provides product innovations, from foundational services such as Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new offerings that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Automatic Speech Recognition (ASR), Machine Translation (MT), Natural Language Understanding (NLU), Machine Learning (ML) and Computer Vision (CV).

As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques that advance the state of the art in human language technology. Your work will directly impact millions of our customers in the form of products and services that make use of speech and language technology. You will gain hands-on experience with Amazon’s heterogeneous speech, text, and structured data sources, and large-scale computing resources to accelerate advances in spoken language understanding.

We are hiring in all areas of human language technology: ASR, MT, NLU, text-to-speech (TTS), and Dialog Management, in addition to Computer Vision.


Apply

Vancouver, British Columbia, Canada


Overview Microsoft Research (MSR) is a leading industrial research laboratory comprising over 1,000 computer scientists working across the United States, the United Kingdom, China, India, Canada, and the Netherlands.

We are currently seeking a Senior Researcher in the area of Artificial Specialized Intelligence, located in Vancouver, British Columbia, with a keen interest in developing cutting-edge large foundation models and post-training techniques for different domains and scenarios. This is an opportunity to drive an ambitious research agenda while collaborating with diverse teams to push for novel applications in these areas.

Over the past 30 years, our scientists have not only conducted world-class computer science research but also integrated advanced technologies into our products and services, positively impacting millions of lives and propelling Microsoft to the forefront of digital transformation.

Responsibilities Conduct cutting-edge research in large foundation models, focusing on applying them in specific domains. Collaborate with cross-functional teams to integrate solutions into Artificial Intelligence (AI)-driven systems. Develop and maintain research prototypes and software tools, ensuring that they are well documented and adhere to best practices in software development. Publish research findings in top-tier conferences and journals, and present your work at industry events. Collaborate with other AI researchers and engineers, sharing knowledge and expertise to foster a culture of innovation and continuous learning within the team.


Apply

Location Bellevue, WA


Description Are you excited about developing generative AI and foundation models to revolutionize automation, robotics, and computer vision? Are you looking for opportunities to build and deploy them on real problems at truly vast scale? At Amazon Fulfillment Technologies and Robotics, we are on a mission to build high-performance autonomous systems that perceive and act to further improve our world-class customer experience, at Amazon scale.

This role is for the AFT AI team, which has deep expertise in developing cutting-edge AI solutions at scale and successfully applying them to business problems in the Amazon Fulfillment Network. These solutions typically apply machine learning and computer vision techniques to text, sequences of events, and images or video from existing or new hardware. The team comprises scientists, who develop machine learning and computer vision solutions; analysts, who evaluate the expected business impact of a project and the performance of these solutions; and software engineers, who provide necessary support such as annotation pipelines and machine learning library development.

We are looking for an Applied Scientist with expertise in computer vision. You will work alongside other CV scientists, engineers, product managers, and various stakeholders to deploy vision models at scale across a diverse set of initiatives. If you are a self-motivated individual with a zeal for customer obsession and ownership, and are passionate about applying computer vision to real-world problems, this is the team for you.


Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. The project will take place in close collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others. Therefore, anticipating the partner's actions is crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project we are in particular interested in estimating human intentions to enable collaborative tasks between humans and robots such as human-to-robot and robot-to-human handovers.

Contact pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda-Pineda (RobotLearn).


Apply

Location Sunnyvale, CA


Description Are you fueled by a passion for computer vision, machine learning, and AI, and eager to leverage your skills to enrich the lives of millions across the globe? Join us on the Ring AI team, where we're not just offering a job but an opportunity to revolutionize safety and convenience in our neighborhoods through cutting-edge innovation.

You will be part of a dynamic team dedicated to pushing the boundaries of computer vision, machine learning and AI to deliver an unparalleled user experience for our neighbors. This position presents an exceptional opportunity for you to pioneer and innovate in AI, making a profound impact on millions of customers worldwide. You will partner with world-class AI scientists, engineers, product managers and other experts to develop industry-leading AI algorithms and systems for a diverse array of Ring and Blink products, enhancing the lives of millions of customers globally. Join us in shaping the future of AI innovation at Ring and Blink, where exciting challenges await!


Apply

Redmond, Washington, United States


Overview We are seeking highly skilled and passionate research scientists to join Responsible & Open AI Research (ROAR) in Azure Cognitive Services in Redmond, WA.

As a Principal Research Scientist, you will play a key role in advancing Responsible AI approaches to ensure safe releases of GenAI models such as GPT-4o, DALL-E, Sora, and beyond, as well as to expand and enhance the capability of Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities Conduct cutting-edge, deployment-driven research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of textual and multimodal AI risks. Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.

Enable the safe release of multimodal models from OpenAI in the Azure OpenAI Service, and expand and enhance the Azure AI Content Safety Service with new detection and mitigation technologies for text and multimodal content. Develop innovative approaches to address AI safety challenges for diverse customer scenarios.

Review business and product requirements and incorporate state-of-the-art research to formulate plans that will meet business goals. Identify gaps and determine which tools, technologies, and methods to incorporate to ensure quality and scientific rigor. Proactively provide mentorship and coaching to less experienced and mid-level team members.


Apply

Geomagical Labs is a 3D R&D lab, in partnership with IKEA. We create magical mixed-reality experiences for hundreds of millions of users, using computer vision, neural networks, graphics, and computational photography. Last year we launched IKEA Kreativ, and we’re excited for what’s next! We have an opening in our lab for a senior computer vision researcher with 3D reconstruction and deep learning expertise to develop and improve the underlying algorithms powering our consumer products. We are looking for highly motivated, creative, applied researchers with entrepreneurial drive who are excited about building novel technologies and shipping them all the way into the hands of millions of customers!

Requirements:
• Ph.D. and 2+ years of experience, or Master's and 6+ years of experience, focused on 3D Computer Vision and Deep Learning.
• Experience in classical methods for 3D Reconstruction: SfM/SLAM, Multi-view Stereo, RGB-D Fusion.
• Experience in using Deep Learning for 3D Reconstruction and/or Scene Understanding, having worked in any of: Depth Estimation, Room Layout Estimation, NeRFs, Inverse Rendering, 3D Scene Understanding.
• Familiarity with Computer Graphics and Computational Photography.
• Expertise in ML frameworks and libraries, e.g. PyTorch. Highly productive in Python.
• Ability to architect and implement complex systems at the micro and macro level.
• Entrepreneurial: adventurous, self-driven, comfortable under uncertainty, with a desire to make systems work end-to-end.
• Innovative, with a track record of patents and/or first-authored publications at leading workshops or conferences such as CVPR, ECCV/ICCV, SIGGRAPH, ISMAR, NeurIPS, ICLR, etc.
• Experience in developing technologies that got integrated into products, as well as post-launch performance tracking and shipping improvements.
• [Bonus] Comfortable with C++.

Benefits:
• Join a mission-driven R&D lab, strategically backed by an influential global brand.
• Work in a dynamic team of computer vision, AI, computational photography, AR, graphics, and design professionals, and successful serial entrepreneurs.
• Opportunity to publish novel and relevant research.
• Fully remote work available to people living in the USA or Canada. Headquartered in downtown Palo Alto, California, an easy walk from restaurants, coffee shops, and Caltrain commuter rail.
• The USA base salary for this full-time position ranges from $180,000 to $250,000, determined by location, role, skill, and experience level.
• Geomagical Labs offers a comprehensive set of benefits and, for qualifying roles, substantial incentive grants, vesting annually.


Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light- and medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX, enabling them to optimize their hub-and-spoke supply chain operations and enhance service levels and product flow across multiple locations while reducing labor costs and meeting unprecedented expectations for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:
- Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments
- Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
- Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
- Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
- Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
- Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

Please click the Apply link below to see the full job description and apply.


Apply

We are seeking a highly motivated candidate for a fully funded PhD position to work in 3D computer graphics and 3D computer vision.

The successful candidate will join the 3D Graphics and Vision research group led by Prof. Binh-Son Hua at the School of Computer Science and Statistics, Trinity College Dublin, Ireland to work on topics related to generative AI in the 3D domain.

The School of Computer Science and Statistics at Trinity College Dublin is a collegiate, friendly, and research-intensive centre for academic study and research excellence. The School has been ranked #1 in Ireland, top 25 in Europe, and top 100 Worldwide (QS Subject Rankings 2018, 2019, 2020, 2021).

The PhD student is expected to conduct fundamental research and publish in top-tier computer vision and computer graphics conferences (CVPR, ECCV, ICCV, SIGGRAPH) and journals (TPAMI, IJCV).

The start date of the position is September 01, 2024. The position is fully funded for 4 years by Science Foundation Ireland.

The successful candidate will require the following skills and knowledge:
• Bachelor's or Master's degree in Computer Science or related fields;
• Strong competence in computer graphics and computer vision;
• Hands-on experience in training deep models;
• Hands-on experience in computer graphics and computer vision application development, e.g. OpenGL, OpenCV, CUDA, Blender;
• Strong programming skills in C++ and Python, with the capability to implement systems based on open-source software;
• Solid experience in academic research and publications is an advantage;
• Additional background in math, statistics, or physics is an advantage.

Applicants should provide the following information:
• A comprehensive CV;
• Academic transcripts of Bachelor's and Master's degrees;
• The names and contact details of two referees.

Interested candidates can email Binh-Son Hua (https://sonhua.github.io) for an informal discussion of the position. Applications will be reviewed on a rolling basis until the position has been filled.


Apply

Location Sunnyvale, CA Bellevue, WA Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IoT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proof-of-concepts in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and has converted leading research into best-in-class, user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second, after performing our full-stack AI optimizations on the model.

Role responsibilities can include both applied and fundamental research in the field of machine learning, with a development focus in one or many of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g. large language models, deep generative models (VAE, Normalizing-Flow, ARM, etc), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimizations, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling-based, back-propagation-based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling, with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design.

  • Designs, develops, and tests software for machine learning frameworks that optimize models to run efficiently on edge devices. The candidate is expected to have a strong interest in and deep passion for making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Researches, designs, develops, enhances, and implements different components of machine learning compilers for HW accelerators.

  • Designs, implements, and trains DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).


Apply

You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional, and lifelike synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions, which are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing and release, and you have a passion for large data, large model training and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses at a rapid pace across the globe.


Apply

London


Who we are Established in 2017, Wayve is a leader in autonomous vehicle technology, driven by breakthroughs in Embodied AI. Our intelligent, mapless, and hardware-agnostic technologies empower vehicles to navigate complex environments effortlessly.

Supported by prominent investors, Wayve is advancing the transition from assisted to fully automated driving, making transportation safer, more efficient, and universally accessible. Join our world-class, multinational team of engineers and researchers as we push the boundaries of frontier AI and autonomous driving, creating impactful technologies and products on a global scale.

Where you will have an impact We're looking for an experienced Applied Scientist with expertise in Neural Radiance Fields (NeRFs) and Gaussian Splatting to join our Vision & Graphics team and advance our innovative neural simulator, Ghost Gym. This role is central to improving Ghost Gym's capabilities, utilizing state-of-the-art neural rendering techniques to craft photorealistic 4D worlds. You'll be at the forefront of developing and applying groundbreaking research to generate thousands of simulated scenarios. These scenarios are critical for training, testing, and debugging our end-to-end AI driving models, contributing significantly to the creation of safe and reliable AI driving technology. Your work will focus on improving the efficiency, realism, and dynamism of our simulations, especially for dynamic and outdoor environments, pushing the limits of current photorealistic visualization technologies.

Challenges you will own
• Conducting cutting-edge research in NeRFs, Gaussian splatting, and related technologies, with a focus on solving real-world challenges in 3D rendering
• Developing and implementing algorithms for efficient, high-quality 3D scene reconstruction and rendering, particularly for dynamic and outdoor environments
• Collaborating with cross-functional teams to integrate research findings into scalable, production-level solutions
• Staying abreast of the latest developments in the field, evaluating and incorporating state-of-the-art techniques into our workflows
• Potentially finding opportunities to publish research findings in top-tier journals and conferences, contributing to the scientific community and establishing Wayve as a leader in the field

What you will bring to Wayve
Essential
• Proven track record of research in NeRFs, Gaussian splatting, or closely related areas, demonstrated through publications or deployed applications
• Strong programming skills in Python with experience in deep learning frameworks such as PyTorch
• Solid foundation in the mathematics and physics underlying 3D graphics and rendering techniques
• Excellent problem-solving skills and the ability to work independently as well as in a team environment
• Demonstrated ability to work collaboratively in a fast-paced, innovative, interdisciplinary team environment

Desirable
• Experience with dynamic scene reconstruction and rendering, particularly in outdoor environments
• Familiarity with parallel computing, GPU programming, and optimization techniques
• PhD or MSc in Computer Science, Computer Engineering, or a related field, with a focus on computer graphics, computer vision, or machine learning

What we offer you
• The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact
• Competitive compensation and benefits
• A dynamic and fast-paced work environment in which you will grow every day, learning on the job from the brightest minds in our space, with support for more formal learning opportunities too
• A culture that is ego-free, respectful, and welcoming (of you and your dog); we even eat lunch together every day


Apply