



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category or location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.


Redmond, Washington, United States


Overview Microsoft Research (MSR) AI Frontiers lab is seeking applications for the position of Senior Research Engineer – Generative AI to join its team in Redmond, WA and New York City, NY.

The mission of the AI Frontiers lab is to expand the Pareto frontier of AI capabilities, efficiency, and safety through innovations in foundation models and learning agent platforms. Some of our projects include work on Small Language Models (e.g., Phi, Orca), foundation models for actions (e.g., in gaming, robotics, and Office productivity tools), and Multi-Agent AI (e.g., AutoGen).

We are seeking Senior Research Engineers to join our team and contribute to the advancement of Generative AI and Large Language Model (LLM) technologies. As a Research Engineer, you will play a crucial role in developing, improving, and exploring the capabilities of Generative AI models. Your work will have a significant impact on the development of cutting-edge technologies, advancing the state of the art and providing practical solutions to real-world problems.

Our ongoing research areas include, but are not limited to:

• Pre-training: especially of language models, action models, and multimodal models
• Alignment and post-training: e.g., instruction tuning and reinforcement learning from feedback
• Continual learning: enabling LLMs to evolve and adapt over time and learn from previous experiences and human interactions
• Specialization: tailoring models to meet application-specific requirements
• Orchestration and multi-agent systems: automated orchestration between multiple agents, incorporating human feedback and oversight

Microsoft Research (MSR) offers a vibrant environment for cutting-edge, multidisciplinary research, including access to diverse, real-world problems and data, opportunities for experimentation and real-world impact, an open publication policy, and close links to top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Embody our Culture and Values

Responsibilities As a Senior Research Engineer in AI Frontiers, you will design, develop, execute, and implement technology research projects in collaboration with other researchers, engineers, and product groups.

As a member of a world-class research organization, you will be part of research breakthroughs in the field and will be given an opportunity to realize your ideas in products and services used worldwide.


Apply

San Jose, CA

The Media Analytics team at NEC Labs America is seeking outstanding researchers with backgrounds in computer vision or machine learning. Candidates must possess an exceptional track record of original research and a passion for creating high-impact products. Our key research areas include autonomous driving, open-vocabulary perception, prediction and planning, simulation, neural rendering, agentic LLMs, and foundational vision-language models. We have a strong internship program and active collaborations with academia. The Media Analytics team publishes extensively at top-tier venues such as CVPR, ICCV, and ECCV.

To check out our latest work, please visit: https://www.nec-labs.com/research/media-analytics/

Qualifications:
1. PhD in Computer Science (or equivalent)
2. Strong publication record at top-tier computer vision or machine learning venues
3. Motivation to conduct independent research from conception to implementation


Apply

We are looking for a Research Engineer with a passion for working on cutting-edge problems that can help us create highly realistic, emotional, and life-like synthetic humans through text-to-video.

Our aim is to make video content creation available to all - not only to studio productions!

🧑🏼‍🔬 You will be someone who loves to code and build working systems, and you are used to working in a fast-paced start-up environment. You will have experience with the software development life cycle, from ideation through implementation to testing and release. You will also have extensive knowledge and experience in the computer vision domain, as well as experience in the generative AI space (GANs, diffusion models, and the like!).

👩‍💼 You will join a group of more than 50 engineers in the R&D department and will have the opportunity to collaborate with multiple research teams across diverse areas. Our R&D research is guided by our co-founders, Prof. Lourdes Agapito and Prof. Matthias Niessner, and our Director of Science, Prof. Vittorio Ferrari.

If you know and love DALL·E, MUSE, IMAGEN, MAKE-A-VIDEO, STABLE DIFFUSION, and more - and you love large data, large compute, and writing clean code, then we would love to talk to you.


Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:
- Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments
- Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
- Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
- Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
- Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
- Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

Please click the Apply link below to see the full job description and apply.


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IoT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proof-of-concepts in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and has converted leading research into best-in-class, user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second after performing our full-stack AI optimizations on the model.

Role responsibilities can include both applied and fundamental research in the field of machine learning, with a development focus in one or more of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g., large language models, deep generative models (VAE, Normalizing Flow, ARM, etc.), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimization, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling-based, back-propagation-based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling, with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design.

  • Design, develop, and test software for machine learning frameworks that optimize models to run efficiently on edge devices. Candidates are expected to have a strong interest in and deep passion for making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Research, design, develop, enhance, and implement different components of machine learning compiler for HW Accelerators.

  • Design, implement and train DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).


Apply

The Perception team at Zoox is responsible for developing the eyes and ears of our self-driving vehicle. Navigating safely and competently in the world requires us to detect, classify, track, and understand several different attributes of all the objects around us that we might interact with, all in real time and with very high precision.

As a member of the Perception team at Zoox, you will be responsible for developing and improving state-of-the-art machine learning techniques for everything from 2D/3D object detection, panoptic segmentation, and tracking to attribute classification. You will work not just with our team of talented engineers and researchers in perception, but cross-functionally with several teams including sensors, prediction, and planning, and you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms.


Apply

Location Palo Alto, CA


Description Amazon is looking for talented Postdoctoral Scientists to join our Stores Foundational AI team for a one-year, full-time research position.

The Stores Foundational AI team builds foundation models for multiple Amazon entities, such as ASIN, customer, seller, and brand. These foundation models are used in downstream applications by various partner teams in Stores. Our team also invests in building a foundation model for image generation, optimized for product images. We leverage the latest developments to create our solutions and innovate to push the state of the art.

The Postdoc is expected to conduct research and build state-of-the-art algorithms in video understanding and representation learning in the era of LLMs. Specifically, this includes:

  • Designing efficient algorithms to learn accurate representations for videos
  • Building extensive video understanding capabilities, including various content classification tasks
  • Designing algorithms that can generate high-quality videos from a set of product images
  • Improving the quality of our foundation models along the following dimensions: robustness, interpretability, fairness, sustainability, and privacy


Apply

Engineering at Pinterest

Our Engineering team is at the core of bringing our platform to life for Pinners worldwide. Working collaboratively and cross-functionally with teams across the company, our engineers tackle growth-driving challenges to build an inspired and inclusive platform for all.

Our future of work is PinFlex

At Pinterest, we know that our best work happens when we feel most inspired. PinFlex promotes flexibility while prioritizing in-person moments to celebrate our culture and drive inspiration. We know that some work can be performed anywhere, and we encourage employees to work where they choose within their country or region, whether that’s at home, at a Pinterest office, or another virtual location. We believe that there is value in a distributed workforce but there are essential touch points for in-person collaboration that will create a big impact for our business and for development and connection.

Stop by booth #2100 to learn more about our open roles and our in-house generative AI foundation model that leverages the full power of our visual search and taste graph! Our engineers and recruiters are excited to connect with you!


Apply

Location Amsterdam, Netherlands


Description

At Qualcomm AI Research, we are advancing AI to make its core capabilities – perception, reasoning, and action – ubiquitous across devices. Our mission is to make breakthroughs in fundamental AI research and scale them across industries. By bringing together some of the best minds in the field, we’re pushing the boundaries of what’s possible and shaping the future of AI.

As a Principal Machine Learning Researcher at Qualcomm:

  • You conduct innovative research in machine learning, deep learning, and AI that advances the state of the art.
  • You develop and quickly iterate on innovative research ideas, and prototype and implement them in collaboration with other researchers and engineers.
  • You stay on top of, and actively shape, the latest research in the field and publish papers at top scientific conferences.
  • You help define and shape our research vision and planning within and across teams, and are passionate about execution.
  • You engage with leads and stakeholders across business units on how to translate research progress into business impact.
  • You work in one or more of the following research areas: Generative AI, foundation models (LLMs, LVMs), reinforcement learning, neural network efficiency (e.g., quantization, conditional computation, efficient HW), on-device learning and personalization, and foundational AI research.

Working at Qualcomm means being part of a global company (headquartered in San Diego) that fosters a diverse workforce and puts emphasis on the learning opportunities and professional development of its employees. You will work closely with researchers who have published at major conferences, work on campus at the University of Amsterdam, where you have the opportunity to collaborate with academic researchers through university partnerships such as the QUVA lab, and live in a scenic, vibrant city with a healthy work/life balance and a diversity of cultural activities. In addition, you can join plenty of mentorship, learning, peer, and affinity group opportunities within the company, making it easy to develop personal and professional skills in your areas of interest. You’re empowered to start your own initiatives and, in doing so, collaborate with colleagues across teams, offices, and countries.

Minimum qualifications:

  • PhD or Master’s degree in Machine Learning, Computer Vision, Physics, Mathematics, Electrical Engineering, or a similar field, or equivalent practical experience
  • 8+ years of experience in machine learning and AI, and experience working in an academic or industry research lab
  • Strong drive to continuously improve beyond the status quo in translating new ideas into innovative solutions
  • Track record of scientific leadership by having published impactful work at major conferences in machine learning, computer vision, or NLP (e.g., NeurIPS, ICML, ICLR, CVPR, ICCV, ACL, EMNLP, NAACL, etc.)
  • Programming experience in Python and experience with standard deep learning toolkits

Preferred qualifications:

  • Hands-on experience with foundation models (LLMs, LVMs) and reinforcement learning
  • Proven experience in technology and team leadership, and experience with cross-functional stakeholder engagements
  • Experience in writing clean and maintainable code for research-internal use (no product development)
  • Aptitude for guiding and mentoring more junior researchers


Apply

Location Seoul / Pangyo, South Korea


Description

1) Deep learning compression and optimization
- Development of algorithms for compression and optimization of deep learning networks
- Deep learning network embedding (requires understanding of the HW platform)

2) AD vision recognition SW
- Development of deep learning recognition technology based on sensors such as cameras
- Development of pre- and post-processing algorithms and function output
- Optimization of image recognition algorithms

3) AD decision/control SW
- Development of map generation technology based on information recognized by many vehicles
- Development of learning-based behavior prediction models for nearby objects
- Development of driving mode determination and collision prevention functions for the Lv 3 autonomous driving system


Apply

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

We’re looking for engineers with a Machine Learning and Artificial Intelligence background to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • You will drive fundamental and applied research in ML/AI, exploring the boundaries of what is possible with the current technology set.
  • You will combine industry best practices and a first-principles approach to design and build ML models.
  • You will work in concert with product and infrastructure engineers to improve Figma’s design and collaboration tool through ML-powered product features.

We'd love to hear from you if you have:

  • 5+ years of experience in programming languages (Python, C++, Java, or R)
  • 3+ years of experience in one or more of the following areas: machine learning, natural language processing/understanding, computer vision, generative models
  • Proven experience researching, building, and/or fine-tuning ML models in production environments
  • Experience communicating and working across functions to drive solutions

While not required, it’s an added plus if you also have:

  • Proven track record of planning a multi-year roadmap in which shorter-term projects ladder up to the long-term vision.
  • Experience in mentoring/influencing senior engineers across organizations.

Apply

Location Mountain View, CA


Description Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role: We are seeking passionate Senior/Staff Software Engineers who have strong fundamentals in software development practices and are experts in C++ in a production-oriented environment. The ideal candidate is a highly experienced C++ developer with a passion for enabling the world's first safe, reliable, and efficient network of autonomous vehicles. You will partner with research and software engineers to design, develop, test, and validate AV features for our autonomous fleet.

This role will be onsite at our Mountain View office.

What you'll do:
- Design, implement, integrate, and support real-time mission-critical software for Gatik’s autonomy stack
- Work with the research engineers to develop maintainable, testable, and robust software designs
- Architect and implement solutions to complex issues between components partitioned across the large software stack
- Be at the forefront of guiding and ensuring best SDLC practices while contributing to improving safety in the core autonomy stack
- Collaborate with the Infrastructure and DevOps teams for efficient, secure, and scalable software delivery to a network of Gatik’s autonomous fleet
- Guide and mentor autonomy researchers and algorithm developers to make sure their components run efficiently with optimal compute and memory usage
- Review and refine technical requirements and translate them into high-level designs and plans to support the development of safe AV technology
- Conduct code and design reviews and advise on technical matters

Click the apply button below to see the full job description and apply


Apply

Location San Diego


Description

At Qualcomm, we are transforming the automotive industry with our Snapdragon Digital Chassis and building the next generation software defined vehicle (SDV).

Snapdragon Ride is an integral pillar of our Snapdragon Digital Chassis, and since its launch it has gained momentum with a growing number of global automakers and Tier 1 suppliers. Snapdragon Ride aims to address the complexity of autonomous driving and ADAS by leveraging its high-performance, power-efficient SoC, industry-leading artificial intelligence (AI) technologies, and pioneering vision and drive policy stack to deliver a comprehensive, cost- and energy-efficient systems solution.

Enabling safe, comfortable, and affordable autonomous driving includes solving some of the most demanding and challenging technological problems. From centimeter-level localization to multimodal sensor perception, sensor fusion, behavior prediction, maneuver planning, and trajectory planning and control, each one of these functions introduces its own unique challenges to solve, verify, test, and deploy on the road.

We are looking for smart, innovative, and motivated individuals with a strong theoretical background in deep learning, advanced signal processing, probability, and algorithms, and good implementation skills in Python/C++. Job responsibilities include the design and development of novel algorithms for solving complex problems related to behavior prediction for autonomous driving, including trajectory and intention prediction; developing novel deep learning models to predict trajectories for road users and optimizing them to run in real-time systems; working closely with the sensor fusion and planning teams on defining requirements and KPIs; and working closely with test engineers to develop test plans for validating performance in simulation and real-world testing.

Minimum Qualifications:
• Bachelor's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 6+ years of Systems Engineering or related work experience; OR Master's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 5+ years of Systems Engineering or related work experience; OR PhD in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 4+ years of Systems Engineering or related work experience

Preferred Qualifications:
• PhD plus 2 years of industry experience in behavior and trajectory prediction
• Proficient in a variety of deep learning models such as CNN, Transformer, RNN, LSTM, VAE, GraphCNN, etc.
• Experience working with NLP deep learning networks
• Proficient in state-of-the-art machine learning tools (PyTorch, TensorFlow)
• 3+ years of experience with a programming language such as C, C++, Python, etc.
• 3+ years of Systems Engineering or related work experience in the area of behavior and trajectory prediction
• Experience working with, modifying, and creating advanced algorithms
• Analytical and scientific mindset, with the ability to solve complex problems
• Experience in autonomous driving, robotics, or XR/AR/VR
• Experience with robust software design for safety-critical systems
• Excellent written and verbal communication skills and the ability to work with a cross-functional team


Apply

Overview We are seeking an exceptionally talented Postdoctoral Research Fellow to join our interdisciplinary team at the forefront of machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience. This position is hosted by the Stanford Translational AI (STAI) in Medicine and Mental Health Lab (PI: Dr. Ehsan Adeli, https://stanford.edu/~eadeli), as part of the Department of Psychiatry and Behavioral Sciences at Stanford University. The postdoc will have the opportunity to directly collaborate with researchers and PIs within the Computational Neuroscience Lab (CNS Lab) in the School of Medicine and the Stanford Vision and Learning (SVL) lab in the Computer Science Department. These dynamic research groups are renowned for groundbreaking contributions to artificial intelligence and medical sciences.

Project Description The successful candidate will have the opportunity to work on cutting-edge projects aimed at building large-scale models for neuroimaging and neuroscience through innovative AI technologies and self-supervised learning methods. The postdoc will contribute to building a large-scale foundation model from brain MRIs and other modalities of data (e.g., genetics, videos, text). The intended downstream applications include understanding the brain development process during the early ages of life, decoding brain aging mechanisms, and identifying the pathology of different neurodegenerative or neuropsychiatric disorders. We use several public and private datasets including but not limited to the Human Connectome Project, UK Biobank, Alzheimer's Disease Neuroimaging Initiative (ADNI), Parkinson’s Progression Marker Initiative (PPMI), Open Access Series of Imaging Studies (OASIS), Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA), Adolescent Brain Cognitive Development (ABCD), and OpenNeuro.

Key Responsibilities

  • Conduct research in machine learning, computer vision, and medical image analysis, with applications in neuroimaging and neuroscience
  • Develop and implement advanced algorithms for analyzing medical images and other modalities of medical data
  • Develop novel generative models
  • Develop large-scale foundation models
  • Collaborate with a team of researchers and clinicians to design and execute studies that advance our understanding of neurological disorders
  • Mentor graduate students (Ph.D. and MSc)
  • Publish findings in top-tier journals and conferences
  • Contribute to grant writing and proposal development for securing research funding

Qualifications

  • PhD in Computer Science, Electrical Engineering, Neuroscience, or a related field
  • Proven track record of publications in high-impact journals and conferences, including ICML, NeurIPS, ICLR, CVPR, ICCV, ECCV, MICCAI, Nature, and JAMA
  • Strong background in machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience
  • Excellent programming skills in Python, C++, or similar languages and experience with ML frameworks such as TensorFlow or PyTorch
  • Ability to work independently and collaboratively in an interdisciplinary team
  • Excellent communication skills, both written and verbal

Benefits

  • Competitive salary and benefits package
  • Access to state-of-the-art facilities and computational resources
  • Opportunities for professional development and collaboration with leading experts in the field
  • Participation in international conferences and workshops

Working at Stanford University offers access to world-class research facilities and a vibrant intellectual community. The university provides numerous opportunities for interdisciplinary collaboration, professional development, and cutting-edge innovation. Additionally, being part of Stanford opens doors to a global network of leading experts and industry partners, enhancing both career growth and research impact.

Apply For full consideration, send a complete application via this form: https://forms.gle/KPQHPGGeXJcEsD6V6


Apply

Location Sunnyvale, CA; Bellevue, WA; Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply