



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category or location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional, and lifelike synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions, which are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing, and release, and you have a passion for large-scale data, large-model training, and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses across the globe at a rapid pace.


Apply

Vancouver

Who we are: Established in 2017, Wayve is a leader in autonomous vehicle technology, driven by breakthroughs in Embodied AI. Our intelligent, mapless, and hardware-agnostic technologies empower vehicles to navigate complex environments effortlessly. Supported by prominent investors, Wayve is advancing the transition from assisted to fully automated driving, making transportation safer, more efficient, and universally accessible. Join our world-class, multinational team of engineers and researchers as we push the boundaries of frontier AI and autonomous driving, creating impactful technologies and products on a global scale.

We are seeking an experienced researcher to be a founding member of our Vancouver team! We are prioritising someone with experience leading projects in AI applied to autonomous driving or similar robotics or decision-making domains, including, but not limited to, the following areas: - Foundation models for robotics or embodied AI - Model-free and model-based reinforcement learning - Offline reinforcement learning - Large language models - Planning with learned models, model predictive control, and tree search - Imitation learning, inverse reinforcement learning, and causal inference - Learned agent models: behavioural and physical models of cars, people, and other dynamic agents

Challenges you will own: You'll be working on some of the world's hardest problems, and you'll be able to attack them in new ways. You'll be a technical leader within our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

- Actively contributing to the Science team's technical leadership community, including proposing new projects, organising their work, and delivering substantial impact across Wayve - Leveraging our large, rich, and diverse sources of real-world driving data - Architecting our models to best employ the latest advances in foundation models, transformers, world models, etc., evaluating and incorporating state-of-the-art techniques into our workflows - Investigating which learning algorithms to use (e.g. reinforcement learning, behavioural cloning) - Leveraging simulation for controlled experimental insight, training data augmentation, and re-simulation - Scaling models efficiently across data, model size, and compute, while maintaining efficient deployment on the car - Collaborating with cross-functional, international teams to integrate research findings into scalable, production-level solutions - Potentially contributing to academic publications for top-tier conferences such as NeurIPS, CVPR, ICRA, ICLR, and CoRL, working in a world-class team, contributing to the scientific community, and establishing Wayve as a leader in the field

What you will bring to Wayve: - A proven track record of research in one or more of the topics above, demonstrated through deployed applications or publications - Experience leading a research agenda aligned with larger organisation or company goals - Strong programming skills in Python, with experience in deep learning frameworks and libraries such as PyTorch, NumPy, and pandas - Experience bringing a machine learning research concept through the full ML development cycle - Excellent problem-solving skills and the ability to work independently as well as in a team environment - Demonstrated ability to work collaboratively in a fast-paced, innovative, interdisciplinary team environment - Experience bringing an ML research concept through to production and at scale - PhD in Computer Science, Computer Engineering, or a related field

What we offer you: The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact.


Apply

Location Palo Alto, CA


Description Amazon is looking for talented Postdoctoral Scientists to join our Stores Foundational AI team for a one-year, full-time research position.

The Stores Foundational AI team builds foundation models for multiple Amazon entities, such as ASIN, customer, seller, and brand. These foundation models are used in downstream applications by various partner teams in Stores. Our team also invests in building a foundation model for image generation, optimized for product image generation. We leverage the latest developments to create our solutions and innovate to push the state of the art.

The Postdoc is expected to conduct research and build state-of-the-art algorithms in video understanding and representation learning in the era of LLMs. Specifically, this includes: - Designing efficient algorithms to learn accurate representations for videos - Building extensive video understanding capabilities, including various content classification tasks - Designing algorithms that can generate high-quality videos from a set of product images - Improving the quality of our foundation models along the following dimensions: robustness, interpretability, fairness, sustainability, and privacy


Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do: - Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments - Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks - Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions - Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases - Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions - Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

**Please click on the Apply link below to see the full job description and apply.


Apply

Location Seattle, WA


Description Amazon's Compliance Shared Services (CoSS) is looking for a smart, energetic, and creative Sr. Applied Scientist to join the Applied Research Science team in Seattle and to extend and invent state-of-the-art research in multi-modal architectures and large language models across federated and continuous learning paradigms spread across multiple systems. At Amazon, we are working to be the most customer-centric company on earth. Millions of customers trust us to ensure a safe shopping experience. This is an exciting and challenging position in which you will deliver scientific innovations into production systems at Amazon scale that increase automation accuracy and coverage, and extend and invent new research as a key author to deliver re-usable foundational capabilities for automation.

You will analyze and process large amounts of image, text and tabular data from product detail pages, combine them with additional external and internal sources of multi-modal data, evaluate state-of-the-art algorithms and frameworks, and develop new algorithms in federated and continuous learning modes that can be integrated and launched across multiple systems. You will partner with engineers and product managers across multiple Amazon teams to design new ML solutions implemented across worldwide Amazon stores for the entire Amazon product catalog.


Apply

Natick, MA, United States


The Company Cognex is a global leader in the exciting and growing field of machine vision.

The Team: Vision Algorithms, Advanced Vision Technology This position is in the Vision Algorithms Team of the Advanced Vision Technology group, which is responsible for designing and developing the most sophisticated machine vision tools in the world. We combine custom hardware, specialized lighting, optics, and world-class vision algorithms to create software systems that are used to analyze imagery (intensity, color, density, Z-data, ID barcodes, etc.), to detect, identify, and localize objects, to make measurements, to inspect for defects, and to read encoded data. Technology development is critical to the overall business to expand areas of application, improve performance, discover new algorithms, and make use of new hardware and processing power. Engineers in this group typically have experience with image analysis, machine vision, or signal processing.

Job Summary: The Vision Algorithms team is looking for a well-rounded, intelligent, creative, and motivated summer or fall intern with a passion for results! You will work with our senior engineers and technical leads on projects that advance our software development infrastructure and enhance our key technologies and customer experience. You will get mentorship on tackling technical challenges and opportunities to build a solid foundation for your career in Software Engineering, or Computer Vision and Artificial Intelligence.

Essential Functions: - Prototype and develop Vision (2D and ID) applications on top of Cognex products and technology. - Build internal tools or automated tests that can be used in software development or testing. - Understand our products and contribute to creating optimal solutions for customer applications in the automation industry. - High energy and a motivated learner: creative, driven, and looking to work hard for a fast-moving company. - Strong analytical and problem-solving skills. - Strong programming skills in both C/C++ and Python are required. - A solid understanding of machine learning (ML) fundamentals and experience with ML frameworks like TensorFlow or PyTorch are required. - Demonstrated projects or internships in the AI/ML domain during academic or professional tenure are highly desirable. - Experience with embedded systems, Linux systems, vision/image processing, and optics is all valued. - A background in 2D vision, 3D camera calibration, and multi-camera systems is preferred.

Minimum education and work experience required: Pursuing a MS, or Ph.D. from a top engineering school in EE, CS, or equivalent.

If you would like to meet the hiring manager at CVPR to discuss this opportunity, please email ahmed.elbarkouky@cognex.com


Apply

Seattle, US


Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences. We’re passionate about empowering people to craft beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.

We’re on a mission to hire the very best and are committed to building exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity Photoshop ART is seeking a Research Scientist to join our inpainting R&D team focused on making significant progress in image generation/restoration, low-level vision, and image editing, with an eventual posture toward productization. Individuals in this role are expected to be experts in identified research areas such as artificial intelligence, machine learning, computer vision, and image processing. The ideal candidate will have a keen interest in producing new science to advance Adobe products.
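For readers unfamiliar with the inpainting task itself, here is a minimal, hedged sketch using the public diffusers library. The checkpoint, file names, and prompt are illustrative assumptions and say nothing about Adobe's internal models or tooling.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Public example checkpoint (an assumption for illustration, not an Adobe model)
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

# image: the photo to edit; mask: white pixels mark the region to regenerate
# (file names are placeholders)
image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(prompt="a wooden bench in a park",
              image=image, mask_image=mask).images[0]
result.save("inpainted.png")
```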

What you'll do: - Work towards long-term, results-oriented research goals while identifying intermediate achievements - Contribute to research that can be applied to Adobe product development - Help integrate novel research work into Adobe's products - Lead and collaborate on research projects across different Adobe divisions

What you need to succeed: - Ph.D. and solid publications in machine learning, AI, computer science, statistics, or scene semantic understanding - Experience communicating research to public audiences of peers - Experience working in teams - Knowledge of a programming language

Preferred qualifications: - 2+ years of professional full-time experience (preferred, but not required) - 2+ years of internship with a primary emphasis on AI research in image generation, low-level vision, image restoration, and segmentation - Experience collaborating with a team with varied strengths - 4+ first-author publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ECCV, ICML, ICLR, ICCV, and ACL) - Experience developing and debugging in Python

At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach, where ongoing feedback flows freely.

If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer.

Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, or veteran status.

Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $129,400 -- $242,200 annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience.


Apply

Location San Francisco, CA


Description Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

You will be managing a team within the Music Machine Learning and Personalization organization that is responsible for developing, training, serving and iterating on models used for personalized candidate generation for both Music and Podcasts.


Apply

Location Sunnyvale, CA Bellevue, WA


Description Are you fueled by a passion for computer vision, machine learning, and AI, and eager to leverage your skills to enrich the lives of millions across the globe? Join the Ring AI team, where we're not just offering a job, but an opportunity to revolutionize safety and convenience in our neighborhoods through cutting-edge innovation.

You will be part of a dynamic team dedicated to pushing the boundaries of computer vision, machine learning and AI to deliver an unparalleled user experience for our neighbors. This position presents an exceptional opportunity for you to pioneer and innovate in AI, making a profound impact on millions of customers worldwide. You will partner with world-class AI scientists, engineers, product managers and other experts to develop industry-leading AI algorithms and systems for a diverse array of Ring and Blink products, enhancing the lives of millions of customers globally. Join us in shaping the future of AI innovation at Ring and Blink, where exciting challenges await!


Apply

Seattle, WA or Costa Mesa, CA

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

The Vehicle Autonomy (Robotics) team at Anduril develops aerial and ground-based robotic systems. The team is responsible for taking products like Ghost, Anvil, and our Sentry Tower from paper sketches to operational systems. We work in close coordination with specialist teams like Perception, Autonomy, and Manufacturing to solve some of the hardest problems facing our customers. We are looking for software engineers and roboticists excited about creating a powerful robotics stack that includes computer vision, motion planning, SLAM, controls, estimation, and secure communications.

WHAT YOU'LL DO -Write and maintain core libraries (frame transformations, targeting and guidance, etc.) that all robotics platforms at Anduril will use -Own feature development and rollout for our products - recent examples include building a Software-in-the-Loop simulator for our Tower product, writing an autofocus control system for cameras, creating a distributed coordinate-frame library that operates over IPC, and redesigning the Pan-Tilt controls to accurately move heavy loads -Design, evaluate, and implement sensor integrations that support operation by both human and autonomous planning agents -Work closely with our hardware and manufacturing teams during product development, providing quick feedback that contributes to the final hardware design

REQUIRED QUALIFICATIONS -Strong engineering background from industry or school, ideally in areas/fields such as Robotics, Computer Science, Software Engineering, Mechatronics, Electrical Engineering, Mathematics, or Physics -5+ years of C++ or Rust experience in a Linux development environment -Experience building software solutions involving significant amounts of data processing and analysis -Ability to quickly understand and navigate complex systems and established code bases -Must be eligible to obtain and hold a US DoD Security Clearance.

PREFERRED QUALIFICATIONS -Experience in one or more of the following: motion planning, perception, localization, mapping, controls, and related system performance metrics. -Understanding of systems software (kernel, device drivers, system calls) and performance analysis


Apply

Location Sunnyvale, CA Seattle, WA New York, NY Cambridge, MA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to help build industry-leading technology with multimodal systems.

As an Applied Scientist with the AGI team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (Gen AI) in Computer Vision.


Apply

Location San Diego


Description

Qualcomm AI Research is looking for world-class algorithm engineers in general-domain machine learning, especially deep learning, generative AI, LLMs, and LVMs. Come join a high-caliber team of engineers building advanced machine learning technology, best-in-class solutions, and user-friendly model optimization tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet) to enable state-of-the-art networks to run on devices with limited power, memory, and computation.

Members of our team enjoy the opportunity to participate in cutting-edge research while simultaneously contributing technology that will be deployed worldwide in our industry-leading devices. You will be part of a talented multi-disciplinary team working on on-device generative AI optimization. Collaborate in a cross-functional environment spanning hardware, software, and systems. See your designs in action on industry-leading chips embedded in the next generation of smartphones, autonomous vehicles, robotics, and IoT devices.

Minimum Qualifications: Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR Master's degree in Computer Science, Engineering, Information Systems, or a related field and 3+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR PhD in Computer Science, Engineering, Information Systems, or a related field and 2+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience.

The R&D work for this position focuses on the following: - Algorithms research and development in the areas of generative AI, LVMs, LLMs, and multi-modality - Efficient inference algorithms research and development, e.g. batching, KV caching, efficient attention, long context, and speculative decoding - Advanced quantization algorithms research and development for complex generative models, e.g. gradient/non-gradient-based optimization, equivalent/non-equivalent transformation, automatic mixed precision, and hardware-in-the-loop - Model compression, lossy or lossless, structural and neural search - Optimization-based learning and learning-based optimization - Generative AI system prototyping - Applying solutions toward system innovations for model efficiency advancement on device as well as in the cloud - Python and PyTorch programming

Preferred Qualifications: Master's degree in Computer Science, Engineering, Information Systems, or a related field; a PhD is preferred. 2+ years of experience with machine learning algorithms, systems engineering, or related work experience.
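As a rough, hedged illustration of one efficiency technique named above (not the AIMET workflow itself, whose API is not reproduced here), post-training dynamic quantization in plain PyTorch stores the weights of linear layers in int8; the toy model and layer sizes below are assumptions made only for the sketch.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer feed-forward block (sizes are placeholders)
model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512)).eval()

# Post-training dynamic quantization: Linear weights are stored in int8 and
# activations are quantized on the fly at inference time
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # same interface as the original model, smaller weights
```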


Apply

Redmond, Washington, United States


Overview Are you interested in developing and optimizing deep learning systems? Are you interested in designing novel technology to accelerate their training and serving for cutting edge models and applications? Do you want to scale large Artificial Intelligence models to their limits on massive supercomputers? Are you interested in being part of an exciting open-source library for deep learning systems? The DeepSpeed team is hiring!

Microsoft's DeepSpeed is an open-source library built on the PyTorch (machine learning framework) ecosystem that combines numerous research innovations and technology advancements to make deep learning efficient and easier to use. DeepSpeed can parallelize across thousands of GPUs and train models with trillions of parameters. Our OSS (Open Source Software) has powered many advanced models like MT-530B and BLOOM, and it supports unprecedented scale and speed for both training and inference.
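A minimal sketch of how a PyTorch model is typically handed to DeepSpeed is shown below; the toy model, config values, and hyperparameters are illustrative assumptions rather than anything from this posting, and real jobs are normally launched with the `deepspeed` CLI on a multi-GPU cluster.

```python
import torch.nn as nn
import deepspeed

# Toy model standing in for a large network (layer sizes are placeholders)
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

# Illustrative config: fp16 training with ZeRO stage 2 optimizer-state sharding
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps the model in an engine that handles distributed
# data/model parallelism, mixed precision, and checkpointing
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```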

The DeepSpeed team is also part of the larger Microsoft AI at Scale initiative, which is pioneering the next-generation AI capabilities that are scaled across the company’s products and AI platforms.

The DeepSpeed team is looking for a Senior Researcher in Redmond, WA with a passion for innovation and for building high-quality systems that will make a significant impact inside and outside of Microsoft. Our team is highly collaborative, innovative, and end-user obsessed. We are looking for candidates with strong systems skills who are passionate about driving innovations to improve the efficiency and effectiveness of deep learning systems. We value creativity, agility, accountability, and a desire to learn new technologies.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities: - Excels in one or more subareas and gains expertise in a broad area of research - Identifies and articulates problems in an area of research that are academically novel and may directly or indirectly impact business opportunities - Collaborates with other relevant researchers or research groups to contribute to or advance a research agenda - Researches and develops an understanding of the state-of-the-art insights, tools, technologies, or methods being used in the research community - Expands collaborative relationships with relevant product and business groups inside or outside of Microsoft and provides expertise or technology to them


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.

Preferred Qualifications:

PhD in computer science or a related field, or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply