



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

About the role You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional, and lifelike synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions, which are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing and release, and you have a passion for large data, large model training and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses at a rapid pace across the globe.


Apply

Mountain View


Who we are Established in 2017, Wayve is a leader in autonomous vehicle technology, driven by breakthroughs in Embodied AI. Our intelligent, mapless, and hardware-agnostic technologies empower vehicles to navigate complex environments effortlessly.

Supported by prominent investors, Wayve is advancing the transition from assisted to fully automated driving, making transportation safer, more efficient, and universally accessible. Join our world-class, multinational team of engineers and researchers as we push the boundaries of frontier AI and autonomous driving, creating impactful technologies and products on a global scale.

Where you will have an impact We are looking for an experienced Research Engineer to help us in our journey to scale end-to-end neural networks for autonomous driving. You’ll be working across our research team to build, integrate, test and scale algorithms, tools, and machine learning solutions for autonomous driving.

What you will bring to Wayve

  • 5+ years of software engineering experience in an industrial research environment
  • Passion to work in a team on research ideas that have real-world impact
  • Strong software engineering experience in Python and other relevant languages (especially C++ and CUDA)
  • Ideally, direct experience working in at least one of computer vision, robotics, simulation, graphics, or large language models
  • Ideally, several years working on large-scale machine learning algorithms and systems
  • BS, MS, or above in Machine Learning, Computer Science, Engineering, or a related technical discipline, or equivalent experience

What we offer you

  • The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact
  • Competitive compensation
  • Fully employer-covered medical, dental and vision insurance
  • Further benefits such as catered lunch, yummy snacks and a variety of drinks, life insurance, an employer-contributed retirement account, therapy, yoga, office-wide socials and much more
  • A dynamic and fast-paced work environment in which you will grow every day: learning on the job, from the brightest minds in our space, and with support for more formal learning opportunities too
  • A culture that is ego-free, respectful and welcoming (of you and your dog); we even eat lunch together every day

This is a full-time role based in our office in California. At Wayve we want the best of all worlds, so we operate a hybrid working policy that combines time together in our offices and workshops, to fuel innovation, culture, relationships and learning, with time spent working from home. We also operate core working hours so you can be where you need to be for family and loved ones too. Teams determine the routines that work best for them.


Apply

Location Sunnyvale, CA Bellevue, WA


Description Are you fueled by a passion for computer vision, machine learning and AI, and eager to leverage your skills to enrich the lives of millions across the globe? Join the Ring AI team, where we're not just offering a job but an opportunity to revolutionize safety and convenience in our neighborhoods through cutting-edge innovation.

You will be part of a dynamic team dedicated to pushing the boundaries of computer vision, machine learning and AI to deliver an unparalleled user experience for our neighbors. This position presents an exceptional opportunity for you to pioneer and innovate in AI, making a profound impact on millions of customers worldwide. You will partner with world-class AI scientists, engineers, product managers and other experts to develop industry-leading AI algorithms and systems for a diverse array of Ring and Blink products, enhancing the lives of millions of customers globally. Join us in shaping the future of AI innovation at Ring and Blink, where exciting challenges await!


Apply

Redmond, Washington, United States


Overview We are seeking a skilled and passionate Senior Research Scientist to join our Responsible & Open AI Research (ROAR) team in Azure Cognitive Services in Redmond, WA.

As a Senior Research Scientist, you will play a key role in advancing Responsible AI approaches to ensure safe releases of rapidly evolving multimodal AI models such as GPT-4 Vision, DALL-E, Sora, and beyond, as well as to expand and enhance the Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Conduct cutting-edge research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of multimodal AI risks.
  • Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.
  • Enable the safe release of multimodal models from OpenAI in the Azure OpenAI Service; expand and enhance the Azure AI Content Safety Service with new detection technologies.
  • Develop innovative approaches to address AI safety challenges for diverse customer scenarios.
  • Embody our Culture and Values


Apply

A postdoctoral position is available in Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow the NIH guideline commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total including 8 80-GB Nvidia H100 GPUs, 10 48-GB Nvidia RTX A6000 GPUs, and 4 Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with a background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology, or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goal and how this position will help you achieve your goals

  3. Two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with the subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

About the role As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible both for annotating in-house datasets and for ensuring the quality of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will need to be comfortable working with standard tools and workflows for data annotation and able to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note, this role is office-based. You will be working at our modern, friendly office in the very heart of London.


Apply

Overview We are seeking an exceptionally talented Postdoctoral Research Fellow to join our interdisciplinary team at the forefront of machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience. This position is hosted by the Stanford Translational AI (STAI) in Medicine and Mental Health Lab (PI: Dr. Ehsan Adeli, https://stanford.edu/~eadeli), as part of the Department of Psychiatry and Behavioral Sciences at Stanford University. The postdoc will have the opportunity to directly collaborate with researchers and PIs within the Computational Neuroscience Lab (CNS Lab) in the School of Medicine and the Stanford Vision and Learning (SVL) lab in the Computer Science Department. These dynamic research groups are renowned for groundbreaking contributions to artificial intelligence and medical sciences.

Project Description The successful candidate will have the opportunity to work on cutting-edge projects aimed at building large-scale models for neuroimaging and neuroscience through innovative AI technologies and self-supervised learning methods. The postdoc will contribute to building a large-scale foundation model from brain MRIs and other modalities of data (e.g., genetics, videos, text). The intended downstream applications include understanding the brain development process during the early ages of life, decoding brain aging mechanisms, and identifying the pathology of different neurodegenerative or neuropsychiatric disorders. We use several public and private datasets including but not limited to the Human Connectome Project, UK Biobank, Alzheimer's Disease Neuroimaging Initiative (ADNI), Parkinson’s Progression Marker Initiative (PPMI), Open Access Series of Imaging Studies (OASIS), Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA), Adolescent Brain Cognitive Development (ABCD), and OpenNeuro.

Key Responsibilities

  • Conduct research in machine learning, computer vision, and medical image analysis, with applications in neuroimaging and neuroscience.
  • Develop and implement advanced algorithms for analyzing medical images and other modalities of medical data.
  • Develop novel generative models.
  • Develop large-scale foundation models.
  • Collaborate with a team of researchers and clinicians to design and execute studies that advance our understanding of neurological disorders.
  • Mentor graduate students (Ph.D. and MSc).
  • Publish findings in top-tier journals and conferences.
  • Contribute to grant writing and proposal development for securing research funding.

Qualifications

  • PhD in Computer Science, Electrical Engineering, Neuroscience, or a related field.
  • Proven track record of publications in high-impact journals and conferences, including ICML, NeurIPS, ICLR, CVPR, ICCV, ECCV, MICCAI, Nature, and JAMA.
  • Strong background in machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience.
  • Excellent programming skills in Python, C++, or similar languages, and experience with ML frameworks such as TensorFlow or PyTorch.
  • Ability to work independently and collaboratively in an interdisciplinary team.
  • Excellent communication skills, both written and verbal.

Benefits

  • Competitive salary and benefits package.
  • Access to state-of-the-art facilities and computational resources.
  • Opportunities for professional development and collaboration with leading experts in the field.
  • Participation in international conferences and workshops.

Working at Stanford University offers access to world-class research facilities and a vibrant intellectual community. The university provides numerous opportunities for interdisciplinary collaboration, professional development, and cutting-edge innovation. Additionally, being part of Stanford opens doors to a global network of leading experts and industry partners, enhancing both career growth and research impact.

Apply For full consideration, send a complete application via this form: https://forms.gle/KPQHPGGeXJcEsD6V6


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IoT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proofs-of-concept in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and has converted leading research into best-in-class, user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second, after performing our full-stack AI optimizations on the model.

Role responsibilities can include both applied and fundamental research in the field of machine learning, with a development focus in one or more of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g. large language models, deep generative models (VAE, Normalizing-Flow, ARM, etc), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimizations, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling based, back-propagation based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design.

  • Design, develop, and test software for machine learning frameworks that optimize models to run efficiently on edge devices. Candidates are expected to have a strong interest in, and deep passion for, making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Research, design, develop, enhance, and implement different components of machine learning compiler for HW Accelerators.

  • Design, implement and train DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).


Apply

Vancouver, British Columbia, Canada


Overview Microsoft Research (MSR) is a leading industrial research laboratory comprising over 1,000 computer scientists working across the United States, United Kingdom, China, India, Canada, and the Netherlands.

We are currently seeking a Researcher in the area of Artificial Specialized Intelligence, located in Vancouver, British Columbia, with a keen interest in developing cutting-edge large foundation models and post-training techniques for different domains and scenarios. This is an opportunity to drive an ambitious research agenda while collaborating with diverse teams to push for novel applications of those areas.

Over the past 30 years, our scientists have not only conducted world-class computer science research but also integrated advanced technologies into our products and services, positively impacting millions of lives and propelling Microsoft to the forefront of digital transformation.

Responsibilities

  • Conduct cutting-edge research in large foundation models, focusing on applying large foundation models in specific domains.
  • Collaborate with cross-functional teams to integrate solutions into Artificial Intelligence (AI)-driven systems.
  • Develop and maintain research prototypes and software tools, ensuring that they are well documented and adhere to best practices in software development.
  • Publish research findings in top-tier conferences and journals and present your work at industry events.
  • Collaborate with other AI researchers and engineers, sharing knowledge and expertise to foster a culture of innovation and continuous learning within the team.


Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:

- Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments
- Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
- Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
- Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
- Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
- Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

Please click on the Apply link below to see the full job description and apply.


Apply

ASML US, including its affiliates and subsidiaries, bring together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, Netherlands and we have 18 office locations around the United States including main offices in Chandler, Arizona, San Jose and San Diego, California, Wilton, Connecticut, and Hillsboro, Oregon.

ASML’s Optical Sensing (Wafer Alignment Sensor and YieldStar) department in Wilton, Connecticut is seeking a Design Engineer to support and develop complex optical/photonic sensor systems used within ASML’s photolithography tools. These systems typically include light sources, detectors, optical/electro-optical components, fiber optics, electronics and signal processing software functioning in close collaboration with the rest of the lithography system. As a design engineer, you will design, develop, build and integrate optical sensor systems.

Role and Responsibilities

  • Use general physics, optics, and software knowledge, together with an understanding of the sensor systems and tools, to develop optical alignment sensors in lithography machines
  • Have hands-on skills in building optical systems (e.g. imaging, testing, alignment, detector systems, etc.)
  • Have strong data analysis skills for evaluating sensor performance and troubleshooting

Leadership:

  • Lead activities for determining problem root cause, execute complex tests, gather data, and effectively communicate results at different levels of abstraction (from technical colleagues to high-level managers)
  • Lead engineers in various competencies (e.g. software, electronics, equipment engineering, manufacturing engineering, etc.) in support of feature delivery for alignment sensors

Problem Solving:

  • Troubleshoot complex technical problems
  • Develop/debug data signal processing algorithms
  • Develop and execute test plans in order to determine problem root cause

Communications/Teamwork:

  • Draw conclusions based on input from different stakeholders
  • Clearly communicate information at different levels of abstraction

Programming:

  • Implement data analysis techniques in working MATLAB code
  • Optimization skills
  • GUI building experience
  • Familiarity with LabVIEW and Python

Some travel (up to 10%) to Europe, Asia, and within the US can be expected.


Apply

B GARAGE was founded in 2017 by two PhD graduates from Stanford University. After having spent over five years researching robotics, computer vision, aeronautics, and drone autonomy, the co-founders set their minds on building a future where aerial robots would become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

The B GARAGE team is always looking for enthusiastic, proactive, and collaborative Robotics and Automation Engineers to support the launch of intelligent aerial robots and autonomously sustainable ecosystems.

If you're interested in joining the B Garage team but don't see a role open that fits your background, apply to the general application and we'll reach out to discuss your career goals.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.

Preferred Qualifications:

PhD in computer science or related field or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

The AI Platform team at Figma is working on an exciting mission of expanding the frontiers of AI for creativity, and developing magical experiences in Figma products. This involves making existing features like search smarter, and incorporating new features using cutting edge Generative AI and deep learning techniques. We’re looking for engineers with a background in Machine Learning and Artificial Intelligence to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • Drive fundamental and applied research in ML/AI using Generative AI, deep learning, and classical machine learning, with Figma product use cases in mind.
  • Formulate and implement new modeling approaches both to improve the effectiveness of Figma’s current models as well as enable the launch of entirely new AI-powered product features.
  • Work in concert with other ML researchers, as well as product and infrastructure engineers to productionize new models and systems to power features in Figma’s design and collaboration tool.
  • Expand the boundaries of what is possible with the current technology set and experiment with novel ideas.
  • Publish scientific work on problems relevant to Figma in leading conferences like ICML, NeurIPS, CVPR etc.

We'd love to hear from you if you have:

  • Recently obtained or are in the process of obtaining a PhD in AI, Computer Science, or a related field. The degree must be completed prior to starting at Figma.
  • Demonstrated expertise in machine learning with a publication record in relevant conferences, or a track record in applying machine learning techniques to products.
  • Experience in Python and machine learning frameworks (such as PyTorch, TensorFlow or JAX).
  • Experience building systems based on deep learning, natural language processing, computer vision, and/or generative models.
  • Experience solving sophisticated problems and comparing alternative solutions, trade-offs, and diverse points of view to determine a path forward.
  • Experience communicating and working across functions to drive solutions.

While not required, it’s an added plus if you also have:

  • Experience working in industry on relevant AI projects through internships or past full time work.
  • Publications on recent advances in AI, such as large language models (LLMs), vision-language models (VLMs), or diffusion models.

Apply

London



Where you will have an impact We're looking for an experienced Applied Scientist with expertise in Neural Radiance Fields (NeRFs) and Gaussian Splatting to join our Vision & Graphics team and advance our innovative neural simulator, Ghost Gym. This role is central to improving Ghost Gym's capabilities, utilizing state-of-the-art neural rendering techniques to craft photorealistic 4D worlds. You'll be at the forefront of developing and applying groundbreaking research to generate thousands of simulated scenarios. These scenarios are critical for training, testing, and debugging our end-to-end AI driving models, contributing significantly to the creation of safe and reliable AI driving technology. Your work will focus on improving the efficiency, realism, and dynamism of our simulations, especially for dynamic and outdoor environments, pushing the limits of current photorealistic visualization technologies.
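As a brief illustration of the kind of technique this role centers on (a minimal sketch, not Wayve's actual implementation), NeRF-style renderers form each pixel by alpha-compositing color samples along a camera ray, weighting each sample by its opacity and the transmittance accumulated in front of it. The function below, with hypothetical inputs, shows that compositing step for a single ray in pure Python:

```python
import math

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray, NeRF-style (illustrative sketch).

    densities: non-negative volume densities (sigma) per sample
    colors:    (r, g, b) tuples per sample
    deltas:    distances between adjacent samples along the ray
    Returns the rendered RGB value and the per-sample weights.
    """
    rgb = [0.0, 0.0, 0.0]
    weights = []
    transmittance = 1.0  # probability the ray reaches this sample unoccluded
    for sigma, color, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this sample
        w = transmittance * alpha               # contribution weight
        weights.append(w)
        for i in range(3):
            rgb[i] += w * color[i]
        transmittance *= 1.0 - alpha            # occlude samples behind
    return rgb, weights
```

For example, a single near-opaque red sample yields an essentially pure red pixel, while zero-density samples contribute nothing. Gaussian splatting uses the same front-to-back alpha-compositing idea, but with projected 3D Gaussians in place of ray samples.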

Challenges you will own

  • Conducting cutting-edge research in NeRFs, Gaussian splatting, and related technologies, with a focus on solving real-world challenges in 3D rendering
  • Developing and implementing algorithms for efficient, high-quality 3D scene reconstruction and rendering, particularly for dynamic and outdoor environments
  • Collaborating with cross-functional teams to integrate research findings into scalable, production-level solutions
  • Staying abreast of the latest developments in the field, evaluating and incorporating state-of-the-art techniques into our workflows
  • Potentially finding opportunities to publish research findings in top-tier journals and conferences, contributing to the scientific community and establishing Wayve as a leader in the field

What you will bring to Wayve

Essential:

  • Proven track record of research in NeRFs, Gaussian splatting, or closely related areas, demonstrated through publications or deployed applications
  • Strong programming skills in Python, with experience in deep learning frameworks such as PyTorch
  • Solid foundation in the mathematics and physics underlying 3D graphics and rendering techniques
  • Excellent problem-solving skills and the ability to work independently as well as in a team environment
  • Demonstrated ability to work collaboratively in a fast-paced, innovative, interdisciplinary team environment

Desirable:

  • Experience with dynamic scene reconstruction and rendering, particularly in outdoor environments
  • Familiarity with parallel computing, GPU programming, and optimization techniques
  • PhD or MSc in Computer Science, Computer Engineering, or a related field, with a focus on computer graphics, computer vision, or machine learning

What we offer you

  • The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact
  • Competitive compensation and benefits
  • A dynamic and fast-paced work environment in which you will grow every day, learning on the job from the brightest minds in our space, with support for more formal learning opportunities too
  • A culture that is ego-free, respectful, and welcoming (of you and your dog); we even eat lunch together every day


Apply