CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and the company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent features on the TODAY show and in the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.
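
Purely for illustration (a hypothetical sketch, not Captions' models), the snippet below shows the kind of PyTorch building block such work rests on: a tiny 3D-convolutional residual block that mixes information across frames and pixels, a common ingredient of video enhancement networks for tasks like denoising, super resolution, or in-painting.

```python
# Hypothetical sketch only (not Captions' models): a minimal 3D-conv
# residual block of the kind used inside video enhancement networks.
import torch
import torch.nn as nn

class VideoResBlock(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # 3D convolutions mix information across time (frames) and space.
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        return x + self.body(x)

clip = torch.randn(1, 16, 8, 64, 64)  # one 8-frame feature clip
print(VideoResBlock()(clip).shape)    # torch.Size([1, 16, 8, 64, 64])
```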

Preferred Qualifications:

PhD in computer science or a related field, or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

We are seeking a highly motivated candidate for a fully funded postdoctoral researcher position to work in 3D computer graphics and 3D computer vision.

The successful candidate will join the 3D Graphics and Vision research group led by Prof. Binh-Son Hua at the School of Computer Science and Statistics, Trinity College Dublin, Ireland to work on topics related to generative AI in the 3D domain. The School of Computer Science and Statistics at Trinity College Dublin is a collegiate, friendly, and research-intensive centre for academic study and research excellence. The School has been ranked #1 in Ireland, top 25 in Europe, and top 100 Worldwide (QS Subject Rankings 2018, 2019, 2020, 2021).

The postdoctoral researcher is expected to conduct fundamental research and publish in top-tier computer vision and computer graphics conferences (CVPR, ECCV, ICCV, SIGGRAPH) and journals (TPAMI, IJCV). Other responsibilities include supporting graduate or undergraduate students with technical guidance and engagement in other research activities such as paper reviews, reading group, workshop organization, etc.

The start date of the position is August 01, 2024. Contract duration is 1 year with the option of renewing for a second year. The successful candidate will require the following skills and knowledge:

  • PhD in Computer Science or related fields;
  • Strong track record in 3D computer graphics and 3D computer vision;
  • Hands-on experience in training deep models and generative models is required;
  • Hands-on experience and relevant skills in computer graphics and computer vision application development (e.g., OpenGL, OpenCV, CUDA, Blender) is desirable;
  • Strong programming skills in C++ and Python; capability in implementing systems from research papers and open-source software;
  • Additional background in math, statistics, or physics is an advantage.

Applicants should provide the following information:

  • A comprehensive CV including a full list of publications;
  • The names and contact details of two referees; one of the referees should be the applicant’s PhD supervisor;
  • Two representative papers by the applicant.

Interested candidates should email their applications to Binh-Son Hua (https://sonhua.github.io) directly. Applications will be reviewed on a rolling basis until the position has been filled.


Apply

Location Multiple Locations


Description The Qualcomm Cloud Computing team is developing hardware and software for Machine Learning solutions spanning the data center, edge, infrastructure, and automotive markets. Qualcomm’s Cloud AI 100 accelerators are currently deployed at AWS and Cirrascale Cloud and at several large organizations. We are rapidly expanding our ML hardware and software solutions for large-scale deployments and are hiring across many disciplines.

We are seeking to hire for multiple machine learning positions in the Qualcomm Cloud team. In this role, you will work with Qualcomm's partners to develop and deploy best-in-class ML applications (CV, NLP, GenAI, LLMs, etc.) based on popular frameworks such as PyTorch, TensorFlow, and ONNX that are optimized for Qualcomm's Cloud AI accelerators. The work will include model assessment of throughput, latency, and accuracy; model profiling and optimization; end-to-end application pipeline development; integration with customer frameworks and libraries; and responsibility for customer documentation, training, and demos. The candidate must possess excellent communication, leadership, interpersonal, organizational, and analytical skills.

This role will interact with individuals of all levels and requires an experienced, dedicated professional to effectively collaborate with internal and external stakeholders. The ideal candidate has either developed or deployed deep learning models on popular ML frameworks. If you have a strong appetite for technology and enjoy working in small, agile, empowered teams solving complex problems within a high energy, oftentimes chaotic environment then this is the role for you.

Minimum Qualifications:

  • Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Applications Engineering, Software Development experience, or related work experience; OR
  • Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Applications Engineering, Software Development experience, or related work experience; OR
  • PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Applications Engineering, Software Development experience, or related work experience.

  • 2+ years of experience with a programming language such as C, C++, Java, Python, etc.
  • 1+ year of experience with debugging techniques.

Key Responsibilities:

  • Serve as a key contributor to Qualcomm’s Cloud AI GitHub repo and developer documentation.
  • Work with developers in large organizations to onboard them on Qualcomm’s Cloud AI ML stack, improve and optimize their deep learning models on Qualcomm AI 100, and deploy their applications at scale.
  • Collaborate and interact with internal teams to analyze and optimize training and inference for deep learning.
  • Work on Triton, ExecuTorch, Inductor, and TorchDynamo to build abstraction layers for the inference accelerator.
  • Optimize LLM/GenAI workloads for both scale-up (multi-SoC) and scale-out (multi-card) systems.
  • Partner with product management and hardware/software engineering to highlight customer progress, gaps in product features, etc.
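
By way of illustration (a generic sketch, not Qualcomm's actual toolchain), the snippet below exports a toy PyTorch model to ONNX, the kind of framework interchange step that typically precedes compiling a model for an accelerator such as Cloud AI 100; the layer sizes and file name are hypothetical.

```python
# Illustrative only: exporting a toy PyTorch model to ONNX as a first step
# toward accelerator deployment. Model and file names are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

dummy_input = torch.randn(1, 128)  # example input used to trace the graph
torch.onnx.export(
    model,
    dummy_input,
    "toy_classifier.onnx",                    # hypothetical output path
    input_names=["features"],
    output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}},  # allow variable batch size
)
```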


Apply

Location Sunnyvale, CA


Description Are you fueled by a passion for computer vision, machine learning, and AI, and eager to leverage your skills to enrich the lives of millions across the globe? Join us at the Ring AI team, where we're not just offering a job, but an opportunity to revolutionize safety and convenience in our neighborhoods through cutting-edge innovation.

You will be part of a dynamic team dedicated to pushing the boundaries of computer vision, machine learning and AI to deliver an unparalleled user experience for our neighbors. This position presents an exceptional opportunity for you to pioneer and innovate in AI, making a profound impact on millions of customers worldwide. You will partner with world-class AI scientists, engineers, product managers and other experts to develop industry-leading AI algorithms and systems for a diverse array of Ring and Blink products, enhancing the lives of millions of customers globally. Join us in shaping the future of AI innovation at Ring and Blink, where exciting challenges await!


Apply

Overview We are seeking an exceptionally talented Postdoctoral Research Fellow to join our interdisciplinary team at the forefront of machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience. This position is hosted by the Stanford Translational AI (STAI) in Medicine and Mental Health Lab (PI: Dr. Ehsan Adeli, https://stanford.edu/~eadeli), as part of the Department of Psychiatry and Behavioral Sciences at Stanford University. The postdoc will have the opportunity to directly collaborate with researchers and PIs within the Computational Neuroscience Lab (CNS Lab) in the School of Medicine and the Stanford Vision and Learning (SVL) lab in the Computer Science Department. These dynamic research groups are renowned for groundbreaking contributions to artificial intelligence and medical sciences.

Project Description The successful candidate will have the opportunity to work on cutting-edge projects aimed at building large-scale models for neuroimaging and neuroscience through innovative AI technologies and self-supervised learning methods. The postdoc will contribute to building a large-scale foundation model from brain MRIs and other modalities of data (e.g., genetics, videos, text). The intended downstream applications include understanding the brain development process during the early ages of life, decoding brain aging mechanisms, and identifying the pathology of different neurodegenerative or neuropsychiatric disorders. We use several public and private datasets including but not limited to the Human Connectome Project, UK Biobank, Alzheimer's Disease Neuroimaging Initiative (ADNI), Parkinson’s Progression Marker Initiative (PPMI), Open Access Series of Imaging Studies (OASIS), Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA), Adolescent Brain Cognitive Development (ABCD), and OpenNeuro.
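
For intuition, here is a minimal sketch of one masked-reconstruction pretraining step in the self-supervised spirit described above; the tensor shapes, 75% mask ratio, and tiny transformer are assumptions for illustration, not the lab's actual recipe.

```python
# Illustrative masked-reconstruction step on random tensors standing in for
# patch embeddings of a 3D brain MRI; all shapes/ratios are assumptions.
import torch
import torch.nn as nn

tokens = torch.randn(4, 128, 256)     # (batch, patches, embedding dim)
mask = torch.rand(4, 128) < 0.75      # hide 75% of the patches

layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
decoder = nn.Linear(256, 256)

visible = tokens.masked_fill(mask.unsqueeze(-1), 0.0)  # zero hidden patches
recon = decoder(encoder(visible))

loss = (recon - tokens)[mask].pow(2).mean()  # score only the masked patches
loss.backward()
print(float(loss))
```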

Key Responsibilities:

  • Conduct research in machine learning, computer vision, and medical image analysis, with applications in neuroimaging and neuroscience.
  • Develop and implement advanced algorithms for analyzing medical images and other modalities of medical data.
  • Develop novel generative models.
  • Develop large-scale foundation models.
  • Collaborate with a team of researchers and clinicians to design and execute studies that advance our understanding of neurological disorders.
  • Mentor graduate students (Ph.D. and MSc).
  • Publish findings in top-tier journals and conferences.
  • Contribute to grant writing and proposal development for securing research funding.

Qualifications:

  • PhD in Computer Science, Electrical Engineering, Neuroscience, or a related field.
  • Proven track record of publications in high-impact journals and conferences, including ICML, NeurIPS, ICLR, CVPR, ICCV, ECCV, MICCAI, Nature, and JAMA.
  • Strong background in machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience.
  • Excellent programming skills in Python, C++, or similar languages, and experience with ML frameworks such as TensorFlow or PyTorch.
  • Ability to work independently and collaboratively in an interdisciplinary team.
  • Excellent communication skills, both written and verbal.

Benefits:

  • Competitive salary and benefits package.
  • Access to state-of-the-art facilities and computational resources.
  • Opportunities for professional development and collaboration with leading experts in the field.
  • Participation in international conferences and workshops.

Working at Stanford University offers access to world-class research facilities and a vibrant intellectual community. The university provides numerous opportunities for interdisciplinary collaboration, professional development, and cutting-edge innovation. Additionally, being part of Stanford opens doors to a global network of leading experts and industry partners, enhancing both career growth and research impact.

Apply For full consideration, send a complete application via this form: https://forms.gle/KPQHPGGeXJcEsD6V6


Apply

Redmond, Washington, United States


Overview We are seeking a Principal Research Engineer to join our organization and help improve the steerability and control of Large Language Models (LLMs) and other AI systems. Our team currently develops Guidance, a fully open-source project that enables developers to control language models more precisely and efficiently with constrained decoding.

As a Principal Research Engineer, you will play a crucial role in advancing the frontier of constrained decoding and imagining new application programming interfaces (APIs) for language models. If you’re excited about links between formal grammars and generative AI, deeply understanding and optimizing LLM inference, enabling more responsible AI without finetuning and RLHF, and/or exploring fundamental changes to the “text-in, text-out” API, we’d love to hear from you. Our team offers a vibrant environment for cutting-edge, multidisciplinary research. We have a long track record of open-source code and open publication policies, and you’ll have the opportunity to collaborate with world-leading experts across Microsoft and top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities:

  • Develop and implement new constrained decoding research techniques for increasing LLM inference quality and/or efficiency. Example areas of interest include speculative execution, new decoding strategies (e.g., extensions to beam search), “classifier in the loop” decoding for responsible AI, improving AI planning, and explorations of attention-masking-based constraints.
  • Re-imagine the use and construction of context-free grammars (CFGs) and beyond to fit generative AI. Examples of improvements here include better tools for constructing formal grammars, extensions to Earley parsing, and efficient batch processing for constrained generation. Consideration of how these techniques are presented to developers, who may not be well versed in grammars and constrained generation, in an intuitive, idiomatic programming syntax is also top of mind.
  • Design principled evaluation frameworks and benchmarks for measuring the effects of constrained decoding on a model. Areas of interest to study carefully include efficiency (token throughput and latency), generation quality, and impacts of constrained decoding on AI safety.
  • Publish your research in top AI conferences and contribute your research advances to the Guidance open-source project.
  • Embody our Culture and Values.
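
To give a concrete flavor of constrained decoding, here is a minimal sketch in the style of the Guidance library; the API evolves, so treat the exact calls and model name below as illustrative assumptions rather than the project's definitive interface.

```python
# Sketch of Guidance-style constrained decoding; calls and model name are
# assumptions for illustration, not a definitive example of the project API.
from guidance import models, select, gen

lm = models.Transformers("gpt2")  # any local Hugging Face model

# Constrain the next tokens to a fixed set of choices...
lm += "Is Dublin in Ireland? Answer: " + select(["yes", "no"])

# ...or to strings matching a regular expression (a tiny formal grammar).
lm += "\nConfidence (0-100): " + gen(regex=r"\d{1,3}", name="conf")
print(lm["conf"])  # named captures are retrievable from the model state
```

The point of this style of API is that constraints are enforced as masks over the token distribution at decode time, rather than by filtering or retrying completed outputs.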


Apply

ASML US, including its affiliates and subsidiaries, brings together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market, and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, Netherlands, and we have 18 office locations around the United States, including main offices in Chandler, Arizona; San Jose and San Diego, California; Wilton, Connecticut; and Hillsboro, Oregon.

The Advanced Development Center at ASML in Wilton, Connecticut is seeking an Optical Data Analyst with expertise in image processing for metrology process development of ultra-high precision optics and ceramics. The Advanced Development Center (ADC) is a multi-disciplinary group of engineers and scientists focused on developing learning-loop solutions, prototyping next-generation wafer and reticle clamping systems, and industrializing prototypes that meet the system performance requirements.

Role and Responsibilities The main job function is to develop image processing, data analysis, and machine learning algorithms and software to aid in the development of wafer and reticle clamping systems, solving challenging engineering problems associated with achieving nanometer (nm) scale precision. You will be part of the larger Development and Engineering (DE) sector, where the design and engineering of ASML products happens.

As an Optical Data Analyst, you will:

  • Develop/improve image processing algorithms to extract nm-level information from scientific imaging equipment (e.g., interferometers, SEM, AFM, etc.)
  • Integrate algorithms into the image processing software package for analysis and process development cycles for engineering and manufacturing users
  • Maintain a version-controlled software package for multiple product generations
  • Perform software testing to identify application, algorithm, and software bugs
  • Validate/verify/regression/unit test software to ensure it meets the business and technical requirements
  • Use machine learning models to predict trends and behaviors relating to lifetime and manufacturing improvements of the product
  • Execute a plan of analysis, software, and systems to mitigate product and process risk and prevent software performance issues
  • Collaborate with the design team in software analysis tool development to find solutions to difficult technical problems in an efficient manner
  • Work with database structures and utilize their capabilities
  • Write software scripts to search, analyze, and plot data from databases
  • Support query code to interrogate data for manufacturing and engineering needs
  • Support image analysis on data and derive conclusions
  • Travel (up to 10%) to Europe, Asia, and within the US can be expected
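
For a taste of the interferometer-facing work, here is a self-contained numpy sketch of the classical Fourier-transform (Takeda) method for recovering phase from a fringe pattern; the synthetic signal and carrier frequency are assumptions for illustration, not ASML's actual pipeline.

```python
# Classical Fourier-transform fringe analysis on a synthetic interferogram;
# signal parameters are illustrative assumptions, not ASML's pipeline.
import numpy as np

# 1-D interferogram: a 50-cycle carrier modulated by a slowly varying
# phase term standing in for a surface height profile.
x = np.linspace(0.0, 1.0, 1024)
true_phase = 2 * np.pi * 0.5 * x**2
signal = 1.0 + np.cos(2 * np.pi * 50 * x + true_phase)

# Isolate the positive carrier sideband in the spectrum, invert, and take
# the unwrapped angle to recover carrier + phase; then remove the carrier.
spec = np.fft.fft(signal)
freqs = np.fft.fftfreq(x.size, d=x[1] - x[0])
sideband = np.where((freqs > 25) & (freqs < 75), spec, 0.0)
analytic = np.fft.ifft(sideband)
recovered = np.unwrap(np.angle(analytic)) - 2 * np.pi * 50 * x

# The recovered phase matches true_phase up to a constant offset,
# apart from leakage artifacts near the window edges.
err = (recovered - recovered[0]) - (true_phase - true_phase[0])
print(np.abs(err[50:-50]).max())  # small away from the edges
```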


Apply

Location Sunnyvale, CA; Seattle, WA; New York, NY; Cambridge, MA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to help build industry-leading technology with multimodal systems.

As an Applied Scientist with the AGI team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (Gen AI) in Computer Vision.


Apply

Location South Korea, Uiwang


Description 1) AI Perception - RGB image-based object/scene reconstruction (NeRF, GS, LRM) - Object detection / analysis - Image-Text multimodal models

2) Manipulation Vision - Development of vision-based Bimanual Manipulation using deep learning technology

3) On-Device AI - Development of lightweight deep learning models and on-device AI optimization technology

4) Mobile robot SLAM - Development of algorithms for Perception, SLAM, Motion control, and Path planning


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Sr. Fullstack Engineer on our platform engineering team, you will play a crucial role in enabling our research engineers to fine-tune our foundation models and streamline the machine learning process for our autonomous technology. You will develop products that empower our internal teams to maximize efficiency and innovation in our product. Specifically, you will:

  • Build mission-critical tools for improving observability and scaling the entire machine-learning process.
  • Use modern technologies to serve huge amounts of data, visualize key metrics, manage our data inventory, trigger backend data processing pipelines, and more.
  • Work closely with people across the company to create a seamless UI experience.

Apply

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

The AI Platform team at Figma is working on an exciting mission of expanding the frontiers of AI for creativity, and developing magical experiences in Figma products. This involves making existing features like search smarter, and incorporating new features using cutting edge Generative AI and deep learning techniques. We’re looking for engineers with a background in Machine Learning and Artificial Intelligence to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • Drive fundamental and applied research in ML/AI using Generative AI, deep learning, and classical machine learning, with Figma product use cases in mind.
  • Formulate and implement new modeling approaches both to improve the effectiveness of Figma’s current models as well as enable the launch of entirely new AI-powered product features.
  • Work in concert with other ML researchers, as well as product and infrastructure engineers to productionize new models and systems to power features in Figma’s design and collaboration tool.
  • Expand the boundaries of what is possible with the current technology set and experiment with novel ideas.
  • Publish scientific work on problems relevant to Figma in leading conferences like ICML, NeurIPS, CVPR etc.

We'd love to hear from you if you have:

  • Recently obtained, or are in the process of obtaining, a PhD in AI, Computer Science, or a related field. The degree must be completed prior to starting at Figma.
  • Demonstrated expertise in machine learning with a publication record in relevant conferences, or a track record in applying machine learning techniques to products.
  • Experience in Python and machine learning frameworks (such as PyTorch, TensorFlow or JAX).
  • Experience building systems based on deep learning, natural language processing, computer vision, and/or generative models.
  • Experience solving sophisticated problems and comparing alternative solutions, trade-offs, and diverse points of view to determine a path forward.
  • Experience communicating and working across functions to drive solutions.

While not required, it’s an added plus if you also have:

  • Experience working in industry on relevant AI projects through internships or past full time work.
  • Publications on recent advances in AI such as large language models (LLMs), vision-language models (VLMs), or diffusion models.

Apply

Location San Francisco, CA


Description Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

You will be managing a team within the Music Machine Learning and Personalization organization that is responsible for developing, training, serving and iterating on models used for personalized candidate generation for both Music and Podcasts.


Apply

Location Seattle, WA


Description Amazon Advertising is one of Amazon's fastest growing businesses. Amazon's advertising portfolio helps merchants, retail vendors, and brand owners succeed via native advertising, which grows incremental sales of their products sold through Amazon. The primary goals are to help shoppers discover new products they love, be the most efficient way for advertisers to meet their business objectives, and build a sustainable business that continuously innovates on behalf of customers. Our products and solutions are strategically important to enable our Retail and Marketplace businesses to drive long-term growth. We deliver billions of ad impressions and millions of clicks and break fresh ground in product and technical innovations every day!

The Creative X team within Amazon Advertising aims to democratize access to high-quality creatives (images, videos) by building AI-driven solutions for advertisers. To accomplish this, we are investing in latent-diffusion models, large language models (LLMs), computer vision (CV), reinforcement learning (RL), and image, video, and audio synthesis. You will be part of a close-knit team of applied scientists and product managers who are highly collaborative and at the top of their respective fields.

We are looking for talented Applied Scientists who are adept at a variety of skills, especially with computer vision, latent diffusion or related foundational models that will accelerate our plans to generate high-quality creatives on behalf of advertisers. Every member of the team is expected to build customer (advertiser) facing features, contribute to the collaborative spirit within the team, publish, patent, and bring cutting edge research to raise the bar within the team.


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IoT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proof-of-concepts in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and has converted leading research into best-in-class, user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second after performing our full-stack AI optimizations on the model.
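
As a small, generic taste of the model-efficiency theme, the snippet below applies PyTorch's stock dynamic quantization to shrink a toy model's linear layers to int8; this is purely illustrative and is not AIMET or Qualcomm's tooling.

```python
# Generic model-efficiency sketch using stock PyTorch dynamic quantization;
# illustrative only, unrelated to AIMET or Qualcomm's actual stack.
import torch
import torch.nn as nn

fp32_model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
fp32_model.eval()

# Replace Linear weights with int8 versions; activations stay in float and
# are quantized dynamically at runtime.
int8_model = torch.quantization.quantize_dynamic(
    fp32_model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(fp32_model(x).shape, int8_model(x).shape)  # same interface, smaller weights
```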

Role responsibility can include both, applied and fundamental research in the field of machine learning with development focus in one or many of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g. large language models, deep generative models (VAE, Normalizing-Flow, ARM, etc), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimizations, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling based, back-propagation based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design.

  • Design, develop, and test software for machine learning frameworks that optimize models to run efficiently on edge devices. The candidate is expected to have a strong interest in, and deep passion for, making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Research, design, develop, enhance, and implement different components of machine learning compiler for HW Accelerators.

  • Design, implement and train DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).


Apply

The Autonomy Software Metrics team is responsible for providing engineers and leadership at Zoox with tools to evaluate the behavior of Zoox’s autonomy stack using simulation. The team collaborates with experts across the organization to ensure a high safety bar, great customer experience, and rapid feedback to developers. The metrics team is responsible for evaluating the complete end-to-end customer experience through simulation, evaluating factors that impact safety, comfort, legality, road citizenship, progress, and more. You’ll be part of a passionate team making transportation safer, smarter, and more sustainable. This role gives you high visibility within the company and is critical for successfully launching our autonomous driving software.


Apply