



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category or location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Location Bellevue, WA


Description Are you excited about developing generative AI and foundation models to revolutionize automation, robotics and computer vision? Are you looking for opportunities to build and deploy them on real problems at truly vast scale? At Amazon Fulfillment Technologies and Robotics we are on a mission to build high-performance autonomous systems that perceive and act to further improve our world-class customer experience - at Amazon scale.

This role is for the AFT AI team, which has deep expertise in developing cutting-edge AI solutions at scale and successfully applying them to business problems in the Amazon Fulfillment Network. These solutions typically apply machine learning and computer vision techniques to text, sequences of events, and images or video from existing or new hardware. The team comprises scientists, who develop machine learning and computer vision solutions; analysts, who evaluate the expected business impact of a project and the performance of these solutions; and software engineers, who provide necessary support such as annotation pipelines and machine learning library development.

We are looking for an Applied Scientist with expertise in computer vision. You will work alongside other CV scientists, engineers, product managers and various stakeholders to deploy vision models at scale across a diverse set of initiatives. If you are a self-motivated individual with a zeal for customer obsession and ownership, and are passionate about applying computer vision for real world problems - this is the team for you.


Apply

Tokyo, Tokyo-to, Japan


Overview As one of the world's leading industrial research laboratories, Microsoft Research (MSR) has more than 1,000 researchers and engineers working across the globe. In the past 30 years, Microsoft scientists have not only carried out world-class computer science research, but also transferred the advanced technologies into our products and services that have changed millions of people’s lives and ensured that Microsoft is at the forefront of digital transformation.

Part of Microsoft Research, Microsoft Research Asia (MSR Asia), established in 1998, is a leading research lab with major sites in Beijing, Shanghai and Vancouver. Over the years, technologies developed by MSR Asia have made a significant impact within Microsoft and around the world, and new, innovative technologies are constantly being born from the lab. As a world-class research lab, MSRA offers an exhilarating, supportive, open and inclusive environment for top talent to create the future through disruptive, cutting-edge research. (More information is available on the Microsoft Research Lab - Asia page at Microsoft Research.)

As its business grows, Microsoft Research Asia (MSRA) is increasing its presence in Japan and is looking for a Principal Research Manager who specializes in AI, with an emphasis on Embodied AI and Robotics, AI Model innovations (NLP, vision, multi-modality), Societal AI, Wireless sensing, and Wellbeing. This is a unique opportunity to lead an ambitious research agenda and work with various teams to explore new applications of those research areas.

Responsibilities
•As a leading and accomplished expert in a broad research area (e.g., Embodied AI and Robotics, AI Model, Multimedia and Vision), maintain a comprehensive understanding of the relevant literature, research methods, and business and academic context
•Define and articulate a clear long-term research vision in line with MSRA's strategic focus, and drive the research agenda to land on the planned schedule
•As a local representative, foster cooperative relationships with local governments, academic communities, industry partners, and business groups within Microsoft to establish MSRA's presence locally and support future growth
•Create synergy among MSRA research groups in multiple locations to enable collaboration and creativity
•As a people manager, hire and retain top talent, and deliver success through empowerment and accountability


Apply

Vancouver

Who we are Established in 2017, Wayve is a leader in autonomous vehicle technology, driven by breakthroughs in Embodied AI. Our intelligent, mapless, and hardware-agnostic technologies empower vehicles to navigate complex environments effortlessly. Supported by prominent investors, Wayve is advancing the transition from assisted to fully automated driving, making transportation safer, more efficient, and universally accessible. Join our world-class, multinational team of engineers and researchers as we push the boundaries of frontier AI and autonomous driving, creating impactful technologies and products on a global scale.

We are seeking an experienced researcher to be a founding member of our Vancouver team! We are prioritising candidates with experience leading projects in AI applied to autonomous driving or similar robotics or decision-making domains, including, but not limited to, the following areas:

Foundation models for robotics or embodied AI
Model-free and model-based reinforcement learning
Offline reinforcement learning
Large language models
Planning with learned models, model predictive control and tree search
Imitation learning, inverse reinforcement learning and causal inference
Learned agent models: behavioural and physical models of cars, people, and other dynamic agents

Challenges you will own You'll be working on some of the world's hardest problems, and able to attack them in new ways. You'll be a technical leader within our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

Actively contributing to the Science team's technical leadership community, including proposing new projects, organising their work, and delivering substantial impact across Wayve
Leveraging our large, rich, and diverse sources of real-world driving data
Architecting our models to best employ the latest advances in foundation models, transformers, world models, etc., and evaluating and incorporating state-of-the-art techniques into our workflows
Investigating which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
Leveraging simulation for controlled experimental insight, training data augmentation, and re-simulation
Scaling models efficiently across data, model size, and compute, while maintaining efficient deployment on the car
Collaborating with cross-functional, international teams to integrate research findings into scalable, production-level solutions
Potentially contributing to academic publications for top-tier conferences like NeurIPS, CVPR, ICRA, ICLR, CoRL, etc., working in a world-class team, contributing to the scientific community, and establishing Wayve as a leader in the field

What you will bring to Wayve
A proven track record of research in one or more of the topics above, demonstrated through deployed applications or publications
Experience leading a research agenda aligned with larger organisation or company goals
Strong programming skills in Python, with experience in deep learning frameworks such as PyTorch, as well as numpy and pandas
Experience taking a machine learning research concept through the full ML development cycle
Excellent problem-solving skills and the ability to work independently as well as in a team environment
Demonstrated ability to work collaboratively in a fast-paced, innovative, interdisciplinary team environment
Experience bringing an ML research concept through to production and at scale
PhD in Computer Science, Computer Engineering, or a related field

What we offer you The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact.


Apply

Location Sunnyvale, CA; Bellevue, WA; Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

Redmond, Washington, United States


Overview Do you want to shape the future of Artificial Intelligence (AI)? Do you have a passion for solving real-world problems with cutting-edge technologies?

The Human-AI eXperiences (HAX) team at Microsoft Research AI Frontiers is looking for exceptional candidates to advance the state-of-the-art in human-AI collaboration with a focus on leveraging the capabilities of people and foundation model-based agents and systems to solve real problems.

As a Principal Researcher on our team, you will:

Work on challenging and impactful projects in areas such as human-AI collaboration and teaming, foundation model-based systems, multi-agent systems, next-generation AI experiences, and responsible AI
Apply your skills and knowledge to build practical solutions to real problems and impact the world
Collaborate with other researchers and engineers across the company to amplify your impact and grow your career in a supportive and stimulating environment

We are looking for candidates who have:

A drive for real world impact, demonstrated by a passion to build and release prototypes or OSS frameworks.
Demonstrated track record of influential projects and publications in relevant fields and top-tier conferences and journals (such as NeurIPS, ICML, AAAI, CHI, UIST).
Demonstrated interdisciplinary experience in applied machine learning, natural language processing, and human-computer interaction, including experience conducting offline and online evaluations, user studies, and user-centered research.
Exceptional coding experience and hands-on experience working with large foundation models and related frameworks and toolkits. Familiarity with LLMs such as the OpenAI Generative Pre-trained Transformer (GPT) models and LLaMA, model fine-tuning techniques (LoRA, QLoRA), prompting techniques (Chain of Thought, ReAct, etc.), and model evaluation is beneficial.
A passion for innovation and creativity, evidenced by the ability to generate novel ideas, approaches, and solutions.
A team-player mindset, characterized by effective communication, collaboration, and feedback skills.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities Perform cutting-edge research to solve real-world problems. Collaborate closely with other researchers, engineers, and product group partners on high-impact projects that deliver real-world impact to people and society. Embody our culture and values.


Apply

Location San Diego


Description

Artificial Intelligence is changing the world for the benefit of human beings and societies. Qualcomm, as the world's leading mobile computing platform provider, is committed to enabling the wide deployment of intelligent solutions on all possible devices - smartphones, autonomous vehicles, robots, and IoT devices. Qualcomm is creating building blocks for the intelligent edge.

We are part of Qualcomm AI Research, and we focus on advancing Edge AI machine learning technology, including model fine-tuning, hardware acceleration, model quantization, model compression, neural architecture search (NAS), edge inference, and related fields. Come join us on this exciting journey. In this role, you will work in a dynamic research environment as part of a multi-disciplinary team of researchers and software engineers who work with cutting-edge AI frameworks and tools. You will architect, design, develop, test, and deploy on- and off-device benchmarking workflows for model zoos.

Minimum Qualifications:
• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
• Master's degree in Computer Science, Engineering, Information Systems, or a related field and 3+ years of related work experience; OR
• PhD in Computer Science, Engineering, Information Systems, or a related field and 2+ years of related work experience.

The successful applicant should have a strong theoretical background and proven hands-on experience with AI as well as with modern software, web, and cloud engineering.

Must-have experience and skills:
Strong theoretical background in AI and general ML techniques
Proven hands-on experience with model training, inference, and evaluation
Proven hands-on experience with PyTorch, ONNX, TensorFlow, CUDA, and others
Experience developing data pipelines for ML/AI training and inference in the cloud
Prior experience deploying containerized (web) applications to IaaS environments such as AWS (preferred), Azure, or GCP, backed by DevOps and CI/CD technologies
Strong Linux command line skills
Strong experience with Docker and Git
Strong general analytical and debugging skills
Prior experience working in agile environments
Prior experience collaborating with multi-disciplinary teams across time zones
Strong team player, communicator, presenter, mentor, and teacher

Preferred extra experience and skills:
Prior experience with model quantization, profiling, and running models on edge devices
Prior experience developing full-stack web applications using frameworks such as Ruby on Rails (preferred), Django, Phoenix/Elixir, Spring, Node.js, or others
Knowledge of relational database design and optimization, and hands-on experience running Postgres (preferred), MySQL, or other relational databases in production

Preferred qualifications:
Bachelor's, Master's, and/or PhD degree in Computer Science, Engineering, Information Systems, or a related field and 2-5 years of work experience in Software Engineering, Systems Engineering, Hardware Engineering, or related.


Apply

Location Niskayuna, NY


Description At GE Aerospace Research, our team develops advanced embedded systems technology for the future of flight. Our technology will enable sustainable air travel and next-generation aviation systems for commercial as well as military applications. As a Lead Embedded Software Engineer, you will architect and develop state-of-the-art embedded systems for real-time controls and communication applications. You will lead and contribute to advanced research and development programs for GE Aerospace as well as with U.S. Government agencies. You will collaborate with fellow researchers from a range of technology disciplines, contributing to projects across the breadth of GE Aerospace programs.

Essential Responsibilities: As a Lead Embedded Software Engineer, you will:

Work independently as well as with a team to develop and apply advanced software technologies for embedded controls and communication systems for GE Aerospace products
Interact with hardware suppliers and engineering tool providers to identify the best solutions for the most challenging applications
Lead small to medium-sized projects or tasks
Be responsible for documenting technology and results through patent applications, technical reports, and publications
Expand your expertise by staying current with advances in embedded software, seeking out new ideas and applications
Collaborate in a team environment with colleagues across GE Aerospace and government agencies

Qualifications/Requirements:

Bachelor’s degree in Electrical Engineering, Computer Science, or related disciplines with a minimum of 7 years of industry experience; OR a master’s degree in Electrical Engineering, Computer Science, or related disciplines with a minimum of 5 years of industry experience; OR a Ph.D. in Electrical Engineering, Computer Science, or related disciplines with a minimum of 3 years of industry experience
Strong background in software development for embedded systems (e.g., x86, ARM)
Strong embedded programming skills in languages such as C/C++, Python, and Rust
Familiarity with CNSA and NIST cryptographic algorithms
Willingness to travel a minimum of 2 weeks per year
Ability to obtain and maintain a US Government security clearance
US citizenship required
Must be willing to work out of an office located in Niskayuna, NY
You must submit your application for employment on the careers page at www.gecareers.com to be considered

Ideal Candidate Characteristics:

Coding experience with Bash, Python, C#, MATLAB, ARMv8 assembly, RISC-V assembly
Experience with embedded devices from Intel, AMD, Xilinx, NXP, etc.
Experience with hardware-based security (e.g., UEFI, TPM, ARM TrustZone, Secure Boot)
Understanding of embedded system security requirements and security techniques
Experience with the Linux OS and Linux security
Experience with OpenSSL and/or wolfSSL
Experience with wired and wireless networking protocols or network security
Knowledge of the 802.1, 802.3, and/or 802.11 standards
Experience with software-defined networks (SDN) and relevant software such as OpenFlow, Open vSwitch, or Mininet
Hands-on experience with embedded hardware (such as protoboards) or networking equipment (such as switches and analyzers) in a laboratory setting
Experience with embedded development in an RTOS environment (e.g., VxWorks, FreeRTOS)
Demonstrated ability to take an innovative idea from concept to product
Experience with the Agile methodology of program management

The base pay range for this position is 90,000 - 175,000 USD annually. The specific pay offered may be influenced by a variety of factors, including the candidate’s experience, education, and skill set. This position is also eligible for an annual discretionary bonus based on a percentage of your base salary. This posting is expected to close on June 16, 2024.


Apply

About the role You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional and lifelike synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas, led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions, which are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing and release, and you have a passion for large data, large model training and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses at a rapid pace across the globe.


Apply

London


Who are we?

Our team is the first in the world to use autonomous vehicles on public roads using end-to-end deep learning, computer vision and reinforcement learning. Leveraging our multi-national world-class team of researchers and engineers, we’re using data to learn more intelligent algorithms to bring autonomy for everyone, everywhere. We aim to be the future of self-driving cars, learning from experience and data.

Where you’ll have an impact

We are currently looking for people with research expertise in AI applied to autonomous driving or a similar robotics or decision-making domain, including, but not limited to, the following areas:

Foundation models for robotics
Model-free and model-based reinforcement learning
Offline reinforcement learning
Large language models
Planning with learned models, model predictive control and tree search
Imitation learning, inverse reinforcement learning and causal inference
Learned agent models: behavioral and physical models of cars, people, and other dynamic agents

You'll be working on some of the world's hardest problems, and able to attack them in new ways. You'll be a key member of our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

How to leverage our large, rich, and diverse sources of real-world driving data
How to architect our models to best employ the latest advances in foundation models, transformers, world models, etc.
Which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
How to leverage simulation for controlled experimental insight, training data augmentation, and re-simulation
How to scale models efficiently across data, model size, and compute, while maintaining efficient deployment on the car

You also have the potential to contribute to academic publications for top-tier conferences like NeurIPS, CVPR, ICRA, ICLR, CoRL, etc., working in a world-class team to achieve this.

What you’ll bring to Wayve

Thorough knowledge of, and 5+ years of applied experience in, AI research, computer vision, deep learning, reinforcement learning, or robotics
Ability to deliver high-quality code and familiarity with deep learning frameworks (Python and PyTorch preferred)
Experience leading a research agenda aligned with larger goals
Industrial and/or academic experience in deep learning, software engineering, automotive, or robotics
Experience working with training data, metrics, visualisation tools, and in-depth analysis of results
Ability to understand, author, and critique cutting-edge research papers
Familiarity with code review, C++, Linux, and Git is a plus
A PhD in a relevant area and/or a track record of delivering value through machine learning is a big plus

What we offer you

Attractive compensation with salary and equity
Immersion in a team of world-class researchers, engineers and entrepreneurs
A unique position to shape the future of autonomy and tackle the biggest challenge of our time
Bespoke learning and development opportunities
Relocation support with visa sponsorship
Flexible working hours - we trust you to do your job well, at times that suit you and your team
Benefits such as an onsite chef, workplace nursery scheme, private health insurance, therapy, daily yoga, onsite bar, large social budgets, unlimited L&D requests, enhanced parental leave, and more!


Apply

Location Madrid, ESP


Description Amazon's International Technology org in the EU (EU INTech) is creating new ways for Amazon customers to discover the Amazon catalog through new and innovative customer experiences. Our vision is to provide the most relevant content and CX for their shopping mission. We are responsible for building the software and machine learning models that surface high-quality, relevant content to Amazon customers worldwide across the site.

The team, mainly located in the Madrid Technical Hub, London, and Luxembourg, comprises Software Development and ML Engineers, Applied Scientists, Product Managers, Technical Product Managers, and UX Designers who are experts in several areas of ranking, computer vision, recommender systems, and Search, as well as CX. Are you interested in how the experiences that fuel Catalog and Search are built to scale to customers worldwide? Are you interested in how we use state-of-the-art AI to generate and provide the most relevant content?

We are looking for Applied Scientists who are passionate about solving highly ambiguous and challenging problems at global scale. You will be responsible for major science challenges for our team, including working with state-of-the-art text-to-image and image-to-text models and scaling them to enable new customer experiences worldwide. You will design, develop, deliver, and support a variety of models in collaboration with a variety of roles and partner teams around the world. You will influence scientific direction and best practices and maintain quality on team deliverables.


Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:
- Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments
- Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
- Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
- Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
- Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
- Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

Please click on the Apply link below to see the full job description and apply.


Apply

Location Sunnyvale, CA; Bellevue, WA


Description Are you fueled by a passion for computer vision, machine learning, and AI, and eager to leverage your skills to enrich the lives of millions across the globe? Join the Ring AI team, where we're not just offering a job but an opportunity to revolutionize safety and convenience in our neighborhoods through cutting-edge innovation.

You will be part of a dynamic team dedicated to pushing the boundaries of computer vision, machine learning and AI to deliver an unparalleled user experience for our neighbors. This position presents an exceptional opportunity for you to pioneer and innovate in AI, making a profound impact on millions of customers worldwide. You will partner with world-class AI scientists, engineers, product managers and other experts to develop industry-leading AI algorithms and systems for a diverse array of Ring and Blink products, enhancing the lives of millions of customers globally. Join us in shaping the future of AI innovation at Ring and Blink, where exciting challenges await!


Apply

Location Santa Clara, CA


Description Amazon is looking for passionate, talented, and inventive Applied Scientists with a strong machine learning background to help build industry-leading Speech, Vision, and Language technology.

AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Automatic Speech Recognition (ASR), Machine Translation (MT), Natural Language Understanding (NLU), Machine Learning (ML) and Computer Vision (CV).

As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state-of-the-art in human language technology. Your work will directly impact millions of our customers in the form of products and services that make use of speech and language technology. You will gain hands on experience with Amazon’s heterogeneous speech, text, and structured data sources, and large-scale computing resources to accelerate advances in spoken language understanding.

We are hiring in all areas of human language technology: ASR, MT, NLU, text-to-speech (TTS), and Dialog Management, in addition to Computer Vision.


Apply

Seattle, WA; Costa Mesa, CA; Boston, MA

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

THE ROLE We build Lattice, the foundation for everything we do as a defense technology company. Our engineers are talented and hard-working, motivated to see their work rapidly deployed on the front lines. Our team is not just building an experiment in waiting; we deploy what we build on the Southern border, in Iraq, in Ukraine, and beyond.

We have open roles across Platform Engineering, ranging from core infrastructure to distributed systems, web development, networking and more. We hire self-motivated people, those who hold a higher bar for themselves than anyone else could hold for them. If you love building infrastructure, platform services, or just working in high performing engineering cultures we invite you to apply!

YOU SHOULD APPLY IF YOU:
-Have 3+ years working with a variety of programming languages such as Rust, Go, C++, Java, Python, JavaScript/TypeScript, etc.
-Have experience working with customers to deliver novel software capabilities
-Want to work on building and integrating model/software/hardware-in-the-loop components by leveraging first- and third-party technologies (related to simulation, data management, compute infrastructure, networking, and more)
-Love building platform and infrastructure tooling that enables other software engineers to scale their output
-Enjoy collaborating with team members and partners in the autonomy domain, and building technologies and processes which enable users to safely and rapidly develop and deploy autonomous systems at scale
-Are a U.S. Person, because of required access to U.S. export-controlled information

Note: The above bullet points describe the ideal candidate. None of us matched all of these at once when we first joined Anduril. We encourage you to apply even if you believe you meet only part of our wish list.

NICE TO HAVE
-You've built or invented something: an app, a website, a game, a startup
-Previous experience working in an engineering setting: a startup (or startup-like environment), engineering school, etc. If you've succeeded in a low-structure, high-autonomy environment, you'll succeed here!
-Professional software development lifecycle experience using tools such as version control, CI/CD systems, etc.
-A deep, demonstrated understanding of how computers and networks work, from a single desktop to a multi-cluster cloud node
-Experience building scalable backend software systems with various data storage and processing requirements
-Experience with industry-standard cloud platforms (AWS, Azure), CI/CD tools, and software infrastructure fundamentals (networking, security, distributed systems)
-Ability to quickly understand and navigate complex systems and established code bases
-Experience implementing robot or autonomous vehicle testing frameworks in a software-in-the-loop or hardware-in-the-loop (HITL) environment
-Experience with modern build and deployment tooling (e.g. NixOS, Terraform)
-Experience designing complex software systems, and iterating upon designs via a technical design review process
-Familiarity with industry-standard monitoring, logging, and data management tools and best practices
-A bias towards rapid delivery and iteration


Apply