



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Seattle, WA or Costa Mesa, CA

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

The Vehicle Autonomy (Robotics) team at Anduril develops aerial and ground-based robotic systems. The team is responsible for taking products like Ghost, Anvil, and our Sentry Tower from paper sketches to operational systems. We work in close coordination with specialist teams like Perception, Autonomy, and Manufacturing to solve some of the hardest problems facing our customers. We are looking for software engineers and roboticists excited about creating a powerful robotics stack that includes computer vision, motion planning, SLAM, controls, estimation, and secure communications.

WHAT YOU'LL DO

  • Write and maintain core libraries (frame transformations, targeting and guidance, etc.) that all robotics platforms at Anduril will use
  • Own feature development and rollout for our products. Recent examples include building a software-in-the-loop simulator for our Tower product, writing an autofocus control system for cameras, creating a coordinate frame library distributed over IPC, and redesigning the pan-tilt controls to accurately move heavy loads
  • Design, evaluate, and implement sensor integrations that support operation by both human and autonomous planning agents
  • Work closely with our hardware and manufacturing teams during product development, providing quick feedback that contributes to the final hardware design
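To make the frame-transformation work concrete: at its core, such a library composes and applies rigid transforms between coordinate frames. The sketch below is a minimal illustration of the idea, not Anduril's actual API; all function names here are hypothetical.

```python
import math

def rot_z(theta):
    """3x3 rotation matrix for a rotation of `theta` radians about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def apply_transform(R, t, p):
    """Map point `p`, expressed in the child frame, into the parent frame: R @ p + t."""
    return [sum(R[i][k] * p[k] for k in range(3)) + t[i] for i in range(3)]

def compose(R_ab, t_ab, R_bc, t_bc):
    """Chain two rigid transforms (a<-b, then b<-c) into a single a<-c transform."""
    R_ac = [[sum(R_ab[i][k] * R_bc[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
    t_ac = apply_transform(R_ab, t_ab, t_bc)  # R_ab @ t_bc + t_ab
    return R_ac, t_ac
```

A production version would typically use quaternions or SO(3) matrices with a real linear-algebra backend and carry frame identifiers to catch mismatched compositions at runtime.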

REQUIRED QUALIFICATIONS

  • Strong engineering background from industry or school, ideally in fields such as Robotics, Computer Science, Software Engineering, Mechatronics, Electrical Engineering, Mathematics, or Physics
  • 5+ years of C++ or Rust experience in a Linux development environment
  • Experience building software solutions involving significant amounts of data processing and analysis
  • Ability to quickly understand and navigate complex systems and established code bases
  • Must be eligible to obtain and hold a US DoD Security Clearance

PREFERRED QUALIFICATIONS

  • Experience in one or more of the following: motion planning, perception, localization, mapping, controls, and related system performance metrics
  • Understanding of systems software (kernel, device drivers, system calls) and performance analysis


Apply

Location Seattle, WA


Description Amazon's Compliance Shared Services (CoSS) is looking for a smart, energetic, and creative Sr. Applied Scientist to join the Applied Research Science team in Seattle and to extend and invent state-of-the-art research in multi-modal architectures and large language models across federated and continuous learning paradigms spanning multiple systems. At Amazon, we are working to be the most customer-centric company on earth. Millions of customers trust us to ensure a safe shopping experience. This is an exciting and challenging position: you will deliver scientific innovations into production systems at Amazon scale that increase automation accuracy and coverage, and extend and invent new research as a key author to deliver reusable foundational capabilities for automation.

You will analyze and process large amounts of image, text and tabular data from product detail pages, combine them with additional external and internal sources of multi-modal data, evaluate state-of-the-art algorithms and frameworks, and develop new algorithms in federated and continuous learning modes that can be integrated and launched across multiple systems. You will partner with engineers and product managers across multiple Amazon teams to design new ML solutions implemented across worldwide Amazon stores for the entire Amazon product catalog.


Apply

Redmond, Washington, United States


Overview We are seeking highly skilled and passionate research scientists to join the Responsible & Open AI Research (ROAR) team in Azure Cognitive Services in Redmond, WA.

As a Principal Research Scientist, you will play a key role in advancing Responsible AI approaches to ensure safe releases of GenAI models such as GPT-4o, DALL-E, Sora, and beyond, as well as to expand and enhance the capability of Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities Conduct cutting-edge, deployment-driven research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of textual and multimodal AI risks. Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.

Enable the safe release of multimodal models from OpenAI in the Azure OpenAI Service, and expand and enhance the Azure AI Content Safety Service with new detection/mitigation technologies for text and multimodal content. Develop innovative approaches to address AI safety challenges for diverse customer scenarios.

Review business and product requirements and incorporate state-of-the-art research to formulate plans that will meet business goals. Identify gaps and determine which tools, technologies, and methods to incorporate to ensure quality and scientific rigor. Proactively provide mentorship and coaching to less experienced and mid-level team members.


Apply

Location Sunnyvale, CA or Bellevue, WA


Description Are you a passionate scientist in the computer vision area who aspires to apply your skills to bring value to millions of customers? Here at Ring, we have a unique opportunity to innovate and see how the results of our work improve the lives of millions of people and make neighborhoods safer. You will be part of a team committed to pushing the frontier of computer vision and machine learning technology to deliver the best experience for our neighbors. This is a great opportunity for you to innovate in this space by developing highly optimized algorithms that work at scale. This position requires experience with developing multi-modal LLMs and Vision Language Models. You will collaborate with different Amazon teams to make informed decisions on the best practices in machine learning to build highly optimized, integrated hardware and software platforms.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.

Preferred Qualifications:

PhD in computer science or a related field, or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

A postdoctoral position is available in Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow the NIH guideline commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total, including eight 80-GB Nvidia H100 GPUs, ten 48-GB Nvidia RTX A6000 GPUs, and four Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science and ophthalmology or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve them

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer for Optimization, you will focus on research and development related to the optimization of ML models on GPUs or AI accelerators. You will use your judgment in complex scenarios and apply optimization techniques to a wide variety of technical problems. Specifically, you will:

  • Research, prototype, and evaluate state-of-the-art model optimization techniques and algorithms
  • Characterize neural network quality and performance based on research, experimental data, and performance profiling
  • Incorporate optimizations and model development best practices into the existing ML development lifecycle and workflow
  • Define the technical vision and roadmap for DL model optimizations
  • Write technical reports communicating qualitative and quantitative results to colleagues and customers
  • Develop, deploy, and optimize deep learning (DL) models on various GPU and AI accelerator chipsets/platforms

Apply

Location Madrid, ESP


Description Amazon's International Technology org in the EU (EU INTech) is creating new ways for Amazon customers to discover the Amazon catalog through new and innovative customer experiences. Our vision is to provide the most relevant content and CX for each customer's shopping mission. We are responsible for building the software and machine learning models that surface high-quality, relevant content to Amazon customers worldwide across the site.

The team, mainly located in the Madrid Technical Hub, London, and Luxembourg, comprises Software Development and ML Engineers, Applied Scientists, Product Managers, Technical Product Managers, and UX Designers who are experts in several areas: ranking, computer vision, recommendation systems, Search, and CX. Are you interested in how the experiences that fuel Catalog and Search are built to scale to customers worldwide? Are you interested in how we use state-of-the-art AI to generate and provide the most relevant content?

We are looking for Applied Scientists who are passionate about solving highly ambiguous and challenging problems at global scale. You will be responsible for major science challenges for our team, including working with state-of-the-art text-to-image and image-to-text models at scale to enable new Customer Experiences worldwide. You will design, develop, deliver, and support a variety of models in collaboration with a variety of roles and partner teams around the world. You will influence scientific direction and best practices and maintain quality on team deliverables.


Apply

Redmond, Washington, United States


Overview The Microsoft Research (MSR) AI Frontiers lab is seeking applications for the position of Principal Researcher – Generative AI to join its team.

The mission of the AI Frontiers lab is to expand the Pareto frontier of Artificial Intelligence (AI) capabilities, efficiency, and safety through innovations in foundation models and learning agent platforms. Some of our projects include work on Small Language Models (e.g. Phi, Orca) and Multi-Agent AI (e.g. AutoGen).

We are seeking a Principal Researcher – Generative AI to join our team and lead efforts on the advancement of Generative AI and Large Language Model (LLM) technologies. As a Principal Researcher – Generative AI, you will play a crucial role in leading, developing, improving, and exploring the capabilities of Generative AI models. Your work will have a significant impact on the development of cutting-edge technologies, advancing the state of the art and providing practical solutions to real-world problems.

Our ongoing research areas encompass but are not limited to:

  • Pre-training: especially of small language and multimodal models
  • Alignment and post-training: e.g., instruction tuning and reinforcement learning from feedback
  • Continual learning: enabling LLMs to evolve and adapt over time and learn from previous experiences and human interactions
  • Specialization: tailoring LLMs to meet application-specific requirements
  • Orchestration and multi-agent systems: automated orchestration between multiple agents, incorporating human feedback and oversight

MSR offers a vibrant environment for cutting-edge, multidisciplinary research, including access to diverse, real-world problems and data, opportunities for experimentation and real-world impact, an open publication policy, and close links to top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.   

Responsibilities You will perform cutting-edge research in collaboration with other researchers, engineers, and product groups.
As a member of a world-class research organization, you will be part of research breakthroughs in the field and will be given an opportunity to realize your ideas in products and services used worldwide. Embody our culture and values.


Apply

Location Multiple Locations


Description Today, more intelligence is moving to end devices, and mobile is becoming the pervasive AI platform. Building on the smartphone foundation and the scale of mobile, Qualcomm envisions making AI ubiquitous—expanding beyond mobile and powering other end devices, machines, vehicles, and things. We are inventing, developing, and commercializing power-efficient on-device AI, edge cloud AI, and 5G to make this a reality.

Job Purpose & Responsibilities As a member of Qualcomm’s ML Systems Team, you will participate in two activities:

  • Development and evolution of ML/AI compilers (production and exploratory versions) for efficient mappings of ML/AI algorithms onto existing and future HW
  • Analysis of ML/AI algorithms and workloads to drive future features in Qualcomm’s ML HW/SW offerings

Key Responsibilities:

  • Contributing to the development and evolution of ML/AI compilers within Qualcomm
  • Defining and implementing algorithms for mapping ML/AI workloads to Qualcomm HW
  • Understanding trends in ML network design, through customer engagements and the latest academic research, and how these affect both SW and HW design
  • Creating performance-driven simulation components (using C++, Python) for analysis and design of high-performance HW/SW algorithms on future SoCs
  • Exploring and analyzing performance/area/power trade-offs for future HW and SW ML algorithms
  • Pre-silicon prediction of performance for various ML algorithms
  • Running, debugging, and analyzing performance simulations to suggest enhancements to Qualcomm hardware and software that tackle compute and system memory-related bottlenecks

Successful applicants will work in cross-site, cross-functional teams.
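To give a flavor of the compiler-mapping work described above, one classic graph rewrite an ML compiler performs is fusing a multiply that feeds an add into a single fused multiply-add (FMA) node, which many accelerators execute as one instruction. The sketch below uses a toy IR invented for illustration; it is not Qualcomm's compiler representation.

```python
def fuse_mul_add(ops):
    """Fuse mul->add pairs into fma nodes in a toy SSA op list.

    `ops` is a topologically ordered list of (name, opcode, input_names).
    A mul is fused only when its result feeds exactly one add, so the
    rewrite never duplicates work or changes observable values.
    """
    # Map each value to the ops that consume it, and each name to its producer.
    consumers = {}
    for name, _, ins in ops:
        for i in ins:
            consumers.setdefault(i, []).append(name)
    producer = {name: (op, ins) for name, op, ins in ops}

    fused_away = set()
    result = []
    for name, op, ins in ops:
        if op == "add":
            for i in ins:
                p = producer.get(i)
                if p and p[0] == "mul" and consumers.get(i) == [name]:
                    other = [x for x in ins if x != i]
                    # fma(a, b, c) computes a * b + c.
                    result.append((name, "fma", list(p[1]) + other))
                    fused_away.add(i)
                    break
            else:
                result.append((name, op, ins))
        else:
            result.append((name, op, ins))
    # Drop the mul nodes that were folded into fma nodes.
    return [n for n in result if n[0] not in fused_away]
```

Production compilers apply many such rewrites over a richer graph IR (with shapes, layouts, and cost models), but the single-consumer check above is the essential safety condition for any fusion pass.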

Requirements:

  • Demonstrated ability to learn, think, and adapt in a fast-changing environment
  • Detail-oriented, with strong problem-solving, analytical, and debugging skills
  • Strong communication skills (written and verbal)
  • Strong background in algorithm development and performance analysis

The following experiences would be significant assets:

  • Strong object-oriented design principles
  • Strong knowledge of C++
  • Strong knowledge of Python
  • Experience in compiler design and development
  • Knowledge of network model formats/platforms (e.g., PyTorch, TensorFlow, ONNX)
  • On-silicon debug skills for high-performance compute algorithms
  • Knowledge of algorithms and data structures
  • Knowledge of software development processes (revision control, CI/CD, etc.)
  • Familiarity with tools such as git, Jenkins, Docker, clang/MSVC
  • Knowledge of computer architecture, digital circuits, and event-driven transactional models/simulators


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative computer vision technologies, with a focus on creating highly realistic digital faces, bodies, and avatars.

Strive to set new standards in the realism of 3D digital human appearance, movement, and personality, ensuring that generated content closely resembles real-life scenarios.

Implement techniques to achieve high-quality results in zero-shot or few-shot settings, as well as customized avatars for different use cases while maintaining speed and accuracy.

Develop innovative solutions to enable comprehensive customization of video content, including the creation of digital people, modifying scenes, and manipulating actions and speech within videos.

Preferred Qualifications:

PhD in computer science (or a related field) and/or 5+ years of industry experience.

Strong academic background with a focus on computer vision and transformers, specializing in NeRFs, Gaussian Splatting, Diffusion, GANs or related areas.

Publication Record: Highly relevant publication history, with a focus on generating or manipulating realistic digital faces, bodies, expressions, body movements, etc. Ideal candidates will have served as the primary author on these publications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for multimodal tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IoT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proof-of-concepts in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and has converted leading research into best-in-class user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second, after performing our full-stack AI optimizations on the model.

Role responsibilities can include both applied and fundamental research in the field of machine learning, with a development focus in one or many of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g. large language models, deep generative models (VAE, Normalizing-Flow, ARM, etc), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimizations, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling based, back-propagation based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design

  • Designs, develops, and tests software for machine learning frameworks that optimize models to run efficiently on edge devices. Candidates are expected to have a strong interest in and deep passion for making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Researches, designs, develops, enhances, and implements different components of machine learning compilers for HW accelerators.

  • Designs, implements, and trains DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).
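As a concrete illustration of the quantization work mentioned above, the core of per-tensor affine (asymmetric) quantization, a simplified version of what tools like AIMET automate, maps floats to integers in [0, 2^b − 1] via a scale and zero point. The function names below are hypothetical, chosen for this sketch only.

```python
def affine_quantize(weights, num_bits=8):
    """Per-tensor affine quantization: q = clamp(round(w / scale) + zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    # Guard against a constant tensor, where hi == lo would give scale 0.
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats: w ≈ (q - zero_point) * scale."""
    return [(qi - zero_point) * scale for qi in q]
```

The round-trip error of each weight is bounded by roughly one quantization step (the scale), which is why 8-bit post-training quantization often preserves model accuracy; real toolkits add per-channel scales, calibration, and quantization-aware training on top of this basic scheme.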


Apply

You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional, and life-like synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions that are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing and release, and you have a passion for large data, large model training and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses at a rapid pace across the globe.


Apply

Vancouver

Who we are Established in 2017, Wayve is a leader in autonomous vehicle technology, driven by breakthroughs in Embodied AI. Our intelligent, mapless, and hardware-agnostic technologies empower vehicles to navigate complex environments effortlessly. Supported by prominent investors, Wayve is advancing the transition from assisted to fully automated driving, making transportation safer, more efficient, and universally accessible. Join our world-class, multinational team of engineers and researchers as we push the boundaries of frontier AI and autonomous driving, creating impactful technologies and products on a global scale.

We are seeking an experienced researcher to be a founding member of our Vancouver team! We are prioritising someone with experience leading projects in AI applied to autonomous driving or similar robotics or decision-making domains, inclusive of, but not limited to, the following specific areas:

  • Foundation models for robotics or embodied AI
  • Model-free and model-based reinforcement learning
  • Offline reinforcement learning
  • Large language models
  • Planning with learned models, model predictive control, and tree search
  • Imitation learning, inverse reinforcement learning, and causal inference
  • Learned agent models: behavioural, oral and physical models of cars, people, and other dynamic agents

Challenges you will own You'll be working on some of the world's hardest problems, and able to attack them in new ways. You'll be a technical leader within our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

  • Actively contributing to the Science team’s technical leadership community, including proposing new projects, organising their work, and delivering substantial impact across Wayve
  • Leveraging our large, rich, and diverse sources of real-world driving data
  • Architecting our models to best employ the latest advances in foundation models, transformers, world models, etc., evaluating and incorporating state-of-the-art techniques into our workflows
  • Investigating which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
  • Leveraging simulation for controlled experimental insight, training-data augmentation, and re-simulation
  • Scaling models efficiently across data, model size, and compute, while maintaining efficient deployment on the car
  • Collaborating with cross-functional, international teams to integrate research findings into scalable, production-level solutions
  • Potentially contributing to academic publications for top-tier conferences like NeurIPS, CVPR, ICRA, ICLR, CoRL, etc., working in a world-class team, contributing to the scientific community, and establishing Wayve as a leader in the field

What you will bring to Wayve

- A proven track record of research in one or more of the topics above, demonstrated through deployed applications or publications
- Experience leading a research agenda aligned with larger organisation or company goals
- Strong programming skills in Python, with experience in deep learning frameworks such as PyTorch and scientific libraries such as NumPy and pandas
- Experience taking a machine learning research concept through the full ML development cycle
- Excellent problem-solving skills and the ability to work independently as well as in a team environment
- Demonstrated ability to work collaboratively in a fast-paced, innovative, interdisciplinary team environment
- Experience bringing an ML research concept to production at scale
- A PhD in Computer Science, Computer Engineering, or a related field

What we offer you

The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact.


Apply

Redmond, Washington, United States


Overview

Do you want to shape the future of Artificial Intelligence (AI)? Do you have a passion for solving real-world problems with cutting-edge technologies?

The Human-AI eXperiences (HAX) team at Microsoft Research AI Frontiers is looking for exceptional candidates to advance the state-of-the-art in human-AI collaboration with a focus on leveraging the capabilities of people and foundation model-based agents and systems to solve real problems.

As a Principal Researcher on our team, you will:

- Work on challenging and impactful projects in areas such as human-AI collaboration and teaming, foundation model-based systems, multi-agent systems, next-generation AI experiences, and responsible AI.
- Apply your skills and knowledge to build practical solutions that solve real problems and impact the world.
- Collaborate with other researchers and engineers across the company to amplify your impact and grow your career in a supportive and stimulating environment.

We are looking for candidates who have:

- A drive for real-world impact, demonstrated by a passion to build and release prototypes or OSS frameworks.
- A demonstrated track record of influential projects and publications in relevant fields at top-tier conferences and journals (such as NeurIPS, ICML, AAAI, CHI, UIST).
- Demonstrated interdisciplinary experience in applied machine learning, natural language processing, and human-computer interaction, including experience with offline and online evaluations and conducting user studies and user-centered research.
- Exceptional coding skills and hands-on experience working with large foundation models and related frameworks and toolkits. Familiarity with LLMs such as OpenAI's Generative Pre-trained Transformer (GPT) models and LLaMA, model fine-tuning techniques (LoRA, QLoRA), prompting techniques (Chain of Thought, ReAct, etc.), and model evaluation is beneficial.
- A passion for innovation and creativity, evidenced by the ability to generate novel ideas, approaches, and solutions.
- A team-player mindset, characterized by effective communication, collaboration, and feedback skills.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

- Perform cutting-edge research to solve real-world problems.
- Collaborate closely with other researchers, engineers, and product group partners on high-impact projects that deliver real-world benefits to people and society.
- Embody our culture and values.


Apply