



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Redmond, Washington, United States


Overview

Within AI Platform, the Cognitive Services team empowers developers and data scientists around the world and of all skill levels to easily add AI capabilities to their apps. #aiplatform

We are looking for a Research Scientist with a background in Computer Vision, Natural Language Processing and/or Artificial Intelligence, including topics like layout analysis, chart understanding, multi-page multi-document question answering, novel ways of leveraging large language models for document understanding and solving problems inherent to large language models (grounding, retrieval-based generation, etc.). Familiarity with modern large language models is a plus, but not required.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

Your responsibilities will include:

- Conduct pioneering research to propel the state of the art in various tasks in document understanding.
- Work closely with fellow Research Scientists and Product Engineering teams to translate research outcomes into practical solutions.
- Provide expertise and support to the engineering team on various challenges, fostering collaboration between research and practical application.
- Take charge of the research agenda from problem definition to algorithm and model development.


Apply

Location Multiple Locations


Description

Qualcomm's Multimedia R&D and Standards Group is seeking candidates for Video Compression Research Engineer positions. You will be part of a world-renowned team of video compression experts. The team develops algorithms, hardware architectures, and systems for state-of-the-art applications of classical and machine learning methods in video compression, video processing, point cloud coding and processing, AR/VR, and computer vision use cases. The successful candidate for this position will be a highly self-directed individual with strong creative and analytic skills and a passion for video compression technology. Your work will include, but not be limited to, developing new applications of classical and machine learning methods in video compression, improving on state-of-the-art video codecs.

We are considering candidates with various levels of experience. We are flexible on location and open to hiring anywhere; preferred locations are the USA, Germany, and Taiwan.

Responsibilities:

- Contribute to the conception, development, implementation, and optimization of new algorithms extending existing techniques and systems to allow improved video compression.
- Initiate ideas, and design and implement algorithms for superior hardware encoder performance, including perceptually based bit allocation.
- Develop new algorithms for deep learning-based video compression solutions.
- Represent Qualcomm in the related standardization forums: JVET, MPEG Video, and ITU-T/VCEG.
- Document and present new algorithms and implementations in various forms, including standards contributions, patent applications, conference and journal publications, presentations, etc.

The ideal candidate would have the skills/experience below:

- Expert knowledge of the theory, algorithms, and techniques used in video and image coding.
- Knowledge and experience of video codecs and their test models, such as ECM, VVC, HEVC, and AV1.
- Experience with deep learning structures (CNN, RNN, autoencoder, etc.) and frameworks such as TensorFlow/PyTorch.
- Track record of successful research accomplishments demonstrated through published papers and/or patent applications in the fields of video coding or video processing.
- Solid programming and debugging skills in C/C++.
- Strong written and verbal English communication skills, a great work ethic, and the ability to work in a team environment to accomplish common goals.
- PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field, or equivalent practical experience.

Qualifications: PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field. 1+ years of experience with a programming language such as C, C++, MATLAB, etc.


Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:

- Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments
- Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
- Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
- Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
- Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
- Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

Please click on the Apply link below to see the full job description and apply.


Apply

Location San Diego


Description

Artificial Intelligence is changing the world for the benefit of human beings and societies. Qualcomm, as the world's leading mobile computing platform provider, is committed to enabling the wide deployment of intelligent solutions on all possible devices, such as smartphones, autonomous vehicles, robotics, and IoT devices. Qualcomm is creating building blocks for the intelligent edge.

We are part of Qualcomm AI Research, and we focus on advancing Edge AI machine learning technology – including model fine tuning, hardware acceleration, model quantization, model compression, network architecture search (NAS), edge inference and related fields. Come join us on this exciting journey. In this particular role, you will work in a dynamic research environment, be part of a multi-disciplinary team of researchers and software engineers who work with cutting edge AI frameworks and tools. You will architect, design, develop, test, and deploy on- and off-device benchmarking workflows for model zoos.

Minimum Qualifications:

- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
- Master's degree in Computer Science, Engineering, Information Systems, or a related field and 3+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
- PhD in Computer Science, Engineering, Information Systems, or a related field and 2+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience.

The successful applicant should have a strong theoretical background and proven hands-on experience with AI as well as modern software, web, and cloud engineering.

Must-have experience and skills:

- Strong theoretical background in AI and general ML techniques
- Proven hands-on experience with model training, inference, and evaluation
- Proven hands-on experience with PyTorch, ONNX, TensorFlow, CUDA, and others
- Experience developing data pipelines for ML/AI training and inferencing in the cloud
- Prior experience deploying containerized (web) applications to IaaS environments such as AWS (preferred), Azure, or GCP, backed by DevOps and CI/CD technologies
- Strong Linux command line skills
- Strong experience with Docker and Git
- Strong general analytical and debugging skills
- Prior experience working in agile environments
- Prior experience collaborating with multi-disciplinary teams across time zones
- Strong team player, communicator, presenter, mentor, and teacher

Preferred extra experience and skills:

- Prior experience with model quantization, profiling, and running models on edge devices
- Prior experience developing full-stack web applications using frameworks such as Ruby on Rails (preferred), Django, Phoenix/Elixir, Spring, Node.js, or others
- Knowledge of relational database design and optimization, and hands-on experience running Postgres (preferred), MySQL, or other relational databases in production

Preferred qualifications: Bachelor's, Master's, and/or PhD degree in Computer Science, Engineering, Information Systems, or a related field and 2-5 years of work experience in Software Engineering, Systems Engineering, Hardware Engineering, or related.


Apply

Engineering at Pinterest

Our Engineering team is at the core of bringing our platform to life for Pinners worldwide. Working collaboratively and cross-functionally with teams across the company, our engineers tackle growth-driving challenges to build an inspired and inclusive platform for all.

Our future of work is PinFlex

At Pinterest, we know that our best work happens when we feel most inspired. PinFlex promotes flexibility while prioritizing in-person moments to celebrate our culture and drive inspiration. We know that some work can be performed anywhere, and we encourage employees to work where they choose within their country or region, whether that’s at home, at a Pinterest office, or another virtual location. We believe that there is value in a distributed workforce but there are essential touch points for in-person collaboration that will create a big impact for our business and for development and connection.

Stop by booth #2100 to learn more about our open roles and our in-house generative AI foundation model that leverages the full power of our visual search and taste graph! Our engineers and recruiters are excited to connect with you!


Apply

A postdoctoral position is available in the Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow NIH guidelines, commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total including 8 80-GB Nvidia H100 GPUs, 10 48-GB Nvidia RTX A6000 GPUs, and 4 Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with a background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology, or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve your goals

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with the subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

London


Who are we?

Our team is the first in the world to operate autonomous vehicles on public roads using end-to-end deep learning, computer vision, and reinforcement learning. Leveraging our multi-national, world-class team of researchers and engineers, we’re using data to learn more intelligent algorithms and bring autonomy to everyone, everywhere. We aim to be the future of self-driving cars, learning from experience and data.

Where you’ll have an impact

We are currently looking for people with research expertise in AI applied to autonomous driving or a similar robotics or decision-making domain, including, but not limited to, the following specific areas:

- Foundation models for robotics
- Model-free and model-based reinforcement learning
- Offline reinforcement learning
- Large language models
- Planning with learned models, model predictive control, and tree search
- Imitation learning, inverse reinforcement learning, and causal inference
- Learned agent models: behavioral and physical models of cars, people, and other dynamic agents

You'll be working on some of the world's hardest problems, and able to attack them in new ways. You'll be a key member of our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

- How to leverage our large, rich, and diverse sources of real-world driving data
- How to architect our models to best employ the latest advances in foundation models, transformers, world models, etc.
- Which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
- How to leverage simulation for controlled experimental insight, training data augmentation, and re-simulation
- How to scale models efficiently across data, model size, and compute, while maintaining efficient deployment on the car

You also have the potential to contribute to academic publications for top-tier conferences like NeurIPS, CVPR, ICRA, ICLR, CoRL, etc., working in a world-class team to achieve this.

What you’ll bring to Wayve

- Thorough knowledge of, and 5+ years of applied experience in, AI research, computer vision, deep learning, reinforcement learning, or robotics
- Ability to deliver high-quality code and familiarity with deep learning frameworks (Python and PyTorch preferred)
- Experience leading a research agenda aligned with larger goals
- Industrial and/or academic experience in deep learning, software engineering, automotive, or robotics
- Experience working with training data, metrics, visualisation tools, and in-depth analysis of results
- Ability to understand, author, and critique cutting-edge research papers
- Familiarity with code review, C++, Linux, and Git is a plus
- A PhD in a relevant area and/or a track record of delivering value through machine learning is a big plus

What we offer you

- Attractive compensation with salary and equity
- Immersion in a team of world-class researchers, engineers, and entrepreneurs
- A unique position to shape the future of autonomy and tackle the biggest challenge of our time
- Bespoke learning and development opportunities
- Relocation support with visa sponsorship
- Flexible working hours - we trust you to do your job well, at times that suit you and your team
- Benefits such as an onsite chef, workplace nursery scheme, private health insurance, therapy, daily yoga, onsite bar, large social budgets, unlimited L&D requests, enhanced parental leave, and more!


Apply

Location Multiple Locations


Description

The Qualcomm Cloud Computing team is developing hardware and software for Machine Learning solutions spanning the data center, edge, infrastructure, and automotive markets. Qualcomm’s Cloud AI 100 accelerators are currently deployed at AWS and Cirrascale Cloud and at several large organizations. We are rapidly expanding our ML hardware and software solutions for large-scale deployments and are hiring across many disciplines.

We are seeking to hire for multiple machine learning positions in the Qualcomm Cloud team. In this role, you will work with Qualcomm's partners to develop and deploy best-in-class ML applications (CV, NLP, GenAI, LLMs, etc.) based on popular frameworks such as PyTorch, TensorFlow, and ONNX, optimized for Qualcomm's Cloud AI accelerators. The work will include model assessment of throughput, latency, and accuracy; model profiling and optimization; end-to-end application pipeline development; integration with customer frameworks and libraries; and responsibility for customer documentation, training, and demos. The candidate must possess excellent communication, leadership, interpersonal, organizational, and analytical skills.

This role will interact with individuals of all levels and requires an experienced, dedicated professional to effectively collaborate with internal and external stakeholders. The ideal candidate has either developed or deployed deep learning models on popular ML frameworks. If you have a strong appetite for technology and enjoy working in small, agile, empowered teams solving complex problems within a high energy, oftentimes chaotic environment then this is the role for you.

Minimum Qualifications:

- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 4+ years of Software Applications Engineering, Software Development, or related work experience; OR
- Master's degree in Engineering, Information Systems, Computer Science, or a related field and 3+ years of Software Applications Engineering, Software Development, or related work experience; OR
- PhD in Engineering, Information Systems, Computer Science, or a related field and 2+ years of Software Applications Engineering, Software Development, or related work experience.

- 2+ years of experience with a programming language such as C, C++, Java, Python, etc.
- 1+ year of experience with debugging techniques.

Key Responsibilities:

- Be a key contributor to Qualcomm’s Cloud AI GitHub repo and developer documentation.
- Work with developers in large organizations to onboard them on Qualcomm’s Cloud AI ML stack, improve and optimize their deep learning models on Qualcomm AI 100, and deploy their applications at scale.
- Collaborate and interact with internal teams to analyze and optimize training and inference for deep learning.
- Work on Triton, ExecuTorch, Inductor, and TorchDynamo to build abstraction layers for the inference accelerator.
- Optimize LLM/GenAI workloads for both scale-up (multi-SoC) and scale-out (multi-card) systems.
- Partner with product management and hardware/software engineering to highlight customer progress, gaps in product features, etc.


Apply

Location Sunnyvale, CA Bellevue, WA Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state of the art in generative computer vision technologies, with a focus on creating highly realistic digital faces, bodies, and avatars.

Strive to set new standards in the realism of 3D digital human appearance, movement, and personality, ensuring that generated content closely resembles real-life scenarios.

Implement techniques to achieve high-quality results in zero-shot or few-shot settings, as well as customized avatars for different use cases while maintaining speed and accuracy.

Develop innovative solutions to enable comprehensive customization of video content, including the creation of digital people, modifying scenes, and manipulating actions and speech within videos.

Preferred Qualifications:

PhD in computer science (or a related field) and/or 5+ years of industry experience.

Strong academic background with a focus on computer vision and transformers, specializing in NeRFs, Gaussian Splatting, Diffusion, GANs or related areas.

Publication Record: Highly relevant publication history, with a focus on generating or manipulating realistic digital faces, bodies, expressions, body movements, etc. Ideal candidates will have served as the primary author on these publications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for multimodal tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Location Palo Alto, CA


Description Amazon is looking for talented Postdoctoral Scientists to join our Stores Foundational AI team for a one-year, full-time research position.

The Stores Foundational AI team builds foundation models for multiple Amazon entities, such as ASIN, customer, seller, and brand. These foundation models are used in downstream applications by various partner teams in Stores. Our team also invests in building a foundation model for image generation, optimized for product image generation. We leverage the latest developments to create our solutions and innovate to push the state of the art.

The Postdoc is expected to conduct research and build state-of-the-art algorithms in video understanding and representation learning in the era of LLMs. Specifically, this includes: designing efficient algorithms to learn accurate representations for videos; building extensive video understanding capabilities, including various content classification tasks; designing algorithms that can generate high-quality videos from a set of product images; and improving the quality of our foundation models along the following dimensions: robustness, interpretability, fairness, sustainability, and privacy.


Apply

Zoox is looking for a software engineer to join our Perception team and help us build novel architectures for classifying and understanding the complex and dynamic environments in our cities. In this role, you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms. We are creating new algorithms for segmentation, tracking, classification, and high-level scene understanding, and you could work on any (or all!) of these components.

We're looking for engineers with advanced degrees and experience building perception pipelines that work with real data in rapidly changing and uncertain environments.


Apply

Gothenburg, Sweden

This fully-funded PhD position offers an opportunity to delve into the area of geometric deep learning within the broader landscape of machine learning and 3D computer vision. As a candidate, you'll have the chance to develop theoretical concepts and innovative methodologies while contributing to real-world imaging applications. Moreover, you will enjoy working in a diverse, collaborative, supportive and internationally recognized environment.

The PhD project centers on understanding and improving deep learning methods for 3D scene analysis and 3D generative diffusion models. We aim to explore new ways of encoding symmetries in deep learning models in order to scale up computations, a necessity for realizing truly 3D generative models for general scenes. We aim to explore the application of these models in key problems involving novel view synthesis and self-supervised learning.

If you are interested and present at CVPR, then feel free to reach out to Prof. Fredrik Kahl, head of the Computer Vision Group.


Apply

Open to Seattle, WA; Costa Mesa, CA; or Washington, DC

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a real-time, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

WHY WE’RE HERE

The Mission Software Engineering team builds, deploys, integrates, extends, and scales Anduril's software to deliver mission-critical capabilities to our customers. As the software engineers closest to Anduril customers and end-users, Mission Software Engineers solve technical challenges of operational scenarios while owning the end-to-end delivery of winning capabilities such as Counter Intrusion, Joint All Domain Command & Control, and Counter-Unmanned Aircraft Systems.

As a Mission Software Engineer, you will solve a wide variety of problems involving networking, autonomy, systems integration, robotics, and more, while making pragmatic engineering tradeoffs along the way. Your efforts will ensure that Anduril products seamlessly work together to achieve a variety of critical outcomes. Above all, Mission Software Engineers are driven by a “Whatever It Takes” mindset—executing in an expedient, scalable, and pragmatic way while keeping the mission top-of-mind and making sound engineering decisions to deliver successful outcomes correctly, on-time, and with high quality.

WHAT YOU’LL DO

- Own the software solutions that are deployed to customers
- Write code to improve products and scale the mission capability to more customers
- Collaborate across multiple teams to plan, build, and test complex functionality
- Create and analyze metrics that are leveraged for debugging and monitoring
- Triage issues, root cause failures, and coordinate next steps
- Partner with end-users to turn needs into features while balancing user experience with engineering constraints
- Travel up to 30% of the time to build, test, and deploy capabilities in the real world

CORE REQUIREMENTS

- Strong engineering background from industry or school, ideally in areas/fields such as Computer Science, Software Engineering, Mathematics, or Physics
- At least 2-5+ years working with a variety of programming languages such as Java, Python, Rust, Go, JavaScript, etc. (we encourage all levels to apply)
- Experience building software solutions involving significant amounts of data processing and analysis
- Ability to quickly understand and navigate complex systems and established code bases
- A desire to work on critical software that has a real-world impact
- Must be eligible to obtain and maintain a U.S. TS clearance

DESIRED REQUIREMENTS

- Strong background with a focus in Physics, Mathematics, and/or Motion Planning to inform modeling & simulation (M&S) and physical systems
- Developing and testing multi-agent autonomous systems and deploying them in real-world environments
- Feature and algorithm development with an understanding of behavior trees
- Developing software/hardware for flight systems and safety-critical functionality
- Distributed communication networks and message standards
- Knowledge of military systems and operational tactics

WHAT WE VALUE IN MISSION SOFTWARE

Customer Facing - Mission Software Engineers are the software engineers closest to Anduril customers, end-users, and the technical challenges of operational scenarios.

Mission First - Above all, MSEs execute their mission in an expedient, scalable, and pragmatic way, keeping the mission top-of-mind.


Apply

Tokyo, Tokyo-to, Japan


Overview As one of the world's leading industrial research laboratories, Microsoft Research (MSR) has more than 1,000 researchers and engineers working across the globe. In the past 30 years, Microsoft scientists have not only carried out world-class computer science research, but also transferred the advanced technologies into our products and services that have changed millions of people’s lives and ensured that Microsoft is at the forefront of digital transformation.

Part of Microsoft Research, Microsoft Research Asia (MSR Asia), established in 1998, is a leading research lab with major sites in Beijing, Shanghai, and Vancouver. Over the years, technologies developed by MSR Asia have made a significant impact within Microsoft and around the world, and new, innovative technologies are constantly being born from the lab. As a world-class research lab, MSRA offers an exhilarating, supportive, open, and inclusive environment for top talent to create the future through their disruptive and cutting-edge research.

Along with its business growth, Microsoft Research Asia (MSRA) is increasing its presence in Japan and is looking for a Principal Research Manager who specializes in AI, with an emphasis on Embodied AI and Robotics, AI model innovations (NLP, vision, multi-modality), Societal AI, wireless sensing, and wellbeing. This is a unique opportunity to lead an ambitious research agenda and work with various teams to explore new applications of those research areas.

Responsibilities

- As a leading and accomplished expert in a broad research area (e.g., Embodied AI and Robotics, AI Model, Multimedia and Vision), has a comprehensive understanding of the relevant literature, research methods, and business and academic context.
- Defines and articulates a clear long-term research vision that is in line with MSRA's strategic focus, and drives the research agenda to land on the planned schedule.
- As a local representative, fosters cooperative relationships with local governments, academic communities, industry partners, and business groups within Microsoft to establish MSRA's presence locally and support future growth.
- Creates synergy among MSRA research groups in multiple locations to enable collaboration and creativity.
- As a people manager, hires and retains top talent. Delivers success through empowerment and accountability.


Apply