



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category or location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page linked in the navigation bar above.

Search Opportunities

Mountain View


Who we are: Our team is the first in the world to drive autonomous vehicles on public roads using end-to-end deep learning. With our multinational, world-class technical team, we're building things differently.

We don't think it's scalable to tell an algorithm how to drive through hand-coded rules and expensive HD maps. Instead, we believe that learning from experience and data will allow our algorithms to be more intelligent: capable of easily adapting to new environments. Our aim is to be the future of self-driving cars: the first to deploy in 100 cities across the world, bringing autonomy to everyone, everywhere.

Where you will have an impact: Science is the team that is advancing our end-to-end autonomous driving research. The team’s mission is to accelerate our journey to AV2.0 and ensure the future success of Wayve by incubating and investing in new ideas that have the potential to become game-changing technological advances for the company.

As the first Research Manager in our Mountain View office, you will be responsible for managing and scaling a strong Science team that is building our Wayve Foundational Model in collaboration with other Wayve science teams in London and Vancouver. You will provide coaching and guidance to each of the researchers and engineers on your team and work with leaders across the company to ensure sustainable career growth for your team during a period of rapid company growth. You will participate in our project-based operating model, where your focus will be unlocking the potential of your team and its technical leaders to drive industry-leading impact. As part of your work, you will help identify the right projects to invest in, ensure the right allocation of resources to those projects, keep the team in good health, provide technical feedback, share progress to build momentum, and build alignment and strong collaboration across the wider Science organisation. We are actively hiring and aim to substantially grow our research team over the next two years, and you will be at the heart of this.

What you'll bring to Wayve

Essential:
• Prior experience as a manager of research teams (10-15+ people) with a clear career interest towards management
• Passionate about fostering personal and professional growth in individual team members
• Experience with roadmap planning, stakeholder management, requirements gathering and alignment with peers towards milestones and deliverables
• Strong knowledge of Machine Learning and related areas, such as Deep Learning, Natural Language Processing, Computer Vision, etc.
• Industry experience with machine learning technology development which has had real-world product impact
• Experience driving a team and technical project through the full lifecycle, ideally within the language, vision or multimodal space
• Passionate about bringing research concepts through to product
• Research and engineering fundamentals
• MS or PhD in Computer Science, Engineering, or similar experience

Desirable:
• Experience managing the execution of a technical product
• Good experience working in a project-based ("matrix") operating environment
• Proven track record of successfully delivering research projects and publications
• Experience working with robotics, self-driving, AR/VR, or LLMs

Our offer:
• Competitive compensation, on-site chef and bar, lots of fun socials, workplace nursery scheme, comprehensive private health insurance and more!
• Immersion in a team of world-class researchers, engineers and entrepreneurs
• A position to shape the future of autonomous driving, and thus bring about a real world deployment of a breakthrough technology
• Help relocating/travelling to London, with visa sponsorship
• Flexible working hours - we trust you to do your job well, at times that suit you and your team


Apply

Redmond, Washington, United States


Overview: We are seeking highly skilled and passionate research scientists to join Responsible & Open AI Research (ROAR) in Azure Cognitive Services in Redmond, WA.

As a Principal Research Scientist, you will play a key role in advancing Responsible AI approaches to ensure safe releases of GenAI models such as GPT-4o, DALL-E, Sora, and beyond, as well as to expand and enhance the capability of Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities: Conduct cutting-edge, deployment-driven research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of textual and multimodal AI risks. Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.

Enable the safe release of multimodal models from OpenAI in the Azure OpenAI Service, and expand and enhance the Azure AI Content Safety Service with new detection and mitigation technologies for text and multimodal content. Develop innovative approaches to address AI safety challenges across diverse customer scenarios.

Review business and product requirements and incorporate state-of-the-art research to formulate plans that meet business goals. Identify gaps and determine which tools, technologies, and methods to incorporate to ensure quality and scientific rigor. Proactively provide mentorship and coaching to less experienced and mid-level team members.
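
In practice, the measurement half of this work reduces to scoring candidate detectors against labeled examples of harmful and benign content. The snippet below is a deliberately generic, hedged sketch of such an evaluation; the detector, labels, and example texts are hypothetical placeholders, and this is not the Azure AI Content Safety API.

    # Hypothetical harness for measuring a text-risk detector on a small labeled set.
    # The detector, labels, and examples are placeholders, not the Azure AI Content
    # Safety API.
    from sklearn.metrics import precision_recall_fscore_support

    def toy_detector(text: str) -> int:
        """Stand-in risk classifier: flags text containing a blocked term."""
        blocked_terms = {"attack", "weapon"}
        return int(any(term in text.lower() for term in blocked_terms))

    # (text, ground-truth label) pairs; 1 = harmful, 0 = benign.
    eval_set = [
        ("how do I build a weapon at home", 1),
        ("how do I build a birdhouse at home", 0),
        ("plan an attack on the corporate network", 1),
        ("plan a surprise party for my team", 0),
    ]

    y_true = [label for _, label in eval_set]
    y_pred = [toy_detector(text) for text, _ in eval_set]

    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", zero_division=0
    )
    print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

Real evaluations would use curated benchmark sets and model-based detectors, but the structure (labeled data, a detector under test, and standard precision/recall reporting) is the same.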


Apply

Location: Multiple Locations


Description: The Qualcomm Cloud Computing team is developing hardware and software for Machine Learning solutions spanning the data center, edge, infrastructure, and automotive markets. Qualcomm's Cloud AI 100 accelerators are currently deployed at AWS / Cirrascale Cloud and at several large organizations. We are rapidly expanding our ML hardware and software solutions for large-scale deployments and are hiring across many disciplines.

We are seeking to hire for multiple machine learning positions in the Qualcomm Cloud team. In this role, you will work with Qualcomm's partners to develop and deploy best-in-class ML applications (CV, NLP, GenAI, LLMs, etc.) based on popular frameworks such as PyTorch, TensorFlow, and ONNX, optimized for Qualcomm's Cloud AI accelerators. The work will include model assessment of throughput, latency, and accuracy; model profiling and optimization; end-to-end application pipeline development; integration with customer frameworks and libraries; and responsibility for customer documentation, training, and demos. The candidate must possess excellent communication, leadership, interpersonal, organizational, and analytical skills.

This role will interact with individuals at all levels and requires an experienced, dedicated professional who can collaborate effectively with internal and external stakeholders. The ideal candidate has developed or deployed deep learning models on popular ML frameworks. If you have a strong appetite for technology and enjoy working in small, agile, empowered teams solving complex problems in a high-energy, oftentimes chaotic environment, then this is the role for you.

Minimum Qualifications:
• Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Applications Engineering, Software Development experience, or related work experience; OR
• Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Applications Engineering, Software Development experience, or related work experience; OR
• PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Applications Engineering, Software Development experience, or related work experience.

• 2+ years of experience with a programming language such as C, C++, Java, Python, etc.
• 1+ year of experience with debugging techniques.

Key Responsibilities:
• Key contributor to Qualcomm's Cloud AI GitHub repo and developer documentation.
• Work with developers in large organizations to onboard them onto Qualcomm's Cloud AI ML stack, improve and optimize their deep learning models on Qualcomm Cloud AI 100, and deploy their applications at scale.
• Collaborate and interact with internal teams to analyze and optimize training and inference for deep learning.
• Work on Triton, ExecuTorch, Inductor, and TorchDynamo to build abstraction layers for the inference accelerator.
• Optimize LLM/GenAI workloads for both scale-up (multi-SoC) and scale-out (multi-card) systems.
• Partner with product management and hardware/software engineering to highlight customer progress, gaps in product features, etc.
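
To make the model-assessment part of this work concrete, here is a hedged sketch of exporting a framework model to ONNX and measuring its average inference latency. It uses PyTorch, torchvision, and ONNX Runtime on CPU purely for illustration and does not involve Qualcomm's Cloud AI 100 SDK or compiler toolchain.

    # Hedged sketch: export a small PyTorch model to ONNX and time inference with
    # ONNX Runtime on CPU. Illustrative only; the real Cloud AI 100 workflow uses
    # Qualcomm's own SDK and compilation tools.
    import time

    import torch
    import torchvision
    import onnxruntime as ort

    model = torchvision.models.resnet18(weights=None).eval()
    dummy = torch.randn(1, 3, 224, 224)

    # Export the model to ONNX.
    torch.onnx.export(model, dummy, "resnet18.onnx",
                      input_names=["input"], output_names=["output"])

    # Load the exported graph and time batch-1 inference.
    session = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])
    x = dummy.numpy()

    for _ in range(5):  # warm-up runs
        session.run(None, {"input": x})

    n_runs = 50
    start = time.perf_counter()
    for _ in range(n_runs):
        session.run(None, {"input": x})
    latency_ms = (time.perf_counter() - start) / n_runs * 1000
    print(f"average latency: {latency_ms:.2f} ms per inference")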


Apply

A postdoctoral position is available in the Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year, with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow NIH guidelines, commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total including 8 80-GB Nvidia H100 GPUs, 10 48-GB Nvidia RTX A6000 GPUs, and 4 Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science and ophthalmology or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve them

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. This project will take place in close collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others; anticipating the partner's actions is therefore crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project, we are particularly interested in estimating human intentions to enable collaborative tasks between humans and robots, such as human-to-robot and robot-to-human handovers.

Contact: pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda Pineda (RobotLearn).


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As the MLE, you will collaborate with researchers to perform research operations using existing infrastructure. You will use your judgment in complex scenarios and help apply standard techniques to various technical problems. Specifically, you will:

  • Characterize neural network quality, failure modes, and edge cases based on research data
  • Maintain awareness of current trends in relevant areas of research and technology
  • Coordinate with researchers and accurately convey the status of experiments
  • Manage a large number of concurrent experiments and make accurate time estimates for deadlines
  • Review experimental results and suggest theoretical or process improvements for future iterations
  • Write technical reports indicating qualitative and quantitative results to external parties

Apply

Location: Santa Clara, CA


Description: Amazon is looking for passionate, talented, and inventive Applied Scientists with a strong machine learning background to help build industry-leading Speech, Vision and Language technology.

AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Automatic Speech Recognition (ASR), Machine Translation (MT), Natural Language Understanding (NLU), Machine Learning (ML) and Computer Vision (CV).

As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state-of-the-art in human language technology. Your work will directly impact millions of our customers in the form of products and services that make use of speech and language technology. You will gain hands-on experience with Amazon’s heterogeneous speech, text, and structured data sources, and large-scale computing resources to accelerate advances in spoken language understanding.

We are hiring in all areas of human language technology: ASR, MT, NLU, text-to-speech (TTS), and Dialog Management, in addition to Computer Vision.


Apply

Vancouver, British Columbia, Canada


Overview: Microsoft Research (MSR), a leading industrial research laboratory, comprises over 1,000 computer scientists working across the United States, United Kingdom, China, India, Canada, and the Netherlands.

We are currently seeking a Principal Researcher in the areas of artificial specialized intelligence and artificial general intelligence, located in Vancouver, British Columbia.

This is an opportunity to drive an ambitious research agenda while collaborating with diverse teams to push for novel applications of those areas.

Over the past 30 years, our scientists have not only conducted world-class computer science research but also integrated advanced technologies into our products and services, positively impacting millions of lives and propelling Microsoft to the forefront of digital transformation.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities:
• Identify and drive new research directions, creating new technologies and collaborating with Microsoft product groups and external partners to deploy them in real-world settings.
• Stay current with the latest trends, research, and developments in AI, machine learning, and system architecture to ensure our systems remain at the forefront of innovation.
• Evaluate the performance of AI-centric systems and provide recommendations for improvement and optimization.
• Publish research findings in peer-reviewed journals, conferences, and other relevant venues, and present research results to internal and external stakeholders.
• Mentor and guide researchers and engineers in their research and development efforts.
• Collaborate with industry partners and academic institutions to drive joint research projects and initiatives.


Apply

Location: Amsterdam, Netherlands


Description

At Qualcomm AI Research, we are advancing AI to make its core capabilities – perception, reasoning, and action – ubiquitous across devices. Our mission is to make breakthroughs in fundamental AI research and scale them across industries. By bringing together some of the best minds in the field, we’re pushing the boundaries of what’s possible and shaping the future of AI.

As a Principal Machine Learning Researcher at Qualcomm, you conduct innovative research in machine learning, deep learning, and AI that advances the state-of-the-art.
• You develop and quickly iterate on innovative research ideas, and prototype and implement them in collaboration with other researchers and engineers.
• You are on top of, and actively shaping, the latest research in the field and publish papers at top scientific conferences.
• You help define and shape our research vision and planning within and across teams and are passionate about execution.
• You engage with leads and stakeholders across business units on how to translate research progress into business impact.
• You work in one or more of the following research areas: Generative AI, foundation models (LLMs, LVMs), reinforcement learning, neural network efficiency (e.g., quantization, conditional computation, efficient HW), on-device learning and personalization, and foundational AI research.
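
As a small, concrete illustration of one of the research areas listed above (neural network efficiency via quantization), the sketch below applies PyTorch's post-training dynamic quantization to a toy model. It is a generic example under stated assumptions, not Qualcomm research code or tooling.

    # Hedged sketch: post-training dynamic quantization with PyTorch's built-in API.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(128, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    ).eval()

    # Replace Linear layers with dynamically quantized versions: weights are stored
    # in int8, activations are quantized on the fly at inference time.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    x = torch.randn(1, 128)
    with torch.no_grad():
        fp32_out = model(x)
        int8_out = quantized(x)

    # The quantized model approximates the fp32 one at a fraction of the weight storage.
    print("max abs difference:", (fp32_out - int8_out).abs().max().item())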

Working at Qualcomm means being part of a global company (headquartered in San Diego) that fosters a diverse workforce and puts emphasis on the learning opportunities and professional development of its employees. You will work closely with researchers that have published at major conferences, work on campus at the University of Amsterdam, where you have the opportunity to collaborate with academic researchers through university partnerships such as the QUVA lab, and live in a scenic, vibrant city with a healthy work/life balance and a diversity of cultural activities. In addition, you can join plenty of mentorship, learning, peer, and affinity group opportunities within the company. In this way you can easily develop personal and professional skills in your areas of interest. You’re empowered to start your own initiatives and, in doing so, collaborate with colleagues in offices across teams and countries.

Minimum qualifications:
• PhD or Master's degree in Machine Learning, Computer Vision, Physics, Mathematics, Electrical Engineering or similar field, or equivalent practical experience.
• 8+ years of experience in machine learning and AI, and experience working in an academic or industry research lab.
• Strong drive to continuously improve beyond the status quo in translating new ideas into innovative solutions.
• Track record of scientific leadership by having published impactful work at major conferences in machine learning, computer vision, or NLP (e.g., NeurIPS, ICML, ICLR, CVPR, ICCV, ACL, EMNLP, NAACL, etc.).
• Programming experience in Python and experience with standard deep learning toolkits.

Preferred qualifications:
• Hands-on experience with foundation models (LLMs, LVMs) and reinforcement learning.
• Proven experience in technology and team leadership, and experience with cross-functional stakeholder engagements.
• Experience in writing clean and maintainable code for research-internal use (no product development).
• Aptitude for guiding and mentoring more junior researchers.


Apply

Location: Seattle, WA; Arlington, VA; New York, NY; San Francisco, CA


Description: Join us at the cutting edge of Amazon's sustainability initiatives to work on environmental and social advancements that support Amazon's long-term worldwide sustainability strategy. At Amazon, we're working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people.

The Worldwide Sustainability (WWS) organization capitalizes on Amazon’s scale & speed to build a more resilient and sustainable company. We manage our social and environmental impacts globally, driving solutions that enable our customers, businesses, and the world around us to become more sustainable.

Sustainability Science and Innovation (SSI) is a multi-disciplinary team within the WW Sustainability organization that combines science, analytics, economics, statistics, machine learning, product development, and engineering expertise. We use this expertise to identify, develop, and evaluate the science and innovations necessary for Amazon, our customers, and partners to meet their long-term sustainability goals and commitments.

We're seeking a Senior Principal Scientist for Sustainability and Climate AI to drive technical strategy and innovation for our long-term sustainability and climate commitments through AI and ML. You will serve as the strategic technical advisor to science, emerging-tech, and Climate Pledge partners operating at the Director, VP, and SVP levels. You will set the next-generation modeling standards for the team and tackle the most nascent and complex modeling problems, following the latest sustainability and climate science. Staying hyper-current with emergent sustainability/climate science and machine learning trends, you'll be trusted to translate recommendations to leadership and be the voice of our interpretation. You will nurture a continuous-delivery culture to embed informed, science-based decision-making into existing mechanisms such as decarbonization strategies, ESG compliance, and risk management. You will also have the opportunity to collaborate with the Climate Pledge team to define strategies based on emergent science and tech trends and to influence investment strategy. As a leader on this team, you'll play a key role in worldwide sustainability organizational planning, hiring, mentorship, and leadership development.

If you see yourself as a thought leader and innovator at the intersection of climate science and tech, we’d like to connect with you.


Apply

Redmond, Washington, United States


Overview: We are seeking a Principal Research Engineer to join our organization and help improve the steerability and control of Large Language Models (LLMs) and other AI systems. Our team currently develops Guidance, a fully open-source project that enables developers to control language models more precisely and efficiently with constrained decoding.

As a Principal Research Engineer, you will play a crucial role in advancing the frontier of constrained decoding and imagining new application programming interfaces (APIs) for language models. If you're excited about links between formal grammars and generative AI, deeply understanding and optimizing LLM inference, enabling more responsible AI without finetuning and RLHF, and/or exploring fundamental changes to the "text-in, text-out" API, we'd love to hear from you. Our team offers a vibrant environment for cutting-edge, multidisciplinary research. We have a long track record of open-source code and open publication policies, and you'll have the opportunity to collaborate with world-leading experts across Microsoft and at top academic institutions around the world.
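
To make the constrained decoding theme concrete, here is a minimal, hedged sketch of the kind of control a Guidance-style library provides. The calls below assume a recent release of the open-source guidance package and a small, locally loadable Transformers checkpoint; they are illustrative only, not a description of the team's internal work.

    # Minimal sketch of constrained decoding with the open-source `guidance` library
    # (API assumed from a recent release; illustrative only).
    from guidance import models, gen, select

    # Any Transformers-compatible checkpoint works; "gpt2" is just a small example.
    lm = models.Transformers("gpt2")

    # Constrain the model's continuation to a fixed set of options ...
    lm += "Is Seattle in Washington state? Answer: " + select(["yes", "no"])

    # ... or to a regular expression (here, a three-digit number).
    lm += "\nPick a number between 100 and 999: " + gen(regex=r"[1-9][0-9]{2}", max_tokens=4)

    print(str(lm))

The point of constrained decoding is that the model can only emit tokens consistent with the grammar or regex, so the output is valid by construction rather than by post-hoc filtering.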

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities:
• Develop and implement new constrained decoding research techniques for increasing LLM inference quality and/or efficiency. Example areas of interest include speculative execution, new decoding strategies (e.g., extensions to beam search), “classifier in the loop” decoding for responsible AI, improving AI planning, and explorations of attention-masking-based constraints.
• Re-imagine the use and construction of context-free grammars (CFGs) and beyond to fit generative AI. Examples of improvements here include better tools for constructing formal grammars, extensions to Earley parsing, and efficient batch processing for constrained generation. How these techniques are presented to developers (who may not be well versed in grammars and constrained generation) in an intuitive, idiomatic programming syntax is also top of mind.
• Design principled evaluation frameworks and benchmarks for measuring the effects of constrained decoding on a model. Areas of interest to study carefully include efficiency (token throughput and latency), generation quality, and the impact of constrained decoding on AI safety.
• Publish your research at top AI conferences and contribute your research advances to the Guidance open-source project.

Other:
• Embody our Culture and Values


Apply

B GARAGE was founded in 2017 by two PhD graduates from Stanford University. After having spent over five years researching robotics, computer vision, aeronautics, and drone autonomy, the co-founders set their minds on building a future where aerial robots would become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

The B GARAGE team is always looking for enthusiastic, proactive, and collaborative Robotics and Automation Engineers to support the launch of intelligent aerial robots and autonomously sustainable ecosystems.

If you're interested in joining the B GARAGE team but don't see an open role that fits your background, apply to the general application and we'll reach out to discuss your career goals.


Apply

Location: Sunnyvale, CA; Bellevue, WA; Seattle, WA


Description: The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

The Autonomy Software Metrics team is responsible for providing engineers and leadership at Zoox with tools to evaluate the behavior of Zoox's autonomy stack using simulation. The team collaborates with experts across the organization to ensure a high safety bar, a great customer experience, and rapid feedback to developers. The metrics team evaluates the complete end-to-end customer experience through simulation, covering factors that impact safety, comfort, legality, road citizenship, progress, and more. You'll be part of a passionate team making transportation safer, smarter, and more sustainable. This role gives you high visibility within the company and is critical to successfully launching our autonomous driving software.


Apply