



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category and location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotive, and life-like synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions, which are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing and release, and you have a passion for large data, large model training and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses at a rapid pace across the globe.


Apply

B GARAGE was founded in 2017 by two PhD graduates from Stanford University. After having spent over five years researching robotics, computer vision, aeronautics, and drone autonomy, the co-founders set their minds on building a future where aerial robots would become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

The B GARAGE team is always looking for enthusiastic, proactive, and collaborative Robotics and Automation Engineers to support the launch of intelligent aerial robots and autonomously sustainable ecosystems.

If you're interested in joining the B Garage team but don't see a role open that fits your background, apply to the general application and we'll reach out to discuss your career goals.


Apply

About the role

As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible both for annotating in-house datasets and for ensuring the quality of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will need to be comfortable working with standard tools and workflows for data annotation and be able to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note, this role is office-based. You will be working at our modern, friendly office in the very heart of London.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Sr. Fullstack Engineer, you will work on our platform engineering team playing a crucial role in enabling our research engineers to fine-tune our foundation models and streamline the machine learning process for our autonomous technology. You will work on developing products that empower our internal teams to maximize efficiency and innovation in our product. Specifically, you will:

  • Build mission-critical tools for improving observability and scaling the entire machine-learning process.
  • Use modern technologies to serve huge amounts of data, visualize key metrics, manage our data inventory, trigger backend data processing pipelines, and more.
  • Work closely with people across the company to create a seamless UI experience.

Apply

Location Palo Alto, CA


Description

Amazon is looking for talented Postdoctoral Scientists to join our Stores Foundational AI team for a one-year, full-time research position.

The Stores Foundational AI team builds foundation models for multiple Amazon entities, such as ASIN, customer, seller, and brand. These foundation models are used in downstream applications by various partner teams in Stores. Our team also invests in building a foundation model for image generation, optimized for product image generation. We leverage the latest developments to create our solutions and innovate to push the state of the art.

The Postdoc is expected to conduct research and build state-of-the-art algorithms in video understanding and representation learning in the era of LLMs. Specifically, this includes:

  • Designing efficient algorithms to learn accurate representations for videos
  • Building extensive video understanding capabilities, including various content classification tasks
  • Designing algorithms that can generate high-quality videos from a set of product images
  • Improving the quality of our foundation models along the following dimensions: robustness, interpretability, fairness, sustainability, and privacy


Apply

Vancouver, British Columbia, Canada


Overview

Microsoft Research (MSR) is a leading industrial research laboratory comprising over 1,000 computer scientists working across the United States, United Kingdom, China, India, Canada, and the Netherlands.

We are currently seeking a Researcher in the area of Artificial Specialized Intelligence, located in Vancouver, British Columbia, with a keen interest in developing cutting-edge large foundation models and post-training techniques for different domains and scenarios. This is an opportunity to drive an ambitious research agenda while collaborating with diverse teams to push for novel applications of those areas.

Over the past 30 years, our scientists have not only conducted world-class computer science research but also integrated advanced technologies into our products and services, positively impacting millions of lives and propelling Microsoft to the forefront of digital transformation.

Responsibilities

  • Conduct cutting-edge research in large foundation models, focusing on applying them in specific domains.
  • Collaborate with cross-functional teams to integrate solutions into Artificial Intelligence (AI)-driven systems.
  • Develop and maintain research prototypes and software tools, ensuring that they are well documented and adhere to best practices in software development.
  • Publish research findings in top-tier conferences and journals and present your work at industry events.
  • Collaborate with other AI researchers and engineers, sharing knowledge and expertise to foster a culture of innovation and continuous learning within the team.


Apply

Location Niskayuna, NY


Description

At GE Aerospace Research, our team develops advanced embedded systems technology for the future of flight. Our technology will enable sustainable air travel and next-generation aviation systems for use in commercial as well as military applications. As a Lead Embedded Software Engineer, you will architect and develop state-of-the-art embedded systems for real-time controls and communication applications. You will lead and contribute to advanced research and development programs for GE Aerospace as well as with U.S. Government agencies. You will collaborate with fellow researchers from a range of technology disciplines, contributing to projects across the breadth of GE Aerospace programs.

Essential Responsibilities: As a Lead Embedded Software Engineer, you will:

  • Work independently as well as with a team to develop and apply advanced software technologies for embedded controls and communication systems for GE Aerospace products
  • Interact with hardware suppliers and engineering tool providers to identify the best solutions for the most challenging applications
  • Lead small to medium-sized projects or tasks
  • Document technology and results through patent applications, technical reports, and publications
  • Expand your expertise by staying current with advances in embedded software to seek out new ideas and applications
  • Collaborate in a team environment with colleagues across GE Aerospace and government agencies

Qualifications/Requirements:

  • Bachelor’s degree in Electrical Engineering, Computer Science, or related disciplines with a minimum of 7 years of industry experience; OR a master’s degree in the same disciplines with a minimum of 5 years of industry experience; OR a Ph.D. in the same disciplines with a minimum of 3 years of industry experience
  • Strong background in software development for embedded systems (e.g., x86, ARM)
  • Strong embedded programming skills in languages such as C/C++, Python, and Rust
  • Familiarity with CNSA and NIST cryptographic algorithms
  • Willingness to travel a minimum of 2 weeks per year
  • Ability to obtain and maintain a US Government security clearance; US citizenship required
  • Must be willing to work out of an office located in Niskayuna, NY
  • You must submit your application for employment on the careers page at www.gecareers.com to be considered

Ideal Candidate Characteristics:

  • Coding experience with Bash, Python, C#, MATLAB, ARMv8 assembly, RISC-V assembly
  • Experience with embedded devices from Intel, AMD, Xilinx, NXP, etc.
  • Experience with hardware-based security (e.g., UEFI, TPM, ARM TrustZone, Secure Boot)
  • Understanding of embedded system security requirements and security techniques
  • Experience with Linux OS and Linux security
  • Experience with OpenSSL and/or wolfSSL
  • Experience with wired and wireless networking protocols or network security
  • Knowledge of 802.1, 802.3, and/or 802.11 standards
  • Experience with software-defined networking (SDN) and relevant software such as OpenFlow, Open vSwitch, or Mininet
  • Hands-on experience with embedded hardware (such as protoboards) or networking equipment (such as switches and analyzers) in a laboratory setting
  • Experience with embedded development in an RTOS environment (e.g., VxWorks, FreeRTOS)
  • Demonstrated ability to take an innovative idea from concept to product
  • Experience with the Agile methodology of program management

The base pay range for this position is $90,000 - $175,000 USD annually. The specific pay offered may be influenced by a variety of factors, including the candidate’s experience, education, and skill set. This position is also eligible for an annual discretionary bonus based on a percentage of your base salary. This posting is expected to close on June 16, 2024.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Machine Learning Engineer (MLE), you will collaborate with researchers to perform research operations using existing infrastructure. You will use your judgment in complex scenarios and help apply standard techniques to various technical problems. Specifically, you will:

  • Characterize neural network quality, failure modes, and edge cases based on research data
  • Maintain awareness of current trends in relevant areas of research and technology
  • Coordinate with researchers and accurately convey the status of experiments
  • Manage a large number of concurrent experiments and make accurate time estimates for deadlines
  • Review experimental results and suggest theoretical or process improvements for future iterations
  • Write technical reports indicating qualitative and quantitative results to external parties

Apply

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

We’re looking for engineers with a Machine Learning and Artificial Intelligence background to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • Drive fundamental and applied research in ML/AI, exploring the boundaries of what is possible with the current technology set.
  • Combine industry best practices and a first-principles approach to design and build ML models.
  • Work in concert with product and infrastructure engineers to improve Figma’s design and collaboration tool through ML-powered product features.

We'd love to hear from you if you have:

  • 5+ years of experience in programming languages (Python, C++, Java or R)
  • 3+ years of experience in one or more of the following areas: machine learning, natural language processing/understanding, computer vision, generative models
  • Proven experience researching, building and/or fine-tuning ML models in production environments
  • Experience communicating and working across functions to drive solutions

While not required, it’s an added plus if you also have:

  • Proven track record of planning multi-year roadmap in which shorter-term projects ladder to the long-term vision.
  • Experience in mentoring/influencing senior engineers across organizations.

Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer, you will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who would enjoy applying their skills to deeply complex and novel AI problems. Specifically, you will:

  • Apply and extend the Helm proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
  • Carefully execute the development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole
  • Work closely with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms

Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack, and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state of the art in generative computer vision, with a focus on creating highly realistic digital faces, bodies, and avatars.

Strive to set new standards in the realism of 3D digital human appearance, movement, and personality, ensuring that generated content closely resembles real-life scenarios.

Implement techniques to achieve high-quality results in zero-shot or few-shot settings, as well as customized avatars for different use cases while maintaining speed and accuracy.

Develop innovative solutions to enable comprehensive customization of video content, including the creation of digital people, modifying scenes, and manipulating actions and speech within videos.

Preferred Qualifications:

PhD in computer science (or a related field) and/or 5+ years of industry experience.

Strong academic background with a focus on computer vision and transformers, specializing in NeRFs, Gaussian Splatting, Diffusion, GANs or related areas.

Publication Record: Highly relevant publication history, with a focus on generating or manipulating realistic digital faces, bodies, expressions, body movements, etc. Ideal candidates will have served as the primary author on these publications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for multimodal tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

As a systems engineer for perception safety, your primary responsibility will be to define and ensure the safety performance of the perception system. You will be working in close collaboration with perception algorithm and sensor hardware development teams.


Apply

London


Who are we?

Our team is the first in the world to drive autonomous vehicles on public roads using end-to-end deep learning, computer vision and reinforcement learning. Leveraging our multi-national, world-class team of researchers and engineers, we’re using data to learn more intelligent algorithms and bring autonomy to everyone, everywhere. We aim to be the future of self-driving cars, learning from experience and data.

Where you’ll have an impact

We are currently looking for people with research expertise in AI applied to autonomous driving or a similar robotics or decision-making domain, including, but not limited to, the following areas:

  • Foundation models for robotics
  • Model-free and model-based reinforcement learning
  • Offline reinforcement learning
  • Large language models
  • Planning with learned models, model predictive control and tree search
  • Imitation learning, inverse reinforcement learning and causal inference
  • Learned agent models: behavioral and physical models of cars, people, and other dynamic agents

You'll be working on some of the world's hardest problems, and be able to attack them in new ways. You'll be a key member of our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

  • How to leverage our large, rich, and diverse sources of real-world driving data
  • How to architect our models to best employ the latest advances in foundation models, transformers, world models, etc.
  • Which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
  • How to leverage simulation for controlled experimental insight, training data augmentation, and re-simulation
  • How to scale models efficiently across data, model size, and compute, while maintaining efficient deployment on the car

You will also have the potential to contribute to academic publications at top-tier conferences such as NeurIPS, CVPR, ICRA, ICLR, and CoRL, working in a world-class team to achieve this.

What you’ll bring to Wayve

  • Thorough knowledge of, and 5+ years of applied experience in, AI research, computer vision, deep learning, reinforcement learning or robotics
  • Ability to deliver high-quality code and familiarity with deep learning frameworks (Python and PyTorch preferred)
  • Experience leading a research agenda aligned with larger goals
  • Industrial and/or academic experience in deep learning, software engineering, automotive or robotics
  • Experience working with training data, metrics, visualisation tools, and in-depth analysis of results
  • Ability to understand, author and critique cutting-edge research papers
  • Familiarity with code review, C++, Linux and Git is a plus
  • A PhD in a relevant area and/or a track record of delivering value through machine learning is a big plus

What we offer you

  • Attractive compensation with salary and equity
  • Immersion in a team of world-class researchers, engineers and entrepreneurs
  • A unique position to shape the future of autonomy and tackle the biggest challenge of our time
  • Bespoke learning and development opportunities
  • Relocation support with visa sponsorship
  • Flexible working hours - we trust you to do your job well, at times that suit you
  • Benefits such as an onsite chef, workplace nursery scheme, private health insurance, therapy, daily yoga, onsite bar, large social budgets, unlimited L&D requests, enhanced parental leave, and more!


Apply

Redmond, Washington, United States


Overview

We are seeking a Principal Research Engineer to join our organization and help improve the steerability and control of Large Language Models (LLMs) and other AI systems. Our team currently develops Guidance, a fully open-source project that enables developers to control language models more precisely and efficiently with constrained decoding.

As a Principal Research Engineer, you will play a crucial role in advancing the frontier of constrained decoding and imagining new application programming interfaces (APIs) for language models. If you’re excited about links between formal grammars and generative AI, deeply understanding and optimizing LLM inference, enabling more responsible AI without finetuning and RLHF, and/or exploring fundamental changes to the “text-in, text-out” API, we’d love to hear from you. Our team offers a vibrant environment for cutting-edge, multidisciplinary research. We have a long track record of open-source code and open publication policies, and you’ll have the opportunity to collaborate with world-leading experts across Microsoft and top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Develop and implement new constrained decoding research techniques for increasing LLM inference quality and/or efficiency. Example areas of interest include speculative execution, new decoding strategies (e.g. extensions to beam search), “classifier in the loop” decoding for responsible AI, improving AI planning, and explorations of attention-masking-based constraints.
  • Re-imagine the use and construction of context-free grammars (CFGs) and beyond to fit generative AI. Examples of improvements include better tools for constructing formal grammars, extensions to Earley parsing, and efficient batch processing for constrained generation. How these techniques are presented to developers (who may not be well versed in grammars and constrained generation) in an intuitive, idiomatic programming syntax is also top of mind.
  • Design principled evaluation frameworks and benchmarks for measuring the effects of constrained decoding on a model. Areas of interest include efficiency (token throughput and latency), generation quality, and the impact of constrained decoding on AI safety.
  • Publish your research in top AI conferences and contribute your research advances to the Guidance open-source project.
  • Embody our Culture and Values
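The token-masking idea at the heart of constrained decoding can be shown with a tiny self-contained sketch (an illustration only, not Guidance's actual implementation): at each decoding step, every token that would take the output outside the allowed language has its logit masked to negative infinity, so even a greedy model can only pick grammar-legal continuations. The vocabulary, fake logits, and `constrained_decode` helper below are all invented for the example.

```python
import math

# Toy "model": fixed scores for single-character tokens.
# In a real LLM these would be logits from a forward pass.
VOCAB = list("abcdefghijklmnopqrstuvwxyz")
FAKE_LOGITS = {tok: 1.0 for tok in VOCAB}
FAKE_LOGITS["t"] = 5.0  # unconstrained, the model would greedily emit "t"
FAKE_LOGITS["y"] = 2.0  # among legal tokens, "y" is preferred over "n"

def allowed_next(prefix, options):
    """Tokens that keep the output a prefix of at least one allowed option."""
    return {opt[len(prefix)] for opt in options
            if opt.startswith(prefix) and len(opt) > len(prefix)}

def constrained_decode(options):
    """Greedy decoding with per-step token masking (a select-style constraint)."""
    out = ""
    while out not in options:
        legal = allowed_next(out, options)
        # Mask every illegal token's logit to -inf, then take the argmax.
        masked = {tok: (FAKE_LOGITS[tok] if tok in legal else -math.inf)
                  for tok in VOCAB}
        out += max(masked, key=masked.get)
    return out

print(constrained_decode(["yes", "no"]))  # prints "yes"
```

Here the unconstrained argmax would emit "t" (its highest logit), but the mask forces the model down a path that spells out a valid option. Systems like Guidance apply the same principle with a token-level parser over a full grammar rather than a fixed option list, which is where the Earley-parsing and batch-processing work described above comes in.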


Apply