



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.


New York, United States


Overview Microsoft Research New York City (MSR NYC) is seeking applicants for a senior researcher position focusing on representation learning and efficient decision making with learned representations, within the broader area of machine learning (ML) and artificial intelligence (AI) and in particular interactive learning; this includes deep learning with large foundation models over actions, and reinforcement learning.

Researchers in the ML/AI group cover a breadth of focus areas and research methodologies/approaches, spanning theoretical and empirical ML. We appreciate candidates with the potential to leverage/enhance the work of others in the group.

As a senior researcher, you will interact with our group's diverse array of researchers and practitioners, and contribute to ongoing research projects. We collaborate extensively with groups at other MSR locations and across Microsoft.

Microsoft Research (MSR) offers an exhilarating and supportive environment for cutting-edge, multidisciplinary research, both theoretical and empirical, with access to an extraordinary diversity of data sources, an open publications policy, and close links to top academic institutions around the world.

Applicants should have an established research track record, evidenced by conference or journal publications (or equivalent pieces of writing) and broader contributions to the research community. Applicants must have fulfilled their PhD degree requirements, including submission of their dissertation, prior to joining MSR NYC.

We are committed to building an inclusive, diverse, and pluralistic research environment and encourage applications from people of all backgrounds. We work collectively to make Microsoft Research a welcoming and productive space for all researchers.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more, and we are dedicated to this mission across every aspect of our company. Our culture is centered on embracing a growth mindset and encouraging teams and leaders to bring their best each day. Join us and help shape the future of the world.

Responsibilities As a senior researcher, you define your own research agenda in collaboration with other researchers, driving forward an effective program of basic, fundamental, and applied research. We highly value collaboration and building new ideas with members of the group and others. You may also have the direct opportunity to realize your ideas in products and services used worldwide.


Apply

Location Seattle, WA


Description Futures Design is the advanced concept design and incubation team within Amazon’s Device and Services Design Group (DDG). We are responsible for exploring and defining think (very) big opportunities globally and locally — so that we can better understand how new products and services might enrich the lives of our customers and so that product teams and leaders can align on where we're going and why we're going there. We focus on a 3–10+ year time frame, with the runway to invent and design category-defining products and transformational customer experiences. Working with Amazon business and technology partners, we use research, design, and prototyping to guide early product development, bring greater clarity to engineering goals, and develop a UX-grounded point of view.

We're looking for a Principal Design Technologist to join the growing DDG Futures Design team. You thrive in ambiguity and paradigm shifts, remaking assumptions of how customers engage, devices operate, and builders create. You apply deep expertise that spans design, technology, and product, grounding state-of-the-art emerging technologies through storytelling and a maker mindset. You learn and adapt technology trends to enduring customer problems through customer empathy, code, and iterative experimentation.

You will wear multiple hats to quickly assimilate customer problems, convert them to hypotheses, and test them using efficient technologies and design methods to build stakeholder buy-in. You’ll help your peers unlock challenging scenarios and mature the design studio’s ability to deliver design at scale across a breadth of devices and interaction modalities. You will work around limitations and push capabilities through your work. Your curiosity will inspire those around you and facilitate team growth, while your hands-on, collaborative nature will build trust with your peers and studio partners.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack, and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state of the art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering (a toy super-resolution sketch follows this list).

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.
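
As a toy illustration of one item in the list above (super resolution), here is a minimal single-frame upscaling module in PyTorch. It is a generic sketch for readers unfamiliar with the technique, not Captions' model; the class name, layer widths, and scale factor are hypothetical placeholders.

    # Toy single-frame super-resolution block (illustrative only; not Captions' model).
    import torch
    import torch.nn as nn

    class TinySR(nn.Module):
        """Upscale an RGB frame by `scale` with a small conv body and a PixelShuffle head."""
        def __init__(self, scale=2, channels=32):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            )
            self.upsample = nn.Sequential(
                nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
                nn.PixelShuffle(scale),
            )

        def forward(self, x):                   # x: (batch, 3, H, W)
            return self.upsample(self.body(x))  # -> (batch, 3, H*scale, W*scale)

    frame = torch.rand(1, 3, 180, 320)          # a low-resolution video frame
    print(TinySR(scale=2)(frame).shape)         # torch.Size([1, 3, 360, 640])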

Preferred Qualifications:

PhD in computer science or a related field, or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits:

  • Comprehensive medical, dental, and vision plans
  • Anything you need to do your best work
  • We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville, with more planned in the future

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Redmond, Washington, United States


Overview We are seeking a skilled and passionate Senior Research Scientist to join our Responsible & Open AI Research (ROAR) team in Azure Cognitive Services in Redmond, WA.

As a Senior Research Scientist, you will play a key role in advancing Responsible AI approaches to ensure safe releases of rapidly evolving multimodal AI models such as GPT-4 Vision, DALL-E, Sora, and beyond, as well as to expand and enhance the Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Conduct cutting-edge research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of multimodal AI risks.
  • Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.
  • Enable the safe release of multimodal models from OpenAI in Azure OpenAI Service.
  • Expand and enhance the Azure AI Content Safety Service with new detection technologies.
  • Develop innovative approaches to address AI safety challenges for diverse customer scenarios.
  • Embody our Culture and Values


Apply

Location Seattle, WA New York, NY


Description We are looking for an Applied Scientist to join our Seattle team. As an Applied Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. Our team solves a broad range of problems, from natural knowledge understanding of third-party shoppable content and product and content recommendation to social media influencers and their audiences, to determining optimal compensation for creators and mitigating fraud. We generate deep semantic understanding of the photos and videos in shoppable content created by our creators for efficient processing and appropriate placements for the best customer experience. For example, you may lead the development of reinforcement learning models such as multi-armed bandits (MAB) to rank the content and products shown to influencers. To achieve this, a deep understanding of the quality and relevance of content must be established through ML models that provide those contexts for ranking.
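
For readers unfamiliar with multi-armed bandits, the sketch below is a minimal Bernoulli Thompson-sampling ranker in Python. It is a generic illustration, not a description of Amazon's systems; the arm count, reward signal, and click simulator are hypothetical placeholders.

    # Minimal Bernoulli Thompson-sampling bandit for ranking content candidates.
    # Hypothetical sketch: arms, rewards, and the click simulator are placeholders.
    import numpy as np

    class ThompsonSamplingRanker:
        def __init__(self, n_arms, seed=0):
            self.rng = np.random.default_rng(seed)
            self.alpha = np.ones(n_arms)  # Beta-posterior successes + 1
            self.beta = np.ones(n_arms)   # Beta-posterior failures + 1

        def rank(self, k):
            """Sample a click-rate estimate per arm and return the top-k arm indices."""
            samples = self.rng.beta(self.alpha, self.beta)
            return np.argsort(-samples)[:k]

        def update(self, arm, reward):
            """Update one arm's posterior with a binary reward (e.g. click / no click)."""
            self.alpha[arm] += reward
            self.beta[arm] += 1 - reward

    # Toy usage: 10 content items with unknown true click-through rates.
    true_ctr = np.linspace(0.05, 0.30, 10)
    ranker = ThompsonSamplingRanker(n_arms=10)
    sim = np.random.default_rng(1)
    for _ in range(5000):
        for arm in ranker.rank(k=3):                       # show the top 3 items
            ranker.update(arm, sim.binomial(1, true_ctr[arm]))
    print("posterior mean CTRs:", ranker.alpha / (ranker.alpha + ranker.beta))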

In order to be successful in our team, you need a combination of business acumen, broad knowledge of statistics, a deep understanding of ML algorithms, and an analytical mindset. You thrive in a collaborative environment and are passionate about learning. Our team utilizes a variety of AWS tools such as SageMaker, S3, and EC2, along with a variety of skill sets in shallow and deep ML models, particularly in NLP and CV. You will bring knowledge in many of these domains along with your own specialties.


Apply

Location Mountain View, CA


Description Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role: We are seeking passionate Senior/Staff Software Engineers who have strong fundamentals in software development practices and are experts in C++ in a production-oriented environment. The ideal candidate is a highly experienced C++ developer with a passion for enabling the world's first safe, reliable & efficient network of autonomous vehicles. You will partner with the research and software engineers to design, develop, test and validate AV features for our autonomous fleet.

This role will be onsite at our Mountain View office.

What you'll do:
+ Design, implement, integrate, and support real-time mission-critical software for Gatik’s autonomy stack
+ Work with the research engineers to develop maintainable, testable and robust software designs
+ Architect and implement solutions to complex issues between components partitioned across the large software stack
+ Be at the forefront of guiding and ensuring best SDLC practices while contributing to improving safety in the core autonomy stack
+ Collaborate with the Infrastructure and DevOps teams for efficient, secure and scalable software delivery to a network of Gatik’s autonomous fleet
+ Guide and mentor autonomy researchers and algorithm developers to make sure their components run efficiently and with optimal compute and memory usage
+ Review and refine technical requirements and translate them into high-level designs and plans to support the development of safe AV technology
+ Conduct code and design reviews and advise on technical matters

Click the apply button below to see the full job description and apply


Apply

Location San Francisco, CA


Description Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

You will be managing a team within the Music Machine Learning and Personalization organization that is responsible for developing, training, serving and iterating on models used for personalized candidate generation for both Music and Podcasts.


Apply

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

The AI Platform team at Figma is working on an exciting mission of expanding the frontiers of AI for creativity, and developing magical experiences in Figma products. This involves making existing features like search smarter, and incorporating new features using cutting edge Generative AI and deep learning techniques. We’re looking for engineers with a background in Machine Learning and Artificial Intelligence to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • Drive fundamental and applied research in ML/AI using Generative AI, deep learning and classical machine learning, with Figma product use cases in mind.
  • Formulate and implement new modeling approaches both to improve the effectiveness of Figma’s current models as well as enable the launch of entirely new AI-powered product features.
  • Work in concert with other ML researchers, as well as product and infrastructure engineers to productionize new models and systems to power features in Figma’s design and collaboration tool.
  • Expand the boundaries of what is possible with the current technology set and experiment with novel ideas.
  • Publish scientific work on problems relevant to Figma in leading conferences such as ICML, NeurIPS, and CVPR.

We'd love to hear from you if you have:

  • Recently obtained or are in the process of obtaining a PhD in AI, Computer Science or a related field. The degree must be completed prior to starting at Figma.
  • Demonstrated expertise in machine learning with a publication record in relevant conferences, or a track record in applying machine learning techniques to products.
  • Experience in Python and machine learning frameworks (such as PyTorch, TensorFlow or JAX).
  • Experience building systems based on deep learning, natural language processing, computer vision, and/or generative models.
  • Experience solving sophisticated problems and comparing alternative solutions, trade-offs, and diverse points of view to determine a path forward.
  • Experience communicating and working across functions to drive solutions.

While not required, it’s an added plus if you also have:

  • Experience working in industry on relevant AI projects through internships or past full time work.
  • Publications on recent advances in AI such as large language models (LLMs), vision-language models (VLMs), or diffusion models.

Apply

Location San Diego


Description

Qualcomm AI Research is looking for world-class algorithm engineers in general-domain machine learning, especially deep learning and generative AI (LLMs and LVMs). Come join a high-caliber team of engineers building advanced machine learning technology, best-in-class solutions, and user-friendly model optimization tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet) to enable state-of-the-art networks to run on devices with limited power, memory, and computation.

Members of our team enjoy the opportunity to participate in cutting-edge research while simultaneously contributing technology that will be deployed worldwide in our industry-leading devices. You will be part of a talented multi-disciplinary team working on on-device generative AI optimization. Collaborate in a cross-functional environment spanning hardware, software and systems. See your designs in action on industry-leading chips embedded in the next generation of smartphones, autonomous vehicles, robotics, and IoT devices.

Minimum Qualifications:

  • Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
  • Master's degree in Computer Science, Engineering, Information Systems, or a related field and 3+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
  • PhD in Computer Science, Engineering, Information Systems, or a related field and 2+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience.

The R&D work for this position focuses on the following:

  • Algorithms research and development in the area of Generative AI, LVMs, LLMs, and multi-modality
  • Efficient inference algorithms research and development, e.g. batching, KV caching, efficient attention, long context, speculative decoding
  • Advanced quantization algorithms research and development for complex generative models, e.g. gradient/non-gradient-based optimization, equivalent/non-equivalent transformation, automatic mixed precision, hardware in the loop (a toy quantization sketch follows this listing)
  • Model compression, lossy or lossless, structural and neural search
  • Optimization-based learning and learning-based optimization
  • Generative AI system prototyping
  • Applying solutions toward system innovations for model efficiency advancement on device as well as in the cloud
  • Strong Python and PyTorch programming

Preferred Qualifications:

  • Master's degree in Computer Science, Engineering, Information Systems, or a related field; a PhD is preferred.
  • 2+ years of experience with Machine Learning algorithms, systems engineering, or related work experience.
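
To make the quantization bullet above concrete, the sketch below shows the simplest possible variant: symmetric per-tensor int8 weight quantization in PyTorch. It is a generic illustration and does not use or represent the AIMET API; the function names and the toy error check are placeholders.

    # Illustrative symmetric per-tensor int8 weight quantization (not the AIMET API).
    import torch

    def quantize_int8(w: torch.Tensor):
        """Quantize a float tensor to int8 with a single symmetric scale."""
        scale = w.abs().max() / 127.0
        q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
        return q, scale

    def dequantize(q: torch.Tensor, scale: torch.Tensor):
        return q.float() * scale

    # Toy check on a random weight matrix.
    w = torch.randn(256, 256)
    q, scale = quantize_int8(w)
    err = (w - dequantize(q, scale)).abs().mean()
    print(f"mean absolute quantization error: {err.item():.5f}")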


Apply

Location Seattle, WA


Description To help a growing organization quickly deliver more efficient features to Prime Video customers, Prime Video’s READI organization is innovating on behalf of our global software development team consisting of thousands of engineers. The READI organization is building a team specialized in forecasting and recommendations. This team will apply supervised learning algorithms for forecasting multi-dimensional related time series using recurrent neural networks. The team will develop forecasts on key business dimensions and recommendations on performance and efficiency opportunities across our global software environment.
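
As a generic illustration of RNN-based forecasting of multi-dimensional time series (not Prime Video's actual models), here is a minimal GRU forecaster in PyTorch trained on synthetic random-walk data; the window length, hidden size, and data are hypothetical placeholders.

    # Minimal GRU forecaster for multivariate time series (synthetic placeholder data).
    import torch
    import torch.nn as nn

    class GRUForecaster(nn.Module):
        def __init__(self, n_series, hidden=64):
            super().__init__()
            self.gru = nn.GRU(input_size=n_series, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_series)

        def forward(self, x):                  # x: (batch, time, n_series)
            out, _ = self.gru(x)
            return self.head(out[:, -1, :])    # predict the next step for every series

    # Toy training loop: predict step t+1 from the previous 24 steps of 8 related series.
    torch.manual_seed(0)
    n_series, window = 8, 24
    data = torch.cumsum(torch.randn(2000, n_series), dim=0)   # random-walk series
    X = torch.stack([data[i:i + window] for i in range(len(data) - window)])
    y = data[window:]

    model = GRUForecaster(n_series)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(5):
        loss = loss_fn(model(X), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"epoch {epoch}: mse={loss.item():.3f}")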

As a member of the team, you will apply your deep knowledge of machine learning to concrete problems that have broad cross-organizational, global, and technology impact. Your work will focus on retrieving, cleansing and preparing large-scale datasets, training and evaluating models, and deploying them for customers, where we continuously monitor and evaluate. You will work on large engineering efforts that solve significantly complex problems facing global customers. You will be trusted to operate with complete independence and are often assigned to focus on areas where the business and/or architectural strategy has not yet been defined. You must be equally comfortable digging into business requirements as you are drilling into designs with development teams and developing ready-to-use learning models. You consistently bring strong, data-driven business and technical judgment to decisions.


Apply

Location Sunnyvale, CA Bellevue, WA Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

Gothenburg, Sweden

This fully-funded PhD position offers an opportunity to delve into the area of geometric deep learning within the broader landscape of machine learning and 3D computer vision. As a candidate, you'll have the chance to develop theoretical concepts and innovative methodologies while contributing to real-world imaging applications. Moreover, you will enjoy working in a diverse, collaborative, supportive and internationally recognized environment.

The PhD project centers on understanding and improving deep learning methods for 3D scene analysis and 3D generative diffusion models. We aim to explore new ways of encoding symmetries in deep learning models in order to scale up computations, a necessity for realizing truly 3D generative models for general scenes. We also aim to explore the application of these models to key problems involving novel view synthesis and self-supervised learning.
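
As a small, generic illustration of what encoding symmetries can mean in practice (a toy example, not the group's method), the sketch below builds an SE(3)-invariant point-cloud descriptor in PyTorch and numerically checks that it is unchanged under a random rigid motion.

    # Toy numerical check of rigid-motion invariance for a point-cloud feature (illustrative only).
    import torch

    def invariant_feature(points):
        """A simple SE(3)-invariant descriptor: sorted pairwise distances of a point cloud."""
        d = torch.cdist(points, points)        # (N, N) pairwise distances
        return torch.sort(d.flatten()).values  # invariant to rotations and translations

    def random_rotation():
        """Random 3x3 rotation matrix via QR decomposition, sign-fixed to determinant +1."""
        q, r = torch.linalg.qr(torch.randn(3, 3))
        q = q * torch.sign(torch.diagonal(r))  # make the factorization unique
        if torch.det(q) < 0:
            q[:, 0] = -q[:, 0]
        return q

    points = torch.randn(128, 3)
    R, t = random_rotation(), torch.randn(3)
    f1 = invariant_feature(points)
    f2 = invariant_feature(points @ R.T + t)
    print("max deviation under a rigid motion:", (f1 - f2).abs().max().item())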

If you are interested and attending CVPR, feel free to reach out to Prof. Fredrik Kahl, head of the Computer Vision Group.


Apply

The Perception team at Zoox is responsible for developing the eyes and ears of our self-driving car. Navigating safely and competently in the world requires us to detect, classify, track, and understand several different attributes of all the objects around us that we might interact with, all in real time and with very high precision.

As a member of the Perception team at Zoox, you will be responsible for developing and improving state-of-the-art machine learning techniques for everything from 2D/3D object detection, panoptic segmentation, and tracking to attribute classification. You will be working not just with our team of talented engineers and researchers in perception, but cross-functionally with several teams including sensors, prediction, and planning, and you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms.


Apply

A postdoctoral position is available in Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow the NIH guideline commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total, including eight 80-GB Nvidia H100 GPUs, ten 48-GB Nvidia RTX A6000 GPUs, and four Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with a background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology, or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve them

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with the subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. This project will take place in close collaboration between the Inria THOTH team and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others. Therefore, anticipating the partner's actions is crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project, we are particularly interested in estimating human intentions to enable collaborative tasks between humans and robots, such as human-to-robot and robot-to-human handovers.

Contact: pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda Pineda (RobotLearn).


Apply