



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category or location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

ASML US, including its affiliates and subsidiaries, brings together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, Netherlands, and we have 18 office locations around the United States, including main offices in Chandler, Arizona; San Jose and San Diego, California; Wilton, Connecticut; and Hillsboro, Oregon.

The Advanced Development Center at ASML in Wilton, Connecticut is seeking an Optical Data Analyst with expertise in image processing for metrology process development of ultra-high-precision optics and ceramics. The Advanced Development Center (ADC) is a multi-disciplinary group of engineers and scientists focused on developing learning-loop solutions, prototyping next-generation wafer and reticle clamping systems, and industrializing prototypes that meet the system performance requirements.

Role and Responsibilities The main job function is to develop image processing, data analysis, and machine learning algorithms and software to aid in the development of wafer and reticle clamping systems, solving challenging engineering problems associated with achieving nanometer (nm) scale precision. You will be part of the larger Development and Engineering (DE) sector, where the design and engineering of ASML products happens.

As an Optical Data Analyst, you will:

  • Develop and improve image processing algorithms to extract nm-level information from scientific imaging equipment (e.g. interferometers, SEM, AFM)
  • Integrate algorithms into an image processing software package for analysis and process development cycles for engineering and manufacturing users
  • Maintain a version-controlled software package for multiple product generations
  • Perform software testing to identify application, algorithm, and software bugs
  • Validate, verify, regression-test, and unit-test software to ensure it meets business and technical requirements
  • Use machine learning models to predict trends and behaviors relating to lifetime and manufacturing improvements of the product
  • Execute a plan of analysis, software, and systems to mitigate product and process risk and prevent software performance issues
  • Collaborate with the design team on software analysis tool development to find efficient solutions to difficult technical problems
  • Work with database structures and utilize their capabilities
  • Write software scripts to search, analyze, and plot data from databases
  • Support query code to interrogate data for manufacturing and engineering needs
  • Support image analysis on data and derive conclusions
  • Travel (up to 10%) to Europe, Asia, and within the US can be expected
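Much of the nm-level image analysis described above rests on sub-pixel estimation. As a hypothetical illustration (not ASML code; the function and data are invented for this example), one classic building block is three-point parabolic interpolation, which recovers a peak position between pixel samples by fitting a parabola through the peak and its two neighbours:

```python
# Hypothetical sketch: sub-pixel peak localization on a 1-D intensity
# profile via three-point parabolic interpolation. A camera only samples
# at integer pixel positions, but the vertex of a parabola fitted through
# the peak sample and its neighbours lands between pixels.

def subpixel_peak(profile):
    """Return the peak position of a 1-D intensity profile with
    sub-pixel precision."""
    i = max(range(len(profile)), key=profile.__getitem__)
    if i == 0 or i == len(profile) - 1:
        return float(i)  # peak on the boundary: no interpolation possible
    a, b, c = profile[i - 1], profile[i], profile[i + 1]
    denom = a - 2 * b + c
    if denom == 0:
        return float(i)  # flat top: fall back to the integer position
    # Vertex of the parabola through (i-1, a), (i, b), (i+1, c).
    return i + 0.5 * (a - c) / denom

# A peak that truly lies halfway between pixels 2 and 3:
print(subpixel_peak([0.0, 1.0, 4.0, 4.0, 1.0, 0.0]))  # -> 2.5
```

With a known pixel pitch, the fractional pixel offset converts directly into a physical displacement, which is how image data can resolve features far below the pixel size.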


Apply

Natick, MA, United States


The Company: Cognex is a global leader in the exciting and growing field of machine vision. This position is a hybrid role in our Natick, MA corporate HQ.

The Team: This position is for an experienced Software Engineer in the Core Vision Technology team at Cognex, focused on architecting and productizing the best-in-class computer vision algorithms and AI models that power Cognex’s industrial barcode readers and 2D vision tools. Our mission is to innovate on behalf of customers and make this technology accessible to a broad range of users and platforms. Our products combine custom hardware, specialized lighting and optics, and world-class vision algorithms/models to create embedded systems that can find and read high-density symbols on package labels or marked directly on a variety of industrial parts, including aircraft engines, electronics substrates, and pharmaceutical test equipment. Our devices need to read hundreds of codes per second, so speed-optimized hardware and software work together to create best-in-class technology. Companies around the world rely on Cognex vision tools and technology to guide assembly, automate inspection, and speed up production and distribution.

Job Summary: The Core Vision Technology team is seeking an experienced developer with deep knowledge of the software development life cycle, creative problem-solving skills, and solid design thinking, with a focus on productization of AI technology on embedded platforms. You will play the critical role of chief architect, leading the development and productization of computer vision AI models and algorithms on multiple Cognex products, with the goal of making the technology modular and available to a broad range of users and platforms. In this role, you will interface with machine vision experts in R&D, product, hardware, and other software engineering teams at Cognex. A successful individual will lead design discussions, make sound architectural choices for the future on different embedded platforms, advocate for engineering excellence, mentor junior engineers, and extend technical influence across teams. Prior experience with productization of AI technology is essential for this position.

Essential Functions:
-Develop and productize innovative vision algorithms, including AI models developed by the R&D team for detecting and reading challenging 1D and 2D barcodes, and vision tools for gauging, inspection, guiding, and identifying industrial parts.
-Lead software and API design discussions and make scalable technology choices meeting current and future business needs.
-More details in the link below

Minimum education and work experience required: MS or PhD in EE, CS, or an equivalent field from a top engineering school; 7+ years of relevant, high-tech work experience

If you would like to meet the hiring manager at CVPR to discuss this opportunity, please email ahmed.elbarkouky@cognex.com


Apply

Redmond, Washington, United States


Overview We are seeking a Principal Research Engineer to join our organization and help improve the steerability and control of Large Language Models (LLMs) and other AI systems. Our team currently develops Guidance, a fully open-source project that enables developers to control language models more precisely and efficiently with constrained decoding.

As a Principal Research Engineer, you will play a crucial role in advancing the frontier of constrained decoding and imagining new application programming interfaces (APIs) for language models. If you’re excited about links between formal grammars and generative AI, deeply understanding and optimizing LLM inference, enabling more responsible AI without finetuning and RLHF, and/or exploring fundamental changes to the “text-in, text-out” API, we’d love to hear from you. Our team offers a vibrant environment for cutting-edge, multidisciplinary research. We have a long track record of open-source code and open publication policies, and you’ll have the opportunity to collaborate with world-leading experts across Microsoft and top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Develop and implement new constrained decoding research techniques for increasing LLM inference quality and/or efficiency. Example areas of interest include speculative execution, new decoding strategies (e.g. extensions to beam search), “classifier in the loop” decoding for responsible AI, improving AI planning, and explorations of attention-masking based constraints.
  • Re-imagine the use and construction of context-free grammars (CFGs) and beyond to fit generative AI. Examples of improvements here include better tools for constructing formal grammars, extensions to Earley parsing, and efficient batch processing for constrained generation. How these techniques are presented to developers (who may not be well versed in grammars and constrained generation) in an intuitive, idiomatic programming syntax is also top of mind.
  • Design principled evaluation frameworks and benchmarks for measuring the effects of constrained decoding on a model. Areas of interest to study carefully include efficiency (token throughput and latency), generation quality, and the impact of constrained decoding on AI safety.
  • Publish your research in top AI conferences and contribute your research advances to the Guidance open-source project.
  • Embody our Culture and Values
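The constrained decoding idea at the heart of this work can be illustrated in a few lines: at each generation step, a grammar determines which tokens are legal, and the model's logits for every other token are masked out before sampling. The sketch below is a toy illustration of that mechanism only (the DFA, vocabulary, and fixed logit values are invented for the example; this is not Guidance's API):

```python
# Toy grammar-constrained greedy decoding. The "grammar" is a DFA
# accepting strings of the form digit+ ".", and the "model" is a fixed
# table of logits per step. At each step, tokens the DFA forbids are
# masked to -inf so they can never be chosen.

import math

VOCAB = ["1", "2", ".", "a"]

# DFA: state -> {allowed token: next state}; "done" is accepting.
DFA = {
    "start": {"1": "digits", "2": "digits"},
    "digits": {"1": "digits", "2": "digits", ".": "done"},
    "done": {},
}

def constrained_greedy(logits_per_step):
    state, out = "start", []
    for logits in logits_per_step:
        allowed = DFA[state]
        if not allowed:  # accepting state with no transitions: stop
            break
        # Mask disallowed tokens to -inf, then take the argmax.
        masked = [l if t in allowed else -math.inf
                  for t, l in zip(VOCAB, logits)]
        tok = VOCAB[masked.index(max(masked))]
        out.append(tok)
        state = allowed[tok]
    return "".join(out)

steps = [
    [0.1, 0.2, 0.3, 0.9],  # "a" scores highest but is masked out -> "2"
    [0.5, 0.1, 0.8, 0.2],  # "." is legal after a digit -> "."
]
print(constrained_greedy(steps))  # -> "2."
```

Real systems replace the hand-written DFA with a parser (e.g. Earley) over the token vocabulary and apply the mask inside the inference engine, but the output is guaranteed grammatical in exactly this way.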


Apply

London


Who are we?

Our team is the first in the world to deploy autonomous vehicles on public roads using end-to-end deep learning, computer vision and reinforcement learning. Leveraging our multi-national, world-class team of researchers and engineers, we use data to learn more intelligent algorithms and bring autonomy to everyone, everywhere. We aim to be the future of self-driving cars, learning from experience and data.

Where you’ll have an impact

We are currently looking for people with research expertise in AI applied to autonomous driving or a similar robotics or decision-making domain, including, but not limited to, the following specific areas:

  • Foundation models for robotics
  • Model-free and model-based reinforcement learning
  • Offline reinforcement learning
  • Large language models
  • Planning with learned models, model predictive control and tree search
  • Imitation learning, inverse reinforcement learning and causal inference
  • Learned agent models: behavioral and physical models of cars, people, and other dynamic agents

You'll be working on some of the world's hardest problems, and able to attack them in new ways. You'll be a key member of our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

  • How to leverage our large, rich, and diverse sources of real-world driving data
  • How to architect our models to best employ the latest advances in foundation models, transformers, world models, etc.
  • Which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
  • How to leverage simulation for controlled experimental insight, training data augmentation, and re-simulation
  • How to scale models efficiently across data, model size, and compute, while maintaining efficient deployment on the car

You will also have the potential to contribute to academic publications at top-tier conferences such as NeurIPS, CVPR, ICRA, ICLR and CoRL, working in a world-class team to achieve this.

What you’ll bring to Wayve

  • Thorough knowledge of, and 5+ years of applied experience in, AI research, computer vision, deep learning, reinforcement learning or robotics
  • Ability to deliver high-quality code and familiarity with deep learning frameworks (Python and PyTorch preferred)
  • Experience leading a research agenda aligned with larger goals
  • Industrial and/or academic experience in deep learning, software engineering, automotive or robotics
  • Experience working with training data, metrics, visualisation tools, and in-depth analysis of results
  • Ability to understand, author and critique cutting-edge research papers
  • Familiarity with code review, C++, Linux and Git is a plus
  • A PhD in a relevant area and/or a track record of delivering value through machine learning are a big plus

What we offer you

  • Attractive compensation with salary and equity
  • Immersion in a team of world-class researchers, engineers and entrepreneurs
  • A unique position to shape the future of autonomy and tackle the biggest challenge of our time
  • Bespoke learning and development opportunities
  • Relocation support with visa sponsorship
  • Flexible working hours - we trust you to do your job well, at times that suit you
  • Benefits such as an onsite chef, workplace nursery scheme, private health insurance, therapy, daily yoga, onsite bar, large social budgets, unlimited L&D requests, enhanced parental leave, and more!


Apply

As a systems engineer for perception safety, your primary responsibility will be to define and ensure the safety performance of the perception system. You will be working in close collaboration with perception algorithm and sensor hardware development teams.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer, you will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who would enjoy applying their skills to deeply complex and novel AI problems. Specifically, you will:

  • Apply and extend the Helm proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
  • Carefully execute the development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole
  • Work closely with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms

Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior of other agents and of our own vehicle, both on-vehicle and for offline applications. To solve these problems, we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior, and offline to provide learned models for autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team to advance our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

Location Multiple Locations


Description Today, more intelligence is moving to end devices, and mobile is becoming the pervasive AI platform. Building on the smartphone foundation and the scale of mobile, Qualcomm envisions making AI ubiquitous—expanding beyond mobile and powering other end devices, machines, vehicles, and things. We are inventing, developing, and commercializing power-efficient on-device AI, edge cloud AI, and 5G to make this a reality.

Job Purpose & Responsibilities As a member of Qualcomm’s ML Systems Team, you will participate in two activities:

  • Development and evolution of ML/AI compilers (production and exploratory versions) for efficient mappings of ML/AI algorithms onto existing and future HW
  • Analysis of ML/AI algorithms and workloads to drive future features in Qualcomm’s ML HW/SW offerings

Key Responsibilities:

  • Contributing to the development and evolution of ML/AI compilers within Qualcomm
  • Defining and implementing algorithms for mapping ML/AI workloads to Qualcomm HW
  • Understanding trends in ML network design, through customer engagements and the latest academic research, and how these affect both SW and HW design
  • Creating performance-driven simulation components (using C++, Python) for analysis and design of high-performance HW/SW algorithms on future SoCs
  • Exploring and analyzing performance/area/power trade-offs for future HW and SW ML algorithms
  • Pre-silicon prediction of performance for various ML algorithms
  • Running, debugging and analyzing performance simulations to suggest enhancements to Qualcomm hardware and software that tackle compute and system-memory-related bottlenecks

Successful applicants will work in cross-site, cross-functional teams.

Requirements:

  • Demonstrated ability to learn, think and adapt in a fast-changing environment
  • Detail-oriented, with strong problem-solving, analytical and debugging skills
  • Strong communication skills (written and verbal)
  • Strong background in algorithm development and performance analysis is essential

The following experiences would be significant assets:

  • Strong object-oriented design principles
  • Strong knowledge of C++
  • Strong knowledge of Python
  • Experience in compiler design and development
  • Knowledge of network model formats/platforms (e.g. PyTorch, TensorFlow, ONNX)
  • On-silicon debugging of high-performance compute algorithms
  • Knowledge of algorithms and data structures
  • Knowledge of software development processes (revision control, CI/CD, etc.)
  • Familiarity with tools such as Git, Jenkins, Docker, clang/MSVC
  • Knowledge of computer architecture, digital circuits and event-driven transactional models/simulators


Apply

About the role As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible both for annotating in-house datasets and for ensuring the quality of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will need to be comfortable working with standard tools and workflows for data annotation and able to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note, this role is office-based. You will be working at our modern, friendly office in the very heart of London.


Apply

We are looking for a Research Engineer with a passion for working on cutting-edge problems that can help us create highly realistic, emotional and life-like synthetic humans through text-to-video.

Our aim is to make video content creation available to all - not only to studio productions!

🧑🏼‍🔬 You will be someone who loves to code and build working systems, and you are used to working in a fast-paced start-up environment. You will have experience with the software development life cycle, from ideation through implementation to testing and release. You will also have extensive knowledge and experience in the computer vision domain, as well as experience in the generative AI space (GANs, diffusion models and the like!).

👩‍💼 You will join a group of more than 50 engineers in the R&D department and will have the opportunity to collaborate with multiple research teams across diverse areas. Our R&D research is guided by our co-founders, Prof. Lourdes Agapito and Prof. Matthias Niessner, and our Director of Science, Prof. Vittorio Ferrari.

If you know and love DALL·E, MUSE, IMAGEN, MAKE-A-VIDEO, STABLE DIFFUSION and more - and you love large data, large compute and writing clean code - then we would love to talk to you.


Apply

Redmond, Washington, United States


Overview Do you want to shape the future of Artificial Intelligence (AI)? Do you have a passion for solving real-world problems with cutting-edge technologies?

The Human-AI eXperiences (HAX) team at Microsoft Research AI Frontiers is looking for exceptional candidates to advance the state-of-the-art in human-AI collaboration with a focus on leveraging the capabilities of people and foundation model-based agents and systems to solve real problems.

As a Principal Researcher on our team, you will:

  • Work on challenging and impactful projects in areas such as human-AI collaboration and teaming, foundation model-based systems, multi-agent systems, next-generation AI experiences and responsible AI.
  • Apply your skills and knowledge to build practical solutions that solve real problems and impact the world.
  • Collaborate with other researchers and engineers across the company to amplify your impact and grow your career in a supportive and stimulating environment.

We are looking for candidates who have:

  • A drive for real-world impact, demonstrated by a passion to build and release prototypes or OSS frameworks.
  • A demonstrated track record of influential projects and publications in relevant fields and top-tier conferences and journals (such as NeurIPS, ICML, AAAI, CHI, UIST).
  • Demonstrated interdisciplinary experience in applied machine learning, natural language processing, and human-computer interaction, including experience doing offline and online evaluations and conducting user studies and user-centered research.
  • Exceptional coding experience and hands-on experience working with large foundation models and related frameworks and toolkits. Familiarity with LLMs such as the OpenAI Generative Pre-trained Transformer (GPT) models, LLaMA, etc., model finetuning techniques (LoRA, QLoRA), prompting techniques (Chain of Thought, ReAct, etc.) and model evaluation is beneficial.
  • A passion for innovation and creativity, evidenced by the ability to generate novel ideas, approaches, and solutions.
  • A team-player mindset, characterized by effective communication, collaboration, and feedback skills.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Perform cutting-edge research to solve real-world problems.
  • Collaborate closely with other researchers, engineers, and product group partners on high-impact projects that deliver real-world impact to people and society.
  • Embody our culture and values.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent features on the TODAY show and in the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative computer vision technologies, with a focus on creating highly realistic digital faces, bodies, and avatars.

Strive to set new standards in the realism of 3D digital human appearance, movement, and personality, ensuring that generated content closely resembles real-life scenarios.

Implement techniques to achieve high-quality results in zero-shot or few-shot settings, as well as customized avatars for different use cases while maintaining speed and accuracy.

Develop innovative solutions to enable comprehensive customization of video content, including the creation of digital people, modifying scenes, and manipulating actions and speech within videos.

Preferred Qualifications:

PhD in computer science (or a related field) and/or 5+ years of industry experience.

Strong academic background with a focus on computer vision and transformers, specializing in NeRFs, Gaussian Splatting, diffusion models, GANs, or related areas.

Publication Record: Highly relevant publication history, with a focus on generating or manipulating realistic digital faces, bodies, expressions, body movements, etc. Ideal candidates will have served as the primary author on these publications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for multimodal tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Sr. Fullstack Engineer on our platform engineering team, you will play a crucial role in enabling our research engineers to fine-tune our foundation models and in streamlining the machine learning process for our autonomous technology. You will develop products that empower our internal teams to maximize efficiency and innovation in our product. Specifically, you will:

  • Build mission-critical tools for improving observability and scaling the entire machine-learning process.
  • Use modern technologies to serve huge amounts of data, visualize key metrics, manage our data inventory, trigger backend data processing pipelines, and more.
  • Work closely with people across the company to create a seamless UI experience.

Apply

We are seeking a highly motivated candidate for a fully funded PhD position to work in 3D computer graphics and 3D computer vision.

The successful candidate will join the 3D Graphics and Vision research group led by Prof. Binh-Son Hua at the School of Computer Science and Statistics, Trinity College Dublin, Ireland to work on topics related to generative AI in the 3D domain.

The School of Computer Science and Statistics at Trinity College Dublin is a collegiate, friendly, and research-intensive centre for academic study and research excellence. The School has been ranked #1 in Ireland, top 25 in Europe, and top 100 Worldwide (QS Subject Rankings 2018, 2019, 2020, 2021).

The PhD student is expected to conduct fundamental research and publish in top-tier computer vision and computer graphics conferences (CVPR, ECCV, ICCV, SIGGRAPH) and journals (TPAMI, IJCV).

The start date of the position is September 01, 2024. The position is fully funded for 4 years by Science Foundation Ireland.

The successful candidate will require the following skills and knowledge:

  • A Bachelor's or Master's degree in Computer Science or a related field
  • Strong competence in computer graphics and computer vision
  • Solid experience in academic research and publications is an advantage
  • Additional background in math, statistics, or physics is an advantage
  • Hands-on experience in training deep models
  • Hands-on experience in computer graphics and computer vision application development, e.g. OpenGL, OpenCV, CUDA, Blender
  • Strong programming skills in C++ and Python, and the capability to implement systems based on open-source software

Applicants should provide the following information:

  • A comprehensive CV
  • Academic transcripts of Bachelor's and Master's degrees
  • The names and contact details of two referees

Interested candidates can email Binh-Son Hua (https://sonhua.github.io) for an informal discussion of the position. Applications will be reviewed on a rolling basis until the position has been filled.


Apply

Seattle, WA or Costa Mesa, CA

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a real-time, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

The Vehicle Autonomy (Robotics) team at Anduril develops aerial and ground-based robotic systems. The team is responsible for taking products like Ghost, Anvil, and our Sentry Tower from paper sketches to operational systems. We work in close coordination with specialist teams like Perception, Autonomy, and Manufacturing to solve some of the hardest problems facing our customers. We are looking for software engineers and roboticists excited about creating a powerful robotics stack that includes computer vision, motion planning, SLAM, controls, estimation, and secure communications.

WHAT YOU'LL DO
-Write and maintain core libraries (frame transformations, targeting and guidance, etc.) that all robotics platforms at Anduril will use
-Own feature development and rollout for our products - recent examples include building a Software-in-the-Loop simulator for our Tower product, writing an autofocus control system for cameras, creating a distributed-over-IPC coordinate frame library, and redesigning the pan-tilt controls to accurately move heavy loads
-Design, evaluate, and implement sensor integrations that support operation by both human and autonomous planning agents
-Work closely with our hardware and manufacturing teams during product development, providing quick feedback that contributes to the final hardware design

REQUIRED QUALIFICATIONS
-Strong engineering background from industry or school, ideally in fields such as Robotics, Computer Science, Software Engineering, Mechatronics, Electrical Engineering, Mathematics, or Physics
-5+ years of C++ or Rust experience in a Linux development environment
-Experience building software solutions involving significant amounts of data processing and analysis
-Ability to quickly understand and navigate complex systems and established code bases
-Must be eligible to obtain and hold a US DoD Security Clearance

PREFERRED QUALIFICATIONS
-Experience in one or more of the following: motion planning, perception, localization, mapping, controls, and related system performance metrics
-Understanding of systems software (kernel, device drivers, system calls) and performance analysis


Apply