



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category or location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Location Santa Clara, CA


Description Amazon is looking for passionate, talented, and inventive Applied Scientists with a strong machine learning background to help build industry-leading Speech, Vision, and Language technology.

AWS Utility Computing (UC) provides product innovations, from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new products that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Automatic Speech Recognition (ASR), Machine Translation (MT), Natural Language Understanding (NLU), Machine Learning (ML) and Computer Vision (CV).

As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state of the art in human language technology. Your work will directly impact millions of our customers in the form of products and services that make use of speech and language technology. You will gain hands-on experience with Amazon’s heterogeneous speech, text, and structured data sources, and large-scale computing resources to accelerate advances in spoken language understanding.

We are hiring in all areas of human language technology: ASR, MT, NLU, text-to-speech (TTS), and Dialog Management, in addition to Computer Vision.


Apply

San Jose, CA

The Media Analytics team at NEC Labs America is seeking outstanding researchers with backgrounds in computer vision or machine learning. Candidates must possess an exceptional track record of original research and a passion for creating high-impact products. Our key research areas include autonomous driving, open-vocabulary perception, prediction and planning, simulation, neural rendering, agentic LLMs, and foundational vision-language models. We have a strong internship program and active collaborations with academia. The Media Analytics team publishes extensively at top-tier venues such as CVPR, ICCV, and ECCV.

To check out our latest work, please visit: https://www.nec-labs.com/research/media-analytics/

Qualifications:
  1. PhD in Computer Science (or equivalent)
  2. Strong publication record at top-tier computer vision or machine learning venues
  3. Motivation to conduct independent research from conception to implementation


Apply

Natick, MA, United States


The Company: Cognex is a global leader in the exciting and growing field of machine vision. This position is a hybrid role in our Natick, MA corporate HQ.

The Team: This position is for an experienced Software Engineer on the Core Vision Technology team at Cognex, focused on architecting and productizing the best-in-class computer vision algorithms and AI models that power Cognex’s industrial barcode readers and 2D vision tools, with a mission to innovate on behalf of customers and make this technology accessible to a broad range of users and platforms. Our products combine custom hardware, specialized lighting and optics, and world-class vision algorithms and models to create embedded systems that can find and read high-density symbols on package labels or marked directly on a variety of industrial parts, including aircraft engines, electronics substrates, and pharmaceutical test equipment. Our devices need to read hundreds of codes per second, so speed-optimized hardware and software work together to create best-in-class technology. Companies around the world rely on Cognex vision tools and technology to guide assembly, automate inspection, and speed up production and distribution.

Job Summary: The Core Vision Technology team is seeking an experienced developer with deep knowledge of the software development life cycle, creative problem-solving skills, and solid design thinking, with a focus on productization of AI technology on embedded platforms. You will play the critical role of a chief architect, leading the development and productization of computer vision AI models and algorithms across multiple Cognex products, with the goal of making the technology modular and available to a broad range of users and platforms. In this role, you will interface with machine vision experts in R&D, product, hardware, and other software engineering teams at Cognex. A successful individual will lead design discussions, make sound architectural choices for the future on different embedded platforms, advocate for engineering excellence, mentor junior engineers, and extend technical influence across teams. Prior experience with productization of AI technology is essential for this position.

Essential Functions:
- Develop and productize innovative vision algorithms, including AI models developed by the R&D team for detecting and reading challenging 1D and 2D barcodes, and vision tools for gauging, inspection, guiding, and identifying industrial parts.
- Lead software and API design discussions and make scalable technology choices that meet current and future business needs.
- More details in the link below.

Minimum education and work experience required:
- MS or PhD from a top engineering school in EE, CS, or equivalent
- 7+ years of relevant, high-tech work experience

If you would like to meet the hiring manager at CVPR to discuss this opportunity, please email ahmed.elbarkouky@cognex.com


Apply

Redmond, Washington, United States


Overview Microsoft Research (MSR) AI Frontiers lab is seeking applications for the position of Senior Research Engineer – Generative AI to join its team in Redmond, WA or New York City, NY.

The mission of the AI Frontiers lab is to expand the Pareto frontier of AI capabilities, efficiency, and safety through innovations in foundation models and learning agent platforms. Some of our projects include work on Small Language Models (e.g., Phi, Orca), foundation models for actions (e.g., in gaming, robotics, and Office productivity tools), and Multi-Agent AI (e.g., AutoGen).

We are seeking Senior Research Engineers to join our team and contribute to the advancement of Generative AI and Large Language Model (LLM) technologies. As a Research Engineer, you will play a crucial role in developing, improving, and exploring the capabilities of Generative AI models. Your work will have a significant impact on the development of cutting-edge technologies, advancing the state of the art and providing practical solutions to real-world problems.

Our ongoing research areas encompass but are not limited to:

- Pre-training: especially of language models, action models, and multimodal models
- Alignment and post-training: e.g., instruction tuning and reinforcement learning from feedback
- Continual learning: enabling LLMs to evolve and adapt over time and learn from previous experiences and human interactions
- Specialization: tailoring models to meet application-specific requirements
- Orchestration and multi-agent systems: automated orchestration between multiple agents, incorporating human feedback and oversight

Microsoft Research (MSR) offers a vibrant environment for cutting-edge, multidisciplinary research, including access to diverse, real-world problems and data, opportunities for experimentation and real-world impact, an open publication policy, and close links to top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Embody our Culture and Values

Responsibilities As a Senior Research Engineer in AI Frontiers, you will design, develop, execute, and implement technology research projects in collaboration with other researchers, engineers, and product groups.

As a member of a world-class research organization, you will be part of research breakthroughs in the field and will be given the opportunity to realize your ideas in products and services used worldwide.


Apply

Location Multiple Locations


Description The Qualcomm Cloud Computing team is developing hardware and software for Machine Learning solutions spanning the data center, edge, infrastructure, and automotive markets. Qualcomm’s Cloud AI 100 accelerators are currently deployed at AWS, Cirrascale Cloud, and several large organizations. We are rapidly expanding our ML hardware and software solutions for large-scale deployments and are hiring across many disciplines.

We are seeking to hire for multiple machine learning positions on the Qualcomm Cloud team. In this role, you will work with Qualcomm's partners to develop and deploy best-in-class ML applications (CV, NLP, GenAI, LLMs, etc.) based on popular frameworks such as PyTorch, TensorFlow, and ONNX, optimized for Qualcomm's Cloud AI accelerators. The work will include model assessment of throughput, latency, and accuracy; model profiling and optimization; end-to-end application pipeline development; integration with customer frameworks and libraries; and responsibility for customer documentation, training, and demos. The candidate must possess excellent communication, leadership, interpersonal, organizational, and analytical skills.

This role will interact with individuals of all levels and requires an experienced, dedicated professional to effectively collaborate with internal and external stakeholders. The ideal candidate has either developed or deployed deep learning models on popular ML frameworks. If you have a strong appetite for technology and enjoy working in small, agile, empowered teams solving complex problems within a high energy, oftentimes chaotic environment then this is the role for you.

Minimum Qualifications:
• Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Applications Engineering, Software Development, or related work experience; OR
• Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Applications Engineering, Software Development, or related work experience; OR
• PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Applications Engineering, Software Development, or related work experience.

• 2+ years of experience with a programming language such as C, C++, Java, Python, etc.
• 1+ year of experience with debugging techniques.

Key Responsibilities:
- Key contributor to Qualcomm’s Cloud AI GitHub repo and developer documentation.
- Work with developers in large organizations to onboard them onto Qualcomm’s Cloud AI ML stack, improve and optimize their deep learning models on Qualcomm AI 100, and deploy their applications at scale.
- Collaborate and interact with internal teams to analyze and optimize training and inference for deep learning.
- Work on Triton, ExecuTorch, Inductor, and TorchDynamo to build abstraction layers for the inference accelerator.
- Optimize LLM/GenAI workloads for both scale-up (multi-SoC) and scale-out (multi-card) systems.
- Partner with product management and hardware/software engineering to highlight customer progress, gaps in product features, etc.


Apply

Location Multiple Locations


Description

Qualcomm's Multimedia R&D and Standards Group is seeking candidates for Video Compression Research Engineer positions. You will be part of a world-renowned team of video compression experts. The team develops algorithms, hardware architectures, and systems for state-of-the-art applications of classical and machine learning methods in video compression, video processing, point cloud coding and processing, AR/VR, and computer vision use cases. The successful candidate for this position will be a highly self-directed individual with strong creative and analytic skills and a passion for video compression technology. You will work on, but not be limited to, developing new applications of classical and machine learning methods in video compression and improving state-of-the-art video codecs.

We are considering candidates with various levels of experience. We are flexible on location and open to hiring anywhere; preferred locations are the USA, Germany, and Taiwan.

Responsibilities:
- Contribute to the conception, development, implementation, and optimization of new algorithms extending existing techniques and systems to improve video compression.
- Initiate ideas, and design and implement algorithms for superior hardware encoder performance, including perceptually based bit allocation.
- Develop new algorithms for deep learning-based video compression solutions.
- Represent Qualcomm in the related standardization forums: JVET, MPEG Video, and ITU-T/VCEG.
- Document and present new algorithms and implementations in various forms, including standards contributions, patent applications, conference and journal publications, presentations, etc.

The ideal candidate would have the skills/experience below:
- Expert knowledge of the theory, algorithms, and techniques used in video and image coding.
- Knowledge and experience of video codecs and their test models, such as ECM, VVC, HEVC, and AV1.
- Experience with deep learning structures (CNN, RNN, autoencoder, etc.) and frameworks like TensorFlow/PyTorch.
- Track record of successful research accomplishments demonstrated through published papers and/or patent applications in the fields of video coding or video processing.
- Solid programming and debugging skills in C/C++.
- Strong written and verbal English communication skills, great work ethic, and ability to work in a team environment to accomplish common goals.
- PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field, or equivalent practical experience.

Qualifications: PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field. 1+ years of experience with a programming language such as C, C++, MATLAB, etc.


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior of other agents and our own vehicle’s actions, both on-vehicle and for offline applications. To solve these problems, we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior, and offline to provide learned models for autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team in the advancement of our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

Location Sunnyvale, CA Bellevue, WA


Description Are you fueled by a passion for computer vision, machine learning, and AI, and eager to leverage your skills to enrich the lives of millions across the globe? Join the Ring AI team, where we're not just offering a job but an opportunity to revolutionize safety and convenience in our neighborhoods through cutting-edge innovation.

You will be part of a dynamic team dedicated to pushing the boundaries of computer vision, machine learning and AI to deliver an unparalleled user experience for our neighbors. This position presents an exceptional opportunity for you to pioneer and innovate in AI, making a profound impact on millions of customers worldwide. You will partner with world-class AI scientists, engineers, product managers and other experts to develop industry-leading AI algorithms and systems for a diverse array of Ring and Blink products, enhancing the lives of millions of customers globally. Join us in shaping the future of AI innovation at Ring and Blink, where exciting challenges await!


Apply

A postdoctoral position is available in the Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow the NIH guidelines, commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total including 8 80-GB Nvidia H100 GPUs, 10 48-GB Nvidia RTX A6000 GPUs, and 4 Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with a background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology, or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve them

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with the subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

The Autonomy Software Metrics team is responsible for providing engineers and leadership at Zoox with tools to evaluate the behavior of Zoox’s autonomy stack using simulation. The team collaborates with experts across the organization to ensure a high safety bar, great customer experience, and rapid feedback to developers. The metrics team is responsible for evaluating the complete end-to-end customer experience through simulation, evaluating factors that impact safety, comfort, legality, road citizenship, progress, and more. You’ll be part of a passionate team making transportation safer, smarter, and more sustainable. This role gives you high visibility within the company and is critical for successfully launching our autonomous driving software.


Apply

Seattle, WA or Costa Mesa, CA

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a real-time, 3D command-and-control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

The Vehicle Autonomy (Robotics) team at Anduril develops aerial and ground-based robotic systems. The team is responsible for taking products like Ghost, Anvil, and our Sentry Tower from paper sketches to operational systems. We work in close coordination with specialist teams like Perception, Autonomy, and Manufacturing to solve some of the hardest problems facing our customers. We are looking for software engineers and roboticists excited about creating a powerful robotics stack that includes computer vision, motion planning, SLAM, controls, estimation, and secure communications.

WHAT YOU'LL DO
- Write and maintain core libraries (frame transformations, targeting and guidance, etc.) that all robotics platforms at Anduril will use
- Own feature development and rollout for our products; recent examples include building a software-in-the-loop simulator for our Tower product, writing an autofocus control system for cameras, creating a distributed-over-IPC coordinate frame library, and redesigning the pan-tilt controls to accurately move heavy loads
- Design, evaluate, and implement sensor integrations that support operation by both human and autonomous planning agents
- Work closely with our hardware and manufacturing teams during product development, providing quick feedback that contributes to the final hardware design

REQUIRED QUALIFICATIONS
- Strong engineering background from industry or school, ideally in areas such as Robotics, Computer Science, Software Engineering, Mechatronics, Electrical Engineering, Mathematics, or Physics
- 5+ years of C++ or Rust experience in a Linux development environment
- Experience building software solutions involving significant amounts of data processing and analysis
- Ability to quickly understand and navigate complex systems and established code bases
- Must be eligible to obtain and hold a US DoD Security Clearance

PREFERRED QUALIFICATIONS
- Experience in one or more of the following: motion planning, perception, localization, mapping, controls, and related system performance metrics
- Understanding of systems software (kernel, device drivers, system calls) and performance analysis


Apply

At Zoox, you will collaborate with a team of world-class engineers with diverse backgrounds in areas such as AI, robotics, mechatronics, planning, control, localization, computer vision, rendering, simulation, distributed computing, design, and automated testing. You’ll master new technologies while working on code, algorithms, and research in your area of expertise to create and refine key systems and move Zoox forward.

Working at a startup gives you the chance to manifest your creativity and have a high impact on the final product.


Apply

Location San Diego


Description

At Qualcomm, we are transforming the automotive industry with our Snapdragon Digital Chassis and building the next-generation software-defined vehicle (SDV).

Snapdragon Ride is an integral pillar of our Snapdragon Digital Chassis, and since its launch it has gained momentum with a growing number of global automakers and Tier 1 suppliers. Snapdragon Ride aims to address the complexity of autonomous driving and ADAS by leveraging its high-performance, power-efficient SoC, industry-leading artificial intelligence (AI) technologies, and pioneering vision and drive policy stack to deliver a comprehensive, cost- and energy-efficient systems solution.

Enabling safe, comfortable, and affordable autonomous driving includes solving some of the most demanding and challenging technological problems. From centimeter-level localization to multimodal sensor perception, sensor fusion, behavior prediction, maneuver planning, and trajectory planning and control, each one of these functions introduces its own unique challenges to solve, verify, test, and deploy on the road.

We are looking for smart, innovative, and motivated individuals with a strong theoretical background in deep learning, advanced signal processing, probability, and algorithms, and good implementation skills in Python/C++. Job responsibilities include the design and development of novel algorithms for solving complex problems related to behavior prediction for autonomous driving, including trajectory and intention prediction; developing novel deep learning models to predict trajectories for road users and optimizing them to run in real-time systems; working closely with the sensor fusion and planning teams on defining requirements and KPIs; and working closely with test engineers to develop test plans for validating performance in simulation and real-world testing.

Minimum Qualifications:
• Bachelor's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or related field and 6+ years of Systems Engineering or related work experience; OR
• Master's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or related field and 5+ years of Systems Engineering or related work experience; OR
• PhD in Computer Science, Electrical Engineering, Mechanical Engineering, or related field and 4+ years of Systems Engineering or related work experience.

Preferred Qualifications:
- PhD plus 2 years of industry experience in behavior and trajectory prediction
- Proficiency with a variety of deep learning models, such as CNN, Transformer, RNN, LSTM, VAE, GraphCNN, etc.
- Experience working with NLP deep learning networks
- Proficiency with state-of-the-art machine learning tools (PyTorch, TensorFlow)
- 3+ years of experience with a programming language such as C, C++, Python, etc.
- 3+ years of Systems Engineering or related work experience in the area of behavior and trajectory prediction
- Experience working with, modifying, and creating advanced algorithms
- Analytical and scientific mindset, with the ability to solve complex problems
- Experience in autonomous driving, robotics, or XR/AR/VR
- Experience with robust software design for safety-critical systems
- Excellent written and verbal communication skills; ability to work with a cross-functional team


Apply

Redmond, Washington, United States


Overview Do you want to shape the future of Artificial Intelligence (AI)? Do you have a passion for solving real-world problems with cutting-edge technologies?

The Human-AI eXperiences (HAX) team at Microsoft Research AI Frontiers is looking for exceptional candidates to advance the state-of-the-art in human-AI collaboration with a focus on leveraging the capabilities of people and foundation model-based agents and systems to solve real problems.

As a Principal Researcher on our team, you will:

- Work on challenging and impactful projects in areas such as human-AI collaboration and teaming, foundation model-based systems, multi-agent systems, next-generation AI experiences, and responsible AI.
- Apply your skills and knowledge to build practical solutions to real problems and impact the world.
- Collaborate with other researchers and engineers across the company to amplify your impact and grow your career in a supportive and stimulating environment.

We are looking for candidates who have:

- A drive for real-world impact, demonstrated by a passion to build and release prototypes or OSS frameworks.
- A demonstrated track record of influential projects and publications in relevant fields and top-tier conferences and journals (such as NeurIPS, ICML, AAAI, CHI, UIST).
- Demonstrated interdisciplinary experience in applied machine learning, natural language processing, and human-computer interaction, including experience doing offline and online evaluations and conducting user studies and user-centered research.
- Exceptional coding experience and hands-on experience working with large foundation models and related frameworks and toolkits. Familiarity with LLMs such as the OpenAI Generative Pre-Trained Transformer (GPT) models, LLaMA, etc., model finetuning techniques (LoRA, QLoRA), prompting techniques (Chain of Thought, ReAct, etc.), and model evaluation is beneficial.
- A passion for innovation and creativity, evidenced by the ability to generate novel ideas, approaches, and solutions.
- A team-player mindset, characterized by effective communication, collaboration, and feedback skills.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities Perform cutting-edge research to solve real-world problems. Collaborate closely with other researchers, engineers, and product group partners on high-impact projects that deliver real-world impact to people and society. Embody our culture and values.


Apply