
CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category and location, and filtered on any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Seattle, WA; Costa Mesa, CA; Boston, MA

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a real-time, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

THE ROLE

We build Lattice, the foundation for everything we do as a defense technology company. Our engineers are talented and hard-working, motivated to see their work rapidly deployed on the front lines. Our team is not building an experiment in waiting; we deploy what we build on the U.S. southern border, in Iraq, in Ukraine, and beyond.

We have open roles across Platform Engineering, ranging from core infrastructure to distributed systems, web development, networking, and more. We hire self-motivated people, those who hold a higher bar for themselves than anyone else could hold for them. If you love building infrastructure or platform services, or just working in high-performing engineering cultures, we invite you to apply!

YOU SHOULD APPLY IF YOU:
- Have 3+ years of experience working with a variety of programming languages such as Rust, Go, C++, Java, Python, and JavaScript/TypeScript
- Have experience working with customers to deliver novel software capabilities
- Want to work on building and integrating model/software/hardware-in-the-loop components by leveraging first- and third-party technologies (related to simulation, data management, compute infrastructure, networking, and more)
- Love building platform and infrastructure tooling that enables other software engineers to scale their output
- Enjoy collaborating with team members and partners in the autonomy domain, and building technologies and processes that enable users to safely and rapidly develop and deploy autonomous systems at scale
- Are a U.S. Person, because of required access to U.S. export-controlled information

Note: The above bullet points describe the ideal candidate. None of us matched all of these at once when we first joined Anduril. We encourage you to apply even if you believe you meet only part of our wish list.

NICE TO HAVE
- You've built or invented something: an app, a website, a game, a startup
- Previous experience working in an engineering setting: a startup (or startup-like environment), engineering school, etc. If you've succeeded in a low-structure, high-autonomy environment, you'll succeed here!
- Professional software development lifecycle experience using tools such as version control and CI/CD systems
- A deep, demonstrated understanding of how computers and networks work, from a single desktop to a multi-cluster cloud node
- Experience building scalable backend software systems with various data storage and processing requirements
- Experience with industry-standard cloud platforms (AWS, Azure), CI/CD tools, and software infrastructure fundamentals (networking, security, distributed systems)
- Ability to quickly understand and navigate complex systems and established code bases
- Experience implementing robot or autonomous vehicle testing frameworks in a software-in-the-loop or hardware-in-the-loop (HITL) environment
- Experience with modern build and deployment tooling (e.g., NixOS, Terraform)
- Experience designing complex software systems and iterating on those designs via a technical design review process
- Familiarity with industry-standard monitoring, logging, and data management tools and best practices
- A bias towards rapid delivery and iteration


Apply

New York, United States


Overview

Microsoft Research New York City (MSR NYC) is seeking applicants for a senior researcher position focusing on representation learning and efficient decision making with learned representations, within the broader area of machine learning (ML) and artificial intelligence (AI), and in particular in interactive learning. This includes deep learning with large foundation models over actions, as well as reinforcement learning.

Researchers in the ML/AI group cover a breadth of focus areas and research methodologies/approaches, spanning theoretical and empirical ML. We appreciate candidates with the potential to leverage/enhance the work of others in the group.

As a senior researcher, you will interact with our group's diverse array of researchers and practitioners, and contribute to ongoing research projects. We collaborate extensively with groups at other MSR locations and across Microsoft.

Microsoft Research (MSR) offers an exhilarating and supportive environment for cutting-edge, multidisciplinary research, both theoretical and empirical, with access to an extraordinary diversity of data sources, an open publications policy, and close links to top academic institutions around the world.

Applicants should have an established research track record, evidenced by conference or journal publications (or equivalent pieces of writing) and broader contributions to the research community. Applicants must have fulfilled their PhD degree requirements, including submission of their dissertation, prior to joining MSR NYC.

We are committed to building an inclusive, diverse, and pluralistic research environment and encourage applications from people of all backgrounds. We work collectively to make Microsoft Research a welcoming and productive space for all researchers.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more, and we are dedicated to this mission across every aspect of our company. Our culture is centered on embracing a growth mindset and encouraging teams and leaders to bring their best each day. Join us and help shape the future of the world.

Responsibilities

As a senior researcher, you define your own research agenda in collaboration with other researchers, driving forward an effective program of basic and applied research. We highly value collaboration and building new ideas with members of the group and others. You may also have the direct opportunity to realize your ideas in products and services used worldwide.


Apply

Location: Seattle, WA


Description

To help a growing organization quickly deliver more efficient features to Prime Video customers, Prime Video’s READI organization is innovating on behalf of our global software development team consisting of thousands of engineers. The READI organization is building a team specialized in forecasting and recommendations. This team will apply supervised learning algorithms for forecasting multi-dimensional related time series using recurrent neural networks. The team will develop forecasts on key business dimensions and recommendations on performance and efficiency opportunities across our global software environment.

As a member of the team, you will apply your deep knowledge of machine learning to concrete problems that have broad cross-organizational, global, and technology impact. Your work will focus on retrieving, cleansing and preparing large scale datasets, training and evaluating models and deploying them for customers, where we continuously monitor and evaluate. You will work on large engineering efforts that solve significantly complex problems facing global customers. You will be trusted to operate with complete independence and are often assigned to focus on areas where the business and/or architectural strategy has not yet been defined. You must be equally comfortable digging in to business requirements as you are drilling into designs with development teams and developing ready-to-use learning models. You consistently bring strong, data-driven business and technical judgment to decisions.
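For readers unfamiliar with the approach, the recurrent-network forecasting described above can be sketched in miniature. The snippet below is purely illustrative and does not reflect Prime Video's actual system: it uses a single-unit Elman cell with hand-picked scalar weights, where a trained model would learn full weight matrices from many related series.

```python
import math

def rnn_forecast(series, w_in, w_rec, w_out, b_h=0.0, b_o=0.0):
    """One-step-ahead forecast from a single-unit Elman RNN.

    series : list of observed floats (one series at a time)
    w_in, w_rec, w_out : scalar weights standing in for the weight
        matrices a real model would learn from data
    """
    h = 0.0  # hidden state, carried across time steps
    for x in series:
        h = math.tanh(w_in * x + w_rec * h + b_h)  # recurrent update
    return w_out * h + b_o  # linear readout: the forecast value

# Toy usage: an upward-trending series yields a positive forecast.
pred = rnn_forecast([0.1, 0.2, 0.3, 0.4], w_in=1.0, w_rec=0.5, w_out=1.0)
```

In practice the hidden state is a vector, the weights are matrices learned by backpropagation through time, and the input is multi-dimensional; the loop structure, however, is the same.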


Apply

ASML US, including its affiliates and subsidiaries, brings together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market, and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, the Netherlands, and we have 18 office locations around the United States, including main offices in Chandler, Arizona; San Jose and San Diego, California; Wilton, Connecticut; and Hillsboro, Oregon.

The Advanced Development Center at ASML in Wilton, Connecticut is seeking an Optical Data Analyst with expertise in image processing for metrology process development of ultra-high-precision optics and ceramics. The Advanced Development Center (ADC) is a multi-disciplinary group of engineers and scientists focused on developing learning-loop solutions, prototyping next-generation wafer and reticle clamping systems, and industrializing prototypes that meet the system performance requirements.

Role and Responsibilities

The main job function is to develop image processing, data analysis, and machine learning algorithms and software to aid the development of wafer and reticle clamping systems, solving challenging engineering problems associated with achieving nanometer (nm) scale precision. You will be part of the larger Development and Engineering (DE) sector, where the design and engineering of ASML products happens.

As an Optical Data Analyst, you will:
- Develop and improve image processing algorithms to extract nm-level information from scientific imaging equipment (e.g., interferometers, SEM, AFM)
- Integrate algorithms into an image processing software package supporting analysis and process development cycles for engineering and manufacturing users
- Maintain a version-controlled software package for multiple product generations
- Perform software testing to identify application, algorithm, and software bugs
- Validate, verify, regression-test, and unit-test software to ensure it meets business and technical requirements
- Use machine learning models to predict trends and behaviors relating to lifetime and manufacturing improvements of the product
- Execute a plan of analysis, software, and systems to mitigate product and process risk and prevent software performance issues
- Collaborate with the design team on software analysis tool development to find efficient solutions to difficult technical problems
- Work with database structures and utilize their capabilities
- Write software scripts to search, analyze, and plot data from databases
- Support query code to interrogate data for manufacturing and engineering needs
- Support image analysis on data and derive conclusions
- Travel (up to 10%) to Europe, Asia, and within the US as needed


Apply

Location: Seoul / Pangyo, South Korea


Description

1) Deep learning compression and optimization
- Development of algorithms for compressing and optimizing deep learning networks
- Deep learning network embedding (requires an understanding of the HW platform)

2) AD vision recognition SW
- Development of deep learning recognition technology based on sensors such as cameras
- Development of pre- and post-processing algorithms and function output
- Optimization of image recognition algorithms

3) AD decision/control SW
- Development of map generation technology based on information recognized by multiple vehicles
- Development of a learning-based behavior prediction model for nearby objects
- Development of driving mode determination and collision prevention functions for a Level 3 autonomous driving system


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle, forecast the behavior of other agents, inform our own vehicle’s actions, and support offline applications. To solve these problems, we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior, and offline to provide learned models for autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team to advance our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

San Jose, CA

B GARAGE was founded in 2017 by a Ph.D. graduate from Stanford University. After spending over five years researching robotics, computer vision, aeronautics, and drone autonomy, the founder and team set their minds on building a future where aerial robots become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizons of drone use.

Roles and Responsibilities

Design and develop perception for aerial robots and inventory recognition for warehouses by leveraging computer vision and deep learning techniques

Aid the computer vision team in delivering prototypes and products in a timely manner

Collaborate with other teams within the company

Minimum Qualifications

M.S. degree in computer science, robotics, electrical engineering, or other engineering disciplines

10+ years of experience with computer vision and machine learning

Proficient in image processing algorithms and multiple-view geometry using cameras

Experience with machine learning architectures for object detection, segmentation, text recognition etc.

Proficient with ROS, C++, and Python

Experience with popular computer vision and GPU frameworks/libraries (e.g., OpenCV, TensorFlow, PyTorch, CUDA, cuDNN)

Proficient in containerization (Docker) and container orchestration (Kubernetes) technologies

Experience in cloud computing platforms (AWS, GCP, etc.)

Experience with robots operating on real-time onboard processing

Self-motivated person who thrives in a fast-paced environment

Good problem solving and troubleshooting skills

Legally authorized to work in the United States

Optional Qualifications

Ph.D. degree in computer science, robotics, electrical engineering, or other engineering disciplines

Experience with scene reconstruction, bundle adjustment and factor graph optimization libraries

Experience with JavaScript and massively parallel cloud computing technologies such as Kafka, Spark, and MapReduce

Published research papers in CVPR, ICCV, ECCV, ICRA, IROS, etc.

Company Benefits

Competitive compensation packages

Medical, dental, vision, life insurance, and 401(k)

Flexible vacation and paid holidays

Complimentary lunches and snacks

Professional development reimbursement (online courses, conference, exhibit, etc.)

B GARAGE stands for an open and respectful corporate culture because we believe diversity helps us to find new perspectives.

B GARAGE ensures that all our members have equal opportunities – regardless of age, ethnic origin and nationality, gender and gender identity, physical and mental abilities, religion and belief, sexual orientation, and social background. We always ensure diversity right from the recruitment stage and therefore make hiring decisions based on a candidate’s actual competencies, qualifications, and business needs at that point in time.


Apply

At Zoox, you will collaborate with a team of world-class engineers with diverse backgrounds in areas such as AI, robotics, mechatronics, planning, control, localization, computer vision, rendering, simulation, distributed computing, design, and automated testing. You’ll master new technologies while working on code, algorithms, and research in your area of expertise to create and refine key systems and move Zoox forward.

Working at a startup gives you the chance to express your creativity and have a real impact on the final product.


Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. This project will take place in close collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others; anticipating a partner's actions is therefore crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project, we are particularly interested in estimating human intentions to enable collaborative tasks between humans and robots, such as human-to-robot and robot-to-human handovers.

Contact: pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda Pineda (RobotLearn).


Apply

We are looking for a Research Engineer with a passion for working on cutting-edge problems that can help us create highly realistic, emotional, and life-like synthetic humans through text-to-video.

Our aim is to make video content creation available to all - not only to studio productions!

🧑🏼‍🔬 You will be someone who loves to code and build working systems, and who is used to working in a fast-paced start-up environment. You will have experience with the software development life cycle, from ideation through implementation to testing and release, as well as extensive knowledge and experience in the computer vision domain and in the generative AI space (GANs, diffusion models, and the like!).

👩‍💼 You will join a group of more than 50 engineers in the R&D department and will have the opportunity to collaborate with multiple research teams across diverse areas. Our R&D research is guided by our co-founders, Prof. Lourdes Agapito and Prof. Matthias Niessner, and our Director of Science, Prof. Vittorio Ferrari.

If you know and love DALL.E, MUSE, IMAGEN, MAKE-A-VIDEO, STABLE DIFFUSION, and more - and you love large data, large compute, and writing clean code - then we would love to talk to you.


Apply

Seattle, US


Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences. We’re passionate about empowering people to craft beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.

We’re on a mission to hire the very best and are committed to building exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity

Photoshop ART is seeking a Research Scientist to join our inpainting R&D team, focused on making significant progress in image generation/restoration, low-level vision, and image editing, with an eventual posture toward productization. Individuals in this role are expected to be experts in identified research areas such as artificial intelligence, machine learning, computer vision, and image processing. The ideal candidate will have a keen interest in producing new science to advance Adobe products.

What You'll Do
- Work towards long-term, results-oriented research goals while identifying intermediate achievements.
- Contribute to research that can be applied to Adobe product development.
- Help integrate novel research work into Adobe’s products.
- Lead and collaborate on research projects across different Adobe divisions.

What You Need to Succeed
- Ph.D. and solid publications in machine learning, AI, computer science, statistics, or scene semantic understanding.
- Experience communicating research to audiences of peers and the public.
- Experience working in teams.
- Proficiency in a programming language.

Preferred Qualifications
- 2 years of professional full-time experience (preferred, but not required).
- 2+ years of internship experience with a primary emphasis on AI research in image generation, low-level vision, image restoration, and segmentation.
- Experience collaborating with a team with varied strengths.
- 4+ first-author publications at peer-reviewed AI conferences (e.g., NIPS, CVPR, ECCV, ICML, ICLR, ICCV, and ACL).
- Experience developing and debugging in Python.

At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach, where ongoing feedback flows freely.

If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer.

Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, or veteran status.

Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $129,400 to $242,200 annually. Pay within this range varies by work location and may also depend on job-related factors.


Apply

Location: Madrid, ESP


Description

At Amazon, we are committed to being the Earth’s most customer-centric company. The International Technology group (InTech) owns the enhancement and delivery of Amazon’s cutting-edge engineering to all the varied customers and cultures of the world. We do this through a combination of partnerships with other Amazon technical teams and our own innovative new projects.

You will be joining the Tools and Machine Learning (Tamale) team. As part of InTech, Tamale strives to solve complex catalog quality problems using challenging machine learning and data analysis solutions. You will be exposed to cutting-edge big data and machine learning technologies, along with the full Amazon catalog technology stack, and you'll be part of a key effort to improve our customers’ experience by tackling and preventing defects in items in Amazon’s catalog.

We are looking for a passionate, talented, and inventive Scientist with a strong machine learning background to help build industry-leading machine learning solutions. We strongly value your hard work and obsession with solving complex problems on behalf of Amazon customers.


Apply

A postdoctoral position is available in the Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow NIH guidelines, commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total including 8 80-GB Nvidia H100 GPUs, 10 48-GB Nvidia RTX A6000 GPUs, and 4 Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess, or be on track to complete, a PhD or MD with a background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology, or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and an in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve your goals

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with the subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

As a systems engineer for perception safety, your primary responsibility will be to define and ensure the safety performance of the perception system. You will be working in close collaboration with perception algorithm and sensor hardware development teams.


Apply