



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Engineering at Pinterest

Our Engineering team is at the core of bringing our platform to life for Pinners worldwide. Working collaboratively and cross-functionally with teams across the company, our engineers tackle growth-driving challenges to build an inspired and inclusive platform for all.

Our future of work is PinFlex

At Pinterest, we know that our best work happens when we feel most inspired. PinFlex promotes flexibility while prioritizing in-person moments to celebrate our culture and drive inspiration. We know that some work can be performed anywhere, and we encourage employees to work where they choose within their country or region, whether that’s at home, at a Pinterest office, or another virtual location. We believe that there is value in a distributed workforce but there are essential touch points for in-person collaboration that will create a big impact for our business and for development and connection.

Stop by booth #2100 to learn more about our open roles and our in-house generative AI foundation model that leverages the full power of our visual search and taste graph! Our engineers and recruiters are excited to connect with you!


Apply

San Jose, CA

The Media Analytics team at NEC Labs America is seeking outstanding researchers with backgrounds in computer vision or machine learning. Candidates must possess an exceptional track record of original research and a passion for creating high-impact products. Our key research areas include autonomous driving, open-vocabulary perception, prediction and planning, simulation, neural rendering, agentic LLMs, and foundational vision-language models. We have a strong internship program and active collaborations with academia. The Media Analytics team publishes extensively at top-tier venues such as CVPR, ICCV, and ECCV.

To check out our latest work, please visit: https://www.nec-labs.com/research/media-analytics/

Qualifications:
1. PhD in Computer Science (or equivalent)
2. Strong publication record at top-tier computer vision or machine learning venues
3. Motivation to conduct independent research from conception to implementation


Apply

We are looking for a Research Engineer with a passion for working on cutting-edge problems that can help us create highly realistic, emotional, and life-like synthetic humans through text-to-video.

Our aim is to make video content creation available to all - not only to studio productions!

🧑🏼‍🔬 You will be someone who loves to code and build working systems, and you are used to working in a fast-paced start-up environment. You will have experience with the software development life cycle, from ideation through implementation to testing and release. You will also have extensive knowledge and experience in the Computer Vision domain, as well as experience in the Generative AI space (GANs, diffusion models, and the like!).

👩‍💼 You will join a group of more than 50 Engineers in the R&D department and will have the opportunity to collaborate with multiple research teams across diverse areas. Our R&D research is guided by our co-founders, Prof. Lourdes Agapito and Prof. Matthias Niessner, and our Director of Science, Prof. Vittorio Ferrari.

If you know and love DALL·E, MUSE, IMAGEN, MAKE-A-VIDEO, STABLE DIFFUSION, and more - and you love large data, large compute, and writing clean code - then we would love to talk to you.


Apply

Location Multiple Locations


Description Today, more intelligence is moving to end devices, and mobile is becoming the pervasive AI platform. Building on the smartphone foundation and the scale of mobile, Qualcomm envisions making AI ubiquitous—expanding beyond mobile and powering other end devices, machines, vehicles, and things. We are inventing, developing, and commercializing power-efficient on-device AI, edge cloud AI, and 5G to make this a reality.

Job Purpose & Responsibilities As a member of Qualcomm’s ML Systems Team, you will participate in two activities:
  • Development and evolution of ML/AI compilers (production and exploratory versions) for efficient mappings of ML/AI algorithms onto existing and future HW
  • Analysis of ML/AI algorithms and workloads to drive future features in Qualcomm’s ML HW/SW offerings

Key Responsibilities:
  • Contributing to the development and evolution of ML/AI compilers within Qualcomm
  • Defining and implementing algorithms for mapping ML/AI workloads to Qualcomm HW
  • Understanding trends in ML network design, through customer engagements and the latest academic research, and how these affect both SW and HW design
  • Creating performance-driven simulation components (using C++, Python) for analysis and design of high-performance HW/SW algorithms on future SoCs
  • Exploring and analyzing performance/area/power trade-offs for future HW and SW ML algorithms
  • Pre-silicon prediction of performance for various ML algorithms
  • Running, debugging, and analyzing performance simulations to suggest enhancements to Qualcomm hardware and software that tackle compute and system memory-related bottlenecks

Successful applicants will work in cross-site, cross-functional teams.

Requirements:
  • Demonstrated ability to learn, think, and adapt in a fast-changing environment
  • Detail-oriented, with strong problem-solving, analytical, and debugging skills
  • Strong communication skills (written and verbal)
  • Strong background in algorithm development and performance analysis is essential

The following experience would be a significant asset:
  • Strong object-oriented design principles
  • Strong knowledge of C++
  • Strong knowledge of Python
  • Experience in compiler design and development
  • Knowledge of network model formats/platforms (e.g. PyTorch, TensorFlow, ONNX)
  • On-silicon debug skills for high-performance compute algorithms
  • Knowledge of algorithms and data structures
  • Knowledge of software development processes (revision control, CI/CD, etc.)
  • Familiarity with tools such as git, Jenkins, Docker, clang/MSVC
  • Knowledge of computer architecture, digital circuits, and event-driven transactional models/simulators


Apply

Seattle, US


Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences. We’re passionate about empowering people to craft beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.

We’re on a mission to hire the very best and are committed to building exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity Photoshop ART is seeking a Research Scientist to join our inpainting R&D team, focused on making significant progress in image generation/restoration, low-level vision, and image editing, with an eventual path toward productization. Individuals in this role are expected to be experts in identified research areas such as artificial intelligence, machine learning, computer vision, and image processing. The ideal candidate will have a keen interest in producing new science to advance Adobe products.

What you'll Do
  • Work toward long-term, results-oriented research goals while identifying intermediate achievements.
  • Contribute to research that can be applied to Adobe product development.
  • Help integrate novel research work into Adobe’s products.
  • Lead and collaborate on research projects across different Adobe divisions.

What you need to succeed
  • Ph.D. and solid publications in machine learning, AI, computer science, statistics, or scene semantic understanding.
  • Experience communicating research to public audiences of peers.
  • Experience working in teams.
  • Knowledge of a programming language.

Preferred Qualifications
  • 2 years of professional full-time experience preferred, but not required
  • 2+ years of internship with a primary emphasis on AI research in image generation, low-level vision, image restoration, and segmentation
  • Experience collaborating with a team with varied strengths.
  • 4+ first-author publications at peer-reviewed AI conferences (e.g. NIPS, CVPR, ECCV, ICML, ICLR, ICCV, and ACL).
  • Experience developing and debugging in Python.

At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach, where ongoing feedback flows freely.

If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer.

Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, or veteran status.

Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $129,400 -- $242,200 annually. Pay within this range varies by work location and may also depend on job-related factors.


Apply

Location San Francisco, CA


Description Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

You will be managing a team within the Music Machine Learning and Personalization organization that is responsible for developing, training, serving and iterating on models used for personalized candidate generation for both Music and Podcasts.


Apply

Location Sunnyvale, CA Bellevue, WA Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

Location Seattle, WA New York, NY


Description We are looking for an Applied Scientist to join our Seattle team. As an Applied Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. Our team solves a broad range of problems, from natural language understanding of third-party shoppable content, to product and content recommendation for social media influencers and their audiences, to determining optimal compensation for creators and mitigating fraud. We generate deep semantic understanding of the photos and videos in shoppable content created by our creators, for efficient processing and appropriate placements that give the best customer experience. For example, you may lead the development of reinforcement learning models, such as multi-armed bandits (MAB), to rank the content/products shown to influencers. To achieve this, a deep understanding of the quality and relevance of content must be established through ML models that provide those contexts for ranking.

In order to be successful on our team, you need a combination of business acumen, broad knowledge of statistics, deep understanding of ML algorithms, and an analytical mindset. You thrive in a collaborative environment and are passionate about learning. Our team utilizes a variety of AWS tools, such as SageMaker, S3, and EC2, with a variety of skillsets in shallow and deep learning ML models, particularly in NLP and CV. You will bring knowledge in many of these domains along with your own specialties.


Apply

Location Seattle, WA Palo Alto, CA


Description Amazon’s product search engine is one of the most heavily used services in the world, indexes billions of products, and serves hundreds of millions of customers world-wide. We are working on an AI-first initiative to continue to improve the way we do search through the use of large scale next-generation deep learning techniques. Our goal is to make step function improvements in the use of advanced multi-modal deep-learning models on very large scale datasets, specifically through the use of advanced systems engineering and hardware accelerators. This is a rare opportunity to develop cutting edge Computer Vision and Deep Learning technologies and apply them to a problem of this magnitude. Some exciting questions that we expect to answer over the next few years include: * How can multi-modal inputs in deep-learning models help us deliver delightful shopping experiences to millions of Amazon customers? * Can combining multi-modal data and very large scale deep-learning models help us provide a step-function improvement to the overall model understanding and reasoning capabilities? We are looking for exceptional scientists who are passionate about innovation and impact, and want to work in a team with a startup culture within a larger organization.


Apply

About the role As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible both for annotating in-house datasets and for ensuring the quality assurance of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will have to be comfortable working with standard tools and workflows for data annotation and possess the ability to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note, this role is office-based. You will be working at our modern, friendly office in the very heart of London.


Apply

Mountain View


Who we are Our team is the first in the world to deploy autonomous vehicles on public roads using end-to-end deep learning. With our multi-national, world-class technical team, we’re building things differently.

We don’t think it’s scalable to tell an algorithm how to drive through hand-coded rules and expensive HD maps. Instead, we believe that using experience and data will allow our algorithms to be more intelligent: capable of easily adapting to new environments. Our aim is to be the future of self-driving cars: the first to deploy in 100 cities across the world bringing autonomy to everyone, everywhere.

Where you will have an impact: Science is the team that is advancing our end-to-end autonomous driving research. The team’s mission is to accelerate our journey to AV2.0 and ensure the future success of Wayve by incubating and investing in new ideas that have the potential to become game-changing technological advances for the company.

As the first Research Manager in our Mountain View office, you will be responsible for managing & scaling a strong Science team which is building our Wayve Foundational Model in collaboration with other Wayve science teams in London and Vancouver. You will provide coaching and guidance to each of the researchers and engineers within your team and work with leaders across the company to ensure sustainable career growth for your team during a period of growth in the company. You will participate in our project-based operating model where your focus will be unlocking the potential of your team and its technical leaders to drive industry-leading impact. As part of your work, you will help identify the right projects to invest in, ensure the right allocation of resources to those projects, keep the team in good health, provide technical feedback to your team, share progress to build momentum, and build alignment and strong collaboration across the wider Science organisation. We are actively hiring and aim to substantially grow our research team over the next two years and you will be at the heart of this.

What you’ll bring to Wayve
Essential:
  • Prior experience as a manager of research teams (10-15+ people) with a clear career interest in management
  • Passionate about fostering personal and professional growth in individual team members
  • Experience with roadmap planning, stakeholder management, requirements gathering, and alignment with peers toward milestones and deliverables
  • Strong knowledge of Machine Learning and related areas, such as Deep Learning, Natural Language Processing, Computer Vision, etc.
  • Industry experience with machine learning technology development that has had real-world product impact
  • Experience driving a team and technical project through the full lifecycle, ideally within the language, vision, or multimodal space
  • Passionate about bringing research concepts through to product
  • Research and engineering fundamentals
  • MS or PhD in Computer Science, Engineering, or similar experience

Desirable:
  • Experience managing the execution of a technical product
  • Experience working in a project-based (“matrix”) operating environment
  • Proven track record of successfully delivering research projects and publications
  • Experience working with robotics, self-driving, AR/VR, or LLMs

Our offer
  • Competitive compensation, on-site chef and bar, lots of fun socials, workplace nursery scheme, comprehensive private health insurance, and more!
  • Immersion in a team of world-class researchers, engineers, and entrepreneurs.
  • A position to shape the future of autonomous driving, and thus bring about a real-world deployment of a breakthrough technology.
  • Help relocating/travelling to London, with visa sponsorship.
  • Flexible working hours - we trust you to do your job well, at times that suit you and your team.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experience. We work together toward one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As the MLE, you will collaborate with researchers to perform research operations using existing infrastructure. You will use your judgment in complex scenarios and help apply standard techniques to various technical problems. Specifically, you will:

  • Characterize neural network quality, failure modes, and edge cases based on research data
  • Maintain awareness of current trends in relevant areas of research and technology
  • Coordinate with researchers and accurately convey the status of experiments
  • Manage a large number of concurrent experiments and make accurate time estimates for deadlines
  • Review experimental results and suggest theoretical or process improvements for future iterations
  • Write technical reports indicating qualitative and quantitative results to external parties

Apply

ASML US, including its affiliates and subsidiaries, bring together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, Netherlands and we have 18 office locations around the United States including main offices in Chandler, Arizona, San Jose and San Diego, California, Wilton, Connecticut, and Hillsboro, Oregon.

ASML’s Optical Sensing (Wafer Alignment Sensor and YieldStar) department in Wilton, Connecticut is seeking a Design Engineer to support and develop complex optical/photonic sensor systems used within ASML’s photolithography tools. These systems typically include light sources, detectors, optical/electro-optical components, fiber optics, electronics and signal processing software functioning in close collaboration with the rest of the lithography system. As a design engineer, you will design, develop, build and integrate optical sensor systems.

Role and Responsibilities
  • Use general physics, optics, and software knowledge, along with an understanding of the sensor systems and tools, to develop optical alignment sensors for lithography machines
  • Hands-on skills in building optical systems (e.g. imaging, testing, alignment, detector systems, etc.)
  • Strong data analysis skills to evaluate sensor performance and troubleshoot issues

Leadership:

  • Lead execution of activities to determine problem root cause, execute complex tests, gather data, and effectively communicate results at different levels of abstraction (from technical colleagues to high-level managers)
  • Lead engineers in various competencies (e.g. software, electronics, equipment engineering, manufacturing engineering, etc.) in support of feature delivery for alignment sensors

Problem Solving:
  • Troubleshoot complex technical problems
  • Develop/debug data signal processing algorithms
  • Develop and execute test plans in order to determine problem root cause

Communications/Teamwork:
  • Draw conclusions based on input from different stakeholders
  • Clearly communicate information at different levels of abstraction

Programming:
  • Implement data analysis techniques in working MATLAB code
  • Optimization skills
  • GUI-building experience
  • Familiarity with LabVIEW and Python

Some travel (up to 10%) to Europe, Asia, and within the US can be expected


Apply


The Advanced Development Center at ASML in Wilton, Connecticut is seeking an Optical Data Analyst with expertise in image processing for metrology process development of ultra-high-precision optics and ceramics. The Advanced Development Center (ADC) is a multi-disciplinary group of engineers and scientists focused on developing learning-loop solutions, prototyping next-generation wafer and reticle clamping systems, and industrializing prototypes that meet the system performance requirements.

Role and Responsibilities The main job function is to develop image processing, data analysis, and machine learning algorithms and software that aid the development of wafer and reticle clamping systems, solving challenging engineering problems associated with achieving nanometer (nm)-scale precision. You will be part of the larger Development and Engineering (DE) sector, where the design and engineering of ASML products happens.

As an Optical Data Analyst, you will:
  • Develop/improve image processing algorithms to extract nm-level information from scientific imaging equipment (e.g. interferometers, SEM, AFM, etc.)
  • Integrate algorithms into an image processing software package for analysis and process development cycles for engineering and manufacturing users
  • Maintain a version-controlled software package across multiple product generations
  • Perform software testing to identify application, algorithm, and software bugs
  • Validate/verify/regression/unit test software to ensure it meets business and technical requirements
  • Use machine learning models to predict trends and behaviors relating to lifetime and manufacturing improvements of the product
  • Execute a plan of analysis, software, and systems to mitigate product and process risk and prevent software performance issues
  • Collaborate with the design team on software analysis tool development to find efficient solutions to difficult technical problems
  • Work with database structures and utilize their capabilities
  • Write software scripts to search, analyze, and plot data from databases
  • Support query code to interrogate data for manufacturing and engineering needs
  • Support image analysis on data and derive conclusions
  • Travel (up to 10%) to Europe, Asia, and within the US can be expected


Apply

Redmond, Washington, United States


Overview In Mixed Reality, people—not devices—are at the center of everything we do. Our tech moves beyond screens and pixels, creating a new reality aimed at bringing us closer together—whether that’s scientists “meeting” on the surface of a virtual Mars or some yet undreamt-of possibility. To get there, we’re incorporating diverse groundbreaking technologies, from the revolutionary Holographic Processing Unit to computer vision, machine learning, human-computer interaction, and more. We’re a growing team of talented engineers and artists putting technology on a human path across all Windows devices, including Microsoft HoloLens, the Internet of Things, phones, tablets, desktops, and Xbox. We believe there is a better way. If you do too, we need you! 

You are drawn to working on the latest and most innovative products in the world. You seek projects that will transform how people interact with technology. You have a drive to grow your skillset by finding unique challenges that have yet to be solved. We are looking for a Senior Software Engineer to come and join us in delivering the next wave of holographic experiences in an exciting new project.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities
  • Incorporate the latest Artificial Intelligence, Machine Learning, Computer Vision, and Sensor Fusion capabilities into the design of our products and services
  • Characterize various sensors, such as inertial measurement units, magnetometers, visible-light cameras, depth cameras, and GPS, to understand their properties and how they relate to the accuracy of tracking, mapping, and localization
  • Implement and design innovative measurement solutions to quantify the accuracy and reliability of Mixed Reality devices against industry gold-standard ground-truth systems
  • Partner with engineers, designers, and program managers to deliver solid technical designs
  • Embody our Culture and Values


Apply