



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Location Niskayuna, NY


Description At GE Aerospace Research, our team develops advanced embedded systems technology for the future of flight. Our technology will enable sustainable air travel and next-generation aviation systems for use in commercial as well as military applications. As a Lead Embedded Software Engineer, you will architect and develop state-of-the-art embedded systems for real-time controls and communication applications. You will lead and contribute to advanced research and development programs for GE Aerospace as well as with U.S. Government agencies. You will collaborate with fellow researchers from a range of technology disciplines, contributing to projects across the breadth of GE Aerospace programs.

Essential Responsibilities: As a Lead Embedded Software Engineer, you will:

  • Work independently as well as with a team to develop and apply advanced software technologies for embedded controls and communication systems for GE Aerospace products
  • Interact with hardware suppliers and engineering tool providers to identify the best solutions for the most challenging applications
  • Lead small to medium-sized projects or tasks
  • Be responsible for documenting technology and results through patent applications, technical reports, and publications
  • Expand your expertise by staying current with advances in embedded software to seek out new ideas and applications
  • Collaborate in a team environment with colleagues across GE Aerospace and government agencies

Qualifications/Requirements:

  • Bachelor’s degree in Electrical Engineering, Computer Science, or related disciplines with a minimum of 7 years of industry experience; OR a master’s degree in Electrical Engineering, Computer Science, or related disciplines with a minimum of 5 years of industry experience; OR a Ph.D. in Electrical Engineering, Computer Science, or related disciplines with a minimum of 3 years of industry experience
  • Strong background in software development for embedded systems (e.g., x86, ARM)
  • Strong embedded programming skills in languages such as C/C++, Python, and Rust
  • Familiarity with CNSA and NIST cryptographic algorithms
  • Willingness to travel a minimum of 2 weeks per year
  • Ability to obtain and maintain a US Government security clearance
  • US citizenship required
  • Must be willing to work out of an office located in Niskayuna, NY
  • You must submit your application for employment on the careers page at www.gecareers.com to be considered

Ideal Candidate Characteristics:

  • Coding experience with Bash, Python, C#, MATLAB, ARMv8 assembly, and RISC-V assembly
  • Experience with embedded devices from Intel, AMD, Xilinx, NXP, etc.
  • Experience with hardware-based security (e.g., UEFI, TPM, ARM TrustZone, Secure Boot)
  • Understanding of embedded system security requirements and security techniques
  • Experience with the Linux OS and Linux security
  • Experience with OpenSSL and/or wolfSSL
  • Experience with wired and wireless networking protocols or network security
  • Knowledge of the 802.1, 802.3, and/or 802.11 standards
  • Experience with software-defined networking (SDN) and relevant software such as OpenFlow, Open vSwitch, or Mininet
  • Hands-on experience with embedded hardware (such as protoboards) or networking equipment (such as switches and analyzers) in a laboratory setting
  • Experience with embedded development in an RTOS environment (e.g., VxWorks, FreeRTOS)
  • Demonstrated ability to take an innovative idea from a concept to a product
  • Experience with the Agile methodology of program management

The base pay range for this position is 90,000 - 175,000 USD annually. The specific pay offered may be influenced by a variety of factors, including the candidate’s experience, education, and skill set. This position is also eligible for an annual discretionary bonus based on a percentage of your base salary. This posting is expected to close on June 16, 2024.


Apply

Location Madrid, ESP


Description At Amazon, we are committed to being the Earth’s most customer-centric company. The International Technology group (InTech) owns the enhancement and delivery of Amazon’s cutting-edge engineering to all the varied customers and cultures of the world. We do this through a combination of partnerships with other Amazon technical teams and our own innovative new projects.

You will be joining the Tools and Machine Learning (Tamale) team. As part of InTech, Tamale strives to solve complex catalog quality problems using challenging machine learning and data analysis solutions. You will be exposed to cutting-edge big data and machine learning technologies, along with the entire Amazon catalog technology stack, and you'll be part of a key effort to improve our customers' experience by tackling and preventing defects in items in Amazon's catalog.

We are looking for a passionate, talented, and inventive Scientist with a strong machine learning background to help build industry-leading machine learning solutions. We strongly value your hard work and obsession with solving complex problems on behalf of Amazon customers.


Apply

Location New York, NY; Seattle, WA; Boston, MA


Description Climate Pledge Friendly helps customers discover and shop for products that are more sustainable. We partner with trusted sustainability certifications to highlight products that meet strict standards and help preserve the natural world. By shifting customer demand towards more sustainable products, we incentivize selling partners to build better selection, creating a virtuous cycle that yields significant environmental benefit at scale.

We are seeking a Senior Applied Scientist who is not just adept in the theoretical aspects of Machine Learning (ML), Artificial Intelligence (AI), and Large Language Models (LLMs) but also possesses a pragmatic, hands-on approach to navigating the complexities of innovation. You will take the lead in conceptualizing, building, and launching models that significantly improve our shopping experience. A successful applicant will display a comprehensive skill set across machine learning model development, implementation, and optimization. This includes a strong foundation in data management, software engineering best practices, and a keen awareness of the latest developments in distributed systems technology.

You will work with business leaders, scientists, and engineers to translate business and functional requirements into concrete deliverables, including the design, development, testing, and deployment of highly scalable distributed ML models and services. The types of initiatives you can expect to work on include a) personalized recommendations that help our customers find the right sustainable products on each shopping journey, b) automated solutions that combine ML/LLM and data mining to identify products that align with our sustainability goals and resonate with our customers' values, and c) models to measure the environmental and econometric impacts of sustainable shopping.


Apply

You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional, and life-like synthetic humans through text-to-video. Within the team, you’ll have the opportunity to work with different research teams and squads across multiple areas led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions that are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing, and release, and you have a passion for large data, large-scale model training, and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses at a rapid pace across the globe.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer on our Motion Planning team, you will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who would enjoy applying their skills to deeply complex and novel AI problems. Specifically, you will:

  • Apply and extend the Helm proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
  • Develop our planner behavior and trajectories in collaboration with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms
  • Carefully execute the development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole

Apply

We are seeking a highly motivated candidate for a fully funded PhD position to work in 3D computer graphics and 3D computer vision.

The successful candidate will join the 3D Graphics and Vision research group led by Prof. Binh-Son Hua at the School of Computer Science and Statistics, Trinity College Dublin, Ireland to work on topics related to generative AI in the 3D domain.

The School of Computer Science and Statistics at Trinity College Dublin is a collegiate, friendly, and research-intensive centre for academic study and research excellence. The School has been ranked #1 in Ireland, top 25 in Europe, and top 100 Worldwide (QS Subject Rankings 2018, 2019, 2020, 2021).

The PhD student is expected to conduct fundamental research and publish in top-tier computer vision and computer graphics conferences (CVPR, ECCV, ICCV, SIGGRAPH) and journals (TPAMI, IJCV).

The start date of the position is September 01, 2024. The position is fully funded for 4 years by Science Foundation Ireland.

The successful candidate should have the following skills and knowledge:
  • A Bachelor’s or Master’s degree in Computer Science or related fields
  • Strong competence in computer graphics and computer vision
  • Solid experience in academic research and publications (an advantage)
  • Additional background in math, statistics, or physics (an advantage)
  • Hands-on experience in training deep models
  • Hands-on experience in computer graphics and computer vision application development, e.g., with OpenGL, OpenCV, CUDA, or Blender
  • Strong programming skills in C++ and Python, and the capability to implement systems based on open-source software

Applicants should provide the following information:
  • A comprehensive CV
  • Academic transcripts of Bachelor’s and Master’s degrees
  • The names and contact details of two referees

Interested candidates can email Binh-Son Hua (https://sonhua.github.io) for an informal discussion of the position. Applications will be reviewed on a rolling basis until the position has been filled.


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior for other agents, our own vehicle’s actions, and for offline applications. To solve these problems we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior and offline to provide learned models to autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team in the advancement of our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

Location Multiple Locations


Description

Qualcomm's Multimedia R&D and Standards Group is seeking candidates for Video Compression Research Engineer positions. You will be part of a world-renowned team of video compression experts. The team develops algorithms, hardware architectures, and systems for state-of-the-art applications of classical and machine learning methods in video compression, video processing, point cloud coding and processing, AR/VR, and computer vision use cases. The successful candidate for this position will be a highly self-directed individual with strong creative and analytic skills and a passion for video compression technology. You will work on, but not be limited to, developing new applications of classical and machine learning methods in video compression, improving state-of-the-art video codecs.

We are considering candidates with various levels of experience. We are flexible on location and open to hiring anywhere; preferred locations are the USA, Germany, and Taiwan.

Responsibilities:
  • Contribute to the conception, development, implementation, and optimization of new algorithms extending existing techniques and systems to allow improved video compression
  • Initiate ideas, and design and implement algorithms for superior hardware encoder performance, including perceptually based bit allocation
  • Develop new algorithms for deep learning-based video compression solutions
  • Represent Qualcomm in the related standardization forums: JVET, MPEG Video, and ITU-T/VCEG
  • Document and present new algorithms and implementations in various forms, including standards contributions, patent applications, conference and journal publications, presentations, etc.

The ideal candidate would have the skills and experience below:
  • Expert knowledge of the theory, algorithms, and techniques used in video and image coding
  • Knowledge and experience of video codecs and their test models, such as ECM, VVC, HEVC, and AV1
  • Experience with deep learning structures (CNNs, RNNs, autoencoders, etc.) and frameworks such as TensorFlow/PyTorch
  • Track record of successful research accomplishments demonstrated through published papers and/or patent applications in the fields of video coding or video processing
  • Solid programming and debugging skills in C/C++
  • Strong written and verbal English communication skills, a great work ethic, and the ability to work in a team environment to accomplish common goals
  • PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field, or equivalent practical experience

Qualifications: PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or similar fields. 1+ years of experience with a programming language such as C, C++, MATLAB, etc.


Apply

Engineering at Pinterest

Our Engineering team is at the core of bringing our platform to life for Pinners worldwide. Working collaboratively and cross-functionally with teams across the company, our engineers tackle growth-driving challenges to build an inspired and inclusive platform for all.

Our future of work is PinFlex

At Pinterest, we know that our best work happens when we feel most inspired. PinFlex promotes flexibility while prioritizing in-person moments to celebrate our culture and drive inspiration. We know that some work can be performed anywhere, and we encourage employees to work where they choose within their country or region, whether that’s at home, at a Pinterest office, or another virtual location. We believe that there is value in a distributed workforce but there are essential touch points for in-person collaboration that will create a big impact for our business and for development and connection.

Stop by booth #2100 to learn more about our open roles and our in-house generative AI foundation model that leverages the full power of our visual search and taste graph! Our engineers and recruiters are excited to connect with you!


Apply

San Jose, CA

B GARAGE was founded in 2017 by a Ph.D. graduate from Stanford University. After spending over five years researching robotics, computer vision, aeronautics, and drone autonomy, the founder and team set their minds on building a future where aerial robots would become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

Roles and Responsibilities

Design and develop perception for aerial robots and inventory recognition for warehouses by leveraging computer vision and deep learning techniques

Aid the computer vision team in delivering prototypes and products in a timely manner

Collaborate with other teams within the company

Minimum Qualifications

M.S. degree in computer science, robotics, electrical engineering, or other engineering disciplines

10+ years of experience with computer vision and machine learning

Proficient in image processing algorithms and multi-view geometry using cameras

Experience with machine learning architectures for object detection, segmentation, text recognition, etc.

Proficient with ROS, C++, and Python

Experience with popular computer vision and GPU frameworks/libraries (e.g., OpenCV, TensorFlow, PyTorch, CUDA, cuDNN, etc.)

Proficient in containerization (Docker) and container orchestration (Kubernetes) technologies

Experience in cloud computing platforms (AWS, GCP, etc.)

Experience with robots operating on real-time onboard processing

Self-motivated person who thrives in a fast-paced environment

Good problem solving and troubleshooting skills

Legally authorized to work in the United States

Optional Qualifications

Ph.D. degree in computer science, robotics, electrical engineering, or other engineering disciplines

Experience with scene reconstruction, bundle adjustment and factor graph optimization libraries

Experience with JavaScript and massively parallel cloud computing technologies such as Kafka, Spark, and MapReduce

Published research papers in CVPR, ICCV, ECCV, ICRA, IROS, etc.

Company Benefits

Competitive compensation packages

Medical, dental, vision, life insurance, and 401(k)

Flexible vacation and paid holidays

Complimentary lunches and snacks

Professional development reimbursement (online courses, conference, exhibit, etc.)

B GARAGE stands for an open and respectful corporate culture because we believe diversity helps us to find new perspectives.

B GARAGE ensures that all our members have equal opportunities – regardless of age, ethnic origin and nationality, gender and gender identity, physical and mental abilities, religion and belief, sexual orientation, and social background. We ensure diversity right from the recruitment stage and therefore make hiring decisions based on a candidate’s actual competencies, qualifications, and business needs at that point in time.


Apply

The Perception team at Zoox is responsible for developing the eyes and ears of our self-driving car. Navigating safely and competently in the world requires us to detect, classify, track, and understand several different attributes of all the objects around us that we might interact with, all in real time and with very high precision.

As a member of the Perception team at Zoox, you will be responsible for developing and improving state-of-the-art machine learning techniques for everything from 2D/3D object detection, panoptic segmentation, and tracking to attribute classification. You will work not just with our team of talented engineers and researchers in perception, but cross-functionally with several teams, including sensors, prediction, and planning, and you will have access to the best sensor data in the world and incredible infrastructure for testing and validating your algorithms.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Sr. Fullstack Engineer, you will work on our platform engineering team playing a crucial role in enabling our research engineers to fine-tune our foundation models and streamline the machine learning process for our autonomous technology. You will work on developing products that empower our internal teams to maximize efficiency and innovation in our product. Specifically, you will:

  • Build mission-critical tools for improving observability and scaling the entire machine-learning process.
  • Use modern technologies to serve huge amounts of data, visualize key metrics, manage our data inventory, trigger backend data processing pipelines, and more.
  • Work closely with people across the company to create a seamless UI experience.

Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack, and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and the company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent features on the TODAY show and in the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state of the art in generative computer vision technologies, with a focus on creating highly realistic digital faces, bodies, and avatars.

Strive to set new standards in the realism of 3D digital human appearance, movement, and personality, ensuring that generated content closely resembles real-life scenarios.

Implement techniques to achieve high-quality results in zero-shot or few-shot settings, as well as customized avatars for different use cases while maintaining speed and accuracy.

Develop innovative solutions to enable comprehensive customization of video content, including the creation of digital people, modifying scenes, and manipulating actions and speech within videos.

Preferred Qualifications:

PhD in computer science (or a related field) and/or 5+ years of industry experience.

Strong academic background with a focus on computer vision and transformers, specializing in NeRFs, Gaussian Splatting, Diffusion, GANs, or related areas.

Publication Record: Highly relevant publication history, with a focus on generating or manipulating realistic digital faces, bodies, expressions, body movements, etc. Ideal candidates will have served as the primary author on these publications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for multimodal tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Location Seattle, WA


Description Interested in solving challenging problems using the latest developments in Large Language Models and Artificial Intelligence (AI)? Amazon's Consumer Electronics Technology (CE Tech) organization is redefining shopping experiences by leveraging state-of-the-art AI technologies. We are looking for a talented Sr. Applied Scientist with a solid background in the design and development of scalable AI and ML systems and services, a deep passion for building ML-powered products, and a proven track record of executing complex projects and delivering high business and customer impact. You will help us shape the future of shopping experiences. As a member of our team, you'll work on cutting-edge projects that directly impact millions of customers, selling partners, and employees every single day. This role will provide exposure to state-of-the-art innovations in AI/ML systems (including GenAI). Technologies you will have exposure to, and/or will work with, include AWS Bedrock, Amazon Q, SageMaker, and foundation models such as Anthropic’s Claude and Mistral, among others.


Apply

As a systems engineer for perception safety, your primary responsibility will be to define and ensure the safety performance of the perception system. You will be working in close collaboration with perception algorithm and sensor hardware development teams.


Apply