



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Location Seattle, WA; New York, NY


Description We are looking for an Applied Scientist to join our Seattle team. As an Applied Scientist, you will use a range of science methodologies to solve challenging business problems when the solution is unclear. Our team solves a broad range of problems: natural language understanding of third-party shoppable content, product and content recommendation for social media influencers and their audiences, determining optimal compensation for creators, and mitigating fraud. We generate deep semantic understanding of the photos and videos in shoppable content created by our creators, for efficient processing and appropriate placement for the best customer experience. For example, you may lead the development of reinforcement learning models such as multi-armed bandits (MAB) to rank the content and products shown to influencers. To achieve this, a deep understanding of content quality and relevance must be established through ML models that provide that context for ranking.

To be successful on our team, you need a combination of business acumen, broad knowledge of statistics, a deep understanding of ML algorithms, and an analytical mindset. You thrive in a collaborative environment and are passionate about learning. Our team uses a variety of AWS tools such as SageMaker, S3, and EC2, and a variety of skill sets across shallow and deep learning ML models, particularly in NLP and CV. You will bring knowledge in many of these domains along with your own specialties.
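The bandit-based ranking mentioned above can be illustrated with a minimal epsilon-greedy multi-armed bandit in plain Python. This is a hedged sketch, not the team's actual system; the class name, arm count, and click-through rates are all hypothetical.

```python
import random

# Minimal epsilon-greedy multi-armed bandit (MAB) for content ranking.
# Class name, arm count, and click-through rates are hypothetical
# illustrations, not Amazon's actual system.

class EpsilonGreedyBandit:
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # times each arm (content item) was shown
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self):
        # Explore a random arm with probability epsilon, else exploit the best.
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update: v <- v + (r - v) / n
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(0)
true_ctr = [0.02, 0.05, 0.11]  # hidden click-through rate per content item
bandit = EpsilonGreedyBandit(n_arms=3, epsilon=0.1)
for _ in range(5000):
    arm = bandit.select_arm()
    reward = 1.0 if random.random() < true_ctr[arm] else 0.0
    bandit.update(arm, reward)
print(bandit.counts)  # the highest-CTR arm typically accumulates the most pulls
```

In a production ranker, such a bandit would score many items per request and fold in context features; contextual bandits and Thompson sampling are common next steps.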


Apply

Redmond, Washington, United States


Overview The Microsoft Research (MSR) AI Frontiers lab is seeking applications for the position of Principal Researcher – Generative AI.

The mission of the AI Frontiers lab is to expand the Pareto frontier of Artificial Intelligence (AI) capabilities, efficiency, and safety through innovations in foundation models and learning agent platforms. Some of our projects include work on Small Language Models (e.g., Phi, Orca) and Multi-Agent AI (e.g., AutoGen).

We are seeking a Principal Researcher – Generative AI to join our team and lead efforts on the advancement of Generative AI and Large Language Model (LLM) technologies. As a Principal Researcher – Generative AI, you will play a crucial role in leading, developing, improving, and exploring the capabilities of Generative AI models. Your work will have a significant impact on the development of cutting-edge technologies, advancing the state of the art and providing practical solutions to real-world problems.

Our ongoing research areas encompass but are not limited to:

  • Pre-training: especially of small language and multimodal models
  • Alignment and post-training: e.g., instruction tuning and reinforcement learning from feedback
  • Continual learning: enabling LLMs to evolve and adapt over time and learn from previous experiences and human interactions
  • Specialization: tailoring LLMs to meet application-specific requirements
  • Orchestration and multi-agent systems: automated orchestration between multiple agents, incorporating human feedback and oversight

MSR offers a vibrant environment for cutting-edge, multidisciplinary research, including access to diverse, real-world problems and data, opportunities for experimentation and real-world impact, an open publication policy, and close links to top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.   

Responsibilities You will perform cutting-edge research in collaboration with other researchers, engineers, and product groups.
As a member of a world-class research organization, you will be part of research breakthroughs in the field and will have the opportunity to realize your ideas in products and services used worldwide. Embody our culture and values.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer on our Motion Planning team, you will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who would enjoy applying their skills to deeply complex and novel AI problems. Specifically, you will:

  • Apply and extend the Helm proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
  • Develop our planner behavior and trajectories in collaboration with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms
  • Carefully execute the development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole

Apply

Location Seattle, WA; Palo Alto, CA


Description Amazon’s product search engine is one of the most heavily used services in the world: it indexes billions of products and serves hundreds of millions of customers worldwide. We are working on an AI-first initiative to continue to improve the way we do search through the use of large-scale, next-generation deep learning techniques. Our goal is to make step-function improvements in the use of advanced multi-modal deep-learning models on very large scale datasets, specifically through the use of advanced systems engineering and hardware accelerators. This is a rare opportunity to develop cutting-edge Computer Vision and Deep Learning technologies and apply them to a problem of this magnitude. Some exciting questions that we expect to answer over the next few years include:

  • How can multi-modal inputs in deep-learning models help us deliver delightful shopping experiences to millions of Amazon customers?
  • Can combining multi-modal data and very large scale deep-learning models provide a step-function improvement to the overall model understanding and reasoning capabilities?

We are looking for exceptional scientists who are passionate about innovation and impact, and want to work in a team with a startup culture within a larger organization.


Apply

Location Sunnyvale, CA


Description Are you fueled by a passion for computer vision, machine learning and AI, and are eager to leverage your skills to enrich the lives of millions across the globe? Join us at Ring AI team, where we're not just offering a job, but an opportunity to revolutionize safety and convenience in our neighborhoods through cutting-edge innovation.

You will be part of a dynamic team dedicated to pushing the boundaries of computer vision, machine learning and AI to deliver an unparalleled user experience for our neighbors. This position presents an exceptional opportunity for you to pioneer and innovate in AI, making a profound impact on millions of customers worldwide. You will partner with world-class AI scientists, engineers, product managers and other experts to develop industry-leading AI algorithms and systems for a diverse array of Ring and Blink products, enhancing the lives of millions of customers globally. Join us in shaping the future of AI innovation at Ring and Blink, where exciting challenges await!


Apply

ASML US, including its affiliates and subsidiaries, brings together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market, and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, Netherlands, and we have 18 office locations around the United States, including main offices in Chandler, Arizona; San Jose and San Diego, California; Wilton, Connecticut; and Hillsboro, Oregon.

ASML’s Optical Sensing (Wafer Alignment Sensor and YieldStar) department in Wilton, Connecticut is seeking a Design Engineer to support and develop complex optical/photonic sensor systems used within ASML’s photolithography tools. These systems typically include light sources, detectors, optical/electro-optical components, fiber optics, electronics and signal processing software functioning in close collaboration with the rest of the lithography system. As a design engineer, you will design, develop, build and integrate optical sensor systems.

Role and Responsibilities Use general physics, optics, and software knowledge, together with an understanding of the sensor systems and tools, to develop optical alignment sensors for lithography machines. Hands-on skills in building optical systems (e.g., imaging, testing, alignment, detector systems) and strong data analysis skills for evaluating sensor performance and troubleshooting are required.

Leadership: Lead activities to determine problem root cause; execute complex tests, gather data, and effectively communicate results at different levels of abstraction (from technical colleagues to high-level managers). Lead engineers in various competencies (e.g., software, electronics, equipment engineering, manufacturing engineering) in support of feature delivery for alignment sensors.

Problem Solving: Troubleshoot complex technical problems. Develop and debug data signal processing algorithms. Develop and execute test plans to determine problem root cause.

Communications/Teamwork: Draw conclusions based on input from different stakeholders. Clearly communicate information at different levels of abstraction.

Programming: Implement data analysis techniques in working MATLAB code; optimization skills; GUI-building experience; familiarity with LabVIEW and Python.

Some travel (up to 10%) to Europe, Asia, and within the US can be expected.


Apply

Geomagical Labs is a 3D R&D lab, in partnership with IKEA. We create magical mixed-reality experiences for hundreds of millions of users, using computer vision, neural networks, graphics, and computational photography. Last year we launched IKEA Kreativ, and we’re excited for what’s next! We have an opening in our lab for a senior computer vision researcher, with 3D Reconstruction and Deep Learning expertise, to develop and improve the underlying algorithms powering our consumer products. We are looking for highly motivated, creative, applied researchers with entrepreneurial drive, who are excited about building novel technologies and shipping them all the way to the hands of millions of customers!

Requirements:

  • Ph.D. and 2+ years of experience, or Master's and 6+ years of experience, focused on 3D Computer Vision and Deep Learning.
  • Experience in classical methods for 3D Reconstruction: SfM/SLAM, Multi-view Stereo, RGB-D Fusion.
  • Experience using Deep Learning for 3D Reconstruction and/or Scene Understanding, having worked in any of: Depth Estimation, Room Layout Estimation, NeRFs, Inverse Rendering, 3D Scene Understanding.
  • Familiarity with Computer Graphics and Computational Photography.
  • Expertise in ML frameworks and libraries, e.g., PyTorch. Highly productive in Python.
  • Ability to architect and implement complex systems at the micro and macro level.
  • Entrepreneurial: adventurous, self-driven, comfortable under uncertainty, with a desire to make systems work end-to-end.
  • Innovative, with a track record of patents and/or first-authored publications at leading workshops or conferences such as CVPR, ECCV/ICCV, SIGGRAPH, ISMAR, NeurIPS, ICLR, etc.
  • Experience developing technologies that got integrated into products, as well as post-launch performance tracking and shipping improvements.
  • [Bonus] Comfortable with C++.
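One basic building block behind the classical 3D reconstruction methods listed above (RGB-D fusion, multi-view stereo) is back-projecting a depth pixel into a 3D point with the pinhole camera model. A minimal sketch in plain Python, with hypothetical intrinsics:

```python
# Back-project a depth pixel into a 3D camera-space point using the
# pinhole model -- a basic building block of RGB-D fusion pipelines.
# The intrinsics (fx, fy, cx, cy) below are hypothetical example values.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Return the 3D point (in camera coordinates) observed at pixel (u, v)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps straight onto the optical axis.
point = backproject(u=320.0, v=240.0, depth=2.0,
                    fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(point)
```

Real pipelines apply this per pixel (vectorized), then transform points into a world frame with the camera pose before fusing across views.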

Benefits: Join a mission-driven R&D lab, strategically backed by an influential global brand. Work in a dynamic team of computer vision, AI, computational photography, AR, graphics, and design professionals, and successful serial entrepreneurs. Opportunity to publish novel and relevant research. Fully remote work available to people living in the USA or Canada. Headquartered in downtown Palo Alto, California, an easy walk from restaurants, coffee shops, and Caltrain commuter rail. The USA base salary for this full-time position ranges from $180,000 to $250,000, determined by location, role, skill, and experience level. Geomagical Labs offers a comprehensive set of benefits and, for qualifying roles, substantial incentive grants, vesting annually.


Apply

We are looking for a Research Engineer with a passion for working on cutting-edge problems that can help us create highly realistic, emotional, and life-like synthetic humans through text-to-video.

Our aim is to make video content creation available for all - not only to studio production!

🧑🏼‍🔬 You will be someone who loves to code and build working systems, and you are used to working in a fast-paced start-up environment. You will have experience with the software development life cycle, from ideation through implementation to testing and release, as well as extensive knowledge and experience in the Computer Vision domain and in the Generative AI space (GANs, diffusion models, and the like!).

👩‍💼 You will join a group of more than 50 engineers in the R&D department and will have the opportunity to collaborate with multiple research teams across diverse areas. Our R&D research is guided by our co-founders, Prof. Lourdes Agapito and Prof. Matthias Niessner, and our Director of Science, Prof. Vittorio Ferrari.

If you know and love DALL.E, MUSE, IMAGEN, MAKE-A-VIDEO, STABLE DIFFUSION and more - and you love large data, large compute and writing clean code, then we would love to talk to you.


Apply

Location Bellevue, WA


Description Are you excited about developing generative AI and foundation models to revolutionize automation, robotics and computer vision? Are you looking for opportunities to build and deploy them on real problems at truly vast scale? At Amazon Fulfillment Technologies and Robotics we are on a mission to build high-performance autonomous systems that perceive and act to further improve our world-class customer experience - at Amazon scale.

This role is for the AFT AI team, which has deep expertise developing cutting-edge AI solutions at scale and successfully applying them to business problems in the Amazon Fulfillment Network. These solutions typically utilize machine learning and computer vision techniques, applied to text, sequences of events, images, or video from existing or new hardware. The team comprises scientists, who develop machine learning and computer vision solutions; analysts, who evaluate the expected business impact of a project and the performance of these solutions; and software engineers, who provide necessary support such as annotation pipelines and machine learning library development.

We are looking for an Applied Scientist with expertise in computer vision. You will work alongside other CV scientists, engineers, product managers and various stakeholders to deploy vision models at scale across a diverse set of initiatives. If you are a self-motivated individual with a zeal for customer obsession and ownership, and are passionate about applying computer vision for real world problems - this is the team for you.


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IOT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proof-of-concepts in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and converted leading research into best-in-class user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second after performing our full-stack AI optimizations on the model.

Role responsibilities can include both applied and fundamental research in the field of machine learning, with a development focus in one or many of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g. large language models, deep generative models (VAE, Normalizing-Flow, ARM, etc), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimizations, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling based, back-propagation based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design.

  • Design, develop, and test software for machine learning frameworks that optimize models to run efficiently on edge devices. Candidates are expected to have a strong interest in and deep passion for making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Research, design, develop, enhance, and implement different components of machine learning compiler for HW Accelerators.

  • Design, implement and train DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).
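The model-efficiency work mentioned above (compression, quantization) can be illustrated with a toy example: symmetric per-tensor int8 quantization in plain Python. This is a simplified sketch, not AIMET's actual implementation, and the example weights are hypothetical.

```python
# Symmetric per-tensor int8 quantization -- a core model-efficiency
# technique for running models on resource-constrained devices.
# Simplified illustrative sketch; toolkits like AIMET automate and
# refine this (per-channel scales, calibration, rounding optimization).

def quantize_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.03, 1.0]     # hypothetical weight tensor
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(q, scale, max_err)  # error is bounded by half a quantization step
```

Storing int8 values plus one scale cuts weight memory roughly 4x versus float32, at the cost of the small rounding error measured above.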


Apply

Location Seattle, WA; Arlington, VA; New York, NY; San Francisco, CA


Description Join us at the cutting edge of Amazon's sustainability initiatives to work on environmental and social advancements to support Amazon's long term worldwide sustainability strategy. At Amazon, we're working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people.

The Worldwide Sustainability (WWS) organization capitalizes on Amazon’s scale & speed to build a more resilient and sustainable company. We manage our social and environmental impacts globally, driving solutions that enable our customers, businesses, and the world around us to become more sustainable.

Sustainability Science and Innovation (SSI) is a multi-disciplinary team within the WW Sustainability organization that combines science, analytics, economics, statistics, machine learning, product development, and engineering expertise. We use these skills to identify, develop, and evaluate the science and innovations necessary for Amazon, customers, and partners to meet their long-term sustainability goals and commitments.

We’re seeking a Senior Principal Scientist for Sustainability and Climate AI to drive technical strategy and innovation for our long-term sustainability and climate commitments through AI & ML. You will serve as the strategic technical advisor to science, emerging tech, and climate pledge partners operating at the Director, VPs, and SVP level. You will set the next generation modeling standards for the team and tackle the most immature/complex modeling problems following the latest sustainability/climate sciences. Staying hyper current with emergent sustainability/climate science and machine learning trends, you'll be trusted to translate recommendations to leadership and be the voice of our interpretation. You will nurture a continuous delivery culture to embed informed, science-based decision-making into existing mechanisms, such as decarbonization strategies, ESG compliance, and risk management. You will also have the opportunity to collaborate with the Climate Pledge team to define strategies based on emergent science/tech trends and influence investment strategy. As a leader on this team, you'll play a key role in worldwide sustainability organizational planning, hiring, mentorship and leadership development.

If you see yourself as a thought leader and innovator at the intersection of climate science and tech, we’d like to connect with you.


Apply

Location Palo Alto, CA


Description Amazon is looking for talented Postdoctoral Scientists to join our Stores Foundational AI team for a one-year, full-time research position.

The Stores Foundational AI team builds foundation models for multiple Amazon entities, such as ASIN, customer, seller, and brand. These foundation models are used in downstream applications by various partner teams in Stores. Our team also invests in building a foundation model for image generation, optimized for product image generation. We leverage the latest developments to create our solutions and innovate to push the state of the art.

The Postdoc is expected to conduct research and build state-of-the-art algorithms in video understanding and representation learning in the era of LLMs. Specifically: designing efficient algorithms to learn accurate representations for videos; building extensive video understanding capabilities, including various content classification tasks; designing algorithms that can generate high-quality videos from a set of product images; and improving the quality of our foundation models along the following dimensions: robustness, interpretability, fairness, sustainability, and privacy.


Apply

Overview We are seeking an exceptionally talented Postdoctoral Research Fellow to join our interdisciplinary team at the forefront of machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience. This position is hosted by the Stanford Translational AI (STAI) in Medicine and Mental Health Lab (PI: Dr. Ehsan Adeli, https://stanford.edu/~eadeli), as part of the Department of Psychiatry and Behavioral Sciences at Stanford University. The postdoc will have the opportunity to directly collaborate with researchers and PIs within the Computational Neuroscience Lab (CNS Lab) in the School of Medicine and the Stanford Vision and Learning (SVL) lab in the Computer Science Department. These dynamic research groups are renowned for groundbreaking contributions to artificial intelligence and medical sciences.

Project Description The successful candidate will have the opportunity to work on cutting-edge projects aimed at building large-scale models for neuroimaging and neuroscience through innovative AI technologies and self-supervised learning methods. The postdoc will contribute to building a large-scale foundation model from brain MRIs and other modalities of data (e.g., genetics, videos, text). The intended downstream applications include understanding the brain development process during the early ages of life, decoding brain aging mechanisms, and identifying the pathology of different neurodegenerative or neuropsychiatric disorders. We use several public and private datasets including but not limited to the Human Connectome Project, UK Biobank, Alzheimer's Disease Neuroimaging Initiative (ADNI), Parkinson’s Progression Marker Initiative (PPMI), Open Access Series of Imaging Studies (OASIS), Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA), Adolescent Brain Cognitive Development (ABCD), and OpenNeuro.

Key Responsibilities Conduct research in machine learning, computer vision, and medical image analysis, with applications in neuroimaging and neuroscience. Develop and implement advanced algorithms for analyzing medical images and other modalities of medical data. Develop novel generative models. Develop large-scale foundation models. Collaborate with a team of researchers and clinicians to design and execute studies that advance our understanding of neurological disorders. Mentor graduate students (Ph.D. and MSc). Publish findings in top-tier journals and conferences. Contribute to grant writing and proposal development for securing research funding.

Qualifications PhD in Computer Science, Electrical Engineering, Neuroscience, or a related field. Proven track record of publications in high-impact journals and conferences including ICML, NeurIPS, ICLR, CVPR, ICCV, ECCV, MICCAI, Nature, and JAMA. Strong background in machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience. Excellent programming skills in Python, C++, or similar languages and experience with ML frameworks such as TensorFlow or PyTorch. Ability to work independently and collaboratively in an interdisciplinary team. Excellent communication skills, both written and verbal.

Benefits Competitive salary and benefits package. Access to state-of-the-art facilities and computational resources. Opportunities for professional development and collaboration with leading experts in the field. Participation in international conferences and workshops. Working at Stanford University offers access to world-class research facilities and a vibrant intellectual community. The university provides numerous opportunities for interdisciplinary collaboration, professional development, and cutting-edge innovation. Additionally, being part of Stanford opens doors to a global network of leading experts and industry partners, enhancing both career growth and research impact.

Apply For full consideration, send a complete application via this form: https://forms.gle/KPQHPGGeXJcEsD6V6


Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. This project will take place in close collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others. Therefore, anticipating the partner's actions is crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project, we are particularly interested in estimating human intentions to enable collaborative tasks between humans and robots, such as human-to-robot and robot-to-human handovers.
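The intention-estimation idea described above can be sketched as a simple Bayesian filter over a discrete set of intentions. The intentions, actions, and likelihoods below are hypothetical illustrations, not the project's actual model.

```python
# Minimal Bayesian intention filter: maintain a belief over a partner's
# possible intentions and update it from observed actions.
# All intentions, actions, and probabilities here are made-up examples.

def update_belief(belief, likelihood, action):
    """One Bayes update: P(intent | action) is proportional to
    P(action | intent) * P(intent), renormalized."""
    posterior = {i: p * likelihood[i][action] for i, p in belief.items()}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}

# Prior: the human may intend a handover or may intend to keep working.
belief = {"handover": 0.5, "keep_working": 0.5}
# Likelihood of each observable action under each intention.
likelihood = {
    "handover":     {"reach_out": 0.8, "look_away": 0.1},
    "keep_working": {"reach_out": 0.2, "look_away": 0.7},
}
belief = update_belief(belief, likelihood, "reach_out")
print(belief)  # belief in "handover" rises after seeing the reach
```

In a real system the observations would come from vision-based action recognition, and the belief would drive the robot's anticipatory motion planning.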

Contact: pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda-Pineda (RobotLearn).


Apply

Location San Francisco, CA


Description Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

You will be managing a team within the Music Machine Learning and Personalization organization that is responsible for developing, training, serving and iterating on models used for personalized candidate generation for both Music and Podcasts.


Apply