



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.


Redmond, Washington, United States


Overview We are seeking highly skilled and passionate research scientists to join Responsible & Open AI Research (ROAR) in Azure Cognitive Services in Redmond, WA.

As a Principal Research Scientist, you will play a key role in advancing Responsible AI approaches to ensure safe releases of GenAI models such as GPT-4o, DALL-E, Sora, and beyond, as well as to expand and enhance the capability of Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities Conduct cutting-edge, deployment-driven research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of textual and multimodal AI risks. Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.

Enable the safe release of multimodal models from OpenAI in Azure OpenAI Service, and expand and enhance the Azure AI Content Safety Service with new detection/mitigation technologies for text and multimodal content. Develop innovative approaches to address AI safety challenges for diverse customer scenarios.

Review business and product requirements and incorporate state-of-the-art research to formulate plans that will meet business goals. Identify gaps and determine which tools, technologies, and methods to incorporate to ensure quality and scientific rigor. Proactively provide mentorship and coaching to less experienced and mid-level team members.


Apply

About the role As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible for both annotating in-house datasets and handling quality assurance for our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will have to be comfortable working with standard tools and workflows for data annotation and possess the ability to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note: this role is office-based. You will be working at our modern, friendly office in the very heart of London.


Apply

Location Multiple Locations


Description The Qualcomm Cloud Computing team is developing hardware and software for Machine Learning solutions spanning the data center, edge, infrastructure, and automotive markets. Qualcomm’s Cloud AI 100 accelerators are currently deployed at AWS / Cirrascale Cloud and at several large organizations. We are rapidly expanding our ML hardware and software solutions for large-scale deployments and are hiring across many disciplines.

We are seeking to hire for multiple machine learning positions on the Qualcomm Cloud team. In this role, you will work with Qualcomm's partners to develop and deploy best-in-class ML applications (CV, NLP, GenAI, LLMs, etc.) based on popular frameworks such as PyTorch, TensorFlow, and ONNX, optimized for Qualcomm's Cloud AI accelerators. The work will include model assessment of throughput, latency, and accuracy; model profiling and optimization; end-to-end application pipeline development; integration with customer frameworks and libraries; and responsibility for customer documentation, training, and demos. The candidate must possess excellent communication, leadership, interpersonal, organizational, and analytical skills.
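To make the assessment task concrete, here is a minimal sketch of timing an exported ONNX model with onnxruntime on CPU; the model path, input shape, and run counts are illustrative assumptions, and a real evaluation targeting Cloud AI 100 would go through Qualcomm's own runtime and tooling rather than this generic loop.

```python
# Minimal latency/throughput sketch using onnxruntime on CPU.
# "model.onnx" and the 1x3x224x224 input are placeholder assumptions;
# an assessment on Cloud AI 100 would use Qualcomm's runtime instead.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = session.get_inputs()[0]
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm up, then time repeated runs.
for _ in range(5):
    session.run(None, {inp.name: batch})

n_runs = 50
start = time.perf_counter()
for _ in range(n_runs):
    session.run(None, {inp.name: batch})
elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / n_runs:.2f} ms")
print(f"throughput:   {n_runs * batch.shape[0] / elapsed:.1f} samples/s")
```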

This role will interact with individuals of all levels and requires an experienced, dedicated professional to effectively collaborate with internal and external stakeholders. The ideal candidate has either developed or deployed deep learning models on popular ML frameworks. If you have a strong appetite for technology and enjoy working in small, agile, empowered teams solving complex problems within a high energy, oftentimes chaotic environment then this is the role for you.

Minimum Qualifications: • Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Applications Engineering, Software Development experience, or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Applications Engineering, Software Development experience, or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Applications Engineering, Software Development experience, or related work experience.

• 2+ years of experience with programming languages such as C, C++, Java, Python, etc. • 1+ year of experience with debugging techniques.

Key Responsibilities: Be a key contributor to Qualcomm’s Cloud AI GitHub repo and developer documentation. Work with developers in large organizations to onboard them on Qualcomm’s Cloud AI ML stack, improve and optimize their deep learning models on Qualcomm AI 100, and deploy their applications at scale. Collaborate and interact with internal teams to analyze and optimize training and inference for deep learning. Work on Triton, ExecuTorch, Inductor, and TorchDynamo to build abstraction layers for the inference accelerator. Optimize LLM/GenAI workloads for both scale-up (multi-SoC) and scale-out (multi-card) systems. Partner with product management and hardware/software engineering to highlight customer progress, gaps in product features, etc.


Apply

Redmond, Washington, United States


Overview We are seeking a Principal Research Engineer to join our organization and help improve the steerability and control of Large Language Models (LLMs) and other AI systems. Our team currently develops Guidance, a fully open-source project that enables developers to control language models more precisely and efficiently with constrained decoding.

As a Principal Research Engineer, you will play a crucial role in advancing the frontier of constrained decoding and imagining new application programming interfaces (APIs) for language models. If you’re excited about links between formal grammars and generative AI, deeply understanding and optimizing LLM inference, enabling more responsible AI without fine-tuning and RLHF, and/or exploring fundamental changes to the “text-in, text-out” API, we’d love to hear from you. Our team offers a vibrant environment for cutting-edge, multidisciplinary research. We have a long track record of open-source code and open publication policies, and you’ll have the opportunity to collaborate with world-leading experts across Microsoft and top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities Develop and implement new constrained decoding research techniques for increasing LLM inference quality and/or efficiency. Example areas of interest include speculative execution, new decoding strategies (e.g. extensions to beam search), “classifier in the loop” decoding for responsible AI, improving AI planning, and explorations of attention-masking based constraints.

Re-imagine the use and construction of context-free grammars (CFGs) and beyond to fit generative AI. Examples of improvements here include better tools for constructing formal grammars, extensions to Earley parsing, and efficient batch processing for constrained generation. Consideration of how these techniques are presented to developers (who may not be well versed in grammars and constrained generation) in an intuitive, idiomatic programming syntax is also top of mind.

Design principled evaluation frameworks and benchmarks for measuring the effects of constrained decoding on a model. Some areas of interest to study carefully include efficiency (token throughput and latency), generation quality, and impacts of constrained decoding on AI safety.

Publish your research in top AI conferences and contribute your research advances to the guidance open-source project.
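To make the idea concrete, below is a minimal sketch of the simplest form of constrained generation: forcing the model to pick one of a fixed set of continuations by scoring each candidate, in the spirit of a select()-style constraint. The model name and the toy sentiment constraint are assumptions for illustration; the guidance project itself constrains decoding token by token via grammars rather than scoring whole continuations.

```python
# Hedged sketch: forced choice among a fixed set of continuations, the simplest
# instance of constrained generation. The model ("gpt2") and the toy constraint
# are illustrative assumptions, not the guidance project's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "The sentiment of 'I loved this movie' is"
allowed = [" positive", " negative"]   # the "grammar": exactly one of these
input_ids = tok(prompt, return_tensors="pt").input_ids
prompt_len = input_ids.shape[1]

scores = []
for continuation in allowed:
    cont_ids = tok.encode(continuation)
    seq = torch.cat([input_ids, torch.tensor([cont_ids])], dim=-1)
    with torch.no_grad():
        logprobs = torch.log_softmax(model(seq).logits[0], dim=-1)
    # Sum the log-probabilities of the continuation tokens under the model.
    scores.append(sum(logprobs[prompt_len - 1 + i, t].item()
                      for i, t in enumerate(cont_ids)))

print(allowed[max(range(len(allowed)), key=lambda i: scores[i])])
```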

Embody our Culture and Values


Apply

You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional, and life-like synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions, which are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing, and release, and you have a passion for large-scale data, large-model training, and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses across the globe at a rapid pace.


Apply

Redmond, Washington, United States


Overview The Azure AI Platform (AIP) provides organizations across the world with the tooling and infrastructure needed to build and host AI workloads. The AI Platform organization is scaling rapidly, and we are establishing a world-class data analytics platform to support data-driven decision making throughout the organization.

We are looking to hire a Senior Data Scientist to join the newly formed AI Platform Analytics team. This individual will be responsible for collaborating with teams across AI Platform to establish trustworthy data sets and provide actionable insights and analysis.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

Apply your knowledge in quantitative analysis, data mining, and the presentation of data to inform decision-making. Build data manipulation, processing, and data visualization tools, and share these tools and your knowledge across the team, Cloud and AI, and Microsoft. Handle large amounts of data using various tools, including your own. Ensure high-quality and reliable data. Drive end-to-end projects by applying and analyzing data against the associated business problems. Engage with upper-level management to inform key business decisions. Mentor other team members. Contribute to a data-driven culture by collaborating with product and engineering teams across Azure to establish and share best practices. Embody our culture and values.


Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. This project will take place in close collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others. Therefore, anticipating the partner's actions is crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project, we are particularly interested in estimating human intentions to enable collaborative tasks between humans and robots, such as human-to-robot and robot-to-human handovers.

Contact: pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda-Pineda (RobotLearn).


Apply

Zoox is looking for a software engineer to join our Perception team and help us build novel architectures for classifying and understanding the complex and dynamic environments in our cities. In this role, you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms. We are creating new algorithms for segmentation, tracking, classification, and high-level scene understanding, and you could work on any (or all!) of these components.

We're looking for engineers with advanced degrees and experience building perception pipelines that work with real data in rapidly changing and uncertain environments.


Apply

Location Multiple Locations


Description Today, more intelligence is moving to end devices, and mobile is becoming the pervasive AI platform. Building on the smartphone foundation and the scale of mobile, Qualcomm envisions making AI ubiquitous—expanding beyond mobile and powering other end devices, machines, vehicles, and things. We are inventing, developing, and commercializing power-efficient on-device AI, edge cloud AI, and 5G to make this a reality.

Job Purpose & Responsibilities As a member of Qualcomm’s ML Systems Team, you will participate in two activities: (1) development and evolution of ML/AI compilers (production and exploratory versions) for efficient mappings of ML/AI algorithms onto existing and future HW, and (2) analysis of ML/AI algorithms and workloads to drive future features in Qualcomm’s ML HW/SW offerings.
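As a toy illustration of the framework-interchange step that such a compiler flow typically starts from, here is a hedged sketch exporting a small PyTorch model to ONNX; the model, input shape, and opset version are assumptions for illustration only and are unrelated to Qualcomm's actual toolchain.

```python
# Hedged sketch: exporting a tiny PyTorch model to ONNX, the kind of framework
# interchange an ML compiler flow typically starts from. Model, shape, and
# opset are illustrative assumptions, not Qualcomm's toolchain.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinyNet().eval()
dummy = torch.randn(1, 3, 32, 32)
torch.onnx.export(
    model, dummy, "tinynet.onnx",
    input_names=["input"], output_names=["logits"],
    opset_version=17,
)
print("exported tinynet.onnx")
```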

Key Responsibilities:
  • Contributing to the development and evolution of ML/AI compilers within Qualcomm
  • Defining and implementing algorithms for mapping ML/AI workloads to Qualcomm HW
  • Understanding trends in ML network design, through customer engagements and the latest academic research, and how these affect both SW and HW design
  • Creating performance-driven simulation components (using C++, Python) for analysis and design of high-performance HW/SW algorithms on future SoCs
  • Exploring and analyzing performance/area/power trade-offs for future HW and SW ML algorithms
  • Pre-silicon prediction of performance for various ML algorithms
  • Running, debugging, and analyzing performance simulations to suggest enhancements to Qualcomm hardware and software that tackle compute and system memory-related bottlenecks

Successful applicants will work in cross-site, cross-functional teams.

Requirements:
  • Demonstrated ability to learn, think, and adapt in a fast-changing environment
  • Detail-oriented, with strong problem-solving, analytical, and debugging skills
  • Strong communication skills (written and verbal)
  • Strong background in algorithm development and performance analysis (essential)

The following experiences would be significant assets:
  • Strong object-oriented design principles
  • Strong knowledge of C++
  • Strong knowledge of Python
  • Experience in compiler design and development
  • Knowledge of network model formats/platforms (e.g., PyTorch, TensorFlow, ONNX)
  • On-silicon debug skills for high-performance compute algorithms
  • Knowledge of algorithms and data structures
  • Knowledge of software development processes (revision control, CI/CD, etc.)
  • Familiarity with tools such as git, Jenkins, Docker, clang/MSVC
  • Knowledge of computer architecture, digital circuits, and event-driven transactional models/simulators


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Sr. Fullstack Engineer, you will work on our platform engineering team, playing a crucial role in enabling our research engineers to fine-tune our foundation models and streamline the machine learning process for our autonomous technology. You will develop products that empower our internal teams to maximize efficiency and innovation in our product. Specifically, you will:

  • Build mission-critical tools for improving observability and scaling the entire machine-learning process.
  • Use modern technologies to serve huge amounts of data, visualize key metrics, manage our data inventory, trigger backend data processing pipelines, and more.
  • Work closely with people across the company to create a seamless UI experience.

Apply

Location San Diego


Description

Artificial Intelligence is changing the world for the benefit of human beings and societies. Qualcomm, as the world's leading mobile computing platform provider, is committed to enabling the wide deployment of intelligent solutions on all possible devices, such as smartphones, autonomous vehicles, robotics, and IoT devices. Qualcomm is creating building blocks for the intelligent edge.

We are part of Qualcomm AI Research, and we focus on advancing Edge AI machine learning technology, including model fine-tuning, hardware acceleration, model quantization, model compression, network architecture search (NAS), edge inference, and related fields. Come join us on this exciting journey. In this particular role, you will work in a dynamic research environment and be part of a multi-disciplinary team of researchers and software engineers who work with cutting-edge AI frameworks and tools. You will architect, design, develop, test, and deploy on- and off-device benchmarking workflows for model zoos.
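As one hedged example of a step such a benchmarking workflow might contain, the sketch below compares the CPU latency of an fp32 torchvision model against a dynamically quantized variant; the model choice and the dynamic-quantization scheme are illustrative assumptions rather than the team's actual on-device tooling, and since dynamic quantization only converts Linear layers, the gain on a conv-heavy network is modest.

```python
# Hedged sketch: compare fp32 vs. dynamically quantized latency on CPU.
# resnet18 and dynamic quantization are illustrative assumptions; dynamic
# quantization only converts Linear layers, so gains on a conv net are modest.
import time
import torch
import torchvision.models as models

def mean_latency(m, x, runs=20):
    m.eval()
    with torch.no_grad():
        for _ in range(3):          # warm-up
            m(x)
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - start) / runs

x = torch.randn(1, 3, 224, 224)
fp32 = models.resnet18(weights=None)
int8 = torch.quantization.quantize_dynamic(fp32, {torch.nn.Linear}, dtype=torch.qint8)

print(f"fp32 latency: {1000 * mean_latency(fp32, x):.1f} ms")
print(f"int8 latency: {1000 * mean_latency(int8, x):.1f} ms")
```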

Minimum Qualifications: • Bachelor's degree in Computer Science, Engineering, Information Systems, or related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience. OR Master's degree in Computer Science, Engineering, Information Systems, or related field and 3+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience. OR PhD in Computer Science, Engineering, Information Systems, or related field and 2+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience. The successful applicant should have a strong theoretical background and proven hands-on experience with AI as well as with modern software, web, and cloud engineering.

Must-have experience and skills:
  • Strong theoretical background in AI and general ML techniques
  • Proven hands-on experience with model training, inference, and evaluation
  • Proven hands-on experience with PyTorch, ONNX, TensorFlow, CUDA, and others
  • Experience developing data pipelines for ML/AI training and inference in the cloud
  • Prior experience deploying containerized (web) applications to IaaS environments such as AWS (preferred), Azure, or GCP, backed by DevOps and CI/CD technologies
  • Strong Linux command line skills
  • Strong experience with Docker and Git
  • Strong general analytical and debugging skills
  • Prior experience working in agile environments
  • Prior experience collaborating with multi-disciplinary teams across time zones
  • Strong team player, communicator, presenter, mentor, and teacher

Preferred extra experience and skills:
  • Prior experience with model quantization, profiling, and running models on edge devices
  • Prior experience developing full-stack web applications using frameworks such as Ruby on Rails (preferred), Django, Phoenix/Elixir, Spring, Node.js, or others
  • Knowledge of relational database design and optimization, and hands-on experience running Postgres (preferred), MySQL, or other relational databases in production

Preferred qualifications: Bachelor's, Master's, and/or PhD degree in Computer Science, Engineering, Information Systems, or related field and 2-5 years of work experience in Software Engineering, Systems Engineering, Hardware Engineering, or related.


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior of other agents and of our own vehicle, both for on-vehicle use and for offline applications. To solve these problems, we develop deep learning algorithms that learn behaviors from data and apply them on-vehicle, to influence our vehicle’s driving behavior, and offline, to provide learned models for autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team in the advancement of our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

Location Sunnyvale, CA / Bellevue, WA


Description Are you fueled by a passion for computer vision, machine learning, and AI, and eager to leverage your skills to enrich the lives of millions across the globe? Join us on the Ring AI team, where we're not just offering a job but an opportunity to revolutionize safety and convenience in our neighborhoods through cutting-edge innovation.

You will be part of a dynamic team dedicated to pushing the boundaries of computer vision, machine learning and AI to deliver an unparalleled user experience for our neighbors. This position presents an exceptional opportunity for you to pioneer and innovate in AI, making a profound impact on millions of customers worldwide. You will partner with world-class AI scientists, engineers, product managers and other experts to develop industry-leading AI algorithms and systems for a diverse array of Ring and Blink products, enhancing the lives of millions of customers globally. Join us in shaping the future of AI innovation at Ring and Blink, where exciting challenges await!


Apply

Redwood City, CA; or Remote, US


We help make autonomous technologies more efficient, safer, and accessible.

Helm.ai builds AI software for autonomous driving and robotics. Our "Deep Teaching" methodology is uniquely data and capital efficient, allowing us to surpass traditional approaches. Our unsupervised learning software can train neural networks without the need for human annotation or simulation and is hardware-agnostic. We work with some of the world's largest automotive manufacturers and we've raised over $100M from Honda, Goodyear Ventures, Mando, and others to help us scale.

Our team is made up of people with a diverse set of experiences in software and academia. We work together towards one common goal: to integrate the software you'll help us build into hundreds of millions of vehicles.

We offer:
  - Competitive health insurance options
  - 401K plan management
  - Remote-friendly and flexible team culture
  - Free lunch and a fully-stocked kitchen in our South Bay office
  - Additional perks: monthly wellness stipend, office set-up allowance, company retreats, and more to come as we scale
  - The opportunity to work on one of the most interesting, impactful problems of the decade

Visit our website to apply for a position.


Apply