



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

A postdoctoral position is available in Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow NIH guidelines, commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total including 8 80-GB Nvidia H100 GPUs, 10 48-GB Nvidia RTX A6000 GPUs, and 4 Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with a background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology, or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve them

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with the subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

Gothenburg, Sweden

This fully-funded PhD position offers an opportunity to delve into the area of geometric deep learning within the broader landscape of machine learning and 3D computer vision. As a candidate, you'll have the chance to develop theoretical concepts and innovative methodologies while contributing to real-world imaging applications. Moreover, you will enjoy working in a diverse, collaborative, supportive and internationally recognized environment.

The PhD project centers on understanding and improving deep learning methods for 3D scene analysis and 3D generative diffusion models. We aim to explore new ways of encoding symmetries in deep learning models in order to scale up computations, a necessity for realizing truly 3D generative models for general scenes. We also aim to explore applications of these models to key problems in novel view synthesis and self-supervised learning.

If you are interested and present at CVPR, then feel free to reach out to Prof. Fredrik Kahl, head of the Computer Vision Group.


Apply

The Autonomy Software Metrics team is responsible for providing engineers and leadership at Zoox with tools to evaluate the behavior of Zoox’s autonomy stack using simulation. The team collaborates with experts across the organization to ensure a high safety bar, a great customer experience, and rapid feedback to developers. Through simulation, the team evaluates the complete end-to-end customer experience, covering factors that impact safety, comfort, legality, road citizenship, progress, and more. You’ll be part of a passionate team making transportation safer, smarter, and more sustainable. This role gives you high visibility within the company and is critical to successfully launching our autonomous driving software.


Apply

Location Seattle, WA


Description Amazon's Compliance Shared Services (CoSS) is looking for a smart, energetic, and creative Sr. Applied Scientist to join the Applied Research Science team in Seattle and to extend and invent state-of-the-art research in multi-modal architectures and large language models across federated and continuous learning paradigms spread across multiple systems. At Amazon, we are working to be the most customer-centric company on earth. Millions of customers trust us to ensure a safe shopping experience. This is an exciting and challenging position: you will deliver scientific innovations into production systems at Amazon scale that increase automation accuracy and coverage, and extend and invent new research as a key author to deliver re-usable foundational capabilities for automation.

You will analyze and process large amounts of image, text and tabular data from product detail pages, combine them with additional external and internal sources of multi-modal data, evaluate state-of-the-art algorithms and frameworks, and develop new algorithms in federated and continuous learning modes that can be integrated and launched across multiple systems. You will partner with engineers and product managers across multiple Amazon teams to design new ML solutions implemented across worldwide Amazon stores for the entire Amazon product catalog.


Apply

Location New York, NY Seattle, WA Boston, MA


Description Climate Pledge Friendly helps customers discover and shop for products that are more sustainable. We partner with trusted sustainability certifications to highlight products that meet strict standards and help preserve the natural world. By shifting customer demand towards more sustainable products, we incentivize selling partners to build better selection, creating a virtuous cycle that yields significant environmental benefit at scale.

We are seeking a Senior Applied Scientist who is not just adept in the theoretical aspects of Machine Learning (ML), Artificial Intelligence (AI), and Large Language Models (LLMs) but also possesses a pragmatic, hands-on approach to navigating the complexities of innovation. You will take the lead in conceptualizing, building, and launching models that significantly improve our shopping experience. A successful applicant will display a comprehensive skill set across machine learning model development, implementation, and optimization. This includes a strong foundation in data management, software engineering best practices, and a keen awareness of the latest developments in distributed systems technology.

You will work with business leaders, scientists, and engineers to translate business and functional requirements into concrete deliverables, including the design, development, testing, and deployment of highly scalable distributed ML models and services. The types of initiatives you can expect to work on include a) personalized recommendations that help our customers find the right sustainable products on each shopping journey, b) automated solutions that combine ML/LLM and data mining to identify products that align with our sustainability goals and resonate with our customers' values, and c) models to measure the environmental and econometric impacts of sustainable shopping.


Apply

Location Amsterdam, Netherlands


Description

At Qualcomm AI Research, we are advancing AI to make its core capabilities – perception, reasoning, and action – ubiquitous across devices. Our mission is to make breakthroughs in fundamental AI research and scale them across industries. By bringing together some of the best minds in the field, we’re pushing the boundaries of what’s possible and shaping the future of AI.

As Principal Machine Learning Researcher at Qualcomm, you:

  • conduct innovative research in machine learning, deep learning, and AI that advances the state of the art.
  • develop and quickly iterate on innovative research ideas, and prototype and implement them in collaboration with other researchers and engineers.
  • stay on top of, and actively shape, the latest research in the field, and publish papers at top scientific conferences.
  • help define and shape our research vision and planning within and across teams, and are passionate about execution.
  • engage with leads and stakeholders across business units on how to translate research progress into business impact.
  • work in one or more of the following research areas: generative AI, foundation models (LLMs, LVMs), reinforcement learning, neural network efficiency (e.g., quantization, conditional computation, efficient hardware), on-device learning and personalization, and foundational AI research.

Working at Qualcomm means being part of a global company (headquartered in San Diego) that fosters a diverse workforce and puts emphasis on the learning opportunities and professional development of its employees. You will work closely with researchers that have published at major conferences, work on campus at the University of Amsterdam, where you have the opportunity to collaborate with academic researchers through university partnerships such as the QUVA lab, and live in a scenic, vibrant city with a healthy work/life balance and a diversity of cultural activities. In addition, you can join plenty of mentorship, learning, peer, and affinity group opportunities within the company. In this way you can easily develop personal and professional skills in your areas of interest. You’re empowered to start your own initiatives and, in doing so, collaborate with colleagues in offices across teams and countries.

Minimum qualifications:

  • PhD or Master’s degree in Machine Learning, Computer Vision, Physics, Mathematics, Electrical Engineering, or a similar field, or equivalent practical experience.
  • 8+ years of experience in machine learning and AI, and experience working in an academic or industry research lab.
  • Strong drive to continuously improve beyond the status quo in translating new ideas into innovative solutions.
  • Track record of scientific leadership, having published impactful work at major conferences in machine learning, computer vision, or NLP (e.g., NeurIPS, ICML, ICLR, CVPR, ICCV, ACL, EMNLP, NAACL).
  • Programming experience in Python and experience with standard deep learning toolkits.

Preferred qualifications:

  • Hands-on experience with foundation models (LLMs, LVMs) and reinforcement learning.
  • Proven experience in technology and team leadership, and experience with cross-functional stakeholder engagements.
  • Experience in writing clean and maintainable code for research-internal use (no product development).
  • Aptitude for guiding and mentoring more junior researchers.


Apply

The Perception team at Zoox is responsible for developing the eyes and ears of our self-driving car. Navigating safely and competently in the world requires us to detect, classify, track, and understand many different attributes of all the objects around us that we might interact with, all in real time and with very high precision.

As a member of the Perception team at Zoox, you will be responsible for developing and improving state-of-the-art machine learning techniques for everything from 2D/3D object detection, panoptic segmentation, and tracking to attribute classification. You will work not just with our team of talented engineers and researchers in perception, but cross-functionally with several teams, including sensors, prediction, and planning, and you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms.


Apply

Location San Diego


Description

Artificial Intelligence is changing the world for the benefit of human beings and societies. Qualcomm, as the world's leading mobile computing platform provider, is committed to enabling the wide deployment of intelligent solutions on all possible devices, such as smartphones, autonomous vehicles, robots, and IoT devices. Qualcomm is creating building blocks for the intelligent edge.

We are part of Qualcomm AI Research, and we focus on advancing Edge AI machine learning technology, including model fine-tuning, hardware acceleration, model quantization, model compression, neural architecture search (NAS), edge inference, and related fields. Come join us on this exciting journey. In this particular role, you will work in a dynamic research environment and be part of a multi-disciplinary team of researchers and software engineers who work with cutting-edge AI frameworks and tools. You will architect, design, develop, test, and deploy on- and off-device benchmarking workflows for model zoos.

Minimum Qualifications:

  • Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
  • Master's degree in Computer Science, Engineering, Information Systems, or a related field and 3+ years of related work experience; OR
  • PhD in Computer Science, Engineering, Information Systems, or a related field and 2+ years of related work experience.

The successful applicant should have a strong theoretical background and proven hands-on experience with AI, as well as with modern software, web, and cloud engineering.

Must-have experience and skills:

  • Strong theoretical background in AI and general ML techniques.
  • Proven hands-on experience with model training, inference, and evaluation.
  • Proven hands-on experience with PyTorch, ONNX, TensorFlow, CUDA, and others.
  • Experience developing data pipelines for ML/AI training and inferencing in the cloud.
  • Prior experience deploying containerized (web) applications to IaaS environments such as AWS (preferred), Azure, or GCP, backed by DevOps and CI/CD technologies.
  • Strong Linux command-line skills.
  • Strong experience with Docker and Git.
  • Strong general analytical and debugging skills.
  • Prior experience working in agile environments.
  • Prior experience collaborating with multi-disciplinary teams across time zones.
  • Strong team player, communicator, presenter, mentor, and teacher.

Preferred extra experience and skills:

  • Prior experience with model quantization, profiling, and running models on edge devices.
  • Prior experience developing full-stack web applications using frameworks such as Ruby on Rails (preferred), Django, Phoenix/Elixir, Spring, Node.js, or others.
  • Knowledge of relational database design and optimization, and hands-on experience running Postgres (preferred), MySQL, or other relational databases in production.

Preferred qualifications:

  • Bachelor's, Master's, and/or PhD degree in Computer Science, Engineering, Information Systems, or a related field and 2-5 years of work experience in Software Engineering, Systems Engineering, Hardware Engineering, or related.


Apply

Location: Sunnyvale, California, USA


Are you a gamer? Are you passionate about the cutting edge of foundation models and multimodal LLM agents for 3D world creation in future gaming?

Cybever.ai is on the lookout for an innovative AI Research Scientist to join our dynamic team and revolutionize the world of generative AI for 3D content.

What You'll Be Doing:

  • Research and Development: Lead groundbreaking research in multimodal large language models (LLMs) and AI agents.
  • 3D Content Creation: Develop advanced models and algorithms to create large-scale 3D assets and environments from text or images, enhancing our AI-powered creative suite.
  • Collaboration and Integration: Work closely with the engineering team to integrate new AI capabilities into our existing products, ensuring they meet the needs of game developers, movie productions, and 3D artists.
  • Innovation in AI: Stay ahead of the curve by publishing research, attending conferences, participating in open source projects, and collaborating with the global AI research community.

You're Probably a Match If You Have:

  • Strong Research Background: Ph.D. or equivalent experience in AI, machine learning, computer vision, computer graphics, or related fields.
  • Technical Skills: Proficiency in Python, PyTorch, TensorFlow, or similar frameworks.
  • 3D Software Experience: Hands-on experience with tools like Blender, Houdini, Unreal Engine, or Unity, or a willingness to learn.
  • Relevant Experience: Demonstrated work in computer vision or computer graphics, with a portfolio of projects or publications to showcase your expertise.

About Cybever:

Cybever, headquartered in the heart of Silicon Valley and founded by ex-Googlers, is a pioneer in the generative AI space, transforming how game developers and artists create 3D content. Our innovative tools enable creating large-scale, high-fidelity, and interactive 3D environments in minutes, freeing up creators to focus on what they do best. With partnerships with industry leaders like Unreal Engine, we are at the forefront of integrating AI into the creative process, empowering developers worldwide to realize their visions more easily and quickly.

Employment Type:

  • Full-Time Employment: This is a full-time position with potential for H1B and OPT sponsorship.
  • International Opportunities: We are also open to hiring international contractors who meet our qualifications.
  • Research Intern or Residency: Ideal for graduate students, this is a part-time or full-time opportunity to gain hands-on research experience while completing your studies.

Join us at Cybever and be a part of a team that's shaping the future of 3D creation. If you're ready to push the boundaries of what's possible with AI, we want to hear from you!


Apply

Redmond, Washington, United States


Overview We are seeking a Principal Research Engineer to join our organization and help improve the steerability and control of Large Language Models (LLMs) and other AI systems. Our team currently develops Guidance, a fully open-source project that enables developers to control language models more precisely and efficiently with constrained decoding.

As a Principal Research Engineer, you will play a crucial role in advancing the frontier of constrained decoding and imagining new application programming interfaces (APIs) for language models. If you’re excited about links between formal grammars and generative AI, deeply understanding and optimizing LLM inference, enabling more responsible AI without fine-tuning and RLHF, and/or exploring fundamental changes to the “text-in, text-out” API, we’d love to hear from you. Our team offers a vibrant environment for cutting-edge, multidisciplinary research. We have a long track record of open-source code and open publication policies, and you’ll have the opportunity to collaborate with world-leading experts across Microsoft and top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Develop and implement new constrained decoding research techniques for increasing LLM inference quality and/or efficiency. Example areas of interest include speculative execution, new decoding strategies (e.g., extensions to beam search), “classifier in the loop” decoding for responsible AI, improving AI planning, and explorations of attention-masking-based constraints.
  • Re-imagine the use and construction of context-free grammars (CFGs) and beyond to fit generative AI. Examples of improvements here include better tools for constructing formal grammars, extensions to Earley parsing, and efficient batch processing for constrained generation. How these techniques are presented to developers, who may not be well versed in grammars and constrained generation, in an intuitive, idiomatic programming syntax is also top of mind.
  • Design principled evaluation frameworks and benchmarks for measuring the effects of constrained decoding on a model. Areas of interest to study carefully include efficiency (token throughput and latency), generation quality, and impacts of constrained decoding on AI safety.
  • Publish your research in top AI conferences and contribute your research advances to the Guidance open-source project.
  • Embody our Culture and Values
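The core mechanism of constrained decoding can be sketched in a few lines. This is a hypothetical illustration, not the Guidance API: at each decoding step, the grammar supplies the set of token IDs that are currently legal, and every other token's logit is masked to negative infinity before the next token is selected.

```python
import math

def constrain_logits(logits, allowed_ids):
    """Mask the logits of tokens the grammar disallows to -inf."""
    allowed = set(allowed_ids)
    return [x if i in allowed else -math.inf for i, x in enumerate(logits)]

def greedy_constrained_step(logits, allowed_ids):
    """One greedy decoding step restricted to grammar-legal tokens."""
    masked = constrain_logits(logits, allowed_ids)
    return max(range(len(masked)), key=masked.__getitem__)

# Toy vocabulary of 4 tokens: token 2 has the highest raw logit,
# but the grammar only permits tokens {0, 3}, so token 3 is chosen.
logits = [0.1, 0.5, 2.0, 1.2]
print(greedy_constrained_step(logits, [0, 3]))  # prints 3
```

In a real system the allowed set would come from an incremental parser (e.g., an Earley parser over the CFG) advanced one token at a time; the masking is what makes grammar-violating outputs impossible rather than merely unlikely.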


Apply

Redmond, Washington, United States


Overview In Mixed Reality, people—not devices—are at the center of everything we do. Our tech moves beyond screens and pixels, creating a new reality aimed at bringing us closer together—whether that’s scientists “meeting” on the surface of a virtual Mars or some yet undreamt-of possibility. To get there, we’re incorporating diverse groundbreaking technologies, from the revolutionary Holographic Processing Unit to computer vision, machine learning, human-computer interaction, and more. We’re a growing team of talented engineers and artists putting technology on a human path across all Windows devices, including Microsoft HoloLens, the Internet of Things, phones, tablets, desktops, and Xbox. We believe there is a better way. If you do too, we need you! 

You are drawn to work on the latest and most innovative products in the world. You seek projects that will transform how people interact with technology. You have a drive to grow your skillset by finding unique challenges that have yet to be solved. We are looking for Senior Software Engineer to come and join us in delivering the next wave of holographic experiences in an exciting new project. 

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Incorporate the latest artificial intelligence, machine learning, computer vision, and sensor fusion capabilities into the design of our products and services.
  • Characterize various sensors, such as inertial measurement units, magnetometers, visible-light cameras, depth cameras, and GPS, to understand their properties and how they relate to the accuracy of tracking, mapping, and localization.
  • Design and implement innovative measurement solutions to quantify the accuracy and reliability of Mixed Reality devices against industry gold-standard ground-truth systems.
  • Partner with engineers, designers, and program managers to deliver solid technical designs.
  • Embody our Culture and Values


Apply

You will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting edge challenges in the Generative AI space, with a focus on creating highly realistic, emotional and life-like Synthetic humans through text-to-video. Within the team you’ll have the opportunity to work with different research teams and squads across multiple areas led by our Director of Science, Prof. Vittorio Ferrari, and directly impact our solutions that are used worldwide by over 55,000 businesses.

If you have seen the full ML lifecycle from ideation through implementation, testing and release, and you have a passion for large data, large model training and building solutions with clean code, this is your chance. This is an opportunity to work for a company that is impacting businesses at a rapid pace across the globe.


Apply

Redmond, Washington, United States


Overview Microsoft Research (MSR) AI Frontiers lab is seeking applications for the position of Senior Research Engineer – Generative AI to join its teams in Redmond, WA, and New York City, NY.

The mission of the AI Frontiers lab is to expand the pareto frontier of AI capabilities, efficiency, and safety through innovations in foundation models and learning agent platforms. Some of our projects include work on Small Language Models (e.g. Phi, Orca), foundation models for actions (e.g., in gaming, robotics, and Office productivity tools) and Multi-Agent AI (e.g. AutoGen).

We are seeking Senior Research Engineers to join our team and contribute to the advancement of Generative AI and Large Language Models (LLMs) technologies. As a Research Engineer, you will play a crucial role in developing, improving, and exploring the capabilities of Generative AI models. Your work will have a significant impact on the development of cutting-edge technologies, advancing state-of-the-art and providing practical solutions to real-world problems.  

Our ongoing research areas encompass but are not limited to:

  • Pre-training: especially of language models, action models, and multimodal models
  • Alignment and post-training: e.g., instruction tuning and reinforcement learning from feedback
  • Continual learning: enabling LLMs to evolve and adapt over time and learn from previous experiences and human interactions
  • Specialization: tailoring models to meet application-specific requirements
  • Orchestration and multi-agent systems: automated orchestration between multiple agents, incorporating human feedback and oversight

Microsoft Research (MSR) offers a vibrant environment for cutting-edge, multidisciplinary research, including access to diverse, real-world problems and data, opportunities for experimentation and real-world impact, an open publication policy, and close links to top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Embody our Culture and Values

Responsibilities As a Senior Research Engineer in AI Frontiers, you will design, develop, execute, and implement technology research projects in collaboration with other researchers, engineers, and product groups.

As a member of a world-class research organization, you will be a part of research breakthroughs in the field and will be given an opportunity to realize your ideas in products and services used worldwide.


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IoT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proof-of-concepts in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and has converted leading research into best-in-class, user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second after performing our full-stack AI optimizations on the model.

Role responsibility can include both, applied and fundamental research in the field of machine learning with development focus in one or many of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g., large language models, deep generative models (VAEs, normalizing flows, ARMs, etc.), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimization, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling-based, back-propagation-based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design.

  • Design, develop, and test software for machine learning frameworks that optimize models to run efficiently on edge devices. The candidate is expected to have a strong interest in and deep passion for making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Research, design, develop, enhance, and implement different components of machine learning compiler for HW Accelerators.

  • Design, implement and train DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).
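To make one of the model-efficiency topics above concrete, here is a minimal sketch of symmetric per-tensor int8 post-training quantization (an illustration only, not how AIMET implements it): weights are scaled so the largest magnitude maps to 127, rounded to integers, and later dequantized by the same scale.

```python
import numpy as np

def quantize_int8_symmetric(w):
    """Symmetric per-tensor int8 quantization: the largest |w| maps to 127."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, scale = quantize_int8_symmetric(w)
w_hat = dequantize(q, scale)
# The element-wise reconstruction error is at most scale / 2,
# i.e., half of one quantization step.
```

Production schemes add refinements such as per-channel scales, asymmetric zero points, and calibration on activation statistics, but the round-and-rescale core above is the common starting point.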


Apply

Location Sunnyvale, CA Seattle, WA New York, NY Cambridge, MA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to help build industry-leading technology with multimodal systems.

As an Applied Scientist with the AGI team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (Gen AI) in Computer Vision.


Apply