



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category and location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.


Location: San Diego


Description

Artificial Intelligence is changing the world for the benefit of human beings and societies. Qualcomm, as the world's leading mobile computing platform provider, is committed to enabling the wide deployment of intelligent solutions on all possible devices, such as smartphones, autonomous vehicles, robotics, and IoT devices. Qualcomm is creating the building blocks for the intelligent edge.

We are part of Qualcomm AI Research, and we focus on advancing Edge AI machine learning technology, including model fine-tuning, hardware acceleration, model quantization, model compression, network architecture search (NAS), edge inference, and related fields. Come join us on this exciting journey. In this particular role, you will work in a dynamic research environment and be part of a multi-disciplinary team of researchers and software engineers who work with cutting-edge AI frameworks and tools. You will architect, design, develop, test, and deploy on- and off-device benchmarking workflows for model zoos.
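As a rough illustration of what one off-device step in such a benchmarking workflow can look like (a minimal sketch only; the model choice, file name, and timing loop are placeholders, not Qualcomm tooling), the snippet below exports a PyTorch model to ONNX and times CPU inference with ONNX Runtime:

# Minimal sketch (illustrative, not Qualcomm tooling): export a PyTorch model
# to ONNX and measure average CPU inference latency with ONNX Runtime.
import time

import numpy as np
import torch
import torchvision
import onnxruntime as ort

model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "mobilenet_v2.onnx",
                  input_names=["input"], output_names=["output"])

session = ort.InferenceSession("mobilenet_v2.onnx")
x = np.random.randn(1, 3, 224, 224).astype(np.float32)

session.run(None, {"input": x})            # warm-up run
start = time.perf_counter()
for _ in range(20):
    session.run(None, {"input": x})
print(f"avg latency: {(time.perf_counter() - start) / 20 * 1e3:.1f} ms")

A real model-zoo workflow would wrap steps like these in automated pipelines and also run quantized variants on target hardware.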

Minimum Qualifications:
  • Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
  • Master's degree in Computer Science, Engineering, Information Systems, or a related field and 3+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
  • PhD in Computer Science, Engineering, Information Systems, or a related field and 2+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience.

The successful applicant should have a strong theoretical background and proven hands-on experience with AI as well as with modern software, web, and cloud engineering.

Must-have experience and skills:
  • Strong theoretical background in AI and general ML techniques
  • Proven hands-on experience with model training, inference, and evaluation
  • Proven hands-on experience with PyTorch, ONNX, TensorFlow, CUDA, and related tools
  • Experience developing data pipelines for ML/AI training and inference in the cloud
  • Prior experience deploying containerized (web) applications to IaaS environments such as AWS (preferred), Azure, or GCP, backed by DevOps and CI/CD technologies
  • Strong Linux command-line skills
  • Strong experience with Docker and Git
  • Strong general analytical and debugging skills
  • Prior experience working in agile environments
  • Prior experience collaborating with multi-disciplinary teams across time zones
  • Strong team player, communicator, presenter, mentor, and teacher

Preferred extra experience and skills:
  • Prior experience with model quantization, profiling, and running models on edge devices
  • Prior experience developing full-stack web applications using frameworks such as Ruby on Rails (preferred), Django, Phoenix/Elixir, Spring, Node.js, or others
  • Knowledge of relational database design and optimization; hands-on experience running Postgres (preferred), MySQL, or other relational databases in production

Preferred qualifications:
  • Bachelor's, Master's, and/or PhD degree in Computer Science, Engineering, Information Systems, or a related field and 2-5 years of work experience in Software Engineering, Systems Engineering, Hardware Engineering, or a related area


Apply

Location: Bellevue, WA


Description

Are you excited about developing generative AI and foundation models to revolutionize automation, robotics and computer vision? Are you looking for opportunities to build and deploy them on real problems at truly vast scale? At Amazon Fulfillment Technologies and Robotics we are on a mission to build high-performance autonomous systems that perceive and act to further improve our world-class customer experience - at Amazon scale.

This role is on the AFT AI team, which has deep expertise in developing cutting-edge AI solutions at scale and successfully applying them to business problems in the Amazon Fulfillment Network. These solutions typically apply machine learning and computer vision techniques to text, sequences of events, and images or video from existing or new hardware. The team comprises scientists, who develop machine learning and computer vision solutions; analysts, who evaluate the expected business impact of a project and the performance of these solutions; and software engineers, who provide necessary support such as annotation pipelines and machine learning library development.

We are looking for an Applied Scientist with expertise in computer vision. You will work alongside other CV scientists, engineers, product managers, and various stakeholders to deploy vision models at scale across a diverse set of initiatives. If you are a self-motivated individual with a zeal for customer obsession and ownership, and are passionate about applying computer vision to real-world problems, this is the team for you.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer for Optimization, you will focus on research and development related to the optimization of ML models on GPUs or AI accelerators. You will use your judgment in complex scenarios and apply optimization techniques to a wide variety of technical problems. Specifically, you will:

  • Research, prototype, and evaluate state-of-the-art model optimization techniques and algorithms
  • Characterize neural network quality and performance based on research, experimental and performance data, and profiling (a rough profiling sketch follows this list)
  • Incorporate optimizations and model development best practices into the existing ML development lifecycle and workflow
  • Define the technical vision and roadmap for DL model optimizations
  • Write technical reports communicating qualitative and quantitative results to colleagues and customers
  • Develop, deploy, and optimize deep learning (DL) models on various GPU and AI accelerator chipsets/platforms
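As a loose illustration of the profiling step mentioned above (a minimal sketch using an arbitrary torchvision model; it is not tied to any particular GPU or accelerator), one might start by measuring where time is spent in a single forward pass with torch.profiler:

# Minimal sketch (illustrative only): profile a model's forward pass with
# torch.profiler as a starting point for characterizing performance before
# applying optimizations such as quantization or torch.compile.
import torch
import torchvision
from torch.profiler import profile, record_function, ProfilerActivity

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet18(weights=None).eval().to(device)
x = torch.randn(8, 3, 224, 224, device=device)

activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

with torch.no_grad(), profile(activities=activities) as prof:
    with record_function("forward"):
        model(x)

# Top operators by self time: a first clue about what to optimize.
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))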

Apply

Location: Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:
  • Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments (a small sensor-fusion sketch follows this list)
  • Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
  • Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
  • Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
  • Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
  • Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques
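As a small, self-contained illustration of the sensor-fusion building block named in the first item above (a hypothetical sketch with dummy calibration values, not Gatik's actual stack), the snippet below projects LiDAR points into a camera image using known extrinsics and intrinsics:

# Minimal sketch (hypothetical calibration, illustrative only): project LiDAR
# points into a camera image, a basic building block of camera-LiDAR fusion.
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """points_lidar: (N, 3) XYZ in the LiDAR frame.
    T_cam_from_lidar: (4, 4) rigid transform from LiDAR to camera frame.
    K: (3, 3) camera intrinsics.
    Returns (M, 2) pixel coordinates of points in front of the camera."""
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]    # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]             # perspective divide -> pixels

# Hypothetical usage with random points and a dummy calibration.
points = np.random.uniform(-20, 20, size=(1000, 3))
T = np.eye(4)
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
pixels = project_lidar_to_image(points, T, K)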

Please click on the Apply link below to see the full job description and apply.


Apply

Location: San Diego


Description

At Qualcomm, we are transforming the automotive industry with our Snapdragon Digital Chassis and building the next-generation software-defined vehicle (SDV).

Snapdragon Ride is an integral pillar of our Snapdragon Digital Chassis, and since its launch it has gained momentum with a growing number of global automakers and Tier 1 suppliers. Snapdragon Ride aims to address the complexity of autonomous driving and ADAS by leveraging its high-performance, power-efficient SoC, industry-leading artificial intelligence (AI) technologies, and pioneering vision and drive policy stack to deliver a comprehensive, cost- and energy-efficient systems solution.

Enabling safe, comfortable, and affordable autonomous driving includes solving some of the most demanding and challenging technological problems. From centimeter-level localization to multimodal sensor perception, sensor fusion, behavior prediction, maneuver planning, and trajectory planning and control, each one of these functions introduces its own unique challenges to solve, verify, test, and deploy on the road.

We are looking for smart, innovative, and motivated individuals with a strong theoretical background in deep learning, advanced signal processing, probability, and algorithms, and good implementation skills in Python/C++. Job responsibilities include the design and development of novel algorithms for solving complex problems related to behavior prediction for autonomous driving, including trajectory and intention prediction. You will develop novel deep learning models to predict trajectories for road users and optimize them to run in real-time systems, work closely with the sensor fusion and planning teams on defining requirements and KPIs, and work closely with test engineers to develop test plans for validating performance in simulation and real-world testing.
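To make the modeling task concrete (a minimal, hypothetical sketch only; production systems additionally condition on maps, agent interactions, and multimodal futures), a bare-bones trajectory predictor can be an encoder-decoder that maps an agent's past 2D positions to a fixed horizon of future positions:

# Minimal sketch (illustrative only): an LSTM encoder with a linear decoder
# that maps past (x, y) positions to a predicted future trajectory.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, hidden_dim=64, future_steps=30):
        super().__init__()
        self.future_steps = future_steps
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, future_steps * 2)

    def forward(self, past_xy):                # past_xy: (batch, T_past, 2)
        _, (h, _) = self.encoder(past_xy)      # h: (1, batch, hidden_dim)
        out = self.decoder(h[-1])              # (batch, future_steps * 2)
        return out.view(-1, self.future_steps, 2)

# Hypothetical usage: 20 observed steps in, 30 predicted steps out.
model = TrajectoryPredictor()
past = torch.randn(4, 20, 2)
future = model(past)                           # shape (4, 30, 2)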

Minimum Qualifications:
  • Bachelor's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 6+ years of Systems Engineering or related work experience; OR
  • Master's degree in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 5+ years of Systems Engineering or related work experience; OR
  • PhD in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field and 4+ years of Systems Engineering or related work experience.

Preferred Qualifications:
  • PhD plus 2 years of industry experience in behavior and trajectory prediction
  • Proficiency with a variety of deep learning models, such as CNNs, Transformers, RNNs, LSTMs, VAEs, graph CNNs, etc.
  • Experience working with NLP deep learning networks
  • Proficiency with state-of-the-art machine learning tools (PyTorch, TensorFlow)
  • 3+ years of experience with a programming language such as C, C++, or Python
  • 3+ years of Systems Engineering or related work experience in behavior and trajectory prediction
  • Experience working with, modifying, and creating advanced algorithms
  • An analytical and scientific mindset, with the ability to solve complex problems
  • Experience in autonomous driving, robotics, or XR/AR/VR
  • Experience with robust software design for safety-critical systems
  • Excellent written and verbal communication skills and the ability to work with a cross-functional team


Apply

Location: Natick, MA, United States


The Company: Cognex is a global leader in the exciting and growing field of machine vision.

The Team: Vision Algorithms, Advanced Vision Technology

This position is in the Vision Algorithms team of the Advanced Vision Technology group, which is responsible for designing and developing the most sophisticated machine vision tools in the world. We combine custom hardware, specialized lighting, optics, and world-class vision algorithms to create software systems that are used to analyze imagery (intensity, color, density, Z-data, ID barcodes, etc.), to detect, identify, and localize objects, to make measurements, to inspect for defects, and to read encoded data. Technology development is critical to the overall business: it expands areas of application, improves performance, discovers new algorithms, and makes use of new hardware and processing power. Engineers in this group typically have experience with image analysis, machine vision, or signal processing.

Job Summary: The Vision Algorithms team is looking for a well-rounded, intelligent, creative, and motivated summer or fall intern with a passion for results! You will work with our senior engineers and technical leads on projects that advance our software development infrastructure and enhance our key technologies and customer experience. You will get mentorship on tackling technical challenges and opportunities to build a solid foundation for your career in Software Engineering, or Computer Vision and Artificial Intelligence.

Essential Functions:
  • Prototype and develop vision (2D and ID) applications on top of Cognex products and technology
  • Build internal tools or automated tests that can be used in software development or testing
  • Understand our products and contribute to creating optimal solutions for customer applications in the automation industry
  • High-energy, motivated learner; creative and looking to work hard for a fast-moving company
  • Strong analytical and problem-solving skills
  • Strong programming skills in both C/C++ and Python are required
  • A solid understanding of machine learning (ML) fundamentals and experience with ML frameworks like TensorFlow or PyTorch are required
  • Demonstrated projects or internships in the AI/ML domain during academic or professional tenure are highly desirable
  • Experience with embedded systems, Linux systems, vision/image processing, and optics is valued
  • A background in 2D vision, 3D camera calibration, and multi-camera systems is preferred

Minimum education and work experience required: Pursuing an MS or PhD from a top engineering school in EE, CS, or an equivalent field.

If you would like to meet the hiring manager at CVPR to discuss this opportunity, please email ahmed.elbarkouky@cognex.com


Apply

Location: Sunnyvale, CA


Description

Are you fueled by a passion for computer vision, machine learning, and AI, and eager to leverage your skills to enrich the lives of millions across the globe? Join the Ring AI team, where we're not just offering a job, but an opportunity to revolutionize safety and convenience in our neighborhoods through cutting-edge innovation.

You will be part of a dynamic team dedicated to pushing the boundaries of computer vision, machine learning and AI to deliver an unparalleled user experience for our neighbors. This position presents an exceptional opportunity for you to pioneer and innovate in AI, making a profound impact on millions of customers worldwide. You will partner with world-class AI scientists, engineers, product managers and other experts to develop industry-leading AI algorithms and systems for a diverse array of Ring and Blink products, enhancing the lives of millions of customers globally. Join us in shaping the future of AI innovation at Ring and Blink, where exciting challenges await!


Apply

Location: Redmond, Washington, United States


Overview

Are you interested in developing and optimizing deep learning systems? Are you interested in designing novel technology to accelerate training and serving of cutting-edge models and applications? Do you want to scale large Artificial Intelligence models to their limits on massive supercomputers? Are you interested in being part of an exciting open-source library for deep learning systems? The DeepSpeed team is hiring!

Microsoft's DeepSpeed is an open-source library built on the PyTorch (machine learning framework) ecosystem that combines numerous research innovations and technology advancements to make deep learning efficient and easier to use. DeepSpeed can parallelize across thousands of GPUs and train models with trillions of parameters. Our OSS (Open Source Software) has powered many advanced models like MT-530B and BLOOM, and it supports unprecedented scale and speed for both training and inference.
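For readers unfamiliar with the library, a minimal training-loop sketch (the model and config values here are illustrative, not the team's production setup) shows how deepspeed.initialize wraps a PyTorch model so that ZeRO sharding, mixed precision, and distributed communication are handled by the returned engine:

# Minimal sketch (illustrative config): wrap a toy PyTorch model with DeepSpeed.
# Typically launched with the `deepspeed` launcher across one or more GPUs.
import torch
import torch.nn as nn
import deepspeed

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},   # shard optimizer state and gradients
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config)

x = torch.randn(8, 1024, device=engine.device, dtype=torch.half)
loss = engine(x).float().pow(2).mean()   # dummy loss for illustration
engine.backward(loss)                    # DeepSpeed handles scaling/communication
engine.step()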

The DeepSpeed team is also part of the larger Microsoft AI at Scale initiative, which is pioneering the next-generation AI capabilities that are scaled across the company’s products and AI platforms.

The DeepSpeed team is looking for a Senior Researcher in Redmond, WA with a passion for innovation and for building high-quality systems that will make a significant impact inside and outside of Microsoft. Our team is highly collaborative, innovative, and end-user obsessed. We are looking for candidates with systems skills who are passionate about driving innovations to improve the efficiency and effectiveness of deep learning systems. We value creativity, agility, accountability, and a desire to learn new technologies.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities
  • Excels in one or more subareas and gains expertise in a broad area of research.
  • Identifies and articulates problems in an area of research that are academically novel and may directly or indirectly impact business opportunities.
  • Collaborates with other relevant researchers or research groups to contribute to or advance a research agenda.
  • Researches and develops an understanding of the state-of-the-art insights, tools, technologies, or methods being used in the research community.
  • Expands collaborative relationships with relevant product and business groups inside or outside of Microsoft and provides expertise or technology to them.


Apply

We are seeking a highly motivated candidate for a fully funded PhD position to work in 3D computer graphics and 3D computer vision.

The successful candidate will join the 3D Graphics and Vision research group led by Prof. Binh-Son Hua at the School of Computer Science and Statistics, Trinity College Dublin, Ireland to work on topics related to generative AI in the 3D domain.

The School of Computer Science and Statistics at Trinity College Dublin is a collegiate, friendly, and research-intensive centre for academic study and research excellence. The School has been ranked #1 in Ireland, top 25 in Europe, and top 100 worldwide (QS Subject Rankings 2018, 2019, 2020, 2021).

The PhD student is expected to conduct fundamental research and publish in top-tier computer vision and computer graphics conferences (CVPR, ECCV, ICCV, SIGGRAPH) and journals (TPAMI, IJCV).

The start date of the position is September 01, 2024. The position is fully funded for 4 years by Science Foundation Ireland.

The successful candidate will require the following skills and knowledge:
  • A Bachelor's or Master's degree in Computer Science or a related field;
  • Strong competence in computer graphics and computer vision;
  • Solid experience in academic research and publications is an advantage;
  • Additional background in math, statistics, or physics is an advantage;
  • Hands-on experience in training deep models;
  • Hands-on experience in computer graphics and computer vision application development, e.g., OpenGL, OpenCV, CUDA, Blender;
  • Strong programming skills in C++ and Python, and the capability to implement systems based on open-source software.

Applicants should provide the following information:
  • A comprehensive CV;
  • Academic transcripts of Bachelor's and Master's degrees;
  • The names and contact details of two referees.

Interested candidates can email Binh-Son Hua (https://sonhua.github.io) for an informal discussion of the position. Applications will be reviewed on a rolling basis until the position has been filled.


Apply

Location: Redwood City, CA; or Remote, US


We help make autonomous technologies more efficient, safer, and accessible.

Helm.ai builds AI software for autonomous driving and robotics. Our "Deep Teaching" methodology is uniquely data and capital efficient, allowing us to surpass traditional approaches. Our unsupervised learning software can train neural networks without the need for human annotation or simulation and is hardware-agnostic. We work with some of the world's largest automotive manufacturers and we've raised over $100M from Honda, Goodyear Ventures, Mando, and others to help us scale.

Our team is made up of people with a diverse set of experiences in software and academia. We work together towards one common goal: to integrate the software you'll help us build into hundreds of millions of vehicles.

We offer:
  • Competitive health insurance options
  • 401(k) plan management
  • Remote-friendly and flexible team culture
  • Free lunch and a fully stocked kitchen in our South Bay office
  • Additional perks: monthly wellness stipend, office setup allowance, company retreats, and more to come as we scale
  • The opportunity to work on one of the most interesting, impactful problems of the decade

Visit our website to apply for a position.


Apply

Location: Redmond, Washington, United States


Overview

We are seeking a Principal Research Engineer to join our organization and help improve the steerability and control of Large Language Models (LLMs) and other AI systems. Our team currently develops Guidance, a fully open-source project that enables developers to control language models more precisely and efficiently with constrained decoding.

As a Principal Research Engineer, you will play a crucial role in advancing the frontier of constrained decoding and imagining new application programming interfaces (APIs) for language models. If you’re excited about links between formal grammars and generative AI, deeply understanding and optimizing LLM inference, enabling more responsible AI without fine-tuning and RLHF, and/or exploring fundamental changes to the “text-in, text-out” API, we’d love to hear from you. Our team offers a vibrant environment for cutting-edge, multidisciplinary research. We have a long track record of open-source code and open publication policies, and you’ll have the opportunity to collaborate with world-leading experts across Microsoft and top academic institutions across the world.
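As a toy illustration of the core idea behind constrained decoding (a hedged sketch written directly against Hugging Face Transformers; it is not how Guidance is implemented), one can mask next-token logits so that only tokens from an allowed set can be generated:

# Minimal sketch (illustrative only): constrain the next generated token to an
# allowed set by masking the logits before sampling/argmax.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "The sentiment of this review is"
allowed = [" positive", " negative", " neutral"]           # constraint set
allowed_ids = [tokenizer.encode(w)[0] for w in allowed]    # first token of each option

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]                 # next-token logits

mask = torch.full_like(logits, float("-inf"))
mask[allowed_ids] = 0.0                                    # only allowed tokens survive
next_id = int(torch.argmax(logits + mask))
print(prompt + tokenizer.decode(next_id))

Grammar-constrained generation generalizes this idea: at each step, the set of allowed tokens is derived from a formal grammar (e.g., via an Earley parser) rather than from a fixed list.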

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities
  • Develop and implement new constrained decoding research techniques for increasing LLM inference quality and/or efficiency. Example areas of interest include speculative execution, new decoding strategies (e.g., extensions to beam search), “classifier in the loop” decoding for responsible AI, improving AI planning, and explorations of attention-masking-based constraints.
  • Re-imagine the use and construction of context-free grammars (CFGs) and beyond to fit generative AI. Examples of improvements here include better tools for constructing formal grammars, extensions to Earley parsing, and efficient batch processing for constrained generation. Consideration of how these techniques are presented to developers -- who may not be well versed in grammars and constrained generation -- in an intuitive, idiomatic programming syntax is also top of mind.
  • Design principled evaluation frameworks and benchmarks for measuring the effects of constrained decoding on a model. Areas of interest to study carefully include efficiency (token throughput and latency), generation quality, and impacts of constrained decoding on AI safety.
  • Publish your research in top AI conferences and contribute your research advances to the Guidance open-source project.
  • Other: Embody our Culture and Values.


Apply

ASML US, including its affiliates and subsidiaries, brings together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market, and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, Netherlands, and we have 18 office locations around the United States, including main offices in Chandler, Arizona; San Jose and San Diego, California; Wilton, Connecticut; and Hillsboro, Oregon.

ASML’s Optical Sensing (Wafer Alignment Sensor and YieldStar) department in Wilton, Connecticut is seeking a Design Engineer to support and develop complex optical/photonic sensor systems used within ASML’s photolithography tools. These systems typically include light sources, detectors, optical/electro-optical components, fiber optics, electronics and signal processing software functioning in close collaboration with the rest of the lithography system. As a design engineer, you will design, develop, build and integrate optical sensor systems.

Role and Responsibilities
  • Use general physics, optics, and software knowledge, together with an understanding of the sensor systems and tools, to develop optical alignment sensors in lithography machines
  • Hands-on skills in building optical systems (e.g., imaging, testing, alignment, detector systems)
  • Strong data analysis skills to evaluate sensor performance and troubleshoot

Leadership:
  • Lead activities for determining problem root cause: execute complex tests, gather data, and effectively communicate results at different levels of abstraction (from technical colleagues to high-level managers)
  • Lead engineers in various competencies (e.g., software, electronics, equipment engineering, manufacturing engineering) in support of feature delivery for alignment sensors

Problem Solving:
  • Troubleshoot complex technical problems
  • Develop/debug data signal processing algorithms
  • Develop and execute test plans to determine problem root cause

Communications/Teamwork:
  • Draw conclusions based on input from different stakeholders
  • Clearly communicate information at different levels of abstraction

Programming:
  • Implement data analysis techniques as functioning MATLAB code
  • Optimization skills
  • GUI-building experience
  • Familiarity with LabVIEW and Python

Some travel (up to 10%) to Europe, Asia, and within the US can be expected.


Apply

Location: Redmond, Washington, United States


Overview

We are seeking highly skilled and passionate research scientists to join Responsible & Open AI Research (ROAR) in Azure Cognitive Services in Redmond, WA.

As a Principal Research Scientist, you will play a key role in advancing Responsible AI approaches to ensure safe releases of GenAI models such as GPT-4o, DALL-E, Sora, and beyond, as well as to expand and enhance the capability of Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

Conduct cutting-edge, deployment-driven research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of textual and multimodal AI risks. Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.

Enable the safe release of multimodal models from OpenAI in the Azure OpenAI Service, and expand and enhance the Azure AI Content Safety Service with new detection/mitigation technologies for text and multimodal content. Develop innovative approaches to address AI safety challenges for diverse customer scenarios.

Review business and product requirements and incorporate state-of-the-art research to formulate plans that will meet business goals. Identify gaps and determine which tools, technologies, and methods to incorporate to ensure quality and scientific rigor. Proactively provide mentorship and coaching to less experienced and mid-level team members.


Apply

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

The AI Platform team at Figma is working on an exciting mission of expanding the frontiers of AI for creativity and developing magical experiences in Figma products. This involves making existing features like search smarter, and incorporating new features using cutting-edge Generative AI and deep learning techniques. We’re looking for engineers with a background in Machine Learning and Artificial Intelligence to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • Drive fundamental and applied research in ML/AI using Generative AI, deep learning, and classical machine learning, with Figma product use cases in mind.
  • Formulate and implement new modeling approaches both to improve the effectiveness of Figma’s current models as well as enable the launch of entirely new AI-powered product features.
  • Work in concert with other ML researchers, as well as product and infrastructure engineers to productionize new models and systems to power features in Figma’s design and collaboration tool.
  • Expand the boundaries of what is possible with the current technology set and experiment with novel ideas.
  • Publish scientific work on problems relevant to Figma in leading conferences like ICML, NeurIPS, CVPR etc.

We'd love to hear from you if you have:

  • Recently obtained or are in the process of obtaining a PhD in AI, Computer Science, or a related field. The degree must be completed prior to starting at Figma.
  • Demonstrated expertise in machine learning with a publication record in relevant conferences, or a track record in applying machine learning techniques to products.
  • Experience in Python and machine learning frameworks (such as PyTorch, TensorFlow or JAX).
  • Experience building systems based on deep learning, natural language processing, computer vision, and/or generative models.
  • Experience solving sophisticated problems and comparing alternative solutions, trade-offs, and diverse points of view to determine a path forward.
  • Experience communicating and working across functions to drive solutions.

While not required, it’s an added plus if you also have:

  • Experience working in industry on relevant AI projects through internships or past full time work.
  • Publications on recent advances in AI, such as large language models (LLMs), vision-language models (VLMs), or diffusion models.

Apply