



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category, location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Redmond, Washington, United States


Overview We are seeking a highly skilled and passionate Research Scientist to join our Responsible & OpenAI Research (ROAR) team in Azure Cognitive Services.

As a Research Scientist, you will play a key role in advancing the field of Responsible Artificial Intelligence (AI) to ensure safe releases of the rapidly advancing AI technologies, such as GPT-4, GPT-4V, DALL-E 3 and beyond, as well as to expand and enhance our standalone Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Conduct cutting-edge research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of AI risks.
  • Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.
  • Contribute to the development of Responsible AI policies, guidelines, and best practices, and ensure the practical implementation of these guidelines within various AI technology stacks across Microsoft, promoting a consistent approach to Responsible AI.
  • Enable the safe release of new Azure OpenAI Service features, and expand and enhance the Azure AI Content Safety Service with new detection technologies.
  • Develop innovative approaches to address AI safety challenges for diverse customer scenarios.
  • Other: Embody our Culture and Values.


Apply

The Autonomy Software Metrics team is responsible for providing engineers and leadership at Zoox with tools to evaluate the behavior of Zoox’s autonomy stack using simulation. The team collaborates with experts across the organization to ensure a high safety bar, great customer experience, and rapid feedback to developers. The metrics team is responsible for evaluating the complete end-to-end customer experience through simulation, evaluating factors that impact safety, comfort, legality, road citizenship, progress, and more. You’ll be part of a passionate team making transportation safer, smarter, and more sustainable. This role gives you high visibility within the company and is critical for successfully launching our autonomous driving software.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Research Engineer, you will work collaboratively to improve our models and iterate on novel research directions, sometimes in just days. We're looking for talented engineers who would enjoy applying their skills to deeply complex and novel AI problems. Specifically, you will:

  • Apply and extend Helm's proprietary algorithmic toolkit for unsupervised learning and perception problems at scale
  • Carefully execute the development and maintenance of tools used for deep learning experiments designed to provide new functionality for customers or address relevant corner cases in the system as a whole
  • Work closely with software and autonomous vehicle engineers to deploy algorithms on internal and customer vehicle platforms

Apply

Location Sunnyvale, CA; Bellevue, WA; Seattle, WA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems.

As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.


Apply

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together towards one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As a Machine Learning Engineer (MLE), you will collaborate with researchers to perform research operations using existing infrastructure. You will use your judgment in complex scenarios and help apply standard techniques to various technical problems. Specifically, you will:

  • Characterize neural network quality, failure modes, and edge cases based on research data
  • Maintain awareness of current trends in relevant areas of research and technology
  • Coordinate with researchers and accurately convey the status of experiments
  • Manage a large number of concurrent experiments and make accurate time estimates for deadlines
  • Review experimental results and suggest theoretical or process improvements for future iterations
  • Write technical reports for external parties presenting qualitative and quantitative results

Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state of the art in generative computer vision technologies, with a focus on creating highly realistic digital faces, bodies, and avatars.

Strive to set new standards in the realism of 3D digital human appearance, movement, and personality, ensuring that generated content closely resembles real-life scenarios.

Implement techniques to achieve high-quality results in zero-shot or few-shot settings, as well as customized avatars for different use cases while maintaining speed and accuracy.

Develop innovative solutions to enable comprehensive customization of video content, including the creation of digital people, modifying scenes, and manipulating actions and speech within videos.

Preferred Qualifications:

PhD in computer science (or a related field) and/or 5+ years of industry experience.

Strong academic background with a focus on computer vision and transformers, specializing in NeRFs, Gaussian Splatting, diffusion models, GANs, or related areas.

Publication Record: Highly relevant publication history, with a focus on generating or manipulating realistic digital faces, bodies, expressions, body movements, etc. Ideal candidates will have served as the primary author on these publications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for multimodal tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits:

Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Engineering at Pinterest

Our Engineering team is at the core of bringing our platform to life for Pinners worldwide. Working collaboratively and cross-functionally with teams across the company, our engineers tackle growth-driving challenges to build an inspired and inclusive platform for all.

Our future of work is PinFlex

At Pinterest, we know that our best work happens when we feel most inspired. PinFlex promotes flexibility while prioritizing in-person moments to celebrate our culture and drive inspiration. We know that some work can be performed anywhere, and we encourage employees to work where they choose within their country or region, whether that’s at home, at a Pinterest office, or another virtual location. We believe that there is value in a distributed workforce but there are essential touch points for in-person collaboration that will create a big impact for our business and for development and connection.

Stop by booth #2100 to learn more about our open roles and our in-house generative AI foundation model that leverages the full power of our visual search and taste graph! Our engineers and recruiters are excited to connect with you!


Apply

London


Who are we?

Our team was the first in the world to drive autonomous vehicles on public roads using end-to-end deep learning, computer vision, and reinforcement learning. Leveraging our multinational, world-class team of researchers and engineers, we use data to learn more intelligent algorithms that bring autonomy to everyone, everywhere. We aim to be the future of self-driving cars, learning from experience and data.

Where you’ll have an impact

We are currently looking for people with research expertise in AI applied to autonomous driving or a similar robotics or decision-making domain, including but not limited to the following areas:

  • Foundation models for robotics
  • Model-free and model-based reinforcement learning
  • Offline reinforcement learning
  • Large language models
  • Planning with learned models, model predictive control and tree search
  • Imitation learning, inverse reinforcement learning and causal inference
  • Learned agent models: behavioral and physical models of cars, people, and other dynamic agents

You'll be working on some of the world's hardest problems, and able to attack them in new ways. You'll be a key member of our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

  • How to leverage our large, rich, and diverse sources of real-world driving data
  • How to architect our models to best employ the latest advances in foundation models, transformers, world models, etc.
  • Which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
  • How to leverage simulation for controlled experimental insight, training data augmentation, and re-simulation
  • How to scale models efficiently across data, model size, and compute, while maintaining efficient deployment on the car

You will also have the potential to contribute to academic publications at top-tier conferences like NeurIPS, CVPR, ICRA, ICLR, CoRL, etc., working in a world-class team to achieve this.

What you’ll bring to Wayve

  • Thorough knowledge of, and 5+ years' applied experience in, AI research, computer vision, deep learning, reinforcement learning, or robotics
  • Ability to deliver high-quality code, and familiarity with deep learning frameworks (Python and PyTorch preferred)
  • Experience leading a research agenda aligned with larger goals
  • Industrial and/or academic experience in deep learning, software engineering, automotive, or robotics
  • Experience working with training data, metrics, visualisation tools, and in-depth analysis of results
  • Ability to understand, author, and critique cutting-edge research papers
  • Familiarity with code review, C++, Linux, and Git is a plus
  • A PhD in a relevant area and/or a track record of delivering value through machine learning is a big plus

What we offer you

  • Attractive compensation with salary and equity
  • Immersion in a team of world-class researchers, engineers, and entrepreneurs
  • A unique position to shape the future of autonomy and tackle the biggest challenge of our time
  • Bespoke learning and development opportunities
  • Relocation support with visa sponsorship
  • Flexible working hours - we trust you to do your job well, at times that suit you
  • Benefits such as an onsite chef, workplace nursery scheme, private health insurance, therapy, daily yoga, onsite bar, large social budgets, unlimited L&D requests, enhanced parental leave, and more!


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IoT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proof-of-concepts in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and has converted leading research into best-in-class, user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second after performing our full-stack AI optimizations on the model.

Role responsibilities can include both applied and fundamental research in the field of machine learning, with a development focus in one or more of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g. large language models, deep generative models (VAE, Normalizing-Flow, ARM, etc), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimizations, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling-based, back-propagation-based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling, with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design.

  • Design, develop, and test software for machine learning frameworks that optimize models to run efficiently on edge devices. Candidates are expected to have a strong interest in and deep passion for making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Research, design, develop, enhance, and implement different components of machine learning compiler for HW Accelerators.

  • Design, implement and train DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).


Apply

Location Seattle, WA


Description Futures Design is the advanced concept design and incubation team within Amazon’s Device and Services Design Group (DDG). We are responsible for exploring and defining think (very) big opportunities globally and locally — so that we can better understand how new products and services might enrich the lives of our customers and so that product teams and leaders can align on where we're going and why we're going there. We focus on a 3–10+ year time frame, with the runway to invent and design category-defining products and transformational customer experiences. Working with Amazon business and technology partners, we use research, design, and prototyping to guide early product development, bring greater clarity to engineering goals, and develop a UX-grounded point of view.

We're looking for a Principal Design Technologist to join the growing DDG Futures Design team. You thrive in ambiguity and paradigm shifts, remaking assumptions of how customers engage, devices operate, and builders create. You apply deep expertise that spans design, technology, and product, grounding state-of-the-art emerging technologies through storytelling and a maker mindset. You adapt emerging technology trends to enduring customer problems through customer empathy, code, and iterative experimentation.

You will wear multiple hats to quickly assimilate customer problems, convert them to hypotheses, and test them using efficient technologies and design methods to build stakeholder buy-in. You’ll help your peers unlock challenging scenarios and mature the design studio’s ability to deliver design at scale across a breadth of devices and interaction modalities. You will work around limitations and push capabilities through your work. Your curiosity will inspire those around you and facilitate team growth, while your hands-on, collaborative nature will build trust with your peers and studio partners.


Apply

Open to Seattle, WA; Costa Mesa, CA; or Washington, DC

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

WHY WE’RE HERE The Mission Software Engineering team builds, deploys, integrates, extends, and scales Anduril's software to deliver mission-critical capabilities to our customers. As the software engineers closest to Anduril customers and end-users, Mission Software Engineers solve technical challenges of operational scenarios while owning the end-to-end delivery of winning capabilities such as Counter Intrusion, Joint All Domain Command & Control, and Counter-Unmanned Aircraft Systems.

As a Mission Software Engineer, you will solve a wide variety of problems involving networking, autonomy, systems integration, robotics, and more, while making pragmatic engineering tradeoffs along the way. Your efforts will ensure that Anduril products seamlessly work together to achieve a variety of critical outcomes. Above all, Mission Software Engineers are driven by a “Whatever It Takes” mindset—executing in an expedient, scalable, and pragmatic way while keeping the mission top-of-mind and making sound engineering decisions to deliver successful outcomes correctly, on-time, and with high quality.

WHAT YOU’LL DO
  • Own the software solutions that are deployed to customers
  • Write code to improve products and scale the mission capability to more customers
  • Collaborate across multiple teams to plan, build, and test complex functionality
  • Create and analyze metrics that are leveraged for debugging and monitoring
  • Triage issues, root-cause failures, and coordinate next steps
  • Partner with end-users to turn needs into features while balancing user experience with engineering constraints
  • Travel up to 30% of the time to build, test, and deploy capabilities in the real world

CORE REQUIREMENTS
  • Strong engineering background from industry or school, ideally in fields such as Computer Science, Software Engineering, Mathematics, or Physics
  • At least 2-5+ years working with a variety of programming languages such as Java, Python, Rust, Go, JavaScript, etc. (we encourage all levels to apply)
  • Experience building software solutions involving significant amounts of data processing and analysis
  • Ability to quickly understand and navigate complex systems and established code bases
  • A desire to work on critical software that has a real-world impact
  • Must be eligible to obtain and maintain a U.S. TS clearance

DESIRED REQUIREMENTS
  • Strong background in Physics, Mathematics, and/or Motion Planning to inform modeling & simulation (M&S) and physical systems
  • Developing and testing multi-agent autonomous systems and deploying them in real-world environments
  • Feature and algorithm development with an understanding of behavior trees
  • Developing software/hardware for flight systems and safety-critical functionality
  • Distributed communication networks and message standards
  • Knowledge of military systems and operational tactics

WHAT WE VALUE IN MISSION SOFTWARE
Customer Facing - Mission Software Engineers are the software engineers closest to Anduril customers, end-users, and the technical challenges of operational scenarios.
Mission First - Above all, MSEs execute their mission in an expedient, scalable, and pragmatic way. They keep the mission top-of-mind.


Apply

We are looking for a Research Engineer with a passion for working on cutting-edge problems that can help us create highly realistic, emotional, and life-like synthetic humans through text-to-video.

Our aim is to make video content creation available to all - not just studio productions!

🧑🏼‍🔬 You will be someone who loves to code and build working systems. You are used to working in a fast-paced start-up environment. You will have experience with the software development life cycle, from ideation through implementation to testing and release. You will also have extensive knowledge and experience in the computer vision domain, as well as experience in the generative AI space (GANs, diffusion models, and the like!).

👩‍💼 You will join a group of more than 50 engineers in the R&D department and will have the opportunity to collaborate with multiple research teams across diverse areas. Our R&D research is guided by our co-founders, Prof. Lourdes Agapito and Prof. Matthias Niessner, and our Director of Science, Prof. Vittorio Ferrari.

If you know and love DALL-E, MUSE, IMAGEN, MAKE-A-VIDEO, STABLE DIFFUSION, and more - and you love large data, large compute, and writing clean code - then we would love to talk to you.


Apply

Redmond, Washington, United States


Overview Do you want to shape the future of Artificial Intelligence (AI)? Do you have a passion for solving real-world problems with cutting-edge technologies?

The Human-AI eXperiences (HAX) team at Microsoft Research AI Frontiers is looking for exceptional candidates to advance the state-of-the-art in human-AI collaboration with a focus on leveraging the capabilities of people and foundation model-based agents and systems to solve real problems.

As a Principal Researcher on our team, you will:

  • Work on challenging and impactful projects in areas such as human-AI collaboration and teaming, foundation model-based systems, multi-agent systems, next-generation AI experiences, and responsible AI.
  • Apply your skills and knowledge to build practical solutions to solve real problems and impact the world.
  • Collaborate with other researchers and engineers across the company to amplify your impact and grow your career in a supportive and stimulating environment.

We are looking for candidates who have:

  • A drive for real-world impact, demonstrated by a passion to build and release prototypes or OSS frameworks.
  • A demonstrated track record of influential projects and publications in relevant fields and top-tier conferences and journals (such as NeurIPS, ICML, AAAI, CHI, UIST).
  • Demonstrated interdisciplinary experience in applied machine learning, natural language processing, and human-computer interaction, including experience conducting offline and online evaluations, user studies, and user-centered research.
  • Exceptional coding experience and hands-on experience working with large foundation models and related frameworks and toolkits. Familiarity with LLMs such as the OpenAI Generative Pre-trained Transformer (GPT) models, LLaMA, etc., model finetuning techniques (LoRA, QLoRA), prompting techniques (Chain of Thought, ReAct, etc.), and model evaluation is beneficial.
  • A passion for innovation and creativity, evidenced by the ability to generate novel ideas, approaches, and solutions.
  • A team-player mindset, characterized by effective communication, collaboration, and feedback skills.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Perform cutting-edge research to solve real-world problems.
  • Collaborate closely with other researchers, engineers, and product group partners on high-impact projects that deliver real-world impact to people and society.
  • Embody our culture and values.


Apply

Location Multiple Locations


Description Today, more intelligence is moving to end devices, and mobile is becoming the pervasive AI platform. Building on the smartphone foundation and the scale of mobile, Qualcomm envisions making AI ubiquitous—expanding beyond mobile and powering other end devices, machines, vehicles, and things. We are inventing, developing, and commercializing power-efficient on-device AI, edge cloud AI, and 5G to make this a reality.

Job Purpose & Responsibilities As a member of Qualcomm’s ML Systems Team, you will participate in two activities:
  • Development and evolution of ML/AI compilers (production and exploratory versions) for efficient mappings of ML/AI algorithms onto existing and future HW
  • Analysis of ML/AI algorithms and workloads to drive future features in Qualcomm’s ML HW/SW offerings

Key Responsibilities:
  • Contributing to the development and evolution of ML/AI compilers within Qualcomm
  • Defining and implementing algorithms for mapping ML/AI workloads to Qualcomm HW
  • Understanding trends in ML network design, through customer engagements and the latest academic research, and how these affect both SW and HW design
  • Creation of performance-driven simulation components (using C++, Python) for analysis and design of high-performance HW/SW algorithms on future SoCs
  • Exploration and analysis of performance/area/power trade-offs for future HW and SW ML algorithms
  • Pre-silicon prediction of performance for various ML algorithms
  • Running, debugging, and analyzing performance simulations to suggest enhancements to Qualcomm hardware and software that tackle compute and system memory-related bottlenecks

Successful applicants will work in cross-site, cross-functional teams.

Requirements:
  • Demonstrated ability to learn, think, and adapt in a fast-changing environment
  • Detail-oriented with strong problem-solving, analytical, and debugging skills
  • Strong communication skills (written and verbal)
  • Strong background in algorithm development and performance analysis is essential

The following experiences would be significant assets:
  • Strong object-oriented design principles
  • Strong knowledge of C++
  • Strong knowledge of Python
  • Experience in compiler design and development
  • Knowledge of network model formats/platforms (e.g. PyTorch, TensorFlow, ONNX)
  • On-silicon debug skills for high-performance compute algorithms
  • Knowledge of algorithms and data structures
  • Knowledge of software development processes (revision control, CI/CD, etc.)
  • Familiarity with tools such as git, Jenkins, Docker, clang/MSVC
  • Knowledge of computer architecture, digital circuits, and event-driven transactional models/simulators


Apply

Overview We are seeking an exceptionally talented Postdoctoral Research Fellow to join our interdisciplinary team at the forefront of machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience. This position is hosted by the Stanford Translational AI (STAI) in Medicine and Mental Health Lab (PI: Dr. Ehsan Adeli, https://stanford.edu/~eadeli), as part of the Department of Psychiatry and Behavioral Sciences at Stanford University. The postdoc will have the opportunity to directly collaborate with researchers and PIs within the Computational Neuroscience Lab (CNS Lab) in the School of Medicine and the Stanford Vision and Learning (SVL) lab in the Computer Science Department. These dynamic research groups are renowned for groundbreaking contributions to artificial intelligence and medical sciences.

Project Description The successful candidate will have the opportunity to work on cutting-edge projects aimed at building large-scale models for neuroimaging and neuroscience through innovative AI technologies and self-supervised learning methods. The postdoc will contribute to building a large-scale foundation model from brain MRIs and other modalities of data (e.g., genetics, videos, text). The intended downstream applications include understanding the brain development process during the early ages of life, decoding brain aging mechanisms, and identifying the pathology of different neurodegenerative or neuropsychiatric disorders. We use several public and private datasets including but not limited to the Human Connectome Project, UK Biobank, Alzheimer's Disease Neuroimaging Initiative (ADNI), Parkinson’s Progression Marker Initiative (PPMI), Open Access Series of Imaging Studies (OASIS), Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA), Adolescent Brain Cognitive Development (ABCD), and OpenNeuro.

Key Responsibilities
  • Conduct research in machine learning, computer vision, and medical image analysis, with applications in neuroimaging and neuroscience.
  • Develop and implement advanced algorithms for analyzing medical images and other modalities of medical data.
  • Develop novel generative models.
  • Develop large-scale foundation models.
  • Collaborate with a team of researchers and clinicians to design and execute studies that advance our understanding of neurological disorders.
  • Mentor graduate students (Ph.D. and MSc).
  • Publish findings in top-tier journals and conferences.
  • Contribute to grant writing and proposal development for securing research funding.

Qualifications
  • PhD in Computer Science, Electrical Engineering, Neuroscience, or a related field.
  • Proven track record of publications in high-impact journals and conferences including ICML, NeurIPS, ICLR, CVPR, ICCV, ECCV, MICCAI, Nature, and JAMA.
  • Strong background in machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience.
  • Excellent programming skills in Python, C++, or similar languages, and experience with ML frameworks such as TensorFlow or PyTorch.
  • Ability to work independently and collaboratively in an interdisciplinary team.
  • Excellent communication skills, both written and verbal.

Benefits
  • Competitive salary and benefits package.
  • Access to state-of-the-art facilities and computational resources.
  • Opportunities for professional development and collaboration with leading experts in the field.
  • Participation in international conferences and workshops.

Working at Stanford University offers access to world-class research facilities and a vibrant intellectual community. The university provides numerous opportunities for interdisciplinary collaboration, professional development, and cutting-edge innovation. Additionally, being part of Stanford opens doors to a global network of leading experts and industry partners, enhancing both career growth and research impact.

Apply For full consideration, send a complete application via this form: https://forms.gle/KPQHPGGeXJcEsD6V6


Apply