



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category and location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Seattle, WA; Costa Mesa, CA; Boston, MA

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

THE ROLE We build Lattice, the foundation for everything we do as a defense technology company. Our engineers are talented and hard-working, motivated to see their work rapidly deployed on the front lines. Our team is not just building an experiment in waiting; we deploy what we build on the Southern border, in Iraq, in Ukraine, and beyond.

We have open roles across Platform Engineering, ranging from core infrastructure to distributed systems, web development, networking and more. We hire self-motivated people, those who hold a higher bar for themselves than anyone else could hold for them. If you love building infrastructure, platform services, or just working in high performing engineering cultures we invite you to apply!

YOU SHOULD APPLY IF YOU:
-Have at least 3 years of experience working with a variety of programming languages such as Rust, Go, C++, Java, Python, JavaScript/TypeScript, etc.
-Have experience working with customers to deliver novel software capabilities
-Want to work on building and integrating model/software/hardware-in-the-loop components by leveraging first- and third-party technologies (related to simulation, data management, compute infrastructure, networking, and more)
-Love building platform and infrastructure tooling that enables other software engineers to scale their output
-Enjoy collaborating with team members and partners in the autonomy domain, and building technologies and processes that enable users to safely and rapidly develop and deploy autonomous systems at scale
-Are a U.S. Person, because of required access to U.S. export-controlled information

Note: The above bullet points describe the ideal candidate. None of us matched all of these at once when we first joined Anduril. We encourage you to apply even if you believe you meet only part of our wish list.

NICE TO HAVE
-You've built or invented something: an app, a website, a game, a startup
-Previous experience working in an engineering setting: a startup (or startup-like environment), engineering school, etc. If you've succeeded in a low-structure, high-autonomy environment, you'll succeed here!
-Professional software development lifecycle experience using tools such as version control, CI/CD systems, etc.
-A deep, demonstrated understanding of how computers and networks work, from a single desktop to a multi-cluster cloud node
-Experience building scalable backend software systems with various data storage and processing requirements
-Experience with industry-standard cloud platforms (AWS, Azure), CI/CD tools, and software infrastructure fundamentals (networking, security, distributed systems)
-Ability to quickly understand and navigate complex systems and established code bases
-Experience implementing robot or autonomous vehicle testing frameworks in a software-in-the-loop or hardware-in-the-loop (HITL) environment
-Experience with modern build and deployment tooling (e.g. NixOS, Terraform)
-Experience designing complex software systems and iterating on designs via a technical design review process
-Familiarity with industry-standard monitoring, logging, and data management tools and best practices
-A bias towards rapid delivery and iteration


Apply

Redmond, Washington, United States


Overview We are seeking a Principal Research Engineer to join our organization and help improve the steerability and control of Large Language Models (LLMs) and other AI systems. Our team currently develops Guidance, a fully open-source project that enables developers to control language models more precisely and efficiently with constrained decoding.

As a Principal Research Engineer, you will play a crucial role in advancing the frontier of constrained decoding and imagining new application programming interfaces (APIs) for language models. If you’re excited about links between formal grammars and generative AI, deeply understanding and optimizing LLM inference, enabling more responsible AI without finetuning and RLHF, and/or exploring fundamental changes to the “text-in, text-out” API, we’d love to hear from you. Our team offers a vibrant environment for cutting-edge, multidisciplinary research. We have a long track record of open-source code and open publication policies, and you’ll have the opportunity to collaborate with world-leading experts across Microsoft and top academic institutions around the world.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities
-Develop and implement new constrained decoding research techniques for increasing LLM inference quality and/or efficiency. Example areas of interest include speculative execution, new decoding strategies (e.g. extensions to beam search), “classifier in the loop” decoding for responsible AI, improving AI planning, and explorations of attention-masking-based constraints.
-Re-imagine the use and construction of context-free grammars (CFGs) and beyond to fit Generative AI. Examples of improvements here include better tools for constructing formal grammars, extensions to Earley parsing, and efficient batch processing for constrained generation. Consideration of how these techniques are presented to developers (who may not be well versed in grammars and constrained generation) in an intuitive, idiomatic programming syntax is also top of mind.
-Design principled evaluation frameworks and benchmarks for measuring the effects of constrained decoding on a model. Some areas of interest to study carefully include efficiency (token throughput and latency), generation quality, and impacts of constrained decoding on AI safety.
-Publish your research in top AI conferences and contribute your research advances to the Guidance open-source project.
-Other: Embody our Culture and Values
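The core mechanism behind this kind of constrained decoding can be sketched in a few lines. This is a minimal illustration, not the Guidance implementation: a toy vocabulary and a toy regular-language constraint ("one or more digits, then a period") stand in for a real tokenizer and context-free grammar. At each step, any token that would take the output outside the constrained language is masked to negative infinity before the argmax, so even a high-scoring token cannot be emitted.

```python
# Sketch of grammar/regex-constrained greedy decoding with a toy vocabulary.
import math
import re

VOCAB = ["0", "1", "2", "cat", ".", " "]

def allowed(prefix: str, token: str) -> bool:
    # prefix+token must remain a prefix of some string in the language \d+ "."
    # Those prefixes are exactly: "" | digits | digits "."
    return re.fullmatch(r"\d*|\d+\.", prefix + token) is not None

def constrained_greedy(logits_per_step):
    out = ""
    for logits in logits_per_step:
        # Mask disallowed tokens to -inf, then take the argmax as usual.
        masked = [lp if allowed(out, t) else -math.inf
                  for t, lp in zip(VOCAB, logits)]
        out += VOCAB[max(range(len(VOCAB)), key=masked.__getitem__)]
        if out.endswith("."):  # constraint fully satisfied: stop
            break
    return out

# "cat" scores highest at every step, but the mask forces digits, then ".".
logits = [
    [0.1, 2.0, 0.1, 5.0, 0.5, 0.0],
    [0.1, 2.0, 0.1, 5.0, 0.5, 0.0],
    [0.1, 0.2, 0.1, 5.0, 3.0, 0.0],
]
print(constrained_greedy(logits))  # -> 11.
```

A production system computes the allowed-token set from a real grammar (e.g. via Earley-style parsing) over the tokenizer's full vocabulary, but the per-step masking idea is the same.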


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent features on the TODAY show and in the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.

Preferred Qualifications:

PhD in computer science or a related field, or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Location Santa Clara, CA


Description Amazon is looking for passionate, talented, and inventive Applied Scientists with a strong machine learning background to help build industry-leading Speech, Vision, and Language technology.

AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Automatic Speech Recognition (ASR), Machine Translation (MT), Natural Language Understanding (NLU), Machine Learning (ML) and Computer Vision (CV).

As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state-of-the-art in human language technology. Your work will directly impact millions of our customers in the form of products and services that make use of speech and language technology. You will gain hands on experience with Amazon’s heterogeneous speech, text, and structured data sources, and large-scale computing resources to accelerate advances in spoken language understanding.

We are hiring in all areas of human language technology: ASR, MT, NLU, text-to-speech (TTS), and Dialog Management, in addition to Computer Vision.


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior of other agents and our own vehicle’s actions, both on-vehicle and for offline applications. To solve these problems we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior, and offline to provide learned models to autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team in the advancement of our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

Location Madrid, ESP


Description At Amazon, we are committed to being the Earth’s most customer-centric company. The International Technology group (InTech) owns the enhancement and delivery of Amazon’s cutting-edge engineering to all the varied customers and cultures of the world. We do this through a combination of partnerships with other Amazon technical teams and our own innovative new projects.

You will be joining the Tools and Machine Learning (Tamale) team. As part of InTech, Tamale strives to solve complex catalog quality problems using challenging machine learning and data analysis solutions. You will be exposed to cutting-edge big data and machine learning technologies, along with the entire Amazon catalog technology stack, and you'll be part of a key effort to improve our customers' experience by tackling and preventing defects in items in Amazon's catalog.

We are looking for a passionate, talented, and inventive Scientist with a strong machine learning background to help build industry-leading machine learning solutions. We strongly value your hard work and obsession to solve complex problems on behalf of Amazon customers.


Apply

Mountain View


Who we are Our team is the first in the world to drive autonomous vehicles on public roads using end-to-end deep learning. With our multi-national, world-class technical team, we’re building things differently.

We don’t think it’s scalable to tell an algorithm how to drive through hand-coded rules and expensive HD maps. Instead, we believe that using experience and data will allow our algorithms to be more intelligent: capable of easily adapting to new environments. Our aim is to be the future of self-driving cars: the first to deploy in 100 cities across the world bringing autonomy to everyone, everywhere.

Where you will have an impact: Science is the team that is advancing our end-to-end autonomous driving research. The team’s mission is to accelerate our journey to AV2.0 and ensure the future success of Wayve by incubating and investing in new ideas that have the potential to become game-changing technological advances for the company.

As the first Research Manager in our Mountain View office, you will be responsible for managing & scaling a strong Science team which is building our Wayve Foundational Model in collaboration with other Wayve science teams in London and Vancouver. You will provide coaching and guidance to each of the researchers and engineers within your team and work with leaders across the company to ensure sustainable career growth for your team during a period of growth in the company. You will participate in our project-based operating model where your focus will be unlocking the potential of your team and its technical leaders to drive industry-leading impact. As part of your work, you will help identify the right projects to invest in, ensure the right allocation of resources to those projects, keep the team in good health, provide technical feedback to your team, share progress to build momentum, and build alignment and strong collaboration across the wider Science organisation. We are actively hiring and aim to substantially grow our research team over the next two years and you will be at the heart of this.

What you’ll bring to Wayve
Essential:
-Prior experience as a manager of research teams (10-15+ people) with a clear career interest in management
-Passionate about fostering personal and professional growth in individual team members
-Experience with roadmap planning, stakeholder management, requirements gathering, and alignment with peers towards milestones and deliverables
-Strong knowledge of Machine Learning and related areas, such as Deep Learning, Natural Language Processing, Computer Vision, etc.
-Industry experience with machine learning technology development that has had real-world product impact
-Experience driving a team and technical project through the full lifecycle, ideally within the language, vision, or multimodal space
-Passionate about bringing research concepts through to product
-Research and engineering fundamentals
-MS or PhD in Computer Science, Engineering, or similar experience

Desirable:
-Experience managing the execution of a technical product
-Experience working in a project-based (“matrix”) operating environment
-Proven track record of successfully delivering research projects and publications
-Experience working with robotics, self-driving, AR/VR, or LLMs

Our offer:
-Competitive compensation, on-site chef and bar, lots of fun socials, workplace nursery scheme, comprehensive private health insurance, and more!
-Immersion in a team of world-class researchers, engineers, and entrepreneurs
-A position to shape the future of autonomous driving, and thus bring about a real-world deployment of a breakthrough technology
-Help relocating/travelling to London, with visa sponsorship
-Flexible working hours - we trust you to do your job well, at times that suit you and your team


Apply

About the role As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible for both annotating in-house datasets and ensuring the quality assurance of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team you will have to be comfortable working with standard tools and workflows for data annotation and possess the ability to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note, this role is office-based. You will be working at our modern, friendly office at the very heart of London.


Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:
-Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments
-Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
-Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
-Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
-Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
-Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

Please click on the Apply link below to see the full job description and apply.


Apply

Natick, MA, United States


The Company: Cognex is a global leader in the exciting and growing field of machine vision. This position is a hybrid role in our Natick, MA corporate HQ.

The Team: This position is for an experienced Software Engineer in the Core Vision Technology team at Cognex, focused on architecting and productizing the best-in-class computer vision algorithms and AI models that power Cognex’s industrial barcode readers and 2D vision tools with a mission to innovate on behalf of customers and make this technology accessible to a broad range of users and platforms. Our products combine custom hardware, specialized lighting and optics, and world-class vision algorithms/models to create embedded systems that can find and read high-density symbols on package labels or marked directly on a variety of industrial parts, including aircraft engines, electronics substrates, and pharmaceutical test equipment. Our devices need to read hundreds of codes per second, so speed-optimized hardware and software work together to create best in class technology. Companies around the world rely on Cognex vision tools and technology to guide assembly, automate inspection, and speed up production and distribution.

Job Summary: The Core Vision Technology team is seeking an experienced developer with deep knowledge of the software development life cycle, creative problem-solving skills, and solid design thinking, with a focus on productization of AI technology on embedded platforms. You will play the critical role of a chief architect, leading the development and productization of computer vision AI models and algorithms on multiple Cognex products, with the goal of making the technology modular and available to a broad range of users and platforms. In this role, you will interface with machine vision experts in R&D, product, hardware, and other software engineering teams at Cognex. A successful individual will lead design discussions, make sound architectural choices for the future on different embedded platforms, advocate for engineering excellence, mentor junior engineers, and extend technical influence across teams. Prior experience with productization of AI technology is essential for this position.

Essential Functions:
-Develop and productize innovative vision algorithms, including AI models developed by the R&D team for detecting and reading challenging 1D and 2D barcodes, and vision tools for gauging, inspection, guiding, and identifying industrial parts.
-Lead software and API design discussions and make scalable technology choices that meet current and future business needs.
-More details in the link below.

Minimum education and work experience required: MS or PhD from a top engineering school in EE, CS, or equivalent; 7+ years of relevant, high-tech work experience.

If you would like to meet the hiring manager at CVPR to discuss this opportunity, please email ahmed.elbarkouky@cognex.com


Apply

Location San Diego


Description

Qualcomm AI Research is looking for world-class algorithm engineers in general-domain machine learning, especially deep learning, generative AI, LLMs, and LVMs. Come join a high-caliber team of engineers building advanced machine learning technology, best-in-class solutions, and user-friendly model optimization tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet) to enable state-of-the-art networks to run on devices with limited power, memory, and computation.

Members of our team enjoy the opportunity to participate in cutting-edge research while simultaneously contributing technology that will be deployed worldwide in our industry-leading devices. You will be part of a multi-disciplinary, talented team working on on-device generative AI optimization. Collaborate in a cross-functional environment spanning hardware, software, and systems. See your design in action on industry-leading chips embedded in the next generation of smartphones, autonomous vehicles, robotics, and IoT devices.

Minimum Qualifications:
-Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field and 4+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
-Master's degree in Computer Science, Engineering, Information Systems, or a related field and 3+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR
-PhD in Computer Science, Engineering, Information Systems, or a related field and 2+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience.

The R&D work responsibility for this position focuses on the following:
-Algorithm research and development in the areas of Generative AI, LVMs, LLMs, and multi-modality
-Efficient inference algorithm research and development, e.g. batching, KV caching, efficient attention, long context, speculative decoding
-Advanced quantization algorithm research and development for complex generative models, e.g. gradient/non-gradient-based optimization, equivalent/non-equivalent transformation, automatic mixed precision, hardware in the loop
-Model compression, lossy or lossless, structural and neural search
-Optimization-based learning and learning-based optimization
-Generative AI system prototyping
-Applying solutions toward system innovations for model efficiency advancement on device as well as in the cloud
-Python and PyTorch programming

Preferred Qualifications:
-Master's degree in Computer Science, Engineering, Information Systems, or a related field; a PhD is preferred
-2+ years of experience with Machine Learning algorithms, systems engineering, or related work experience
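KV caching, one of the efficient-inference techniques named above, can be sketched in a few lines. This is a toy single-head attention with random stand-in projection matrices (all names and dimensions are illustrative, not any Qualcomm code): instead of recomputing keys and values for the whole prefix at every decoding step, each new token's key/value are appended to a cache, and only the new token's query attends over it.

```python
# Toy demonstration that cached incremental attention matches full recompute.
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding dimension
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_full(X):
    """Recompute attention over the full prefix (no cache)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = softmax(Q[-1] @ K.T / np.sqrt(d))  # last token attends to all
    return scores @ V

def attend_cached(x_new, cache):
    """Incremental step: only the new token's K/V are projected."""
    cache["K"].append(x_new @ Wk)
    cache["V"].append(x_new @ Wv)
    K, V = np.stack(cache["K"]), np.stack(cache["V"])
    scores = softmax((x_new @ Wq) @ K.T / np.sqrt(d))
    return scores @ V

# Feed three tokens incrementally, then compare against the full recompute.
X = rng.standard_normal((3, d))
cache = {"K": [], "V": []}
for t in range(3):
    out_cached = attend_cached(X[t], cache)
out_full = attend_full(X)
assert np.allclose(out_full, out_cached)
```

The assertion checks that the cached path reproduces the full recomputation exactly; the payoff is that each decoding step projects only one token instead of the entire prefix, which is what makes long-context autoregressive inference tractable on constrained devices.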


Apply

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

The AI Platform team at Figma is working on an exciting mission of expanding the frontiers of AI for creativity, and developing magical experiences in Figma products. This involves making existing features like search smarter, and incorporating new features using cutting edge Generative AI and deep learning techniques. We’re looking for engineers with a background in Machine Learning and Artificial Intelligence to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • Drive fundamental and applied research in ML/AI using Generative AI, deep learning, and classical machine learning, with Figma product use cases in mind.
  • Formulate and implement new modeling approaches both to improve the effectiveness of Figma’s current models as well as enable the launch of entirely new AI-powered product features.
  • Work in concert with other ML researchers, as well as product and infrastructure engineers to productionize new models and systems to power features in Figma’s design and collaboration tool.
  • Expand the boundaries of what is possible with the current technology set and experiment with novel ideas.
  • Publish scientific work on problems relevant to Figma in leading conferences like ICML, NeurIPS, CVPR etc.

We'd love to hear from you if you have:

  • A recently obtained PhD, or one in progress, in AI, Computer Science, or a related field. The degree must be completed prior to starting at Figma.
  • Demonstrated expertise in machine learning with a publication record in relevant conferences, or a track record in applying machine learning techniques to products.
  • Experience in Python and machine learning frameworks (such as PyTorch, TensorFlow or JAX).
  • Experience building systems based on deep learning, natural language processing, computer vision, and/or generative models.
  • Experience solving sophisticated problems and comparing alternative solutions, trade-offs, and diverse points of view to determine a path forward.
  • Experience communicating and working across functions to drive solutions.

While not required, it’s an added plus if you also have:

  • Experience working in industry on relevant AI projects through internships or past full time work.
  • Publications on recent advances in AI such as large language models (LLMs), vision language models (VLMs), or diffusion models.

Apply

Location Multiple Locations


Description

Qualcomm's Multimedia R&D and Standards Group is seeking candidates for Video Compression Research Engineer positions. You will be part of a world-renowned team of video compression experts. The team develops algorithms, hardware architectures, and systems for state-of-the-art applications of classical and machine learning methods in video compression, video processing, point cloud coding and processing, AR/VR, and computer vision use cases. The successful candidate for this position will be a highly self-directed individual with strong creative and analytic skills and a passion for video compression technology. You will work on, but not be limited to, developing new applications of classical and machine learning methods in video compression, improving state-of-the-art video codecs.

We are considering candidates with various levels of experience. We are flexible on location and open to hiring anywhere, preferred locations are USA, Germany and Taiwan.

Responsibilities:
-Contribute to the conception, development, implementation, and optimization of new algorithms extending existing techniques and systems, allowing improved video compression.
-Initiate ideas, and design and implement algorithms for superior hardware encoder performance, including perceptually based bit allocation.
-Develop new algorithms for deep learning-based video compression solutions.
-Represent Qualcomm in the related standardization forums: JVET, MPEG Video, and ITU-T/VCEG.
-Document and present new algorithms and implementations in various forms, including standards contributions, patent applications, conference and journal publications, presentations, etc.

The ideal candidate would have the skills/experience below:
-Expert knowledge of the theory, algorithms, and techniques used in video and image coding.
-Knowledge and experience of video codecs and their test models, such as ECM, VVC, HEVC, and AV1.
-Experience with deep learning architectures (CNN, RNN, autoencoder, etc.) and frameworks like TensorFlow/PyTorch.
-Track record of successful research accomplishments demonstrated through published papers and/or patent applications in the fields of video coding or video processing.
-Solid programming and debugging skills in C/C++.
-Strong written and verbal English communication skills, great work ethic, and ability to work in a team environment to accomplish common goals.
-PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field, or equivalent practical experience.

Qualifications: PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or similar fields. 1+ years of experience with programming languages such as C, C++, MATLAB, etc.


Apply

Location Sunnyvale, CA; Seattle, WA; New York, NY; Cambridge, MA


Description The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to help build industry-leading technology with multimodal systems.

As an Applied Scientist with the AGI team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (Gen AI) in Computer Vision.


Apply