



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category and location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Zoox is looking for a software engineer to join our Perception team and help us build novel architectures for classifying and understanding the complex and dynamic environments in our cities. In this role, you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms. We are creating new algorithms for segmentation, tracking, classification, and high-level scene understanding, and you could work on any (or all!) of these components.

We're looking for engineers with advanced degrees and experience building perception pipelines that work with real data in rapidly changing and uncertain environments.


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast behavior: for other agents, for our own vehicle’s actions, and for offline applications. To solve these problems we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior, and offline to provide learned models for autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team necessarily works very closely with the Planner team to advance our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

A postdoctoral position is available in Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow the NIH guideline commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total including 8 80-GB Nvidia H100 GPUs, 10 48-GB Nvidia RTX A6000 GPUs, and 4 Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with a background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology, or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve them

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with the subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

Location: Multiple Locations


Description: The Qualcomm Cloud Computing team is developing hardware and software for machine learning solutions spanning the data center, edge, infrastructure, and automotive markets. Qualcomm’s Cloud AI 100 accelerators are currently deployed at AWS and Cirrascale Cloud and at several large organizations. We are rapidly expanding our ML hardware and software solutions for large-scale deployments and are hiring across many disciplines.

We are seeking to hire for multiple machine learning positions on the Qualcomm Cloud team. In this role, you will work with Qualcomm's partners to develop and deploy best-in-class ML applications (CV, NLP, GenAI, LLMs, etc.) based on popular frameworks such as PyTorch, TensorFlow, and ONNX, optimized for Qualcomm's Cloud AI accelerators. The work will include assessment of model throughput, latency, and accuracy; model profiling and optimization; end-to-end application pipeline development; integration with customer frameworks and libraries; and responsibility for customer documentation, training, and demos. Candidates must possess excellent communication, leadership, interpersonal, organizational, and analytical skills.

This role will interact with individuals of all levels and requires an experienced, dedicated professional to effectively collaborate with internal and external stakeholders. The ideal candidate has either developed or deployed deep learning models on popular ML frameworks. If you have a strong appetite for technology and enjoy working in small, agile, empowered teams solving complex problems within a high energy, oftentimes chaotic environment then this is the role for you.

Minimum Qualifications:
• Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 4+ years of Software Applications Engineering, Software Development, or related work experience; OR
• Master's degree in Engineering, Information Systems, Computer Science, or a related field and 3+ years of Software Applications Engineering, Software Development, or related work experience; OR
• PhD in Engineering, Information Systems, Computer Science, or a related field and 2+ years of Software Applications Engineering, Software Development, or related work experience.

• 2+ years of experience with a programming language such as C, C++, Java, Python, etc.
• 1+ year of experience with debugging techniques.

Key Responsibilities:
• Be a key contributor to Qualcomm’s Cloud AI GitHub repo and developer documentation.
• Work with developers in large organizations to onboard them onto Qualcomm’s Cloud AI ML stack, improve and optimize their deep learning models on Qualcomm Cloud AI 100, and deploy their applications at scale.
• Collaborate and interact with internal teams to analyze and optimize training and inference for deep learning.
• Work on Triton, ExecuTorch, Inductor, and TorchDynamo to build abstraction layers for the inference accelerator.
• Optimize LLM/GenAI workloads for both scale-up (multi-SoC) and scale-out (multi-card) systems.
• Partner with product management and hardware/software engineering to highlight customer progress, gaps in product features, etc.


Apply

Location: Multiple Locations


Description Today, more intelligence is moving to end devices, and mobile is becoming the pervasive AI platform. Building on the smartphone foundation and the scale of mobile, Qualcomm envisions making AI ubiquitous—expanding beyond mobile and powering other end devices, machines, vehicles, and things. We are inventing, developing, and commercializing power-efficient on-device AI, edge cloud AI, and 5G to make this a reality.

Job Purpose & Responsibilities: As a member of Qualcomm’s ML Systems Team, you will participate in two activities: (1) development and evolution of ML/AI compilers (production and exploratory versions) for efficient mapping of ML/AI algorithms onto existing and future HW, and (2) analysis of ML/AI algorithms and workloads to drive future features in Qualcomm’s ML HW/SW offerings.

Key Responsibilities:
• Contributing to the development and evolution of ML/AI compilers within Qualcomm
• Defining and implementing algorithms for mapping ML/AI workloads to Qualcomm HW
• Understanding trends in ML network design, through customer engagements and the latest academic research, and how these affect both SW and HW design
• Creating performance-driven simulation components (using C++, Python) for analysis and design of high-performance HW/SW algorithms on future SoCs
• Exploring and analyzing performance/area/power trade-offs for future HW and SW ML algorithms
• Pre-silicon prediction of performance for various ML algorithms
• Running, debugging, and analyzing performance simulations to suggest enhancements to Qualcomm hardware and software that tackle compute and system memory-related bottlenecks
• Working in cross-site, cross-functional teams

Requirements:
• Demonstrated ability to learn, think, and adapt in a fast-changing environment
• Detail-oriented, with strong problem-solving, analytical, and debugging skills
• Strong communication skills (written and verbal)
• Strong background in algorithm development and performance analysis

The following experiences would be significant assets:
• Strong object-oriented design principles
• Strong knowledge of C++ and Python
• Experience in compiler design and development
• Knowledge of network model formats/platforms (e.g., PyTorch, TensorFlow, ONNX)
• On-silicon debugging of high-performance compute algorithms
• Knowledge of algorithms and data structures
• Knowledge of software development processes (revision control, CI/CD, etc.)
• Familiarity with tools such as git, Jenkins, Docker, and clang/MSVC
• Knowledge of computer architecture, digital circuits, and event-driven transactional models/simulators


Apply

Tokyo, Tokyo-to, Japan


Overview: As one of the world's leading industrial research laboratories, Microsoft Research (MSR) has more than 1,000 researchers and engineers working across the globe. Over the past 30 years, Microsoft scientists have not only carried out world-class computer science research but have also transferred advanced technologies into products and services that have changed millions of people’s lives and kept Microsoft at the forefront of digital transformation.

Part of Microsoft Research, Microsoft Research Asia (MSR Asia), established in 1998, is a leading research lab with major sites in Beijing, Shanghai, and Vancouver. Over the years, technologies developed by MSR Asia have made a significant impact within Microsoft and around the world, and innovative new technologies are constantly being born in the lab. As a world-class research lab, MSR Asia offers an exhilarating, supportive, open, and inclusive environment for top talent to create the future through disruptive, cutting-edge research. (More information: Microsoft Research Lab - Asia.)

Along with its business growth, Microsoft Research Asia (MSRA) is increasing its presence in Japan and is looking for a Principal Research Manager who specializes in AI, with an emphasis on embodied AI and robotics, AI model innovations (NLP, vision, multi-modality), societal AI, wireless sensing, and wellbeing. This is a unique opportunity to lead an ambitious research agenda and work with various teams to explore new applications of these research areas.

Responsibilities:
• As a leading and accomplished expert in a broad research area (e.g., embodied AI and robotics, AI models, multimedia and vision), maintain a comprehensive understanding of the relevant literature, research methods, and business and academic context
• Define and articulate a clear long-term research vision in line with MSRA's strategic focus, and drive the research agenda to land on the planned schedule
• As a local representative, foster cooperative relationships with local governments, academic communities, industry partners, and business groups within Microsoft to establish MSRA's presence locally and support future growth
• Create synergy among MSRA research groups in multiple locations to enable collaboration and creativity
• As a people manager, hire and retain top talent, delivering success through empowerment and accountability


Apply

Location: Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IoT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proofs-of-concept in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and has converted leading research into best-in-class, user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with more than 1 billion parameters on an Android phone in under one second, after applying our full-stack AI optimizations to the model.

Role responsibility can include both, applied and fundamental research in the field of machine learning with development focus in one or many of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g. large language models, deep generative models (VAE, Normalizing-Flow, ARM, etc), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimizations, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling based, back-propagation based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design.

  • Design, develop, and test software for machine learning frameworks that optimize models to run efficiently on edge devices. Candidates are expected to have a strong interest in and deep passion for making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Research, design, develop, enhance, and implement different components of machine learning compiler for HW Accelerators.

  • Design, implement and train DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).


Apply

We are seeking a highly motivated candidate for a fully funded postdoctoral researcher position to work in 3D computer graphics and 3D computer vision.

The successful candidate will join the 3D Graphics and Vision research group led by Prof. Binh-Son Hua at the School of Computer Science and Statistics, Trinity College Dublin, Ireland to work on topics related to generative AI in the 3D domain. The School of Computer Science and Statistics at Trinity College Dublin is a collegiate, friendly, and research-intensive centre for academic study and research excellence. The School has been ranked #1 in Ireland, top 25 in Europe, and top 100 worldwide (QS Subject Rankings 2018, 2019, 2020, 2021).

The postdoctoral researcher is expected to conduct fundamental research and publish in top-tier computer vision and computer graphics conferences (CVPR, ECCV, ICCV, SIGGRAPH) and journals (TPAMI, IJCV). Other responsibilities include supporting graduate or undergraduate students with technical guidance and engagement in other research activities such as paper reviews, reading group, workshop organization, etc.

The start date of the position is August 1, 2024. The contract duration is one year, with the option of renewal for a second year. The successful candidate will have the following skills and knowledge:
• PhD in Computer Science or a related field
• Strong track record in 3D computer graphics and 3D computer vision
• Hands-on experience in training deep models and generative models (required)
• Hands-on experience and relevant skills in computer graphics and computer vision application development, such as OpenGL, OpenCV, CUDA, and Blender (desirable)
• Strong programming skills in C++ and Python; capability to implement systems from research papers and open-source software
• Additional background in math, statistics, or physics (an advantage)

Applicants should provide the following information:
• A comprehensive CV, including a full list of publications
• The names and contact details of two referees; one of the referees should be the applicant’s PhD supervisor
• Two representative papers by the applicant

Interested candidates should email their applications directly to Binh-Son Hua (https://sonhua.github.io). Applications will be reviewed on a rolling basis until the position has been filled.


Apply

Location: Santa Clara, CA


Description: Amazon is looking for passionate, talented, and inventive Applied Scientists with a strong machine learning background to help build industry-leading speech, vision, and language technology.

AWS Utility Computing (UC) provides product innovations, from foundational services such as Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new innovations that set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Automatic Speech Recognition (ASR), Machine Translation (MT), Natural Language Understanding (NLU), Machine Learning (ML) and Computer Vision (CV).

As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state-of-the-art in human language technology. Your work will directly impact millions of our customers in the form of products and services that make use of speech and language technology. You will gain hands on experience with Amazon’s heterogeneous speech, text, and structured data sources, and large-scale computing resources to accelerate advances in spoken language understanding.

We are hiring in all areas of human language technology: ASR, MT, NLU, text-to-speech (TTS), and Dialog Management, in addition to Computer Vision.


Apply

Location: Sunnyvale, CA; Seattle, WA; New York, NY; Cambridge, MA


Description: The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background to help build industry-leading technology with multimodal systems.

As an Applied Scientist with the AGI team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (Gen AI) in Computer Vision.


Apply

Vancouver

Who we are: Established in 2017, Wayve is a leader in autonomous vehicle technology, driven by breakthroughs in embodied AI. Our intelligent, mapless, and hardware-agnostic technologies empower vehicles to navigate complex environments effortlessly. Supported by prominent investors, Wayve is advancing the transition from assisted to fully automated driving, making transportation safer, more efficient, and universally accessible. Join our world-class, multinational team of engineers and researchers as we push the boundaries of frontier AI and autonomous driving, creating impactful technologies and products on a global scale.

We are seeking an experienced researcher to be a founding member of our Vancouver team! We are prioritising candidates with experience leading projects in AI applied to autonomous driving or to similar robotics or decision-making domains, including but not limited to the following areas:
• Foundation models for robotics or embodied AI
• Model-free and model-based reinforcement learning
• Offline reinforcement learning
• Large language models
• Planning with learned models, model predictive control, and tree search
• Imitation learning, inverse reinforcement learning, and causal inference
• Learned agent models: behavioural and physical models of cars, people, and other dynamic agents

Challenges you will own: You'll be working on some of the world's hardest problems, and you'll be able to attack them in new ways. You'll be a technical leader within our diverse, cross-disciplinary team, helping teach our robots how to drive safely and comfortably in complex real-world environments. This encompasses many aspects of research across perception, prediction, planning, and control, including:

• Actively contributing to Science's technical leadership community, including proposing new projects, organising their work, and delivering substantial impact across Wayve
• Leveraging our large, rich, and diverse sources of real-world driving data
• Architecting our models to best employ the latest advances in foundation models, transformers, world models, etc., and evaluating and incorporating state-of-the-art techniques into our workflows
• Investigating which learning algorithms to use (e.g. reinforcement learning, behavioural cloning)
• Leveraging simulation for controlled experimental insight, training-data augmentation, and re-simulation
• Scaling models efficiently across data, model size, and compute, while maintaining efficient deployment on the car
• Collaborating with cross-functional, international teams to integrate research findings into scalable, production-level solutions
• Potentially contributing to academic publications at top-tier conferences such as NeurIPS, CVPR, ICRA, ICLR, and CoRL, working in a world-class team, contributing to the scientific community, and establishing Wayve as a leader in the field

What you will bring to Wayve:
• A proven track record of research in one or more of the topics above, demonstrated through deployed applications or publications
• Experience leading a research agenda aligned with larger organisation or company goals
• Strong programming skills in Python, with experience in deep learning frameworks and libraries such as PyTorch, numpy, and pandas
• Experience taking a machine learning research concept through the full ML development cycle
• Excellent problem-solving skills and the ability to work independently as well as in a team environment
• Demonstrated ability to work collaboratively in a fast-paced, innovative, interdisciplinary team environment
• Experience bringing an ML research concept to production and at scale
• A PhD in Computer Science, Computer Engineering, or a related field

What we offer you: The chance to be part of a truly mission-driven organisation and an opportunity to shape the future of autonomous driving. Unlike our competitors, Wayve is still relatively small and nimble, giving you the chance to make a huge impact.


Apply

Vancouver, British Columbia, Canada


Overview: Microsoft Research (MSR), a leading industrial research laboratory, comprises over 1,000 computer scientists working across the United States, United Kingdom, China, India, Canada, and the Netherlands.

We are currently seeking a Principal Researcher in the areas of artificial specialized intelligence and artificial general intelligence, located in Vancouver, British Columbia.

This is an opportunity to drive an ambitious research agenda while collaborating with diverse teams to push for novel applications of these areas.

Over the past 30 years, our scientists have not only conducted world-class computer science research but also integrated advanced technologies into our products and services, positively impacting millions of lives and propelling Microsoft to the forefront of digital transformation.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities:
• Identify and drive new research directions, creating new technologies and collaborating with Microsoft product groups and external partners to deploy them in real-world settings
• Stay current with the latest trends, research, and developments in AI, machine learning, and system architecture to ensure our systems remain at the forefront of innovation
• Evaluate the performance of AI-centric systems and provide recommendations for improvement and optimization
• Publish research findings in peer-reviewed journals, conferences, and other relevant venues, and present research results to internal and external stakeholders
• Mentor and guide researchers and engineers in their research and development efforts
• Collaborate with industry partners and academic institutions to drive joint research projects and initiatives


Apply

Location: Niskayuna, NY


Description: At GE Aerospace Research, our team develops advanced embedded systems technology for the future of flight. Our technology will enable sustainable air travel and next-generation aviation systems for both commercial and military applications. As a Lead Embedded Software Engineer, you will architect and develop state-of-the-art embedded systems for real-time controls and communication applications. You will lead and contribute to advanced research and development programs for GE Aerospace as well as with U.S. Government agencies. You will collaborate with fellow researchers from a range of technology disciplines, contributing to projects across the breadth of GE Aerospace programs.

Essential Responsibilities: As a Lead Embedded Software Engineer, you will:

• Work independently as well as with a team to develop and apply advanced software technologies for embedded controls and communication systems for GE Aerospace products
• Interact with hardware suppliers and engineering tool providers to identify the best solutions for the most challenging applications
• Lead small to medium-sized projects or tasks
• Be responsible for documenting technology and results through patent applications, technical reports, and publications
• Expand your expertise by staying current with advances in embedded software and seeking out new ideas and applications
• Collaborate in a team environment with colleagues across GE Aerospace and government agencies

Qualifications/Requirements:

• Bachelor’s degree in Electrical Engineering, Computer Science, or a related discipline with a minimum of 7 years of industry experience; OR a master’s degree in Electrical Engineering, Computer Science, or a related discipline with a minimum of 5 years of industry experience; OR a Ph.D. in Electrical Engineering, Computer Science, or a related discipline with a minimum of 3 years of industry experience
• Strong background in software development for embedded systems (e.g., x86, ARM)
• Strong embedded programming skills, such as C/C++, Python, and Rust
• Familiarity with CNSA and NIST cryptographic algorithms
• Willingness to travel a minimum of 2 weeks per year
• Ability to obtain and maintain a US Government security clearance
• US citizenship required
• Must be willing to work out of an office located in Niskayuna, NY
• You must submit your application for employment on the careers page at www.gecareers.com to be considered

Ideal Candidate Characteristics:

• Coding experience with Bash, Python, C#, MATLAB, ARMv8 assembly, and RISC-V assembly
• Experience with embedded devices from Intel, AMD, Xilinx, NXP, etc.
• Experience with hardware-based security (e.g., UEFI, TPM, ARM TrustZone, Secure Boot)
• Understanding of embedded system security requirements and security techniques
• Experience with the Linux OS and Linux security
• Experience with OpenSSL and/or wolfSSL
• Experience with wired and wireless networking protocols or network security
• Knowledge of the 802.1, 802.3, and/or 802.11 standards
• Experience with software-defined networks (SDN) and relevant software such as OpenFlow, Open vSwitch, or Mininet
• Hands-on experience with embedded hardware (such as protoboards) or networking equipment (such as switches and analyzers) in a laboratory setting
• Experience with embedded development in an RTOS environment (e.g., VxWorks, FreeRTOS)
• Demonstrated ability to take an innovative idea from concept to product
• Experience with the Agile methodology of program management

The base pay range for this position is $90,000 - $175,000 USD annually. The specific pay offered may be influenced by a variety of factors, including the candidate’s experience, education, and skill set. This position is also eligible for an annual discretionary bonus based on a percentage of your base salary. This posting is expected to close on June 16, 2024.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent features on the TODAY show and in the New York Times.

** Please note that all of our roles will require you to be in-person at our NYC HQ (located in Union Square) **

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.

Preferred Qualifications:

PhD in computer science or a related field, or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note that benefits apply to full-time employees only.


Apply

ASML US, including its affiliates and subsidiaries, bring together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market and service these advanced machines, which enable our customers - the world’s leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, Netherlands and we have 18 office locations around the United States including main offices in Chandler, Arizona, San Jose and San Diego, California, Wilton, Connecticut, and Hillsboro, Oregon.

ASML’s Optical Sensing (Wafer Alignment Sensor and YieldStar) department in Wilton, Connecticut is seeking a Design Engineer to support and develop complex optical/photonic sensor systems used within ASML’s photolithography tools. These systems typically include light sources, detectors, optical/electro-optical components, fiber optics, electronics and signal processing software functioning in close collaboration with the rest of the lithography system. As a design engineer, you will design, develop, build and integrate optical sensor systems.

Role and Responsibilities:

Use general physics, optics, and software knowledge, together with an understanding of the sensor systems and tools, to develop optical alignment sensors for lithography machines

Hands-on skills in building optical systems (e.g., imaging, testing, alignment, detector systems, etc.)

Strong data analysis skills for evaluating sensor performance and troubleshooting

Leadership:

Lead activities to determine problem root cause, execute complex tests, gather data, and effectively communicate results at different levels of abstraction (from technical colleagues to high-level managers)

Lead engineers in various competencies (e.g., software, electronics, equipment engineering, manufacturing engineering, etc.) in support of feature delivery for alignment sensors

Problem Solving:

Troubleshoot complex technical problems

Develop/debug data signal processing algorithms

Develop and execute test plans to determine problem root cause

Communications/Teamwork:

Draw conclusions based on input from different stakeholders

Clearly communicate information at different levels of abstraction

Programming:

Implement data analysis techniques in functioning MATLAB code

Optimization skills

GUI-building experience

Familiarity with LabVIEW and Python

Some travel (up to 10%) to Europe, Asia, and within the US can be expected.


Apply