



CVPR 2024 Career Website

The CVPR 2024 conference is not accepting applications to post at this time.

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category or location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Excited to see you at CVPR! We’ll be at booth 1404. Come see us to talk more about roles.

Our team consists of people with diverse software and academic experiences. We work together toward one common goal: integrating the software you'll help us build into hundreds of millions of vehicles.

As an MLE, you will collaborate with researchers to perform research operations using existing infrastructure. You will use your judgment in complex scenarios and help apply standard techniques to various technical problems. Specifically, you will:

  • Characterize neural network quality, failure modes, and edge cases based on research data
  • Maintain awareness of current trends in relevant areas of research and technology
  • Coordinate with researchers and accurately convey the status of experiments
  • Manage a large number of concurrent experiments and make accurate time estimates for deadlines
  • Review experimental results and suggest theoretical or process improvements for future iterations
  • Write technical reports indicating qualitative and quantitative results to external parties

Apply

Location Seattle, WA


Description Interested in solving challenging problems using the latest developments in Large Language Models and Artificial Intelligence (AI)? Amazon's Consumer Electronics Technology (CE Tech) organization is redefining shopping experiences by leveraging state-of-the-art AI technologies. We are looking for a talented Sr. Applied Scientist with a solid background in the design and development of scalable AI and ML systems and services, a deep passion for building ML-powered products, and a proven track record of executing complex projects and delivering high business and customer impact. You will help us shape the future of shopping experiences. As a member of our team, you'll work on cutting-edge projects that directly impact millions of customers, selling partners, and employees every single day. This role will provide exposure to state-of-the-art innovations in AI/ML systems (including GenAI). Technologies you will have exposure to, and/or will work with, include AWS Bedrock, Amazon Q, SageMaker, and foundation models such as Anthropic’s Claude and Mistral, among others.


Apply

About the role As a detail-oriented and experienced Data Annotation QA Coordinator, you will be responsible both for annotating in-house datasets and for ensuring the quality assurance of our outsourced data annotation deliveries. Your key responsibilities will include text, audio, image, and video annotation tasks, following detailed guidelines. To be successful in the team, you will have to be comfortable working with standard tools and workflows for data annotation and possess the ability to manage projects and requirements effectively.

You will join a group of more than 40 Researchers and Engineers in the R&D department. This is an open, collaborative and highly supportive environment. We are all working together to build something big - the future of synthetic media and programmable video through Generative AI. You will be a central part of a dynamic and vibrant team and culture.

Please note, this role is office-based. You will be working at our modern, friendly office in the heart of London.


Apply

Redmond, Washington, United States


Overview We are seeking a highly skilled and passionate Research Scientist to join our Responsible & OpenAI Research (ROAR) team in Azure Cognitive Services.

As a Research Scientist, you will play a key role in advancing the field of Responsible Artificial Intelligence (AI) to ensure safe releases of the rapidly advancing AI technologies, such as GPT-4, GPT-4V, DALL-E 3 and beyond, as well as to expand and enhance our standalone Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Conduct cutting-edge research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of AI risks.
  • Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.
  • Contribute to the development of Responsible AI policies, guidelines, and best practices, and ensure the practical implementation of these guidelines within various AI technology stacks across Microsoft, promoting a consistent approach to Responsible AI.
  • Enable the safe release of new Azure OpenAI Service features, and expand and enhance the Azure AI Content Safety Service with new detection technologies.
  • Develop innovative approaches to address AI safety challenges for diverse customer scenarios.
  • Other: Embody our Culture and Values


Apply

Location Multiple Locations


Description The Qualcomm Cloud Computing team is developing hardware and software for Machine Learning solutions spanning the data center, edge, infrastructure, and automotive markets. Qualcomm’s Cloud AI 100 accelerators are currently deployed at AWS / Cirrascale Cloud and at several large organizations. We are rapidly expanding our ML hardware and software solutions for large-scale deployments and are hiring across many disciplines.

We are seeking to hire for multiple machine learning positions in the Qualcomm Cloud team. In this role, you will work with Qualcomm's partners to develop and deploy best-in-class ML applications (CV, NLP, GenAI, LLMs, etc.) based on popular frameworks such as PyTorch, TensorFlow, and ONNX that are optimized for Qualcomm's Cloud AI accelerators. The work will include model assessment of throughput, latency, and accuracy; model profiling and optimization; end-to-end application pipeline development; integration with customer frameworks and libraries; and responsibility for customer documentation, training, and demos. The candidate must possess excellent communication, leadership, interpersonal, organizational, and analytical skills.

This role will interact with individuals of all levels and requires an experienced, dedicated professional to effectively collaborate with internal and external stakeholders. The ideal candidate has either developed or deployed deep learning models on popular ML frameworks. If you have a strong appetite for technology and enjoy working in small, agile, empowered teams solving complex problems within a high energy, oftentimes chaotic environment then this is the role for you.

Minimum Qualifications:
  • Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 4+ years of Software Applications Engineering, Software Development, or related work experience; OR
  • Master's degree in Engineering, Information Systems, Computer Science, or a related field and 3+ years of Software Applications Engineering, Software Development, or related work experience; OR
  • PhD in Engineering, Information Systems, Computer Science, or a related field and 2+ years of Software Applications Engineering, Software Development, or related work experience.

  • 2+ years of experience with a programming language such as C, C++, Java, Python, etc.
  • 1+ year of experience with debugging techniques.

Key Responsibilities:
  • Key contributor to Qualcomm’s Cloud AI GitHub repo and developer documentation.
  • Work with developers in large organizations to onboard them onto Qualcomm’s Cloud AI ML stack, improve and optimize their deep learning models on Qualcomm AI 100, and deploy their applications at scale.
  • Collaborate and interact with internal teams to analyze and optimize training and inference for deep learning.
  • Work on Triton, ExecuTorch, Inductor, and TorchDynamo to build abstraction layers for the inference accelerator.
  • Optimize LLM/GenAI workloads for both scale-up (multi-SoC) and scale-out (multi-card) systems.
  • Partner with product management and hardware/software engineering to highlight customer progress, gaps in product features, etc.


Apply

San Jose, CA

B GARAGE was founded in 2017 by a Ph.D. graduate from Stanford University. After having spent over five years researching robotics, computer vision, aeronautics, and drone autonomy, the founder and team set their minds on building a future where aerial robots would become an integral part of our daily lives without anyone necessarily piloting them. Together, our common goal is to redefine the user experience of drones and to expand the horizon for the use of drones.

Roles and Responsibilities

Design and develop perception for aerial robots and inventory recognition for warehouses by leveraging computer vision and deep learning techniques

Aid the computer vision team in delivering prototypes and products in a timely manner

Collaborate with other teams within the company

Minimum Qualifications

M.S. degree in computer science, robotics, electrical engineering, or other engineering disciplines

10+ years of experience with computer vision and machine learning

Proficient in image processing algorithms and multiple-view geometry using cameras

Experience with machine learning architectures for object detection, segmentation, text recognition, etc.

Proficient with ROS, C++, and Python

Experience with popular computer vision and GPU frameworks/libraries (e.g., OpenCV, TensorFlow, PyTorch, CUDA, cuDNN, etc.)

Proficient in containerization (Docker) and container orchestration (Kubernetes) technologies

Experience in cloud computing platforms (AWS, GCP, etc.)

Experience with robots that rely on real-time onboard processing

Self-motivated person who thrives in a fast-paced environment

Good problem solving and troubleshooting skills

Legally authorized to work in the United States

Optional Qualifications

Ph.D. degree in computer science, robotics, electrical engineering, or other engineering disciplines

Experience with scene reconstruction, bundle adjustment and factor graph optimization libraries

Experience with JavaScript and massively parallel cloud computing technologies such as Kafka, Spark, and MapReduce

Published research papers in CVPR, ICCV, ECCV, ICRA, IROS, etc.

Company Benefits

Competitive compensation packages

Medical, dental, vision, life insurance, and 401(k)

Flexible vacation and paid holidays

Complimentary lunches and snacks

Professional development reimbursement (online courses, conferences, exhibits, etc.)

B GARAGE stands for an open and respectful corporate culture because we believe diversity helps us to find new perspectives.

B GARAGE ensures that all our members have equal opportunities – regardless of age, ethnic origin and nationality, gender and gender identity, physical and mental abilities, religion and belief, sexual orientation, and social background. We always ensure diversity right from the recruitment stage and therefore make hiring decisions based on a candidate’s actual competencies, qualifications, and business needs at the time of hiring.


Apply

The Perception team at Zoox is responsible for developing the eyes and ears of our self-driving car. Navigating safely and competently in the world requires us to detect, classify, track, and understand several different attributes of all the objects around us that we might interact with, all in real time and with very high precision.

As a member of the Perception team at Zoox, you will be responsible for developing and improving state-of-the-art machine learning techniques for everything from 2D/3D object detection, panoptic segmentation, and tracking to attribute classification. You will work not just with our team of talented engineers and researchers in perception, but cross-functionally with several teams including sensors, prediction, and planning, and you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms.


Apply

Natick, MA, United States


The Company: Cognex is a global leader in the exciting and growing field of machine vision. This position is a hybrid role in our Natick, MA corporate HQ.

The Team: This position is for an experienced Software Engineer in the Core Vision Technology team at Cognex, focused on architecting and productizing the best-in-class computer vision algorithms and AI models that power Cognex’s industrial barcode readers and 2D vision tools, with a mission to innovate on behalf of customers and make this technology accessible to a broad range of users and platforms. Our products combine custom hardware, specialized lighting and optics, and world-class vision algorithms/models to create embedded systems that can find and read high-density symbols on package labels or marked directly on a variety of industrial parts, including aircraft engines, electronics substrates, and pharmaceutical test equipment. Our devices need to read hundreds of codes per second, so speed-optimized hardware and software work together to create best-in-class technology. Companies around the world rely on Cognex vision tools and technology to guide assembly, automate inspection, and speed up production and distribution.

Job Summary: The Core Vision Technology team is seeking an experienced developer with deep knowledge of the software development life cycle, creative problem-solving skills, and solid design thinking, with a focus on productization of AI technology on embedded platforms. You will play the critical role of a chief architect, leading the development and productization of computer vision AI models and algorithms across multiple Cognex products, with the goal of making the technology modular and available to a broad range of users and platforms. In this role, you will interface with machine vision experts in R&D, product, hardware, and other software engineering teams at Cognex. A successful individual will lead design discussions, make sound architectural choices for the future on different embedded platforms, advocate for engineering excellence, mentor junior engineers, and extend technical influence across teams. Prior experience with productization of AI technology is essential for this position.

Essential Functions:
  • Develop and productize innovative vision algorithms, including AI models developed by the R&D team for detecting and reading challenging 1D and 2D barcodes, and vision tools for gauging, inspection, guiding, and identifying industrial parts.
  • Lead software and API design discussions and make scalable technology choices that meet current and future business needs.
  • More details in the link below.

Minimum education and work experience required: MS or PhD from a top engineering school in EE, CS, or an equivalent field; 7+ years of relevant, high-tech work experience.

If you would like to meet the hiring manager at CVPR to discuss this opportunity, please email ahmed.elbarkouky@cognex.com


Apply

Location Santa Clara, CA


Description Amazon is looking for passionate, talented, and inventive Applied Scientists with a strong machine learning background to help build industry-leading Speech, Vision, and Language technology.

AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Automatic Speech Recognition (ASR), Machine Translation (MT), Natural Language Understanding (NLU), Machine Learning (ML) and Computer Vision (CV).

As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state of the art in human language technology. Your work will directly impact millions of our customers in the form of products and services that make use of speech and language technology. You will gain hands-on experience with Amazon’s heterogeneous speech, text, and structured data sources, and large-scale computing resources to accelerate advances in spoken language understanding.

We are hiring in all areas of human language technology: ASR, MT, NLU, text-to-speech (TTS), and Dialog Management, in addition to Computer Vision.


Apply

Overview We are seeking an exceptionally talented Postdoctoral Research Fellow to join our interdisciplinary team at the forefront of machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience. This position is hosted by the Stanford Translational AI (STAI) in Medicine and Mental Health Lab (PI: Dr. Ehsan Adeli, https://stanford.edu/~eadeli), as part of the Department of Psychiatry and Behavioral Sciences at Stanford University. The postdoc will have the opportunity to directly collaborate with researchers and PIs within the Computational Neuroscience Lab (CNS Lab) in the School of Medicine and the Stanford Vision and Learning (SVL) lab in the Computer Science Department. These dynamic research groups are renowned for groundbreaking contributions to artificial intelligence and medical sciences.

Project Description The successful candidate will have the opportunity to work on cutting-edge projects aimed at building large-scale models for neuroimaging and neuroscience through innovative AI technologies and self-supervised learning methods. The postdoc will contribute to building a large-scale foundation model from brain MRIs and other modalities of data (e.g., genetics, videos, text). The intended downstream applications include understanding the brain development process during the early years of life, decoding brain aging mechanisms, and identifying the pathology of different neurodegenerative or neuropsychiatric disorders. We use several public and private datasets, including but not limited to the Human Connectome Project, UK Biobank, Alzheimer's Disease Neuroimaging Initiative (ADNI), Parkinson’s Progression Marker Initiative (PPMI), Open Access Series of Imaging Studies (OASIS), Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA), Adolescent Brain Cognitive Development (ABCD), and OpenNeuro.

Key Responsibilities Conduct research in machine learning, computer vision, and medical image analysis, with applications in neuroimaging and neuroscience. Develop and implement advanced algorithms for analyzing medical images and other modalities of medical data. Develop novel generative models. Develop large-scale foundation models. Collaborate with a team of researchers and clinicians to design and execute studies that advance our understanding of neurological disorders. Mentor graduate students (Ph.D. and MSc). Publish findings in top-tier journals and conferences. Contribute to grant writing and proposal development for securing research funding.

Qualifications PhD in Computer Science, Electrical Engineering, Neuroscience, or a related field. Proven track record of publications in high-impact journals and conferences including ICML, NeurIPS, ICLR, CVPR, ICCV, ECCV, MICCAI, Nature, and JAMA. Strong background in machine learning, computer vision, medical image analysis, neuroimaging, and neuroscience. Excellent programming skills in Python, C++, or similar languages and experience with ML frameworks such as TensorFlow or PyTorch. Ability to work independently and collaboratively in an interdisciplinary team. Excellent communication skills, both written and verbal.

Benefits Competitive salary and benefits package. Access to state-of-the-art facilities and computational resources. Opportunities for professional development and collaboration with leading experts in the field. Participation in international conferences and workshops. Working at Stanford University offers access to world-class research facilities and a vibrant intellectual community. The university provides numerous opportunities for interdisciplinary collaboration, professional development, and cutting-edge innovation. Additionally, being part of Stanford opens doors to a global network of leading experts and industry partners, enhancing both career growth and research impact.

Apply For full consideration, send a complete application via this form: https://forms.gle/KPQHPGGeXJcEsD6V6


Apply

Redmond, Washington, United States


Overview Are you interested in developing and optimizing deep learning systems? Are you interested in designing novel technology to accelerate their training and serving for cutting-edge models and applications? Do you want to scale large Artificial Intelligence models to their limits on massive supercomputers? Are you interested in being part of an exciting open-source library for deep learning systems? The DeepSpeed team is hiring!

Microsoft's DeepSpeed is an open-source library built on the PyTorch machine learning ecosystem that combines numerous research innovations and technology advancements to make deep learning efficient and easier to use. DeepSpeed can parallelize across thousands of GPUs and train models with trillions of parameters. Our OSS (Open Source Software) has powered many advanced models like MT-530B and BLOOM, and it supports unprecedented scale and speed for both training and inference.

The DeepSpeed team is also part of the larger Microsoft AI at Scale initiative, which is pioneering the next-generation AI capabilities that are scaled across the company’s products and AI platforms.

The DeepSpeed team is looking for a Senior Researcher in Redmond, WA with a passion for innovation and for building high-quality systems that will make a significant impact inside and outside of Microsoft. Our team is highly collaborative, innovative, and end-user obsessed. We are looking for candidates with systems skills who are passionate about driving innovations to improve the efficiency and effectiveness of deep learning systems. We value creativity, agility, accountability, and a desire to learn new technologies.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

  • Excels in one or more subareas and gains expertise in a broad area of research.
  • Identifies and articulates problems in an area of research that are academically novel and may directly or indirectly impact business opportunities.
  • Collaborates with other relevant researchers or research groups to contribute to or advance a research agenda.
  • Researches and develops an understanding of the state-of-the-art insights, tools, technologies, or methods being used in the research community.
  • Expands collaborative relationships with relevant product and business groups inside or outside of Microsoft and provides expertise or technology to them.


Apply

Location Multiple Locations


Description

Qualcomm's Multimedia R&D and Standards Group is seeking candidates for Video Compression Research Engineer positions. You will be part of a world-renowned team of video compression experts. The team develops algorithms, hardware architectures, and systems for state-of-the-art applications of classical and machine learning methods in video compression, video processing, point cloud coding and processing, AR/VR, and computer vision use cases. The successful candidate for this position will be a highly self-directed individual with strong creative and analytic skills and a passion for video compression technology. You will work on, but not be limited to, developing new applications of classical and machine learning methods in video compression, improving state-of-the-art video codecs.

We are considering candidates with various levels of experience. We are flexible on location and open to hiring anywhere; preferred locations are the USA, Germany, and Taiwan.

Responsibilities:
  • Contribute to the conception, development, implementation, and optimization of new algorithms extending existing techniques and systems to allow improved video compression.
  • Initiate ideas, and design and implement algorithms for superior hardware encoder performance, including perceptually based bit allocation.
  • Develop new algorithms for deep learning-based video compression solutions.
  • Represent Qualcomm in the related standardization forums: JVET, MPEG Video, and ITU-T/VCEG.
  • Document and present new algorithms and implementations in various forms, including standards contributions, patent applications, conference and journal publications, presentations, etc.

The ideal candidate would have the skills/experience below:
  • Expert knowledge of the theory, algorithms, and techniques used in video and image coding.
  • Knowledge and experience of video codecs and their test models, such as ECM, VVC, HEVC, and AV1.
  • Experience with deep learning structures (CNN, RNN, autoencoder, etc.) and frameworks such as TensorFlow/PyTorch.
  • Track record of successful research accomplishments demonstrated through published papers and/or patent applications in the fields of video coding or video processing.
  • Solid programming and debugging skills in C/C++.
  • Strong written and verbal English communication skills, a great work ethic, and the ability to work in a team environment to accomplish common goals.
  • PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field, or equivalent practical experience.

Qualifications: PhD or Master's degree in Electrical Engineering, Computer Science, Physics, Mathematics, or a similar field. 1+ years of experience with a programming language such as C, C++, MATLAB, etc.


Apply

A postdoctoral position is available in the Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at the Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow the NIH guidelines, commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total, including eight 80-GB Nvidia H100 GPUs, ten 48-GB Nvidia RTX A6000 GPUs, and four Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with a background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology, or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve your goals

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

Captions is the AI-powered creative studio. Millions of creators around the world have used Captions to make their video content stand out from the pack, and we're on a mission to empower the next billion.

Based in NYC, we are a team of ambitious, experienced, and devoted engineers, designers, and marketers. You'll be joining an early team where you'll have an outsized impact on both the product and the company's culture.

We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, Lenny Rachitsky, and more.

Check out our latest milestone and our recent feature on the TODAY show and the New York Times.

Please note that all of our roles will require you to be in person at our NYC HQ (located in Union Square).

Responsibilities:

Conduct research and develop models to advance the state-of-the-art in generative video technologies, focusing on areas such as video in-painting, super resolution, text-to-video conversion, background removal, and neural background rendering.

Design and develop advanced neural network models tailored for generative video applications, exploring innovative techniques to manipulate and enhance video content for storytelling purposes.

Explore new areas and techniques to enhance video storytelling, including research into novel generative approaches and their applications in video production and editing.

Create tools and systems that leverage machine learning, artificial intelligence, and computational techniques to generate, manipulate, and enhance video content, with a focus on usability and scalability.

Preferred Qualifications:

PhD in computer science or related field or 3+ years of industry experience.

Publication Record: Highly relevant publication history, with a focus on generative video techniques and applications. Ideal candidates will have served as the primary author on these publications.

Video Processing Skills: Strong understanding of video processing techniques, including video compression, motion estimation, and object tracking, with the ability to apply these techniques in generative video applications.

Expertise in Deep Learning: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar, with hands-on experience in designing, training, and deploying neural networks for video-related tasks.

Strong understanding of Computer Science fundamentals (algorithms and data structures).

Benefits: Comprehensive medical, dental, and vision plans

Anything you need to do your best work

We’ve done team off-sites to places like Paris, London, Park City, Los Angeles, Upstate NY, and Nashville with more planned in the future.

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Please note benefits apply to full time employees only.


Apply

Location Mountain View, CA


Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role:

We're currently looking for a tech lead with specialized skills in LiDAR, camera, and radar perception technologies to enhance our autonomous driving systems' ability to understand and interact with complex environments. In this pivotal role, you'll be instrumental in designing and refining the ML algorithms that enable our trucks to safely navigate and operate in complex, dynamic environments. You will collaborate with a team of experts in AI, robotics, and software engineering to push the boundaries of what's possible in autonomous trucking.

What you'll do:
  • Design and implement cutting-edge perception algorithms for autonomous vehicles, focusing on areas such as sensor fusion, 3D object detection, segmentation, and tracking in complex dynamic environments
  • Design and implement ML models for real-time perception tasks, leveraging deep neural networks to enhance the perception capabilities of self-driving trucks
  • Lead initiatives to collect, augment, and utilize large-scale datasets for training and validating perception models under various driving conditions
  • Develop robust testing and validation frameworks to ensure the reliability and safety of the perception systems across diverse scenarios and edge cases
  • Conduct field tests and simulations to validate and refine perception algorithms, ensuring robust performance in real-world trucking routes and conditions
  • Work closely with the data engineering team to build and maintain large-scale datasets for training and evaluating perception models, including the development of data augmentation techniques

Please click on the Apply link below to see the full job description and apply.


Apply