



CVPR 2024 Career Website

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting CVPR 2024. Opportunities can be sorted by job category and location, and filtered by any other field using the search box. For information on how to post an opportunity, please visit the help page, linked in the navigation bar above.

Search Opportunities

Location Madrid, ESP


Description At Amazon, we are committed to being the Earth’s most customer-centric company. The International Technology group (InTech) owns the enhancement and delivery of Amazon’s cutting-edge engineering to all the varied customers and cultures of the world. We do this through a combination of partnerships with other Amazon technical teams and our own innovative new projects.

You will be joining the Tools and Machine Learning (Tamale) team. As part of InTech, Tamale strives to solve complex catalog quality problems using challenging machine learning and data analysis solutions. You will be exposed to cutting-edge big data and machine learning technologies, along with the entire Amazon catalog technology stack, and you'll be part of a key effort to improve our customers' experience by tackling and preventing defects in items in Amazon's catalog.

We are looking for a passionate, talented, and inventive Scientist with a strong machine learning background to help build industry-leading machine learning solutions. We strongly value your hard work and obsession to solve complex problems on behalf of Amazon customers.


Apply

Redmond, Washington, United States


Overview We are seeking a skilled and passionate Senior Research Scientist to join our Responsible & Open AI Research (ROAR) team in Azure Cognitive Services in Redmond, WA.

As a Senior Research Scientist, you will play a key role in advancing Responsible AI approaches to ensure safe releases of rapidly evolving multimodal AI models such as GPT-4 Vision, DALL-E, Sora, and beyond, as well as to expand and enhance the Azure AI Content Safety Service.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Responsibilities

  • Conduct cutting-edge research to develop Responsible AI definitions, methodologies, algorithms, and models for both measurement and mitigation of multimodal AI risks.
  • Stay abreast of the latest advancements in the field and contribute to the scientific community through publications at top venues.
  • Enable the safe release of multimodal models from OpenAI in the Azure OpenAI Service, and expand and enhance the Azure AI Content Safety Service with new detection technologies.
  • Develop innovative approaches to address AI safety challenges for diverse customer scenarios.
  • Embody our Culture and Values.


Apply

Geomagical Labs is a 3D R&D lab, in partnership with IKEA. We create magical mixed-reality experiences for hundreds of millions of users, using computer vision, neural networks, graphics, and computational photography. Last year we launched IKEA Kreativ, and we’re excited for what’s next! We have an opening in our lab for a senior computer vision researcher with 3D Reconstruction and Deep Learning expertise, to develop and improve the underlying algorithms powering our consumer products. We are looking for highly motivated, creative, applied researchers with entrepreneurial drive who are excited about building novel technologies and shipping them all the way to the hands of millions of customers!

Requirements:

  • Ph.D. and 2+ years of experience, or Master's and 6+ years of experience, focused on 3D Computer Vision and Deep Learning.
  • Experience in classical methods for 3D Reconstruction: SfM/SLAM, Multi-view Stereo, RGB-D Fusion.
  • Experience using Deep Learning for 3D Reconstruction and/or Scene Understanding, having worked in any of: Depth Estimation, Room Layout Estimation, NeRFs, Inverse Rendering, 3D Scene Understanding.
  • Familiarity with Computer Graphics and Computational Photography.
  • Expertise in ML frameworks and libraries, e.g. PyTorch. Highly productive in Python.
  • Ability to architect and implement complex systems at the micro and macro level.
  • Entrepreneurial: adventurous, self-driven, comfortable under uncertainty, with a desire to make systems work end-to-end.
  • Innovative, with a track record of patents and/or first-authored publications at leading workshops or conferences such as CVPR, ECCV/ICCV, SIGGRAPH, ISMAR, NeurIPS, ICLR, etc.
  • Experience developing technologies that were integrated into products, as well as post-launch performance tracking and shipping improvements.
  • [Bonus] Comfortable with C++.

Benefits:

  • Join a mission-driven R&D lab, strategically backed by an influential global brand.
  • Work in a dynamic team of computer vision, AI, computational photography, AR, graphics, and design professionals, and successful serial entrepreneurs.
  • Opportunity to publish novel and relevant research.
  • Fully remote work available to people living in the USA or Canada. Headquartered in downtown Palo Alto, California, an easy walk from restaurants, coffee shops, and Caltrain commuter rail.
  • The USA base salary for this full-time position ranges from $180,000 to $250,000, determined by location, role, skill, and experience level.
  • Geomagical Labs offers a comprehensive set of benefits, and for qualifying roles, substantial incentive grants, vesting annually.


Apply

The Prediction & Behavior ML team is responsible for developing machine-learned models that understand the full scene around our vehicle and forecast the behavior of other agents and of our own vehicle, both on-vehicle and for offline applications. To solve these problems we develop deep learning algorithms that can learn behaviors from data and apply them on-vehicle to influence our vehicle’s driving behavior, and offline to provide learned models for autonomy simulation and validation. Given the tight integration of behavior forecasting and motion planning, our team works very closely with the Planner team in the advancement of our overall vehicle behavior. The Prediction & Behavior ML team also works closely with our Perception, Simulation, and Systems Engineering teams on many cross-team initiatives.


Apply

Open to Seattle, WA; Costa Mesa, CA; or Washington, DC

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

WHY WE’RE HERE The Mission Software Engineering team builds, deploys, integrates, extends, and scales Anduril's software to deliver mission-critical capabilities to our customers. As the software engineers closest to Anduril customers and end-users, Mission Software Engineers solve technical challenges of operational scenarios while owning the end-to-end delivery of winning capabilities such as Counter Intrusion, Joint All Domain Command & Control, and Counter-Unmanned Aircraft Systems.

As a Mission Software Engineer, you will solve a wide variety of problems involving networking, autonomy, systems integration, robotics, and more, while making pragmatic engineering tradeoffs along the way. Your efforts will ensure that Anduril products seamlessly work together to achieve a variety of critical outcomes. Above all, Mission Software Engineers are driven by a “Whatever It Takes” mindset—executing in an expedient, scalable, and pragmatic way while keeping the mission top-of-mind and making sound engineering decisions to deliver successful outcomes correctly, on-time, and with high quality.

WHAT YOU’LL DO
- Own the software solutions that are deployed to customers
- Write code to improve products and scale the mission capability to more customers
- Collaborate across multiple teams to plan, build, and test complex functionality
- Create and analyze metrics that are leveraged for debugging and monitoring
- Triage issues, root-cause failures, and coordinate next steps
- Partner with end-users to turn needs into features while balancing user experience with engineering constraints
- Travel up to 30% of the time to build, test, and deploy capabilities in the real world

CORE REQUIREMENTS
- Strong engineering background from industry or school, ideally in areas/fields such as Computer Science, Software Engineering, Mathematics, or Physics
- At least 2-5+ years working with a variety of programming languages such as Java, Python, Rust, Go, JavaScript, etc. (We encourage all levels to apply)
- Experience building software solutions involving significant amounts of data processing and analysis
- Ability to quickly understand and navigate complex systems and established code bases
- A desire to work on critical software that has a real-world impact
- Must be eligible to obtain and maintain a U.S. TS clearance

Desired Requirements
- Strong background with a focus in Physics, Mathematics, and/or Motion Planning to inform modeling & simulation (M&S) and physical systems
- Experience developing and testing multi-agent autonomous systems and deploying them in real-world environments
- Feature and algorithm development with an understanding of behavior trees
- Experience developing software/hardware for flight systems and safety-critical functionality
- Experience with distributed communication networks and message standards
- Knowledge of military systems and operational tactics

WHAT WE VALUE IN MISSION SOFTWARE
Customer Facing - Mission Software Engineers are the software engineers closest to Anduril customers, end-users, and the technical challenges of operational scenarios.
Mission First - Above all, MSEs execute their mission in an expedient, scalable, and pragmatic way. They keep the mission top-of-mind.


Apply

A postdoctoral position is available in Harvard Ophthalmology Artificial Intelligence (AI) Lab (https://ophai.hms.harvard.edu) under the supervision of Dr. Mengyu Wang (https://ophai.hms.harvard.edu/team/dr-wang/) at Schepens Eye Research Institute of Massachusetts Eye and Ear and Harvard Medical School. The start date is flexible, with a preference for candidates capable of starting in August or September 2024. The initial appointment will be for one year with the possibility of extension. Review of applications will begin immediately and will continue until the position is filled. Salary for the postdoctoral fellow will follow the NIH guideline commensurate with years of postdoctoral research experience.

In the course of this interdisciplinary project, the postdoc will collaborate with a team of world-class scientists and clinicians with backgrounds in visual psychophysics, engineering, biostatistics, computer science, and ophthalmology. The postdoc will work on developing statistical and machine learning models to improve the diagnosis and prognosis of common eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. The postdoc will have access to abundant resources for education, career development and research both from the Harvard hospital campus and Harvard University campus. More than half of our postdocs secured a faculty position after their time in our lab.

For our data resources, we have about 3 million 2D fundus photos and more than 1 million 3D optical coherence tomography scans. Please check http://ophai.hms.harvard.edu/data for more details. For our GPU resources, we have 22 in-house GPUs in total including 8 80-GB Nvidia H100 GPUs, 10 48-GB Nvidia RTX A6000 GPUs, and 4 Nvidia RTX 6000 GPUs. Please check http://ophai.hms.harvard.edu/computing for more details. Our recent research has been published in ICCV 2023, ICLR 2024, CVPR 2024, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Please check https://github.com/Harvard-Ophthalmology-AI-Lab for more details.

The successful applicant will:

  1. possess or be on track to complete a PhD or MD with a background in computer science, mathematics, computational science, statistics, machine learning, deep learning, computer vision, image processing, biomedical engineering, bioinformatics, visual science, ophthalmology, or a related field. Fluency in written and spoken English is essential.

  2. have strong programming skills (Python, R, MATLAB, C++, etc.) and in-depth understanding of statistics and machine learning. Experience with Linux clusters is a plus.

  3. have a strong and productive publication record.

  4. have a strong work ethic and time management skills along with the ability to work independently and within a multidisciplinary team as required.

Your application should include:

  1. curriculum vitae

  2. statement of past research accomplishments, career goals, and how this position will help you achieve them

  3. two representative publications

  4. contact information for three references

The application should be sent to Mengyu Wang via email (mengyu_wang at meei.harvard.edu) with the subject “Postdoctoral Application in Harvard Ophthalmology AI Lab”.


Apply

Seattle, WA or Costa Mesa, CA

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

The Vehicle Autonomy (Robotics) team at Anduril develops aerial and ground-based robotic systems. The team is responsible for taking products like Ghost, Anvil, and our Sentry Tower from paper sketches to operational systems. We work in close coordination with specialist teams like Perception, Autonomy, and Manufacturing to solve some of the hardest problems facing our customers. We are looking for software engineers and roboticists excited about creating a powerful robotics stack that includes computer vision, motion planning, SLAM, controls, estimation, and secure communications.

WHAT YOU'LL DO
- Write and maintain core libraries (frame transformations, targeting and guidance, etc.) that all robotics platforms at Anduril will use
- Own feature development and rollout for our products; recent examples include building a Software-in-the-Loop simulator for our Tower product, writing an autofocus control system for cameras, creating a distributed coordinate-frame library over IPC, and redesigning the Pan-Tilt controls to accurately move heavy loads
- Design, evaluate, and implement sensor integrations that support operation by both human and autonomous planning agents
- Work closely with our hardware and manufacturing teams during product development, providing quick feedback that contributes to the final hardware design

REQUIRED QUALIFICATIONS
- Strong engineering background from industry or school, ideally in areas/fields such as Robotics, Computer Science, Software Engineering, Mechatronics, Electrical Engineering, Mathematics, or Physics
- 5+ years of C++ or Rust experience in a Linux development environment
- Experience building software solutions involving significant amounts of data processing and analysis
- Ability to quickly understand and navigate complex systems and established code bases
- Must be eligible to obtain and hold a US DoD Security Clearance

PREFERRED QUALIFICATIONS
- Experience in one or more of the following: motion planning, perception, localization, mapping, controls, and related system performance metrics
- Understanding of systems software (kernel, device drivers, system calls) and performance analysis


Apply

Location Bellevue, WA


Description Are you excited about developing generative AI and foundation models to revolutionize automation, robotics and computer vision? Are you looking for opportunities to build and deploy them on real problems at truly vast scale? At Amazon Fulfillment Technologies and Robotics we are on a mission to build high-performance autonomous systems that perceive and act to further improve our world-class customer experience - at Amazon scale.

This role is for the AFT AI team, which has deep expertise in developing cutting-edge AI solutions at scale and successfully applying them to business problems in the Amazon Fulfillment Network. These solutions typically utilize machine learning and computer vision techniques applied to text, sequences of events, images, or video from existing or new hardware. The team comprises scientists, who develop machine learning and computer vision solutions; analysts, who evaluate the expected business impact of a project and the performance of these solutions; and software engineers, who provide necessary support such as annotation pipelines and machine learning library development.

We are looking for an Applied Scientist with expertise in computer vision. You will work alongside other CV scientists, engineers, product managers and various stakeholders to deploy vision models at scale across a diverse set of initiatives. If you are a self-motivated individual with a zeal for customer obsession and ownership, and are passionate about applying computer vision for real world problems - this is the team for you.


Apply

Figma is growing our team of passionate people on a mission to make design accessible to all. Born on the Web, Figma helps entire product teams brainstorm, design and build better products — from start to finish. Whether it’s consolidating tools, simplifying workflows, or collaborating across teams and time zones, Figma makes the design process faster, more efficient, and fun while keeping everyone on the same page. From great products to long-lasting companies, we believe that nothing great is made alone—come make with us!

The AI Platform team at Figma is working on an exciting mission of expanding the frontiers of AI for creativity, and developing magical experiences in Figma products. This involves making existing features like search smarter, and incorporating new features using cutting edge Generative AI and deep learning techniques. We’re looking for engineers with a background in Machine Learning and Artificial Intelligence to improve our products and build new capabilities. You will be driving fundamental and applied research in this area. You will be combining industry best practices and a first-principles approach to design and build ML models that will improve Figma’s design and collaboration tool.

What you’ll do at Figma:

  • Driving fundamental and applied research in ML/AI using Generative AI, deep learning and classical machine learning, with Figma product use cases in mind.
  • Formulate and implement new modeling approaches both to improve the effectiveness of Figma’s current models as well as enable the launch of entirely new AI-powered product features.
  • Work in concert with other ML researchers, as well as product and infrastructure engineers to productionize new models and systems to power features in Figma’s design and collaboration tool.
  • Expand the boundaries of what is possible with the current technology set and experiment with novel ideas.
  • Publish scientific work on problems relevant to Figma in leading conferences like ICML, NeurIPS, CVPR etc.

We'd love to hear from you if you have:

  • Have recently obtained or are in the process of obtaining a PhD in AI, Computer Science or a related field. Degree must be completed prior to starting at Figma.
  • Demonstrated expertise in machine learning with a publication record in relevant conferences, or a track record in applying machine learning techniques to products.
  • Experience in Python and machine learning frameworks (such as PyTorch, TensorFlow or JAX).
  • Experience building systems based on deep learning, natural language processing, computer vision, and/or generative models.
  • Experience solving sophisticated problems and comparing alternative solutions, trade-offs, and diverse points of view to determine a path forward.
  • Experience communicating and working across functions to drive solutions.

While not required, it’s an added plus if you also have:

  • Experience working in industry on relevant AI projects through internships or past full time work.
  • Publications in recent advances in AI like Large language models (LLMs), Vision language Models (VLMs) or diffusion models.

Apply

Inria (Grenoble), France


human-robot interaction, machine learning, computer vision, representation learning

We are looking for highly motivated students to join our team at Inria. This project will take place in close collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble.

Topic: Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others. Therefore, anticipating the partner's actions is crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project we are in particular interested in estimating human intentions to enable collaborative tasks between humans and robots such as human-to-robot and robot-to-human handovers.

Contact: pia.bideau@inria.fr. The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda-Pineda (RobotLearn).


Apply

Location Mountain View, CA


Description Gatik is thrilled to be at CVPR! Come meet our team at booth 1831 to talk about how you could make an impact at the autonomous middle mile logistics company redefining the transportation landscape.

Who we are: Gatik, the leader in autonomous middle mile logistics, delivers goods safely and efficiently using its fleet of light & medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 customers including Kroger, Walmart, Tyson Foods, Loblaw, Pitney Bowes, Georgia-Pacific, and KBX; enabling them to optimize their hub-and-spoke supply chain operations, enhance service levels and product flow across multiple locations while reducing labor costs and meeting an unprecedented expectation for faster deliveries. Gatik’s Class 3-7 autonomous box trucks are commercially deployed in multiple markets including Texas, Arkansas, and Ontario, Canada.

About the role: We are seeking passionate Senior/Staff Software Engineers who have strong fundamentals in software development practices and are experts in the C++ language in a production-oriented environment. The ideal candidate is a highly experienced C++ developer with a passion for enabling the world's first safe, reliable & efficient network of autonomous vehicles. You will partner with the research and software engineers to design, develop, test, and validate AV features for our autonomous fleet.

This role will be onsite at our Mountain View office.

What you'll do:
+ Design, implement, integrate, and support real-time mission-critical software for Gatik’s autonomy stack
+ Work with the research engineers to develop maintainable, testable, and robust software designs
+ Architect and implement solutions to complex issues between components partitioned across the large software stack
+ Be at the forefront of guiding & ensuring best SDLC practices while contributing to improving safety in the core autonomy stack
+ Collaborate with the Infrastructure and DevOps teams for efficient, secure, and scalable software delivery to the network of Gatik’s autonomous fleet
+ Guide and mentor autonomy researchers and algorithm developers to make sure their components run efficiently with optimal compute and memory usage
+ Review and refine technical requirements and translate them into high-level designs & plans to support the development of safe AV technology
+ Conduct code and design reviews and advise on technical matters

Click the apply button below to see the full job description and apply


Apply

Seattle, US


Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences. We’re passionate about empowering people to craft beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.

We’re on a mission to hire the very best and are committed to building exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity Photoshop ART is seeking a Research Scientist to join our inpainting R&D team focused on making significant progress in image generation/restoration, low-level vision, and image editing, with an eventual posture toward productization. Individuals in this role are expected to be experts in identified research areas such as artificial intelligence, machine learning, computer vision, and image processing. The ideal candidate will have a keen interest in producing new science to advance Adobe products.

What you'll do: Work towards long-term, results-oriented research goals while identifying intermediate achievements. Contribute to research that can be applied to Adobe product development. Help integrate novel research work into Adobe’s products. Lead and collaborate on research projects across different Adobe divisions.

What you need to succeed: Ph.D. and solid publications in machine learning, AI, computer science, statistics, or scene semantic understanding. Experience communicating research to public audiences of peers. Experience working in teams. Knowledge of a programming language.

Preferred Qualifications: 2 years of professional full-time experience preferred, but not required. 2+ years of internship experience with a primary emphasis on AI research in image generation, low-level vision, image restoration, and segmentation. Experience collaborating with a team with varied strengths. 4+ first-author publications at peer-reviewed AI conferences (e.g. NIPS, CVPR, ECCV, ICML, ICLR, ICCV, and ACL). Experience developing and debugging in Python.

At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach where ongoing feedback flows freely.

If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer.

Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, or veteran status.

Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $129,400 to $242,200 annually. Pay within this range varies by work location and may also depend on job-related factors.


Apply

Location Multiple Locations


Description

Members of our team are part of a multi-disciplinary core research group within Qualcomm which spans software, hardware, and systems. Our members contribute technology deployed worldwide by partnering with our business teams across mobile, compute, automotive, cloud, and IOT. We also perform and publish state-of-the-art research on a wide range of topics in machine learning, ranging from general theory to techniques that enable deployment on resource-constrained devices. Our research team has demonstrated first-in-the-world research and proof-of-concepts in areas such as model efficiency, neural video codecs, video semantic segmentation, federated learning, and wireless RF sensing (https://www.qualcomm.com/ai-research), has won major research competitions such as the visual wake word challenge, and converted leading research into best-in-class user-friendly tools such as Qualcomm Innovation Center’s AI Model Efficiency Toolkit (https://github.com/quic/aimet). We recently demonstrated the feasibility of running a foundation model (Stable Diffusion) with >1 billion parameters on an Android phone in under one second, after performing our full-stack AI optimizations on the model.

Role responsibilities can include both applied and fundamental research in the field of machine learning, with a development focus in one or more of the following areas:

  • Conducts fundamental machine learning research to create new models or new training methods in various technology areas, e.g. large language models, deep generative models (VAE, Normalizing-Flow, ARM, etc), Bayesian deep learning, equivariant CNNs, adversarial learning, diffusion models, active learning, Bayesian optimizations, unsupervised learning, and ML combinatorial optimization using tools like graph neural networks, learned message-passing heuristics, and reinforcement learning.

  • Drives systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling based, back-propagation based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling with or without systems-hardware co-design.

  • Performs advanced platform research to enable new machine learning compute paradigms, e.g., compute in memory, on-device learning/training, edge-cloud distributed/federated learning, causal and language-based reasoning.

  • Creates new machine learning models for advanced use cases that achieve state-of-the-art performance and beyond. The use cases can broadly include computer vision, audio, speech, NLP, image, video, power management, wireless, graphics, and chip design

  • Design, develop & test software for machine learning frameworks that optimize models to run efficiently on edge devices. The candidate is expected to have a strong interest in, and deep passion for, making leading-edge deep learning algorithms work on mobile/embedded platforms for the benefit of end users.

  • Research, design, develop, enhance, and implement different components of machine learning compiler for HW Accelerators.

  • Design, implement and train DL/RL algorithms in high-level languages/frameworks (PyTorch and TensorFlow).


Apply

Location Seattle, WA New York, NY


Description We are looking for an Applied Scientist to join our Seattle team. As an Applied Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. Our team solves a broad range of problems, ranging from natural-language understanding of third-party shoppable content and product and content recommendation for social media influencers and their audiences, to determining optimal compensation for creators and mitigating fraud. We generate a deep semantic understanding of the photos and videos in shoppable content created by our creators, for efficient processing and appropriate placement for the best customer experience. For example, you may lead the development of reinforcement learning models such as multi-armed bandits (MAB) to rank the content/products to be shown to influencers. To achieve this, a deep understanding of the quality and relevance of content must be established through ML models that provide those contexts for ranking.

In order to be successful on our team, you need a combination of business acumen, broad knowledge of statistics, a deep understanding of ML algorithms, and an analytical mindset. You thrive in a collaborative environment and are passionate about learning. Our team utilizes a variety of AWS tools such as SageMaker, S3, and EC2, with a variety of skill sets in shallow and deep learning ML models, particularly in NLP and CV. You will bring knowledge in many of these domains along with your own specialties.


Apply