Ego2Web: A Web Agent Benchmark Grounded on Egocentric Videos
Abstract
Multimodal AI agents are increasingly automating complex real-world workflows that involve online web execution. However, current web-agent benchmarks suffer from a critical limitation: they focus entirely on web-based interaction and perception, lacking grounding in the user's real-world physical surroundings. This limitation prevents evaluation in crucial scenarios, such as when an agent must use egocentric visual perception (e.g., via AR glasses) to recognize an object in the user's surroundings and then complete a related task online (e.g., making a purchase). To address this gap, we introduce Ego2Web, the first benchmark designed to bridge egocentric video perception and multimodal web-agent execution. Ego2Web pairs real-world first-person video recordings with web tasks that require visual understanding, web task planning, and interaction in an online environment for successful completion. We use an automatic data-generation pipeline combined with human verification to curate well-constructed, high-quality video-task pairs spanning diverse web task types, including e-commerce, navigation, and media search. To enable more accurate and scalable evaluation on our benchmark, we also develop Ego2WebJudge, a novel LLM-as-a-Judge automatic evaluation method, which achieves around 85\% agreement with human judgment, substantially higher than existing evaluation methods. Experiments with diverse state-of-the-art multimodal agents show that they perform significantly below human level, revealing a major capability gap. We also conduct a comprehensive ablation study on task design, highlighting the necessity of video perception in the proposed task and the limitations of current agents. We hope Ego2Web will serve as a critical new resource for developing truly capable AI assistants that can seamlessly see, understand, and act across the physical and digital worlds.