Paper Detail

Ego2Web: A Web Agent Benchmark Grounded in Egocentric Videos

Shoubin Yu, Lei Shu, Antoine Yang, Yao Fu, Srinivas Sunkara, Maria Wang, Jindong Chen, Mohit Bansal, Boqing Gong

arXiv · Score 23.8

Published 2026-03-23 · First seen 2026-03-27

Research Track B · General AI

Abstract

Multimodal AI agents are increasingly automating complex real-world workflows that involve online web execution. However, current web-agent benchmarks suffer from a critical limitation: they focus entirely on web-based interaction and perception, lacking grounding in the user's real-world physical surroundings. This limitation prevents evaluation in crucial scenarios, such as when an agent must use egocentric visual perception (e.g., via AR glasses) to recognize an object in the user's surroundings and then complete a related task online. To address this gap, we introduce Ego2Web, the first benchmark designed to bridge egocentric video perception and web agent execution. Ego2Web pairs real-world first-person video recordings with web tasks that require visual understanding, web task planning, and interaction in an online environment for successful completion. We utilize an automatic data-generation pipeline combined with human verification and refinement to curate high-quality video-task pairs across diverse web task types, including e-commerce, media retrieval, and knowledge lookup. To facilitate accurate and scalable evaluation of our benchmark, we also develop a novel LLM-as-a-Judge automatic evaluation method, Ego2WebJudge, which achieves approximately 84% agreement with human judgment, substantially higher than existing evaluation methods. Experiments with diverse state-of-the-art agents on Ego2Web show weak performance, with substantial headroom across all task categories. We also conduct a comprehensive ablation study on task design, highlighting the necessity of accurate video understanding in the proposed tasks and the limitations of current agents. We hope Ego2Web can be a critical new resource for developing truly capable AI assistants that can seamlessly see, understand, and act across the physical and digital worlds.

Workflow Status

Review status
pending
Role
unreviewed
Read priority
now
Vote
Not set.
Saved
no
Collections
Not filed yet.
Next action
Not filled yet.

Reading Brief

Key Findings

The paper introduces Ego2Web, the first benchmark designed to connect real-world egocentric video perception with web-based tasks. The authors also present Ego2WebJudge, an LLM-based automatic evaluation method with high human agreement. Experiments demonstrate that current state-of-the-art agents perform poorly on this new benchmark, indicating significant headroom for development.
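As an illustrative sketch (not from the paper), the headline "~84% agreement with human judgment" for an LLM judge is most simply read as a percentage-match rate between the judge's verdicts and human verdicts over the same episodes. The labels below are hypothetical binary task-success verdicts; the paper may compute agreement differently.

```python
# Hypothetical sketch: percentage agreement between an LLM judge's
# task-success verdicts and human verdicts. All data below is made up.

def agreement_rate(judge_labels, human_labels):
    """Fraction of examples where the judge's verdict matches the human's."""
    if len(judge_labels) != len(human_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(j == h for j, h in zip(judge_labels, human_labels))
    return matches / len(judge_labels)

# Hypothetical verdicts over 8 benchmark episodes (True = task succeeded).
judge = [True, False, True, True, False, True, False, True]
human = [True, False, True, False, False, True, False, True]
print(f"agreement: {agreement_rate(judge, human):.1%}")  # prints: agreement: 87.5%
```

A plain match rate like this ignores chance agreement; a chance-corrected statistic such as Cohen's kappa is the usual complement when class balance is skewed.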

Limitations

Current state-of-the-art agents perform poorly on tasks that require accurate video understanding before web execution; the paper's ablation study on task design traces this weakness and points to clear directions for future work.

Methodology

The benchmark was created by pairing real-world, first-person video recordings with related web tasks using an automatic data-generation pipeline combined with human verification and refinement.
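A minimal sketch of how such a generate-then-verify curation loop might look. Everything here is hypothetical scaffolding: `propose_web_task` and `human_review` are placeholders standing in for the paper's automatic generation pipeline and human verification step, not the authors' actual implementation.

```python
# Hypothetical generate-then-verify curation loop. The two helpers below
# are placeholders: in the paper, generation is automatic (e.g., model-based)
# and verification/refinement is done by humans.

def propose_web_task(video_summary: str) -> str:
    # Placeholder for an automatic step that drafts a web task grounded
    # in the egocentric video's content.
    return f"Find and purchase online the item shown in: {video_summary}"

def human_review(task: str) -> bool:
    # Placeholder for a human verifying the task is well-posed and answerable.
    return len(task.strip()) > 0

def curate(video_summaries):
    """Pair each egocentric video with a verified web task; drop rejects."""
    pairs = []
    for summary in video_summaries:
        task = propose_web_task(summary)
        if human_review(task):
            pairs.append((summary, task))
    return pairs

pairs = curate(["a user holding a French-press coffee maker"])
```

The design point is the filter: automatic generation provides scale, while the verification gate keeps only video-task pairs a human has confirmed.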

Significance

This benchmark provides a critical resource for developing advanced AI assistants that can seamlessly perceive the physical world and act within the digital world.

Why It Surfaced

No ranking explanation is available yet.

Tags

No tags.

BibTeX

@article{yu2026ego2web,
  title = {Ego2Web: A Web Agent Benchmark Grounded in Egocentric Videos},
  author = {Shoubin Yu and Lei Shu and Antoine Yang and Yao Fu and Srinivas Sunkara and Maria Wang and Jindong Chen and Mohit Bansal and Boqing Gong},
  year = {2026},
  journal = {arXiv preprint arXiv:2603.22529},
  abstract = {Multimodal AI agents are increasingly automating complex real-world workflows that involve online web execution. However, current web-agent benchmarks suffer from a critical limitation: they focus entirely on web-based interaction and perception, lacking grounding in the user's real-world physical surroundings. This limitation prevents evaluation in crucial scenarios, such as when an agent must use egocentric visual perception (e.g., via AR glasses) to recognize an object in the user's surroundings and then complete a related task online. To address this gap, we introduce Ego2Web, the first benchmark designed to bridge egocentric video perception and web agent execution. Ego2Web pairs real-world first-person video recordings with web tasks that require visual understanding, web task planning, and interaction in an online environment for successful completion. We utilize an automatic data-generation pipeline combined with human verification and refinement to curate well-constructed, high-quality video-task pairs across diverse web task types, including e-commerce, media retrieval, knowledge lookup, etc. To facilitate accurate and scalable evaluation for our benchmark, we also develop a novel LLM-as-a-Judge automatic evaluation method, Ego2WebJudge, which achieves approximately 84% agreement with human judgment, substantially higher than existing evaluation methods. Experiments with diverse SoTA agents on our Ego2Web show that their performance is weak, with substantial headroom across all task categories. We also conduct a comprehensive ablation study on task design, highlighting the necessity of accurate video understanding in the proposed task and the limitations of current agents. We hope Ego2Web can be a critical new resource for developing truly capable AI assistants that can seamlessly see, understand, and act across the physical and digital worlds.},
  url = {https://arxiv.org/abs/2603.22529},
  keywords = {cs.CV, cs.AI, cs.CL},
  eprint = {2603.22529},
  archiveprefix = {arXiv},
}

Metadata

{}