Paper Detail
Yunteng Tan, Zhi Gao, Xinxiao Wu
Large language model-based web agents have shown strong potential in automating web interactions through advanced reasoning and instruction following. While retrieval-based memory derived from historical trajectories enables these agents to handle complex, long-horizon tasks, current methods struggle to generalize across unseen websites. We identify that this challenge arises from flat memory structures that entangle high-level task logic with site-specific action details. This entanglement induces a workflow mismatch in new environments, where retrieved content is conflated with the current webpage, leading to logically inconsistent execution. To address this, we propose Hierarchical Memory Tree (HMT), a structured framework designed to explicitly decouple logical planning from action execution. HMT constructs a three-level hierarchy from raw trajectories via an automated abstraction pipeline: the Intent level maps diverse user instructions to standardized task goals; the Stage level defines reusable semantic subgoals characterized by observable pre-conditions and post-conditions; and the Action level stores action patterns paired with transferable semantic element descriptions. Leveraging this structure, we develop a stage-aware inference mechanism comprising a Planner and an Actor. By explicitly validating pre-conditions, the Planner aligns the current state with the correct logical subgoal to prevent workflow mismatch, while the Actor grounds actions by matching the stored semantic descriptions to the target page. Experimental results on Mind2Web and WebArena show that HMT significantly outperforms flat-memory methods, particularly in cross-website and cross-domain scenarios, highlighting the necessity of structured memory for robust generalization of web agents.
Current web agents fail to generalize to unseen websites because their flat memory structures entangle task logic with site-specific details. This paper introduces Hierarchical Memory Tree (HMT), a structured framework that decouples high-level planning from low-level actions. This separation significantly improves agent performance in cross-website and cross-domain scenarios by preventing workflow mismatches.
The provided abstract does not explicitly mention limitations or directions for future work.
The proposed method, HMT, automatically constructs a three-level memory hierarchy (Intent, Stage, Action) from past trajectories to separate logic from execution. A stage-aware inference mechanism uses a Planner to validate logical subgoals and an Actor to ground actions based on semantic descriptions, enabling robust adaptation to new web environments.
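To make the described architecture concrete, here is a minimal Python sketch of the three-level hierarchy and the stage-aware Planner/Actor loop. All class names, fields, and the matching logic are illustrative assumptions based only on the abstract; the paper's actual implementation (condition representation, retrieval, and grounding) is not specified here.

```python
from dataclasses import dataclass, field

@dataclass
class ActionPattern:
    # Action level: an operation plus a transferable semantic element
    # description (not a site-specific selector).
    operation: str            # e.g. "click", "type"
    element_description: str

@dataclass
class Stage:
    # Stage level: a reusable subgoal with observable pre-/post-conditions.
    name: str
    pre_conditions: set
    post_conditions: set
    actions: list = field(default_factory=list)

@dataclass
class Intent:
    # Intent level: a standardized task goal covering diverse instructions.
    goal: str
    stages: list = field(default_factory=list)

def plan_next_stage(intent, observed_state):
    """Planner sketch: select the first stage whose pre-conditions hold in
    the current observation and whose post-conditions are not yet met."""
    for stage in intent.stages:
        if stage.pre_conditions <= observed_state and not (
            stage.post_conditions <= observed_state
        ):
            return stage
    return None

def ground_action(stage, page_elements):
    """Actor sketch: match each stored semantic description against the
    elements visible on the target page (naive substring matching here)."""
    grounded = []
    for action in stage.actions:
        for elem_id, text in page_elements.items():
            if action.element_description.lower() in text.lower():
                grounded.append((action.operation, elem_id))
                break
    return grounded

# Hypothetical example (task, conditions, and element ids are invented):
book = Intent("book a flight", stages=[
    Stage("search", {"on_search_page"}, {"results_shown"},
          [ActionPattern("type", "destination input"),
           ActionPattern("click", "search flights")]),
])
state = {"on_search_page"}
page = {"el-7": "Destination input", "el-3": "Search flights button"}
stage = plan_next_stage(book, state)
actions = ground_action(stage, page)
```

Validating pre-conditions before acting is what prevents the workflow mismatch the paper describes: a stage whose pre-conditions do not hold on the current page is simply never selected, rather than being executed out of order.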
The research demonstrates that structured memory is essential for building robust web agents that can generalize learned skills to novel websites and tasks.
@article{tan2026enhancing,
title = {Enhancing Web Agents with a Hierarchical Memory Tree},
author = {Yunteng Tan and Zhi Gao and Xinxiao Wu},
year = {2026},
abstract = {Large language model-based web agents have shown strong potential in automating web interactions through advanced reasoning and instruction following. While retrieval-based memory derived from historical trajectories enables these agents to handle complex, long-horizon tasks, current methods struggle to generalize across unseen websites. We identify that this challenge arises from flat memory structures that entangle high-level task logic with site-specific action details. This entanglement induces a workflow mismatch in new environments, where retrieved content is conflated with the current webpage, leading to logically inconsistent execution. To address this, we propose Hierarchical Memory Tree (HMT), a structured framework designed to explicitly decouple logical planning from action execution. HMT constructs a three-level hierarchy from raw trajectories via an automated abstraction pipeline: the Intent level maps diverse user instructions to standardized task goals; the Stage level defines reusable semantic subgoals characterized by observable pre-conditions and post-conditions; and the Action level stores action patterns paired with transferable semantic element descriptions. Leveraging this structure, we develop a stage-aware inference mechanism comprising a Planner and an Actor. By explicitly validating pre-conditions, the Planner aligns the current state with the correct logical subgoal to prevent workflow mismatch, while the Actor grounds actions by matching the stored semantic descriptions to the target page. Experimental results on Mind2Web and WebArena show that HMT significantly outperforms flat-memory methods, particularly in cross-website and cross-domain scenarios, highlighting the necessity of structured memory for robust generalization of web agents.},
url = {https://arxiv.org/abs/2603.07024},
keywords = {cs.AI},
eprint = {2603.07024},
archiveprefix = {arXiv},
}