Large language models (LLMs) have enabled autonomous language agents that can solve complex tasks in dynamic environments without task-specific training. However, these agents often struggle with broad, high-level goals, which are inherently ambiguous and yield delayed rewards. Frequent model retraining to adapt to new goals and tasks is impractical, which further complicates the issue. Current approaches rely on two types of auxiliary guidance: prior task decomposition and post-hoc experience summarization. Both have limitations, such as a lack of empirical grounding or difficulty in effectively prioritizing strategies. The challenge is to enable autonomous language agents to consistently achieve high-level goals, without training, while overcoming these limitations.
Prior studies have explored various ways to mitigate these challenges: Reflexion lets agents reflect on failures and devise new plans, while Voyager builds a code-based skill library from detailed feedback. Other approaches analyze both failed and successful attempts to summarize causal abstractions. However, the lessons drawn from feedback are often too general and unsystematic, and LLMs still struggle with long-horizon, high-level goals in decision-making tasks without additional support modules. Decomposition methods such as Decomposed Prompting, OKR-Agent, and ADAPT break complex tasks into sub-tasks or employ hierarchical agents, yet they typically decompose tasks before interacting with the environment, so the resulting plans lack grounded, dynamic adjustment. These limitations highlight the need for a more adaptive, context-aware approach to achieving high-level goals.
Researchers from Fudan University and the Allen Institute for AI propose SELFGOAL, a self-adaptive framework that lets language agents use both prior knowledge and environmental feedback to achieve high-level goals. The core idea is to build a tree of textual subgoals, from which agents select appropriate nodes as guidelines based on the current situation. SELFGOAL operates on this GOALTREE through two main modules: a Search Module that selects the goal nodes best suited to the situation, and a Decomposition Module that breaks goal nodes into more concrete subgoals. An Act Module then uses the selected subgoals as guidelines for the LLM's actions. This design provides precise guidance toward high-level goals, adapts to diverse environments, and significantly improves agent performance in both collaborative and competitive scenarios.
SELFGOAL employs a non-parametric learning approach: rather than updating model weights, it conducts a top-down hierarchical decomposition of the high-level goal into a tree structure (GOALTREE) that guides decision-making. The framework interacts with the environment through three modules: Search, Decompose, and Act. The Search Module identifies the subgoals most appropriate to the current situation by selecting among GOALTREE's leaf nodes. The Decomposition Module refines GOALTREE by breaking the selected subgoals into more concrete ones, with a filtering mechanism that controls granularity and avoids redundant nodes. The Act Module then folds the selected subgoals into the instruction prompt to guide the agent's actions in the environment. This dynamic loop lets SELFGOAL adapt to changing situations and provide contextually relevant guidance.
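The loop described above can be sketched in code. The following is a minimal structural illustration, not the authors' implementation: `rank_fn` and `decompose_fn` stand in for the LLM calls that score subgoal relevance and propose finer-grained subgoals, and all class and function names are our own invention for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class GoalNode:
    """A node in the GOALTREE: a textual subgoal plus its refinements."""
    text: str
    children: List["GoalNode"] = field(default_factory=list)

    def leaves(self) -> List["GoalNode"]:
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]


class SelfGoalAgent:
    """Structural sketch of the SELFGOAL loop (illustrative, not official).

    rank_fn(subgoal, observation) -> relevance score (higher is better);
    decompose_fn(subgoal, observation) -> list of concrete sub-subgoal texts.
    In the real framework, both would be backed by an LLM.
    """

    def __init__(self, goal: str,
                 rank_fn: Callable[[str, str], float],
                 decompose_fn: Callable[[str, str], List[str]],
                 top_k: int = 2):
        self.tree = GoalNode(goal)
        self.rank_fn = rank_fn
        self.decompose_fn = decompose_fn
        self.top_k = top_k

    def search(self, observation: str) -> List[GoalNode]:
        # Search Module: pick the top-k leaf subgoals most relevant right now.
        leaves = self.tree.leaves()
        ranked = sorted(leaves,
                        key=lambda n: self.rank_fn(n.text, observation),
                        reverse=True)
        return ranked[: self.top_k]

    def decompose(self, node: GoalNode, observation: str) -> None:
        # Decomposition Module: refine a selected subgoal into concrete ones,
        # filtering out candidates that duplicate existing leaves.
        existing = {leaf.text for leaf in self.tree.leaves()}
        for sub in self.decompose_fn(node.text, observation):
            if sub not in existing:
                node.children.append(GoalNode(sub))

    def act_prompt(self, observation: str) -> str:
        # Act Module: fold the selected subgoals into the instruction prompt.
        selected = self.search(observation)
        for node in selected:
            self.decompose(node, observation)
        guidelines = "\n".join(f"- {n.text}" for n in selected)
        return f"Observation: {observation}\nGuidelines:\n{guidelines}"
```

With toy stand-ins (word-overlap ranking, fixed decomposition), each call to `act_prompt` both guides the current action and grows the tree, so later searches choose among progressively more concrete subgoals.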
SELFGOAL significantly outperforms baseline frameworks in various environments with high-level goals, showing greater improvements with larger LLMs. Unlike task decomposition methods like ReAct and ADAPT, which may provide unsuitable or overly broad guidance, or post-hoc experience summarization methods like Reflexion and CLIN, which can produce overly detailed guidelines, SELFGOAL dynamically adjusts its guidance. For example, in the Public Good Game, SELFGOAL refines its subgoals based on observed player behaviors, allowing agents to adapt their strategies effectively. The framework also shows superior performance with smaller LLMs, attributed to its logical, structural architecture. In competitive scenarios, such as auction competitions, SELFGOAL demonstrates a clear advantage over baselines, employing more strategic bidding behaviors that lead to better outcomes.
In this study, researchers have proposed SELFGOAL, which enhances LLMs' capabilities to achieve high-level goals across various dynamic tasks and environments. By dynamically generating and refining a hierarchical GOALTREE of contextual subgoals based on environmental interactions, SELFGOAL significantly improves agent performance. The method proves effective in both competitive and cooperative scenarios, outperforming baseline approaches. The continual updating of GOALTREE enables agents to navigate complex environments with greater precision and adaptability. While SELFGOAL is effective even for smaller models, realizing its full potential still depends on a model's ability to understand and summarize feedback. Despite this limitation, SELFGOAL represents a significant advancement in enabling autonomous language agents to consistently achieve high-level goals without frequent retraining.
Check out the Paper and Project. All credit for this research goes to the researchers of this project.
The post SelfGoal: An Artificial Intelligence AI Framework to Enhance an LLM-based Agent’s Capabilities to Achieve High-Level Goals appeared first on MarkTechPost.