Manage memory

Many AI applications need memory to share context across multiple interactions. LangGraph supports two types of memory that are essential for building conversational agents:

  • Short-term memory: Tracks the ongoing conversation by maintaining message history within a session.
  • Long-term memory: Stores user-specific or application-level data across sessions.

With short-term memory enabled, long conversations can exceed the LLM's context window. Common solutions include:

  • Trimming messages: remove the first or last N messages before calling the LLM
  • Summarizing messages: summarize earlier messages in the history and replace them with a summary
  • Deleting messages from LangGraph state permanently

This allows the agent to keep track of the conversation without exceeding the LLM's context window.

Add short-term memory

Short-term memory enables agents to track multi-turn conversations:

API Reference: InMemorySaver | StateGraph

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph

checkpointer = InMemorySaver()

builder = StateGraph(...)
graph = builder.compile(checkpointer=checkpointer)

graph.invoke(
    {"messages": [{"role": "user", "content": "hi! i am Bob"}]},
    {"configurable": {"thread_id": "1"}},
)

For more information on how to use short-term memory, see the persistence guide.
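To build intuition for what the checkpointer is doing, here is a toy pure-Python sketch (not LangGraph's actual implementation): saved state is keyed by `thread_id`, so each conversation thread resumes from its own message history while other threads stay isolated.

```python
# Toy sketch of thread-scoped persistence (hypothetical, for illustration
# only): state is keyed by thread_id, mirroring how a checkpointer lets
# each conversation thread resume from its own history.
class ToyCheckpointer:
    def __init__(self):
        self._threads = {}  # thread_id -> list of messages

    def load(self, thread_id):
        # Unknown threads start with an empty history.
        return list(self._threads.get(thread_id, []))

    def save(self, thread_id, messages):
        self._threads[thread_id] = list(messages)

saver = ToyCheckpointer()
saver.save("1", [{"role": "user", "content": "hi! i am Bob"}])
saver.save("2", [{"role": "user", "content": "hello"}])

# Each thread keeps its own independent history.
print(len(saver.load("1")))  # 1
print(saver.load("3"))       # [] -- unknown threads start empty
```

The real `InMemorySaver` stores full graph checkpoints (not just messages) and supports time travel, but the thread-scoping idea is the same.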

Add long-term memory

Use long-term memory to store user-specific or application-specific data across sessions. This is useful for applications like chatbots, where you want to remember user preferences or other information.

API Reference: InMemoryStore | StateGraph

from langgraph.store.memory import InMemoryStore
from langgraph.graph import StateGraph

store = InMemoryStore()

builder = StateGraph(...)
graph = builder.compile(store=store)

For more information on how to use long-term memory, see the persistence guide.
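The key difference from the checkpointer is scoping: a store keys data by a namespace rather than a `thread_id`, so it is shared across sessions. A toy sketch (hypothetical, not the `InMemoryStore` API) of that idea:

```python
# Toy sketch of cross-session storage (hypothetical, for illustration
# only): data is keyed by (namespace, key) instead of thread_id, so it
# is visible to every conversation thread and survives across sessions.
class ToyStore:
    def __init__(self):
        self._data = {}

    def put(self, namespace, key, value):
        self._data[(namespace, key)] = value

    def get(self, namespace, key):
        return self._data.get((namespace, key))

store = ToyStore()
# Remember a preference for user "bob" during one session...
store.put(("users", "bob"), "preferences", {"language": "en"})
# ...and read it back later, from any thread.
print(store.get(("users", "bob"), "preferences"))  # {'language': 'en'}
```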

Trim messages

To trim message history, use the trim_messages function:

API Reference: trim_messages | count_tokens_approximately

from langchain_core.messages.utils import (
    trim_messages,
    count_tokens_approximately
)

def call_model(state: MessagesState):
    messages = trim_messages(
        state["messages"],
        strategy="last",
        token_counter=count_tokens_approximately,
        max_tokens=128,
        start_on="human",
        end_on=("human", "tool"),
    )
    response = model.invoke(messages)
    return {"messages": [response]}

builder = StateGraph(MessagesState)
builder.add_node(call_model)
...
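The core of the `strategy="last"` behavior can be sketched in plain Python: walk the history from newest to oldest, keeping messages until the (approximate) token budget is exhausted. This simplified sketch uses word counts as a stand-in token counter and omits the `start_on`/`end_on` boundary rules that the real `trim_messages` also enforces.

```python
# Simplified sketch of strategy="last" trimming (not the real
# trim_messages): keep the most recent messages that fit under the
# token budget. Word counts stand in for real token counting.
def approx_tokens(message):
    return len(message["content"].split())

def trim_last(messages, max_tokens):
    kept, total = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = approx_tokens(msg)
        if total + cost > max_tokens:
            break  # budget exhausted: drop this and everything older
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order

history = [
    {"role": "user", "content": "hi my name is bob"},
    {"role": "assistant", "content": "hello bob nice to meet you"},
    {"role": "user", "content": "what is my name"},
]
# Only the latest message (4 "tokens") fits under a budget of 8.
print(trim_last(history, max_tokens=8))
```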
Full example: trim messages
from langchain_core.messages.utils import (
    trim_messages,
    count_tokens_approximately
)
from langchain.chat_models import init_chat_model
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, MessagesState

model = init_chat_model("anthropic:claude-3-7-sonnet-latest")

def call_model(state: MessagesState):
    messages = trim_messages(
        state["messages"],
        strategy="last",
        token_counter=count_tokens_approximately,
        max_tokens=128,
        start_on="human",
        end_on=("human", "tool"),
    )
    response = model.invoke(messages)
    return {"messages": [response]}

checkpointer = InMemorySaver()
builder = StateGraph(MessagesState)
builder.add_node(call_model)
builder.add_edge(START, "call_model")
graph = builder.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "1"}}
graph.invoke({"messages": "hi, my name is bob"}, config)
graph.invoke({"messages": "write a short poem about cats"}, config)
graph.invoke({"messages": "now do the same but for dogs"}, config)
final_response = graph.invoke({"messages": "what's my name?"}, config)

final_response["messages"][-1].pretty_print()
================================== Ai Message ==================================

Your name is Bob, as you mentioned when you first introduced yourself.

Summarize messages

An effective strategy for handling long conversation histories is to summarize earlier messages once the history reaches a certain threshold:

API Reference: AnyMessage | count_tokens_approximately | StateGraph | START

from typing import Any, TypedDict

from langchain_core.messages import AnyMessage
from langchain_core.messages.utils import count_tokens_approximately
from langmem.short_term import SummarizationNode
from langgraph.graph import StateGraph, START, MessagesState

class State(MessagesState):
    context: dict[str, Any]  # (1)!

class LLMInputState(TypedDict):  # (2)!
    summarized_messages: list[AnyMessage]
    context: dict[str, Any]

summarization_node = SummarizationNode(
    token_counter=count_tokens_approximately,
    model=summarization_model,
    max_tokens=512,
    max_tokens_before_summary=256,
    max_summary_tokens=256,
)

def call_model(state: LLMInputState):  # (3)!
    response = model.invoke(state["summarized_messages"])
    return {"messages": [response]}

builder = StateGraph(State)
builder.add_node(call_model)
builder.add_node("summarize", summarization_node)
builder.add_edge(START, "summarize")
builder.add_edge("summarize", "call_model")
...
  1. We will keep track of the running summary in the context field (this is expected by SummarizationNode).
  2. Define private state that will be used only for filtering the inputs to the call_model node.
  3. We're passing a private input state here to isolate the messages returned by the summarization node.
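The threshold logic above can be sketched without any LLM: once the history's approximate token count crosses `max_tokens_before_summary`, older messages are folded into a single summary message that replaces them. In this hypothetical sketch, `summarize()` is a stand-in for the summarization model call.

```python
# Simplified sketch of threshold-based summarization (not the real
# SummarizationNode): once the history exceeds the threshold, older
# messages are replaced by one summary message.
def approx_tokens(messages):
    return sum(len(m["content"].split()) for m in messages)

def summarize(messages):
    # Stand-in for summarization_model.invoke(...)
    return "Summary of %d earlier messages" % len(messages)

def maybe_summarize(messages, max_tokens_before_summary, keep_last=1):
    if approx_tokens(messages) <= max_tokens_before_summary:
        return messages  # under the threshold: pass through unchanged
    old, recent = messages[:-keep_last], messages[-keep_last:]
    summary_msg = {"role": "system", "content": summarize(old)}
    return [summary_msg] + recent

history = [
    {"role": "user", "content": "hi my name is bob"},
    {"role": "assistant", "content": "hello bob"},
    {"role": "user", "content": "write a poem about cats"},
]
# 12 approximate tokens > 5, so the first two messages get summarized.
print(maybe_summarize(history, max_tokens_before_summary=5))
```

The real node additionally maintains a *running* summary in `context`, so each new summarization call builds on the previous summary instead of re-reading the whole history.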
Full example: summarize messages
from typing import Any, TypedDict

from langchain.chat_models import init_chat_model
from langchain_core.messages import AnyMessage
from langchain_core.messages.utils import count_tokens_approximately
from langgraph.graph import StateGraph, START, MessagesState
from langgraph.checkpoint.memory import InMemorySaver
from langmem.short_term import SummarizationNode

model = init_chat_model("anthropic:claude-3-7-sonnet-latest")
summarization_model = model.bind(max_tokens=128)

class State(MessagesState):
    context: dict[str, Any]  # (1)!

class LLMInputState(TypedDict):  # (2)!
    summarized_messages: list[AnyMessage]
    context: dict[str, Any]

summarization_node = SummarizationNode(
    token_counter=count_tokens_approximately,
    model=summarization_model,
    max_tokens=256,
    max_tokens_before_summary=256,
    max_summary_tokens=128,
)

def call_model(state: LLMInputState):  # (3)!
    response = model.invoke(state["summarized_messages"])
    return {"messages": [response]}

checkpointer = InMemorySaver()
builder = StateGraph(State)
builder.add_node(call_model)
builder.add_node("summarize", summarization_node)
builder.add_edge(START, "summarize")
builder.add_edge("summarize", "call_model")
graph = builder.compile(checkpointer=checkpointer)

# Invoke the graph
config = {"configurable": {"thread_id": "1"}}
graph.invoke({"messages": "hi, my name is bob"}, config)
graph.invoke({"messages": "write a short poem about cats"}, config)
graph.invoke({"messages": "now do the same but for dogs"}, config)
final_response = graph.invoke({"messages": "what's my name?"}, config)

final_response["messages"][-1].pretty_print()
print("\nSummary:", final_response["context"]["running_summary"].summary)
  1. We will keep track of the running summary in the context field (this is expected by SummarizationNode).
  2. Define private state that will be used only for filtering the inputs to the call_model node.
  3. We're passing a private input state here to isolate the messages returned by the summarization node.
================================== Ai Message ==================================

From our conversation, I can see that you introduced yourself as Bob. That's the name you shared with me when we began talking.

Summary: In this conversation, I was introduced to Bob, who then asked me to write a poem about cats. I composed a poem titled "The Mystery of Cats" portraying cats' graceful movements, independent nature, and special relationship with humans. Bob then requested a similar poem about dogs, so I wrote "The Joy of Dogs," highlighting dogs' loyalty, enthusiasm, and loving companionship. Both poems followed a similar style while emphasizing the traits that make each pet unique.

Delete messages

To delete messages from the graph state, you can use RemoveMessage:

  • Delete specific messages:

    from langchain_core.messages import RemoveMessage
    
    def delete_messages(state):
        messages = state["messages"]
        if len(messages) > 2:
        # remove the earliest two messages
            return {"messages": [RemoveMessage(id=m.id) for m in messages[:2]]}
    
  • Delete all messages:

    from langgraph.graph.message import REMOVE_ALL_MESSAGES
    
    def delete_messages(state):
        return {"messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES)]}
    

add_messages reducer

For RemoveMessage to work, you need to use a state key with an add_messages reducer, such as MessagesState.
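A rough sketch of why the reducer matters (hypothetical, not the real `add_messages` implementation): the reducer is what interprets a remove marker as "delete the message with this id" instead of appending it like a normal message.

```python
# Toy sketch of add_messages reducer semantics (hypothetical): normal
# messages are appended, while a remove marker deletes the message
# whose id matches.
REMOVE = "remove"

def toy_add_messages(existing, updates):
    result = list(existing)
    for upd in updates:
        if upd.get("type") == REMOVE:
            # Remove marker: drop the message with the matching id.
            result = [m for m in result if m["id"] != upd["id"]]
        else:
            result.append(upd)
    return result

state = [{"id": "1", "content": "hi! I'm bob"},
         {"id": "2", "content": "Hi Bob!"}]
# Returning a remove marker from a node drops message "1" from state.
state = toy_add_messages(state, [{"type": REMOVE, "id": "1"}])
print([m["id"] for m in state])  # ['2']
```

Without such a reducer, the returned RemoveMessage would simply overwrite or extend the state like any other value, and nothing would be deleted.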

Valid message history

When deleting messages, make sure that the resulting message history is valid. Check the limitations of the LLM provider you're using. For example:

  • some providers expect message history to start with a user message
  • most providers require assistant messages with tool calls to be followed by corresponding tool result messages.
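The two constraints above can be checked mechanically before calling the model. This hypothetical helper (not part of LangGraph) shows one way to validate a history after deletions:

```python
# Hypothetical validity check mirroring common provider constraints:
# the history must start with a user message, and every assistant
# message carrying tool calls must be immediately followed by a tool
# result message.
def is_valid_history(messages):
    if not messages or messages[0]["role"] != "user":
        return False
    for i, msg in enumerate(messages):
        if msg["role"] == "assistant" and msg.get("tool_calls"):
            nxt = messages[i + 1] if i + 1 < len(messages) else None
            if nxt is None or nxt["role"] != "tool":
                return False  # dangling tool call
    return True

ok = [
    {"role": "user", "content": "weather?"},
    {"role": "assistant", "content": "", "tool_calls": ["get_weather"]},
    {"role": "tool", "content": "sunny"},
]
bad = ok[:2]  # deleting the tool result leaves the call dangling
print(is_valid_history(ok), is_valid_history(bad))  # True False
```

Real provider rules vary (and tool-call payloads are richer than shown), so treat this as a template to adapt rather than a complete validator.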
Full example: delete messages
from langchain.chat_models import init_chat_model
from langchain_core.messages import RemoveMessage
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, MessagesState

model = init_chat_model("anthropic:claude-3-7-sonnet-latest")

def delete_messages(state):
    messages = state["messages"]
    if len(messages) > 2:
        # remove the earliest two messages
        return {"messages": [RemoveMessage(id=m.id) for m in messages[:2]]}

def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    return {"messages": response}

builder = StateGraph(MessagesState)
builder.add_sequence([call_model, delete_messages])
builder.add_edge(START, "call_model")

checkpointer = InMemorySaver()
app = builder.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "1"}}

for event in app.stream(
    {"messages": [{"role": "user", "content": "hi! I'm bob"}]},
    config,
    stream_mode="values"
):
    print([(message.type, message.content) for message in event["messages"]])

for event in app.stream(
    {"messages": [{"role": "user", "content": "what's my name?"}]},
    config,
    stream_mode="values"
):
    print([(message.type, message.content) for message in event["messages"]])
[('human', "hi! I'm bob")]
[('human', "hi! I'm bob"), ('ai', 'Hi Bob! How are you doing today? Is there anything I can help you with?')]
[('human', "hi! I'm bob"), ('ai', 'Hi Bob! How are you doing today? Is there anything I can help you with?'), ('human', "what's my name?")]
[('human', "hi! I'm bob"), ('ai', 'Hi Bob! How are you doing today? Is there anything I can help you with?'), ('human', "what's my name?"), ('ai', 'Your name is Bob.')]
[('human', "what's my name?"), ('ai', 'Your name is Bob.')]