🤖 Native AI Agent Nodes in n8n


Lesson 1 of 3

AI Agent Node Overview

📖 8 min • 20 XP

The AI Agent node is n8n's most powerful feature — a visual implementation of the ReAct (Reasoning + Acting) framework that turns LLMs from simple text generators into autonomous agents capable of using tools, maintaining memory, and completing multi-step tasks. Instead of hardcoding every step in your workflow, you give the agent a goal and the tools to achieve it, and it figures out the steps on its own.

The ReAct Framework

ReAct stands for Reasoning + Acting. It's an AI agent pattern where the LLM follows a loop: Think about the task, choose an Action (tool to call), Observe the result, then repeat until the task is complete. This is fundamentally different from a chain, where every step is predetermined.

{
  "react_loop": {
    "step_1_think": "I need to find the customer's recent orders. Let me query the database.",
    "step_2_act": "Call SQL tool: SELECT * FROM orders WHERE customer_id = 42 ORDER BY date DESC LIMIT 5",
    "step_3_observe": "Found 5 orders. The most recent was placed yesterday for $129.99.",
    "step_4_think": "Now I need to check if this order has shipped. Let me check the shipping API.",
    "step_5_act": "Call HTTP tool: GET /api/shipping/status?order_id=1234",
    "step_6_observe": "Order is in transit, expected delivery tomorrow.",
    "step_7_respond": "Your most recent order (#1234) for $129.99 is currently in transit and should arrive tomorrow."
  }
}

Tool Calling in n8n

The AI Agent node supports tool calling — the ability for the LLM to invoke specific functions during its reasoning process. In n8n, tools are connected as sub-nodes to the agent. Each tool has a name, description, and input schema that the LLM uses to decide when and how to call it.

  • Tools are connected to the Agent node via the "tools" input
  • Each tool gets a name and description that the LLM reads to decide when to use it
  • The agent can call multiple tools in sequence or even in parallel (with supported LLMs)
  • Tool results are fed back into the agent's context for the next reasoning step
  • You can limit the maximum number of tool calls per execution to prevent runaway loops
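To make the selection mechanics concrete, here is a sketch of the kind of metadata the LLM sees for each tool. The structure and names (`get_orders`, `shipping_status`, `input_schema`) are hypothetical; n8n builds an equivalent description automatically from each connected tool sub-node.

```python
# Hypothetical tool metadata, flattened into text the model reads
# when deciding which tool to call (illustrative, not n8n's format).

tools = [
    {
        "name": "get_orders",
        "description": "Look up a customer's recent orders by customer_id.",
        "input_schema": {"type": "object",
                         "properties": {"customer_id": {"type": "integer"}},
                         "required": ["customer_id"]},
    },
    {
        "name": "shipping_status",
        "description": "Check shipping status for an order_id.",
        "input_schema": {"type": "object",
                         "properties": {"order_id": {"type": "integer"}},
                         "required": ["order_id"]},
    },
]

def render_tools_for_prompt(tools):
    """Flatten tool names and descriptions into a prompt fragment."""
    return "\n".join(f"- {t['name']}: {t['description']}" for t in tools)

print(render_tools_for_prompt(tools))
```

This is why the description field matters so much in practice: it is the only signal the model has for choosing between tools, so vague descriptions lead directly to wrong tool calls.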

Memory Types

Agents need memory to maintain context across interactions. n8n supports four memory types, each suited to different use cases:

  • Window Memory — Keeps the last N messages in context. Simple and effective for short conversations. Set window size to 10-20 messages for most use cases.
  • Token Buffer Memory — Keeps messages up to a token limit. Better than window memory when message sizes vary significantly.
  • Summary Memory — Uses an LLM to summarize older messages, keeping the summary plus recent messages. Best for long conversations where early context matters.
  • Vector Store Memory — Stores all messages in a vector database and retrieves the most relevant ones. Best for agents that need to recall specific details from very long interaction histories.
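The two simplest strategies above can be sketched in a few lines. This is a conceptual illustration, not n8n's implementation, and the word-count tokenizer is a deliberate simplification standing in for a real one.

```python
# Sketch of two memory-trimming strategies (illustrative only):
# a fixed message window and a token-budget buffer.

def window_memory(messages, window_size=10):
    """Keep only the last N messages."""
    return messages[-window_size:]

def token_buffer_memory(messages, max_tokens=100,
                        count=lambda m: len(m.split())):
    """Drop oldest messages until the (approximate) token count fits."""
    kept = list(messages)
    while kept and sum(count(m) for m in kept) > max_tokens:
        kept.pop(0)
    return kept

history = [f"message {i}" for i in range(25)]
print(len(window_memory(history, window_size=10)))       # 10 messages
print(len(token_buffer_memory(history, max_tokens=30)))  # 15 (2 "tokens" each)
```

The contrast shows why a token buffer handles variable-length messages better: a window of 10 messages might hold 200 tokens or 20,000, while the token buffer keeps context size predictable.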

Agent Node Configuration

To set up an AI Agent node in n8n, you connect four types of sub-nodes: an LLM (required), tools (optional but usually needed), memory (optional), and an output parser (optional). The agent node itself has settings for the system prompt, max iterations, and return format. We'll build a complete agent in Lesson 19.3.
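The shape of that configuration can be summarized as data. The key names and values below are hypothetical placeholders for illustration; the real wiring happens visually in the n8n editor, not in code.

```python
# Hypothetical summary of an agent's sub-node wiring (illustrative only;
# in n8n these are connections made in the editor, not a config file).

agent_config = {
    "llm": {"provider": "openai", "model": "gpt-4o"},    # required
    "tools": ["get_orders", "shipping_status"],          # optional, usually needed
    "memory": {"type": "window", "window_size": 10},     # optional
    "output_parser": {"type": "json"},                   # optional
    "settings": {
        "system_prompt": "You are a helpful support agent.",
        "max_iterations": 10,
    },
}

# Sanity check mirroring the required/optional split described above.
assert "llm" in agent_config, "an LLM sub-node is required"
optional = [k for k in ("tools", "memory", "output_parser") if k in agent_config]
print(optional)  # ['tools', 'memory', 'output_parser']
```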
