What are Tools?

⚙️ Tools are the various ways you can extend an LLM's capabilities beyond simple text generation. When enabled, they allow your chatbot to do amazing things — like search the web, scrape data, generate images, talk back using AI voices, and more.

Because there are several ways to integrate "Tools" in Open WebUI, it's important to understand which type you are using.


Tooling Taxonomy: Which "Tool" are you using?

🧩 Users often encounter the term "Tools" in different contexts. Here is how to distinguish them:

| Type | Location in UI | Best For... | Source |
| --- | --- | --- | --- |
| Native Features | Admin/Settings | Core platform functionality | Built into Open WebUI |
| Workspace Tools | Workspace > Tools | User-created or community Python scripts | Community Library |
| Native MCP (HTTP) | Settings > Connections | Standard MCP servers reachable via HTTP/SSE | External MCP Servers |
| MCP via Proxy (MCPO) | Settings > Connections | Local stdio-based MCP servers (e.g., Claude Desktop tools) | MCPO Adapter |
| OpenAPI Servers | Settings > Connections | Standard REST/OpenAPI web services | External Web APIs |

1. Native Features (Built-in)

These are deeply integrated into Open WebUI and generally don't require external scripts.

  • Web Search: Integrated via engines like SearXNG, Google, or Tavily.
  • URL Fetching: Extract text content directly from websites using # or native tools.
  • Image Generation: Integrated with DALL-E, ComfyUI, or Automatic1111.
  • Memory: The ability for models to remember facts about you across chats.
  • RAG (Knowledge): The ability to query uploaded documents (#).

In Native Mode, these features are exposed as Tools that the model can call independently.

2. Workspace Tools (Custom Plugins)

These are Python scripts that run directly within the Open WebUI environment.

  • Capability: Can do anything Python can do (web scraping, complex math, API calls).
  • Access: Managed via the Workspace menu.
  • Safety: Always review code before importing, as these run on your server.
  • ⚠️ Security Warning: Normal or untrusted users should not be given permission to access the Workspace Tools section. This access allows a user to upload and execute arbitrary Python code on your server, which could lead to a full system compromise.
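
To make the format concrete, here is a minimal sketch of a Workspace Tool, assuming the standard conventions (a metadata docstring, a Tools class whose type-hinted, docstring-documented methods are exposed to the model, and optional Valves for admin settings). The count_words method and the max_chars valve are invented for illustration:

```python
"""
title: Word Counter
description: Minimal example Workspace Tool (illustrative only).
"""

from pydantic import BaseModel, Field


class Tools:
    class Valves(BaseModel):
        # Admin-configurable settings surfaced in the UI.
        max_chars: int = Field(default=10000, description="Maximum input length")

    def __init__(self):
        self.valves = self.Valves()

    def count_words(self, text: str) -> str:
        """
        Count the words in a piece of text.
        :param text: The text to analyze.
        """
        text = text[: self.valves.max_chars]
        return f"The text contains {len(text.split())} words."
```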

3. MCP (Model Context Protocol)

🔌 MCP is an open standard that allows LLMs to interact with external data and tools.

  • Native HTTP MCP: Open WebUI can connect directly to any MCP server that exposes an HTTP/SSE endpoint.
  • MCPO (Proxy): Most community MCP servers use stdio (local command line). To use these in Open WebUI, you use the MCPO Proxy to bridge the connection.
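
For stdio-based servers, MCPO typically wraps the server command and exposes it as an OpenAPI-compatible HTTP endpoint, e.g. `uvx mcpo --port 8000 -- <your stdio MCP server command>`; the resulting URL can then be added under Settings > Connections. Check the MCPO documentation for the exact invocation.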

4. OpenAPI / Function Calling Servers

Generic web servers that provide an OpenAPI (.json or .yaml) specification. Open WebUI can ingest these specs and treat every endpoint as a tool.
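
As an illustration, any framework that publishes an OpenAPI spec can act as a tool server. The hypothetical FastAPI sketch below (the get_weather endpoint is invented for the example) auto-generates its spec at /openapi.json, which Open WebUI can ingest so the endpoint becomes a callable tool:

```python
# Minimal OpenAPI tool server sketch; the endpoint is hypothetical.
# FastAPI auto-generates the OpenAPI spec at /openapi.json.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Demo Tool Server")


class WeatherResponse(BaseModel):
    city: str
    forecast: str


@app.get("/weather", response_model=WeatherResponse, summary="Get a city's weather")
def get_weather(city: str) -> WeatherResponse:
    """Return a stubbed forecast for the given city."""
    return WeatherResponse(city=city, forecast="sunny")

# Run with `uvicorn server:app --port 8000`, then register
# http://localhost:8000 as an OpenAPI connection in Open WebUI.
```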


How to Install & Manage Workspace Tools

📦 Workspace Tools are the most common way to extend your instance with community features.

  1. Go to the Community Tool Library.
  2. Choose a Tool, then click the Get button.
  3. Enter your Open WebUI instance’s URL (e.g. http://localhost:3000).
  4. Click Import to WebUI.
Safety Tip

Never import a Tool you don’t recognize or trust. These are Python scripts and might run unsafe code on your host system. Crucially, ensure you only grant "Tool" permissions to trusted users, as the ability to create or import tools is equivalent to the ability to run arbitrary code on the server.


How to Use Tools in Chat

🔧 Once installed or connected, here’s how to enable them for your conversations:

Option 1: Enable on-the-fly (Specific Chat)

While chatting, click the ➕ (plus) icon in the input area. You’ll see a list of available Tools — you can enable them specifically for that session.

Option 2: Enable by Default (Global/Model Level)

  1. Go to Workspace ➡️ Models.
  2. Choose the model you’re using and click the ✏️ edit icon.
  3. Scroll to the Tools section.
  4. ✅ Check the Tools you want this model to always have access to by default.
  5. Click Save.

You can also let your LLM auto-select the right Tools using the AutoTool Filter.


Tool Calling Modes: Default vs. Native

Open WebUI offers two distinct ways for models to interact with tools: a standard Default Mode and a high-performance Native Mode (Agentic Mode). Choosing the right mode depends on your model's capabilities and your performance requirements.

🟡 Default Mode (Prompt-based)

In Default Mode, Open WebUI manages tool selection by injecting a specific prompt template that guides the model to output a tool request.

  • Compatibility: Works with practically any model, including older or smaller local models that lack native function-calling support.
  • Flexibility: Highly customizable via prompt templates.
  • Caveat: Can be slower (requires extra tokens) and less reliable for complex, multi-step tool chaining.

🟢 Native Mode (Agentic Mode / System Function Calling)

Native Mode (also called Agentic Mode) leverages the model's built-in capability to handle tool definitions and return structured tool calls (JSON). This is the recommended mode for high-performance agentic workflows.
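
For context, "native" function calling means the model receives structured tool definitions and emits structured calls rather than free text. Here is a sketch of the standard OpenAI-style shapes involved (the get_stock_price function is a hypothetical example):

```python
# OpenAI-style native tool calling: structured definitions in,
# structured calls out. The get_stock_price function is hypothetical.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the latest price for a ticker symbol",
            "parameters": {
                "type": "object",
                "properties": {"ticker": {"type": "string"}},
                "required": ["ticker"],
            },
        },
    }
]

# Instead of prose, a capable model returns a structured call:
assistant_message = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_stock_price", "arguments": '{"ticker": "ACME"}'},
        }
    ],
}
```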

Model Quality Matters

Agentic tool calling requires high-quality models to work reliably. While small local models may technically support function calling, they often struggle with the complex reasoning required for multi-step tool usage. For best results, use frontier models like GPT-5, Claude 4.5 Sonnet, Gemini 3 Flash, or MiniMax M2.5. Small local models may produce malformed JSON or fail to follow the strict state management required for agentic behavior.

Why use Native Mode (Agentic Mode)?

  • Speed & Efficiency: Lower latency as it avoids bulky prompt-based tool selection.
  • Reliability: Higher accuracy in following tool schemas (with quality models).
  • Multi-step Chaining: Essential for Agentic Research and Interleaved Thinking where a model needs to call multiple tools in succession.
  • Autonomous Decision-Making: Models can decide when to search, which tools to use, and how to combine results.

How to Enable Native Mode (Agentic Mode)

Native Mode can be enabled at two levels:

  1. Global/Administrator Level (Recommended):
    • Navigate to Admin Panel > Settings > Models.
    • Scroll to Model Specific Settings for your target model.
    • Under Advanced Parameters, find the Function Calling dropdown and select Native.
  2. Per-Chat Basis:
    • Inside a chat, click the ⚙️ Chat Controls icon.
    • Go to Advanced Params and set Function Calling to Native.

[Image: Chat Controls]

Model Requirements & Caveats

Recommended Models for Agentic Mode

For reliable agentic tool calling, use high-tier frontier models:

  • GPT-5 (OpenAI)
  • Claude 4.5 Sonnet (Anthropic)
  • Gemini 3 Flash (Google)
  • MiniMax M2.5

These models excel at multi-step reasoning, proper JSON formatting, and autonomous tool selection.

  • Large Local Models: Some large local models (e.g., Qwen 3 32B, Llama 3.3 70B) can work with Native Mode, but results vary significantly by model quality.
  • Small Local Models Warning: Small local models (under 30B parameters) often struggle with Native Mode. They may produce malformed JSON, fail to follow strict state management, or make poor tool selection decisions. For these models, Default Mode is usually more reliable.

Known Model-Specific Issues

DeepSeek V3.2 Function Calling Issues

DeepSeek V3.2 has known issues with native function calling that cause reproducible failures. Despite being a 600B+ parameter model, it often outputs malformed tool calls.

The Problem: DeepSeek V3.2 was trained using a proprietary format called DSML (DeepSeek Markup Language) for tool calls. When using native function calling, the model sometimes outputs raw DSML/XML-like syntax instead of proper JSON:

  • <functionInvoke name="fetch_url"> instead of valid JSON
  • <function_calls> / </function_calls> tags in content
  • Garbled hybrid text like prominentfunction_cinvoke name="search_parameter

Why it happens: This is heavily model-dependent behavior induced during DeepSeek's fine-tuning process. DeepSeek chose to train their model on DSML rather than standard OpenAI-style JSON tool calls. While inference providers (VertexAI, OpenRouter, etc.) attempt to intercept DSML blocks and convert them to OpenAI-style JSON, this translation layer is unreliable under certain conditions (streaming, high temperature, high concurrency, multi-turn conversations). The primary responsibility lies with DeepSeek for using a non-standard format that requires fragile translation.

Known contributing factors:

  • Higher temperature values correlate with more malformed output
  • Multi-round conversations (6-8+ turns) can cause the model to stop calling functions entirely
  • Complex multi-step workflows (15-30 tool calls) may cause "schema drift" where argument formats degrade

Workarounds:

  • Use Default Mode (prompt-based) instead of Native Mode for DeepSeek — this is the recommended approach
  • Lower temperature when using tool calling
  • Limit multi-round tool calling sessions
  • Consider alternative models for agentic workflows

This is a DeepSeek model/API issue, not an Open WebUI issue. Open WebUI correctly sends tools in standard OpenAI format — the malformed output originates from DeepSeek's non-standard internal format.

| Feature | Default Mode | Native Mode |
| --- | --- | --- |
| Latency | Medium/High | Low |
| Model Compatibility | Universal | Requires Tool-Calling Support |
| Logic | Prompt-based (Open WebUI) | Model-native (API/Ollama) |
| Complex Chaining | ⚠️ Limited | ✅ Excellent |

Built-in System Tools (Native/Agentic Mode)

🛠️ When Native Mode (Agentic Mode) is enabled, Open WebUI automatically injects powerful system tools. This unlocks truly agentic behaviors where capable models (like GPT-5, Claude 4.5 Sonnet, Gemini 3 Flash, or MiniMax M2.5) can perform multi-step research, explore knowledge bases, or manage user memory autonomously.

| Tool | Purpose |
| --- | --- |
| **Search & Web** | Requires ENABLE_WEB_SEARCH enabled. |
| search_web | Search the public web for information. Best for current events, external references, or topics not covered in internal documents. |
| fetch_url | Visits a URL and extracts text content via the Web Loader. |
| **Knowledge Base** | Requires per-model "Knowledge Base" category enabled (default: on). |
| list_knowledge_bases | List the user's accessible knowledge bases with file counts. Use this first to discover what knowledge is available. |
| query_knowledge_bases | Search KB names and descriptions by semantic similarity. Use to find which KB is relevant when you don't know which one to query. |
| search_knowledge_bases | Search knowledge bases by name/description (text match). |
| query_knowledge_files | Search file contents inside KBs using vector search. This is your main tool for finding information. When a KB is attached to the model, searches are automatically scoped to that KB. |
| search_knowledge_files | Search files across accessible knowledge bases by filename (not content). |
| view_knowledge_file | Get the full content of a file from a knowledge base. |
| **Image Gen** | Requires image generation enabled (per-tool). |
| generate_image | Generates a new image based on a prompt. Requires ENABLE_IMAGE_GENERATION. |
| edit_image | Edits existing images based on a prompt and image URLs. Requires ENABLE_IMAGE_EDIT. |
| **Memory** | Requires Memory feature enabled AND per-model "Memory" category enabled (default: on). |
| search_memories | Searches the user's personal memory/personalization bank. |
| add_memory | Stores a new fact in the user's personalization memory. |
| replace_memory_content | Updates an existing memory record by its unique ID. |
| **Notes** | Requires ENABLE_NOTES AND per-model "Notes" category enabled (default: on). |
| search_notes | Search the user's notes by title and content. |
| view_note | Get the full markdown content of a specific note. |
| write_note | Create a new private note for the user. |
| replace_note_content | Update an existing note's content or title. |
| **Chat History** | Requires per-model "Chat History" category enabled (default: on). |
| search_chats | Simple text search across chat titles and message content. Returns matching chat IDs and snippets. |
| view_chat | Reads and returns the full message history of a specific chat by ID. |
| **Channels** | Requires ENABLE_CHANNELS AND per-model "Channels" category enabled (default: on). |
| search_channels | Find public or accessible channels by name/description. |
| search_channel_messages | Search for specific messages inside accessible channels. |
| view_channel_message | View a specific message or its details in a channel. |
| view_channel_thread | View a full message thread/replies in a channel. |
| **Skills** | Requires per-model "Skills" category enabled (default: on). |
| view_skill | Load the full instructions of a skill from the available skills manifest. |
| **Time Tools** | Requires per-model "Time & Calculation" category enabled (default: on). |
| get_current_timestamp | Get the current UTC Unix timestamp and ISO date. |
| calculate_timestamp | Calculate relative timestamps (e.g., "3 days ago"). |

Tool Reference

| Tool | Parameters | Output |
| --- | --- | --- |
| **Search & Web** | | |
| search_web | query (required), count (default: 5) | Array of {title, link, snippet} |
| fetch_url | url (required) | Plain text content (max 50,000 chars) |
| **Knowledge Base** | | |
| list_knowledge_bases | count (default: 10), skip (default: 0) | Array of {id, name, description, file_count} |
| query_knowledge_bases | query (required), count (default: 5) | Array of {id, name, description} by similarity |
| search_knowledge_bases | query (required), count (default: 5), skip (default: 0) | Array of {id, name, description, file_count} |
| query_knowledge_files | query (required), knowledge_ids (optional), count (default: 5) | Array of {id, filename, content_snippet, knowledge_id} |
| search_knowledge_files | query (required), knowledge_id (optional), count (default: 5), skip (default: 0) | Array of {id, filename, knowledge_id, knowledge_name} |
| view_knowledge_file | file_id (required) | {id, filename, content} |
| **Image Gen** | | |
| generate_image | prompt (required) | {status, message, images} — auto-displayed |
| edit_image | prompt (required), image_urls (required) | {status, message, images} — auto-displayed |
| **Memory** | | |
| search_memories | query (required), count (default: 5) | Array of {id, date, content} |
| add_memory | content (required) | {status: "success", id} |
| replace_memory_content | memory_id (required), content (required) | {status: "success", id, content} |
| **Notes** | | |
| search_notes | query (required), count (default: 5), start_timestamp, end_timestamp | Array of {id, title, snippet, updated_at} |
| view_note | note_id (required) | {id, title, content, updated_at, created_at} |
| write_note | title (required), content (required) | {status: "success", id} |
| replace_note_content | note_id (required), content (required), title (optional) | {status: "success", id, title} |
| **Chat History** | | |
| search_chats | query (required), count (default: 5), start_timestamp, end_timestamp | Array of {id, title, snippet, updated_at} |
| view_chat | chat_id (required) | {id, title, messages: [{role, content}]} |
| **Channels** | | |
| search_channels | query (required), count (default: 5) | Array of {id, name, description} |
| search_channel_messages | query (required), count (default: 10), start_timestamp, end_timestamp | Array of {id, channel_id, content, user_name, created_at} |
| view_channel_message | message_id (required) | {id, content, user_name, created_at, reply_count} |
| view_channel_thread | parent_message_id (required) | Array of {id, content, user_name, created_at} |
| **Skills** | | |
| view_skill | name (required) | {name, content} |
| **Time Tools** | | |
| get_current_timestamp | None | {current_timestamp, current_iso} |
| calculate_timestamp | days_ago, weeks_ago, months_ago, years_ago (all default: 0) | {current_timestamp, current_iso, calculated_timestamp, calculated_iso} |
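
To make these shapes concrete, here is an illustrative call/result pair for calculate_timestamp, following the parameters and output listed above (the timestamp values are made up for the example):

```python
# Illustrative request/result pair for calculate_timestamp, matching
# the parameter and output shapes in the reference table above.
call = {"name": "calculate_timestamp", "arguments": {"days_ago": 3}}

# Example result, assuming "now" is 2025-01-01T00:00:00Z:
result = {
    "current_timestamp": 1735689600,
    "current_iso": "2025-01-01T00:00:00Z",
    "calculated_timestamp": 1735430400,  # 3 days (259,200 s) earlier
    "calculated_iso": "2024-12-29T00:00:00Z",
}
```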
Automatic Timezone Detection

Open WebUI automatically detects and stores your timezone when you log in. This allows time-related tools and features to provide accurate local times without any manual configuration. Your timezone is determined from your browser settings.

Recommended KB Tool Workflow

When an attached KB is returning empty results:

  1. First call list_knowledge_bases to confirm the model can see the attached KB
  2. Then use query_knowledge_files (without specifying knowledge_ids — it auto-scopes to attached KBs)
  3. If still empty, the files may not be embedded yet, or you may have Full Context mode enabled, which bypasses the vector store

Do NOT use Full Context mode with knowledge tools — Full Context injects file content directly and doesn't store embeddings, so query_knowledge_files will return empty. Use Focused Retrieval (default) for tool-based access.

Knowledge Base Tools vs RAG Pipeline

The native query_knowledge_files tool uses simple vector search with a default of 5 results. It does not use:

  • Hybrid search (BM25 + vector)
  • Reranking (external reranker endpoint)
  • The "Top K Reranker" admin setting

For the full RAG pipeline with hybrid search and reranking, use the File Context capability (attach files via # or knowledge base assignment) instead of relying on autonomous tool calls.

Knowledge is NOT Auto-Injected in Native Mode

Important: When using Native Function Calling, attached knowledge is not automatically injected into the conversation. The model must actively call knowledge tools to search and retrieve information.

If your model isn't using attached knowledge:

  1. Add instructions to your system prompt telling the model to discover and query knowledge bases. Example: "When users ask questions, first use list_knowledge_bases to see what knowledge is available, then use query_knowledge_files to search the relevant knowledge base before answering."
  2. Or disable Native Function Calling for that model to restore automatic RAG injection.
  3. Or use "Full Context" mode for attached knowledge (click on the attachment and select "Use Entire Document") which always injects the full content.

See Knowledge Scoping with Native Function Calling for more details.

Why use these? They enable Deep Research (searching the web multiple times or querying knowledge bases), Contextual Awareness (looking up previous chats or notes), Dynamic Personalization (saving facts), and Precise Automation (generating content based on existing notes or documents).

Disabling Builtin Tools (Per-Model)

The Builtin Tools capability can be toggled on or off for each model in the Workspace > Models editor under Capabilities. When enabled (the default), all the system tools listed above are automatically injected when using Native Mode.

When to disable Builtin Tools:

| Scenario | Reason to Disable |
| --- | --- |
| Model doesn't support function calling | Smaller or older models may not handle the tools parameter correctly |
| Simpler/predictable behavior needed | You want the model to work only with pre-injected context, no autonomous tool calls |
| Security/control concerns | Prevents the model from actively querying knowledge bases, searching chats, accessing memories, etc. |
| Token efficiency | Tool specifications consume tokens; disabling saves context window space |

What happens when Builtin Tools is disabled:

  1. No tool injection: The model won't receive any of the built-in system tools, even in Native Mode.
  2. RAG still works (if File Context is enabled): Attached files are still processed via RAG and injected as context.
  3. No autonomous retrieval: The model cannot decide to search knowledge bases or fetch additional information—it works only with what's provided upfront.

Granular Builtin Tool Categories (Per-Model)

When the Builtin Tools capability is enabled, you can further control which categories of builtin tools are available to the model. This appears in the Model Editor as a set of checkboxes under Builtin Tools.

| Category | Tools Included | Description |
| --- | --- | --- |
| Time & Calculation | get_current_timestamp, calculate_timestamp | Get current time and perform date/time calculations |
| Memory | search_memories, add_memory, replace_memory_content | Search and manage user memories |
| Chat History | search_chats, view_chat | Search and view user chat history |
| Notes | search_notes, view_note, write_note, replace_note_content | Search, view, and manage user notes |
| Knowledge Base | list_knowledge_bases, search_knowledge_bases, query_knowledge_bases, search_knowledge_files, query_knowledge_files, view_knowledge_file | Browse and query knowledge bases |
| Channels | search_channels, search_channel_messages, view_channel_message, view_channel_thread | Search channels and channel messages |
| Skills | view_skill | Load skill instructions on-demand from the manifest |

All categories are enabled by default. Disabling a category prevents those specific tools from being injected, while keeping other categories active.

Use cases for granular control:

| Scenario | Recommended Configuration |
| --- | --- |
| Privacy-focused model | Disable Memory and Chat History to prevent access to personal data |
| Read-only assistant | Disable Notes (prevents creating/modifying notes) but keep Knowledge Base enabled |
| Minimal token usage | Enable only the categories the model actually needs |
| Knowledge-centric bot | Disable everything except Knowledge Base and Time |
Note

These per-category toggles only appear when the main Builtin Tools capability is enabled. If you disable Builtin Tools entirely, no tools are injected regardless of category settings.

Global Features Take Precedence

Enabling a per-model category toggle does not override global feature flags. For example, if ENABLE_NOTES is disabled globally (Admin Panel), Notes tools will not be available even if the "Notes" category is enabled for the model. The per-model toggles only allow you to further restrict what's already available—they cannot enable features that are disabled at the global level.

Builtin Tools vs File Context

Builtin Tools controls whether the model gets tools for autonomous retrieval. It does not control whether file content is injected via RAG—that's controlled by the separate File Context capability.

  • File Context = Whether Open WebUI extracts and injects file content (RAG processing)
  • Builtin Tools = Whether the model gets tools to autonomously search/retrieve additional content

See File Context vs Builtin Tools for a detailed comparison.

Interleaved Thinking

🧠 When using Native Mode (Agentic Mode), high-tier models can engage in Interleaved Thinking. This is a powerful "Thought → Action → Thought → Action → Thought → ..." loop where the model can reason about a task, execute one or more tools, evaluate the results, and then decide on its next move.

Quality Models Required

Interleaved thinking requires models with strong reasoning capabilities. This feature works best with frontier models (GPT-5, Claude 4.5+, Gemini 3+) that can maintain context across multiple tool calls and make intelligent decisions about which tools to use when.

This is fundamentally different from a single-shot tool call. In an interleaved workflow, the model follows a cycle:

  1. Reason: Analyze the user's intent and identify information gaps.
  2. Act: Call a tool (e.g., query_knowledge_files for internal docs or search_web and fetch_url for web research).
  3. Think: Read the tool's output and update its internal understanding.
  4. Iterate: If the answer isn't clear, call another tool (e.g., view_knowledge_file to read a specific document or fetch_url to read a specific page) or refine the search.
  5. Finalize: Only after completing this "Deep Research" cycle does the model provide a final, grounded answer.

This behavior is what transforms a standard chatbot into an Agentic AI capable of solving complex, multi-step problems autonomously.
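
Conceptually, the runtime loop behind this cycle looks something like the following sketch (not Open WebUI's actual implementation; chat and execute_tool are assumed callables for a chat API and a tool executor):

```python
# Conceptual sketch of an interleaved thinking loop: the model
# alternates reasoning and tool use until it can answer directly.
def agentic_loop(chat, messages, tools, execute_tool, max_steps=10):
    for _ in range(max_steps):
        reply = chat(messages=messages, tools=tools)   # Reason
        if not reply.get("tool_calls"):
            return reply["content"]                    # Finalize
        messages.append(reply)
        for call in reply["tool_calls"]:               # Act
            output = execute_tool(call)
            messages.append(
                {"role": "tool", "tool_call_id": call["id"], "content": output}
            )                                          # Think on next iteration
    return "Stopped: step limit reached."
```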



🚀 Summary & Next Steps

Tools bring your AI to life by giving it hands to interact with the world.