🐍 Python Code Execution

Overview

Open WebUI provides two ways to execute Python code:

  1. Manual Code Execution: Run Python code blocks generated by LLMs using a "Run" button in the browser (uses Pyodide/WebAssembly).
  2. Code Interpreter: An AI capability that allows models to automatically write and execute Python code as part of their response (uses Pyodide or Jupyter).

Both methods support visual outputs like matplotlib charts that can be displayed inline in your chat.

Code Interpreter Capability

The Code Interpreter is a model capability that enables LLMs to write and execute Python code autonomously during a conversation. When enabled, models can:

  • Perform calculations and data analysis
  • Generate visualizations (charts, graphs, plots)
  • Process data dynamically
  • Execute multi-step computational tasks

Enabling Code Interpreter

Per-Model Setup (Admin):

  1. Go to Admin Panel → Models
  2. Select the model you want to configure
  3. Under Capabilities, enable Code Interpreter
  4. Save changes

Global Configuration (Admin Panel):

These settings can be configured at Admin Panel → Settings → Code Execution:

  • Enable/disable code interpreter
  • Select engine (Pyodide or Jupyter)
  • Configure Jupyter connection settings
  • Set blocked modules

Global Configuration (Environment Variables):

  • ENABLE_CODE_INTERPRETER (default: true): Enable or disable the code interpreter globally
  • CODE_INTERPRETER_ENGINE (default: pyodide): Engine to use: pyodide (browser) or jupyter (server)
  • CODE_INTERPRETER_PROMPT_TEMPLATE (default: built-in): Custom prompt template for the code interpreter
  • CODE_INTERPRETER_BLACKLISTED_MODULES (default: ""): Comma-separated list of blocked Python modules
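
For example, a deployment that uses the Jupyter engine and blocks a couple of modules might set the variables like this (a minimal sketch; the engine choice and the blocked modules shown are illustrative, not recommendations):

ENABLE_CODE_INTERPRETER=true
CODE_INTERPRETER_ENGINE=jupyter
CODE_INTERPRETER_BLACKLISTED_MODULES="os,subprocess"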

For Jupyter configuration, see the Jupyter Notebook Integration tutorial.

Displaying Images Inline (matplotlib, etc.)

When using matplotlib or other visualization libraries, images can be displayed directly in the chat. For this to work correctly, the code must output the image as a base64 data URL.

import matplotlib.pyplot as plt
import io
import base64

# Create your chart
plt.figure(figsize=(10, 6))
plt.bar(['A', 'B', 'C'], [4, 7, 5])
plt.title('Sample Chart')

# Output as base64 data URL (triggers automatic upload)
buf = io.BytesIO()
plt.savefig(buf, format='png', dpi=150, bbox_inches='tight')
buf.seek(0)
img_base64 = base64.b64encode(buf.read()).decode('utf-8')
print(f"data:image/png;base64,{img_base64}")
plt.close()

How Image Display Works

  1. The code executes and prints data:image/png;base64,... to stdout
  2. Open WebUI's middleware detects the base64 image data in the output
  3. The image is automatically uploaded and stored as a file
  4. The base64 string is replaced with a file URL (e.g., /api/v1/files/{id}/content)
  5. The model sees this URL in the code output and can reference it in its response
  6. The image renders inline in the chat

Understanding the Flow

The model's code should print the base64 data URL. Open WebUI intercepts this and converts it to a permanent file URL. The model should then use this resulting URL in markdown like ![Chart](/api/v1/files/abc123/content) — it should NOT paste the raw base64 string into its response text.

If you see raw base64 text appearing in chat responses, the model is incorrectly echoing the base64 instead of using the converted URL from the code output.
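
To make this concrete, here is a minimal sketch of the kind of post-processing involved. It is not Open WebUI's actual middleware code; the upload_image helper and the regex are simplified stand-ins for illustration:

import re

def upload_image(data_url: str) -> str:
    # Stand-in for the real upload step: in Open WebUI the decoded image is
    # stored as a file; here we just return a fixed illustrative URL.
    return "/api/v1/files/abc123/content"

def rewrite_output(stdout: str) -> str:
    # Find base64 PNG data URLs in the captured output and swap each one
    # for a markdown image link that points at the uploaded file.
    pattern = r"data:image/png;base64,[A-Za-z0-9+/=]+"
    return re.sub(pattern, lambda m: f"![Output Image]({upload_image(m.group(0))})", stdout)

# The model sees only the rewritten output, never the raw base64 string.
print(rewrite_output("data:image/png;base64,iVBORw0KGgoAAAANSUhEUg=="))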

Example Prompt

Create a bar chart showing quarterly sales: Q1: 150, Q2: 230, Q3: 180, Q4: 310. Use matplotlib, save the figure to a BytesIO buffer, encode it as base64, and print the data URL. After the code runs, use the resulting file URL from the output to display the image in your response.

Expected model behavior:

  1. Model writes Python code using the base64 pattern above
  2. Code executes and outputs data:image/png;base64,...
  3. Open WebUI converts this to a file URL in the output (e.g., ![Output Image](/api/v1/files/abc123/content))
  4. Model references this URL in its response to display the chart

Common Issues

  • Raw base64 text appears in chat: the model echoed the base64 in its response text. Instruct the model to print the base64 only inside the code and to reference the converted file URL instead.
  • Image doesn't display: the code called plt.show() without printing a base64 data URL. Use the base64 pattern shown above.
  • "Analyzing..." spinner stuck: code execution timed out or raised an error. Check the backend logs for details.

Manual Code Execution (Pyodide)

Open WebUI includes a browser-based Python environment using Pyodide (WebAssembly). This allows running Python scripts directly in your browser with no server-side setup.

Running Code Manually

  1. Ask an LLM to write Python code
  2. A Run button appears in the code block
  3. Click to execute the code using Pyodide
  4. Output appears below the code block (see the example after these steps)
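
For instance, asking for the mean and standard deviation of a few numbers might produce a snippet along these lines, which runs entirely in the browser when you click Run (the values are just an example):

import numpy as np

# Simple in-browser computation; anything printed appears below the code block
values = np.array([4, 7, 5, 9])
print("mean:", values.mean())
print("standard deviation:", round(float(values.std()), 3))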

Supported Libraries

Pyodide includes the following pre-configured packages:

  • micropip
  • packaging
  • requests
  • beautifulsoup4
  • numpy
  • pandas
  • matplotlib
  • scikit-learn
  • scipy
  • regex

Note: Packages not pre-compiled in Pyodide cannot be installed at runtime. For additional packages, consider using the Jupyter integration or forking Pyodide to add custom packages.
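
If you are unsure whether a library you need is included, a quick check is to import the packages listed above and print their versions (a minimal sketch; the exact versions depend on the bundled Pyodide build):

import numpy, pandas, matplotlib, sklearn, scipy

# Print the version of each bundled package to confirm it is importable
for module in (numpy, pandas, matplotlib, sklearn, scipy):
    print(module.__name__, getattr(module, "__version__", "unknown"))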

Example: Creating a Chart

Prompt:

"Create a bar chart with matplotlib showing: Acuity 4.1, Signify 7.2, Hubbell 5.6, Legrand 8.9. Output the chart as a base64 data URL so it displays inline."

Expected Code Output:

import matplotlib.pyplot as plt
import io
import base64

companies = ['Acuity', 'Signify', 'Hubbell', 'Legrand']
values = [4.1, 7.2, 5.6, 8.9]

plt.figure(figsize=(10, 6))
bars = plt.bar(companies, values, color=['#3498db', '#2ecc71', '#e74c3c', '#9b59b6'])

# Add a value label above each bar
for bar, value in zip(bars, values):
    plt.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.1,
             str(value), ha='center', va='bottom', fontsize=12)

plt.title('Company Values', fontsize=16, fontweight='bold')
plt.xlabel('Company', fontsize=12)
plt.ylabel('Value', fontsize=12)
plt.tight_layout()

# Encode the figure as a base64 data URL so it can be displayed inline
buf = io.BytesIO()
plt.savefig(buf, format='png', dpi=150, bbox_inches='tight')
buf.seek(0)
print(f"data:image/png;base64,{base64.b64encode(buf.read()).decode()}")
plt.close()

The image will be automatically uploaded and displayed inline in your chat.

Tips for Better Results

  • Mention the environment: Tell the LLM it's running in a "Pyodide environment" or "code interpreter" for better code generation
  • Be explicit about output: Ask for "base64 data URL output" for images
  • Use print statements: Results must be printed to appear in the output (see the example below)
  • Check library support: Verify the libraries you need are available in Pyodide
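
As a quick illustration of the print-statement tip, a bare expression produces no visible output, while printed values do (a minimal example):

# A bare expression on its own line produces no visible output in the chat
2 + 2

# Only printed results appear below the code block
print("2 + 2 =", 2 + 2)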

Further Reading