# ⭐ Key Features of Open WebUI
- 🚀 **Effortless Setup**: Install seamlessly using Docker, Kubernetes, Podman, or Helm Charts (`kubectl`, `kustomize`, `podman`, or `helm`) for a hassle-free experience, with support for both the `:ollama` image (bundled Ollama) and the `:cuda` image (CUDA support).
- 🛠️ **Guided Initial Setup**: Complete the setup process with clarity, including an explicit indication of creating an admin account during the first-time setup.
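For example, the Docker route with the bundled-Ollama image looks roughly like this (the `ghcr.io/open-webui/open-webui` image name and `:ollama` tag follow the project's published images; adjust the host port and volume names to your deployment):

```shell
# Run Open WebUI with Ollama bundled in a single container.
# Serves the UI at http://localhost:3000 and persists data in named volumes.
docker run -d \
  -p 3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:ollama
```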
- 🤝 **OpenAI API Integration**: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. The OpenAI API URL can be customized to integrate Open WebUI seamlessly with various third-party applications.
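To illustrate, an OpenAI-compatible chat request is just a JSON POST. The sketch below only builds the request against a hypothetical local endpoint and placeholder API key, without sending it:

```python
import json
import urllib.request

# Hypothetical values; substitute your own endpoint and key.
BASE_URL = "http://localhost:3000/api"  # customized OpenAI-compatible API URL
API_KEY = "sk-your-key-here"

payload = {
    "model": "llama3",  # any model name served by your instance
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# urllib.request.urlopen(req) would send it; omitted here because it
# requires a running instance.
print(req.full_url)  # http://localhost:3000/api/chat/completions
```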
- 🛡️ **Granular Permissions and User Groups**: Administrators can create detailed user roles, user groups, and permissions across the workspace, ensuring a secure environment for all users. This granularity not only enhances security but also allows for customized user experiences, fostering a sense of ownership and responsibility among users.
- 📱 **Responsive Design**: Enjoy a seamless experience across desktop PCs, laptops, and mobile devices.
- 📱 **Progressive Web App for Mobile**: Enjoy a native progressive web app experience on your mobile device, with offline access on `localhost` or a personal domain and a smooth user interface. For the PWA to be installable on your device, it must be delivered in a secure context, which usually means serving it over HTTPS.
  - To set up a PWA, you'll need some understanding of technologies like Linux, Docker, and reverse proxies such as `Nginx`, `Caddy`, or `Traefik`. While there's no one-click install, and deploying your Open WebUI instance securely over HTTPS takes some hands-on work, these tools make it easier to build and deploy a PWA tailored to your needs.
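As an illustration, a minimal `Nginx` reverse-proxy configuration for serving Open WebUI over HTTPS might look like the following sketch; the domain, certificate paths, and upstream port `3000` are all assumptions to adapt to your deployment:

```nginx
server {
    listen 443 ssl;
    server_name webui.example.com;  # hypothetical domain

    ssl_certificate     /etc/ssl/certs/webui.crt;   # your certificate
    ssl_certificate_key /etc/ssl/private/webui.key; # your private key

    location / {
        proxy_pass http://127.0.0.1:3000;  # Open WebUI port (assumed)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket headers, needed for streaming responses
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```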
- ✒️🔢 **Full Markdown and LaTeX Support**: Elevate your LLM experience with comprehensive Markdown, LaTeX, and rich-text capabilities for enriched interaction.
- 🧩 **Model Builder**: Easily create custom models from base Ollama models directly from Open WebUI. Create and add custom characters and agents, customize model elements, and import models effortlessly through Open WebUI Community integration.
- 📚 **Local and Remote RAG Integration**: Dive into the future of chat interactions and explore your documents with our cutting-edge Retrieval Augmented Generation (RAG) technology within your chats. Documents can be loaded into the `Documents` tab of the Workspace, after which they can be accessed by typing the pound key `#` before a query, or by starting the prompt with `#` followed by a URL for webpage content integration.
- 🔍 **Web Search for RAG**: Perform web searches using a selection of search providers and inject the results directly into your local Retrieval Augmented Generation (RAG) experience.
- 🌐 **Web Browsing Capabilities**: Integrate websites seamlessly into your chat experience by using the `#` command followed by a URL. This feature enables the incorporation of web content directly into your conversations, enhancing the richness and depth of your interactions.
- 🎨 **Image Generation Integration**: Seamlessly incorporate image generation capabilities to enrich your chat experience with dynamic visual content.
- ⚙️ **Concurrent Model Utilization**: Effortlessly engage with multiple models simultaneously, harnessing their unique strengths for optimal responses. Leverage a diverse set of model modalities in parallel to enhance your experience.
- 🔐 **Role-Based Access Control (RBAC)**: Ensure secure access with restricted permissions. Only authorized individuals can access your Ollama instance, while model creation and pulling rights are reserved exclusively for administrators.
- 🌐🌍 **Multilingual Support**: Experience Open WebUI in your preferred language with our internationalization (`i18n`) support. We invite you to join us in expanding our supported languages; we're actively seeking contributors!
- 🌟 **Continuous Updates**: We are committed to improving Open WebUI with regular updates, fixes, and new features.
And many more remarkable features including... ⚡️
## 🔧 Pipelines Support
- 🔧 **Pipelines Framework**: Seamlessly integrate and customize your Open WebUI experience with our modular plugin framework (https://github.com/open-webui/pipelines). The framework allows for the easy addition of custom logic and integration of Python libraries, from AI agents to home automation APIs.
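As a rough sketch, a pipeline is a Python class exposing a `pipe` method that receives the user message and returns the transformed output. The signature below follows the examples in the pipelines repository, but treat it as an assumption and check the repo before relying on it:

```python
from typing import List

class Pipeline:
    """Toy illustrative pipeline: echoes the user's message in uppercase.
    The `pipe` signature mirrors the pipelines repo examples (an assumption)."""

    def __init__(self):
        self.name = "Uppercase Echo Pipeline"

    def pipe(self, user_message: str, model_id: str,
             messages: List[dict], body: dict) -> str:
        # Custom logic goes here; real pipelines might call external
        # APIs, rewrite prompts, or route between models.
        return user_message.upper()

# Usage sketch:
p = Pipeline()
print(p.pipe("hello pipelines", "my-model", [], {}))  # HELLO PIPELINES
```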
- 📥 **Upload Pipeline**: Pipelines can be uploaded directly from the `Admin Panel` > `Settings` > `Pipelines` menu, streamlining the pipeline management process.
The possibilities with our Pipelines framework are practically limitless. Start with a few pre-built pipelines to get going!
- 🔗 **Function Calling**: Integrate function calling seamlessly through Pipelines to enhance your LLM interactions with advanced function-calling capabilities.
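Conceptually, function calling means the model emits a structured call that your code dispatches to a real function. A minimal, library-agnostic sketch (the tool names here are hypothetical, not part of Open WebUI):

```python
import json

# Registry of callable tools a pipeline might expose (hypothetical examples).
TOOLS = {
    "add": lambda a, b: a + b,
    "greet": lambda name: f"Hello, {name}!",
}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call like
    {"name": "add", "arguments": {"a": 2, "b": 3}} and run it."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    # The result is serialized and fed back to the model as context.
    return json.dumps({"result": result})

print(dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}'))  # {"result": 5}
```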
- 📚 **Custom RAG**: Integrate a custom Retrieval Augmented Generation (RAG) pipeline seamlessly to enhance your LLM interactions with custom RAG logic.
- 📊 **Message Monitoring with Langfuse**: Monitor and analyze message interactions and real-time usage statistics via the Langfuse pipeline.
- ⚖️ **User Rate Limiting**: Manage API usage efficiently by controlling the flow of requests sent to LLMs with the Rate Limit pipeline, preventing you from exceeding provider rate limits.
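The core idea behind per-user rate limiting can be sketched with a sliding-window counter. This is an illustration of the technique, not the actual Rate Limit pipeline's implementation:

```python
import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Allow at most `max_requests` per user within `window_s` seconds."""

    def __init__(self, max_requests: int, window_s: float):
        self.max_requests = max_requests
        self.window_s = window_s
        self.history = defaultdict(deque)  # user_id -> request timestamps

    def allow(self, user_id: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.history[user_id]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window_s:
            q.popleft()
        if len(q) < self.max_requests:
            q.append(now)
            return True
        return False

limiter = SlidingWindowRateLimiter(max_requests=3, window_s=60)
print([limiter.allow("alice", now=t) for t in (0, 1, 2, 3)])
# [True, True, True, False]
```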
- 🌍 **Real-Time LibreTranslate Translation**: Integrate real-time translations into your LLM interactions using the LibreTranslate pipeline, enabling cross-lingual communication.
  - Note that this pipeline requires further setup, with LibreTranslate running in a Docker container.
- 🛡️ **Toxic Message Filtering**: Our Detoxify pipeline automatically filters out toxic messages to maintain a clean and safe chat environment.
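The filtering logic can be sketched as below. The keyword scorer is a stand-in for illustration only; the actual pipeline delegates scoring to the Detoxify model:

```python
from typing import Optional

def toxicity_score(message: str) -> float:
    """Stand-in scorer for illustration. The real pipeline uses the
    Detoxify model, which returns a toxicity probability in [0, 1]."""
    blocked = {"idiot", "stupid"}
    words = message.lower().split()
    return 1.0 if any(w.strip(".,!?") in blocked for w in words) else 0.0

def filter_message(message: str, threshold: float = 0.5) -> Optional[str]:
    """Return the message if acceptable, or None to drop it."""
    return message if toxicity_score(message) < threshold else None

print(filter_message("have a nice day"))    # have a nice day
print(filter_message("you are an idiot!"))  # None
```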
- 🔒 **LLM-Guard**: Ensure secure LLM interactions with the LLM-Guard pipeline, featuring a Prompt Injection Scanner that detects and mitigates crafty input manipulations targeting large language models. This protects your LLMs from data leakage and adds a layer of resistance against prompt injection attacks.
- 🕒 **Conversation Turn Limits**: Improve interaction management by setting limits on conversation turns with the Conversation Turn Limit pipeline.
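Turn limiting amounts to counting messages in the chat history and refusing further requests past a threshold. A minimal sketch, assuming a turn is one user message (the real pipeline may count differently):

```python
def enforce_turn_limit(messages: list, max_turns: int) -> bool:
    """Return True if the conversation may continue, False if the
    user has used up their allotted turns."""
    user_turns = sum(1 for m in messages if m.get("role") == "user")
    return user_turns <= max_turns

history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello!"},
    {"role": "user", "content": "tell me more"},
]
print(enforce_turn_limit(history, max_turns=5))  # True
print(enforce_turn_limit(history, max_turns=1))  # False
```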
- 📈 **OpenAI Generation Stats**: Our OpenAI pipeline provides detailed generation statistics for OpenAI models.
- 🚀 **Multi-Model Support**: Seamless integration with AI models from a variety of providers expands your possibilities, giving you a wide range of language models to select from and interact with.
In addition to the extensive features and customization options, we also provide a library of example pipelines ready to use along with a practical example scaffold pipeline to help you get started. These resources will streamline your development process and enable you to quickly create powerful LLM interactions using Pipelines and Python. Happy coding! 💡
## 🖥️ User Experience
- 🖥️ **Intuitive Interface**: The chat interface has been designed with the user in mind, drawing inspiration from the user interface of ChatGPT.
- ⚡ **Swift Responsiveness**: Enjoy reliably fast and responsive performance.