====== Open WebUI & Ollama ======

[[https://github.com/open-webui/open-webui|Open WebUI]] is a web based frontend to LLMs that lets you run your own private chatbot or, more generally, your own AI models.

[[https://ollama.com/|Ollama]] is a collection of open AI / LLM models that can be used with Open WebUI.

Both can easily be installed with containers.

===== Installation =====

To install Open WebUI you need, of course, its dedicated user, and you will also need some persistent folders to map as volumes in the containers. I chose to put these folders under **/data/llm**.

So add the user and create the folders:
<code bash>
useradd -d /data/daemons/openwebui -m openwebui
mkdir /data/llm
chown openwebui:openwebui /data/llm
su - openwebui
cd /data/llm
mkdir webui-data
mkdir ollama
mkdir ollama/code
mkdir ollama/ollama
</code>

Open WebUI can be installed on bare metal, without containers, using //pip//, but due to its strict Python requirement (3.11 at the time of writing), this is not recommended (Gentoo already ships Python 3.13).
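
If you still want to try the //pip// route anyway, the upstream docs boil down to installing the package and running the bundled server. A minimal sketch, assuming you have a **python3.11** interpreter available somehow:
<code bash>
# create an isolated environment with the required Python version
python3.11 -m venv /data/llm/openwebui-venv
source /data/llm/openwebui-venv/bin/activate

# install and run Open WebUI (serves on port 8080 by default)
pip install open-webui
open-webui serve
</code>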

Let's go the container way instead, using Podman compose of course.

From [[https://docs.openwebui.com/getting-started/quick-start/|this page]], select "docker compose" and follow the upstream instructions as a reference.

This is the compose file I am using, adapt it to your needs:
<file - docker-compose.yaml>
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3080:8080"
    volumes:
      - /data/llm/webui-data:/app/backend/data
    networks:
      - openwebui-net

  ollama:
    image: docker.io/ollama/ollama:latest
    ports:
      - "3081:11434"
    volumes:
      - /data/llm/ollama/code:/code
      - /data/llm/ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
    networks:
      - openwebui-net

networks:
  openwebui-net:
    dns_enabled: true
</file>

This setup will pull both Ollama and Open WebUI into the same container stack, which allows for a seamless integration and a neat organization on the server itself.
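
To bring the stack up, save the compose file somewhere owned by the **openwebui** user and start it with Podman compose. A quick sketch, assuming the file lives in **/data/llm/docker-compose.yaml** (adapt the path if you keep it elsewhere):
<code bash>
su - openwebui
cd /data/llm
podman-compose up -d   # the first run pulls both images, so it takes a while
podman ps              # both containers should show up here
</code>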

This setup will let you access your Ollama instance from //outside// the container, on port 3081, which should **NOT** be forwarded on the proxy server, because it's only for home access. The Open WebUI instance will instead be available on port 3080 and accessible through the web proxy, see below.
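
A quick way to verify the port mapping from the server itself (or from any machine on your LAN) is to poke both services with //curl//:
<code bash>
# Ollama API, exposed on port 3081 (home access only)
curl http://localhost:3081/api/version

# Open WebUI, exposed on port 3080 (this is what the reverse proxy will point to)
curl -I http://localhost:3080/
</code>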


===== Reverse Proxy =====

Open WebUI can be hosted on a subdomain, let's assume you choose **ai.mydomain.com**.

As usual you want it protected by the Reverse Proxy, so create the **ai.conf** file:
<file - ai.conf>
server {
        server_name ai.mydomain.com;
        listen 443 ssl;
        listen 8443 ssl;
        http2 on;

        access_log /var/log/nginx/ai.mydomain.com_access_log main;
        error_log /var/log/nginx/ai.mydomain.com_error_log info;

        location / { # The trailing / is important!
                proxy_pass        http://127.0.0.1:3080/; # The / is important!
                proxy_set_header  X-Script-Name /;
                proxy_set_header  Host $http_host;
        }

        include com.mydomain/certbot.conf;
}
</file>
Add this config file to NGINX (see [[selfhost:nginx|The Reverse Proxy concept]] for more details) and restart nginx.
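
The exact steps depend on your NGINX layout; a sketch, assuming your vhost configs are included from **/etc/nginx/vhosts.d/** (adjust to whatever the linked page set up for you):
<code bash>
cp ai.conf /etc/nginx/vhosts.d/ai.conf   # path depends on your nginx include layout
nginx -t                                 # always test the configuration first
rc-service nginx restart
</code>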

Now point your browser to **https://ai.mydomain.com** to finish the setup.


===== Configuration =====

After you start the containers, be ready to wait a good ten minutes or more until the web GUI is operative. YMMV of course, depending on your server's capabilities.
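
Instead of refreshing the page blindly, you can watch the container logs or poll the web port until it answers. The container name below is hypothetical (it depends on the project folder name Podman compose used), so check **podman ps** for the real one:
<code bash>
podman logs -f llm_openwebui_1   # hypothetical name, check "podman ps" for yours
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3080/   # prints 200 once it is up
</code>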

You can find your Ollama public key under **/data/llm/ollama/ollama/id_ed25519.pub** (the host side of the volume mapped to ///root/.ollama// in the compose file above).

To start using your own offline LLM:
  * Login to the Open WebUI page (ai.mydomain.com)
  * At first login you will be prompted to create the admin user, do so.
  * Before chatting, you need to set up a model on Ollama
  * Go to //admin panel / settings / connections//
  * Under Ollama, edit the connection to the URL **http://ollama:11434**, and paste your Ollama key (see above)
  * Now tap on the small download-like icon on the right of the URL
  * You need to write a model name (e.g. deepseek-r1) and download it (or pull it from the command line, see the sketch below this list)
  * There will be no notification when the download is finished, but the model(s) will be displayed under the //models// page in the admin panel
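
If you prefer the command line, the same result can be achieved by pulling the model directly inside the Ollama container (the **ollama** name matches the //container_name// set in the compose file above):
<code bash>
podman exec -it ollama ollama pull deepseek-r1   # download the model
podman exec -it ollama ollama list               # verify it shows up
</code>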

At this point, your LLM is ready and operative!


===== Autostart =====

To start it and set it up at boot, follow as usual my indications in [[gentoo:containers|Using Containers on Gentoo]], so link the **user-containers** init script:
<code>
ln -s /etc/init.d/user-containers /etc/init.d/user-containers.openwebui
</code>

and create the following config file:
<file - /etc/conf.d/user-containers.openwebui>
USER=openwebui
DESCRIPTION="Open web AI interface"
</file>

Add the service to the default runlevel and start it now:
<code bash>
rc-update add user-containers.openwebui default
rc-service user-containers.openwebui start
</code>