====== Open WebUI & Ollama ======
  
[[https://github.com/open-webui/open-webui|Open WebUI]] is a web-based frontend to LLM models that lets you run your own private chatbot, or AI models in general.

[[https://ollama.com/|Ollama]] is a tool for running a collection of open AI / LLM models locally, and can be used with Open WebUI.

Both can easily be installed with containers.
  
===== Installation =====
  
To install Open WebUI you need, of course, its dedicated user, and you will also need some persistent folders to map as volumes into the containers. I chose to put these folders under **/data/llm**.

So add the user and create the folders:
<code bash>
useradd -d /data/daemons/openwebui -m openwebui
mkdir /data/llm
chown openwebui:openwebui /data/llm
su - openwebui
cd /data/llm
mkdir webui-data
mkdir ollama
mkdir ollama/code
mkdir ollama/ollama
</code>
  
This is the compose file I am using; adapt it to your needs:
<file - docker-compose.yaml>
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3080:8080"
    volumes:
      - /data/llm/webui-data:/app/backend/data
    networks:
      - openwebui-net

  ollama:
    image: docker.io/ollama/ollama:latest
    ports:
      - "3081:11434"
    volumes:
      - /data/llm/ollama/code:/code
      - /data/llm/ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
    networks:
      - openwebui-net

networks:
  openwebui-net:
    dns_enabled: true
</file>
  
This setup pulls both Ollama and Open WebUI into the same container stack, which allows for seamless integration and neat organization on the server itself.
  
It also lets you access your Ollama instance from //outside// the container on port 3081, which should **NOT** be forwarded by the proxy server, because it is meant only for home access. The Open WebUI instance will instead be available on port 3080 and accessible through the web proxy, see below.
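
To bring everything up, run your compose tool in the folder where the compose file lives and then check that Ollama answers on the mapped port. A minimal sketch, assuming **docker compose** (swap in **podman-compose** if that is what you use):

<code bash>
# Start both containers in the background (run from the folder containing docker-compose.yaml)
docker compose up -d

# Sanity check from the host: Ollama should answer on the mapped port 3081
curl http://localhost:3081/api/version

# List the locally installed models (empty until you download one, see the Configuration section)
curl http://localhost:3081/api/tags
</code>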

===== Reverse Proxy =====
  
Open WebUI can be hosted on a subdomain; let's assume you choose **ai.mydomain.com**.
<file - ai.mydomain.com.conf>
server {
        server_name ai.mydomain.com;

        listen 8443 ssl;
        http2 on;

        # ssl_certificate / ssl_certificate_key and your usual TLS settings go here
        # (see the Reverse Proxy page linked below)

        access_log /var/log/nginx/ai.mydomain.com_access_log main;

        location / {
                proxy_pass        http://127.0.0.1:3080/; # The / is important!
                proxy_set_header  X-Script-Name /;
                proxy_set_header  Host $http_host;
        }
}
</file>
Add this config file to NGINX (see [[selfhost:nginx|The Reverse Proxy concept]] for more details) and restart nginx.
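
For example, assuming nginx runs as a systemd service:

<code bash>
# Check the configuration for syntax errors, then reload nginx
nginx -t && systemctl reload nginx
</code>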
  
Now go with your browser to **https://ai.mydomain.com** to finish the setup.
  
===== Configuration =====

After you start the containers, be ready to wait a good ten minutes or more until the web GUI is operative. YMMV of course, depending on your server's capabilities.
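
To follow the startup progress you can watch the container logs; a quick sketch, assuming the stack was started with **docker compose** (//openwebui// is the service name from the compose file above):

<code bash>
# Follow the Open WebUI logs to see the startup progress
docker compose logs -f openwebui
</code>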

You can find your Ollama public key under **/data/llm/ollama/ollama/id_ed25519.pub**.
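
With the volume mapping from the compose file above, you can print the key from the host, or read it directly inside the container:

<code bash>
# Read the key from the mapped volume on the host...
cat /data/llm/ollama/ollama/id_ed25519.pub

# ...or directly from inside the container
docker exec ollama cat /root/.ollama/id_ed25519.pub
</code>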

To start using your own offline LLM:
  * Log in to the Open WebUI page (ai.mydomain.com).
  * At the first login you will be prompted to create the admin user; do so.
  * Before chatting, you need to set up a model on Ollama.
  * Go to //admin panel / settings / connections//.
  * Under Ollama, edit the connection URL to **http://ollama:11434** and paste your Ollama key (see above).
  * Now tap the small download-like icon on the right of the URL.
  * Type a model name (e.g. deepseek-r1) and download it.
  * There is no notification when the download finishes, but the model(s) will be listed under the //models// page in the admin panel.

At this point, your LLM is ready and operative!
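
If you prefer the command line over the web UI, you can also pull models straight through the Ollama container; a sketch, assuming the **deepseek-r1** model from the steps above and the port mapping from the compose file:

<code bash>
# Pull a model with the ollama CLI inside the container (container_name is "ollama")
docker exec -it ollama ollama pull deepseek-r1

# Or ask the Ollama API from the host, on the mapped port 3081
curl http://localhost:3081/api/pull -d '{"model": "deepseek-r1"}'
</code>

Either way, the downloaded model(s) will show up in the //models// page mentioned above.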