====== Open WebUI & Ollama ======

[[https://openwebui.com|Open WebUI]] is a self-hosted web interface to chat with LLMs.

[[https://ollama.com|Ollama]] is the engine that actually runs the LLM models locally.

Both can be installed easily with containers.
===== Installation =====
To install Open WebUI you need, of course, its dedicated user, and you will also need some persistent folders to map as volumes in the containers. I chose to put these folders under **/data/llm**.

So add the user and create the folders:
<code bash>
useradd -d /data/llm openwebui
mkdir /data/llm
chown openwebui:openwebui /data/llm
su - openwebui
cd /data/llm
mkdir webui-data
mkdir ollama
mkdir ollama/code
mkdir ollama/ollama
</code>
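Just to double-check before moving on, this is the layout you should end up with (the mount points refer to the compose file below):
<code bash>
# /data/llm/webui-data -> Open WebUI data (/app/backend/data inside the container)
# /data/llm/ollama     -> Ollama folders (models and keys end up under ollama/ollama)
ls -R /data/llm
</code>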
This is the compose file I am using; adapt it to your needs:
<file - docker-compose.yaml>
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3080:8080"
    volumes:
      - /data/llm/webui-data:/app/backend/data
    networks:
      - openwebui-net
  ollama:
    image: docker.io/ollama/ollama:latest
    ports:
      - 3081:11434
    volumes:
      - /data/llm/ollama/code:/code
      - /data/llm/ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
    networks:
      - openwebui-net

networks:
  openwebui-net:
    dns_enabled: true
</file>
This setup pulls both Ollama and Open WebUI in the same container stack, which allows for a seamless integration and a neat organization on the server itself.

This setup also lets you access your Ollama instance from //outside// the container, on port 3081, which should **NOT** be forwarded on the proxy server, because it's only for home access. The Open WebUI instance will instead be available on port 3080 and accessible through the web proxy, see below.
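To bring the stack up the first time and verify that both containers answer, something along these lines should work (podman-compose is assumed here, docker compose behaves the same):
<code bash>
# Start the stack from the folder containing docker-compose.yaml
podman-compose up -d
# Ollama should reply with its version as JSON
curl http://localhost:3081/api/version
# Open WebUI should reply with HTTP 200 once it has finished starting up
curl -I http://localhost:3080
</code>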
===== Reverse Proxy =====
Open WebUI can be hosted on a subdomain; let's assume you choose **ai.mydomain.com**.
Create the following NGINX configuration file:
<file - ai.mydomain.com.conf>
server {
        server_name ai.mydomain.com;

        listen 8443 ssl;
        http2 on;

        access_log /var/log/nginx/ai.mydomain.com_access_log main;
        error_log /var/log/nginx/ai.mydomain.com_error_log info;

        location / {
                proxy_pass http://127.0.0.1:3080;
                proxy_http_version 1.1;
                proxy_set_header Host $host;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
        }
}
</file>
Add this config file to NGINX (see [[selfhost:nginx|the NGINX page]]) and reload it.
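To validate and apply the new configuration, and to probe the vhost locally, something like this works on an OpenRC system:
<code bash>
# Check the syntax, reload NGINX, then probe the vhost without relying on DNS
nginx -t
rc-service nginx reload
curl -kI -H "Host: ai.mydomain.com" https://127.0.0.1:8443
</code>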
Now go with your browser to **https://ai.mydomain.com** and move on to the configuration below.
===== Configuration =====
After you start the containers, be ready to wait a good ten minutes or more until the web GUI is operative. YMMV of course, depending on your server's capabilities.

You can find your Ollama public key under the **/data/llm/ollama** folder, in case you need it.
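To see when the web GUI is actually ready you can follow the container logs; a minimal sketch, assuming podman and the volume layout from the compose file above:
<code bash>
# List the running containers and follow the Ollama logs
podman ps
podman logs -f ollama
# The Ollama keypair lives in the mapped .ollama volume (path assumes the compose file above)
cat /data/llm/ollama/ollama/id_ed25519.pub
</code>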
To start using your own offline LLM:
  * Login to the Open WebUI page (**ai.mydomain.com**)
  * At the first login you will be prompted to create the admin user: do so.
  * Before chatting, you need to set up a model on Ollama:
  * Go to //admin panel / settings / connections//
  * Under Ollama, edit the URL to **http://ollama:11434**
  * Now tap on the small download-like icon on the right of the URL
  * Write a model name (ex: deepseek-r1) and download it
  * There will be no notification when the download is finished, but the model(s) will show up under the //models// page of the admin panel
At this point, your LLM is ready and operative!
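If you prefer the command line over the web UI, models can also be pulled directly through Ollama; a sketch assuming the port and container name from the compose file above (the model name is just an example):
<code bash>
# Pull a model through the Ollama API exposed on the host
curl http://localhost:3081/api/pull -d '{"model": "deepseek-r1"}'
# Or run the ollama CLI inside the container
podman exec -it ollama ollama pull deepseek-r1
# List the models Ollama knows about
curl http://localhost:3081/api/tags
</code>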
===== Autostart =====
To start it, and set it up on boot, as usual follow my indications [[gentoo:containers|here]]:
<code bash>
ln -s /etc/init.d/user-containers /etc/init.d/user-containers.openwebui
</code>

and create the following config file:
<file - /etc/conf.d/user-containers.openwebui>
USER=openwebui
DESCRIPTION="Open WebUI & Ollama"
</file>
Add the service to the default runlevel and start it now:
<code bash>
rc-update add user-containers.openwebui default
rc-service user-containers.openwebui start
</code>
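A quick check that everything comes back up after a reboot (podman assumed, containers running as the **openwebui** user):
<code bash>
# The OpenRC service should report "started"
rc-service user-containers.openwebui status
# And the containers should be running under the openwebui user
su - openwebui -c "podman ps"
</code>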