Open WebUI & Ollama

Open WebUI is a web-based frontend to LLM models, and lets you run your own private chatbot, or AI models in general.

Ollama is a runtime that lets you download and run a collection of open AI / LLM models, and it can be used with Open WebUI.

Both can easily be installed as containers.

Installation

To install Open WebUI, of course, you need its dedicated user, and you will also need some persistent folders to map as volumes into the containers. I chose to put these folders under /data/llm.

So add the user and create the folders:

useradd -d /data/daemons/openwebui -m openwebui
mkdir /data/llm
chown openwebui:openwebui /data/llm
su - openwebui
cd /data/llm
mkdir webui-data
mkdir ollama
mkdir ollama/code
mkdir ollama/ollama

Open WebUI can be installed on bare metal, without containers, using pip, but due to its strict Python requirement (3.11 at the time of writing) this is not recommended (Gentoo already ships Python 3.13).
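
For reference only, here is a minimal sketch of what the bare-metal route would look like, assuming you somehow have a Python 3.11 interpreter available (the venv path is just an example):

# a sketch only: requires a Python 3.11 interpreter, which plain Gentoo does not provide
python3.11 -m venv /data/llm/openwebui-venv
. /data/llm/openwebui-venv/bin/activate
pip install open-webui
open-webui serve    # listens on port 8080 by default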

Let's go the container way, using of course Podman compose.

From this page, select “docker compose” and use the provided example as a reference.

This is the compose file I am using, adapt it to your needs:

docker-compose.yaml
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3080:8080"
    volumes:
      - /data/llm/webui-data:/app/backend/data
    networks:
      - openwebui-net

  ollama:
    image: docker.io/ollama/ollama:latest
    ports:
      - 3081:11434
    volumes:
      - /data/llm/ollama/code:/code
      - /data/llm/ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
    networks:
      - openwebui-net

networks:
  openwebui-net:
    dns_enabled: true

This setup pulls both Ollama and Open WebUI into the same container stack. This allows for seamless integration and neat organization on the server itself.

This setup will let you access your Ollama instance from outside the container on port 3081, which should NOT be forwarded on the proxy server, because it's only for home access. The Open WebUI instance will instead be available on port 3080 and accessible through the web proxy, see below.
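
To bring the stack up manually the first time and check that Ollama is answering, something like this should do (I assume here that the compose file sits in the openwebui user's home folder, adapt the path to wherever you keep yours):

su - openwebui
cd /data/daemons/openwebui    # wherever your docker-compose.yaml lives
podman-compose up -d
curl http://localhost:3081/api/version    # Ollama should reply with its version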

Reverse Proxy

Open WebUI can be hosted on a subdomain; let's assume you choose ai.mydomain.com.

As usual you want it protected by the Reverse Proxy, so create the ai.conf file:

ai.conf
server {
        server_name ai.mydomain.com;
        listen 443 ssl;
        listen 8443 ssl;
        http2 on;

        access_log /var/log/nginx/ai.mydomain.com_access_log main;
        error_log /var/log/nginx/ai.mydomain.com_error_log info;

        location / { # The trailing / is important!
                proxy_pass        http://127.0.0.1:3080/; # The / is important!
                proxy_set_header  X-Script-Name /;
                proxy_set_header  Host $http_host;
        }

        include com.mydomain/certbot.conf;
}

Add this config file to NGINX (see The Reverse Proxy concept for more details) and restart nginx.
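
Note that Open WebUI streams chat responses over WebSockets: if responses stall or hang behind the proxy, you might need to add the usual upgrade headers to the location block, for example:

        location / {
                proxy_pass         http://127.0.0.1:3080/;
                proxy_http_version 1.1;
                proxy_set_header   Upgrade $http_upgrade;
                proxy_set_header   Connection "upgrade";
                proxy_set_header   Host $http_host;
        }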

Now go with your browser to https://ai.mydomain.com to finish the setup. Note that the first account you create will be the administrator account.

Configuration

After you start the containers, be ready to wait a good ten minutes or more until the web GUI is operative. YMMV of course, depending on your server's capabilities.
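
If you want to follow what is going on in the meantime, check the containers from the openwebui user (the Open WebUI container name is generated by podman-compose, so look it up with podman ps first):

su - openwebui
podman ps
podman logs -f llm_openwebui_1    # adapt to the actual name shown by podman ps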

You can find your Ollama public key under /data/llm/ollama/ollama/id_ed25519.pub

To start using your own offline LLM, you need to download at least one model into Ollama, either from the Open WebUI interface or directly from the Ollama container, as shown below.
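
For example, you can pull a model straight into the ollama container (llama3.2 here is just an example, pick whatever you like from the Ollama library):

su - openwebui
podman exec -it ollama ollama pull llama3.2
podman exec -it ollama ollama list    # verify the model is available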

At this point, your LLM is ready and operative!

Autostart

To start it, and set it up at boot, as usual follow my indications in Using Containers on Gentoo, so link the user-containers init script:

ln -s /etc/init.d/user-containers /etc/init.d/user-containers.openwebui

and create the following config file:

/etc/conf.d/user-containers.openwebui
USER=openwebui
DESCRIPTION="Open web AI interface"

Add the service to the default runlevel and start it now:

rc-update add user-containers.openwebui default
rc-service user-containers.openwebui start
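
To verify that everything came up properly, check the service status and the running containers:

rc-service user-containers.openwebui status
su - openwebui -c "podman ps"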