
File Server

I will not discuss how to share your files on the home network using legacy tools like NFS or SAMBA: there are plenty of tutorials online and, besides, it's kind of out of scope for self-hosting.

I will focus on how to provide access via web browser and via WebDAV, which is a web-based sharing protocol a bit like NFS or SAMBA, but aimed at broader internet access rather than intranet access.

The idea is to create share areas where your users will be able to store files. It is possible to extend this idea to user-specific areas, where each user can put private stuff not visible to other users, but this requires a bit of extra complexity and might be addressed in the future.

You will be using your SSO authentication, so there will be no need to create new users anywhere, and external access will of course be protected by the Reverse Proxy.

A future upgrade might switch to sFtpGo instead of FileBrowser + Apache.

Overall Architecture and Shares

This solution leverages two tools:

- FileBrowser, to provide access to the shares from a web browser.
- Apache, to provide access to the shares via WebDAV.

The NGINX reverse proxy will integrate with your preferred SSO authentication and add the HTTPS layer to ensure all access is properly encrypted.

I will assume that your shares are under /shares, but of course each share can be located anywhere you like. Let's also assume, as an example, that your share is called /shares/common and is managed by the user fileserver of the group users. The requirements for users and groups will be detailed later on.

Each share folder will have the following structure:

- /shares/common/data: the shared content itself
- /shares/common/db: the FileBrowser database
- /shares/common/webdav: temporary files used by the Apache WebDAV instance

This structure is provided as an example to follow; of course you can move the individual folders where you prefer. The only caveat is that, for security reasons, the db and webdav folders should not be inside the data folder.

You will also need to assign two ports for each share; as an example, for our common share:

- 3002 for the FileBrowser instance (browser access)
- 10001 for the Apache WebDAV instance

Any additional share can simply take the next available port numbers going up.

I chose to assign a dedicated subdomain, drive.mydomain.com, as file server and to organize the shares like this:

- https://drive.mydomain.com/common for browser access to the common share
- https://drive.mydomain.com/webdav/common for WebDAV access to the common share

You can add as many folders as separate shares as you like. Due to how WebDAV works, it is mandatory to separate the browser-accessible URLs from the WebDAV ones, like I did above.

Permissions and Users

Each share will be accessible by different users, so this needs to be planned a bit. For user-specific shares, not much needs to be done except run FileBrowser for the specific share as the specific user. This is left as an exercise for you.
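As a hint, a user-specific share boils down to one more FileBrowser instance running as that user (see the FileBrowser section below for the file format). A hypothetical /etc/conf.d/filebrowser.alice, for an imaginary user alice with her data under /home/alice, could look like this:

filebrowser.alice
BASE_URL="/alice"
DATABASE="/home/alice/db/filebrowser_alice.db"
DESCRIPTION="Alice's private archive"
FOLDER="/home/alice/data"
GROUP="alice"
PORT=3003
USER="alice"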

For common shares instead, it's important to create one common user, which I will call the fileserver user, to run the associated services, and to create the /shares/common folder.

You need to assign that folder to the users group and the fileserver user:

useradd -d /shares/common -m fileserver -g users

You need to set the umask for the fileserver user to 0002 so that any new files it creates will be writable by the users group. Also, create the db folder, where the FileBrowser database will be located, and the webdav folder, where the Apache WebDAV temporary files will be located, and of course don't forget the data folder, where you can put the shared content itself:

su - fileserver
echo "umask 0002" >> ~/.bashrc
source ~/.bashrc
mkdir db
mkdir webdav
mkdir data
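If the share folder or some of its content was created earlier with a different owner, this is one way to bring ownership and group-write permissions back in line (a quick sketch, assuming the /shares/common layout above):

# give the whole share to the fileserver user and the users group
chown -R fileserver:users /shares/common
# make the shared data group-writable, matching the 0002 umask
chmod -R g+w /shares/common/data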

Fileserver access via Browser

I am currently using FileBrowser because it's lightweight, doesn't get in the way, and is flexible and simple to use. Check the linked page for the generic installation instructions; here are the specific details for this case.

I assume FileBrowser has been installed on your system already by following my guide.

You will need to run one instance of FileBrowser for each share, so you will need to allocate one specific port for each share. I will describe how to run it for the common share, so FileBrowser will run as the fileserver user that you created above.

So, create the specific /etc/conf.d/filebrowser.common:

filebrowser.common
BASE_URL="/common"
DATABASE="/shares/common/db/filebrowser_common.db"
DESCRIPTION="Common web archive"
FOLDER="/shares/common/data"
GROUP="users"
PORT=3002
USER="filebrowser"

Create the init.d symlink too, and start it. Of course, choose a free port (3002 in this example). See my FileBrowser instructions page.
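For reference, assuming the base init script from the FileBrowser guide is installed as /etc/init.d/filebrowser, the symlink, autostart and startup commands would be:

ln -s /etc/init.d/filebrowser /etc/init.d/filebrowser.common
rc-update add filebrowser.common default
/etc/init.d/filebrowser.common start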

Fileserver access via WebDAV

NOTE: using HTTP will cause a 301 redirect to HTTPS, and WebDAV clients will fail. So use the HTTPS URL in WebDAV clients, not HTTP.

While there are a few WebDAV servers around, like Dave, they seem to be either unmaintained or overly complicated. NGINX can also act as a WebDAV server, but it seems to be buggy and does not support LOCK, so I decided to go with the Apache web server, which has a long-standing WebDAV implementation.

The idea here is to run a dedicated copy of Apache as user fileserver and group users so that it can access and manage the shared files. So first you need to emerge apache:

emerge apache

WebDAV is enabled by default in the Gentoo Apache ebuild, so there is no need to tweak USE flags.
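If you want a quick sanity check that the DAV modules are actually there (assuming the default 64-bit module path, the same one used by the init script below):

ls /usr/lib64/apache2/modules/ | grep dav
# you should see mod_dav.so, mod_dav_fs.so and mod_dav_lock.so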

You will not be running Apache as a system service, because that would mess with our user permission approach. I have prepared the following init script that starts a separate Apache instance for each of your shares. Drop the following file into /etc/init.d/webdav:

webdav
#!/sbin/openrc-run
# Copyright 2024 Willy Garidol
# Distributed under the terms of the GNU General Public License v3

depend() {
        need localmount net
}

# Name of the share
WD_SHARE_NAME="${SHARE_NAME}"
# Where is the original data
WD_DATA_FOLDER="${DATA_FOLDER}"
# Where WebDAV temporary stuff will be located
WD_TEMP_FOLDER="${TEMP_FOLDER}"
WD_ROOT_FOLDER="${WD_TEMP_FOLDER}/root"
WD_MOUNT_FOLDER="${WD_TEMP_FOLDER}/root/webdav/${WD_SHARE_NAME}"
WD_LOCKS_FOLDER="${WD_TEMP_FOLDER}/locks"

WD_TIMEOUT=${TIMEOUT:-5}
WD_LOG_PATH="/var/log/webdav"
WD_SLOT="${SVCNAME#webdav.}"
WD_USER=${USER:-${WD_SLOT}}
WD_GROUP=${GROUP:-${WD_SLOT}}

description=${DESCRIPTION:-WebDAV starter}
pidfile="/run/${RC_SVCNAME}.pid"
apache_args=(
-c "ServerRoot /usr/lib64/apache2"
-c "LoadModule dav_module modules/mod_dav.so"
-c "LoadModule dav_fs_module modules/mod_dav_fs.so"
-c "LoadModule dav_lock_module modules/mod_dav_lock.so"
-c "Include /etc/apache2/modules.d/*.conf"
-c "User ${WD_USER}"
-c "Group ${WD_GROUP}"
-c "DavLockDB ${WD_TEMP_FOLDER}/locks"
-c "PidFile ${pidfile}"
-c "ErrorLog ${WD_LOG_PATH}/${WD_SLOT}/error.log"
-c "CustomLog ${WD_LOG_PATH}/${WD_SLOT}/access.log common"
-c "DocumentRoot ${WD_ROOT_FOLDER}"
-c "ServerName 127.0.0.1"
-c "Listen 127.0.0.1:${PORT}"
-c "<Directory ${WD_ROOT_FOLDER}>"
-c " DAV On"
-c " AllowOverride All"
-c " Options -Indexes +FollowSymlinks -ExecCGI -Includes"
-c " Require all granted"
-c "</Directory>"
-c "SetEnv redirect-carefully"
)

start_pre() {
        # script must be run with ".sharename" symlink:
        if [ "${WD_SLOT}" = "webdav" ]
        then
                ebegin "Error: do not run this script, run a link to it!"
                eend 255
                return 255
        fi
        # Data folder must exist:
        if [ -z "${WD_DATA_FOLDER}" ] || [ ! -d "${WD_DATA_FOLDER}" ]
        then
                ebegin "Error: DATA_FOLDER must be defined and path must exist!"
                eend 255
                return 255
        fi
        # Create log paths
        test -e "${WD_LOG_PATH}" || mkdir "${WD_LOG_PATH}"
        test -e "${WD_LOG_PATH}/${WD_SLOT}" || {
                ebegin "Creating log path '${WD_LOG_PATH}/${WD_SLOT}'"
                mkdir "${WD_LOG_PATH}/${WD_SLOT}"
        } && chown -R ${WD_USER} "${WD_LOG_PATH}/${WD_SLOT}"
        # Create all temporary paths:
        for path in ${WD_TEMP_FOLDER} ${WD_ROOT_FOLDER} ${WD_MOUNT_FOLDER} ${WD_LOCKS_FOLDER}
        do
                test -e ${path} || {
                        ebegin "Creating '${path}' path"
                        mkdir -p ${path}
                        chown ${WD_USER}:${WD_GROUP} ${path}
                }
        done
        test -z "$(mount | grep ${WD_MOUNT_FOLDER})" && {
                ebegin "Mounting/binding root path '${WD_DATA_FOLDER}' -> '${WD_MOUNT_FOLDER}'"
                mount -o bind ${WD_DATA_FOLDER} ${WD_MOUNT_FOLDER}
        }
        eend 0
}

start() {
        start-stop-daemon -w ${WD_TIMEOUT} --start --pidfile "${pidfile}" -- \
                /usr/bin/apache2 "${apache_args[@]}"
        eend $?
}

stop_post() {
        test -n "$(mount | grep ${WD_MOUNT_FOLDER})" && {
                ebegin "Unmounting/unbinding root path '${WD_DATA_FOLDER}' -|-> '${WD_MOUNT_FOLDER}'"
                umount ${WD_MOUNT_FOLDER}
        }
        eend 0
}

and make it executable:

chmod +x /etc/init.d/webdav

Create Apache configuration files for each share

By using the above init script, defining a new share means creating a symlink to that script plus the associated config file.

For our common example share, create the following /etc/conf.d/webdav.common:

webdav.common
DESCRIPTION="Common WebDAV archive"
# this must point to where your data to be shared is located
DATA_FOLDER="/deposito/shares/common/data"
# this will contain temporary webdav stuff, will be created if missing
TEMP_FOLDER="/deposito/shares/common/webdav"
# this refers to the URL "https://drive.mydomain.com/webdav/<this part of the url>"
SHARE_NAME="common"
GROUP="users"
USER="filebrowser"
PORT=10001

Note the port, it needs to be unique and available.
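A quick way to verify that the chosen port is free before starting the service (any equivalent tool works):

ss -tlnp | grep 10001
# no output means nothing is listening on that port yet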

Create the symlink:

ln -s /etc/init.d/webdav /etc/init.d/webdav.common

Prepare Apache folders for each share

The above-mentioned init script will create all the needed sub-folders for you, but here is a recap for the common share:

- /shares/common/webdav: the TEMP_FOLDER, holding all WebDAV runtime files for this share
- /shares/common/webdav/root: the Apache DocumentRoot
- /shares/common/webdav/root/webdav/common: the mount point where the data folder gets bind-mounted
- /shares/common/webdav/locks: where the WebDAV lock database lives
- /var/log/webdav/common: the Apache logs for this share

They will be created by the init script if missing, and never deleted if they already exist.

Messing with the WebDAV root folder

Now, the fun part is that you want to protect this behind the NGINX reverse proxy (for HTTPS and authorization reasons), and WebDAV does not play well with URL redirection and similar funny things. In other words, the base URL you use on the reverse proxy must match the URL served by Apache: you cannot use rewrite directives or Alias tricks.

Since you will be exposing the browser-based access as https://drive.mydomain.com/common and the WebDAV access as https://drive.mydomain.com/webdav/common, it means you need to connect your /shares/common/data folder to /shares/common/webdav/root/webdav/common for it to work. Nicely messed up, eh?

Since symbolic links cannot be used by WebDAV (could it be that simple?), the only viable option is mount -o bind. This is taken care of automatically in the above init script.
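For clarity, this is the manual equivalent of what the init script does for the common share at start and stop:

# at startup: expose the data folder under the WebDAV document root
mount -o bind /shares/common/data /shares/common/webdav/root/webdav/common
# at shutdown: detach it again
umount /shares/common/webdav/root/webdav/common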

Start Apache for the share (and autostart)

Since you have already created the share specific startup script symlink and the associated config file, all you need to do is add it to the default runlevel and start it:

rc-update add webdav.common default
/etc/init.d/webdav.common start
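Once started, you can check locally that Apache answers WebDAV requests before involving the reverse proxy (the exact headers may vary slightly):

curl -i -X OPTIONS http://127.0.0.1:10001/webdav/common/
# look for a "DAV: 1,2" response header and an "Allow:" header listing PROPFIND, MKCOL, etc.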

Reverse Proxy and wrap-up

Everything is protected behind the NGINX Reverse Proxy, so you should create the following config file for the drive subdomain:

drive.conf
server {
        server_name drive.mydomain.com;
        listen 443 ssl; 
        listen 8443 ssl; 
        http2 on;

        access_log /var/log/nginx/drive.mydomain.com_access_log main;
        error_log /var/log/nginx/drive.mydomain.com_error_log info;

        # WebDAV requires basic auth, while normal auth can be used with FileBrowser
        include "com.mydomain/authelia_location.conf";
        include "com.mydomain/authelia_location-basic.conf";

        location / {
                include "com.mydomain/authelia_proxy.conf";
                include "com.mydomain/authelia_authrequest.conf";
                root /home/web/drive;
        }

        location = /common {
                 return 301 https://$host/common/;
        }

        location /common/ {
                include "com.mydomain/authelia_proxy.conf";
                include "com.mydomain/authelia_authrequest.conf";
                client_max_body_size 512M;
                proxy_pass http://127.0.0.1:3002;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection $http_connection;
                proxy_cache_bypass $http_upgrade;
        }

        location /webdav/common {
                include "com.mydomain/authelia_proxy.conf";
                include "com.mydomain/authelia_authrequest-basic.conf";

                # https://mailman.nginx.org/pipermail/nginx/2007-January/000504.html - fix Destination: header
                # https://trac.nginx.org/nginx/ticket/348 - bug, workaround with named capture
                set $dest $http_destination;
                if ($http_destination ~ "^https://(?<myvar>(.+))") {
                        set $dest http://$myvar;
                }

                # Warning: adding / at the end of the proxy_pass will break WebDAV!
                proxy_pass http://127.0.0.1:10001;
                proxy_buffering off;
                gzip off;
                proxy_pass_request_headers on;
                proxy_set_header Destination $dest;
        }
        client_max_body_size 100M;
}

This example also shows how I have integrated SSO authentication with the file server.

Refer to the Reverse Proxy concept page to activate this specific NGINX configuration. Of course, you need to create the Let's Encrypt certificates and the subdomain in your DNS provider.
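As a reminder of the usual wrap-up steps (hypothetical commands, adapt them to your certbot and OpenRC setup):

# test and reload the NGINX configuration
nginx -t && rc-service nginx reload
# request the certificate for the new subdomain
certbot certonly --nginx -d drive.mydomain.com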

Main Directory Page

As you can spot from the above NGINX configuration, I have defined a common landing page on https://drive.mydomain.com to provide a nice way to access the individual shares.

For this i am using my Simple Dashboard with the following site.json:

site.json
{
    "title" : "My Drive Title",
    "header" : {
        "img" : "",
        "text" : "My Drive"
    },
    "content" : [
        {
            "foldable": false,
            "title": "",
            "content" : [
                {
                    "img" : "images/folder.png",
                    "text" : "Common",
                    "link" : "/common/",
                    "style" : "box-inline",
                    "new_page" : true
                }
            ]
        }
    ],
    "footer" : {
        "img" : "",
        "text" : "back home",
        "style" : "footer-light",
        "link" : "https://home.mydomain.com"
    }
}

Experimental stuff

Just some additional experiments I did, for future reference.

Nephele-Serve

Replacing Apache WebDAV with Nephele-Serve (which will also support CardDAV/CalDAV in the future).

https://www.npmjs.com/package/nephele-serve
https://github.com/sciactive/nephele

NPM needs to be enabled for the fileserver user:

NPM_PACKAGES="$HOME/.npm-packages" 
mkdir -p "$NPM_PACKAGES"  
echo "prefix = $NPM_PACKAGES" >> ~/.npmrc

And in ~/.bashrc:

# NPM packages in homedir
NPM_PACKAGES="$HOME/.npm-packages"
# Tell our environment about user-installed node tools
export PATH="$NPM_PACKAGES/bin:$PATH"
# Unset manpath so we can inherit from /etc/manpath via the `manpath` command
unset MANPATH # delete if you already modified MANPATH elsewhere in your configuration
export MANPATH="$NPM_PACKAGES/share/man:$(manpath)"
# Tell Node about these packages
export NODE_PATH="$NPM_PACKAGES/lib/node_modules:$NODE_PATH"

Install:

source ~/.bashrc  
npm install -g nephele-serve

Advantages: it's a simple server that supports pam_auth. In the future, it might also replace Radicale with a single service.

Disadvantages: it does not support a base URL, so it cannot be hosted under /webdav even behind the reverse proxy.

sFtpGo WebDAV / web browser

Interestingly, sFtpGo supports both web-browser access and WebDAV from a single tool.

You need to start it once, then edit sftpgo.json (relevant excerpt):

"external_auth_hook": "/data/daemons/fileserver/login.sh",
"webdavd": {
    "bindings": [
      {
        "port": 10001, 
        "address": "127.0.0.1",
        "enable_https": false,
        "certificate_file": "",
        "certificate_key_file": "",
        "min_tls_version": 12,
        "client_auth_type": 0,
        "tls_cipher_suites": [],
        "prefix": "/webdav/common",
        "proxy_allowed": [],
        "client_ip_proxy_header": "",
        "client_ip_header_depth": 0,
        "disable_www_auth_header": false
      }
    ],

Advantages: easier to set up than Apache, and it supports a base URL.

Disadvantages: it cannot use pam_auth and authentication cannot be disabled, so you get double authentication behind the reverse proxy, which might be annoying.