====== File Server ======

I will not discuss how to share your files on the home network using __legacy__ tools like NFS or SAMBA.

I will focus on how to provide access via __web browser__ and via __WebDAV__, which is a web-based sharing protocol a bit like NFS or SAMBA, but aimed at broader //in**ter**net// access.

The idea is to create share areas where your users will be able to store files. It is possible to extend this idea also to user-specific areas, where each user can put private stuff not visible to other users, but this requires a little extra complexity.

You will be using your SSO authentication, so that access stays properly protected.
===== Overall Architecture =====

This solution leverages different tools:
  * FileBrowser or Cloud Commander to provide access via web browser
  * Apache to provide access via WebDAV
  * NGINX as reverse proxy in front of everything

Note: choosing between FileBrowser and Cloud Commander is a matter of preference. I use both, for different kinds of shares.

The NGINX reverse proxy will integrate with your preferred SSO authentication solution.

I will assume that your shares are under **/shares**, but of course each share can be located anywhere you like. Let's also assume, as an example, that your share is called __common__ and located under **/shares/common**.
Each share folder will contain a few sub-folders; for our //common// example:
  * /shares/common: the root of the share (and home of the **fileserver** user, see below)
  * /shares/common/db: where the FileBrowser database will be stored
  * /shares/common/webdav: where the Apache WebDAV temporary stuff will be located
  * /shares/common/data: where the actual shared files will be stored

This structure is provided as an example, of course you can move the individual folders where you prefer. The only caveat is that, for security reasons, the **db** and **webdav** folders should **not** be inside the **data** folder.
You will also need to assign two ports for each share; as an example, for our //common// share:
  * 3002: port for FileBrowser or Cloud Commander
  * 10001: port for the Apache WebDAV server

Any other share can start from these port numbers and go up in numbering.

I choose to assign a dedicated subdomain, **drive.mydomain.com**, so the URLs will be:
  * **https://drive.mydomain.com/**: the main directory page listing all the shares
  * **https://drive.mydomain.com/common**: browser access to the //common// share
  * **https://drive.mydomain.com/webdav/common**: WebDAV access to the //common// share

You can add more folders as separate shares as you like. Due to how WebDAV works, it is mandatory to separate the browser-accessible URLs from the WebDAV ones, like I did above.
=== Permissions and Users ===

(Note: you should configure both FileBrowser and Cloud Commander to run as the **fileserver** user)

Each share will be accessible by different users, so this needs to be planned a bit. For user-specific shares, not much needs to be done except run FileBrowser/Cloud Commander as that specific user.

For common shares instead, it's important that every file stays readable and writable by all the users of the share.

You need to assign the share folder to a dedicated **fileserver** user, which belongs to the **users** group:
<code bash>
useradd -d /shares/common -g users fileserver
</code>
You need to set the //umask// for this user to **0002**, so that any new file it creates will be writable by the whole group, then create the sub-folders:
<code bash>
su - fileserver
echo "umask 0002" >> ~/.bashrc
source ~/.bashrc
mkdir db
mkdir webdav
mkdir data
</code>
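As an optional extra step (an assumption on my side, adapt it to your layout), you can also make sure the **data** folder itself is group-writable and that new entries inherit the **users** group:
<code bash>
# assumption: /shares/common/data is the shared data folder created above
chgrp users /shares/common/data
# group-writable plus setgid, so new files and folders keep the "users" group
chmod 2775 /shares/common/data
</code>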
===== Fileserver access via Browser =====

Both FileBrowser and Cloud Commander (see my dedicated pages on them for the installation instructions) are great tools to access your files via web browser.

Install both or just the one you prefer; I will assume you have already installed your pick on your system by following my guides.

You will need to run **one** instance of the tool you choose for //each share//, so you will need to allocate one specific port per share. I will describe how to run it for the **common** share, so the tool will run as the **fileserver** user that you created above.
If you choose FileBrowser:

Create the specific **/etc/conf.d/filebrowser.common** config file:
<file - filebrowser.common>
BASE_URL="/common"
DATABASE="/shares/common/db/filebrowser.db"
DESCRIPTION="FileBrowser for the common share"
FOLDER="/shares/common/data"
GROUP="users"
PORT=3002
USER="fileserver"
</file>
If you choose Cloud Commander:

Create the specific **/etc/conf.d/cloudcmd.common** config file:
<file - cloudcmd.common>
BASE_URL="/common"
DESCRIPTION="Cloud Commander for the common share"
FOLDER="/shares/common/data"
GROUP="users"
PORT=3002
USER="fileserver"
</file>
Create the **init.d** symlink too, and start it, as shown below. Of course, choose a free port (3002 in this example).
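A minimal sketch of those steps, assuming the init script from my FileBrowser guide is installed as **/etc/init.d/filebrowser** (adapt the name if you picked Cloud Commander):
<code bash>
# assumption: the generic init script from my FileBrowser guide is /etc/init.d/filebrowser
ln -s /etc/init.d/filebrowser /etc/init.d/filebrowser.common
rc-update add filebrowser.common default
/etc/init.d/filebrowser.common start
</code>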
===== Fileserver access via WebDAV =====

__NOTE:__ using HTTP will cause a 301 redirect to HTTPS, and WebDAV clients will fail. So use the HTTPS URL in your WebDAV clients, not the HTTP one.

While there are a few dedicated WebDAV servers out there (see the experimental notes at the end of this page), I settled on Apache, whose WebDAV support is mature and reliable.

The idea here is to run a dedicated copy of Apache as user //fileserver// for each share, each one listening on its own port. First of all, install Apache:
<code bash>
emerge apache
</code>
WebDAV is enabled by default in the Gentoo Apache ebuild, so there is no need to tweak USE flags.
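If you want to double check that the DAV modules are actually available, you can look into the Apache modules folder (the path below is the usual Gentoo location on amd64, an assumption on my side):
<code bash>
# assumption: default Gentoo Apache module path on amd64
ls /usr/lib64/apache2/modules/ | grep dav
</code>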
You will **not** be running Apache as a system service, because that would mess with our user permission approach. I have prepared the following init script that starts a separate Apache copy for each of your shares. Drop the following file to **/etc/init.d/webdav**:
<file - webdav>
#!/sbin/openrc-run
# Copyright 2024 Willy Garidol
# Distributed under the terms of the GNU General Public License v3

depend() {
	need localmount net
}

# Name of the share
WD_SHARE_NAME="${SHARE_NAME}"
# Where is the original data
WD_DATA_FOLDER="${DATA_FOLDER}"
# Where WebDAV temporary stuff will be located
WD_TEMP_FOLDER="${TEMP_FOLDER}"
WD_ROOT_FOLDER="${WD_TEMP_FOLDER}/root"
WD_MOUNT_FOLDER="${WD_ROOT_FOLDER}/webdav/${WD_SHARE_NAME}"
WD_LOCKS_FOLDER="${WD_TEMP_FOLDER}/locks"
WD_TIMEOUT=${TIMEOUT:-300}
WD_LOG_PATH="/var/log/webdav"
WD_SLOT="${RC_SVCNAME#*.}"
WD_USER=${USER:-fileserver}
WD_GROUP=${GROUP:-users}

description=${DESCRIPTION:-"WebDAV server for the ${WD_SHARE_NAME} share"}
pidfile="/run/webdav.${WD_SLOT}.pid"

# Apache is configured entirely from the command line; adjust the directives
# below to your needs.
apache_args=(
	-c "ServerName localhost"
	-c "PidFile ${pidfile}"
	-c "Listen 127.0.0.1:${PORT}"
	-c "DefaultRuntimeDir ${WD_TEMP_FOLDER}"
	-c "Mutex file:${WD_TEMP_FOLDER} default"
	-c "User ${WD_USER}"
	-c "Group ${WD_GROUP}"
	-c "ErrorLog ${WD_LOG_PATH}/${WD_SLOT}/error.log"
	-c "LogLevel warn"
	-c "DocumentRoot ${WD_ROOT_FOLDER}"
	-c "DavLockDB ${WD_LOCKS_FOLDER}/DavLock"
	-c "DavMinTimeout 600"
	-c "LimitXMLRequestBody 0"
	-c "FileETag INode MTime Size"
	-c "<Directory ${WD_ROOT_FOLDER}>"
	-c "  DAV On"
	-c "  AllowOverride All"
	-c "  Options -Indexes +FollowSymlinks -ExecCGI -Includes"
	-c "  Require all granted"
	-c "</Directory>"
	-c "CustomLog ${WD_LOG_PATH}/${WD_SLOT}/access.log common"
)

start_pre() {
	# script must be run as "webdav.<share>", never directly as "webdav"
	if [ "${RC_SVCNAME}" = "webdav" ]
	then
		ebegin "You cannot run the webdav script directly, create a webdav.<share> symlink instead"
		eend 255
		return 255
	fi
	# Data folder must exist:
	if [ -z ${WD_DATA_FOLDER} -o ! -d ${WD_DATA_FOLDER} ]
	then
		ebegin "Missing or invalid data folder '${WD_DATA_FOLDER}'"
		eend 255
		return 255
	fi
	# Create log paths
	test -e "${WD_LOG_PATH}" || mkdir -p "${WD_LOG_PATH}"
	test -e "${WD_LOG_PATH}/${WD_SLOT}" || {
		ebegin "Creating log folder ${WD_LOG_PATH}/${WD_SLOT}"
		mkdir "${WD_LOG_PATH}/${WD_SLOT}"
	} && chown -R ${WD_USER} "${WD_LOG_PATH}/${WD_SLOT}"
	# Create all temporary paths:
	for path in ${WD_TEMP_FOLDER} ${WD_ROOT_FOLDER} ${WD_MOUNT_FOLDER} ${WD_LOCKS_FOLDER}
	do
		test -e ${path} || {
			ebegin "Creating folder ${path}"
			mkdir -p ${path}
			chown ${WD_USER}:${WD_GROUP} ${path}
		}
	done
	test -z "$(mount | grep ${WD_MOUNT_FOLDER})" && {
		ebegin "Bind-mounting ${WD_DATA_FOLDER} on ${WD_MOUNT_FOLDER}"
		mount -o bind ${WD_DATA_FOLDER} ${WD_MOUNT_FOLDER}
	}
	eend 0
}

start() {
	start-stop-daemon -w ${WD_TIMEOUT} --start --pidfile "${pidfile}" --exec \
		/usr/sbin/apache2 -- -D DAV -D DAV_FS "${apache_args[@]}" -k start
	eend $?
}

stop_post() {
	test -n "$(mount | grep ${WD_MOUNT_FOLDER})" && {
		ebegin "Unmounting ${WD_MOUNT_FOLDER}"
		umount ${WD_MOUNT_FOLDER}
	}
	eend 0
}
</file>
and make it executable:
<code bash>
chmod +x /etc/init.d/webdav
</code>
=== Create Apache configuration files for each share ===

By using the above init script, defining a new share means creating a share-specific symlink of that script plus the associated config file.

For our __common__ example share, create the following **/etc/conf.d/webdav.common** file:
<file - webdav.common>
DESCRIPTION="WebDAV server for the common share"
# this must point to where your data to be shared is located
DATA_FOLDER="/shares/common/data"
# this will contain temporary webdav stuff, will be created if missing
TEMP_FOLDER="/shares/common/webdav"
# this refers to the URL "https://drive.mydomain.com/webdav/common"
SHARE_NAME="common"
GROUP="users"
USER="fileserver"
PORT=10001
</file>
Note the port: it needs to be unique and available.

Create the symlink:
<code bash>
ln -s /etc/init.d/webdav /etc/init.d/webdav.common
</code>
=== Prepare Apache folders for each share ===

The above mentioned init script will create all the needed sub-folders for you, but here is a recap for the //common// share:
  * /shares/common/webdav/root: the Apache document root for the share
  * /shares/common/webdav/root/webdav/common: where the **data** folder will be bind-mounted

Those will be created by the init script above if missing. They will never be deleted by the script if already existing.
=== Messing with the WebDAV root folder ===

Now, the fun part is that you want to protect all this behind the NGINX reverse proxy (for HTTPS and authorization reasons), and it seems that WebDAV does **not** play well with URL redirection and similar funny things. In other words, the base URL you will be using on the reverse proxy **must match** the URL in Apache. You **cannot use** rewrite directives or Alias tricks.

Since you will be exposing the share as **/webdav/common**, Apache must serve it under that exact same path, so the share data needs to show up under **webdav/common** inside the Apache document root.

Since symbolic links cannot be used by WebDAV (could it be //that// simple?), the only viable option is **mount -o bind**. This is taken care of automatically by the above init script.
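For reference, this is roughly what the init script ends up doing for the //common// share (the paths follow our example layout and are thus an assumption):
<code bash>
# the Apache document root for the share, plus the mount point inside it
mkdir -p /shares/common/webdav/root/webdav/common
# make the shared data appear under the same /webdav/common path used by the reverse proxy
mount -o bind /shares/common/data /shares/common/webdav/root/webdav/common
</code>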
=== Start Apache for the share (and autostart) ===

Since you have already created the share-specific init script symlink and the associated config file, all you need to do is add it to the default runlevel and start it:
<code bash>
rc-update add webdav.common default
/etc/init.d/webdav.common start
</code>
===== Reverse Proxy and wrap-up =====

Everything is protected behind the NGINX reverse proxy, which also provides HTTPS termination and the SSO authentication. This is the configuration for the **drive.mydomain.com** subdomain:
<file nginx drive.mydomain.com.conf>
server {
	server_name drive.mydomain.com;
	listen 443 ssl;
	listen 8443 ssl;
	http2 on;

	# WebDAV requires basic auth, while normal auth can be used with FileBrowser
	include "...";
	include "...";

	location / {
		# the main directory page (Simple Dashboard), see below
		...
	}

	location = /common {
		return 301 /common/;
	}

	location /common/ {
		include "...";
		include "...";
		client_max_body_size 512M;
		proxy_pass http://127.0.0.1:3002;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection 'upgrade';
		proxy_cache_bypass $http_upgrade;
	}

	location /webdav/common {
		include "...";
		include "...";

		# WebDAV clients send an absolute Destination header on COPY/MOVE:
		# it must be rewritten from https to http before reaching Apache
		set $dest $http_destination;
		if ($http_destination ~ "^https://(.+)$") {
			set $dest http://$1;
		}
		proxy_set_header Destination $dest;

		# Warning: adding / at the end of the proxy_pass URL will break WebDAV
		proxy_pass http://127.0.0.1:10001;
		proxy_buffering off;
		gzip off;
		proxy_pass_request_headers on;
	}

	client_max_body_size 100M;
}
</file>
The reverse proxy configuration doesn't contain the SSO specific bits verbatim: those are pulled in via the //include// lines.

This example also shows how I have integrated the SSO authentication: the WebDAV location uses basic auth, while the browser locations can go through the normal auth flow.

Refer to the [[selfhost:nginx|The Reverse Proxy concept]] page to activate this specific NGINX configuration. Of course you also need to create the **drive.mydomain.com** DNS entry and its SSL certificates.
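Once the configuration file is in place, a syntax check and a reload of NGINX is all that is needed (assuming NGINX runs under OpenRC, as in the rest of this guide):
<code bash>
# verify the configuration and reload NGINX via OpenRC
nginx -t && /etc/init.d/nginx reload
</code>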
==== Main Directory Page ====

As you can spot from the above NGINX configuration, the root location of **drive.mydomain.com** serves a main directory page that links to all the shares.

For this I am using my [[services:dashboards|Simple Dashboard]] with the following configuration:
<file json>
{
    "title": "...",
    "...": "...",
    "...": [
        {
            "...": "...",
            "...": [ {
                "...": "...",
                "...": "..."
            } ]
        }
    ],
    "...": {
        "...": "..."
    }
}
</file>
===== Experimental stuff =====

Just some additional experiments I did, for future reference.

=== Nephele-Serve ===

Replacing the Apache WebDAV server with Nephele-Serve (which will also support CardDAV/CalDAV) is something I have been experimenting with:

https://github.com/sciactive/nephele
NPM needs to be enabled for the fileserver user:
<code>
NPM_PACKAGES="${HOME}/.npm-packages"
mkdir -p "$NPM_PACKAGES"
echo "prefix = ${NPM_PACKAGES}" >> ~/.npmrc
</code>
And in **~/.bashrc**:
<code>
# NPM packages in homedir
NPM_PACKAGES="${HOME}/.npm-packages"

# Tell our environment about user-installed node tools
PATH="$NPM_PACKAGES/bin:$PATH"
# Unset manpath so we can inherit from /etc/manpath via the `manpath` command
unset MANPATH # delete if you already modified MANPATH elsewhere in your configuration
MANPATH="$NPM_PACKAGES/share/man:$(manpath)"

# Tell Node about these packages
NODE_PATH="$NPM_PACKAGES/lib/node_modules:$NODE_PATH"
</code>
Install:
<code bash>
source ~/.bashrc
npm install -g nephele-serve
</code>
Advantages: it's a simple server that supports pam_auth. In the future, it might **also** replace my current CalDAV/CardDAV solution.

Disadvantages: ...
=== SFTPGo ===

[[https://github.com/drakkan/sftpgo|SFTPGo]] is an interesting project that can also serve your files via WebDAV and HTTP.

You need to start it once, then edit **sftpgo.json**; the relevant part is the WebDAV bindings section:
<code json>
  "webdavd": {
    "bindings": [
      {
        "port": ...,
        "address": "...",
        "prefix": "...",
        "...": "..."
      }
    ],
</code>

Advantages: easier than Apache to set up, and it supports a base URL prefix.

Disadvantages: it cannot use pam_auth and authentication cannot be disabled, so you get a double authentication behind the reverse proxy, which might be annoying.

=== KaraDAV / PicoDAV ===

[[https://github.com/kd2org/karadav|KaraDAV]]: a lightweight WebDAV server written in PHP, compatible with NextCloud clients.

PicoDAV: a single-file WebDAV server in PHP, from the same author.

Unfortunately, ...