====== File Server ======

I will not discuss how to share your files on the home network using __legacy__ tools like NFS or Samba.

I will focus on how to provide access via __web browser__ and via __WebDAV__, which is a web-based sharing protocol a bit like NFS or SAMBA, but aimed at broader, web-oriented use.

The idea is to create share areas where your users will be able to store files. It is possible to extend this idea also to user-specific areas where each user can put private stuff not visible by other users, but this requires a little bit of extra complexity and might be addressed in the future.

You will be using your SSO authentication to control who can access the shares.

===== Overall Architecture and Shares =====

This solution leverages different tools:
  * **FileBrowser** for access via web browser
  * **Cloud Commander** as an alternative for access via web browser
  * **Apache** and its WebDAV module for access via WebDAV

Note: choosing between FileBrowser or Cloud Commander is a matter of preference. I use both, for different kinds of shares.

The NGINX reverse proxy will integrate with your preferred SSO solution to handle authentication in front of everything.

I will assume that your shares are located under one common base folder; in the examples below I will use **/data/shares** as a placeholder, adapt it to your own layout.

Each share folder will have the following structure (using the //common// share as an example):
  * /data/shares/common: the root folder of the share
  * /data/shares/common/data: the shared content itself
  * /data/shares/common/db: the FileBrowser database
  * /data/shares/common/webdav: the share-specific Apache/WebDAV runtime files

This structure is provided as an example to follow; of course you can move the individual folders where you prefer. The only caveat is that, for security reasons, the **db** and **webdav** folders should **not** be inside the **data** folder.

You will also need to assign two ports for each share, as an example for our //common// share:
  * 3002: port for FileBrowser or Cloud Commander
  * 10001: port for Apache WebDAV server

Any other share can start from these port numbers and go up in numbering.
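If you want to double-check that a port you picked is actually free before assigning it, a quick look at the listening sockets is enough (a minimal sketch; the ports are the example ones used above):

<code bash>
# list listening TCP sockets and make sure the chosen ports are not taken yet
ss -ltn | grep -E ':(3002|10001)\b' || echo "ports 3002 and 10001 are free"
</code>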

I chose to assign a dedicated subdomain, **drive.mydomain.com**, to the file server, and to map the shares under it like this:
  * **https://drive.mydomain.com/**: the main directory page listing the available shares
  * **https://drive.mydomain.com/common/**: browser access (FileBrowser or Cloud Commander) to the //common// share
  * **https://drive.mydomain.com/webdav/common/**: WebDAV access to the //common// share

You can add as many more folders as separate shares as you like. Due to how WebDAV works, it is mandatory to separate the browser-accessible URLs from the WebDAV ones, like I did above.
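Once everything is up, you can see the split in action with curl: the browser URL answers normal GET requests, while the WebDAV URL speaks PROPFIND (hostname, paths and credentials below are just the examples used on this page):

<code bash>
# browser-facing endpoint: a plain GET reaches the FileBrowser / Cloud Commander UI
# (here we only print the HTTP status code)
curl -u myuser:mypassword -o /dev/null -w '%{http_code}\n' https://drive.mydomain.com/common/

# WebDAV endpoint: a PROPFIND with Depth 1 lists the share contents as XML
curl -u myuser:mypassword -X PROPFIND -H "Depth: 1" https://drive.mydomain.com/webdav/common/
</code>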


=== Permissions and Users ===

(Note: you should run both FileBrowser and Cloud Commander as the **fileserver** user)

Each share will be accessible by different users, so this needs to be planned a bit. For user-specific shares, not much needs to be done except run FileBrowser or Cloud Commander as that specific user.

For common shares instead, it's important to create one common user, which I will call the **fileserver** user, to run the associated services, and to create the share folder itself (**/data/shares/common** in our example).

You need to assign that folder to the **users** group and the **fileserver** user:
<code bash>
# the home folder doubles as the share root; adapt the path to your own layout
useradd -d /data/shares/common -g users -m fileserver
</code>

You need to set the //umask// for the fileserver user to **0002** so that any new files created by it will be writable by the users. Also, create the **db** folder, where the FileBrowser database will be located, and the **webdav** folder, where the share-specific Apache configuration and runtime files will be located, and of course don't forget the **data** folder, where the shared content itself goes:
<code bash>
su - fileserver
echo "umask 0002" >> ~/.bashrc
source ~/.bashrc
mkdir db
mkdir webdav
mkdir data
</code>
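As a quick sanity check of the umask approach (assuming the layout above), a file created by the fileserver user should come out group-writable for **users**:

<code bash>
# still as the fileserver user: the new file should show rw-rw-r-- and group "users"
touch data/permission-test && ls -l data/permission-test
rm data/permission-test
</code>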


===== Fileserver access via Browser =====

Both FileBrowser and Cloud Commander provide access to your files from a web browser.

You can find installation instructions for both tools in their dedicated pages. Install both or the one you prefer; I will assume you have already installed your pick on your system by following my guides.

You will need to run **one** instance of the tool you choose for //each share//, so you will need to allocate one specific port for each share. I will describe how to run it for the **common** share, so the tool will run as the **fileserver** user that you created above.

If you choose FileBrowser, create the share-specific **/etc/conf.d/filebrowser.common** file (paths follow the example layout above, adapt them to your own):
<file - filebrowser.common>
BASE_URL="/common"
DATABASE="/data/shares/common/db/filebrowser.db"
DESCRIPTION="FileBrowser for the common share"
FOLDER="/data/shares/common/data"
GROUP="users"
PORT=3002
USER="fileserver"
</file>

If you choose Cloud Commander, create the share-specific **/etc/conf.d/cloudcmd.common** file instead:
<file - cloudcmd.common>
BASE_URL="/common"
DESCRIPTION="Cloud Commander for the common share"
FOLDER="/data/shares/common/data"
GROUP="users"
PORT=3002
USER="fileserver"
</file>

Create the **init.d** symlink too, and start it. Of course, choose a free port (3002 in this example).
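As a sketch of what that looks like for FileBrowser (assuming the base init script from the FileBrowser guide is installed as **/etc/init.d/filebrowser**; adapt the name, or use the cloudcmd one, if yours differs):

<code bash>
# one symlinked OpenRC service per share, pointing at the base FileBrowser init script
ln -s /etc/init.d/filebrowser /etc/init.d/filebrowser.common
rc-update add filebrowser.common default
/etc/init.d/filebrowser.common start
</code>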


===== Fileserver access via WebDAV =====

__NOTE:__ using HTTP will cause a 301 redirect to HTTPS, and WebDAV clients will fail on it. So use the HTTPS URL in WebDAV clients, not the HTTP one.

While there are a few dedicated WebDAV servers around (some are discussed at the end of this page), I settled on Apache and its built-in WebDAV support.

The idea here is to run a dedicated copy of Apache as the //fileserver// user for each share. First of all, install Apache:
<code bash>
emerge apache
</code>
WebDAV is enabled by default in the Gentoo Apache ebuild, so there is no need to fix USE flags.
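If you want to verify this before building, the DAV bits live in the APACHE2_MODULES variable rather than in plain USE flags; a pretend emerge is enough to check (just a quick look, not a required step):

<code bash>
# dav, dav_fs and dav_lock should be listed (without a leading minus) in APACHE2_MODULES
emerge -pv www-servers/apache | grep -oE '[-]?dav[a-z_]*'
</code>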

You will **not** be running Apache as a system service, because that would mess with our user permission approach. I have prepared the following init script that starts a separate Apache instance for each of your shares. Drop the following file to **/etc/init.d/webdav**:
<file - webdav>
#!/sbin/openrc-run
# Copyright 2024 Willy Garidol
# Distributed under the terms of the GNU General Public License v3

depend() {
	need localmount net
}

# Name of the share
WD_SHARE_NAME="${SHARE_NAME}"
# Where is the original data
WD_DATA_FOLDER="${DATA_FOLDER}"
# Where WebDAV temporary stuff will be located
WD_TEMP_FOLDER="${TEMP_FOLDER}"
WD_ROOT_FOLDER="
WD_MOUNT_FOLDER="
WD_LOCKS_FOLDER="

WD_TIMEOUT=${TIMEOUT:
WD_LOG_PATH="/
WD_SLOT="
WD_USER=${USER:
WD_GROUP=${GROUP:

description=${DESCRIPTION:
pidfile="/
apache_args=(
	-c "
	-c "
	-c "
	-c "
	-c "
	-c "User ${WD_USER}"
	-c "Group ${WD_GROUP}"
	-c "
	-c "
	-c "
	-c "
	-c "
	-c "
	-c "
	-c "<Directory ${WD_MOUNT_FOLDER}>"
	-c " DAV On"
	-c " AllowOverride All"
	-c " Options -Indexes +FollowSymlinks -ExecCGI -Includes"
	-c " Require all granted"
	-c "</Directory>"
	-c "
)

start_pre() {
	# script must be run with a "webdav.<sharename>" symlink, not directly
	if [ "
	then
		ebegin "
		eend 255
		return 255
	fi
	# Data folder must exist:
	if [ -z ${WD_DATA_FOLDER} -o ! -d ${WD_DATA_FOLDER} ]
	then
		ebegin "
		eend 255
		return 255
	fi
	# Create log paths
	test -e "
	test -e "
		ebegin "
		mkdir "
	} && chown -R ${WD_USER} "
	# Create all temporary paths:
	for path in ${WD_TEMP_FOLDER} ${WD_ROOT_FOLDER} ${WD_MOUNT_FOLDER} ${WD_LOCKS_FOLDER}
	do
		test -e ${path} || {
			ebegin "
			mkdir -p ${path}
			chown ${WD_USER}:${WD_GROUP} ${path}
		}
	done
	test -z "
		ebegin "
		mount -o bind ${WD_DATA_FOLDER} ${WD_MOUNT_FOLDER}
	}
	eend 0
}

start() {
	start-stop-daemon -w ${WD_TIMEOUT} --start --pidfile "
	/
	eend $?
}

stop_post() {
	test -n "
		ebegin "
		umount ${WD_MOUNT_FOLDER}
	}
	eend 0
}
</file>
and make it executable:
<code bash>
chmod +x /etc/init.d/webdav
</code>


=== Create Apache configuration files for each share ===

With the above init script in place, defining a new share means creating a share-specific symlink of that script and the associated config file.

For our __common__ example share, create the following **/etc/conf.d/webdav.common** file (paths follow the example layout above):
<file - webdav.common>
DESCRIPTION="WebDAV server for the common share"
# this must point to where your data to be shared is located
DATA_FOLDER="/data/shares/common/data"
# this will contain temporary webdav stuff, will be created if missing
TEMP_FOLDER="/data/shares/common/webdav"
# this refers to the URL "https://drive.mydomain.com/webdav/common"
SHARE_NAME="common"
GROUP="users"
USER="fileserver"
PORT=10001
</file>
Note the port: it needs to be unique and available.

Create the symlink:
<code bash>
ln -s /etc/init.d/webdav /etc/init.d/webdav.common
</code>
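The same pattern repeats for every additional share. As a sketch, a hypothetical //media// share would get its own config file and symlink, with the next free ports:

<code bash>
# hypothetical second share: copy the config, then edit folders, SHARE_NAME and PORT (e.g. 10002)
cp /etc/conf.d/webdav.common /etc/conf.d/webdav.media
ln -s /etc/init.d/webdav /etc/init.d/webdav.media
</code>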


=== Prepare Apache folders for each share ===

The above-mentioned init script will create all the needed sub-folders for you, but here is a recap:
  * the WebDAV document root and mount point, under the share's **webdav** folder, where the **data** folder gets bind-mounted
  * the WebDAV locks folder, also under the share's **webdav** folder

Those will be created by the init script above if missing. They will never be deleted if they already exist.


=== Messing with the WebDAV root folder ===

Now, the fun part is that you want to protect this behind the NGINX reverse proxy (for HTTPS and authorization reasons), and it turns out that WebDAV does **not** play well with URL redirection and similar funny things. In other words, the base URL you use on the reverse proxy **must match** the URL configured in Apache. You **cannot use** rewrite directives or Alias tricks.

Since you will be exposing the WebDAV access as **https://drive.mydomain.com/webdav/common**, the dedicated Apache instance must serve the share under that exact **/webdav/common** path too, which means the shared data has to appear at that path inside the Apache document root.

Since symbolic links cannot be used by WebDAV (could it be //that// simple?), the only viable option is **mount -o bind**. This is taken care of automatically by the above init script.
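If you want to see what the init script does, or to debug it by hand, the equivalent manual steps look roughly like this (the exact nesting under the **webdav** folder is only an illustration of the idea; the init script derives the real paths from the config file):

<code bash>
# bind-mount the shared data under a path that matches the public WebDAV URL
mkdir -p /data/shares/common/webdav/root/webdav/common
mount -o bind /data/shares/common/data /data/shares/common/webdav/root/webdav/common
# verify the bind mount is in place
findmnt /data/shares/common/webdav/root/webdav/common
</code>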


=== Startup Apache for the share (and autostart) ===

Since you have already created the share-specific startup script symlink and the associated config file, all you need to do is add it to the default runlevel and start it:
<code bash>
rc-update add webdav.common default
/etc/init.d/webdav.common start
</code>
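A quick way to confirm that the instance came up and is listening on its port (10001 is the example port assigned earlier):

<code bash>
rc-service webdav.common status
# the dedicated Apache instance should be listening on the share's WebDAV port
ss -ltnp | grep ':10001'
</code>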

===== Reverse Proxy and wrap-up =====

Everything is protected behind the NGINX reverse proxy. This is the **drive.conf** virtual host I use:
<file - drive.conf>
server {
	server_name drive.mydomain.com;
	listen 443 ssl;
	listen 8443 ssl;
	http2 on;

	access_log /
	error_log /

	# WebDAV requires basic auth, while normal auth can be used with FileBrowser
	include "
	include "

	location / {
		include "
		include "
		root /
	}

	location = /common {
	}

	location /common/ {
		include "
		include "
		client_max_body_size 512M;
		proxy_pass http://127.0.0.1:3002;
		proxy_set_header Connection $http_connection;
		proxy_set_header Connection 'upgrade';
		proxy_cache_bypass $http_upgrade;
	}

	location /webdav/common/ {
		include "
		include "

		# https://
		# https://
		set $dest $http_destination;
		if ($http_destination ~ "
			set $dest http://
		}

		# Warning: adding / at the end of the proxy_pass will break WebDAV!
		proxy_pass http://127.0.0.1:10001;
		proxy_buffering off;
		gzip off;
		proxy_pass_request_headers on;
		proxy_set_header Destination $dest;
	}
	client_max_body_size 100M;
}
</file>

The reverse proxy configuration doesn't need anything exotic: the only tricky parts are the WebDAV **Destination** header handling and the fact that **proxy_pass** must not have a trailing slash.

This example also shows how I have integrated SSO: the WebDAV location uses basic authentication, while the browser-facing locations can go through the normal SSO flow.

Refer to the SSO and reverse proxy pages for the details of the included snippets.
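After dropping the file in place, test the configuration and reload NGINX (standard commands; the service name assumes OpenRC, as used elsewhere in this guide):

<code bash>
nginx -t && rc-service nginx reload
</code>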

==== Main Directory Page ====

As you can spot from the above NGINX configuration, the root of **drive.mydomain.com** serves a static landing page that links to the available shares.

For this I am using one of the tools described in my services pages, configured via the following **site.json**:
<file - site.json>
{
	"
	"
	"
	"
	},
	"
	{
		"
		"
		"
		[ {
			"
			"
			"
			"
			"
		} ]
	}
	],
	"
	"
	"
	"
	"
	}
}
</file>




===== Experimental stuff =====

Just some additional experiments I did, for future reference.

=== Nephele-Serve ===
Replacing the Apache WebDAV setup with Nephele-Serve (which will also support CardDAV/CalDAV) is something I am evaluating:

https://
https://

NPM needs to be enabled for the fileserver user:
<code bash>
NPM_PACKAGES="$HOME/.npm-packages"
mkdir -p "$NPM_PACKAGES"
echo "prefix = $NPM_PACKAGES" >> ~/.npmrc
</code>

And in **~/.bashrc** add:

<code bash>
# NPM packages in homedir
NPM_PACKAGES="$HOME/.npm-packages"
# Tell our environment about user-installed node tools
PATH="$NPM_PACKAGES/bin:$PATH"
# Unset manpath so we can inherit from /etc/manpath via the `manpath` command
unset MANPATH # delete if you already modified MANPATH elsewhere in your configuration
MANPATH="$NPM_PACKAGES/share/man:$(manpath)"
# Tell Node about these packages
NODE_PATH="$NPM_PACKAGES/lib/node_modules:$NODE_PATH"
</code>

Install:
<code bash>
source ~/.bashrc
npm install -g nephele-serve
</code>
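A quick check that the user-local install actually ended up on the PATH:

<code bash>
# the binary should resolve inside ~/.npm-packages/bin
command -v nephele-serve
</code>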

Advantages: it's a simple server that supports pam_auth. In the future, it might **also** replace my CardDAV/CalDAV setup.

Disadvantages:

=== SFTPGo WebDAV / web browser ===

SFTPGo is an interesting project that provides WebDAV access plus a web client (and SFTP too) in a single service.

You need to start it once, then edit **sftpgo.json**:
<code>
"
"
"
{
	"
	"
	"
	"
	"
	"
	"
	"
	"
	"
	"
	"
	"
}
],
</code>
Advantages: easier than Apache to set up, supports a base URL.

Disadvantages:

=== KaraDAV / PicoDAV ===

**KaraDAV**: a lightweight WebDAV server written in PHP.

**PicoDAV**: a single-file PHP WebDAV server from the same author.

Unfortunately,
- | |||