The Reverse Proxy concept

The use of a reverse proxy is the foundation of security, isolation and flexibility when accessing your self-hosted services.

A reverse proxy is a web server that sits in the middle and handles all requests toward your services, adding on top layers of encryption (HTTPS/SSL), authentication, load balancing and security. If your services are properly written (not all are, but the best ones are) they will accept your reverse proxy's authentication directly, without even the need to create users for each service; in this case your reverse proxy will also be your SSO (Single Sign On) solution.

The reverse proxy will also handle HTTPS/SSL certificates in one centralized place, making it much easier to configure all your services without HTTPS and then seamlessly convert all the HTTP traffic to HTTPS. It's much easier to manage one certificate in one place than to depend on each service's ability to handle HTTPS independently.
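As a minimal sketch of the idea (the backend port and certificate paths below are placeholders, not this guide's final setup), the reverse proxy terminates HTTPS and talks plain HTTP to the service:

```nginx
server {
        # Terminate HTTPS here, talk plain HTTP to the backend
        listen 443 ssl;
        server_name mydomain.com;

        ssl_certificate     /etc/ssl/example/fullchain.pem;  # placeholder path
        ssl_certificate_key /etc/ssl/example/privkey.pem;    # placeholder path

        location / {
                # The backend service never needs to know about HTTPS
                proxy_pass http://127.0.0.1:8096;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;
        }
}
```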

Also, using a well-known, solid and proven web server alleviates the risk that each service might expose a poorly written, non-scalable or, worse, internal-only web server to end users.

And as a final note, using a reverse proxy you can easily organize all your services under one single domain. There are limitations, mostly due to poorly written services or peculiar protocols, that might require independent sub-domains, but I will show you how to handle these cases easily with the reverse proxy as well.

NGINX

My choice of web server in this case is NGINX, among the many available as Open Source, because:

  • It's much easier than Apache to set up as a reverse proxy, and less resource hungry.
  • It has more features than Caddy.
  • It is fully integrated with the Let's Encrypt SSL infrastructure / CertBot script.

In general NGINX is a fully featured yet lightweight and secure HTTP server that shines as a reverse proxy. If you need to add more features, like PHP support or FastCGI, NGINX will support you without the need for an additional service on your server.
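For instance, if you later want NGINX to serve PHP pages directly, a FastCGI handoff to a PHP-FPM daemon is all it takes (the socket path here is an assumption and varies per system):

```nginx
# Hypothetical PHP handling via FastCGI (assumes a local PHP-FPM daemon)
location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;  # adjust to your PHP-FPM socket
}
```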

Base URLs and sub-domains

There are two different philosophies on how to host services. The one I like best, since I think it's simpler and more elegant, is to use one single domain and expose each service under its own sub-path, better called a Base URL. The alternative is to allocate one sub-domain for each service.

Let's assume you have your own domain mydomain.com and you want to expose a service called jellyfin (a well known media server). You can expose it as a sub-path (https://mydomain.com/jellyfin) or as a sub-domain (https://jellyfin.mydomain.com).

Here are the main points and drawbacks of each solution.

As a sub-path:

  • Pros: only one domain needed, no need to create sub-domains (which is not always possible)
  • Pros: easy to organize services in virtual sub-folders
  • Pros: the service's existence is unknown to anybody not authorized
  • Cons: each service must support a Base URL setting

As a sub-domain:

  • Pros: any service will work, no need for Base URL support
  • Cons: requires an additional HTTPS/SSL certificate for each sub-domain
  • Cons: services cannot easily be organized together
  • Cons: the existence of the service is public knowledge (DNS records are public)

I prefer the sub-path approach whenever possible, but in some cases you will be forced to use sub-domains. And what if you cannot create sub-domains? Well, forget the services that require one.

When using sub-paths, a reverse proxy like NGINX allows you a little bit of flexibility because you can, to an extent, perform rewrite operations on URLs and also on the responses sent to the browser, but this all comes at a cost in processing power and, moreover, it's not always feasible. In general, for sub-paths to work properly they have to be supported by the service.
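As an illustration (the service name myservice and port 8081 are hypothetical), a sub-path service behind NGINX typically looks like the following; the sub_filter lines are the kind of last-resort response rewriting just mentioned, only needed when the service doesn't fully honor its Base URL:

```nginx
# Hypothetical sub-path setup for a service listening locally on 8081
location /myservice/ {
        proxy_pass http://127.0.0.1:8081/myservice/;
        proxy_set_header Host $host;

        # Last-resort response rewriting for services that don't fully
        # honor their Base URL (costs CPU and doesn't always work)
        sub_filter 'href="/' 'href="/myservice/';
        sub_filter_once off;
}
```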

Authentication

Having a strong layer of authentication is mandatory for self-hosted services that are exposed to the internet. When talking about authentication it's important to remember that it has a double meaning: to recognize one user rather than another, and to restrict access to your services based on who the user is.

A few assumptions: this is self-hosting for home access, which means a limited and trusted list of users that doesn't change often over time. Security is important, but so are ease of use and simplicity of user management.

There are a few key points that I want to stress about authentication:

  • 2FA (Two Factor Authentication) will not be considered
  • You want to create users only once, as much as possible.
  • Only selected services will need to differentiate between users
  • Most services will not need to know who is accessing them
  • From outside, all services must require authentication
  • From inside, authentication is required only where a specific user makes a difference
  • Avoid double authentication when possible

For example, a media server will need to know who is connecting to show your preferred shows and your “resume from here…” movies. The printer control page instead should be accessible by anyone inside home.

Authentication will always be required when connecting from outside, while from inside it will be needed only for selected services.

The simplest and most effective approach is to enable the PAM authentication module of NGINX and connect your reverse proxy authentication to your server's user management. By adding a new user to your server, it will automagically be added to your services, or at least to the ones that can link to the reverse proxy authentication.

You have the following combinations:

  • Services that do not need to differentiate between users
  • Services that need to know who is connecting, and can get this info from the reverse proxy
  • Services that need to know who is connecting, and cannot get this info from the reverse proxy

You will be able to play with the PAM authentication module of NGINX on a per-service basis to achieve this.
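For example (the service paths and ports are hypothetical), you can toggle PAM authentication per location and forward the authenticated user name to services that understand reverse proxy auth:

```nginx
# Hypothetical per-service authentication rules
location /media/ {
        # Service that accepts reverse proxy auth: pass the PAM user along
        auth_pam "Home";
        auth_pam_service_name "nginx";
        proxy_set_header X-Remote-User $remote_user;
        proxy_pass http://127.0.0.1:8096/;
}

location /printer/ {
        # Service that must stay reachable without login from inside
        auth_pam off;
        proxy_pass http://127.0.0.1:631/;
}
```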

The general rule is as follows:

Service                                      | From inside       | From outside
Does not require authentication              | auth not required | use PAM auth
Requires auth, can use reverse proxy auth    | use PAM auth      | use PAM auth
Requires auth, cannot use reverse proxy auth | use service auth  | use service auth

Using PAM auth on services that cannot understand reverse proxy auth is a great way to increase security, as others will not even be able to reach your service, but it will require the users to authenticate twice and might cause some mobile apps to fail.

Reverse Proxy propagation to external world

The reverse proxy is installed on the local server; as you might have guessed, remote access is performed using the SSH tunnelling described in the specific page. The underlying idea is that your reverse proxy will listen on different ports, and these ports will be forwarded to your external server using the SSH tunnels. Differentiating the ports is required to be able to apply PAM authentication depending on where your users connect from.

The setup I am proposing uses three different ports:

  • Port 80: home-facing HTTP for internal access, no PAM authentication
  • Port 8080: remote-facing HTTP, just a redirect to HTTPS
  • Port 8443: remote-facing HTTPS with PAM authentication for external access

Installing NGINX

NGINX installation on the home server is pretty straightforward, but we need to enable one specific module, the PAM authentication module, because I will show you how to link NGINX authentication directly to your home server users, without the need to create more users and passwords. If you prefer a different authentication scheme, like basic_auth, I leave that to you.

So create the file /etc/portage/package.use/nginx with the following lines:

app-misc/mime-types nginx
www-servers/nginx NGINX_MODULES_HTTP: auth_pam dav dav_ext gunzip sub xslt

The dav, dav_ext and xslt modules are required for WebDAV support later on.

(the first line is needed at the time of writing this page, YMMV)

Note: you might want to tweak the second line to your needs, see the flags for nginx and adapt.

A brief explanation of the above USE flags:

  • auth_pam is used to enable PAM based authentication
  • sub is used to allow substitutions inside the pages proxied, to fix web applications that don't play well with reverse-proxies
  • gunzip is used to decompress the responses and let the sub module work also on compressed responses

Now install nginx:

 > emerge -v nginx

NGINX pam_auth

I think it's nice that with NGINX you can authenticate your users directly against your home server accounts. This means you don't need to add a second set of users, the users will only need one password, and no sync is required between HTTP users and server users. This is achieved using the pam_auth module on Linux. You have already built NGINX with pam_auth support, but you need to configure it.

Create the file /etc/pam.d/nginx with these lines:

auth required pam_unix.so
account required pam_unix.so

NGINX main configuration

You need two different NGINX configurations: one facing the home network, which will serve HTTP only, and one facing the external world, which will serve HTTPS only, with HTTP as a redirect to HTTPS.

NGINX is very flexible in its configuration; I will show you how to properly split its configuration files so that the core is shared between home and remote access.

The main configuration file is located at /etc/nginx/nginx.conf. The default one is fine for the standard stuff, and I will let you tweak and adapt everything outside the server sections to your needs. You will need to remove all the server sections from your file and replace them with the following:

        server {
                # Home facing server, HTTP only
                listen 127.0.0.1:80;
                server_name 192.168.0.1;
 
                include "folders/main.conf";
 
                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log info;
        }
 
        server {
                # remote facing server, HTTPS 
                server_name my_remote_server_name;
                auth_pam "Home";
                auth_pam_service_name "nginx";
 
                include "folders/main.conf";
 
                access_log /var/log/nginx/remote.access_log main;
                error_log /var/log/nginx/remote.error_log info;
 
                listen 127.0.0.1:8443 ssl; # managed by Certbot
                ssl_certificate /etc/letsencrypt/live/my_remote_server_name/fullchain.pem; # managed by Certbot
                ssl_certificate_key /etc/letsencrypt/live/my_remote_server_name/privkey.pem; # managed by Certbot
                include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
                ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
 
                location /.well-known/acme-challenge/ {
                        auth_pam off;
                        autoindex on;
                }
 
        }
 
 
        server {
                # remote facing server, HTTP to HTTPS redirection
                listen 8080;
                access_log /var/log/nginx/remote.access_log main;
                error_log /var/log/nginx/remote.error_log info;
                return 301 https://$host$request_uri;
        }

Let me walk you through it a bit.

There is one simple section for the home server: it listens on port 80 and logs to specific home-only files. I chose not to use HTTPS inside the home network because it would be complicated to automatically generate the required certificates. If you still want HTTPS on the home side, you should use self-signed certificates, but I leave this exercise to you.

The remote HTTP server is even simpler: just a redirect to the remote HTTPS server, listening on port 8080 since port 80 is already taken by the home server. You never, ever want to go unencrypted on the outside world. The remote HTTPS server is on port 8443 and adds all the specific HTTPS certificate stuff. Do not bother with it yet, I will explain a bit more later on.

Please note that due to the HTTPS certificates (which at this point are still to be created) you cannot start NGINX yet.

You can see that I used the include directive to point to a common folders/main.conf configuration file that will contain the gist of the common configuration. So, create the /etc/nginx/folders subfolder and put the following in main.conf:

# This might be needed to direct upload of NZB files 
client_max_body_size 200M;
# This is required sometimes by Deluge web GUI giant cookies
large_client_header_buffers 4 32k;
 
# Here you will put your dashboard
root /data/daemons/htdocs;
 
# Specific service configurations
include "folders/deluge.conf";
include "folders/transmission.conf";
include "folders/nzbget.conf";
include "folders/radarr.conf";
include "folders/readarr_books.conf";
include "folders/readarr_audiobooks.conf";
include "folders/sonarr.conf";
include "folders/lidarr.conf";
include "folders/jellyfin.conf";
include "folders/ombi.conf";
include "folders/bazarr.conf";

As you can see, besides a few settings on top, it includes each service-specific config as a separate file. This will give you lots of flexibility in adding or removing single services. The content of each service config file will be described in the respective service page.

The root directive points to where you will need to put your dashboard, which ties all services together in one nice linked page; more details on this later on.
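To give you an idea of what goes into those included files (this is only an illustrative sketch under assumed defaults; the real ones are described in each service page, and the port is hypothetical), a folders/jellyfin.conf could look like:

```nginx
# Illustrative sketch of a per-service config (folders/jellyfin.conf)
location /jellyfin/ {
        proxy_pass http://127.0.0.1:8096/jellyfin/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # WebSocket support, needed by many modern web UIs
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
}
```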

Generate SSL certificates for HTTPS

Enabling HTTPS requires the generation of valid SSL certificates for your server, and you want HTTPS to have full end-to-end encryption for security and privacy. You can do that with self-signed certificates, but those will still be flagged as insecure by your browser and some client apps might not even work properly. A better solution is Let's Encrypt, an open, public and free Certificate Authority that lets you generate and manage your certificates.

Let's Encrypt is best used with Certbot, a pretty powerful and efficient Python script that can generate and renew all your certificates magically and automatically. It works by sending requests to the Let's Encrypt infrastructure and placing some response tokens inside your web server's htdocs folder; this way Let's Encrypt can verify that you really have access to your server (to prevent spoofing and other security issues), so you need to ensure that the root path of your NGINX is accessible from outside in a specific subfolder. The above configuration file for NGINX takes care of this. You can then install Certbot:

 > emerge -v certbot-nginx certbot

This will pull in all the required software to perform the exchange with the Let's Encrypt infrastructure. At this point you only need to run Certbot to generate a certificate for your external domain name:

 > certbot --nginx certonly -d remote_server_name

This will generate the certificate. Make sure that Certbot runs at least once daily to renew the certificates when needed. You can put it into the crontab, as user root:

 > crontab -e

and write the following lines:

47 5 * * * certbot renew  &>> /var/log/certbot.log
31 16 * * * certbot renew  &>> /var/log/certbot.log

You can now start your nginx server:

 > rc-update add nginx default
 > /etc/init.d/nginx start
