====== The Reverse Proxy concept ======
The use of a **reverse proxy** is the key foundation for ensuring security, isolation and flexibility in accessing your self-hosted services.
A reverse-proxy is a web server that sits in the middle and handles all requests toward your services, adding, on top, layers of encryption.
The reverse-proxy will take care of handling HTTPS and of routing each request to the proper backend service.
Also, using a well known, solid and proven web server will alleviate the risk of each service exposing a poorly written, non-scalable or, worse, insecure internal web server to end users.
And as a final note, using a reverse-proxy you can easily organize all your services under one single domain, with some limitations that are discussed below.
===== NGINX =====
My choice for a web server in this case is [[https://nginx.org|NGINX]], for a few reasons:
  * It's much easier to set up and maintain than the classic [[https://httpd.apache.org|Apache]]
  * It has more features than the lighter-weight alternatives
  * It is fully integrated in [[https://www.gentoo.org|Gentoo]]
In general NGINX is a fully featured, but still very lightweight and secure, HTTP server that shines as a reverse-proxy. If you need to add more features you can always extend it with additional modules.
===== Base URLs and sub-domains =====
There are two different philosophies on how to host services: serve each one as a sub-path of a single domain, or give each one its own sub-domain. I used to like the //sub-path// approach best, but nowadays I mostly use sub-domains.

Let's assume you have your own domain **mydomain.com** and you want to expose a service called //serviceX//.
As a **sub-path**:
  * Pros: only one domain needed, no need to create sub-domains
  * Pros: easy to organize services in virtual sub-folders
  * Pros: the service existence is unknown to anybody not authorized
  * Cons: each service must support a Base URL setting
  * Cons: SSO support must be somehow consistent to avoid headaches (well, SSO support is still spotty today!)
  * Cons: security wise, cookies and CORS can bring unintended vulnerabilities between services, because they all share the same domain name
  * Cons: all services share the same HTTPS certificate
As a **sub-domain**:
  * Pros: any service will work, no need to support a Base URL
  * Pros: each service can have its own HTTPS certificate
  * Pros: each service is neatly organized in its own sub-domain
  * Pros: cookies are not shared between services, and CORS protection works
  * Cons: the existence of the service is public knowledge (DNS records are public)
  * Cons: also public knowledge because there are services indexing all issued certificates
__Note:__ you can create //wildcard// certificates that will match any sub-domain, but there are drawbacks to this and it's not a good idea, security wise. You can still mitigate the one-certificate-per-sub-domain issue by adding each sub-domain to the same certificate, but you will then need to extend your certificate each time you add a sub-domain: this is my approach.

To make a long story short, I go with sub-domains for well separated services, and with sub-paths for stuff that kind of belongs together. Also, a deciding factor is whether the selected service supports a Base URL setting at all, as shown in the example below.
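To make the difference concrete, here is a minimal sketch of the two approaches in NGINX terms (host names, ports and the //serviceX// backend are placeholders, and the SSL certificate bits are omitted):
<code>
# sub-path style: one server block, one location per service
server {
    server_name mydomain.com;
    listen 443 ssl;
    location /serviceX/ {
        # serviceX receives the /serviceX/ prefix, so it must support a Base URL setting
        proxy_pass http://127.0.0.1:8081;
    }
}

# sub-domain style: one server block per service
server {
    server_name servicex.mydomain.com;
    listen 443 ssl;
    location / {
        proxy_pass http://127.0.0.1:8081;
    }
}
</code>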

===== Reverse Proxy propagation =====
The reverse proxy is installed on the local server; I assume your local server is somehow reachable from the outside (see the networking section for how to achieve that).

The reverse proxy will need to be accessible to both the internal users and the external users. You could setup two different proxies, but I prefer to run a single NGINX instance listening on different ports.

The setup I am describing uses the following ports:
  * Port 80: both local and remote, will just be a redirect to HTTPS
  * Port 443: standard HTTPS for **internal** access
  * Port 8443: HTTPS for **external** access

**Note:** for Let's Encrypt CertBot to work properly you **need** to redirect ports 80 and 443 from your external server to your internal server: during certificate generation CertBot must be reachable on those ports, so make sure they are properly forwarded from the external server to the internal one (see the sketch below), or it will not work.
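How you actually forward those ports depends on your setup; if you rely on an external VPS and SSH reverse tunnels (see the networking section), a minimal sketch could be the following (host name and user are placeholders):
<code bash>
# forward ports 80 and 443 from the external VPS back to the home server;
# binding ports below 1024 on the VPS side requires root and "GatewayPorts yes"
# in its sshd_config (or an extra redirect on the VPS itself)
ssh -N -R 0.0.0.0:80:127.0.0.1:80 -R 0.0.0.0:443:127.0.0.1:443 tunnel@my-vps.example.com
</code>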
===== Installing NGINX =====
NGINX installation on the home server is pretty straightforward, you just need to select the proper modules via USE flags first:
  * //auth_request//: needed for SSO solutions like Authelia
  * //sub//: used to allow substitutions inside the proxied pages, to fix web applications that don't play well with reverse-proxies
  * //gunzip//: used to decompress the proxied responses, so that the //sub// module also works on compressed content
  * //realip//: needed by SSO solutions like Authelia to see the real client IP

While NGINX supports WebDAV, I strongly suggest not to enable the related modules unless you actually need them.
So create the file **/etc/portage/package.use/nginx** with something like the following content:
<file - nginx>
app-misc/mime-types nginx
www-servers/nginx NGINX_MODULES_HTTP: auth_request gunzip realip sub
</file>
Note: you might want to tweak the second line to your needs, see the [[https://wiki.gentoo.org/wiki/Nginx|Gentoo Wiki]] for the full list of available modules.
Now install nginx:
<code bash>
emerge -v nginx
</code>
You can start it after you have configured it.
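Once the configuration described in the next sections is in place, a quick sanity check before starting the service is always worth it:
<code bash>
# parse the configuration and report errors without (re)starting anything
nginx -t
</code>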

===== NGINX main configuration =====
There are many ways to write nice NGINX config files; I will show you mine, which I find quite effective, organized and simple. It makes use of the //include// directive and splits the configuration into at least one file per service and one file per sub-domain.
Assumptions:
  * Your domain is **mydomain.com**, and it has a static landing page under __/var/www/html__
  * Your service X is reachable under **https://mydomain.com/serviceX** (a sub-path)
  * Your service Y is reachable under **https://y.mydomain.com** (a sub-domain)
  * All HTTP traffic is redirected to HTTPS
  * You have a single Let's Encrypt SSL certificate which covers all the sub-domains of your domain (either a wildcard or a cumulative cert, it's up to you)
  * You might have more than one main domain

The top-level **mydomain.com** will have its own folder, then you will create a set of sub-folders stemming from the main domain, one for each sub-domain.

So you will need the following files (names are just my convention):
  * **/etc/nginx/nginx.conf**: the main NGINX configuration file
  * **/etc/nginx/com.mydomain/mydomain.conf**: the entry point for the **mydomain.com** domain
  * **/etc/nginx/com.mydomain/certbot.conf**: the Let's Encrypt / SSL bits shared by all the server blocks
  * **/etc/nginx/com.mydomain/x.conf**: the serviceX specific configuration (sub-path)
  * **/etc/nginx/com.mydomain/y.conf**: the serviceY entry point (the **y.mydomain.com** sub-domain)
  * plus any other SSO specific config files.

The **certbot.conf** file will be created later on, when generating the SSL certificates.

==== Top-level configuration ====

So, here is the content for the main **/etc/nginx/nginx.conf** file:
<file - nginx.conf>
user nginx nginx;

# [... worker_processes, events and the first part of the http block are omitted here ...]

    proxy_headers_hash_max_size 512;
    proxy_headers_hash_bucket_size 128;

    # Add domains here (only the main config file for each domain!)
    include com.mydomain/mydomain.conf;

    # This is for SSL and needs to be included only once for all the domains
    include com.mydomain/certbot.conf;
}
</file>
This will set your defaults and pull in the specific configuration of each domain.

==== mydomain.com configuration ====

Now, for the specific **mydomain.com** configuration, here is the content of the **com.mydomain/mydomain.conf** file (adapt paths and names to your setup):
<file - mydomain.conf>
access_log /var/log/nginx/mydomain.com_access.log main;
error_log /var/log/nginx/mydomain.com_error.log info;

# simple catch-all server for the domain
server {
    # You might want to list the internal names here as well
    server_name mydomain.com;
    # Port for users from outside
    listen 8443 ssl;
    # Port for users from inside
    listen 443 ssl;

    location / {
        root /var/www/html;
    }

    # include here the services hosted as sub-paths, like serviceX:
    include com.mydomain/x.conf;

    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
# include all sub-domains entry points:
include com.mydomain/y.conf;
</file>

This will create the basic setup for your base domain name. I have assumed you want a static landing page, but you might want to put a //proxy_pass// there instead, to serve a dynamic homepage (see the sketch below).
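For example, assuming a hypothetical dashboard application listening on local port 8082, the //location /// block above could be turned into a proxy instead of a static root:
<code>
    location / {
        # serve the landing page from a local dashboard service
        # (127.0.0.1:8082 is just a placeholder for your homepage application)
        proxy_pass http://127.0.0.1:8082;
    }
</code>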

==== sub-domains configuration ====

It should be clear by now that each sub-domain will have its own config file, included at the end of the main domain file.

I will assume that //serviceY// is reachable at **https://y.mydomain.com** and proxied to a local port; its entry point will look like this:
<file - y.conf>
server {
    server_name y.mydomain.com;
    listen 8443 ssl; # external access
    listen 443 ssl; # internal access
    access_log /var/log/nginx/y.mydomain.com_access.log main;
    error_log /var/log/nginx/y.mydomain.com_error.log info;
    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8083;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
</file>
I suggest you keep one such config file per sub-domain, so that each service can be enabled or disabled independently.

==== Differentiate between Internal or External access for services ====

In my setup I have some differences when a service is accessed from //within// the home network, or from //outside// the home network.

The key point is that //internal// access arrives on port 443 while //external// access arrives on port 8443, so you can differentiate simply with the **listen** directives (and, when needed, by splitting the service into two server blocks).

So, for example, a service //only// available inside the home network will have something like:
<code>
server {
    server_name internal_only.mydomain.com;
    listen 443 ssl; # internal access
    http2 on;
    access_log /var/log/nginx/internal_only_access.log main;
    error_log /var/log/nginx/internal_only_error.log info;
    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8080;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
</code>

While a service that can be accessed both from internal and external:
<code>
server {
    server_name both_ways.mydomain.com;
    listen 8443 ssl; # external access
    listen 443 ssl; # internal access
    http2 on;
    access_log /var/log/nginx/both_ways_access.log main;
    error_log /var/log/nginx/both_ways_error.log info;
    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8080;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
</code>

A service that needs SSO authentication only when accessed from the outside will instead be split into two server blocks, one per port:
<code>
server {
    server_name serviceZ.mydomain.com;
    listen 443 ssl; # internal access
    http2 on;
    access_log /var/log/nginx/serviceZ_access.log main;
    error_log /var/log/nginx/serviceZ_error.log info;
    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8080;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
server {
    server_name serviceZ.mydomain.com;
    listen 8443 ssl; # external access
    http2 on;
    [[[ put here your SSO lines ]]]
    access_log /var/log/nginx/serviceZ_access.log main;
    error_log /var/log/nginx/serviceZ_error.log info;
    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8080;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
</code>

In this case, you can even optimize more by moving the **location** lines, which are identical, inside another file that you __include__ twice (see the sketch below). Better to avoid redundancy!
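For example (the //serviceZ_locations.conf// name is just a placeholder of mine), the shared part could live in its own snippet:
<code>
# com.mydomain/serviceZ_locations.conf
location / {
    # Generic proxy pass to the proxied service
    proxy_pass http://127.0.0.1:8080;
}
</code>
Both server blocks would then contain a single //include com.mydomain/serviceZ_locations.conf;// line instead of repeating the location.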

Of course, refer to the specific service pages of this wiki for the exact configuration of each service.

===== Generate SSL certificates for HTTPS =====

Nowadays HTTPS is a must for many reasons, including privacy and security. I assume this is a mandatory requirement. A lot of services will not even work without HTTPS.

Enabling HTTPS requires the generation of valid SSL certificates for your domain(s). You can do that with self-signed certificates, but those will still be flagged as insecure by your browser and some client apps might even not work properly. A better solution is to use the free certificates provided by [[https://letsencrypt.org|Let's Encrypt]].

Now, you **must** generate certificates that chain together all the sub-domains you use. This means that if you add, later on, another sub-domain to host a new service, you will **need to** re-run your //certbot// command adding //-d newsubdomain.mydomain.com// to it (the script below makes this less tedious).
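As a reference, the //certbot// invocation looks something like this (one //-d// option per name to include; the names here are just the examples used on this page):
<code bash>
certbot certonly --expand --nginx -d mydomain.com -d y.mydomain.com
</code>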

Put this content into your **/etc/nginx/com.mydomain/certbot.conf** file:
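The exact content is not reproduced here; as a minimal sketch, assuming CertBot stored your certificate under the usual ///etc/letsencrypt/live/mydomain.com/// folder, it boils down to pointing NGINX at the generated files:
<code>
# com.mydomain/certbot.conf - SSL bits shared by all server blocks
ssl_certificate     /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
</code>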

Let's Encrypt certificates last 90 days, then they need to be renewed. This is automated by CertBot, but you need to call it periodically. You can use crontab for this. Edit the root crontab:
<code bash>
crontab -e
</code>
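The entry itself is up to you; as an example (the schedule is arbitrary), the following attempts a renewal twice a day, which is what Let's Encrypt recommends, and reloads NGINX only after a successful renewal:
<code>
# minute hour day month weekday  command
17 3,15 * * * /usr/bin/certbot renew --quiet --deploy-hook "/etc/init.d/nginx reload"
</code>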

Finally, enable NGINX at boot and start it:
<code bash>
rc-update add nginx default
/etc/init.d/nginx start
</code>

==== Quick and dirty script for new subdomains ====

When you need to **add** a new sub-domain to your certificate, this quick and dirty script helps (keep the full list of names in the DOMAINS variable):
<file - certbot_script.sh>
#!/bin/bash

# space-separated list of ALL the (sub-)domains that must be on the certificate
DOMAINS="mydomain.com y.mydomain.com"

# build the list of -d arguments for certbot
domains=
for i in ${DOMAINS}
do
    domains="${domains} -d ${i}"
done

certbot certonly --expand --nginx ${domains}
</file>

So __FIRST__ you **update** the script adding the new domain at the end of the DOMAINS line, then you run the script and restart your NGINX.


===== Enable CGI support with NGINX =====
To be able to run system scripts and, in general, CGI scripts, NGINX needs an external FastCGI wrapper, since it cannot execute CGI programs directly.

For using CGI directly with NGINX (another option would be to delegate CGI to a different web server) install //fcgiwrap//, managed through //spawn-fcgi//:
<code bash>
emerge www-misc/fcgiwrap www-servers/spawn-fcgi
</code>

Following the usual Gentoo approach, create a dedicated spawn-fcgi instance for fcgiwrap (a //spawn-fcgi.fcgiwrap// symlink to the //spawn-fcgi// init script plus its matching file in ///etc/conf.d//). The contents of the config file should be:
<file - spawn-fcgi.fcgiwrap>
# The init script will append "-1" to the socket name (one socket per child)
FCGI_SOCKET=/run/fcgiwrap.sock
FCGI_PORT=
# The -f flag sends the scripts stderr to the NGINX log
FCGI_PROGRAM="/usr/sbin/fcgiwrap -f"
FCGI_USER=nginx
FCGI_GROUP=nginx
</file>

Then add a **location** for your CGI scripts to the relevant NGINX server block, something like:
<code>
location /cgi-bin/ {
    include fastcgi_params;
    fastcgi_param DOCUMENT_ROOT /var/www/;
    fastcgi_param SCRIPT_NAME   $fastcgi_script_name;
    fastcgi_pass unix:/run/fcgiwrap.sock-1;
}
</code>
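To verify the whole chain, a trivial test script helps; this is just an example of mine, assuming your CGI scripts live under ///var/www/cgi-bin//:
<code bash>
#!/bin/bash
# save as /var/www/cgi-bin/test.sh and make it executable (chmod +x)
echo "Content-type: text/plain"
echo ""
echo "CGI is working on $(hostname)"
</code>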

===== In short: add & enable a service =====

Assuming you want to add a new service to your Reverse Proxy and the relative configuration has been written in a **service.conf** file, you need to **include** it inside your URL's configuration file. If the service needs to be under **https://mydomain.com** as a sub-path, include it inside the server block of **mydomain.conf**; if it lives on its own sub-domain, include its entry-point file at the end of **mydomain.conf** instead (see the sketch below). Then reload NGINX:
<code bash>
/etc/init.d/nginx reload
</code>
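As a concrete sketch (the //service.conf// name is just a placeholder), enabling a new sub-path service means adding one //include// line inside the **mydomain.com** server block:
<code>
server {
    server_name mydomain.com;
    # ... the rest of the server block ...

    # enable the new service:
    include com.mydomain/service.conf;
}
</code>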