====== The Reverse Proxy concept ======
The use of a **reverse proxy** is the key foundation to ensuring security, isolation and flexibility in accessing your self-hosted services.
A reverse-proxy is a web server that sits in the middle and handles all requests toward your services, adding, on top, layers of encryption (HTTPS). The reverse-proxy will take care of receiving the incoming connections and dispatching them to the proper service.
Also, using a well known, solid and proven web server alleviates the risk that each service might expose a poorly written, non-scalable or, worse, internal web server to end users.
And as a final note, using a reverse-proxy you can easily organize all your services under one single domain. There are limitations, which are discussed in the next section.
===== NGINX =====
My choice for a web server in this case is [[https://nginx.org|NGINX]] because:
  * It's much easier than [[https://httpd.apache.org|Apache]] to configure as a reverse-proxy
  * It has more features than lighter servers like [[https://www.lighttpd.net|Lighttpd]]
  * It is fully integrated in [[https://www.gentoo.org|Gentoo]]
In general NGINX is a fully featured, yet very lightweight and secure, HTTP server that shines as a reverse-proxy. If you need more features, like CGI support, they can be added as well (see below).
===== Base URLs and sub-domains =====
There are two different philosophies on how to host services: serve each one as a sub-path of a single domain, or give each one its own sub-domain. I used to like the sub-path approach best, but nowadays I lean toward sub-domains for most services.
Let's assume you have your own domain **mydomain.com** and you want to expose a service called //serviceX//: you can serve it either as **mydomain.com/serviceX** (sub-path) or as **serviceX.mydomain.com** (sub-domain).
As a **sub-path**:
  * Pros: only one domain needed, no need to create sub-domains
  * Pros: the service existence is unknown to anybody not authorized
  * Cons: each service must support a Base URL setting
  * Cons: SSO support must be somehow consistent to avoid headaches (well, SSO support is still spotty today!)
  * Cons: security wise, cookies and CORS can bring unintended vulnerabilities between services, because they all share the same hostname
  * Cons: all services share the same HTTPS certificate
As a **sub-domain**:
  * Pros: any service will work, no need to support Base URL
  * Pros: each service can have its own HTTPS certificate
  * Pros: each service is neatly organized in its own sub-domain
  * Pros: cookies are not shared between services, and CORS protection works
  * Cons: the existence of the service is exposed to public knowledge (DNS records are public)
  * Cons: it is also public knowledge because there are services indexing all existing certificates (Certificate Transparency logs)
__Note:__ you can create //wildcard// DNS records and certificates, so you don't have to expose each individual sub-domain.
To make a long story short, I go with sub-domains for well separated services, while going with sub-paths for closely related services under the same sub-domain; a sketch of the two approaches follows.
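To make the difference concrete, here is a minimal sketch (not a complete configuration: SSL certificate and proxy header directives are omitted) of how a hypothetical service listening locally on port 8080 could be exposed both ways:
<code nginx>
# Sub-path: https://mydomain.com/serviceX/
server {
    server_name mydomain.com;
    listen 443 ssl;
    location /serviceX/ {
        # serviceX must be configured with "/serviceX" as its Base URL
        proxy_pass http://127.0.0.1:8080/;
    }
}

# Sub-domain: https://serviceX.mydomain.com/
server {
    server_name serviceX.mydomain.com;
    listen 443 ssl;
    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
}
</code>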
===== Reverse Proxy propagation to the external world =====
The reverse proxy is installed on the local server; I assume your local server is not directly exposed to the internet and that remote access happens through the SSH tunnelling described in the dedicated page.
The reverse proxy will need to be accessible to both the internal users and the external ones.
The setup I am describing uses three different ports:
  * Port 80: both for local and remote access, will just be a redirect to HTTPS
  * Port 443: standard HTTPS for **internal** access
  * Port 8443: HTTPS for **external** access
**Note:** for Let's Encrypt CertBot to work properly you **need** to redirect **both** port 80 and 443 from your external server to your home server.

===== Installing NGINX =====
NGINX installation on the home server is pretty straightforward; you just need to enable the proper modules via USE flags:
  * //http2// is needed to enable HTTP/2 support (used in the configs below)
  * //sub// is used to allow substitutions inside the pages proxied, to fix web applications that don't play well with reverse-proxies
  * //gunzip// is used to unzip the requests and let the //sub// module work also on compressed content
While NGINX supports WebDAV, I strongly suggest you __don't__ enable it, as you will not be using it: NGINX WebDAV support is lacking and not really recommended.
So create the file **/etc/portage/package.use/nginx** enabling the needed flags, for example:
<file - nginx>
www-servers/nginx http2 NGINX_MODULES_HTTP: gunzip sub
</file>

Note: you might want to tweak the modules list to your needs, see the [[https://wiki.gentoo.org/wiki/Nginx|Gentoo NGINX wiki page]] for all the available options.
Now install nginx:
<code bash>
emerge -v nginx
</code>
You can start it after you have configured it and generated the SSL certificates (see below).

===== NGINX main configuration =====
There are many ways to write nice NGINX config files; I will show you mine, which I find quite effective, organized and simple. It makes use of the //include// directive and splits the configuration into at least one file per service and one file per sub-domain.

Assumptions:
  * Your domain is **mydomain.com**
  * Your service X is reachable under **https://mydomain.com/serviceX** (a sub-path)
  * Your service Y is reachable under **https://y.mydomain.com** (a sub-domain)
  * All HTTP traffic is redirected to HTTPS
  * You have a single Let's Encrypt SSL certificate which covers all the sub-domains of your domain (either a wildcard or a cumulative cert, it's up to you)
  * You might have more than one main domain
The top-level **mydomain.com** will have its own folder, and inside it you will create one sub-folder for each sub-domain.
So you will need the following files (the exact names, besides //nginx.conf//, are up to you):
  * **/etc/nginx/nginx.conf**: main config file, entry point.
  * **/etc/nginx/com.mydomain/certbot.conf**: SSL certificate settings, shared by the whole domain
  * **/etc/nginx/com.mydomain/mydomain.conf**: configuration for **mydomain.com** itself and entry point for its sub-paths and sub-domains
  * **/etc/nginx/com.mydomain/x.conf**: configuration for the //serviceX// sub-path
  * **/etc/nginx/com.mydomain/y/y.conf**: configuration for the **y.mydomain.com** sub-domain (//serviceY//)
  * plus any other SSO specific config files.

The **certbot.conf** file will be created later on; the specific SSO config files are described in the [[selfhost:sso|Authentication]] page. The resulting layout is sketched below.
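For reference, this is roughly the layout you end up with (//x.conf// and the //y/// sub-folder are just the example names used in this page):
<code>
/etc/nginx/
├── nginx.conf                # entry point
└── com.mydomain/
    ├── certbot.conf          # shared SSL certificate settings
    ├── mydomain.conf         # mydomain.com server block + includes
    ├── x.conf                # serviceX sub-path (included by mydomain.conf)
    └── y/
        └── y.conf            # y.mydomain.com sub-domain server block
</code>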
==== Top-level configuration ====

So, here is the content for the main **/etc/nginx/nginx.conf**:
<file - nginx.conf>
user nginx nginx;

error_log /var/log/nginx/error_log info;

events {
    worker_connections 1024;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    # Unknown stuff is considered binary
    default_type application/octet-stream;
    # Set a reasonably informing log format
    log_format main
        '$remote_addr - $remote_user [$time_local] '
        '"$request" $status $bytes_sent '
        '"$http_referer" "$http_user_agent" '
        '"$gzip_ratio"';
    # Improve file upload performance
    tcp_nopush on;
    sendfile on;
    # Indexes are html by default
    index index.html;

    # General catch-all for HTTPS redirection: all plain HTTP goes to HTTPS
    server {
        listen 80 default_server;
        return 301 https://$host$request_uri;
    }

    # Using Authelia SSO can lead to longer headers, better increase buffers
    proxy_headers_hash_max_size 512;
    proxy_headers_hash_bucket_size 128;

    # Add domains here (only the main config file of each domain)
    include com.mydomain/mydomain.conf;

    # This is for SSL and needs to be included only once for all the domains
    include com.mydomain/certbot.conf;
}
</file>

This will set your defaults for every service and site served by this reverse proxy, then load the //mydomain.com// specific configuration (and any other domain you might add).
==== mydomain.com configuration ====

Now, for the specific **mydomain.com** domain, here is the content of **/etc/nginx/com.mydomain/mydomain.conf**:
<file - mydomain.conf>
access_log /var/log/nginx/mydomain.com_access_log main;
error_log /var/log/nginx/mydomain.com_error_log info;

# simple catch-all server for the domain
server {
    # You might want to specify also the internal hostname / IP of your server
    server_name mydomain.com;
    # Port for users from outside
    listen 8443 ssl;
    # Port for users from inside
    listen 443 ssl;
    http2 on;

    # unauthenticated static landing page (maybe a "get off my lawn" GIF...)
    location / {
        root /data/daemons/htdocs;
    }

    # include all sub-paths for mydomain.com:
    include com.mydomain/x.conf;

    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}

# include all sub-domains entry points:
include com.mydomain/y/y.conf;
</file>

This will create the catch-all server for **mydomain.com**, serve the landing page on both the internal and the external port, and pull in all the sub-path and sub-domain configurations. A sample sub-path config file is sketched below.
==== sub-domains configuration ====

It should be clear by now that each sub-domain has its own sub-folder, containing at least one configuration file, plus one more file for each of its sub-paths, like the one for //serviceY// below.

I will assume that //serviceY// performs its own authentication and cannot use SSO:
<file - y.conf>
server {
    server_name y.mydomain.com;
    # Port for users from outside
    listen 8443 ssl;
    # Port for users from inside
    listen 443 ssl;
    http2 on;
    access_log /var/log/nginx/y.mydomain.com_access_log main;
    error_log /var/log/nginx/y.mydomain.com_error_log info;

    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8000;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
</file>

I suggest you split all the sub-paths of each sub-domain into separate config files and //include// them inside the //server// block, like I did above for //mydomain.com//.
==== Differentiate between Internal or External access for services ====

In my setup there are some differences when a service is accessed from //within// the home network and when it is accessed from //outside// of it.
The key point is that //internal// access comes in on port 443 while //external// access comes in on port 8443, so you can differentiate the two simply with the **listen** directives of each //server// block.

So, for example, a service //only// available inside the home network will have something like:
<code nginx>
server {
    server_name serviceZ.mydomain.com;
    listen 443 ssl; # internal access only
    http2 on;
    access_log /var/log/nginx/serviceZ_access_log main;
    error_log /var/log/nginx/serviceZ_error_log info;
    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8000;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
</code>
While a service that can be accessed both from the internal and the external network will look like:
<code nginx>
server {
    server_name serviceZ.mydomain.com;
    listen 8443 ssl; # external access
    listen 443 ssl;  # internal access
    http2 on;
    access_log /var/log/nginx/serviceZ_access_log main;
    error_log /var/log/nginx/serviceZ_error_log info;
    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8000;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
</code>
A service where you want to differentiate between internal and external access (for example, requiring SSO only from outside) will instead have two //server// blocks:
<code nginx>
server {
    server_name serviceZ.mydomain.com;
    listen 443 ssl; # internal access: no SSO required
    http2 on;
    access_log /var/log/nginx/serviceZ_access_log main;
    error_log /var/log/nginx/serviceZ_error_log info;
    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8000;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
server {
    server_name serviceZ.mydomain.com;
    listen 8443 ssl; # external access: SSO required
    http2 on;
    [[[ put here your SSO lines ]]]
    access_log /var/log/nginx/serviceZ_access_log main;
    error_log /var/log/nginx/serviceZ_error_log info;
    location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8000;
    }
    # include HTTPS certs stuff:
    include com.mydomain/certbot.conf;
}
</code>

In this case, you can even optimize more by moving the **location** lines, which are identical, into another file and including it from both //server// blocks, as sketched below.
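For example (a minimal sketch; //serviceZ-common.conf// is just a hypothetical name for the shared snippet):
<code nginx>
# com.mydomain/serviceZ-common.conf: the part shared by both server blocks
location / {
    # Generic proxy pass to the proxied service
    proxy_pass http://127.0.0.1:8000;
}
</code>
Then each of the two //server// blocks just needs:
<code nginx>
include com.mydomain/serviceZ-common.conf;
include com.mydomain/certbot.conf;
</code>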
Of course, refer to the [[selfhost:sso|Authentication]] page for the actual SSO configuration lines.

===== Generate SSL certificates for HTTPS =====

Nowadays HTTPS is a must for many reasons, including privacy and security, so I treat it as a mandatory requirement. A lot of services will not even work without HTTPS.

Enabling HTTPS requires the generation of valid SSL certificates for your domain(s). You can do that with self-signed certificates, but those will still be flagged as insecure by your browser, and some client apps might not even work properly. A better solution is to use the [[https://letsencrypt.org|Let's Encrypt]] service, which issues browser-trusted certificates for free.

How does it work?

First of all:
  - You ask Let's Encrypt to create a certificate for each one of your sub-domains (automated by CertBot)
  - You set up the certificate (automated by CertBot)
  - You renew the certificate periodically (automated by CertBot)

Then:
  - You connect with your browser to **https://mydomain.com**
  - Your server provides the certificate
  - Your browser verifies that the certificate is valid against the Let's Encrypt Root Certificate
  - You are good to go!

Using //self-signed// certificates would fail the last verification step, because no browser trusts your personal Root Certificate out of the box (unless you install it manually on every client).

Luckily, CertBot automates almost all of the work on the Let's Encrypt side.
Install CertBot and the NGINX plugin:
<code bash>
emerge -v certbot-nginx certbot
</code>

Then request your certificate, listing **all** the (sub-)domains it must cover:
<code bash>
certbot --nginx certonly -d mydomain.com -d y.mydomain.com -d xxxx
</code>
Now, you **must** tell NGINX where to find the certificates and the related SSL settings.

Put this content into your **/etc/nginx/com.mydomain/certbot.conf**:
<file - certbot.conf>
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
</file>

Of course, adapt the paths to your specific case.

Let's Encrypt certificates expire after 90 days, so they must be renewed periodically. Schedule the renewal with cron (appending the output to a log file of your choice):
<code bash>
crontab -e
</code>
and add the following line:
<code>
31 16 * * * certbot renew &>> /var/log/certbot.log
</code>

There you go!
You can now start your nginx server:
<code bash>
rc-update add nginx default
/etc/init.d/nginx start
</code>


==== Quick and dirty script for new subdomains ====

When you need to **add** a new sub-domain to your certificate, you can use a quick and dirty script like this one:
<file - certbot_script.sh>
#!/bin/bash

DOMAINS="mydomain.com y.mydomain.com"

domains=
for i in ${DOMAINS}
do
    domains="${domains} -d ${i}"
done

certbot certonly --expand --nginx ${domains}
</file>

So __FIRST__ you **update** the script, adding the new domain at the end of the DOMAINS line, then you run the script and restart your NGINX, as in the example below.
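For example, assuming the new sub-domain you just appended to DOMAINS is a hypothetical **z.mydomain.com**:
<code bash>
./certbot_script.sh        # re-issues the certificate, now also covering z.mydomain.com
/etc/init.d/nginx restart  # make NGINX pick up the new certificate
</code>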
===== Enable CGI support with NGINX =====

To be able to run system scripts and, in general, [[https://en.wikipedia.org/wiki/Common_Gateway_Interface|CGI]] scripts, you need some extra work, because NGINX does not support CGI natively.

For using CGI directly with NGINX (another option could be to run Apache or another web server in addition, but why?) you can install and set up [[https://github.com/gnosek/fcgiwrap|fcgiwrap]] together with //spawn-fcgi//:
<code bash>
emerge www-misc/fcgiwrap www-servers/spawn-fcgi
</code>

Spawn-fcgi allows you to run one instance of fcgiwrap for each service you need to run. This is an excellent approach to keep services separated, each one running as its own user.

Since you want to run //fcgiwrap// via //spawn-fcgi//, you need to:
  * Setup the //spawn-fcgi// configuration in **/etc/conf.d/spawn-fcgi.fcgiwrap**
  * Create a start script in **/etc/init.d** by symlinking the default //spawn-fcgi// one

The contents of the config file should be:
<file - spawn-fcgi.fcgiwrap>
# Use a unix socket (and no TCP port)
FCGI_SOCKET=/var/run/fcgiwrap.sock
FCGI_PORT=
# The -f sends stderr to the nginx log
FCGI_PROGRAM="/usr/sbin/fcgiwrap -f"
FCGI_USER=nginx
FCGI_GROUP=nginx
FCGI_EXTRA_OPTIONS="-M 0700"
ALLOWED_ENV="PATH"
</file>

And to do all the above:
<code bash>
cp /etc/conf.d/spawn-fcgi /etc/conf.d/spawn-fcgi.fcgiwrap
ln -s /etc/init.d/spawn-fcgi /etc/init.d/spawn-fcgi.fcgiwrap
rc-update add spawn-fcgi.fcgiwrap default
/etc/init.d/spawn-fcgi.fcgiwrap start
</code>

Then enable it in your NGINX config by adding the following directives inside the //server// block that should expose your CGI scripts:
<file - cgi.conf>
location /cgi-bin/ {
    # fcgiwrap will execute DOCUMENT_ROOT + SCRIPT_NAME
    fastcgi_param DOCUMENT_ROOT /var/www;
    fastcgi_param SCRIPT_NAME   $fastcgi_script_name;
    fastcgi_pass unix:/var/run/fcgiwrap.sock;
}
</file>
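To test it, you can drop a trivial script into the **/var/www/cgi-bin** folder assumed above (any executable printing a CGI header will do), then point your browser to **https://mydomain.com/cgi-bin/hello.sh**:
<code bash>
#!/bin/bash
# Save as /var/www/cgi-bin/hello.sh and make it executable: chmod +x hello.sh
echo "Content-Type: text/plain"
echo ""
echo "Hello from $(hostname), it is $(date)"
</code>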
===== In short: add & enable a service =====

Assuming you want to add a new service to your Reverse Proxy and the relative configuration has been written in a **service.conf** file, you need to **include** it inside your URL's configuration file. If the service needs to be reachable under **https://mydomain.com**, add inside the //server// block of **mydomain.conf** the line:
<code>
include "com.mydomain/service.conf";
</code>

and then restart nginx:
<code bash>
/etc/init.d/nginx restart
</code>