====== F) The Reverse Proxy concept ======
  
The use of a **reverse proxy** is the key foundation for ensuring security, isolation and flexibility in accessing your self-hosted services.
  
A reverse-proxy is a web server that sits in the middle and handles all requests toward your services, adding on top layers of encryption (HTTPS/SSL), authentication, load-balancing and security. If your services are properly written (not many are, but the best ones are) they will accept your SSO authentication directly, without even the need to create users for each service; in this case your reverse-proxy will also cater for your SSO (Single Sign On) solution. More on this on the dedicated page [[selfhost:sso|Single Sign On]], but keep reading this page first.
  
The reverse-proxy will take care of handling HTTPS/SSL certificates in one centralized place, making it much easier to configure all your services without HTTPS and then seamlessly convert all the HTTP traffic to HTTPS. It's much easier to manage all the certificates in one place rather than depending on each service's capability to handle HTTPS independently.
  
Also, using a well known, solid and proven web server will alleviate the risk that each service might expose a poorly written, non-scalable or, worse, internal web server to end users.
  
And as a final note, using a reverse-proxy you can easily organize all your services either under one single domain or with sub-domains, according to your specific needs.
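To make the concept concrete, here is a minimal sketch of a reverse-proxy //server// block (names and port are hypothetical; the real configuration is developed step by step in the rest of this page):
<code>
# Minimal reverse-proxy sketch: NGINX terminates HTTPS and forwards
# plain HTTP to a service listening only on localhost.
server {
        server_name mydomain.com;
        listen 443 ssl;
        location /service/ {
                proxy_pass http://127.0.0.1:8096/;
        }
}
</code>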
  
===== NGINX =====
  
My choice for a web server in this case is [[https://nginx.org|NGINX]], among the many available as Open Source, because:
  * It's much easier than [[https://www.apache.org|Apache]] to set up as a reverse-proxy, less resource hungry, and works with more SSO solutions than Apache
  * It has more features than [[https://caddyserver.com/|Caddy]]
  * It is fully integrated with the [[https://letsencrypt.org|Let's Encrypt]] SSL infrastructure / CertBot script
  
In general NGINX is a fully featured yet very lightweight and secure HTTP server that shines as a reverse-proxy. If you need to add more features, like [[https://www.php.net|PHP]] support or FastCGI, NGINX will support you, but with a little more effort than Apache.
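For example, PHP pages can be served by handing them to a FastCGI backend; a minimal sketch, assuming a PHP-FPM instance listening on a local socket (the socket path varies between systems):
<code>
# Hand .php requests over to a PHP-FPM FastCGI backend (hypothetical socket path).
location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_pass unix:/run/php-fpm.sock;
}
</code>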
  
===== Base URLs and sub-domains =====
  
There are two different philosophies on how to host services: serve each one as a sub-path of a single domain, or use sub-domains. I used to prefer the //sub-path// approach, but a good mix of the two is preferable.
  
Let's assume you have your own domain **mydomain.com** and you want to expose a service called //jellyfin// (a well known media-server). You can expose it as **https://mydomain.com/jellyfin** (sub-path) or as **https://jellyfin.mydomain.com** (sub-domain).
  
As a **sub-path**:
  * Pros: only one domain needed, no need to create sub-domains
  * Pros: the service's existence is unknown to anybody not authorized
  * Cons: each service must support a Base URL setting (well, not all do!)
  * Cons: SSO support must be somehow consistent to avoid headaches (well, SSO support is still spotty today!)
  * Cons: security wise, cookies and CORS can bring unintended vulnerabilities between services, because they all share the same domain
  * Cons: all services share the same HTTPS certificate
  
As a **sub-domain**:
  * Pros: any service will work, no need to support Base URL
  * Pros: each service can have its own HTTPS certificate
  * Pros: each service is neatly organized in its own sub-domain
  * Pros: cookies are not shared between services, and CORS protection works
  * Cons: the existence of the service is exposed to public knowledge (DNS records are public)
  * Cons: also public knowledge because there are services indexing all issued certificates
  
__Note:__ you can create //wildcard// certificates that will match any sub-domain, but this has drawbacks and it's not a good idea, security wise. You can still avoid one certificate per sub-domain by adding each sub-domain to the same certificate, but then you will need to extend your certificate each time you add a sub-domain: this is my approach.
  
To make a long story short, I go with sub-domains for well separated services, and with sub-paths when sharing stuff that kind of belongs together. Also, a deciding factor is whether the selected services support SSO properly or not.
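For example, serving //jellyfin// as a sub-path boils down to a //location// block like the following sketch (assuming Jellyfin listens on its default port 8096 and its "Base URL" setting is configured to ///jellyfin//):
<code>
# Sub-path sketch: Jellyfin on its default local port, with its
# "Base URL" setting configured to /jellyfin.
location /jellyfin {
        proxy_pass http://127.0.0.1:8096;
}
</code>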
  
  
===== Reverse Proxy propagation =====
  
The reverse proxy is installed on the local server; I assume your local server is reachable from remote (see [[networking:external_access|Remote Access to your Home Server]]).
  
The reverse proxy will need to be accessible to both internal and external users. You could set up two different proxies, but I prefer to have only one, listening to both worlds. I will assume that there might be differences between internal and external users in terms of authentication or service availability. The underlying idea is that your reverse proxy will listen on different ports: one for internal access and one for external access.
  
The setup I am describing uses three different ports:
  * Port 80: both local and remote, will just be a redirect to HTTPS
  * Port 443: standard HTTPS for **internal** access
  * Port 8443: HTTPS for **external** access
  
**Note:** for Let's Encrypt CertBot to work properly you **need** to redirect **both** ports 80 and 443 from your external server to your internal server. CertBot will shut down your NGINX and spin up a custom NGINX server that you cannot tweak, so it's critical that your SSH tunnels properly forward ports 80 and 443 from the external server to the internal one, or it will not work.
  
===== Installing NGINX =====
  
NGINX installation on the home server is pretty straightforward, but you need to enable some specific modules:
  * //auth_request//: needed for SSO solutions like Authelia
  * //auth_pam//: needed for PAM-based SSO
  * //sub//: used to allow substitutions inside the proxied pages, to fix web applications that don't play well with reverse-proxies
  * //gunzip//: used to decompress the responses and let the //sub// module work also on compressed responses
  * //realip//: needed by SSO solutions like Authelia
  
While NGINX supports WebDAV, I strongly suggest you __don't__ enable it, as you will not be using it. NGINX WebDAV support is lacking and not really recommended.
  
So create the file **/etc/portage/package.use/nginx** with the following lines:
<file - nginx>
app-misc/mime-types nginx
www-servers/nginx NGINX_MODULES_HTTP: auth_request auth_pam gunzip sub realip xslt
</file>
  
Note: you might want to tweak the second line to your needs, see the [[https://wiki.gentoo.org/wiki/Nginx|flags for nginx]] and adapt.
  
Now install nginx:
<code bash>
emerge -v nginx
</code>
  
You can start it after you have configured it.
  
  
===== NGINX main configuration =====
  
There are many ways to write nice NGINX config files; I will show you mine, which I find quite effective, organized and simple. It makes use of the //include// directive and splits the configuration into at least one file per service and one file per sub-domain.
  
Assumptions:
  * Your domain is **mydomain.com**, and it has a static landing page under __/var/www/html/index.html__
  * Your service X is reachable under **https://mydomain.com/serviceX** (sub-path)
  * Your service Y is reachable under **https://y.mydomain.com** (sub-domain)
  * All HTTP traffic is redirected to HTTPS
  * You have a single Let's Encrypt SSL certificate which covers all the sub-domains of your domain (either a wildcard or a cumulative cert, it's up to you)
  * You might have more than one main domain
  
The top-level **mydomain.com** will have its own folder; inside it you will create one sub-folder for each sub-domain, and inside each folder one configuration file for each sub-path served on that sub-domain.
  
So you will need the following files:
  * **/etc/nginx/nginx.conf**: main config file, entry point
  * **/etc/nginx/com.mydomain/certbot.conf**: SSL certificates configuration for //mydomain.com//
  * **/etc/nginx/com.mydomain/mydomain.conf**: global config for //mydomain.com//
  * **/etc/nginx/com.mydomain/serviceX.conf**: config for //serviceX// on //mydomain.com//
  * **/etc/nginx/com.mydomain/y/y.conf**: config for //serviceY// on //y.mydomain.com//
  * plus any other SSO-specific config files
  
The **certbot.conf** file will be created later on; the specific SSO config files are described in the [[selfhost:sso|Authentication]] page.
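Visually, the resulting layout under **/etc/nginx** looks like this:
<code>
/etc/nginx/
├── nginx.conf
└── com.mydomain/
    ├── certbot.conf
    ├── mydomain.conf
    ├── serviceX.conf
    └── y/
        └── y.conf
</code>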
  
  
==== Top-level configuration ====
  
So, here is the content for the main **/etc/nginx/nginx.conf**:
<file - nginx.conf>
user nginx nginx;
  
error_log /var/log/nginx/error_log info;
  
events {
        worker_connections 1024;
        use epoll;
}
  
http {
        include /etc/nginx/mime.types;
        # Unknown stuff is considered to be binaries
        default_type application/octet-stream;
        # Set a reasonably informative log format
        log_format main
                '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$gzip_ratio"';
        # Improve file upload to client by avoiding userspace copying
        tcp_nopush on;
        sendfile on;
        # Indexes are html by default
        index index.html;
  
        # General catch-all for HTTPS redirection, we don't like serving plain HTTP
        server {
                listen 80 default_server;
                return 301 https://$host$request_uri;
        }
  
        # Using Authelia SSO can lead to longer headers, better increase buffers
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;

        # Add domains here (only the main config file for each domain!)
        include com.mydomain/mydomain.conf;

        # This is for SSL and needs to be included only once for all the domains
        include /etc/letsencrypt/options-ssl-nginx.conf;
}
</file>
  
This sets the defaults for every service and site served by this reverse proxy, then loads the //mydomain.com//-specific configuration file.
  
  
==== mydomain.com configuration ====
  
Now, for the specific **mydomain.com**, you need the following config file under **/etc/nginx/com.mydomain/mydomain.conf**:
<file - mydomain.conf>
  
access_log /var/log/nginx/mydomain.com_access_log main;
error_log /var/log/nginx/mydomain.com_error_log info;

# simple catch-all server for the domain
server {
        # You might want to specify also the internal host name here
        server_name mydomain.com;
        # Port for users from outside
        listen 8443 ssl;
        # Port for users from inside
        listen 443 ssl;
        http2 on;

        # unauthenticated static landing page (maybe a "get off my lawn" GIF...)
        location / {
                root /var/www/html;
        }

        # include all sub-paths for mydomain.com:
        include com.mydomain/serviceX.conf;

        # include HTTPS certs stuff:
        include com.mydomain/certbot.conf;
}

# include all sub-domains entry points:
include com.mydomain/y/y.conf;
</file>
  
This will create the basic setup for your base domain name. I have assumed you want a static landing page, but you might put a //redirect// to service Y or service X instead... or add a dashboard, of course protected by your SSO...
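For example, a hypothetical redirect of the bare domain to service Y would just replace the //location// block above:
<code>
# Send visitors of the bare domain straight to service Y.
location / {
        return 301 https://y.mydomain.com;
}
</code>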
  
==== sub-domains configuration ====
  
It should be clear now that each sub-domain will have its own sub-folder, containing at least one configuration file for each sub-path, like the one for serviceY.
  
I will assume that //serviceY// performs its own authentication and cannot use SSO:
<file - y.conf>
server {
        server_name y.mydomain.com;
        listen 8443 ssl; # external access
        listen 443 ssl; # internal access
        access_log /var/log/nginx/y.mydomain.com_access_log main;
        error_log /var/log/nginx/y.mydomain.com_error_log info;
        location / {
                # Generic proxy pass to the proxied service
                proxy_pass http://127.0.0.1:8000;
        }
        # include HTTPS certs stuff:
        include com.mydomain/certbot.conf;
}
</file>
  
-will walk you trough it bit.+suggest you split all sub-paths for each sub-domain in separate config file and //include// them inside the //server// block, like i did above for //mydomain.com//.
  
  
==== Differentiate between Internal and External access for services ====
  
In my setup there are some differences when a service is accessed from //within// the home network versus from //outside// of it.
  
The key point is that //external// access comes through port 8443, while //internal// access comes through port 443. This allows you to differentiate your setup with __server__ blocks.
  
So, for example, a service //only// available inside the home network will have something like:
<code>
server {
        server_name internal_only.mydomain.com;
        listen 443 ssl; # internal access
        http2 on;
        access_log /var/log/nginx/internal_only.mydomain.com_access_log main;
        error_log /var/log/nginx/internal_only.mydomain.com_error_log info;
        location / {
                # Generic proxy pass to the proxied service
                proxy_pass http://127.0.0.1:8000;
        }
        # include HTTPS certs stuff:
        include com.mydomain/certbot.conf;
}
</code>
  
While a service that can be accessed both from internal and external:
<code>
server {
        server_name serviceZ.mydomain.com;
        listen 8443 ssl; # external access
        listen 443 ssl; # internal access
        http2 on;
        access_log /var/log/nginx/serviceZ.mydomain.com_access_log main;
        error_log /var/log/nginx/serviceZ.mydomain.com_error_log info;
        location / {
                # Generic proxy pass to the proxied service
                proxy_pass http://127.0.0.1:8000;
        }
        # include HTTPS certs stuff:
        include com.mydomain/certbot.conf;
}
</code>
  
A service where you want to differentiate between internal and external access, for example adding SSO authentication only for external access:
<code>
server {
        server_name serviceZ.mydomain.com;
        listen 443 ssl; # internal access
        http2 on;
        access_log /var/log/nginx/serviceZ.mydomain.com_access_log main;
        error_log /var/log/nginx/serviceZ.mydomain.com_error_log info;
        location / {
                # Generic proxy pass to the proxied service
                proxy_pass http://127.0.0.1:8000;
        }
        # include HTTPS certs stuff:
        include com.mydomain/certbot.conf;
}
server {
        server_name serviceZ.mydomain.com;
        listen 8443 ssl; # external access
        http2 on;
        [[[ put here your SSO lines ]]]
        access_log /var/log/nginx/serviceZ.mydomain.com_access_log main;
        error_log /var/log/nginx/serviceZ.mydomain.com_error_log info;
        location / {
                # Generic proxy pass to the proxied service
                proxy_pass http://127.0.0.1:8000;
        }
        # include HTTPS certs stuff:
        include com.mydomain/certbot.conf;
}
</code>
  
In this case, you can optimize further by moving the **location** lines, which are identical, into another file that you __include__ twice, as sketched below. Better to avoid redundancy!
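A minimal sketch of that optimization, using a hypothetical shared file **/etc/nginx/com.mydomain/serviceZ-locations.conf**:
<code>
# com.mydomain/serviceZ-locations.conf -- shared by both server blocks:
location / {
        # Generic proxy pass to the proxied service
        proxy_pass http://127.0.0.1:8000;
}
</code>
Both //server// blocks then simply contain **include com.mydomain/serviceZ-locations.conf;** in place of the duplicated //location// block.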
  
Of course, refer to the [[selfhost:sso|SSO]] page for more details on SSO.
  
  
===== Generate SSL certificates for HTTPS =====
  
Nowadays HTTPS is a must for many reasons, including privacy and security. I assume this is a mandatory requirement: a lot of services will not even work without HTTPS.

Enabling HTTPS requires the generation of valid SSL certificates for your domain(s). You could use self-signed certificates, but those will still be flagged as insecure by your browser, and some client apps might not even work properly. A better solution is to use the [[https://letsencrypt.org|Let's Encrypt]] certification authority, which is an open-source, public and free Certificate Authority that lets you generate and manage your certificates.

How does it work?
  
First of all:
  - You ask Let's Encrypt to create a certificate for each one of your sub-domains (automated by CertBot)
  - You set up the certificate (automated by CertBot)
  - You periodically renew the certificate (automated by CertBot)
  
Then:
  - You connect with your browser to **https://mydomain.com**
  - Your server provides the certificate
  - Your browser verifies the certificate against the Let's Encrypt Root Certificate
  - You are good to go!
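If you want to check by hand what your server presents, the standard //openssl// client can show it (assuming **mydomain.com** already resolves to your server):
<code bash>
# Print issuer and validity dates of the certificate served on port 443:
openssl s_client -connect mydomain.com:443 -servername mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates
</code>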
  
Using //self-signed// certificates works too, but since the browser needs to already know the associated Certificate Authority to validate a certificate, the site will still appear as untrusted. Since Let's Encrypt is **a nonprofit Certificate Authority providing TLS certificates** with the mission to provide everybody with security and trust, there is no reason not to use it.
  
Luckily, Let's Encrypt provides a neat piece of software called **CertBot** that can automate all the steps for the major web servers, including NGINX. CertBot will send requests to Let's Encrypt, spin up an NGINX server for you and store the certificate. The only thing you need to do is include the proper config file into NGINX and restart it.

Install CertBot and the NGINX plugin:
<code bash>
emerge -v certbot-nginx certbot
</code>

Then ask CertBot to generate the certificate covering all your (sub-)domains:
  
<code bash>
certbot --nginx certonly -d mydomain.com -d y.mydomain.com -d xxxx
</code>
  
Now, you **must** generate certificates that chain together all the sub-domains you use. This means that if you later add another sub-domain to host a new service, you will **need to** re-run the above //certbot// command adding //-d newsubdomain.mydomain.com//, and do not forget all the older ones! Luckily, many domain names can be chained into one single certificate, so you do not have to edit your NGINX config ever again for CertBot to work.
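For example, extending the certificate with a hypothetical new sub-domain **new.mydomain.com** would look like this (//--expand// tells CertBot to replace the existing certificate with the extended one):
<code bash>
certbot --nginx certonly --expand -d mydomain.com -d y.mydomain.com -d new.mydomain.com
</code>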
Put this content into your **/etc/nginx/com.mydomain/certbot.conf**:
<file - certbot.conf>
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
</file>

Of course, adapt the paths for your specific case.

Let's Encrypt certificates last 90 days, then they need to be renewed. This is automated by CertBot, but you need to call it periodically. You can use crontab for this. Edit root's crontab:
<code bash>
crontab -e
</code>
  
and add the following line, so the renewal check runs once a day:
<code bash>
31 16 * * * certbot renew &>> /var/log/certbot.log
</code>
 +
There you go!
  
You can now start your nginx server:
  
<code bash>
rc-update add nginx default
/etc/init.d/nginx start
</code>
  
==== Quick and dirty script for new sub-domains ====

When you need to **add** a new sub-domain to your certificate, you can copy (and adapt) the following script I use:
<file - certbot_script.sh>
#!/bin/bash

# List here ALL the (sub-)domains covered by the certificate:
DOMAINS="mydomain.com y.mydomain.com other.mydomain.com"

# Build the -d option list for certbot:
domains=
for i in ${DOMAINS}
do
        domains="${domains} -d ${i}"
done

certbot certonly --expand --nginx ${domains}
</file>

So __FIRST__ you **update** the script, adding the new domain at the end of the DOMAINS line, then you run the script and restart your NGINX.


===== Enable CGI support with NGINX =====

To be able to run system scripts and, in general, [[https://en.wikipedia.org/wiki/CGI|CGIs]] on NGINX you need some additional configuration. NGINX is not capable of running CGI scripts at all: it only supports the [[https://en.wikipedia.org/wiki/FastCGI|FastCGI]] protocol, which is **quite different** and **not directly compatible** with standard CGI.

To use CGI directly with NGINX (another option could be to run Apache or another web server in addition, but why?) you can install and set up [[https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/|fcgiwrap]] and its companion spawn package:
<code bash>
emerge www-misc/fcgiwrap www-servers/spawn-fcgi
</code>

Spawn-fcgi allows you to run one instance of fcgiwrap for each service you need to run. This is an excellent approach to keep services separated, each one running under its own user.

To run //fcgiwrap//, set it up like this:
  * Set up your //spawn-fcgi// config file in **/etc/conf.d/spawn-fcgi.fcgiwrap**
  * Create a start script in **/etc/init.d/spawn-fcgi.fcgiwrap**

The contents of the config file should be:
<file - spawn-fcgi.fcgiwrap>
# The "-1" suffix is added on my system, check yours, YMMV!
FCGI_SOCKET=/var/run/fcgiwrap.sock-1
FCGI_PORT=
# The -f sends stderr to the nginx log
FCGI_PROGRAM="/usr/sbin/fcgiwrap -f"
FCGI_USER=nginx
FCGI_GROUP=nginx
FCGI_EXTRA_OPTIONS="-M 0700"
ALLOWED_ENV="PATH"
</file>

And to do all the above:
<code bash>
cp /etc/conf.d/spawn-fcgi /etc/conf.d/spawn-fcgi.fcgiwrap
ln -s /etc/init.d/spawn-fcgi /etc/init.d/spawn-fcgi.fcgiwrap
rc-update add spawn-fcgi.fcgiwrap default
/etc/init.d/spawn-fcgi.fcgiwrap start
</code>

Then enable it in your NGINX config by adding the following directives:
<file - cgi.conf>
location /my_cgi {
        fastcgi_param DOCUMENT_ROOT /path/to/cgi/executable/folder/;
        fastcgi_param SCRIPT_NAME   my_cgi;
        fastcgi_pass unix:/var/run/fcgiwrap.sock;
}
</file>
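To test the setup, a trivial CGI executable helps; a minimal sketch (assuming you save it as **my_cgi** inside the folder configured above and mark it executable):
<file - my_cgi>
#!/bin/bash
# Minimal CGI response: header, blank line, then the body.
echo "Content-Type: text/plain"
echo ""
echo "Hello from fcgiwrap!"
</file>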

===== In short: add & enable a service =====

Assuming you want to add a new service to your Reverse Proxy and its configuration has been written to a **service.conf** file, you need to **include** it inside your domain's configuration file. If the service needs to be under **https://mydomain.com** you will add it like:
<code>
include "com.mydomain/service.conf";
</code>

and then restart nginx:
<code bash>
/etc/init.d/nginx restart
</code>
  
