Remote Access

Accessing your home server from outside can be tricky. Please note that I am referring to accessing your home server from outside, not your home network. There is a big difference!

Access to the home server means access to all your home services: any service that is not on the home server itself can, and should, be remapped through the reverse proxy. Access to the home network is not desirable because it exposes the internal devices to additional risks (what if your mobile device, while outside, is compromised? You just opened an unsecured route into your home…). If you need access to the home network, and please think twice because 99% of the time it is not what you really need, you should use a VPN.

There is only one method that allows you external access without having an external server, which is direct port forwarding, but it is only possible if allowed by your ISP and if your ISP gives you a public IP address. It does not work with CG-NAT.

Since, as I stated, CG-NAT is the norm today and getting a public IP is basically impossible, the only reliable and safe way to remotely access your home is to have access to some external resource with a public IP address. So go ahead and buy some cheap VPS or similar service. You will not need anything fancy, since you will not be running any service on it.

I assume you have at least one external server, let's call it external-server1, and maybe even a second one, let's call it external-server2, which is good to have for resilience, especially if you have two ISPs at home. I will show you here how to set it up.

There are many ways to create an encrypted tunnel between the home server and the external servers like:

  • OpenVPN or Wireguard
  • SSH Tunnels

I am a big fan of the simplicity of the SSH tunnels solution, so here it is!

SSH Tunneling

Advantages of SSH tunnels:

  • Easy to set up
  • Do not require any additional package on Linux (SSH is installed by default)
  • Work in any setting, always
  • Easily perform all kinds of port forwarding natively and in both directions

To create SSH tunnels, you do not need to install anything.

OpenSSH is the default SSH package installed on Gentoo. It creates a safe, encrypted tunnel between two hosts and is based on OpenSSL. OpenSSH supports many features useful for your use case:

  • Strong encryption of all exchanged data
  • Host identification based on public/private keys
  • User authentication based on public/private keys
  • Strong authentication, optionally with 2FA (key + password)
  • Port forwarding from local to remote
  • Port forwarding from remote to local
  • TCP Keep-Alive support

To automate SSH tunnels and make them resilient to network disconnects or other failures, you need to install AutoSSH; I will show you how.

The Basic Idea

You will create a specific user, called tunnel, both on the local and the remote servers, and allow the local user tunnel to log in via private/public key exchange to the remote servers. This is always possible because the remote server is, by definition, reachable from the home server.

The user tunnel will then activate a number of port forwardings from the remote server to the local server, directly exposing only your most secured home services:

  • Home SSH port: so you can access your home server from outside
  • Home Reverse Proxy ports (80 & 443): so you can access all your secured services from outside.

Please note that you must expose port 80 of your reverse proxy if you plan to use the Let's Encrypt Certbot tool, or it will not work.

What you will need:

  • Create a tunnel user on the home server
  • Create a tunnel user on the remote server(s)
  • Add the home tunnel user's public key to the remote tunnel user's authorized keys
  • Log in from home to remote as user tunnel with the appropriate options to create all the needed tunnels.

Everything will be properly encrypted and safe.

Local Host Setup

Create a new user called tunnel, but make sure its home folder is on the root partition: if, by any chance, your other partitions fail to mount, you would lose the tunnels as well.

useradd -m tunnel
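
Note that useradd -m places the home folder under /home by default; if /home is a separate partition on your system, you might prefer something like the following (the path is just a suggestion, and remember to adjust the paths used later in this guide accordingly):

useradd -m -d /var/lib/tunnel tunnel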

Create the SSH identity for the user tunnel:

su - tunnel
ssh-keygen

I suggest you do not set a password for this key. It is slightly less secure, but you can always revoke the key if it gets leaked. If you set a password, you will need to provide it in the connection script, which kind of makes it a moot point anyway.
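
If you prefer to generate the key non-interactively, something like this should work (a 4096-bit RSA key with an empty passphrase):

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa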

Your SSH public key will be stored under ~/.ssh/id_rsa.pub

Remote Host Setup

Log in to your remote server and create the user tunnel:

useradd -m tunnel

And add the home user tunnel's public key to the ~/.ssh/authorized_keys file as a new line. You should create the folder and the file if missing.

BE CAREFUL: you must set the permissions properly on the .ssh folder and the authorized_keys file! They must not be accessible or readable by anybody but the user.
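
As a sketch, assuming you are logged in as the tunnel user on the remote server (the key string below is a placeholder, paste your real one):

mkdir -p ~/.ssh
echo "ssh-rsa AAAA... tunnel@home" >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys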

You might want to edit the SSH daemon configuration and set the following in your server's /etc/ssh/sshd_config:

ClientAliveInterval 10
ClientAliveCountMax 2

so that clients will be dropped and their ports freed automatically if they fail to answer two consecutive keep-alive checks, sent every 10 seconds.

Note: for security reasons, you might prefer to move the external server's SSH port from the default 22 to some other value, like 222. If you do so, remember to add -p 222 to the ssh and autossh commands below.

SSH tunnel creation

Which kind of port-forwarding do you need? And how many?

SSH provides two kinds of port forwarding: remote-to-local and local-to-remote.

Remote to local: the specified port on the remote server will be opened by SSH, and any traffic through that port will be redirected, encrypted and transparently, to a specific port (and possibly IP address) on the home server. This is useful to access a home service from outside.

Local to remote: the specified port on the home server will be opened by SSH, and any traffic will be redirected, encrypted and transparently, to a port of the remote server (and possibly also a specific IP address). This can be used to remap an external service on the local server, without giving a user direct access to the remote server.
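
Just to illustrate this second kind, a hypothetical local-to-remote forwarding could look like this; it would make port 8080 of the remote server reachable at home as 127.0.0.1:8085:

ssh tunnel@external-server1 -L127.0.0.1:8085:127.0.0.1:8080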

You will mostly need the first one, of course.

The syntax is: -R<remote_ip>:<remote_port>:<local_ip>:<local_port>

  • -R: specifies a remote-to-local forwarding
  • remote_ip: can always be 127.0.0.1; to use the remote public IPs or 0.0.0.0 you need a special configuration on the remote server (see below)
  • remote_port: which port to listen on on the remote server. Must be > 1024 (unprivileged)
  • local_ip: destination IP for connections incoming from the remote. Usually 127.0.0.1, but can be whatever you want
  • local_port: destination port for the connections. It's the port of your exported service.

A good idea is to forward the home server SSH port to some custom port on the remote server, so that you can access your home via SSH:

ssh tunnel@external-server1 -R0.0.0.0:2022:127.0.0.1:22

This will allow you to SSH to port 2022 of the remote server and actually log in to your home server's SSH.
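
From anywhere outside you can then log in with something like (myuser being a placeholder for your home server user):

ssh -p 2022 myuser@external-server1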

To ensure that on the remote server the user tunnel is allowed to create port forwardings, add the following to /etc/ssh/sshd_config on the remote server:

Match User tunnel
        GatewayPorts clientspecified

and restart the SSHD service.
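
If your external server also runs OpenRC, that would be:

rc-service sshd restart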

Note: remember to log in once manually to ensure the remote host keys are accepted, or your tunnels will fail.

Note: running SSH with port forwarding as I showed you above is OK, but it will not reconnect automatically if the tunnel fails, nor will it survive a reboot. A better approach is discussed below; for now, you can test your connections with SSH.

Forwarding the home services

I will show you how to place all your home services safely tucked behind an authenticating reverse proxy over SSL/HTTPS (here), so all you need to do is actually forward the HTTP (80) and HTTPS (443) services from your home to your remote server.

There is a catch you need to be aware of: SSH cannot forward ports below 1024 (by design, for security: only root may bind privileged ports). Unfortunately, ports 80 (HTTP) and 443 (HTTPS) are both below 1024. The solution is to let SSH forward those ports only on the external server's 127.0.0.1 (non-public) interface on higher ports (8080 instead of 80 and 8443 instead of 443) and then add a second, internal forwarding on the remote server from 127.0.0.1:8080 to 0.0.0.0:80 and from 127.0.0.1:8443 to 0.0.0.0:443. (Check the note below: you might need 8443 locally as well if you are using my reverse proxy setup.)

Due to this limitation, the command to be run from the home server looks like:

ssh tunnel@external-server1 -R0.0.0.0:2022:127.0.0.1:22 -R127.0.0.1:8080:127.0.0.1:80 -R127.0.0.1:8443:127.0.0.1:443 # (or 8443, see note below!)

The SSH rule (port 2022/22) has already been discussed above.

Note: if you follow my reverse proxy guide, you will have two local nginx ports: 443, unauthenticated, for local access only, and 8443, authenticated, for remote access. In this case you want -R127.0.0.1:8443:127.0.0.1:8443 instead of the above rule.

At this point you need to add the two internal redirections, 8080→80 and 8443→443, on the remote server. This can be achieved in many ways, but the simplest is using the socat tool, which lets you create local redirections between ports as needed. Emerge it:

emerge socat

and create the redirect start file under /etc/local.d/01-redirect.start:

01-redirect.start
#!/bin/bash
# Forward the public HTTP/HTTPS ports to the loopback ports opened by the SSH tunnels
(socat TCP-LISTEN:443,fork,reuseaddr TCP:127.0.0.1:8443)&
(socat TCP-LISTEN:80,fork,reuseaddr TCP:127.0.0.1:8080)&

Make it executable and manually start it, as root.
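
For example:

chmod +x /etc/local.d/01-redirect.start
/etc/local.d/01-redirect.start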

At this point, you should be able to reach your home services from your external server.
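
A quick way to verify is from any outside machine with curl (the -k option skips certificate validation, which may be necessary until your certificates are in place):

curl -k https://external-server1/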

Autoreconnect & Autostart the client side

There is a simple and pretty solid tool, AutoSSH, that will help you pack it all up and make it more resilient, with autoreconnection capabilities. AutoSSH will start SSH for you and restart it when it fails. But there are some caveats: autossh might end up quitting in some special cases, for example if the remote server is not reachable at boot or the network connection gives up for a prolonged period of time. To ensure this does not happen, you need to set AUTOSSH_GATETIME=0.

First of all, install autossh:

emerge autossh

Now create the following start-up script in /home/tunnel/tunnels.sh:

tunnels.sh
#!/bin/bash
export AUTOSSH_GATETIME=0

# Keep one autossh instance alive towards the given server, retrying forever
function run_tunnel()
{
        while true
        do
                autossh -M 0 \
                        -nNT -q \
                        -o ServerAliveInterval=30 \
                        -o ServerAliveCountMax=2 \
                        -o ExitOnForwardFailure=yes \
                        -R0.0.0.0:5022:127.0.0.1:22 \
                        -R127.0.0.1:8080:127.0.0.1:80 \
                        -R127.0.0.1:8443:127.0.0.1:8443 \
                        "$1"
                sleep 30
        done
}

run_tunnel <<ip address of external-server1>> &
run_tunnel <<ip address of external-server2>> &
wait

Note: remember to add your external servers' SSH port if you changed it from the default 22! (-p newport)

Note: as you can see, I used port 8443 locally as well, due to the reverse proxy setup! Check here for more details. Local 443 is without authentication, while local 8443 has authentication enabled on the reverse proxy.

You need to set AUTOSSH_GATETIME as stated above, and for additional safety I have also wrapped those tunnels in a while loop; note that for this reason I am not using the -f option for SSH or AutoSSH. You can never be careful enough when it comes to losing remote access to your server, maybe while you are traveling abroad.

Also, for additional resilience, you might want to use the real IP addresses instead of external-server1&2, so that if DNS fails but the network is working, the tunnels will connect anyway.
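
To check that the tunnels are actually up, you can run the script manually as the tunnel user and then list the listening ports on the external server, for example with ss (part of iproute2):

ss -tln | grep -E '5022|8080|8443'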

To start it on boot, create /etc/local.d/99-tunnels.start:

99-tunnels.start
#!/bin/bash
start-stop-daemon --start --background --make-pidfile --pidfile /var/run/tunnel.pid --user tunnel --exec /home/tunnel/tunnels.sh

and make it executable.
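
For example:

chmod +x /etc/local.d/99-tunnels.start

Also ensure the local service is enabled at boot (on OpenRC it usually is already):

rc-update add local default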