
Remote Access via SSH tunneling

This part is shared between the simple and advanced approaches, if you choose SSH tunneling for remote access.

I chose the external server + SSH tunneling approach because I want to be able to access my services even if my devices are broken or unavailable at times. Is it secure? Yes: security is provided by the reverse-proxy + SSO, and privacy is guaranteed by using HTTPS on all services plus encrypted SSH tunnels.

Architecture

So, here are the assumptions:

  • You choose the “external server” + “SSH tunnel” solution
  • You have two external servers: external and failsafe.
  • You have two ISPs at home (optional)

I will try with some ASCII art to illustrate:

        ┌───────────┐         
        │Home       │         
        │ ▲ Server ▲│         
        └─┼────────┼┘         
          │        │          
          │        │          
          │        │          
   ┌──────┼─┐    ┌─┼──────┐   
   │ISP 1 │ │    │ │ISP 2 │   
   └──────┼─┘    └─┼──────┘   
          │        │          
          │        │          
          │        │          
┌─────────┴┐      ┌┴─────────┐
│External  │      │External  │
│ Server 1 │      │ Server 2 │
│          │      │          │
│┌────────┐│      │┌────────┐│
││ SSH n.1││      ││ SSH n.2││
│└────────┘│      │└────────┘│
└──────────┘      └──────────┘

The home-server will port-forward to external.mydomain.com via ISP 1 and, at the same time, to failsafe.mydomain.com via ISP 2.

If you have only one ISP or one external server, just ignore the other. Of course, you can mix and match as you like: two ISPs with one external server, or one ISP with two external servers. Each component you drop makes the setup a bit less redundant, of course.

Prerequisites

I have already stated this above, but I will state it again here since it's very important. For the SSH tunnel approach to be safe, you must:

  • Use HTTPS on all your services. And I mean all.
  • Use a reverse-proxy in front of all your services.
  • Use strong authentication (proxy-auth, Authelia or a similar service)

All three points are covered by using the NGINX reverse proxy + SSO as I describe in these pages.

One last time: HTTPS is mandatory.

The Basic Idea

You will create a dedicated user, called tunnel, on both the local and the remote servers, and allow the local tunnel user to log in to the remote servers via public/private key authentication. This is always possible because the remote server is, by definition, reachable from the home server over the internet.

The user tunnel will then activate a number of port-forwards from the remote server to the local server, directly exposing only your most-secured home services:

  • Home SSH port: so you can access your home server from outside. This will be mapped to something different than 22, both for security reasons and to avoid clashing with the external server's own port 22.
  • Home Reverse Proxy ports (80 & 443): so you can access all your secured services from outside.

Please note that you must expose port 80 of your reverse-proxy if you plan to use Let's Encrypt Certbot tool, or it will not work.

What you will need:

  • Create a tunnel user on the home server
  • Create a tunnel user on the remote server(s)
  • Add the home tunnel user's public key to the remote server tunnel user's authorized_keys
  • Log in from home to remote as user tunnel with the appropriate options to create all the needed tunnels.

Everything will be properly encrypted and safe, and automated.

Home Server Setup

Create a new user called tunnel, but make sure its home folder is on the root partition: if, by any chance, your data partitions fail to mount, you would otherwise lose the tunnels as well.

useradd -m tunnel

Create the SSH identity for the user tunnel:

su - tunnel
ssh-keygen

I suggest you do not set a password for this key. It is slightly less secure, but you can always revoke the key if it gets leaked. If you set a password, you will need to provide it in the connection script, which kind of makes it a moot point anyway.

Your SSH public key will be stored under ~/.ssh/id_rsa.pub (this might change in newer SSH versions where RSA is phased out in favor of better algorithms, YMMV).
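If your OpenSSH is recent enough, you can sidestep the RSA question entirely by generating an Ed25519 key explicitly. A minimal sketch, to be run as the tunnel user (the guard makes it safe to re-run; the comment string is just a label):

```shell
# As the tunnel user on the home server: generate a passwordless Ed25519 key.
# The key pair lands in ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_ed25519 ] || \
    ssh-keygen -t ed25519 -N "" -C "tunnel@home" -f ~/.ssh/id_ed25519 -q
```

With this, the public key to copy in the next step is ~/.ssh/id_ed25519.pub instead of ~/.ssh/id_rsa.pub.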

Remote Server Setup

Login to your remote server, create the user tunnel:

useradd -m tunnel

Copy/paste the home server tunnel user's public key into the ~/.ssh/authorized_keys file of the remote server's tunnel user, as a new line. Create the folder and the file if missing.

BE CAREFUL: you must set permissions properly for the .ssh folder and the authorized_keys file! They must not be accessible or readable by anybody but the user.
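Concretely, as the tunnel user on the remote server, the permissions can be set like this (a sketch assuming the standard ~/.ssh layout):

```shell
# Create the folder and file if missing, then lock them down:
# SSH refuses keys whose files are readable by other users.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh                   # drwx------ for the folder
chmod 600 ~/.ssh/authorized_keys   # -rw------- for the key file
```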

You might want to edit the SSH daemon configuration on your remote server and set in /etc/ssh/sshd_config:

ClientAliveInterval 10
ClientAliveCountMax 2

so that clients will be dropped and ports freed automatically if they stop responding within 10 seconds, for a maximum of two consecutive times.

To ensure that on the remote server the user tunnel is allowed to create port-forwards, also add the following to /etc/ssh/sshd_config:

Match User tunnel
        GatewayPorts clientspecified

and restart the SSHD service.

Since you will need to redirect low ports (<1024), which is not possible with plain SSH by design, you will also need the socat tool, which lets you create local redirections between ports as needed. Emerge it:

emerge socat

Note: for security reasons, you might prefer to move the external server's SSH port from the default 22 to some other value, like 222. If you do so, remember to add -p 222 to the ssh and autossh commands below.
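If you do move the port, it is a one-line change in the external server's /etc/ssh/sshd_config (222 here is just an example value):

```
Port 222
```

Remember to restart the SSHD service afterwards.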

SSH tunnel creation

Which kind of port-forwarding do you need? And how many?

SSH provides two kinds of port forwarding: remote-to-local and local-to-remote.

Remote to local: the specified port on the remote server will be opened by SSH, and any traffic through that port will be redirected, encrypted and transparently, to a specific port (and possibly IP address) on the home server. This is useful to access a home service from outside.

Local to remote: the specified port on the home server will be opened by SSH, and any traffic will be redirected, encrypted and transparently, to a port of the remote server (and possibly also a specific IP address). This can be used to remap an external service on the local server, without giving a user direct access to the remote server.

You will mostly need the first one, of course.

The syntax is: -R<remote_ip>:<remote_port>:<local_ip>:<local_port>

  • -R: specifies a remote-to-local forwarding
  • remote_ip: can always be 127.0.0.1; to use the remote public IPs or 0.0.0.0 you need special configuration on the remote server (the GatewayPorts setting above)
  • remote_port: which port to listen on on the remote server. Must be > 1024 (unprivileged)
  • local_ip: destination IP for connections incoming from remote. Usually 127.0.0.1, but it can be whatever you want
  • local_port: destination port for connections. It's your exported service port.

A good idea is to forward the home server SSH port to some custom port on the remote server, so that you can access your home via SSH. From your home server, as the tunnel user, run:

ssh tunnel@external-server1 -R0.0.0.0:2022:127.0.0.1:22

This will allow you to SSH to port 2022 of the remote server and actually log in to your home server's SSH. Try it now: from the home server, as any user, run in a new terminal:

ssh myuser@external-server1 -p 2022

and you will be logged in as user myuser… on your home server!

Note: remember to log in once manually as user tunnel, to ensure the remote host keys are accepted, or your tunnels will fail.

Note: running SSH with port forwarding as shown above is fine, but it will not reconnect automatically if a tunnel fails, nor will it survive a reboot. A better approach is discussed below; for now, you can test your connections with plain SSH.

Forwarding the home services

So, assuming the prerequisites are satisfied and you have all your home services behind a reverse-proxy protected with HTTPS and real Let's Encrypt certificates, all you need to do is actually forward the HTTP (80) and HTTPS (443) services from your home to your remote server. Yes, port 80 must be forwarded too, for Let's Encrypt automatic certificate renewal to work.

There is a catch you need to be aware of: SSH is not capable (by design, for security reasons) of forwarding ports below 1024 for unprivileged users. Unfortunately, ports 80 (HTTP) and 443 (HTTPS) are both below 1024. The solution is to let SSH forward those ports only on the external server's 127.0.0.1 (non-public) interface on higher ports (8080 instead of 80 and 8443 instead of 443), and then add a second, internal forwarding on the remote server from 127.0.0.1:8080 to 0.0.0.0:80 and from 127.0.0.1:8443 to 0.0.0.0:443. (Check the note below: you might need port 8443 on the home side as well if using my reverse proxy setup.)

Due to this limitation, the command to run from the home server looks like:

ssh tunnel@external-server1 -R0.0.0.0:2022:127.0.0.1:22 -R127.0.0.1:8080:127.0.0.1:80 -R127.0.0.1:8443:127.0.0.1:8443 

The SSH rule (port 2022/22) has already been discussed above.

Note: my reverse proxy guide will use port 8443 for external connections to differentiate from internal connections. This is why the above rule forwards to home server port 8443 instead of port 443.

At this point you need to add the two internal redirections, 80 → 127.0.0.1:8080 and 443 → 127.0.0.1:8443, on the remote server using socat. Create the redirect start file under /etc/local.d/01-redirect.start:

01-redirect.start
#!/bin/bash
(socat TCP-LISTEN:443,fork,reuseaddr TCP:127.0.0.1:8443)&
(socat TCP-LISTEN:80,fork,reuseaddr TCP:127.0.0.1:8080)&

Make it executable and manually start it as root for this first time (it will autostart on next reboot):

chmod +x /etc/local.d/01-redirect.start
/etc/local.d/01-redirect.start

At this point, you should be able to reach your home services from your external server.
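To quickly check that a forwarded port is actually reachable without firing up a browser, you can use a small bash helper based on the /dev/tcp pseudo-device (a hypothetical convenience of mine, not part of the setup above; check_port is my own name):

```shell
#!/bin/bash
# check_port HOST PORT — returns 0 if something accepts TCP connections
# at HOST:PORT, non-zero otherwise. Uses bash's /dev/tcp pseudo-device.
check_port() {
    local host=$1 port=$2
    # Opening fd 3 on /dev/tcp/<host>/<port> attempts a real TCP connection;
    # the subshell closes it again immediately.
    (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

# Example: verify the forwarded home SSH port on the external server
# check_port external.mydomain.com 2022 && echo "tunnel is up"
```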

Autoreconnect & Autostart the home server side

There is a simple and pretty solid tool, AutoSSH, that will help you pack it all up and make it more resilient with autoreconnection capabilities. AutoSSH will start SSH for you and restart it when it fails. There are some caveats, though: autossh itself will exit in some special cases, for example if the remote server is not reachable at boot or the network connection is down for a prolonged period of time. To ensure this does not happen, you need to set AUTOSSH_GATETIME=0.

First of all, install autossh:

emerge autossh

Now create the following start-up script in /home/tunnel/tunnels.sh:

tunnels.sh
#!/bin/bash 
export AUTOSSH_GATETIME=0

function run_tunnel()
{       
        while true
        do
                autossh -M 0 \
                        -nNT -q \
                        -o ServerAliveInterval=30 \
                        -o ServerAliveCountMax=2 \
                        -o ExitOnForwardFailure=yes \
                        -R0.0.0.0:5022:127.0.0.1:22 \
                        -R127.0.0.1:8080:127.0.0.1:80 \
                        -R127.0.0.1:8443:127.0.0.1:8443 \
                        -p "$2" \
                        "$1"
                sleep 30
        done
}

run_tunnel <<ip address of external-server1>> <<SSH port of external-server1>> &
run_tunnel <<ip address of external-server2>> <<SSH port of external-server2>> &
wait

For additional resilience I have also wrapped those tunnels in a while loop; note that for this reason I am not using the -f option for SSH or AutoSSH. You can never be careful enough when it comes to losing remote access to your server, especially while you are traveling abroad.

Also, for additional resilience, you want to use the real IP addresses instead of external-server1 & 2, so that if DNS fails but the network is working, the tunnels will connect anyway.
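For example, the last lines of tunnels.sh might then become (203.0.113.x are placeholder addresses from the documentation range; substitute your servers' real IPs and SSH ports):

```
run_tunnel 203.0.113.10 222 &
run_tunnel 203.0.113.20 222 &
wait
```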

To start it on boot, drop this init script to /etc/init.d/tunnel:

tunnel
#!/sbin/openrc-run
# Copyright 2024 Willy Garidol
# Distributed under the terms of the GNU General Public License v3

depend() {
        need localmount net
}

description="SSH tunnel manager"
pidfile="/run/tunnel.pid"
command_background=true
command="/home/tunnel/tunnels.sh"
command_args=""
command_user="tunnel:tunnel"
start_stop_daemon_args="--stdout /var/log/tunnel/tunnel.log --stderr /var/log/tunnel/tunnel.log"

start_pre() {
        checkpath -d -m 0755 -o tunnel:tunnel "/var/log/tunnel"
}

and make it executable, set it on boot and start it now:

chmod +x /etc/init.d/tunnel
rc-update add tunnel default
/etc/init.d/tunnel start

You can improve and generalize these scripts as much as you like. My suggestion: keep it simple. This is core infrastructure, and any complexity might introduce errors and bugs where they are not needed.
