yrutschle / sslh

Applicative Protocol Multiplexer (e.g. share SSH and HTTPS on the same port)

Home Page: https://www.rutschle.net/tech/sslh/README.html


sslh Incorrectly Identifies all Protocols as SSH

DavidBerdik opened this issue · comments

I am using the sslh Docker container (the image's build date is 2023-07-30) to expose an NGINX server over HTTP and HTTPS as well as an OpenVPN server, all on the same port. To do this, I use this command:

--foreground --listen=0.0.0.0:443 --openvpn=openvpn-tcp:1194 --http=lets-encrypt:80 --tls=lets-encrypt:443

This setup works fine, but I also want to expose an SSH server. To try doing this, I have updated the command used by the container to this:

--foreground --listen=0.0.0.0:443 --openvpn=openvpn-tcp:1194 --http=lets-encrypt:80 --tls=lets-encrypt:443 --ssh=ssh-server:22

After making this change, I verified that SSH was accessible as intended, and it was. Unfortunately, the change has the undesirable side effect of causing traffic intended for HTTP, HTTPS, or OpenVPN to be incorrectly classified as SSH traffic. Effectively, the setup becomes no different from exposing the SSH server directly.

I don't see why this would matter for my issue, but in case it does, the Docker container for sslh uses a bridge network through which I expose port 443 to the host system. The NGINX and OpenVPN servers each run in their own Docker containers and are connected to the same bridge network. sslh is configured to use the hostnames of the containers to hand off the requests to them. In the case of the SSH server, I want to make the host system's SSH server accessible. As I have said, making the SSH server accessible works fine, but comes at the cost of causing everything else to not work.

I eventually want to use a transparent configuration, but since I was unable to get one working with my setup even after #388 was merged, I have opted to leave that alone for now.

Hi, I am the author of PR #388. The updated script really only does two things: it looks for the '--transparent' flag and, if present, automatically configures the iptables / routing rules given in the example here (it is not enabled by default). From my limited understanding of the iptables rules, every outgoing connection that sslh makes to the various services is marked, and the response is intercepted and sent back to sslh so that it can be sent out from port 443. Thus, transparent mode can only work when the services are reachable via localhost.
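For reference, the single-host transparent-proxy rules in the sslh documentation are roughly of this shape (paraphrased here as a sketch, not necessarily the exact rules the script applies; 'sslh' is assumed to be the user the daemon runs as):

# Allow "localhost" to be used as a forwarding target
sysctl -w net.ipv4.conf.default.route_localnet=1
sysctl -w net.ipv4.conf.all.route_localnet=1

# Mark new connections that the sslh user opens towards the backend services
iptables -t nat -A OUTPUT -m owner --uid-owner sslh -p tcp --tcp-flags FIN,SYN,RST,ACK SYN -j CONNMARK --set-xmark 0x01/0x0f

# Restore that connection mark onto the services' reply packets
iptables -t mangle -A OUTPUT ! -o lo -p tcp -m connmark --mark 0x01/0x0f -j CONNMARK --restore-mark --mask 0x0f

# Deliver marked packets locally (via loopback) so sslh intercepts the replies
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100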

From your examples, it looks to me like transparent mode is not enabled (the '--transparent' flag is missing; if it were enabled while using the containers' hostnames, the packets would be lost and nothing would work), so the bug is likely caused by something else.

I have a similar setup to yours, which I was able to get working as follows:

  sslh:
    build: https://github.com/klementng/sslh.git
    container_name: system-sslh
    environment:
      - TZ=${TZ}
    cap_add:
      - NET_ADMIN
      - NET_RAW
    sysctls:
      - net.ipv4.conf.default.route_localnet=1
      - net.ipv4.conf.all.route_localnet=1
    volumes:
      - ./sslh:/config
    command:
      - '--transparent'
      - '--config=/config/sslh.conf'
    ports:
      - 0.0.0.0:443:443
      #- 0.0.0.0:443:443/udp

      - 0.0.0.0:80:80 #nginx
      - 0.0.0.0:8443:8443 # nginx
    extra_hosts:
      - localbox:host-gateway
    restart: unless-stopped

  nginx:
    image: lscr.io/linuxserver/nginx:latest
    container_name: system-nginx
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - ./nginx:/config
      - ./certbot:/certs
    network_mode: service:sslh # nginx shares the sslh container's network, which makes nginx reachable by sslh via localhost

My sslh config:

foreground: true;
transparent: true;
timeout: 3;

# Logging configuration
# Value: 1: stdout; 2: syslog; 3: stdout+syslog; 4: logfile; ...; 7: all

verbose-config: 7; #  print configuration at startup
verbose-config-error: 7;  # print configuration errors
verbose-connections: 7; # trace established incoming address to forward address
verbose-connections-error: 7; # connection errors
verbose-connections-try: 0; # connection attempts towards targets
verbose-fd: 0; # file descriptor activity, open/close/whatnot
verbose-packets: 0; # hexdump packets on which probing is done
verbose-probe-info: 7; # what's happening during the probe process
verbose-probe-error: 7; # failures and problems during probing
verbose-system-error: 7; # system call problem, i.e.  malloc, fork, failing
verbose-int-error: 7; # internal errors, the kind that should never happen

logfile: "/config/sslh.log";

listen:
(
    { host: "0.0.0.0"; port: "443"; },
#    { host: "0.0.0.0"; port: "443"; is_udp: true; },
);

protocols:
(
     { name: "tls"; host: "localhost"; port: "8443";},     
     { name: "openvpn"; host: "localhost"; port: "1194";},
     { name: "ssh"; host: "localhost"; port: "22"; keepalive: true;},
     { name: "anyprot"; host: "localhost"; port: "8443";},
     
#     { name: "wireguard"; host: "localbox"; port: "51820"; is_udp: true; transparent:false; fork: false},
#     { name: "anyprot"; host: "localbox"; port: "51820"; is_udp: true; transparent:false; keepalive: true},
);

on-timeout: "tls";

As I could not attach my OpenVPN container to sslh (due to other networking requirements), I used the stream module in nginx to proxy my OpenVPN and SSH connections.

My nginx config:

stream {
  server {
    listen 22;
    proxy_pass localbox:22;
  }
  
  
  server {
    listen 1194;
    proxy_pass localbox:1194;
  }
}

The above nginx workaround could probably be removed if the sslh networking is set to network_mode: host (the script will then modify the host's iptables / routing rules). I have tested that before, but I decided to keep all my networking within Docker.
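A minimal sketch of that host-networking variant (untested as written; note that ports: and per-container network sysctls don't apply with host networking, so route_localnet would need to be set on the host itself):

  sslh:
    build: https://github.com/klementng/sslh.git
    container_name: system-sslh
    network_mode: host          # use the host's network stack directly
    cap_add:
      - NET_ADMIN               # still needed so the script can edit iptables / routing
      - NET_RAW
    volumes:
      - ./sslh:/config
    command:
      - '--transparent'
      - '--config=/config/sslh.conf'
    restart: unless-stopped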

The only problem I faced is that running WireGuard through sslh caused the speed to drop all the way to <1 Mbps, so I just redirected UDP port 443 to the WireGuard port instead, since it was the only UDP service I have.
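One way to do that kind of redirect on the host, assuming WireGuard listens on UDP 51820 as in the commented-out config above (an illustrative sketch, not necessarily the exact rule I used):

# Send incoming UDP 443 straight to the local WireGuard port, bypassing sslh
iptables -t nat -A PREROUTING -p udp --dport 443 -j REDIRECT --to-ports 51820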

@klementng Thanks for responding and providing such detailed information!

From your examples, it looks to me like transparent mode is not enabled (the '--transparent' flag is missing; if it were enabled while using the containers' hostnames, the packets would be lost and nothing would work), so the bug is likely caused by something else.

I tried using the --transparent flag with the configuration you described in your PR documentation, but it wasn't working, so I reverted to the setup with the bridged network that I currently use.

My sslh config

I don't use a config file, as my setup uses command-line flags, but your setup looks very similar to what I tried and couldn't get to work.

Thus, transparent mode can only work when the services are reachable via localhost.

Are you sure about this? Assuming I'm understanding it correctly, this seems to suggest otherwise.

Are you sure about this? Assuming I'm understanding it correctly, this seems to suggest otherwise.

Yes, I'm sure. The iptables rules that are set automatically are for 'transparent proxy to one host'. To proxy to multiple hosts as shown in the link, a different set of iptables rules must be set in every container (i.e. the nginx container must have its own iptables rules to forward the traffic back to the sslh container). That makes everything very complicated and infeasible, as not all images will have iptables installed, and the rules would have to be set manually each time a container is recreated.

Also, I was able to reproduce the bug you were facing, which oddly only occurs with the CLI. The bug magically went away when I manually deleted and re-set the iptables rules. After some investigating, I have isolated the cause to the IPv6 rules (errors were printed when IPv6 was not enabled; I figured that would be a non-issue, but I guess not).

I have created a patch that seems to work. Point the build at the patch branch:

sslh:
  build: https://github.com/klementng/sslh.git#docker/transparent-patch

and rebuild:

sudo docker compose build sslh

My docker compose config:

  sslh:
    build: https://github.com/klementng/sslh.git#docker/transparent-patch
    container_name: system-sslh
    environment:
      - TZ=${TZ}
    cap_add:
      - NET_ADMIN
      - NET_RAW
    sysctls:
      - net.ipv4.conf.default.route_localnet=1
      - net.ipv4.conf.all.route_localnet=1
    volumes:
      - ./sslh:/config
    command:
      - '--transparent'
      - '--foreground'
      - '--listen=0.0.0.0:443'
      - '--openvpn=localhost:1194'
      - '--http=localhost:80'
      - '--tls=localhost:8443'
      - '--ssh=localhost:22'
      - '--verbose-probe-info=7'
      - '--verbose-probe-error=7'
#      - '--config=/config/sslh.conf'
    networks:
      default:
        ipv4_address: 172.20.0.50
    ports:
      - 0.0.0.0:443:443 #sslh
      - 0.0.0.0:80:80 #nginx http
    restart: unless-stopped

Yes, I'm sure. The iptables rules that are set automatically are for 'transparent proxy to one host'. To proxy to multiple hosts as shown in the link, a different set of iptables rules must be set in every container (i.e. the nginx container must have its own iptables rules to forward the traffic back to the sslh container). That makes everything very complicated and infeasible, as not all images will have iptables installed, and the rules would have to be set manually each time a container is recreated.

Ah okay. I understand now. Thanks for the explanation!

Also, I was able to reproduce the bug you were facing, which oddly only occurs with the CLI. The bug magically went away when I manually deleted and re-set the iptables rules. After some investigating, I have isolated the cause to the IPv6 rules (errors were printed when IPv6 was not enabled; I figured that would be a non-issue, but I guess not).

I see that your PR for the fix has been merged in. I have not had a chance to try it, but once I do, I'll let you know if I encounter any further issues with it. That said, I am still going to leave this ticket open since, as far as I know, the SSH issue that I am seeing has not been resolved.

This is happening to me as well, only in Docker.
If I use the Debian version installed through apt, things work fine.
But when I do exactly the same setup in Docker, it identifies all protocols as SSH.

How can I fix that? Thanks!

@paulhybryant which sslh version? Try building from the latest source too and test.

The latest one. This is my container configuration:

  gogs-sslh:
    profiles: ["sslh"]
    build: https://github.com/yrutschle/sslh.git
    container_name: gogs-sslh
    environment:
      - TZ=${TZ}
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - NET_BIND_SERVICE
    command: --foreground --listen=0.0.0.0:443 --http=gogs:3000 --ssh=gogs:22 --verbose-config=1 --verbose-probe-info=1 -n
    restart: unless-stopped
    ports:
      - "8443:443"
    networks:
      - v2ray

I can confirm that the forwarding works; it is just not selecting the right protocol.
When I run curl http://localhost:8443, it forwards to port 22.
When I run git clone ssh://git@localhost:8443/paulhybryant/foo, it can clone the repo fine.

From the logs, I can see:
gogs-sslh | ssh:connection from 192.168.65.1:21449 to 172.28.0.4:443 forwarded from 172.28.0.4:53038 to 172.28.0.3:22
gogs-sslh | ssh:connection from 192.168.65.1:21450 to 172.28.0.4:443 forwarded from 172.28.0.4:51674 to 172.28.0.3:22

The first one is from curl, and the second is from git clone.

It seems that sslh is redirecting everything to SSH because a timeout occurs.
When using the following options for a Podman container (bridged, with port 443 published), every HTTPS request was redirected to SSH instead, sslh started spinning at ~20% CPU as soon as the first SSH connection happened, and connecting via SSH was significantly delayed.

--foreground --listen=0.0.0.0:443 --ssh=host.containers.internal:22 --tls=host.containers.internal:8443

From the code documentation, in case of a timeout and unless specified otherwise, sslh will redirect to the first enabled protocol in its internal list, which is ssh.

Changing the above to:

--foreground --listen=0.0.0.0:443 --ssh=host.containers.internal:22 --tls=host.containers.internal:8443 --on-timeout=tls

solved the issue in my case. CPU usage went down to ~8%, SSH logins became noticeably faster, and HTTPS was detected successfully. I have not (yet) tested with more protocols.
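For anyone using a config file rather than CLI flags, the equivalent fallback is the on-timeout setting shown earlier in this thread:

on-timeout: "tls";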

Changing the above to:

--foreground --listen=0.0.0.0:443 --ssh=host.containers.internal:22 --tls=host.containers.internal:8443 --on-timeout=tls

solved the issue in my case.

This is a nice workaround to be aware of, but unfortunately, I am not sure that it is going to be useful for me since I use more than two protocols with sslh.