siomiz / SoftEtherVPN

A Docker Automated Build Repository for SoftEther VPN

Home Page: https://hub.docker.com/r/siomiz/softethervpn/

Cannot connect to containers in swarm overlay network

foverzar opened this issue

I am trying to create a container that will serve as a gateway into a Docker overlay network. Unfortunately there seems to be some issue with routing/forwarding, as the container always responds with "Destination host unreachable" when I try to access anything on the overlay network.

I used the default compose configuration (from the repo) and added an additional (overlay) network to my VPN container. The container is started with plain docker-compose because swarm stacks don't support cap_add.

Here are the contents of the docker-compose.yml:

version: "3.5"

services:
  vpn:
    image: siomiz/softethervpn
    restart: unless-stopped
    volumes:
      - ./vpn_server.config:/usr/vpnserver/vpn_server.config
    cap_add:
      - NET_ADMIN
    privileged: true
    ports:
      - 500:500/udp
      - 4500:4500/udp
      - 1701:1701/tcp
      - 1194:1194/udp
      - 5555:5555/tcp

networks:
  default:
    name: project_network
    external: true

project_network is my overlay net.

DNS requests work properly, as does reverse DNS -- I can resolve containers on the overlay network by name. I can also access internet resources without problems, and everything seems to work fine except for accessing IPs on the overlay subnet. If I ping a host on the overlay net from within the container (via docker exec), everything works and the resources are accessible.
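For reference, those checks boil down to something like this (rough sketch; the container name vpn_1 and the target names/IPs are just placeholders from my setup):

# resolve and ping an overlay service by name from inside the VPN container (works)
docker exec vpn_1 ping -c 3 core_mariadb

# ping the overlay IP directly from inside the VPN container (also works)
docker exec vpn_1 ping -c 3 10.0.3.93

The failure only shows up when a VPN client tries to reach 10.0.3.0/24 through the container.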

The container starts with two network interfaces; eth0 is the overlay:

358: eth0@if359: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:00:03:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.153/24 brd 10.0.3.255 scope global eth0
       valid_lft forever preferred_lft forever
360: eth1@if361: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:1e brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 172.18.0.30/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever

And the route command prints the following:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.18.0.1      0.0.0.0         UG    0      0        0 eth1
10.0.3.0        *               255.255.255.0   U     0      0        0 eth0
172.18.0.0      *               255.255.0.0     U     0      0        0 eth1

Here is the tracert output to one of the containers.

C:\Users\foverzar>tracert 10.0.3.93
Tracing route to core_mariadb.1.ddfqttfmoqqezq50xtosy2lum.project_network [10.0.3.93] over a maximum of 30 hops:
1    11 ms    11 ms    11 ms  192.168.30.1
2  172.18.0.30  reports: Destination host unreachable. 

I assume it has something to do with forwarding from the 172.18.0.0 subnet to the 10.0.3.0 subnet, but I have no idea how to proceed further. Any tips?
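For what it's worth, the generic Linux way to push traffic out through the overlay interface would be something like the following inside the container. This is not taken from the image's own scripts, just a plain forwarding/NAT sketch of what I suspect might be missing (it needs NET_ADMIN or privileged):

# enable routing between the container's interfaces
sysctl -w net.ipv4.ip_forward=1

# masquerade whatever leaves via the overlay interface (eth0) so overlay
# hosts can reply without knowing about the VPN client subnet
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

But I don't know how this interacts with the way the VPN server hands packets to the kernel, so take it as a guess.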

Hmm. It works in my environment (Docker 19.03.2 on Ubuntu 19.10); transcript below.
Weirdly, the image works without privileged: true or cap_add... maybe because it's using user-mode NAT now? (I don't know the full consequences other than a lot of iptables errors in the log.)
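If you want to check whether user-mode NAT (SecureNAT) is what's actually active, vpncmd can report it. Rough sketch only; the hub name DEFAULT, vpncmd being on the PATH inside the container, and the absence of a server admin password are all assumptions about your setup:

# query the SecureNAT status of the hub from inside the running container
# (add /PASSWORD:... if a server admin password is set)
docker exec -it <vpn container> vpncmd localhost /SERVER /HUB:DEFAULT /CMD SecureNatStatusGet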

 docker network create --driver overlay --attachable project_network
pe6ccskr00p2lqacuubrc7x09

 cat docker-compose.yml
version: "3.5"

services:
  vpn:
    image: siomiz/softethervpn
    environment:
      - USERNAME=test
      - PASSWORD=test
    ports:
      - 500:500/udp
      - 4500:4500/udp
      - 1701:1701/tcp
      - 1194:1194/udp
      - 5555:5555/tcp
  app:
    image: nginx

networks:
  default:
    name: project_network
    external: true

 docker stack deploy -c docker-compose.yml test
Creating service test_vpn
Creating service test_app

 docker service inspect test_app -f='{{ .Endpoint.VirtualIPs }}'
[{pe6ccskr00p2lqacuubrc7x09 10.0.4.5/24}]
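That VIP is what the service name resolves to; the individual task IPs sit behind the tasks.<service> name. A quick way to see both from the VPN container (the container name is a placeholder, and getent is assumed to be available in the image):

# service name -> VIP
docker exec <vpn container> getent hosts test_app

# tasks.<service> -> individual task IPs
docker exec <vpn container> getent hosts tasks.test_app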

Then, on a separate machine connected to this host via L2TP/IPsec, both curl http://10.0.4.5 and curl http://test_app worked.
I noticed that ping worked but traceroute didn't.

What is your Docker host environment?

I had no luck trying to run it without cap_add, as my clients immediately disconnected upon receiving an IP. privileged: true was there to silence a permissions warning; I did not notice any other effects from it.

I have tried to create a minimal reproducible example of what I'm trying to build and ended up with the following config, which conceptually replicates what I intended, but ACTUALLY WORKS.

docker-compose.yml:

version: "3.5"

services:
  vpn:
    image: siomiz/softethervpn
    restart: unless-stopped
    environment:
      - USERNAME=test
      - PASSWORD=test
    cap_add:
      - NET_ADMIN
    privileged: true
    ports:
      - 500:500/udp
      - 4500:4500/udp
      - 1701:1701/tcp
      - 1194:1194/udp
      - 5555:5555/tcp

networks:
  default:
    name: project_network
    external: true

docker-stack.yml:

version: "3.5"

services:
  app:
    image: nginx

networks:
  default:
    name: project_network
    external: true

run script:

docker network create --driver overlay --attachable project_network

docker-compose up -d

docker stack deploy -c docker-stack.yml test

...it works, but only on my local Docker Desktop. I've tried exactly the same config on my server, but there it failed with the same destination host unreachable responses.
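On the local setup these are the kinds of checks I use to confirm the wiring (the VPN container name is whatever compose generated, so treat it as a placeholder):

# the overlay network must be attachable for the compose-managed container
docker network inspect project_network -f '{{ .Attachable }}'

# from the VPN container, the swarm service should be reachable by name
docker exec <vpn container> ping -c 3 test_app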

This is weird. I've tried updating and restarting, but to no effect. Here is the docker info output of the misbehaving host:

Containers: 48
 Running: 31
 Paused: 0
 Stopped: 17
Images: 78
Server Version: 18.09.7
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
 NodeID: l3ay2g4kpsp0ykz1q7wtup3qn
 Is Manager: true
 ClusterID: ifyvh2359ln5dd3278zek18ie
 Managers: 1
 Nodes: 2
 Default Address Pool: 10.0.0.0/8
 SubnetSize: 24
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 10
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: XX.XX.XX.XX
 Manager Addresses:
  XX.XX.XX.XX:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:
runc version: N/A
init version: v0.18.0 (expected: fec3683b971d9c3ef73f284f176672c44b448662)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-66-generic
Operating System: Ubuntu 18.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.66GiB
Name: XXXXXXXXXX
ID: XY6M:PLF6:D76N:NBGP:PDTI:5NCD:PCL4:IVCC:22C5:WKBB:NOMH:JQ2Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support
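One difference I can think of is that the working setup is a single-node Docker Desktop while this host is part of a two-node swarm, so cross-node overlay traffic (swarm needs 2377/tcp, 7946/tcp+udp and 4789/udp open between the nodes) could be a factor. A throwaway container on the same network should show whether the overlay itself is healthy independently of the VPN container; the IP below is just an example task IP from the other node:

# from a disposable container attached to the same overlay network,
# ping a task that is scheduled on the other node
docker run --rm --network project_network alpine ping -c 3 10.0.3.93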