sickcodes / Docker-OSX

Run a macOS VM in Docker! Run near-native OSX-KVM in Docker! X11 forwarding! CI/CD for OS X security research! Docker Mac containers.

Home Page: https://hub.docker.com/r/sickcodes/docker-osx


How do I grant the Mac VM the same network access as the underlying Arch Linux host?

Buzz-Lightyear opened this issue · comments

commented

When I `docker run` using the following command:

docker run -it \
    -d \
    --name sosumi \
    --network host \
    --device /dev/kvm \
    -e NETWORKING=vmxnet3 \
    -e RAM=64 \
    -e CPU=60 \
    -e SMP=60 \
    -v "${PWD}/mac_hdd_ng.img:/image" \
    sickcodes/docker-osx:naked

I can confirm that the Arch Linux container has the same network access as my host machine. However, the Mac VM does not share that network access. How can I resolve this? I verified that IPv4 forwarding is already enabled:

$ cat /etc/sysctl.conf | grep 'net.ipv4.ip_forward'
net.ipv4.ip_forward = 1
NAME="CentOS Linux"
PRETTY_NAME="CentOS Linux 7 (Core)"
CPE_NAME="cpe:/o:centos:centos:7"

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6       845G  207G  596G  26% /export

              total        used        free      shared  buff/cache   available
Mem:           125G         12G         27G        4.0G         86G        104G
Swap:           63G          0B         63G

$ nproc
64

$ egrep -c '(svm|vmx)' /proc/cpuinfo
64

$ ls -lha /dev/kvm
crw-rw---- 1 kvm kvm 10, 232 Mar 11 18:11 /dev/kvm

root      8253  0.0  0.0 2982776 73696 ?       Ssl  Mar04   5:32 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

$ whoami
smuthu

$ grep "docker\|kvm\|virt" /etc/group
kvm:x:2147483578:tester
docker:x:2147410894:smuthu

See #72.

For bridged networking, you would need to set up a bridge inside the container (the tun/tap device plumbing under /dev, and so on).

We use user-mode networking and forward ports to the machine, because otherwise the guest has the same IP as the container and you won't be able to SSH in.
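That user-mode setup can be sketched in raw QEMU flags. This is an illustrative sketch with assumed values (memory size, disk path), not the exact command Docker-OSX generates in its boot scripts:

```shell
# Sketch: user-mode (SLIRP) networking with a port forward.
# TCP connections to port 10022 in the container are forwarded to
# port 22 (SSH) inside the macOS guest; the guest itself gets a
# NATed 10.0.2.x address rather than the container's IP.
qemu-system-x86_64 \
    -m 4096 \
    -netdev user,id=net0,hostfwd=tcp::10022-:22 \
    -device vmxnet3,netdev=net0 \
    -drive file=/image,format=qcow2
```

Docker then maps the container's 10022 out to the host (e.g. `-p 50922:10022`), which is why `ssh user@localhost -p 50922` reaches the guest.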

Also, vmxnet3 is on by default now; do a speed test inside the container.

To open more ports, see line 233 of the Dockerfile: https://github.com/sickcodes/Docker-OSX/blob/master/Dockerfile

commented

Thanks for the speedy response @sickcodes. I removed the vmxnet3 parameter. I went through #72, and specifically #162 (comment), but I can't replicate the "Inside the Container" part you're referring to.

This is how I created the container:

docker run -it \
    -d \
    --name sosumi \
    --device /dev/kvm \
    -e RAM=64 \
    -e CPU=60 \
    -e SMP=60 \
    -p 50922:10022 \
    -e ADDITIONAL_PORTS='hostfwd=tcp::10023-:80,' \
    -p 10023:10023 \
    -v "${PWD}/mac_hdd_ng.img:/image" \
    sickcodes/docker-osx:naked
$ docker ps
CONTAINER ID        IMAGE                        COMMAND                   CREATED              STATUS              PORTS                                                NAMES
349bc7250a18        sickcodes/docker-osx:naked   "/bin/bash -c '[[ \"$…"   About a minute ago   Up About a minute   0.0.0.0:10023->10023/tcp, 0.0.0.0:50922->10022/tcp   sosumi

However, the Arch Linux container doesn't have the same network access as my host. To access the Mac VM, I SSHed into localhost on port 50922:

[smuthu@lva1-app22578 sosumi]$ ssh tester@localhost -p 50922
Password:
Last login: Thu Mar 11 14:18:55 2021 from 172.17.0.1
tester@testers-iMac-Pro ~ %

And obviously I can't curl Homebrew in. I've read about bridged vs. user-mode networking, but I'm not clear which is the right fit for me. I'm completely new to containerization, so I apologize in advance if I'm all over the place.

What I need is a way to create the Docker container on my CentOS host and give the Mac VM the same network access as the host. With --network=host I can achieve that for the Arch Linux container, but not for the Mac inside QEMU. Could you walk me through the steps involved here?

Maybe it's the same DNS problem.

The problem is that we have multiple DNS servers defined on the host, and the first of them is not working. I guess QEMU just uses the first one.

Originally posted by @shifujun in #122 (comment)

Do you have internet inside any Docker container? VPN on? This sounds like a local issue.

Run `ip link; ip addr; ip n` and post some output here so we can see your networking setup.

cat /etc/resolv.conf; cat /etc/hosts

commented

I'm on a corporate network and don't have internet access from my host machine, but I do have intranet access. At this moment, I'd merely like the Mac VM to have the same network access as the host machine, which I'm checking by pinging our Artifactory server.

$ ip link; ip addr; ip n
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: em1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether b8:cb:29:9b:8b:d7 brd ff:ff:ff:ff:ff:ff
3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 1c:34:da:76:f6:a0 brd ff:ff:ff:ff:ff:ff
4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 1c:34:da:76:f6:a1 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1c:34:da:76:f6:a0 brd ff:ff:ff:ff:ff:ff
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:6f:84:d6:6c brd ff:ff:ff:ff:ff:ff
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:cb:29:9b:8b:d7 brd ff:ff:ff:ff:ff:ff
3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 1c:34:da:76:f6:a0 brd ff:ff:ff:ff:ff:ff
4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 1c:34:da:76:f6:a1 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1c:34:da:76:f6:a0 brd ff:ff:ff:ff:ff:ff
    inet 10.139.194.122/26 brd 10.139.194.127 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 2a04:f547:16:6009::c27a/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::1e34:daff:fe76:f6a0/64 scope link 
       valid_lft forever preferred_lft forever
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:6f:84:d6:6c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:6fff:fe84:d66c/64 scope link 
       valid_lft forever preferred_lft forever
10.139.194.110 dev bond0 lladdr 1c:34:da:6f:d0:fe STALE
10.139.194.77 dev bond0 lladdr 1c:34:da:72:eb:66 STALE
10.139.194.97 dev bond0 lladdr 1c:34:da:6f:c8:de STALE
10.139.194.83 dev bond0 lladdr 1c:34:da:6f:c0:b6 STALE
10.139.194.89 dev bond0 lladdr 1c:34:da:77:08:e8 STALE
10.139.194.103 dev bond0 lladdr 1c:34:da:75:7f:f2 STALE
10.139.194.109 dev bond0 lladdr 1c:34:da:72:e8:66 STALE
10.139.194.76 dev bond0 lladdr 1c:34:da:72:eb:de STALE
10.139.194.82 dev bond0 lladdr 1c:34:da:6f:c0:5e STALE
10.139.194.121 dev bond0  FAILED
10.139.194.88 dev bond0  FAILED
10.139.194.102 dev bond0 lladdr 1c:34:da:62:d0:00 STALE
169.254.169.254 dev bond0  FAILED
10.139.194.108 dev bond0 lladdr 1c:34:da:75:92:da STALE
10.139.194.75 dev bond0  FAILED
10.139.194.114 dev bond0 lladdr 1c:34:da:72:f4:8e STALE
10.139.194.81 dev bond0 lladdr 1c:34:da:76:f5:a8 STALE
10.139.194.87 dev bond0 lladdr 1c:34:da:76:f4:e8 STALE
10.139.194.93 dev bond0 lladdr 1c:34:da:76:f7:08 STALE
10.139.194.107 dev bond0 lladdr 1c:34:da:72:45:e6 STALE
10.139.194.113 dev bond0 lladdr 1c:34:da:6f:c0:06 STALE
10.139.194.86 dev bond0 lladdr 1c:34:da:62:cf:90 STALE
10.139.194.100 dev bond0 lladdr 1c:34:da:6f:c3:46 STALE
10.139.194.92 dev bond0 lladdr 1c:34:da:77:08:f8 STALE
10.139.194.106 dev bond0 lladdr 1c:34:da:75:8c:8a STALE
10.139.194.79 dev bond0 lladdr 1c:34:da:76:f5:20 STALE
10.139.194.99 dev bond0 lladdr 1c:34:da:6f:c5:b6 STALE
172.17.0.2 dev docker0 lladdr 02:42:ac:11:00:02 STALE
10.139.194.91 dev bond0 lladdr 1c:34:da:77:09:e8 STALE
10.139.194.111 dev bond0 lladdr 1c:34:da:72:31:8e STALE
10.139.194.117 dev bond0 lladdr 1c:34:da:6f:d6:ce STALE
10.139.194.84 dev bond0 lladdr 1c:34:da:72:44:56 STALE
172.17.0.3 dev docker0 lladdr 02:42:ac:11:00:03 STALE
10.139.194.90 dev bond0 lladdr 1c:34:da:76:f6:58 STALE
10.139.194.65 dev bond0 lladdr 00:e0:ec:e4:6c:39 REACHABLE
fe80::1 dev bond0 lladdr 00:e0:ec:e4:6c:39 router REACHABLE
fe80::2e0:ecff:fee4:6c39 dev bond0 lladdr 00:e0:ec:e4:6c:39 router STALE

You might not get far without internet access unless you use the auto image.

Honestly, without knowing much about your networking setup, you probably just need to route docker0 through your intranet interface, using iptables or ip.
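A minimal sketch of that idea, assuming `bond0` is the intranet-facing interface and Docker's default 172.17.0.0/16 bridge subnet (both taken from the `ip addr` output above; adapt to your setup, and note these commands need root):

```shell
# Enable forwarding, allow traffic between the Docker bridge and the
# intranet interface, and NAT container traffic out through bond0.
# Interface names and the subnet are assumptions, not verified config.
sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -i docker0 -o bond0 -j ACCEPT
iptables -A FORWARD -i bond0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 -o bond0 -j MASQUERADE
```

Docker normally installs similar MASQUERADE rules itself, so check `iptables -t nat -L -n` first; a corporate firewall or VPN policy can still override this.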

As much as I'd like to help, I'd search for: "how to route Docker through a VPN/intranet/bridge".

commented

Absolutely, thanks for the pointers @sickcodes!

commented

As it turns out, I had network access from the Mac VM all along when I used --net=host. I was thrown off because ping reported that the destination port was unreachable (ping often fails under QEMU's user-mode networking even when TCP connectivity works). However, I was able to curl the resources I needed over the intranet from the same URL:

tester@testers-iMac-Pro proj % ping <url>
PING <url> (10.250.25.99): 56 data bytes
92 bytes from <url> (10.250.25.99): Destination Port Unreachable
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
 4  5  00 5400 16b4   0 0000  40  01 338a 10.0.2.15  10.250.25.99 
curl <url>/resource.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2295k  100 2295k    0     0  2157k      0  0:00:01  0:00:01 --:--:-- 2155k