thecodeteam / vagrant

All {code} by Dell EMC related Vagrant projects

ScaleIO Docker & REX-Ray Service doesn't persist after reboot

kacole2 opened this issue

After rebooting the mdm1 and mdm2 nodes, the Docker daemon is no longer running.

In addition, REX-Ray cannot run sudo rexray volume ls after the node has rebooted.
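
For anyone reproducing this, a quick check after the reboot is to query both units directly; a minimal sketch, assuming the boxes use systemd (the "Redirecting to /bin/systemctl" output below suggests they do) and that REX-Ray was installed as a service unit named rexray:

[vagrant@mdm1 ~]$ sudo systemctl status docker    # shows whether the Docker daemon survived the reboot
[vagrant@mdm1 ~]$ sudo systemctl status rexray    # same check for the REX-Ray service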

Hi @kacole2, after testing, I see similar behavior.

Restarted tb, then logged in:

$ vagrant ssh tb
Last login: Thu Jul 21 00:20:17 2016 from gateway
[vagrant@tb ~]$ sudo docker run -ti --volume-driver=rexray -v busybox:/data busybox
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.
[vagrant@tb ~]$ sudo service docker restart
Redirecting to /bin/systemctl restart  docker.service
[vagrant@tb ~]$ sudo docker run -ti --volume-driver=rexray -v busybox:/data busybox
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-33911113-b2709b503a704d1e71d6a2964ca6ba613b7427b75b090d6d910150e4ac657046
                         10.0G     33.8M     10.0G   0% /
tmpfs                   496.8M         0    496.8M   0% /dev
tmpfs                   496.8M         0    496.8M   0% /sys/fs/cgroup
/dev/scinib              15.6G     44.0M     14.8G   0% /data

Restarting the Docker service on tb makes it work again.
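
Restarting only fixes the current session; for the services to survive the next reboot, they would need to be enabled at boot. A sketch, again assuming systemd and an installed rexray unit:

[vagrant@tb ~]$ sudo systemctl enable docker      # start the Docker daemon at boot
[vagrant@tb ~]$ sudo systemctl enable rexray      # start the REX-Ray service at boot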

When you're rebooting mdm1 and mdm2, make sure that the libStorage configuration still points to the right ScaleIO Gateway host.
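
One way to verify is to inspect the REX-Ray configuration on the rebooted node. A sketch, assuming the default config path /etc/rexray/config.yml and the libStorage ScaleIO driver; the gateway address and credential values here are purely illustrative, not taken from this environment:

[vagrant@mdm1 ~]$ cat /etc/rexray/config.yml
libstorage:
  service: scaleio
scaleio:
  endpoint: https://192.168.50.12/api    # must point at the current ScaleIO Gateway
  insecure: true
  userName: admin
  password: ...                          # remaining ScaleIO driver fields omitted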

After restarting mdm1:

[vagrant@mdm1 ~]$ sudo docker run -ti --volume-driver=rexray -v busybox:/data busybox
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.
[vagrant@mdm1 ~]$ sudo service docker restart
Redirecting to /bin/systemctl restart  docker.service
[vagrant@mdm1 ~]$ sudo docker run -ti --volume-driver=rexray -v busybox:/data busybox
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-50680504-2471ca0dafc348a309aba13b3bcb44e7a38fa129bafc09f9172b277764afae82
                         10.0G     33.8M     10.0G   0% /
tmpfs                     1.4G         0      1.4G   0% /dev
tmpfs                     1.4G         0      1.4G   0% /sys/fs/cgroup
/dev/scinib              15.6G     44.0M     14.8G   0% /data