deis / logger

In-memory log buffer used by Deis Workflow.

Home Page: https://deis.com

ErrImagePull in pod

pfeodrippe opened this issue

I've been dealing with deis-logger. While it wasn't working for my Rails app (another issue), the pod itself was being created without problems; now it's giving me:

$ kd get pods -w
NAME                          READY     STATUS              RESTARTS   AGE
deis-builder-gomgx            1/1       Running             1          6m
deis-controller-bg6ub         1/1       Running             4          6m
deis-database-78y0a           1/1       Running             0          6m
deis-logger-fluentd-iizlc     0/1       ContainerCreating   0          8s
deis-logger-fluentd-jsc27     0/1       ContainerCreating   0          8s
deis-logger-yni8m             0/1       ErrImagePull        0          10s
deis-minio-1x9de              1/1       Running             0          6m
deis-registry-pttcw           1/1       Running             1          6m
deis-router-1tvf7             1/1       Running             0          6m
deis-workflow-manager-8nmr8   1/1       Running             0          6m
NAME                        READY     STATUS              RESTARTS   AGE
deis-logger-fluentd-jsc27   0/1       ContainerCreating   0          8s
deis-logger-yni8m           0/1       ImagePullBackOff    0          17s
deis-logger-fluentd-iizlc   1/1       Running             0          27s
deis-logger-yni8m           0/1       ErrImagePull        0          42s
deis-logger-fluentd-jsc27   1/1       Running             0          44s
deis-logger-yni8m           0/1       ImagePullBackOff    0          55s
deis-logger-yni8m           0/1       ErrImagePull        0          1m
deis-logger-yni8m           0/1       ImagePullBackOff    0          1m

Was the image removed from the deis registry?

Do a kubectl describe pod <logger pod>
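For example, with the logger pod name from the listing above (the Deis components live in the deis namespace):

$ kubectl describe pod deis-logger-yni8m --namespace=deis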

It appears that the tag git-bc95ebd doesn't exist:

Name:       deis-logger-smvws
Namespace:  deis
Node:       ip-172-20-0-180.ec2.internal/172.20.0.180
Start Time: Tue, 12 Apr 2016 09:45:24 -0300
Labels:     app=deis-logger
Status:     Pending
IP:     10.244.1.10
Controllers:    ReplicationController/deis-logger
Containers:
  deis-logger:
    Container ID:
    Image:      quay.io/deis/logger:git-bc95ebd
    Image ID:
    Ports:      8088/TCP, 1514/UDP
    QoS Tier:
      memory:   BestEffort
      cpu:      BestEffort
    State:      Waiting
      Reason:       ErrImagePull
    Ready:      False
    Restart Count:  0
    Liveness:       http-get http://:8088/healthz delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8088/healthz delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment Variables:
Conditions:
  Type      Status
  Ready     False
Volumes:
  deis-logger-token-9qnkt:
    Type:   Secret (a volume populated by a Secret)
    SecretName: deis-logger-token-9qnkt
Events:
  FirstSeen LastSeen    Count   From                    SubobjectPath           Type        Reason      Message
  --------- --------    -----   ----                    -------------           --------    ------      -------
  2m        2m      1   {default-scheduler }                Normal      Scheduled   Successfully assigned deis-logger-smvws to ip-172-20-0-180.ec2.internal
  2m        27s     4   {kubelet ip-172-20-0-180.ec2.internal}  spec.containers{deis-logger}    Normal      Pulling     pulling image "quay.io/deis/logger:git-bc95ebd"
  2m        27s     4   {kubelet ip-172-20-0-180.ec2.internal}  spec.containers{deis-logger}    Warning     Failed      Failed to pull image "quay.io/deis/logger:git-bc95ebd": image pull failed for quay.io/deis/logger:git-bc95ebd, this may be because there are no credentials on this request.  details: (Tag git-bc95ebd not found in repository quay.io/deis/logger)
  2m        27s     4   {kubelet ip-172-20-0-180.ec2.internal}      Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "deis-logger" with ErrImagePull: "image pull failed for quay.io/deis/logger:git-bc95ebd, this may be because there are no credentials on this request.  details: (Tag git-bc95ebd not found in repository quay.io/deis/logger)"

  2m    15s 5   {kubelet ip-172-20-0-180.ec2.internal}  spec.containers{deis-logger}    Normal  BackOff     Back-off pulling image "quay.io/deis/logger:git-bc95ebd"
  2m    15s 5   {kubelet ip-172-20-0-180.ec2.internal}              Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "deis-logger" with ImagePullBackOff: "Back-off pulling image \"quay.io/deis/logger:git-bc95ebd\""

Images with git-sha tags normally live in the deisci org, not the deis org. Is this the image you tried to build?

That image exists here: quay.io/deisci/logger:git-bc95ebd. Also, if you want to test the latest changes I just pushed, you can use quay.io/deisci/logger:git-ca997b6 or quay.io/deisci/logger:canary. The canary image is mutable and will change every time we update master.

To use these images you can edit the chart in ~/.helm/workspace/charts.

I'm not sure whether you are using the deis-logger chart or the deis-logger-beta1 chart, but editing deis-logger-rc.yaml and replacing the image with one of these should help.
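If you'd rather not edit the chart by hand, here is a rough kubectl-side sketch with the same effect (not the chart workflow itself, just swapping the image on the running ReplicationController; the pod name comes from the listing above):

$ kubectl --namespace=deis get rc deis-logger -o yaml | \
    sed 's|quay.io/deis/logger:git-bc95ebd|quay.io/deisci/logger:canary|' | \
    kubectl replace -f -
$ # updating the RC doesn't restart existing pods, so delete the failing one:
$ kubectl --namespace=deis delete pod deis-logger-yni8m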

I'm using deis-logger; should I use deis-logger-beta1? I'll test the new image with deis-logger and report back soon.

Still nothing... maybe my app isn't starting properly. How could I check this?

UPDATED

What does kubectl logs <pod name> --namespace=<pod namespace> say? You can find your pod name by running kubectl get pods -o wide --all-namespaces and looking for the pod that matches the name of your deployed app.

Another thing: do you see any values when running kubectl exec <deis-controller-pod> --namespace=deis env | grep LOGGER?

With kubectl logs I couldn't get anything, but using

$ kubectl --namespace=vulcan-ziggurat describe pod vulcan-ziggurat-v3-cmd-7z152
...
3m  3m  1   {kubelet ip-172-20-0-234.ec2.internal}  spec.containers{vulcan-ziggurat-cmd}    Normal  Created     Created container with docker id 879a0aa6f91d
  3m    3m  1   {kubelet ip-172-20-0-234.ec2.internal}  spec.containers{vulcan-ziggurat-cmd}    Normal  Started     Started container with docker id 879a0aa6f91d
  3m    2m  7   {kubelet ip-172-20-0-234.ec2.internal}              Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "vulcan-ziggurat-cmd" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=vulcan-ziggurat-cmd pod=vulcan-ziggurat-v3-cmd-7z152_vulcan-ziggurat(dacdf2c7-00e4-11e6-b862-0a3c6899a115)"

  6m    2m  6   {kubelet ip-172-20-0-234.ec2.internal}  spec.containers{vulcan-ziggurat-cmd}    Normal  Pulled      Successfully pulled image "quay.io/deis/slugrunner:v2.0.0-beta1"
  6m    2m  6   {kubelet ip-172-20-0-234.ec2.internal}  spec.containers{vulcan-ziggurat-cmd}    Normal  Pulling     pulling image "quay.io/deis/slugrunner:v2.0.0-beta1"
  2m    2m  1   {kubelet ip-172-20-0-234.ec2.internal}  spec.containers{vulcan-ziggurat-cmd}    Normal  Created     Created container with docker id 11136744b83a
  2m    2m  1   {kubelet ip-172-20-0-234.ec2.internal}  spec.containers{vulcan-ziggurat-cmd}    Normal  Started     Started container with docker id 11136744b83a
  2m    6s  10  {kubelet ip-172-20-0-234.ec2.internal}              Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "vulcan-ziggurat-cmd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=vulcan-ziggurat-cmd pod=vulcan-ziggurat-v3-cmd-7z152_vulcan-ziggurat(dacdf2c7-00e4-11e6-b862-0a3c6899a115)"

  5m    6s  24  {kubelet ip-172-20-0-234.ec2.internal}  spec.containers{vulcan-ziggurat-cmd}    Warning BackOff Back-off restarting failed docker container

...

and

$ kd exec deis-controller-eqvmw env | grep LOGGER
DEIS_LOGGER_SERVICE_PORT_TRANSPORT=514
DEIS_LOGGER_PORT_80_TCP=tcp://10.0.130.236:80
DEIS_LOGGER_PORT_80_TCP_PROTO=tcp
DEIS_LOGGER_PORT_80_TCP_PORT=80
DEIS_LOGGER_SERVICE_HOST=10.0.130.236
DEIS_LOGGER_PORT_514_UDP_PORT=514
DEIS_LOGGER_SERVICE_PORT_HTTP=80
DEIS_LOGGER_SERVICE_PORT=80
DEIS_LOGGER_PORT_514_UDP_PROTO=udp
DEIS_LOGGER_PORT_80_TCP_ADDR=10.0.130.236
DEIS_LOGGER_PORT_514_UDP=udp://10.0.130.236:514
DEIS_LOGGER_PORT_514_UDP_ADDR=10.0.130.236
DEIS_LOGGER_PORT=tcp://10.0.130.236:80

Your app is in a crash loop; that's what the describe pod output is telling you. That's why you aren't seeing logs.

Is there a way I can see the logs of why my app is crashing?

kubectl will not allow you to see the logs of an app that is in a bad state. So the only way to do it is to ssh into the host and do docker logs <container id>.
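Roughly, assuming SSH access to the node the pod landed on (the node name is in the describe output above; the key path is discussed below):

$ ssh -i ~/.ssh/kube_aws_rsa <user>@ip-172-20-0-234.ec2.internal
$ docker ps -a | grep vulcan-ziggurat-cmd    # find the exited container
$ docker logs <container id>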

Assuming your application actually produces logs when failing to boot, and the logger and everything else are working (which it sounds like they are now), logs should be available via deis logs. If you're still trying to get the logger working, you can also try kubectl logs --previous --namespace=vulcan-ziggurat vulcan-ziggurat-v3-cmd-7z152 and see if something comes up.

To SSH into an AWS cluster I need the key pair, right? Do I already have the key on my PC after running export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash?

I'll try it, @bacongobbler =)

Thanks for the fast answers, guys! You're awesome. Sorry for my lack of understanding of both Deis and Kubernetes.

When you provisioned the cluster you had to provide it with a keypair, so it exists somewhere on your system.

Yeah, I believe Kubernetes saves the SSH key as ~/.ssh/kube_aws_rsa (that's what's on my filesystem, at least).

Maybe my Rails app is too big for the instance size I'm choosing (t2.small, with two minions and one master). I'm going to raise it and test.

And kube_aws_rsa is indeed the right key =)

The logger has changed quite a bit in recent releases, so this is likely no longer a problem. If you're still having issues with deis v2.5.0 feel free to open a new issue!

I'm experiencing a similar issue with Workflow 2.5.0. I experimented with deis/example-java-jetty and deis/example-ruby-sinatra.

$ git push deis master
Enter passphrase for key '/home/gkiko/.ssh/id_rsa': 
Counting objects: 144, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (73/73), done.
Writing objects: 100% (144/144), 30.00 KiB | 0 bytes/s, done.
Total 144 (delta 60), reused 144 (delta 60)
Starting build... but first, coffee!
...
...
(truncated)
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY     STATUS             RESTARTS   AGE
deis          deis-builder-2205901545-nnuu1                 1/1       Running            0          16h
deis          deis-controller-3055305287-tqtik              1/1       Running            0          16h
deis          deis-database-nk1kv                           1/1       Running            0          16h
deis          deis-logger-fluentd-ztiaj                     1/1       Running            0          16h
deis          deis-logger-ii2xa                             1/1       Running            5          16h
deis          deis-logger-redis-hlk3u                       1/1       Running            0          16h
deis          deis-minio-wijsm                              1/1       Running            0          16h
deis          deis-monitor-grafana-0zgol                    1/1       Running            0          16h
deis          deis-monitor-influxdb-gg1oh                   1/1       Running            0          16h
deis          deis-monitor-telegraf-fv47v                   1/1       Running            0          16h
deis          deis-nsqd-ofahq                               1/1       Running            0          16h
deis          deis-registry-3758253254-8co5m                1/1       Running            0          16h
deis          deis-registry-proxy-j3l15                     1/1       Running            0          16h
deis          deis-router-1536869610-39n38                  1/1       Running            0          16h
deis          deis-workflow-manager-2813685453-4xghi        1/1       Running            0          16h
deis          slugbuild-yuppie-quacking-6e823755-ccf31800   0/1       ErrImagePull       0          14m
deis          slugbuild-zydeco-keepsake-f404a360-38e2e791   0/1       ImagePullBackOff   0          1h
deis          slugbuild-zydeco-keepsake-f404a360-4bf18ebf   0/1       ImagePullBackOff   0          1h
deis          slugbuild-zydeco-keepsake-f404a360-8ae8e03d   0/1       ImagePullBackOff   0          1h
deis          slugbuild-zydeco-keepsake-f404a360-d3cf887a   0/1       ImagePullBackOff   0          1h
kube-system   heapster-v1.1.0-2101778418-cqra1              4/4       Running            0          16h
kube-system   kube-dns-v17.1-evzuu                          3/3       Running            0          17h
kube-system   kube-proxy-kubernetes-node-1                  1/1       Running            0          17h
kube-system   kubernetes-dashboard-v1.1.1-r8svt             1/1       Running            0          17h
kube-system   monitoring-influxdb-grafana-v3-kuvzj          2/2       Running            0          17h
$ kubectl --namespace=deis describe pod slugbuild-yuppie-quacking-6e823755-ccf31800
Name:       slugbuild-yuppie-quacking-6e823755-ccf31800
Namespace:  deis
Node:       kubernetes-node-1/10.245.1.3
Start Time: Sat, 10 Sep 2016 10:36:49 +0400
Labels:     heritage=slugbuild-yuppie-quacking-6e823755-ccf31800
Status:     Pending
IP:     10.246.75.25
Controllers:    <none>
Containers:
  deis-slugbuilder:
    Container ID:   
    Image:      quay.io/deis/slugbuilder:v2.3.1
    Image ID:       
    Port:       
    State:      Waiting
      Reason:       ImagePullBackOff
    Ready:      False
    Restart Count:  0
    Environment Variables:
      TAR_PATH:     home/yuppie-quacking:git-6e823755/tar
      PUT_PATH:     home/yuppie-quacking:git-6e823755/push
      BUILDER_STORAGE:  minio
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  objectstorage-keyfile:
    Type:   Secret (a volume populated by a Secret)
    SecretName: objectstorage-keyfile
  default-token-utdts:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-utdts
QoS Tier:   BestEffort
Events:
  FirstSeen LastSeen    Count   From                SubobjectPath               Type        Reason      Message
  --------- --------    -----   ----                -------------               --------    ------      -------
  15m       15m     1   {default-scheduler }                            Normal      Scheduled   Successfully assigned slugbuild-yuppie-quacking-6e823755-ccf31800 to kubernetes-node-1
  15m       3m      4   {kubelet kubernetes-node-1} spec.containers{deis-slugbuilder}   Normal      Pulling     pulling image "quay.io/deis/slugbuilder:v2.3.1"
  13m       1m      4   {kubelet kubernetes-node-1} spec.containers{deis-slugbuilder}   Warning     Failed      Failed to pull image "quay.io/deis/slugbuilder:v2.3.1": image pull failed for quay.io/deis/slugbuilder:v2.3.1, this may be because there are no credentials on this request.  details: (net/http: request canceled)
  13m       1m      4   {kubelet kubernetes-node-1}                     Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "deis-slugbuilder" with ErrImagePull: "image pull failed for quay.io/deis/slugbuilder:v2.3.1, this may be because there are no credentials on this request.  details: (net/http: request canceled)"

  13m   27s 10  {kubelet kubernetes-node-1} spec.containers{deis-slugbuilder}   Normal  BackOff     Back-off pulling image "quay.io/deis/slugbuilder:v2.3.1"
  13m   27s 10  {kubelet kubernetes-node-1}                     Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "deis-slugbuilder" with ImagePullBackOff: "Back-off pulling image \"quay.io/deis/slugbuilder:v2.3.1\""
$ kubectl logs slugbuild-yuppie-quacking-6e823755-ccf31800 --namespace=deis
Error from server: container "deis-slugbuilder" in pod "slugbuild-yuppie-quacking-6e823755-ccf31800" is waiting to start: trying and failing to pull image

slugbuild-* pods' status changes from ImagePullBackOff to ErrImagePull periodically.

deis v2.4.0
helmc version 0.8.1+a9c55cf
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.5", GitCommit:"b0deb2eb8f4037421077f77cb163dbb4c0a2a9f5", GitTreeState:"clean", BuildDate:"2016-08-11T20:29:08Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.5", GitCommit:"b0deb2eb8f4037421077f77cb163dbb4c0a2a9f5", GitTreeState:"clean", BuildDate:"2016-08-11T20:21:58Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

@gkiko the logs indicate that the request to quay.io was cancelled, usually indicating a networking issue. Can you check and ensure your networking settings are correct?

How can I check this? Executing docker pull quay.io/deis/slugbuilder:v2.3.1 from the same machine works correctly, but I suppose that doesn't tell us much.

kubernetes/kubernetes#25277 seems to show the same behaviour, though that's in their end-to-end environment. Essentially you're going to have to check the network within your Kubernetes cluster and debug from there why it is slow or unreliable.
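A few standard connectivity checks, run on each node, should narrow it down (nothing Deis-specific here):

$ nslookup quay.io                   # does DNS resolve from the node?
$ curl -v https://quay.io/v2/        # can the node reach the registry?
$ cat /etc/resolv.conf               # which resolver is the node using?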

@bacongobbler running describe pod now shows additional data:

Events:
  FirstSeen LastSeen    Count   From                SubobjectPath       Type        Reason  Message
  --------- --------    -----   ----                -------------       --------    ------  -------
  1d        1m      527 {kubelet kubernetes-node-1} spec.containers{deis-slugbuilder}   Normal      Pulling pulling image "quay.io/deis/slugbuilder:v2.3.1"
  1d        1m      473 {kubelet kubernetes-node-1} spec.containers{deis-slugbuilder}   Warning     Failed  Failed to pull image "quay.io/deis/slugbuilder:v2.3.1": image pull failed for quay.io/deis/slugbuilder:v2.3.1, this may be because there are no credentials on this request.  details: (unable to ping registry endpoint https://quay.io/v0/
v2 ping attempt failed with error: Get https://quay.io/v2/: dial tcp: lookup quay.io: no such host
 v1 ping attempt failed with error: Get https://quay.io/v1/_ping: dial tcp: lookup quay.io: no such host)
  1d    1m  473 {kubelet kubernetes-node-1}     Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "deis-slugbuilder" with ErrImagePull: "image pull failed for quay.io/deis/slugbuilder:v2.3.1, this may be because there are no credentials on this request.  details: (unable to ping registry endpoint https://quay.io/v0/\nv2 ping attempt failed with error: Get https://quay.io/v2/: dial tcp: lookup quay.io: no such host\n v1 ping attempt failed with error: Get https://quay.io/v1/_ping: dial tcp: lookup quay.io: no such host)"

  1d    12s 11524   {kubelet kubernetes-node-1} spec.containers{deis-slugbuilder}   Normal  BackOff     Back-off pulling image "quay.io/deis/slugbuilder:v2.3.1"
  1d    12s 11524   {kubelet kubernetes-node-1}                     Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "deis-slugbuilder" with ImagePullBackOff: "Back-off pulling image \"quay.io/deis/slugbuilder:v2.3.1\""

I checked from my machine whether ping quay.io or ping 184.73.225.107 (the DNS A record for quay.io) would work. No response to my pings; maybe they block ICMP. But curl https://quay.io/v1/_ping works.

Then I SSH-ed into the Kubernetes master and ran curl https://quay.io/v1/_ping. It still worked. Then I ran the same command from Kubernetes node-1:

$ curl https://quay.io/v1/_ping
curl: (6) Could not resolve host: quay.io
$ curl 184.73.225.107
curl: (7) Couldn't connect to server

On the kubernetes master:

$ ifconfig enp0s3
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe7a:d4a8  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:7a:d4:a8  txqueuelen 1000  (Ethernet)
        RX packets 411436  bytes 581275958 (554.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 207985  bytes 11892396 (11.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    100    0        0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U     100    0        0 enp0s3
10.245.1.0      0.0.0.0         255.255.255.0   U     0      0        0 enp0s8
10.246.0.0      0.0.0.0         255.255.0.0     U     0      0        0 flannel0
10.246.7.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 enp0s8

On kubernetes node-1:

$ ifconfig enp0s3
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 08:00:27:7a:d4:a8  txqueuelen 1000  (Ethernet)
        RX packets 668887  bytes 954633555 (910.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 186189  bytes 10866598 (10.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.245.1.0      0.0.0.0         255.255.255.0   U     0      0        0 enp0s8
10.246.0.0      0.0.0.0         255.255.0.0     U     0      0        0 flannel0
10.246.75.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 enp0s8

So there is no way for node-1 to communicate with the outside world: unlike the master, it has no default route and no address on enp0s3. Is node-1 configured correctly? Sorry to take up your time; I'm new to Kubernetes.
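For comparison, a hedged sketch of what a healthy node should show, based on the master's output above; if the VirtualBox NAT interface simply never got an address via DHCP, re-running the client might recover it without reprovisioning:

$ ip addr show enp0s3        # should carry a 10.0.2.x address, like the master
$ ip route | grep default    # should show: default via 10.0.2.2 dev enp0s3
$ sudo dhclient enp0s3       # attempt to (re)acquire the NAT address via DHCP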

UPDATE
I deleted the VirtualBox images and installed Deis from scratch using Vagrant. This time the networks on the master and node-1 are configured correctly and I'm able to deploy applications 😁