UnknownHostException
Opalo opened this issue
Hi,
First of all, thanks for the cool tutorial, but I'm having some problems setting it all up:
Whitelabel Error Page
This application has no explicit mapping for /error, so you are seeing this as a fallback.
Mon Oct 30 17:41:23 UTC 2017
There was an unexpected error (type=Internal Server Error, status=500).
I/O error on GET request for "http://productcatalogue:8020/products": productcatalogue; nested exception is java.net.UnknownHostException: productcatalogue
I'm getting it after executing:
minikube service shopfront
What's the problem? All containers run correctly. Am I missing something?
Hey @Opalo,
Thanks for the feedback about the tutorial! In regards to your issue, could you let me know what URL you are hitting when you get this issue?
Could you also show me the output of kubectl get svc
please? You should get something like:
NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes         10.0.0.1     <none>        443/TCP          31d
productcatalogue   10.0.0.37    <none>        8020:31803/TCP   30d
shopfront          10.0.0.216   <none>        8010:31208/TCP   30d
stockmanager       10.0.0.149   <none>        8030:30723/TCP   30d
What I'm getting after running kubectl get svc
is as follows:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master*] kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4h
productcatalogue NodePort 10.0.0.242 <none> 8020:32010/TCP 4h
shopfront NodePort 10.0.0.193 <none> 8010:30403/TCP 4h
stockmanager NodePort 10.0.0.173 <none> 8030:31954/TCP 4h
Running: minikube service shopfront
opens the following URL: http://192.168.99.100:30403/
Thanks for the prompt reply!
Here's also an interesting part:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master*] kubectl get pods
NAME READY STATUS RESTARTS AGE
productcatalogue-gm487 1/1 Running 39 11h
shopfront-hwdlw 1/1 Running 0 11h
stockmanager-clnh9 1/1 Running 0 11h
Why does productcatalogue-gm487
get restarted?
Thanks for this, and a nice piece of debugging on your part with the look at the pods. My guess is that minikube has not been given enough RAM to play with, and so the productcatalogue JVM is getting OOM-killed due to the small amount of RAM given to this container, which causes the pod to be restarted.
I think I should have made it clearer in the article that you really need to give minikube at least 3 GB of RAM, and ideally 4 GB+. This can be done when starting minikube:
$ minikube start --cpus 2 --memory 4096
If you stop and restart minikube with these flags does this solve your issue?
Thanks! I ran minikube as follows:
minikube start --cpus 2 --memory 8192
Then I applied all the pods. Same result. See the output below:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master*] kubectl logs -p productcatalogue-b5tr4
INFO [2017-10-31 08:27:40,366] org.eclipse.jetty.util.log: Logging initialized @1432ms to org.eclipse.jetty.util.log.Slf4jLog
INFO [2017-10-31 08:27:40,443] io.dropwizard.server.DefaultServerFactory: Registering jersey handler with root path prefix: /
INFO [2017-10-31 08:27:40,445] io.dropwizard.server.DefaultServerFactory: Registering admin handler with root path prefix: /
INFO [2017-10-31 08:27:40,779] io.dropwizard.server.DefaultServerFactory: Registering jersey handler with root path prefix: /
INFO [2017-10-31 08:27:40,779] io.dropwizard.server.DefaultServerFactory: Registering admin handler with root path prefix: /
INFO [2017-10-31 08:27:40,780] io.dropwizard.server.ServerFactory: Starting product-list-service
INFO [2017-10-31 08:27:40,980] org.eclipse.jetty.setuid.SetUIDListener: Opened application@6c0e13b7{HTTP/1.1,[http/1.1]}{0.0.0.0:8020}
INFO [2017-10-31 08:27:40,981] org.eclipse.jetty.setuid.SetUIDListener: Opened admin@22eaa86e{HTTP/1.1,[http/1.1]}{0.0.0.0:8025}
INFO [2017-10-31 08:27:40,984] org.eclipse.jetty.server.Server: jetty-9.4.z-SNAPSHOT
INFO [2017-10-31 08:27:41,571] io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:
GET /products (uk.co.danielbryant.djshopping.productcatalogue.resources.ProductResource)
GET /products/{id} (uk.co.danielbryant.djshopping.productcatalogue.resources.ProductResource)
INFO [2017-10-31 08:27:41,572] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@7a682d35{/,null,AVAILABLE}
INFO [2017-10-31 08:27:41,577] io.dropwizard.setup.AdminEnvironment: tasks =
POST /tasks/log-level (io.dropwizard.servlets.tasks.LogConfigurationTask)
POST /tasks/gc (io.dropwizard.servlets.tasks.GarbageCollectionTask)
INFO [2017-10-31 08:27:41,584] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@b791a81{/,null,AVAILABLE}
INFO [2017-10-31 08:27:41,596] org.eclipse.jetty.server.AbstractConnector: Started application@6c0e13b7{HTTP/1.1,[http/1.1]}{0.0.0.0:8020}
INFO [2017-10-31 08:27:41,596] org.eclipse.jetty.server.AbstractConnector: Started admin@22eaa86e{HTTP/1.1,[http/1.1]}{0.0.0.0:8025}
INFO [2017-10-31 08:27:41,596] org.eclipse.jetty.server.Server: Started @2665ms
172.17.0.1 - - [31/Oct/2017:08:28:08 +0000] "GET /health HTTP/1.1" 404 43 "-" "kube-probe/1.8" 96
172.17.0.1 - - [31/Oct/2017:08:28:18 +0000] "GET /health HTTP/1.1" 404 43 "-" "kube-probe/1.8" 2
INFO [2017-10-31 08:28:18,654] org.eclipse.jetty.server.AbstractConnector: Stopped application@6c0e13b7{HTTP/1.1,[http/1.1]}{0.0.0.0:8020}
INFO [2017-10-31 08:28:18,656] org.eclipse.jetty.server.AbstractConnector: Stopped admin@22eaa86e{HTTP/1.1,[http/1.1]}{0.0.0.0:8025}
INFO [2017-10-31 08:28:18,657] org.eclipse.jetty.server.handler.ContextHandler: Stopped i.d.j.MutableServletContextHandler@b791a81{/,null,UNAVAILABLE}
INFO [2017-10-31 08:28:18,667] org.eclipse.jetty.server.handler.ContextHandler: Stopped i.d.j.MutableServletContextHandler@7a682d35{/,null,UNAVAILABLE}
Any ideas how I can investigate it further?
I think you might have to delete and re-start your minikube for this config change to take effect? (kubernetes/minikube#567)
You can get cluster info by looking at kubectl describe nodes
and also if you have jq installed, you can use this kubectl cluster-info dump | jq '.Items[0].Status.Capacity'
I did delete minikube several times. It definitely has 8 GB of memory. It still doesn't work, and I have no idea why.
Running:
kubectl cluster-info dump | jq '.Items[0].Status.Capacity'
gives:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master*] kubectl cluster-info dump | jq '.Items[0].Status.Capacity'
{
"cpu": "2",
"memory": "8175252Ki",
"pods": "110"
}
null
null
null
null
null
null
null
parse error: Invalid numeric literal at line 627, column 5
Thank you for your time, once again!
I think I've got it @Opalo - it looks like I configured the healthcheck of the productcatalogue incorrectly. The productcatalogue service is different from the other two, in that it is a Dropwizard-based service. This means that the health check endpoint is exposed on a different port and path (productcatalogue:8025/healthcheck, not productcatalogue:8020/health).
I've now updated the Kubernetes yaml files, and so if you git pull and re-"kubectl apply -f X" all of the services you should be good to go!
I would like to say a massive thanks for reporting this, and apologies for any confusion caused! I'm slightly puzzled as to how this ever worked, although I'm sure it did, as I took the screenshot of the shopfront UI for the article while running this in Kubernetes. The only thing I can think of is that I initially built the app on Kubernetes 1.7 (minikube 0.22). I saw in your debug info that you were running 1.8, so I upgraded my local minikube this afternoon before testing everything again.
I've asked around to see if anyone else has this issue with healthcheck, and will update this issue if I find anything.
Thanks again - I'm sure others must have experienced this issue, but no-one else reported it!
As an FYI for anyone interested, I debugged this issue by using Datawire's Telepresence to proxy to the cluster, and then curled all of the endpoints as if it was the shopfront calling the endpoints.
I got a 404 when curling the productcatalogue health check (curl productcatalogue:8020/health), so I then looked through the logs of the productcatalogue (kubectl logs productcatalogue-4ltp9), and saw the mention of the 'admin' endpoint being active.
I then ran the productcatalogue locally via Docker (mapping the app and admin ports) and after a few curls I realised that the health check endpoint is exposed only on the admin port and under 'healthcheck' (not 'health').
After this I fixed the Kubernetes productcatalogue yaml, and tested in minikube - everything looked good :-)
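For anyone following along, the fix amounts to pointing the probe at the Dropwizard admin port. A hypothetical sketch of the corrected probe (field values here are illustrative, not copied from the repo; see the actual yaml in the project for the real configuration):

```yaml
# Illustrative sketch only -- check the repo's productcatalogue yaml for
# the actual manifest. The key point: the probe must target the Dropwizard
# admin port (8025) and the /healthcheck path, rather than the application
# port (8020) and /health.
livenessProbe:
  httpGet:
    path: /healthcheck
    port: 8025
  initialDelaySeconds: 30
  timeoutSeconds: 1
```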
Thanks @danielbryantuk. It has stopped restarting all the time, but when I hit shopfront in the browser I'm still getting the Whitelabel Error Page as in the first post here. I've recreated minikube and applied all the yaml files again; before that, all Docker images were rebuilt and pushed. /healthcheck
gives me a 404:
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master] curl $(minikube service productcatalogue --url)/healthcheck
{"code":404,"message":"HTTP 404 Not Found"}%
Of course I've synchronized the repo.
Hey @Opalo, I can't seem to recreate the issue. Are you using your own Docker Hub account and pushing your own builds of the containers? If so, I'm assuming that you've updated the k8s yaml files to use your containers?
As an FYI, I started with a clean K8s cluster, and then kubectl apply -f
the three yaml files. I then do minikube service shopfront
, and after a minute or so (when the containers have downloaded into minikube/k8s and the apps have initialised), I refresh the browser and get the expected UI.
You won't be able to curl the healthcheck endpoint on the productcatalogue using the method you've shown, because of the port issue I mentioned in my earlier comment, i.e. we aren't exposing the admin port used by the healthcheck in the k8s Service yaml (therefore minikube can't expose anything via this port).
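For context, a hypothetical sketch of what such a Service definition might look like (illustrative values only, not the repo's actual yaml); only the application port is mapped, so the admin port stays pod-internal:

```yaml
# Sketch of the productcatalogue Service (illustrative field values).
# Only application port 8020 is exposed. The admin port 8025, where
# Dropwizard serves /healthcheck, is not mapped here, so it is only
# reachable from inside the pod (e.g. via kubectl exec + curl localhost).
apiVersion: v1
kind: Service
metadata:
  name: productcatalogue
spec:
  type: NodePort
  selector:
    app: productcatalogue
  ports:
  - protocol: TCP
    port: 8020
```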
However, you can exec (kubectl exec -it <<pod>> -- /bin/bash
) into the container and curl the admin port endpoint via localhost e.g.
(master) kubernetes $ kubectl get svc
NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes         10.0.0.1     <none>        443/TCP          22h
productcatalogue   10.0.0.115   <none>        8020:32608/TCP   8s
shopfront          10.0.0.54    <none>        8010:32734/TCP   15s
stockmanager       10.0.0.207   <none>        8030:32558/TCP   1s
(master) kubernetes $ minikube service shopfront
Opening kubernetes service default/shopfront in default browser...
(master) kubernetes $ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
productcatalogue-brdvk   1/1       Running   0          37s
shopfront-jvlsm          1/1       Running   0          44s
stockmanager-m9tcp       1/1       Running   0          30s
(master) kubernetes $ kubectl exec -it productcatalogue-brdvk -- /bin/bash
root@productcatalogue-brdvk:/# curl localhost:8025/healthcheck
{"deadlocks":{"healthy":true},"template":{"healthy":true,"message":"Ok with version: 1.0-SNAPSHOT"}}
At the very beginning I created my own Docker images, pushed them to Docker Hub, and altered all the yaml files. But now I'm just using the project as it was provided. I still have this issue:
I/O error on GET request for "http://productcatalogue:8020/products": productcatalogue; nested exception is java.net.UnknownHostException: productcatalogue
It looks like shopfront
could not connect to productcatalogue
, as if they were not on the same network. Does it make any difference that I'm using macOS? I uninstalled minikube and kubectl before trying again. productcatalogue
behaves much better now; it no longer restarts over and over.
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master] kubectl get pods
NAME READY STATUS RESTARTS AGE
productcatalogue-vq2hj 1/1 Running 0 17m
shopfront-ff7x9 1/1 Running 0 17m
stockmanager-nmwds 1/1 Running 0 17m
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master] kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 23m
productcatalogue NodePort 10.0.0.181 <none> 8020:31731/TCP 17m
shopfront NodePort 10.0.0.130 <none> 8010:31271/TCP 17m
stockmanager NodePort 10.0.0.254 <none> 8030:32640/TCP 17m
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master] kubectl exec -it productcatalogue-vq2hj -- /bin/bash
root@productcatalogue-vq2hj:/# curl localhost:8025/healthcheck
{"deadlocks":{"healthy":true},"template":{"healthy":true,"message":"Ok with version: 1.0-SNAPSHOT"}}root@productcatalogue-vq2hj:/#
root@productcatalogue-vq2hj:/# exit
exit
~/tutorial/kubernetes/oreilly-docker-java-shopping/kubernetes/ [master]
It seems the shopfront pod can't resolve the IP address for productcatalogue. As a workaround, exec into the shopfront pod (find the pod name with kubectl get pods) and add an entry to its /etc/hosts:
kubectl exec -it shopfront-<id> -- /bin/bash
echo "<productcatalogue-ip> productcatalogue" >> /etc/hosts