xuwang / kube-aws-terraform

KAT - Kubernetes cluster on AWS with Terraform

make ui problem

stiks opened this issue

Doing a fresh install, and it looks like coreutils is now required. Even after I install it, the last error still exists:

W1025 17:31:22.593716   60114 factory_object_mapping.go:423] Failed to download OpenAPI (tls: private key does not match public key), falling back to swagger
error: tls: private key does not match public key
make[1]: *** [kubernetes-dashboard-admin-role-binding] Error 1
make: *** [ui] Error 2

Full log

make ui
Download vault generated ca cert from the api server
Identity added: /Users/stiks/.ssh/q8s-master.pem (/Users/stiks/.ssh/q8s-master.pem)
Permitted 22 from MY_IP/32 to master...
Warning: Permanently added 'kSOMETHING' (ECDSA) to the list of known hosts.
admin.pem                                               100% 2074   276.8KB/s   00:00
Warning: Permanently added 'SOMETHING' (ECDSA) to the list of known hosts.
admin-key.pem                                           100% 1675   225.5KB/s   00:00
Warning: Permanently added 'SOMETHING' (ECDSA) to the list of known hosts.
kube-apiserver-ca.pem                                   100% 1862   280.2KB/s   00:00
Revoked 22 from MY_IP/32 to master...
kubectl config set-cluster kubernetes...
Cluster "q8s" set.
kubectl config set-credentials q8s-admin...
User "q8s-admin" set.
/bin/sh: gshred: command not found
make[1]: [kube-reconfig] Error 127 (ignored)
/Library/Developer/CommandLineTools/usr/bin/make switch-context
kubectl config set-context q8s ...
Context "q8s" modified.
kubectl config use-context q8s
Switched to context "q8s".
q8s is configured. Skip configuration.
Run kubectl config delete-context q8s if you want to re-configure.
kubectl config set-context q8s ...
Context "q8s" modified.
kubectl config use-context q8s
Switched to context "q8s".
W1025 17:28:22.869117   58979 factory_object_mapping.go:423] Failed to download OpenAPI (tls: private key does not match public key), falling back to swagger
error: tls: private key does not match public key
make[1]: *** [kubernetes-dashboard-admin-role-binding] Error 1
make: *** [ui] Error 2

@stiks I updated the code to add coreutils for gshred. Thanks!
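For reference, gshred is the g-prefixed GNU shred that comes with Homebrew's coreutils package on macOS, so a quick sanity check (assuming Homebrew is the package manager in use) looks like:

$ brew install coreutils   # installs the GNU tools with a g prefix, including gshred
$ which gshred             # should print a path once coreutils is on PATH
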
For the "private key does not match public key" error, I just updated the repo to do a kubectl delete-context in kube-reconfig. Could you try make kube-reconfig again? I just tested it and it seems to be working for me.
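Roughly, a minimal sketch of the equivalent manual steps (context name q8s taken from the log above):

$ kubectl config delete-context q8s   # remove the stale q8s context entry from ~/.kube/config
$ make kube-reconfig                  # re-download the certs and rebuild the context
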

You can validate the cluster's running status before you try to install add-ons. See troubleshooting.
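For example, a few standard kubectl checks (not specific to this repo) before installing add-ons:

$ kubectl cluster-info                 # API server should answer over TLS
$ kubectl get nodes                    # masters and workers should report Ready
$ kubectl get pods --all-namespaces    # core pods should be Running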

Sorry for the delay. Nope, still the same. I'll try tearing down the cluster and then putting it back up.

cd resources/add-ons; make ui
q8s is configured. Skip configuration.
Run kubectl config delete-context q8s if you want to re-configure.
kubectl config set-context q8s ...
Context "q8s" modified.
kubectl config use-context q8s
Switched to context "q8s".
W1027 10:53:43.652189   64538 factory_object_mapping.go:423] Failed to download OpenAPI (tls: private key does not match public key), falling back to swagger
error: tls: private key does not match public key
make[1]: *** [kubernetes-dashboard-admin-role-binding] Error 1
make: *** [ui] Error 2

Also, teardown doesn't really work without having the correct key:

/bin/bash: ./teardown.sh: No such file or directory
make: [teardown] Error 127 (ignored)
/bin/bash: ./teardown.sh: No such file or directory
make: [teardown] Error 127 (ignored)
/Library/Developer/CommandLineTools/usr/bin/make destroy-add-ons
cd resources/add-ons; make kube-cleanup
q8s is configured. Skip configuration.
Run kubectl config delete-context q8s if you want to re-configure.
kubectl config set-context q8s ...
Context "q8s" modified.
kubectl config use-context q8s
Switched to context "q8s".
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
error: tls: private key does not match public key
make[3]: *** [delete-dashboard] Error 1
make[2]: [kube-cleanup] Error 2 (ignored)
q8s is configured. Skip configuration.
Run kubectl config delete-context q8s if you want to re-configure.
kubectl config set-context q8s ...
Context "q8s" modified.
kubectl config use-context q8s
Switched to context "q8s".
kubectl delete -f monitor/
error: tls: private key does not match public key
make[3]: *** [delete-monitor] Error 1
make[2]: [kube-cleanup] Error 2 (ignored)
q8s is configured. Skip configuration.
Run kubectl config delete-context q8s if you want to re-configure.
kubectl config set-context q8s ...
Context "q8s" modified.
kubectl config use-context q8s
Switched to context "q8s".
kubectl delete -f kubedns/;
error: tls: private key does not match public key
make[3]: *** [delete-kubedns] Error 1
make[2]: [kube-cleanup] Error 2 (ignored)
/Library/Developer/CommandLineTools/usr/bin/make destroy-all
make[1]: *** [plan-destroy-all] Error 2
make: *** [teardown] Error 2

I think the problem may be in the multi-master setup. In my setup I have 3 master servers, and I've logged in to every single one of them to check that they're working. Even if I generate a token from any master (a new ServiceAccount), it keeps saying: "http: proxy error: x509: certificate signed by unknown authority"
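One way to see which CA actually signed the certificate the ELB is presenting (hypothetical endpoint; substitute your own API server ELB address):

$ openssl s_client -connect <api-elb-hostname>:443 -showcerts </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -subject   # issuer should be the cluster CA you expect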

I had a similar issue last time. I could not log in using public keys (for some reason). I connected to one of the masters and created a new ServiceAccount with the cluster-admin role attached to it. Then I got a token for that account, and I can log in using this token.

Step by step:

  1. Create the ServiceAccount
$ kubectl create serviceaccount stiks
  2. Give the stiks account the required permissions
$ kubectl create clusterrolebinding serviceaccounts-cluster-admin \
    --clusterrole=cluster-admin --serviceaccount=default:stiks
  3. Get the token for that account:
$ secret=$(kubectl get sa stiks -o json | jq -r .secrets[].name)
$ kubectl get secret $secret -o go-template="{{.data.token}}" | base64 -d
  4. Use that token in ~/.kube/config
...
- name: q8s-admin
  user:
    as-user-extra: {}
    token: PUT_TOKEN_HERE
...

And no issues with connection.
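Instead of editing ~/.kube/config by hand, the same token can also be wired in with kubectl itself; a sketch, reusing the q8s-admin user and q8s context names from this setup:

$ kubectl config set-credentials q8s-admin --token="PUT_TOKEN_HERE"
$ kubectl --context=q8s get nodes   # should now authenticate with the token instead of the cert/key pair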

@stiks ah, there is a deeper issue here. When in multi-master mode, each master asks Vault to generate a new public cert and key for the admin role. Although the key and cert match on each server, when we download the cert/key through the ELB we very likely end up pulling the public cert from one master and the private key from another! Then you get "error: tls: private key does not match public key". That makes sense. I will file an issue and try to resolve this as soon as possible. Your workaround is perfectly good!
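One way to confirm this failure mode is to compare the modulus of the downloaded cert and key (file names taken from the scp output above; assuming RSA keys):

$ openssl x509 -noout -modulus -in admin.pem     | openssl md5
$ openssl rsa  -noout -modulus -in admin-key.pem | openssl md5
# the two digests must match; if they differ, the cert and key came from different masters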

@stiks could you try again with make kube-reconfig? 1af116d should fix this. Thank you!

I realised the certificates differed from master to master while I was trying to make the system work; I just hadn't thought that the script was pulling them from the wrong server. Yeah, your fix resolves the issue now. Thanks.