kubectl -- Unable to connect to the server: net/http: TLS handshake timeout
Slyracoon23 opened this issue · comments
What happened:
Started a kind cluster using kind create cluster. Output:
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
Then ran kubectl cluster-info --context kind-kind and, after a long delay, got:
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: net/http: TLS handshake timeout
If I run kubectl cluster-info dump, I get the same error: Unable to connect to the server: net/http: TLS handshake timeout
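For reference, a quick way to check whether the API server endpoint is reachable at all. This is a hypothetical diagnostic sketch: kind-control-plane is the default container name for a cluster named "kind", and the host port in the curl line is a placeholder for whatever docker port prints on your machine:

```bash
# Find the host port Docker published for the API server (6443 inside the container)
docker port kind-control-plane 6443/tcp

# Probe that endpoint directly; -k skips certificate verification because we
# only care whether the TLS handshake completes before the timeout
curl -k --max-time 10 https://127.0.0.1:42769/version
```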
What you expected to happen:
kubectl should switch to the kind-kind context and connect to the cluster.
How to reproduce it (as minimally and precisely as possible):
I don't know; I ran the same steps on another Ubuntu 20.04 machine and it worked fine.
Anything else we need to know?:
The exported kind log files, for your convenience:
docker-info.txt
kind-version.txt
kubelet.log
kubernetes-version.txt
journal.log
containerd.log
serial.log
alternatives.log
Environment:
- kind version (use `kind version`): kind v0.11.1 go1.16.4 linux/amd64
- Kubernetes version (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
- Docker version (use `docker info`):
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
scan: Docker Scan (Docker Inc., v0.9.0)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 20
Server Version: 20.10.10
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
Default Runtime: runc
Init Binary: docker-init
containerd version: 5b46e404f6b9f661a205e28d59c982d3634148f8
runc version: v1.0.2-0-g52b36a2
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.11.0-40-generic
Operating System: Ubuntu 20.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 31.14GiB
Name: slyracoon23-Dell-G15-5510
ID: 4HUR:6SPS:FCEH:TH26:OGLS:4WWB:VUSE:LDBL:V22T:IOKL:2ZXW:5ILH
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
- OS (e.g. from `/etc/os-release`):
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
Can you upload the rest of the logs (perhaps as a zip or tarball)? There should be logs from the cluster containers (api-server etc.). Everything looks fine in the logs you sent; kubelet is successfully connected to the api-server.
I restarted my computer and the problem disappeared. I don't know how to reproduce it.
What is the procedure if the problem recurs? Do I reopen the issue?
Yes, if you can figure out how to reproduce it, please re-open with more details.
As-is we don't know how to reproduce it and don't have reason to believe it's a kind bug. kind isn't running anything on reboot (just the containers as run by Docker), so it seems likely this involves something on the host, e.g. networking (firewall or other iptables rules). A rough sketch of how one might rule that out is below.
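This is a hypothetical checklist, assuming an iptables-based setup with ufw as the firewall; adjust for your distro:

```bash
# Is a host firewall active?
sudo ufw status verbose

# Look for DOCKER chains and any rules that could drop traffic to the
# published API server port
sudo iptables -L -n -v | grep -i -e docker -e drop

# Docker's NAT rules for published ports (the kind API server mapping
# should appear here)
sudo iptables -t nat -L DOCKER -n -v
```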
It appears that I have the same issue. After using kind to create a cluster successfully, I suspended the virtual machine, but when I resume it later, kubectl reports this error, and of course everything still works fine inside the container.
I think the problem might be related to docker-proxy. Although ps -ef shows that the api-server port is mapped out, the connection fails in the virtual machine.
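A minimal way to check that, assuming the default container name kind-control-plane (both commands only inspect state, they change nothing):

```bash
# Is docker-proxy actually listening on the published host port?
sudo ss -ltnp | grep docker-proxy

# Compare with the port Docker believes it published for the API server
docker port kind-control-plane 6443/tcp
```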
It appears that I had other applications running in Docker which were taking up a lot of allocated resources.
After removing them, I was able to access the kind cluster again.
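For anyone else who ends up here, a quick way to see what is consuming resources before deleting anything (standard Docker commands; note that docker system prune removes stopped containers and dangling images, so use it with care):

```bash
# Live CPU/memory usage per container; an overloaded host can slow the
# API server enough that the TLS handshake times out
docker stats --no-stream

# Disk usage by images, containers, and volumes
docker system df

# Optionally reclaim space from stopped containers and dangling images
docker system prune
```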