kubectl losing connection to minikube - kubernetes

I am using
$ minikube version
minikube version: v0.28.2
and
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
I am running a local cluster when, all of a sudden, communication between the cluster and kubectl is lost:
$ kubectl get pods
No resources found.
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
The problem goes away after performing an explicit minikube stop && minikube start
Any ideas how to debug this?

It sounds like a problem with your kubectl version.
When the server version is older than my client, I usually download a kubectl binary that matches the server version.
Here is the command (Linux version):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl
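After downloading, you will typically also need to make the binary executable, move it onto your PATH, and confirm the client version (standard install steps; adjust the target directory to your setup):
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version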

The problem had to do with resources.
It was resolved by stopping minikube and then starting it with more memory and CPUs:
minikube --memory 8192 --cpus 2 start
This thread was helpful.
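If you want the larger allocation to persist across restarts, minikube can also record it in its config before starting; a minimal sketch, assuming your host can spare 8 GB of RAM and 2 CPUs:
minikube stop
minikube config set memory 8192
minikube config set cpus 2
minikube start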

Related

Kubernetes - unable to run echoserver on minikube

I am following this example.
When I run the following command, I get an error:
➜ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
Error from server (NotFound): the server could not find the requested resource
I just installed minikube and kubectl fresh; the details are below:
➜ kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     2m
➜ kubectl get pods
No resources found.
➜ kubectl get rc --all-namespaces
No resources found.
➜ kubectl cluster-info
Kubernetes master is running at https://192.168.99.101:8443
KubeDNS is running at https://192.168.99.101:8443/api/v1/proxy/namespaces/kube-system/services/kube-dns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
➜ minikube version
minikube version: v1.7.3
commit: 436667c819c324e35d7e839f8116b968a2d0a3ff
What am I doing wrong?
Your kubectl is version 1.5, which is very old compared to the v1.17.3 server. It is trying to use an outdated API version for the Deployment resource that no longer exists on the server, which is why you get a NotFound error.
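The fix is to install a kubectl close to the server version. For the v1.17.3 server above, the same release URL pattern as in the first answer can be used (Linux amd64 shown; adjust for your platform):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version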

Kubernetes cannot join workers on CentOS 7

My issue is that I cannot connect our machines (master and workers).
My join command is:
kubeadm join xxx:xxx:xxx:xxx:6443 --token a72x22.ofmqdjyzi7ot4l70 --discovery-token-ca-cert-hash sha256:3cfd9ddb1e655ef2172c12d914e2bb001434cc4c8a756919a7a6a9f0603e3131
I have already executed
echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 >/proc/sys/net/ipv4/ip_forward
swapoff -a
and I got this error:
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
error execution phase kubelet-start: configmaps "kubelet-config-1.15" is forbidden: User "system:bootstrap:a61x22" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
Master kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Worker kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Maybe my issue is related to the host or port?
How can I solve this?
Check whether the ConfigMap kubelet-config-1.15 exists with the command below:
kubectl -n kube-system get configmap kubelet-config-1.15
Your master is probably at version 1.14, while your new node is running kubeadm/kubelet 1.15.
In that case the kubelet-config-1.15 ConfigMap does not exist; you have a kubelet-config-1.14 ConfigMap instead.
Either upgrade your master node to v1.15, or install Kubernetes v1.14 on your worker node (a sketch of the second option follows the node listing below).
You can see which version your nodes are running with
kubectl get nodes
[root@master /]# k get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   32d   v1.14.0
node6    Ready    <none>   32d   v1.14.2
nodo2    Ready    <none>   32d   v1.14.2
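For the second option, a minimal sketch for CentOS 7 would be to pin the worker's packages to the 1.14 line and retry the join (this assumes the standard Kubernetes yum repository is configured; the exact patch version and the join parameters are placeholders):
# put the worker back to a clean state before re-joining
kubeadm reset -f
# replace the 1.15 packages with 1.14 ones
yum remove -y kubeadm kubelet
yum install -y kubeadm-1.14.3-0 kubelet-1.14.3-0 --disableexcludes=kubernetes
systemctl enable --now kubelet
# re-run the join with your own master address, token and CA hash
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>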

Can't delete Kubernetes object: no kind "GetOptions" is registered

I'm getting the following error when trying to delete a StatefulSet on my local minikube cluster:
error: no kind "GetOptions" is registered for version "apps/v1"
I can set the replicas to 0, but that still keeps the StatefulSet alive.
I'm running the following versions:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Any help would be appreciated!
It seems your kubectl and Kubernetes versions are out of sync: your v1.8 kubectl does not know about the apps/v1 StatefulSet API served by the v1.10 cluster.
You need to upgrade your kubectl.
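Once the client is upgraded (to v1.10.x in this case, matching the server), the delete should go through; a quick way to confirm the skew is gone and retry, with the StatefulSet name as a placeholder:
kubectl version --short
kubectl delete statefulset <statefulset-name>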

Kubernetes Pods Always ContainerCreating

I have installed a cluster of one master and one node using kubeadm
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Whenever I try to deploy a pod (nginx, grafana, influxdb, heapster, tiller), it stays stuck in the ContainerCreating state.
I can't figure out how to diagnose the issue so that I can get the containers into a Running state.
Use the following commands to check the kubelet logs on the node and diagnose the issue accordingly:
systemctl status kubelet
journalctl -xeu kubelet
For details:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/
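It also often helps to look at the pod's events, which usually state why the container is stuck in ContainerCreating (image pull failures, a missing CNI network plugin, volume mount problems, and so on); the pod name and namespace below are placeholders:
kubectl describe pod <pod-name> -n <namespace>
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp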

kubectl describe nodes reports an error

I am getting this error from kubectl describe nodes nodename. I did a Google search but found nothing useful. What does this mean?
Failed to update Node Allocatable Limits "": failed to set supported cgroup subsystems for cgroup : Failed to set config for supported subsystems : failed to write 3783778304 to memory.limit_in_bytes: write /rootfs/sys/fs/cgroup/memory/memory.limit_in_bytes: invalid argument
Do I need to change some kernel settings using sysctl?
[iahmad@lxplus000 ~]$ kubectl --version
Kubernetes v1.5.2
[iahmad@lxplus000 ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-18T08:52:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"linux/amd64"}
It depends on the kubectl/kubernetes version: this was seen (and fixed) in kubernetes issue 42701.
Kubernetes 1.6 should already include the patch.
If this is not the same bug, the error message was also seen in issue 29166, where the reporter noted:
"I just forgot to turn disk.uuid back on after recreating my new VMs!"
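For reference, that setting corresponds to VMware's disk UUID option, which on a vSphere/VMware VM is typically enabled in the machine's .vmx configuration (an assumption about that reporter's environment, not something shown in this thread):
disk.EnableUUID = "TRUE"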