Kubernetes cannot join workers (CentOS 7)

My issue is that I cannot get the worker machines to join the master.
The join command I am using is:
kubeadm join xxx.xxx.xxx.xxx:6443 --token a72x22.ofmqdjyzi7ot4l70 --discovery-token-ca-cert-hash sha256:3cfd9ddb1e655ef2172c12d914e2bb001434cc4c8a756919a7a6a9f0603e3131
I have already executed the following commands:
echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 >/proc/sys/net/ipv4/ip_forward
swapoff -a
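Note that writing to /proc like this does not survive a reboot. For reference, the kubeadm install guide makes the same settings persistent with a sysctl drop-in file, roughly like this (the file name is arbitrary):
# persist the bridge and forwarding settings across reboots
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# reload all sysctl settings
sysctl --system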
When I run the join command I get this error:
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
error execution phase kubelet-start: configmaps "kubelet-config-1.15" is forbidden: User "system:bootstrap:a61x22" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
Master kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Worker kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Maybe my issue is related to the host or port?
How can I solve this?

Check whether the ConfigMap "kubelet-config-1.15" exists with the command below:
kubectl -n kube-system get configmap kubelet-config-1.15
Your master is probably at version 1.14 while your new node installed kubelet 1.15.
In that case the ConfigMap kubelet-config-1.15 doesn't exist yet; only kubelet-config-1.14 does.
Upgrade your master node to v1.15, or install Kubernetes v1.14 on your worker node (see the sketch after the example output below).
You can see which version your nodes are running with:
kubectl get nodes
[root@master /]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   32d   v1.14.0
node6    Ready    <none>   32d   v1.14.2
nodo2    Ready    <none>   32d   v1.14.2
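If you go the downgrade route on the worker, something like this should install matching packages on CentOS 7 (a sketch, assuming the standard Kubernetes yum repo is configured and excludes kube* packages by default; pin whatever patch release matches your master):
# on the worker: install kubelet/kubeadm/kubectl matching the master's version
yum install -y kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3 --disableexcludes=kubernetes
systemctl restart kubelet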

Related

In K8s, find which nodes pods exist on

I want to know which nodes the pods in a K8s cluster run on. I am working with a 3-node cluster that has 2 specific pods among other pods.
How can I see which pod runs on which node using kubectl?
When I use kubectl get pods, I get the following:
NAME                    READY   STATUS    RESTARTS   AGE
pod1-7485f58945-zq8tg   1/1     Running   2          2d
pod2-64c4564b5c-8rh5x   1/1     Running   0          2d1h
Following is the version of K8s (kubectl version) that I am using:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.13", GitCommit:"53c7b65d4531a749cd3a7004c5212d23daa044a9", GitTreeState:"clean", BuildDate:"2021-07-15T20:53:19Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
Try kubectl get pods -o wide.
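For illustration, -o wide adds IP and NODE columns, so the output would look roughly like this (the node names and IPs here are made up):
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
pod1-7485f58945-zq8tg   1/1     Running   2          2d     10.244.1.4   node1   <none>           <none>
pod2-64c4564b5c-8rh5x   1/1     Running   0          2d1h   10.244.2.7   node2   <none>           <none>
If you only need the pod-to-node mapping, custom columns work too:
kubectl get pods -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName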
You can find more commands like this in the official kubectl cheat sheet.

Kubernetes - unable to run echoserver on minikube

I am following this example
When I run the following command I get the error:
➜ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
Error from server (NotFound): the server could not find the requested resource
I just installed fresh minikube and kubectl; the details are below:
Development/tools/k8s
➜ kubectl get nodes
NAME STATUS AGE
minikube Ready 2m
Development/tools/k8s
➜ kubectl get pods
No resources found.
Development/tools/k8s
➜ kubectl get rc --all-namespaces
No resources found.
Development/tools/k8s
➜ kubectl cluster-info
Kubernetes master is running at https://192.168.99.101:8443
KubeDNS is running at https://192.168.99.101:8443/api/v1/proxy/namespaces/kube-system/services/kube-dns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
➜ minikube version
minikube version: v1.7.3
commit: 436667c819c324e35d7e839f8116b968a2d0a3ff
What am I doing wrong?
Your kubectl client is version 1.5, which is very old. It is trying to use an outdated API version for the Deployment resource (extensions/v1beta1), which no longer exists on the 1.17 server.
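The fix is to install a kubectl close to the server version (kubectl officially supports only one minor version of skew either way). A sketch for Linux, assuming you want to match the v1.17.3 server:
# download a kubectl binary matching the server version
curl -LO https://dl.k8s.io/release/v1.17.3/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
# client and server should now be within one minor version of each other
kubectl version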

Kubernetes version upgraded, but kubectl showing different versions

I just upgraded the Kubernetes cluster, but kubectl is very inconsistent in the version it shows me. How can I verify this? Is there any source of truth?
[iahmad@web-prod-ijaz001 k8s-test]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
[iahmad@web-prod-ijaz001 k8s-test]$ kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
wp-np3-0b   Ready    worker   55d    v1.14.1
wp-np3-0f   Ready    worker   55d    v1.14.1
wp-np3-45   Ready    master   104d   v1.13.5
wp-np3-46   Ready    worker   104d   v1.13.5
wp-np3-47   Ready    worker   104d   v1.13.5
wp-np3-48   Ready    worker   43d    v1.14.1
wp-np3-49   Ready    worker   95d    v1.13.5
wp-np3-76   Ready    worker   55d    v1.14.1
[iahmad@web-prod-ijaz001 k8s-test]$
IIRC: kubectl version tells you the versions of your kubectl client binary and of the API server (1.14.3 here). kubectl get nodes tells you the kubelet version running on each node, which is why the two can disagree in the middle of an upgrade.
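If you want a single source of truth for the kubelet versions, you can read them straight off the node objects, e.g. with a jsonpath query (field names per the core v1 Node API):
# print each node's name and kubelet version
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'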

GitLab Kubernetes integration error; configuration of Helm Tiller already exists

After connecting my GitLab repo to my self-hosted Kubernetes cluster via Operations > Kubernetes, I want to install Helm Tiller via the GUI, but I get:
Something went wrong while installing Helm Tiller
Kubernetes error: configmaps "values-content-configuration-helm" already exists
There are no pods running on the cluster and kubectl version returns:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Update: the output of kubectl get cm --all-namespaces:
NAMESPACE             NAME                                DATA   AGE
gitlab-managed-apps   values-content-configuration-helm   3      7d
...
Deleting this namespace solves the issue!
Find the gitlab-managed-apps namespace with kubectl get cm --all-namespaces:
NAMESPACE             NAME                                DATA   AGE
gitlab-managed-apps   values-content-configuration-helm   3      7d
...
Deleting this namespace solves the issue:
kubectl delete namespace gitlab-managed-apps
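If deleting the whole namespace feels too drastic, removing just the leftover ConfigMap named in the error may be enough (a narrower variant; GitLab should recreate it on the next install attempt):
kubectl -n gitlab-managed-apps delete configmap values-content-configuration-helm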
Thanks to Lev Kuznetsov.

Kubernetes Pods Always ContainerCreating

I have installed a cluster with one master and one node using kubeadm.
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Whenever I try to install a pod (nginx, grafana, influxdb, heapster, tiller), it always stays in the ContainerCreating state.
I can't figure out how to diagnose the issue so I can get the containers into a Running state.
Use the following commands to check the kubelet logs and diagnose the issue accordingly:
systemctl status kubelet
journalctl -xeu kubelet
For details:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/
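In addition, kubectl describe pod usually shows the exact event blocking container creation (image pull failure, missing CNI network plugin, unbound volume, and so on); replace the pod name with one of yours:
# the Events section at the bottom explains why the pod is stuck in ContainerCreating
kubectl describe pod <pod-name>
# or list recent cluster events in chronological order
kubectl get events --sort-by=.metadata.creationTimestamp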