When I run this command
kubectl get deployments
on my Ubuntu 18 Linux machine, I get different output than expected (according to the documentation).
Expected:
Actual:
Of course, I am not talking about the values; I am talking about the column names.
[EDIT]
My k8s version:
This is just the old output format. The newer output you're getting contains all the same information; the READY column is a combination of the old DESIRED and CURRENT columns.
It shows as 4/5 in your output to indicate 4 pods ready/current and 5 pods desired.
Hope this helps.
The output depends on the kubectl client version. Let's check it against the same server:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get deployments kube-dns -n kube-system
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns   2         2         2            2           10d
Switching the kubectl version changes the output:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get deployments kube-dns -n kube-system
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
kube-dns   2/2     2            2           10d
Related
I want to know which nodes the pods in my Kubernetes cluster run on. I am working with a 3-node cluster which has 2 specific pods among other pods.
How can I see which pod runs on which node using kubectl?
When I use kubectl get pods, I get the following:
NAME                    READY   STATUS    RESTARTS   AGE
pod1-7485f58945-zq8tg   1/1     Running   2          2d
pod2-64c4564b5c-8rh5x   1/1     Running   0          2d1h
The following is the Kubernetes version (kubectl version) I am using:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.13", GitCommit:"53c7b65d4531a749cd3a7004c5212d23daa044a9", GitTreeState:"clean", BuildDate:"2021-07-15T20:53:19Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
Try kubectl get pods -o wide.
You can get more details in this very detailed Kubernetes cheatsheet.
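With -o wide, kubectl adds IP, NODE, NOMINATED NODE and READINESS GATES columns, so the node a pod runs on can be read directly from the listing. The output will look roughly like this (the node names and IPs below are illustrative, not from your cluster):
$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
pod1-7485f58945-zq8tg   1/1     Running   2          2d     10.244.1.5   node1   <none>           <none>
pod2-64c4564b5c-8rh5x   1/1     Running   0          2d1h   10.244.2.7   node2   <none>           <none>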
I would like to upgrade the Kubernetes server version so that it is compatible with my client version, to get rid of the SelfLink issue (deprecated after Kubernetes v1.21). How can I do this?
PS C:\> kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"e1d093448d0ed9b9b1a48f49833ff1ee64c05ba5", GitTreeState:"clean", BuildDate:"2021-06-03T00:20:57Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Upgrade the Kubernetes cluster you are using; the Server Version line in kubectl version shows the API server version you are currently running.
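How you upgrade depends on how the cluster was provisioned: managed offerings (GKE, EKS, AKS) are upgraded through their own tooling, while a kubeadm-built cluster is upgraded with kubeadm. A rough sketch for a kubeadm cluster; the target patch version below is only illustrative:
# run on the control-plane node of a kubeadm-provisioned cluster
$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.21.2
# afterwards, upgrade the kubelet (and kubectl) packages on every node and restart the kubelet
After that, kubectl version should report matching client and server minor versions.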
My issue is that I cannot connect our machines (master and slaves).
My join command should be:
kubeadm join xxx:xxx:xxx:xxx:6443 --token a72x22.ofmqdjyzi7ot4l70 --discovery-token-ca-cert-hash sha256:3cfd9ddb1e655ef2172c12d914e2bb001434cc4c8a756919a7a6a9f0603e3131
I have executed
echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 >/proc/sys/net/ipv4/ip_forward
swapoff -a
and I got the error
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
error execution phase kubelet-start: configmaps "kubelet-config-1.15" is forbidden: User "system:bootstrap:a61x22" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
Master kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Slave kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Maybe my issue is related to the host or port?
How can I solve this issue?
Check whether the ConfigMap "kubelet-config-1.15" exists with the command below.
kubectl -n kube-system get configmap kubelet-config-1.15
Maybe your master is at version 1.14 and your new node installed kubelet version 1.15.
In that case the ConfigMap doesn't exist; you have a kubelet-config-1.14 ConfigMap instead.
Upgrade your master node to v1.15, or install Kubernetes v1.14 on your worker node.
You can see what version your nodes are with
kubectl get nodes
[root@master /]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   32d   v1.14.0
node6    Ready    <none>   32d   v1.14.2
nodo2    Ready    <none>   32d   v1.14.2
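If you instead keep the worker at v1.14, pin the worker's packages to a matching version before running kubeadm join. A minimal sketch, assuming a Debian/Ubuntu worker with the Kubernetes apt repository already configured (the exact patch version is illustrative):
# on the worker node, install packages matching the master's version and hold them
$ apt-get install -y kubelet=1.14.3-00 kubeadm=1.14.3-00 kubectl=1.14.3-00
$ apt-mark hold kubelet kubeadm kubectl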
I just upgraded my Kubernetes cluster, but kubectl is inconsistent in the versions it shows me. How can I verify this? Is there any source of truth?
[iahmad@web-prod-ijaz001 k8s-test]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
[iahmad@web-prod-ijaz001 k8s-test]$ kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
wp-np3-0b   Ready    worker   55d    v1.14.1
wp-np3-0f   Ready    worker   55d    v1.14.1
wp-np3-45   Ready    master   104d   v1.13.5
wp-np3-46   Ready    worker   104d   v1.13.5
wp-np3-47   Ready    worker   104d   v1.13.5
wp-np3-48   Ready    worker   43d    v1.14.1
wp-np3-49   Ready    worker   95d    v1.13.5
wp-np3-76   Ready    worker   55d    v1.14.1
[iahmad@web-prod-ijaz001 k8s-test]$
IIRC, kubectl version tells you what version the API server is at (1.14.3), while kubectl get nodes tells you what version the kubelet is running on each node.
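If you want a per-component source of truth, you can query each side explicitly; both invocations below are standard kubectl, and the KUBELET column header is just a label I chose:
$ kubectl version --short
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion
Mixed kubelet versions like yours are normal mid-upgrade; the server version reported by kubectl version only reflects the API server you are talking to.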
I'm getting the following error when trying to delete a StatefulSet on my local minikube cluster:
error: no kind "GetOptions" is registered for version "apps/v1"
I can set the replicas to 0, but that still keeps the StatefulSet alive.
I'm running the following versions:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Any help would be appreciated!
It seems your kubectl version and your Kubernetes server version aren't in sync; your kubectl doesn't know the newer StatefulSet API version (apps/v1).
You need to upgrade your kubectl version.
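A minimal sketch of replacing the client binary with one that matches the server, assuming a Linux amd64 machine and the standard Kubernetes release download URL (pick the release that matches your cluster; v1.10.0 here only mirrors the server version above):
$ curl -LO https://dl.k8s.io/release/v1.10.0/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/kubectl
$ kubectl version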