In K8s, find which node a pod is running on - kubernetes

I want to know which nodes the pods in a K8s cluster are running on. I am working with a 3-node K8s cluster which has 2 specific pods among other pods.
How can I see which pod is running on which node using kubectl?
When I use kubectl get pods, I get the following:
NAME                    READY   STATUS    RESTARTS   AGE
pod1-7485f58945-zq8tg   1/1     Running   2          2d
pod2-64c4564b5c-8rh5x   1/1     Running   0          2d1h
Following is the version of K8s (kubectl version) that I am using:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.13", GitCommit:"53c7b65d4531a749cd3a7004c5212d23daa044a9", GitTreeState:"clean", BuildDate:"2021-07-15T20:53:19Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}

Try kubectl get pods -o wide.
You can get more details in this very detailed Kubernetes cheatsheet.
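For reference, adding -o wide appends IP and NODE columns to the listing; applied to the pods above it would look roughly like this (the IPs and node names are placeholders for this example):
kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
pod1-7485f58945-zq8tg   1/1     Running   2          2d     10.244.1.5   node-1   <none>           <none>
pod2-64c4564b5c-8rh5x   1/1     Running   0          2d1h   10.244.2.7   node-2   <none>           <none>
If you only want the pod-to-node mapping, a custom-columns query over .spec.nodeName also works:
kubectl get pods -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName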

Related

kubectl get deployments output differs from the documentation

When I run this command
kubectl get deployments
on my Linux Ubuntu 18 machine, I get different output than expected (according to the documentation).
Expected:
Actual:
Of course, I am not talking about the values, I am talking about the names of the columns.
[EDIT]
My k8s version:
This is just an old output format. The newer output you're getting contains all the same information; the "READY" field is a combination of the old "DESIRED" and "CURRENT" columns.
It shows as 4/5 in your output to indicate 4 pods ready/current and 5 pods desired.
Hope this helps.
The output depends on the client version. Let's check it against the same server:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get deployments kube-dns -n kube-system
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns   2         2         2            2           10d
Switching the kubectl version changes the output:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get deployments kube-dns -n kube-system
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
kube-dns   2/2     2            2           10d
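If you want your client to print the same format as the documentation, one option is to swap in a kubectl binary of the matching version; a minimal sketch for Linux amd64, using v1.14.3 from the example above (the download URL follows the standard Kubernetes release pattern):
curl -LO https://dl.k8s.io/release/v1.14.3/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client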

Kubernetes cannot join workers on CentOS 7

My issue is that I cannot connect our machines (master and workers).
My join command is:
kubeadm join xxx:xxx:xxx:xxx:6443 --token a72x22.ofmqdjyzi7ot4l70 --discovery-token-ca-cert-hash sha256:3cfd9ddb1e655ef2172c12d914e2bb001434cc4c8a756919a7a6a9f0603e3131
I have executed
echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 >/proc/sys/net/ipv4/ip_forward
swapoff -a
and I get the following error:
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
error execution phase kubelet-start: configmaps "kubelet-config-1.15" is forbidden: User "system:bootstrap:a61x22" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
master kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
workers kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Maybe my issue is related to the host or port?
How can I solve this issue?
Check whether the ConfigMap "kubelet-config-1.15" exists with the command below:
kubectl -n kube-system get configmap kubelet-config-1.15
Maybe your master is at version 1.14 and your new node downloaded kubelet version 1.15.
In that case that ConfigMap doesn't exist; instead you have a kubelet-config-1.14 ConfigMap.
Upgrade your master node to v1.15, or install Kubernetes v1.14 on your worker node.
You can see which version your nodes are running with
kubectl get nodes
[root@master /]# k get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   32d   v1.14.0
node6    Ready             32d   v1.14.2
nodo2    Ready             32d   v1.14.2
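If you take the second option and align the worker with the control plane, on CentOS 7 that could look roughly like the following; the exact package version string is an assumption, so pick the one reported by your master:
yum remove -y kubelet kubeadm
yum install -y kubelet-1.14.3 kubeadm-1.14.3 --disableexcludes=kubernetes
systemctl enable --now kubelet
# then re-run the kubeadm join command from the question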

Kubernetes version upgraded, but kubectl showing different versions

I just upgraded the Kubernetes cluster, but kubectl is very inconsistent in showing me the version. How can I verify this? Is there any source of truth?
[iahmad#web-prod-ijaz001 k8s-test]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
[iahmad#web-prod-ijaz001 k8s-test]$ kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
wp-np3-0b   Ready    worker   55d    v1.14.1
wp-np3-0f   Ready    worker   55d    v1.14.1
wp-np3-45   Ready    master   104d   v1.13.5
wp-np3-46   Ready    worker   104d   v1.13.5
wp-np3-47   Ready    worker   104d   v1.13.5
wp-np3-48   Ready    worker   43d    v1.14.1
wp-np3-49   Ready    worker   95d    v1.13.5
wp-np3-76   Ready    worker   55d    v1.14.1
[iahmad#web-prod-ijaz001 k8s-test]$
IIRC: kubectl version tells you what version the API server is at (1.14.3). kubectl get nodes tells you what version the kubelet is running on each node.
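So kubectl get nodes is the source of truth for what each node's kubelet is actually running. If you only want those fields, a custom-columns query over the Node status works; a small sketch (the field paths come from the Node object's status.nodeInfo):
kubectl version --short
kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion,KUBEPROXY:.status.nodeInfo.kubeProxyVersion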

GitLab Kubernetes integration error; configuration of Helm Tiller already exists

After connecting my GitLab repo to my self-hosted Kubernetes cluster via Operations > Kubernetes, I want to install Helm Tiller via the GUI, but I get:
Something went wrong while installing Helm Tiller
Kubernetes error: configmaps "values-content-configuration-helm" already exists
There are no pods running on the cluster and kubectl version returns:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Update:
The output of kubectl get cm --all-namespaces:
NAMESPACE             NAME                                DATA   AGE
gitlab-managed-apps   values-content-configuration-helm   3      7d
...
deleting this namespace solves the issue!
Find the gitlab-managed-apps namespace with kubectl get cm --all-namespaces:
NAMESPACE             NAME                                DATA   AGE
gitlab-managed-apps   values-content-configuration-helm   3      7d
...
deleting this namespace solves the issue:
kubectl delete namespace gitlab-managed-apps
Thanks to Lev Kuznetsov.
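If you would rather keep the namespace, deleting just the conflicting ConfigMap named in the error before retrying the install may also be enough (an untested alternative based on the error message above):
kubectl -n gitlab-managed-apps delete configmap values-content-configuration-helm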

Kubernetes Pods Always ContainerCreating

I have installed a cluster with one master and one node using kubeadm.
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Whenever I try to install a pod (nginx, grafana, influxdb, heapster, tiller), it stays in the ContainerCreating state.
I can't figure out how to diagnose the issue to try and get the containers to move to a Running state.
Use the following commands to check the kubelet logs and diagnose the issue accordingly:
systemctl status kubelet
journalctl -xeu kubelet
For details:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/
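Beyond the kubelet logs, describing one of the stuck pods usually shows the event that is blocking it (for example a failing image pull or a missing CNI network); the pod name and namespace below are placeholders:
kubectl describe pod <pod-name> -n <namespace>
kubectl get events --sort-by=.metadata.creationTimestamp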