Two Kubernetes Deployments with exactly the same pod labels

Let's say I have two deployments which are exactly the same apart from deployment name:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-d
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-d2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: nginx
        image: nginx
Since these two deployments have the same selectors and the same pod template, I would expect to see three pods. However, six pods are created:
# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-d-5b686ccd46-dkpk7 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-nz7wf 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-vdtfr 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-nqmq7 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-nzrlc 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-qgjkn 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
Why is that?

Consider this: the pods are not managed directly by the Deployment; each Deployment manages a ReplicaSet, and the ReplicaSet manages the pods.
This can be validated using
kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-d-5b686ccd46 3 3 3 74s
nginx-d2-7c76fbbbcb 3 3 0 74s
You choose which pods a ReplicaSet or Deployment considers its own by specifying the selector. In addition to that, each Deployment adds its own pod-template-hash label so it can distinguish the pods managed by its own ReplicaSet from pods managed by other ReplicaSets.
You can inspect this as well:
kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-d-5b686ccd46-7j4md 1/1 Running 0 4m app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-9j7tx 1/1 Running 0 4m app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-zt4ls 1/1 Running 0 4m app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-ddcr2 1/1 Running 0 75s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-fhvm7 1/1 Running 0 79s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-q99ww 1/1 Running 0 83s app=mynginx,pod-template-hash=5b686ccd46
These are added to the replicaset as match labels:
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
      pod-template-hash: 5b686ccd46
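If you want to verify this on a live cluster, a quick sketch (using the app=mynginx label from this example) is to print each ReplicaSet together with its selector:
kubectl get rs -l app=mynginx -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.selector.matchLabels}{"\n"}{end}'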
In this case even these selectors are identical (identical pod templates produce the same hash), so you can inspect the pods and see that each one also carries an owner reference:
kubectl get pod nginx-d-5b686ccd46-7j4md -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-10-28T14:53:17Z"
  generateName: nginx-d-5b686ccd46-
  labels:
    app: mynginx
    pod-template-hash: 5b686ccd46
  name: nginx-d-5b686ccd46-7j4md
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-d-5b686ccd46
    uid: 7eb8fdaf-bfe7-4647-9180-43148a036184
  resourceVersion: "556"
More information on this can be found here: https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
So a Deployment (via its ReplicaSet) can disambiguate which pods it manages through the owner reference, and each one ensures its own desired number of replicas, which is why six pods are created.
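To see that ownership at a glance, here is a rough one-liner (assuming the labels from the example above):
kubectl get pods -l app=mynginx -o custom-columns='POD:.metadata.name,OWNER:.metadata.ownerReferences[0].name'
Each pod should report the ReplicaSet of its own Deployment (nginx-d-... or nginx-d2-...) as its owner.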

Related

Can a Deployment controller control Pods that weren't created by it?

Say I have a pod YAML such as:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.19.1
And a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
Now I first create the Pod:
$ kubectl apply -f pod.yaml
And only then the Deployment:
$ kubectl apply -f deployment.yaml
I thought that, since the pod.yaml metadata includes an app: nginx label that matches the Deployment's selector, the Deployment controller would only create 2 nginx:1.17.1 pods, but I see that all 3 are created. Why is that?
In addition to the app: nginx label, the Deployment controller also adds a pod-template-hash label to each pod it creates.
If we check the labels of the running pods, we can see the pod-template-hash=5d5dd5dd49 label on the my-deployment pods:
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
my-deployment-5d5dd5dd49-9tbcx 1/1 Running 0 55s app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-b88f4 1/1 Running 0 55s app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-x7n8q 1/1 Running 0 55s app=nginx,pod-template-hash=5d5dd5dd49
nginx 1/1 Running 0 62s app=nginx
According to the official documentation:
The pod-template-hash label ensures that child ReplicaSets of a Deployment do not
overlap. It is generated by hashing the PodTemplate of the
ReplicaSet and using the resulting hash as the label value that is
added to the ReplicaSet selector, Pod template labels, and in any
existing Pods that the ReplicaSet might have.
This is why the Deployment did not adopt the standalone pod, which carries only the app: nginx label, and created three pods of its own instead.
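You can double-check this by looking at the standalone pod's owner references and at the ReplicaSet's selector (a sketch using the names above; the hash value will differ on your cluster):
# The standalone pod has no owner reference, so no controller counts it:
kubectl get pod nginx -o jsonpath='{.metadata.ownerReferences}{"\n"}'
# The Deployment's ReplicaSet selects on pod-template-hash as well, which the standalone pod lacks:
kubectl get rs -l app=nginx -o jsonpath='{.items[0].spec.selector.matchLabels}{"\n"}'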

How to identify unhealthy pods in a statefulset

I have a StatefulSet with 6 replicas.
All of a sudden the StatefulSet thinks there are 5 ready replicas out of 6. When I look at the pod status, all 6 pods are ready and all the readiness checks have passed (1/1).
Now I am trying to find logs or status that shows which pod is unhealthy as per the StatefulSet, so I could debug further.
Where can I find information or logs for the StatefulSet that could tell me which pod is unhealthy? I have already checked the output of describe pods and describe statefulset but none of them show which pod is unhealthy.
Let's say you created the following StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    user: anurag
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      user: anurag # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 6 # by default is 1
  template:
    metadata:
      labels:
        user: anurag # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
Result is:
kubectl get StatefulSet web -o wide
NAME READY AGE CONTAINERS IMAGES
web 6/6 8m31s nginx k8s.gcr.io/nginx-slim:0.8
We can also check the StatefulSet's status:
kubectl get statefulset web -o yaml
status:
  collisionCount: 0
  currentReplicas: 6
  currentRevision: web-599978b754
  observedGeneration: 1
  readyReplicas: 6
  replicas: 6
  updateRevision: web-599978b754
  updatedReplicas: 6
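A quick way to compare the ready count against the desired count (a small sketch for the web StatefulSet above):
kubectl get statefulset web -o jsonpath='{.status.readyReplicas}/{.status.replicas}{"\n"}'
# prints e.g. 5/6 when one replica is considered not ready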
As per Debugging a StatefulSet, you can list all the pods which belong to the current StatefulSet using its labels:
$ kubectl get pods -l user=anurag
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 13m
web-1 1/1 Running 0 12m
web-2 1/1 Running 0 12m
web-3 1/1 Running 0 12m
web-4 1/1 Running 0 12m
web-5 1/1 Running 0 11m
At this point, if any of your pods are unavailable, you will see it here. The next step is Debug Pods and ReplicationControllers, which includes checking whether you have sufficient resources to start all of these pods.
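If the difference is not obvious from that table, one way to print every pod together with its Ready condition is the following sketch (assuming the user=anurag label from the manifest above):
kubectl get pods -l user=anurag -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'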
Describing the problematic pod (kubectl describe pod web-0) should give you the answer, at the very end, in the Events section.
For example, if you use the original YAML from the StatefulSet components example as it is, you will get an error and none of your pods will be up and running. (The reason is storageClassName: "my-storage-class".)
The exact error, and an understanding of what is happening, comes from describing the problematic pod. That's how it works:
kubectl describe pod web-0
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  31s (x2 over 31s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

How to get logs of deployment from Kubernetes?

I am creating an InfluxDB deployment in a Kubernetes cluster (v1.15.2), this is my yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
And this is the deployment status:
$ kubectl get deployment -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 1/1 1 1 163d
kubernetes-dashboard 1/1 1 1 164d
monitoring-grafana 0/1 0 0 12m
monitoring-influxdb 0/1 0 0 11m
Now I've been waiting 30 minutes and there is still no pod available. How do I check the deployment logs from the command line? I cannot access the Kubernetes dashboard at the moment, and I am looking for a command to get the pod logs, but there is no pod available yet. I already tried adding a label to the node:
kubectl label nodes azshara-k8s03 k8s-app=influxdb
This is my deployment describe content:
$ kubectl describe deployments monitoring-influxdb -n kube-system
Name:                   monitoring-influxdb
Namespace:              kube-system
CreationTimestamp:      Wed, 04 Mar 2020 11:15:52 +0800
Labels:                 k8s-app=influxdb
                        task=monitoring
Annotations:            kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"monitoring-influxdb","namespace":"kube-system"...
Selector:               k8s-app=influxdb,task=monitoring
Replicas:               1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  k8s-app=influxdb
           task=monitoring
  Containers:
   influxdb:
    Image:        registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /data from influxdb-storage (rw)
  Volumes:
   influxdb-storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
OldReplicaSets:  <none>
NewReplicaSet:   <none>
Events:          <none>
This is another way to get logs:
$ kubectl -n kube-system logs -f deployment/monitoring-influxdb
error: timed out waiting for the condition
There is no output for this command:
kubectl logs --selector k8s-app=influxdb
These are all my pods in the kube-system namespace:
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-569fd64d84-5q5pj 1/1 Running 0 46h
kubernetes-dashboard-6466b68b-z6z78 1/1 Running 0 11h
traefik-ingress-controller-hx4xd 1/1 Running 0 11h
kubectl logs deployment/<name-of-deployment> # logs of deployment
kubectl logs -f deployment/<name-of-deployment> # follow logs
You can try kubectl describe deploy monitoring-influxdb to get a high-level view of the deployment; there may be some useful information there.
For more detailed logs, first get the pods: kubectl get po
Then, request the pod logs: kubectl logs <pod-name>
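In this particular case the Deployment reports 0 pods, so there are no pod logs to fetch yet. Checking the ReplicaSet (if one has been created) and the namespace events usually explains why no pods appear; a sketch using the labels from the manifest above:
kubectl get rs -n kube-system -l k8s-app=influxdb        # is a ReplicaSet being created at all?
kubectl describe rs -n kube-system -l k8s-app=influxdb   # its events show scheduling, quota or admission problems
kubectl get events -n kube-system --sort-by=.lastTimestamp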
Here are two great tools that might help you view cluster logs:
If you wish to view logs from your terminal without using a "heavy" third-party logging solution, I would consider using K9s, a great CLI tool that helps you get control over your cluster.
If you are not bound to the CLI and still want to run locally, I would recommend Lens.

Configure starting index of StatefulSet's pods in Kubernetes

As we all know from the documentation:
For a StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, from 0 up through N-1, that is unique over the Set.
Example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper-np
  labels:
    app: zookeeper-np
spec:
  serviceName: zoo-svc-np
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper-np
  template:
    metadata:
      labels:
        app: zookeeper-np
    spec:
      containers:
      - name: zookeeper-np
        image: zookeeper:3.5.6
        ports:
        - containerPort: 30005
        - containerPort: 30006
        - containerPort: 30007
        env:
        - name: ZOO_MY_ID
          value: "1"
The above manifest will create three pods like below:
NAME READY STATUS RESTARTS AGE
pod/zookeeper-np-0 1/1 Running 0 203s
pod/zookeeper-np-1 1/1 Running 0 137s
pod/zookeeper-np-2 1/1 Running 0 73s
Question:
Is there a way to configure this starting index (0) to start from any other integer (e.g. 1, 2, 3, etc.)?
For example, to start the above pod indices from 3 so that the pods created will look like something below:
NAME READY STATUS RESTARTS AGE
pod/zookeeper-np-3 1/1 Running 0 203s
pod/zookeeper-np-4 1/1 Running 0 137s
pod/zookeeper-np-5 1/1 Running 0 73s
Or, if there is only one replica of the StatefulSet, could its ordinal be not zero but some other integer (e.g. 12), so that the only pod created would be named pod/zookeeper-np-12?
Unfortunately, it's impossible: according to the source code, the ordinal 0...N-1 is simply the index into the replicas slice. See the last argument to newVersionedStatefulSetPod.
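Note for newer clusters: recent Kubernetes releases added a .spec.ordinals.start field to StatefulSet for exactly this, behind the StatefulSetStartOrdinal feature gate (on by default from roughly v1.27); check the documentation for your cluster version before relying on it. A minimal sketch:
spec:
  ordinals:
    start: 3   # pods would be named zookeeper-np-3, -4, -5 on a cluster that supports this field
  replicas: 3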

Kubernetes DNS and NetworkPolicy with Calico not working

I have a Minikube cluster with Calico running and I am trying to get NetworkPolicies working. Here are my Pods and Services:
First pod (team-a):
apiVersion: v1
kind: Pod
metadata:
  name: team-a
  namespace: orga-1
  labels:
    run: nginx
    app: team-a
spec:
  containers:
  - image: joshrosso/nginx-curl:v2
    imagePullPolicy: IfNotPresent
    name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: team-a
  namespace: orga-1
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: team-a
Second pod (team-b):
apiVersion: v1
kind: Pod
metadata:
  name: team-b
  namespace: orga-2
  labels:
    run: nginx
    app: team-b
spec:
  containers:
  - image: joshrosso/nginx-curl:v2
    imagePullPolicy: IfNotPresent
    name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: team-b
  namespace: orga-2
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: team-b
When I exec into team-a, I cannot curl orga-2.team-b:
dev@ubuntu:~$ kubectl exec -it -n orga-1 team-a /bin/bash
root@team-a:/# curl google.de
//Body removed...
root@team-a:/# curl orga-2.team-b
curl: (6) Could not resolve host: orga-2.team-b
Now I applied a network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-base-rule
  namespace: orga-1
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []
When I now curl google in team-a, it still works. Here are my pods:
kube-system calico-etcd-hbpqc 1/1 Running 0 27m
kube-system calico-kube-controllers-6b86746955-5mk9v 1/1 Running 0 27m
kube-system calico-node-72rcl 2/2 Running 0 27m
kube-system coredns-fb8b8dccf-6j64x 1/1 Running 1 29m
kube-system coredns-fb8b8dccf-vjwl7 1/1 Running 1 29m
kube-system default-http-backend-6864bbb7db-5c25r 1/1 Running 0 29m
kube-system etcd-minikube 1/1 Running 0 28m
kube-system kube-addon-manager-minikube 1/1 Running 0 28m
kube-system kube-apiserver-minikube 1/1 Running 0 28m
kube-system kube-controller-manager-minikube 1/1 Running 0 28m
kube-system kube-proxy-p48xv 1/1 Running 0 29m
kube-system kube-scheduler-minikube 1/1 Running 0 28m
kube-system nginx-ingress-controller-586cdc477c-6rh6w 1/1 Running 0 29m
kube-system storage-provisioner 1/1 Running 0 29m
orga-1 team-a 1/1 Running 0 20m
orga-2 team-b 1/1 Running 0 7m20s
and my services:
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
kube-system calico-etcd ClusterIP 10.96.232.136 <none> 6666/TCP 27m
kube-system default-http-backend NodePort 10.105.84.105 <none> 80:30001/TCP 29m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 29m
orga-1 team-a ClusterIP 10.101.4.159 <none> 80/TCP 8m37s
orga-2 team-b ClusterIP 10.105.79.255 <none> 80/TCP 7m54s
The kube-dns endpoint is available, also the service.
Why is my network policy not working and why is the curl to the other pod not working?
Please run:
curl team-a.orga-1.svc.cluster.local
curl team-b.orga-2.svc.cluster.local
Note that the in-cluster DNS name has the form <service>.<namespace>.svc.cluster.local, so it is team-b.orga-2, not orga-2.team-b. Also verify the entries in /etc/resolv.conf (cat /etc/resolv.conf).
If you can reach your pods, then please follow this tutorial.
Deny all ingress traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: orga-1
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
and Allow ingress traffic to Nginx:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: orga-1
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels: {}
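Once both policies are applied, you can verify them from each namespace (a rough check using the pods defined above; the cross-namespace request should now time out, since only pods in orga-1 are allowed in):
kubectl exec -n orga-2 team-b -- curl -m 5 team-a.orga-1.svc.cluster.local   # expected to time out
kubectl exec -n orga-1 team-a -- curl -m 5 team-a.orga-1.svc.cluster.local   # expected to succeed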
Below you can find more information about:
Pod’s DNS Policy,
Network Policies
Hope this helps.