How to get Kubernetes Deployment labels when a new pod is created/updated in client-go?

Imagine the following deployment definition in kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    env: staging
spec:
  ...
I have two questions in particular:
1). The label env: staging won't be available in created pods. How can I access this data programmatically in client-go?
2). When a pod is created/updated, how can I find which Deployment it belongs to?

1). The label env: staging won't be available in created pods. How can I access this data programmatically in client-go?
You can fetch the Deployment with client-go and read the labels from its metadata. See the example Create, Update & Delete Deployment for basic operations on Deployments.
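For illustration, here is a minimal client-go sketch (assuming a recent client-go release where the typed clients take a context, and a kubeconfig in the default location) that fetches the Deployment from the question and prints its labels:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the kubeconfig from the default location (~/.kube/config).
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // Fetch the Deployment and read the labels set in its metadata.
    deploy, err := clientset.AppsV1().Deployments("default").Get(context.TODO(), "nginx-deployment", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println(deploy.Labels) // map[env:staging]
}

When running inside a cluster you would typically build the config with rest.InClusterConfig() instead of a kubeconfig file.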
2). When a pod is created/updated, how can I find which Deployment it belongs to?
When a Deployment is created, a ReplicaSet is created that manages the Pods.
Check the ownerReferences field of the Pod to see which ReplicaSet manages it. This is described in How a ReplicaSet works.
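To do the same programmatically, you can walk the ownerReferences with client-go: from the Pod to its ReplicaSet, and from the ReplicaSet to its Deployment. This sketch reuses the clientset from the previous snippet; the pod name and namespace are placeholders:

pod, err := clientset.CoreV1().Pods("default").Get(context.TODO(), "some-pod-name", metav1.GetOptions{})
if err != nil {
    panic(err)
}
for _, ref := range pod.OwnerReferences {
    if ref.Kind != "ReplicaSet" {
        continue
    }
    // The Pod is owned by a ReplicaSet; look up the ReplicaSet's own owner.
    rs, err := clientset.AppsV1().ReplicaSets(pod.Namespace).Get(context.TODO(), ref.Name, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, rsRef := range rs.OwnerReferences {
        if rsRef.Kind == "Deployment" {
            deploy, err := clientset.AppsV1().Deployments(rs.Namespace).Get(context.TODO(), rsRef.Name, metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            fmt.Printf("pod %s belongs to deployment %s (labels: %v)\n", pod.Name, deploy.Name, deploy.Labels)
        }
    }
}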

Hope you are enjoying your Kubernetes journey!
In fact the label won't be available on the created pods, but you can add it to the manifest, in the pod template section:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    # Here you have the deployment labels
    app: nginx
spec:
  selector:
    matchLabels:
      # Here you have the selector that tells the deployment
      # (more exactly, the replicasets of the deployment)
      # which pods to track to check that the number of replicas is respected.
      app: nginx
  ...
  template:
    metadata:
      labels:
        # Here you have the POD labels that need to match the selector.matchLabels section
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ...
You can check the pods' labels by typing:
❯ k get po --show-labels
NAME                            READY   STATUS    RESTARTS   AGE     LABELS
nginx-deploy-6bdc4445fd-5qlhg   1/1     Running   0          7m13s   app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-pgkhb   1/1     Running   0          7m13s   app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-xdz59   1/1     Running   0          7m13s   app=nginx,pod-template-hash=6bdc4445fd
You can get the deployments' labels by typing:
❯ k get deploy --show-labels
NAME           READY   UP-TO-DATE   AVAILABLE   AGE     LABELS
nginx-deploy   3/3     3            3           7m39s   app=nginx
You can add a custom column to your kubectl get po command to display the value of the app label for each pod:
❯ k get pod -L app
NAME                            READY   STATUS    RESTARTS   AGE     APP
nginx-deploy-6bdc4445fd-5qlhg   1/1     Running   0          8m30s   nginx
nginx-deploy-6bdc4445fd-pgkhb   1/1     Running   0          8m30s   nginx
nginx-deploy-6bdc4445fd-xdz59   1/1     Running   0          8m30s   nginx
And you can use multiple -L flags:
❯ k get pod -L app -L test
NAME                            READY   STATUS    RESTARTS   AGE     APP     TEST
nginx-deploy-6bdc4445fd-5qlhg   1/1     Running   0          9m46s   nginx
nginx-deploy-6bdc4445fd-pgkhb   1/1     Running   0          9m46s   nginx
nginx-deploy-6bdc4445fd-xdz59   1/1     Running   0          9m46s   nginx
In general, the names of the pods begin with the name of their owner (deployment, replicaset, statefulset, job, etc.).
When you use a deployment to create a pod, you can be sure that between the deployment and the pod there is a replicaset (the deployment only manages the different versions of the replicaset, while the replicaset only ensures that the current number of replicas matches the number requested in the manifest, using label selectors!).
So you can in fact check the ownerReferences field of a pod by typing:
❯ kubectl get po -o custom-columns=NAME:'{.metadata.name}',OWNER:'{.metadata.ownerReferences[0].name}',OWNER_KIND:'{.metadata.ownerReferences[0].kind}'
NAME                            OWNER                     OWNER_KIND
nginx-deploy-6bdc4445fd-5qlhg   nginx-deploy-6bdc4445fd   ReplicaSet
nginx-deploy-6bdc4445fd-pgkhb   nginx-deploy-6bdc4445fd   ReplicaSet
nginx-deploy-6bdc4445fd-xdz59   nginx-deploy-6bdc4445fd   ReplicaSet
You can do the same with replicasets to get their owning deployment:
❯ kubectl get rs -o custom-columns=NAME:'{.metadata.name}',OWNER:'{.metadata.ownerReferences[0].name}',OWNER_KIND:'{.metadata.ownerReferences[0].kind}'
NAME                      OWNER          OWNER_KIND
nginx-deploy-6bdc4445fd   nginx-deploy   Deployment
That's how you can quickly see with kubectl who owns whom.
Here is a little reading about owners and dependents: https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
Hope this has helped you. bguess

Related

Can a Pod tolerate one of a set of taints

Consider a cluster in which each node has a given taint (let's say NodeType) and a Pod can tolerate a set of NodeType. For example, there are nodes tainted NodeType=A, NodeType=B and NodeType=C.
I'd like to be able to specify for example that some Pods tolerate NodeType=A or NodeType=C, but not NodeType=B. Other Pods (in different Deployments) would tolerate different sets. Is there a way to do this?
Yes, it appears it is possible to do so by adding multiple tolerations with the same key to the pod's spec. An example is given in the official docs.
Here is a demo I tried which works to produce the desired result.
The cluster has three nodes:
kubectl get nodes
NAME      STATUS   AGE     VERSION
dummy-0   Ready    3m17s   v1.17.14
dummy-1   Ready    26m     v1.17.14
dummy-2   Ready    26m     v1.17.14
I tainted them as mentioned in the question using the kubectl taint command:
kubectl taint node dummy-0 NodeType=A:NoSchedule
kubectl taint node dummy-1 NodeType=B:NoSchedule
kubectl taint node dummy-2 NodeType=C:NoSchedule
I then created a Deployment with three replicas and the matching tolerations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx-nfs
      tolerations:
      - key: "NodeType"
        operator: "Equal"
        value: "A"
        effect: "NoSchedule"
      - key: "NodeType"
        operator: "Equal"
        value: "B"
        effect: "NoSchedule"
From the kubectl get pods command, we can see that the pods of the Deployment were scheduled only on the nodes dummy-0 and dummy-1 and not on dummy-2 which has a different taint:
kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE
nginx-deployment-5fc8f985d8-2pfvm   1/1     Running   0          8s    100.96.2.11   dummy-0
nginx-deployment-5fc8f985d8-hkrcz   1/1     Running   0          8s    100.96.6.10   dummy-1
nginx-deployment-5fc8f985d8-xfxsx   1/1     Running   0          8s    100.96.6.11   dummy-1
Further, it is important to understand that taints and tolerations are useful to make sure that pods don't get scheduled onto particular nodes.
To make sure that pods are scheduled onto a particular node, you should use node affinity (affinity and anti-affinity) instead.

[cloud-running-a-container]: No resources found in default namespace

I did a small deployment in K8s using a Docker image, but it is not showing under deployments, only under pods.
Reason: nothing is created under deployments in the default namespace.
Please suggest.
Following are the commands I used.
$ kubectl run hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --port=8080 --namespace=default
pod/hello-node created
$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
hello-node   1/1     Running   0          12s
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
default       hello-node                                               1/1     Running   0          9m9s
kube-system   event-exporter-v0.2.5-599d65f456-4dnqw                   2/2     Running   0          23m
kube-system   kube-proxy-gke-hello-world-default-pool-c09f603f-3hq6    1/1     Running   0          23m
$ kubectl get deployments
No resources found in default namespace.
$ kubectl get deployments --all-namespaces
NAMESPACE     NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   event-exporter-v0.2.5   1/1     1            1           170m
kube-system   fluentd-gcp-scaler      1/1     1            1           170m
kube-system   heapster-gke            1/1     1            1           170m
kube-system   kube-dns                2/2     2            2           170m
kube-system   kube-dns-autoscaler     1/1     1            1           170m
kube-system   l7-default-backend      1/1     1            1           170m
kube-system   metrics-server-v0.3.1   1/1     1            1           170m
Arghya Sadhu's answer is correct. In the past the kubectl run command indeed created a Deployment by default. Back then you could also use it with so-called generators and specify exactly what kind of resource you wanted to create by providing the --generator flag followed by the corresponding value. Currently the --generator flag is deprecated and has no effect.
Note that you got quite a clear message after running your kubectl run command:
$ kubectl run hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --port=8080 --namespace=default
pod/hello-node created
It clearly says that the Pod hello-node was created. It doesn't mention a Deployment anywhere.
As an alternative to using imperative commands for creating either Deployments or Pods, you can use the declarative approach:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
  namespace: default
  labels:
    app: hello-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node-container
        image: gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
        ports:
        - containerPort: 8080
The namespace declaration can be omitted in this case, as by default all resources are deployed into the default namespace.
After saving the file, e.g. as nginx-deployment.yaml, you just need to run:
kubectl apply -f nginx-deployment.yaml
Update:
Expansion of environment variables within the yaml manifest doesn't actually work, so the following line from the above deployment example cannot be used as-is:
image: gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
The simplest workaround is a fairly simple sed "trick".
First we need to change the project id placeholder in our deployment definition yaml a bit. It may look like this:
image: gcr.io/{{DEVSHELL_PROJECT_ID}}/hello-node:1.0
Then, when applying the deployment definition, instead of a simple kubectl apply -f deployment.yaml, run this one-liner:
sed "s/{{DEVSHELL_PROJECT_ID}}/$DEVSHELL_PROJECT_ID/g" deployment.yaml | kubectl apply -f -
The above command tells sed to search through the deployment.yaml document for the {{DEVSHELL_PROJECT_ID}} string and, each time this string occurs, to substitute it with the actual value of the $DEVSHELL_PROJECT_ID environment variable.
Check your version of kubectl using kubectl version.
Since kubectl 1.18, kubectl run creates only a Pod and nothing else. To create a Deployment, use kubectl create deployment or use an older version of kubectl.

Does Kubernetes need a minimum number of replicas in order to carry out a rolling deployment?

Nearly 3 years ago, Kubernetes would not carry out a rolling deployment if you had a single replica (Kubernetes deployment does not perform a rolling update when using a single replica).
Is this still the case? Is there any additional configuration required for this to work?
You are no longer required to have a minimum number of replicas to roll out an update using a Kubernetes rolling update.
I tested it in my lab (v1.17.4) and it worked like a charm with only one replica.
You can test it yourself using this Katacoda lab: Interactive Tutorial - Updating Your App
This lab is set up to create a deployment with 3 replicas. Before starting the lab, edit the deployment, change the number of replicas to one, and follow the lab steps.
I created a lab using a different example, similar to your previous scenario. Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.16.1
        ports:
        - containerPort: 80
Deployment is running with one replica only:
kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6c4699c59c-w8clt   1/1     Running   0          5s
Here I edited my nginx-deployment.yaml, changed the nginx version to nginx:latest, and rolled out my deployment by running replace:
$ kubectl replace -f nginx-deployment.yaml
deployment.apps/nginx-deployment replaced
Another option is to change the nginx version using the kubectl set image command:
kubectl set image deployment/nginx-deployment nginx-container=nginx:latest --record
$ kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-566d9f6dfc-hmlf2   0/1     ContainerCreating   0          3s
nginx-deployment-6c4699c59c-w8clt   1/1     Running             0          48s
$ kubectl get pods
NAME                                READY   STATUS        RESTARTS   AGE
nginx-deployment-566d9f6dfc-hmlf2   1/1     Running       0          6s
nginx-deployment-6c4699c59c-w8clt   0/1     Terminating   0          51s
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-566d9f6dfc-hmlf2   1/1     Running   0          13s
As you can see, everything worked normally with only one replica.
In the latest version of the documentation we can read:
"Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable). Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge)."
With a single replica, the 25% maxUnavailable rounds down to 0 and the 25% maxSurge rounds up to 1, so the new Pod is created before the old one is terminated, which is exactly what the output above shows.
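If you want to confirm these defaults programmatically (for example with client-go, as in the first question on this page), you can read the rollout strategy back from the API server, which fills in the 25% values when the manifest does not set them. A minimal sketch, assuming a clientset built as in the earlier client-go snippet and the nginx-deployment / default names used in this answer:

dep, err := clientset.AppsV1().Deployments("default").Get(context.TODO(), "nginx-deployment", metav1.GetOptions{})
if err != nil {
    panic(err)
}
// RollingUpdate and its fields are pointers; the API server defaults them for RollingUpdate deployments.
if ru := dep.Spec.Strategy.RollingUpdate; ru != nil && ru.MaxUnavailable != nil && ru.MaxSurge != nil {
    fmt.Println("maxUnavailable:", ru.MaxUnavailable.String()) // "25%" by default
    fmt.Println("maxSurge:", ru.MaxSurge.String())             // "25%" by default
}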

Kubernetes job and deployment

Can I run a Job and a Deployment in a single config file/action,
where the Deployment will wait for the Job to finish and check whether it was successful, so it can continue with the deployment?
Based on the information you provided I believe you can achieve your goal using a Kubernetes feature called InitContainer:
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. However, if the Pod has a restartPolicy of Never, Kubernetes does not restart the Pod.
I'll create an initContainer with a busybox image that runs a Linux command to wait for the service mydb to be running before proceeding with the deployment.
Steps to Reproduce:
- Create a Deployment with an initContainer which will run the job that needs to be completed before doing the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      restartPolicy: Always
      containers:
      - name: myapp-container
        image: busybox:1.28
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      initContainers:
      - name: init-mydb
        image: busybox:1.28
        command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
Many kinds of commands can be used in this field; you just have to select a Docker image that contains the binary you need (including your sequelize job).
Now let's apply it and see the status of the deployment:
$ kubectl apply -f my-app.yaml
deployment.apps/my-app created
$ kubectl get pods
NAME                      READY   STATUS     RESTARTS   AGE
my-app-6b4fb4958f-44ds7   0/1     Init:0/1   0          4s
my-app-6b4fb4958f-s7wmr   0/1     Init:0/1   0          4s
The pods are held in Init:0/1 status, waiting for the completion of the init container.
- Now let's create the service which the init container is waiting for before completing its task:
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
We will apply it and monitor the changes in the pods:
$ kubectl apply -f mydb-svc.yaml
service/mydb created
$ kubectl get pods -w
NAME                      READY   STATUS            RESTARTS   AGE
my-app-6b4fb4958f-44ds7   0/1     Init:0/1          0          91s
my-app-6b4fb4958f-s7wmr   0/1     Init:0/1          0          91s
my-app-6b4fb4958f-s7wmr   0/1     PodInitializing   0          93s
my-app-6b4fb4958f-44ds7   0/1     PodInitializing   0          94s
my-app-6b4fb4958f-s7wmr   1/1     Running           0          94s
my-app-6b4fb4958f-44ds7   1/1     Running           0          95s
^C
$ kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/my-app-6b4fb4958f-44ds7   1/1     Running   0          99s
pod/my-app-6b4fb4958f-s7wmr   1/1     Running   0          99s

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/mydb   ClusterIP   10.100.106.67   <none>        80/TCP    14s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app   2/2     2            2           99s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/my-app-6b4fb4958f   2         2         2       99s
If you need help to apply this to your environment let me know.
Although initContainers are a viable option for this, there is another approach if you use Helm to manage and deploy to your cluster.
Helm has chart hooks that allow you to run a Job before other installations in the helm chart occur. You mentioned that this is for a database migration before a service deployment. Some example helm config to get this done could be...
apiVersion: batch/v1
kind: Job
metadata:
  name: api-migration-job
  namespace: default
  labels:
    app: api-migration-job
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      containers:
      - name: platform-migration
        ...
This will run the job to completion before moving on to the installation / upgrade phases in the helm chart. You can see there is a 'hook-weight' variable that allows you to order these hooks if you desire.
This in my opinion is a more elegant solution than init containers, and allows for better control.

Kubernetes automatically remove resources no longer required

Using AWS CloudFormation, I can create a stack based on a template that includes all required resources. I can then create a new template, adding some resources, removing some, and changing description of others. I can then update the CloudFormation stack with the new template. CloudFormation will automatically remove any resources that are no longer in the template, add the new ones, and update modified resources. In addition, the update will roll back if any of the operations fails.
Is there an equivalent to this in Kubernetes, where I can just provide an updated configuration file, and have Kubernetes automatically compare that to the previous version and remove any resources that should no longer be there?
For single resources (e.g. a single Pod or Deployment), Kubernetes will automatically reconcile the state, so it works in a similar manner to CloudFormation in that sense. If you change a Deployment and remove a pod from it, Kubernetes will automatically remove the corresponding resources.
If you want to treat multiple resources as a single object, you can look at something like Helm, which simplifies packaging multiple Kubernetes resources together.
Using a Deployment will suffice for your need; a Deployment can be rolled back at any time if needed.
The rollout command, when used with the correct subcommands such as status/history/undo, should help you control the rollout or rollback of the stack's resources.
kubectl rollout status deployment nginx
Check rollout History
kubectl rollout history deployment nginx
Rolling Back to a Previous Revision
kubectl rollout undo deployment nginx
In the example below I created a deployment using the deployment_v1.yaml file, which runs 2 containers inside a pod (nginx/redis).
kubectl create -f deployment_v1.yaml --record=true
deployment_v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-deploy
  name: multi-container-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multi-container
  template:
    metadata:
      labels:
        app: multi-container
    spec:
      containers:
      - image: nginx
        name: nginx-1
      - image: redis
        name: redis-2
Checking Status during rollout
$ kubectl rollout status deployment multi-container-deploy
Waiting for deployment "multi-container-deploy" rollout to finish: 0 of 1 updated replicas are available...
deployment "multi-container-deploy" successfully rolled out
Rollout history
$ kubectl rollout history deployment multi-container-deploy
deployment.apps/multi-container-deploy
REVISION   CHANGE-CAUSE
1          kubectl create --filename=deployment_v1.yaml --record=true
$ kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/multi-container-deploy-5fc8944c58-r4dt4   2/2     Running   0          60s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/multi-container-deploy   1/1     1            1           60s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/multi-container-deploy-5fc8944c58   1         1         1       60s
Now say we remove the redis container from the original deployment with the kubectl edit command:
kubectl edit deployments multi-container-deploy
Check the new rollout status after the edit:
$ kubectl rollout status deployment multi-container-deploy
Waiting for deployment "multi-container-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "multi-container-deploy" rollout to finish: 1 old replicas are pending termination...
deployment "multi-container-deploy" successfully rolled out
Check the new rollout history and we will see the list updated as below (a disadvantage of editing directly is that we will not have much info on what was done in revision 2):
$ kubectl rollout history deployment multi-container-deploy
deployment.apps/multi-container-deploy
REVISION   CHANGE-CAUSE
1          kubectl apply --filename=deployment_v1.yaml --record=true
2          kubectl apply --filename=deployment_v1.yaml --record=true
We can also check that the resource was successfully removed and we only have the pod running with one container.
$ kubectl get all
NAME                                         READY   STATUS    RESTARTS   AGE
pod/multi-container-deploy-7cdb9cbf4-jr9nc   1/1     Running   0          4m36s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/multi-container-deploy   1/1     1            1           13m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/multi-container-deploy-5fc8944c58   0         0         0       13m
replicaset.apps/multi-container-deploy-7cdb9cbf4    1         1         1       4m36s
We can undo the above edit on the deployment just by running the command below:
$ kubectl rollout undo deployment multi-container-deploy
deployment.apps/multi-container-deploy rolled back
If we check back, the pod is running with two containers again.
$ kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/multi-container-deploy-5fc8944c58-xn4mz   2/2     Running   0          40s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/multi-container-deploy   1/1     1            1           15m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/multi-container-deploy-5fc8944c58   1         1         1       15m
replicaset.apps/multi-container-deploy-7cdb9cbf4    0         0         0       6m59s
And the rollout history will be updated as below:
$ kubectl rollout history deployment multi-container-deploy
deployment.apps/multi-container-deploy
REVISION   CHANGE-CAUSE
2          kubectl apply --filename=deployment_v2.yaml --record=true
3          kubectl apply --filename=deployment_v2.yaml --record=true