Kubernetes: automatically remove resources no longer required

Using AWS CloudFormation, I can create a stack based on a template that includes all required resources. I can then create a new template, adding some resources, removing some, and changing the description of others. I can then update the CloudFormation stack with the new template. CloudFormation will automatically remove any resources that are no longer in the template, add the new ones, and update modified resources. In addition, the update will roll back if any of the operations fails.
Is there an equivalent to this in Kubernetes, where I can just provide an updated configuration file, and have Kubernetes automatically compare that to the previous version and remove any resources that should no longer be there?

For single resources (e.g. a Pod or a Deployment), Kubernetes automatically reconciles state, so in that sense it works similarly to CloudFormation. If you change a Deployment, for example by removing a container from its pod template, Kubernetes will automatically remove the old pods and replace them.
If you want to treat multiple resources as a single object, you can look at something like Helm, which simplifies packaging multiple Kubernetes resources together.
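For example, here is a minimal sketch of that workflow (the release and chart names are hypothetical). On upgrade, Helm compares the newly rendered manifests against the ones stored for the release and deletes any resources that are no longer in the chart, which is close to CloudFormation's stack-update behavior:
helm install my-release ./mychart
# remove a manifest from mychart/templates/, then:
helm upgrade my-release ./mychart
# and if the upgrade goes wrong, you can roll the release back:
helm rollback my-release 1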

Using a Deployment will suit your need; a Deployment can be rolled back at any time.
The kubectl rollout command, used with the status, history, and undo subcommands, lets you control the rollout and rollback of the Deployment's resources.
Check rollout status:
kubectl rollout status deployment nginx
Check rollout history:
kubectl rollout history deployment nginx
Roll back to a previous revision:
kubectl rollout undo deployment nginx
In the example below I created a Deployment using the deployment_v1.yaml file, whose pod template has two containers (nginx and redis):
kubectl create -f deployment_v1.yaml --record=true
deployment_v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-deploy
  name: multi-container-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multi-container
  template:
    metadata:
      labels:
        app: multi-container
    spec:
      containers:
      - image: nginx
        name: nginx-1
      - image: redis
        name: redis-2
Checking status during the rollout:
$ kubectl rollout status deployment multi-container-deploy
Waiting for deployment "multi-container-deploy" rollout to finish: 0 of 1 updated replicas are available...
deployment "multi-container-deploy" successfully rolled out
Rollout history:
$ kubectl rollout history deployment multi-container-deploy
deployment.apps/multi-container-deploy
REVISION  CHANGE-CAUSE
1         kubectl create --filename=deployment_v1.yaml --record=true
$ kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/multi-container-deploy-5fc8944c58-r4dt4   2/2     Running   0          60s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/multi-container-deploy   1/1     1            1           60s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/multi-container-deploy-5fc8944c58   1         1         1       60s
Now say we remove the redis container from the deployment, using the kubectl edit command:
kubectl edit deployments multi-container-deploy
Check the new rollout status after the edit, as below:
$ kubectl rollout status deployment multi-container-deploy
Waiting for deployment "multi-container-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "multi-container-deploy" rollout to finish: 1 old replicas are pending termination...
deployment "multi-container-deploy" successfully rolled out
Check the new rollout history and we will see the list updated as below (a disadvantage of editing directly is that the change cause recorded for revision 2 tells us nothing about what was actually changed):
$ kubectl rollout history deployment multi-container-deploy
deployment.apps/multi-container-deploy
REVISION  CHANGE-CAUSE
1         kubectl create --filename=deployment_v1.yaml --record=true
2         kubectl create --filename=deployment_v1.yaml --record=true
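If you want a meaningful change cause after a direct edit, one option (a small sketch; the message text is just an example) is to set the kubernetes.io/change-cause annotation yourself, since that is the annotation kubectl rollout history displays:
$ kubectl annotate deployment multi-container-deploy kubernetes.io/change-cause="removed redis container" --overwrite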
We can also check that the container was successfully removed and that the pod is now running with only one container.
$ kubectl get all
NAME                                         READY   STATUS    RESTARTS   AGE
pod/multi-container-deploy-7cdb9cbf4-jr9nc   1/1     Running   0          4m36s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/multi-container-deploy   1/1     1            1           13m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/multi-container-deploy-5fc8944c58   0         0         0       13m
replicaset.apps/multi-container-deploy-7cdb9cbf4    1         1         1       4m36s
We can undo the above edit on the deployment just by running the command below:
$ kubectl rollout undo deployment multi-container-deploy
deployment.apps/multi-container-deploy rolled back
If we check again, the pod is running with two containers once more.
$ kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/multi-container-deploy-5fc8944c58-xn4mz   2/2     Running   0          40s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/multi-container-deploy   1/1     1            1           15m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/multi-container-deploy-5fc8944c58   1         1         1       15m
replicaset.apps/multi-container-deploy-7cdb9cbf4    0         0         0       6m59s
And the rollout history will be updated as below:
$ kubectl rollout history deployment multi-container-deploy
deployment.apps/multi-container-deploy
REVISION  CHANGE-CAUSE
2         kubectl create --filename=deployment_v1.yaml --record=true
3         kubectl create --filename=deployment_v1.yaml --record=true
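Note that kubectl rollout undo can also target a specific revision from the history rather than just the previous one, for example:
$ kubectl rollout undo deployment multi-container-deploy --to-revision=2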


How to get Kubernetes deployments labels when a new pod is created/updated in client-go?

Imagine the following Deployment definition in Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    env: staging
spec:
  ...
I have two questions in particular:
1) The label env: staging won't be available in the created pods. How can I access this data programmatically in client-go?
2) When a pod is created/updated, how can I find out which deployment it belongs to?
1) The label env: staging won't be available in the created pods. How can I access this data programmatically in client-go?
You can get the Deployment using client-go. See the example Create, Update & Delete Deployment for operations on a Deployment.
2) When a pod is created/updated, how can I find out which deployment it belongs to?
When a Deployment is created, a ReplicaSet is created that manages the Pods.
See the ownerReferences field of a Pod to see which ReplicaSet manages it. This is described in How a ReplicaSet works.
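If you want to experiment before writing the client-go code, the same two lookups can be sketched with kubectl (names taken from the question; client-go exposes the equivalent Get calls on its AppsV1() and CoreV1() interfaces):
# 1) read the Deployment's labels
kubectl get deployment nginx-deployment -o jsonpath='{.metadata.labels}'
# 2) walk a pod's ownerReferences up to its ReplicaSet, then to the Deployment
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].name}'
kubectl get rs <replicaset-name> -o jsonpath='{.metadata.ownerReferences[0].name}'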
Hope you are enjoying your Kubernetes journey!
In fact, the label won't be available in the created pods, but you can add it to the manifest, in the pod template section:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    # Here you have the deployment labels
    app: nginx
spec:
  selector:
    matchLabels:
      # Here you have the selector that indicates to the deployment
      # (more exactly, to the replicasets of the deployment)
      # which pods to track, to check that the number of replicas is respected.
      app: nginx
  ...
  template:
    metadata:
      labels:
        # Here you have the POD labels, which need to match the
        # selector.matchLabels section
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
  ...
You can check the pods' labels by typing:
❯ k get po --show-labels
NAME                            READY   STATUS    RESTARTS   AGE     LABELS
nginx-deploy-6bdc4445fd-5qlhg   1/1     Running   0          7m13s   app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-pgkhb   1/1     Running   0          7m13s   app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-xdz59   1/1     Running   0          7m13s   app=nginx,pod-template-hash=6bdc4445fd
You can get the deployments' labels by typing:
❯ k get deploy --show-labels
NAME           READY   UP-TO-DATE   AVAILABLE   AGE     LABELS
nginx-deploy   3/3     3            3           7m39s   app=nginx
You can add a custom column to your kubectl get po command to display the value of each pod's app label:
❯ k get pod -L app
NAME                            READY   STATUS    RESTARTS   AGE     APP
nginx-deploy-6bdc4445fd-5qlhg   1/1     Running   0          8m30s   nginx
nginx-deploy-6bdc4445fd-pgkhb   1/1     Running   0          8m30s   nginx
nginx-deploy-6bdc4445fd-xdz59   1/1     Running   0          8m30s   nginx
And you can use multiple -L flags:
❯ k get pod -L app -L test
NAME                            READY   STATUS    RESTARTS   AGE     APP     TEST
nginx-deploy-6bdc4445fd-5qlhg   1/1     Running   0          9m46s   nginx
nginx-deploy-6bdc4445fd-pgkhb   1/1     Running   0          9m46s   nginx
nginx-deploy-6bdc4445fd-xdz59   1/1     Running   0          9m46s   nginx
In general, pod names begin with the name of their owner (deployment, replicaset, statefulset, job, etc.).
When you use a deployment to create a pod, you can be sure that between the deployment and the pod there is a replicaset. (The deployment only manages the different versions of the replicaset, while the replicaset only ensures that the current number of actual replicas matches the number of replicas demanded in the manifest, using label selectors!)
So you can in fact check the ownerReferences field of a pod by typing:
❯ kubectl get po -o custom-columns=NAME:'{.metadata.name}',OWNER:'{.metadata.ownerReferences[0].name}',OWNER_KIND:'{.metadata.ownerReferences[0].kind}'
NAME                            OWNER                     OWNER_KIND
nginx-deploy-6bdc4445fd-5qlhg   nginx-deploy-6bdc4445fd   ReplicaSet
nginx-deploy-6bdc4445fd-pgkhb   nginx-deploy-6bdc4445fd   ReplicaSet
nginx-deploy-6bdc4445fd-xdz59   nginx-deploy-6bdc4445fd   ReplicaSet
You can do the same with replicasets to get their deployment owner:
❯ kubectl get rs -o custom-columns=NAME:'{.metadata.name}',OWNER:'{.metadata.ownerReferences[0].name}',OWNER_KIND:'{.metadata.ownerReferences[0].kind}'
NAME                      OWNER          OWNER_KIND
nginx-deploy-6bdc4445fd   nginx-deploy   Deployment
That's how you can quickly see with kubectl who owns whom.
Here is a little reading about owners and dependents: https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
Hope this has helped you. bguess

Pods still there when running kubectl delete pods

I want to remove zk and kafka from my k8s:
$ kubectl get pods
NAME               READY   STATUS             RESTARTS   AGE
kafka1-mvzch       1/1     Running            1          25s
kafka2-m292k       0/1     CrashLoopBackOff   8          20m
zookeeper1-qhmnf   1/1     Running            0          20m
zookeeper2-t7r8w   1/1     Running            0          20m
$ kubectl delete pod kafka1-mvzch kafka2-m292k zookeeper1-qhmnf zookeeper2-t7r8w
pod "kafka1-mvzch" deleted
pod "kafka2-m292k" deleted
pod "zookeeper1-qhmnf" deleted
pod "zookeeper2-t7r8w" deleted
But when I run get pods, it still shows the pods.
And I have no service and no deployment:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   7h1m
$ kubectl get deployment
No resources found in default namespace.
You are removing the pods, and they are indeed deleted.
But there is some other construct that re-creates pods to replace the (now deleted) previous ones.
In fact, the names of the pods, with their random-looking suffixes, suggest that there is another controller operating them.
Looking at the linked tutorial, you will notice that a ReplicationController is created; it ensures the pods are kept running.
If you want to remove them, remove the ReplicationController; the pods will be deleted as well.
You can use kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}' to identify the owner object of a pod. The owner might be a Deployment, StatefulSet, etc.
Looking at the medium.com guide that you mentioned, I see that it suggests creating ReplicationControllers.
You can clean up your namespace by running kubectl delete replicationcontroller --all.
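For example, assuming the ReplicationControllers carry the same base names as the pods (a guess based on the pod-name suffixes in the question), a more targeted cleanup might look like this:
$ kubectl get replicationcontroller
$ kubectl delete replicationcontroller kafka1 kafka2 zookeeper1 zookeeper2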

kubernetes deployments / replicasets are recreated after deletion

I'm trying to delete some old deployments/replicasets I have in my cluster, but when I run kubectl delete deployment it says the deployment is deleted and the pod from that deployment is Terminating, and then a few seconds later the deployment is magically re-created and the pod comes back.
The same happens for another replicaset I have.
What could be re-creating these deployments/replicasets, and how can I stop it so I can permanently delete them?
Edit: Here's some output. This is on a Kubernetes cluster in GKE, by the way:
kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
quickstart-kb   1/1     1            1           41m
ubuntu          1/1     1            1           66d
kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
ubuntu-677fc9fd77-fgd7k         1/1     Running   0          19d
quickstart-kb-f9b65577f-4fxph   1/1     Running   0          40m
kubectl delete deployment quickstart-kb
deployment.extensions "quickstart-kb" deleted
kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
quickstart-kb   0/1     1            0           7s
ubuntu          1/1     1            1           66d
kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-6cb6cf897d-qcjff   0/1     Running   0          11s
ubuntu-677fc9fd77-fgd7k          1/1     Running   0          19d
kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
quickstart-kb   1/1     1            1           4m6s
ubuntu          1/1     1            1           66d
kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-6cb6cf897d-qcjff   1/1     Running   0          4m13s
ubuntu-677fc9fd77-fgd7k          1/1     Running   0          19d
I think your Deployment object was created by a custom resource (defined via a CRD).
When you created the custom resource, its controller created the Deployment object. So even if you delete the Deployment object, the controller re-creates it.
Delete the custom resource itself to delete the Deployment and any other objects that were created with it.
From the name, it seems like a Kibana custom resource object:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
Use the following command to delete the Kibana object:
$ kubectl delete Kibana quickstart-kb
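To confirm this theory before deleting anything, you can check whether the Deployment carries an ownerReference pointing at a Kibana object and whether the Kibana resource type is installed in the cluster:
$ kubectl get deployment quickstart-kb -o jsonpath='{.metadata.ownerReferences[0].kind}'
$ kubectl api-resources | grep -i kibana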

[cloud-running-a-container]: No resources found in default namespace

I did a small deployment in K8s using a Docker image, but it is not showing under deployments, only under pods.
Reason: it is not creating anything under deployments in the default namespace.
Please suggest. The following are the commands I used:
$ kubectl run hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --port=8080 --namespace=default
pod/hello-node created
$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
hello-node   1/1     Running   0          12s
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
default       hello-node                                              1/1     Running   0          9m9s
kube-system   event-exporter-v0.2.5-599d65f456-4dnqw                  2/2     Running   0          23m
kube-system   kube-proxy-gke-hello-world-default-pool-c09f603f-3hq6   1/1     Running   0          23m
$ kubectl get deployments
No resources found in default namespace.
$ kubectl get deployments --all-namespaces
NAMESPACE     NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   event-exporter-v0.2.5   1/1     1            1           170m
kube-system   fluentd-gcp-scaler      1/1     1            1           170m
kube-system   heapster-gke            1/1     1            1           170m
kube-system   kube-dns                2/2     2            2           170m
kube-system   kube-dns-autoscaler     1/1     1            1           170m
kube-system   l7-default-backend      1/1     1            1           170m
kube-system   metrics-server-v0.3.1   1/1     1            1           170m
Arghya Sadhu's answer is correct. In the past, the kubectl run command indeed created a Deployment by default. You could also use it with so-called generators and specify exactly what kind of resource to create by providing the --generator flag followed by a corresponding value. Currently the --generator flag is deprecated and has no effect.
Note that you got quite a clear message after running your kubectl run command:
$ kubectl run hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --port=8080 --namespace=default
pod/hello-node created
It clearly says that the Pod hello-node was created. It doesn't mention a Deployment anywhere.
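You can double-check that a bare Pod was created, rather than one managed by a Deployment, by looking at its ownerReferences; for a standalone Pod the output is empty:
$ kubectl get pod hello-node -o jsonpath='{.metadata.ownerReferences}'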
As an alternative to using imperative commands for creating either Deployments or Pods you can use declarative approach:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
  namespace: default
  labels:
    app: hello-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node-container
        image: gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
        ports:
        - containerPort: 8080
The namespace declaration can be omitted in this case, as by default all resources are deployed into the default namespace.
After saving the file, e.g. as deployment.yaml, you just need to run:
kubectl apply -f deployment.yaml
Update:
Expansion of environment variables within a yaml manifest doesn't actually work, so the following line from the above deployment example cannot be used as-is:
image: gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
The simplest workaround is a fairly simple sed "trick".
First we need to change our project id placeholder in the deployment definition yaml a bit. It may look like this:
image: gcr.io/{{DEVSHELL_PROJECT_ID}}/hello-node:1.0
Then, when applying the deployment definition, instead of a simple kubectl apply -f deployment.yaml, run this one-liner:
sed "s/{{DEVSHELL_PROJECT_ID}}/$DEVSHELL_PROJECT_ID/g" deployment.yaml | kubectl apply -f -
The above command tells sed to search through deployment.yaml document for {{DEVSHELL_PROJECT_ID}} string and each time this string occurs, to substitute it with the actual value of $DEVSHELL_PROJECT_ID environment variable.
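An alternative to the sed trick, assuming the envsubst utility (part of GNU gettext) is available, is to keep the ordinary $DEVSHELL_PROJECT_ID syntax in the manifest and expand it just before applying:
envsubst < deployment.yaml | kubectl apply -f -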
Check your version of kubectl using kubectl version.
From version 1.18, kubectl run creates only a Pod and nothing else. To create a Deployment, use kubectl create deployment or use an older version of kubectl.
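For example, a minimal sketch reusing the image from the question:
kubectl create deployment hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0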

Is there some way to only increase a StatefulSet's replicas and NOT decrease them?

I do not want to decrease the number of pods controlled by a StatefulSet, and I think that decreasing pods is a dangerous operation in a production environment.
So... is there some way? Thanks!
I'm not sure if this is what you are looking for, but you can scale a StatefulSet.
Use kubectl to scale StatefulSets
First, find the StatefulSet you want to scale.
kubectl get statefulsets <stateful-set-name>
Change the number of replicas of your StatefulSet:
kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
To show you an example, I've deployed a 2-pod StatefulSet called web:
$ kubectl get statefulsets.apps web
NAME   READY   AGE
web    2/2     60s
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          63s
web-1   1/1     Running   0          44s
$ kubectl describe statefulsets.apps web
Name:               web
Namespace:          default
CreationTimestamp:  Wed, 23 Oct 2019 13:46:33 +0200
Selector:           app=nginx
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"web","namespace":"default"},"spec":{"replicas":2,"select...
Replicas:           824643442664 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        824643442984
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
...
...
Now if we scale this StatefulSet up to 5 replicas:
$ kubectl scale statefulset web --replicas=5
statefulset.apps/web scaled
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m41s
web-1   1/1     Running   0          3m22s
web-2   1/1     Running   0          59s
web-3   1/1     Running   0          40s
web-4   1/1     Running   0          27s
$ kubectl get statefulsets.apps web
NAME   READY   AGE
web    5/5     3m56s
You do not have any downtime in the already-running pods.
I think that decreasing pods is a dangerous operation in a production environment.
I agree with you.
As Crou wrote, it is possible to do this operation with kubectl scale statefulsets <stateful-set-name>, but this is an imperative operation, and it is not recommended to use imperative operations in a production environment.
In a production environment it is better to use a declarative operation: e.g. keep the number of replicas in a text file (e.g. stateful-set-name.yaml) and deploy it with kubectl apply -f <stateful-set-name>.yaml. With this way of working it is easy to store the yaml files in Git, so you have full control of all changes and can revert/roll back to a previous configuration. When you store the declarative files in a Git repository, you can use a CI/CD solution, e.g. Jenkins or ArgoCD, to 1) validate the operation (e.g. not allow decreases) and 2) first deploy to a test environment and check that it works, before applying the changes to the production environment.
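As a minimal sketch of that declarative workflow (the file name and values are illustrative), the replica count lives in the versioned manifest, and every scale-up is an edit, a commit, and an apply:
# stateful-set-name.yaml, kept in Git
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 5        # scale up by editing this value and committing the change
  serviceName: nginx
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
Then apply it:
$ kubectl apply -f stateful-set-name.yaml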
I recommend the book Kubernetes: Up & Running, 2nd edition, which describes this procedure in Chapter 18 (a new chapter).