Scale down Kubernetes pods

I am using
kubectl scale --replicas=0 -f deployment.yaml
to stop all my running pods. Please let me know if there is a better way to bring all running pods down to zero while keeping the configuration, deployments, etc. intact, so that I can scale back up later as required.

You are doing the correct action. Traditionally, the scale verb is applied directly to the resource name, as in kubectl scale deploy my-awesome-deployment --replicas=0, which removes the need to always point at the specific file that describes that deployment. But there is nothing wrong (that I know of) with using the file if that is more convenient for you.

The solution is straightforward:
kubectl scale deploy -n <namespace> --replicas=0 --all

To scale down all deployments in a namespace:
kubectl get deploy -n <namespace> -o name | xargs -I % kubectl scale % --replicas=0 -n <namespace>
To scale back up, set --replicas=1 (or any other required number) accordingly.
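If you later want to restore each deployment to the size it had before, you can record the replica counts first. A minimal sketch (the namespace myns and the file name replicas.txt are placeholders; emit_restore only prints the scale commands so you can review them before piping to sh):

```shell
# Save "name=replicas" pairs for every deployment in the namespace:
#   kubectl get deploy -n myns \
#     -o jsonpath='{range .items[*]}{.metadata.name}={.spec.replicas}{"\n"}{end}' > replicas.txt
# Then scale everything down:
#   kubectl scale deploy -n myns --all --replicas=0

# Later, rebuild the scale commands from the saved file:
emit_restore() {
  while IFS='=' read -r name count; do
    printf 'kubectl scale deploy %s -n myns --replicas=%s\n' "$name" "$count"
  done
}

# Usage: emit_restore < replicas.txt | sh
```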

Use the following to scale down/up all deployments and stateful sets in the current namespace. Useful in development when switching projects.
kubectl scale statefulset,deployment --all --replicas=0
Add a namespace flag if needed
kubectl scale statefulset,deployment -n mynamespace --all --replicas=0

For example, scaling an existing deployment from 3 to 5 replicas:
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
app-gke 3/3 3 3 13m
$ kubectl scale deploy app-gke --replicas=5
deployment.extensions/app-gke scaled
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-gke-7b768cd6d7-b25px 2/2 Running 0 11m
app-gke-7b768cd6d7-glj5v 0/2 ContainerCreating 0 4s
app-gke-7b768cd6d7-jdt6l 2/2 Running 0 11m
app-gke-7b768cd6d7-ktx87 2/2 Running 0 11m
app-gke-7b768cd6d7-qxpgl 0/2 ContainerCreating 0 4s

If you need more granularity with pipes or grep, here is another shell solution (here grep -v app excludes deployments whose names match "app"; adjust the filter to taste):
for i in $(kubectl get deployments --no-headers | grep -v app | awk '{print $1}'); do kubectl scale --replicas=2 deploy $i; done

If you want a generic patch instead:
namespace=devops-ci-dev
kubectl get deployment -n ${namespace} --no-headers | awk '{print $1}' | xargs -I elhay kubectl patch deployment -n ${namespace} -p '{"spec": {"replicas": 1}}' elhay
Change namespace=devops-ci-dev to your namespace.

Assuming your service names match your deployment names:
kubectl get svc --no-headers | awk '{print $1}' | xargs kubectl scale deploy --replicas=0


Failed to move past 1 pod has unbound immediate PersistentVolumeClaims

I am new to Kubernetes and am trying to get Apache Airflow working using Helm charts. After almost a week of struggling I am getting nowhere, even with the example provided in the Apache Airflow documentation. I use Pop!_OS 20.04 and microk8s.
When I run these commands:
kubectl create namespace airflow
helm repo add apache-airflow https://airflow.apache.org
helm install airflow apache-airflow/airflow --namespace airflow
The helm installation times out after five minutes.
kubectl get pods -n airflow
shows this list:
NAME READY STATUS RESTARTS AGE
airflow-postgresql-0 0/1 Pending 0 4m8s
airflow-redis-0 0/1 Pending 0 4m8s
airflow-worker-0 0/2 Pending 0 4m8s
airflow-scheduler-565d8587fd-vm8h7 0/2 Init:0/1 0 4m8s
airflow-triggerer-7f4477dcb6-nlhg8 0/1 Init:0/1 0 4m8s
airflow-webserver-684c5d94d9-qhhv2 0/1 Init:0/1 0 4m8s
airflow-run-airflow-migrations-rzm59 1/1 Running 0 4m8s
airflow-statsd-84f4f9898-sltw9 1/1 Running 0 4m8s
airflow-flower-7c87f95f46-qqqqx 0/1 Running 4 4m8s
Then when I run the below command:
kubectl describe pod airflow-postgresql-0 -n airflow
I get the below (trimmed up to the events):
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 58s (x2 over 58s) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Then I deleted the namespace using the following commands
kubectl delete ns airflow
At this point, the termination of the pods gets stuck. Then I bring up the proxy in another terminal:
kubectl proxy
Then issue the following command to force-delete the namespace and all its pods and resources:
kubectl get ns airflow -o json | jq '.spec.finalizers=[]' | curl -X PUT http://localhost:8001/api/v1/namespaces/airflow/finalize -H "Content-Type: application/json" --data-binary @-
Then I deleted the PVC's using the following command:
kubectl delete pvc --force --grace-period=0 --all -n airflow
This got stuck again, so I had to issue another command to force the deletion:
kubectl patch pvc data-airflow-postgresql-0 -p '{"metadata":{"finalizers":null}}' -n airflow
The PVCs get terminated at this point, and these two commands return nothing:
kubectl get pvc -n airflow
kubectl get all -n airflow
Then I restarted the machine and executed the helm install again (using the first and last commands in the first section of this question), but got the same result.
I executed the following command then (using the suggestions I found here):
kubectl describe pvc -n airflow
I got the following output (I am posting the event portion of PostgreSQL):
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 2m58s (x42 over 13m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
So my assumption is that I need to provide storage class as part of the values.yaml
Is my understanding right? How do I provide the required (and what values) in the values.yaml?
If you installed with helm, you can uninstall with helm delete airflow -n airflow.
Here's a way to install airflow for testing purposes using default values:
Generate the manifest: helm template airflow apache-airflow/airflow -n airflow > airflow.yaml
Open airflow.yaml with your favorite editor and replace every volumeClaimTemplates with an emptyDir volume. Example:
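An illustrative sketch of the substitution (the exact claim and field names vary by chart version; this assumes a claim named data):

```yaml
# Remove this from the rendered StatefulSet spec:
#   volumeClaimTemplates:
#     - metadata:
#         name: data
#       spec:
#         accessModes: ["ReadWriteOnce"]
#         resources:
#           requests:
#             storage: 8Gi

# ...and add an equivalent ephemeral volume under spec.template.spec instead:
      volumes:
        - name: data
          emptyDir: {}
```

Note that emptyDir storage is ephemeral, so this is only suitable for testing.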
Create the namespace and install:
kubectl create namespace airflow
kubectl apply -f airflow.yaml --namespace airflow
You can copy files out from the pods if needed.
To delete: kubectl delete -f airflow.yaml --namespace airflow.

How to force delete resources in a non-existent namespace?

This question is a follow-up of: How to list really all objects of a nonexistent namespace?
Long story short:
$ kubectl get namespaces
NAME STATUS AGE
argo Active 27d
default Active 27d
kube-node-lease Active 27d
kube-public Active 27d
kube-system Active 27d
$ kubectl get eventbus -n argo-events
NAME AGE
default 17h
$ kubectl get eventsource -n argo-events
NAME AGE
pubsub-event-source 14h
There are two resources in the namespace argo-events, which actually no longer exists because I deleted it and expected it to be gone along with all the resources in it. Obviously something didn't work as expected.
Now (after listing potentially more objects - first question) I want to really get rid of those resources because they seem to block a redeployment.
But this ...
$ kubectl delete eventbus default -n argo-events
eventbus.argoproj.io "default" deleted
^C
$ kubectl delete eventsource pubsub-event-source -n argo-events
eventsource.argoproj.io "pubsub-event-source" deleted
^C
... doesn't work.
So, how do I force their deletion?
UPDATE:
$ kubectl describe eventbus default -n argo-events | grep -A 3 final
f:finalizers:
.:
v:"eventbus-controller":
f:status:
$ kubectl describe eventsource pubsub-event-source -n argo-events | grep -A 3 final
f:finalizers:
.:
v:"eventsource-controller":
f:spec:
This worked:
$ kubectl create namespace argo-events
namespace/argo-events created
$ kubectl patch eventsource/pubsub-event-source -p '{"metadata":{"finalizers":[]}}' --type=merge -n argo-events
eventsource.argoproj.io/pubsub-event-source patched
$ kubectl patch eventbus/default -p '{"metadata":{"finalizers":[]}}' --type=merge -n argo-events
eventbus.argoproj.io/default patched
$ kubectl delete namespace argo-events
namespace "argo-events" deleted
If somebody stumbles upon this answer and knows why this works - please add an explanation in a comment. That would be cool, thanks.
What about:
kubectl delete eventsource pubsub-event-source -n argo-events --grace-period=0 --force
?

Add some labels to a deployment?

I'm a beginner with K8s and I have a question about labels in Kubernetes. In a YouTube video (in French), I saw the following:
The presenter creates three deployments with these commands, then runs kubectl get deployments and kubectl get deployments --show-labels:
kubectl run monnginx --image nginx --labels "env=prod,group=front"
kubectl run monnginx2 --image nginx --labels "env=dev,group=front"
kubectl run monnginx3 --image nginx --labels "env=prod,group=back"
root#kubmaster:# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
monnginx 1/1 1 1 46s
monnginx2 1/1 1 1 22s
monnginx3 1/1 1 1 10s
root#kubmaster:# kubectl get deployments --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
monnginx 1/1 1 1 46s env=prod,group=front
monnginx2 1/1 1 1 22s env=dev,group=front
monnginx3 1/1 1 1 10s env=prod,group=back
Currently, if I try to do the same thing:
root#kubermaster:~ kubectl run mynginx --image nginx --labels "env=prod,group=front"
pod/mynginx created
root#kubermaster:~ kubectl run mynginx2 --image nginx --labels "env=dev,group=front"
pod/mynginx2 created
root#kubermaster:~ kubectl run mynginx3 --image nginx --labels "env=dev,group=back"
pod/mynginx3 created
When I try the command kubectl get deployments --show-labels, the output is :
No resources found in default namespace.
But if I try kubectl get pods --show-labels, the output is :
NAME READY STATUS RESTARTS AGE LABELS
mynginx 1/1 Running 0 2m39s env=prod,group=front
mynginx2 1/1 Running 0 2m32s env=dev,group=front
mynginx3 1/1 Running 0 2m25s env=dev,group=back
If I follow every step from the video, there should be a way to put labels on deployments... But the command kubectl create deployment does not accept the --labels flag:
Error: unknown flag: --labels
Can someone explain why I get this error, and how I can put labels on a deployment?
Thanks a lot!
That's because $ kubectl create deployment doesn't support the --labels flag, but you can use $ kubectl label to add labels to your deployment afterwards.
Examples:
# Update deployment 'my-deployment' with the label 'unhealthy' and the value 'true'.
$ kubectl label deployment my-deployment unhealthy=true
# Update deployment 'my-deployment' with the label 'status' and the value 'unhealthy', overwriting any existing value.
$ kubectl label --overwrite deployment my-deployment status=unhealthy
It works with other Kubernetes objects too.
Format: kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N
I think the problem is something different.
Until Kubernetes 1.17 the command kubectl run created a deployment.
Since Kubernetes 1.18 the command kubectl run creates a pod.
Release Notes of Kubernetes 1.18
kubectl run has removed the previously deprecated generators, along with flags
unrelated to creating pods. kubectl run now only creates pods. See specific
kubectl create subcommands to create objects other than pods. (#87077,
#soltysh) [SIG Architecture, CLI and Testing]
As of 2022 you can use the following imperative command to create a pod with a label:
kubectl run POD_NAME --image IMAGE_NAME -l app=myapp
where app=myapp is a label key/value pair.

Identify pods which are not in a Ready state

We have deployed a few pods in the cluster in various namespaces. I would like to inspect and identify all pods that are not in a Ready state.
master $ k get pod/nginx1401 -n dev1401
NAME READY STATUS RESTARTS AGE
nginx1401 0/1 Running 0 10m
In the list above, the pod shows a Running status but still has some issue. How can we find the list of such pods? The commands below are not showing me the desired output:
kubectl get po -A | grep Pending (looking for pods that have yet to schedule)
kubectl get po -A | grep -v Running (looking for pods in a state other than Running)
kubectl get pods --field-selector=status.phase=Failed
There is a long-standing feature request for this. The latest entry suggests
kubectl get po --all-namespaces | gawk 'match($3, /([0-9])+\/([0-9])+/, a) {if (a[1] < a[2] && $4 != "Completed") print $0}'
for finding pods that are running but not complete.
There are a lot of other suggestions in the thread that might work as well.
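As a variant of the same idea, the READY-column check can be wrapped in a small shell function using plain awk (a sketch: it reads kubectl get po --all-namespaces --no-headers output on stdin and prints the names of pods with fewer ready containers than desired, skipping Completed pods):

```shell
# Prints NAME for every pod whose READY column (e.g. "0/2") shows fewer
# ready containers than desired, unless its STATUS is "Completed".
not_ready() {
  awk '{
    split($3, a, "/")                              # READY column, e.g. "1/2"
    if (a[1] + 0 < a[2] + 0 && $4 != "Completed") print $2
  }'
}

# Usage (against a live cluster):
#   kubectl get po --all-namespaces --no-headers | not_ready
```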
You can try this:
$ kubectl get po --all-namespaces -w
You will get an update whenever any change (create/update/delete) happens to a pod in any namespace.
Or you can watch all pod by using:
$ watch -n 1 kubectl get po --all-namespaces
This will continuously watch all pods in every namespace at a 1-second interval.

Can't delete pods in pending state?

[root#vpct-k8s-1 kubernetes]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-ui-v2-ck0yw 0/1 Pending 0 1h
[root#vpct-k8s-1 kubernetes]# kubectl get rc --all-namespaces
NAMESPACE CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
kube-system kube-ui-v2 kube-ui gcr.io/google_containers/kube-ui:v2 k8s-app=kube-ui,version=v2 1 1h
Can't delete pods in pending state?
Find the namespace and the owning deployment, then delete that deployment:
kubectl get ns
kubectl get pods --all-namespaces
kubectl get deployment -n (namespacename)
kubectl get deployments --all-namespaces
kubectl delete deployment (podname) -n (namespacename)
Try the below command
kubectl delete pod kube-ui-v2-ck0yw --grace-period=0 --force -n kube-system
To delete a pod in the Pending state, delete the deployment that manages it using its manifest file.
Please check the below command:
kubectl delete -f deployment-file-name.yaml
Depending on the number of replicas you specified when creating the controller, you might be able to delete the pending pod, but another pod will be recreated automatically. You can delete the pod by running this command:
$ ./cluster/kubectl.sh delete pod kube-ui-v2-ck0yw