Add some labels to a deployment? - kubernetes

I'm a beginner with K8s and I have a question about labels in Kubernetes. In a YouTube video (in French), I saw the following:
The presenter creates three deployments with these commands, then runs kubectl get deployments followed by kubectl get deployments --show-labels:
kubectl run monnginx --image nginx --labels "env=prod,group=front"
kubectl run monnginx2 --image nginx --labels "env=dev,group=front"
kubectl run monnginx3 --image nginx --labels "env=prod,group=back"
root@kubmaster:~# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
monnginx 1/1 1 1 46s
monnginx2 1/1 1 1 22s
monnginx3 1/1 1 1 10s
root@kubmaster:~# kubectl get deployments --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
monnginx 1/1 1 1 46s env=prod,group=front
monnginx2 1/1 1 1 22s env=dev,group=front
monnginx3 1/1 1 1 10s env=prod,group=back
Currently, if I try to do the same thing:
root@kubermaster:~# kubectl run mynginx --image nginx --labels "env=prod,group=front"
pod/mynginx created
root@kubermaster:~# kubectl run mynginx2 --image nginx --labels "env=dev,group=front"
pod/mynginx2 created
root@kubermaster:~# kubectl run mynginx3 --image nginx --labels "env=dev,group=back"
pod/mynginx3 created
When I try the command kubectl get deployments --show-labels, the output is:
No resources found in default namespace.
But if I try kubectl get pods --show-labels, the output is:
NAME READY STATUS RESTARTS AGE LABELS
mynginx 1/1 Running 0 2m39s env=prod,group=front
mynginx2 1/1 Running 0 2m32s env=dev,group=front
mynginx3 1/1 Running 0 2m25s env=dev,group=back
If I follow all the steps from the video, there should be a way to put labels on deployments... But the command kubectl create deployment does not accept the --labels flag:
Error: unknown flag: --labels
Can someone explain why I get this error, and how I can put labels on a deployment?
Thanks a lot!

Because kubectl create deployment doesn't support the --labels flag. But you can use kubectl label to add labels to your deployment.
Examples:
# Update deployment 'my-deployment' with the label 'unhealthy' and the value 'true'.
$ kubectl label deployment my-deployment unhealthy=true
# Update deployment 'my-deployment' with the label 'status' and the value 'unhealthy', overwriting any existing value.
$ kubectl label --overwrite deployment my-deployment status=unhealthy
It works with other Kubernetes objects too.
Format: kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N
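You can then confirm that the label was applied by listing the deployment with --show-labels again, for example:
$ kubectl get deployment my-deployment --show-labels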

I think the problem is something different.
Until Kubernetes 1.17, the command kubectl run created a Deployment.
Since Kubernetes 1.18, the command kubectl run creates a Pod.
From the Kubernetes 1.18 release notes:
kubectl run has removed the previously deprecated generators, along with flags
unrelated to creating pods. kubectl run now only creates pods. See specific
kubectl create subcommands to create objects other than pods. (#87077,
@soltysh) [SIG Architecture, CLI and Testing]
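So on Kubernetes 1.18+, if you want a labeled Deployment rather than a bare Pod, one way is to combine kubectl create deployment with kubectl label as described in the other answer (a sketch using the names from the question):
kubectl create deployment monnginx --image=nginx
kubectl label deployment monnginx env=prod group=front
kubectl get deployments --show-labels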

As of 2022, you can use the following imperative command to create a pod with labels:
kubectl run POD_NAME --image IMAGE_NAME -l myapp=app
where myapp=app is the label (in key=value form).


Failed to move past 1 pod has unbound immediate PersistentVolumeClaims

I am new to Kubernetes and I am trying to get Apache Airflow working using Helm charts. After almost a week of struggling, I am getting nowhere; I can't even get the example provided in the Apache Airflow documentation working. I use Pop!_OS 20.04 and microk8s.
When I run these commands:
kubectl create namespace airflow
helm repo add apache-airflow https://airflow.apache.org
helm install airflow apache-airflow/airflow --namespace airflow
The helm installation times out after five minutes.
kubectl get pods -n airflow
shows this list:
NAME READY STATUS RESTARTS AGE
airflow-postgresql-0 0/1 Pending 0 4m8s
airflow-redis-0 0/1 Pending 0 4m8s
airflow-worker-0 0/2 Pending 0 4m8s
airflow-scheduler-565d8587fd-vm8h7 0/2 Init:0/1 0 4m8s
airflow-triggerer-7f4477dcb6-nlhg8 0/1 Init:0/1 0 4m8s
airflow-webserver-684c5d94d9-qhhv2 0/1 Init:0/1 0 4m8s
airflow-run-airflow-migrations-rzm59 1/1 Running 0 4m8s
airflow-statsd-84f4f9898-sltw9 1/1 Running 0 4m8s
airflow-flower-7c87f95f46-qqqqx 0/1 Running 4 4m8s
Then when I run the below command:
kubectl describe pod airflow-postgresql-0 -n airflow
I get the below (trimmed down to the events):
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 58s (x2 over 58s) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Then I deleted the namespace using the following command:
kubectl delete ns airflow
At this point, the termination of the pods got stuck, so I brought up the proxy in another terminal:
kubectl proxy
Then I issued the following command to force-delete the namespace and all of its pods and resources:
kubectl get ns airflow -o json | jq '.spec.finalizers=[]' | curl -X PUT http://localhost:8001/api/v1/namespaces/airflow/finalize -H "Content-Type: application/json" --data @-
Then I deleted the PVC's using the following command:
kubectl delete pvc --force --grace-period=0 --all -n airflow
This got stuck again, so I had to issue another command to force the deletion:
kubectl patch pvc data-airflow-postgresql-0 -p '{"metadata":{"finalizers":null}}' -n airflow
The PVCs get terminated at this point, and these two commands return nothing:
kubectl get pvc -n airflow
kubectl get all -n airflow
Then I restarted the machine and executed the helm install again (using the first and last commands in the first section of this question), but got the same result.
I then executed the following command (using suggestions I found here):
kubectl describe pvc -n airflow
I got the following output (I am posting only the events portion for the PostgreSQL PVC):
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 2m58s (x42 over 13m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
So my assumption is that I need to provide a storage class as part of values.yaml.
Is my understanding right? If so, how do I provide what is required (and with what values) in values.yaml?
If you installed with helm, you can uninstall with helm delete airflow -n airflow.
Here's a way to install airflow for testing purposes using default values:
Generate the manifest: helm template airflow apache-airflow/airflow -n airflow > airflow.yaml
Open airflow.yaml with your favorite editor and replace each volumeClaimTemplates block with an emptyDir volume (see the sketch below).
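A rough sketch of the kind of change meant here (field names and sizes are illustrative and will differ per chart version, so treat this as an example rather than the literal diff). A StatefulSet section such as:
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi
becomes a plain volume in the pod template spec, keeping the same name so the existing volumeMounts still match:
volumes:
  - name: data
    emptyDir: {}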
Create the namespace and install:
kubectl create namespace airflow
kubectl apply -f airflow.yaml --namespace airflow
You can copy files out from the pods if needed.
To delete it: kubectl delete -f airflow.yaml --namespace airflow.
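Alternatively, to address the storage-class part of the question directly (an assumption based on your mention of microk8s, not part of the answer above): the FailedBinding event means the cluster has no default StorageClass, and enabling microk8s's built-in hostpath storage addon provides one, after which the chart's PVCs can bind without editing the manifest:
# The addon is called hostpath-storage on recent microk8s releases ("storage" on older ones)
microk8s enable hostpath-storage
kubectl get storageclass
helm install airflow apache-airflow/airflow --namespace airflow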

kubectl status.phase=Running return wrong results

When I run:
kubectl get pods --field-selector=status.phase=Running
I see:
NAME READY STATUS RESTARTS AGE
k8s-fbd7b 2/2 Running 0 5m5s
testm-45gfg 1/2 Error 0 22h
I don't understand why this command gives me pods that are in the Error status.
According to the K8s API, there is no such thing as STATUS=Error.
How can I get only the pods that are in this Error status?
When I run:
kubectl get pods --field-selector=status.phase=Failed
It tells me that there are no pods in that status.
Using the kubectl get pods --field-selector=status.phase=Failed command you can display all Pods in the Failed phase.
Failed means that all containers in the Pod have terminated, and at least one container has terminated in failure (see: Pod phase):
Failed - All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
In your example, both Pods are in the Running phase because at least one container is still running in each of these Pods:
Running - The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
You can check the current phase of Pods using the following command:
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
Let's check how this command works:
$ kubectl get pods
NAME READY STATUS
app-1 1/2 Error
app-2 0/1 Error
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
app-1 Running
app-2 Failed
As you can see, only the app-2 Pod is in the Failed phase. There is still one container running in the app-1 Pod, so this Pod is in the Running phase.
To list all pods with the Error status, you can simply use:
$ kubectl get pods -A | grep Error
default app-1 1/2 Error
default app-2 0/1 Error
Additionally, it's worth mentioning that you can check the state of all containers in Pods:
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state}{"\n"}{end}'
app-1 {"terminated":{"containerID":"containerd://f208e2a1ff08c5ce2acf3a33da05603c1947107e398d2f5fbf6f35d8b273ac71","exitCode":2,"finishedAt":"2021-08-11T14:07:21Z","reason":"Error","startedAt":"2021-08-11T14:07:21Z"}} {"running":{"startedAt":"2021-08-11T14:07:21Z"}}
app-2 {"terminated":{"containerID":"containerd://7a66cbbf73985efaaf348ec2f7a14d8e5bf22f891bd655c4b64692005eb0439b","exitCode":2,"finishedAt":"2021-08-11T14:08:50Z","reason":"Error","startedAt":"2021-08-11T14:08:50Z"}}
You can simply grep the Error pods using:
kubectl get pods --all-namespaces | grep Error
To remove all Error pods from the cluster:
kubectl delete pod `kubectl get pods --namespace <yournamespace> | awk '$3 == "Error" {print $1}'` --namespace <yournamespace>
Most Pod failures return explicit error states that can be observed in the status field.
Error:
Your pod has crashed; it was scheduled onto a node successfully, but crashed after that. To debug it further you can use different methods or commands, for example:
kubectl describe pod <pod-name> -n <namespace>
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#my-pod-is-crashing-or-otherwise-unhealthy
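Another standard check (not from the original answer, just plain kubectl): look at the logs of the crashed container, including the previous instance if it has restarted:
kubectl logs <pod-name> -n <namespace>
# If the container restarted, show the logs of the previous, crashed instance
kubectl logs <pod-name> -n <namespace> --previous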
Here is an overkill go-template based attempt:
kubectl get pods -o go-template='{{range $index, $element := .items}}{{range .status.containerStatuses}}{{range .state }}{{if .reason }}{{if (eq .reason "Error") }}{{$element.metadata.name}} {{$element.metadata.namespace}}{{"\n"}}{{end}}{{end}}{{end}}{{end}}{{end}}'
job1-stn45 default
My pod status:
k get pod
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 1 2d11h
nginx-0 1/1 Running 3 5d10h
nginx-2 1/1 Running 3 5d10h
nginx-1 1/1 Running 3 5d10h
job1-stn45 0/1 Error 0 113m
update-test-27145740-82z7s 0/1 ImagePullBackOff 0 96m
update-test-27145500-7f2l9 0/1 ImagePullBackOff 0 5h36m

Kubernetes replicate pod modification to other pods

I have a k8s cluster with 3 nodes.
With the kubectl command I enter a pod's shell and make some file edits:
kubectl exec --stdin --tty <pod-name> -- /bin/bash
At this point I have one pod with the correct edits and two other replicas with the old file.
My question is:
Is there a kubectl command to, starting from a specific pod, overwrite the other replicas in the cluster so that I end up with n equal pods?
I hope this is clear.
So many thanks in advance
Manuel
You can use a kubectl plugin called: kubectl-tmux-exec.
All information on how to install and use this plugin can be found on GitHub: predatorray/kubectl-tmux-exec.
As described in the How to Install / Dependencies documentation, the plugin needs the following programs:
gnu-getopt(1)
tmux(1)
I've created a simple example to illustrate how it works.
Suppose I have a web Deployment and want to create a file named sample-file inside all (3) replicas.
$ kubectl get deployment,pods --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
deployment.apps/web 3/3 3 3 19m app=web
NAME READY STATUS RESTARTS AGE LABELS
pod/web-96d5df5c8-5gn8x 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-95r4c 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-wc9k5 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
I have the kubectl-tmux_exec plugin installed, so I can use it:
$ kubectl plugin list
The following compatible plugins are available:
/usr/local/bin/kubectl-tmux_exec
$ kubectl tmux-exec -l app=web bash
After running the above command, tmux opens and we can modify multiple Pods simultaneously.
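If you don't need an interactive session, a plain shell loop over the matching pods achieves the same one-off change (a sketch; adjust the label selector and the command to your case):
# Run the same command in every pod selected by the label
for pod in $(kubectl get pods -l app=web -o name); do
  kubectl exec "$pod" -- touch /tmp/sample-file
done
Keep in mind that changes made this way (or via tmux-exec) live only inside the running pods; they are lost as soon as a pod is recreated, so configuration that must survive restarts belongs in the image or in a ConfigMap.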

coredns containers are running on only one master

I have set up a Kubernetes HA cluster with 3 masters (version 1.14.2). I observed that the 2 coredns containers are running on only one master. If I stop this master, coredns is stopped. Is there any configuration to spread them across the remaining masters?
How can I spread the coredns containers across the remaining masters?
You need to deploy the DNS autoscaler and then tune the autoscaling parameters.
Follow this link:
https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/
Then follow these steps:
kubectl apply -f https://raw.githubusercontent.com/epasham/docker-repo/master/k8s/dns-horizontal-autoscaler.yaml
kubectl get deployment --namespace=kube-system
kubectl edit configmap dns-autoscaler --namespace=kube-system
Look for this line:
linear: '{"coresPerReplica":256,"min":1,"nodesPerReplica":16}'
Update the min value to 2, as shown below:
kubectl edit configmap dns-autoscaler --namespace=kube-system
linear: '{"coresPerReplica":256,"min":2,"nodesPerReplica":16}'
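If you prefer a non-interactive change over kubectl edit, a kubectl patch along these lines should achieve the same (a sketch; the quoting shown assumes a bash shell):
kubectl patch configmap dns-autoscaler --namespace=kube-system \
  -p '{"data":{"linear":"{\"coresPerReplica\":256,\"min\":2,\"nodesPerReplica\":16}"}}'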
You should then get two coredns pods listed, as below:
master $ kubectl get po --namespace=kube-system|grep dns
coredns-78fcdf6894-l54db 1/1 Running 0 1h
coredns-78fcdf6894-vbk6q 1/1 Running 0 1h
dns-autoscaler-6f888f5957-fwpgl 1/1 Running 0 2m
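To confirm that the replicas actually ended up on different masters, check the NODE column (standard kubectl output, not part of the steps above):
kubectl get pods --namespace=kube-system -o wide | grep dns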

Update multiple pods based on labels

I have the following situation:
I have multiple deployments running the same image, configured identically except for their environment variables.
Now my question is: is there an easy way to update, for example, the image of all these deployments, instead of doing it one by one?
I haven't found a solution, but I think this should be possible with the use of labels?
Or is there a better way to deploy the same deployment multiple times with only different environment variables?
Yes, you can, this is what labels are for. They are good for grouping similar objects.
Here is the minimal reproducible example.
Create 2 deployments with the same label app=nginx:
$ kubectl run --image=nginx --overrides='{ "metadata": {"labels": {"app": "nginx"}}, "spec":{"template":{"spec": {"containers":[{"name":"nginx-container", "image": "nginx"}]}}}}' nginx-1
deployment.apps/nginx-1 created
$ kubectl run --image=nginx --overrides='{ "metadata": {"labels": {"app": "nginx"}}, "spec":{"template":{"spec": {"containers":[{"name":"nginx-container", "image": "nginx"}]}}}}' nginx-2
deployment.apps/nginx-2 created
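Note, tying this back to the kubectl run change discussed earlier: on kubectl 1.18+, kubectl run creates Pods rather than Deployments, so the equivalent today would be something along these lines (a sketch, not the commands from this answer):
kubectl create deployment nginx-1 --image=nginx
kubectl create deployment nginx-2 --image=nginx
kubectl label deployment nginx-1 nginx-2 app=nginx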
Here are our deployments:
$ kubectl get deploy -o wide --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR LABELS
nginx-1 1/1 1 1 20s nginx-container nginx run=nginx-1 app=nginx,run=nginx-1
nginx-2 1/1 1 1 16s nginx-container nginx run=nginx-2 app=nginx,run=nginx-2
Then we can use the kubectl set image command and filter the desired deployments using the label app=nginx:
$ kubectl set image deployment -l app=nginx nginx-container=nginx:alpine
deployment.extensions/nginx-1 image updated
deployment.extensions/nginx-2 image updated
And see the results:
$ kubectl get deploy -o wide --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR LABELS
nginx-1 1/1 1 1 6m49s nginx-container nginx:alpine run=nginx-1 app=nginx,run=nginx-1
nginx-2 1/1 1 1 6m45s nginx-container nginx:alpine run=nginx-2 app=nginx,run=nginx-2
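Since your deployments differ only in their environment variables, note that other kubectl set subcommands accept the same label selector. For example, kubectl set env can update a variable across all matching deployments at once (LOG_LEVEL here is just an illustrative variable name), and kubectl rollout status lets you watch each rollout finish:
# Set (or change) an environment variable on every deployment labelled app=nginx
kubectl set env deployment -l app=nginx LOG_LEVEL=debug
# Watch one of the resulting rollouts complete
kubectl rollout status deployment/nginx-1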