Why do pod names have 5 random alphanumeric characters appended to their name when created through a kubernetes deployment? - kubernetes

Why do pod names have 5 random alphanumeric characters appended when created through a Kubernetes deployment? Is it possible to get rid of them so that the pod names don't change? I am frequently deleting and creating deployments and would prefer that pod names don't change.
Update: I would like to have the same name because I am constantly deleting/recreating the same deployment and if the name doesn't change, then I can quickly reuse old commands to exec into/see the logs of the containers.

Reason for the random alphanumeric suffixes in pod names:
When we create a deployment, it does not directly create the pods (to match the replica count).
It creates a ReplicaSet (with name = deployment_name + an alphanumeric hash of the pod template). But why the extra alphanumeric? When we upgrade a deployment, a new ReplicaSet is created with a new hash and the old one is kept as it is. This old ReplicaSet is used for rollbacks.
The created ReplicaSet then creates the pods (with name = replicaset_name + a 5-character alphanumeric suffix). But why the extra alphanumeric? We cannot have two pods with the same name.
If your use case is to reuse old commands frequently, then going for a StatefulSet is not a good solution. StatefulSets are heavyweight (ordered deployment, ordered termination, unique network names) and they are specifically designed to preserve state across restarts (in combination with persistent volumes).
There are a few tools which you can use:
stern
kube-fzf
Lightweight solution to your problem:
You can use labels to get the same pod across deployments:
kubectl get pods -l app=my_app,app_type=server
NAME                      READY   STATUS    RESTARTS   AGE
my-app-5b7644f7f6-4hb8s   1/1     Running   0          22h
my-app-5b7644f7f6-72ssz   1/1     Running   0          22h
After this we can use some bash magic to get what we want, as below.
Final command:
kubectl get pods -l app=my_app,app_type=server -o name | rg "pod/" -r "" | head -n 1 | awk '{print "kubectl logs " $0}' | bash
Explanation:
get list of pod names
kubectl get pods -l app=my_app,app_type=server -o name
pod/my-app-5b7644f7f6-4hb8s
pod/my-app-5b7644f7f6-72ssz
replace pod/ using ripgrep or sed (rg "pod/" -r "")
take only one pod using head -n 1
use awk to print the exec/logs command
pipe it to bash to execute
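A variant of the same idea (a sketch, assuming the same app=my_app,app_type=server labels) is to store the first matching pod name in a shell variable and reuse it for both logs and exec:
POD=$(kubectl get pods -l app=my_app,app_type=server -o name | head -n 1)
kubectl logs "$POD"              # logs accepts the pod/<name> form
kubectl exec -it "$POD" -- sh    # and so does exec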

This is how deployments work: every time a pod dies, the ReplicaSet creates a pod with a different name to match the desired state, and a random suffix is attached to the pod name to keep names unique.
Whatever you are trying to achieve is not possible with the Deployment object, as it is intended for stateless applications. Since you want to preserve the state (name) of the application, this is certainly possible with a StatefulSet.
So if you use a StatefulSet object to manage the replicas, every pod will be created with a fixed naming convention, e.g. POD_NAME-0, POD_NAME-1 etc., i.e. an index is appended to the pod name. Also, when a pod dies, the new pod will be created with the same name.
What you want to achieve is an ideal use case for StatefulSet. Go for it.
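A minimal sketch of such a StatefulSet (the name my-app, the labels and the image are placeholders); with replicas: 2 it creates pods named my-app-0 and my-app-1:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app       # headless service assumed to exist with this name
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25   # placeholder image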

If you deploy a pod from a Deployment object (kind: Deployment), then the deployment controller appends a unique suffix to the name of every pod that is part of that specific deployment.
This is how the deployment controller looks up all the relevant pods of the respective deployment (the same hash also appears in the pod-template-hash label). This is needed for the rolling upgrade and rollback functions.
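A quick way to see this relationship (a sketch; the app=my_app label is an assumption and the output is abbreviated):
kubectl get pods -l app=my_app --show-labels
NAME                      ...   LABELS
my-app-5b7644f7f6-4hb8s   ...   app=my_app,pod-template-hash=5b7644f7f6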

Related

How to delete random characters that the K8s system adds to pod names

I have a namespace that contains pods like :
vpar--x-xxxx-v1-75bb57b655-ck5wg
vpar--x-xxx-v1-7f784c94db-fj4q6
vpar--x-xxxxx-v1-59cb4654c8-n65m2
vpar--x-xxxxxxx-v1-866b85849b-95mmz
vpar--x-xxx--v1-75f45c9c6c-nwtgg
vpar--x-xxxxx-v1-6c957fb6f6-xthbd
I want to delete the random suffix at the end of each name.
Some help please.
The results should be like this :
pod/vpar-parc-m-engagement-v1
pod/vpar-parc-m-groupe-v1
pod/vpar-parc-m-journal-v1
pod/vpar-parc-m-offre-v1
pod/vpar-parc-m-produit-physique-v1
pod/vpar-parc-m-produit-v1
Looks like you've deployed them using a Deployment. You can't get rid of them. If you want pods with a known (or predictable) name, you can use a StatefulSet instead. It will create pods with the same name, appending the pod's ordinal index at the end. For example, for the StatefulSet my-awesome-app with 3 replicas the pods will be my-awesome-app-0, my-awesome-app-1 and my-awesome-app-2.
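If the goal is only to display the names without the generated suffixes, a sketch using sed (assuming the last two dash-separated segments are always the ReplicaSet hash and the pod suffix):
kubectl get pods -o name | sed -E 's/-[a-z0-9]+-[a-z0-9]{5}$//'
This would turn pod/vpar-parc-m-engagement-v1-75bb57b655-ck5wg into pod/vpar-parc-m-engagement-v1.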

How to get the full name of a pod by both its creation date and part of its name?

In my namespace, I have several pods named with the same prefix, followed by the random string. There are also other pods, named differently. The result of kubectl get pods would look something like this:
service-job-12345abc
service-job-abc54321
other-job-54321cba
I need to find the name of the most recently created pod starting with "service-job-".
I found this thread, which helps with getting the name of the most recent pod in general. This one gets me the complete names of pods starting with a specific prefix.
What I struggle with is combining these two methods. With each one, I seem to lose the information I need to perform the other one.
Note: I am not an administrator of the cluster, so I cannot change anything about the naming etc. of the pods. The pods could also be in any possible state.
This works as you expect:
kubectl get pods --sort-by=.metadata.creationTimestamp --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep service-job- | tail -n 1
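A shorter variant of the same idea (a sketch, still assuming the service-job- prefix) uses -o name instead of a go-template; --sort-by lists pods oldest first, so the last matching line is the newest:
kubectl get pods --sort-by=.metadata.creationTimestamp -o name | grep '^pod/service-job-' | tail -n 1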

Statefulset - How automatically setting labels to pods after creation and restarting?

I have a StatefulSet mongo-replica. It creates two replicas, and I want to set a new label (COMPANY) for each pod (replica); its value should be the pod's name, e.g.:
in POD mongo-replica-0 -> COMPANY: mongo-replica-0
in POD mongo-replica-1 -> COMPANY: mongo-replica-1
So, is there a way to do it automatically when creating/restarting a pod?
I know we can do it via kubectl label, but that is manual.
At the time of writing this, there is no dedicated tool for this purpose. Two things come to my mind here:
Use an initContainer in the StatefulSet pods that runs under a service account with the appropriate permissions. The initContainer would then run a command like kubectl label pod $HOSTNAME company=$HOSTNAME (a sketch of this idea is shown after this list). This article shows how to run kubectl from within a pod, covering the image build, service account and role creation.
Creating some sort of bash script that will run in a pod/job and automate this process for you:
a=$(kubectl get pods -o jsonpath='{.items[*].metadata.name}' -l app=$stsname); for n in $a; do kubectl label pod $n company="$n" --overwrite; done
Create a custom mutating webhook/controller that will modify those objects.
Here is a good article that describes how to write a basic Kubernetes mutating admission webhook. The Kubernetes official documentation has a very good section about dynamic admission control that is worth checking out.
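A minimal sketch of the initContainer idea from the first option (the service account name, image and label key are assumptions, and the service account needs RBAC permission to patch pods); this fragment belongs in the StatefulSet's pod template spec:
serviceAccountName: pod-labeler            # assumed SA allowed to patch pods
initContainers:
- name: self-label
  image: bitnami/kubectl:latest            # any image that ships kubectl
  command:
  - sh
  - -c
  - kubectl label pod "$HOSTNAME" company="$HOSTNAME" --overwrite   # HOSTNAME defaults to the pod name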

How to clear CrashLoopBackOff

When a Kubernetes pod goes into CrashLoopBackOff state and you have fixed the underlying issue, how do you force it to be rescheduled?
To apply a new configuration, a new pod should be created (the old one will be removed).
If your pod was created automatically by a Deployment or DaemonSet resource, this will happen automatically each time you update the resource's yaml.
It is not going to happen if your resource has spec.updateStrategy.type=OnDelete.
If the problem was caused by an error inside the docker image that you have since fixed, you should update the pods manually; you can use the rolling-update feature for this purpose. If the new image has the same tag, you can just remove the broken pod (see below).
In case of a node failure, the pod will be recreated on a new node after some time; the old pod will be removed after full recovery of the broken node. It is worth noting that this is not going to happen if your pod was created by a DaemonSet or StatefulSet.
In any case, you can manually remove the crashed pod:
kubectl delete pod <pod_name>
Or all pods with CrashLoopBackOff state:
kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
If you have a completely dead node, you can add the --grace-period=0 --force options to remove just the information about this pod from Kubernetes.
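A sketch of that forced removal (the pod name is a placeholder):
kubectl delete pod <pod_name> --grace-period=0 --force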
Generally a fix requires you to change something about the configuration of the pod (the docker image, an environment variable, a command line flag, etc), in which case you should remove the old pod and start a new pod. If your pod is running under a replication controller (which it should be), then you can do a rolling update to the new version.
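On newer clusters where the pods are managed by a Deployment, the same effect (recreating every pod without changing its spec) can be achieved with a single command (the deployment name is a placeholder):
kubectl rollout restart deployment <deployment-name>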
5 years later, unfortunately, this scenario still seems to be the case.
@kvaps' answer above suggested an alternative (rolling updates), which essentially updates (overwrites) a pod instead of deleting it; see the current working link for rolling updates.
The alternative to deleting a pod directly is not to create a bare pod in the first place, but instead to create a deployment, and then delete the deployment that contains the pod you want removed.
$ kubectl get deployments -A
$ kubectl delete -n <NAMESPACE> deployment <DEPLOYMENT>
# When on minikube or using docker for development + testing
$ docker system prune -a
The first command displays all deployments, alongside their respective namespaces. This helped me avoid accidentally deleting a deployment that shares the same name (name collision) but lives in a different namespace.
The second command deletes the deployment that is located in the given namespace.
The last command helps when working in development mode. It essentially removes all unused images, which is not required but helps clean up and save some disk space.
Another great tip is to try to understand the reasons why a pod is failing. The problem may lie entirely somewhere else, and k8s does a good job of documenting it. For that, one of the following may help:
$ kubectl logs -f <POD NAME>
$ kubectl get events
Another reference here on StackOverflow:
https://stackoverflow.com/a/55647634/132610
For anyone interested, I wrote a simple helm chart and python script which watches the current namespace and deletes any pod that enters CrashLoopBackOff.
The chart is at https://github.com/timothyclarke/helm-charts/tree/master/charts/dr-abc.
This is a sticking plaster; fixing the problem is always the best option. In my specific case, getting the historic apps into K8s so the development teams have a common place to work and can strangle the old applications with new ones is preferable to fixing all the bugs in the old apps. Having this in the namespace keeps up the illusion of everything running and buys that time.
This command will delete all pods that are in any of (CrashLoopBackOff, Init:CrashLoopBackOff, etc.) states. You can use grep -i <keyword> to match different states and then delete the pods that match the state. In your case it should be:
kubectl get pod -n <namespace> --no-headers | grep -i crash | awk '{print $1}' | while read line; do kubectl delete pod -n <namespace> $line; done

Reload Kubernetes ReplicationController to get newly created Service

Is there a way to reload currently running pods created by a ReplicationController so they pick up a newly created service?
Example:
I have running pods created by a ReplicationController config file. I deleted a service called mongo-svc and recreated it using a different port. Is there a way for the pods' environment variables to be updated with the new IP and ports from the new mongo-svc?
You can restart pods by simply deleting them: if they are linked to a Replication controller, the RC will take care of restarting them
kubectl delete pod <your-pod-name>
If you have a couple of pods, it's easy enough to copy/paste the pod names, but if you have many pods it can become cumbersome.
So another way to delete pods and restart them is to scale the RC down to 0 instances and back up to the number you need.
kubectl scale --replicas=0 rc <your-rc>
kubectl scale --replicas=<n> rc <your-rc>
By the way, you may also want to look at 'rolling-updates' to do this in a more production-friendly manner, but that implies updating the RC config.
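A sketch of that rolling-update path for the RC-era API (the RC name and image tag are placeholders; kubectl rolling-update has since been deprecated in favour of Deployments):
kubectl rolling-update <your-rc> --image=<image>:<new-tag>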
If you want the same pod to pick up the new service, the clean answer is no. You could (I strongly suggest not to do this) run kubectl exec <pod-name> -c <container> -- export <service env var name>=<service env var value>. But your best bet is to run kubectl delete <pod-name> and let your replication controller handle the work.
I've run into a similar issue for services run outside of Kubernetes, say a DB for instance. To address this I've been creating https://github.com/cpg1111/kubongo, which updates the service's endpoint without deleting the pods. That same idea can also be applied to other pods in Kubernetes to automate the service update. Basically it watches a specific service, and when its IP changes for whatever reason it updates all the pods without deleting them. This uses the same code as kubectl exec, however it is automated, sanitizes input and ensures the export is executed on all pods.
What do you mean by 'reapply'?
The pods to which the services point are generally selected based on labels. In other words, you can add / remove labels from the pods to include / exclude them from a service.
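A minimal sketch of this label/selector relationship (the label app: my-app and the ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  selector:
    app: my-app        # only pods carrying this label become endpoints of the service
  ports:
  - port: 27017
    targetPort: 27017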
Read here for more information about defining services: http://kubernetes.io/v1.1/docs/user-guide/services.html#defining-a-service
And here for more information about labels: http://kubernetes.io/v1.1/docs/user-guide/labels.html
Hope it helps!