How to delete random characters that k8s adds to pod names - kubernetes

I have a namespace that contains pods like:
vpar--x-xxxx-v1-75bb57b655-ck5wg
vpar--x-xxx-v1-7f784c94db-fj4q6
vpar--x-xxxxx-v1-59cb4654c8-n65m2
vpar--x-xxxxxxx-v1-866b85849b-95mmz
vpar-*-x-xxx--v1-75f45c9c6c-nwtgg
vpar--x-xxxxx-v1-6c957fb6f6-xthbd
I want to delete the random suffix string at the end of each name.
Some help please.
The results should be like this:
pod/vpar-parc-m-engagement-v1
pod/vpar-parc-m-groupe-v1
pod/vpar-parc-m-journal-v1
pod/vpar-parc-m-offre-v1
pod/vpar-parc-m-produit-physique-v1
pod/vpar-parc-m-produit-v1

Looks like you've deployed them using a Deployment. You can't get rid of the suffix. If you want pods with a known (or predictable) name you can use a StatefulSet instead. It will create pods with the same name, appending the pod ordinal at the end. For example, for the StatefulSet my-awesome-app with 3 replicas the pods will be my-awesome-app-0, my-awesome-app-1 and my-awesome-app-2.
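For illustration, a minimal sketch of such a StatefulSet (the my-awesome-app name, the nginx image and the headless Service are assumptions, not taken from the question):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-awesome-app
spec:
  clusterIP: None                 # headless Service, gives the pods stable DNS names
  selector:
    app: my-awesome-app
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-awesome-app
spec:
  serviceName: my-awesome-app
  replicas: 3
  selector:
    matchLabels:
      app: my-awesome-app
  template:
    metadata:
      labels:
        app: my-awesome-app
    spec:
      containers:
      - name: app
        image: nginx:1.25         # placeholder image
EOF
kubectl get pods -l app=my-awesome-app   # my-awesome-app-0, my-awesome-app-1, my-awesome-app-2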

Related

How to get the full name of a pod by both its creation date and part of its name?

In my namespace, I have several pods named with the same prefix, followed by the random string. There are also other pods, named differently. The result of kubectl get pods would look something like this:
service-job-12345abc
service-job-abc54321
other-job-54321cba
I need to find the name of the most recently created pod starting with "service-job-".
I found this thread, which helps with getting the name of the most recent pod in general. This one gets me the complete names of pods starting with a specific prefix.
What I struggle with is combining these two methods. With each one, I seem to lose the information I need to perform the other one.
Note: I am not an administrator of the cluster, so I cannot change anything about the naming etc. of the pods. The pods could also be in any possible state.
This works as you expect (the sort is oldest-first, so the most recently created matching pod is the last line):
kubectl get pods --sort-by=.metadata.creationTimestamp --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep service-job- | tail -1

Delete AKS deployment's running pod on regular basis (Job)

I have been struggling for some time to figure out how to accomplish the following:
I want to delete a running pod on an Azure Kubernetes Service cluster on a scheduled basis, so that it respawns from its deployment. This is needed so that the application re-reads configuration files stored on shared storage and shared with another application.
I have found out that Kubernetes Jobs might be handy to accomplish this, but there is a catch.
I can't figure out how to select the pod belonging to my deployment, as Kubernetes appends a random string to the deployment name, i.e.
deployment-name-546fcbf44f-wckh4
Using selectors to get my pod doesn't succeed, as there is no operator like LIKE:
kubectl get pods --field-selector metadata.name=deployment-name
No resources found
Looking at the official docs one way of doing this would be like so:
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
You'd need to modify job-name to match your job's name.
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job
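A rough sketch of how that applies here, assuming the deployment's pods carry a label such as app=deployment-name (check .spec.selector.matchLabels on your deployment for the real one):
kubectl get pods --selector=app=deployment-name -o jsonpath='{.items[*].metadata.name}'   # full pod names, random suffix included
kubectl delete pods --selector=app=deployment-name                                        # the Deployment recreates them immediately
On reasonably recent kubectl versions, kubectl rollout restart deployment/deployment-name achieves the same restart in a single step.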

kubernetes creates more pods than scale amount

I have encountered a strange situation in one of our clusters, where all of a sudden a number of new pods have been created so that we end up with a greater number of running pods than the scale amount.
So in the dashboard it will show
serviceX pods: 8/2
and then 8 running instances of that service
Questions
How can this possibly happen?
Is there an easy way to get rid of the extra pods (which all seem to be running)?
I have tried changing the scale amount in the dashboard and the extra pods do not disappear.
Both Pod and Deployment are full-fledged objects in the Kubernetes API. A Deployment manages creating Pods by means of ReplicaSets. What it boils down to is that the Deployment will create Pods with the spec taken from its template.
In your case the deployment named edgeservicepublic-svc is set to have 13 replicas. A Deployment is a kind of controller in Kubernetes. It is natural for this controller to continuously check that 13 pods are running. When a deployment is added to the cluster, it will automatically spin up the requested number of pods and then monitor them. If a pod dies, the deployment will automatically re-create it. Probably not enough pods were created at first, so the controller keeps working to reach the desired number of them.
To make sure your deployment works properly you can delete the deployment and verify that its pods are deleted. Also check whether you have set up an autoscaler ($ kubectl get hpa); if so, delete it. Then, if you want to change the deployment specification, edit the deployment configuration file and apply the changes ($ kubectl apply -f deployment_configuration_file.yaml).
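For reference, a sketch of that sequence in kubectl terms, using the edgeservicepublic-svc name from this thread (the HPA name is assumed to match the deployment; adjust to your own setup):
kubectl get hpa                                              # check for an autoscaler
kubectl delete hpa edgeservicepublic-svc                     # only if one exists
kubectl delete deployment edgeservicepublic-svc              # optional: start from a clean slate
kubectl get pods                                             # wait until the old pods are gone
kubectl apply -f deployment_configuration_file.yaml          # re-apply with the desired replica count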
Useful documentation about Deployments and autoscaling in the context of GKE.
EDIT:
Basically, first check for an autoscaler and delete it if it exists. I suggested deleting the deployment because you said you had tried to change the scale amount / number of replicas. If you want to be 100% sure that the changes are applied, delete the whole deployment and then recreate it with the desired number of replicas. Of course you can just apply changes in the deployment configuration file ($ kubectl edit ...) or ($ kubectl apply -f ...), but sometimes existing pods are not deleted, so the full recreate is safer. You could also create a new deployment with the same parameters but a different name.

Why do pod names have 5 random alphanumeric characters appended to their name when created through a kubernetes deployment?

Why do pod names have 5 random alphanumeric characters appended to their name when created through a Kubernetes deployment? Is it possible to get rid of them so that the pod names don't change? I am frequently deleting and creating deployments and would prefer that pod names don't change.
Update: I would like to have the same name because I am constantly deleting/recreating the same deployment and if the name doesn't change, then I can quickly reuse old commands to exec into/see the logs of the containers.
Reason for having the random alphanumeric suffix in pod names:
When we create a deployment, it does not directly create the pods (to match the replica count).
It creates a ReplicaSet (with name = deployment_name + a 10-character alphanumeric hash). But why the extra alphanumeric? When we upgrade the deployment, a new ReplicaSet is created with a new hash and the old one is kept as it is. This old ReplicaSet is used for rollbacks.
The created ReplicaSet then creates the pods (with name = replicaset_name + a 5-character alphanumeric suffix). But why the extra alphanumeric? We cannot have two pods with the same name.
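You can observe that naming chain directly; a quick sketch for a hypothetical deployment my-app labelled app=my-app:
kubectl get deployment my-app             # my-app
kubectl get replicaset -l app=my-app      # my-app-5b7644f7f6        (deployment name + hash)
kubectl get pods -l app=my-app            # my-app-5b7644f7f6-4hb8s  (ReplicaSet name + 5-character suffix)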
If your use case is reusing old commands frequently, then going for a StatefulSet is not a good solution. StatefulSets are heavyweight (ordered deployment, ordered termination, unique network names) and they are specifically designed to preserve state across restarts (in combination with persistent volumes).
There are a few tools which you can use:
stern
kube-fzf
Lightweight solution to your problem:
You can use labels to get the same pod across deployments:
kubectl get pods -l app=my_app,app_type=server
NAME READY STATUS RESTARTS AGE
my-app-5b7644f7f6-4hb8s 1/1 Running 0 22h
my-app-5b7644f7f6-72ssz 1/1 Running 0 22h
After this we can use some bash magic to get what we want, like below.
Final command:
kubectl get pods -l app=my_app,app_type=server -o name | rg "pod/" -r "" | head -n 1 | awk '{print "kubectl logs " $0}' | bash
Explanation:
get list of pod names
kubectl get pods -l app=my_app,app_type=server -o name
pod/my-app-5b7644f7f6-4hb8s
pod/my-app-5b7644f7f6-72ssz
replace pod/ using ripgrep or sed (rg "pod/" -r ""); a sed-only variant is sketched after this list
take only one pod using head -n 1
use awk to print the kubectl logs (or exec) command
pipe it to bash to execute
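If ripgrep isn't installed, the same pipeline can be written with sed and xargs (a sketch, using the same assumed labels as above):
kubectl get pods -l app=my_app,app_type=server -o name \
  | sed 's|^pod/||' \
  | head -n 1 \
  | xargs -I{} kubectl logs {}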
This is how Deployments work: every time a pod dies, the ReplicaSet creates a new pod with a different name to match the desired state, and a random suffix is attached to the pod name to keep names unique.
What you are trying to achieve is not possible with the Deployment object, as it is intended for stateless applications. Since you want to preserve the state (name) of the application, this is certainly possible with a StatefulSet.
So if you use a StatefulSet object to manage replicas, every pod will be created with a fixed naming convention, e.g. POD_NAME-0, POD_NAME-1, etc., i.e. an ordinal index is appended to the pod name. Also, when a pod dies, a new pod is created with the same name.
What you want to achieve is the ideal use case for a StatefulSet. Go for it.
If you deploy a pod from a Deployment object (kind: Deployment), then the deployment controller appends a unique suffix to every pod that is part of that specific deployment.
This is how the deployment controller looks up all the relevant pods of the respective deployment. This is needed for the rolling upgrade and rollback functions.
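That link is visible as a label on the pods themselves; a sketch assuming a deployment named my-app:
kubectl get pods -l app=my-app --show-labels   # shows pod-template-hash=5b7644f7f6, tying each pod to its ReplicaSet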

How to get the number of running pods in Prometheus

I am scraping Kubernetes metrics with Prometheus and need to extract the number of running pods.
I can see the container_last_seen metric, but how should I get the number of running pods? Can someone help with this?
If you need to get the number of running pods, you can use a metric from the list of pod metrics at https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md (to get info purely on pods, it makes sense to use pod-specific metrics).
For example if you need to get the number of pods per namespace, it'll be:
count(kube_pod_info{namespace="$namespace_name"}) by (namespace)
To get the number of all pods running on the cluster, then just do:
count(kube_pod_info)
Assuming you want to display that in Grafana according to your question tags, from this Kubernetes App Metrics dashboard for example:
count(count(container_memory_usage_bytes{container_name="$container", namespace="$namespace"}) by (pod_name))
You can just import the dashboard and play with the queries.
Depending on your configuration/deployment, you can adjust the $container and $namespace variables; grouping by (pod_name) and counting does the trick. A label other than pod_name can be used, as long as it's shared between the pods you want to count.
If you want to see only the number of "deployed" pods in some namespace, you can use the solutions in previous answers.
My use case was to see the current running pods in some namespace and below is my solution:
min_over_time(sum(group(kube_pod_container_status_ready{namespace="BC_NAME"}) by (pod,uid)) [5m:1m]) OR on() vector(0)
Please replace BC_NAME with your namespace name.
The timespan lets you fine-tune the data.
If no data is found (i.e. no pod is currently running), it returns 0.