I need to add the name of a kubernetes pod as a label to that pod when I create a pod using a replication controller. Is there a way to do that or should I do a patch once the pod is created?
There is no way to auto-promote the pod name into a label. You'll have to do that manually. Sorry.
Depending on what you're trying to do, a headless service may work for you:
http://kubernetes.io/v1.1/docs/user-guide/services.html#headless-services
Specify spec.clusterIP=None
DNS is then configured to return multiple A records (addresses) for the Service name, which point directly to the Pods backing the Service.
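For reference, a minimal headless Service could look like the sketch below (the my-app name, selector, and port are placeholders for illustration, not values from your setup):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  clusterIP: None       # headless: no virtual IP is allocated
  selector:
    app: my-app         # assumed label set by your replication controller
  ports:
    - port: 80
A DNS lookup of the Service name then resolves straight to the individual Pod IPs instead of a single cluster IP.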
Otherwise, you may want to follow progress on the PetSet proposal:
https://github.com/kubernetes/kubernetes/pull/18016
Related
I'm using hardware-dependent pods; in my K8s cluster, I instantiate my pods with a DaemonSet.
Now I want to access those pods with a URL like https://domain/{pod-hostname}/
My use case is a bit more tedious than this one: my pods' names are not predefined.
Moreover, I also need a REST entry point to list my pods' names or hostnames.
I published a Docker image to solve my issue: urielch/dyn-ingress
My YAML configuration is in the Docker doc.
This container adds a label to each pod, then uses this label to create a service per pod, and then updates an existing Ingress to reach each node with a path //
Feel free to test it.
The source code is here.
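For illustration only, the per-pod routing it ends up with can be pictured roughly like the Ingress sketch below; the host, paths, and per-pod Service names are hypothetical, and the resources dyn-ingress actually creates may differ:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dyn-ingress-example          # hypothetical name
spec:
  rules:
    - host: domain
      http:
        paths:
          - path: /pod-hostname-1/          # one path per pod...
            pathType: Prefix
            backend:
              service:
                name: pod-hostname-1-svc    # ...backed by a per-pod Service
                port:
                  number: 80
          - path: /pod-hostname-2/
            pathType: Prefix
            backend:
              service:
                name: pod-hostname-2-svc
                port:
                  number: 80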
If we have a requirement to modify a property of running pods, which is the recommended way, and what is the reason?
I guess once a pod is deployed as part of a Deployment, we can modify the pod's properties either by kubectl edit pod or by kubectl edit deploy.
I would like to understand whether there is any difference between these two actions.
Modify the Deployment not the Pod.
Why?
The Deployment describes the desired state for your pods. The Deployment controller continuously watches the Deployment object in a control loop. It reads the desired pod state from the Deployment specification and tries to ensure that state in the cluster. So, if you edit the pod and change something, the Deployment controller will overwrite the change on the next resync because your modification is not present in the Deployment specification.
For the most part you can't edit the pods. In the API definition of a PodSpec, the containers and initContainers fields are both described as "Cannot be updated." Almost all of the interesting things in a Pod spec are in the Container sub-objects.
The corollary to this is that you can't "modify properties of running pods" for the most part; you can only delete and replace them with new pods with the properties you want. If you edit the pod template in a deployment spec, Kubernetes will do exactly that.
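As a rough sketch (the my-app name, labels, and image tag are assumptions), the fields you would want to change live in the Deployment's pod template, and that is where the edit has to go:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:                  # the pod template: edit fields here, not on running Pods
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.1  # bumping this (or any template field) rolls out new Pods
Because every Pod is created from this template, a change made here persists across restarts, while a change made directly to one running Pod is lost as soon as that Pod is replaced.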
I was trying to create a file before the application comes up in the Kubernetes cluster, using init containers.
But when I set up the pod.yaml and try to apply it with "kubectl apply -f pod.yaml", it throws the error below:
[error screenshot from kubectl apply]
Like the error says, you cannot update a Pod by adding or removing containers. To quote the documentation ( https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement ):
Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like patch, and replace have some limitations.
This is because you don't usually create Pods directly; instead you use Deployments, Jobs, StatefulSets (and more), which are high-level resources that define Pod templates. When you modify the template, Kubernetes simply deletes the old Pod and then schedules the new version.
In your case:
You could delete the pod first, then create it again with the new specs you defined. But take into consideration that the Pod may be scheduled on a different node of the cluster (if you have more than one) and may get a different IP address, as Pods are disposable entities.
Or change your definition to a slightly more complex one, a Deployment ( https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ ), which can be changed as desired; each time you make a change to its definition, the old Pod will be removed and a new one will be scheduled.
From the spec of your Pod, I see that you are using a volume to share data between the init container and the main container. This is the optimal way, but you don't necessarily need to use a hostPath. If the only need for the volume is to share data between the init container and the other containers, you can simply use the emptyDir type, which acts as a temporary volume that can be shared between containers and that will be cleaned up when the Pod is removed from the cluster for any reason.
You can check the documentation here: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
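Putting both suggestions together, a hedged sketch (the names, image, and file path are assumptions rather than values from your spec) could look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: init-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: init-demo
  template:
    metadata:
      labels:
        app: init-demo
    spec:
      initContainers:
        - name: prepare-file
          image: busybox
          command: ["sh", "-c", "echo hello > /work-dir/app.conf"]  # create the file before the app starts
          volumeMounts:
            - name: workdir
              mountPath: /work-dir
      containers:
        - name: app
          image: nginx                         # assumed application image
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/app-config # the file created above is visible here
      volumes:
        - name: workdir
          emptyDir: {}                         # shared between init and main containers, removed with the Pod
Applying a change to this manifest replaces the Pod instead of trying to update it in place, which avoids the error you hit.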
I have set up Prometheus and Grafana for monitoring my Kubernetes cluster and everything works fine. Then I created a custom dashboard in Grafana for my application. The metric available in Prometheus is as follows, and I have added the same in Grafana:
sum(irate(container_cpu_usage_seconds_total{namespace="test", pod_name="my-app-65c7d6576b-5pgjq", container_name!="POD"}[1m])) by (container_name)
The issue is that my application is running as a pod in Kubernetes, so when the pod is deleted or recreated, the name of the pod changes and no longer matches the pod name specified in the metric above ("my-app-65c7d6576b-5pgjq"). The metric above then stops returning data, and I have to add a new metric again in Grafana. Please let me know how I can overcome this situation.
Answer was provided by manu thankachan:
I have done it. I made some changes in the query as follows:
sum(irate(container_cpu_usage_seconds_total{namespace="test", container_name="my-app", container_name!="POD"}[1m])) by (container_name)
If the pod is created directly (not as part of a Deployment), then the pod name is exactly the one we specified.
If the pod is part of a Deployment, it will have a unique string from the ReplicaSet and will also end with 5 random characters to keep the name unique.
So always try to use the container_name label, or, if your Kubernetes version is > v1.16.0, use the container label.
I need to set the IP and/or other metadata of the deployment to be available as env vars to each pod under the same deployment...
ex:
having a 3-replica deployment.
I need to set env vars with the IP addresses of each of the two other pods.
I need to set the hostname of each of the other two pods.
Currently I have:
HOSTNAME=deplymentNAME-d74cf6f77-q57jx
deplymentNAME_PORT=tcp://10.152.183.27:13000
need to add:
HOSTNAME2=deplymentNAME-d74cf6f77-y67kl
HOSTNAME3=deplymentNAME-d74cf6f77-i90ro
deplymentNAME_PORT2=tcp://10.152.183.45:13000
deplymentNAME_PORT3=tcp://10.152.16.28:13000
Those should be available on the three pods respectively.
As of now each pod has only its own data; we need to spread the others' data to the other replicas in the same deployment.
Well,
I figured out that my application is stateful, not stateless, since it requires a fixed/stable hostname, storage, etc...
I have decided to use the StatefulSet controller.
references:
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#statefulset-v1-apps
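As a minimal sketch of that direction (the myapp name, image, and port are assumptions), a StatefulSet paired with a headless Service gives every replica a stable, predictable identity:
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None               # headless: each Pod gets its own DNS record
  selector:
    app: myapp
  ports:
    - port: 13000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-headless   # ties the Pods' DNS names to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest   # assumed image
          ports:
            - containerPort: 13000
Each replica then gets an ordinal hostname (myapp-0, myapp-1, myapp-2) and a stable DNS entry such as myapp-0.myapp-headless, so the pods can address each other without having to discover random Deployment-style suffixes.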