The problem
It seems that deleting a pod by UID used to be possible, but has been removed (https://github.com/kubernetes/kubernetes/issues/40121).
It also seems one cannot get a pod by its UID either (https://github.com/kubernetes/kubernetes/issues/20572).
(Why did they remove deleting by UID? What is the use of the UID then?)
Why do I want to use UID in the first place, and not, say, name?
Because I need something like replicas, and none of the controllers (like Job, Deployment, etc.) suits my case.
The good thing about UIDs is that Kubernetes generates them -- I don't have to worry about differentiating the pods myself; I just save the returned UID. So it would be great if I could use it later to delete that particular pod ("replica").
The question
Is it really not possible to use the UID to get/delete?
Is my only option then to take care of differentiating the pods myself (e.g. keep a counter or generate a unique id myself and set it as the name or label)?
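(As an aside on that last option: the API server can generate the unique suffix for you via metadata.generateName, so you don't need your own counter; a sketch, with illustrative names:)
# kubectl create honors generateName and returns the generated pod name
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  generateName: my-replica-
spec:
  containers:
  - name: app
    image: nginx
EOF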
This works as a get by UID:
Use kubectl custom-columns
List all pod names along with their UIDs in a namespace:
$ kubectl get pods -n <namespace> -o custom-columns=PodName:.metadata.name,PodUID:.metadata.uid
source: How do I get the pod ID in Kubernetes?
Using awk and xargs, you can delete by UID with custom-columns too.
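For example, a sketch of such a delete-by-UID pipeline (TARGET_UID and the namespace are placeholders):
# Print name/UID pairs, pick the row whose UID matches, delete that pod by name
kubectl get pods -n <namespace> --no-headers -o custom-columns=PodName:.metadata.name,PodUID:.metadata.uid \
  | awk -v uid="$TARGET_UID" '$2 == uid {print $1}' \
  | xargs -r kubectl delete pod -n <namespace>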
I assume you are writing a controller to manage a set of replicas using a TPR or CRD.
The short answer is that you should not track pods by UID or name but by using a selector, much like Deployments/ReplicaSets/Services do. If pods match that selector, you can check the Pod's metadata or Pod spec to see if it needs to be deleted for some reason.
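For instance, with an illustrative label (the my-controller key is an assumption here, not a convention your controller must follow):
# Stamp each pod your controller creates with a label, then select on it
kubectl get pods -n <namespace> -l my-controller=my-replicas
kubectl delete pods -n <namespace> -l my-controller=my-replicas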
Related
In my namespace, I have several pods named with the same prefix, followed by a random string. There are also other pods, named differently. The result of kubectl get pods would look something like this:
service-job-12345abc
service-job-abc54321
other-job-54321cba
I need to find the name of the most recently created pod starting with "service-job-".
I found this thread, which helps with getting the name of the most recent pod in general. This one gets me the complete names of pods starting with a specific prefix.
What I struggle with is combining these two methods. With each one, I seem to lose the information I need to perform the other one.
Note: I am not an administrator of the cluster, so I cannot change anything about the naming etc. of the pods. The pods could also be in any possible state.
This works as you expect:
kubectl get pods --sort-by=.metadata.creationTimestamp --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep service-job- | tail -n 1
(--sort-by sorts ascending by creation time, so the most recently created matching pod is the last line; hence tail rather than head.)
Is there a way to pull out the username of the last person who updated a pod in a namespace?
I have already tried the commands below, but none of them gets me the user name:
helm get values *myservice*
kubectl get pod *mypod*
If you are the cluster-admin, then you can check the Kubernetes audit logs and determine the activities performed in any particular namespace.
You can find more about auditing here.
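For example, a minimal audit-policy sketch that records (at the Metadata level, which includes the requesting user) every request that touches pods; note that wiring this policy into the API server is a cluster-admin task and the details depend on your cluster:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log request metadata, including the authenticated user, for pod changes
- level: Metadata
  resources:
  - group: ""    # core API group
    resources: ["pods"]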
I am working with Kubernetes and kubectl commands, and I am able to get a list of namespaces and then the resources inside those namespaces. The question is: is there an effective way to monitor all resources (CRDs especially) in a certain namespace for changes? I know I could do this:
kubectl get myobjecttype -n <user-account-1>
and then check timestamps with a separate command, but that seems resource-taxing.
You might be looking for the Kubernetes Watch API.
In fact, you make a List request (see API reference for e.g. Pods) and add the watch=1 query parameter to get a continuous stream of changes to the specified resources.
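As an illustration, here is the raw form of such a request (assuming kubectl proxy is running on its default port; the namespace is a placeholder):
# Open a local proxy to the API server, then stream pod changes as JSON events
kubectl proxy &
curl "http://127.0.0.1:8001/api/v1/namespaces/<user-account-1>/pods?watch=1"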
kubectl also supports watches with the -w/--watch flag:
kubectl get myobjecttype -n <user-account-1> -w
For debugging and testing purposes, I'd like to find the most convenient way to launch Kubernetes pods and alter their specification on the fly.
The launching part is quite easy with imperative commands.
Running
kubectl run nginx-test --image nginx --restart=Never
gives me exactly what I want: a single pod not managed by any controller like a Deployment or ReplicaSet. It is easy to play with and clean up when needed.
However, when I try to edit the spec with
kubectl edit po nginx-test
I'm getting the following warning:
pods "nginx-test" was not valid:
* spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
i.e. only a limited set of Pod spec fields is editable at runtime.
OPTIONS FOUND SO FAR:
Saving the Pod spec into a file:
kubectl get po nginx-test -oyaml > nginx-test.yaml
then editing it and recreating it with
kubectl apply -f nginx-test.yaml
A bit heavyweight for changing just one field though.
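If the change can be scripted, the same save-edit-recreate loop collapses into one pipeline (a sketch; the sed expression is a placeholder for whatever edit you need):
# Delete and recreate the pod in one step with the edited spec
kubectl get pod nginx-test -o yaml | sed 's/<old-value>/<new-value>/' | kubectl replace --force -f -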
Creating a Deployment instead of a single Pod and then editing the spec section in the Deployment itself.
The cons are:
an additional API object is needed (the Deployment), which you should not forget to clean up when you are done
the Pod names are autogenerated in the form nginx-test-xxxxxxxxx-xxxx and are less convenient to work with.
So is there any simpler option (or possibly some elegant workaround) for editing an arbitrary field in the Pod spec?
I would appreciate any suggestion.
You should absolutely use a Deployment here.
For the use case you're describing, most of the interesting fields on a Pod cannot be updated, so you need to manually delete and recreate the pod yourself. A Deployment manages that for you. If a Deployment owns a Pod, and you delete the Deployment, Kubernetes knows on its own to delete the matching Pod, so there's not really any more work.
(There's not really any reason to want a bare pod; you almost always want one of the higher-level controllers. The one exception I can think of is using kubectl run to get a debugging shell inside the cluster.)
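For reference, the Deployment equivalent of the kubectl run command above is a one-liner as well:
# Creates a one-replica Deployment; editing its pod template triggers a rollout
kubectl create deployment nginx-test --image=nginx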
The Pod name being generated can be a minor hassle. One trick that's useful here: as of reasonably recent kubectl, you can give the deployment name to commands like kubectl logs:
kubectl logs deployment/nginx-test
There are also various "dashboard" type tools out there that will let you browse your current set of pods, so you can do things like read logs without having to copy-and-paste the full pod name. You may also be able to set up tab completion for kubectl, and type
kubectl logs nginx-test<TAB>
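Setting up completion is typically a one-liner in your shell profile (bash shown; kubectl completion also supports other shells):
# Enable kubectl tab completion for the current bash session
source <(kubectl completion bash)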
The env element added in spec.containers of a pod using the Kubernetes dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others presumably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, you can change the environment variables in that object with kubectl edit: the Deployment will roll out updated versions of the Pod(s) with the variable changes and kill the older Pods that do not have them. Obviously, that method does not change the Pod in place, but it is one way to get what you need.
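For example, with kubectl set env (the deployment name and variable are placeholders):
# Updates the pod template's env and triggers a rolling update
kubectl set env deployment/<my-deployment> MY_VAR=my-value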
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
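A minimal sketch of the volume-mount approach (all names here are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /etc/config    # files here are refreshed when the ConfigMap changes
  volumes:
  - name: config
    configMap:
      name: my-config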
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
But we can definitely edit/update the image name, tolerations, active deadline seconds, etc.
However, the Deployment can be edited easily, because the Pod is a child template inside the Deployment specification.
In order to "edit" the running pod with desired changes, the following approach can be used.
Extract the pod definition to a file, make the necessary changes, delete the existing pod, and create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, I got the same error. It felt like some other update was going on at the same time; when I retried after a while, editing the YAML quickly and applying the changes, it worked.