who was the last to modify a pod in a namespace - command

Is there a way to pull out the username of whoever last updated a pod in a namespace?
I have already tried the commands below, but none of them gets me the username:
helm get values *myservice*
kubectl get pod *mypod*

If you are the cluster-admin, then you can check the Kubernetes audit logs and determine the activities done in any particular namespace.
You can find more about auditing in the Kubernetes documentation.
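For example, if audit logging is enabled with a policy that records requests to pods, something along these lines could surface the last user who updated or patched a pod. This is only a sketch: the log path, namespace name, and use of jq are assumptions about your setup.
# run on the control-plane node; the audit log path varies per cluster
grep '"resource":"pods"' /var/log/kubernetes/audit.log \
  | jq -r 'select(.objectRef.namespace == "my-namespace" and (.verb == "update" or .verb == "patch"))
           | "\(.stageTimestamp) \(.user.username) \(.verb) \(.objectRef.name)"' \
  | tail -n 1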

Is it possible to find out resource update time from kubemaster?

We can see updates to deployment using command:
kubectl rollout history deploy/<name>
We can also see updated config using:
kubectl rollout history --revision=<revision-#> deploy/<name>
I'm not sure how to find out given revision's update time. Is it possible to find it?
If you are storing events from the namespace or the API server logs, you might be able to find out. One crude way is to look at the creation time of the ReplicaSets belonging to the deployment:
kubectl get replicaset
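For example, something like this lists each ReplicaSet of the deployment with its creation timestamp, which roughly corresponds to when that revision was rolled out. The label selector app=<deployment-label> is an assumption about how your deployment labels its pods:
kubectl get replicaset -n <namespace> -l app=<deployment-label> \
  -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp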

Delete AKS deployment's running pod on regular basis (Job)

I have been struggling for some time to figure out how to accomplish the following:
I want to delete a running pod on an Azure Kubernetes Service cluster on a scheduled basis, so that it respawns from the deployment. This is required so that the application re-reads configuration files stored on shared storage and shared with another application.
I have found out that Kubernetes Jobs might be handy to accomplish this, but there is a catch.
I can't figure out how to select the pod belonging to my deployment, as Kubernetes appends a random string to the deployment name, i.e.
deployment-name-546fcbf44f-wckh4
Using selectors to get my pod doesn't succeed, as there is no operator like LIKE:
kubectl get pods --field-selector metadata.name=deployment-name
No resources found
Looking at the official docs, one way of doing this would be like so:
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
You'd need to modify job-name to match your job's name.
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job
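For a pod owned by a Deployment rather than a Job, the same idea applies using the Deployment's pod labels instead of job-name. A rough sketch, assuming the pod template carries the label app=myapp (adjust the selector to whatever labels your deployment actually uses):
pods=$(kubectl get pods --selector=app=myapp --output=jsonpath='{.items[*].metadata.name}')
kubectl delete pod $pods
# or, more simply, delete by the selector directly:
kubectl delete pods --selector=app=myapp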

Kubernetes Secret is persisting through deletes

I'm trying to clean up some leftover data from a failed deployment of rabbitmq. As such, I have 3 secrets that were being used by rabbit services that never fully started. Whenever I try to delete these using kubectl delete secret, they get recreated instantly with a similar name (even when using --force).
I do not see any services or pods that are using these secrets, so there shouldn't be any reason they are persisting.
The reason they wouldn't delete is that they were associated with a service account.
I found this by looking at their YAML, which mentioned they were for a service account.
I then ran
kubectl get serviceaccounts
which returned a list of service accounts with matching names. After running
kubectl delete serviceaccounts <accountName>
the secrets removed themselves.
However, if they do not remove themselves, you can still get and delete them with:
kubectl get secrets
kubectl delete secret <secret name>
If you do not see the item in question, you may want to append --all-namespaces to see "all" of them, as by default kubectl only looks in your current namespace.
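To confirm whether a stubborn secret really is a service account token before deleting it, you can inspect it directly. A sketch, where the secret name is hypothetical:
kubectl get secret rabbitmq-token-abcde -o yaml
# a service-account token secret will show:
#   type: kubernetes.io/service-account-token
# and an annotation naming the owning service account:
#   kubernetes.io/service-account.name: <accountName>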

Can I get or delete a pod/resource by UID?

The problem
It seems that deleting a pod by UID used to be possible, but has since been removed (https://github.com/kubernetes/kubernetes/issues/40121).
It also seems one cannot get a pod by its UID either (https://github.com/kubernetes/kubernetes/issues/20572).
(Why did they remove deleting by UID? What is the use of the UID then?)
Why do I want to use UID in the first place, and not, say, name?
Because I need something like replicas, and none of the controllers (like Job, Deployment, etc.) suits my case.
The good thing about UIDs is that Kubernetes generates them -- I don't have to worry about differentiating the pods then; I just save the returned UID. So it would be great if I could use it later to delete that particular pod ("replica").
The question
Is it really not possible to use the UID to get/delete?
Is my only option then to take care of differentiating the pods myself (e.g. keep a counter or generate a unique id myself and set it as the name or label)?
This works as a get by UID:
Use kubectl custom-columns
List all pod names along with their UIDs in a namespace:
$ kubectl get pods -n <namespace> -o custom-columns=PodName:.metadata.name,PodUID:.metadata.uid
source: How do I get the pod ID in Kubernetes?
Combining this with awk and xargs, you can delete by UID using custom-columns too, as sketched below.
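A rough sketch of that pipeline; <namespace> and <target-uid> are placeholders you would fill in:
kubectl get pods -n <namespace> --no-headers \
  -o custom-columns=PodName:.metadata.name,PodUID:.metadata.uid \
  | awk -v uid='<target-uid>' '$2 == uid {print $1}' \
  | xargs -r kubectl delete pod -n <namespace>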
I assume you are writing a controller to manage a set of replicas using a TPR or CRD
The short answer is that you should not track pods by UID or name, but by using a selector, much like Deployments/ReplicaSets/Services do. If pods match that selector, you can check the Pod's metadata or Pod spec to see if it needs to be deleted for some reason.
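As a rough illustration of the selector approach (the label my-app=worker is hypothetical), the controller can list its replicas by selector and decide from their metadata which one to delete:
kubectl get pods --selector=my-app=worker \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,STARTED:.status.startTime
# decide from this output (or from the Pod spec) which replica to delete, then:
kubectl delete pod <name-from-the-list>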

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added in spec.containers of a pod using the Kubernetes dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others presumably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit since the Deployment will roll out updated versions of the Pod(s) that have the variable changes and kill the older Pods that do not. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.
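For example, with a Deployment you can change an environment variable directly and let the rollout replace the Pods; a sketch where the Deployment name and variable are hypothetical:
kubectl set env deployment/my-deployment MY_VAR=new-value
# the Deployment creates new Pods with the updated variable and terminates the old ones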
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
We cannot edit the environment variables, resource limits, or service account of a pod that is running live.
We can, however, edit/update the image name, tolerations, active deadline seconds, etc.
A Deployment, on the other hand, can be edited easily, because the Pod is defined as a child template within the Deployment specification.
In order to "edit" the running pod with desired changes, the following approach can be used.
Extract the pod definition to a file, Make necessary changes, Delete the existing pod, and Create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, the same error came up for me. However, when I retried after some time, it worked.
It feels like some update was going on at the same time earlier, so I tried editing the YAML quickly and applying the changes, and it worked.