Kubernetes Secret is persisting through deletes

I'm trying to clean up some leftover data from a failed deployment of RabbitMQ. I have 3 secrets that were being used by RabbitMQ services that never fully started. Whenever I try to delete these using kubectl delete secret, they instantly get recreated with a similar name (even when using --force).
I do not see any services or pods that are using these secrets, so there shouldn't be any reason they are persisting.

The reason they wouldn't delete is that they were associated with a service account.
I found this by looking at their YAML, which mentioned they were for a service account.
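You can confirm this yourself by inspecting one of the secrets; the names below are placeholders:
kubectl get secret <secret-name> -o yaml
# look for:
#   type: kubernetes.io/service-account-token
# and the annotation:
#   kubernetes.io/service-account.name: <accountName>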
I then ran
kubectl get serviceaccounts
which returned a list of accounts that had identical names. After running
kubectl delete serviceaccounts <accountName>
The secrets removed themselves.
However, if they do not remove themselves, you can still list and delete them with
kubectl get secrets
kubectl delete secret <secret name>
If you do not see the secret in question, you may want to append --all-namespaces to see "all" of them, since by default kubectl only looks at your current namespace (usually default).
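For example (namespace and secret name are placeholders):
kubectl get secrets --all-namespaces
kubectl delete secret <secret name> -n <namespace>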

Related

I have deleted a StatefulSet by accident and need to re-create it

So, dumb me has deleted a very important StatefulSet (I was working in the wrong environment) and I need to recover it.
I have the .yaml file and the description (what you get when you click Edit in OpenLens).
This StatefulSet is used for the database and I cannot lose the data.
The PVCs and PVs are still there, and I have not touched anything for fear of losing data.
As you can probably tell, I am not very familiar with Kubernetes and need help restoring my StatefulSet without losing data in the process.
As a side note, I tried just kubectl apply -f <file> in our dev environment and the data gets lost.
To restore the StatefulSet without losing data, first check the status of the PersistentVolumeClaims (PVCs) and PersistentVolumes (PVs) associated with it by running kubectl get pvc and kubectl get pv. Once you have verified that the PVCs and PVs are intact, you can recreate the StatefulSet from the same YAML file with kubectl apply -f. A StatefulSet claims its volumes by name (<claim template name>-<statefulset name>-<ordinal>), so as long as the StatefulSet name and volumeClaimTemplates are unchanged, the recreated pods will bind to the existing PVCs and keep their data. If you want to ensure that the StatefulSet is restored exactly as it was before it was deleted, you can use kubectl replace -f instead. This replaces the existing StatefulSet with the one defined in the .yaml file without changing any of the data on the associated PVCs and PVs.
To ensure that your data is not lost in the process, it is recommended that you create a backup before performing any of the above commands.
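A minimal sketch of that flow, assuming the manifest is saved as statefulset.yaml and the names in it match the original:
kubectl get pvc                            # claims from the old StatefulSet should still be Bound
kubectl get pv                             # reclaim policy should ideally be Retain
kubectl apply -f statefulset.yaml          # recreate the StatefulSet
kubectl rollout status statefulset/<name>  # watch the pods come back and re-bind the PVCs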

who was the last to modify a pod in a namespace

is there a way to pull out the username who was the last one that updated a pod in a namespace?
I have already tried the commands below, but none of them gets me the user name:
helm get values *myservice*
kubectl get pod *mypod*
If you are the cluster-admin, then you can check the Kubernetes audit logs and determine the activities done in any particular namespace.
You can find more about auditing in the Kubernetes documentation.
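As a rough sketch, assuming audit logging is enabled on the API server (via --audit-policy-file and --audit-log-path) and the log is written as JSON lines to /var/log/kubernetes/audit.log; the namespace name is a placeholder:
# audit-policy.yaml: record who creates/updates/patches/deletes pods
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: ""
    resources: ["pods"]
Each audit event carries user.username, verb, and objectRef, so the last pod modification in a namespace can be pulled out with something like:
grep '"resource":"pods"' /var/log/kubernetes/audit.log \
  | grep '"namespace":"mynamespace"' \
  | grep -E '"verb":"(update|patch)"' \
  | tail -n 1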

Accidentally deleted Kubernetes namespace

I have a Kubernetes cluster on Google Cloud. I accidentally deleted a namespace which had a few pods running in it. Luckily, the pods are still running, but the namespace is in the Terminating state.
Is there a way to restore it back to active state? If not, what would the fate of my pods running in this namespace be?
Thanks
A few interesting articles about backing up and restoring a Kubernetes cluster using various tools:
https://medium.com/@pmvk/kubernetes-backups-and-recovery-efc33180e89d
https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487
https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-heptio-ark
https://www.revolgy.com/blog/kubernetes-in-production-snapshotting-cluster-state
I suspect they will be more useful in the future than in your current situation. If you don't have any backup, unfortunately there isn't much you can do.
Please note that all of those articles use namespace deletion to simulate a disaster scenario, so you can imagine the consequences of such an operation. The results may not be visible immediately and you may see your pods running for some time, but eventually namespace deletion removes all cluster resources in the given namespace, including LoadBalancers and PersistentVolumes. It may take some time. Some resources may not be deleted right away because they are still in use by other resources (e.g. a PersistentVolume used by a running Pod).
You can try running a script that dumps all resources that are still available to YAML files; however, some modification may be needed, as you will no longer be able to list objects as belonging to the deleted namespace. You may need to add the --all-namespaces flag to list them.
You may also try to dump any resource that is still available manually. If you can still see some resources like Pods, Deployments, etc., and kubectl get works on them, you can save their definitions to a YAML file:
kubectl get deployment nginx-deployment -o yaml > deployment_backup.yaml
Once you have your resources backed up you should be able to recreate your cluster more easily.
Back up most resource configuration regularly:
kubectl get all --all-namespaces -o yaml > all-deploy-resources.yaml
Note that kubectl get all does not include every resource type; Secrets, ConfigMaps, PersistentVolumes, and custom resources are not returned, for example.
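To capture some of the types that kubectl get all misses, a command along these lines can be added (a sketch; adjust the resource list to your cluster):
kubectl get secrets,configmaps,serviceaccounts,pvc,pv,ingresses --all-namespaces -o yaml > extra-resources.yaml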
Another way is Ark/Velero:
https://github.com/vmware-tanzu/velero (Backup and migrate Kubernetes applications and their persistent volumes https://velero.io)
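With the velero CLI installed and pointed at a backup location, the basic flow looks roughly like this (backup and namespace names are placeholders):
velero backup create my-backup --include-namespaces my-namespace
velero backup get
velero restore create --from-backup my-backup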

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added under spec.containers of a pod using the K8s dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others probably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit since the Deployment will roll out updated versions of the Pod(s) that have the variable changes and kill the older Pods that do not. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.
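For example, if the Pods are owned by a Deployment, a change like the following (deployment and variable names here are made up for illustration) rolls out new Pods with the updated environment:
kubectl set env deployment/my-deployment MY_SETTING=new-value
# or edit the Pod template interactively; saving triggers a rollout
kubectl edit deployment my-deployment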
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
We can, however, edit/update the image name, tolerations, active deadline seconds, etc.
The Deployment, on the other hand, can be easily edited, because the Pod is a child template of the Deployment specification.
In order to "edit" a running pod with the desired changes, the following approach can be used:
Extract the pod definition to a file, make the necessary changes, delete the existing pod, and create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
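If you prefer a single step, kubectl replace --force performs the delete-and-recreate for you (same caveat: the pod briefly disappears):
kubectl replace --force -f my-new-pod.yaml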
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, I got the same error. When I retried after some time, it worked.
It feels like some other update was going on at the same time, so I edited the YAML quickly, applied the changes, and it worked.

Update kubernetes secrets doesn't update running container env vars

Currently, when updating a Kubernetes secrets file, I need to run kubectl apply -f my-secrets.yaml in order to apply the changes. If there was a running container, it would still be using the old secrets. In order to apply the new secrets to the running container, I currently run kubectl replace -f my-pod.yaml.
I was wondering if this is the best way to update a running container's secret, or am I missing something.
Thanks.
For k8s versions > v1.15:
kubectl rollout restart deployment $deploymentname
This will restart the pods incrementally without causing downtime.
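If you want to wait for the restart to finish, this works as a follow-up:
kubectl rollout status deployment $deploymentname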
The secret docs for users say this:
Mounted Secrets are updated automatically
When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. The update time depends on the kubelet syncing period.
Mounted secrets are updated. The question is when. The fact that the content of a secret has been updated does not mean that your application automatically consumes it. It is the job of your application to watch for file changes in this scenario and act accordingly. With that in mind, you currently need to do a little more work. One approach would be to run a scheduled job in Kubernetes which talks to the Kubernetes API to initiate a new rollout of your deployment. That way you could theoretically achieve what you want and renew your secrets. It is somewhat inelegant, but it is the only way I have in mind at the moment. I still need to read up more on the Kubernetes concepts myself, so please bear with me.
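A rough sketch of that scheduled-rollout idea; the schedule, image, names, and the RBAC behind the service account are assumptions, not something from the original answer:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-myapp
spec:
  schedule: "0 3 * * *"                        # once a day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: rollout-restarter    # needs RBAC allowing it to patch the Deployment
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest           # assumption: any image that ships kubectl works
            command: ["kubectl", "rollout", "restart", "deployment/myapp"]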
Assuming we have a running pod mypod with the secret mysecret mounted in the pod spec:
Delete the existing secret:
kubectl delete secret mysecret
Recreate the same secret from the updated file(s):
kubectl create secret generic mysecret --from-file=<updated file/s>
then do
kubectl apply -f ./mypod.yaml
Check the secrets inside mypod; they will be updated.
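As a side note, the secret can also be updated in place without deleting it, by piping a client-side dry-run manifest into apply (a common pattern; the file name is a placeholder):
kubectl create secret generic mysecret --from-file=<updated file> --dry-run=client -o yaml | kubectl apply -f -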
In case anyone (like me) wants to force a rolling update of the pods that use those secrets: the trick, taken from a GitHub issue on the topic, is to update an env variable inside the container; Kubernetes will then automatically do a rolling update of the pods.
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'
By design, Kubernetes won't push Secret updates into the environment variables of running Pods. If you want a Pod to pick up a new Secret value through its environment, you have to destroy and recreate the Pod. You can read more about it in the Secrets documentation.