k8s partial default recovery via yaml repo? - kubernetes

I am new to Kubernetes and wondering if there is a universal repo from which I can fetch and apply the default YAML configurations in case I accidentally delete them. For example, if I accidentally delete a default resource such as an APIService, a networking resource or anything else, could I restore it simply by:
kubectl apply -f <k8s-resource-repo-uri>
And if there is such a repo, how do I identify the right YAML for the relevant Kubernetes resource version?
P.S.
I know that I can back up the whole etcd and restore it in case of issues, but I am wondering if there is a universal YAML repo location from which I can partially restore specific default resources.
For example all the resources I get with:
kubectl get apiservices
or
kubectl api-resources
I can delete them, but can I restore them easily from some universal place?
For example I can do:
kubectl delete apiservices.apiregistration.k8s.io v1.networking.k8s.io
But I don't know how to recover afterwards. Any ideas?

Most of the critical kube-system pods tend to be "static pods".
This means that they get restored automatically if you happen to just kubectl delete them.
https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/
For most other resources, e.g. your workloads, it's good practice to have your own git repo with all the YAML files, so that you can redeploy them in case they're ever lost.
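For built-in API registrations such as v1.networking.k8s.io, the kube-apiserver typically re-registers them on its own, so deleting the APIService object is usually recoverable. Still, a simple precaution is to export a resource to YAML before touching it so you can re-apply it later; a minimal sketch (the file name is just an example):
kubectl get apiservice v1.networking.k8s.io -o yaml > v1.networking.k8s.io.yaml
# later, if the object was deleted:
kubectl apply -f v1.networking.k8s.io.yaml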

Related

K8s get back my yaml files from running cluster

Okay first let me say please don't judge. Believe me, I am kicking myself in the ass.
So I lost the hard disk on my laptop, which held the Kubernetes YAML files that I ran against a Kubernetes cloud cluster. I don't have the latest backup, which is the problem.
Does anyone know how to get just the YAML I ran against the K8s cloud server? I can get to the cluster and run kubectl get pod my-pod -o yaml but, of course, it adds a lot of things. I am just looking for the YAML that I ran.
I am stressing here and have learned my lesson. Backup, Backup and verify Backup.
You can use this and extend it to your needs:
kubectl get [resource type] -n [namespace] [resource Name] -o yaml > [output.yaml]
The -o yaml will do the job
Note
You will get some extra information added by your cloud provider, like history, version, and more.
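To dump several resource types from one namespace into a single backup file, something like this works (the namespace, resource types and file name are just examples):
kubectl get deployment,service,configmap,secret -n my-namespace -o yaml > my-namespace-backup.yaml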
Lens
https://k8slens.dev/
You can use Lens, which allows you to view and edit your resources, so you will be able to copy the YAML from it.

Safest and best way to retrieve the current configuration file in yaml for a single or bunch of resources in a kubernetes cluster

I applied a file xyz.yml some time ago in EKS (Amazon Elastic Kubernetes Service) to deploy a StatefulSet pod from my local machine. This file is versioned in GitHub. However, a few manual applies were made with kubectl against the cluster for this file after that, so it looks like the source file I have right now in GitHub might be out of sync with the cluster.
Is there a safe and easy way to retrieve this file in YAML directly from the cluster using kubectl, so that I can use it from now on in my GitHub source code? I do not want to make changes in my GitHub source code and then apply them to the cluster, as the file might be out of sync.
If I could somehow retrieve the file in YAML directly from the Kubernetes cluster, that would really help solve the problem. I tried --dry-run and kubectl diff, but they don't seem to help.
I am new to Kubernetes, so I do not want to experiment with commands directly on the cluster.
Any help here would be greatly appreciated.
Cheers,
Ashley
You can try with edit:
kubectl -n <namespace name> edit [deployment, pod, svc] <name>
You can get the current YAML of individual resources with:
kubectl get <resource> -o yaml
But you can't get all the resources that you created with this file at once, because Kubernetes doesn't keep track of which manifest file a resource definition came from.
So you would need to check which resources were created by your file and get them individually as above. Or, if all the resources in this file share common labels, you could get them all at once by selecting on those labels, as in the sketch below.
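For example, assuming the StatefulSet and its related objects all carry a common label such as app=my-app (the label, namespace and file name here are hypothetical):
kubectl get statefulset,service,configmap -n my-namespace -l app=my-app -o yaml > from-cluster.yaml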

Can I see a rollout in more detail?

I was doing a practice exam on the website killer.sh and ran into a question I feel I solved in a hacky way. Given a deployment that has had a bad rollout, revert to the last revision that didn't have any issues. If I check a deployment's rollout history, for example with the command:
kubectl rollout history deployment mydep
I get a small table with revision numbers and "change-cause" commands. Is there any way to check the changes made to the deployment's YAML for a specific revision? I was stumped trying to figure out which specific revision didn't have the error in it.
Behind the scenes a Deployment creates a ReplicaSet whose deployment.kubernetes.io/revision annotation is set to the REVISION you see in kubectl rollout history deployment mydep, so you can look at and diff the old ReplicaSets associated with the Deployment.
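You can also ask kubectl for the pod template recorded under a particular revision and then roll back to the revision that looks good (revision number 2 is just an example):
kubectl rollout history deployment mydep --revision=2
kubectl rollout undo deployment mydep --to-revision=2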
On the other hand, being an eventually-consistent system, Kubernetes has no notion of a "good" or "bad" state, so it can't know what the last successful deployment was, for example; that's why deployment tools like Helm, kapp, etc. exist.
Kubernetes does not store more than what is necessary for it to operate, and most of the time that is just the desired state, because Kubernetes is not a version control system.
This is precisely why you need a version control system coupled with tools like helm or kustomize, where you store the deployment YAML files and apply them to the cluster with each new version of the software. This helps in going back in history to dig out details when things break.
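As a rough illustration, a minimal kustomize layout keeps the manifests in Git and pins the image tag per release (file and image names are hypothetical):
# kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
images:
  - name: myapp
    newTag: "1.2.0"
Apply it with kubectl apply -k . from the directory containing kustomization.yaml.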
You can record the last executed command that changed the deployment with the --record option. When using --record, the last executed command (the change-cause) is saved in the deployment's metadata.annotations. You will not see this in your local YAML file, but when you export the deployment as YAML you will notice the change.
Use the --record option like below:
kubectl create deployment <deployment-name> --image=<someimage> --dry-run=client -o yaml > testdeployment.yaml
kubectl create -f testdeployment.yaml --record
or
kubectl set image deployment/<deployment-name> <container-name>=<new-image>:<new-tag> --record
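With --record set, the exported deployment YAML carries the change-cause annotation, roughly like this (the values shown are illustrative):
metadata:
  annotations:
    kubernetes.io/change-cause: kubectl set image deployment/mydep app=myimage:2.0 --record=true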

Location of a kubernetes objects definition file

How do I find the location of a Kubernetes object's definition file?
I know the name of a Kubernetes deployment and want to make some changes directly to its definition file instead of using kubectl edit deployment <name>.
The object definitions are stored internally in Kubernetes in replicated storage that's not directly accessible. If you do change an object definition, you would still need to trigger the rest of the Kubernetes update sequence when an object changes.
Typical practice is to keep the Kubernetes YAML files in source control. You can then edit these locally, and use kubectl apply -f to send them to the cluster. If you don't have them then you can run commands like kubectl get deployment depl-name -o yaml to get them out, and then check in the results to your source control repository.
If you really want to edit YAML definitions in an imperative, non-reproducible way, kubectl edit is the most direct thing you can do.
You could execute kubectl get deployment <deployment-name> -o yaml to get the deployment definition in a yaml format (or -o json to get in a json format), save that to a file, edit the file and apply the changes.
A step-by-step guide would be:
Run kubectl get deployment deployment-name -o yaml > deployment-name.yaml
Edit and save the deployment-name.yaml using the editor of your preference
Run kubectl apply -f deployment-name.yaml to apply the changes
It's all stored in etcd
Nodes
Namespaces
ServiceAccounts
Roles and RoleBindings, ClusterRoles / ClusterRoleBindings
ConfigMaps
Secrets
Workloads: Deployments, DaemonSets, Pods, …
Cluster’s certificates
The resources within each apiVersion
The events that bring the cluster into its current state
Take a look at this blog post
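If you have direct access to an etcd member and its client certificates, you can see the raw keys Kubernetes writes there, e.g. for a Deployment (endpoint, certificate paths and object name below are only examples; note the values are stored in a binary protobuf encoding by default):
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/deployments/default/my-deployment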

Accidentally deleted Kubernetes namespace

I have a Kubernetes cluster on Google Cloud. I accidentally deleted a namespace which had a few pods running in it. Luckily, the pods are still running, but the namespace is in a terminating state.
Is there a way to restore it back to active state? If not, what would the fate of my pods running in this namespace be?
Thanks
A few interesting articles about backing up and restoring Kubernetes cluster using various tools:
https://medium.com/@pmvk/kubernetes-backups-and-recovery-efc33180e89d
https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487
https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-heptio-ark
https://www.revolgy.com/blog/kubernetes-in-production-snapshotting-cluster-state
I guess they may be more useful in the future than in your current situation. If you don't have any backup, unfortunately there isn't much you can do.
Please notice that all of those articles use namespace deletion to simulate a disaster scenario, so you can imagine what the consequences of such an operation are. The results may not be visible immediately: you may see your pods running for some time, but eventually namespace deletion removes all cluster resources in the given namespace, including LoadBalancers and PersistentVolumes. It may take some time, and some resources may not be deleted right away because they are still used by other resources (e.g. a PersistentVolume by a running Pod).
You can try to run this script to dump all your resources that are still available to YAML files; however, some modification may be needed, as you will not be able to list objects belonging to the deleted namespace anymore. You may need to add the --all-namespaces flag to list them.
You may also try to dump any resource which is still available manually. If you can still see some resources like Pods, Deployments, etc. and can run kubectl get on them, you may try to save their definitions to a YAML file:
kubectl get deployment nginx-deployment -o yaml > deployment_backup.yaml
Once you have your resources backed up you should be able to recreate your cluster more easily.
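If the namespace does finish terminating, a rough recovery sketch is to recreate it and re-apply the dumped manifests, after stripping cluster-specific fields such as status, metadata.uid, metadata.resourceVersion and metadata.creationTimestamp (the namespace and file names are just examples):
kubectl create namespace my-namespace
kubectl apply -n my-namespace -f deployment_backup.yaml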
Back up most resource configurations regularly:
kubectl get all --all-namespaces -o yaml > all-deploy-resources.yaml
but this does not include all resources.
Other ways:
Using Ark/Velero:
https://github.com/vmware-tanzu/velero (Backup and migrate Kubernetes applications and their persistent volumes https://velero.io)
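Assuming Velero is already installed in the cluster and configured with a backup storage location, backing up and restoring a namespace looks roughly like this (the backup and namespace names are illustrative):
velero backup create my-backup --include-namespaces my-namespace
velero restore create --from-backup my-backup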