How can I regenerate the Corefile in k8s?

My CoreDNS Corefile got corrupted somehow and now I need to regenerate it or reset it to its default installed value. How do I do that? I've tried copying and pasting a locally-saved version of the file via kubectl edit cm coredns -n kube-system, but I get validation errors:
error: configmaps "coredns" is invalid
A copy of your changes has been stored to "/tmp/kubectl-edit-suzaq.yaml"
error: Edit cancelled, no valid changes were saved.

Editing the ConfigMap directly can fail with this kind of validation error.
What can you do?
Before you run anything, take a backup:
kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
Option 1: force-apply the saved edit.
kubectl apply --force -f /tmp/kubectl-edit-suzaq.yaml
In most cases this will apply the new settings successfully. If it fails, read the error, fix the file /tmp/kubectl-edit-suzaq.yaml, and force-apply it again.
Option 2: delete and re-apply.
kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
# keep a backup in case the change doesn't work
cp coredns.yaml coredns.yaml.orig
# make your changes in coredns.yaml
# delete the existing ConfigMap
kubectl -n kube-system delete configmap coredns
# apply the updated one
kubectl apply -f coredns.yaml
Be careful: the steps above will cause a brief DNS outage. If you work in a production environment, you should back up all relevant Kubernetes settings before making this change.
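If the goal is to get back to a stock Corefile, note that the default varies by Kubernetes version, so treat the following only as a rough sketch of the typical shape on a kubeadm-provisioned cluster and verify it against your own version:
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
On a kubeadm cluster you can also regenerate the bundled defaults with kubeadm init phase addon coredns (flags vary by version; see kubeadm init phase addon --help).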

Related

how to recover pods with UnexpectedAdmissionError

My pods terminated automatically, and I eventually found the disk usage was at 100%, so they were evicted by Kubernetes (v1.15.2). Now I have freed disk space; how do I restart the UnexpectedAdmissionError pod, like this:
I already tried this:
$ kubectl rollout restart deployment kubernetes-dashboard-6466b68b-z6z78
Error from server (NotFound): deployments.extensions "kubernetes-dashboard-6466b68b-z6z78" not found
That did not work for me. Any suggestions?
This worked for me:
$ kubectl get pod kubernetes-dashboard-6466b68b-z6z78 -n kube-system -o yaml | kubectl replace --force -f -
pod "kubernetes-dashboard-6466b68b-z6z78" deleted
pod/kubernetes-dashboard-6466b68b-z6z78 replaced
From Documentation:
Replace a resource by filename or stdin.
JSON and YAML formats are accepted. If replacing an existing resource, the complete resource spec must be provided.
This can be obtained by
$ kubectl get TYPE NAME -o yaml
It is worth checking kubectl replace --help as well.
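As an aside, the rollout restart attempt in the question failed because it was given a pod name where a deployment name is expected. Assuming the underlying deployment is named kubernetes-dashboard (a guess based on the pod name), a restart would look like:
$ kubectl -n kube-system rollout restart deployment kubernetes-dashboard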
Hope this helps.

find which yaml file was used for any kubernetes resource?

How can I find which YAML file was used for the deployment of a given Kubernetes resource?
I checked kubectl describe, but it doesn't list this. Is there any way to know?
Use case:
I want to update the YAML and redeploy. One option, I guess, is to generate the YAML from the running resource, update it, and redeploy.
Any suggestions?
To get the YAML for your k8s application deployment, use this:
kubectl get deploy my-deployment -o yaml --export
OR
kubectl get pod my-pod -o yaml --export
OR
kubectl get svc my-svc -o yaml --export
Editing is also simple.
kubectl get deploy my-deployment -o yaml --export > my-deployment.yml
<Edit the my-deployment.yml file and kubectl apply -f my-deployment.yml>
OR
kubectl edit deployment my-deployment
Hope this helps.
You can use the following command to get the content of the YAML that was last applied to create a deployment:
kubectl apply view-last-applied <resource_type> <resource_name>
In your case it will be similar to:
kubectl apply view-last-applied deployment <deployment_name>
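A short sketch of the update-and-redeploy loop this enables; note that view-last-applied only works for resources originally created with kubectl apply, and the deployment name here is a placeholder:
kubectl apply view-last-applied deployment <deployment_name> > deployment.yaml
# edit deployment.yaml as needed, then redeploy:
kubectl apply -f deployment.yaml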
I think you can choose from two options here.
Option 1:
You can grep all YAMLs looking for specific annotations or labels.
$ grep "app: nginx-test" *.yaml
or
$ grep -e "prometheus.io/scheme: http" *.yaml
When you find the proper file, you can edit it (vi, nano, etc.) and apply it.
$ kubectl apply -f [yaml-name]
Option 2:
When you know the name of your deployment, you can edit it directly.
$ kubectl edit deployment [deployment-name]
You will see the current deployment YAML, including a status: section describing the current state of the deployment. If you don't like vi, you can use nano instead:
$ KUBE_EDITOR="nano" kubectl edit deployment [deployment-name]
If you want to create YAML from your current deployment, I would advise using kubectl get with the --export flag. It removes unneeded information (like the status: section mentioned above).
$ kubectl get deploy [your-deployment] -o yaml --export > newDeployment.yaml
Hope it helps.
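Note that the --export flag was deprecated in kubectl 1.14 and removed in 1.18, so on newer clusters a rough equivalent (with my-deployment as a placeholder name) is:
$ kubectl get deploy my-deployment -o yaml > newDeployment.yaml
# then manually strip server-populated fields such as status:,
# metadata.resourceVersion, metadata.uid and metadata.creationTimestamp
# before re-applying:
$ kubectl apply -f newDeployment.yaml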

k3s cleanup of HelmChart?

I have followed instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s.
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed as a 'HelmChart', I think.
From the k3s docs
It is also possible to deploy Helm charts. k3s supports a CRD controller for installing charts. A YAML file specification can look as following (example taken from /var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting k3s with the --no-deploy traefik option in order to add it manually with my own settings. I therefore apply a YAML like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
    dashboard:
      enabled: true
      domain: "traefik.k3s1.local"
But when trying to iterate over settings to get it working as I want, I'm having trouble tearing it down. If I try kubectl delete -f on this yaml it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.
I've been resorting to just reinstalling my entire cluster over and over because I can't seem to clean up properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use that pod name to look at the logs using kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here (sketched below):
Use the --now flag to delete the resources in your YAML file with minimal delay.
Use the --grace-period=0 --force flags to force-delete the resources.
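For example, assuming the HelmChart manifest from the question is saved as traefik.yaml:
kubectl delete -f traefik.yaml --now
# or, if it is still stuck:
kubectl delete -f traefik.yaml --grace-period=0 --force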
There are other options, but you'll need the Helm CLI for them.
Please let me know if that helped.

How to redeploy metrics-server

I have a Kubernetes cluster running on my local machine (via Docker for Desktop), and a metrics-server has been deployed to monitor CPU usage. I want to make some changes in the metrics-server-deployment.yaml file, which resides in /metrics-server/deploy/1.8+.
I am done with the changes, but I can't figure out how to redeploy the metrics-server so that it reflects the new changes. I am new to K8s and would love some help/tips or useful resources.
Thanks in advance
From the directory where you have metrics-server-deployment.yaml, just run:
kubectl apply -f metrics-server-deployment.yaml
If it complains, you can also manually delete it and run:
kubectl create -f metrics-server-deployment.yaml
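The manual delete step would look something like this, assuming metrics-server was deployed into the usual kube-system namespace:
kubectl -n kube-system delete deployment metrics-server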
You can manually edit the file(s) and then use
kubectl delete -f /metrics-server/deploy/1.8+
kubectl apply -f /metrics-server/deploy/1.8+
or (in my opinion the nicer version) you can just edit the deployment itself with
kubectl edit deployment -n kube-system metrics-server

Kubernetes rolling deployment using the yaml file

I have deployed an application into Kubernetes using the following command.
kubectl apply -f deployment.yaml -n <NAMESPACE>
I have my deployment content in the deployment yaml file.
This is working fine. Now I have updated a few things in the deployment.yaml file and would like to update the deployment.
Option 1:- Delete and deploy again
kubectl delete -f deployment.yaml -n <NAMESPACE>
kubectl apply -f deployment.yaml -n <NAMESPACE>
Option 2:- Use set to update changes
kubectl set image deployment/nginx-deployment nginx=nginx:1.91
I don't want to use this approach, as I am keeping my deployment.yaml file in GitHub.
Option 3:- Using edit command
kubectl edit deployment/nginx-deployment
I don't want to use the above 3 options.
Is there any way to update the deployment using the file itself?
Like,
kubectl update deployment.yaml -n NAMESPACE
This way, I will make sure that I will always have the latest deployment file in my GitHub repo.
As @Daisy Shipton has said, what you want to do can be simplified into a single command: kubectl apply -f deployment.yaml.
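For illustration, a minimal declarative update loop, assuming the Deployment from the question is named nginx-deployment:
# edit deployment.yaml (e.g. bump the image tag), commit it to GitHub, then:
kubectl apply -f deployment.yaml -n <NAMESPACE>
# watch the rolling update complete:
kubectl rollout status deployment/nginx-deployment -n <NAMESPACE>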
I will also add that I don't think it's correct to use Option 2, updating the image with an imperative command: if the source of truth is the Deployment file in your GitHub repo, you should simply update that file, changing the image used by your Pod's container there.
Otherwise, the next time you update your Deployment object, if you forget to modify the .yaml file first, you will set the Pods back to the previous nginx image.
So be careful about using imperative commands to update the specification of any Kubernetes object.