How to redeploy metrics-server - kubernetes

I have a Kubernetes cluster running on my local machine (via Docker for Desktop), and a metrics-server has been deployed to monitor CPU usage. I want to make some changes in the metrics-server-deployment.yaml file, which resides in /metrics-server/deploy/1.8+
I am done with the changes, but I can't figure out how to redeploy the metrics-server so that it reflects the new changes. I am new to K8s and would love to get some help/tips or useful resources.
Thanks in advance

From the directory where you have metrics-server-deployment.yaml, just run:
kubectl apply -f metrics-server-deployment.yaml
If it complains, you can delete the existing deployment manually and then run:
kubectl create -f metrics-server-deployment.yaml
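For reference, the manual delete can target the same file, or the deployment by name (the metrics-server name and the kube-system namespace are assumptions based on the standard manifests):
kubectl delete -f metrics-server-deployment.yaml
kubectl delete deployment -n kube-system metrics-server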

You can manually edit the file(s) and then use
kubectl delete -f /metrics-server/deploy/1.8+
kubectl apply -f /metrics-server/deploy/1.8+
or (in my opinion the nicer version) you can just edit the deployment itself with
kubectl edit deployment -n kube-system metrics-server
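Either way, you can check afterwards that the new pods rolled out; a quick sketch (the k8s-app=metrics-server label is an assumption based on the standard manifests):
kubectl -n kube-system rollout status deployment metrics-server
kubectl -n kube-system get pods -l k8s-app=metrics-server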

Related

How to promote a pod to a deployment for scaling

I'm running the example in the "Service Discovery" chapter of the book "Kubernetes: Up and Running". The original command to run a deployment is kubectl run alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:blue --replicas=3 --port=8080 --labels="ver=1,app=alpaca,env=prod", but in Kubernetes 1.25 the --replicas flag of the run command is no longer supported. I planned to run without replicas and then use kubectl scale to scale the deployment later. The problem is that the run command only creates a pod, not a deployment (and the scale command expects a deployment). So how do I promote my pod to a deployment? My Kubernetes version is 1.25.
There is no way to promote a pod to a deployment. You can change labels and such, but instead of that you should create a new deployment and delete the existing pod.
The easy steps: first, export the spec of the existing running pod to a YAML file:
kubectl get pod <POD name> -o yaml > pod-spec.yaml
Then generate a deployment spec YAML file:
kubectl create deployment deploymentname --image=imagename --dry-run=client -o yaml > deployment-spec.yaml
Edit the deployment-spec.yaml file and, with pod-spec.yaml open in another tab, copy the spec part from the pod file into the pod template of the new deployment file.
Once deployment-spec.yaml is ready, apply it. If a service is selecting these pods, make sure the labels match properly:
kubectl apply -f deployment-spec.yaml
Finally, delete the single running pod:
kubectl delete pod <POD name>
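For illustration, a minimal sketch of what the merged deployment-spec.yaml could look like, using the image and labels from the question (the names are assumptions; copy the real spec from your pod-spec.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpaca-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: alpaca
      env: prod
      ver: "1"
  template:
    metadata:
      labels:
        app: alpaca
        env: prod
        ver: "1"
    spec:
      # spec copied from pod-spec.yaml
      containers:
      - name: alpaca-prod
        image: gcr.io/kuar-demo/kuard-amd64:blue
        ports:
        - containerPort: 8080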

How do I undo a kubectl create deploy?

I was setting up an nginx cluster on Google Cloud, and I entered the wrong image name; instead of entering:
kubectl create deploy nginx --image=nginx:1.17.10
I entered:
kubectl create deploy nginx --image=1.17.10
and eventually, after running kubectl get pods, it showed ImagePullBackOff as the status for the pod.
When I tried running the correct create deploy command above, it said "nginx" already exists.
When I tried kubectl delete --all pods, the pod was recreated with a new ID but still had the same status, and I still couldn't run the correct kubectl create deploy command above. Now I'm stuck.
How can I undo it?
You need to delete the deployment:
kubectl delete deploy nginx
Otherwise the deployment will recreate the pod every time it is deleted.
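Once the deployment is gone, the corrected command from your question should work:
kubectl create deploy nginx --image=nginx:1.17.10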
You can see all your deployments with
kubectl get deploy
Edit the deployment via kubectl edit deployment DEPLOYMENT_NAME and change the image name.
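Alternatively, you can swap the image in one step without opening an editor; a sketch, assuming the container created by kubectl create deploy is also named nginx:
kubectl set image deployment/nginx nginx=nginx:1.17.10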
Or
Edit the manifest file, correct the image name, and do a kubectl apply -f on the YAML file.
First of all, your k8s cluster is trying to pull the image 1.17.10 from the public Docker registry. No image exists with that name, which is why you get the error. And when you delete your pods, the deployment recreates them with the same image name, because the deployment still exists. For this reason you need to delete the deployment rather than the pods; otherwise the deployment will automatically recreate any deleted pod.
You can check what the error in your deployment was with this command:
kubectl describe deploy nginx
The general form of the command is kubectl delete deploy -n <Namespace_name> <deployment_name>. As you created your deployment in the default namespace, you don't need to mention the namespace; it defaults to default.
So you can delete the deployment with this command:
kubectl delete deploy nginx

Kubernetes - I changed the deployment name, then redeployed to the environment; how do I clean up the old deployment and pods with the old name?

We had a requirement to change the pod or deployment name. Now when we deploy, we have two deployments, and three pods each with the old and the new name.
So far I have been deleting the old deployments manually.
Do I need to manually delete the old deployment and pods, or is there a better method?
To delete the deployment, use
$ kubectl delete deploy/old_deployment_name
This will delete the deployment, including its pods and ReplicaSets, if you had them.
And don't make this mistake a second time :) @Kamol is right - the best way of managing resources is to change the config file (e.g. your deployment) and re-apply it with
kubectl apply -f deployment.yaml
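To confirm nothing with the old name is left behind, you can list what remains; a quick check (pod and ReplicaSet names are prefixed with the deployment name, so grepping for the old name should be enough):
$ kubectl get deploy,rs,pods | grep old_deployment_name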
If everything was installed with the apply command
kubectl apply -f deployment.yaml
you can also remove it all again with
kubectl delete -f deployment.yaml

How can I regenerate the Corefile in k8s?

My CoreDNS Corefile got corrupted somehow and now I need to regenerate it or reset it to its default installed value. How do I do that? I've tried copying and pasting a locally saved version of the file via kubectl edit cm coredns -n kube-system but I get validation errors:
error: configmaps "coredns" is invalid
A copy of your changes has been stored to "/tmp/kubectl-edit-suzaq.yaml"
error: Edit cancelled, no valid changes were saved.
Directly editing the setting like this tends to give that error.
What can you do?
Before you run anything, please take a backup:
kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
Option 1: force-apply it.
kubectl apply --force -f /tmp/kubectl-edit-suzaq.yaml
In most cases this will apply the latest settings successfully. If it fails, go through the error, update the file /tmp/kubectl-edit-suzaq.yaml, and force-apply it again.
Option 2: delete and re-apply.
kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
# make a backup, if you're not 100% sure the change will work
cp coredns.yaml coredns.yaml.orig
# update the change in coredns.yaml
# delete coredns
kubectl delete configmap coredns
# apply new change
kubectl apply -f coredns.yaml
Be careful: the steps above will cause a short DNS outage. If you work in a production environment, you should think about backing up all Kubernetes settings before making this change.
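For reference, if you need a baseline to reset to, the default Corefile on a kubeadm-installed cluster looks roughly like this (an assumption: the exact contents vary by distribution and CoreDNS version, so compare with your own installer's defaults):
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}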

k3s cleanup of HelmChart?

I have followed instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s.
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed as a HelmChart, I think.
From the k3s docs
It is also possible to deploy Helm charts. k3s supports a CRD
controller for installing charts. A YAML file specification can look
as following (example taken from
/var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting up my k3s with the --no-deploy traefik option so that I can add it manually with my own settings. I therefore apply a YAML like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
    dashboard:
      enabled: true
      domain: "traefik.k3s1.local"
But when trying to iterate over settings to get it working as I want, I'm having trouble tearing it down. If I try kubectl delete -f on this yaml it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.
I've been resorting now to just reinstall my entire cluster over and over because I can't seem to cleanup properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs using kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
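Once the delete finishes, you can also confirm that the HelmChart custom resource itself is gone (assuming the k3s helm.cattle.io CRD from the manifest above):
kubectl -n kube-system get helmcharts.helm.cattle.io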
I see two options here:
Use the --now flag to delete the resources in your YAML file with minimal delay.
Use the --grace-period=0 --force flags to force-delete the resources, as shown below.
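For example, with the traefik.yaml from the question (the filename is an assumption; use whatever file you applied):
kubectl delete -f traefik.yaml --now
kubectl delete -f traefik.yaml --grace-period=0 --force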
There are other options but you'll need Helm CLI for them.
Please let me know if that helped.