k3s cleanup of HelmChart?

I have followed the instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s.
I'm now trying to get my hands dirty with Traefik as a front end, but I'm having issues with the way it has been deployed, as a 'HelmChart' I think.
From the k3s docs:
It is also possible to deploy Helm charts. k3s supports a CRD controller for installing charts. A YAML file specification can look as follows (example taken from /var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting my k3s with the --no-deploy traefik option so that I can add Traefik manually with my own settings. I therefore apply a YAML like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
  valuesContent: |-
    dashboard:
      enabled: true
      domain: "traefik.k3s1.local"
But when trying to iterate over the settings to get it working the way I want, I'm having trouble tearing it down. If I try kubectl delete -f on this YAML, it just hangs indefinitely, and I can't seem to find a clean way to delete all the resources manually either.
I've been resorting to reinstalling my entire cluster over and over because I can't seem to clean up properly.
Is there a way to delete all the resources created by a chart like this without the Helm CLI (which I don't even have)?

Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue kubectl delete -f, a pod in the kube-system namespace named helm-delete-* should spin up and try to delete the resources that were deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs using kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
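To catch that cleanup pod the moment it appears, a small variation on the above (the grep pattern assumes the helm-delete- naming described earlier):
kubectl -n kube-system get pods -w | grep helm-delete   # watch for the cleanup pod to spin up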

I see two options here:
Use the --now flag to delete the resources in your YAML file with minimal delay.
Use the --grace-period=0 --force flags to force-delete the resources.
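For example, against the manifest above (assuming it is saved as traefik.yaml):
kubectl delete -f traefik.yaml --now                     # short grace period
kubectl delete -f traefik.yaml --grace-period=0 --force  # immediate forced deletion; use as a last resort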
There are other options, but you'll need the Helm CLI for them.
Please let me know if that helped.

Related

kubectl expose deployment not working with ingress-controller

I'm currently following a course on Udemy called Microservices with Node JS and React by Stephen Grider, and I've come to a part where I need to run a command:
kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system
And this command is producing this error:
Error from server (NotFound): deployments.apps "ingress-nginx-controller" not found
When I run kubectl get deployments I do not see an ingress-nginx-controller deployment, so I tried kubectl get namespace and saw the entry ingress-nginx. I then tried kubectl get deployments -n ingress-nginx, and I finally see ingress-nginx-controller in the output of that command. So I now know where the ingress-nginx-controller is, but I am still pretty clueless as to how I get the initial command, kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system, to work. I've been stuck on this for a long time now; any help is appreciated, thanks.
Edit 1: This is probably not relevant, but I also tried putting ingress-nginx after the -n instead of kube-system, and it did not work.
Also, I am using minikube on Ubuntu.
Edit 2: This is a screenshot of what the course wants me to do, because I'm running minikube.
The first time you ran it (with the correct namespace) it worked, and you probably didn't notice. Your tutorial seems to be fairly out of date; you might want to find a newer one. If you want to remove the previously created service and do it again, run kubectl delete service -n ingress-nginx ingress-nginx-controller.
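A sketch of the corrected sequence, using the namespace the controller actually lives in:
kubectl delete service -n ingress-nginx ingress-nginx-controller   # remove the earlier attempt, if present
kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n ingress-nginx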

How to give annotations to a pod using the run command in Kubernetes

I attempted it, but there is an error. I also see "See 'kubectl run --help' for usage."
but I can't fix it.
kubectl run pod pod4 --image=aamirpinger/helloworld:latest --port=80 --annotaions=createdBy="Muhammad Shahbaz" --restart=Never
Error: unknown flag: --annotaions
kubectl run supports specifying annotations via the --annotations flag, which can be specified multiple times to apply multiple annotations.
For example:
$ kubectl run --image myimage --annotations="foo=bar" --annotations="another=one" mypod
results in the following:
$ kubectl get pod mypod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    foo: bar
    another: one
[...]
kubectl run doesn't have an option to set annotations.
Unless you're running a one-off debugging pod, it's usually better practice to write out the full (Deployment) YAML file, commit it to source control, and install it using kubectl apply -f. That will let you specify any Kubernetes object property you need.
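A hedged sketch of such a Deployment file, reusing the image from the question and placing the annotation on the pod template; the helloworld name and labels are illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
      annotations:
        createdBy: "Muhammad Shahbaz"
    spec:
      containers:
      - name: helloworld
        image: aamirpinger/helloworld:latest
        ports:
        - containerPort: 80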
As David Maze mentioned, there is no --annotations flag for the kubectl run command. It is better to write a Deployment YAML file than to rely on kubectl run.
However, you can add annotations to Kubernetes resources using the kubectl annotate command. All Kubernetes objects support the ability to store additional data with the object as annotations.
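A minimal sketch against the pod from the question (assuming it was created as pod4):
kubectl run pod4 --image=aamirpinger/helloworld:latest --port=80 --restart=Never
kubectl annotate pod pod4 createdBy="Muhammad Shahbaz"
kubectl get pod pod4 -o jsonpath='{.metadata.annotations}'   # verify the annotation landed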
Hope this helps.

How to redeploy metrics-server

I have a Kubernetes cluster running on my local machine (via docker-for-desktop), and a metrics-server has been deployed to monitor CPU usage. I want to make some changes in the metrics-server-deployment.yaml file, which resides in /metrics-server/deploy/1.8+.
I am done with the changes, but I can't figure out how to redeploy the metrics-server so that it reflects the new changes. I am new to K8s and would love to get some help/tips or useful resources.
Thanks in advance.
From the directory where you have metrics-server-deployment.yaml, just run:
kubectl apply -f metrics-server-deployment.yaml
If it complains, you can also manually delete it and run:
kubectl create -f metrics-server-deployment.yaml
You can manually edit the file(s) and then use
kubectl delete -f /metrics-server/deploy/1.8+
kubectl apply -f /metrics-server/deploy/1.8+
or (in my opinion the nicer version) you can just edit the deployment itself with
kubectl edit deployment -n kube-system metrics-server
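Either way, a quick check that the rollout completed (assuming the deployment lives in kube-system, as it does by default):
kubectl -n kube-system rollout status deployment metrics-server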

How can I delete the Kubernetes dashboard from kube-system?

I cannot remove kubernetes-dashboard from Minikube. I tried deleting the deployment deployment.apps/kubernetes-dashboard multiple times, but it gets recreated automatically within a few seconds.
I am using the following command to delete the deployment:
kubectl delete deployment.apps/kubernetes-dashboard -n kube-system
I even tried editing the deployment to set the replica count to zero, but that also gets reset automatically after a few seconds.
The same thing happens for the nginx-ingress deployment in kube-system.
I had to disable the dashboard addon in minikube first; then deleting the deployment worked for me.
minikube addons disable dashboard
And in case of ingress:
minikube addons disable ingress
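To confirm the addons are actually off, a quick check:
minikube addons list | grep -E 'dashboard|ingress'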
Try the below command:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
To see which dashboard-related resources still exist, try the following:
kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard
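If anything shows up, a hedged cleanup sketch using the dashboard's usual label (the k8s-app=kubernetes-dashboard selector is an assumption about how it was deployed):
kubectl -n kube-system delete deployments,services,secrets,sa,roles,rolebindings -l k8s-app=kubernetes-dashboard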

How do I add log_output_level argument to istio-sidecar-injector on GKE?

I am following along with this article and trying this on GKE. After adding the argument - --log_output_level=default:debug, the change seems to be accepted, as I get deployment.extensions/istio-sidecar-injector edited, but how do I know for sure?
The output of
pod=$(kubectl -n istio-system get pods -l istio=sidecar-injector -o jsonpath='{.items[0].metadata.name}')
and then
kubectl -n istio-system logs -f $pod
is the same as before, and when I run kubectl -n istio-system edit deployment istio-sidecar-injector again, the added argument is not there...
It depends on how you installed Istio on GKE; there are multiple ways to install it.
If you're installing from http://cloud.google.com/istio, which installs a Google-managed version of Istio on your cluster, editing the deployment with kubectl -n istio-system edit deployment istio-sidecar-injector is a really bad idea, because Google will either revert your change or the next version will wipe your modifications (so don't do it).
If you're installing it yourself from an Istio open source release, Istio is distributed as a Helm chart with a bunch of Kubernetes .yaml manifests. You can edit those YAML manifests, or update the Helm values.yaml files, to add that argument, and then perform the Istio installation with the updated values.
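In that case, a quick way to verify whether the argument actually landed on the running deployment (a sketch, assuming the deployment name from the question):
kubectl -n istio-system get deployment istio-sidecar-injector -o jsonpath='{.spec.template.spec.containers[0].args}'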
If you're interested in getting help debugging Istio, please go to a contributor community forum such as Istio on Rocket Chat: https://istio.rocket.chat/.