I cannot remove kubernetes-dashboard from Minikube. I tried deleting the deployment deployment.apps/kubernetes-dashboard multiple times, but it gets recreated automatically within a few seconds.
I am using the following command to delete the deployment:
kubectl delete deployment.apps/kubernetes-dashboard -n kube-system
I even tried editing the deployment and setting the replica count to zero, but that too gets reset automatically after a few seconds.
The same thing happens for nginx-ingress deployment in kube-system.
I had to disable the dashboard addon in Minikube first; after that, deleting the deployment worked for me:
minikube addons disable dashboard
And in case of ingress:
minikube addons disable ingress
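You can confirm both addons are off afterwards (the grep pattern is just for filtering the output):
minikube addons list | grep -E 'dashboard|ingress'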
Try the following command:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
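That manifest creates its objects in the kubernetes-dashboard namespace, so you can verify everything is gone afterwards with:
kubectl get all -n kubernetes-dashboard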
You can also list any leftover dashboard resources with:
kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard
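If anything is still listed, delete it by name; for example (the names below are the usual ones, adjust them to whatever the grep shows):
kubectl delete deployment kubernetes-dashboard -n kube-system
kubectl delete service kubernetes-dashboard -n kube-system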
I'm currently following a course on Udemy called Microservices with Node JS and React by Stephen Grider, and I've come to a part where I need to run this command:
kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system
And this command is producing this error:
Error from server (NotFound): deployments.apps "ingress-nginx-controller" not found
When I run kubectl get deployments I do not see an ingress-nginx-controller deployment, so I tried kubectl get namespace and saw the entry ingress-nginx. I then tried kubectl get deployments -n ingress-nginx, and I finally see ingress-nginx-controller in the output of that command. So I now know where the ingress-nginx-controller is, but I am still pretty clueless as to how to get the initial command, kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system, to work. I've been stuck on this for a long time now; any help is appreciated, thanks.
Edit 1: this is probably not relevant, but I also tried putting ingress-nginx after the -n instead of kube-system, and it did not work.
Also, I am using minikube on Ubuntu.
Edit 2: this is a screenshot of what the course wants me to do, since I'm running minikube.
The first time you ran it (with the correct namespace) it worked, and you probably didn't notice. Your tutorial seems to be fairly out of date; you might want to find a newer one. If you want to remove the previously created service and do it again:
kubectl delete service -n ingress-nginx ingress-nginx-controller
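With the service removed, the course's expose command should go through once the -n argument points at the namespace you found:
kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n ingress-nginx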
I am using Kubernetes version 1.19.
I tried to install a second nginx-ingress controller on my server (I already have one for Linux, so I tried to install one for Windows as well):
helm install nginx-ingress-win ingress-nginx/ingress-nginx \
  -f internal-ingress.yaml \
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set tcp.9000="default/frontarena-ads-win-test:9000"
This failed with "Error: failed pre-install: timed out waiting for the condition".
So I ran helm uninstall to remove that chart:
helm uninstall nginx-ingress-win
release "nginx-ingress-win" uninstalled
But the validation webhook pod keeps getting created constantly:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-win-ingress-nginx-admission-create-f2qcx 0/1 ContainerCreating 0 41m
I delete the pod with kubectl delete pod, but it gets created again and again.
I also tried
kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-win-ingress-nginx-admission
but I get a "not found" message for every combination I tried. How can I resolve this and get rid of it?
Thank you!!!
If this Pod is managed by a Deployment, StatefulSet, DaemonSet, etc., it will be automatically recreated every time you delete it, so trying to remove a Pod directly doesn't make much sense in most situations.
If you want to check what controls this Pod, run:
kubectl describe pod nginx-ingress-win-ingress-nginx-admission-create-f2qcx | grep Controlled
You will probably see some ReplicaSet, which is itself managed by a Deployment or another object. Suppose I want to check what I should delete to get rid of my nginx-deployment-574b87c764-kjpf6 Pod. I can do this as follows:
$ kubectl describe pod nginx-deployment-574b87c764-kjpf6 | grep -i controlled
Controlled By: ReplicaSet/nginx-deployment-574b87c764
Then I run kubectl describe again on the name of the ReplicaSet we found:
$ kubectl describe rs nginx-deployment-574b87c764 | grep -i controlled
Controlled By: Deployment/nginx-deployment
Finally, we can see that it is managed by a Deployment named nginx-deployment, and this is the resource we need to delete to get rid of our nginx-deployment-574b87c764-kjpf6 Pod.
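In your case, the admission-create pod is usually owned by a Job left behind by the chart's pre-install hook rather than a ReplicaSet, so the same approach would look like this (the Job name is inferred from your pod name):
kubectl describe pod nginx-ingress-win-ingress-nginx-admission-create-f2qcx | grep -i controlled
kubectl delete job nginx-ingress-win-ingress-nginx-admission-create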
I was setting up an nginx cluster on Google Cloud, and I entered a wrong image name; instead of entering:
kubectl create deploy nginx --image=nginx:1.17.10
I entered:
kubectl create deploy nginx --image=1.17.10
and eventually, after running kubectl get pods, it showed ImagePullBackOff as the status of the pod.
When I tried running the correct create deploy command above, it said "nginx" already exists.
When I tried kubectl delete --all pods, the pod was recreated with a new ID, but it still had the same status, and I still couldn't run the right kubectl create deploy command above. Now I'm stuck.
How can I undo it?
You need to delete the deployment:
kubectl delete deploy nginx
Otherwise Kubernetes will recreate the pod every time it is deleted.
You can see all your deployments with
kubectl get deploy
Edit the deployment via kubectl edit deployment DEPLOYMENT_NAME and change the image name.
Or
Edit the manifest file, correct the image name, and run kubectl apply -f on the YAML file.
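A shorter alternative is kubectl set image, which swaps the image in place; the container is named nginx here because kubectl create deploy names the container after the deployment:
kubectl set image deployment/nginx nginx=nginx:1.17.10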
First of all, your k8s cluster is trying to pull the image 1.17.10 from the public Docker registry, but no image exists with that name, which is why you get the error. When you delete the pods, they are recreated with the same image name because the deployment still exists. That is why you need to delete the deployment rather than the pods; otherwise the deployment will automatically recreate any deleted pod.
You can check what the actual error in your deployment is with this command:
kubectl describe deploy nginx
For you, the command will be kubectl delete deploy -n <Namespace_name> <deployment_name>. As you created your deployment in the default namespace, you don't need to mention the namespace; it will automatically be the default one.
You can delete the deployment with this command:
kubectl delete deploy nginx
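Once the deployment is gone, the original command from the question should go through:
kubectl create deploy nginx --image=nginx:1.17.10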
I have followed the instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s.
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed as a HelmChart, I think.
From the k3s docs
It is also possible to deploy Helm charts. k3s supports a CRD controller for installing charts. A YAML file specification can look as following (example taken from /var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting k3s with the --no-deploy traefik option in order to add it manually with my own settings. I therefore apply a YAML like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
    dashboard:
      enabled: true
      domain: "traefik.k3s1.local"
But when trying to iterate over the settings to get it working as I want, I'm having trouble tearing it down. If I try kubectl delete -f on this YAML, it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.
I've been resorting now to just reinstall my entire cluster over and over because I can't seem to cleanup properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs with kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here:
Use the --now flag to delete the resources from your YAML file with minimal delay.
Use --grace-period=0 --force flags to force delete the resource.
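For example, against the manifest you applied (traefik.yaml here stands in for whatever file you used):
kubectl delete -f traefik.yaml --now
kubectl delete -f traefik.yaml --grace-period=0 --force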
There are other options but you'll need Helm CLI for them.
Please let me know if that helped.
I am starting to explore running Docker containers with Kubernetes. I did the following:
docker run etcd
docker run master
docker run service proxy
kubectl run web --image=nginx
To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-3476088249-w66jr 1/1 Running 0 16m
How can I remove this?
To delete the pod:
kubectl delete pods web-3476088249-w66jr
If this pod is started via some ReplicaSet or Deployment or anything else that creates replicas, then find that and delete it first.
kubectl get all
This will list all the resources that have been created in your k8s cluster. To get the resources created in a specific namespace, run kubectl get all --namespace=<your_namespace>.
To get info about the resource that is controlling this pod, you can do
kubectl describe pod web-3476088249-w66jr
There will be a field "Controlled By", or some similar owner field, from which you can identify which resource created it.
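For the pod in the question, that would look something like this (illustrative output; the ReplicaSet name follows from the pod name):
kubectl describe pod web-3476088249-w66jr | grep Controlled
Controlled By:  ReplicaSet/web-3476088249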
When you do kubectl run ..., that's a deployment you create, not a pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the deployment with kubectl delete deploy DEPLOYMENT.
I would recommend creating a namespace for testing when doing this kind of thing. You just do kubectl create ns test, then do all your tests in this namespace (by adding -n test to your commands). Once you have finished, you just do kubectl delete ns test, and you are done.
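For example (test is just an illustrative namespace name):
kubectl create ns test
kubectl run web --image=nginx -n test
kubectl delete ns test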
If you defined your object as a Pod, then
kubectl delete pod <--all | pod name>
will remove the generated Pod(s). But if you wrapped your Pod in a Deployment object, then running the command above will only trigger their re-creation.
In that case, you need to run
kubectl delete deployment <--all | deployment name>
Note that this removes the Deployment and its Pods, but not a Service that points at them; a related Service has to be deleted separately.
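A minimal cleanup for the example in the question (the service line only applies if you exposed the Deployment at some point):
kubectl delete deployment web
kubectl delete service web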