kubectl expose deployment not working with ingress-controller

I'm currently following a Udemy course called Microservices with Node JS and React by Stephen Grider, and I've come to a part where I need to run this command:
kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system
And this command is producing this error:
Error from server (NotFound): deployments.apps "ingress-nginx-controller" not found
When I run the command kubectl get deployments I do not see an ingress-nginx-controller deployment, so I tried kubectl get namespace and saw an ingress-nginx entry in the output. I then tried kubectl get deployments -n ingress-nginx, and finally saw ingress-nginx-controller in the output of that command. So I now know where the ingress-nginx-controller is, but I am still pretty clueless as to how to get the initial command, kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system, to work. I've been stuck on this for a long time now; any help is appreciated, thanks.
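In short, the sequence that located the deployment was:
kubectl get deployments                    # no ingress-nginx-controller here (default namespace)
kubectl get namespace                      # shows an ingress-nginx namespace
kubectl get deployments -n ingress-nginx   # ingress-nginx-controller finally shows up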
Edit 1: This is probably not relevant, but I also tried putting ingress-nginx after the -n instead of kube-system, and it did not work.
Also I am using minikube on ubuntu
Edit 2: This is a screenshot of what the course wants me to do, since I'm running minikube.

The first time you ran it (with the correct namespace), it worked and you probably didn't notice. Your tutorial seems to be fairly out of date; you might want to find a newer one. If you want to remove the previously created service and do it again, run kubectl delete service -n ingress-nginx ingress-nginx-controller.
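In other words, delete the service from the earlier run and re-run expose against the namespace where the deployment actually lives (ingress-nginx, as you found, rather than kube-system):
# remove the service created by the first (successful) run
kubectl delete service -n ingress-nginx ingress-nginx-controller
# expose the deployment again, in its actual namespace
kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n ingress-nginx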

Related

Pod is not found when trying to delete it, however it can be patched

I have a pod that I can see on GKE. But if I try to delete it, I get this error:
kubectl delete pod my-pod --namespace=kube-system --context=cluster-1
Error from server (NotFound): pods "my-pod" not found
However, if I try to patch it, the operation completes successfully:
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
deployment.apps/my-pod patched
Same namespace, same context, same pod. Why does kubectl fail to delete the pod?
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
You are patching the deployment here, not the pod.
Additionally, your pod will not be called "my-pod"; it will be named after your deployment plus a hash (a random set of letters and numbers), something like "my-pod-ace3g".
To see the pods in a namespace, use:
kubectl get pods -n {namespace}
Since you've put the deployment in the kube-system namespace, you would use:
kubectl get pods -n kube-system
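The pod can then be deleted by its full generated name; for example, using the hypothetical name from above:
# delete by the full generated name (hypothetical)
kubectl delete pod my-pod-ace3g -n kube-system --context=cluster-1
Note that as long as the deployment exists, its controller will recreate the pod right away; delete the deployment itself if you want the pod gone for good.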
Side note: Generally, don't use the kube-system namespace unless your deployment is related to cluster functionality. There's a namespace called default that you can use to test things.

Helm admission is constantly creating a Pod in status "ContainerCreating"

I am using Kubernetes version 1.19.
I tried to install a second nginx-ingress controller on my server (I already have one for Linux, so I tried to install one for Windows as well):
helm install nginx-ingress-win ingress-nginx/ingress-nginx \
  -f internal-ingress.yaml \
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set tcp.9000="default/frontarena-ads-win-test:9000"
This failed with "Error: failed pre-install: timed out waiting for the condition".
So I ran helm uninstall to remove that chart:
helm uninstall nginx-ingress-win
release "nginx-ingress-win" uninstalled
But a validation webhook Pod keeps getting created:
kubectl get pods
NAME                                                      READY   STATUS              RESTARTS   AGE
nginx-ingress-win-ingress-nginx-admission-create-f2qcx    0/1     ContainerCreating   0          41m
I delete the pod with kubectl delete pod, but it gets created again and again.
I also tried
kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-win-ingress-nginx-admission
but I get a "not found" message for all the combinations I tried. How can I resolve this and get rid of the pod?
Thank you!!!
If this Pod is managed by a Deployment, StatefulSet, DaemonSet, etc., it will be automatically recreated every time you delete it, so trying to remove a Pod directly rarely makes sense.
To check what controls this Pod, run:
kubectl describe pod nginx-ingress-win-ingress-nginx-admission-create-f2qcx | grep Controlled
You will probably see a ReplicaSet, which is itself managed by a Deployment or another object. Suppose I want to check what I should delete to get rid of my nginx-deployment-574b87c764-kjpf6 Pod. I can do this as follows:
$ kubectl describe pod nginx-deployment-574b87c764-kjpf6 | grep -i controlled
Controlled By: ReplicaSet/nginx-deployment-574b87c764
Then I run kubectl describe again on the name of the ReplicaSet we found:
$ kubectl describe rs nginx-deployment-574b87c764 | grep -i controlled
Controlled By: Deployment/nginx-deployment
Finally, we can see that it is managed by a Deployment named nginx-deployment, and this is the resource we need to delete to get rid of our nginx-deployment-574b87c764-kjpf6 Pod.
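In this example, then, the command that actually makes the pod go away for good is:
kubectl delete deployment nginx-deployment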

k3s cleanup of HelmChart?

I have followed instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s.
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed, as a 'HelmChart' I think.
From the k3s docs:
It is also possible to deploy Helm charts. k3s supports a CRD controller for installing charts. A YAML file specification can look as following (example taken from /var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting k3s with the --no-deploy traefik option so that I can add it manually with my own settings. I therefore apply a YAML like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
    dashboard:
      enabled: true
      domain: "traefik.k3s1.local"
But when trying to iterate over the settings to get it working the way I want, I'm having trouble tearing it down. If I try kubectl delete -f on this YAML, it just hangs indefinitely, and I can't seem to find a clean way to delete all the resources manually either.
I've been resorting now to just reinstall my entire cluster over and over because I can't seem to cleanup properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use that pod name to look at the logs using kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here:
Use the --now flag to delete the resources in your YAML file with minimal delay.
Use the --grace-period=0 --force flags to force delete the resources.
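Assuming the HelmChart manifest from the question is saved as traefik.yaml, those two options would look like:
kubectl delete -f traefik.yaml --now
# or, forcing immediate removal:
kubectl delete -f traefik.yaml --grace-period=0 --force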
There are other options, but you'll need the Helm CLI for them.
Please let me know if that helped.

How can I delete the Kubernetes dashboard from kube-system?

I cannot remove kubernetes-dashboard from Minikube. I tried deleting the deployment "deployment.apps/kubernetes-dashboard" multiple times, but it gets recreated automatically in a few seconds.
I am using the following command to delete the deployment:
kubectl delete deployment.apps/kubernetes-dashboard -n kube-system
I even tried editing the deployment to set the replica count to zero, but even that gets reset automatically after a few seconds.
The same thing happens for nginx-ingress deployment in kube-system.
I had to disable the dashboard addon in minikube first. Then deleting the deployment worked for me:
minikube addons disable dashboard
And in case of ingress:
minikube addons disable ingress
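Putting the two steps together for the dashboard case:
# stop minikube from re-applying the addon
minikube addons disable dashboard
# now the deletion is no longer undone
kubectl delete deployment.apps/kubernetes-dashboard -n kube-system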
Try the command below:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
Please do try the following:
kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard
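Whatever the grep turns up can then be deleted by kind and name. For instance, if it lists a service account (the name kubernetes-dashboard below is only an assumption about what the grep would show):
kubectl delete serviceaccount kubernetes-dashboard -n kube-system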

Error while creating pods in Kubernetes

I have installed Kubernetes on an Ubuntu server using the instructions here. I am trying to create pods using kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8000 --port=8080, as listed in the example. However, when I run kubectl get pod, the status of the pod is Pending. I then ran kubectl describe pod for debugging and see the message:
FailedScheduling pod (hello-minikube-3383150820-1r4f7) failed to fit in any node fit failure on node (minikubevm): PodFitsHostPorts.
I then tried to delete this pod with kubectl delete pod hello-minikube-3383150820-1r4f7, but when I run kubectl get pod again, I see another pod with the prefix "hello-minikube-3383150820-" that I haven't created. Does anyone know how to fix this problem? Thank you in advance.
The PodFitsHostPorts predicate is failing because something else on your nodes is using port 8000. You might be able to find what it is by running kubectl describe svc.
kubectl run creates a Deployment object (you can see it with kubectl describe deployments), which makes sure that the intended number of replicas of the pod is always running (in this case, 1). When you delete the pod, the deployment controller automatically creates another one for you. If you want to delete the deployment and the pods it keeps creating, you can run kubectl delete deployments hello-minikube.
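Once the deployment is gone, you can recreate the pod with a host port that isn't already taken (8001 below is just an example; any free port works):
kubectl delete deployments hello-minikube
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8001 --port=8080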