Error when getting IngressClass nginx: "nginx" not found - kubernetes

I'm using Kubernetes version 1.19.16 on a bare-metal Ubuntu 18.04 LTS server. When I try to deploy the nginx-ingress YAML file, it always fails with the errors below.
I followed these steps to deploy nginx-ingress:
$ git clone https://github.com/nginxinc/kubernetes-ingress.git
cd kubernetes-ingress/deployments
kubernetes-ingress/deployments$ git branch
* main
$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f rbac/ap-rbac.yaml
$ kubectl apply -f common/default-server-secret.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f deployment/nginx-ingress.yaml
deployment.apps/nginx-ingress created
$ kubectl get pods -n nginx-ingress -o wide
NAME                             READY   STATUS   RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
nginx-ingress-75c4bd64bd-mm52x   0/1     Error    2          21s   10.244.1.5   k8s-master   <none>           <none>
$ kubectl -n nginx-ingress get all
NAME                                 READY   STATUS             RESTARTS   AGE
pod/nginx-ingress-75c4bd64bd-mm52x   0/1     CrashLoopBackOff   12         38m

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress   0/1     1            0           38m

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-ingress-75c4bd64bd   1         1         0       38m
$ kubectl logs nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
W1003 04:53:02.833073 1 flags.go:273] Ignoring unhandled arguments: []
I1003 04:53:02.833154 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.1 PlusFlag=false
I1003 04:53:02.833158 1 flags.go:191] Commit=a8742472b9ddf27433b6b1de49d250aa9a7cb47e Date=2022-09-16T08:09:31Z DirtyState=false Arch=linux/amd64 Go=go1.18.5
I1003 04:53:02.844374 1 main.go:210] Kubernetes version: 1.19.16
F1003 04:53:02.846604 1 main.go:225] Error when getting IngressClass nginx: ingressclasses.networking.k8s.io "nginx" not found
$ kubectl describe pods nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m6s                  default-scheduler  Successfully assigned nginx-ingress/nginx-ingress-75c4bd64bd-mm52x to k8s-worker-1
  Normal   Pulled     87s (x5 over 3m5s)    kubelet            Container image "nginx/nginx-ingress:2.3.1" already present on machine
  Normal   Created    87s (x5 over 3m5s)    kubelet            Created container nginx-ingress
  Normal   Started    87s (x5 over 3m5s)    kubelet            Started container nginx-ingress
  Warning  BackOff    75s (x10 over 3m3s)   kubelet            Back-off restarting failed container
Here is the NGINX Ingress Controller deployment file link, for reference.
Since I'm using the main branch of the kubernetes-ingress.git repository, I'm not sure whether it is compatible with my Kubernetes version.
Can anyone share some pointers to solve this?

I think you missed installing the IngressClass "nginx"; that is why the controller is not able to find it: https://github.com/nginxinc/kubernetes-ingress/blob/main/deployments/common/ingress-class.yaml#L4
kubectl apply -f common/ingress-class.yaml
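For reference, that manifest just creates the IngressClass object the controller looks up at startup; at the time of writing it looks roughly like this (check the file in your own checkout, since the repo can change):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: nginx.org/ingress-controller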
You can follow the steps from this document: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
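After applying it, you can confirm the resource exists before the controller pod restarts again:

kubectl get ingressclass nginx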

Related

Nginx ingress failed: HTTP probe failed with statuscode: 503

I'm following this link to install the NGINX Ingress Controller on my bare-metal Kubernetes v1.19.16 server.
I executed the commands below as part of the installation:
$ git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v2.4.0
$ cd kubernetes-ingress/deployments
$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f rbac/ap-rbac.yaml
$ kubectl apply -f rbac/apdos-rbac.yaml
$ kubectl apply -f common/default-server-secret.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f common/ingress-class.yaml
$ kubectl apply -f daemon-set/nginx-ingress.yaml
I followed the DaemonSet method.
$ kubectl get all -n nginx-ingress
NAME                      READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-bcrk5   0/1     Running   0          19m
pod/nginx-ingress-ndpfz   0/1     Running   0          19m
pod/nginx-ingress-nvp98   0/1     Running   0          19m

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/nginx-ingress   3         3         0       3            0           <none>          19m
All three nginx-ingress pods show the same error.
$ kubectl describe pods nginx-ingress-bcrk5 -n nginx-ingress
Events:
  Type     Reason     Age                      From               Message
  ----     ------     ----                     ----               -------
  Normal   Scheduled  38m                      default-scheduler  Successfully assigned nginx-ingress/nginx-ingress-bcrk5 to node-4
  Normal   Pulling    38m                      kubelet            Pulling image "nginx/nginx-ingress:2.4.0"
  Normal   Pulled     37m                      kubelet            Successfully pulled image "nginx/nginx-ingress:2.4.0" in 19.603066401s
  Normal   Created    37m                      kubelet            Created container nginx-ingress
  Normal   Started    37m                      kubelet            Started container nginx-ingress
  Warning  Unhealthy  3m13s (x2081 over 37m)   kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503
$ kubectl logs -l app=nginx-ingress -n nginx-ingress
E1007 03:18:37.278678 1 reflector.go:140] pkg/mod/k8s.io/client-go@v0.25.2/tools/cache/reflector.go:169: Failed to watch *v1.VirtualServer: failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org)
W1007 03:18:55.714313 1 reflector.go:424] pkg/mod/k8s.io/client-go@v0.25.2/tools/cache/reflector.go:169: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E1007 03:18:55.714361 1 reflector.go:140] pkg/mod/k8s.io/client-go@v0.25.2/tools/cache/reflector.go:169: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
W1007 03:19:00.542294 1 reflector.go:424] pkg/mod/k8s.io/client-go@v0.25.2/tools/cache/reflector.go:169: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
E1007 03:19:00.542340 1 reflector.go:140] pkg/mod/k8s.io/client-go@v0.25.2/tools/cache/reflector.go:169: Failed to watch *v1alpha1.TransportServer: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
The READY and AVAILABLE columns still show 0; ideally both should show 3. Please let me know what I'm missing in the installation.
Any help is appreciated.
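One thing stands out in those logs: "the server could not find the requested resource" for virtualservers, policies, and transportservers usually means the k8s.nginx.org custom resource definitions were never applied. A hedged pointer, assuming the v2.4.0 manifest layout where they sit under common/crds in the deployments directory:

kubectl apply -f common/crds/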
I'd recommend installing it using Helm.
See https://github.com/nginxinc/kubernetes-ingress/tree/main/deployments/helm-chart
helm repo add nginx-stable https://helm.nginx.com/stable
helm install nginx-ingress nginx-stable/nginx-ingress \
--namespace $NAMESPACE \
--version $VERSION
You can look for versions compatible with your Kubernetes cluster version using:
helm search repo nginx-stable/nginx-ingress --versions
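For example, pinning a chart version from that output (the version number here is illustrative only, not a recommendation):

helm install nginx-ingress nginx-stable/nginx-ingress \
  --namespace nginx-ingress \
  --create-namespace \
  --version 0.15.0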
When the installation has finished successfully, you should see an ingress-controller service that holds an EXTERNAL-IP:
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.0.XXX.XXX   XX.XXX.XXX.XX   80:30578/TCP,443:31874/TCP   548d
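Note that on a bare-metal cluster like yours, the EXTERNAL-IP of a LoadBalancer service stays <pending> unless something such as MetalLB assigns one; you can check with:

kubectl get svc --namespace $NAMESPACE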
With the branch below, I was able to get all nginx-ingress pods running.
git clone https://github.com/nginxinc/kubernetes-ingress/
cd kubernetes-ingress/deployments
git checkout v1.10.0
Can you share the NGINX logs using the command below?
kubectl -n nginx-ingress logs -l app=nginx-ingress
I can't guess anything without them.

After Installing Artifactory with Helm, pods having issues

I'm doing some projects to learn Kubernetes.
I have an EKS (AWS) cluster and am using Helm to install Artifactory. The installation goes well, but when I go to access Artifactory through the load balancer URL, it's not working. I check the pods and...
$ kubectl get pods -n tools
NAME                                             READY   STATUS     RESTARTS       AGE
artifactory-0                                    0/1     Init:3/5   0              90m
artifactory-artifactory-nginx-7c4dcfdbbc-xn4wk   0/1     Running    11 (98s ago)   89m
artifactory-postgresql-0                         1/1     Running    0              3h32m
jenkins-0                                        2/2     Running    0              3h34m
I tried to troubleshoot it:
$ kubectl logs artifactory-0 -n tools
Defaulted container "artifactory" out of: artifactory, delete-db-properties (init), remove-lost-found (init), copy-system-yaml (init), wait-for-db (init), migration-artifactory (init)
Error from server (BadRequest): container "artifactory" in pod "artifactory-0" is waiting to start: PodInitializing
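Since the pod is stuck at Init:3/5, the fourth init container in the list above is the one still running, which should be wait-for-db; its logs can be read by selecting that container with -c:

kubectl logs artifactory-0 -n tools -c wait-for-db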
$ kubectl logs artifactory-artifactory-nginx-7c4dcfdbbc-xn4wk -n tools
Defaulted container "nginx" out of: nginx, setup (init)
I used kubectl describe on artifactory-artifactory-nginx-7c4dcfdbbc-xn4wk:
Events:
  Type     Reason     Age                      From      Message
  ----     ------     ----                     ----      -------
  Warning  Unhealthy  110s (x1235 over 111m)   kubelet   Startup probe failed:
artifactory-0 had no events

Why is pod not found? Error from server (NotFound): pods "curl-pod" not found

I can list pods in my prod namespace
kubectl get pods -n prod
NAME       READY   STATUS    RESTARTS      AGE
curl-pod   1/1     Running   1 (32m ago)   38m
web        1/1     Running   1 (33m ago)   38m
But I get an error:
kubectl describe pods curl-pod
Error from server (NotFound): pods "curl-pod" not found
Getting the events shows:
Normal Scheduled pod/curl-pod Successfully assigned prod/curl-pod to minikube
Why?
Kubernetes manages resources by namespace, so you must specify the namespace; otherwise kubectl uses the default namespace.
So you must type:
kubectl describe pod/curl-pod -n prod
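If you don't want to pass -n on every command, you can also switch the default namespace of the current context:

kubectl config set-context --current --namespace=prod
kubectl describe pod curl-pod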

Failed to move past 1 pod has unbound immediate PersistentVolumeClaims

I am new to Kubernetes and am trying to get Apache Airflow working using Helm charts. After almost a week of struggling, I am getting nowhere; I can't even get the example provided in the Apache Airflow documentation working. I use Pop!_OS 20.04 and microk8s.
When I run these commands:
kubectl create namespace airflow
helm repo add apache-airflow https://airflow.apache.org
helm install airflow apache-airflow/airflow --namespace airflow
The helm installation times out after five minutes.
kubectl get pods -n airflow
shows this list:
NAME                                   READY   STATUS     RESTARTS   AGE
airflow-postgresql-0                   0/1     Pending    0          4m8s
airflow-redis-0                        0/1     Pending    0          4m8s
airflow-worker-0                       0/2     Pending    0          4m8s
airflow-scheduler-565d8587fd-vm8h7     0/2     Init:0/1   0          4m8s
airflow-triggerer-7f4477dcb6-nlhg8     0/1     Init:0/1   0          4m8s
airflow-webserver-684c5d94d9-qhhv2     0/1     Init:0/1   0          4m8s
airflow-run-airflow-migrations-rzm59   1/1     Running    0          4m8s
airflow-statsd-84f4f9898-sltw9         1/1     Running    0          4m8s
airflow-flower-7c87f95f46-qqqqx        0/1     Running    4          4m8s
Then when I run the below command:
kubectl describe pod airflow-postgresql-0 -n airflow
I get the output below (trimmed down to the events):
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  58s (x2 over 58s)   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Then I deleted the namespace using the following command:
kubectl delete ns airflow
At this point, the termination of the pods gets stuck. Then I brought up the proxy in another terminal:
kubectl proxy
Then I issued the following command to force-delete the namespace and all its pods and resources:
kubectl get ns airflow -o json | jq '.spec.finalizers=[]' | curl -X PUT http://localhost:8001/api/v1/namespaces/airflow/finalize -H "Content-Type: application/json" --data @-
Then I deleted the PVCs using the following command:
kubectl delete pvc --force --grace-period=0 --all -n airflow
This gets stuck again, so I had to issue another command to force the deletion:
kubectl patch pvc data-airflow-postgresql-0 -p '{"metadata":{"finalizers":null}}' -n airflow
The PVCs get terminated at this point, and these two commands return nothing:
kubectl get pvc -n airflow
kubectl get all -n airflow
Then I restarted the machine and executed the helm install again (using the first and last commands in the first section of this question), but got the same result.
Then I executed the following command (using suggestions I found here):
kubectl describe pvc -n airflow
I got the following output (I am posting the events portion for PostgreSQL):
Type    Reason         Age                    From                          Message
----    ------         ----                   ----                          -------
Normal  FailedBinding  2m58s (x42 over 13m)   persistentvolume-controller   no persistent volumes available for this claim and no storage class is set
So my assumption is that I need to provide a storage class as part of values.yaml.
Is my understanding right? How do I provide the required values in values.yaml?
If you installed with helm, you can uninstall with helm delete airflow -n airflow.
Here's a way to install airflow for testing purposes using default values:
Generate the manifest: helm template airflow apache-airflow/airflow -n airflow > airflow.yaml
Open airflow.yaml with your favorite editor and replace every "volumeClaimTemplates" with an emptyDir volume. Example:
Create the namespace and install:
kubectl create namespace airflow
kubectl apply -f airflow.yaml --namespace airflow
You can copy files out from the pods if needed.
To delete it, run kubectl delete -f airflow.yaml --namespace airflow.

Kubernetes: print namespace in helm status output

The output of helm status mychart shows the NAMESPACE in which the chart is deployed, i.e. NAMESPACE: default.
#=> helm status mychart
LAST DEPLOYED: Tue Sep 24 21:32:45 2019
NAMESPACE: default
STATUS: DEPLOYED

==> v1/Pod(related)
NAME                              READY   STATUS    RESTARTS   AGE
nginx-web-stg-55f55958-v2cxm      0/1     Pending   0          28m
tomcat-api-stg-6d54498fdd-cqctr   1/1     Running   0          28m
and if I run kubectl get all -A, it shows the NAMESPACE along with each resource name:
#=> kubectl get all -A
NAMESPACE   NAME                                  READY   STATUS    RESTARTS   AGE
nginx       pod/nginx-web-stg-55f55958-v2cxm      0/1     Pending   0          20m
tomcat      pod/tomcat-api-stg-6d54498fdd-cqctr   1/1     Running   0          20m
In the kubectl output, a NAMESPACE column is included, but not in the helm status mychart output. I want to print resources along with their NAMESPACE in the helm status mychart output.
The output formats of kubectl and helm are completely unrelated. I'm not aware that you can modify the output of helm status in any way to make it display the namespace with each resource.
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
According to this issue, simply execute:
$ kubectl api-resources -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -l release=your-chart-name --all-namespaces
Sample output:
user@home:~$ kubectl api-resources -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -l release=terrific-ferret --all-namespaces

NAME                                 STATUS    MESSAGE              ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health": "true"}
componentstatus/etcd-1               Healthy   {"health": "true"}
componentstatus/controller-manager   Healthy   ok

NAMESPACE   NAME                                   DATA   AGE
default     configmap/terrific-ferret-mysql-test   1      12m

NAMESPACE   NAME                              ENDPOINTS          AGE
default     endpoints/terrific-ferret-mysql   aa.bb.cc.dd:port   12m

NAMESPACE   NAME                                          STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/terrific-ferret-mysql   Bound    pvc-896382d2   8Gi        RWO            standard       12m

NAMESPACE   NAME                                   READY   STATUS    RESTARTS   AGE
default     pod/terrific-ferret-mysql-86588b4646   1/1     Running   0          2m55s

NAMESPACE   NAME                           TYPE     DATA   AGE
default     secret/terrific-ferret-mysql   Opaque   2      13m

NAMESPACE   NAME                            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
default     service/terrific-ferret-mysql   ClusterIP   xx.yy.zz.ww   <none>        3306/TCP   13m

NAMESPACE   NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default     deployment.apps/terrific-ferret-mysql    1         1         1            1           13m

NAMESPACE   NAME                                       DESIRED   CURRENT   READY   AGE
default     replicaset.apps/terrific-ferret-mysql-86   1         1         1       13m
We are using kubectl api-resources to list all supported resource types along with their short names.
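If you only care about namespaced resources, the same pipeline can be narrowed (assuming the same release label as above):

kubectl api-resources --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -l release=your-chart-name --all-namespaces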
You can find useful information here: api-resources.
Useful blog: kubectl cheat sheet.