After Installing Artifactory with Helm, pods having issues - kubernetes

I'm doing some projects to learn Kubernetes. I have an EKS (AWS) cluster and I'm using Helm to install Artifactory. The installation goes well, but when I try to access Artifactory through the load balancer URL it doesn't work. I check the pods and...
$ kubectl get pods -n tools
NAME                                             READY   STATUS     RESTARTS       AGE
artifactory-0                                    0/1     Init:3/5   0              90m
artifactory-artifactory-nginx-7c4dcfdbbc-xn4wk   0/1     Running    11 (98s ago)   89m
artifactory-postgresql-0                         1/1     Running    0              3h32m
jenkins-0                                        2/2     Running    0              3h34m
I tried to troubleshoot it:
$ kubectl logs artifactory-0 -n tools
Defaulted container "artifactory" out of: artifactory, delete-db-properties (init), remove-lost-found (init), copy-system-yaml (init), wait-for-db (init), migration-artifactory (init)
Error from server (BadRequest): container "artifactory" in pod "artifactory-0" is waiting to start: PodInitializing
$ kubectl logs artifactory-artifactory-nginx-7c4dcfdbbc-xn4wk -n tools
Defaulted container "nginx" out of: nginx, setup (init)
I used kubectl describe on artifactory-artifactory-nginx-7c4dcfdbbc-xn4wk:
Events:
  Type     Reason     Age                     From     Message
  ----     ------     ----                    ----     -------
  Warning  Unhealthy  110s (x1235 over 111m)  kubelet  Startup probe failed:
artifactory-0 had no events
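Since artifactory-0 is stuck at Init:3/5, the next step is to read the logs of the individual init containers listed in the "Defaulted container" message; in this chart the wait-for-db step is a common blocker. A sketch of the commands, using the container names from the log output above:

```shell
# Inspect individual init containers by name; wait-for-db often reveals
# database connectivity problems
kubectl logs artifactory-0 -n tools -c wait-for-db
kubectl logs artifactory-0 -n tools -c copy-system-yaml

# Check the init container statuses on the pod itself
kubectl describe pod artifactory-0 -n tools
```

These commands assume the tools namespace and container names shown in the question; they only read state, so they are safe to run repeatedly while troubleshooting.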

Related

Nginx ingress failed: HTTP probe failed with statuscode: 503

I'm following this link to install the nginx-ingress controller on my bare-metal server running Kubernetes v1.19.16.
I executed the commands below as part of the installation.
$ git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v2.4.0
$ cd kubernetes-ingress/deployments
$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f rbac/ap-rbac.yaml
$ kubectl apply -f rbac/apdos-rbac.yaml
$ kubectl apply -f common/default-server-secret.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f common/ingress-class.yaml
$ kubectl apply -f daemon-set/nginx-ingress.yaml
I followed the DaemonSet method.
$ kubectl get all -n nginx-ingress
NAME                      READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-bcrk5   0/1     Running   0          19m
pod/nginx-ingress-ndpfz   0/1     Running   0          19m
pod/nginx-ingress-nvp98   0/1     Running   0          19m

NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/nginx-ingress   3         3         0       3            0           <none>          19m
All three nginx-ingress pods show the same error.
$ kubectl describe pods nginx-ingress-bcrk5 -n nginx-ingress
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  38m                     default-scheduler  Successfully assigned nginx-ingress/nginx-ingress-bcrk5 to node-4
  Normal   Pulling    38m                     kubelet            Pulling image "nginx/nginx-ingress:2.4.0"
  Normal   Pulled     37m                     kubelet            Successfully pulled image "nginx/nginx-ingress:2.4.0" in 19.603066401s
  Normal   Created    37m                     kubelet            Created container nginx-ingress
  Normal   Started    37m                     kubelet            Started container nginx-ingress
  Warning  Unhealthy  3m13s (x2081 over 37m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503
$ kubectl logs -l app=nginx-ingress -n nginx-ingress
E1007 03:18:37.278678 1 reflector.go:140] pkg/mod/k8s.io/client-go#v0.25.2/tools/cache/reflector.go:169: Failed to watch *v1.VirtualServer: failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org)
W1007 03:18:55.714313 1 reflector.go:424] pkg/mod/k8s.io/client-go#v0.25.2/tools/cache/reflector.go:169: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E1007 03:18:55.714361 1 reflector.go:140] pkg/mod/k8s.io/client-go#v0.25.2/tools/cache/reflector.go:169: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
W1007 03:19:00.542294 1 reflector.go:424] pkg/mod/k8s.io/client-go#v0.25.2/tools/cache/reflector.go:169: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
E1007 03:19:00.542340 1 reflector.go:140] pkg/mod/k8s.io/client-go#v0.25.2/tools/cache/reflector.go:169: Failed to watch *v1alpha1.TransportServer: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
The READY and UP-TO-DATE columns still show 0, when ideally both should show 3. Please let me know what I'm missing in the installation.
Any help is appreciated.
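For what it's worth, the "the server could not find the requested resource (get virtualservers.k8s.nginx.org)" errors in the logs above mean the controller's custom resource definitions (VirtualServer, Policy, TransportServer, etc.) were never applied, which also keeps the readiness probe failing. A sketch of the fix, assuming the v2.x repo layout where the CRDs sit under deployments/common/crds (the exact path may vary by release tag):

```shell
# From the kubernetes-ingress/deployments directory, apply the CRDs
# that the controller watches
kubectl apply -f common/crds/

# Restart the DaemonSet pods so they pick up the new resource types
kubectl rollout restart daemonset/nginx-ingress -n nginx-ingress
```

After the restart, the readiness probes should start passing once the controller can list all of its watched resources.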
I'd recommend installing it using Helm.
See https://github.com/nginxinc/kubernetes-ingress/tree/main/deployments/helm-chart
helm repo add nginx-stable https://helm.nginx.com/stable
helm install nginx-ingress nginx-stable/nginx-ingress \
--namespace $NAMESPACE \
--version $VERSION
You can look for versions compatible with your Kubernetes cluster version using:
helm search repo nginx-stable/nginx-ingress --versions
When the installation finishes, you should see an ingress-controller Service with an EXTERNAL-IP:
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.0.XXX.XXX   XX.XXX.XXX.XX   80:30578/TCP,443:31874/TCP   548d
With the branch below, I could see all nginx-ingress pods running.
git clone https://github.com/nginxinc/kubernetes-ingress/
cd kubernetes-ingress/deployments
git checkout v1.10.0
Can you share the nginx logs using the command below?
kubectl -n nginx-ingress logs -l app=nginx-ingress
Otherwise I can't guess anything.

Error when getting IngressClass nginx: "nginx" not found

I'm using Kubernetes 1.19.16 on a bare-metal Ubuntu 18.04 LTS server. When I try to deploy the nginx-ingress YAML file, it always fails with the errors below.
I followed these steps to deploy nginx-ingress:
$ git clone https://github.com/nginxinc/kubernetes-ingress.git
cd kubernetes-ingress/deployments
kubernetes-ingress/deployments$ git branch
* main
$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f rbac/ap-rbac.yaml
$ kubectl apply -f common/default-server-secret.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f deployment/nginx-ingress.yaml
deployment.apps/nginx-ingress created
$ kubectl get pods -n nginx-ingress -o wide
NAME                             READY   STATUS   RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
nginx-ingress-75c4bd64bd-mm52x   0/1     Error    2          21s   10.244.1.5   k8s-master   <none>           <none>
$ kubectl -n nginx-ingress get all
NAME                                 READY   STATUS             RESTARTS   AGE
pod/nginx-ingress-75c4bd64bd-mm52x   0/1     CrashLoopBackOff   12         38m

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress   0/1     1            0           38m

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-ingress-75c4bd64bd   1         1         0       38m
$ kubectl logs nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
W1003 04:53:02.833073 1 flags.go:273] Ignoring unhandled arguments: []
I1003 04:53:02.833154 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.1 PlusFlag=false
I1003 04:53:02.833158 1 flags.go:191] Commit=a8742472b9ddf27433b6b1de49d250aa9a7cb47e Date=2022-09-16T08:09:31Z DirtyState=false Arch=linux/amd64 Go=go1.18.5
I1003 04:53:02.844374 1 main.go:210] Kubernetes version: 1.19.16
F1003 04:53:02.846604 1 main.go:225] Error when getting IngressClass nginx: ingressclasses.networking.k8s.io "nginx" not found
$ kubectl describe pods nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m6s                 default-scheduler  Successfully assigned nginx-ingress/nginx-ingress-75c4bd64bd-mm52x to k8s-worker-1
  Normal   Pulled     87s (x5 over 3m5s)   kubelet            Container image "nginx/nginx-ingress:2.3.1" already present on machine
  Normal   Created    87s (x5 over 3m5s)   kubelet            Created container nginx-ingress
  Normal   Started    87s (x5 over 3m5s)   kubelet            Started container nginx-ingress
  Warning  BackOff    75s (x10 over 3m3s)  kubelet            Back-off restarting failed container
Nginx Ingress Controller deployment file link, for reference.
Since I'm using the main branch of the kubernetes-ingress.git repository, I'm not sure whether it is compatible with my Kubernetes version.
Can anyone share some pointers to solve this?
I think you missed creating the IngressClass named "nginx", which is why the controller can't find it: https://github.com/nginxinc/kubernetes-ingress/blob/main/deployments/common/ingress-class.yaml#L4
kubectl apply -f common/ingress-class.yaml
You can follow the steps from this document: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
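For reference, the missing manifest is just a small IngressClass object. A sketch of its contents, based on the linked file (field values may differ between releases):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  # Must match the controller name the nginx-ingress binary registers under
  controller: nginx.org/ingress-controller
```

The controller exits at startup with exactly the fatal error shown above when this object is absent, which is what produces the CrashLoopBackOff.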

Why is pod not found? Error from server (NotFound): pods "curl-pod" not found

I can list pods in my prod namespace:
kubectl get pods -n prod
NAME       READY   STATUS    RESTARTS      AGE
curl-pod   1/1     Running   1 (32m ago)   38m
web        1/1     Running   1 (33m ago)   38m
But I get an error:
kubectl describe pods curl-pod
Error from server (NotFound): pods "curl-pod" not found
Getting events shows:
Normal Scheduled pod/curl-pod Successfully assigned prod/curl-pod to minikube
Why?
Kubernetes scopes resources by namespace, so you must specify the namespace; otherwise kubectl uses the default namespace.
So you must type:
kubectl describe pod/curl-pod -n prod
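If you work in one namespace most of the time, you can also change the current context's default namespace so you don't need -n on every command (assuming a standard kubeconfig):

```shell
# Make prod the default namespace for the current kubectl context
kubectl config set-context --current --namespace=prod

# Subsequent commands now default to prod
kubectl describe pod curl-pod
```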

Failed to move past 1 pod has unbound immediate PersistentVolumeClaims

I am new to Kubernetes and am trying to get Apache Airflow working using Helm charts. After almost a week of struggling I am nowhere; I can't even get the chart provided in the Apache Airflow documentation working. I use Pop!_OS 20.04 and microk8s.
When I run these commands:
kubectl create namespace airflow
helm repo add apache-airflow https://airflow.apache.org
helm install airflow apache-airflow/airflow --namespace airflow
The helm installation times out after five minutes.
kubectl get pods -n airflow
shows this list:
NAME                                   READY   STATUS     RESTARTS   AGE
airflow-postgresql-0                   0/1     Pending    0          4m8s
airflow-redis-0                        0/1     Pending    0          4m8s
airflow-worker-0                       0/2     Pending    0          4m8s
airflow-scheduler-565d8587fd-vm8h7     0/2     Init:0/1   0          4m8s
airflow-triggerer-7f4477dcb6-nlhg8     0/1     Init:0/1   0          4m8s
airflow-webserver-684c5d94d9-qhhv2     0/1     Init:0/1   0          4m8s
airflow-run-airflow-migrations-rzm59   1/1     Running    0          4m8s
airflow-statsd-84f4f9898-sltw9         1/1     Running    0          4m8s
airflow-flower-7c87f95f46-qqqqx        0/1     Running    4          4m8s
Then when I run the below command:
kubectl describe pod airflow-postgresql-0 -n airflow
I get the below (trimmed down to the events):
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  58s (x2 over 58s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Then I deleted the namespace using the following command:
kubectl delete ns airflow
At this point, the termination of the pods gets stuck. Then I bring up the proxy in another terminal:
kubectl proxy
Then I issued the following command to force-delete the namespace and all its pods and resources:
kubectl get ns airflow -o json | jq '.spec.finalizers=[]' | curl -X PUT http://localhost:8001/api/v1/namespaces/airflow/finalize -H "Content-Type: application/json" --data-binary @-
Then I deleted the PVC's using the following command:
kubectl delete pvc --force --grace-period=0 --all -n airflow
This gets stuck again, so I had to issue another command to force the deletion:
kubectl patch pvc data-airflow-postgresql-0 -p '{"metadata":{"finalizers":null}}' -n airflow
The PVCs get terminated at this point, and these two commands return nothing:
kubectl get pvc -n airflow
kubectl get all -n airflow
Then I restarted the machine and executed the helm install again (using the first and last commands in the first section of this question), but got the same result.
I executed the following command then (using the suggestions I found here):
kubectl describe pvc -n airflow
I got the following output (I am posting the events portion for PostgreSQL):
Type    Reason         Age                   From                         Message
----    ------         ----                  ----                         -------
Normal  FailedBinding  2m58s (x42 over 13m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
So my assumption is that I need to provide a storage class as part of values.yaml.
Is my understanding right? If so, how (and with what values) do I provide it in values.yaml?
If you installed with helm, you can uninstall with helm delete airflow -n airflow.
Here's a way to install airflow for testing purposes using default values:
Generate the manifest: helm template airflow apache-airflow/airflow -n airflow > airflow.yaml
Open the "airflow.yaml" with your favorite editor, replace all "volumeClaimTemplates" with emptyDir. Example:
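A rough sketch of what that replacement might look like; the actual field names in your generated airflow.yaml may differ. A StatefulSet section like this:

```yaml
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 8Gi
```

becomes a plain pod volume with no PVC behind it:

```yaml
      volumes:
        - name: data
          emptyDir: {}
```

Note that emptyDir storage is ephemeral: the data is lost when the pod is deleted, which is why this is only suitable for testing.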
Create the namespace and install:
kubectl create namespace airflow
kubectl apply -f airflow.yaml --namespace airflow
You can copy files out from the pods if needed.
To delete: kubectl delete -f airflow.yaml --namespace airflow.
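Since the question uses microk8s, another option that keeps persistence is to enable its built-in hostpath storage addon, which registers a default StorageClass so the chart's PVCs can bind. A sketch, assuming a recent microk8s release (older releases named the addon storage):

```shell
# Enable the hostpath storage addon (on older microk8s: microk8s enable storage)
microk8s enable hostpath-storage

# Verify a default StorageClass now exists
kubectl get storageclass

# Re-run the install; the Pending PVCs should now bind
helm install airflow apache-airflow/airflow --namespace airflow
```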

how to fix coredns service not found after initializing pod network

I started with the Kubernetes tutorial, and after initializing the pod network with
kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
I discovered that coredns is pending while the rest of the services are running.
# kubectl get pods --all-namespaces
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-gzn5l   0/1     Pending   0          19m
kube-system   coredns-74ff55c5b-k2h5m   0/1     Pending   0          19m
Then I tried to look at the logs:
# kubectl logs coredns-74ff55c5b-gzn5l
Error from server (NotFound): pods "coredns-74ff55c5b-gzn5l" not found
# kubectl logs coredns-74ff55c5b-k2h5m
Error from server (NotFound): pods "coredns-74ff55c5b-k2h5m" not found