I am getting a ServiceUnavailable error when I run kubectl top nodes or kubectl top pods in EKS. I am running my cluster in EKS, and I am not finding any solution for this online. If anyone has faced this issue in EKS, please let me know how to resolve it.
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
Output of kubectl get apiservices v1beta1.metrics.k8s.io -o yaml:
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apiregistration.k8s.io/v1","kind":"APIService","metadata":{"annotations":{},"labels":{"k8s-app":"metrics-server"},"name":"v1beta1.metrics.k8s.io"},"spec":{"group":"metrics.k8s.io","groupPriorityMinimum":100,"insecureSkipTLSVerify":true,"service":{"name":"metrics-server","namespace":"kube-system"},"version":"v1beta1","versionPriority":100}}
creationTimestamp: "2022-02-03T08:22:59Z"
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
resourceVersion: "1373088"
uid: 2066d4cb-8105-4aea-9678-8303595dc47b
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
port: 443
version: v1beta1
versionPriority: 100
status:
conditions:
- lastTransitionTime: "2022-02-03T08:22:59Z"
message: 'failing or missing response from https://10.16.55.204:4443/apis/metrics.k8s.io/v1beta1:
Get "https://10.16.55.204:4443/apis/metrics.k8s.io/v1beta1": dial tcp 10.16.55.204:4443:
i/o timeout'
reason: FailedDiscoveryCheck
status: "False"
type: Available
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           3d22h
kubectl describe deployment metrics-server -n kube-system
Name: metrics-server
Namespace: kube-system
CreationTimestamp: Thu, 03 Feb 2022 09:22:59 +0100
Labels: k8s-app=metrics-server
Annotations: deployment.kubernetes.io/revision: 2
Selector: k8s-app=metrics-server
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 25% max surge
Pod Template:
Labels: k8s-app=metrics-server
Service Account: metrics-server
Containers:
metrics-server:
Image: k8s.gcr.io/metrics-server/metrics-server:v0.6.0
Port: 4443/TCP
Host Port: 0/TCP
Args:
--cert-dir=/tmp
--secure-port=4443
--kubelet-insecure-tls=true
--kubelet-preferred-address-types=InternalIP
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--kubelet-use-node-status-port
--metric-resolution=15s
Requests:
cpu: 100m
memory: 200Mi
Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Priority Class Name: system-cluster-critical
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: metrics-server-5dcd6cbcb9 (1/1 replicas created)
Events: <none>
Download the components.yaml, find and replace 4443 with 443, and do a kubectl replace -f components.yaml -n kube-system --force.
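A minimal sketch of those steps, assuming the upstream metrics-server release manifest is what you installed (adjust the URL if you deployed it another way):
curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# change the secure port from 4443 to 443 wherever it appears (-i.bak works with both GNU and BSD sed)
sed -i.bak 's/4443/443/g' components.yaml
kubectl replace -f components.yaml -n kube-system --force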
I have a Kong deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
name: local-test-kong
labels:
app: local-test-kong
spec:
replicas: 1
selector:
matchLabels:
app: local-test-kong
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: local-test-kong
spec:
automountServiceAccountToken: false
containers:
- envFrom:
- configMapRef:
name: kong-env-vars
image: kong:2.6
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- /bin/sleep 15 && kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: status
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8100
name: status
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: status
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources: # ToDo
limits:
cpu: 256m
memory: 256Mi
requests:
cpu: 256m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /kong_prefix/
name: kong-prefix-dir
- mountPath: /tmp
name: tmp-dir
- mountPath: /kong_dbless/
name: kong-custom-dbless-config-volume
terminationGracePeriodSeconds: 30
volumes:
- name: kong-prefix-dir
- name: tmp-dir
- configMap:
defaultMode: 0555
name: kong-declarative
name: kong-custom-dbless-config-volume
I applied this YAML in GKE. Then I ran kubectl describe on its pod.
➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
local-test-kong-678598ffc6-ll9s8 1/1 Running 0 25m
➜ kubectl describe pod/local-test-kong-678598ffc6-ll9s8
Name: local-test-kong-678598ffc6-ll9s8
Namespace: local-test-kong
Priority: 0
Node: gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl/10.128.64.95
Start Time: Wed, 23 Nov 2022 00:12:56 +0800
Labels: app=local-test-kong
pod-template-hash=678598ffc6
Annotations: kubectl.kubernetes.io/restartedAt: 2022-11-23T00:12:56+08:00
Status: Running
IP: 10.128.96.104
IPs:
IP: 10.128.96.104
Controlled By: ReplicaSet/local-test-kong-678598ffc6
Containers:
proxy:
Container ID: containerd://1bd392488cfe33dcc62f717b3b8831349e8cf573326add846c9c843c7bf15e2a
Image: kong:2.6
Image ID: docker.io/library/kong#sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70
Ports: 8000/TCP, 8100/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Wed, 23 Nov 2022 00:12:58 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 256m
memory: 256Mi
Requests:
cpu: 256m
memory: 256Mi
Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Environment Variables from:
kong-env-vars ConfigMap Optional: false
Environment: <none>
Mounts:
/kong_dbless/ from kong-custom-dbless-config-volume (rw)
/kong_prefix/ from kong-prefix-dir (rw)
/tmp from tmp-dir (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kong-prefix-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kong-custom-dbless-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kong-declarative
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned local-test-kong/local-test-kong-678598ffc6-ll9s8 to gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl
Normal Pulled 25m kubelet Container image "kong:2.6" already present on machine
Normal Created 25m kubelet Created container proxy
Normal Started 25m kubelet Started container proxy
➜
I applied the same YAML to my local MicroK8s (on macOS) and then ran kubectl describe on its pod.
➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
local-test-kong-54cfc585cb-7grj8 1/1 Running 0 86s
➜ kubectl describe pod/local-test-kong-54cfc585cb-7grj8
Name: local-test-kong-54cfc585cb-7grj8
Namespace: local-test-kong
Priority: 0
Node: microk8s-vm/192.168.64.5
Start Time: Wed, 23 Nov 2022 00:39:33 +0800
Labels: app=local-test-kong
pod-template-hash=54cfc585cb
Annotations: cni.projectcalico.org/podIP: 10.1.254.79/32
cni.projectcalico.org/podIPs: 10.1.254.79/32
kubectl.kubernetes.io/restartedAt: 2022-11-23T00:39:33+08:00
Status: Running
IP: 10.1.254.79
IPs:
IP: 10.1.254.79
Controlled By: ReplicaSet/local-test-kong-54cfc585cb
Containers:
proxy:
Container ID: containerd://d60d09ca8b77ee59c80ea060dcb651c3e346c3a5f0147b0d061790c52193d93d
Image: kong:2.6
Image ID: docker.io/library/kong#sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70
Ports: 8000/TCP, 8100/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Wed, 23 Nov 2022 00:39:37 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 256m
memory: 256Mi
Requests:
cpu: 256m
memory: 256Mi
Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Environment Variables from:
kong-env-vars ConfigMap Optional: false
Environment: <none>
Mounts:
/kong_dbless/ from kong-custom-dbless-config-volume (rw)
/kong_prefix/ from kong-prefix-dir (rw)
/tmp from tmp-dir (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kong-prefix-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kong-custom-dbless-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kong-declarative
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 92s default-scheduler Successfully assigned local-test-kong/local-test-kong-54cfc585cb-7grj8 to microk8s-vm
Normal Pulled 90s kubelet Container image "kong:2.6" already present on machine
Normal Created 90s kubelet Created container proxy
Normal Started 89s kubelet Started container proxy
Warning Unhealthy 68s kubelet Readiness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 68s kubelet Liveness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
➜
It's the exact same deployment YAML. However, the deployment created inside the GKE cluster is running fine with no complaints, while the deployment created in my local MicroK8s (on macOS) is showing probe failures.
What could I be missing when deploying to MicroK8s (on macOS)?
Your readiness probes are failing on the local pod on port 8100. It looks like a firewall rule is preventing internal pod and/or pod-to-pod communication.
As per the docs:
You may need to configure your firewall to allow pod-to-pod and pod-to-internet communication:
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
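On macOS, MicroK8s itself runs inside a Multipass VM (the node microk8s-vm in the describe output above), so those rules need to be applied inside that VM rather than on the Mac. A sketch, assuming Multipass is managing the VM and the CNI bridge is cni0 (the interface name can differ with Calico):
multipass shell microk8s-vm
# inside the VM:
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
exit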
I am trying to deploy a simple Node-RED application to my k3s cluster. Background: I have two Pi 4s as my servers and 3B+s as my workers.
For errors, I get:
NAME READY STATUS RESTARTS AGE
nodered-559875dbbd-mn474 0/1 CrashLoopBackOff 11 36m
Here are my YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: nodered
name: nodered
namespace: nodered
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: nodered
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: nodered
spec:
containers:
- image: nodered/node-red
name: nodered
ports:
- containerPort: 1880
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 2
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 2
periodSeconds: 10
resources: {}
volumeMounts:
- mountPath: /data
name: nodered-claim0
restartPolicy: Always
securityContext:
runAsUser: 0
volumes:
- name: nodered-claim0
persistentVolumeClaim:
claimName: nodered-claim0
status: {}
---
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: nodered
name: nodered
namespace: nodered
spec:
ports:
- name: "1880"
port: 80
targetPort: 1880
selector:
io.kompose.service: nodered
status:
loadBalancer: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: nodered-claim0
name: nodered-claim0
namespace: nodered
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
status: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nodered-ingress
namespace: nodered
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
rules:
- host: nodered.local
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: nodered
port:
number: 80
Here is the error that I am getting when I describe the pod:
Name: nodered-559875dbbd-mn474
Namespace: nodered
Priority: 0
Node: masternode/192.168.1.120
Start Time: Tue, 27 Apr 2021 10:38:28 -0400
Labels: io.kompose.service=nodered
pod-template-hash=559875dbbd
Annotations: kompose.cmd: kompose convert
kompose.version: 1.22.0 (955b78124)
Status: Running
IP: 10.42.1.197
IPs:
IP: 10.42.1.197
Controlled By: ReplicaSet/nodered-559875dbbd
Containers:
nodered:
Container ID: containerd://af3290fa34138a3361d5d73b7c1872eb1c523cefb0b8c17195db5d99c609f9fb
Image: nodered/node-red
Image ID: docker.io/nodered/node-red#sha256:f16e1ec7265829bcc381009dec175d9fbbc050ab6a1c42c4c906e689ff3bcf6b
Port: 1880/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 27 Apr 2021 10:38:52 -0400
Finished: Tue, 27 Apr 2021 10:38:52 -0400
Ready: False
Restart Count: 2
Liveness: http-get http://:80/ delay=2s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:80/ delay=2s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/data from nodered-claim0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-j2f7l (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nodered-claim0:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nodered-claim0
ReadOnly: false
default-token-j2f7l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-j2f7l
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 37s default-scheduler Successfully assigned nodered/nodered-559875dbbd-mn474 to masternode
Normal Pulled 33s kubelet Successfully pulled image "nodered/node-red" in 495.427449ms
Normal Pulled 32s kubelet Successfully pulled image "nodered/node-red" in 504.720641ms
Normal Pulling 14s (x3 over 34s) kubelet Pulling image "nodered/node-red"
Normal Pulled 14s kubelet Successfully pulled image "nodered/node-red" in 506.632077ms
Normal Created 13s (x3 over 33s) kubelet Created container nodered
Normal Started 13s (x3 over 33s) kubelet Started container nodered
Warning BackOff 6s (x5 over 30s) kubelet Back-off restarting failed container
Finally, the logs show the following:
internal/modules/cjs/loader.js:800
throw err;
^
SyntaxError: /usr/local/lib/node_modules/npm/package.json: Unexpected end of JSON input
at JSON.parse (<anonymous>)
at Object.Module._extensions..json (internal/modules/cjs/loader.js:797:27)
at Module.load (internal/modules/cjs/loader.js:653:32)
at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
at Function.Module._load (internal/modules/cjs/loader.js:585:3)
at Module.require (internal/modules/cjs/loader.js:692:17)
at require (internal/modules/cjs/helpers.js:25:18)
at Object.<anonymous> (/usr/local/lib/node_modules/npm/lib/utils/unsupported.js:3:17)
at Module._compile (internal/modules/cjs/loader.js:778:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
I can't seem to trace back my log issue, or at least I'm not sure where to look. Any help is appreciated.
For those wondering, I got this to work. For some reason it was colliding with the install for Node.js... no idea why. I uninstalled Node.js, reinstalled it, and all is working. So there is nothing wrong with the config file above for those who want to use it. Once you bind your config to a volume you can add authentication as well (a sketch follows below).
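To illustrate that last point, a minimal sketch only (the ConfigMap name nodered-settings and the subPath mount are assumptions, not part of the setup above): Node-RED reads /data/settings.js, and an adminAuth section in that file enables authentication, so a settings file can be mounted from a ConfigMap next to the existing PVC:
        volumeMounts:
        - mountPath: /data
          name: nodered-claim0
        - mountPath: /data/settings.js
          name: nodered-settings
          subPath: settings.js
      volumes:
      - name: nodered-claim0
        persistentVolumeClaim:
          claimName: nodered-claim0
      - name: nodered-settings
        configMap:
          name: nodered-settings   # assumed ConfigMap containing a settings.js with an adminAuth block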
I'm unable to get the controller working. I have tried many times and still get Error: ImagePullBackOff.
Is there an alternative that I can try, or any idea why it's failing?
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.0/deploy/static/mandatory.yaml
kubectl describe pod nginx-ingress-controller-7fcb6cffc5-m8m5c -n ingress-nginx
Name: nginx-ingress-controller-7fcb6cffc5-m8m5c
Namespace: ingress-nginx
Priority: 0
Node: ip-10-0-0-244.ap-south-1.compute.internal/10.0.0.244
Start Time: Mon, 07 Dec 2020 08:21:13 -0500
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
pod-template-hash=7fcb6cffc5
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container nginx-ingress-controller
kubernetes.io/psp: eks.privileged
prometheus.io/port: 10254
prometheus.io/scrape: true
Status: Pending
IP: 10.0.0.231
IPs:
IP: 10.0.0.231
Controlled By: ReplicaSet/nginx-ingress-controller-7fcb6cffc5
Containers:
nginx-ingress-controller:
Container ID:
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master
Image ID:
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--configmap=$(POD_NAMESPACE)/nginx-configuration
--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
--udp-services-configmap=$(POD_NAMESPACE)/udp-services
--publish-service=$(POD_NAMESPACE)/ingress-nginx
--annotations-prefix=nginx.ingress.kubernetes.io
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=10s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=0s timeout=10s period=10s #success=1 #failure=3
Environment:
POD_NAME: nginx-ingress-controller-7fcb6cffc5-m8m5c (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-xtnz9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nginx-ingress-serviceaccount-token-xtnz9:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-serviceaccount-token-xtnz9
Optional: false
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19s default-scheduler Successfully assigned ingress-nginx/nginx-ingress-controller-7fcb6cffc5-m8m5c to ip-10-0-0-244.ap-south-1.compute.internal
Normal Pulling 18s kubelet Pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master"
Warning Failed 3s kubelet Failed to pull image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 3s kubelet Error: ErrImagePull
Normal BackOff 3s kubelet Back-off pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master"
Warning Failed 3s kubelet Error: ImagePullBackOff
I had the same problem with the ingress-nginx installation.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
For some reason it couldn't pull the ingress-nginx-controller image.
$ kubectl get pods --namespace=ingress-nginx
NAME READY STATUS RE
ingress-nginx-admission-create-6q4wx 0/1 Completed 0
ingress-nginx-admission-patch-fr5ct 0/1 Completed 1
ingress-nginx-controller-686556747b-dg68h 0/1 ImagePullBackOff 0
What I did was run $ kubectl describe pod ingress-nginx-controller-686556747b-dg68h --namespace ingress-nginx
and got the following output:
Name: ingress-nginx-controller-686556747b-dg68h
Namespace: ingress-nginx
Priority: 0
Node: docker-desktop/x.x.x.x
Start Time: Wed, 11 May 2022 20:11:55 +0430
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=686556747b
Annotations: <none>
Status: Pending
IP: x.x.x.x
IPs:
IP: x.x.x.x
Controlled By: ReplicaSet/ingress-nginx-controller-686556747b
Containers:
controller:
Container ID:
Image: k8s.gcr.io/ingress-nginx/controller:v1.2.0#sha256:d819
Image ID:
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s perio
Readiness: http-get http://:10254/healthz delay=10s timeout=1s perio
Environment:
POD_NAME: ingress-nginx-controller-686556747b-dg68h (v1:metad
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
From Containers.controller.Image, I got the image name that Kubernetes was trying, and failing, to download, and tried to docker pull that image myself like so:
docker pull k8s.gcr.io/ingress-nginx/controller:v1.2.0#sha256:d819
Docker could pull the image successfully and after that everything worked just fine.
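A hedged note on why that works: with Docker Desktop, the bundled Kubernetes shares the Docker image store, so an image pulled with docker pull is already present when the pod is recreated. A quick way to confirm and retry (pod name taken from the describe above):
docker images | grep ingress-nginx
kubectl delete pod ingress-nginx-controller-686556747b-dg68h --namespace ingress-nginx   # the ReplicaSet recreates it and finds the cached image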
It's failing because Kubernetes cannot download the specified image. Check the events section:
Warning Failed 3s kubelet Failed to pull image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Maybe you don't have internet connectivity, or this image does not exist. You can try running docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master from your computer.
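A sketch of those two checks (the /v2/ path is the standard registry API root; a 401 response still means the registry answered, so connectivity is fine):
curl -sSI https://quay.io/v2/
docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master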
As mentioned by John, creating a NAT router and NAT config allowed Docker images to be pulled when I was facing the same issue. If you create a VPC-native GKE cluster that is private, it has no access to the internet by default unless you deploy a NAT router.
gcloud compute routers create nat-router \
--network my-vpc \
--region us-east4
gcloud compute routers nats create nat-config \
--router-region us-east4 \
--router nat-router \
--nat-all-subnet-ip-ranges \
--auto-allocate-nat-external-ips
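A quick way to confirm the router and its NAT config were created (same name and region as in the commands above):
gcloud compute routers describe nat-router --region us-east4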
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: your host name
http:
paths:
- backend:
service:
name: your service name
port:
number: 3000
path: /api/?(.*)
pathType: Prefix
I think this YAML file solves your problem. I faced the same issue.
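A hedged usage sketch (the file name ingress-service.yaml is just an example) for applying the manifest and checking that the controller picked it up:
kubectl apply -f ingress-service.yaml
kubectl get ingress ingress-service
kubectl describe ingress ingress-service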
I'm having trouble getting my kube-registry up and running on CephFS. I'm using Rook to set this cluster up. As you can see, I'm having trouble attaching the volume. Any idea what would be causing this issue? Any help is appreciated.
kube-registry.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cephfs-pvc
namespace: kube-system
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: rook-cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-registry
namespace: kube-system
labels:
k8s-app: kube-registry
kubernetes.io/cluster-service: "true"
spec:
replicas: 3
selector:
matchLabels:
k8s-app: kube-registry
template:
metadata:
labels:
k8s-app: kube-registry
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: registry
image: registry:2
imagePullPolicy: Always
resources:
limits:
cpu: 100m
memory: 100Mi
env:
# Configuration reference: https://docs.docker.com/registry/configuration/
- name: REGISTRY_HTTP_ADDR
value: :5000
- name: REGISTRY_HTTP_SECRET
value: "Ple4seCh4ngeThisN0tAVerySecretV4lue"
- name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
value: /var/lib/registry
volumeMounts:
- name: image-store
mountPath: /var/lib/registry
ports:
- containerPort: 5000
name: registry
protocol: TCP
livenessProbe:
httpGet:
path: /
port: registry
readinessProbe:
httpGet:
path: /
port: registry
volumes:
- name: image-store
persistentVolumeClaim:
claimName: cephfs-pvc
readOnly: false
StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
# clusterID is the namespace where operator is deployed.
clusterID: rook-ceph
# CephFS filesystem name into which the volume shall be created
fsName: myfs
# Ceph pool into which the volume shall be created
# Required for provisionVolume: "true"
pool: myfs-data0
# Root path of an existing CephFS volume
# Required for provisionVolume: "false"
# rootPath: /absolute/path
# The secrets contain Ceph admin credentials. These are generated automatically by the operator
# in the same namespace as the cluster.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Deletea
kubectl describe pods --namespace=kube-system kube-registry-58659ff99b-j2b4d
Name: kube-registry-58659ff99b-j2b4d
Namespace: kube-system
Priority: 0
Node: minikube/192.168.99.212
Start Time: Wed, 25 Nov 2020 13:19:35 -0500
Labels: k8s-app=kube-registry
kubernetes.io/cluster-service=true
pod-template-hash=58659ff99b
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/kube-registry-58659ff99b
Containers:
registry:
Container ID:
Image: registry:2
Image ID:
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Liveness: http-get http://:registry/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:registry/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
REGISTRY_HTTP_ADDR: :5000
REGISTRY_HTTP_SECRET: Ple4seCh4ngeThisN0tAVerySecretV4lue
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
Mounts:
/var/lib/registry from image-store (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nw4th (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
image-store:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: cephfs-pvc
ReadOnly: false
default-token-nw4th:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nw4th
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 13m (x3 over 13m) default-scheduler running "VolumeBinding" filter plugin for pod "kube-registry-58659ff99b-j2b4d": pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 13m default-scheduler Successfully assigned kube-system/kube-registry-58659ff99b-j2b4d to minikube
Warning FailedMount 2m6s (x5 over 11m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[image-store], unattached volumes=[image-store default-token-nw4th]: timed out waiting for the condition
Warning FailedAttachVolume 59s (x6 over 11m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-6eeff481-eb0a-4269-84c7-e744c9d639d9" : attachdetachment timeout for volume 0001-0009-rook-c
Ceph provisioner logs (I restarted my cluster, so the names are different, but the output is the same):
I1127 18:27:19.370543 1 csi-provisioner.go:121] Version: v2.0.0
I1127 18:27:19.370948 1 csi-provisioner.go:135] Building kube configs for running in cluster...
I1127 18:27:19.429190 1 connection.go:153] Connecting to unix:///csi/csi-provisioner.sock
I1127 18:27:21.561133 1 common.go:111] Probing CSI driver for readiness
W1127 18:27:21.905396 1 metrics.go:142] metrics endpoint will not be started because `metrics-address` was not specified.
I1127 18:27:22.060963 1 leaderelection.go:243] attempting to acquire leader lease rook-ceph/rook-ceph-cephfs-csi-ceph-com...
I1127 18:27:22.122303 1 leaderelection.go:253] successfully acquired lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1127 18:27:22.323990 1 controller.go:820] Starting provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-797b67c54b-42jwc_4e14295b-f73d-4b94-bae9-ff4f2639b487!
I1127 18:27:22.324061 1 clone_controller.go:66] Starting CloningProtection controller
I1127 18:27:22.324205 1 clone_controller.go:84] Started CloningProtection controller
I1127 18:27:22.325240 1 volume_store.go:97] Starting save volume queue
I1127 18:27:22.426790 1 controller.go:869] Started provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-797b67c54b-42jwc_4e14295b-f73d-4b94-bae9-ff4f2639b487!
I1127 19:08:39.850493 1 controller.go:1317] provision "kube-system/cephfs-pvc" class "rook-cephfs": started
I1127 19:08:39.851034 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kube-system", Name:"cephfs-pvc", UID:"7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06", APIVersion:"v1", ResourceVersion:"7744", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "kube-system/cephfs-pvc"
I1127 19:08:43.670226 1 controller.go:1420] provision "kube-system/cephfs-pvc" class "rook-cephfs": volume "pvc-7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06" provisioned
I1127 19:08:43.670262 1 controller.go:1437] provision "kube-system/cephfs-pvc" class "rook-cephfs": succeeded
E1127 19:08:43.692108 1 controller.go:1443] couldn't create key for object pvc-7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06: object has no meta: object does not implement the Object interfaces
I1127 19:08:43.692189 1 controller.go:1317] provision "kube-system/cephfs-pvc" class "rook-cephfs": started
I1127 19:08:43.692205 1 controller.go:1326] provision "kube-system/cephfs-pvc" class "rook-cephfs": persistentvolume "pvc-7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06" already exists, skipping
I1127 19:08:43.692220 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kube-system", Name:"cephfs-pvc", UID:"7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06", APIVersion:"v1", ResourceVersion:"7744", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned
In the pasted YAML for your StorageClass, you have:
reclaimPolicy: Deletea
Was that a paste issue? Regardless, this is likely what is causing your problem.
I just had this exact problem with some of my Ceph RBD volumes, and the reason for it was that I was using a StorageClass that had
reclaimPolicy: Delete
However, the cephcsi driver was not configured to support it (and I don't think it actually supports it either).
Using a StorageClass with
reclaimPolicy: Retain
fixed the issue.
To check this on your cluster, run the following:
$ kubectl get sc rook-cephfs -o yaml
And look for the line that starts with reclaimPolicy:
Then, look at the csidriver your StorageClass is using. In your case it is rook-ceph.cephfs.csi.ceph.com
$ kubectl get csidriver rook-ceph.cephfs.csi.ceph.com -o yaml
And look for the entries under volumeLifecycleModes
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
creationTimestamp: "2020-11-16T22:18:55Z"
name: rook-ceph.cephfs.csi.ceph.com
resourceVersion: "29863971"
selfLink: /apis/storage.k8s.io/v1beta1/csidrivers/rook-ceph.cephfs.csi.ceph.com
uid: a9651d30-935d-4a7d-a7c9-53d5bc90c28c
spec:
attachRequired: true
podInfoOnMount: false
volumeLifecycleModes:
- Persistent
If the only entry under volumeLifecycleModes is Persistent, then your driver is not configured to support reclaimPolicy: Delete.
If instead you see
volumeLifecycleModes:
- Persistent
- Ephemeral
Then your driver should support reclaimPolicy: Delete
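A sketch of one way to act on that, using the manifests from the question (the file names are assumptions); reclaimPolicy cannot be changed in place on an existing StorageClass, so the class has to be recreated:
kubectl delete sc rook-cephfs
# in the StorageClass manifest, set:  reclaimPolicy: Retain   (which also fixes the "Deletea" typo)
kubectl apply -f storageclass.yaml
# recreate the PVC so it is provisioned under the corrected class
kubectl -n kube-system delete pvc cephfs-pvc
kubectl apply -f kube-registry.yaml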
I tried to configure kube-ops-view on a local cluster created using Kubernetes KinD, but I am not able to access it.
helm install kube-ops-view stable/kube-ops-view
WARNING: This chart is deprecated
NAME: kube-ops-view
LAST DEPLOYED: Wed Dec 2 15:05:45 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To access the Kubernetes Operational View UI:
1. First start the kubectl proxy:
kubectl proxy
2. Now open the following URL in your browser:
http://localhost:8001/api/v1/proxy/namespaces/default/services/kube-ops-view/
Please try reloading the page if you see "ServiceUnavailable / no endpoints available for service", pod creation might take a moment.
kubectl proxy
Starting to serve on 127.0.0.1:8001
kubectl get pods
NAME READY STATUS RESTARTS AGE
kube-ops-view-7bc8944b46-nmc8k 1/1 Running 0 5m9s
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-ops-view ClusterIP 10.96.242.129 <none> 80/TCP 5m28s
kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
kube-ops-view 1/1 1 1 5m48
kubectl describe deployment kube-ops-view
Name: kube-ops-view
Namespace: default
CreationTimestamp: Wed, 02 Dec 2020 15:05:45 +0800
Labels: app.kubernetes.io/instance=kube-ops-view
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=kube-ops-view
app.kubernetes.io/part-of=kube-ops-view
app.kubernetes.io/version=20.4.0
helm.sh/chart=kube-ops-view-1.2.4
Annotations: deployment.kubernetes.io/revision: 1
meta.helm.sh/release-name: kube-ops-view
meta.helm.sh/release-namespace: default
Selector: app.kubernetes.io/name=kube-ops-view,app.kubernetes.io/part-of=kube-ops-view
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/instance=kube-ops-view
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=kube-ops-view
app.kubernetes.io/part-of=kube-ops-view
app.kubernetes.io/version=20.4.0
helm.sh/chart=kube-ops-view-1.2.4
Service Account: default
Containers:
kube-ops-view:
Image: hjacobs/kube-ops-view:20.4.0
Port: 8080/TCP
Host Port: 0/TCP
Limits:
cpu: 100m
memory: 128Mi
Requests:
cpu: 80m
memory: 64Mi
Liveness: http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: kube-ops-view-7bc8944b46 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 5m58s deployment-controller Scaled up replica set kube-ops-view-7bc8944b46 to 1
I am trying to access it using the URL below, but it's not working:
http://localhost:8001/api/v1/proxy/namespaces/default/services/kube-ops-view/
I got it fixed. I deleted the kube-ops-view deployment and svc, then reinstalled with RBAC and ingress enabled:
helm install --set rbac.create=true --set ingress.enabled=true kube-ops-view stable/kube-ops-view
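A sketch of the same sequence with Helm doing the cleanup, assuming the release was installed into the default namespace:
helm uninstall kube-ops-view
helm install kube-ops-view stable/kube-ops-view --set rbac.create=true --set ingress.enabled=true
kubectl get pods,svc,ingress -l app.kubernetes.io/name=kube-ops-view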