nginx to upstream headless service gets connection refused but I can curl from within the webapp container - kubernetes

I am using microk8s with an nginx frontend service that connects to a headless web application service (ClusterIP: None). However, nginx is refused a connection to the backend service.
nginx configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.config: |
    user nginx;
    worker_processes auto;
    # set open fd limit to 30000
    #worker_rlimit_nofile 10000;
    error_log /var/log/nginx/error.log;
    events {
      worker_connections 10240;
    }
    http {
      log_format main
        'remote_addr:$remote_addr\t'
        'time_local:$time_local\t'
        'method:$request_method\t'
        'uri:$request_uri\t'
        'host:$host\t'
        'status:$status\t'
        'bytes_sent:$body_bytes_sent\t'
        'referer:$http_referer\t'
        'useragent:$http_user_agent\t'
        'forwardedfor:$http_x_forwarded_for\t'
        'request_time:$request_time';
      access_log /var/log/nginx/access.log main;
      rewrite_log on;
      upstream svc-web {
        server localhost:8080;
        keepalive 1024;
      }
      server {
        listen 80;
        access_log /var/log/nginx/app.access_log main;
        error_log /var/log/nginx/app.error_log;
        location / {
          proxy_pass http://svc-web;
          proxy_http_version 1.1;
        }
      }
    }
$ k get all
NAME READY STATUS RESTARTS AGE
pod/blazegraph-0 1/1 Running 0 19h
pod/default-http-backend-587b7d64b5-c4rzj 1/1 Running 0 19h
pod/mysql-0 1/1 Running 0 19h
pod/nginx-7fdcdfcc7d-nlqc2 1/1 Running 0 12s
pod/nginx-ingress-microk8s-controller-b9xcd 1/1 Running 0 19h
pod/web-0 1/1 Running 0 13s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP 10.152.183.94 <none> 80/TCP 19h
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 22h
service/svc-db ClusterIP None <none> 3306/TCP,9999/TCP 19h
service/svc-frontend NodePort 10.152.183.220 <none> 80:32282/TCP,443:31968/TCP 12s
service/svc-web ClusterIP None <none> 8080/TCP,8443/TCP 15s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 19h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1 1 1 1 19h
deployment.apps/nginx 1 1 1 1 12s
NAME DESIRED CURRENT READY AGE
replicaset.apps/default-http-backend-587b7d64b5 1 1 1 19h
replicaset.apps/nginx-7fdcdfcc7d 1 1 1 12s
NAME DESIRED CURRENT AGE
statefulset.apps/blazegraph 1 1 19h
statefulset.apps/mysql 1 1 19h
statefulset.apps/web 1 1 15s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/istio-pilot Deployment/istio-pilot <unknown>/55% 1 1 0 19h
$ k describe pod web-0
Name: web-0
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: khteh-t580/192.168.86.93
Start Time: Fri, 30 Nov 2018 09:19:53 +0800
Labels: app=app-web
controller-revision-hash=web-5b9476f774
statefulset.kubernetes.io/pod-name=web-0
Annotations: <none>
Status: Running
IP: 10.1.1.203
Controlled By: StatefulSet/web
Containers:
web-service:
Container ID: docker://b5c68ba1d9466c352af107df69f84608aaf233d117a9d71ad307236d10aec03a
Image: khteh/tomcat:tomcat-webapi
Image ID: docker-pullable://khteh/tomcat@sha256:c246d322872ab315948f6f2861879937642a4f3e631f75e00c811afab7f4fbb9
Ports: 8080/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Fri, 30 Nov 2018 09:20:02 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/web/html from web-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s6bpp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
web-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: web-persistent-storage-web-0
ReadOnly: false
default-token-s6bpp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s6bpp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned default/web-0 to khteh-t580
Normal Pulling 11m kubelet, khteh-t580 pulling image "khteh/tomcat:tomcat-webapi"
Normal Pulled 11m kubelet, khteh-t580 Successfully pulled image "khteh/tomcat:tomcat-webapi"
Normal Created 11m kubelet, khteh-t580 Created container
Normal Started 11m kubelet, khteh-t580 Started container
$ k describe svc svc-frontend
Name: svc-frontend
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"svc-frontend","namespace":"default"},"spec":{"ports":[{"name":"ht...
Selector: app=nginx,tier=frontend
Type: NodePort
IP: 10.152.183.159
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30879/TCP
Endpoints: 10.1.1.204:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31929/TCP
Endpoints: 10.1.1.204:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
curl <NodePort IP>:32282/webapi/greeting hangs.
curl <pod IP>:8080/webapi/greeting WORKS.
curl <endpoint IP>:80/webapi/greeting results in "502 Bad Gateway":
$ curl http://10.1.1.204/webapi/greeting
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.15.7</center>
</body>
</html>
Inside the nginx container:
root@nginx-7fdcdfcc7d-nlqc2:/var/log/nginx# tail -f app.error_log
2018/11/24 08:17:04 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.1.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost:32282"
2018/11/24 08:17:04 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.1.1, server: , request: "GET / HTTP/1.1", upstream: "http://[::1]:8080/", host: "localhost:32282"
$ k get endpoints
NAME ENDPOINTS AGE
default-http-backend 10.1.1.246:80 6d20h
kubernetes 192.168.86.93:6443 6d22h
svc-db 10.1.1.248:9999,10.1.1.253:9999,10.1.1.248:3306 + 1 more... 5h48m
svc-frontend 10.1.1.242:80,10.1.1.242:443 6h13m
svc-web 10.1.1.245:8443,10.1.1.245:8080 6h13m
khteh@khteh-T580:/usr/src/kubernetes/cluster1 2950 $ curl 10.1.1.242:80/webapi/greeting
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.15.7</center>
</body>
</html>
khteh@khteh-T580:/usr/src/kubernetes/cluster1 2951 $

Fixed by changing the upstream configuration to use the name of the upstream Service instead of localhost, and curling via http://<cluster IP>/...
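For reference, a minimal sketch of the corrected upstream block, assuming the headless Service name svc-web in the default namespace and the container port 8080 shown above:
upstream svc-web {
    # resolve the backend by Service name; the headless Service returns the web pod IPs directly
    server svc-web.default.svc.cluster.local:8080;
    keepalive 1024;
}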

Related

Nginx Ingress not working on k3s running on Raspberry Pi

I have k3s installed on 4 Raspberry Pis with Traefik disabled.
I'm trying to run Home Assistant on it using the Nginx Ingress controller, installed with kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/baremetal/deploy.yaml.
But for some reason I just cannot expose the service. The ingress was assigned 192.168.0.57, which is one of the nodes' IPs. Am I missing something?
root@rpi1:~# kubectl get ingress -n home-assistant home-assistant-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
home-assistant-ingress nginx smart.home 192.168.0.57 80 20h
root@rpi1:~# curl http://192.168.0.57/
curl: (7) Failed to connect to 192.168.0.57 port 80: Connection refused
root@rpi1:~# curl http://smart.home/
curl: (7) Failed to connect to smart.home port 80: Connection refused
Please see the following.
Pod:
root@rpi1:~# kubectl describe pod -n home-assistant home-assistant-deploy-7c4674b679-zbwn7
Name: home-assistant-deploy-7c4674b679-zbwn7
Namespace: home-assistant
Priority: 0
Node: rpi4/192.168.0.58
Start Time: Tue, 16 Aug 2022 20:31:28 +0100
Labels: app=home-assistant
pod-template-hash=7c4674b679
Annotations: <none>
Status: Running
IP: 10.42.3.7
IPs:
IP: 10.42.3.7
Controlled By: ReplicaSet/home-assistant-deploy-7c4674b679
Containers:
home-assistant:
Container ID: containerd://c7ec189112e9f2d085bd7f9cc7c8086d09b312e30771d7d1fef424685fcfbd07
Image: ghcr.io/home-assistant/home-assistant:stable
Image ID: ghcr.io/home-assistant/home-assistant@sha256:0555dc6a69293a1a700420224ce8d03048afd845465f836ef6ad60f5763b44f2
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 17 Aug 2022 18:06:16 +0100
Last State: Terminated
Reason: Unknown
Exit Code: 255
Started: Tue, 16 Aug 2022 20:33:33 +0100
Finished: Wed, 17 Aug 2022 18:06:12 +0100
Ready: True
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5tb7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-n5tb7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 43m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 43m kubelet Container image "ghcr.io/home-assistant/home-assistant:stable" already present on machine
Normal Created 43m kubelet Created container home-assistant
Normal Started 43m kubelet Started container home-assistant
The pod is listening on port 8123:
root@rpi1:~# kubectl exec -it -n home-assistant home-assistant-deploy-7c4674b679-zbwn7 -- netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8123 0.0.0.0:* LISTEN 60/python3
tcp6 0 0 :::8123 :::* LISTEN 60/python3
Deployment:
root@rpi1:~# kubectl describe deployments.apps -n home-assistant
Name: home-assistant-deploy
Namespace: home-assistant
CreationTimestamp: Tue, 16 Aug 2022 20:31:28 +0100
Labels: app=home-assistant
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=home-assistant
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=home-assistant
Containers:
home-assistant:
Image: ghcr.io/home-assistant/home-assistant:stable
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: home-assistant-deploy-7c4674b679 (1/1 replicas created)
Events: <none>
The Service, with port set to 8080 and targetPort set to 8123:
root@rpi1:~# kubectl describe svc -n home-assistant home-assistant-service
Name: home-assistant-service
Namespace: home-assistant
Labels: app=home-assistant
Annotations: <none>
Selector: app=home-assistant
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.248.75
IPs: 10.43.248.75
LoadBalancer Ingress: 192.168.0.53, 192.168.0.56, 192.168.0.57, 192.168.0.58
Port: <unset> 8080/TCP
TargetPort: 8123/TCP
NodePort: <unset> 31678/TCP
Endpoints: 10.42.3.7:8123
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UpdatedIngressIP 20h svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56, 192.168.0.58
Normal UpdatedIngressIP 20h (x2 over 22h) svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56, 192.168.0.57, 192.168.0.58
Normal AppliedDaemonSet 20h (x19 over 22h) svccontroller Applied LoadBalancer DaemonSet kube-system/svclb-home-assistant-service-f2675711
Normal UpdatedIngressIP 47m svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56
Normal UpdatedIngressIP 47m svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56, 192.168.0.57
Normal UpdatedIngressIP 47m svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56, 192.168.0.57, 192.168.0.58
Normal AppliedDaemonSet 47m (x8 over 47m) svccontroller Applied LoadBalancer DaemonSet kube-system/svclb-home-assistant-service-f2675711
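For reference, a reconstruction of the Service manifest implied by the describe output above (a sketch only; the field layout is assumed from the standard v1 Service schema, not copied from the actual file):
apiVersion: v1
kind: Service
metadata:
  name: home-assistant-service
  namespace: home-assistant
  labels:
    app: home-assistant
spec:
  type: LoadBalancer
  selector:
    app: home-assistant
  ports:
  - port: 8080
    targetPort: 8123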
My Ingress:
root@rpi1:~# kubectl describe ingress -n home-assistant home-assistant-ingress
Name: home-assistant-ingress
Labels: <none>
Namespace: home-assistant
Address: 192.168.0.57
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
smart.home
/ home-assistant-service:8080 (10.42.3.7:8123)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 19h (x2 over 19h) nginx-ingress-controller Scheduled for sync
Normal Sync 49m (x3 over 50m) nginx-ingress-controller Scheduled for sync
root@rpi1:~# kubectl get ingress -n home-assistant home-assistant-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
home-assistant-ingress nginx smart.home 192.168.0.57 80 19h
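The Ingress implied by that output, as a sketch (pathType is assumed to be Prefix, since describe does not show it):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: home-assistant-ingress
  namespace: home-assistant
spec:
  ingressClassName: nginx
  rules:
  - host: smart.home
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: home-assistant-service
            port:
              number: 8080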
I can confirm the Nginx ingress controller is running:
root@rpi1:~# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-2thj7 0/1 Completed 0 22h
ingress-nginx-admission-patch-kwm4m 0/1 Completed 1 22h
ingress-nginx-controller-6dc865cd86-9h8wt 1/1 Running 2 (52m ago) 22h
Ingress Nginx Controller log:
root@rpi1:~# kubectl logs -n ingress-nginx ingress-nginx-controller-6dc865cd86-9h8wt
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.3.0
Build: 2b7b74854d90ad9b4b96a5011b9e8b67d20bfb8f
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10
-------------------------------------------------------------------------------
W0818 06:51:52.008386 7 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0818 06:51:52.009962 7 main.go:230] "Creating API client" host="https://10.43.0.1:443"
I0818 06:51:52.123762 7 main.go:274] "Running in Kubernetes cluster" major="1" minor="24" git="v1.24.3+k3s1" state="clean" commit="990ba0e88c90f8ed8b50e0ccd375937b841b176e" platform="linux/arm64"
I0818 06:51:52.594773 7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0818 06:51:52.691571 7 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0818 06:51:52.773089 7 nginx.go:258] "Starting NGINX Ingress controller"
I0818 06:51:52.807863 7 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"21ae6485-bb0e-447e-b098-c510e43b171e", APIVersion:"v1", ResourceVersion:"934", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0818 06:51:53.912887 7 store.go:429] "Found valid IngressClass" ingress="home-assistant/home-assistant-ingress" ingressclass="nginx"
I0818 06:51:53.913414 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"home-assistant", Name:"home-assistant-ingress", UID:"eeb12441-9cd4-4571-b0da-5b2978ff3267", APIVersion:"networking.k8s.io/v1", ResourceVersion:"8719", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0818 06:51:53.975141 7 nginx.go:301] "Starting NGINX process"
I0818 06:51:53.975663 7 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I0818 06:51:53.976173 7 nginx.go:321] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0818 06:51:53.980492 7 controller.go:167] "Configuration changes detected, backend reload required"
I0818 06:51:54.025524 7 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
I0818 06:51:54.025924 7 status.go:84] "New leader elected" identity="ingress-nginx-controller-6dc865cd86-9h8wt"
I0818 06:51:54.039912 7 status.go:214] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6dc865cd86-9h8wt" node="rpi3"
I0818 06:51:54.051540 7 status.go:299] "updating Ingress status" namespace="home-assistant" ingress="home-assistant-ingress" currentValue=[{IP:192.168.0.57 Hostname: Ports:[]}] newValue=[]
I0818 06:51:54.071502 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"home-assistant", Name:"home-assistant-ingress", UID:"eeb12441-9cd4-4571-b0da-5b2978ff3267", APIVersion:"networking.k8s.io/v1", ResourceVersion:"14445", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0818 06:51:54.823911 7 controller.go:184] "Backend successfully reloaded"
I0818 06:51:54.824200 7 controller.go:195] "Initial sync, sleeping for 1 second"
I0818 06:51:54.824334 7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6dc865cd86-9h8wt", UID:"def1db3a-4766-4751-b611-ae3461911bc6", APIVersion:"v1", ResourceVersion:"14423", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0818 06:51:57.788759 7 controller.go:1111] Service "home-assistant/home-assistant-service" does not have any active Endpoint.
I0818 06:52:54.165805 7 status.go:299] "updating Ingress status" namespace="home-assistant" ingress="home-assistant-ingress" currentValue=[] newValue=[{IP:192.168.0.57 Hostname: Ports:[]}]
I0818 06:52:54.190556 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"home-assistant", Name:"home-assistant-ingress", UID:"eeb12441-9cd4-4571-b0da-5b2978ff3267", APIVersion:"networking.k8s.io/v1", ResourceVersion:"14590", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
Endpoints:
root@rpi1:~# kubectl get endpoints -A
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 192.168.0.53:6443 35h
kube-system kube-dns 10.42.0.12:53,10.42.0.12:53,10.42.0.12:9153 35h
home-assistant home-assistant-service 10.42.3.9:8123 35h
kube-system metrics-server 10.42.0.14:4443 35h
ingress-nginx ingress-nginx-controller-admission 10.42.2.13:8443 35h
ingress-nginx ingress-nginx-controller 10.42.2.13:443,10.42.2.13:80 35h
I can also confirm the Traefik ingress controller is disabled:
root@rpi1:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-admission-create-2thj7 0/1 Completed 0 22h
ingress-nginx ingress-nginx-admission-patch-kwm4m 0/1 Completed 1 22h
kube-system local-path-provisioner-7b7dc8d6f5-jcm4p 1/1 Running 1 (59m ago) 22h
kube-system svclb-home-assistant-service-f2675711-w88fv 1/1 Running 1 (59m ago) 22h
kube-system coredns-b96499967-rml6k 1/1 Running 1 (59m ago) 22h
kube-system svclb-home-assistant-service-f2675711-rv8rf 1/1 Running 1 (59m ago) 22h
kube-system svclb-home-assistant-service-f2675711-9qk8m 1/1 Running 2 (59m ago) 22h
kube-system svclb-home-assistant-service-f2675711-m62sl 1/1 Running 1 (59m ago) 22h
home-assistant home-assistant-deploy-7c4674b679-zbwn7 1/1 Running 1 (59m ago) 22h
kube-system metrics-server-668d979685-rp2wm 1/1 Running 1 (59m ago) 22h
ingress-nginx ingress-nginx-controller-6dc865cd86-9h8wt 1/1 Running 2 (59m ago) 22h
Ingress Nginx Controller Service:
root@rpi1:~# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.43.254.114 <none> 80:32313/TCP,443:31543/TCP 23h
ingress-nginx-controller-admission ClusterIP 10.43.135.213 <none> 443/TCP 23h
root@rpi1:~# kubectl describe svc -n ingress-nginx ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.3.0
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.254.114
IPs: 10.43.254.114
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32313/TCP
Endpoints: 10.42.2.10:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31543/TCP
Endpoints: 10.42.2.10:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
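Worth noting (a hedged observation rather than a confirmed fix): with the baremetal deploy manifest the controller Service is NodePort, so nothing listens on port 80 of a node itself; a request would have to target the assigned node port and carry the ingress host, for example:
curl -H "Host: smart.home" http://192.168.0.57:32313/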
Update 1: added the Ingress Nginx Controller service.
Update 2: added the Ingress Nginx Controller log and endpoints.

Kubernetes Dashboard CrashLoopBackOff: timeout error on Raspberry Pi cluster

This should be a simple task: I just want to run the Kubernetes Dashboard on a clean install of Kubernetes on a Raspberry Pi cluster.
What I've done:
Set up the initial cluster (hostname, static IP, cgroups, swap space, install and configure Docker, install Kubernetes, set up the Kubernetes network, and join nodes)
Installed Flannel
Applied the dashboard manifest
A bunch of random testing trying to figure this out
Obviously, as seen below, the container in the dashboard pod is not working because it cannot access kubernetes-dashboard-csrf. I have no idea why it cannot be accessed; my only thought is that I missed a step when setting up the cluster. I've followed about six different guides without success, prioritizing the official one. I have also seen quite a few people with the same or similar issues, most of whom never posted a resolution. Thanks!
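Two low-risk checks that would help narrow this down (a sketch; the kube-proxy pod name is a placeholder, not taken from the output below): confirm the CSRF secret exists, and look at kube-proxy on the node running the dashboard pod, since the panic further down is a timeout against the cluster API VIP rather than a permissions error.
kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-csrf
kubectl -n kube-system logs <kube-proxy-pod-on-the-dashboard-node>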
Nodes: kubectl get nodes
NAME STATUS ROLES AGE VERSION
gus3 Ready <none> 346d v1.23.1
juliet3 Ready <none> 346d v1.23.1
shawn4 Ready <none> 346d v1.23.1
vick4 Ready control-plane,master 346d v1.23.1
All Pods: kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-74ff55c5b-7j2xg 1/1 Running 27 346d
kube-system coredns-74ff55c5b-cb2x8 1/1 Running 27 346d
kube-system etcd-vick4 1/1 Running 2 169m
kube-system kube-apiserver-vick4 1/1 Running 2 169m
kube-system kube-controller-manager-vick4 1/1 Running 2 169m
kube-system kube-flannel-ds-gclmp 1/1 Running 0 11m
kube-system kube-flannel-ds-hshjv 1/1 Running 0 12m
kube-system kube-flannel-ds-kdd4w 1/1 Running 0 11m
kube-system kube-flannel-ds-wzhkt 1/1 Running 0 10m
kube-system kube-proxy-4t25v 1/1 Running 26 346d
kube-system kube-proxy-b6vbx 1/1 Running 26 346d
kube-system kube-proxy-jgj4s 1/1 Running 27 346d
kube-system kube-proxy-n65sl 1/1 Running 26 346d
kube-system kube-scheduler-vick4 1/1 Running 2 169m
kubernetes-dashboard dashboard-metrics-scraper-5b8896d7fc-99wfk 1/1 Running 0 77m
kubernetes-dashboard kubernetes-dashboard-897c7599f-qss5p 0/1 CrashLoopBackOff 18 77m
Resources: kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-5b8896d7fc-99wfk 1/1 Running 0 79m
pod/kubernetes-dashboard-897c7599f-qss5p 0/1 CrashLoopBackOff 19 79m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 172.20.0.191 <none> 8000/TCP 79m
service/kubernetes-dashboard ClusterIP 172.20.0.15 <none> 443/TCP 79m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 79m
deployment.apps/kubernetes-dashboard 0/1 1 0 79m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-5b8896d7fc 1 1 1 79m
replicaset.apps/kubernetes-dashboard-897c7599f 1 1 0 79m
Notice CrashLoopBackOff
Pod Details: kubectl describe pods kubernetes-dashboard-897c7599f-qss5p -n kubernetes-dashboard
Name: kubernetes-dashboard-897c7599f-qss5p
Namespace: kubernetes-dashboard
Priority: 0
Node: shawn4/192.168.10.71
Start Time: Fri, 17 Dec 2021 18:52:15 +0000
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=897c7599f
Annotations: <none>
Status: Running
IP: 172.19.1.75
IPs:
IP: 172.19.1.75
Controlled By: ReplicaSet/kubernetes-dashboard-897c7599f
Containers:
kubernetes-dashboard:
Container ID: docker://894a354e40ca1a95885e149dcd75415e0f186ead3f2e05ec0787f4b1c7a29622
Image: kubernetesui/dashboard:v2.4.0
Image ID: docker-pullable://kubernetesui/dashboard@sha256:526850ae4ea9aba360e72b6df69fd3126b129d446efe83ac5250282b85f95b7f
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
--namespace=kubernetes-dashboard
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Fri, 17 Dec 2021 20:10:19 +0000
Finished: Fri, 17 Dec 2021 20:10:49 +0000
Ready: False
Restart Count: 19
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-wq9m8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-wq9m8:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-wq9m8
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 21s (x327 over 79m) kubelet Back-off restarting failed container
Logs: kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-897c7599f-qss5p
2021/12/17 20:10:19 Starting overwatch
2021/12/17 20:10:19 Using namespace: kubernetes-dashboard
2021/12/17 20:10:19 Using in-cluster config to connect to apiserver
2021/12/17 20:10:19 Using secret token for csrf signing
2021/12/17 20:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://172.20.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 172.20.0.1:443: i/o timeout
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0x400055fae8)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x350
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0x40001fc080)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0x8c
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x40001fc080)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x40
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
main.main()
/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x1dc
If you need any more information please ask!
UPDATE 12/29/21:
Fixed this issue by reinstalling the cluster to the newest versions of Kubernetes and Ubuntu.
Turned out there were several issues:
I was using Ubuntu Buster which is deprecated.
My client/server Kubernetes versions were +/-0.3 out of sync
I was following outdated instructions
I reinstalled the whole cluster following the official Kubernetes guide and, with a few snags along the way, it works!

K8s tutorial fails on my local installation with i/o timeout

I'm working on a local Kubernetes installation with three nodes, installed via the geerlingguy/kubernetes Ansible role (with default settings). I've recreated the VMs from scratch multiple times. I'm following the Kubernetes tutorial at https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-interactive/ to get services up and running inside the cluster, and am now trying to reach them.
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
enceladus Ready <none> 162m v1.17.9
mimas Ready <none> 162m v1.17.9
titan Ready master 162m v1.17.9
I tried it with 1.17.9 and 1.18.6, with https://github.com/geerlingguy/ansible-role-kubernetes and with https://github.com/kubernetes-sigs/kubespray, on fresh Debian Buster VMs. I tried both the Flannel and Calico network plugins. There is no firewall configured.
I can deploy the kubernetes-bootcamp and exec into it, but when I try to reach the pod via kubectl proxy and curl I'm getting an error.
# kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
# kubectl describe pods
Name: kubernetes-bootcamp-69fbc6f4cf-nq4tj
Namespace: default
Priority: 0
Node: enceladus/192.168.10.12
Start Time: Thu, 06 Aug 2020 10:53:34 +0200
Labels: app=kubernetes-bootcamp
pod-template-hash=69fbc6f4cf
Annotations: <none>
Status: Running
IP: 10.244.1.4
IPs:
IP: 10.244.1.4
Controlled By: ReplicaSet/kubernetes-bootcamp-69fbc6f4cf
Containers:
kubernetes-bootcamp:
Container ID: docker://77eae93ca1e6b574ef7b0623844374a5b2f3054075025492b708b23fc3474a45
Image: gcr.io/google-samples/kubernetes-bootcamp:v1
Image ID: docker-pullable://gcr.io/google-samples/kubernetes-bootcamp@sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 06 Aug 2020 10:53:35 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kkcvk (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-kkcvk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kkcvk
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned default/kubernetes-bootcamp-69fbc6f4cf-nq4tj to enceladus
Normal Pulled 9s kubelet, enceladus Container image "gcr.io/google-samples/kubernetes-bootcamp:v1" already present on machine
Normal Created 9s kubelet, enceladus Created container kubernetes-bootcamp
Normal Started 9s kubelet, enceladus Started container kubernetes-bootcamp
Update: service list
# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d20h
I can exec curl inside the deployment. It is running.
# kubectl exec -ti kubernetes-bootcamp-69fbc6f4cf-nq4tj curl http://localhost:8080/
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-69fbc6f4cf-nq4tj | v=1
But when I try to curl from the master node, the response is not good:
curl http://localhost:8001/api/v1/namespaces/default/pods/kubernetes-bootcamp-69fbc6f4cf-nq4tj/proxy/
Error trying to reach service: 'dial tcp 10.244.1.4:80: i/o timeout'
The curl itself takes about 30 seconds to return. The /version endpoint, etc., is available, so the proxy itself is running fine:
# curl http://localhost:8001/version
{
"major": "1",
"minor": "17",
"gitVersion": "v1.17.9",
"gitCommit": "4fb7ed12476d57b8437ada90b4f93b17ffaeed99",
"gitTreeState": "clean",
"buildDate": "2020-07-15T16:10:45Z",
"goVersion": "go1.13.9",
"compiler": "gc",
"platform": "linux/amd64"
}
The tutorial's kubectl describe pods output shows that the container has open ports (in my case it is <none>):
Port: 8080/TCP
Host Port: 0/TCP
OK, so I then created a manifest, bootcamp.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-bootcamp
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080
          protocol: TCP
I removed the previous deployment
# kubectl delete deployments.apps kubernetes-bootcamp --force
# kubectl apply -f bootcamp.yaml
But after that I still get the same i/o timeout on the new deployment.
So, what is my problem?
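One hedged aside on the proxy path itself: the apiserver pod-proxy URL accepts an explicit port suffix, so the 8080 the app listens on can be targeted directly (pod name taken from the earlier describe output; it changes after redeploying):
curl http://localhost:8001/api/v1/namespaces/default/pods/kubernetes-bootcamp-69fbc6f4cf-nq4tj:8080/proxy/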

Weave CrashLoopBackOff on fresh HA Cluster

I created an HA cluster with kubeadm by following these guides:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
https://medium.com/faun/configuring-ha-kubernetes-cluster-on-bare-metal-servers-with-kubeadm-1-2-1e79f0f7857b
I already have the etcd nodes up and running, and the API server running behind HAProxy and keepalived.
I also have one master node running with the weave-net network.
I use these subnets:
networking:
  podSubnet: 192.168.240.0/22
  serviceSubnet: 192.168.244.0/22
But when I join the second master node to the cluster, the weave pod created on it goes into CrashLoopBackOff.
I run the weave-net plugin with this line:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=192.168.240.0/21"
I also found that /etc/cni/net.d isn't created by the kubelet when the weave config is applied.
The Master Nodes
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubemaster01 Ready master 17h v1.18.1 192.168.129.137 <none> Ubuntu 18.04.4 LTS 4.15.0-96-generic docker://19.3.8
kubemaster02 Ready master 83m v1.18.1 192.168.129.138 <none> Ubuntu 18.04.4 LTS 4.15.0-91-generic docker://19.3.8
The Pods
root@kubemaster01:~# kubectl get pods,svc --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/coredns-66bff467f8-kh4mh 0/1 Running 0 18h 192.168.240.3 kubemaster01 <none> <none>
kube-system pod/coredns-66bff467f8-xhzjk 0/1 Running 0 18h 192.168.240.2 kubemaster01 <none> <none>
kube-system pod/kube-apiserver-kubemaster01 1/1 Running 0 16h 192.168.129.137 kubemaster01 <none> <none>
kube-system pod/kube-apiserver-kubemaster02 1/1 Running 0 104m 192.168.129.138 kubemaster02 <none> <none>
kube-system pod/kube-controller-manager-kubemaster01 1/1 Running 0 16h 192.168.129.137 kubemaster01 <none> <none>
kube-system pod/kube-controller-manager-kubemaster02 1/1 Running 0 104m 192.168.129.138 kubemaster02 <none> <none>
kube-system pod/kube-proxy-sct5x 1/1 Running 0 18h 192.168.129.137 kubemaster01 <none> <none>
kube-system pod/kube-proxy-tsr65 1/1 Running 0 104m 192.168.129.138 kubemaster02 <none> <none>
kube-system pod/kube-scheduler-kubemaster01 1/1 Running 2 18h 192.168.129.137 kubemaster01 <none> <none>
kube-system pod/kube-scheduler-kubemaster02 1/1 Running 0 104m 192.168.129.138 kubemaster02 <none> <none>
kube-system pod/weave-net-4zdg6 2/2 Running 0 3h 192.168.129.137 kubemaster01 <none> <none>
kube-system pod/weave-net-bf8mq 1/2 CrashLoopBackOff 38 104m 192.168.129.138 kubemaster02 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 192.168.244.1 <none> 443/TCP 20h <none>
kube-system service/kube-dns ClusterIP 192.168.244.10 <none> 53/UDP,53/TCP,9153/TCP 18h k8s-app=kube-dns
IP Routes in Master Nodes
root@kubemaster01:~# ip r
default via 192.168.128.1 dev ens3 proto static
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.128.0/21 dev ens3 proto kernel scope link src 192.168.129.137
192.168.240.0/21 dev weave proto kernel scope link src 192.168.240.1
root@kubemaster02:~# ip r
default via 192.168.128.1 dev ens3 proto static
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.128.0/21 dev ens3 proto kernel scope link src 192.168.129.138
Description of the weave pod running on the second master node
root@kubemaster01:~# kubectl describe pod/weave-net-bf8mq -n kube-system
Name: weave-net-bf8mq
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: kubemaster02./192.168.129.138
Start Time: Fri, 17 Apr 2020 12:28:09 -0300
Labels: controller-revision-hash=79478b764c
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 192.168.129.138
IPs:
IP: 192.168.129.138
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://93bff012aaebb34dc338001bf73798b5eeefe32a4d50b82731b0ef003c63c786
Image: docker.io/weaveworks/weave-kube:2.6.2
Image ID: docker-pullable://weaveworks/weave-kube@sha256:a1f58e75f24f02e1c2fa2a95b9e55a1b94930f455e75bd5f4799e1a55671971f
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 17 Apr 2020 14:15:59 -0300
Finished: Fri, 17 Apr 2020 14:16:29 -0300
Ready: False
Restart Count: 39
Requests:
cpu: 10m
Readiness: http-get http://127.0.0.1:6784/status delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
IPALLOC_RANGE: 192.168.240.0/21
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-xp46t (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://4de9116cae90cf3f6d59279dd1531938b102adcdd1b76464e5bbe2f2b013b060
Image: docker.io/weaveworks/weave-npc:2.6.2
Image ID: docker-pullable://weaveworks/weave-npc@sha256:5694b0b77003780333ccd1fc79810469434779cd86e926a17675cc5b70470459
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 17 Apr 2020 12:28:24 -0300
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-xp46t (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-xp46t:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-xp46t
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
:NoExecute
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/network-unavailable:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 11m (x17 over 81m) kubelet, kubemaster02. Container image "docker.io/weaveworks/weave-kube:2.6.2" already present on machine
Warning BackOff 85s (x330 over 81m) kubelet, kubemaster02. Back-off restarting failed container
The log files complain about a timeout, but that is because there is no network running.
root@kubemaster02:~# kubectl logs weave-net-bf8mq -name weave -n kube-system
FATA: 2020/04/17 17:22:04.386233 [kube-peers] Could not get peers: Get https://192.168.244.1:443/api/v1/nodes: dial tcp 192.168.244.1:443: i/o timeout
Failed to get peers
root@kubemaster02:~# kubectl logs weave-net-bf8mq -name weave-npc -n kube-system | more
INFO: 2020/04/17 15:28:24.851287 Starting Weaveworks NPC 2.6.2; node name "kubemaster02"
INFO: 2020/04/17 15:28:24.851469 Serving /metrics on :6781
Fri Apr 17 15:28:24 2020 <5> ulogd.c:408 registering plugin `NFLOG'
Fri Apr 17 15:28:24 2020 <5> ulogd.c:408 registering plugin `BASE'
Fri Apr 17 15:28:24 2020 <5> ulogd.c:408 registering plugin `PCAP'
Fri Apr 17 15:28:24 2020 <5> ulogd.c:981 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
WARNING: scheduler configuration failed: Function not implemented
DEBU: 2020/04/17 15:28:24.887619 Got list of ipsets: []
ERROR: logging before flag.Parse: E0417 15:28:54.923915 19321 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:321: Failed to list *v1.Pod: Get https://192.168.244.1:443/api/v1/pods?limit=500&resourceVersion=0: dial
tcp 192.168.244.1:443: i/o timeout
ERROR: logging before flag.Parse: E0417 15:28:54.923895 19321 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:322: Failed to list *v1.NetworkPolicy: Get https://192.168.244.1:443/apis/networking.k8s.io/v1/networkpo
licies?limit=500&resourceVersion=0: dial tcp 192.168.244.1:443: i/o timeout
ERROR: logging before flag.Parse: E0417 15:28:54.924071 19321 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:320: Failed to list *v1.Namespace: Get https://192.168.244.1:443/api/v1/namespaces?limit=500&resourceVer
sion=0: dial tcp 192.168.244.1:443: i/o timeout
Any advice or suggestions?
Regards.
The error was a misconfiguration of CRI-O as the CRI runtime. Following this installation guide corrected the issue:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/
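For anyone landing here, the common prerequisites from that page at the time looked roughly like this (a sketch; treat the linked guide as authoritative):
# load the kernel modules the CRI and CNI need
cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# required sysctls so bridged pod traffic hits iptables and can be forwarded
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system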

Jenkins app is not accessible outside Kubernetes cluster

On CentOS 7.4, I have set up a Kubernetes master node, pulled down the Jenkins image, and deployed it to the cluster, defining the Jenkins service on a NodePort as shown below.
I can curl the Jenkins app from the worker or master nodes using the IP defined by the service, but I cannot access the Jenkins app (dashboard) from my browser (outside the cluster) using the public IP of the master node.
[administrator@abcdefgh ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
abcdefgh Ready master 19h v1.13.1
hgfedcba Ready <none> 19h v1.13.1
[administrator@abcdefgh ~]$ sudo docker pull jenkinsci/jenkins:2.154-alpine
[administrator@abcdefgh ~]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.13.1 fdb321fd30a0 5 days ago 80.2MB
k8s.gcr.io/kube-controller-manager v1.13.1 26e6f1db2a52 5 days ago 146MB
k8s.gcr.io/kube-apiserver v1.13.1 40a63db91ef8 5 days ago 181MB
k8s.gcr.io/kube-scheduler v1.13.1 ab81d7360408 5 days ago 79.6MB
jenkinsci/jenkins 2.154-alpine aa25058d8320 2 weeks ago 222MB
k8s.gcr.io/coredns 1.2.6 f59dcacceff4 6 weeks ago 40MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 2 months ago 220MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 10 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 12 months ago 742kB
[administrator@abcdefgh ~]$ ls -l
total 8
-rw------- 1 administrator administrator 678 Dec 18 06:12 jenkins-deployment.yaml
-rw------- 1 administrator administrator 410 Dec 18 06:11 jenkins-service.yaml
[administrator@abcdefgh ~]$ cat jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-ui
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: ui
  selector:
    app: jenkins-master
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-discovery
spec:
  selector:
    app: jenkins-master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: jenkins-slaves
[administrator@abcdefgh ~]$ cat jenkins-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - image: jenkins/jenkins:2.154-alpine
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
        - containerPort: 50000
          name: jnlp-port
        env:
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
[administrator@abcdefgh ~]$ kubectl create -f jenkins-service.yaml
service/jenkins-ui created
service/jenkins-discovery created
[administrator@abcdefgh ~]$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-discovery ClusterIP 10.98.--.-- <none> 50000/TCP 19h
jenkins-ui NodePort 10.97.--.-- <none> 8080:31587/TCP 19h
kubernetes ClusterIP 10.96.--.-- <none> 443/TCP 20h
[administrator@abcdefgh ~]$ kubectl create -f jenkins-deployment.yaml
deployment.extensions/jenkins created
[administrator@abcdefgh ~]$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
jenkins 1/1 1 1 19h
[administrator@abcdefgh ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default jenkins-6497cf9dd4-f9r5b 1/1 Running 0 19h
kube-system coredns-86c58d9df4-jfq5b 1/1 Running 0 20h
kube-system coredns-86c58d9df4-s4k6d 1/1 Running 0 20h
kube-system etcd-abcdefgh 1/1 Running 1 20h
kube-system kube-apiserver-abcdefgh 1/1 Running 1 20h
kube-system kube-controller-manager-abcdefgh 1/1 Running 5 20h
kube-system kube-flannel-ds-amd64-2w68w 1/1 Running 1 20h
kube-system kube-flannel-ds-amd64-6zl4g 1/1 Running 1 20h
kube-system kube-proxy-9r4xt 1/1 Running 1 20h
kube-system kube-proxy-s7fj2 1/1 Running 1 20h
kube-system kube-scheduler-abcdefgh 1/1 Running 8 20h
[administrator@abcdefgh ~]$ kubectl describe pod jenkins-6497cf9dd4-f9r5b
Name: jenkins-6497cf9dd4-f9r5b
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: hgfedcba/10.41.--.--
Start Time: Tue, 18 Dec 2018 06:32:50 -0800
Labels: app=jenkins-master
pod-template-hash=6497cf9dd4
Annotations: <none>
Status: Running
IP: 10.244.--.--
Controlled By: ReplicaSet/jenkins-6497cf9dd4
Containers:
jenkins:
Container ID: docker://55912512a7aa1f782784690b558d74001157f242a164288577a85901ecb5d152
Image: jenkins/jenkins:2.154-alpine
Image ID: docker-pullable://jenkins/jenkins@sha256:b222875a2b788f474db08f5f23f63369b0f94ed7754b8b32ac54b8b4d01a5847
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 18 Dec 2018 07:16:32 -0800
Ready: True
Restart Count: 0
Environment:
JAVA_OPTS: -Djenkins.install.runSetupWizard=false
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wqph5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-wqph5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wqph5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
[administrator@abcdefgh ~]$ kubectl describe svc jenkins-ui
Name: jenkins-ui
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=jenkins-master
Type: NodePort
IP: 10.97.--.--
Port: ui 8080/TCP
TargetPort: 8080/TCP
NodePort: ui 31587/TCP
Endpoints: 10.244.--.--:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# Check if NodePort along with Kubernetes ports are open
[administrator@abcdefgh ~]$ sudo su root
[root@abcdefgh administrator]# systemctl start firewalld
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API Server
Warning: ALREADY_ENABLED: 6443:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
Warning: ALREADY_ENABLED: 2379-2380:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
Warning: ALREADY_ENABLED: 10250:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler
Warning: ALREADY_ENABLED: 10251:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager
Warning: ALREADY_ENABLED: 10252:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10255/tcp # Read-Only Kubelet API
Warning: ALREADY_ENABLED: 10255:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=31587/tcp # NodePort of jenkins-ui service
Warning: ALREADY_ENABLED: 31587:tcp
success
[root@abcdefgh administrator]# firewall-cmd --reload
success
[administrator@abcdefgh ~]$ kubectl cluster-info
Kubernetes master is running at https://10.41.--.--:6443
KubeDNS is running at https://10.41.--.--:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[administrator@hgfedcba ~]$ curl 10.41.--.--:8080
curl: (7) Failed connect to 10.41.--.--:8080; Connection refused
# Successfully curl jenkins app using its service IP from the worker node
[administrator@hgfedcba ~]$ curl 10.97.--.--:8080
<!DOCTYPE html><html><head resURL="/static/5882d14a" data-rooturl="" data-resurl="/static/5882d14a">
<title>Dashboard [Jenkins]</title><link rel="stylesheet" ...
...
Would you know how to do that? Happy to provide additional logs. Also, I have installed Jenkins from yum on another similar machine without any Docker or Kubernetes, and it is possible to access it through 10.20.30.40:8080 in my browser, so there is no provider firewall preventing me from doing that.
Your Jenkins Service is of type NodePort. That means a specific port number, on any node within your cluster, will serve your Jenkins UI.
In the Service description above, you can see that the assigned NodePort is 31587.
You should be able to browse to http://<any node IP>:31587
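A quick sketch of verifying that from outside the cluster (the node IP stays redacted here, as in the question):
# look up the NodePort assigned to the ui port
kubectl get svc jenkins-ui -o jsonpath='{.spec.ports[?(@.name=="ui")].nodePort}'
# then hit any node's public IP on that port
curl http://10.41.--.--:31587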