Nginx Ingress not working on k3s running on Raspberry Pi - kubernetes

I have k3s installed on 4 Raspberry Pis with Traefik disabled.
I'm trying to run Home Assistant on it using the Nginx Ingress controller, installed with kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/baremetal/deploy.yaml.
But for some reason I just cannot expose the service. The ingress was assigned 192.168.0.57, which is one of the nodes' IPs. Am I missing something?
root@rpi1:~# kubectl get ingress -n home-assistant home-assistant-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
home-assistant-ingress nginx smart.home 192.168.0.57 80 20h
root@rpi1:~# curl http://192.168.0.57/
curl: (7) Failed to connect to 192.168.0.57 port 80: Connection refused
root@rpi1:~# curl http://smart.home/
curl: (7) Failed to connect to smart.home port 80: Connection refused
Please see the following.
Pod:
root@rpi1:~# kubectl describe pod -n home-assistant home-assistant-deploy-7c4674b679-zbwn7
Name: home-assistant-deploy-7c4674b679-zbwn7
Namespace: home-assistant
Priority: 0
Node: rpi4/192.168.0.58
Start Time: Tue, 16 Aug 2022 20:31:28 +0100
Labels: app=home-assistant
pod-template-hash=7c4674b679
Annotations: <none>
Status: Running
IP: 10.42.3.7
IPs:
IP: 10.42.3.7
Controlled By: ReplicaSet/home-assistant-deploy-7c4674b679
Containers:
home-assistant:
Container ID: containerd://c7ec189112e9f2d085bd7f9cc7c8086d09b312e30771d7d1fef424685fcfbd07
Image: ghcr.io/home-assistant/home-assistant:stable
Image ID: ghcr.io/home-assistant/home-assistant@sha256:0555dc6a69293a1a700420224ce8d03048afd845465f836ef6ad60f5763b44f2
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 17 Aug 2022 18:06:16 +0100
Last State: Terminated
Reason: Unknown
Exit Code: 255
Started: Tue, 16 Aug 2022 20:33:33 +0100
Finished: Wed, 17 Aug 2022 18:06:12 +0100
Ready: True
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5tb7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-n5tb7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 43m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 43m kubelet Container image "ghcr.io/home-assistant/home-assistant:stable" already present on machine
Normal Created 43m kubelet Created container home-assistant
Normal Started 43m kubelet Started container home-assistant
The pod is listening on port 8123:
root@rpi1:~# kubectl exec -it -n home-assistant home-assistant-deploy-7c4674b679-zbwn7 -- netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8123 0.0.0.0:* LISTEN 60/python3
tcp6 0 0 :::8123 :::* LISTEN 60/python3
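To rule out the application itself, a quick sanity check would be to port-forward straight to the pod and curl it locally (a sketch, not something from the original post):
kubectl -n home-assistant port-forward pod/home-assistant-deploy-7c4674b679-zbwn7 8123:8123
# in a second terminal
curl -I http://127.0.0.1:8123/
If that returns a response, the container is healthy and the problem sits in front of it.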
Deployment:
root@rpi1:~# kubectl describe deployments.apps -n home-assistant
Name: home-assistant-deploy
Namespace: home-assistant
CreationTimestamp: Tue, 16 Aug 2022 20:31:28 +0100
Labels: app=home-assistant
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=home-assistant
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=home-assistant
Containers:
home-assistant:
Image: ghcr.io/home-assistant/home-assistant:stable
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: home-assistant-deploy-7c4674b679 (1/1 replicas created)
Events: <none>
Service, with port set to 8080 and targetPort set to 8123:
root@rpi1:~# kubectl describe svc -n home-assistant home-assistant-service
Name: home-assistant-service
Namespace: home-assistant
Labels: app=home-assistant
Annotations: <none>
Selector: app=home-assistant
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.248.75
IPs: 10.43.248.75
LoadBalancer Ingress: 192.168.0.53, 192.168.0.56, 192.168.0.57, 192.168.0.58
Port: <unset> 8080/TCP
TargetPort: 8123/TCP
NodePort: <unset> 31678/TCP
Endpoints: 10.42.3.7:8123
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UpdatedIngressIP 20h svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56, 192.168.0.58
Normal UpdatedIngressIP 20h (x2 over 22h) svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56, 192.168.0.57, 192.168.0.58
Normal AppliedDaemonSet 20h (x19 over 22h) svccontroller Applied LoadBalancer DaemonSet kube-system/svclb-home-assistant-service-f2675711
Normal UpdatedIngressIP 47m svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56
Normal UpdatedIngressIP 47m svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56, 192.168.0.57
Normal UpdatedIngressIP 47m svccontroller LoadBalancer Ingress IP addresses updated: 192.168.0.53, 192.168.0.56, 192.168.0.57, 192.168.0.58
Normal AppliedDaemonSet 47m (x8 over 47m) svccontroller Applied LoadBalancer DaemonSet kube-system/svclb-home-assistant-service-f2675711
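The Service manifest itself isn't shown, but from the describe output above it would look roughly like this (a reconstruction, not the actual file):
apiVersion: v1
kind: Service
metadata:
  name: home-assistant-service
  namespace: home-assistant
  labels:
    app: home-assistant
spec:
  type: LoadBalancer
  selector:
    app: home-assistant
  ports:
  - port: 8080
    targetPort: 8123
    protocol: TCP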
My Ingress:
root@rpi1:~# kubectl describe ingress -n home-assistant home-assistant-ingress
Name: home-assistant-ingress
Labels: <none>
Namespace: home-assistant
Address: 192.168.0.57
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
smart.home
/ home-assistant-service:8080 (10.42.3.7:8123)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 19h (x2 over 19h) nginx-ingress-controller Scheduled for sync
Normal Sync 49m (x3 over 50m) nginx-ingress-controller Scheduled for sync
root@rpi1:~# kubectl get ingress -n home-assistant home-assistant-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
home-assistant-ingress nginx smart.home 192.168.0.57 80 19h
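For reference, an Ingress matching that output would look roughly like the following (reconstructed from the describe output, so treat it as a sketch):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: home-assistant-ingress
  namespace: home-assistant
spec:
  ingressClassName: nginx
  rules:
  - host: smart.home
    http:
      paths:
      - path: /
        pathType: Prefix   # assumed; the describe output only shows the path
        backend:
          service:
            name: home-assistant-service
            port:
              number: 8080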
I can confirm I have the Nginx Ingress controller running:
root@rpi1:~# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-2thj7 0/1 Completed 0 22h
ingress-nginx-admission-patch-kwm4m 0/1 Completed 1 22h
ingress-nginx-controller-6dc865cd86-9h8wt 1/1 Running 2 (52m ago) 22h
Ingress Nginx Controller log
root@rpi1:~# kubectl logs -n ingress-nginx ingress-nginx-controller-6dc865cd86-9h8wt
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.3.0
Build: 2b7b74854d90ad9b4b96a5011b9e8b67d20bfb8f
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10
-------------------------------------------------------------------------------
W0818 06:51:52.008386 7 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0818 06:51:52.009962 7 main.go:230] "Creating API client" host="https://10.43.0.1:443"
I0818 06:51:52.123762 7 main.go:274] "Running in Kubernetes cluster" major="1" minor="24" git="v1.24.3+k3s1" state="clean" commit="990ba0e88c90f8ed8b50e0ccd375937b841b176e" platform="linux/arm64"
I0818 06:51:52.594773 7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0818 06:51:52.691571 7 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0818 06:51:52.773089 7 nginx.go:258] "Starting NGINX Ingress controller"
I0818 06:51:52.807863 7 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"21ae6485-bb0e-447e-b098-c510e43b171e", APIVersion:"v1", ResourceVersion:"934", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0818 06:51:53.912887 7 store.go:429] "Found valid IngressClass" ingress="home-assistant/home-assistant-ingress" ingressclass="nginx"
I0818 06:51:53.913414 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"home-assistant", Name:"home-assistant-ingress", UID:"eeb12441-9cd4-4571-b0da-5b2978ff3267", APIVersion:"networking.k8s.io/v1", ResourceVersion:"8719", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0818 06:51:53.975141 7 nginx.go:301] "Starting NGINX process"
I0818 06:51:53.975663 7 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I0818 06:51:53.976173 7 nginx.go:321] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0818 06:51:53.980492 7 controller.go:167] "Configuration changes detected, backend reload required"
I0818 06:51:54.025524 7 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
I0818 06:51:54.025924 7 status.go:84] "New leader elected" identity="ingress-nginx-controller-6dc865cd86-9h8wt"
I0818 06:51:54.039912 7 status.go:214] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6dc865cd86-9h8wt" node="rpi3"
I0818 06:51:54.051540 7 status.go:299] "updating Ingress status" namespace="home-assistant" ingress="home-assistant-ingress" currentValue=[{IP:192.168.0.57 Hostname: Ports:[]}] newValue=[]
I0818 06:51:54.071502 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"home-assistant", Name:"home-assistant-ingress", UID:"eeb12441-9cd4-4571-b0da-5b2978ff3267", APIVersion:"networking.k8s.io/v1", ResourceVersion:"14445", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0818 06:51:54.823911 7 controller.go:184] "Backend successfully reloaded"
I0818 06:51:54.824200 7 controller.go:195] "Initial sync, sleeping for 1 second"
I0818 06:51:54.824334 7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6dc865cd86-9h8wt", UID:"def1db3a-4766-4751-b611-ae3461911bc6", APIVersion:"v1", ResourceVersion:"14423", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0818 06:51:57.788759 7 controller.go:1111] Service "home-assistant/home-assistant-service" does not have any active Endpoint.
I0818 06:52:54.165805 7 status.go:299] "updating Ingress status" namespace="home-assistant" ingress="home-assistant-ingress" currentValue=[] newValue=[{IP:192.168.0.57 Hostname: Ports:[]}]
I0818 06:52:54.190556 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"home-assistant", Name:"home-assistant-ingress", UID:"eeb12441-9cd4-4571-b0da-5b2978ff3267", APIVersion:"networking.k8s.io/v1", ResourceVersion:"14590", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
Endpoints
root@rpi1:~# kubectl get endpoints -A
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 192.168.0.53:6443 35h
kube-system kube-dns 10.42.0.12:53,10.42.0.12:53,10.42.0.12:9153 35h
home-assistant home-assistant-service 10.42.3.9:8123 35h
kube-system metrics-server 10.42.0.14:4443 35h
ingress-nginx ingress-nginx-controller-admission 10.42.2.13:8443 35h
ingress-nginx ingress-nginx-controller 10.42.2.13:443,10.42.2.13:80 35h
I can also confirm the Traefik Ingress controller is disabled:
root@rpi1:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-admission-create-2thj7 0/1 Completed 0 22h
ingress-nginx ingress-nginx-admission-patch-kwm4m 0/1 Completed 1 22h
kube-system local-path-provisioner-7b7dc8d6f5-jcm4p 1/1 Running 1 (59m ago) 22h
kube-system svclb-home-assistant-service-f2675711-w88fv 1/1 Running 1 (59m ago) 22h
kube-system coredns-b96499967-rml6k 1/1 Running 1 (59m ago) 22h
kube-system svclb-home-assistant-service-f2675711-rv8rf 1/1 Running 1 (59m ago) 22h
kube-system svclb-home-assistant-service-f2675711-9qk8m 1/1 Running 2 (59m ago) 22h
kube-system svclb-home-assistant-service-f2675711-m62sl 1/1 Running 1 (59m ago) 22h
home-assistant home-assistant-deploy-7c4674b679-zbwn7 1/1 Running 1 (59m ago) 22h
kube-system metrics-server-668d979685-rp2wm 1/1 Running 1 (59m ago) 22h
ingress-nginx ingress-nginx-controller-6dc865cd86-9h8wt 1/1 Running 2 (59m ago) 22h
Ingress Nginx Controller Service:
root@rpi1:~# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.43.254.114 <none> 80:32313/TCP,443:31543/TCP 23h
ingress-nginx-controller-admission ClusterIP 10.43.135.213 <none> 443/TCP 23h
root@rpi1:~# kubectl describe svc -n ingress-nginx ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.3.0
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.254.114
IPs: 10.43.254.114
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32313/TCP
Endpoints: 10.42.2.10:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31543/TCP
Endpoints: 10.42.2.10:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
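Note that because this controller Service is of type NodePort, nothing binds plain port 80 on the nodes; traffic would have to enter through the node port shown above (32313 for HTTP). Purely as an illustration of how that path would be exercised:
curl -H "Host: smart.home" http://192.168.0.57:32313/
(The Host header stands in for the smart.home DNS entry.)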
Update: added the Ingress Nginx controller service.
Update 2: added the Ingress Nginx controller log and endpoints.

Related

kubernetes - unable to expose Prometheus using NodePort

I have Prometheus installed on GCP, and I'm able to do a port-forward and access the Prometheus UI.
Prometheus pods and events on GCP:
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get pods -n monitoring -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
grafana-5ccfb68647-8fjrz 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
grafana-5ccfb68647-h7vbr 1/1 Running 0 5h24m 10.76.0.9 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-hw6d5 1/1 Running 0 5h24m 10.76.0.4 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-znjs6 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
prometheus-prometheus-0 2/2 Running 0 5h24m 10.76.0.10 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-1 2/2 Running 0 5h24m 10.76.0.7 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-2 2/2 Running 0 5h24m 10.76.0.11 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get endpoints -n monitoring
NAME ENDPOINTS AGE
grafana 10.76.0.9:3000 28h
grafana-lb 10.76.0.9:3000 54m
prometheus-lb 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 155m
prometheus-nodeport 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 149m
prometheus-operated 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 28h
prometheus-operator 10.76.0.4:8080 29h
I've created a NodePort service (port 30900) and also created a firewall rule allowing ingress to port 30900:
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get svc -n monitoring | grep prometheus-nodeport
prometheus-nodeport NodePort 10.80.7.195 <none> 9090:30900/TCP 146m
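For reference, the GCP firewall rule mentioned above would typically be created with something along these lines (the rule name and source range are illustrative):
gcloud compute firewall-rules create allow-prometheus-nodeport \
    --direction=INGRESS --action=ALLOW \
    --rules=tcp:30900 --source-ranges=0.0.0.0/0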
However, when I try to access it using http://<node_ip>:30900, the URL is not accessible.
Also, telnet to the host/port is not working:
Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet 10.76.0.11 30900
Trying 10.76.0.11...
Karans-MacBook-Pro:prometheus-yamls karanalang$ ping 10.76.0.7
PING 10.76.0.7 (10.76.0.7): 56 data bytes
Request timeout for icmp_seq 0
Here is the YAML used to create the NodePort service (in the monitoring namespace):
apiVersion: v1
kind: Service
metadata:
  name: prometheus-nodeport
spec:
  type: NodePort
  ports:
  - name: web
    nodePort: 30900
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    prometheus: prometheus
Any ideas on what the issue is?
How do I debug/resolve this?
Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet 10.76.0.11 30900
Trying 10.76.0.11...
Karans-MacBook-Pro:prometheus-yamls karanalang$ ping 10.76.0.7
PING 10.76.0.7 (10.76.0.7): 56 data bytes
The IPs that you used above appear to be in the pod CIDR range, judging from the endpoints result in the question. These are not the worker node IPs, which means you first need to check whether you can reach any of the worker nodes over the network you are on now (home? VPN? internet?), and whether the worker node already has the correct port (30900) opened.
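A minimal way to test that, assuming the firewall rule is in place, is to take an actual node IP rather than a pod IP:
# list the nodes with their internal/external IPs
kubectl get nodes -o wide
# then hit the NodePort on a node address
curl http://<node_ip>:30900/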

Getting prometheus/grafana and k3s to work together

To learn Kubernetes I've built myself a bare-metal cluster using 4 Raspberry Pis and set it up using k3s:
# curl -sfL https://get.k3s.io | sh -
I added nodes etc., everything comes up, I can see all the nodes, and almost everything is working as expected.
I wanted to monitor the Pis, so I installed the kube-prometheus-stack with Helm:
$ kubectl create namespace monitoring
$ helm install prometheus --namespace monitoring prometheus-community/kube-prometheus-stack
And now everything looks fantastic:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system helm-install-traefik-crd-s8zw5 0/1 Completed 0 5d21h
kube-system helm-install-traefik-rc9f2 0/1 Completed 1 5d21h
monitoring prometheus-prometheus-node-exporter-j85rw 1/1 Running 10 28h
kube-system metrics-server-86cbb8457f-mvbkl 1/1 Running 12 5d21h
kube-system coredns-7448499f4d-t7sp8 1/1 Running 13 5d21h
monitoring prometheus-prometheus-node-exporter-mmh2q 1/1 Running 9 28h
monitoring prometheus-prometheus-node-exporter-j4k4c 1/1 Running 10 28h
monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 10 28h
kube-system svclb-traefik-zkqd6 2/2 Running 6 19h
monitoring prometheus-prometheus-node-exporter-bft5t 1/1 Running 10 28h
kube-system local-path-provisioner-5ff76fc89d-g8tm6 1/1 Running 12 5d21h
kube-system svclb-traefik-jcxd2 2/2 Running 28 5d21h
kube-system svclb-traefik-mpbjm 2/2 Running 22 5d21h
kube-system svclb-traefik-7kxtw 2/2 Running 20 5d21h
monitoring prometheus-grafana-864598fd54-9548l 2/2 Running 10 28h
kube-system traefik-65969d48c7-9lh9m 1/1 Running 3 19h
monitoring prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 10 28h
monitoring prometheus-kube-state-metrics-76f66976cb-m8k2h 1/1 Running 6 28h
monitoring prometheus-kube-prometheus-operator-5c758db547-zsv4s 1/1 Running 6 28h
The services are all there:
$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5d21h
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 5d21h
kube-system metrics-server ClusterIP 10.43.80.65 <none> 443/TCP 5d21h
kube-system prometheus-kube-prometheus-kube-proxy ClusterIP None <none> 10249/TCP 28h
kube-system prometheus-kube-prometheus-kube-scheduler ClusterIP None <none> 10251/TCP 28h
monitoring prometheus-kube-prometheus-operator ClusterIP 10.43.180.73 <none> 443/TCP 28h
kube-system prometheus-kube-prometheus-coredns ClusterIP None <none> 9153/TCP 28h
kube-system prometheus-kube-prometheus-kube-etcd ClusterIP None <none> 2379/TCP 28h
kube-system prometheus-kube-prometheus-kube-controller-manager ClusterIP None <none> 10252/TCP 28h
monitoring prometheus-kube-prometheus-alertmanager ClusterIP 10.43.195.99 <none> 9093/TCP 28h
monitoring prometheus-prometheus-node-exporter ClusterIP 10.43.171.218 <none> 9100/TCP 28h
monitoring prometheus-grafana ClusterIP 10.43.20.165 <none> 80/TCP 28h
monitoring prometheus-kube-prometheus-prometheus ClusterIP 10.43.207.29 <none> 9090/TCP 28h
monitoring prometheus-kube-state-metrics ClusterIP 10.43.229.14 <none> 8080/TCP 28h
kube-system prometheus-kube-prometheus-kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 28h
monitoring alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 28h
monitoring prometheus-operated ClusterIP None <none> 9090/TCP 28h
kube-system traefik LoadBalancer 10.43.20.17 192.168.76.200,192.168.76.201,192.168.76.202,192.168.76.203 80:31131/TCP,443:31562/TCP 5d21h
Namespaces:
$ kubectl get namespaces
NAME STATUS AGE
kube-system Active 5d21h
default Active 5d21h
kube-public Active 5d21h
kube-node-lease Active 5d21h
monitoring Active 28h
But I couldn't reach the Grafana service.
Fair enough, I thought, let's define an Ingress. But it didn't work:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: prometheus-grafana
            port:
              number: 80
I have no idea why it isn't getting to the service, and I can't really see where the problem is. Although I understand containers etc. (I first had everything running on Docker Swarm), I don't really know where, if anywhere, this would show up in the logs.
I've spent the past couple of days trying all sorts of things, and I finally found a hint about namespaces, problems calling services, and something called "type: ExternalName".
I checked with curl from a pod inside the cluster, and the service is delivering the data inside the "monitoring" namespace, but Traefik can't get there, or maybe can't even see it?
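An in-cluster check like that can be reproduced along these lines (the curlimages/curl image and the temporary pod name are illustrative choices):
kubectl run -n monitoring curl-test --rm -it --restart=Never \
    --image=curlimages/curl -- \
    curl -sI http://prometheus-grafana.monitoring.svc.cluster.local:80/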
Having looked at the Traefik documentation, I found this regarding namespaces, but I have no idea where I would even start to find the configuration mentioned:
providers:
  kubernetesCRD:
    namespaces:
I'm assuming that k3s has set this up correctly as an empty array, because I can't find anything on their site that tells me what to do with their combination of "klipper-lb" and "traefik".
I finally tried to define another Service with an ExternalName:
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-named
  namespace: kube-system
spec:
  type: ExternalName
  externalName: prometheus-grafana.monitoring.svc.cluster.local
  ports:
  - name: service
    protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: grafana-named
            port:
              number: 80
After 2-3 days I've tried everything I can think of and googled everything under the sun, and I still can't get to Grafana from outside of the internal cluster nodes.
I am at a loss as to how I can make anything work with k3s. I installed Lens on my main PC and can see almost everything there, but I think that the missing metrics information requires an Ingress or something like that too.
What do I have to do to get Traefik to do what I think is basically its job: route incoming requests to the backend services?
I filed a bug report on GitHub and one of the people there (thanks again, brandond) pointed me in the right direction.
The network layer uses flannel for the in-cluster networking. The default backend for that is "vxlan", which is seemingly more complex and uses virtual Ethernet adapters.
For my requirements (read: getting the cluster to even work), the solution was to change the backend to "host-gw".
This is done by adding "--flannel-backend=host-gw" to the k3s.service options on the controller:
$ sudo systemctl edit k3s.service
### Editing /etc/systemd/system/k3s.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file
[Service]
ExecStart=
ExecStart=/usr/local/bin/k3s \
server \
'--flannel-backend=host-gw'
### Lines below this comment will be discarded
The first "ExecStart=" clears the existing default start command to enable it to be replaced by the 2nd one.
Now everything is working as I expected, and I can finally move forward with learning K8s.
I'll probably reactivate "vxlan" at some point and figure that out too.

Kubernetes high latency access svc ip on other nodes BUT works well in nodePort

My k8s env:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master01 Ready master 46h v1.18.0 172.18.90.100 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://19.3.8
k8s-node01 Ready <none> 46h v1.18.0 172.18.90.111 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://19.3.8
kube-system:
kubectl get pod -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-66bff467f8-9dg27 1/1 Running 0 16h 10.244.1.62 k8s-node01 <none> <none>
coredns-66bff467f8-blgch 1/1 Running 0 16h 10.244.0.5 k8s-master01 <none> <none>
etcd-k8s-master01 1/1 Running 0 46h 172.19.90.189 k8s-master01 <none> <none>
kube-apiserver-k8s-master01 1/1 Running 0 46h 172.19.90.189 k8s-master01 <none> <none>
kube-controller-manager-k8s-master01 1/1 Running 0 46h 172.19.90.189 k8s-master01 <none> <none>
kube-flannel-ds-amd64-scgkt 1/1 Running 0 17h 172.19.90.194 k8s-node01 <none> <none>
kube-flannel-ds-amd64-z6fk9 1/1 Running 0 44h 172.19.90.189 k8s-master01 <none> <none>
kube-proxy-8pbmz 1/1 Running 0 16h 172.19.90.194 k8s-node01 <none> <none>
kube-proxy-sgpds 1/1 Running 0 16h 172.19.90.189 k8s-master01 <none> <none>
kube-scheduler-k8s-master01 1/1 Running 0 46h 172.19.90.189 k8s-master01 <none> <none>
My Deployment and Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 3
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9376
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: hostnames
spec:
  selector:
    app: hostnames
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 9376
My svc info:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames ClusterIP 10.106.24.115 <none> 80/TCP 42m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 46h
The problem:
When I curl 10.106.24.115 on k8s-master01, the response has a high delay of about a minute, but I get a response right away on k8s-node01.
I edited my svc and changed ClusterIP to NodePort:
kubectl edit svc hostnames
spec:
  clusterIP: 10.106.24.115
  ports:
  - name: default
    port: 80
    protocol: TCP
    targetPort: 9376
    nodePort: 30888
  selector:
    app: hostnames
  sessionAffinity: None
  type: NodePort
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames NodePort 10.106.24.115 <none> 80:30888/TCP 64m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 46h
Now I curl each node with nodeIP:30888. It works well and responds right away. Why is there a high delay when I access the service through the ClusterIP from the other node? I also have another k8s cluster, which has no such problem. I also get the same delayed response using curl 127.0.0.1:30555 on k8s-master01. So weird!
There are no errors in my kube-controller-manager:
'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-mbh4k
I0330 09:11:20.953439 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"df14e2c6-faf1-4f6a-8b97-8d519b390c73", APIVersion:"apps/v1", ResourceVersion:"986", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7pd8r
I0330 09:11:36.488237 1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-dns", UID:"f42d9cbc-c757-48f0-96a4-d15f75082a88", APIVersion:"v1", ResourceVersion:"250956", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
I0330 09:11:44.753349 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"df14e2c6-faf1-4f6a-8b97-8d519b390c73", APIVersion:"apps/v1", ResourceVersion:"250936", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-z7fps
I0330 09:12:46.690043 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-flannel-ds-amd64", UID:"12cda6e4-fd07-4328-887d-6dd9ca8a86d7", APIVersion:"apps/v1", ResourceVersion:"251183", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-flannel-ds-amd64-scgkt
I0330 09:19:35.915568 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"df14e2c6-faf1-4f6a-8b97-8d519b390c73", APIVersion:"apps/v1", ResourceVersion:"251982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-9dg27
I0330 09:19:42.808373 1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-dns", UID:"f42d9cbc-c757-48f0-96a4-d15f75082a88", APIVersion:"v1", ResourceVersion:"252221", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
I0330 09:19:52.606633 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"df14e2c6-faf1-4f6a-8b97-8d519b390c73", APIVersion:"apps/v1", ResourceVersion:"252222", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-blgch
I0330 09:20:36.488412 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"33fa53f5-2240-4020-9b1f-14025bb3ab0b", APIVersion:"apps/v1", ResourceVersion:"252365", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-sgpds
I0330 09:20:46.686463 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"33fa53f5-2240-4020-9b1f-14025bb3ab0b", APIVersion:"apps/v1", ResourceVersion:"252416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-8pbmz
I0330 09:24:31.015395 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hostnames", UID:"b54625e7-6f84-400a-9048-acd4a9207d86", APIVersion:"apps/v1", ResourceVersion:"252991", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hostnames-68b5ff98ff to 3
I0330 09:24:31.020097 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"5b4bba3e-e15e-45a6-b33e-055cdb1beca4", APIVersion:"apps/v1", ResourceVersion:"252992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-gzvxb
I0330 09:24:31.024513 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"5b4bba3e-e15e-45a6-b33e-055cdb1beca4", APIVersion:"apps/v1", ResourceVersion:"252992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-kl29m
I0330 09:24:31.024538 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"5b4bba3e-e15e-45a6-b33e-055cdb1beca4", APIVersion:"apps/v1", ResourceVersion:"252992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-czrqx
I0331 00:56:33.245614 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hostnames", UID:"10e9b06c-9e0c-4303-aff9-9ec03f5c5919", APIVersion:"apps/v1", ResourceVersion:"381792", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hostnames-68b5ff98ff to 3
I0331 00:56:33.251743 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"aaa4d5ac-b7f4-4bcb-b6ea-959ecee00e0e", APIVersion:"apps/v1", ResourceVersion:"381793", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-7z4bb
I0331 00:56:33.256083 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"aaa4d5ac-b7f4-4bcb-b6ea-959ecee00e0e", APIVersion:"apps/v1", ResourceVersion:"381793", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-2zwxf
I0331 00:56:33.256171 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"aaa4d5ac-b7f4-4bcb-b6ea-959ecee00e0e", APIVersion:"apps/v1", ResourceVersion:"381793", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-x289b
The output of describe ep kube-dns:
kubectl describe ep kube-dns --namespace=kube-system
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2020-03-31T04:27:42Z
Subsets:
Addresses: 10.244.0.2,10.244.0.3
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
dns-tcp 53 TCP
metrics 9153 TCP
dns 53 UDP
Events: <none>
Based on the information that you provided, there are a couple of things that can be checked/done:
Your kube-controller-manager reports an error with endpoints:
Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
Going further, you may also notice that your kube-dns endpoints do not match your CoreDNS IP addresses.
This could be caused by a previous kubeadm installation that was not entirely cleaned up and did not remove the cni and flannel interfaces.
I would check for any virtual NICs created by flannel during the previous installation. You can list them using the ip link command and then delete them:
ip link delete cni0
ip link delete flannel.1
Alternatively, use the brctl command (brctl delbr cni0).
Please also note that you reported initializing the cluster with 10.244.0.0/16, but I can see that your system pods are running with a different subnet (except the CoreDNS pods, which have the correct one). All the system pods should have the same pod subnet that you specified using the --pod-network-cidr flag, and your pod network must not overlap with any of the host networks. Given that your system pods have the same subnet as the host, this may also be the reason for the problem.
The second thing is to check iptables-save on the master and the worker. You reported that using NodePort you don't experience the latency. I would assume that is because, with NodePort, you are bypassing the flannel networking and going straight to the pod that is running on the worker (I can see that you have only one). This also indicates an issue with the CNI.
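A concrete way to do that second check, given the service IP from the question, would be something like:
# on both master and worker: what kube-proxy programmed for the ClusterIP
sudo iptables-save | grep 10.106.24.115
# and whether the flannel/cni interfaces look sane
ip -d link show flannel.1
ip addr show cni0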

Jenkins app is not accessible outside Kubernetes cluster

On CentOS 7.4, I have set up a Kubernetes master node, pulled down the Jenkins image, and deployed it to the cluster, defining the Jenkins service on a NodePort as below.
I can curl the Jenkins app from the worker or master nodes using the IP defined by the service. But I cannot access the Jenkins app (dashboard) from my browser (outside the cluster) using the public IP of the master node.
[administrator@abcdefgh ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
abcdefgh Ready master 19h v1.13.1
hgfedcba Ready <none> 19h v1.13.1
[administrator@abcdefgh ~]$ sudo docker pull jenkinsci/jenkins:2.154-alpine
[administrator@abcdefgh ~]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.13.1 fdb321fd30a0 5 days ago 80.2MB
k8s.gcr.io/kube-controller-manager v1.13.1 26e6f1db2a52 5 days ago 146MB
k8s.gcr.io/kube-apiserver v1.13.1 40a63db91ef8 5 days ago 181MB
k8s.gcr.io/kube-scheduler v1.13.1 ab81d7360408 5 days ago 79.6MB
jenkinsci/jenkins 2.154-alpine aa25058d8320 2 weeks ago 222MB
k8s.gcr.io/coredns 1.2.6 f59dcacceff4 6 weeks ago 40MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 2 months ago 220MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 10 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 12 months ago 742kB
[administrator@abcdefgh ~]$ ls -l
total 8
-rw------- 1 administrator administrator 678 Dec 18 06:12 jenkins-deployment.yaml
-rw------- 1 administrator administrator 410 Dec 18 06:11 jenkins-service.yaml
[administrator@abcdefgh ~]$ cat jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-ui
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: ui
  selector:
    app: jenkins-master
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-discovery
spec:
  selector:
    app: jenkins-master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: jenkins-slaves
[administrator@abcdefgh ~]$ cat jenkins-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - image: jenkins/jenkins:2.154-alpine
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
        - containerPort: 50000
          name: jnlp-port
        env:
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
[administrator@abcdefgh ~]$ kubectl create -f jenkins-service.yaml
service/jenkins-ui created
service/jenkins-discovery created
[administrator@abcdefgh ~]$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-discovery ClusterIP 10.98.--.-- <none> 50000/TCP 19h
jenkins-ui NodePort 10.97.--.-- <none> 8080:31587/TCP 19h
kubernetes ClusterIP 10.96.--.-- <none> 443/TCP 20h
[administrator@abcdefgh ~]$ kubectl create -f jenkins-deployment.yaml
deployment.extensions/jenkins created
[administrator@abcdefgh ~]$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
jenkins 1/1 1 1 19h
[administrator@abcdefgh ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default jenkins-6497cf9dd4-f9r5b 1/1 Running 0 19h
kube-system coredns-86c58d9df4-jfq5b 1/1 Running 0 20h
kube-system coredns-86c58d9df4-s4k6d 1/1 Running 0 20h
kube-system etcd-abcdefgh 1/1 Running 1 20h
kube-system kube-apiserver-abcdefgh 1/1 Running 1 20h
kube-system kube-controller-manager-abcdefgh 1/1 Running 5 20h
kube-system kube-flannel-ds-amd64-2w68w 1/1 Running 1 20h
kube-system kube-flannel-ds-amd64-6zl4g 1/1 Running 1 20h
kube-system kube-proxy-9r4xt 1/1 Running 1 20h
kube-system kube-proxy-s7fj2 1/1 Running 1 20h
kube-system kube-scheduler-abcdefgh 1/1 Running 8 20h
[administrator@abcdefgh ~]$ kubectl describe pod jenkins-6497cf9dd4-f9r5b
Name: jenkins-6497cf9dd4-f9r5b
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: hgfedcba/10.41.--.--
Start Time: Tue, 18 Dec 2018 06:32:50 -0800
Labels: app=jenkins-master
pod-template-hash=6497cf9dd4
Annotations: <none>
Status: Running
IP: 10.244.--.--
Controlled By: ReplicaSet/jenkins-6497cf9dd4
Containers:
jenkins:
Container ID: docker://55912512a7aa1f782784690b558d74001157f242a164288577a85901ecb5d152
Image: jenkins/jenkins:2.154-alpine
Image ID: docker-pullable://jenkins/jenkins@sha256:b222875a2b788f474db08f5f23f63369b0f94ed7754b8b32ac54b8b4d01a5847
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 18 Dec 2018 07:16:32 -0800
Ready: True
Restart Count: 0
Environment:
JAVA_OPTS: -Djenkins.install.runSetupWizard=false
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wqph5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-wqph5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wqph5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
[administrator@abcdefgh ~]$ kubectl describe svc jenkins-ui
Name: jenkins-ui
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=jenkins-master
Type: NodePort
IP: 10.97.--.--
Port: ui 8080/TCP
TargetPort: 8080/TCP
NodePort: ui 31587/TCP
Endpoints: 10.244.--.--:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# Check if NodePort along with Kubernetes ports are open
[administrator@abcdefgh ~]$ sudo su root
[root@abcdefgh administrator]# systemctl start firewalld
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API Server
Warning: ALREADY_ENABLED: 6443:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
Warning: ALREADY_ENABLED: 2379-2380:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
Warning: ALREADY_ENABLED: 10250:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler
Warning: ALREADY_ENABLED: 10251:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager
Warning: ALREADY_ENABLED: 10252:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10255/tcp # Read-Only Kubelet API
Warning: ALREADY_ENABLED: 10255:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=31587/tcp # NodePort of jenkins-ui service
Warning: ALREADY_ENABLED: 31587:tcp
success
[root@abcdefgh administrator]# firewall-cmd --reload
success
[administrator@abcdefgh ~]$ kubectl cluster-info
Kubernetes master is running at https://10.41.--.--:6443
KubeDNS is running at https://10.41.--.--:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[administrator@hgfedcba ~]$ curl 10.41.--.--:8080
curl: (7) Failed connect to 10.41.--.--:8080; Connection refused
# Successfully curl jenkins app using its service IP from the worker node
[administrator@hgfedcba ~]$ curl 10.97.--.--:8080
<!DOCTYPE html><html><head resURL="/static/5882d14a" data-rooturl="" data-resurl="/static/5882d14a">
<title>Dashboard [Jenkins]</title><link rel="stylesheet" ...
...
Would you know how to fix this? Happy to provide additional logs. Also, I have installed Jenkins from yum on another similar machine, without any Docker or Kubernetes, and it's possible to access it through 10.20.30.40:8080 in my browser, so there is no provider firewall preventing me from doing that.
Your Jenkins Service is of type NodePort. That means that a specific port number, on any node within your cluster, will deliver your Jenkins UI.
When you described your Service, you can see that the port assigned was 31587.
You should be able to browse to http://SOME_IP:31587
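A quick sketch of how to confirm that from outside the cluster (the placeholder is whatever public address your master or worker node has):
# read the assigned NodePort straight from the Service
kubectl get svc jenkins-ui -o jsonpath='{.spec.ports[0].nodePort}'
# then browse or curl any node's public IP on that port
curl http://<node_public_ip>:31587/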

nginx to upstream headless service gets connection refused but I can curl from within the webapp container

I am using microk8s and have an nginx frontend service connecting to a headless web application (ClusterIP = None). However, the nginx service is refused a connection to the backend service.
nginx configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.config: |
    user nginx;
    worker_processes auto;
    # set open fd limit to 30000
    #worker_rlimit_nofile 10000;
    error_log /var/log/nginx/error.log;
    events {
      worker_connections 10240;
    }
    http {
      log_format main
        'remote_addr:$remote_addr\t'
        'time_local:$time_local\t'
        'method:$request_method\t'
        'uri:$request_uri\t'
        'host:$host\t'
        'status:$status\t'
        'bytes_sent:$body_bytes_sent\t'
        'referer:$http_referer\t'
        'useragent:$http_user_agent\t'
        'forwardedfor:$http_x_forwarded_for\t'
        'request_time:$request_time';
      access_log /var/log/nginx/access.log main;
      rewrite_log on;
      upstream svc-web {
        server localhost:8080;
        keepalive 1024;
      }
      server {
        listen 80;
        access_log /var/log/nginx/app.access_log main;
        error_log /var/log/nginx/app.error_log;
        location / {
          proxy_pass http://svc-web;
          proxy_http_version 1.1;
        }
      }
    }
$ k get all
NAME READY STATUS RESTARTS AGE
pod/blazegraph-0 1/1 Running 0 19h
pod/default-http-backend-587b7d64b5-c4rzj 1/1 Running 0 19h
pod/mysql-0 1/1 Running 0 19h
pod/nginx-7fdcdfcc7d-nlqc2 1/1 Running 0 12s
pod/nginx-ingress-microk8s-controller-b9xcd 1/1 Running 0 19h
pod/web-0 1/1 Running 0 13s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP 10.152.183.94 <none> 80/TCP 19h
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 22h
service/svc-db ClusterIP None <none> 3306/TCP,9999/TCP 19h
service/svc-frontend NodePort 10.152.183.220 <none> 80:32282/TCP,443:31968/TCP 12s
service/svc-web ClusterIP None <none> 8080/TCP,8443/TCP 15s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 19h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1 1 1 1 19h
deployment.apps/nginx 1 1 1 1 12s
NAME DESIRED CURRENT READY AGE
replicaset.apps/default-http-backend-587b7d64b5 1 1 1 19h
replicaset.apps/nginx-7fdcdfcc7d 1 1 1 12s
NAME DESIRED CURRENT AGE
statefulset.apps/blazegraph 1 1 19h
statefulset.apps/mysql 1 1 19h
statefulset.apps/web 1 1 15s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/istio-pilot Deployment/istio-pilot <unknown>/55% 1 1 0 19h
$ k describe pod web-0
Name: web-0
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: khteh-t580/192.168.86.93
Start Time: Fri, 30 Nov 2018 09:19:53 +0800
Labels: app=app-web
controller-revision-hash=web-5b9476f774
statefulset.kubernetes.io/pod-name=web-0
Annotations: <none>
Status: Running
IP: 10.1.1.203
Controlled By: StatefulSet/web
Containers:
web-service:
Container ID: docker://b5c68ba1d9466c352af107df69f84608aaf233d117a9d71ad307236d10aec03a
Image: khteh/tomcat:tomcat-webapi
Image ID: docker-pullable://khteh/tomcat@sha256:c246d322872ab315948f6f2861879937642a4f3e631f75e00c811afab7f4fbb9
Ports: 8080/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Fri, 30 Nov 2018 09:20:02 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/web/html from web-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s6bpp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
web-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: web-persistent-storage-web-0
ReadOnly: false
default-token-s6bpp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s6bpp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned default/web-0 to khteh-t580
Normal Pulling 11m kubelet, khteh-t580 pulling image "khteh/tomcat:tomcat-webapi"
Normal Pulled 11m kubelet, khteh-t580 Successfully pulled image "khteh/tomcat:tomcat-webapi"
Normal Created 11m kubelet, khteh-t580 Created container
Normal Started 11m kubelet, khteh-t580 Started container
$ k describe svc svc-frontend
Name: svc-frontend
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"svc-frontend","namespace":"default"},"spec":{"ports":[{"name":"ht...
Selector: app=nginx,tier=frontend
Type: NodePort
IP: 10.152.183.159
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30879/TCP
Endpoints: 10.1.1.204:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31929/TCP
Endpoints: 10.1.1.204:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
curl <nodePortIP>:32282/webapi/greeting hangs.
curl <pod IP>:8080/webapi/greeting WORKS.
curl <endpoint IP>:80/webapi/greeting results in "502 Bad Gateway":
$ curl http://10.1.1.204/webapi/greeting
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.15.7</center>
</body>
</html>
Inside the nginx container:
root@nginx-7fdcdfcc7d-nlqc2:/var/log/nginx# tail -f app.error_log
2018/11/24 08:17:04 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.1.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost:32282"
2018/11/24 08:17:04 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.1.1, server: , request: "GET / HTTP/1.1", upstream: "http://[::1]:8080/", host: "localhost:32282"
$ k get endpoints
NAME ENDPOINTS AGE
default-http-backend 10.1.1.246:80 6d20h
kubernetes 192.168.86.93:6443 6d22h
svc-db 10.1.1.248:9999,10.1.1.253:9999,10.1.1.248:3306 + 1 more... 5h48m
svc-frontend 10.1.1.242:80,10.1.1.242:443 6h13m
svc-web 10.1.1.245:8443,10.1.1.245:8080 6h13m
khteh@khteh-T580:/usr/src/kubernetes/cluster1 2950 $ curl 10.1.1.242:80/webapi/greeting
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.15.7</center>
</body>
</html>
khteh@khteh-T580:/usr/src/kubernetes/cluster1 2951 $
Fix the upstream configuration by using the name of the upstream service, and curl using http://clusterip/...
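In other words, the upstream block in the nginx ConfigMap would point at the Service's DNS name instead of localhost, roughly like this (assuming the default namespace):
upstream svc-web {
    # resolve the headless Service by name; with ClusterIP=None this
    # returns the pod address(es) directly
    server svc-web.default.svc.cluster.local:8080;
    keepalive 1024;
}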