How to access the Kubernetes MicroK8s dashboard remotely without Ingress?

I am new to Kubernetes and I am trying to deploy a MicroK8s cluster on 4 Raspberry Pis.
I have been struggling with setting up the dashboard for (no joke) about 30 hours now and am starting to get extremely frustrated.
I just cannot access the dashboard remotely.
Solutions that didn't work out:
No. 1 Ingress:
I managed to enable ingress, but it seems to be extremely complicated to connect it to the dashboard, since I manually have to resolve DNS entries inside pods and on the host machines.
I eventually gave up on that. There is also no documentation whatsoever on how to set up an ingress without a valid purchased domain pointing at your ingress node.
If you are able to guide me through this, I am up for it.
No. 2 Change the dashboard's service type to LoadBalancer or NodePort:
With this method I can actually expose the dashboard... but it can only be accessed through HTTPS. Since the dashboard seems to use self-signed certificates or some other mechanism, I cannot access it via a browser: the browsers (Chrome, Firefox) always refuse to connect to the dashboard. When I try to access it via HTTP, the browsers say I need to use HTTPS.
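For reference, changing the service type can be done with a one-line patch (a sketch; this is standard kubectl patch syntax against the service shown further below):
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'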
No. 3 kube-proxy:
This only allows localhost connections. You can pass arguments to kubectl proxy to allow other hosts to access the dashboard... but then again we have the HTTPS/HTTP problem.
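The arguments in question look roughly like this (a sketch; note this exposes the proxy beyond localhost, and the dashboard is then served under the standard apiserver service-proxy path):
kubectl proxy --address='0.0.0.0' --accept-hosts='^.*$' --port=8001
# http://<node-ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/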
At this point it is just amazing to me how extremely hard it is to access this simple dashboard... Can anybody give any advice on how to access it?
a@k8s-node-1:~/kubernetes$ kctl describe service kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.249
IPs: 10.152.183.249
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32228/TCP
Endpoints: 10.1.140.67:8443
Session Affinity: None
External Traffic Policy: Cluster
$ kubectl edit svc -n kube-system kubernetes-dashboard
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s>
  creationTimestamp: "2022-03-21T14:30:10Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "43060"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: fcb45ccc-070b-4a4d-b987-41f5b7777559
spec:
  clusterIP: 10.152.183.249
  clusterIPs:
  - 10.152.183.249
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32228
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
a@k8s-node-1:~/kubernetes$ kctl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metrics-server ClusterIP 10.152.183.233 <none> 443/TCP 165m
kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 142m
dashboard-metrics-scraper ClusterIP 10.152.183.202 <none> 8000/TCP 32m
kubernetes-dashboard NodePort 10.152.183.249 <none> 443:32228/TCP 32m
a@k8s-node-1:~/kubernetes$ cat dashboard-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - host: nonexistent.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 8080
a@k8s-node-1:~/kubernetes$ kctl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-node-c4shb 1/1 Running 0 3h23m 192.168.180.47 k8s-node-2 <none> <none>
ingress nginx-ingress-microk8s-controller-nvcvx 1/1 Running 0 3h12m 10.1.140.66 k8s-node-2 <none> <none>
kube-system calico-node-ptwmk 1/1 Running 0 3h23m 192.168.180.48 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-hksg7 1/1 Running 0 3h12m 10.1.55.131 k8s-node-4 <none> <none>
ingress nginx-ingress-microk8s-controller-tk9dj 1/1 Running 0 3h12m 10.1.76.129 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-c8t54 1/1 Running 0 3h12m 10.1.109.66 k8s-node-1 <none> <none>
kube-system calico-node-k65fz 1/1 Running 0 3h22m 192.168.180.52 k8s-node-4 <none> <none>
kube-system coredns-64c6478b6c-584s8 1/1 Running 0 177m 10.1.109.67 k8s-node-1 <none> <none>
kube-system calico-kube-controllers-6966456d6b-vvnm6 1/1 Running 0 3h24m 10.1.109.65 k8s-node-1 <none> <none>
kube-system calico-node-7jhz9 1/1 Running 0 3h33m 192.168.180.46 k8s-node-1 <none> <none>
kube-system metrics-server-647bdc584d-ldf8q 1/1 Running 1 (3h19m ago) 3h20m 10.1.55.129 k8s-node-4 <none> <none>
kube-system kubernetes-dashboard-585bdb5648-8s9xt 1/1 Running 0 67m 10.1.140.67 k8s-node-2 <none> <none>
kube-system dashboard-metrics-scraper-69d9497b54-x7vt9 1/1 Running 0 67m 10.1.55.132 k8s-node-4 <none> <none>

Using an Ingress is indeed the preferred way, but since you seem to be having trouble with it in your environment, you can use a LoadBalancer service instead.
To avoid the problem with the automatically generated certificates, provide your own certificate and private key to the dashboard, for example as a secret, and use the flags --tls-key-file and --tls-cert-file to point at them. More details: https://github.com/kubernetes/dashboard/blob/master/docs/user/certificate-management.md
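A rough sketch of that approach, following the linked certificate-management doc (the secret name and flag usage below assume the stock kubernetes-dashboard manifests and may differ in the MicroK8s addon; file names are illustrative):
# put your own cert/key into the secret the dashboard mounts
kubectl -n kube-system create secret generic kubernetes-dashboard-certs \
  --from-file=dashboard.crt --from-file=dashboard.key
# then add the flags to the dashboard container args in its Deployment:
#   --tls-cert-file=dashboard.crt
#   --tls-key-file=dashboard.key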

Related

unable to access nodeIP:port, serviceIP:port or podIP:port in minikube k8s

I am using k8s in minikube under Ubuntu and deployed an nginx server, which I want to access at different levels, e.g. via the service IP, node IP, or pod IP, but none of them is reachable. Not sure why. I am running my curl commands to access ip:port from the Ubuntu host machine where the minikube node is installed. Below is the log:
/home/ravi/k8s>kgp
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-deployment-775bf4d7fb-jqxxv 1/1 Running 0 13m 172.17.0.3 minikube <none> <none>
kube-system coredns-66bff467f8-gtsl7 1/1 Running 0 9h 172.17.0.2 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-proxy-nphlc 1/1 Running 0 7h28m 192.168.49.2 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 21 9h 192.168.49.2 minikube <none> <none>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>kgs
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9h <none>
default nginx-service NodePort 10.101.107.62 <none> 80:31000/TCP 13m app=nginx-app
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 9h k8s-app=kube-dns
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>kubectl describe service nginx-service
Name: nginx-service
Namespace: default
Labels: <none>
Annotations: Selector: app=nginx-app
Type: NodePort
IP: 10.101.107.62
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31000/TCP
Endpoints: 172.17.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 172.17.0.3:8000
curl: (7) Failed to connect to 172.17.0.3 port 8000: No route to host
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 192.168.1.52:31000
curl: (7) Failed to connect to 192.168.1.52 port 31000: Connection refused
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 10.101.107.62:80 ---> also hangs
......
......
/home/ravi/k8s>
/home/ravi/k8s>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 9h v1.18.20 192.168.49.2 <none> Ubuntu 20.04.1 LTS 5.13.0-40-generic docker://20.10.3
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 192.168.49.2:31000
curl: (7) Failed to connect to 192.168.49.2 port 31000: Connection refused
/home/ravi/k8s>
/home/ravi/k8s> kubectl logs nginx-deployment-775bf4d7fb-jqxxv ---> no log shown
/home/ravi/k8s>cat 2_nginx_nodeport.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.16
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    nodePort: 31000
    port: 80
    targetPort: 8000
/home/ravi/k8s>
root@nginx-deployment-775bf4d7fb-jqxxv:~# curl 172.17.0.3:80 ---> working on port 80 instead of 8000 as set in yaml
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</body>
</html>

404 Not Found error after configuring the Nginx Ingress Controller

UPDATE:
The issue persists, but I used another way (a sub-domain name instead of the path) to bypass it:
ubuntu@df1:~$ cat k8s-dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - dashboard.XXXX
    secretName: df1-tls
  rules:
  - host: dashboard.XXXX
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
This error has been bothering me for some time, and I hope that with your help I can get to the bottom of it.
I have one K8S cluster (single node so far, to avoid any network related issues). I installed Grafana on it.
All pods are running fine:
ubuntu:~$ k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default grafana-646c8874cb-h6tc5 1/1 Running 0 11h
default nginx-1-7bdc99b884-xh7kl 1/1 Running 0 36h
kube-system coredns-64897985d-4sk6l 1/1 Running 0 2d16h
kube-system coredns-64897985d-dx5h6 1/1 Running 0 2d16h
kube-system etcd-df1 1/1 Running 1 3d14h
kube-system kilo-kb52f 1/1 Running 0 2d16h
kube-system kube-apiserver-df1 1/1 Running 1 3d14h
kube-system kube-controller-manager-df1 1/1 Running 4 3d14h
kube-system kube-flannel-ds-fjkxv 1/1 Running 0 3d13h
kube-system kube-proxy-bd2xt 1/1 Running 0 3d14h
kube-system kube-scheduler-df1 1/1 Running 10 3d14h
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-5skdw 1/1 Running 0 2d16h
kubernetes-dashboard kubernetes-dashboard-6b6b86c4c5-56zp2 1/1 Running 0 2d16h
nginx-ingress nginx-ingress-5b467c7d7-qtqtq 1/1 Running 0 2d15h
As you can see, I installed the nginx ingress controller.
Here is the ingress:
ubuntu:~$ k describe ing grafana
Name: grafana
Labels: app.kubernetes.io/instance=grafana
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=grafana
app.kubernetes.io/version=8.3.3
helm.sh/chart=grafana-6.20.5
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
kalepa.k8s.io
/grafana grafana:80 (10.244.0.14:3000)
Annotations: meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: default
Events: <none>
Here is the service that is referenced in the above ingress:
ubuntu:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.96.148.1 <none> 80/TCP 11h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d14h
If I do a curl to the cluster ip of the service, it goes through without an issue:
ubuntu:~$ curl 10.96.148.1
Found.
If I do a curl to the hostname with the path to the service, I get a 404 error:
ubuntu:~$ curl kalepa.k8s.io/grafana
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
The hostname is resolved to the cluster ip of the nginx ingress service (nodeport):
ubuntu:~$ grep kalepa.k8s.io /etc/hosts
10.96.241.112 kalepa.k8s.io
This is the nginx ingress service definition:
ubuntu:~$ k describe -n nginx-ingress svc nginx-ingress
Name: nginx-ingress
Namespace: nginx-ingress
Labels: <none>
Annotations: <none>
Selector: app=nginx-ingress
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.241.112
IPs: 10.96.241.112
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31803/TCP
Endpoints: 10.244.0.6:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31913/TCP
Endpoints: 10.244.0.6:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What am I missing? Thanks for your help!
This is happening because you are using /grafana and this path does not exist in the grafana application - hence the 404. You need to configure grafana to use this context path before you can forward your traffic to /grafana.
If you use / as the path, it will work. That's why curl 10.96.148.1 works: you are not adding the /grafana route there. But most likely that path is already used by some other service, which is why you were using /grafana to begin with.
Therefore, you need to update your grafana.ini file to set the context root explicitly, as shown below.
You can put your grafana.ini in a ConfigMap, mount it over the original grafana.ini location, and recreate the deployment.
[server]
domain = kalepa.k8s.io
root_url = http://kalepa.k8s.io/grafana/
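A minimal sketch of the ConfigMap approach (assuming the stock image path /etc/grafana/grafana.ini; the ConfigMap name is illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    domain = kalepa.k8s.io
    root_url = http://kalepa.k8s.io/grafana/
Mount it in the grafana Deployment with a configMap volume and a volumeMount at /etc/grafana/grafana.ini (subPath: grafana.ini), then restart the pod.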
I can see there is no ingressClassName specified for your ingress. It should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - kalepa.k8s.io
    secretName: secret_name
  rules:
  - host: kalepa.k8s.io
    http:
      paths:
      ...

Ambassador responds with "no healthy upstream"

I have a simple k3s cluster with the Ambassador ingress controller installed as per the docs
When I try to access the service through my browser, I just get a "no healthy upstream" message.
These are my configs:
$ kubectl describe svc web-test-service
Name: web-test-service
Namespace: default
Labels: app=web-test
Annotations: Selector: app=web-test
Type: ClusterIP
IP: 10.43.109.123
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.42.1.19:8080
Session Affinity: None
Events: <none>
$ kubectl describe svc ambassador
Name: ambassador
Namespace: default
Labels: app.kubernetes.io/component=ambassador-service
Annotations: Selector: service=ambassador
Type: LoadBalancer
IP: 10.43.12.194
LoadBalancer Ingress: 10.136.64.114
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30005/TCP
Endpoints: 10.42.0.10:8080,10.42.1.28:8080,10.42.1.29:8080
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30928
Events: <none>
$ kubectl get po
NAME READY STATUS RESTARTS AGE
web-test-5594bffd47-8pzdk 1/1 Running 0 175m
svclb-ambassador-p5rr7 1/1 Running 0 24m
svclb-ambassador-k4j52 1/1 Running 0 24m
ambassador-58b444b8-tqjkk 1/1 Running 0 24m
ambassador-58b444b8-b9x7v 1/1 Running 0 24m
ambassador-58b444b8-wfclj 1/1 Running 0 24m
I've checked the service logs and the application is up and running and listening on port 8080.

Kubernetes Unable to Access pods

I have one master and one worker node, both up and running, and I deployed an Angular application in my k8s cluster. When I inspect my pod logs, everything is working fine without any errors.
I am trying to access the application in a browser using the master and worker IP addresses followed by the node port, like below, but I get an "unable to connect" error.
http://10.0.0.1:32394/
Name: frontend-app-6848bc9666-9ggz7
Namespace: pre-release
Priority: 0
Node: SBT-poc-worker2/10.0.0.5
Start Time: Fri, 17 Jan 2020 05:04:10 +0000
Labels: app=frontend-app
pod-template-hash=6848bc9666
Annotations: <none>
Status: Running
IP: 10.32.0.3
IPs:
IP: 10.32.0.3
Controlled By: ReplicaSet/frontend-app-6848bc9666
Containers:
frontend-app:
Container ID: docker://292199347e391c9feecd667e1668f32931f1fd7c670514eb1e05e4a37b8109ad
Image: frontend-app:future-master-fix-7ba35fbe
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 17 Jan 2020 05:04:15 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 250m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m44s default-scheduler Successfully assigned pre-release/frontend-app-6848bc9666-9ggz7 to SBT-poc-worker2
Normal Pulled 5m41s kubelet, SBT-poc-worker2 Container image "frontend-app:future-master-fix-7ba35fbe" already present on machine
Normal Created 5m39s kubelet, SBT-poc-worker2 Created container frontend-app
Normal Started 5m39s kubelet, SBT-poc-worker2 Started container frontend-app
root@jenkins-linux-vm:/home/SBT-admin# kubectl get pods -n pre-release
NAME READY STATUS RESTARTS AGE
frontend-app-6848bc9666-9ggz7 1/1 Running 0 7m26s
root@jenkins-linux-vm:/home/SBT-admin# kubectl get services -n pre-release
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend-app NodePort 10.96.6.77 <none> 8080:32394/TCP 7m36s
root@jenkins-linux-vm:/home/SBT-admin# kubectl get deployment -n pre-release
NAME READY UP-TO-DATE AVAILABLE AGE
frontend-app 1/1 1 1 11m
root@jenkins-linux-vm:/home/SBT-admin# kubectl get -o yaml -n pre-release svc frontend-app
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"frontend-app"},"name":"frontend-app","namespace":"pre-release"},"spec":{"ports":[{"port":8080,"targetPort":8080}],"selector":{"name":"frontend-app"},"type":"NodePort"}}
  creationTimestamp: "2020-01-17T05:04:10Z"
  labels:
    name: frontend-app
  name: frontend-app
  namespace: pre-release
  resourceVersion: "1972713"
  selfLink: /api/v1/namespaces/pre-release/services/frontend-app
  uid: 91b87f9e-d723-498c-af05-5969645a82ee
spec:
  clusterIP: 10.96.6.77
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32394
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    name: frontend-app
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
root@jenkins-linux-vm:/home/SBT-admin# kubectl get pods --selector="app=frontend-app" --output=wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
frontend-app-7c7cf68f9c-n9tct 1/1 Running 0 58m 10.32.0.5 SBT-poc-worker2 <none> <none>
root@jenkins-linux-vm:/home/SBT-admin# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-app-7c7cf68f9c-n9tct 1/1 Running 0 58m
root@jenkins-linux-vm:/home/SBT-admin# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend-app NodePort 10.96.21.202 <none> 8080:31098/TCP 59m
root@jenkins-linux-vm:/home/SBT-admin# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
frontend-app 1/1 1 1 59m
Can someone please help me fix this?
The label on the pod is app=frontend-app, as seen in your problem statement.
Your pod description shows the label below:
Name: frontend-app-6848bc9666-9ggz7
Namespace: pre-release
Priority: 0
Node: SBT-poc-worker2/10.0.0.5
Start Time: Fri, 17 Jan 2020 05:04:10 +0000
Labels: app=frontend-app
The selector field in your service YAML is name: frontend-app; you should change this to app: frontend-app and re-apply the service.
Your current selector value is shown below and does not match the label on the pod:
ports:
- nodePort: 32394
  port: 8080
  protocol: TCP
  targetPort: 8080
selector:
  name: frontend-app
Change it to
selector:
  app: frontend-app
You should first establish that there are no rules blocking the default NodePort range (ports 30000 to 32767) in the security rules or firewall on the cluster network.
For example, verify that a security rule like the one below is open on the cluster network so that the NodePort range is reachable from a browser:
Ingress IPv4 TCP 30000 - 32767 0.0.0.0/0
Once you have confirmed there is no security group rule issue, I would take the approach below to debug port reachability at the node level: perform a basic test and check whether an nginx web server can be deployed and reached in a browser via a NodePort.
Steps:
Deploy an NGINX Deployment using the nginx.yaml below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Verify deployment is up and running
$ kubectl apply -f nginx.yaml
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-75897978cd-ptqv9 1/1 Running 0 32s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 1/1 1 1 33s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-75897978cd 1 1 1 33s
Now create a service to expose the nginx deployment using the example below:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    run: my-nginx
Verify the service is created and identify the NodePort assigned (since we did not set a fixed nodePort in service.yaml, one is allocated automatically; below, the assigned node port is 32502):
$ kubectl apply -f service.yaml
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d11h
my-nginx NodePort 10.96.174.234 <none> 8080:32502/TCP 12s
In addition to the NodePort, identify the IP of your master node, i.e. 131.112.113.101 below:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1 Ready master 4d11h v1.17.0 131.112.113.101 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-1 Ready <none> 4d11h v1.17.0 131.112.113.102 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
node-2 Ready <none> 4d11h v1.17.0 131.112.113.103 <none> Ubuntu 16.04.6 LTS 4.4.0-169-generic docker://18.6.2
Now if you access the nginx application in your browser using the IP of your master node together with the NodePort, i.e. <masternode>:<nodeport> (131.112.113.101:32502 in this example), you should get the default nginx welcome page.
Note the containerPort used in nginx.yaml and the targetPort in service.yaml (i.e. 80); you should be able to work out the equivalent values for your frontend-app. Hope this helps you understand whether there is an issue at the node/cluster level.
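The same check can be done from a shell instead of a browser (a sketch, using the example master IP and NodePort from above):
curl http://131.112.113.101:32502
# expect the "Welcome to nginx!" HTML if the NodePort is reachable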

Accessing service using istio ingress gives 503 error when mTLS is enabled

I have a mutual TLS enabled Istio mesh. My setup is as follows
A service running inside a pod (Service container + envoy)
An envoy gateway which stays in front of the above service. An Istio Gateway and Virtual Service attached to this. It routes /info/ route to the above service.
Another Istio Gateway configured for ingress using the default istio ingress pod. This also has Gateway+Virtual Service combination. The virtual service directs /info/ path to the service described in 2
I'm attempting to access the service from the ingress gateway using a curl command such as:
$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
But I'm getting a 503 Service Unavailable error, as below:
$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.105.138.94...
* Connected to istio-ingressgateway.istio-system (10.105.138.94) port 80 (#0)
> GET /info/ HTTP/1.1
> Host: istio-ingressgateway.istio-system
> User-Agent: curl/7.47.0
> Accept: */*
> Authorization: Bearer ...
>
< HTTP/1.1 503 Service Unavailable
< content-length: 57
< content-type: text/plain
< date: Sat, 12 Jan 2019 13:30:13 GMT
< server: envoy
<
* Connection #0 to host istio-ingressgateway.istio-system left intact
I checked the logs of istio-ingressgateway pod and the following line was logged there
[2019-01-13T05:40:16.517Z] "GET /info/ HTTP/1.1" 503 UH 0 19 6 - "10.244.0.5" "curl/7.47.0" "da02fdce-8bb5-90fe-b422-5c74fe28759b" "istio-ingressgateway.istio-system" "-"
If I log into the istio ingress pod and send the request with curl, I get a successful 200 OK.
# curl hr--gateway-service.default/info/ -H "Authorization: Bearer $token" -v
Also, I managed to get a successful response for the same curl command when the mesh was created with mTLS disabled. There are no conflicts shown in the mTLS setup.
Here are the config details for my service mesh in case you need additional info.
Pods
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hr--gateway-deployment-688986c87c-z9nkh 1/1 Running 0 37m
default hr--hr-deployment-596946948d-c89bn 2/2 Running 0 37m
default hr--sts-deployment-694d7cff97-gjwdk 1/1 Running 0 37m
ingress-nginx default-http-backend-6586bc58b6-8qss6 1/1 Running 0 42m
ingress-nginx nginx-ingress-controller-6bd7c597cb-t4rwq 1/1 Running 0 42m
istio-system grafana-85dbf49c94-lfpbr 1/1 Running 0 42m
istio-system istio-citadel-545f49c58b-dq5lq 1/1 Running 0 42m
istio-system istio-cleanup-secrets-bh5ws 0/1 Completed 0 42m
istio-system istio-egressgateway-7d59954f4-qcnxm 1/1 Running 0 42m
istio-system istio-galley-5b6449c48f-72vkb 1/1 Running 0 42m
istio-system istio-grafana-post-install-lwmsf 0/1 Completed 0 42m
istio-system istio-ingressgateway-8455c8c6f7-5khtk 1/1 Running 0 42m
istio-system istio-pilot-58ff4d6647-bct4b 2/2 Running 0 42m
istio-system istio-policy-59685fd869-h7v94 2/2 Running 0 42m
istio-system istio-security-post-install-cqj6k 0/1 Completed 0 42m
istio-system istio-sidecar-injector-75b9866679-qg88s 1/1 Running 0 42m
istio-system istio-statsd-prom-bridge-549d687fd9-bspj2 1/1 Running 0 42m
istio-system istio-telemetry-6ccf9ddb96-hxnwv 2/2 Running 0 42m
istio-system istio-tracing-7596597bd7-m5pk8 1/1 Running 0 42m
istio-system prometheus-6ffc56584f-4cm5v 1/1 Running 0 42m
istio-system servicegraph-5d64b457b4-jttl9 1/1 Running 0 42m
kube-system coredns-78fcdf6894-rxw57 1/1 Running 0 50m
kube-system coredns-78fcdf6894-s4bg2 1/1 Running 0 50m
kube-system etcd-ubuntu 1/1 Running 0 49m
kube-system kube-apiserver-ubuntu 1/1 Running 0 49m
kube-system kube-controller-manager-ubuntu 1/1 Running 0 49m
kube-system kube-flannel-ds-9nvf9 1/1 Running 0 49m
kube-system kube-proxy-r868m 1/1 Running 0 50m
kube-system kube-scheduler-ubuntu 1/1 Running 0 49m
Services
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default hr--gateway-service ClusterIP 10.100.238.144 <none> 80/TCP,443/TCP 39m
default hr--hr-service ClusterIP 10.96.193.43 <none> 80/TCP 39m
default hr--sts-service ClusterIP 10.99.54.137 <none> 8080/TCP,8081/TCP,8090/TCP 39m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 52m
ingress-nginx default-http-backend ClusterIP 10.109.166.229 <none> 80/TCP 44m
ingress-nginx ingress-nginx NodePort 10.108.9.180 192.168.60.3 80:31001/TCP,443:32315/TCP 44m
istio-system grafana ClusterIP 10.102.141.231 <none> 3000/TCP 44m
istio-system istio-citadel ClusterIP 10.101.128.187 <none> 8060/TCP,9093/TCP 44m
istio-system istio-egressgateway ClusterIP 10.102.157.204 <none> 80/TCP,443/TCP 44m
istio-system istio-galley ClusterIP 10.96.31.251 <none> 443/TCP,9093/TCP 44m
istio-system istio-ingressgateway LoadBalancer 10.105.138.94 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31219/TCP,8060:31482/TCP,853:30034/TCP,15030:31544/TCP,15031:32652/TCP 44m
istio-system istio-pilot ClusterIP 10.100.170.73 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 44m
istio-system istio-policy ClusterIP 10.104.77.184 <none> 9091/TCP,15004/TCP,9093/TCP 44m
istio-system istio-sidecar-injector ClusterIP 10.100.180.152 <none> 443/TCP 44m
istio-system istio-statsd-prom-bridge ClusterIP 10.107.39.50 <none> 9102/TCP,9125/UDP 44m
istio-system istio-telemetry ClusterIP 10.110.55.232 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 44m
istio-system jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 44m
istio-system jaeger-collector ClusterIP 10.102.43.21 <none> 14267/TCP,14268/TCP 44m
istio-system jaeger-query ClusterIP 10.104.182.189 <none> 16686/TCP 44m
istio-system prometheus ClusterIP 10.100.0.70 <none> 9090/TCP 44m
istio-system servicegraph ClusterIP 10.97.65.37 <none> 8088/TCP 44m
istio-system tracing ClusterIP 10.109.87.118 <none> 80/TCP 44m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 52m
Gateway and virtual service described in point 2
$ kubectl describe gateways.networking.istio.io hr--gateway
Name: hr--gateway
Namespace: default
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
...
Spec:
Selector:
App: hr--gateway
Servers:
Hosts:
*
Port:
Name: http2
Number: 80
Protocol: HTTP2
Hosts:
*
Port:
Name: https
Number: 443
Protocol: HTTPS
Tls:
Mode: PASSTHROUGH
$ kubectl describe virtualservices.networking.istio.io hr--gateway
Name: hr--gateway
Namespace: default
Labels: app=hr--gateway
Annotations: <none>
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
...
Spec:
Gateways:
hr--gateway
Hosts:
*
Http:
Match:
Uri:
Prefix: /info/
Rewrite:
Uri: /
Route:
Destination:
Host: hr--hr-service
Gateway and virtual service described in point 3
$ kubectl describe gateways.networking.istio.io ingress-gateway
Name: ingress-gateway
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"ingress-gateway","namespace":"default"},"spec":{"sel...
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
...
Spec:
Selector:
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http2
Number: 80
Protocol: HTTP2
$ kubectl describe virtualservices.networking.istio.io hr--gateway-ingress-vs
Name: hr--gateway-ingress-vs
Namespace: default
Labels: app=hr--gateway
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
Spec:
Gateways:
ingress-gateway
Hosts:
*
Http:
Match:
Uri:
Prefix: /info/
Route:
Destination:
Host: hr--gateway-service
Events: <none>
The problem is probably as follows: istio-ingressgateway initiates mTLS to hr--gateway-service on port 80, but hr--gateway-service expects plain HTTP connections.
There are multiple solutions:
Define a DestinationRule to instruct clients to disable mTLS on calls to hr--gateway-service
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hr--gateway-service-disable-mtls
spec:
  host: hr--gateway-service.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
Instruct hr--gateway-service to accept mTLS connections. For that, configure the server TLS options on port 80 to be MUTUAL and to use the Istio certificates and private key. Specify serverCertificate, caCertificates and privateKey to be /etc/certs/cert-chain.pem, /etc/certs/root-cert.pem and /etc/certs/key.pem, respectively.
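A sketch of what that server block could look like on the hr--gateway Gateway, following the description above (networking.istio.io/v1alpha3 fields; the certificate paths are the Istio-provisioned ones mentioned above):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hr--gateway
spec:
  selector:
    app: hr--gateway
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP2
    tls:
      mode: MUTUAL                                  # require client (sidecar) certificates
      serverCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem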