Kubernetes nslookup kubernetes.default fails

My Environment:
OS - CentOS-8.2
Kubernetes Version:
Client Version: v1.18.8
Server Version: v1.18.8
I have successfully configured a Kubernetes cluster (one master and one worker), but DNS resolution is failing when I test it with the pod below.
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
# kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default dnsutils 1/1 Running 0 4m38s 10.244.1.20 K8s-Worker-1 <none> <none>
kube-system coredns-66bff467f8-2q4z9 1/1 Running 1 4d14h 10.244.0.5 K8s-Master <none> <none>
kube-system coredns-66bff467f8-ktbd4 1/1 Running 1 4d14h 10.244.0.4 K8s-Master <none> <none>
kube-system etcd-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-apiserver-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-controller-manager-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-flannel-ds-amd64-d6h9c 1/1 Running 61 45h 65.66.67.6 K8s-Worker-1 <none> <none>
kube-system kube-flannel-ds-amd64-tc4qf 1/1 Running 202 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-proxy-cl9n4 1/1 Running 0 45h 65.66.67.6 K8s-Worker-1 <none> <none>
kube-system kube-proxy-s7jlc 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
kube-system kube-scheduler-K8s-Master 1/1 Running 1 4d14h 65.66.67.5 K8s-Master <none> <none>
# kubectl get pods
NAME READY STATUS RESTARTS AGE
dnsutils 1/1 Running 0 22m
The commands below were executed on the cluster master; nslookup kubernetes.default fails.
# kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
# kubectl exec -ti dnsutils -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local company.domain.com
options ndots:5
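If the service VIP keeps timing out, a hedged next step (borrowed from the standard DNS debugging flow, assuming the CoreDNS pod IPs listed above are still current) is to query a CoreDNS pod directly, bypassing the 10.96.0.10 ClusterIP:
# kubectl exec -i -t dnsutils -- nslookup kubernetes.default 10.244.0.4
If this succeeds while queries to 10.96.0.10 time out, suspect kube-proxy/iptables on the node rather than CoreDNS itself.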
# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-2q4z9 1/1 Running 1 4d14h
coredns-66bff467f8-ktbd4 1/1 Running 1 4d14h
# kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
# kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4d14h
# kubectl get endpoints kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 10.244.0.4:53,10.244.0.5:53,10.244.0.4:9153 + 3 more... 4d14h
# kubectl describe svc -n kube-system kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.244.0.4:53,10.244.0.5:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.244.0.4:53,10.244.0.5:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.244.0.4:9153,10.244.0.5:9153
Session Affinity: None
Events: <none>
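Since the service and its endpoints look healthy, a hedged further check (the usual next step in the DNS debugging flow; the label and commands assume a standard kubeadm setup) is whether kube-proxy is running cleanly and has actually programmed rules for the DNS VIP:
# kubectl logs --namespace=kube-system -l k8s-app=kube-proxy
# iptables-save | grep 10.96.0.10
The second command, run on the worker node itself, should print NAT rules for the kube-dns ClusterIP; if it prints nothing, kube-proxy is not programming that node.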
# kubectl describe svc kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 65.66.67.5:6443
Session Affinity: None
Events: <none>
Can anyone please help me debug this issue? Thanks.

I have uninstalled Kubernetes and re-installed version v1.19.0. Now everything is working fine. Thanks.

Related

Unable to reach pod service from Kubernetes master node; from worker nodes it is working

I have done a fresh Kubernetes installation in my VM setup. I have two CentOS-8 servers, a master and a worker; both are configured with bridged networking. The Kubernetes version is v1.21.9 and the Docker version is 23.0.0. I have deployed a simple hello-world Node.js app as a pod. These are the currently running pods:
The issue is that I can access the pod's service through its node's IP address, http://192.168.1.27:31500/, but I am unable to access it from the master node (I expected it to work at http://192.168.1.26:31500/). Can someone help me resolve this?
There are no restarts in the Kubernetes network components, and as far as I have checked there are no errors in the kube-proxy pods.
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default helloworldnodejsapp-deployment-86966cfcc5-85dgm 1/1 Running 0 17m 10.244.1.2 worker-server27 <none> <none>
kube-flannel kube-flannel-ds-226w7 1/1 Running 0 24m 192.168.1.27 worker-server27 <none> <none>
kube-flannel kube-flannel-ds-4cdhn 1/1 Running 0 63m 192.168.1.26 master-server26 <none> <none>
kube-system coredns-558bd4d5db-ht6sp 1/1 Running 0 63m 10.244.0.3 master-server26 <none> <none>
kube-system coredns-558bd4d5db-wq774 1/1 Running 0 63m 10.244.0.2 master-server26 <none> <none>
kube-system etcd-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
kube-system kube-apiserver-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
kube-system kube-controller-manager-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
kube-system kube-proxy-ftsmp 1/1 Running 0 63m 192.168.1.26 master-server26 <none> <none>
kube-system kube-proxy-xhccg 1/1 Running 0 24m 192.168.1.27 worker-server27 <none> <none>
kube-system kube-scheduler-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
Node details
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-server26 Ready control-plane,master 70m v1.21.9 192.168.1.26 <none> CentOS Stream 8 4.18.0-448.el8.x86_64 docker://23.0.0
worker-server27 Ready <none> 30m v1.21.9 192.168.1.27 <none> CentOS Stream 8 4.18.0-448.el8.x86_64 docker://23.0.0
Configuration of /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "dns": ["8.8.8.8", "8.8.4.4", "192.168.1.1"]
}
Hello-world pod deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworldnodejsapp-deployment
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: helloworldnodejsapp
        image: "********:helloworldnodejs"
        ports:
        - containerPort: 8010
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: helloworldnodejsapp-svc
  labels:
    app: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 8010
    targetPort: 8010
    nodePort: 31500
From the explanation I got the following details:
Node IP: 192.168.1.27
Master node IP: 192.168.1.26
Port: 31500
And you want to access the app using your master node IP, which is 192.168.1.26. By default you can't reach the application this way here, because the pod lives on your worker node (192.168.1.27); even though you configured a NodePort, in this case it is bound to the worker node's IP. So you need to expose your application using the ClusterIP to reach it via the master node IP; follow this documentation for more details.
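If the goal is only to reach the service from the master node rather than to expose it externally, a hedged alternative is kubectl port-forward, which tunnels through the API server and therefore works regardless of which node the pod landed on:
# Forward local port 8010 to the service (Ctrl-C to stop):
kubectl port-forward svc/helloworldnodejsapp-svc 8010:8010
# In another shell on the master:
curl http://localhost:8010/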

Nginx Ingress Controller not curling localhost on worker nodes

CentOS 7, 3 VMs -- 1 master and 2 workers, Kubernetes 1.26 (kubelet is 1.25.5.0), cri-dockerd, Calico CNI
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rxxxx-vm1 Ready control-plane 4h48m v1.25.5 10.253.137.20 <none> CentOS Linux 7 (Core) 3.10.0-1160.80.1.el7.x86_64 docker://20.10.22
rxxxx-vm2 Ready <none> 4h27m v1.25.5 10.253.137.17 <none> CentOS Linux 7 (Core) 3.10.0-1160.80.1.el7.x86_64 docker://20.10.22
rxxxx-vm3 Ready <none> 4h27m v1.25.5 10.253.137.10 <none> CentOS Linux 7 (Core) 3.10.0-1160.80.1.el7.x86_64 docker://20.10.22
NGINX Ingress controller is deployed as a daemonset:
# kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-685568b969-b8mfr 1/1 Running 0 4h50m 172.17.0.6 rxxxx-vm1 <none> <none>
calico-apiserver calico-apiserver-685568b969-xrj2h 1/1 Running 0 4h50m 172.17.0.7 rxxxx-vm1 <none> <none>
calico-system calico-kube-controllers-67df98bdc8-2zdnj 1/1 Running 0 4h51m 172.17.0.4 rxxxx-vm1 <none> <none>
calico-system calico-node-498bb 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
calico-system calico-node-sblv9 1/1 Running 0 4h30m 10.253.137.17 rxxxx-vm2 <none> <none>
calico-system calico-node-zkn28 1/1 Running 0 4h29m 10.253.137.10 rxxxx-vm3 <none> <none>
calico-system calico-typha-76c8f59f87-mq52d 1/1 Running 0 4h29m 10.253.137.10 rxxxx-vm3 <none> <none>
calico-system calico-typha-76c8f59f87-zk6jr 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system coredns-787d4945fb-6mq5k 1/1 Running 0 4h51m 172.17.0.3 rxxxx-vm1 <none> <none>
kube-system coredns-787d4945fb-kmqcv 1/1 Running 0 4h51m 172.17.0.2 rxxxx-vm1 <none> <none>
kube-system etcd-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-apiserver-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-controller-manager-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-proxy-g9dbt 1/1 Running 0 4h29m 10.253.137.10 rxxxx-vm3 <none> <none>
kube-system kube-proxy-mnzks 1/1 Running 0 4h30m 10.253.137.17 rxxxx-vm2 <none> <none>
kube-system kube-proxy-n98xb 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
kube-system kube-scheduler-rxxxx-vm1 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
nginx-ingress nginx-ingress-2chhn 1/1 Running 0 4h29m 172.17.0.2 rxxxx-vm3 <none> <none>
nginx-ingress nginx-ingress-95h7s 1/1 Running 0 4h30m 172.17.0.2 rxxxx-vm2 <none> <none>
nginx-ingress nginx-ingress-wbxng 1/1 Running 0 4h51m 172.17.0.5 rxxxx-vm1 <none> <none>
play apple-app 1/1 Running 0 4h45m 172.17.0.8 rxxxx-vm1 <none> <none>
play banana-app 1/1 Running 0 4h45m 172.17.0.9 rxxxx-vm1 <none> <none>
tigera-operator tigera-operator-7795f5d79b-hmm5g 1/1 Running 0 4h51m 10.253.137.20 rxxxx-vm1 <none> <none>
Services:
# kubectl get svc -A -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
calico-apiserver calico-api ClusterIP 10.111.117.42 <none> 443/TCP 6h34m apiserver=true
calico-system calico-kube-controllers-metrics ClusterIP 10.99.121.254 <none> 9094/TCP 6h35m k8s-app=calico-kube-controllers
calico-system calico-typha ClusterIP 10.104.50.90 <none> 5473/TCP 6h35m k8s-app=calico-typha
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h36m <none>
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6h36m k8s-app=kube-dns
play apple-service ClusterIP 10.98.78.251 <none> 5678/TCP 6h29m app=apple
play banana-service ClusterIP 10.103.87.112 <none> 5678/TCP 6h29m app=banana
Service details:
# kubectl -n play describe svc apple-service
Name: apple-service
Namespace: play
Labels: <none>
Annotations: <none>
Selector: app=apple
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.78.251
IPs: 10.98.78.251
Port: <unset> 5678/TCP
TargetPort: 5678/TCP
Endpoints: 172.17.0.8:5678
Session Affinity: None
Events: <none>
Endpoints:
# kubectl get ep -A
NAMESPACE NAME ENDPOINTS AGE
calico-apiserver calico-api 172.17.0.6:5443,172.17.0.7:5443 6h39m
calico-system calico-kube-controllers-metrics 172.17.0.4:9094 6h39m
calico-system calico-typha 10.253.137.10:5473,10.253.137.20:5473 6h40m
default kubernetes 10.253.137.20:6443 6h40m
kube-system kube-dns 172.17.0.2:53,172.17.0.3:53,172.17.0.2:53 + 3 more... 6h40m
play apple-service 172.17.0.8:5678 6h34m
play banana-service 172.17.0.9:5678 6h34m
Endpoint details:
# kubectl -n play describe ep apple-service
Name: apple-service
Namespace: play
Labels: <none>
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2023-01-11T20:21:27Z
Subsets:
Addresses: 172.17.0.8
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 5678 TCP
Events: <none>
Ingress resource:
# kubectl get ing -A -o wide
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
play example-ingress nginx localhost 80 6h30m
Ingress details:
# kubectl -n play describe ing example-ingress
Name: example-ingress
Labels: <none>
Namespace: play
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
localhost
/apple apple-service:5678 (172.17.0.8:5678)
/banana banana-service:5678 (172.17.0.9:5678)
Annotations: ingress.kubernetes.io/rewrite-target: /
Events: <none>
QUESTION:
While curl -kL http://localhost/apple on the master node returns apple, the same command produces the output below on the worker nodes:
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.23.2</center>
</body>
</html>
My understanding is that since an ingress controller pod is running on every node, localhost should resolve on each of them, as localhost is the host defined in the ingress resource. Is this understanding incorrect? If not, what am I doing wrong?
When I look at the logs of the ingress controller pod on the affected node, I see this:
2023/01/12 03:02:05 [error] 68#68: *11 connect() failed (113: No route to host) while connecting to upstream, client: 172.17.0.1, server: localhost, request: "GET /apple HTTP/1.1", upstream: "http://172.17.0.8:5678/apple", host: "localhost"
172.17.0.1 - - [12/Jan/2023:03:02:05 +0000] "GET /apple HTTP/1.1" 502 157 "-" "curl/7.29.0" "-"
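A hedged way to narrow this down (assuming the upstream pod still has the IP 172.17.0.8 from the log, and that curl exists in the ingress image) is to exec into the ingress controller pod on a worker and hit the upstream directly, which separates an ingress misconfiguration from broken cross-node pod networking:
# kubectl -n nginx-ingress exec -it nginx-ingress-95h7s -- curl -m 5 http://172.17.0.8:5678/
If this also fails with "no route to host", the 502 is a symptom of cross-node pod networking (CNI or host firewall), not of the Ingress resource.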

unable to access nodeIP:port, serviceIP:port or podIP:port in minikube k8s

I am using k8s in minikube under Ubuntu and deployed an nginx server, which I want to access at different levels, e.g. via the service IP, node IP, or pod IP, and none of them is reachable; I am not sure why. I am running my curl commands against ip:port from the Ubuntu host machine where the minikube node is installed. Below is the log:
/home/ravi/k8s>kgp
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-deployment-775bf4d7fb-jqxxv 1/1 Running 0 13m 172.17.0.3 minikube <none> <none>
kube-system coredns-66bff467f8-gtsl7 1/1 Running 0 9h 172.17.0.2 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system kube-proxy-nphlc 1/1 Running 0 7h28m 192.168.49.2 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 0 9h 192.168.49.2 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 21 9h 192.168.49.2 minikube <none> <none>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>kgs
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9h <none>
default nginx-service NodePort 10.101.107.62 <none> 80:31000/TCP 13m app=nginx-app
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 9h k8s-app=kube-dns
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>kubectl describe service nginx-service
Name: nginx-service
Namespace: default
Labels: <none>
Annotations: Selector: app=nginx-app
Type: NodePort
IP: 10.101.107.62
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31000/TCP
Endpoints: 172.17.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 172.17.0.3:8000
curl: (7) Failed to connect to 172.17.0.3 port 8000: No route to host
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 192.168.1.52:31000
curl: (7) Failed to connect to 192.168.1.52 port 31000: Connection refused
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 10.101.107.62:80 ---> also hangs
......
......
/home/ravi/k8s>
/home/ravi/k8s>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 9h v1.18.20 192.168.49.2 <none> Ubuntu 20.04.1 LTS 5.13.0-40-generic docker://20.10.3
/home/ravi/k8s>
/home/ravi/k8s>
/home/ravi/k8s>curl 192.168.49.2:31000
curl: (7) Failed to connect to 192.168.49.2 port 31000: Connection refused
/home/ravi/k8s>
/home/ravi/k8s> kubectl logs nginx-deployment-775bf4d7fb-jqxxv ---> no log shown
/home/ravi/k8s>cat 2_nginx_nodeport.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.16
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    nodePort: 31000
    port: 80
    targetPort: 8000
/home/ravi/k8s>
root#nginx-deployment-775bf4d7fb-jqxxv:~# curl 172.17.0.3:80 ---> working on port 80 instead of 8000 as set in yaml
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</body>
</html>
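The last transcript line shows nginx answering on port 80 while the manifest declares containerPort and targetPort 8000, so a hedged fix (assuming the stock nginx:1.16 image, which listens on 80 by default) is to point the Service at the port the container actually uses:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    nodePort: 31000
    port: 80
    targetPort: 80   # was 8000; the container really serves on 80
With minikube's docker driver, the NodePort should then answer at the node IP from kubectl get node -o wide (192.168.49.2:31000 here), or via minikube service nginx-service --url.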

Kubernetes No external IP

I installed Kubernetes as a single-node cluster on a Debian 10 box.
I changed the dashboard config with:
sudo kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard
and changed ClusterIP to NodePort and set the port to 32321, as described in this tutorial: https://k21academy.com/docker-kubernetes/kubernetes-dashboard/
sudo kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 25m
I still don't get an external IP and can't access the dashboard via an external IP. :(
Any advice?
sudo kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-558bd4d5db-9fxkw 1/1 Running 0 136m
kube-system pod/coredns-558bd4d5db-bq79s 1/1 Running 0 136m
kube-system pod/etcd-dyd-001 1/1 Running 0 136m
kube-system pod/kube-apiserver-dyd-001 1/1 Running 0 136m
kube-system pod/kube-controller-manager-dyd-001 1/1 Running 0 136m
kube-system pod/kube-flannel-ds-amd64-hh5qm 1/1 Running 0 136m
kube-system pod/kube-proxy-4pg4r 1/1 Running 0 136m
kube-system pod/kube-scheduler-dyd-001 1/1 Running 0 136m
kubernetes-dashboard pod/dashboard-metrics-scraper-84f48697d6-6sqqt 1/1 Running 0 19m
kubernetes-dashboard pod/kubernetes-dashboard-689fddb6b4-5sbhf 1/1 Running 0 19m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 136m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 136m
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.98.66.248 <none> 8000/TCP 19m
kubernetes-dashboard service/kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 19m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-flannel-ds-amd64 1 1 1 1 1 <none> 136m
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 <none> 136m
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 <none> 136m
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 <none> 136m
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 <none> 136m
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 136m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 2/2 2 2 136m
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 19m
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 19m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-558bd4d5db 2 2 2 136m
kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-84f48697d6 1 1 1 19m
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-689fddb6b4 1 1 1 19m
sudo kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-84f48697d6-6sqqt 1/1 Running 0 17m
pod/kubernetes-dashboard-689fddb6b4-5sbhf 1/1 Running 0 17m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.98.66.248 <none> 8000/TCP 17m
service/kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 17m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 17m
deployment.apps/kubernetes-dashboard 1/1 1 1 17m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-84f48697d6 1 1 1 17m
replicaset.apps/kubernetes-dashboard-689fddb6b4 1 1 1 17m
sudo kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.98.66.248 <none> 8000/TCP 15m
kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 15m
and
sudo kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
NAME READY UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard 1/1 1 1 17m
and
sudo kubectl describe service kubernetes-dashboard -n kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.91.194
IPs: 10.100.91.194
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32321/TCP
Endpoints: 10.244.0.6:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You will not see an external IP for a NodePort service.
Try accessing your dashboard with your server's public IP and the NodePort:
https://<server_IP>:32321
The port 32321 above is taken from your output:
sudo kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.100.91.194 <none> 443:32321/TCP 25m
Remember to use https, and note that the port will change if you redeploy the service.
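If the changing port is a problem, a hedged option is to pin the NodePort explicitly in the service spec (assuming 32321 is free; NodePorts must fall in the cluster's range, 30000-32767 by default):
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 32321   # pinned, so it survives a redeploy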
You need to create a Kubernetes Service of type LoadBalancer, like the example below, which will give you an external IP:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 8765
    targetPort: 9376
  type: LoadBalancer
Remember that to use a Service of type LoadBalancer outside a cloud provider, you need something like MetalLB or similar for your network.
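As a rough sketch of that, here is the layer-2 configuration MetalLB used before v0.13 (it moved to CRDs afterwards); the address range below is an assumption and must be free addresses on your node network:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example range; adjust to your LAN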

Accessing service using istio ingress gives 503 error when mTLS is enabled

I have a mutual-TLS-enabled Istio mesh. My setup is as follows:
A service running inside a pod (service container + Envoy)
An Envoy gateway which sits in front of the above service, with an Istio Gateway and VirtualService attached to it. It routes the /info/ path to the above service.
Another Istio Gateway configured for ingress using the default Istio ingress pod. This also has a Gateway + VirtualService combination; the VirtualService directs the /info/ path to the service described in 2.
I'm attempting to access the service from the ingress gateway using a curl command such as:
$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
But I'm getting a 503 Service Unavailable error, as below:
$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.105.138.94...
* Connected to istio-ingressgateway.istio-system (10.105.138.94) port 80 (#0)
> GET /info/ HTTP/1.1
> Host: istio-ingressgateway.istio-system
> User-Agent: curl/7.47.0
> Accept: */*
> Authorization: Bearer ...
>
< HTTP/1.1 503 Service Unavailable
< content-length: 57
< content-type: text/plain
< date: Sat, 12 Jan 2019 13:30:13 GMT
< server: envoy
<
* Connection #0 to host istio-ingressgateway.istio-system left intact
I checked the logs of the istio-ingressgateway pod and the following line was logged there:
[2019-01-13T05:40:16.517Z] "GET /info/ HTTP/1.1" 503 UH 0 19 6 - "10.244.0.5" "curl/7.47.0" "da02fdce-8bb5-90fe-b422-5c74fe28759b" "istio-ingressgateway.istio-system" "-"
If I log into the Istio ingress pod and send the same request with curl, I get a successful 200 OK:
# curl hr--gateway-service.default/info/ -H "Authorization: Bearer $token" -v
Also, I managed to get a successful response for the same curl command when the mesh was created with mTLS disabled. There are no conflicts shown in the mTLS setup.
Here are the config details for my service mesh in case you need additional info.
Pods
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hr--gateway-deployment-688986c87c-z9nkh 1/1 Running 0 37m
default hr--hr-deployment-596946948d-c89bn 2/2 Running 0 37m
default hr--sts-deployment-694d7cff97-gjwdk 1/1 Running 0 37m
ingress-nginx default-http-backend-6586bc58b6-8qss6 1/1 Running 0 42m
ingress-nginx nginx-ingress-controller-6bd7c597cb-t4rwq 1/1 Running 0 42m
istio-system grafana-85dbf49c94-lfpbr 1/1 Running 0 42m
istio-system istio-citadel-545f49c58b-dq5lq 1/1 Running 0 42m
istio-system istio-cleanup-secrets-bh5ws 0/1 Completed 0 42m
istio-system istio-egressgateway-7d59954f4-qcnxm 1/1 Running 0 42m
istio-system istio-galley-5b6449c48f-72vkb 1/1 Running 0 42m
istio-system istio-grafana-post-install-lwmsf 0/1 Completed 0 42m
istio-system istio-ingressgateway-8455c8c6f7-5khtk 1/1 Running 0 42m
istio-system istio-pilot-58ff4d6647-bct4b 2/2 Running 0 42m
istio-system istio-policy-59685fd869-h7v94 2/2 Running 0 42m
istio-system istio-security-post-install-cqj6k 0/1 Completed 0 42m
istio-system istio-sidecar-injector-75b9866679-qg88s 1/1 Running 0 42m
istio-system istio-statsd-prom-bridge-549d687fd9-bspj2 1/1 Running 0 42m
istio-system istio-telemetry-6ccf9ddb96-hxnwv 2/2 Running 0 42m
istio-system istio-tracing-7596597bd7-m5pk8 1/1 Running 0 42m
istio-system prometheus-6ffc56584f-4cm5v 1/1 Running 0 42m
istio-system servicegraph-5d64b457b4-jttl9 1/1 Running 0 42m
kube-system coredns-78fcdf6894-rxw57 1/1 Running 0 50m
kube-system coredns-78fcdf6894-s4bg2 1/1 Running 0 50m
kube-system etcd-ubuntu 1/1 Running 0 49m
kube-system kube-apiserver-ubuntu 1/1 Running 0 49m
kube-system kube-controller-manager-ubuntu 1/1 Running 0 49m
kube-system kube-flannel-ds-9nvf9 1/1 Running 0 49m
kube-system kube-proxy-r868m 1/1 Running 0 50m
kube-system kube-scheduler-ubuntu 1/1 Running 0 49m
Services
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default hr--gateway-service ClusterIP 10.100.238.144 <none> 80/TCP,443/TCP 39m
default hr--hr-service ClusterIP 10.96.193.43 <none> 80/TCP 39m
default hr--sts-service ClusterIP 10.99.54.137 <none> 8080/TCP,8081/TCP,8090/TCP 39m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 52m
ingress-nginx default-http-backend ClusterIP 10.109.166.229 <none> 80/TCP 44m
ingress-nginx ingress-nginx NodePort 10.108.9.180 192.168.60.3 80:31001/TCP,443:32315/TCP 44m
istio-system grafana ClusterIP 10.102.141.231 <none> 3000/TCP 44m
istio-system istio-citadel ClusterIP 10.101.128.187 <none> 8060/TCP,9093/TCP 44m
istio-system istio-egressgateway ClusterIP 10.102.157.204 <none> 80/TCP,443/TCP 44m
istio-system istio-galley ClusterIP 10.96.31.251 <none> 443/TCP,9093/TCP 44m
istio-system istio-ingressgateway LoadBalancer 10.105.138.94 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31219/TCP,8060:31482/TCP,853:30034/TCP,15030:31544/TCP,15031:32652/TCP 44m
istio-system istio-pilot ClusterIP 10.100.170.73 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 44m
istio-system istio-policy ClusterIP 10.104.77.184 <none> 9091/TCP,15004/TCP,9093/TCP 44m
istio-system istio-sidecar-injector ClusterIP 10.100.180.152 <none> 443/TCP 44m
istio-system istio-statsd-prom-bridge ClusterIP 10.107.39.50 <none> 9102/TCP,9125/UDP 44m
istio-system istio-telemetry ClusterIP 10.110.55.232 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 44m
istio-system jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 44m
istio-system jaeger-collector ClusterIP 10.102.43.21 <none> 14267/TCP,14268/TCP 44m
istio-system jaeger-query ClusterIP 10.104.182.189 <none> 16686/TCP 44m
istio-system prometheus ClusterIP 10.100.0.70 <none> 9090/TCP 44m
istio-system servicegraph ClusterIP 10.97.65.37 <none> 8088/TCP 44m
istio-system tracing ClusterIP 10.109.87.118 <none> 80/TCP 44m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 52m
Gateway and virtual service described in point 2
$ kubectl describe gateways.networking.istio.io hr--gateway
Name: hr--gateway
Namespace: default
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
...
Spec:
  Selector:
    App: hr--gateway
  Servers:
    Hosts:
      *
    Port:
      Name: http2
      Number: 80
      Protocol: HTTP2
    Hosts:
      *
    Port:
      Name: https
      Number: 443
      Protocol: HTTPS
    Tls:
      Mode: PASSTHROUGH
$ kubectl describe virtualservices.networking.istio.io hr--gateway
Name: hr--gateway
Namespace: default
Labels: app=hr--gateway
Annotations: <none>
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
...
Spec:
  Gateways:
    hr--gateway
  Hosts:
    *
  Http:
    Match:
      Uri:
        Prefix: /info/
    Rewrite:
      Uri: /
    Route:
      Destination:
        Host: hr--hr-service
Gateway and virtual service described in point 3
$ kubectl describe gateways.networking.istio.io ingress-gateway
Name: ingress-gateway
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"ingress-gateway","namespace":"default"},"spec":{"sel...
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
...
Spec:
  Selector:
    Istio: ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name: http2
      Number: 80
      Protocol: HTTP2
$ kubectl describe virtualservices.networking.istio.io hr--gateway-ingress-vs
Name: hr--gateway-ingress-vs
Namespace: default
Labels: app=hr--gateway
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
Spec:
  Gateways:
    ingress-gateway
  Hosts:
    *
  Http:
    Match:
      Uri:
        Prefix: /info/
    Route:
      Destination:
        Host: hr--gateway-service
Events: <none>
The problem is probably as follows: istio-ingressgateway initiates mTLS to hr--gateway-service on port 80, but hr--gateway-service expects plain HTTP connections.
There are multiple solutions:
Define a DestinationRule to instruct clients to disable mTLS on calls to hr--gateway-service:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hr--gateway-service-disable-mtls
spec:
  host: hr--gateway-service.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
Instruct hr--gateway-service to accept mTLS connections. For that, configure the server TLS options on port 80 to be MUTUAL and to use the Istio certificates and private key: specify serverCertificate, caCertificates and privateKey as /etc/certs/cert-chain.pem, /etc/certs/root-cert.pem and /etc/certs/key.pem, respectively.
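A hedged sketch of what that second option could look like on the hr--gateway Gateway, using exactly the paths named above (the certs are assumed to be the Istio-mounted ones under /etc/certs):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hr--gateway
spec:
  selector:
    app: hr--gateway
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP2
    tls:
      mode: MUTUAL                                  # require mTLS from callers
      serverCertificate: /etc/certs/cert-chain.pem  # Istio-issued cert chain
      caCertificates: /etc/certs/root-cert.pem      # root CA to validate the client
      privateKey: /etc/certs/key.pem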