Istio unable to access Kubernetes Dashboard

I am trying to access the Kubernetes Dashboard through an Istio Gateway + VirtualService.
However, all I get is 404 page not found when I try to access the dashboard in a browser. Accessing the Dashboard through a k8s NodePort or k8s LoadBalancer service works as expected. The pod, however, complains in the logs about http: TLS handshake error from 127.0.0.6:52483: remote error: tls: bad certificate.
Running httpbin through Istio (as given in their documentation) works as expected, so Istio itself seems to be working fine.
I am using the official Kubernetes Dashboard YAMLs. The service is given below (with type: LoadBalancer added; it doesn't seem to make a difference for Istio, but it allows me to access the Dashboard through a separate IP).
Just for the record, my k8s cluster consists of VirtualBox machines running MetalLB.
kubectl get services --all-namespaces returns the following:
NAMESPACE              NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                      AGE
default                kubernetes                        ClusterIP      10.96.0.1        <none>          443/TCP                                      11d
httpbin                httpbin                           ClusterIP      10.100.186.188   <none>          8000/TCP                                     47h
istio-system           istio-egressgateway               ClusterIP      10.109.231.163   <none>          80/TCP,443/TCP                               5d3h
istio-system           istio-ingressgateway              LoadBalancer   10.111.188.94    192.168.56.46   15021:31440/TCP,80:31647/TCP,443:32715/TCP   5d3h
istio-system           istiod                            ClusterIP      10.104.236.247   <none>          15010/TCP,15012/TCP,443/TCP,15014/TCP        5d3h
kube-system            kube-dns                          ClusterIP      10.96.0.10       <none>          53/UDP,53/TCP,9153/TCP                       11d
kubernetes-dashboard   dashboard-metrics-scraper         ClusterIP      10.101.131.136   <none>          8000/TCP                                     43h
kubernetes-dashboard   kubernetes-dashboard-service      LoadBalancer   10.103.130.244   192.168.56.47   443:30041/TCP                                43h
kubernetes-dashboard   kubernetes-dashboard-service-np   NodePort       10.100.49.224    <none>          8443:30002/TCP                               43h
If I try to access the LoadBalancer directly via the IP above in a browser, I get the usual Kubernetes Dashboard login page. The browser URL is https://192.168.56.47.
YAMLs:
istio-gateway.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kubernetes-dashboard-gateway
  namespace: kubernetes-dashboard
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
istio-virtual-service.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kubernetes-dashboard-virtual-service
  namespace: kubernetes-dashboard
spec:
  hosts:
  - "*"
  gateways:
  - kubernetes-dashboard-gateway
  tls:
  - match:
    - sniHosts: ["*"]
    route:
    - destination:
        host: kubernetes-dashboard-service
        port:
          number: 443
dashboard-service.yaml:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-service
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
    # - port: 8000
    #   targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer

User suren has mentioned:
your gateway is listening on 443, not 80
Yes, this could be a problem. You are trying to reach port 80, but you are only exposing port 443. Either change your configuration or change the port in your request.
See also the documentation on Deploy and Access the Kubernetes Dashboard.
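If you do want the gateway to answer on plain HTTP port 80 as well, a minimal sketch of the Gateway with an extra HTTP listener might look like this (the http server block is an illustrative addition, not part of the original setup; the Dashboard backend itself speaks HTTPS, so you would also need an http route in the VirtualService, e.g. to the plain-HTTP port 9090 that is commented out in the Service above):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kubernetes-dashboard-gateway
  namespace: kubernetes-dashboard
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
  - port:               # hypothetical extra listener on plain HTTP
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"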

Hm, I got it working with the configuration as above and by explicitly specifying a host in all the places where I had previously put a "*". I had to add that host to /etc/hosts to be able to access it in a browser.
It seems that this last part was key, as well as specifying the sniHosts in the VirtualService. The other problems were mostly TLS configuration issues. Setting the mode to PASSTHROUGH works because it makes Istio forward the HTTPS request unterminated to the Kubernetes Dashboard, which then handles the decryption itself.
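For illustration, the working configuration would look roughly like this (a sketch; dashboard.local is a stand-in for the actual hostname, which also goes into /etc/hosts pointing at the istio-ingressgateway IP, 192.168.56.46 in the output above):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kubernetes-dashboard-gateway
  namespace: kubernetes-dashboard
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH      # Istio passes the TLS stream through untouched
    hosts:
    - "dashboard.local"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kubernetes-dashboard-virtual-service
  namespace: kubernetes-dashboard
spec:
  hosts:
  - "dashboard.local"
  gateways:
  - kubernetes-dashboard-gateway
  tls:
  - match:
    - sniHosts: ["dashboard.local"]   # SNI match routes the encrypted stream
    route:
    - destination:
        host: kubernetes-dashboard-service
        port:
          number: 443
With 192.168.56.46 dashboard.local in /etc/hosts, https://dashboard.local then reaches the Dashboard through the Istio ingress gateway.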

Related

How to get apiserver endpoint URL in helm template?

As per the Helm docs, the lookup function can be used to look up resources in a running cluster.
Is there any way to get the API server endpoint URL using that function?
So far I have been able to get the endpoint in two ways.
kubectl describe svc kubernetes -n default
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.0.1
IPs:               10.43.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         10.0.50.111:6443
Session Affinity:  None
Events:            <none>
kubectl config view -o jsonpath="{.clusters[?(@.name==\"joseph-rancher-cluster-2\")].cluster.server}"
https://api.joseph-rancher-cluster-2.rancher.aveshalabs.io
But I am having trouble using them with lookup. Thanks in advance.
Update:
I tried to extract the IP & HTTPS port from the Endpoints resource on the running cluster.
kubectl get ep -n default -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Endpoints
  metadata:
    creationTimestamp: "2023-01-04T05:08:43Z"
    labels:
      endpointslice.kubernetes.io/skip-mirror: "true"
    name: kubernetes
    namespace: default
    resourceVersion: "208"
    uid: db5e0476-9169-41cf-bd00-f6f52162c0ef
  subsets:
  - addresses:
    - ip: 10.0.50.111
    ports:
    - name: https
      port: 6443
      protocol: TCP
kind: List
metadata:
  resourceVersion: ""
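A minimal sketch of doing that extraction with lookup in a template might look like this (names are illustrative; note that lookup returns an empty map under helm template, so it only produces output at install/upgrade time):
{{- /* sketch: read the default/kubernetes Endpoints from the live cluster */}}
{{- $ep := lookup "v1" "Endpoints" "default" "kubernetes" }}
{{- if $ep }}
{{- $subset := first $ep.subsets }}
{{- $addr := first $subset.addresses }}
{{- $port := first $subset.ports }}
apiServerEndpoint: "https://{{ $addr.ip }}:{{ $port.port }}"
{{- end }}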
But the problem is, it returns a private IP, which is unusable in the case of cloud clusters. What I need is this:
kubectl cluster-info
Kubernetes control plane is running at https://api.joseph-rancher-cluster-2.rancher.aveshalabs.io
CoreDNS is running at https://api.joseph-rancher-cluster-2.rancher.aveshalabs.io/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
This can be extracted from the kubeconfig file as well. So is there any template function I can use to get the API server endpoint?

Kubernetes service URL not responding to API call

I've been following multiple tutorials on how to deploy my (Spring Boot) API on Minikube. I already got it (user-service running on 8081) working in a Docker container with an API gateway (port 8080) and Eureka (port 8087), but for starters I just want it to run without those. Steps I took:
Push the Docker container or image (?) to Docker Hub; I don't know the proper term.
Create a deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kwetter-service
spec:
  type: LoadBalancer
  selector:
    app: kwetter
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8081
      nodePort: 30070
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kwetter-deployment
  labels:
    app: kwetter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kwetter
  template:
    metadata:
      labels:
        app: kwetter
    spec:
      containers:
        - name: user-api
          image: cazhero/s6-kwetter-backend_user:latest
          ports:
            - containerPort: 8081 # the port it runs on when I manually start it up
kubectl apply -f deployment.yaml
minikube service kwetter-service
It takes me to an empty site with the URL http://192.168.49.2:30070, which I thought I could use to make API calls to, but apparently not. How do I make API calls to my application running on Minikube?
kubectl get svc returns:
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes        ClusterIP      10.96.0.1      <none>        443/TCP          4d4h
kwetter-service   LoadBalancer   10.106.42.56   <pending>     8080:30070/TCP   4d
describe svc kwetter-service:
Name:                     kwetter-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=kwetter
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.42.56
IPs:                      10.106.42.56
Port:                     <unset>  8080/TCP
TargetPort:               8081/TCP
NodePort:                 <unset>  30070/TCP
Endpoints:                172.17.0.4:8081
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age  From                Message
  ----    ------  ---- ----                -------
  Normal  Type    6s   service-controller  LoadBalancer -> NodePort
Made an Ingress in the YAML, used kubectl get ing:
NAME              CLASS    HOSTS   ADDRESS   PORTS   AGE
kwetter-ingress   <none>   *                 80      49m
To make some things clear:
You need to have pushed your Docker image cazhero/s6-kwetter-backend_user:latest to Docker Hub; check that at https://hub.docker.com/ in your personal repository.
What is the output of minikube service kwetter-service? Does it print the URL http://192.168.49.2:30070?
Make sure your pod is running correctly with the following Minikube commands:
# check pod status
minikube kubectl -- get pods
# if the pod is running, check its container logs
minikube kubectl -- logs po/kwetter-deployment-xxxx-xxxx
I see that you are using a LoadBalancer service. A LoadBalancer service is the standard way to expose a service to the internet; with this method, each service gets its own IP address.
Check the external IP:
kubectl get svc
Use the external IP and the port number in this format to access the application:
http://REPLACE_WITH_EXTERNAL_IP:8080
If you want to access the application using the NodePort (30070), use a NodePort service instead of a LoadBalancer service.
Refer to this documentation for more information on accessing applications through NodePort and LoadBalancer services.
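One caveat specific to this setup: on Minikube a LoadBalancer service keeps its EXTERNAL-IP at <pending> (as in the kubectl get svc output above) unless a tunnel is running. A sketch of that flow:
# run in a separate terminal; it stays in the foreground and
# assigns an external IP to LoadBalancer services
minikube tunnel
# the service should now show an EXTERNAL-IP instead of <pending>
kubectl get svc kwetter-service
# then call the API on the service port (8080 here)
curl http://REPLACE_WITH_EXTERNAL_IP:8080/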

Exposing kubernetes Dashboard with clusterIP service externally using Ingress rules

I am trying to expose the kubernetes-dashboard app externally using an Ingress resource. I have installed the Nginx controller, and kubernetes-dashboard is a ClusterIP-type service on port 443.
I have created an Ingress resource with a YAML file pointing to the backend service, which is kubernetes-dashboard, but somehow I am not getting the IP address of my host (dashboard.com), so I cannot add that entry to the /etc/hosts file. What is the resolution here?
The YAML file of the Ingress for kubernetes-dashboard is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  tls:
  - hosts:
    - dashboard.com
    secretName: kubernetes-dashboard-certs
  rules:
  - host: dashboard.com
    http:
      paths:
      - pathType: ImplementationSpecific
        path: /
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
The kubernetes-dashboard service config is as follows:
Name:              kubernetes-dashboard
Namespace:         kubernetes-dashboard
Labels:            k8s-app=kubernetes-dashboard
Annotations:       <none>
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.106.1.186
IPs:               10.106.1.186
Port:              443/TCP
TargetPort:        8443/TCP
Endpoints:         10.44.0.3:8443
Session Affinity:  None
Events:            <none>
I am not getting the IP address of my host
You have to use the Nginx ingress controller's service IP everywhere, so that traffic gets forwarded to and managed by the Nginx ingress.
You can check the IP of the Nginx controller using:
kubectl get svc -n ingress-nginx
The Nginx controller service will be exposed as type LoadBalancer; you can use this IP in your DNS as an A or CNAME record.
Any request coming to your domain will then get forwarded along:
Nginx ingress controller > Ingress rules > K8s Service > K8s Pods
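For a quick local test without real DNS, one possible flow might be the following (the controller service name is an assumption; check what your install created):
# find the external IP of the ingress controller service
kubectl get svc -n ingress-nginx ingress-nginx-controller
# map the Ingress host to that IP locally (203.0.113.10 is a placeholder)
echo "203.0.113.10  dashboard.com" | sudo tee -a /etc/hosts
# -k because the dashboard's certificate is self-signed
curl -k https://dashboard.com/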

Google Kubernetes Engine Ingress doesn't work

Created an Ingress following the guide in the 'Kubernetes in Action' book on GKE, but the Ingress doesn't work: it can't be accessed from the public IP address of the Ingress.
Create the ReplicaSet to create the pods.
Create the Service (followed the NodePort method from 'Kubernetes in Action').
Create the Ingress.
The ReplicaSet, Service, and Ingress are created successfully, the NodePort can be accessed from the public IP address, and there is no UNHEALTHY in the Ingress.
replicaset:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: sonyfaye/kubia
Service:
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
The NodePort itself can be accessed from the public IP addresses.
C:\kube>kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.59.240.1    <none>        443/TCP        8d
kubia-nodeport   NodePort    10.59.253.10   <none>        80:30123/TCP   20h
C:\kube>kubectl get node
NAME                                   STATUS   ROLES    AGE   VERSION
gke-kubia-default-pool-08dd2133-qbz6   Ready    <none>   8d    v1.12.8-gke.6
gke-kubia-default-pool-183639fa-18vr   Ready    <none>   8d    v1.12.8-gke.6
gke-kubia-default-pool-42725220-43q8   Ready    <none>   8d    v1.12.8-gke.6
C:\kube>kubectl get node -o wide
NAME                                   STATUS   ROLES    AGE   VERSION         INTERNAL-IP   EXTERNAL-IP      OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-kubia-default-pool-08dd2133-qbz6   Ready    <none>   8d    v1.12.8-gke.6   10.140.0.17   35.201.224.238   Container-Optimized OS from Google   4.14.119+        docker://17.3.2
gke-kubia-default-pool-183639fa-18vr   Ready    <none>   8d    v1.12.8-gke.6   10.140.0.18   35.229.152.12    Container-Optimized OS from Google   4.14.119+        docker://17.3.2
gke-kubia-default-pool-42725220-43q8   Ready    <none>   8d    v1.12.8-gke.6   10.140.0.16   34.80.225.64     Container-Optimized OS from Google   4.14.119+        docker://17.3.2
C:\kube>curl http://34.80.225.64:30123
You've hit kubia-j2lnr
But the ingress can't be accessed from outside.
hosts file:
34.98.92.110 kubia.example.com
C:\kube>kubectl describe ingress
Name:             kubia
Namespace:        default
Address:          34.98.92.110
Default backend:  default-http-backend:80 (10.56.0.7:8080)
Rules:
  Host               Path  Backends
  ----               ----  --------
  kubia.example.com
                     /   kubia-nodeport:80 (10.56.0.14:8080,10.56.1.6:8080,10.56.3.4:8080)
Annotations:
  ingress.kubernetes.io/backends:         {"k8s-be-30123--c4addd497b1e0a6d":"HEALTHY","k8s-be-30594--c4addd497b1e0a6d":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule:  k8s-fw-default-kubia--c4addd497b1e0a6d
  ingress.kubernetes.io/target-proxy:     k8s-tp-default-kubia--c4addd497b1e0a6d
  ingress.kubernetes.io/url-map:          k8s-um-default-kubia--c4addd497b1e0a6d
Events:
  <none>
C:\kube>curl http://kubia.example.com
curl: (7) Failed to connect to kubia.example.com port 80: Timed out
C:\kube>telnet kubia.example.com 80
Connecting To kubia.example.com...
C:\kube>telnet 34.98.92.110 80
Connecting To 34.98.92.110...Could not open connection to the host, on port 80: Connect failed
Tried from the intranet: curl 34.98.92.110 returns some result, and port 80 of 34.98.92.110 is accessible from the intranet.
C:\kube>kubectl exec -it kubia-lrt9x bash
root@kubia-lrt9x:/# curl http://kubia.example.com
curl: (6) Could not resolve host: kubia.example.com
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404
root@kubia-lrt9x:/# curl http://10.56.0.7:8080
default backend - 404
Does anybody know how to debug this?
The NodePort has been added to the firewall; otherwise the NodePort would not be accessible. The Ingress IP doesn't seem to need to be added to the firewall.
Try to expose the ReplicaSet to be able to connect from the outside:
$ kubectl expose rs hello-world --type=NodePort --name=my-service
Remember to first delete the kubia-nodeport Service, remove the selector and the service section from the Ingress configuration file, and then apply the changes using kubectl apply.
You can find more information here: exposing-externalip.
Useful doc: kubectl-expose.
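Applied to the ReplicaSet from the question, that could look like the following sketch (the service name is arbitrary):
# expose the kubia ReplicaSet directly; port 80 maps to the container's 8080
kubectl expose rs kubia --type=NodePort --name=kubia-exposed \
    --port=80 --target-port=8080
# confirm the assigned NodePort, then test it against a node's external IP
kubectl get svc kubia-exposed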

Unable to connect to external load balancer even after exposing service in kubernetes

I have the following deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: family-tree-deployment
  labels:
    app: familytree
spec:
  replicas: 1
  selector:
    matchLabels:
      app: familytree
  template:
    metadata:
      labels:
        app: familytree
    spec:
      containers:
      - name: familytree
        image: index.docker.io/koustubh/familytree:v1.0
        ports:
        - containerPort: 8080
I could successfully create the deployment using kubectl create -f deploy.yml
Now, I simply exposed this deployment with the following command
kubectl expose deployment family-tree-deployment --type=LoadBalancer --name=familytree-service
The service was successfully created.
The output is
$ kubectl get svc
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
familytree-service   LoadBalancer   10.51.244.161   35.221.113.235   8080:30505/TCP   1h
$ kubectl describe svc familytree-service
Name:                     familytree-service
Namespace:                default
Labels:                   app=familytree
Annotations:              <none>
Selector:                 app=familytree
Type:                     LoadBalancer
IP:                       10.51.244.161
LoadBalancer Ingress:     35.221.113.235
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30505/TCP
Endpoints:                10.48.4.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
I could log in to the pod, and I made sure the service is working.
However, when I use the external IP of the load balancer and query my API, the connection times out.
I have made sure the firewall allows port 8080.
My application is running on port 8080.
The generated Service object looks perfectly valid, so we can exclude a label issue or a missing public IP address. Besides, you can access your Service internally, which means the firewall rule was most likely applied incorrectly.
Please ensure you allow incoming traffic as follows:
from the internet to the load balancer on TCP port 8080
from the load balancer to all Kubernetes nodes on TCP port 30505
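Since the external IP suggests a GCP cluster, a sketch of such rules with gcloud could look like this (rule names and the network are assumptions to adapt to your project):
# allow clients to reach the load-balanced service port
gcloud compute firewall-rules create allow-familytree-lb \
    --network=default --allow=tcp:8080 --source-ranges=0.0.0.0/0
# allow the load balancer / health checks to reach the NodePort on the nodes
# (130.211.0.0/22 and 35.191.0.0/16 are Google's documented LB ranges)
gcloud compute firewall-rules create allow-familytree-nodeport \
    --network=default --allow=tcp:30505 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16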