Kubernetes dashboard: TLS handshake error

I have an EKS 1.18 cluster. When I try to deploy the Kubernetes dashboard, it fails with the error below.
Also, my dashboard Service uses a LoadBalancer.
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "test.xxx.net"
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Please let me know what the TLS handshake error means. What should I do to fix it?
logs:
2021/03/18 22:03:08 http: TLS handshake error from xx.xxx.x.x:8279: EOF
2021/03/18 22:03:08 http: TLS handshake error from xx.xxx.x.x:34935: EOF
2021/03/18 22:03:08 http: TLS handshake error from xx.xxx.x.x:24437: EOF
2021/03/18 22:03:08 http: TLS handshake error from xx.xxx.x.x:64552: EOF
2021/03/18 22:03:10 http: TLS handshake error from xx.xxx.x.x:5481: EOF
code:
https://github.com/kubernetes/dashboard/releases/tag/v2.0.3
https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T23:49:20Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Explanation:
zerkms already shed some light on the cause of your issue in his comment:
Your ingress presumably does not understand the dashboard terminates tls itself and tries to send naked http requests there. So, whatever you use to ingress/load balance traffic to the app - should be configured to connect via tls. – zerkms Mar 19 at 2:55
You also told us that there isn't any ingress configured, and that you connect using a domain name that points to your LoadBalancer's IP. That's fine, but keep in mind that when you create a Service of type LoadBalancer on your EKS cluster, a Classic Load Balancer is created by default. It works on layer 7 of the OSI model, so it recognizes the HTTPS traffic, terminates the TLS connection, and then sends plain HTTP requests to your backend pods. As zerkms already explained, your backend is not prepared to handle such a connection because it terminates TLS itself.
Solution:
As I already mentioned, by default a Classic Load Balancer is created for a LoadBalancer Service. However, you can change this default behaviour by adding the following annotation to your Service, as mentioned here:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
A Network Load Balancer operates on layer 4 of the OSI model, so it simply passes the TCP packets through to your backend pods without inspecting their content or terminating TLS, and the dashboard receives the HTTPS traffic it expects.
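For reference, the Service from your question with that annotation added would look roughly like this (a sketch based on the manifest above):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "test.xxx.net"
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    # request a Network Load Balancer (layer 4) instead of the default Classic Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Note that the annotation is only honoured when the load balancer is created, so you may need to delete and recreate the Service for it to take effect.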
Alternatively, you may set up an ingress controller that supports SSL passthrough, such as nginx-ingress, as AWS's ALB unfortunately doesn't support it.
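If you go the ingress route instead, note that SSL passthrough in ingress-nginx is disabled by default: the controller has to be started with the --enable-ssl-passthrough flag, and each Ingress that should pass TLS straight through to the pod needs the annotation shown below. This is a sketch of the relevant pieces only, not a complete manifest; with passthrough enabled, routing is done by SNI/hostname rather than by path:
# On the ingress-nginx controller Deployment, add to the container args:
#   - --enable-ssl-passthrough
# On the dashboard Ingress, request passthrough so TLS terminates at the dashboard pod itself:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"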

Related

Enable SSL connection for Kubernetes Dashboard

I use these commands to install and enable the Kubernetes dashboard on a remote host:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
kubectl proxy --address='192.168.1.132' --port=8001 --accept-hosts='^*$'
http://192.168.1.132:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
But I get:
Insecure access detected. Sign in will not be available. Access Dashboard securely over HTTPS or using localhost. Read more here.
Is it possible to enable an SSL connection on the Kubernetes host so that I can access the dashboard without this warning message and enable login?
From the service definition
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
This exposes port 443 (i.e. HTTPS), so it's already preconfigured. First, use https instead of http in your URL.
Then, instead of doing a kubectl proxy, why not simply
kubectl port-forward -n kubernetes-dashboard services/kubernetes-dashboard 8001:443
Access endpoint via https://127.0.0.1:8001/#/login
Now it's going to give the typical "certificate not signed" warning, since the certificates are self-signed (the --auto-generate-certificates argument in the deployment definition). Just skip it in your browser. See an article like https://vmwire.com/2022/02/07/running-kubernetes-dashboard-with-signed-certificates/ if you need to configure a signed certificate.
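If you do go the signed-certificate route, a rough sketch of wiring it in (assuming the recommended manifest, which mounts a kubernetes-dashboard-certs secret into the pod and starts the container with --auto-generate-certificates; the dashboard.crt/dashboard.key file names are placeholders for your own files):
kubectl create secret generic kubernetes-dashboard-certs \
  --from-file=tls.crt=./dashboard.crt \
  --from-file=tls.key=./dashboard.key \
  -n kubernetes-dashboard --dry-run=client -o yaml | kubectl apply -f -
# Then, in the kubernetes-dashboard Deployment args, replace
#   --auto-generate-certificates
# with
#   --tls-cert-file=tls.crt
#   --tls-key-file=tls.key
# and restart the dashboard pod.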
Try this
First, do a port-forward to your computer. This forwards port 8443 on your computer (the first port) to port 8443 in the pod (the one exposed according to the manifest):
kubectl port-forward pod/kubernetes-dashboard 8443:8443 # Make sure you switched to the proper namespace
In your browser, go to http://localhost:8443; based on the error message, it should work.
If the Kubernetes dashboard pod implements SSL in its web server, then go to https://localhost:8443 instead.

ingress nginx: how to debug a 502 page even though the ports in the Service and Ingress are correct?

I have a web application running behind a ClusterIP service on a worker node on port 5001. I'm also using k3s for the cluster deployment; I checked the cluster connection and it's running fine.
The deployment has the container port set to 5001:
ports:
  - containerPort: 5001
Here is the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: user-ms
  name: user-ms
spec:
  ports:
    - name: http
      port: 80
      targetPort: 5001
  selector:
    io.kompose.service: user-ms
status:
  loadBalancer: {}
And here is the ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-ms-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: user-ms
                port:
                  number: 80
I'm getting a 502 Bad Gateway error whenever I type in my worker or master IP address.
Expected: it should return the web application page.
I looked online and most answers mention a wrong port in the Service or Ingress, but my ports are correct; yes, I triple-checked them:
- calling the user-ms service on port 80 from another pod -> worked
- calling the ClusterIP on the worker node on port 5001 -> worked
The ports are correct, so why is the ingress returning 502?
Screenshots show the ingress describe output, the describe of the nginx ingress controller pod, the pod running normally, and the logs of the nginx ingress pod (sorry for the images, but I'm using a streaming machine to access the terminal so I can't copy-paste).
How should I go about debugging this error?
OK, I managed to figure this out. In its default setting K3s uses Traefik as the default ingress, which is why my nginx ingress log didn't show anything about the 502 Bad Gateway.
I decided to tear down my cluster and set it up again, this time following the suggestion from this issue https://github.com/k3s-io/k3s/issues/1160#issuecomment-1058846505 to create the cluster without Traefik:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
Now when I call kubectl get pods --all-namespaces, I no longer see a Traefik pod running; previously there were Traefik pods running.
Once I had done all of that, I ran apply on the ingress once again and got a 404 error. I checked the nginx ingress pod logs, which now showed a new error about a missing ingress class, so I added the following to my ingress configuration file under metadata:
metadata:
  name: user-ms-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
Now I once more went to the IP of the worker node: the 404 error was gone, but I got a 502 Bad Gateway error again. I checked the logs and saw connection refused errors.
I figured out that I was setting a network policy for all of my microservices, so I deleted the network policy and removed its settings from all my deployment files.
Finally, I checked once more and I could access my API and Swagger page normally.
TL;DR:
If you are using nginx ingress on K3s, remember to disable Traefik when creating the cluster.
Don't forget to set the ingress class inside your ingress configuration (see the sketch below).
Don't set up a network policy that prevents the nginx pod from calling the other pods in that network.
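For the ingress class specifically, note that on newer Kubernetes/ingress-nginx versions the kubernetes.io/ingress.class annotation is deprecated in favour of spec.ingressClassName, so the equivalent of the annotation above would be something like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-ms-ingress
spec:
  ingressClassName: nginx   # replaces the kubernetes.io/ingress.class annotation
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: user-ms
                port:
                  number: 80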
You can turn on access logging on nginx, which will let you see more logs on the ingress controller and trace every request routed through the ingress. If you are trying to load a UI or accessing a particular endpoint, the calls coming in from the browser will be visible in the nginx-controller logs. From this you can conclude whether the incoming requests are actually being routed to the proper service, and then start debugging the service itself (e.g. check whether you can curl the endpoint from any pod within the cluster).
I noticed that you are using the image k8s.gcr.io/ingress-nginx/controller:v1.2.0. If you have installed it using Helm, there will be a kubernetes-ingress ConfigMap for the ingress controller. By default "disable-access-log" is true; change it to false and you should start seeing more logs on the ingress controller. You might want to bounce the ingress controller pods if you still do not see detailed logs.
kubectl edit cm -n <namespace> kubernetes-ingress
apiVersion: v1
data:
  disable-access-log: "false"   # turn this to false
  map-hash-bucket-size: "128"
  ssl-protocols: SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2 TLSv1.3
kind: ConfigMap
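Once access logging is enabled, you can follow the controller logs while reproducing the request, and verify the backend directly from inside the cluster. The namespace, label selector and service DNS name below are assumptions for a typical Helm install, with the user-ms Service in the default namespace:
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f
# and test the backend service directly from a throwaway pod:
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl \
  --command -- curl -v http://user-ms.default.svc.cluster.local/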

GKE config connector issue - Post i/o timeout

I am running into the below error when creating compute IP.
Config connector is already enabled, and it is a private cluster hosted on a shared network.
Version 1.17.15-gke.800
$ kubectl apply -f webapp-compute-ip.yaml
Error from server (InternalError): error when creating "webapp-compute-ip.yaml": Internal error occurred: failed calling webhook "annotation-defaulter.cnrm.cloud.google.com": Post https://cnrm-validating-webhook.cnrm-system.svc:443/annotation-defaulter?timeout=30s: dial tcp 192.168.66.130:9443: i/o timeout
$ cat webapp-compute-ip.yaml
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: webapp-ip-test
  namespace: sandbox
  labels:
    app: webapp
    environment: test
  annotations:
    cnrm.cloud.google.com/project-id: "cluster-name"
spec:
  location: global
This problem was due to a config connector version issue.
There was a change in the webhook default port, from 443 to 9443.
The Config Connector version depends on the GKE version, and I did not have any control over it; moreover, there is no public documentation available on which Config Connector version ships with which GKE version. There is an existing request for that here.
The solution for me was to add port 9443 to the firewall rule.
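For reference, such a rule can be added with gcloud; the rule name, network, master CIDR range and node tag below are placeholders to replace with your own values:
gcloud compute firewall-rules create allow-cnrm-webhook \
  --network=SHARED_VPC_NETWORK \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:9443 \
  --source-ranges=MASTER_IPV4_CIDR \
  --target-tags=NODE_NETWORK_TAG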

How to get TLS 1.3 on GKE

I have a service deployed in Google Kubernetes Engine and have gotten the request to support TLS 1.3 connections on that service. Currently I do not get higher than TLS 1.2. Do I need to define my ingress differently?
My ingress is
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-tls-__CI_ENVIRONMENT_SLUG__
  namespace: __KUBE_NAMESPACE__
  labels:
    app: __CI_ENVIRONMENT_SLUG__
    ref: __CI_ENVIRONMENT_SLUG__
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - __SERVICE_TLS_ENDPOINT__
      secretName: __CI_ENVIRONMENT_SLUG__-service-cert
  rules:
    - host: __SERVICE_TLS_ENDPOINT__
      http:
        paths:
          - path: /
            backend:
              serviceName: service-__CI_ENVIRONMENT_SLUG__
              servicePort: 8080
Master version 1.17.13-gke.600
Pool version 1.17.13-gke.600
Your Ingress resource looks good. I used the same setup as yours and received a message that TLS 1.3 was supported.
The official documentation states:
Default TLS Version and Ciphers
To provide the most secure baseline configuration possible,
nginx-ingress defaults to using TLS 1.2 and 1.3 only, with a secure set of TLS ciphers.
Please check which version of nginx-ingress-controller you are running:
Kubernetes.github.io: Ingress-nginx: Deploy: Detect installed version
You can also check whether TLS 1.3 is enabled in the nginx.conf of your nginx-ingress-controller pod (ssl_protocols TLSv1.2 TLSv1.3;). You will need to exec into the pod.
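For example (the namespace and pod name are placeholders for your own ingress-nginx install):
kubectl exec -n <ingress-nginx-namespace> <nginx-ingress-controller-pod> -- \
  grep ssl_protocols /etc/nginx/nginx.conf
# expected output if TLS 1.3 is enabled:
#   ssl_protocols TLSv1.2 TLSv1.3;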
Troubleshooting steps for ensuring support for TLS 1.3
Does your server (nginx-ingress) support TLS 1.3?
You can check if your Ingress controller supports it by running an online analysis:
SSLLabs.com: SSLTest: Analyze
You should get a message stating that TLS 1.3 is supported.
You can also use alternative online tools:
Geekflare.dev: TLS test
Geekflare.com: 10 Online Tool to Test SSL, TLS and Latest Vulnerability
Does your client support TLS 1.3?
Please make sure that the client connecting to your Ingress supports TLS 1.3.
The client connecting to the server was not mentioned in the question.
Assuming that it's a web browser, you can check it with a similar tool to the one used for a server:
Clienttest.ssllabs.com:8443: SSLTest: ViewMyClient
Assuming that it is some other tool (curl, nmap, openssl, etc.) please check its documentation for more reference.
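For example, with openssl 1.1.1 or newer (older versions cannot speak TLS 1.3 at all) you can force a TLS 1.3 handshake against your endpoint; replace the hostname with your own:
openssl s_client -connect __SERVICE_TLS_ENDPOINT__:443 -tls1_3 </dev/null
# A successful handshake reports "TLSv1.3" in the session summary;
# otherwise you will see a handshake failure / "no protocols available" error.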
Additional reference:
Github.com: Kubernetes: Ingress nginx: Enable tls 1.3 in the nginx image
En.wikipedia.org: Wiki: Transport Layer Security Adoption

Is it possible to access the Kubernetes API via https ingress?

I was unsuccessfully trying to access the Kubernetes API via an HTTPS ingress, and now I've started to wonder whether that is possible at all.
Any working, detailed guide for direct remote access (without using ssh -> kubectl proxy, to avoid user management on the Kubernetes node) would be appreciated. :)
UPDATE:
Just to make it more clear: this is a bare-metal, on-premise deployment (no GCE, AWS, Azure or any other cloud), and the intention is that some environments will be totally offline (which will add additional issues with getting the install packages).
The intention is to be able to use kubectl on a client host with authentication via Keycloak (which also fails when following the step-by-step instructions). Administrative access using SSH and then kubectl is not suitable for client access. So it looks like I will have to update the firewall to expose the API port and create a NodePort service.
Setup:
[kubernetes - env] - [FW/SNAT] - [me]
The FW/NAT allows access only on ports 22, 80 and 443.
As I have set up an ingress on Kubernetes, I cannot create a firewall rule to redirect 443 to 6443. It seems the only option is creating an HTTPS ingress that points access for "api-kubernetes.node.lan" to the kubernetes service on port 6443. The ingress itself is working fine; I have created a working ingress for the Keycloak auth application.
I have copied .kube/config from the master node to my machine and placed it into .kube/config (Cygwin environment)
What was attempted:
SSL passthrough. I could not enable it, as the kubernetes-ingress controller was not able to start because it could not create the intermediary cert. Even if it had started, it would most likely have crippled the other HTTPS ingresses.
Created a self-signed SSL cert. As a result, via a browser I could get API output when pointing to https://api-kubernetes.node.lan/api. However, kubectl throws an error due to the unsigned cert, which is obvious.
Put apiserver.crt into the ingress tls: definition. Got an error because the cert is not valid for api-kubernetes.node.lan. Also obvious.
Followed guide [1] to create a kube-ca-signed certificate. Now the browser does not show anything at all. Using curl to access https://api-kubernetes.node.lan/api results in empty output (I can see an HTTP OK when using -v). kubectl now gets the following error:
$ kubectl.exe version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"windows/amd64"}
Error from server: the server responded with the status code 0 but did not return more information
When comparing apiserver.pem and my generated cert, the only difference I see is:
apiserver.pem
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
generated.crt
X509v3 Extended Key Usage:
TLS Web Server Authentication
Ingress configuration:
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: kubernetes-api
  namespace: default
  labels:
    app: kubernetes
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - secretName: kubernetes-api-cert
      hosts:
        - api-kubernetes.node.lan
  rules:
    - host: api-kubernetes.node.lan
      http:
        paths:
          - path: "/"
            backend:
              serviceName: kubernetes
              servicePort: 6443
Links:
[1] https://db-blog.web.cern.ch/blog/lukas-gedvilas/2018-02-creating-tls-certificates-using-kubernetes-api
You should be able to do it as long as you expose the kube-apiserver pod in the kube-system namespace. I tried it like this:
$ kubectl -n kube-system expose pod kube-apiserver-xxxx --name=apiserver --port 6443
service/apiserver exposed
$ kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apiserver ClusterIP 10.x.x.x <none> 6443/TCP 1m
...
Then I went to a cluster machine and pointed my ~/.kube/config context to 10.x.x.x:6443:
clusters:
  - cluster:
      certificate-authority-data: [REDACTED]
      server: https://10.x.x.x:6443
    name: kubernetes
...
Then:
$ kubectl version --insecure-skip-tls-verify
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
I used --insecure-skip-tls-verify because 10.x.x.x would need to be valid on the server certificate. You can actually fix that as described here: Configure AWS publicIP for a Master in Kubernetes
So maybe a couple of things in your case:
Since you are terminating SSL at the Ingress, you need to use the same kube-apiserver certificates found under /etc/kubernetes/pki/ on your master.
You need to add the external IP or name under which the Ingress is exposed to the certificate. Follow something like this: Configure AWS publicIP for a Master in Kubernetes
Partially answering my own question.
For the moment I am satisfied with token-based auth: this allows separate access levels and avoids allowing shell users. Keycloak-based dashboard auth worked, but after logging in, I was not able to log out. There is no logout option. :D
And to access the dashboard itself via Ingress, I found a working rewrite rule somewhere:
nginx.ingress.kubernetes.io/configuration-snippet: "rewrite ^(/ui)$ $1/ui/ permanent;"
One note: the UI must be accessed with a trailing slash "/": https://server_address/ui/
For those coming here who just want to reach their Kubernetes API from another network and with another hostname, but don't need to change the API to a port other than the default 6443, an ingress isn't necessary.
If this describes you, all you have to do is add additional SAN entries to your API server's cert for the DNS name you're coming from. This article describes the process in detail.
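On a kubeadm-managed cluster, a rough sketch of that process with recent kubeadm versions (the backup file names and the extra IP are examples; api-kubernetes.node.lan is the hostname from this question):
# On the master: back up the current apiserver certificate and key
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.bak
sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.bak
# Re-issue the certificate with the extra SANs you connect through
sudo kubeadm init phase certs apiserver \
  --apiserver-cert-extra-sans=api-kubernetes.node.lan,203.0.113.10
# Restart the kube-apiserver static pod so it picks up the new certificate,
# e.g. by briefly moving /etc/kubernetes/manifests/kube-apiserver.yaml out of the folder and back.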