Is it possible to access the Kubernetes API via https ingress? - kubernetes

I have been trying, unsuccessfully, to access the Kubernetes API via HTTPS ingress and have started to wonder whether that is possible at all.
Any working, detailed guide for direct remote access (without using ssh -> kubectl proxy, to avoid user management on the Kubernetes node) would be appreciated. :)
UPDATE:
Just to make it clearer: this is a bare-metal, on-premise deployment (no GCE, AWS, Azure or any other cloud), and the intention is that some environments will be completely offline (which will add further issues with obtaining the install packages).
The intention is to be able to use kubectl on a client host with authentication via Keycloak (which also fails when following the step-by-step instructions). Administrative access using SSH and then kubectl is not suitable for client access. So it looks like I will have to update the firewall to expose the API port and create a NodePort service.
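For illustration, a NodePort Service for the API server might look roughly like this (a sketch; the node IP and nodePort are placeholders, and a selectorless Service with a manual Endpoints object is one way to point at the control-plane node):
apiVersion: v1
kind: Service
metadata:
  name: apiserver-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 6443
    nodePort: 30443        # placeholder; must be in the node-port range and opened on the firewall
---
apiVersion: v1
kind: Endpoints
metadata:
  name: apiserver-nodeport   # must match the Service name
  namespace: default
subsets:
- addresses:
  - ip: 192.168.1.10         # placeholder: control-plane node IP where the API server listens
  ports:
  - port: 6443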
Setup:
[kubernetes - env] - [FW/SNAT] - [me]
FW/NAT allows only 22,80 and 443 port access
As I already have an ingress set up on Kubernetes, I cannot create a firewall rule to redirect 443 to 6443. It seems the only option is to create an HTTPS ingress that points "api-kubernetes.node.lan" at the kubernetes service on port 6443. The ingress itself is working fine; I have already created a working ingress for the Keycloak auth application.
I have copied .kube/config from the master node to my machine and placed it in .kube/config (Cygwin environment).
What was attempted:
SSL passthrough. Could not be enabled, as the kubernetes-ingress controller failed to start because it could not create an intermediary certificate (see also the note after the ingress configuration below). Even if it had started, it would most likely have crippled the other HTTPS ingresses.
Created a self-signed SSL certificate. As a result I could get API output in a browser when pointing it at https://api-kubernetes.node.lan/api. However, kubectl throws an error due to the unsigned certificate, which is to be expected.
Put apiserver.crt into the ingress tls: definition. Got an error because the certificate is not valid for api-kubernetes.node.lan. Also expected.
Followed guide [1] to create a kube-ca signed certificate. Now the browser does not show anything at all. Using curl to access https://api-kubernetes.node.lan/api results in empty output (I can see an HTTP OK when using -v). kubectl now gets the following error:
$ kubectl.exe version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"windows/amd64"}
Error from server: the server responded with the status code 0 but did not return more information
When comparing apiserver.pem with my generated certificate, the only difference I can see is:
apiserver.pem:
    X509v3 Key Usage:
        Digital Signature, Non Repudiation, Key Encipherment
generated.crt:
    X509v3 Extended Key Usage:
        TLS Web Server Authentication
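For comparison, a hedged openssl sketch that produces a certificate carrying both the Key Usage and Extended Key Usage extensions plus a SAN for the ingress host. This signs directly with the cluster CA files rather than via the CSR API used in guide [1]; file names are placeholders and the CA paths assume a kubeadm-style layout:
# certificate request config with both key-usage extensions and the SAN
cat > api-ingress.cnf <<'EOF'
[req]
distinguished_name = dn
req_extensions = ext
prompt = no
[dn]
CN = api-kubernetes.node.lan
[ext]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = DNS:api-kubernetes.node.lan
EOF
openssl genrsa -out api-ingress.key 2048
openssl req -new -key api-ingress.key -out api-ingress.csr -config api-ingress.cnf
openssl x509 -req -in api-ingress.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out api-ingress.crt -days 365 -extensions ext -extfile api-ingress.cnf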
Ingress configuration:
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: kubernetes-api
  namespace: default
  labels:
    app: kubernetes
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - secretName: kubernetes-api-cert
    hosts:
    - api-kubernetes.node.lan
  rules:
  - host: api-kubernetes.node.lan
    http:
      paths:
      - path: "/"
        backend:
          serviceName: kubernetes
          servicePort: 6443
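As a side note on the passthrough attempt mentioned above: with the NGINX ingress controller, SSL passthrough normally requires starting the controller with --enable-ssl-passthrough and annotating the ingress along these lines (a sketch, not verified against this setup):
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # if terminating TLS at the ingress instead, the backend still speaks HTTPS on 6443:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"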
Links:
[1] https://db-blog.web.cern.ch/blog/lukas-gedvilas/2018-02-creating-tls-certificates-using-kubernetes-api

You should be able to do it as long as you expose the kube-apiserver pod in the kube-system namespace. I tried it like this:
$ kubectl -n kube-system expose pod kube-apiserver-xxxx --name=apiserver --port 6443
service/apiserver exposed
$ kubectl -n kube-system get svc
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
apiserver   ClusterIP   10.x.x.x     <none>        6443/TCP   1m
...
Then, on a cluster machine, I pointed my ~/.kube/config context at 10.x.x.x:6443:
clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: https://10.x.x.x:6443
  name: kubernetes
...
Then:
$ kubectl version --insecure-skip-tls-verify
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
I used --insecure-skip-tls-verify because 10.x.x.x would need to be present in the server certificate for verification to succeed. You can fix that properly as described here: Configure AWS publicIP for a Master in Kubernetes.
So, a couple of things to check in your case:
Since you are serving SSL at the Ingress, you need to use the same kube-apiserver certificates found under /etc/kubernetes/pki/ on your master.
You need to add the external IP or name to the certificate where the Ingress is exposed. Follow something like this: Configure AWS publicIP for a Master in Kubernetes.

Partially answering my own question.
For the moment I am satisfied with token-based auth: this allows separate access levels and avoids creating shell users (a generic sketch of that flow is at the end of this answer). Keycloak-based dashboard auth worked, but after logging in I was not able to log out; there is no logout option. :D
To access the dashboard itself via Ingress, I found a working rewrite rule somewhere:
nginx.ingress.kubernetes.io/configuration-snippet: "rewrite ^(/ui)$ $1/ui/ permanent;"
One note: the UI must be accessed with a trailing slash "/": https://server_address/ui/
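As a generic illustration of the token-based flow (not the exact commands used here; account and binding names are placeholders, and this relies on the auto-created service-account secret of pre-1.24 clusters — on 1.24+ you would use kubectl create token instead):
kubectl -n kube-system create serviceaccount dashboard-viewer
kubectl create clusterrolebinding dashboard-viewer-binding --clusterrole=view --serviceaccount=kube-system:dashboard-viewer
# print the bearer token stored in the service account's auto-created secret
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get serviceaccount dashboard-viewer -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode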

For those coming here who just want to reach their Kubernetes API from another network and with another host name, but don't need to move the API to a port other than the default 6443, an ingress isn't necessary.
If this describes you, all you have to do is add additional SAN entries to your API server's certificate for the DNS name you're coming from. This article describes the process in detail.
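On a kubeadm-managed cluster, for example, regenerating the API server certificate with an extra SAN might look roughly like this (a sketch; paths assume the default kubeadm layout, the host name is a placeholder, and /etc/kubernetes/pki should be backed up first):
# move the old cert aside, otherwise kubeadm will skip generation
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=api-kubernetes.node.lan
# restart the kube-apiserver static pod so it picks up the new certificate,
# e.g. by moving /etc/kubernetes/manifests/kube-apiserver.yaml out of the directory and back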

Related

How to solve via Rancher a Kubernetes Ingress Controller Fake Certificate error

I installed Rancher 2.6 on top of a Kubernetes cluster. As the cert-manager version I used 1.7.1:
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.7.1 --set installCRDs=true --create-namespace
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=MYDOMAIN.org \
  --set bootstrapPassword=MYPASSWORD \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=cert#mydomain.org \
  --set letsEncrypt.ingress.class=nginx
(I use letsEncrypt as the TLS source.)
After the installation was done, Rancher was successfully available at https://mydomain.org.
The Let's Encrypt SSL certificate worked fine there. With Rancher I then created a new RKE2 cluster for my apps.
So I created a new Deployment for testing:
"rancher/hello-world:latest"
3 replicas
Calling the NodePort IP address with its port directly worked: http://XXXXXX:32599/
At this point I want to use an HTTPS subdomain, hello.mydomain.org.
After studying the documentation, my approach was to create a new Ingress, which I did via the Rancher UI (screenshot not reproduced here).
After creating the new Ingress, I checked the Ingresses section of my hello-world deployment; the new Ingress is now listed there.
My expectation was that I could now go to https://hello.mydomain.org. But HTTPS doesn't work there; instead I got:
NET::ERR_CERT_AUTHORITY_INVALID
Subject: Kubernetes Ingress Controller Fake Certificate
Issuer: Kubernetes Ingress Controller Fake Certificate
Expires on: 03.09.2023
Current date: 03.09.2022
Where did I make a mistake? How can I use Let's Encrypt for my deployments?
The fake certificate usually implies that the ingress controller is serving a default backend instead of what you expect it to. While a particular Ingress resource might be served over HTTP as expected, the controller doesn't consider it servable over HTTPS. The most likely explanation is that the certificate is missing and the ingress host isn't configured for HTTPS. When you installed Rancher you only configured Rancher's own ingress; you need to set up certificates for each Ingress resource separately.
You didn't mention which ingress controller you are using. With Let's Encrypt or other ACME-based certificate issuers you will usually need a certificate controller to manage certificate generation and renewal; I'd recommend cert-manager. There is an excellent tutorial for setting up Let's Encrypt, cert-manager and nginx-ingress-controller. If you're using Traefik, it can generate Let's Encrypt certificates by itself, but the support is only partial in Kubernetes environments (i.e. no high availability), so your best bet is to use cert-manager even there.
Even once you have cert-manager set up, it doesn't automatically issue certificates for every Ingress, only for those it is requested to; you need annotations for that (an example ClusterIssuer is sketched below).
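For reference, a minimal ClusterIssuer sketch for the Let's Encrypt staging endpoint (the e-mail address and secret name are placeholders; adjust the solver class to your controller):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com               # placeholder: your own address
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
    - http01:
        ingress:
          class: nginx                     # assumption: nginx ingress controller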
With cert-manager, once you have set up the Issuer/ClusterIssuer and annotation, your ingress resource should look something like this (you can check the YAML from rancher):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
  name: hello-ingress
  namespace: hello-ns
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - backend:
          service:
            name: hello-service
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - hello.example.com
    secretName: hello-letsencrypt-cert
You might need to edit the YAML directly and add spec.tls.secretName. If all is well, once you apply metadata.annotations and have set up spec.tls.hosts and spec.tls.secretName, the verification should happen soon and the ingress address should change to https://hello.example.com.
As a side note, I've also seen this issue when the Ingress sits behind a reverse proxy, such as HAProxy, and that reverse proxy (or the Ingress) is not properly set up to use the PROXY protocol. You don't mention using one, but I'll note it for the record.
If these steps don't solve your problem, you should run kubectl describe on the Ingress and kubectl logs on the nginx-controller pods and see if anything stands out.
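For example (namespaces and labels are assumptions and depend on how your controller and workload are deployed):
kubectl -n hello-ns describe ingress hello-ingress
# cert-manager resources involved in issuing the certificate
kubectl -n hello-ns get certificate,certificaterequest,order,challenge
# ingress controller logs; namespace/label depend on the installation
kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=100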
EDIT: I jumped to a conclusion, so I restructured this answer to also note the possibility of missing a certificate manager altogether.

K3s kubeconfig authenticate with token instead of client cert

I set up K3s on a server with:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --cluster-init --disable=traefik --write-kubeconfig-mode 644" sh -s -
Then I grabbed the kube config from /etc/rancher/k3s/k3s.yaml and copied it to my local machine so I can interact with the cluster from my machine rather than from the server node I installed K3s on. I had to swap out the references to 127.0.0.1 and change them to the actual hostname of the server I installed K3s on, but other than that it worked.
I then hooked up 2 more server nodes to the cluster for a high-availability setup using:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --server https://{hostname or IP of server 1}:6443 --disable=traefik --write-kubeconfig-mode 644" sh -s -
Now, on my local machine, I run kubectl get pods (for example) and that works. But I want a highly available setup, so I placed a TCP load balancer (NGINX, actually) in front of my cluster. Now I am trying to connect to the Kubernetes API through that proxy / load balancer and, unfortunately, since my ~/.kube/config uses a client certificate for authentication, this no longer works: the load balancer / proxy in front of the servers cannot pass my client certificate on to the K3s server.
My ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {omitted}
    server: https://my-cluster-hostname:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: {omitted}
    client-key-data: {omitted}
I also extracted the client cert and key from my kube config into files and hit the API server with curl; that works when I hit the server nodes directly but NOT when I go through my proxy / load balancer.
What I would like to do instead of the client-certificate approach is use token authentication, as my proxy would not interfere with that. However, I am not sure how to get such a token. I read the Kubernetes Authenticating guide; specifically, I tried creating a new service account and getting the token associated with it, as described in the Service Account Tokens section, but that did not work either. I also dug through the K3s server configuration options to see if there was any mention of a static token file, etc., but didn't find anything that seemed likely.
Is this some limitation of K3s or am I just doing something wrong (likely)?
My kubectl version output:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7+k3s1", GitCommit:"ac70570999c566ac3507d2cc17369bb0629c1cc0", GitTreeState:"clean", BuildDate:"2021-11-29T16:40:13Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
I figured out an approach that works for me by reading through the Kubernetes Authenticating guide in more detail. I settled on the Service Account Tokens approach, as it says:
Normally these secrets are mounted into pods for in-cluster access to
the API server, but can be used from outside the cluster as well.
My use is for outside the cluster.
First, I created a new ServiceAccount called cluster-admin:
kubectl create serviceaccount cluster-admin
I then created a ClusterRoleBinding to assign cluster-wide permissions to my ServiceAccount (I named this cluster-admin-manual because K3s already had created one called cluster-admin that I didn't want to mess with):
kubectl create clusterrolebinding cluster-admin-manual --clusterrole=cluster-admin --serviceaccount=default:cluster-admin
Now you have to get the Secret that was created for you when you created your ServiceAccount:
kubectl get serviceaccount cluster-admin -o yaml
You'll see something like this returned:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-12-20T15:55:55Z"
  name: cluster-admin
  namespace: default
  resourceVersion: "3973955"
  uid: 66bab124-8d71-4e5f-9886-0bad0ebd30b2
secrets:
- name: cluster-admin-token-67jtw
Get the Secret content with:
kubectl get secret cluster-admin-token-67jtw -o yaml
In that output you will see the data/token property. This is a base64 encoded JWT bearer token. Decode it with:
echo {base64-encoded-token} | base64 --decode
Now you have your bearer token, and you can add a user to your ~/.kube/config with the following command. You can also paste that JWT into jwt.io to inspect its claims and make sure you base64-decoded it properly.
kubectl config set-credentials my-cluster-admin --token={token}
Then make sure the existing context in your ~/.kube/config has the user set appropriately (I did this manually by editing my kube config file, but there's probably a kubectl config command for it; see the sketch after the examples below). For example:
- context:
    cluster: my-cluster
    user: my-cluster-admin
  name: my-cluster
My user in the kube config looks like this:
- name: my-cluster-admin
  user:
    token: {token}
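For completeness, the kubectl equivalent of that manual edit would be roughly the following (using the names from the snippets above):
kubectl config set-context my-cluster --cluster=my-cluster --user=my-cluster-admin
kubectl config use-context my-cluster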
Now I can authenticate to the cluster using the token, which my proxy / load balancer does not interfere with, instead of relying on a transport-layer mechanism (TLS with mutual auth) that it cannot pass through.
Other resources I found helpful:
Kubernetes — Role-Based Access Control (RBAC) Overview by Anish Patel

Kubernetes dashboard: TLS handshake error

I have an EKS 1.18 cluster. When I try to deploy the Kubernetes dashboard, it fails with the error below.
Also, my dashboard Service uses a LoadBalancer.
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "test.xxx.net"
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Please let me know what the TLS handshake error means and what I should do to fix it.
logs:
2021/03/18 22:03:08 http: TLS handshake error from xx.xxx.x.x:8279: EOF
2021/03/18 22:03:08 http: TLS handshake error from xx.xxx.x.x:34935: EOF
2021/03/18 22:03:08 http: TLS handshake error from xx.xxx.x.x:24437: EOF
2021/03/18 22:03:08 http: TLS handshake error from xx.xxx.x.x:64552: EOF
2021/03/18 22:03:10 http: TLS handshake error from xx.xxx.x.x:5481: EOF
code:
https://github.com/kubernetes/dashboard/releases/tag/v2.0.3
https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
kubectl version
+ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T23:49:20Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Explanation:
zerkms already shed some light on the cause of your issue in his comment:
Your ingress presumably does not understand the dashboard terminates
tls itself and tries to send naked http requests there. So, whatever
you use to ingress/load balance traffic to the app - should be
configured to connect via tls. – zerkms Mar 19 at 2:55
You also told us that there isn't any ingress configured, but that for the connection you use a domain name pointing to your LoadBalancer's IP. That's fine, but keep in mind that when you create a Service of type LoadBalancer on your EKS cluster, by default a Classic Load Balancer is created. It works on layer 7 of the OSI model, so it recognizes the HTTPS traffic, terminates the TLS connection and then sends a plain HTTP request to your backend pods. As zerkms already explained, your backend is not prepared to handle such a connection, as it terminates TLS itself.
Solution:
As I already mentioned, by default, when you create your LoadBalancer Service, a Classic Load Balancer is created. However, you can change this default behaviour by adding the following annotation to your Service, as mentioned here:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
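Applied to the Service from the question, the metadata section would then look roughly like this (existing annotations kept as-is, only the load-balancer-type annotation added):
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "test.xxx.net"
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard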
As a Network Load Balancer operates on layer 4 of the OSI model, it simply passes the TCP packets to your backend pods without inspecting their content or terminating TLS, so your backend receives the HTTPS traffic it expects.
Alternatively, you may set up an ingress controller that is configured to support SSL passthrough, such as nginx-ingress, as AWS's ALB unfortunately doesn't support it.

Ambassador Edge Stack Questions

I'm getting a "no healthy upstream" error when accessing Ambassador. Pods, Services and the LoadBalancer all seem fine and healthy. Ambassador runs on top of AKS.
At the moment I have multiple services running in the Kubernetes cluster, and each service has its own Mapping with its own prefix. Is it possible to point multiple Kubernetes services at the same Mapping so that I don't have too many prefixes, and all my services sit under the same Ambassador prefix?
By default Ambassador redirects me to HTTPS, which is creating certificate issues. Although I will be bringing in HTTPS in the near future, for now I'm just looking to prove the concept, so how can I disable HTTPS and run Ambassador over HTTP only?
"No healthy upstream" typically means that, for whatever reason, Ambassador cannot find the service listed in the Mapping. The first thing I usually do when I see this is run kubectl exec -it -n ambassador {my_ambassador_pod_name} -- sh and try curl -v my-service, where "my-service" is the Kubernetes DNS name of the service you are trying to reach. Depending on the response, this can give you some hints on why Ambassador is failing to see the service.
Mappings work on a 1-to-1 basis with services. If your goal, however, is to avoid prefix usage, there are other ways Ambassador can match requests to create routes. One common approach I've seen is host-based routing (https://www.getambassador.io/docs/latest/topics/using/headers/host/), creating subdomains for either individual or logical sets of services; a sketch follows.
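A minimal sketch of such a host-based Mapping (names, namespace and port are assumptions):
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: my-service-by-host
  namespace: ambassador
spec:
  host: my-service.example.com   # route on the Host header instead of a path prefix
  prefix: /
  service: my-service.default:80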
AES defaults to redirecting to HTTPS, but this behaviour can be overridden by applying a Host with an insecure routing policy. A very simple one that I commonly use is this:
---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: wildcard
  namespace: ambassador
spec:
  hostname: "*"
  acmeProvider:
    authority: none
  requestPolicy:
    insecure:
      action: Route
  selector:
    matchLabels:
      hostname: wildcard

How to enable extensions API in Kubernetes?

I'd like to try out the new Ingress resource available in Kubernetes 1.1 in Google Container Engine (GKE). But when I try to create for example the following resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
using:
$ kubectl create -f test-ingress.yaml
I end up with the following error message:
error: could not read an encoded object from test-ingress.yaml: API version "extensions/v1beta1" in "test-ingress.yaml" isn't supported, only supports API versions ["v1"]
error: no objects passed to create
When I run kubectl version it shows:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
But I seem to have the latest kubectl component installed since running gcloud components update kubectl just gives me:
All components are up to date.
So how do I enable the extensions/v1beta1 in Kubernetes/GKE?
The issue is that your client (kubectl) doesn't support the new ingress resource because it hasn't been updated to 1.1 yet. This is mentioned in the Google Container Engine release notes:
The packaged kubectl is version 1.0.7, consequently new Kubernetes 1.1
APIs like autoscaling will not be available via kubectl until next
week's push of the kubectl binary.
along with the solution: download the newer binary manually (a sketch of doing that is below).
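A rough sketch of fetching a matching client manually, assuming the release-bucket URL pattern used by upstream at the time (verify the exact URL against the release notes before use):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/linux/amd64/kubectl
chmod +x kubectl
./kubectl version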