Kubernetes: how to enable API Server Bearer Token Auth? - kubernetes

I've been trying to enable token auth for HTTP REST API server access from a remote client.
I installed my CoreOS/K8S cluster controller using this script: https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh
My cluster works fine. This is a TLS installation so I need to configure any kubectl clients with the client certs to access the cluster.
I then tried to enable token auth via running:
echo `dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null`
This gives me a token, which I then added to a token file on my controller along with a default user:
$> cat /etc/kubernetes/token
3XQ8W6IAourkXOLH2yfpbGFXftbH0vn,default,default
I then modified the /etc/kubernetes/manifests/kube-apiserver.yaml to add in:
- --token-auth-file=/etc/kubernetes/token
to the startup param list
I then reboot (I'm not sure of the best way to restart the API server by itself?).
At this point, kubectl from a remote server quits working (it won't connect). I then look at docker ps on the controller and see the API server. I run docker logs container_id and get no output. If I look at other docker containers I see output like:
E0327 20:05:46.657679 1 reflector.go:188]
pkg/proxy/config/api.go:33: Failed to list *api.Endpoints:
Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0:
dial tcp 127.0.0.1:8080: getsockopt: connection refused
So it appears that my kube-apiserver.yaml change is preventing the API server from starting properly.
Any suggestions on the proper way to configure API Server for bearer token REST auth?
It is possible to have both TLS configuration and Bearer Token Auth configured, right?
Thanks!

I think your kube-apiserver dies because it can't find /etc/kubernetes/token. On your deployment the apiserver is a static pod, so it runs in a container, which in turn means it has a different root filesystem than the host.
Look into /etc/kubernetes/manifests/kube-apiserver.yaml and add a volume and a volumeMount like this (I have omitted the lines that do not need changing and don't help in locating the correct section):
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
  - name: kube-apiserver
    command:
    - ...
    - --token-auth-file=/etc/kubernetes/token
    volumeMounts:
    - mountPath: /etc/kubernetes/token
      name: token-kubernetes
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/token
    name: token-kubernetes
One more note: the file you quoted as token should not end in a dot; maybe that was only a copy-paste mistake, but check it anyway. The format is documented under static token file:
token,user,uid,"group1,group2,group3"
If your problem persists, execute the command below and post the output:
journalctl -u kubelet | grep kube-apiserver
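As for restarting the API server by itself: the kubelet watches /etc/kubernetes/manifests and recreates the static pod whenever the manifest changes, so a full reboot should not be needed. Once the container is back up, a quick sanity check of the token from the remote machine might look like this (a sketch; host and port are placeholders for your controller's secure endpoint, and the token is the one from the question):

# Expect a JSON response (or a 403 if authorization denies the user) rather
# than an authentication error:
curl -k https://<controller-host>:<secure-port>/api/v1/namespaces \
  -H "Authorization: Bearer 3XQ8W6IAourkXOLH2yfpbGFXftbH0vn"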

Related

K3s kubeconfig authenticate with token instead of client cert

I set up K3s on a server with:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --cluster-init --disable=traefik --write-kubeconfig-mode 644" sh -s -
Then I grabbed the kubeconfig from /etc/rancher/k3s/k3s.yaml and copied it to my local machine so I can interact with the cluster from my machine rather than from the server node I installed K3s on. I had to swap out the references to 127.0.0.1 and change them to the actual hostname of the server I installed K3s on, but other than that it worked.
I then hooked up 2 more server nodes to the cluster for a High Availability setup using:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --server https://{hostname or IP of server 1}:6443 --disable=traefik --write-kubeconfig-mode 644" sh -s -
Now on my local machine I run kubectl get pods (for example) and that works. But I want a highly available setup, so I placed a TCP load balancer (NGINX, actually) in front of my cluster. Now I am trying to connect to the Kubernetes API through that proxy / load balancer and, unfortunately, since my ~/.kube/config uses a client certificate for authentication, this no longer works: the load balancer / proxy that lives in front of the servers cannot pass my client cert on to the K3s server.
My ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {omitted}
    server: https://my-cluster-hostname:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: {omitted}
    client-key-data: {omitted}
I also grabbed the client cert and key from my kube config, exported them to files, and hit the API server with curl; it works when I hit the server nodes directly but NOT when I go through my proxy / load balancer.
What I would like to do instead of using the client certificate approach is use token authentication as my proxy would not interfere with that. However, I am not sure how to get such a token. I read the Kubernetes Authenticating guide and specifically I tried creating a new service account and getting the token associated with it as described in the Service Account Tokens section but that also did not work. I also dug through K3s server config options to see if there was any mention of static token file, etc. but didn't find anything that seemed likely.
Is this some limitation of K3s or am I just doing something wrong (likely)?
My kubectl version output:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7+k3s1", GitCommit:"ac70570999c566ac3507d2cc17369bb0629c1cc0", GitTreeState:"clean", BuildDate:"2021-11-29T16:40:13Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
I figured out an approach that works for me by reading through the Kubernetes Authenticating Guide in more detail. I settled on the Service Account Tokens approach as it says:
Normally these secrets are mounted into pods for in-cluster access to
the API server, but can be used from outside the cluster as well.
My use is for outside the cluster.
First, I created a new ServiceAccount called cluster-admin:
kubectl create serviceaccount cluster-admin
I then created a ClusterRoleBinding to assign cluster-wide permissions to my ServiceAccount (I named this cluster-admin-manual because K3s already had created one called cluster-admin that I didn't want to mess with):
kubectl create clusterrolebinding cluster-admin-manual --clusterrole=cluster-admin --serviceaccount=default:cluster-admin
Now you have to get the Secret that is created for you when you created your ServiceAccount:
kubectl get serviceaccount cluster-admin -o yaml
You'll see something like this returned:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-12-20T15:55:55Z"
  name: cluster-admin
  namespace: default
  resourceVersion: "3973955"
  uid: 66bab124-8d71-4e5f-9886-0bad0ebd30b2
secrets:
- name: cluster-admin-token-67jtw
Get the Secret content with:
kubectl get secret cluster-admin-token-67jtw -o yaml
In that output you will see the data/token property. This is a base64 encoded JWT bearer token. Decode it with:
echo {base64-encoded-token} | base64 --decode
Now you have your bearer token and you can add a user to your ~/.kube/config with the following command. You can also paste that JWT into jwt.io to take a look at the properties and make sure you base64 decoded it properly.
kubectl config set-credentials my-cluster-admin --token={token}
Then make sure the existing context in your ~/.kube/config has the user set appropriately (I did this manually by editing my kube config file, but there is a kubectl config command for it too; see the note after the example). For example:
- context:
    cluster: my-cluster
    user: my-cluster-admin
  name: my-cluster
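The kubectl config command mentioned above would be something like this (a sketch; it assumes the context is named my-cluster as in the example):

kubectl config set-context my-cluster --user=my-cluster-admin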
My user in the kube config looks like this:
- name: my-cluster-admin
  user:
    token: {token}
Now I can authenticate to the cluster using the token, which my proxy / load balancer does not interfere with, instead of relying on a transport-layer mechanism (TLS with mutual auth) that it cannot pass through.
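A quick way to double-check that the token-based user is in effect and has the expected permissions (a sketch; it assumes the context above is the current one):

kubectl config current-context    # should print my-cluster
kubectl auth can-i '*' '*'        # "yes" for a cluster-admin binding
kubectl get nodes                 # now goes through the load balancer, no client cert involved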
Other resources I found helpful:
Kubernetes — Role-Based Access Control (RBAC) Overview by Anish Patel

Accessing a remote k3s cluster via Lens IDE

I was trying to configure a new installation of Lens IDE to work with my remote cluster (on a remote server, on a VM), but encountered some errors and can't find a proper explanation for this case.
Lens expects a kubeconfig file, so I gave it the one from my cluster, after changing
server: https://127.0.0.1:6443
to
server: https://(address to the remote server):(assigned intermediate port to 6443 of the VM with the cluster)
After which in Lens I'm getting this:
2021/06/14 22:55:13 http: proxy error: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.1.122, not (address to the remote server)
I can see that some cert has to be reconfigured, but I'm absolutely new to the thing.
Here the full contents of the original config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0...
    client-key-data: LS0...
The solution is quite simple: k3s has to add the new IP to the certificate. By default it includes only localhost and the IP of the node it's running on, so if you (like me) have some kind of machine in front of it (like a load balancer or a dedicated firewall), the IP of that machine has to be added manually.
There are two ways how it can be done:
During the installation of k3s:
curl -sfL https://get.k3s.io | sh -s - server --tls-san <desired IP>
Or the argument can be added to an already installed k3s by editing the service unit and restarting the service:
sudo nano /etc/systemd/system/k3s.service

ExecStart=/usr/local/bin/k3s \
    server \
        '--tls-san' \
        '<desired IP>' \

sudo systemctl daemon-reload
sudo systemctl restart k3s
P.S. I have, however, faced issues with the second method.
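Either way, before pointing Lens at the new address you can verify that it actually made it into the serving certificate's SANs (a sketch; substitute your server address and port):

# Inspect the SANs on the certificate served on the API port:
openssl s_client -connect <address-of-remote-server>:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'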

Using a custom certificate for the Kubernetes api server with minikube

I have been trying to find out how to do this but so far have found nothing; I am quite new to Kubernetes, so I might just have overlooked it. I want to use my own certificate for the Kubernetes API server. Is this possible? And if so, can someone perhaps give me a link?
OK, so here is my idea. We know we cannot change the cluster certs, but there is another way to do it: we should be able to proxy through an Ingress.
First we enable the ingress addon:
➜ ~ minikube addons enable ingress
Given tls.crt and tls.key we create a secret (you don't need to do this if you are using cert-manager, but that requires some additional steps I am not going to describe here):
➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key tls.key
and an ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-k8s
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: my-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443
Notice what the k8s docs say about the CN and FQDN:
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.
The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.
But we can use tokens. Here is a page in k8s docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/ that lists all possible methods of authentication.
For testing I chose a service account token, but feel free to experiment with others.
Let's create a service account, bind a role to it, and try to access the cluster:
➜ ~ kubectl create sa cadmin
serviceaccount/cadmin created
➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding
clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created
Now we follow these instructions: access-cluster-api from docs to try to access the cluster with sa token.
➜ ~ APISERVER=https://$(minikube ip)
➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
➜ ~ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -H "Host: foo.bar.com"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.39.210:8443"
    }
  ]
}
Note: I am testing with invalid/self-signed certificates and I don't own the foo.bar.com domain, so I need to pass the Host header by hand. For you it may look a bit different, so don't just copy-paste; try to understand what's happening and adjust it. If you have a domain, you should be able to access it directly (no $(minikube ip) necessary).
As you should see, it worked! We got a valid response from the API server.
But we probably don't want to use curl to access k8s.
Let's create a kubeconfig with the token.
kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config
kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com
kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin
kubectl config use-context --kubeconfig my-config mini
And now we can access k8s with this config:
➜ ~ kubectl get po --kubeconfig my-config
No resources found in default namespace.
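One caveat: on Kubernetes v1.24 and later a token Secret is no longer created automatically for a new ServiceAccount, so the jsonpath lookup above will come back empty. In that case you can request a short-lived token for the same service account explicitly:

# Kubernetes v1.24+: request a time-limited token for the service account
TOKEN=$(kubectl create token cadmin --duration=24h)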
Yes, you can use your own certificates and set them in the Kubernetes API server.
Suppose you have created the certificates; move and save them to a specific directory on the node:
{
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Then you can create the API server service and configure it with these files.
Note: the above example specifically assumes GCP instances, so you might have to change some commands, for example:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
For the above command, you can provide the bare-metal IP manually instead of fetching it from the GCP instance metadata API if you are not using GCP.
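For orientation, the part of that guide that actually wires the certificates into the API server is the kube-apiserver systemd unit; an abridged sketch of the relevant flags (paths as moved above, see the links below for the full unit file):

# Abridged: only the flags that reference the certificates moved above
/usr/local/bin/kube-apiserver \
  --advertise-address=${INTERNAL_IP} \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  ...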
Please refer to this link: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server
Here you can find all the details for creating and setting up a whole Kubernetes cluster from scratch, along with detailed documentation and commands: https://github.com/kelseyhightower/kubernetes-the-hard-way

Directing http traffic through an external proxy using istio

We are running a bunch of microservices in an Istio-enabled Kubernetes cluster. One of the microservices makes a call to an external service outside of the cluster, and I need to route that particular call through the company proxy, which also runs outside the cluster.
To explain a bit more: if I set HTTP_PROXY in the container and make the curl call to http://external.com, the call succeeds because it is routed through the proxy, but I want Istio to do this routing through the proxy transparently.
E.g. on curl http://external.com from within the container, Istio should automatically route the HTTP call via the company proxy and return the response.
I have created ServiceEntries for both external.com and proxy.com to make the call successful.
If I understood right, what you are looking for is an Egress Gateway.
Here is part of the tutorial for configuring traffic to an external HTTPS proxy, from the Istio documentation:
Configure traffic to external HTTPS proxy
Define a TCP (not HTTP!) Service Entry for the HTTPS proxy. Although applications use the HTTP CONNECT method to establish connections with HTTPS proxies, you must configure the proxy for TCP traffic, instead of HTTP. Once the connection is established, the proxy simply acts as a TCP tunnel.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: proxy
spec:
  hosts:
  - my-company-proxy.com # ignored
  addresses:
  - $PROXY_IP/32
  ports:
  - number: $PROXY_PORT
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
EOF
Send a request from the sleep pod in the default namespace. Because the sleep pod has a sidecar, Istio controls its traffic.
$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c "HTTPS_PROXY=$PROXY_IP:$PROXY_PORT curl https://en.wikipedia.org/wiki/Main_Page" | grep -o "<title>.*</title>"
<title>Wikipedia, the free encyclopedia</title>
Check the Istio sidecar proxy’s logs for your request:
$ kubectl logs $SOURCE_POD -c istio-proxy
[2018-12-07T10:38:02.841Z] "- - -" 0 - 702 87599 92 - "-" "-" "-" "-" "172.30.109.95:3128" outbound|3128||my-company-proxy.com 172.30.230.52:44478 172.30.109.95:3128 172.30.230.52:44476 -
Check the access log of the proxy for your request:
$ kubectl exec -it $(kubectl get pod -n external -l app=squid -o jsonpath={.items..metadata.name}) -n external -- tail -f /var/log/squid/access.log
1544160065.248 228 172.30.109.89 TCP_TUNNEL/200 87633 CONNECT en.wikipedia.org:443 - HIER_DIRECT/91.198.174.192 -
Check out the whole tutorial, as it covers setup requirements and also has steps to simulate an external proxy, so you can compare whether it is working as intended:
istio.io/docs/tasks/traffic-management/egress/http-proxy/
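The question mentions having created ServiceEntries for both external.com and the proxy; for reference, a minimal ServiceEntry for the external host might look like the sketch below (illustrative only; the name, port, and resolution are assumptions, and whether it is needed at all depends on your mesh's outboundTrafficPolicy):

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-com
spec:
  hosts:
  - external.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
EOF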

How to install a CA in Minikube so image pulls are trusted

I want to use Minikube for local development. It needs to access my company's internal Docker registry, which is signed with a third-party certificate.
Locally, I would copy the cert and run update-ca-trust extract or update-ca-certificates depending on the OS.
For the Minikube vm, how do I get the cert installed, registered, and the docker daemon restarted so that docker pull will trust the server?
I had to do something similar recently. You should be able to just hop on the machine with minikube ssh and then follow the directions here
https://docs.docker.com/engine/security/certificates/#understanding-the-configuration
to place the CA in the appropriate directory (/etc/docker/certs.d/[registry hostname]/). You shouldn't need to restart the daemon for it to work.
Well, minikube has a feature that copies all the contents of the ~/.minikube/files directory into its VM filesystem. So you can place your certificates under the
~/.minikube/files/etc/docker/certs.d/<docker registry host>:<docker registry port> path
and these files will be copied into the proper destination on minikube startup, automagically.
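A concrete sketch of that layout (registry host, port, and file names are placeholders):

# minikube syncs ~/.minikube/files/ into the VM's root filesystem on startup
mkdir -p ~/.minikube/files/etc/docker/certs.d/registry.example.com:5000
cp ca.crt ~/.minikube/files/etc/docker/certs.d/registry.example.com:5000/ca.crt
minikube stop && minikube start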
1. Shell into minikube (minikube ssh).
2. Copy your certificates to /etc/docker/certs.d/<docker registry host>:<docker registry port>
3. Ensure that the permissions on the certificate are correct; it must be at least readable.
4. Restart Docker (systemctl restart docker).
Don't forget to create a secret if your Docker Registry uses basic authentication:
kubectl create secret docker-registry service-registry --docker-server=<docker registry host>:<docker registry port> --docker-username=<name> --docker-password=<pwd> --docker-email=<email>
Have you checked ImagePullSecrets?
You can create a secret with your cert and let your pod use it.
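For reference, a pod references such a secret like this (note that imagePullSecrets covers registry authentication; the CA trust itself still has to live on the node, as in the other answers; all names below are placeholders):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  containers:
  - name: app
    image: registry.example.com:5000/myapp:latest
  imagePullSecrets:
  - name: my-registry-secret
EOF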
By starting up minikube with the following:
minikube start --insecure-registry=internal-site.dev:5244
It will start the Docker daemon with the --insecure-registry option:
/usr/local/bin/docker daemon -D -g /var/lib/docker -H unix:// -H tcp://0.0.0.0:2376 --label provider=virtualbox --insecure-registry internal-site.dev:5244 --tlsverify --tlscacert=/var/lib/boot2docker/ca.pem --tlscert=/var/lib/boot2docker/server.pem --tlskey=/var/lib/boot2docker/server-key.pem -s aufs
but this expects the connection to be HTTP. Unlike what the Docker registry documentation suggests, basic auth does work, but it needs to be placed in an imagePullSecret as described in the Kubernetes docs.
I would also recommend reading "Adding imagePullSecrets to a service account" (link on the page above) to get the secret added to all pods as they are deployed. Note that this will not impact already deployed pods.
One option that works for me is to run a k8s Job to copy the cert onto the minikube host...
This is what I used to trust the Harbor registry I deployed into my minikube:
cat > update-docker-registry-trust.yaml << END
apiVersion: batch/v1
kind: Job
metadata:
  name: update-docker-registry-trust
  namespace: harbor
spec:
  template:
    spec:
      containers:
      - name: update
        image: centos:7
        command: ["/bin/sh", "-c"]
        args: ["find /etc/harbor-certs; find /minikube; mkdir -p /minikube/etc/docker/certs.d/core.harbor-${MINIKUBE_IP//./-}.nip.io; cp /etc/harbor-certs/ca.crt /minikube/etc/docker/certs.d/core.harbor-${MINIKUBE_IP//./-}.nip.io/ca.crt; find /minikube"]
        volumeMounts:
        - name: harbor-harbor-ingress
          mountPath: "/etc/harbor-certs"
          readOnly: true
        - name: docker-certsd-volume
          mountPath: "/minikube/etc/docker/"
          readOnly: false
      restartPolicy: Never
      volumes:
      - name: harbor-harbor-ingress
        secret:
          secretName: harbor-harbor-ingress
      - name: docker-certsd-volume
        hostPath:
          # directory location on host
          path: /etc/docker/
          # this field is optional
          type: Directory
  backoffLimit: 4
END
kubectl apply -f update-docker-registry-trust.yaml
You should copy your root certificate to $HOME/.minikube/certs and restart minikube with the --embed-certs flag.
For more details please refer to the minikube handbook: https://minikube.sigs.k8s.io/docs/handbook/untrusted_certs/
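In practice that looks something like this (the certificate file name is a placeholder):

# Per the minikube handbook on untrusted root certificates:
mkdir -p ~/.minikube/certs
cp my-company-ca.crt ~/.minikube/certs/
minikube start --embed-certs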
As best I can tell, there is no way to do this. The next best option is to use the insecure-registry option at startup:
minikube start --insecure-registry=foo.com:5000