Why does Kubernetes return an unauthorized error?

I followed the Kubernetes documentation to create a user certificate request via the API server.
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: myuser
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
I generated the certificate, created the kubeconfig file, and created the necessary role/rolebindings successfully. However, when I try to access the cluster, I get the error below. I am fairly sure the issue is with the above YAML definition, but I could not figure it out.
error: You must be logged in to the server (Unauthorized)
Any idea please?

It seems the issue is with the "spec" part: this is user (client) authentication, not server authentication. Hence, "server auth" should be "client auth".
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
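Once the spec is fixed, the CSR needs to be recreated, approved, and the signed certificate pulled; a minimal sketch with standard kubectl commands (the CSR name matches the manifest above):

kubectl certificate approve myuser
kubectl get csr myuser -o jsonpath='{.status.certificate}' | base64 --decode > myuser.crt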

Related

Using a custom certificate for the Kubernetes api server with minikube

I have been trying to find out how to do this but so far have found nothing; I am quite new to Kubernetes, so I might just have overlooked it. I want to use my own certificate for the Kubernetes API server. Is this possible? And if so, can someone perhaps give me a link?
Ok, so here is my idea. We know we cannot change the cluster certs, but there is another way to do it: we should be able to proxy through an Ingress.
First we enable the ingress addon:
➜ ~ minikube addons enable ingress
Given tls.crt and tls.key we create a secret (you don't need to do this if you are using cert-manager, but that requires some additional steps I am not going to describe here):
➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key tls.key
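If you don't have a tls.crt and tls.key pair yet, a self-signed one is enough for testing; a quick sketch (the CN matches the host used in the Ingress below):

➜ ~ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com"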
and an ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-k8s
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: my-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443
Notice what the docs say about the CN and FQDN (k8s docs):
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.
The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.
But we can use tokens. Here is a page in k8s docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/ that lists all possible methods of authentication.
For testing I chose a service account token, but feel free to experiment with others.
Let's create a service account, bind a role to it, and try to access the cluster:
➜ ~ kubectl create sa cadmin
serviceaccount/cadmin created
➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding
clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created
Now we follow these instructions (access-cluster-api from the docs) to try to access the cluster with the service account token.
➜ ~ APISERVER=https://$(minikube ip)
➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
➜ ~ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -H "Host: foo.bar.com"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.39.210:8443"
    }
  ]
}
note: I am testing it with invalid/self-signed certificates and I don't own the foo.bar.com domain, so I need to pass the Host header by hand. For you it may look a bit different, so don't just copy-paste; try to understand what's happening and adjust it. If you have a domain you should be able to access it directly (no $(minikube ip) necessary).
As you should see, it worked! We got a valid response from the API server.
But we probably don't want to use curl to access k8s.
Let's create a kubeconfig with the token.
kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config
kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com
kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin
kubectl config use-context --kubeconfig my-config mini
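Since the certificate here is self-signed, kubectl will likely refuse the server's TLS certificate; for testing only, one way around it (an assumption based on this setup, not something for production) is to skip verification on the cluster entry:

kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com --insecure-skip-tls-verify=true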
And now we can access k8s with this config:
➜ ~ kubectl get po --kubeconfig my-config
No resources found in default namespace.
Yes, you can use your own certificate and set it in the Kubernetes API server.
Suppose you have created the certificates; move and save them to a specific directory on the node:
{
  sudo mkdir -p /var/lib/kubernetes/
  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/
}
The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
You can then create the API server service and set it up; a minimal sketch of the unit follows.
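A minimal sketch of a kube-apiserver systemd unit showing only the TLS-related flags, in the spirit of the guide linked below (the cert paths assume the directory used above; a real unit needs more flags):

sudo tee /etc/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF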
Note: the above example specifically assumes GCP instances, so you might have to change some commands, like:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
For the above command, you can provide the bare-metal IP manually instead of getting it from the GCP instance metadata API if you are not using GCP; see the sketch below.
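For example, on bare metal this could simply be a hard-coded address (the IP below is a placeholder):

INTERNAL_IP=10.0.0.5   # replace with your node's actual internal address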
Please refer to this link: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server
Here you can find all the details for creating and setting up a whole Kubernetes cluster from scratch, along with detailed documentation and commands: https://github.com/kelseyhightower/kubernetes-the-hard-way

How to make the kubernetes pods unable to decrypt the kubernetes secrets without a key?

The end goal I'm trying to achieve is to create a kubernetes secret (potentially with a key) and a pod which uses that. But the catch is, the pod created should not be able to decode/decrypt the secret value without a particular key.
I have tried the secrets with data encryption at rest but that's not sufficient for my requirement.
Edit: I am laying this out as a step-by-step solution (as asked by @Dawid in the comments).
Encrypt your data using your-key (your encryption logic, probably in a script).
./encrypt.sh --key your-key --data your-data
Create a secret of this encrypted data
kubectl create secret generic your-secret-name --from-literal=secretdata=your-encrypted-data
You could add decryption logic like this in your pod (either as a sidecar or an initContainer):
# decrypt.sh will decode base64 then your decryption logic using your-key
./decrypt.sh --key your-key --data /var/my-secrets
Also, you need to mount this secret as a volume in your container:
spec:
  containers:
  - image: "image"
    name: app
    ...
    volumeMounts:
    - mountPath: "/var/my-secrets"
      name: my-secret
  volumes:
  - name: my-secret
    secret:
      secretName: your-secret-name
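For completeness, a minimal sketch of what the hypothetical decrypt.sh could look like, assuming the openssl-based encryption shown in the next answer:

#!/bin/sh
# decrypt.sh -- read the mounted secret file and decrypt it with the supplied key
# usage: ./decrypt.sh --key your-key --data /var/my-secrets/secretdata
KEY="$2"    # value following --key
DATA="$4"   # value following --data
# -a decodes the base64, -d decrypts with the given passphrase
openssl enc -d -aes-256-cbc -a -salt -pass pass:"$KEY" < "$DATA"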
As answered by @Kiran, here are the steps I followed to obtain the solution.
Encrypt using openssl:
echo -n "preetham" | openssl enc -e -aes-256-cbc -a -salt -pass pass:<PASSWORD>
Create the YAML file for the secret, preetham-secrets-test.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: preetham-secrets
type: Opaque
stringData: # using stringData instead of data
  username: U2FsdGVkX18VsbQaVpeqrCCJCDEd3LCbefT6nupChvw= # output from step 1
Create the secret
kubectl apply -f preetham-secrets-test.yaml -n <NAMESPACE>
Mount the secret as a volume and exec into the pod (see the Kubernetes reference); a pod sketch is shown below.
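A minimal pod sketch mounting the secret at the path used below (pod and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: preetham-test
spec:
  containers:
  - name: app
    image: centos:7   # illustrative; pick an image with (or install) openssl
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /opt/mnt/secrets
      name: secret-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: preetham-secrets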
Inside the pod assuming the secret is mounted to /opt/mnt/secrets/.
bash-4.2# cat /opt/mnt/secrets/username
U2FsdGVkX18VsbQaVpeqrCCJCDEd3LCbefT6nupChvw=bash-4.2#
Decrypt the same using openssl (you may have to install openssl first, depending on the image):
bash-4.2# echo "U2FsdGVkX18VsbQaVpeqrCCJCDEd3LCbefT6nupChvw=" | openssl enc -d -aes-256-cbc -a -salt -pass pass:<PASSWORD>
preethambash-4.2#

Parameters to access multiple clusters

I want to access an external k8s cluster which is running in a private cloud. Do you have any idea how I can get these parameters? What should I do to generate them?
${CLIENT_CERTIFICATE_DATA}
fake-cert-file
fake-key-file
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_ENDPOINT}
  name: ${CLUSTER_NAME}
users:
- name: ${USER}
  user:
    client-certificate-data: ${CLIENT_CERTIFICATE_DATA}
contexts:
- context:
    cluster: ${CLUSTER_NAME}
    user:
      client-certificate: fake-cert-file
      client-key: fake-key-file
  name: ${USER}-${CLUSTER_NAME}
current-context: ${USER}-${CLUSTER_NAME}
The steps to allow access for a "bob" user are the following:
Create a new CSR via openssl
openssl req -new -newkey rsa:4096 -nodes -keyout bob-k8s.key -out bob-k8s.csr -subj "/CN=bob/O=devops"
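You can quickly confirm the CSR's subject before submitting it (plain openssl, nothing Kubernetes-specific):

openssl req -in bob-k8s.csr -noout -subject
# should print a subject containing CN=bob and O=devops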
Create Kubernetes CertificateSigningRequest object
use
kubectl create --edit -f k8s-csr.yaml
and you should input the following
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: bob-k8s-access
spec:
  groups:
  - system:authenticated
  request: # replace with output from shell command: cat bob-k8s.csr | base64 | tr -d '\n'
  usages:
  - client auth
Verify your CSR object
kubectl get csr
Approve your certificate
kubectl certificate approve bob-k8s-access
Verify Bob's certificate
kubectl get csr bob-k8s-access -o jsonpath='{.status.certificate}' | base64 --decode > bob-k8s-access.crt
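To double-check that the issued certificate really identifies Bob, inspect it with openssl:

openssl x509 -in bob-k8s-access.crt -noout -subject
# the subject should contain CN=bob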
Retrieve the cluster CA certificate
kubectl config view -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw | base64 --decode - > k8s-ca.crt
Set up Bob's kubeconfig file
$ kubectl config set-cluster $(kubectl config view -o jsonpath='{.clusters[0].name}') --server=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') --certificate-authority=k8s-ca.crt --kubeconfig=bob-k8s-config --embed-certs
After this command, a bob-k8s-config file should be created with Bob's kube configuration.
Set up Bob's credentials
kubectl config set-credentials bob --client-certificate=bob-k8s-access.crt --client-key=bob-k8s.key --embed-certs --kubeconfig=bob-k8s-config
Create a context in your config
kubectl config set-context bob --cluster=$(kubectl config view -o jsonpath='{.clusters[0].name}') --namespace=<ns-for-bob> --user=bob --kubeconfig=bob-k8s-config
Assign roles within the namespace
kubectl create rolebinding bob-admin --namespace=<ns-for-bob> --clusterrole=admin --user=bob
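To confirm the binding works, you can impersonate Bob before handing over the kubeconfig (replace <ns-for-bob> as above):

kubectl auth can-i get pods --namespace=<ns-for-bob> --as=bob
# should print: yes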
For more information about permissions, please look at the Kubernetes configuration page.
I've written these instructions starting from this guide, which is more exhaustive!

How to integrate Kubernetes with Gitlab

I'm trying to integrate a Kubernetes cluster with GitLab to use the GitLab Review Apps feature.
The Kubernetes cluster was created via Rancher 1.6.
Running kubectl get all from the Kubernetes shell gives:
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
svc/my-service   LoadBalancer   x.x.144.67   x.x.13.89     80:32701/TCP   30d
svc/kubernetes   ClusterIP      10.43.0.1    <none>        443/TCP        30d
On the GitLab CI/CD > Kubernetes page, we need to enter mainly three fields:
API URL
CA Certificate
Token
API URL
If I'm not wrong, we can get the Kubernetes API URL from Rancher Dashboard > Kubernetes > CLI > Generate Config and copy the server URL under cluster:
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    insecure-skip-tls-verify: true
    server: "https://x.x.122.197:8080/r/projects/1a7/kubernetes:6443"
CA Certificate & Token?
Now, the question is, where to get the CA Certificate (pem format) and the Token?
I tried all the ca.crt and token values from all the namespaces in the Kubernetes dashboard, but I'm getting this error on GitLab when trying to install the Helm Tiller application:
Something went wrong while installing Helm Tiller
Can't start installation process
Here is how my secrets page looks.
I was also struggling with Kubernetes and GitLab. I've created a couple of single-node "clusters" for testing, one with minikube and another via kubeadm.
I answered this question on the GitLab forum but I'm posting my solution below:
API URL
According to the official documentation, the API URL is only https://hostname:port without a trailing slash.
List secrets
First, I listed the secrets as usual:
$ kubectl get secrets
NAME                           TYPE                                  DATA      AGE
default-token-tpvsd            kubernetes.io/service-account-token   3         2d
k8s-dashboard-sa-token-XXXXX   kubernetes.io/service-account-token   3         1d
Get the service token
$ kubectl -o json get secret k8s-dashboard-sa-token-XXXXX | jq -r '.data.token' | base64 -d
eyJhbGci ... sjcuNA8w
Get the CA certificate
Then I got the CA certificate directly from the JSON output via jq with a custom selector:
$ kubectl -o json get secret k8s-dashboard-sa-token-XXXXX | jq -r '.data."ca.crt"' | base64 -d - | tee ca.crt
-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
... ... ... ... ... ...
FT55iMtPtFqAOnoYBCiLH6oT6Z1ACxduxPZA/EeQmTUoRJG8joczI0V1cnY=
-----END CERTIFICATE-----
Verify the CA certificate
With the CA certificate on hand you can verify as usual:
$ openssl x509 -in ca.crt -noout -subject -issuer
subject= /CN=kubernetes
issuer= /CN=kubernetes
$ openssl s_client -showcerts -connect 192.168.100.20:6443 < /dev/null &> apiserver.crt
$ openssl verify -verbose -CAfile ca.crt apiserver.crt
apiserver.crt: OK

Kubernetes: how to enable API Server Bearer Token Auth?

I've been trying to enable token auth for HTTP REST API server access from a remote client.
I installed my CoreOS/K8S cluster controller using this script: https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh
My cluster works fine. This is a TLS installation so I need to configure any kubectl clients with the client certs to access the cluster.
I then tried to enable token auth via running:
echo `dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null`
This gives me a token. I then added the token to a token file on my controller containing the token and a default user:
$> cat /etc/kubernetes/token
3XQ8W6IAourkXOLH2yfpbGFXftbH0vn,default,default
I then modified the /etc/kubernetes/manifests/kube-apiserver.yaml to add in:
- --token-auth-file=/etc/kubernetes/token
to the startup param list
I then reboot (not sure of the best way to restart the API server by itself?).
At this point, kubectl from a remote server quits working (won't connect). I then look at docker ps on the controller and see the API server. I run docker logs container_id and get no output. If I look at other docker containers I see output like:
E0327 20:05:46.657679 1 reflector.go:188]
pkg/proxy/config/api.go:33: Failed to list *api.Endpoints:
Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0:
dial tcp 127.0.0.1:8080: getsockopt: connection refused
So it appears that my api-server.yaml config is preventing the API server from starting properly...
Any suggestions on the proper way to configure API Server for bearer token REST auth?
It is possible to have both TLS configuration and Bearer Token Auth configured, right?
Thanks!
I think your kube-apiserver dies because it can't find /etc/kubernetes/token. That's because in your deployment the apiserver is a static pod, therefore running in a container, which in turn means it has a different root filesystem than the host's.
Look into /etc/kubernetes/manifests/kube-apiserver.yaml and add a volume and a volumeMount like this (I have omitted the lines that do not need changing and don't help in locating the correct section):
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
  - name: kube-apiserver
    command:
    - ...
    - --token-auth-file=/etc/kubernetes/token
    volumeMounts:
    - mountPath: /etc/kubernetes/token
      name: token-kubernetes
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/token
    name: token-kubernetes
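Since this is a static pod, the kubelet restarts the apiserver whenever the manifest changes, which also answers the restart question above; a common trick (not an official procedure) to force a clean restart is to move the manifest out of the directory and back:

sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 5   # give the kubelet time to tear the pod down
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/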
One more note: the file you quoted as token should not end in a . (dot); maybe that was only a copy-paste mistake, but check it anyway. The format is documented under static token file:
token,user,uid,"group1,group2,group3"
If your problem persists, execute the command below and post the output:
journalctl -u kubelet | grep kube-apiserver