Parameters to access multiple clusters - kubernetes

I want to access an external k8s cluster which is running in a private cloud. Do you have any idea how I can get these parameters? What should I do in order to generate them?
${CLIENT_CERTIFICATE_DATA}
fake-cert-file
fake-key-file
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_ENDPOINT}
  name: ${CLUSTER_NAME}
users:
- name: ${USER}
  user:
    client-certificate-data: ${CLIENT_CERTIFICATE_DATA}
contexts:
- context:
    cluster: ${CLUSTER_NAME}
    user:
      client-certificate: fake-cert-file
      client-key: fake-key-file
  name: ${USER}-${CLUSTER_NAME}
current-context: ${USER}-${CLUSTER_NAME}
The steps to allow access for a "bob" user are the following:
Create a new CSR via openssl
openssl req -new -newkey rsa:4096 -nodes -keyout bob-k8s.key -out bob-k8s.csr -subj "/CN=bob/O=devops"
Create a Kubernetes CertificateSigningRequest object
use
kubectl create --edit -f k8s-csr.yaml
and you should input the following:
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: bob-k8s-access
spec:
  groups:
  - system:authenticated
  request: # replace with output from shell command: cat bob-k8s.csr | base64 | tr -d '\n'
  usages:
  - client auth
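Note: on newer clusters the certificates.k8s.io/v1beta1 CSR API is no longer served; a hedged equivalent using certificates.k8s.io/v1 (which also requires a signerName, as in the v1 examples later on this page) would look roughly like this:
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: bob-k8s-access
spec:
  groups:
  - system:authenticated
  request: # replace with output from shell command: cat bob-k8s.csr | base64 | tr -d '\n'
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth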
Verify your CSR object
kubectl get csr
Approve your certificate
kubectl certificate approve bob-k8s-access
Verify Bob's certificate
kubectl get csr bob-k8s-access -o jsonpath='{.status.certificate}' | base64 --decode > bob-k8s-access.crt
Retrieve the cluster CA certificate
kubectl config view -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw | base64 --decode - > k8s-ca.crt
Set up Bob's kubeconfig file
$ kubectl config set-cluster $(kubectl config view -o jsonpath='{.clusters[0].name}') --server=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') --certificate-authority=k8s-ca.crt --kubeconfig=bob-k8s-config --embed-certs
after this command, a bob-k8s-config file should be created containing Bob's kubeconfig
Set up Bob's credential access
kubectl config set-credentials bob --client-certificate=bob-k8s-access.crt --client-key=bob-k8s.key --embed-certs --kubeconfig=bob-k8s-config
Create a context in your config
kubectl config set-context bob --cluster=$(kubectl config view -o jsonpath='{.clusters[0].name}') --namespace=<ns-for-bob> --user=bob --kubeconfig=bob-k8s-config
Assign roles within the namespace
kubectl create rolebinding bob-admin --namespace=<ns-for-bob> --clusterrole=admin --user=bob
For more information about permissions, please look at the Kubernetes configuration page.
I've written these instructions starting from this guide, which is more exhaustive!
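As a quick sanity check (a sketch, not part of the original guide, assuming the namespace you used in the rolebinding above), the new file should now let Bob list pods in his namespace:
kubectl get pods --kubeconfig=bob-k8s-config --context=bob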

Related

How to configure kubectl to act as a service account?

I wish to run a Drone CI/CD pipeline on a Raspberry Pi, including a stage to update a Kubernetes Deployment. Unfortunately, all the pre-built solutions that I've found for doing so (e.g. 1, e.g. ) are not built for arm64 architecture, so I believe I need to build my own.
I am attempting to adapt the commands from here (see also README.md, which describes the authorization required), but my attempt to contact the cluster still fails with authorization problems:
$ cat service-account-definition.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: drone-demo-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: drone-demo-service-account-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: drone-demo-service-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f service-account-definition.yaml
serviceaccount/drone-demo-service-account created
clusterrolebinding.rbac.authorization.k8s.io/drone-demo-service-account-clusterrolebinding created
$ kubectl get serviceaccount drone-demo-service-account
NAME                         SECRETS   AGE
drone-demo-service-account   1         10s
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.ca\.crt}' > secrets/cert
$ head -c 10 secrets/cert
LS0tLS1CRU%
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 > secrets/token
$ head -c 10 secrets/token
WlhsS2FHSk%
$ cat Dockerfile
FROM busybox
COPY . .
CMD ["./script.sh"]
$ cat script.sh
#!/bin/sh
server=$(cat secrets/server) # Pre-filled
cert=$(cat secrets/cert)
# Added this `tr` call, which is not present in the source I'm working from, after noticing that
# the file-content contains newlines
token=$(cat secrets/token | tr -d '\n')
echo "DEBUG: server is $server, cert is $(echo $cert | head -c 10)..., token is $(echo $token | head -c 10)..."
# Cannot depend on the bitnami/kubectl image (https://hub.docker.com/r/bitnami/kubectl), because
# it's not available for arm64 - https://github.com/bitnami/charts/issues/7305
wget https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/linux/arm64/kubectl
chmod +x kubectl
./kubectl config set-credentials default --token=$token
echo $cert | base64 -d > ca.crt
./kubectl config set-cluster default --server=$server --certificate-authority=ca.crt
./kubectl config set-context default --cluster=default --user=default
./kubectl config use-context default
echo "Done with setup, now cat-ing .kube/config"
echo
cat $HOME/.kube/config
echo "Attempting to get pods"
echo
./kubectl get pods
$ docker build -t stack-overflow-testing . && docker run stack-overflow-testing
Sending build context to Docker daemon 10.75kB
Step 1/3 : FROM busybox
---> 3c277069c6ae
Step 2/3 : COPY . .
---> 74c6a132d255
Step 3/3 : CMD ["./script.sh"]
---> Running in dc55f33f74bb
Removing intermediate container dc55f33f74bb
---> dc68a5d6ba9b
Successfully built dc68a5d6ba9b
Successfully tagged stack-overflow-testing:latest
DEBUG: server is https://rassigma.avril:6443, cert is LS0tLS1CRU..., token is WlhsS2FHSk...
Connecting to storage.googleapis.com (142.250.188.16:443)
wget: note: TLS certificate validation not implemented
saving to 'kubectl'
kubectl 18% |***** | 7118k 0:00:04 ETA
kubectl 43% |************* | 16.5M 0:00:02 ETA
kubectl 68% |********************** | 26.2M 0:00:01 ETA
kubectl 94% |****************************** | 35.8M 0:00:00 ETA
kubectl 100% |********************************| 38.0M 0:00:00 ETA
'kubectl' saved
User "default" set.
Cluster "default" set.
Context "default" created.
Switched to context "default".
Done with setup, now cat-ing .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /ca.crt
    server: https://rassigma.avril:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    token: WlhsS2FHSkhZM[...REDACTED]
Attempting to get pods
error: You must be logged in to the server (Unauthorized)
If I copy the ~/.kube/config from my laptop to the docker container, kubectl commands succeed as expected - so, this isn't a networking issue, just an authorization one. I do note that my laptop-based ~/.kube/config lists client-certificate-data and client-key-data rather than token under users: user:, but I suspect that's because my base config is recording a non-service-account.
How can I set up kubectl to authorize as a service account?
Some reading I have done that didn't answer the question for me:
Kubernetes documentation on AuthN/AuthZ
Google Kubernetes Engine article on service accounts
Configure Service Accounts for Pods (this described how to create and associate the accounts, but not how to act as them)
Two blog posts (1, 2) that refer to Service Accounts
It appears you have used | base64 instead of | base64 --decode
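The value in .data.token is already base64-encoded inside the Secret object, so piping it through base64 again double-encodes it (which is why the token in the debug output starts with WlhsS... instead of the usual eyJ... of a JWT). A corrected sketch of the extraction command from the question:
kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 --decode > secrets/token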

Microk8s : Generating Auth Certificates

I'm trying to generate another kubeconfig for a microk8s cluster. For this I chose the certificates approach and I'm using the following script to generate the certificates, create the certificate signing request and populate the kubeconfig file.
rm -rf ./certs_dir || true
mkdir ./certs_dir
sleep 5
openssl genrsa -out ./certs_dir/$USER_NAME.key 2048
openssl req -new -key ./certs_dir/$USER_NAME.key -out ./certs_dir/$USER_NAME.csr -subj "/CN=$USER_NAME"
CERT_S_REQ="
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: user-$USER_NAME-csr
spec:
  groups:
  - system:authenticated
  request: $(cat $USER_NAME.csr | base64)
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 864000000
  usages:
  - digital signature
  - key encipherment
  - client auth
"
export KUBECONFIG=../output/$NAME-kubeconfig.yaml
echo -e "$CERT_S_REQ" > ./certs_dir/user_csr.yaml
kubectl apply -f ./certs_dir/user_csr.yaml
kubectl get csr
kubectl certificate approve user-$USER_NAME-csr
sleep 10
kubectl get csr user-$USER_NAME-csr -o jsonpath='{.status.certificate}' | base64 -D > ./certs_dir/$USER_NAME.crt
kubectl create rolebinding user-$USER_NAME --clusterrole=cluster-admin --user=$USER_NAME
APISERVER=$(kubectl config view --raw -o 'jsonpath={..cluster.server}')
unset KUBECONFIG
kubectl config set-credentials "$USER_NAME" \
--client-certificate="./certs_dir/$USER_NAME.crt" \
--client-key="./certs_dir/$USER_NAME.key" \
--kubeconfig=../output/$USER_NAME.yaml \
--embed-certs=true
kubectl config set-cluster $CLUSTER_NAME --server=$APISERVER --kubeconfig=../output/$USER_NAME.yaml
kubectl config set-context default --user=$USER_NAME --cluster=$CLUSTER_NAME --kubeconfig=../output/$USER_NAME.yaml
kubectl config use-context default --kubeconfig=../output/$USER_NAME.yaml
Everything seems to work, but when trying to use the new kubeconfig file with the embedded certs it does not work, failing with the following error whenever I try to execute a kubectl command:
error: tls: private key does not match public key
Did I miss something?
I'm on macOS, running the microk8s cluster via multipass.
The microk8s cluster has the following enabled: ingress, storage, dns, rbac, and also the dashboard installed from: https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

RBAC (Role Based Access Control) on K3s

After watching a few videos on RBAC (role-based access control) on Kubernetes (of which this one was the most transparent for me), I've followed the steps, however on k3s, not k8s as all the sources imply. From what I could gather (it's not working), the problem isn't with the actual role binding process, but rather with the x509 user cert, which isn't acknowledged by the API service:
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
This also isn't documented in Rancher's wiki on security for K3s (while it is documented for their k8s implementation), though it is described for Rancher 2.x itself. I'm not sure if it's a problem with my implementation, or a k3s <-> k8s thing.
$ kubectl version --short
Client Version: v1.20.5+k3s1
Server Version: v1.20.5+k3s1
To reproduce the process, my steps are as follows:
Get k3s ca certs
These were described to be under /etc/kubernetes/pki (k8s), however based on this they seem to be at /var/lib/rancher/k3s/server/tls/ (server-ca.crt & server-ca.key).
Gen user certs from ca certs
#generate user key
$ openssl genrsa -out user.key 2048
#generate signing request from ca
openssl req -new -key user.key -out user.csr -subj "/CN=user/O=rbac"
# generate user.crt from this
openssl x509 -req -in user.csr -CA server-ca.crt -CAkey server-ca.key -CAcreateserial -out user.crt -days 365
... all good:
Creating kubeConfig file for user, based on the certs:
# Take user.crt and base64 encode to get encoded crt
cat user.crt | base64 -w0
# Take user.key and base64 encode to get encoded key
cat user.key | base64 -w0
Created config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <server-ca.crt base64-encoded>
    server: https://<k3s masterIP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: user
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate-data: <user.crt base64-encoded>
    client-key-data: <user.key base64-encoded>
Setup role & roleBinding (within specified namespace 'rbac')
role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
roleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
  apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user
After all of this, I get fun times of...
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
Any suggestions please?
Apparently this Stack Overflow question presented a solution to the problem, but following the GitHub feed it came more or less down to the same approach followed here (unless I'm missing something)?
As we can find in the Kubernetes Certificate Signing Requests documentation:
A few steps are required in order to get a normal user to be able to authenticate and invoke an API.
I will create an example to illustrate how you can get a normal user who is able to authenticate and invoke an API (I will use the user john as an example).
First, create PKI private key and CSR:
# openssl genrsa -out john.key 2048
NOTE: CN is the name of the user and O is the group that this user will belong to
# openssl req -new -key john.key -out john.csr -subj "/CN=john/O=group1"
# ls
john.csr john.key
Then create a CertificateSigningRequest and submit it to a Kubernetes Cluster via kubectl.
# cat <<EOF | kubectl apply -f -
> apiVersion: certificates.k8s.io/v1
> kind: CertificateSigningRequest
> metadata:
>   name: john
> spec:
>   groups:
>   - system:authenticated
>   request: $(cat john.csr | base64 | tr -d '\n')
>   signerName: kubernetes.io/kube-apiserver-client
>   usages:
>   - client auth
> EOF
certificatesigningrequest.certificates.k8s.io/john created
# kubectl get csr
NAME   AGE   SIGNERNAME                            REQUESTOR      CONDITION
john   39s   kubernetes.io/kube-apiserver-client   system:admin   Pending
# kubectl certificate approve john
certificatesigningrequest.certificates.k8s.io/john approved
# kubectl get csr
NAME   AGE   SIGNERNAME                            REQUESTOR      CONDITION
john   52s   kubernetes.io/kube-apiserver-client   system:admin   Approved,Issued
Export the issued certificate from the CertificateSigningRequest:
# kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
# ls
john.crt john.csr john.key
With the certificate created, we can define the Role and RoleBinding for this user to access Kubernetes cluster resources. I will use the Role and RoleBinding similar to yours.
# cat role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: john-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
# kubectl apply -f role.yml
role.rbac.authorization.k8s.io/john-role created
# cat rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: john-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: john-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: john
# kubectl apply -f rolebinding.yml
rolebinding.rbac.authorization.k8s.io/john-binding created
The last step is to add this user into the kubeconfig file (see: Add to kubeconfig)
# kubectl config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs=true
User "john" set.
# kubectl config set-context john --cluster=default --user=john
Context "john" created.
Finally, we can change the context to john and check if it works as expected.
# kubectl config use-context john
Switched to context "john".
# kubectl config current-context
john
# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          30m
# kubectl run web-2 --image=nginx
Error from server (Forbidden): pods is forbidden: User "john" cannot create resource "pods" in API group "" in the namespace "default"
As you can see, it works as expected (user john only has get and list permissions).
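If you want to cross-check the effective permissions without switching contexts, impersonation with kubectl auth can-i also works (a quick sketch, run from the admin context; the expected answers follow from the Role above):
kubectl auth can-i list pods --as=john     # yes
kubectl auth can-i create pods --as=john   # no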
Thank you matt_j for the example / answer provided to my question. I marked that as the answer, as it was a direct answer to my question regarding RBAC via certificates. In addition to that, I'd also like to provide an example for RBAC via service accounts, as a variation (for those who prefer it for their specific use case).
Service account creation
//kubectl create serviceaccount name -n namespace
$ kubectl create serviceaccount udef -n rbac
This creates the service account and, automatically, a corresponding secret (udef-token-lhvm8), which you can inspect as shown below.
Get the token from the created secret:
// kubectl describe secret <secretName> -n <namespace>
$ kubectl describe secret udef-token-lhvm8 -n rbac
The secret will contain 3 objects: (1) ca.crt, (2) namespace, (3) token.
# ... other secret content
Data
====
ca.crt:     x bytes
namespace:  x bytes
token:      xxxx token xxxx
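If you'd rather grab the token non-interactively, the same value can be pulled with a jsonpath query and decoded (a sketch, using the secret name and namespace from above):
kubectl get secret udef-token-lhvm8 -n rbac -o jsonpath='{.data.token}' | base64 --decode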
Put token into config file
You can start by getting your 'admin' config file and outputting it to a file:
// location of k3s kubeconfig
$ sudo cat /etc/rancher/k3s/k3s.yaml > /home/{userHomeFolder}/userKubeConfig
Under the users section, you can replace the certificate data with the token:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx root ca cert content xxx
    server: https://<host IP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: nametype
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: nametype
  user:
    token: xxxx token xxxx
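Equivalently, rather than hand-editing the file, the token can be written into the copied kubeconfig with kubectl config set-credentials (a sketch; the user name and path follow the example above, and <token> stands for the value read from the secret):
kubectl config set-credentials nametype --token=<token> --kubeconfig=/home/{userHomeFolder}/userKubeConfig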
The role and rolebinding manifests can be created as required, like previously specified (NB: within the same namespace), in this case linking to the service account:
# role manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
---
# rolebinding manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- kind: ServiceAccount
  name: udef
  namespace: rbac
With this being done, you will be able to test remotely:
// show pods -> will be allowed
$ kubectl get pods --kubeconfig
..... valid response provided
// get namespaces (or other types of commands) -> should not be allowed
$ kubectl get namespaces --kubeconfig
Error from server (Forbidden): namespaces is forbidden: User bla-bla

How to make the kubernetes pods unable to decrypt the kubernetes secrets without a key?

The end goal I'm trying to achieve is to create a kubernetes secret (potentially with a key) and a pod which uses that. But the catch is, the pod created should not be able to decode/decrypt the secret value without a particular key.
I have tried the secrets with data encryption at rest but that's not sufficient for my requirement.
Edit: I am making this a step-by-step solution (as asked by @Dawid in the comments).
Encrypt your data using your-key (your encryption logic, probably in a script):
./encrypt.sh --key your-key --data your-data
Create a secret of this encrypted data
kubectl create secret generic your-secret-name --from-literal=secretdata=your-encrypted-data
You could add decryption logic like this in your pod (either as a sidecar or an initContainer); a minimal sketch of what such scripts might look like follows the volume example below:
# decrypt.sh will decode base64 and then run your decryption logic using your-key
./decrypt.sh --key your-key --data /var/my-secrets
You also need to mount this secret as a volume in your container:
spec:
  containers:
  - image: "image"
    name: app
    ...
    volumeMounts:
    - mountPath: "/var/my-secrets"
      name: my-secret
  volumes:
  - name: my-secret
    secret:
      secretName: your-secret-name
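For completeness, here is the promised sketch of what encrypt.sh and decrypt.sh could look like if you base them on openssl symmetric encryption (the script names, positional arguments and the mounted path are assumptions mirroring the placeholders above, not a fixed API; the same openssl flags appear in the next answer):
#!/bin/sh
# encrypt.sh <passphrase> <plaintext>
# prints the AES-256-CBC-encrypted, base64-encoded ciphertext to stdout
echo -n "$2" | openssl enc -e -aes-256-cbc -a -salt -pass pass:"$1"

#!/bin/sh
# decrypt.sh <passphrase> <path-to-mounted-secret-file>
# reads the base64 ciphertext that was stored in the Secret and prints the plaintext
openssl enc -d -aes-256-cbc -a -pass pass:"$1" -in "$2"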
As answered by @Kiran, here are the steps I followed to obtain the solution.
Encrypt using openssl:
echo -n "preetham" | openssl enc -e -aes-256-cbc -a -salt -pass pass:<PASSWORD>
Created the secret from the YAML file preetham-secrets-test.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: preetham-secrets
type: Opaque
stringData: # Using stringData instead of data
  username: U2FsdGVkX18VsbQaVpeqrCCJCDEd3LCbefT6nupChvw= # output from step 1
Create the secret
kubectl apply -f preetham-secrets-test.yaml -n <NAMESPACE>
Mount the secret as a volume and exec into the pod (see the Kubernetes reference).
Inside the pod, assuming the secret is mounted at /opt/mnt/secrets/:
bash-4.2# cat /opt/mnt/secrets/username
U2FsdGVkX18VsbQaVpeqrCCJCDEd3LCbefT6nupChvw=bash-4.2#
Decrypt the same using openssl (you may have to install openssl first, depending on the image):
bash-4.2# echo "U2FsdGVkX18VsbQaVpeqrCCJCDEd3LCbefT6nupChvw=" | openssl enc -d -aes-256-cbc -a -salt -pass pass:<PASSWORD>
preethambash-4.2#

How to create a user in a Kubernetes cluster?

I'm trying to create a user in a Kubernetes cluster.
I spun up 2 droplets on DigitalOcean using a Terraform script of mine.
Then I logged into the master node droplet using ssh:
doctl compute ssh droplet1
Following this, I created a new cluster and a namespace in it:
kubectl create namespace thalasoft
I created a user role in the role-deployment-manager.yml file:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: thalasoft
  name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
and executed the command:
kubectl create -f role-deployment-manager.yml
I created a role grant in the rolebinding-deployment-manager.yml file:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: deployment-manager-binding
  namespace: thalasoft
subjects:
- kind: User
  name: stephane
  apiGroup: ""
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: ""
and executed the command:
kubectl create -f rolebinding-deployment-manager.yml
Here is my terminal output:
Last login: Wed Dec 19 10:48:48 2018 from 90.191.151.182
root@droplet1:~# kubectl create namespace thalasoft
namespace/thalasoft created
root@droplet1:~# vi role-deployment-manager.yml
root@droplet1:~# kubectl create -f role-deployment-manager.yml
role.rbac.authorization.k8s.io/deployment-manager created
root@droplet1:~# vi rolebinding-deployment-manager.yml
root@droplet1:~# kubectl create -f rolebinding-deployment-manager.yml
rolebinding.rbac.authorization.k8s.io/deployment-manager-binding created
root@droplet1:~#
Now I'd like to first create a user in the cluster, and then configure the client kubectl with this user so as to operate from my laptop and avoid logging in via ssh to the droplet.
I know I can configure a user in the kubectl client:
# Create a context, that is, a user against a namespace of a cluster, in the client configuration
kubectl config set-context digital-ocean-context --cluster=digital-ocean-cluster --namespace=digital-ocean-namespace --user=stephane
# Configure the client with a user credentials
cd;
kubectl config set-credentials stephane --client-certificate=.ssh/id_rsa.pub --client-key=.ssh/id_rsa
But this is only some client side configuration as I understand.
UPDATE: I could add user credentials with a certificate signed by the Kubernetes CA, running the following commands on the droplet hosting the Kubernetes master node:
# Create a private key
openssl genrsa -out .ssh/thalasoft.key 4096
# Create a certificate signing request
openssl req -new -key .ssh/thalasoft.key -out .ssh/thalasoft.csr -subj "/CN=stephane/O=thalasoft"
# Sign the certificate
export CA_LOCATION=/etc/kubernetes/pki/
openssl x509 -req -in .ssh/thalasoft.csr -CA $CA_LOCATION/ca.crt -CAkey $CA_LOCATION/ca.key -CAcreateserial -out .ssh/thalasoft.crt -days 1024
# Configure a cluster in the client
kubectl config set-cluster digital-ocean-cluster --server=https://${MASTER_IP}:6443 --insecure-skip-tls-verify=true
# Configure a user in the client
# Copy the key and the certificate to the client
scp -o "StrictHostKeyChecking no" root@165.227.171.72:.ssh/thalasoft.* .
# Configure the client with a user credentials
kubectl config set-credentials stephane --client-certificate=.ssh/thalasoft.crt --client-key=.ssh/thalasoft.key
# Create a context, that is, a user against a namespace of a cluster, in the client configuration
kubectl config set-context digital-ocean-context --cluster=digital-ocean-cluster --namespace=digital-ocean-namespace --user=stephane
But this is only some client side configuration as I understand.
What command should I use to create the user?
Kubernetes doesn't provide user management. This is handled through x509 certificates that can be signed by your cluster CA.
First, you'll need to create a Key:
openssl genrsa -out my-user.key 4096
Second, you'll need to create a signing request:
openssl req -new -key my-user.key -out my-user.csr -subj "/CN=my-user/O=my-organisation"
Third, sign the certificate request:
openssl x509 -req -in my-user.csr -CA CA_LOCATION/ca.crt -CAkey CA_LOCATION/ca.key -CAcreateserial -out my-user.crt -days 500
ca.crt and ca.key are the same cert/key provided by kubeadm or found within your master configuration.
You can then give this signed certificate to your user, along with their key, and they can then configure access with:
kubectl config set-credentials my-user --client-certificate=my-user.crt --client-key=my-user.key
kubectl config set-context my-k8s-cluster --cluster=cluster-name --namespace=whatever --user=my-user
Bitnami provide a great resource that explains all of this:
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/#use-case-1-create-user-with-limited-namespace-access
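Keep in mind that the signed certificate only authenticates the user; authorization still has to be granted separately via RBAC, for example with a binding similar to the ones in the other answers on this page (a sketch, reusing the placeholder names from the commands above):
kubectl create rolebinding my-user-edit --clusterrole=edit --user=my-user --namespace=whatever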