How to Add Users to Kubernetes (kubectl)? - kubernetes

I've created a Kubernetes cluster on AWS with kops and can successfully administer it via kubectl from my local machine.
I can view the current config with kubectl config view as well as directly access the stored state at ~/.kube/config, such as:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.{CLUSTER_NAME}
  name: {CLUSTER_NAME}
contexts:
- context:
    cluster: {CLUSTER_NAME}
    user: {CLUSTER_NAME}
  name: {CLUSTER_NAME}
current-context: {CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: {CLUSTER_NAME}
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: REDACTED
    username: admin
- name: {CLUSTER_NAME}-basic-auth
  user:
    password: REDACTED
    username: admin
I need to enable other users to also administer the cluster. This user guide describes how to define these users on another user's machine, but it doesn't describe how to actually create the user's credentials within the cluster itself. How do you do this?
Also, is it safe to just share the cluster.certificate-authority-data?

For a full overview on Authentication, refer to the official Kubernetes docs on Authentication and Authorization
For users, ideally you use an Identity provider for Kubernetes (OpenID Connect).
If you are on GKE / ACS, you can integrate with the respective Identity and Access Management framework.
If you self-host Kubernetes (which is the case when you use kops), you may use coreos/dex to integrate with LDAP / OAuth2 identity providers - a good reference is this detailed 2-part SSO for Kubernetes article.
kops (1.10+) now has built-in authentication support, which eases the integration with AWS IAM as an identity provider if you're on AWS.
For Dex there are a few open-source CLI clients, for example:
Nordstrom/kubelogin
pusher/k8s-auth-example
If you are looking for a quick and easy way to get started (not the most secure, and not easy to manage in the long run), you may abuse service accounts - with 2 options for specialised policies to control access (see below).
NOTE: since 1.6, Role-Based Access Control is strongly recommended! This answer does not cover RBAC setup.
EDIT: A great, but outdated (2017-2018), guide by Bitnami on user setup with RBAC is also available.
Steps to enable service account access (depending on whether your cluster configuration includes RBAC or ABAC policies, these accounts may have full admin rights!):
EDIT: Here is a bash script to automate service account creation - see the steps below.
Create service account for user Alice
kubectl create sa alice
Get related secret
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
Get ca.crt from secret (using OSX base64 with -D flag for decode)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt
Get service account token from secret
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -D)
Get information from your kubectl config (current-context, server..)
# get current context
c=$(kubectl config current-context)
# get cluster name of context
name=$(kubectl config get-contexts $c | awk '{print $3}' | tail -n 1)
# get endpoint of current context
endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")
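Pulled together, the admin-side steps above might look roughly like the following script (a sketch only, not the original gist from the EDIT above; it assumes jq is installed and uses the macOS base64 -D flag - use base64 -d on Linux):
#!/usr/bin/env bash
# Sketch: create a service account and collect the token, CA cert and endpoint
# needed to build a kubeconfig for it on another machine.
set -euo pipefail

SA_NAME=alice

kubectl create sa "$SA_NAME"

secret=$(kubectl get sa "$SA_NAME" -o json | jq -r '.secrets[].name')
kubectl get secret "$secret" -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt
user_token=$(kubectl get secret "$secret" -o json | jq -r '.data["token"]' | base64 -D)

c=$(kubectl config current-context)
name=$(kubectl config get-contexts "$c" | awk '{print $3}' | tail -n 1)
endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")

echo "endpoint: $endpoint"
echo "token:    $user_token"
echo "ca.crt written to ./ca.crt"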
On a fresh machine, follow these steps (given the ca.crt and $endpoint information retrieved above):
Install kubectl
brew install kubectl
Set cluster (run in directory where ca.crt is stored)
kubectl config set-cluster cluster-staging \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=./ca.crt
Set user credentials
kubectl config set-credentials alice-staging --token=$user_token
Define the combination of alice user with the staging cluster
kubectl config set-context alice-staging \
--cluster=cluster-staging \
--user=alice-staging \
--namespace=alice
Switch current-context to alice-staging for the user
kubectl config use-context alice-staging
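A quick sanity check from the new machine that the context is wired up (what Alice can actually see depends on the ABAC/RBAC policies discussed next):
kubectl config current-context   # should print alice-staging
kubectl get pods                 # allowed or denied depending on the policies configured below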
To control user access with policies (using ABAC), you need to create a policy file (for example):
{
  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
  "kind": "Policy",
  "spec": {
    "user": "system:serviceaccount:default:alice",
    "namespace": "default",
    "resource": "*",
    "readonly": true
  }
}
Provision this policy.json on every master node and add the --authorization-mode=ABAC and --authorization-policy-file=/path/to/policy.json flags to the API servers.
This would give Alice (through her service account) read-only access to all resources in the default namespace only.
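Since ABAC is legacy and RBAC is the recommended path (not covered by the steps above), a rough RBAC equivalent of the policy above - read-only access to the default namespace for Alice's service account via the built-in view ClusterRole - could be created like this:
# Sketch only: bind the built-in "view" ClusterRole to the alice service account
# in the default namespace (read-only on most namespaced resources).
kubectl create rolebinding alice-view \
  --clusterrole=view \
  --serviceaccount=default:alice \
  --namespace=default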

You say :
I need to enable other users to also administer.
But according to the documentation
Normal users are assumed to be managed by an outside, independent service. An admin distributing private keys, a user store like Keystone or Google Accounts, even a file with a list of usernames and passwords. In this regard, Kubernetes does not have objects which represent normal user accounts. Regular users cannot be added to a cluster through an API call.
You have to use a third party tool for this.
== Edit ==
One solution could be to manually create a user entry in the kubeconfig file. From the documentation :
# create kubeconfig entry
$ kubectl config set-cluster $CLUSTER_NICK \
--server=https://1.1.1.1 \
--certificate-authority=/path/to/apiserver/ca_file \
--embed-certs=true \
# Or if tls not needed, replace --certificate-authority and --embed-certs with
--insecure-skip-tls-verify=true \
--kubeconfig=/path/to/standalone/.kube/config
# create user entry
$ kubectl config set-credentials $USER_NICK \
# bearer token credentials, generated on kube master
--token=$token \
# use either username|password or token, not both
--username=$username \
--password=$password \
--client-certificate=/path/to/crt_file \
--client-key=/path/to/key_file \
--embed-certs=true \
--kubeconfig=/path/to/standalone/.kube/config
# create context entry
$ kubectl config set-context $CONTEXT_NAME \
--cluster=$CLUSTER_NICK \
--user=$USER_NICK \
--kubeconfig=/path/to/standalone/.kube/config
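To make the template above concrete, here is a hypothetical filled-in example that uses only a bearer token for the user; the server address, token variable, and file paths are placeholders:
# Sketch with placeholder values - adjust server, paths and token to your setup.
kubectl config set-cluster staging \
  --server=https://1.2.3.4:6443 \
  --certificate-authority=/path/to/ca.crt \
  --embed-certs=true \
  --kubeconfig=/tmp/alice.kubeconfig

kubectl config set-credentials alice \
  --token="$TOKEN" \
  --kubeconfig=/tmp/alice.kubeconfig

kubectl config set-context alice@staging \
  --cluster=staging \
  --user=alice \
  --kubeconfig=/tmp/alice.kubeconfig

kubectl config use-context alice@staging --kubeconfig=/tmp/alice.kubeconfig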

The Bitnami guide works for me, even if you use minikube. The most important thing is that your cluster supports RBAC.
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/

Related

K3s kubeconfig authenticate with token instead of client cert

I set up K3s on a server with:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --cluster-init --disable=traefik --write-kubeconfig-mode 644" sh -s -
Then I grabbed the kubeconfig from /etc/rancher/k3s/k3s.yaml and copied it to my local machine so I can interact with the cluster from my machine rather than the server node I installed K3s on. I had to swap out the references to 127.0.0.1 and change them to the actual hostname of the server I installed K3s on, but other than that it worked.
I then hooked up 2 more server nodes to the cluster for a High Availability setup using:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --server https://{hostname or IP of server 1}:6443 --disable=traefik --write-kubeconfig-mode 644" sh -s -
Now on my local machine I run kubectl get pods (for example) and that works. But I want a highly available setup, so I placed a TCP load balancer (NGINX, actually) in front of my cluster. Now I am trying to connect to the Kubernetes API through that proxy / load balancer and, unfortunately, since my ~/.kube/config uses a client certificate for authentication, this no longer works: the load balancer / proxy that lives in front of the servers cannot pass my client cert on to the K3s server.
My ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {omitted}
    server: https://my-cluster-hostname:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: {omitted}
    client-key-data: {omitted}
I also grabbed the client cert and key from my kubeconfig, exported them to files, and hit the API server with curl: it works when I hit the server nodes directly, but NOT when I go through my proxy / load balancer.
What I would like to do instead of using the client certificate approach is use token authentication as my proxy would not interfere with that. However, I am not sure how to get such a token. I read the Kubernetes Authenticating guide and specifically I tried creating a new service account and getting the token associated with it as described in the Service Account Tokens section but that also did not work. I also dug through K3s server config options to see if there was any mention of static token file, etc. but didn't find anything that seemed likely.
Is this some limitation of K3s or am I just doing something wrong (likely)?
My kubectl version output:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7+k3s1", GitCommit:"ac70570999c566ac3507d2cc17369bb0629c1cc0", GitTreeState:"clean", BuildDate:"2021-11-29T16:40:13Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
I figured out an approach that works for me by reading through the Kubernetes Authenticating Guide in more detail. I settled on the Service Account Tokens approach as it says:
Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well.
My use is for outside the cluster.
First, I created a new ServiceAccount called cluster-admin:
kubectl create serviceaccount cluster-admin
I then created a ClusterRoleBinding to assign cluster-wide permissions to my ServiceAccount (I named this cluster-admin-manual because K3s already had created one called cluster-admin that I didn't want to mess with):
kubectl create clusterrolebinding cluster-admin-manual --clusterrole=cluster-admin --serviceaccount=default:cluster-admin
Now you have to get the Secret that is created for you when you created your ServiceAccount:
kubectl get serviceaccount cluster-admin -o yaml
You'll see something like this returned:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-12-20T15:55:55Z"
  name: cluster-admin
  namespace: default
  resourceVersion: "3973955"
  uid: 66bab124-8d71-4e5f-9886-0bad0ebd30b2
secrets:
- name: cluster-admin-token-67jtw
Get the Secret content with:
kubectl get secret cluster-admin-token-67jtw -o yaml
In that output you will see the data/token property. This is a base64 encoded JWT bearer token. Decode it with:
echo {base64-encoded-token} | base64 --decode
Now you have your bearer token and you can add a user to your ~/.kube/config with the following command. You can also paste that JWT into jwt.io to take a look at the properties and make sure you base64 decoded it properly.
kubectl config set-credentials my-cluster-admin --token={token}
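If you prefer to avoid the manual copy/paste and decode, roughly the same extraction can be done with jsonpath (the secret name here is the one from the example above):
# Extract and decode the token in one step, then add it as a credential.
TOKEN=$(kubectl get secret cluster-admin-token-67jtw -o jsonpath='{.data.token}' | base64 --decode)
kubectl config set-credentials my-cluster-admin --token="$TOKEN"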
Then make sure your existing context in your ~/.kube/config has the user set appropriately (I did this manually by editing my kube config file but there's probably a kubectl config command for it). For example:
- context:
    cluster: my-cluster
    user: my-cluster-admin
  name: my-cluster
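For reference, the kubectl command that should make the same edit (names as in the example above) is:
kubectl config set-context my-cluster --cluster=my-cluster --user=my-cluster-admin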
My user in the kube config looks like this:
- name: my-cluster-admin
  user:
    token: {token}
Now I can authenticate to the cluster using the token instead of relying on a transport layer specific mechanism (TLS with Mutual Auth) that my proxy / load-balancer does not interfere with.
Other resources I found helpful:
Kubernetes — Role-Based Access Control (RBAC) Overview by Anish Patel

Use the Kubernetes REST API without kubectl

You can simply interact with K8s using its REST API. For example to get pods:
curl http://IPADDR/api/v1/pods
However, I can't find any example of authentication based only on curl or plain REST. All the examples show the usage of kubectl as a proxy or as a way to get credentials.
If I already own the .kubeconfig, and nothing else, is there any way to send the HTTP requests directly (e.g. with a token) without using kubectl?
The kubeconfig file you download when you first install the cluster includes a client certificate and key. For example:
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://api.cluster1.ocp.virt:6443
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: admin
  name: admin
current-context: admin
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: ...
    client-key-data: ...
If you extract the client-certificate-data and client-key-data to files, you can use them to authenticate with curl. To extract the data:
$ yq -r '.users[0].user."client-certificate-data"' kubeconfig | base64 -d > cert
$ yq -r '.users[0].user."client-key-data"' kubeconfig | base64 -d > key
And then using curl:
$ curl -k --cert cert --key key \
    'https://api.cluster1.ocp.virt:6443/api/v1/namespaces/default/pods?limit=500'
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "22022"
  },
  "items": []
}
Alternately, if your .kubeconfig has tokens in it, like this:
[...]
users:
- name: your_username/api-clustername-domain:6443
user:
token: sha256~...
Then you can use that token as a bearer token:
$ curl -k https://api.mycluster.mydomain:6443/ -H 'Authorization: Bearer sha256~...'
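For example, one way to pull the token out of the kubeconfig with yq (assuming the field layout shown above) and use it in the request:
TOKEN=$(yq -r '.users[0].user.token' kubeconfig)
curl -k "https://api.mycluster.mydomain:6443/api/v1/namespaces/default/pods" \
  -H "Authorization: Bearer $TOKEN"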
...but note that those tokens typically expire after some time, while the certificates should work indefinitely (unless they are revoked somehow).

Using a custom certificate for the Kubernetes api server with minikube

I have been trying to find how to do this but so far have found nothing, I am quite new to Kubernetes so I might just have looked over it. I want to use my own certificate for the Kubernetes API server, is this possible? And if so, can someone perhaps give me a link?
OK, so here is my idea. We know we cannot change the cluster certs, but there is another way to do it: we should be able to proxy through an Ingress.
First we enable the ingress addon:
➜ ~ minikube addons enable ingress
Given tls.crt and tls.key, we create a secret (you don't need to do this if you are using cert-manager, but that requires some additional steps I am not going to describe here):
➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key tls.key
and an ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-k8s
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: my-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443
Notice what the k8s docs say about CN and FQDN:
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.
The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.
But we can use tokens. Here is a page in k8s docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/ that lists all possible methods of authentication.
For testing I chose a service account token, but feel free to experiment with the others.
Let's create a service account, bind a role to it, and try to access the cluster:
➜ ~ kubectl create sa cadmin
serviceaccount/cadmin created
➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding
clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created
Now we follow the access-cluster-api instructions from the docs to try to access the cluster with the service account token.
➜ ~ APISERVER=https://$(minikube ip)
➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
➜ ~ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -H "Host: foo.bar.com"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.39.210:8443"
    }
  ]
}
Note: I am testing this with invalid/self-signed certificates and I don't own the foo.bar.com domain, so I need to pass the Host header by hand. For you it may look a bit different, so don't just copy-paste; try to understand what's happening and adjust it. If you have a domain you should be able to access it directly (no $(minikube ip) necessary).
As you should see, it worked! We got a valid response from api server.
But we probably don't want to use curl to access k8s.
Let's create a kubeconfig with the token.
kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config
kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com
kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin
kubectl config use-context --kubeconfig my-config mini
And now we can access k8s with this config:
➜ ~ kubectl get po --kubeconfig my-config
No resources found in default namespace.
Yes, you can use your own certificates and set them in the Kubernetes API server.
Suppose you have created the certificates; move and save them to a specific directory on the node:
{
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
You can then create the API server service and configure it with these certificates.
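Roughly, the certificate-related kube-apiserver flags for that service would look like the excerpt below (paths follow the /var/lib/kubernetes layout above; the complete systemd unit with all flags is in the kubernetes-the-hard-way guide linked below):
# Excerpt only - certificate-related flags; see the linked guide for the full unit file.
kube-apiserver \
  --advertise-address=${INTERNAL_IP} \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem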
Note: the INTERNAL_IP example above specifically targets GCP instances, so you might have to change some commands, for example:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
For the above command, you can provide the bare-metal IP list manually instead of getting it from the GCP instance metadata API if you are not using GCP.
Please refer to this link: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server
There you can find all the details for creating and setting up a whole Kubernetes cluster from scratch, along with detailed documentation and commands: https://github.com/kelseyhightower/kubernetes-the-hard-way

Kubernetes read-only context

I have full admin access to a GKE cluster, but I want to be able to create a kubernetes context with just read only privileges. This way I can prevent myself from accidentally messing with the cluster. However, I still want to be able to switch into a mode with full admin access temporarily when I need to make changes (I would probably use cloud shell for this to fully distinguish the two)
I haven't found much documentation about this - it seems I can set up roles based on my email but not have two roles for one user.
Is there any way to do this? Or any other way to prevent fat-finger deleting prod?
There are a few ways to do this with GKE. A context in your KUBECONFIG consists of a cluster and a user. Since you want to be pointing at the same cluster, it's the user that needs to change. Permissions for what actions users can perform on various resources can be controlled in a couple ways, namely via Cloud IAM policies or via Kubernetes RBAC. The former applies project-wide, so unless you want to create a subject that has read-only access to all clusters in your project, rather than a specific cluster, it's preferable to use the more narrowly-scoped Kubernetes RBAC.
The following types of subjects can authenticate with a GKE cluster and have Kubernetes RBAC policies applied to them (see here):
a registered (human) GCP user
a Kubernetes ServiceAccount
a GCloud IAM service account
a member of a G Suite Google Group
Since you're not going to register another human to accomplish this read-only access pattern and G Suite Google Groups are probably overkill, your options are a Kubernetes ServiceAccount or a GCloud IAM service account. For this answer, we'll go with the latter.
Here are the steps:
Create a GCloud IAM service account in the same project as your Kubernetes cluster (a rough gcloud sketch of this step appears after the steps below).
Create a local gcloud configuration to avoid cluttering your default one. Just as you want to create a new KUBECONFIG context rather than modifying the user of your current context, this does the equivalent thing but for gcloud itself rather than kubectl. Run the command gcloud config configurations create <configuration-name>.
Associate this configuration with your GCloud IAM service account: gcloud auth activate-service-account <service_account_email> --key-file=</path/to/service/key.json>.
Add a context and user to your KUBECONFIG file so that you can authenticate to your GKE cluster as this GCloud IAM service account as follows:
contexts:
- ...
- ...
- name: <cluster-name>-read-only
  context:
    cluster: <cluster-name>
    user: <service-account-name>
users:
- ...
- ...
- name: <service-account-name>
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json --configuration=<configuration-name>
        cmd-path: </path/to/gcloud/cli>
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
Add a ClusterRoleBinding so that this subject has read-only access to the cluster:
$ cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <any-name>
subjects:
- kind: User
  name: <service-account-email>
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
Try it out:
$ kubectl config use-context <cluster-name>-read-only
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
Error from server (Forbidden): namespaces is forbidden: User "<service-account-email>" cannot create resource "namespaces" in API group "" at the cluster scope: Required "container.namespaces.create" permission.
$ kubectl config use-context <original-context>
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
namespace/foo created
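For completeness, step 1 above (creating the GCloud IAM service account and the key file used in step 3) might look roughly like this; the account name and project ID are placeholders:
# Sketch only - create the service account and a JSON key file for it.
gcloud iam service-accounts create k8s-read-only \
  --display-name="k8s-read-only" \
  --project=<project-id>
gcloud iam service-accounts keys create key.json \
  --iam-account=k8s-read-only@<project-id>.iam.gserviceaccount.com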

Openshift 3.1: List projects of a serviceaccount using REST - returns empty list

I'm playing around with the openshift REST api and service account with the goal to set up something to partly automate the administration of openshift via REST calls.
Currently still on Openshift 3.1.
Where I got so far:
I managed to create a serviceaccount, give it access, and use it for REST calls. I cannot, however, get it to return a list of projects.
# account exists with name robot in namespace default
$ oc describe serviceaccount robot
Name: robot
Namespace: default
Labels: <none>
...
# tried a couple of approaches for granting access:
$ oc policy add-role-to-user admin system:serviceaccounts:robot -n default
$ oc policy add-role-to-user admin robot -n default
$ oc policy add-role-to-user admin system:serviceaccounts:default:robot -n default
# get token en do REST call
$ SECRET=`oc describe serviceaccount robot | grep -i tokens | awk '{print $2}'`
$ TOKEN=`oc describe secret $SECRET | grep -i ^token | awk '{print $2}'`
$ curl -X GET -H "Authorization: Bearer $TOKEN" https://$OPENSHIFT_HOSTNAME/oapi/v1/projects --insecure
{
  "kind": "ProjectList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/oapi/v1/projects"
  },
  "items": []
}
I don't understand why my list of projects is still empty. I've tried adding the serviceaccount to multiple other projects, but the list stays empty.
You are very close to solving it. The user that you used for your policy needs to contain a namespace:
$ oc policy add-role-to-user admin system:serviceaccount:default:robot -n default
This is the namespace of the project in which the robot has been created. In your first oc describe serviceaccount robot the namespace default is used.
Also, I see that you retrieve a secret of the cloudforms service account, but I guess that is just a typo.
See also more docs about it: OpenShift Origin ServiceAccounts