How to have a single user with access to multiple namespaces with separate certs for each namespace - kubernetes

I'm trying to setup some basic Authorization and Authentication for various users to access a shared K8s cluster.
Requirement: Multiple users can have access to multiple namespaces with a separate set of cert and keys for each of them.
Proposal:
openssl genrsa -out $PRIV_KEY 2048
# Generate CSR
openssl req -new -key $PRIV_KEY -out $CSR -subj "/CN=$USER"
# Create k8s CSR
K8S_CSR=user-request-$USER-$NAMESPACE_NAME-admin
cat <<EOF >./$K8S_CSR.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: $K8S_CSR
spec:
  groups:
  - system:authenticated
  request: $(cat $CSR | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
kubectl create -n $NAMESPACE_NAME -f $K8S_CSR.yaml
# Approve K8s CSR
kubectl certificate approve $K8S_CSR
# Fetch User Certificate
kubectl get csr $K8S_CSR -o jsonpath='{.status.certificate}' | base64 -d > $USER-$NAMESPACE_NAME-admin.crt
# Create Admin Role Binding
kubectl create rolebinding $NAMESPACE_NAME-$USER-admin-binding --clusterrole=admin --user=$USER --namespace=$NAMESPACE_NAME
Problem:
The user cert and/or private key are not specific to this namespace.
If I just create another RoleBinding for the same user in a different namespace, he will be able to authenticate and access it. How do I prevent that from happening?

The CA in Kubernetes is a cluster-wide CA (not namespaced), so you won't be able to create certs tied to a specific namespace.
system:authenticated and system:unauthenticated are built-in groups in Kubernetes that identify whether a request is authenticated. You can't directly manage groups or users in Kubernetes; you will have to configure an alternative cluster authentication method to take advantage of users and groups, for example a static token file or OpenID Connect.
Then you can restrict the users or groups defined in your identity provider to a Role that doesn't allow them to create another Role or RoleBinding. That way they cannot give themselves access to other namespaces, and only the cluster-admin decides which RoleBindings (and therefore namespaces) a specific user is part of.
For example, in your Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: mynamespace
  name: myrole
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]   # never include roles, clusterroles, rolebindings, and clusterrolebindings
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Another alternative is to restrict access based on a ServiceAccount token, by creating a RoleBinding to the service account.
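For instance, a minimal sketch of that alternative (the "reporting" namespace and "app-admin" ServiceAccount names are just illustrative here):
# Create a ServiceAccount and bind the built-in admin ClusterRole to it in one namespace only
kubectl create serviceaccount app-admin -n reporting
kubectl create rolebinding reporting-app-admin --clusterrole=admin --serviceaccount=reporting:app-admin --namespace=reporting
# The ServiceAccount's token only grants what its RoleBindings allow, so it cannot be reused in other namespaces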

Related

How can I access Microk8s in Read only mode?

I would like to read state of K8s using µK8s, but I don't want to have rights to modify anything. How to achieve this?
The following will give me full access:
microk8s.kubectl
Insufficient permissions to access MicroK8s. You can either try again with sudo or add the user digital to the 'microk8s' group:
sudo usermod -a -G microk8s digital
sudo chown -f -R digital ~/.kube
The new group will be available on the user's next login.
On Unix/Linux we can just set appropriate file/directory access permissions - just rx, decrease shell limits (like max memory/open file descriptors), decrease process priority (nice -19). We are looking for a similar solution for K8s.
This kind of problem in Kubernetes is handled via RBAC (Role-based access control). RBAC prevents unauthorized users from viewing or modifying the cluster state. Because the API server exposes a REST interface, users perform actions by sending HTTP requests to the server. Users authenticate themselves by including credentials in the request (an authentication token, username and password, or a client certificate).
As with any REST client you get GET, POST, PUT, DELETE etc. These are sent to specific URL paths that represent specific REST API resources (Pods, Services, Deployments and so on).
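For instance, you can see those REST calls directly (both commands below are standard kubectl features):
# Call the REST path for pods in the default namespace directly
kubectl get --raw /api/v1/namespaces/default/pods
# Or raise verbosity to see which URLs and HTTP verbs kubectl sends
kubectl get pods -v=8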
RBAC authorization is configured with two kinds of objects:
Roles and ClusterRoles - these specify which actions/verbs can be performed
RoleBindings and ClusterRoleBindings - these bind the above roles to a user, group or service account.
As you might already have found out, the ClusterRole is the one you might be looking for. This will allow you to restrict a specific user or group across the cluster.
In the example below we are creating a ClusterRole that can only list pods. The namespace is omitted since ClusterRoles are not namespaced.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-viewer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
This permission then has to be bound via a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to list pods in any namespace.
kind: ClusterRoleBinding
metadata:
  name: list-pods-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
Because you don't have enough permissions on your own, you have to reach out to the appropriate person who manages those to create a user for you that has the ClusterRole view. The view role should already be predefined in the cluster (kubectl get clusterrole view).
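For illustration, the cluster admin could grant the predefined view ClusterRole like this (the user name jane is hypothetical):
# Cluster-wide read-only access
kubectl create clusterrolebinding jane-view --clusterrole=view --user=jane
# Or read-only access limited to a single namespace
kubectl create rolebinding jane-view-default --clusterrole=view --user=jane --namespace=default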
If you wish to read more, the Kubernetes docs explain the whole concept of authorization well.

RBAC (Role Based Access Control) on K3s

After watching a few videos on RBAC (role based access control) on kubernetes (of which this one was the most transparent for me), I've followed the steps, however on k3s, not k8s as all the sources imply. From what I could gather (it's not working), the problem isn't with the actual role binding process, but rather the x509 user cert which isn't acknowledged by the API service:
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
This also isn't documented on Rancher's wiki on security for K3s (while it is documented for their k8s implementation, and described for Rancher 2.x itself), so I'm not sure if it's a problem with my implementation or a k3s <-> k8s thing.
$ kubectl version --short
Client Version: v1.20.5+k3s1
Server Version: v1.20.5+k3s1
With duplication of the process, my steps are as follows:
Get k3s ca certs
This was described to be under /etc/kubernetes/pki (k8s), however based on this seems to be at /var/lib/rancher/k3s/server/tls/ (server-ca.crt & server-ca.key).
Gen user certs from ca certs
#generate user key
$ openssl genrsa -out user.key 2048
#generate signing request from ca
openssl req -new -key user.key -out user.csr -subj "/CN=user/O=rbac"
# generate user.crt from this
openssl x509 -req -in user.csr -CA server-ca.crt -CAkey server-ca.key -CAcreateserial -out user.crt -days 365
... all good:
Creating kubeConfig file for user, based on the certs:
# Take user.crt and base64 encode to get encoded crt
cat user.crt | base64 -w0
# Take user.key and base64 encode to get encoded key
cat user.key | base64 -w0
Created config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <server-ca.crt base64-encoded>
    server: https://<k3s masterIP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: user
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate-data: <user.crt base64-encoded>
    client-key-data: <user.key base64-encoded>
Setup role & roleBinding (within specified namespace 'rbac')
role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
roleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user
After all of this, I get fun times of...
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
Any suggestions please?
Apparently this stackOverflow question presented a solution to the problem, but following the github feed, it came more-or-less down to the same approach followed here (unless I'm missing something)?
As we can find in the Kubernetes Certificate Signing Requests documentation:
A few steps are required in order to get a normal user to be able to authenticate and invoke an API.
I will create an example to illustrate how you can get a normal user who is able to authenticate and invoke an API (I will use the user john as an example).
First, create PKI private key and CSR:
# openssl genrsa -out john.key 2048
NOTE: CN is the name of the user and O is the group that this user will belong to
# openssl req -new -key john.key -out john.csr -subj "/CN=john/O=group1"
# ls
john.csr john.key
Then create a CertificateSigningRequest and submit it to a Kubernetes Cluster via kubectl.
# cat <<EOF | kubectl apply -f -
> apiVersion: certificates.k8s.io/v1
> kind: CertificateSigningRequest
> metadata:
>   name: john
> spec:
>   groups:
>   - system:authenticated
>   request: $(cat john.csr | base64 | tr -d '\n')
>   signerName: kubernetes.io/kube-apiserver-client
>   usages:
>   - client auth
> EOF
certificatesigningrequest.certificates.k8s.io/john created
# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
john 39s kubernetes.io/kube-apiserver-client system:admin Pending
# kubectl certificate approve john
certificatesigningrequest.certificates.k8s.io/john approved
# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
john 52s kubernetes.io/kube-apiserver-client system:admin Approved,Issued
Export the issued certificate from the CertificateSigningRequest:
# kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
# ls
john.crt john.csr john.key
With the certificate created, we can define the Role and RoleBinding for this user to access Kubernetes cluster resources. I will use the Role and RoleBinding similar to yours.
# cat role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: john-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
# kubectl apply -f role.yml
role.rbac.authorization.k8s.io/john-role created
# cat rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: john-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: john-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: john
# kubectl apply -f rolebinding.yml
rolebinding.rbac.authorization.k8s.io/john-binding created
The last step is to add this user into the kubeconfig file (see: Add to kubeconfig)
# kubectl config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs=true
User "john" set.
# kubectl config set-context john --cluster=default --user=john
Context "john" created.
Finally, we can change the context to john and check if it works as expected.
# kubectl config use-context john
Switched to context "john".
# kubectl config current-context
john
# kubectl get pods
NAME READY STATUS RESTARTS AGE
web 1/1 Running 0 30m
# kubectl run web-2 --image=nginx
Error from server (Forbidden): pods is forbidden: User "john" cannot create resource "pods" in API group "" in the namespace "default"
As you can see, it works as expected (user john only has get and list permissions).
Thank you matt_j for the example / answer provided to my question. I marked that as the answer, as it was a direct answer to my question regarding RBAC via certificates. In addition to that, I'd also like to provide an example for RBAC via service accounts, as a variation (for those who prefer it for a specific use case).
Service account creation
//kubectl create serviceaccount name -n namespace
$ kubectl create serviceaccount udef -n rbac
This creates the service account plus, automatically, a corresponding secret (udef-token-lhvm8). See the yaml output below:
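For example (output abridged and illustrative for this Kubernetes version; the token secret suffix will differ per cluster):
$ kubectl get serviceaccount udef -n rbac -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: udef
  namespace: rbac
secrets:
- name: udef-token-lhvm8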
Get token from created secret:
// kubectl describe secret secretName -n namespace
$ kubectl describe secret udef-token-lhvm8 -n rbac
The secret will contain 3 items: (1) ca.crt (2) namespace (3) token
# ... other secret content
Data
====
ca.crt: x bytes
namespace: x bytes
token: xxxx token xxxx
Put token into config file
You can start by getting your 'admin' config file and outputting it to a file:
// location of **k3s** kubeconfig
$ sudo cat /etc/rancher/k3s/k3s.yaml > /home/{userHomeFolder}/userKubeConfig
Under users section, can replace certificate data with token:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx root ca cert content xxx
    server: https://<host IP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: nametype
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: nametype
  user:
    token: xxxx token xxxx
The roles and rolebinding manifests can be created as required, like previously specified (nb within the same namespace), in this case linking to the service account:
# role manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
---
# rolebinding manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- kind: ServiceAccount
  name: udef
  namespace: rbac
With this being done, you will be able to test remotely:
// show pods -> will be allowed
$ kubectl get pods --kubeconfig
..... valid response provided
// get namespaces (or other types of commands) -> should not be allowed
$ kubectl get namespaces --kubeconfig
Error from server (Forbidden): namespaces is forbidden: User bla-bla
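If you still have the admin kubeconfig at hand, you can also sanity-check the ServiceAccount's permissions without switching configs, for example:
// should return "yes"
$ kubectl auth can-i list pods --as=system:serviceaccount:rbac:udef -n rbac
// should return "no"
$ kubectl auth can-i list namespaces --as=system:serviceaccount:rbac:udef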

Minikube: Restricted PodSecurityPolicy is not restricting when trying to create a privileged container

I have enabled podsecuritypolicy in minikube. By default it has created two psp - privileged and restricted.
NAME         PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP     SUPGROUP    READONLYROOTFS   VOLUMES
privileged   true    *      RunAsAny   RunAsAny           RunAsAny    RunAsAny    false            *
restricted   false          RunAsAny   MustRunAsNonRoot   MustRunAs   MustRunAs   false            configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
I have also created a Linux user - kubexz - for which I have created a ClusterRole and RoleBinding to restrict it to only managing pods in the kubexz namespace, using the restricted psp.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: only-edit
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "deletecollection", "patch", "update", "get", "list", "watch"]
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]
  verbs: ["use"]
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kubexz-rolebinding
  namespace: kubexz
subjects:
- kind: User
  name: kubexz
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: only-edit
I have set the kubeconfig file in my kubexz user's $HOME/.kube. The RBAC is working fine - from the kubexz user I am only able to create and manage pod resources in the kubexz namespace, as expected.
But when I post a pod manifest with securityContext.privileged: true, the restricted podsecuritypolicy is not stopping me from creating that pod. I should not be able to create a pod with a privileged container, but the pod is getting created. Not sure what I am missing:
apiVersion: v1
kind: Pod
metadata:
  name: new-pod
spec:
  hostPID: true
  containers:
  - name: justsleep
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true
I have followed PodSecurityPolicy using minikube.
This works by default only while using Minikube 1.11.1 with Kubernetes 1.16.x or higher.
Note for older versions of minikube:
Older versions of minikube do not ship with the pod-security-policy addon, so the policies that addon enables must be separately applied to the cluster
What I did:
1. Start minikube with the PodSecurityPolicy admission controller and the pod-security-policy addon enabled.
minikube start --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy --addons=pod-security-policy
The pod-security-policy addon must be enabled along with the admission controller to prevent issues during bootstrap.
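You can verify that both took effect with something like:
# The addon should be listed as enabled and the two default policies should exist
minikube addons list | grep pod-security-policy
kubectl get podsecuritypolicy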
2. Create authenticated user:
In this regard, Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call.
Even though a normal user cannot be added via an API call, any user that presents a valid certificate signed by the cluster's certificate authority (CA) is considered authenticated. In this configuration, Kubernetes determines the username from the common name field in the 'subject' of the cert (e.g., "/CN=bob"). From there, the role based access control (RBAC) sub-system determines whether the user is authorized to perform a specific operation on a resource.
Here you can find an example of how you can properly prepare X509 client certs and configure the kubeconfig file accordingly.
The most important part is to define properly the common name (CN) and the organization field (O):
openssl req -new -key DevUser.key -out DevUser.csr -subj "/CN=DevUser/O=development"
The common name (CN) of the subject will be used as username for authentication request. The organization field (O) will be used to indicate group membership of the user.
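A rough sketch of the remaining steps (file names are illustrative and the CA paths are the defaults of a standard minikube install):
# Sign the CSR with minikube's cluster CA
openssl x509 -req -in DevUser.csr -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial -out DevUser.crt -days 365
# Add the credentials and a context for the new user to kubeconfig
kubectl config set-credentials DevUser --client-certificate=DevUser.crt --client-key=DevUser.key --embed-certs=true
kubectl config set-context DevUser-context --cluster=minikube --user=DevUser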
Finally, I have created your configuration based on a standard minikube setup and can't recreate your issue with either hostPID: true or securityContext.privileged: true.
To consider:
a). Verify if your client certificate for authentication and kubeconfig file were created/setup properly especially common name (CN) and organization field (O).
b). Make sure you are switching between proper context while performing requests on behalf of different users.
e.g. kubectl get pods --context=NewUser-context

How to view members of subject with Group kind

There is a default ClusterRoleBinding named cluster-admin.
When I run kubectl get clusterrolebindings cluster-admin -o yaml I get:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-06-13T12:19:26Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "98"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: 0361e9f2-6f04-11e8-b5dd-000c2904e34b
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
In the subjects field I have:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
How can I see the members of the group system:masters ?
I read here about groups but I don't understand how can I see who is inside the groups as the example above with system:masters.
I noticed that when I decoded /etc/kubernetes/pki/apiserver-kubelet-client.crt using the command
openssl x509 -in apiserver-kubelet-client.crt -text -noout
it contained the subject system:masters, but I still didn't understand who the users in this group are:
Issuer: CN=kubernetes
Validity
    Not Before: Jul 31 19:08:36 2018 GMT
    Not After : Jul 31 19:08:37 2019 GMT
Subject: O=system:masters, CN=kube-apiserver-kubelet-client
Subject Public Key Info:
    Public Key Algorithm: rsaEncryption
        Public-Key: (2048 bit)
        Modulus:
Answer updated:
It seems that there is no way to do it using kubectl. There is no object like Group that you can "get" inside the Kubernetes configuration.
Group information in Kubernetes is currently provided by the Authenticator modules, and usually it's just a string in the user properties.
Perhaps you can get the list of groups from the subject of the user certificate, or, if you use GKE, EKS or AKS, the group attribute is stored in a cloud user management system.
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/reference/access-authn-authz/authentication/
Information about ClusterRole membership in system groups can be requested from ClusterRoleBinding objects (for example, for "system:masters" it shows only the cluster-admin ClusterRole):
Using jq:
kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group") | select(.subjects[0].name=="system:masters")'
If you want to list the names only:
kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group") | select(.subjects[0].name=="system:masters") | .metadata.name'
Using go-templates:
kubectl get clusterrolebindings -o go-template='{{range .items}}{{range .subjects}}{{.kind}}-{{.name}} {{end}} {{" - "}} {{.metadata.name}} {{"\n"}}{{end}}' | grep "^Group-system:masters"
Some additional information about system groups can be found in GitHub issue #44418 or in the RBAC document.
Admittedly, late to the party here.
Have a read through the Kubernetes 'Authenticating' docs. Kubernetes does not have an in-built mechanism for defining and controlling users (as distinct from ServiceAccounts which are used to provide a cluster identity for Pods, and therefore services running on them).
This means that Kubernetes does not therefore have any internal DB to reference, to determine and display group membership.
In smaller clusters, x509 certificates are typically used to authenticate users. The API server is configured to trust a CA for the purpose, and then users are issued certificates signed by that CA. As you had noticed, if the subject contains an 'Organisation' field, that is mapped to a Kubernetes group. If you want a user to be a member of more than one group, then you specify multiple 'O' fields. (As an aside, to my mind it would have made more sense to use the 'OU' field, but that is not the case)
In answer to your question, it appears that in the case of a cluster where users are authenticated by certificates, your only route is to have access to the issued certs, and to check for the presence of the 'O' field in the subject. I guess in more advanced cases, Kubernetes would be integrated with a centralised tool such as AD, which could be queried natively for group membership.
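For example, if you do have an issued client certificate at hand, the groups can be read straight from its subject (file name illustrative):
# Each O= entry in the subject is treated as a Kubernetes group
openssl x509 -in user.crt -noout -subject
# subject=O = system:masters, CN = kube-apiserver-kubelet-client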
In EKS the system:masters group is mapped to IAM roles in the aws-auth ConfigMap
kubectl get cm -n kube-system aws-auth -oyaml | yq '.data.mapRoles' | yq -P

kubernetes: Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope even after granting permission

Even after granting cluster roles to user, I get Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
I have the following set for the user:
- context:
    cluster: kubernetes
    user: user#gmail.com
  name: user#kubernetes
set in the ~/.kube/config file,
and the below added to admin.yaml to create the cluster-role and cluster-rolebindings:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: User
  name: nsp#gmail.com
roleRef:
  kind: ClusterRole
  name: admin-role
When I try the command, I still get the error:
kubectl --username=user#gmail.com get nodes
Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
Can someone please suggest how to proceed?
Your problem is not with your ClusterRoleBindings but rather with user authentication. Kubernetes tells you that it identified you as system:anonymous (which is similar to *NIX's nobody) and not nsp#example.com (to which you applied your binding).
In your specific case the reason for that is that the username flag uses HTTP Basic authentication and needs the password flag to actually do anything. But even if you did supply the password, you'd still need to actually tell the API server to accept that specific user.
Have a look at this part of the Kubernetes documentation which deals with different methods of authentication. For the username and password authentication to work, you'd want to look at the Static Password File section, but I would actually recommend you go with X509 Client Certs since they are more secure and are operationally much simpler (no secrets on the Server, no state to replicate between API servers).
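If you go the client-certificate route, the kubeconfig wiring looks roughly like this (a sketch reusing the names from your config; the cert and key files are assumed to be issued by the cluster CA):
kubectl config set-credentials user#gmail.com --client-certificate=user.crt --client-key=user.key --embed-certs=true
kubectl config set-context user#kubernetes --cluster=kubernetes --user=user#gmail.com
kubectl --context=user#kubernetes get nodes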
In my case I was receiving a very similar error due to RBAC.
Error:
root@k8master:~# kubectl cluster-info dump --insecure-skip-tls-verify=true
Error from server (Forbidden): nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Solution:
As a solution, I have done the below things to reconfigure my user to access the cluster:
cd $HOME
sudo whoami
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
After doing the above, when I checked the cluster info I got this result:
root@k8master:~# kubectl cluster-info
Kubernetes master is running at https://192.168.10.15:6443
KubeDNS is running at https://192.168.10.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy