Kubernetes: ClusterRoles created in the cluster are not visible during RBAC checks

I have a problem in my Kubernetes cluster that suddenly appeared two weeks ago. The ClusterRoles I create are not visible when the RBAC permissions for a given ServiceAccount are resolved. Here is a minimal set to reproduce the problem.
Create the relevant ClusterRole, ClusterRoleBinding, and ServiceAccount in the default namespace so that this SA has the rights to see Endpoints.
# test.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-cr
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-cr
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: default
$ kubectl apply -f test.yaml
serviceaccount/test-sa created
clusterrole.rbac.authorization.k8s.io/test-cr created
clusterrolebinding.rbac.authorization.k8s.io/test-crb created
All objects, in particular the ClusterRole, are visible if requested directly.
$ kubectl get serviceaccount test-sa
NAME      SECRETS   AGE
test-sa   1         57s
$ kubectl get clusterrolebinding test-crb
NAME       AGE
test-crb   115s
$ kubectl get clusterrole test-cr
NAME      AGE
test-cr   2m19s
However, when I try to resolve the effective rights for this ServiceAccount, here is the error I get back:
$ kubectl auth can-i get endpoints --as=system:serviceaccount:default:test-sa
no - RBAC: clusterrole.rbac.authorization.k8s.io "test-cr" not found
The RBAC rules created before the breakage still work properly. For instance, here is the check for the ServiceAccount of the etcd-operator I deployed with Helm several months ago:
$ kubectl auth can-i get endpoints --as=system:serviceaccount:etcd:etcd-etcd-operator-etcd-operator
yes
The version of Kubernetes in this cluster is 1.17.0-0.
In case it helps: I have also been seeing very slow rollouts of new Pods lately; they can take up to 5 minutes to start after being created by a StatefulSet or a Deployment.
Do you have any insight into what is going on, or even what I could do about it? Please note that my Kubernetes cluster is managed, so I have no control over the underlying system; I only have cluster-admin privileges as a customer. But it would still help greatly if I could point the administrators in the right direction.
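If it helps with narrowing things down, here are a couple of extra checks that could be run against the same ServiceAccount (a sketch; the verbose flag is only there to surface the raw authorization responses):
$ kubectl auth can-i --list --as=system:serviceaccount:default:test-sa
$ kubectl auth can-i get endpoints --as=system:serviceaccount:default:test-sa -v=8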
Thanks in advance!

Thanks a lot for your answers!
It turned out that we will probably never have the final word on what happened. The cluster provider simply restarted the kube-apiserver, and this fixed the issue.
I suppose something went wrong, like a caching issue or some other transient failure that cannot be pinned down as a reproducible error.
To give a little more data for a future reader: the error occurred on a Kubernetes cluster managed by OVH, whose particularity is to run the control plane itself as pods deployed in a master Kubernetes cluster on their side.

Related

Can a Kubernetes Role be granted to access cluster scoped resources such as CRD and ClusterRole?

Referring to the question Kubernetes: Role vs ClusterRole, I understand that a Role can only be used to grant access to resources within a single namespace, so it should not be able to grant access to cluster-scoped resources like customresourcedefinitions or clusterroles. I've created a Role and granted all resources to it like this.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myaccount
  namespace: test
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testadmin
  namespace: test
rules:
- apiGroups: ['*']
  resources: ['*']
  verbs: ['*']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testadminbinding
  namespace: test
subjects:
- kind: ServiceAccount
  name: myaccount
  namespace: test
roleRef:
  kind: Role
  name: testadmin
  apiGroup: rbac.authorization.k8s.io
Then I tried this, but it returned 'yes' (with a warning).
$ kubectl auth can-i create customresourcedefinitions --as=system:serviceaccount:test:myaccount
Warning: resource 'customresourcedefinitions' is not namespace scoped in group 'apiextensions.k8s.io'
yes
It is also 'yes' for clusterroles and clusterrolebindings, which are cluster-scoped.
I'm a little bit confused: can a Role really access cluster-scoped resources? Can anyone help explain this behavior, please?
You are absolutely right, a Role is used to grant access to resources within a namespace, but I should clarify a few things:
customresourcedefinitions can be either namespaced or cluster-scoped, and are available to all namespaces; that is why the test returns yes, as per the official documentation:
When you create a new CustomResourceDefinition (CRD), the Kubernetes API Server creates a new RESTful resource path for each version you specify. The CRD can be either namespaced or cluster-scoped, as specified in the CRD's scope field. As with existing built-in objects, deleting a namespace deletes all custom objects in that namespace. CustomResourceDefinitions themselves are non-namespaced and are available to all namespaces.
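As a minimal illustration of the scope field mentioned in that quote, here is a sketch of a CRD manifest (the group and names are hypothetical):
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the CRD object itself is cluster-scoped, whatever scope it declares
  name: widgets.example.com        # hypothetical name
spec:
  group: example.com               # hypothetical group
  scope: Namespaced                # or Cluster, for cluster-scoped custom objects
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object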
clusterroles and clusterrolebindings are NOT namespaced resources: a clusterrole is just a set of permissions that can be granted within a given cluster, and a clusterrolebinding is the way to grant those permissions cluster-wide.
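For comparison, a hedged sketch of how the same permissions would be granted cluster-wide, using a ClusterRole and a ClusterRoleBinding (the names here are hypothetical):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testadmin-cluster            # hypothetical name
rules:
- apiGroups: ['*']
  resources: ['*']
  verbs: ['*']
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testadminbinding-cluster     # hypothetical name
subjects:
- kind: ServiceAccount
  name: myaccount
  namespace: test
roleRef:
  kind: ClusterRole
  name: testadmin-cluster
  apiGroup: rbac.authorization.k8s.io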
To properly test the created Role and RoleBinding, you can do it with a resource like a Pod:
You should be able to create a pod within the namespace "test" with the service account "myaccount", but not with any other user.
kubectl auth can-i create pod --as=system:serviceaccount:test:myaccount --namespace=test
yes
kubectl auth can-i create pod --as=system:serviceaccount:test:otheraccount --namespace=test
no
And you should not be able to create the same resource in any namespace other than "test" with either account.
kubectl auth can-i create pod --as=system:serviceaccount:test:myaccount
no
kubectl auth can-i create pod --as=system:serviceaccount:test:otheraccount
no

Attach IAM Role to Serviceaccount from a Pod in EKS

I am trying to attach an IAM role to a pod's service account from within the POD in EKS.
kubectl annotate serviceaccount -n $namespace $serviceaccount eks.amazonaws.com/role-arn=$ARN
The current role attached to the $serviceaccount is outlined below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: common-role
rules:
- apiGroups: [""]
  resources:
  - event
  - secrets
  - configmaps
  - serviceaccounts
  verbs:
  - get
  - create
However, when I execute the kubectl command I get the following:
error from server (forbidden): serviceaccounts $serviceaccount is forbidden: user "system:servi...." cannot get resource "serviceaccounts" in API group "" ...
Is my role correct? Why can't I modify the service account?
By default, Kubernetes runs pods with the default service account, which does not have the right permissions. Since I cannot determine which one you are using for your pod, I can only assume it is either default or some other account created by you. In both cases the error suggests that the service account you are using to run your pod does not have the proper rights.
If you run this pod with the default service account, you will have to add the appropriate rights to it. An alternative is to run your pod with another service account created for this purpose. Here's an example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: run-kubectl-from-pod
Then you will have to create an appropriate Role (you can find the full list of verbs here):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-service-accounts
rules:
- apiGroups: [""]
  resources:
  - serviceaccounts
  verbs:
  - get
  - create
  - patch
  - list
I'm using more verbs here as a test; get and patch would be enough for this use case. I mention this since it's best practice to grant as few rights as possible.
Then create your RoleBinding accordingly:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-service-account-bind
subjects:
- kind: ServiceAccount
  name: run-kubectl-from-pod
roleRef:
  kind: Role
  name: modify-service-accounts
  apiGroup: rbac.authorization.k8s.io
And now you just have to reference that service account when you run your pod:
apiVersion: v1
kind: Pod
metadata:
  name: run-kubectl-in-pod
spec:
  serviceAccountName: run-kubectl-from-pod
  containers:
  - name: kubectl-in-pod
    image: bitnami/kubectl
    command:
    - sleep
    - "3600"
Once that is done, you just exec into the pod:
➜ kubectl-pod kubectl exec -ti run-kubectl-in-pod sh
And then annotate the service account:
$ kubectl get sa
NAME                   SECRETS   AGE
default                1         19m
eks-sa                 1         36s
run-kubectl-from-pod   1         17m
$ kubectl annotate serviceaccount eks-sa eks.amazonaws.com/role-arn=$ARN
serviceaccount/eks-sa annotated
$ kubectl describe sa eks-sa
Name: eks-sa
Namespace: default
Labels: <none>
Annotations: eks.amazonaws.com/role-arn:
Image pull secrets: <none>
Mountable secrets: eks-sa-token-sldnn
Tokens: <none>
Events: <none>
If you encounter any issues with a request being refused, start by reviewing your request attributes and determining the appropriate request verb.
You can also check your access with the kubectl auth can-i command:
kubectl-pod kubectl auth can-i patch serviceaccount
The API server will respond with a simple yes or no.
Please note that if you want to patch a service account to use an IAM role, you will have to delete and re-create any existing pods associated with the service account in order to apply the credential environment variables. You can read more about it here.
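As a sketch of one way to do that recreation, assuming the affected pods belong to a Deployment (my-app is a hypothetical name):
$ kubectl rollout restart deployment/my-app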
While your role appears to be correct, please keep in mind that when executing kubectl, it is the RBAC permissions of the account in your kubeconfig that determine whether you are allowed to perform an action.
From your question, I understand that your role is attached to the service account you are trying to annotate, which is irrelevant to the kubectl permission check.
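A quick way to check what the account in your kubeconfig is actually allowed to do (a sketch; substitute your target namespace):
$ kubectl auth can-i patch serviceaccounts -n $namespace
$ kubectl auth can-i --list -n $namespace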

Kubernetes-dashboard empty: all resources are forbidden

It seems I have a very common problem but I cannot figure it out.
On a new kubernetes cluster (v1.17) I'm trying to install Kubernetes-dashboard.
For this I followed the official steps, starting by installing the dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
Then I created the ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
And the ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Everything runs smoothly and all the objects get created (I can get them and everything looks alright).
After running kubectl proxy, the dashboard is accessible at this URL:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Then I enter the token I got with this command:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user-token | awk '{print $1}')
I can log in, but the dashboard is empty. The notifications panel is full of
[OBJECT] is forbidden: User "system:serviceaccount:kubernetes-dashboard:admin-user" cannot list resource "[OBJECT]" in API group "extensions" in the namespace "default"
Replace [OBJECT] with every kubernetes object and you have a good overview of my notifications panel ;)
The admin-user obviously does not have enough rights to access the objects.
Questions
Did I miss something?
How can I debug this situation?
Thank you for your help!
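A couple of command-line checks that can help debug a situation like this (a sketch; the resource and the --as identity just mirror the forbidden objects from the notifications panel):
$ kubectl auth can-i list deployments --as=system:serviceaccount:kubernetes-dashboard:admin-user -n default
$ kubectl describe clusterrolebinding admin-user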
Edit: that was an outage at my cloud provider. I don't know what happened or how they solved it, but they did something and everything is working now.
In the end, it was an outage at the cloud provider. I ran into another problem with PVCs; they solved it and, tada, the dashboard is working just fine with no modifications.
The role binding gives this error?
The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"cluster-admin"}: cannot change roleRef
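The roleRef field of an existing binding is immutable, so a hedged sketch of one way around that error is to delete the old binding and re-apply the new one (the file name is hypothetical):
$ kubectl delete clusterrolebinding kubernetes-dashboard
$ kubectl apply -f dashboard-admin-binding.yaml   # hypothetical file with the new ClusterRoleBinding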

Kubernetes dashboard error using service account token

I have a Kubernetes cluster with various resources running fine. I am trying to get the Dashboard working, but I get the following error when I launch the dashboard and enter the service-account token.
persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default"
It does not allow the listing of any resources from my cluster (persistent volumes, pods, ingresses etc). My cluster has multiple namespaces.
This is my service-account yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-test # replace with your preferred username
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin # replace with your preferred username
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin # replace with your preferred username
  namespace: kube-system
Any help is appreciated.
FIX: create a cluster role binding for the cluster-admin role.
This should fix the problem:
kubectl delete clusterrole cluster-admin
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
The above commands will create a role binding that gives all permissions to all resources.
Run the Proxy:
kubectl proxy
Check the dashboard: please check the URL and port provided by kubectl proxy
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/persistentvolume?namespace=default
More info on the cluster role:
You can check out the 'cluster-admin' role by running:
kubectl edit clusterrole cluster-admin
The problem here is that the serviceaccount 'kubernetes-dashboard' does not have 'list' permissions for the resource 'persistentVolumeClaims'.
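A sketch of how to confirm that from the command line, using the exact resource and namespace from the error message:
$ kubectl auth can-i list persistentvolumeclaims --as=system:serviceaccount:kube-system:kubernetes-dashboard -n default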
I would recommend using the Web UI (Dashboard) documentation from Kubernetes.
Deploying the Dashboard UI
The Dashboard UI is not deployed by default. To deploy it, run the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
From your yaml I can see that you specified the namespace kube-system, but the dashboard is trying to list resources from the namespace default; at least that's what it says in your error message.
It also seems your yaml has an inconsistent ServiceAccount name: in the file you have k8s-test, while the error message says the dashboard is using kubernetes-dashboard.
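A hedged sketch of a binding that lines those names up, assuming the intent is to grant the dashboard's own ServiceAccount (the one from the error message) the cluster-admin role:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system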

Problems with simple RBAC example

I want to make a very simple example to learn how to use RBAC authorization in kubernetes. Therefore I use the example from the docs:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: dev-readpods-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-tester-rolebinding
  namespace: dev
subjects:
- kind: User
  name: Tester
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-readpods-role
  apiGroup: rbac.authorization.k8s.io
The role and the rolebinding are created.
When I log in with Tester and try
kubectl get pods -n dev
I get
Error from server (Forbidden): pods is forbidden: User "<url>:<port>/oidc/endpoint/OP#Tester" cannot list pods in the namespace "dev"
I read here (RBAC Error in Kubernetes) that the api-server has to be started with --authorization-mode=…,RBAC. How can I check this? I read somewhere else that if I run
kubectl api-versions | findstr rbac
and find RBAC entries, then RBAC should be activated. Is that true?
What am I doing wrong? Is there a good way to troubleshoot?
Thanks!
P.S. I'm running kubernetes inside IBM Cloud Private.
In ICP, the encouraged approach seems to be to use Teams (ICP's own term, I think). Try starting with that. But you need an LDAP server outside of ICP.
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.3/user_management/admin.html
You would need to determine the invocation of the apiserver to see what --authorization-mode flag was passed to it. Normally this is contained in a systemd unit file or pod manifest. I'm not sure how IBM Cloud launches the apiserver.
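If the apiserver runs as a static pod in the kube-system namespace (common in kubeadm-style setups; ICP may differ), a sketch of how to look for the flag:
$ kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep authorization-mode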