Kubernetes RBAC ClusterRole

I'm trying to deploy some Deployments from my GitLab runner.
I do not see the error in my ClusterRole or in the RoleBinding.
Here is the error I get:
from server for: "./deployment.yaml": deployments.apps "demo-deployment" is forbidden: User "system:serviceaccount:gitlab-runner:gitlab-ci" cannot get resource "deployments" in API group "apps" in the namespace "gitlab-runner"
Here is the role I create:
kubectl create clusterrole deployment-test --verb=\* --resource=deployments
kubectl create clusterrolebinding deployment-test-binding --clusterrole=deployment-test --serviceaccount=gitlab-runner:gitlab-ci
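To double-check the binding, impersonating the service account with kubectl auth can-i (which requires impersonation rights on the cluster) should report whether the access is granted:
$ kubectl auth can-i get deployments -n gitlab-runner --as=system:serviceaccount:gitlab-runner:gitlab-ci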
Thanks for any help!

Related

Accessing k8s cluster with service account token

Is it possible to gain k8s cluster access with a service account token?
My script does not have access to a kubeconfig file; however, it does have access to the service account token at /var/run/secrets/kubernetes.io/serviceaccount/token.
Here are the steps I tried, but they are not working.
kubectl config set-credentials sa-user --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-context sa-context --user=sa-user
But when the script ran "kubectl get rolebindings", I got the following error:
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:test:default" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "test"
Is it possible to gain k8s cluster access with a service account token?
Certainly, that's the point of a ServiceAccount token. The question you appear to be asking is "why does my default ServiceAccount not have all the privileges I want", which is a different problem. One will benefit from reading the fine manual on the topic.
If you want the default SA in the test namespace to have privileges to read things in its namespace, you must create a Role scoped to that namespace and then declare the relationship explicitly; SAs do not automatically have those privileges:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test
  name: test-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: whatever-role-you-want
subjects:
- kind: ServiceAccount
  name: default
  namespace: test
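For completeness, a minimal Role the binding above could reference; the name matches the placeholder, and the rules here are illustrative, covering the kubectl get rolebindings call from the question:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test
  name: whatever-role-you-want
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["get", "list"]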
but when the script ran "kubectl get pods" I get the following error: Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:test:default" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "test"
Presumably you mean you ran kubectl get rolebindings, because I would not expect running kubectl get pods to emit that error.
Yes, it is possible. For instance, logging in to the K8s dashboard via token uses the same mechanism.
Follow these steps:
Create a service account
$ kubectl -n <your-namespace-optional> create serviceaccount <service-account-name>
A role binding grants the permissions defined in a role to a user or set of users. You can use a predefined role or you can create your own. Check this link for more info. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-example
$ kubectl create clusterrolebinding <binding-name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service-account-name>
Get the token name
$ TOKENNAME=`kubectl -n <namespace> get serviceaccount/<service-account-name> -o jsonpath='{.secrets[0].name}'`
Finally, get the token and set the credentials
$ kubectl -n <namespace> get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode
$ kubectl config set-credentials <service-account-name> --token=<output from previous command>
$ kubectl config set-context --current --user=<service-account-name>
If you follow these steps carefully, your problem will be solved.
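Note: on Kubernetes 1.24 and later, a token Secret is no longer created automatically for a ServiceAccount, so the jsonpath lookup in the "Get the token name" step comes back empty. On those versions you can request a token directly instead:
$ kubectl -n <namespace> create token <service-account-name>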

Kubernetes dashboard error messages: configmaps is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource

The Kubernetes dashboard outputs a bunch of error messages.
Should you ignore them?
If not, how do you fix them?
configmaps is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "configmaps" in API group "" in the namespace "default"
persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default"
secrets is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "secrets" in API group "" in the namespace "default"
services is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "services" in API group "" in the namespace "default"
ingresses.extensions is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "ingresses" in API group "extensions" in the namespace "default"
daemonsets.apps is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "daemonsets" in API group "apps" in the namespace "default"
events is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "events" in API group "" in the namespace "default"
jobs.batch is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "jobs" in API group "batch" in the namespace "default"
cronjobs.batch is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "cronjobs" in API group "batch" in the namespace "default"
replicationcontrollers is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "replicationcontrollers" in API group "" in the namespace "default"
statefulsets.apps is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "statefulsets" in API group "apps" in the namespace "default"
It looks like your cluster is RBAC-enabled and the deployment-controller is missing a service account defined in the deployment-controller pod(s). You should be able to easily mitigate this issue by adding this SA and its Roles/Bindings.
There are two ways to do it.
You can create the binding with a simple one-liner from the CLI, or the YAML way:
$ kubectl create clusterrolebinding deployment-controller --clusterrole=cluster-admin --serviceaccount=kube-system:deployment-controller
If you want to define the ClusterRoleBinding in a YAML file, create the file below with some name, say dashboard-rb.yaml, and execute the command that follows it:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployment-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: deployment-controller
  namespace: kube-system
$ kubectl create -f dashboard-rb.yaml
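You can confirm the subject and role of the new binding with:
$ kubectl describe clusterrolebinding deployment-controller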
Take a look: kubernetes-dashboard-access-warnings, accessing-rbac-enabled-kubernetes-dashboard, k8s-crb-warning, kubernetes-dashboard-is-forbidden-all-over-the-site.

Cannot list resource "configmaps" in API group when deploying Weaviate k8s setup on GCP

When running (on GCP):
$ helm upgrade \
--values ./values.yaml \
--install \
--namespace "weaviate" \
"weaviate" \
weaviate.tgz
It returns:
UPGRADE FAILED
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
UPDATE: based on the solution:
$ vim rbac-config.yaml
Add to the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Run:
$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller --upgrade
Note: based on Helm v2.
tl;dr: Setup Helm with the appropriate authorization settings for your cluster, see https://v2.helm.sh/docs/using_helm/#role-based-access-control
Long Answer
Your experience is not specific to the Weaviate Helm chart; rather, it looks like Helm is not set up according to the cluster's authorization settings. Other Helm commands should fail with the same or a similar error.
The following error
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
means that the default service account in the kube-system namespace is lacking permissions. I assume you have installed Helm/Tiller in the kube-system namespace, as this is the default if no other arguments are specified on helm init. Since you haven't created a specific service account for Tiller to use, it defaults to the default service account.
Since you mention that you are running on GCP, I assume this means you are using GKE. GKE has RBAC authorization enabled by default. In an RBAC setting no one has any rights by default; all rights need to be explicitly granted.
The helm docs list several options on how to make Helm/Tiller work in an RBAC-enabled setting. If the cluster has the sole purpose of running Weaviate you can choose the simplest option: Service Account with cluster-admin role. The process described there essentially creates a dedicated service account for Tiller, and adds the required ClusterRoleBinding to the existing cluster-admin ClusterRole. Note that this effectively makes Helm/Tiller an admin of the entire cluster.
If you are running a multi-tenant cluster and/or want to limit Tillers permissions to a specific namespace, you need to choose one of the alternatives.
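As a rough sketch of the namespace-scoped alternative (assuming a namespace called weaviate; the names are illustrative and the full recipe is in the Helm v2 docs linked above):
$ kubectl create serviceaccount tiller --namespace weaviate
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: weaviate
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: weaviate
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: weaviate
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
Then initialize Tiller against that namespace:
$ helm init --service-account tiller --tiller-namespace weaviate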

how to access namespaces in kubernetes using rest api?

I am unable to get the list of namespaces using the REST API; the endpoint is https://<localhost>:8001/api/v1/namespaces
Using this Kubernetes document:
I am using Postman. I will repeat the steps:
Created a user and gave it cluster-admin privileges:
kubectl create serviceaccount exampleuser
Created a rolebinding for our user with cluster role cluster-admin:
kubectl create rolebinding <nameofrolebinding> --clusterrole cluster-admin --serviceaccount default:exampleuser
Checked rolebinding using:
kubectl describe rolebinding <nameofrolebinding>
Now by using:
kubectl describe serviceaccount exampleuser
kubectl describe secret exampleuser-xxxx-xxxx
I will use the token I got here to authenticate in Postman.
GET https://<ipofserver>:<port>/api/v1/namespaces
Auth using the bearer token.
I expected the result to list all namespaces in the cluster, like kubectl get namespaces, but I got the following error instead:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:serviceaccount:default:exampleuser\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "namespaces"
  },
  "code": 403
}
I have used the "cluster-admin" ClusterRole for the user, but I am still getting an authorization error.
Please help.
You should use clusterrolebinding instead of rolebinding:
kubectl create clusterrolebinding <nameofrolebinding> --clusterrole cluster-admin --serviceaccount default:exampleuser
A RoleBinding grants permissions on namespaced resources, but Namespace is not a namespaced resource; you can check this with kubectl api-resources.
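For example, listing only the cluster-scoped resources shows namespaces among them:
$ kubectl api-resources --namespaced=false | grep namespaces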
More detail at rolebinding-and-clusterrolebinding:
Permissions can be granted within a namespace with a RoleBinding, or cluster-wide with a ClusterRoleBinding
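With the ClusterRoleBinding in place, the REST call from the question should succeed. As a sketch, the equivalent request with curl (server address, port, and token are placeholders):
$ curl -k -H "Authorization: Bearer <token>" https://<ipofserver>:<port>/api/v1/namespaces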
So the issue is that instead of using a rolebinding, I needed to use a clusterrolebinding. Compare:
kubectl create rolebinding nameofrolebinding --clusterrole cluster-admin --serviceaccount default:exampleuser   # namespace-scoped: not enough
kubectl create clusterrolebinding nameofrolebinding --clusterrole cluster-admin --serviceaccount default:exampleuser   # cluster-wide: works
A rolebinding's scope is limited to a namespace, and
a clusterrolebinding's scope is the entire cluster.
To work with /api/v1/namespaces we need to use a clusterrolebinding.

How are Kubernetes user tokens authorized?

I have two Kubernetes clusters that were set up by kops. They are both running v1.10.8. I have done my best to mirror the configuration between the two. They both have RBAC enabled. I have kubernetes-dashboard running on both. They both have a /srv/kubernetes/known_tokens.csv with an admin and a kube user:
$ sudo cat /srv/kubernetes/known_tokens.csv
ABCD,admin,admin,system:masters
DEFG,kube,kube
(... other users ...)
My question is: how do these users get authorized with respect to RBAC? When authenticating to kubernetes-dashboard using tokens, the admin user's token works on both clusters and has full access. But the kube user's token only has access on one of the clusters. On one cluster, I get the following errors in the dashboard.
configmaps is forbidden: User "kube" cannot list configmaps in the namespace "default"
persistentvolumeclaims is forbidden: User "kube" cannot list persistentvolumeclaims in the namespace "default"
secrets is forbidden: User "kube" cannot list secrets in the namespace "default"
services is forbidden: User "kube" cannot list services in the namespace "default"
ingresses.extensions is forbidden: User "kube" cannot list ingresses.extensions in the namespace "default"
daemonsets.apps is forbidden: User "kube" cannot list daemonsets.apps in the namespace "default"
pods is forbidden: User "kube" cannot list pods in the namespace "default"
events is forbidden: User "kube" cannot list events in the namespace "default"
deployments.apps is forbidden: User "kube" cannot list deployments.apps in the namespace "default"
replicasets.apps is forbidden: User "kube" cannot list replicasets.apps in the namespace "default"
jobs.batch is forbidden: User "kube" cannot list jobs.batch in the namespace "default"
cronjobs.batch is forbidden: User "kube" cannot list cronjobs.batch in the namespace "default"
replicationcontrollers is forbidden: User "kube" cannot list replicationcontrollers in the namespace "default"
statefulsets.apps is forbidden: User "kube" cannot list statefulsets.apps in the namespace "default"
As per the official docs, "Kubernetes does not have objects which represent normal user accounts".
I can't find anywhere on the working cluster that would give authorization to kube. Likewise, I can't find anything that would restrict kube on the other cluster. I've checked all ClusterRoleBinding resources in the default and kube-system namespace. None of these reference the kube user. So why the discrepancy in access to the dashboard and how can I adjust it?
Some other questions:
How do I debug authorization issues such as this? The dashboard logs just say this user doesn't have access. Is there somewhere I can see which serviceAccount a particular request or token is mapped to?
What are groups in k8s? The k8s docs mention groups a lot. Even the static token users can be assigned a group such as system:masters, which looks like a role/clusterrole, but there is no system:masters role in my cluster. What exactly are groups? As per Create user group using RBAC API?, it appears groups are simply arbitrary labels that can be defined per user. What's the point of them? Can I map a group to an RBAC serviceAccount?
Update
I restarted the working cluster and it no longer works. I get the same authorization errors as the other cluster. Looks like it was some sort of cached access. Sorry for the bogus question. I'm still curious about my follow-up questions, but they can be made into separate questions.
Hard to tell without access to the cluster, but my guess is that you have a Role and a RoleBinding somewhere for the kube user on the cluster that works. Not a ClusterRole with ClusterRoleBinding.
Something like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-role-binding
  namespace: default
subjects:
- kind: User
  name: "kube"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
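On the groups question: groups are opaque strings attached to an identity at authentication time (the fourth column of known_tokens.csv), and RBAC bindings can reference them as subjects. Note that system:masters is special-cased by the API server as a superuser group, which is why the admin token has full access even though no binding for it is visible. A sketch of binding an arbitrary group to the built-in view ClusterRole (the group name here is illustrative):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ops-team-view
subjects:
- kind: Group
  name: ops-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io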
How do I debug authorization issues such as this? The dashboard logs just say this user doesn't have access. Is there somewhere I can see which serviceAccount a particular request or token is mapped to?
You can look at the kube-apiserver logs under /var/log/kube-apiserver.log on your leader master. Or, if it's running in a container, docker logs <container-id-of-kube-apiserver>.
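Another quick check, assuming your own credentials allow impersonation, is to ask the API server directly what a given user may do:
$ kubectl auth can-i list configmaps -n default --as kube
On newer kubectl versions, kubectl auth can-i --list -n default --as kube enumerates everything that user is allowed to do.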