I have two kubernetes clusters that were set up by kops. They are both running v1.10.8. I have done my best to mirror the configuration between the two. They both have RBAC enabled. I have kubernetes-dashboard running on both. They both have a /srv/kubernetes/known_tokens.csv with an admin and a kube user:
$ sudo cat /srv/kubernetes/known_tokens.csv
ABCD,admin,admin,system:masters
DEFG,kube,kube
(... other users ...)
My question is how these users get authorized with respect to RBAC. When authenticating to kubernetes-dashboard using tokens, the admin user's token works on both clusters and has full access. But the kube user's token only has access on one of the clusters. On the other cluster, I get the following errors in the dashboard.
configmaps is forbidden: User "kube" cannot list configmaps in the namespace "default"
persistentvolumeclaims is forbidden: User "kube" cannot list persistentvolumeclaims in the namespace "default"
secrets is forbidden: User "kube" cannot list secrets in the namespace "default"
services is forbidden: User "kube" cannot list services in the namespace "default"
ingresses.extensions is forbidden: User "kube" cannot list ingresses.extensions in the namespace "default"
daemonsets.apps is forbidden: User "kube" cannot list daemonsets.apps in the namespace "default"
pods is forbidden: User "kube" cannot list pods in the namespace "default"
events is forbidden: User "kube" cannot list events in the namespace "default"
deployments.apps is forbidden: User "kube" cannot list deployments.apps in the namespace "default"
replicasets.apps is forbidden: User "kube" cannot list replicasets.apps in the namespace "default"
jobs.batch is forbidden: User "kube" cannot list jobs.batch in the namespace "default"
cronjobs.batch is forbidden: User "kube" cannot list cronjobs.batch in the namespace "default"
replicationcontrollers is forbidden: User "kube" cannot list replicationcontrollers in the namespace "default"
statefulsets.apps is forbidden: User "kube" cannot list statefulsets.apps in the namespace "default"
As per the official docs, "Kubernetes does not have objects which represent normal user accounts".
I can't find anywhere on the working cluster that would give authorization to kube. Likewise, I can't find anything that would restrict kube on the other cluster. I've checked all ClusterRoleBinding resources in the default and kube-system namespace. None of these reference the kube user. So why the discrepancy in access to the dashboard and how can I adjust it?
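A check like the following (a sketch; it impersonates the kube user from the token file and assumes cluster-admin access via kubectl) asks the API server the same question the dashboard does, and can be compared across the two clusters:
# should mirror what the dashboard is allowed to show for "kube"
$ kubectl auth can-i list pods --as=kube -n default
$ kubectl auth can-i list configmaps --as=kube -n default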
Some other questions:
How do I debug authorization issues such as this? The dashboard logs just say this user doesn't have access. Is there somewhere I can see which serviceAccount a particular request or token is mapped to?
What are groups in k8s? The k8s docs mention groups a lot. Even the static token users can be assigned a group such as system:masters, which looks like a role/clusterrole, but there is no system:masters role in my cluster. What exactly are groups? As per Create user group using RBAC API?, it appears groups are simply arbitrary labels that can be defined per user. What's the point of them? Can I map a group to an RBAC serviceAccount?
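To illustrate what I mean by mapping a group, here is a hypothetical binding that would grant read-only access to everyone whose token assigns them a made-up dev-team group (the group and binding names are my own placeholders):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-team-view        # placeholder name
subjects:
- kind: Group
  name: dev-team             # whatever group string the authenticator assigns to the user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                 # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io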
Update
I restarted the working cluster and it no longer works. I get the same authorization errors as on the other cluster. It looks like it was some sort of cached access. Sorry for the bogus question. I'm still curious about my follow-up questions, but they can be made into separate questions.
Hard to tell without access to the cluster, but my guess is that you have a Role and a RoleBinding somewhere for the kube user on the cluster that works. Not a ClusterRole with ClusterRoleBinding.
Something like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: my-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-role-binding
  namespace: default
subjects:
- kind: User
  name: "kube"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
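If such a pair exists on the working cluster, you can confirm that it is what grants access by impersonating the user (a quick sketch; kube-role.yaml is a placeholder file name for the manifests above):
$ kubectl apply -f kube-role.yaml
$ kubectl auth can-i list pods --as=kube -n default
# answers "yes" only if some Role or ClusterRole binding covers the kube user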
How do I debug authorization issues such as this? The dashboard logs
just say this user doesn't have access. Is there somewhere I can see
which serviceAccount a particular request or token is mapped to?
You can look at the kube-apiserver logs under /var/log/kube-apiserver.log on your leader master. Or, if it's running in a container, use docker logs <container-id-of-kube-apiserver>.
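For example, something along these lines (a sketch; the log path and container naming are typical for kops but may differ on your setup, and RBAC denials may only appear at higher API server verbosity):
# find the apiserver container and tail its logs
$ docker ps | grep kube-apiserver
$ docker logs --tail=100 <container-id-of-kube-apiserver>
# or, when the static log file exists:
$ sudo grep -i forbidden /var/log/kube-apiserver.log | tail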
Related
I'm trying to deploy some deployments from my gitlab runner.
I do not see the error in my clusterrole or in the rolebinding.
Here is the error I get:
from server for: "./deployment.yaml": deployments.apps "demo-deployment" is forbidden: User "system:serviceaccount:gitlab-runner:gitlab-ci" cannot get resource "deployments" in API group "apps" in the namespace "gitlab-runner"
Here is the role and binding I create:
kubectl create clusterrole deployment-test --verb=\* --resource=deployments
kubectl create clusterrolebinding deployment-test-binding --clusterrole=deployment-test --serviceaccount=gitlab-runner:gitlab-ci
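To narrow it down, one check worth running (a sketch using the names above) is to impersonate the service account and ask the API server about the exact verb and resource from the error:
kubectl auth can-i get deployments.apps -n gitlab-runner \
  --as=system:serviceaccount:gitlab-runner:gitlab-ci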
Thanks for any help!
The Kubernetes dashboard outputs a bunch of error messages.
Should you ignore them?
If not, how do you fix them?
warning: configmaps is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "configmaps" in API group "" in the namespace "default"
warning: persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default"
warning: secrets is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "secrets" in API group "" in the namespace "default"
warning: services is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "services" in API group "" in the namespace "default"
warning: ingresses.extensions is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "ingresses" in API group "extensions" in the namespace "default"
warning: daemonsets.apps is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "daemonsets" in API group "apps" in the namespace "default"
warning: events is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "events" in API group "" in the namespace "default"
warning: jobs.batch is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "jobs" in API group "batch" in the namespace "default"
warning: cronjobs.batch is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "cronjobs" in API group "batch" in the namespace "default"
warning: replicationcontrollers is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "replicationcontrollers" in API group "" in the namespace "default"
warning: statefulsets.apps is forbidden: User "system:serviceaccount:kube-system:deployment-controller" cannot list resource "statefulsets" in API group "apps" in the namespace "default"
It looks like your cluster is RBAC-enabled and the kube-system:deployment-controller service account those warnings refer to has no roles/bindings granting it access. You should be able to mitigate this issue by adding them.
There are two ways to do it.
You can create the binding with a simple one-liner from the CLI, or the YAML way:
$ kubectl create clusterrolebinding deployment-controller --clusterrole=cluster-admin --serviceaccount=kube-system:deployment-controller
If you want to define the ClusterRoleBinding in a YAML file instead, create a file with some name, say dashboard-rb.yaml, and execute the command that follows:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: deployment-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: deployment-controller
  namespace: kube-system
$ kubectl create -f dashboard-rb.yaml
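Afterwards you can verify the result by impersonating the service account (a sketch; the namespace matches the one in the warnings). Note that cluster-admin is very broad; a narrower ClusterRole granting only get/list on the listed resources would also silence the warnings.
$ kubectl auth can-i list configmaps -n default \
    --as=system:serviceaccount:kube-system:deployment-controller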
Take a look: kubernetes-dashboard-access-warnings, accessing-rbac-enabled-kubernetes-dashboard, k8s-crb-warning, kubernetes-dashboard-is-forbidden-all-over-the-site.
If not specified, pods are run under a default service account.
How can I check what the default service account is authorized to do?
Do we need it to be mounted there with every pod?
If not, how can we disable this behavior at the namespace level or cluster level?
What other use cases should the default service account handle?
Can we use it as a service account to create and manage the Kubernetes deployments in a namespace? For example we will not use real user accounts to create things in the cluster because users come and go.
Environment: Kubernetes 1.12 , with RBAC
A default service account is automatically created for each namespace.
kubectl get serviceaccount
NAME SECRETS AGE
default 1 1d
Service accounts can be added when required. Each pod is associated with exactly one service account but multiple pods can use the same service account.
A pod can only use one service account from the same namespace.
Service accounts are assigned to a pod by specifying the account's name in the pod manifest. If you don't assign one explicitly, the pod uses the default service account.
The default permissions for a service account don't allow it to list or modify any resources. By default, the default service account in a namespace has no permissions beyond those of an unauthenticated user, so pods can't even view cluster state, let alone modify it. It's up to you to grant them appropriate permissions to do that.
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services is forbidden: User \"system:serviceaccount:foo:default\" cannot list resource \"services\" in API group \"\" in the namespace \"foo\"",
  "reason": "Forbidden",
  "details": {
    "kind": "services"
  },
  "code": 403
}
As can be seen above, the default service account cannot list services.
But when given a proper Role and RoleBinding like the ones below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: foo-role
  namespace: foo
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-foo
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
Now I am able to list the services resource:
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/foo/services",
    "resourceVersion": "457324"
  },
  "items": []
}
Giving all your service accounts the cluster-admin ClusterRole is a bad idea. It is best to give everyone only the permissions they need to do their job and not a single permission more.
It’s a good idea to create a specific service account for each pod
and then associate it with a tailor-made role or a ClusterRole through a
RoleBinding.
If one of your pods only needs to read pods while the other also needs to modify them, create two different service accounts and make those pods use them by specifying the serviceAccountName property in the pod spec.
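A minimal sketch of what that looks like (the account and pod names here are made up):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-reader-sa
  namespace: foo
---
apiVersion: v1
kind: Pod
metadata:
  name: reader-pod
  namespace: foo
spec:
  serviceAccountName: pod-reader-sa   # runs under this account instead of "default"
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]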
You can refer to the link below for an in-depth explanation.
Service account example with roles
You can check kubectl explain serviceaccount.automountServiceAccountToken and edit the service account
kubectl edit serviceaccount default -o yaml
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-10-14T08:26:37Z
  name: default
  namespace: default
  resourceVersion: "459688"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: de71e624-cf8a-11e8-abce-0642c77524e8
secrets:
- name: default-token-q66j4
Once this change is made, any pod you spawn will not have a service account token mounted, as can be seen below.
kubectl exec tp -it bash
root@tp:/# cd /var/run/secrets/kubernetes.io/serviceaccount
bash: cd: /var/run/secrets/kubernetes.io/serviceaccount: No such file or directory
An application/deployment can run with a service account other than default by specifying it in the serviceAccountName field of a deployment configuration.
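For example, a hypothetical Deployment running its pods under a custom account might look roughly like this (all names are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      serviceAccountName: demo-sa   # must exist in the same namespace as the Deployment
      containers:
      - name: demo
        image: nginx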
What a service account, or any other user, can do is determined by the roles it is given (bound to) - see RoleBindings or ClusterRoleBindings; the allowed verbs are listed per apiGroups and resources under a role's rules definitions.
The default service account doesn't seem to be given any roles by default. It is possible to grant a role to the default service account as described in #2 here.
According to this, "...In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account".
HTH
How can I check what the default service account is authorized to do?
There isn't an easy way, but auth can-i may be helpful, e.g.:
$ kubectl auth can-i get pods --as=system:serviceaccount:default:default
no
For users there is auth can-i --list, but this does not seem to work with --as, which I suspect is a bug. In any case, you can run the above command with a few different verbs and the answer will be no in all cases (I only tried a few). Conclusion: it seems that the default service account has no permissions by default (at least in the cluster where I checked, since we have not configured it, AFAICT).
Do we need it to be mounted there with every pod?
Not sure what the question means.
If not, how can we disable this behavior at the namespace level or cluster level?
You can set automountServiceAccountToken: false on a service account or on an individual pod. Service accounts are per namespace, so when this is set on a service account, any pods in that namespace that use this account are affected by that setting.
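The pod-level form looks roughly like this (a sketch; when set on the pod it takes precedence over the service account's setting):
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false   # no token is mounted into this pod
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]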
What other use cases should the default service account handle?
The default service account is a fallback: it is the SA that gets used if a pod does not specify one. So the default service account should have no privileges whatsoever. Why would a pod need to talk to the kube API by default?
Can we use it as a service account to create and manage the Kubernetes deployments in a namespace?
I don't recommend that; see the previous answer. Instead, you should create a service account (bound to an appropriate Role/ClusterRole) for each pod type that needs access to the API, following the principle of least privilege. All other pod types can use the default service account, which should not mount the SA token automatically and should not be bound to any role.
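A quick way to set that up from the CLI (a sketch; all the names are placeholders):
$ kubectl create serviceaccount deployer -n myns
$ kubectl create role deployment-manager -n myns \
    --verb=get,list,watch,create,update,patch,delete --resource=deployments
$ kubectl create rolebinding deployer-binding -n myns \
    --role=deployment-manager --serviceaccount=myns:deployer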
kubectl auth can-i --list --as=system:serviceaccount:<namespace>:<serviceaccount> -n <namespace>
As a simple example, to check the default service account in the testns namespace:
kubectl auth can-i --list --as=system:serviceaccount:testns:default -n testns
Resources Non-Resource URLs Resource Names Verbs
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[ ... ]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
I have a Kubernetes v1.9.3 (no OpenShift) cluster I'd like to manage with ManageIQ (gaprindashvili-3 running as a Docker container).
I prepared the k8s cluster to interact with ManageIQ following these instructions. Notice that I performed the steps listed in the last section only (Prepare cluster for use with ManageIQ), as the previous ones were for setting up a k8s cluster and I already had a running one.
I successfully added the k8s container provider to ManageIQ, but the dashboard reports nothing: 0 nodes, 0 pods, 0 services, etc..., while I do have nodes, services and running pods on the cluster. I looked at the content of /var/log/evm.log of ManageIQ and found this error:
[----] E, [2018-06-21T10:06:40.397410 #13333:6bc9e80] ERROR -- : [KubeException]: events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope: clusterrole.rbac.authorization.k8s.io "cluster-reader" not found Method:[block in method_missing]
So the ClusterRole cluster-reader was not defined in the cluster. I double checked with kubectl get clusterrole cluster-reader and it confirmed that cluster-reader was missing.
As a solution, I tried to create cluster-reader manually. I could not find any reference of it in the k8s doc, while it is mentioned in the OpenShift docs. So I looked at how cluster-reader was defined in OpenShift v3.9. Its definition changes across different OpenShift versions, I picked 3.9 as it is based on k8s v1.9 which is the one I'm using. So here's what I found in the OpenShift 3.9 doc:
Name: cluster-reader
Labels: <none>
Annotations: authorization.openshift.io/system-only=true
rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
[*] [] [get]
apiservices.apiregistration.k8s.io [] [] [get list watch]
apiservices.apiregistration.k8s.io/status [] [] [get list watch]
appliedclusterresourcequotas [] [] [get list watch]
I wrote the following yaml definition to create an equivalent ClusterRole in my cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-reader
rules:
- apiGroups: ["apiregistration"]
  resources: ["apiservices.apiregistration.k8s.io", "apiservices.apiregistration.k8s.io/status"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["*"]
  verbs: ["get"]
I didn't include appliedclusterresourcequotas among the monitored resources because it's my understanding that it is an OpenShift-only resource (but I may be mistaken).
I deleted the old k8s container provider on ManageIQ and created a new one after having created cluster-reader, but nothing changed; the dashboard still displays nothing (0 nodes, 0 pods, etc.). I looked at the content of /var/log/evm.log in ManageIQ and this time these errors were reported:
[----] E, [2018-06-22T11:15:39.041903 #2942:7e5e1e0] ERROR -- : MIQ(ManageIQ::Providers::Kubernetes::ContainerManager::EventCatcher::Runner#start_event_monitor) EMS [kubernetes-01] as [] Event Monitor Thread aborted because [events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope]
[----] E, [2018-06-22T11:15:39.042455 #2942:7e5e1e0] ERROR -- : [KubeException]: events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope Method:[block in method_missing]
So what am I doing wrong? How can I fix this problem?
If it can be of any use, here you can find the whole .yaml file I'm using to set up the k8s cluster to interact with ManageIQ (all the required namespaces, service accounts, cluster role bindings are present as well).
For the ClusterRole to take effect it must be bound to the group management-infra or user management-admin.
Example of creating group binding:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-cluster-state
subjects:
- kind: Group
  name: management-infra
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-reader
  apiGroup: rbac.authorization.k8s.io
After applying this file, the changes take effect immediately. There is no need to restart the cluster.
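Either way, you can check whether the service account from the error message has ended up with the permission by impersonating it (a sketch; the account name is taken from the ManageIQ error):
$ kubectl auth can-i list events \
    --as=system:serviceaccount:management-infra:management-admin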
See more information here.
I am using kubernetes on bare metal (v1.10.2) and the latest traefik (v1.6.2) as ingress. I am seeing the following issue when I want to enable traefik to route to an HTTPS service.
Error configuring TLS for ingress default/cheese: secret default/traefik-cert does not exist
The secret exists! Why does it report that it doesn't?
Based on a comment: the secret is inaccessible from the traefik service account. But I don't understand why.
Details as follows:
kubectl get secret dex-tls -oyaml --as gem-lb-traefik
Error from server (Forbidden): secrets "dex-tls" is forbidden: User "gem-lb-traefik" cannot get secrets in the namespace "default"
$ kubectl describe clusterrolebinding gem-lb-traefik
Name: gem-lb-traefik
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: gem-lb-traefik
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount gem-lb-traefik default
$ kubectl describe clusterrole gem-lb-traefik
Name: gem-lb-traefik
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
endpoints [] [] [get list watch]
pods [] [] [get list watch]
secrets [] [] [get list watch]
services [] [] [get list watch]
ingresses.extensions [] [] [get list watch]
I still don't understand why I am getting this secret-inaccessibility error for the service account.
First of all, in this case you cannot check access to the secret using the --as gem-lb-traefik flag, because that tries to run the command as the user gem-lb-traefik, and you have no such user; you only have a ServiceAccount bound to the ClusterRole gem-lb-traefik. Moreover, using the --as <user> flag with any nonexistent user produces an error similar to yours:
Error from server (Forbidden): secrets "<secretname>" is forbidden: User "<user>" cannot get secrets in the namespace "<namespace>"
So, as @Ignacio Millán mentioned, you need to check your settings for Traefik and fix them according to the official documentation. Possibly, you missed the ServiceAccount in the Traefik DaemonSet description. Also, you need to check whether the Traefik DaemonSet is located in the same namespace as the ServiceAccount referenced by the ClusterRoleBinding.
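For reference, you can impersonate the service account itself (rather than a plain user) using the system:serviceaccount:<namespace>:<name> form:
$ kubectl get secret dex-tls -o yaml --as=system:serviceaccount:default:gem-lb-traefik
And the relevant part of the Traefik DaemonSet would look roughly like this (a sketch; the names and namespace are taken from the ClusterRoleBinding above and may differ in your setup):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: default                      # must match the ClusterRoleBinding subject's namespace
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: gem-lb-traefik  # without this, the pods run as the "default" account
      containers:
      - name: traefik
        image: traefik:v1.6.2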