I'm experiencing a strange behavior from newly created Kubernetes service accounts. It appears that their tokens provide limitless access permissions in our cluster.
If I create a new namespace, a new service account inside that namespace, and then use the service account's token in a new kube config, I am able to perform all actions in the cluster.
# SERVER is the only variable you'll need to change to replicate on your own cluster
SERVER=https://k8s-api.example.com
NAMESPACE=test-namespace
SERVICE_ACCOUNT=test-sa
# Create a new namespace and service account
kubectl create namespace "${NAMESPACE}"
kubectl create serviceaccount -n "${NAMESPACE}" "${SERVICE_ACCOUNT}"
SECRET_NAME=$(kubectl get serviceaccount "${SERVICE_ACCOUNT}" --namespace="${NAMESPACE}" -o jsonpath='{.secrets[*].name}')
CA=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.ca\.crt}')
TOKEN=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.token}' | base64 --decode)
# Create the config file using the certificate authority and token from the newly created
# service account
echo "
apiVersion: v1
kind: Config
clusters:
- name: test-cluster
cluster:
certificate-authority-data: ${CA}
server: ${SERVER}
contexts:
- name: test-context
context:
cluster: test-cluster
namespace: ${NAMESPACE}
user: ${SERVICE_ACCOUNT}
current-context: test-context
users:
- name: ${SERVICE_ACCOUNT}
user:
token: ${TOKEN}
" > config
Running that ^ as a shell script yields a config in the current directory. The problem is, using that file, I'm able to read and edit all resources in the cluster. I'd like the newly created service account to have no permissions unless I explicitly grant them via RBAC.
# All pods are shown, including kube-system pods
KUBECONFIG=./config kubectl get pods --all-namespaces
# And I can edit any of them
KUBECONFIG=./config kubectl edit pods -n kube-system some-pod
I haven't added any role bindings to the newly created service account, so I would expect it to receive access denied responses for all kubectl queries using the newly generated config.
Below is an example of the test-sa service account's JWT that's embedded in config:
{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "test-namespace",
  "kubernetes.io/serviceaccount/secret.name": "test-sa-token-fpfb4",
  "kubernetes.io/serviceaccount/service-account.name": "test-sa",
  "kubernetes.io/serviceaccount/service-account.uid": "7d2ecd36-b709-4299-9ec9-b3a0d754c770",
  "sub": "system:serviceaccount:test-namespace:test-sa"
}
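For reference, the payload can be inspected locally; a minimal sketch, assuming the TOKEN variable from the script above and GNU coreutils:
# Decode the payload (second dot-separated segment) of the service account JWT.
# base64url characters need translating and padding added before base64 will accept them.
echo "${TOKEN}" | cut -d. -f2 | tr '_-' '/+' | awk '{ while (length($0) % 4) $0 = $0 "="; print }' | base64 --decode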
Things to consider...
RBAC seems to be enabled in the cluster as I see rbac.authorization.k8s.io/v1 and
rbac.authorization.k8s.io/v1beta1 in the output of kubectl api-versions | grep rbac as suggested in this post. It is notable that kubectl cluster-info dump | grep authorization-mode, as suggested in another answer to the same question, doesn't show output. Could this suggest RBAC isn't actually enabled?
My user has cluster-admin role privileges, but I would not expect those to carry over to service accounts created by that user.
We're running our cluster on GKE.
As far as I'm aware, we don't have any unorthodox RBAC roles or bindings in the cluster that would cause this. I could be missing something or am generally unaware of K8s RBAC configurations that would cause this.
Am I correct in my assumption that newly created service accounts should have extremely limited cluster access, and the above scenario shouldn't be possible without permissive role bindings being attached to the new service account? Any thoughts on what's going on here, or ways I can restrict the access of test-sa?
You can check the permissions of the service account by running:
kubectl auth can-i --list --as=system:serviceaccount:test-namespace:test-sa
If you see the output below, that is the very limited set of permissions a service account gets by default.
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
                                                [/api/*]            []               [get]
                                                [/api]              []               [get]
                                                [/apis/*]           []               [get]
                                                [/apis]             []               [get]
                                                [/healthz]          []               [get]
                                                [/healthz]          []               [get]
                                                [/livez]            []               [get]
                                                [/livez]            []               [get]
                                                [/openapi/*]        []               [get]
                                                [/openapi]          []               [get]
                                                [/readyz]           []               [get]
                                                [/readyz]           []               [get]
                                                [/version/]         []               [get]
                                                [/version/]         []               [get]
                                                [/version]          []               [get]
                                                [/version]          []               [get]
I could not reproduce your issue on three different K8S versions in my test lab (including v1.15.3, v1.14.10-gke.17, v1.11.7-gke.12 - with basic auth enabled).
Unfortunately, token-based login activity is not recorded in the Audit Logs of the Cloud Logging console for GKE clusters :(.
To my knowledge, only data-access operations that go through Google Cloud are recorded (IAM-based, i.e. kubectl using the google auth provider).
If your "test-sa" service account is somehow permitted to perform specific operations by RBAC, I would still study the Audit Logs of your GKE cluster. Maybe your service account is somehow being mapped to a Google service account, and thus authorized.
You can always contact the official GCP support channel to troubleshoot your unusual case further.
It turns out an overly permissive cluster-admin ClusterRoleBinding was bound to the system:serviceaccounts group. This resulted in all service accounts in our cluster having cluster-admin privileges.
It seems like somewhere early on in the cluster's life the following ClusterRoleBinding was created:
kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
WARNING: Never apply this rule to your cluster ☝️
We have since removed this overly permissive rule and rightsized all service account permissions.
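For anyone auditing their own cluster, a rough way to surface bindings like this (assuming jq is installed; the binding name below is the one from our cluster):
# List ClusterRoleBindings that grant anything to the system:serviceaccounts group
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(.subjects[]?.name == "system:serviceaccounts") | "\(.metadata.name) -> \(.roleRef.name)"'
# Then remove the offending binding
kubectl delete clusterrolebinding serviceaccounts-cluster-admin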
Thank you to the folks that provided useful answers and comments to this question. They were helpful in determining this issue. This was a very dangerous RBAC configuration and we are pleased to have it resolved.
Related
I have full admin access to a GKE cluster, but I want to be able to create a kubernetes context with just read only privileges. This way I can prevent myself from accidentally messing with the cluster. However, I still want to be able to switch into a mode with full admin access temporarily when I need to make changes (I would probably use cloud shell for this to fully distinguish the two)
I haven't found much documentation about this - it seems I can set up roles based on my email, but not have two roles for one user.
Is there any way to do this? Or any other way to prevent fat-finger deleting prod?
There are a few ways to do this with GKE. A context in your KUBECONFIG consists of a cluster and a user. Since you want to be pointing at the same cluster, it's the user that needs to change. Permissions for what actions users can perform on various resources can be controlled in a couple ways, namely via Cloud IAM policies or via Kubernetes RBAC. The former applies project-wide, so unless you want to create a subject that has read-only access to all clusters in your project, rather than a specific cluster, it's preferable to use the more narrowly-scoped Kubernetes RBAC.
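For contrast, the project-wide Cloud IAM route would look roughly like this (project and member names are placeholders); it applies to every cluster in the project, which is why the narrower Kubernetes RBAC approach is used below:
# Grants read access across the whole project, not just one cluster
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:some-user@my-project.iam.gserviceaccount.com \
  --role=roles/container.viewer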
The following types of subjects can authenticate with a GKE cluster and have Kubernetes RBAC policies applied to them (see here):
a registered (human) GCP user
a Kubernetes ServiceAccount
a GCloud IAM service account
a member of a G Suite Google Group
Since you're not going to register another human to accomplish this read-only access pattern and G Suite Google Groups are probably overkill, your options are a Kubernetes ServiceAccount or a GCloud IAM service account. For this answer, we'll go with the latter.
Here are the steps:
Create a GCloud IAM service account in the same project as your Kubernetes cluster.
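For example, something along these lines (the account name, project, and key path are placeholders):
gcloud iam service-accounts create gke-read-only --project my-project
gcloud iam service-accounts keys create /path/to/service/key.json \
  --iam-account gke-read-only@my-project.iam.gserviceaccount.com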
Create a local gcloud configuration to avoid cluttering your default one. Just as you want to create a new KUBECONFIG context rather than modifying the user of your current context, this does the equivalent thing but for gcloud itself rather than kubectl. Run the command gcloud config configurations create <configuration-name>.
Associate this configuration with your GCloud IAM service account: gcloud auth activate-service-account <service_account_email> --key-file=</path/to/service/key.json>.
Add a context and user to your KUBECONFIG file so that you can authenticate to your GKE cluster as this GCloud IAM service account as follows:
contexts:
- ...
- ...
- name: <cluster-name>-read-only
  context:
    cluster: <cluster-name>
    user: <service-account-name>
users:
- ...
- ...
- name: <service-account-name>
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json --configuration=<configuration-name>
        cmd-path: </path/to/gcloud/cli>
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
Add a ClusterRoleBinding so that this subject has read-only access to the cluster:
$ cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <any-name>
subjects:
- kind: User
  name: <service-account-email>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
Try it out:
$ kubectl config use-context <cluster-name>-read-only
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
Error from server (Forbidden): namespaces is forbidden: User "<service-account-email>" cannot create resource "namespaces" in API group "" at the cluster scope: Required "container.namespaces.create" permission.
$ kubectl config use-context <original-context>
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
namespace/foo created
If not specified, pods are run under a default service account.
How can I check what the default service account is authorized to do?
Do we need it to be mounted there with every pod?
If not, how can we disable this behavior at the namespace level or cluster level?
What other use cases the default service account should be handling?
Can we use it as a service account to create and manage the Kubernetes deployments in a namespace? For example we will not use real user accounts to create things in the cluster because users come and go.
Environment: Kubernetes 1.12 , with RBAC
A default service account is automatically created for each namespace.
kubectl get serviceaccount
NAME SECRETS AGE
default 1 1d
Service accounts can be added when required. Each pod is associated with exactly one service account but multiple pods can use the same service account.
A pod can only use one service account from the same namespace.
Service accounts are assigned to a pod by specifying the account's name in the pod manifest. If you don't assign one explicitly, the pod will use the default service account.
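For example, a minimal pod manifest that selects a specific service account might look like this (the pod, namespace, account, and image names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: foo
spec:
  serviceAccountName: my-custom-sa   # falls back to "default" if omitted
  containers:
  - name: app
    image: nginx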
The default permissions for a service account don't allow it to list or modify any resources. The default service account isn't allowed to view cluster state, let alone modify it.
By default, the default service account in a namespace has no permissions other than those of an unauthenticated user.
Therefore pods by default can't even view cluster state. It's up to you to grant them the appropriate permissions to do that.
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services is forbidden: User \"system:serviceaccount:foo:default\" cannot list resource \"services\" in API group \"\" in the namespace \"foo\"",
  "reason": "Forbidden",
  "details": {
    "kind": "services"
  },
  "code": 403
}
As can be seen above, the default service account cannot list services.
But when given a proper Role and RoleBinding like the ones below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: foo-role
  namespace: foo
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-foo
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
Now I am able to list the services resource:
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/bar/services",
    "resourceVersion": "457324"
  },
  "items": []
}
Giving all your service accounts the cluster-admin ClusterRole is a bad idea. It is best to give everyone only the permissions they need to do their job and not a single permission more.
It’s a good idea to create a specific service account for each pod
and then associate it with a tailor-made role or a ClusterRole through a
RoleBinding.
If one of your pods only needs to read pods while another also needs to modify them, create two different service accounts and make those pods use them by specifying the serviceAccountName property in the pod spec.
You can refer to the link below for an in-depth explanation.
Service account example with roles
You can check kubectl explain serviceaccount.automountServiceAccountToken and edit the service account
kubectl edit serviceaccount default -o yaml
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-10-14T08:26:37Z
  name: default
  namespace: default
  resourceVersion: "459688"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: de71e624-cf8a-11e8-abce-0642c77524e8
secrets:
- name: default-token-q66j4
Once this change is done, any pod you spawn won't have a service account token mounted, as can be seen below.
kubectl exec tp -it bash
root@tp:/# cd /var/run/secrets/kubernetes.io/serviceaccount
bash: cd: /var/run/secrets/kubernetes.io/serviceaccount: No such file or directory
An application/deployment can run with a service account other than default by specifying it in the serviceAccountName field of a deployment configuration.
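For instance, a quick sketch with placeholder names (the deployment and account don't exist in the examples above):
# create a dedicated service account and switch an existing deployment to it
kubectl create serviceaccount app-sa
kubectl set serviceaccount deployment my-deployment app-sa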
What a service account, or any other user, can do is determined by the roles it is bound to - see RoleBindings and ClusterRoleBindings; the allowed verbs are defined by a role's apiGroups and resources under its rules.
The default service account doesn't seem to be given any roles by default. It is possible to grant a role to the default service account as described in #2 here.
According to this, "...In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account".
HTH
How can I check what the default service account is authorized to do?
There isn't an easy way, but auth can-i may be helpful. Eg
$ kubectl auth can-i get pods --as=system:serviceaccount:default:default
no
For users there is auth can-i --list, but this does not seem to work with --as, which I suspect is a bug. In any case, you can run the above command with a few different verbs and the answer will be no in every case (I only tried a few). Conclusion: it seems that the default service account has no permissions by default (at least in the cluster where I checked, where we have not configured it, AFAICT).
Do we need it to be mounted there with every pod?
Not sure what the question means.
If not, how can we disable this behavior at the namespace level or cluster level?
You can set automountServiceAccountToken: false on a service account or on an individual pod. Service accounts are per namespace, so when done on a service account, any pod in that namespace that uses this account will be affected by that setting.
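A minimal pod-level sketch (pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false   # no token volume is mounted into this pod
  containers:
  - name: app
    image: nginx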
What other use cases the default service account should be handling?
The default service account is a fallback: it is the SA that gets used if a pod does not specify one. So the default service account should have no privileges whatsoever. Why would a pod need to talk to the kube API by default?
Can we use it as a service account to create and manage the Kubernetes deployments in a namespace?
I don't recommend that, see the previous answer. Instead, you should create a service account (bound to an appropriate Role/ClusterRole) for each pod type that needs access to the API, following the principle of least privilege. All other pod types can use the default service account, which should not mount the SA token automatically and should not be bound to any role.
kubectl auth can-i --list --as=system:serviceaccount:<namespace>:<serviceaccount> -n <namespace>
As a simple example, to check the default service account in the testns namespace:
kubectl auth can-i --list --as=system:serviceaccount:testns:default -n testns
Resources                                       Non-Resource URLs                     Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                                    []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                    []               [create]
                                                [/.well-known/openid-configuration]   []               [get]
                                                [/api/*]                              []               [get]
                                                [/api]                                []               [get]
                                                [ ... ]
                                                [/readyz]                             []               [get]
                                                [/version/]                           []               [get]
                                                [/version/]                           []               [get]
                                                [/version]                            []               [get]
                                                [/version]                            []               [get]
I am using kubernetes on bare-metal (v1.10.2) and latest traefik (v1.6.2) as ingress. I am seeing following issue when I want to enable traefik to route to a httpS service.
Error configuring TLS for ingress default/cheese: secret default/traefik-cert does not exist
The secret exists! Why does it report that it doesn't?
Based on a comment: the secret is inaccessible from the Traefik service account. But I don't understand why.
Details as follows:
kubectl get secret dex-tls -oyaml --as gem-lb-traefik
Error from server (Forbidden): secrets "dex-tls" is forbidden: User "gem-lb-traefik" cannot get secrets in the namespace "default"
$ kubectl describe clusterrolebinding gem-lb-traefik
Name: gem-lb-traefik
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: gem-lb-traefik
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount gem-lb-traefik default
$ kubectl describe clusterrole gem-lb-traefik
Name: gem-lb-traefik
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
endpoints [] [] [get list watch]
pods [] [] [get list watch]
secrets [] [] [get list watch]
services [] [] [get list watch]
ingresses.extensions [] [] [get list watch]
I still don't understand why I am getting a secret-inaccessibility error for the service account.
First of all, in this case you cannot check access to the secret using the --as gem-lb-traefik flag, because it tries to run the command as the user gem-lb-traefik, and you have no such user; you only have a ServiceAccount with the ClusterRole gem-lb-traefik. Moreover, using the --as <user> flag with any nonexistent user produces an error similar to yours:
Error from server (Forbidden): secrets "<secretname>" is forbidden: User "<user>" cannot get secrets in the namespace "<namespace>"
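To check the service account itself, impersonation needs the fully qualified service account name; something like this should work (assuming the account lives in the default namespace):
kubectl auth can-i get secrets -n default --as=system:serviceaccount:default:gem-lb-traefik
kubectl get secret dex-tls -o yaml -n default --as=system:serviceaccount:default:gem-lb-traefik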
So, as @Ignacio Millán mentioned, you need to check your Traefik settings and fix them according to the official documentation. You may have omitted the ServiceAccount from the Traefik DaemonSet spec. Also, check whether the Traefik DaemonSet is located in the same namespace as the ServiceAccount referenced by the ClusterRoleBinding.
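If the ServiceAccount is indeed missing from the DaemonSet, one way to add it is a patch along these lines (the DaemonSet name here is a placeholder; adjust it to yours):
kubectl -n default patch daemonset traefik-ingress-controller \
  --patch '{"spec": {"template": {"spec": {"serviceAccountName": "gem-lb-traefik"}}}}'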
I have my Kubernetes cluster running in GKE, and I want to run an application outside the cluster and talk to the Kubernetes API.
Using the password retrieved by running:
gcloud container clusters get-credentials cluster-2 --log-http
I am able to access the API with Basic authentication.
But I want to create multiple Kubernetes service accounts and configure them with required authorization and use appropriately.
So, I created service accounts and obtained the tokens using:
kubectl create serviceaccount sauser1
kubectl get serviceaccounts sauser1 -o yaml
kubectl get secret sauser1-token-<random-string-as-retrieved-from-previous-command> -o yaml
If I try to access the Kubernetes API with the obtained token using Bearer authentication then I get a 401 HTTP error. I thought that some permissions may have to be set for the service account, so based on the documentation here, I created below YAML file:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: sauser1
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
and tried to apply it using the below command:
kubectl apply -f default-sa-rolebinding.yaml
I got the following error:
clusterrolebinding "pod-reader" created
Error from server (Forbidden): error when creating "default-sa-rolebinding.yaml"
: clusterroles.rbac.authorization.k8s.io "pod-reader" is forbidden: attempt to g
rant extra privileges: [PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["g
et"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule
{Resources:["pods"], APIGroups:[""], Verbs:["list"]}] user=&{xyz#gmail.
com [system:authenticated] map[authenticator:[GKE]]} ownerrules=[PolicyRule{Res
ources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:[
"create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healt
hz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/versi
on"], Verbs:["get"]}] ruleResolutionErrors=[]
I dont know how to proceed from here. Is my approach correct or am I missing something here?
UPDATE: As per the post referred to by @JanosLenart in the comments, I modified the kubectl command and the above error is no longer observed. But accessing the API still gives a 401 error. The curl command that I am using is:
curl -k -1 -H "Authorization: Bearer <token>" https://<ip-address>/api/v1/namespaces/default/pods -v
@Janos pointed out the potential problem; however, I think you will need an actual Cloud IAM service account as well, because you said:
I want to run an application outside the cluster [...]
If you're authenticating to GKE from outside, I believe you can only use the Google IAM identities. (I might be wrong, if so, please let me know.)
In this case, what you need to do:
Create an IAM service account and download a JSON key file for it.
Set GOOGLE_APPLICATION_CREDENTIALS to point to that file.
either:
use RBAC as in your question to give permissions to the email address of the IAM service account
use IAM roles to grant the IAM service account access to the GKE API (e.g. the Container Developer role is usually sufficient)
Use kubectl against the cluster (make sure you have a .kube/config file with the cluster's IP/CA cert beforehand) with the environment variable above; it should work. A rough end-to-end sketch follows below.
YMMV
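Putting those steps together, a rough sequence might look like this (the account name, project, and role choice are placeholders, not taken from the question):
# 1. create the IAM service account and a key
gcloud iam service-accounts create k8s-api-client --project my-project
gcloud iam service-accounts keys create key.json \
  --iam-account k8s-api-client@my-project.iam.gserviceaccount.com
# 2. give it access to the GKE API (or bind RBAC to its email instead)
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:k8s-api-client@my-project.iam.gserviceaccount.com \
  --role=roles/container.developer
# 3. point the Google client libraries / kubectl gcp auth provider at the key
export GOOGLE_APPLICATION_CREDENTIALS=$PWD/key.json
kubectl get pods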
I managed to get it to work without using an actual Cloud IAM service account.
First, I decided to use a shell inside GKE's k8s cluster by running:
kubectl run curl-random --image=radial/busyboxplus:curl -i --tty --rm
Second, I made sure I decoded my token by copying it and running it through:
pbpaste | base64 -D
Third, I created the rolebinding for the serviceaccount, NOT the username.
kubectl create clusterrolebinding shaoserverless-cluster-admin-binding --clusterrole=cluster-admin --serviceaccount=default:shaoserverless
The third step was particularly tricky, but I got the hint from the error message, which used to be:
Unknown user \"system:serviceaccount:default:shaoserverless\"",
Lastly, this works:
curl -k -1 -H "Authorization: Bearer <token>" https://<ip-address>/api/v1/namespaces/default/pods -v
Hi, I created a project in OpenShift and attempted to add a turbine-server image to it. A pod was added, but I keep receiving the following error in the logs. I am very new to OpenShift and I would appreciate any advice or suggestions on how to resolve this error. I can supply any further information that is required.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/booking/pods/turbine-server-2-q7v8l . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
How to diagnose
Make sure you have configured a service account, role, and role binding to the account. Make sure the service account is set to the pod spec.
spec:
  serviceAccountName: your-service-account
Start monitoring the atomic-openshift-node service on the node where the pod is deployed, as well as the API server.
$ journalctl -b -f -u atomic-openshift-node
Run the pod and monitor the journald output. You would see "Forbidden".
Jan 28 18:27:38 <hostname> atomic-openshift-node[64298]:
logging error output: "Forbidden (user=system:serviceaccount:logging:appuser, verb=get, resource=nodes, subresource=proxy)"
This means the service account appuser does not have authorisation to perform get on the nodes/proxy resource. Update the role to allow the verb "get" on that resource.
- apiGroups: [""]
  resources:
  - "nodes"
  - "nodes/status"
  - "nodes/log"
  - "nodes/metrics"
  - "nodes/proxy"    # <----
  - "nodes/spec"
  - "nodes/stats"
  - "namespaces"
  - "events"
  - "services"
  - "pods"
  - "pods/status"
  verbs: ["get", "list", "watch"]
Note that some resources are not in the default legacy "" API group, as in Unable to list deployments resources using RBAC.
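For example, granting read access to Deployments needs the apps API group in the rule, roughly like this additional rule:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]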
How to verify the authorisations
To verify who can execute a verb against a resource, for example the patch verb against pods:
$ oadm policy who-can patch pod
Namespace: default
Verb: patch
Resource: pods
Users: auser
system:admin
system:serviceaccount:cicd:jenkins
Groups: system:cluster-admins
system:masters
OpenShift vs K8S
OpenShift has command oc policy or oadm policy:
oc policy add-role-to-user <role> <user-name>
oadm policy add-cluster-role-to-user <role> <user-name>
This is the same as a K8s RoleBinding. You can use K8s RBAC, but the API version in OpenShift needs to be v1 instead of rbac.authorization.k8s.io/v1 as in K8s.
References
Managing Authorization Policies
Using RBAC Authorization
User and Role Management
Hi, thank you for the replies - I was able to resolve the issue by executing the following commands using the oc command-line utility:
oc policy add-role-to-group view system:serviceaccounts -n <project>
oc policy add-role-to-group edit system:serviceaccounts -n <project>