If not specified, pods are run under a default service account.
How can I check what the default service account is authorized to do?
Do we need it to be mounted there with every pod?
If not, how can we disable this behavior at the namespace level or cluster level?
What other use cases should the default service account handle?
Can we use it as the service account to create and manage Kubernetes deployments in a namespace? For example, we will not use real user accounts to create things in the cluster, because users come and go.
Environment: Kubernetes 1.12, with RBAC
A default service account is automatically created for each namespace.
kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         1d
Service accounts can be added when required. Each pod is associated with exactly one service account but multiple pods can use the same service account.
A pod can only use one service account from the same namespace.
Service accounts are assigned to a pod by specifying the account's name in the pod manifest. If you don't assign one explicitly, the pod uses the default service account.
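For example, a minimal pod manifest that opts into a non-default service account could look like the sketch below; the account name foo-sa is purely illustrative and must already exist in the namespace (omit the field to fall back to default):

apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: foo
spec:
  serviceAccountName: foo-sa   # hypothetical account; drop this line to use the default service account
  containers:
  - name: main
    image: alpine:3.18
    command: ["sleep", "3600"]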
The default permissions for a service account don't allow it to list or modify any resources. By default, the default service account in a namespace has no permissions beyond those of an unauthenticated user, so pods can't even view cluster state, let alone modify it. It's up to you to grant them the appropriate permissions.
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services is forbidden: User \"system:serviceaccount:foo:default\" cannot list resource \"services\" in API group \"\" in the namespace \"foo\"",
  "reason": "Forbidden",
  "details": {
    "kind": "services"
  },
  "code": 403
}
As can be seen above, the default service account cannot list services.
But when it is given a proper Role and RoleBinding, like the ones below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: foo-role
  namespace: foo
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-foo
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
Now I am able to list the services resource:
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/foo/services",
    "resourceVersion": "457324"
  },
  "items": []
}
Giving all your service accounts the cluster-admin ClusterRole is a bad idea. It is best to give everyone only the permissions they need to do their job and not a single permission more.
It’s a good idea to create a specific service account for each pod
and then associate it with a tailor-made role or a ClusterRole through a
RoleBinding.
If one of your pods only needs to read pods while another also needs to modify them, create two different service accounts and make those pods use them by specifying the serviceAccountName property in the pod spec, as sketched below.
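A rough sketch of that pattern with kubectl, assuming you have already created two Roles named pod-reader and pod-editor (those names, the service account names, and the namespace foo are all placeholders):

kubectl create serviceaccount pod-reader-sa -n foo
kubectl create serviceaccount pod-editor-sa -n foo
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=foo:pod-reader-sa -n foo
kubectl create rolebinding pod-editor-binding --role=pod-editor --serviceaccount=foo:pod-editor-sa -n foo

Each pod then sets spec.serviceAccountName to the account that matches the access it needs.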
You can refer to the link below for an in-depth explanation.
Service account example with roles
You can check kubectl explain serviceaccount.automountServiceAccountToken and edit the service account:
kubectl edit serviceaccount default -o yaml
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-10-14T08:26:37Z
  name: default
  namespace: default
  resourceVersion: "459688"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: de71e624-cf8a-11e8-abce-0642c77524e8
secrets:
- name: default-token-q66j4
Once this change is done, whichever pod you spawn doesn't have a service account token mounted, as can be seen below.
kubectl exec tp -it bash
root@tp:/# cd /var/run/secrets/kubernetes.io/serviceaccount
bash: cd: /var/run/secrets/kubernetes.io/serviceaccount: No such file or directory
An application/deployment can run with a service account other than default by specifying it in the serviceAccountName field of a deployment configuration.
What a service account, or any other user, can do is determined by the roles it is given (bound to) - see RoleBindings or ClusterRoleBindings; the allowed verbs are listed per apiGroups and resources under a role's rules definitions.
The default service account doesn't seem to be given any roles by default. It is possible to grant a role to the default service account as described in #2 here.
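As an illustration (not something the linked post prescribes), a common low-risk starting point is to bind the built-in view ClusterRole to the default service account of a single namespace; my-namespace is a placeholder:

kubectl create rolebinding default-view \
  --clusterrole=view \
  --serviceaccount=my-namespace:default \
  --namespace=my-namespace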
According to this, "...In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account".
HTH
How can I check what the default service account is authorized to do?
There isn't an easy way, but auth can-i may be helpful. E.g.
$ kubectl auth can-i get pods --as=system:serviceaccount:default:default
no
For users there is auth can-i --list, but this does not seem to work with --as, which I suspect is a bug. In any case, you can run the above command with a few different verbs and the answer will be no in all cases (I only tried a few). Conclusion: it seems that the default service account has no permissions by default (at least in the cluster where I checked, where we have not configured it, AFAICT).
Do we need it to be mounted there with every pod?
Not sure what the question means.
If not, how can we disable this behavior on the namespace level or cluster level.
You can set automountServiceAccountToken: false on a service account or on an individual pod. Service accounts are per namespace, so when this is done on a service account, any pods in that namespace that use this account will be affected by the setting.
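For the pod-level variant, a minimal sketch (the pod name and image are placeholders); the value in the pod spec takes precedence over the service account's setting:

apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false   # no token is mounted into this pod's containers
  containers:
  - name: main
    image: alpine:3.18
    command: ["sleep", "3600"]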
What other use cases the default service account should be handling?
The default service account is a fallback; it is the SA that gets used if a pod does not specify one. So the default service account should have no privileges whatsoever. Why would a pod need to talk to the kube API by default?
Can we use it as a service account to create and manage the Kubernetes deployments in a namespace?
I don't recommend that; see the previous answer. Instead, you should create a service account (bound to an appropriate Role/ClusterRole) for each pod type that needs access to the API, following the principle of least privilege. All other pod types can use the default service account, which should not mount the SA token automatically (as sketched below) and should not be bound to any role.
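A possible sketch of that last point, disabling token automount on the default service account of one namespace (the namespace name is a placeholder):

kubectl patch serviceaccount default -n my-namespace \
  -p '{"automountServiceAccountToken": false}'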
kubectl auth can-i --list --as=system:serviceaccount:<namespace>:<serviceaccount> -n <namespace>
As a simple example, to check the default service account in the testns namespace:
kubectl auth can-i --list --as=system:serviceaccount:testns:default -n testns
Resources                                       Non-Resource URLs                     Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                                    []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                    []               [create]
                                                [/.well-known/openid-configuration]   []               [get]
                                                [/api/*]                              []               [get]
                                                [/api]                                []               [get]
                                                [ ... ]
                                                [/readyz]                             []               [get]
                                                [/version/]                           []               [get]
                                                [/version/]                           []               [get]
                                                [/version]                            []               [get]
                                                [/version]                            []               [get]
Related
Is it possible to gain k8s cluster access with a serviceaccount token?
My script does not have access to a kubeconfig file, however, it does have access to the service account token at /var/run/secrets/kubernetes.io/serviceaccount/token.
Here are the steps I tried, but it is not working:
kubectl config set-credentials sa-user --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-context sa-context --user=sa-user
But when the script ran "kubectl get rolebindings", I got the following error:
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:test:default" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "test"
Is it possible to gain k8s cluster access with a serviceaccount token?
Certainly, that's the point of a ServiceAccount token. The question you appear to be asking is "why does my default ServiceAccount not have all the privileges I want", which is a different problem. One will benefit from reading the fine manual on the topic.
If you want the default SA in the test NS to have privileges to read things in its NS, you must create a Role scoped to that NS and then declare the relationship explicitly; SAs do not automatically have those privileges:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test
  name: test-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: whatever-role-you-want
subjects:
- kind: ServiceAccount
  name: default
  namespace: test
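The Role referenced above (whatever-role-you-want) is yours to define; a hypothetical example that would let the script run kubectl get rolebindings in the test namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test
  name: whatever-role-you-want
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["get", "list"]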
but when the script ran "kubectl get pods" I get the following error: Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:test:default" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "test"
Presumably you mean you ran kubectl get rolebindings, because I would not expect running kubectl get pods to emit that error.
Yes, it is possible. For instance, if you log in to the K8s dashboard via a token, it works the same way.
Follow these steps:
Create a service account
$ kubectl -n <your-namespace-optional> create serviceaccount <service-account-name>
A role binding grants the permissions defined in a role to a user or set of users. You can use a predefined role or you can create your own. Check this link for more info. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-example
$ kubectl create clusterrolebinding <binding-name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service-account-name>
Get the token name
$ TOKENNAME=`kubectl -n <namespace> get serviceaccount/<service-account-name> -o jsonpath='{.secrets[0].name}'`
Finally, get the token and set the credentials
$ kubectl -n <namespace> get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode
$ kubectl config set-credentials <service-account-name> --token=<output from previous command>
$ kubectl config set-context --current --user=<service-account-name>
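As a quick sanity check (an optional extra step, assuming the cluster-admin binding above was created), you can ask the API server what the new identity may do; it should answer yes:

$ kubectl auth can-i get pods
yes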
If you follow these steps carefully your problem will be solved.
I am trying to deploy applications on GKE using REST APIs. However, the GKE documentation is all mixed up and unclear as to how to enable the Kubernetes REST API access.
Does anyone here have a clear idea about how to create a Deployment on Kubernetes cluster on Google Cloud?
If yes, I would love to know the detailed steps for enabling the same. Currently, this is what I get.
A GET call to https://xx.xx.xx.xx/apis/apps/v1/namespaces/default/deployments/nginx-1 gives the JSON output below despite a valid authorization token:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "deployments.apps \"nginx-1\" is forbidden: User \"system:serviceaccount:default:default\" cannot get resource \"deployments\" in API group \"apps\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "name": "nginx-1",
    "group": "apps",
    "kind": "deployments"
  },
  "code": 403
}
Administration APIs, however, seem to be enabled:
Following the instructions at this link and executing the commands below:
# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select name of cluster you want to interact with from above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server referring the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Get the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
# Explore the API with TOKEN
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
gives the desired output.
The service account default in the default namespace does not have the RBAC permissions to perform the get verb on the deployment resource in the default namespace.
Use the Role and RoleBinding below to grant the necessary permissions to the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployment-reader
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-deployment
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: ServiceAccount
  name: default # "name" is case sensitive
  namespace: default
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: deployment-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
To verify the permissions:
kubectl auth can-i get deployments --as=system:serviceaccount:default:default -n default
yes
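With the Role and RoleBinding applied, the original REST call from the question should also succeed; a sketch reusing the APISERVER and TOKEN variables built earlier:

curl -X GET "$APISERVER/apis/apps/v1/namespaces/default/deployments/nginx-1" \
  --header "Authorization: Bearer $TOKEN" --insecure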
I'm experiencing a strange behavior from newly created Kubernetes service accounts. It appears that their tokens provide limitless access permissions in our cluster.
If I create a new namespace, a new service account inside that namespace, and then use the service account's token in a new kube config, I am able to perform all actions in the cluster.
# SERVER is the only variable you'll need to change to replicate on your own cluster
SERVER=https://k8s-api.example.com
NAMESPACE=test-namespace
SERVICE_ACCOUNT=test-sa
# Create a new namespace and service account
kubectl create namespace "${NAMESPACE}"
kubectl create serviceaccount -n "${NAMESPACE}" "${SERVICE_ACCOUNT}"
SECRET_NAME=$(kubectl get serviceaccount "${SERVICE_ACCOUNT}" --namespace=test-namespace -o jsonpath='{.secrets[*].name}')
CA=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.ca\.crt}')
TOKEN=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.token}' | base64 --decode)
# Create the config file using the certificate authority and token from the newly created
# service account
echo "
apiVersion: v1
kind: Config
clusters:
- name: test-cluster
  cluster:
    certificate-authority-data: ${CA}
    server: ${SERVER}
contexts:
- name: test-context
  context:
    cluster: test-cluster
    namespace: ${NAMESPACE}
    user: ${SERVICE_ACCOUNT}
current-context: test-context
users:
- name: ${SERVICE_ACCOUNT}
  user:
    token: ${TOKEN}
" > config
Running that ^ as a shell script yields a config in the current directory. The problem is, using that file, I'm able to read and edit all resources in the cluster. I'd like the newly created service account to have no permissions unless I explicitly grant them via RBAC.
# All pods are shown, including kube-system pods
KUBECONFIG=./config kubectl get pods --all-namespaces
# And I can edit any of them
KUBECONFIG=./config kubectl edit pods -n kube-system some-pod
I haven't added any role bindings to the newly created service account, so I would expect it to receive access denied responses for all kubectl queries using the newly generated config.
Below is an example of the test-sa service account's JWT that's embedded in config:
{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "test-namespace",
  "kubernetes.io/serviceaccount/secret.name": "test-sa-token-fpfb4",
  "kubernetes.io/serviceaccount/service-account.name": "test-sa",
  "kubernetes.io/serviceaccount/service-account.uid": "7d2ecd36-b709-4299-9ec9-b3a0d754c770",
  "sub": "system:serviceaccount:test-namespace:test-sa"
}
Things to consider...
RBAC seems to be enabled in the cluster, as I see rbac.authorization.k8s.io/v1 and rbac.authorization.k8s.io/v1beta1 in the output of kubectl api-versions | grep rbac, as suggested in this post. Notably, kubectl cluster-info dump | grep authorization-mode, as suggested in another answer to the same question, shows no output. Could this suggest RBAC isn't actually enabled?
My user has cluster-admin role privileges, but I would not expect those to carry over to service accounts created with it.
We're running our cluster on GKE.
As far as I'm aware, we don't have any unorthodox RBAC roles or bindings in the cluster that would cause this. I could be missing something or am generally unaware of K8s RBAC configurations that would cause this.
Am I correct in my assumption that newly created service accounts should have extremely limited cluster access, and the above scenario shouldn't be possible without permissive role bindings being attached to the new service account? Any thoughts on what's going on here, or ways I can restrict the access of test-sa?
You can check the permissions of the service account by running the command:
kubectl auth can-i --list --as=system:serviceaccount:test-namespace:test-sa
If you see the output below, that is the very limited set of permissions a service account gets by default.
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
                                                [/api/*]            []               [get]
                                                [/api]              []               [get]
                                                [/apis/*]           []               [get]
                                                [/apis]             []               [get]
                                                [/healthz]          []               [get]
                                                [/healthz]          []               [get]
                                                [/livez]            []               [get]
                                                [/livez]            []               [get]
                                                [/openapi/*]        []               [get]
                                                [/openapi]          []               [get]
                                                [/readyz]           []               [get]
                                                [/readyz]           []               [get]
                                                [/version/]         []               [get]
                                                [/version/]         []               [get]
                                                [/version]          []               [get]
                                                [/version]          []               [get]
I could not reproduce your issue on three different K8S versions in my test lab (including v1.15.3, v1.14.10-gke.17, v1.11.7-gke.12 - with basic auth enabled).
Unfortunately, token-based log-in activities are not recorded in the Audit Logs of the Cloud Logging console for GKE clusters :(.
To my knowledge, only data-access operations that go through Google Cloud are recorded (IAM-based, i.e. kubectl using the google auth provider).
If your "test-sa" service account is somehow permitted to perform specific operations by RBAC, I would still try studying the Audit Logs of your GKE cluster. Maybe your service account is somehow being mapped to a Google service account and thus authorized.
You can always contact the official GCP support channel to troubleshoot your unusual case further.
It turns out an overly permissive cluster-admin ClusterRoleBinding was bound to the system:serviceaccounts group. This resulted in all service accounts in our cluster having cluster-admin privileges.
It seems like somewhere early on in the cluster's life the following ClusterRoleBinding was created:
kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
WARNING: Never apply this rule to your cluster ☝️
We have since removed this overly permissive rule and rightsized all service account permissions.
Thank you to the folks that provided useful answers and comments to this question. They were helpful in determining this issue. This was a very dangerous RBAC configuration and we are pleased to have it resolved.
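If you suspect the same misconfiguration in your own cluster, a quick check is to inspect, and if necessary delete, any such binding; the binding name below is the one from this cluster and may differ in yours:

# Inspect the roleRef and subjects of the suspicious binding
kubectl get clusterrolebinding serviceaccounts-cluster-admin -o yaml
# Remove it once you are sure nothing depends on it
kubectl delete clusterrolebinding serviceaccounts-cluster-admin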
When running (on GCP):
$ helm upgrade \
--values ./values.yaml \
--install \
--namespace "weaviate" \
"weaviate" \
weaviate.tgz
It returns:
UPGRADE FAILED
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
UPDATE: based on the solution:
$ vim rbac-config.yaml
Add to the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Run:
$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller --upgrade
Note: based on Helm v2.
tl;dr: Setup Helm with the appropriate authorization settings for your cluster, see https://v2.helm.sh/docs/using_helm/#role-based-access-control
Long Answer
Your experience is not specific to the Weaviate Helm chart; rather, it looks like Helm is not set up according to the cluster's authorization settings. Other Helm commands should fail with the same or a similar error.
The following error
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
means that the default service account in the kube-system namespace is lacking permissions. I assume you have installed Helm/Tiller in the kube-system namespace, as this is the default if no other arguments are specified on helm init. Since you haven't created a specific service account for Tiller to use, it defaults to the default service account.
Since you are mentioning that you are running on GCP, I assume this means you are using GKE. GKE by default has RBAC Authorization enabled. In an RBAC setting no one has any rights by default, all rights need to be explicitly granted.
The Helm docs list several options on how to make Helm/Tiller work in an RBAC-enabled setting. If the cluster has the sole purpose of running Weaviate, you can choose the simplest option: Service Account with cluster-admin role. The process described there essentially creates a dedicated service account for Tiller and adds the required ClusterRoleBinding to the existing cluster-admin ClusterRole. Note that this effectively makes Helm/Tiller an admin of the entire cluster.
If you are running a multi-tenant cluster and/or want to limit Tiller's permissions to a specific namespace, you need to choose one of the alternatives, as sketched below.
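As a rough sketch of that namespace-scoped alternative (names such as tiller-world and tiller-manager are placeholders; the Helm v2 docs linked above are the authoritative reference), you create a dedicated service account plus a Role/RoleBinding limited to one namespace and point helm init at them:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: tiller-world
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world

$ helm init --service-account tiller --tiller-namespace tiller-world

With this setup Tiller can only manage resources inside tiller-world.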
Hi, I installed a fresh Kubernetes cluster on Ubuntu 16.04 using this tutorial: https://blog.alexellis.io/kubernetes-in-10-minutes/
However, as soon as I try to access my API (for example: https://[server-ip]:6443/api/v1/namespaces) I get the following message:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:bootstrap:a916af\" cannot list namespaces at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "namespaces"
  },
  "code": 403
}
Does anyone know how to fix this or what I am doing wrong?
While I haven't run through that tutorial, the service account with which you're making the request doesn't have access to cluster-level information, like listing namespaces. RBAC (Role-Based Access Control) binds users with either a Role or a ClusterRole, which grant them different permissions. My guess is that service account shouldn't ever need to know what other namespaces exist, therefore doesn't have access to list them.
In terms of "fixing" this, aside from creating a serviceaccount/user with correct permissions, that tutorial makes several references to a config file stored at $HOME/.kube/config, which stores the credentials for a user that should have access to cluster-level resources, including listing namespaces. You could start there.
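If you prefer to grant just this one capability instead of the cluster-admin binding shown in the next answer, a minimal sketch could look like this; the role names are hypothetical and the binding subject is an assumption that should be replaced with whichever user or service account actually calls the API:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-lister           # hypothetical name
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-lister-binding   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: namespace-lister
subjects:
- kind: ServiceAccount             # or kind: User, depending on the caller
  name: default
  namespace: default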
You should bind the service account system:serviceaccount:default:default (which is the default account bound to pods) to the cluster-admin role. I solved it by creating a YAML file (named something like fabric8-rbac.yaml) with the following contents:
# NOTE: The service account `default:default` already exists in the k8s cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
- kind: ServiceAccount
  # Reference to upper's `metadata.name`
  name: default
  # Reference to upper's `metadata.namespace`
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f fabric8-rbac.yaml