I have full admin access to a GKE cluster, but I want to be able to create a kubernetes context with just read only privileges. This way I can prevent myself from accidentally messing with the cluster. However, I still want to be able to switch into a mode with full admin access temporarily when I need to make changes (I would probably use cloud shell for this to fully distinguish the two)
I haven't found much documentation about this - it seems I can set up roles based on my email, but not two roles for one user.
Is there any way to do this? Or any other way to prevent fat-finger deleting prod?
There are a few ways to do this with GKE. A context in your KUBECONFIG consists of a cluster and a user. Since you want to be pointing at the same cluster, it's the user that needs to change. Permissions for what actions users can perform on various resources can be controlled in a couple ways, namely via Cloud IAM policies or via Kubernetes RBAC. The former applies project-wide, so unless you want to create a subject that has read-only access to all clusters in your project, rather than a specific cluster, it's preferable to use the more narrowly-scoped Kubernetes RBAC.
The following types of subjects can authenticate with a GKE cluster and have Kubernetes RBAC policies applied to them (see here):
a registered (human) GCP user
a Kubernetes ServiceAccount
a GCloud IAM service account
a member of a G Suite Google Group
Since you're not going to register another human to accomplish this read-only access pattern and G Suite Google Groups are probably overkill, your options are a Kubernetes ServiceAccount or a GCloud IAM service account. For this answer, we'll go with the latter.
Here are the steps:
Create a GCloud IAM service account in the same project as your Kubernetes cluster.
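If it helps, the commands for that look roughly like this (the account name is a placeholder; the key file is the one you pass to gcloud auth activate-service-account in the next steps):
# placeholder names/paths - adjust for your project
$ gcloud iam service-accounts create <service-account-name> --display-name "GKE read-only"
$ gcloud iam service-accounts keys create </path/to/service/key.json> \
    --iam-account <service-account-name>@<project-id>.iam.gserviceaccount.com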
Create a local gcloud configuration to avoid cluttering your default one. Just as you want to create a new KUBECONFIG context rather than modifying the user of your current context, this does the equivalent thing but for gcloud itself rather than kubectl. Run the command gcloud config configurations create <configuration-name>.
Associate this configuration with your GCloud IAM service account: gcloud auth activate-service-account <service_account_email> --key-file=</path/to/service/key.json>.
Add a context and user to your KUBECONFIG file so that you can authenticate to your GKE cluster as this GCloud IAM service account as follows:
contexts:
- ...
- ...
- name: <cluster-name>-read-only
  context:
    cluster: <cluster-name>
    user: <service-account-name>
users:
- ...
- ...
- name: <service-account-name>
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json --configuration=<configuration-name>
        cmd-path: </path/to/gcloud/cli>
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
Add a ClusterRoleBinding so that this subject has read-only access to the cluster:
$ cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <any-name>
subjects:
- kind: User
  name: <service-account-email>
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
Try it out:
$ kubectl config use-context <cluster-name>-read-only
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
Error from server (Forbidden): namespaces is forbidden: User "<service-account-email>" cannot create resource "namespaces" in API group "" at the cluster scope: Required "container.namespaces.create" permission.
$ kubectl config use-context <original-context>
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
namespace/foo created
Related
Is it possible to gain k8s cluster access with a serviceaccount token?
My script does not have access to a kubeconfig file, however, it does have access to the service account token at /var/run/secrets/kubernetes.io/serviceaccount/token.
Here are the steps I tried, but it is not working.
kubectl config set-credentials sa-user --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-context sa-context --user=sa-user
but when the script runs "kubectl get rolebindings", I get the following error:
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:test:default" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "test"
Is it possible to gain k8s cluster access with a serviceaccount token?
Certainly, that's the point of a ServiceAccount token. The question you appear to be asking is "why does my default ServiceAccount not have all the privileges I want", which is a different problem. One will benefit from reading the fine manual on the topic.
If you want the default SA in the test NS to have privileges to read things in its NS, you must create a Role scoped to that NS and then declare the relationship explicitly. SAs do not automatically have those privileges.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test
  name: test-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: whatever-role-you-want
subjects:
- kind: ServiceAccount
  name: default
  namespace: test
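For completeness, a minimal Role for the binding above to reference, just enough to let the default SA list rolebindings in its namespace, might look something like this (the role name is only an example):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test
  name: whatever-role-you-want
rules:
- apiGroups: ["rbac.authorization.k8s.io"]   # rolebindings live in this API group
  resources: ["rolebindings"]
  verbs: ["get", "list"]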
but when the script ran "kubectl get pods" I get the following error: Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:test:default" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "test"
Presumably you mean you ran kubectl get rolebindings, because I would not expect running kubectl get pods to emit that error.
Yes, it is possible. For instance, if you log in to the Kubernetes dashboard with a token, it uses the same mechanism.
Follow these steps;
Create a service account
$ kubectl -n <your-namespace-optional> create serviceaccount <service-account-name>
A role binding grants the permissions defined in a role to a user or set of users. You can use a predefined role or you can create your own. Check this link for more info. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-example
$ kubectl create clusterrolebinding <binding-name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service-account-name>
Get the token name
$ TOKENNAME=`kubectl -n <namespace> get serviceaccount/<service-account-name> -o jsonpath='{.secrets[0].name}'`
Finally, get the token and set the credentials
$ kubectl -n <namespace> get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode
$ kubectl config set-credentials <service-account-name> --token=<output from previous command>
$ kubectl config set-context --current --user=<service-account-name>
If you follow these steps carefully your problem will be solved.
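As a quick sanity check (using the same placeholder names as above), you can ask the API server what the service account is allowed to do:
$ kubectl auth can-i list pods --as=system:serviceaccount:<namespace>:<service-account-name>
$ kubectl auth can-i '*' '*' --as=system:serviceaccount:<namespace>:<service-account-name>   # should print "yes" with cluster-admin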
I installed kubernetes-external-secrets with Helm on GKE.
GKE: 1.16.15-gke.6000 on asia-northeast1
helm app version 6.2.0
using Workload Identity as the documentation describes
For Workload Identity, the service account I bind with the command below (my-secrets-sa@$PROJECT.iam.gserviceaccount.com) has the roles/secretmanager.admin role, which seems necessary for using Google Secret Manager:
gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser --member "serviceAccount:$CLUSTER_PROJECT.svc.id.goog[$SECRETS_NAMESPACE/kubernetes-external-secrets]" my-secrets-sa@$PROJECT.iam.gserviceaccount.com
Workload Identity looks set up correctly, because checking the service account from a pod on GKE shows the correct service account:
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_workload_identity_on_a_new_cluster
I created a pod in the cluster and checked auth inside it; it shows my-secrets-sa@$PROJECT.iam.gserviceaccount.com:
$ kubectl run -it --image google/cloud-sdk:slim --serviceaccount ksa-name --namespace k8s-namespace workload-identity-test
$ gcloud auth list
But when I create an ExternalSecret, it shows this error:
ERROR, 7 PERMISSION_DENIED: Permission 'secretmanager.versions.access' denied for resource 'projects/project-id/secrets/my-gsm-secret-name/versions/latest' (or it may not exist).
The secret my-gsm-secret-name itself exists in Secret Manager, so it should not "not exist".
Also, the permission should be set correctly via Workload Identity.
This is the ExternalSecret I defined:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: gcp-secrets-manager-example # name of the k8s external secret and the k8s secret
spec:
  backendType: gcpSecretsManager
  projectId: my-gsm-secret-project
  data:
  - key: my-gsm-secret-name # name of the GCP secret
    name: my-kubernetes-secret-name # key name in the k8s secret
    version: latest # version of the GCP secret
    property: value # name of the field in the GCP secret
Has anyone had similar problem before ?
Thank you
Here is the whole set of commands:
create a cluster with workload-pool.
$ gcloud container clusters create cluster --region asia-northeast1 --node-locations asia-northeast1-a --num-nodes 1 --preemptible --workload-pool=my-project.svc.id.goog
create kubernetes service account.
$ kubectl create serviceaccount --namespace default ksa
binding kubernetes service account & service account
$ gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[default/ksa]" \
    my-secrets-sa@my-project.iam.gserviceaccount.com
add annotation
$ kubectl annotate serviceaccount \
    --namespace default \
    ksa \
    iam.gke.io/gcp-service-account=my-secrets-sa@my-project.iam.gserviceaccount.com
install with helm
$ helm install my-release external-secrets/kubernetes-external-secrets
create external secret
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: gcp-secrets-manager-example # name of the k8s external secret and the k8s secret
spec:
  backendType: gcpSecretsManager
  projectId: my-gsm-secret-project
  data:
  - key: my-gsm-secret-name # name of the GCP secret
    name: my-kubernetes-secret-name # key name in the k8s secret
    version: latest # version of the GCP secret
    property: value # name of the field in the GCP secret
$ kubectl apply -f external-secret.yaml
I noticed that I had used a different Kubernetes service account.
When installing the Helm chart, a new Kubernetes service account my-release-kubernetes-external-secrets was created, and the service/pods must be running under this service account.
So I should bind my-release-kubernetes-external-secrets to the Google service account instead.
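For reference, the corrected binding and annotation presumably look like this, i.e. the same commands as above but with the Helm-created service account name:
$ gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[default/my-release-kubernetes-external-secrets]" \
    my-secrets-sa@my-project.iam.gserviceaccount.com
$ kubectl annotate serviceaccount \
    --namespace default \
    my-release-kubernetes-external-secrets \
    iam.gke.io/gcp-service-account=my-secrets-sa@my-project.iam.gserviceaccount.com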
Now, it works well.
Thank you @matt_j, @norbjd
What are the practical differences between the two? When should I choose one over the other?
For example, if I'd like to give a developer in my project access to just view the logs of a pod, it seems either a service account or a context could be assigned these permissions via a RoleBinding.
What is a Service Account?
From Docs
User accounts are for humans. Service accounts are for processes,
which run in pods.
User accounts are intended to be global...Service
accounts are namespaced.
Context
A context relates to the kubeconfig file (~/.kube/config). As you know, the kubeconfig file is a YAML file; the contexts section holds your user/token and cluster references. A context is really useful when you have multiple clusters: you can define all your clusters and users in a single kubeconfig file, then switch between them with the help of contexts (example: kubectl config --kubeconfig=config-demo use-context dev-frontend).
From Docs
apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://1.2.3.4
  name: development
- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: scratch
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: development
    namespace: storage
    user: developer
  name: dev-storage
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch
current-context: ""
kind: Config
preferences: {}
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file
- name: experimenter
  user:
    password: some-password
    username: exp
As you can see above, there are three contexts, each holding references to a cluster and a user.
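Listing and switching between them is then just:
$ kubectl config get-contexts
$ kubectl config use-context dev-frontend
$ kubectl config current-context   # prints dev-frontend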
..if I'd like to give a developer in my project access to just view the
logs of a pod. It seems both a service account or a context could be
assigned these permissions via a RoleBinding.
That's correct: you need to create a service account, a Role (or ClusterRole), a RoleBinding (or ClusterRoleBinding), and then generate a kubeconfig file that contains the service account token and give it to your developer.
I have a script to generate a kubeconfig file; it takes the service account name as an argument. Feel free to check it out.
UPDATE:
If you want to create Role and RoleBinding, this might help
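For the pod-logs example from the question, a rough sketch might look like this (the namespace, role, and service account names are made up for illustration; pods/log is the subresource that kubectl logs reads):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-log-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]   # pods/log covers kubectl logs
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: pod-log-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-log-reader
subjects:
- kind: ServiceAccount
  name: developer-sa   # hypothetical service account for the developer
  namespace: dev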
Service Account: A service account represents an identity for processes that run in a pod. When a process is authenticated through a service account, it can contact the API server and access cluster resources. If a pod doesn’t have an assigned service account, it gets the default service account.
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace and you can access the API from inside a pod using automatically mounted service account credentials.
Context: A context is just a set of access parameters that contains a Kubernetes cluster, a user, and a namespace.
The current context is the cluster that is currently the default for kubectl and all kubectl commands run against that cluster.
They are two different concepts. A context most probably refers to an abstraction relating to the kubectl configuration, where a context can be associated with a service account.
For some reason I assumed a context was just another method of authentication.
I have my Kubernetes cluster running in GKE. I want to run an application outside the cluster and talk to the Kubernetes API.
Using the password retrieved by running:
gcloud container clusters get-credentials cluster-2 --log-http
I am able to access the API with Basic authentication.
But I want to create multiple Kubernetes service accounts, configure them with the required authorization, and use them appropriately.
So, I created service accounts and obtained the tokens using:
kubectl create serviceaccount sauser1
kubectl get serviceaccounts sauser1 -o yaml
kubectl get secret sauser1-token-<random-string-as-retrieved-from-previous-command> -o yaml
If I try to access the Kubernetes API with the obtained token using Bearer authentication then I get a 401 HTTP error. I thought that some permissions may have to be set for the service account, so based on the documentation here, I created the below YAML file:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: sauser1
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
and tried to apply it using the below command:
kubectl apply -f default-sa-rolebinding.yaml
I got the following error:
clusterrolebinding "pod-reader" created
Error from server (Forbidden): error when creating "default-sa-rolebinding.yaml"
: clusterroles.rbac.authorization.k8s.io "pod-reader" is forbidden: attempt to g
rant extra privileges: [PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["g
et"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule
{Resources:["pods"], APIGroups:[""], Verbs:["list"]}] user=&{xyz#gmail.
com [system:authenticated] map[authenticator:[GKE]]} ownerrules=[PolicyRule{Res
ources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:[
"create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healt
hz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/versi
on"], Verbs:["get"]}] ruleResolutionErrors=[]
I don't know how to proceed from here. Is my approach correct or am I missing something?
UPDATE: As per the post referred to by @JanosLenart in the comments, I modified the kubectl command and the above error is no longer observed. But accessing the API still gives a 401 error. The curl command that I am using is:
curl -k -1 -H "Authorization: Bearer <token>" https://<ip-address>/api/v1/namespaces/default/pods -v
@Janos pointed out the potential problem; however, I think you will need an actual Cloud IAM Service Account as well, because you said:
I want to run an application outside the cluster [...]
If you're authenticating to GKE from outside, I believe you can only use the Google IAM identities. (I might be wrong, if so, please let me know.)
In this case, what you need to do:
Create an IAM service account and download a JSON key file for it.
set GOOGLE_APPLICATION_CREDENTIALS to that file.
either:
use RBAC like in your question to give permissions to the email address of the IAM Service Account
use IAM roles to grant the IAM Service Account access to the GKE API (e.g. the Container Developer role is usually sufficient)
Use kubectl command against the cluster (make sure you have a .kube/config file with the cluster's IP/CA cert beforehand) with the environment variable above, it should work.
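Roughly, and assuming placeholder names throughout, those steps could look like:
$ gcloud iam service-accounts create my-app --project <project-id>
$ gcloud iam service-accounts keys create key.json \
    --iam-account my-app@<project-id>.iam.gserviceaccount.com
$ gcloud projects add-iam-policy-binding <project-id> \
    --member serviceAccount:my-app@<project-id>.iam.gserviceaccount.com \
    --role roles/container.developer   # "Kubernetes Engine Developer"
$ export GOOGLE_APPLICATION_CREDENTIALS=$PWD/key.json
$ gcloud container clusters get-credentials <cluster-name> --zone <zone>
$ kubectl get pods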
YMMV
I managed to get it to work without using an actual Cloud IAM Service Account.
First, I decided to use a shell inside GKE's k8s cluster by running
kubectl run curl-random --image=radial/busyboxplus:curl -i --tty --rm
Second, I made sure I decoded my token by copying it and then running it through
pbpaste | base64 -D
Third, I created the rolebinding for the serviceaccount, NOT the username.
kubectl create clusterrolebinding shaoserverless-cluster-admin-binding --clusterrole=cluster-admin --serviceaccount=default:shaoserverless
The third step was particularly tricky, but I got the hint from the error message, which used to be:
Unknown user \"system:serviceaccount:default:shaoserverless\"",
Lastly, this works:
curl -k -1 -H "Authorization: Bearer <token>" https://<ip-address>/api/v1/namespaces/default/pods -v
Even after granting cluster roles to the user, I get Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
I have the following set for the user in the ~/.kube/config file:
- context:
    cluster: kubernetes
    user: user@gmail.com
  name: user@kubernetes
and the below added to admin.yaml to create cluster-role and cluster-rolebindings:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: User
  name: nsp@gmail.com
roleRef:
  kind: ClusterRole
  name: admin-role
When I try the command, I still get the error.
kubectl --username=user#gmail.com get nodes
Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
Can someone please suggest how to proceed?
Your problem is not with your ClusterRoleBindings but rather with user authentication. Kubernetes tells you that it identified you as system:anonymous (which is similar to *NIX's nobody) and not nsp@example.com (to which you applied your binding).
In your specific case the reason for that is that the username flag uses HTTP Basic authentication and needs the password flag to actually do anything. But even if you did supply the password, you'd still need to actually tell the API server to accept that specific user.
Have a look at this part of the Kubernetes documentation which deals with different methods of authentication. For the username and password authentication to work, you'd want to look at the Static Password File section, but I would actually recommend you go with X509 Client Certs since they are more secure and are operationally much simpler (no secrets on the Server, no state to replicate between API servers).
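If you do go the client-certificate route, wiring the signed cert into kubectl is just the following (the paths and names are placeholders; the certificate's CN becomes the username that RBAC sees):
$ kubectl config set-credentials nsp --client-certificate=/path/to/nsp.crt --client-key=/path/to/nsp.key
$ kubectl config set-context nsp@kubernetes --cluster=kubernetes --user=nsp
$ kubectl config use-context nsp@kubernetes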
In my case I was receiving a similar error due to RBAC.
Error
root@k8master:~# kubectl cluster-info dump --insecure-skip-tls-verify=true
Error from server (Forbidden): nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Solution:
As a solution, I did the following to reconfigure my user's access to the cluster:
cd $HOME
sudo whoami
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
After doing the above, when I check the cluster info I get the expected result:
root@k8master:~# kubectl cluster-info
Kubernetes master is running at https://192.168.10.15:6443
KubeDNS is running at https://192.168.10.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy