GCP K8s (kubectl) error message (Required "container.leases.get" permission) - kubernetes

I am getting an error message after running some kubectl commands (GCP command line - gcloud). I have a K8S cluster created in GKE.
Example:
kubectl describe node gke_k8s_cluster_name
Error from server (Forbidden): leases.coordination.k8s.io "gke_k8s_cluster_name" is forbidden: User "MY_SERVICE_ACCOUNT" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": Required "container.leases.get" permission.
The point is that "container.leases.get" permission is not listed in IAM (as custom permissions or regular role).
How could I grant that permission to the service account in GCP?
thanks,
Jose

You may need to grant additional permissions to yourself on GCP IAM and GKE sides, for example:
PROJECT_ID=$(gcloud config get-value core/project)
USER_ID=$(gcloud config get-value core/account)
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=user:${USER_ID} --role=roles/container.admin
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user ${USER_ID}
See also GCP IAM & GKE RBAC integration.
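After granting the roles, you can confirm the fix took effect with `kubectl auth can-i`, using the resource and namespace taken from the error message above:

```shell
# Check whether the current user may read leases in the kube-node-lease namespace
kubectl auth can-i get leases.coordination.k8s.io --namespace=kube-node-lease
```

It prints `yes` or `no`, which is quicker than re-running the failing `describe` command.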

Related

Cannot change service account user gcloud gcp

I'm still wondering how I'm supposed to change the service account user. Let's say I have 2 service accounts (A and B), each with a different role in a different project. After being done with user B, when I want to change to service account A and access the resource, the command says
Error from server (Forbidden): pods is forbidden: User "user-B@project.iam.gserviceaccount.com" cannot list resource "pods" in API group "" in the namespace "default": requires one of ["container.pods.list"] permission(s).
I already changed my service account user with gcloud config set account [service-account], but gcloud still reads the other service account. Did I miss something?
Here's a contrived example of what I think you're doing:
# gcloud is using my regular User credentials
gcloud config get account
me@gmail.com
# Access GKE as me@gmail.com
kubectl get pods --namespace=default --output=name
pod/foo-c7b7995df-vxrmh
# Authenticate as a GCP Service Account with **no** permissions
EMAIL="{ACCOUNT}@{PROJECT}.iam.gserviceaccount.com"
gcloud auth activate-service-account ${EMAIL} \
--key-file=${KEY_FILE}
# gcloud is now using the Service Account credentials
gcloud config get account
${EMAIL}
# Using new GKE auth plugin
gke-gcloud-auth-plugin \
| jq -r .status.expirationTimestamp
2022-00-00T17:10:00Z
# Need to either delete the token
# Or wait until 17:10 for it to expire
# Then...
kubectl get pods --namespace=default
Error from server (Forbidden): pods is forbidden: User "{ACCOUNT}@{PROJECT}.iam.gserviceaccount.com" cannot list resource "pods" in API group "" in the namespace "default": requires one of ["container.pods.list"] permission(s).
One solution is to grant the GCP (!) Service Account one of the Kubernetes Engine roles that has permission to list Pods, i.e. container.pods.* which is part of roles/container.developer:
# Grant the Service Account Kubernetes Engine role
ROLE="roles/container.developer"
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
# Try again
kubectl get pods --namespace=default --output=name
pod/foo-c7b7995df-vxrmh
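If you are unsure which predefined role carries a given permission, you can inspect a role's permission list with `gcloud iam roles describe` (a quick sketch; the `tr`/`grep` filtering is just one way to slice the semicolon-separated output):

```shell
# List the permissions bundled in roles/container.developer and filter for pods
gcloud iam roles describe roles/container.developer \
  --format="value(includedPermissions)" | tr ';' '\n' | grep container.pods
```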

Unable to run 'kubectl' commands after using impersonation to fetch GKE cluster credentials

My Objective
I want to use GCP impersonation to fetch my GKE cluster credentials. And then I want to run kubectl commands.
Initial Context
I have a GCP project called rakib-example-project
I have 2 ServiceAccounts in the project called:
owner@rakib-example-project.iam.gserviceaccount.com
it has the project-wide roles/owner role - so it can do anything and everything inside the GCP project
executor@rakib-example-project.iam.gserviceaccount.com
it only has the project-wide roles/iam.serviceAccountTokenCreator role - so it can impersonate the owner ServiceAccount in the GCP project
I have 1 GKE cluster in the project called my-gke-cluster
The Problem
✅ I have authenticated as the executor ServiceAccount:
$ gcloud auth activate-service-account --key-file=my_executor_sa_key.json
Activated service account credentials for: [executor@rakib-example-project.iam.gserviceaccount.com]
✅ I have fetched GKE cluster credentials by impersonating the owner:
$ gcloud container clusters get-credentials my-gke-cluster \
--zone asia-southeast1-a \
--project rakib-example-project \
--impersonate-service-account=owner@rakib-example-project.iam.gserviceaccount.com
WARNING: This command is using service account impersonation. All API calls will be executed as [owner@rakib-example-project.iam.gserviceaccount.com].
WARNING: This command is using service account impersonation. All API calls will be executed as [owner@rakib-example-project.iam.gserviceaccount.com].
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-gke-cluster.
❌ I am failing to list cluster nodes due to missing container.nodes.list permission:
$ kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "executor@rakib-example-project.iam.gserviceaccount.com" cannot list resource "nodes" in API group "" at the cluster scope: requires one of ["container.nodes.list"] permission(s).
But I have already impersonated the Owner ServiceAccount. Why would it still have missing permissions? 😧😧😧
My Limitations
It works well if I grant my executor ServiceAccount the roles/container.admin role. However, I am not allowed to grant such roles to my executor ServiceAccount due to compliance requirements. I can only impersonate the owner ServiceAccount and THEN do whatever I want through it - not directly.
If you have a look at your kubeconfig file at ~/.kube/config, you can see the list of users with their authorization config and secrets, such as:
- name: gke_gdglyon-cloudrun_us-central1-c_knative
  user:
    auth-provider:
      config:
        access-token: ya29.<secret>-9XQmaTQodj4kS39w
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: "2020-08-25T17:48:39Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
You see external references (expiry-key and token-key) and a cmd-path. The command path is interesting, because it is called whenever a new token needs to be generated.
However, you don't see any mention of the impersonation. You have to add it to the gcloud configuration so it is used by default. For this, set it like this:
gcloud config set auth/impersonate_service_account owner@rakib-example-project.iam.gserviceaccount.com
Now every use of the gcloud CLI will use the impersonated service account, which is what you need to generate a valid access_token to reach your GKE cluster.
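When you are finished, remember to clear the setting, otherwise every subsequent gcloud call keeps running as the owner ServiceAccount:

```shell
# Stop impersonating; gcloud falls back to the active account's own credentials
gcloud config unset auth/impersonate_service_account
```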

Problem deploying K8s with gitlab runner get an error

I changed something and deployed on a new cluster, and then I got this error even though I didn't change anything in the code. Has anybody seen it before?
Error from server (Forbidden): error when retrieving current configuration of:
"/builds/dropcunt/nettmoster.com/deployment/webapp.yml": ingresses.extensions "nettmoster.comn-273414" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "ingresses" in API group "extensions" in the namespace "nettmoster-com-9777808"
As suggested, I ran kubectl auth can-i --list --as=system:serviceaccount:gitlab-managed-apps:default
It returns:
This is an RBAC problem. The service account system:serviceaccount:gitlab-managed-apps:default does not have permission to get ingress resources in the new cluster.
You can compare what permissions a service account has by running the command below in both clusters:
kubectl auth can-i --list --as=system:serviceaccount:gitlab-managed-apps:default
Run the commands below to grant the permission via RBAC:
kubectl create role ingress-reader --verb=get,list,watch,update --resource=ingress
kubectl create rolebinding ingress-reader-role --role=ingress-reader --serviceaccount=gitlab-managed-apps:default
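You can then re-check the permission while impersonating the runner's ServiceAccount (the namespace is the one from the error message):

```shell
# Verify the new binding works for the GitLab runner's ServiceAccount
kubectl auth can-i get ingresses \
  --as=system:serviceaccount:gitlab-managed-apps:default \
  --namespace=nettmoster-com-9777808
```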

RBAC Error in Kubernetes

I have deployed Kubernetes v1.8 at my workplace. Three months ago I created roles for admin and view access to namespaces. Initially, RBAC worked as per the access given to the users. Now RBAC is no longer being enforced: everyone who has access to the cluster has cluster-admin access.
Can you suggest what might have changed, or what needs to be fixed?
Ensure the RBAC authorization mode is still being used (RBAC is part of the apiserver's --authorization-mode=… argument).
If it is, then check for a ClusterRoleBinding that grants the cluster-admin role to all authenticated users:
kubectl get clusterrolebindings -o yaml | grep -C 20 system:authenticated
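To see at a glance which subjects are bound to cluster-admin, a custom-columns query can also help (a sketch; the JSONPath column expressions assume the usual ClusterRoleBinding layout):

```shell
# Show each ClusterRoleBinding's role and subjects, then keep the cluster-admin rows
kubectl get clusterrolebindings \
  -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name' \
  | grep -E 'NAME|cluster-admin'
```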

Login to GKE via service account with token

I am trying to access my Kubernetes cluster on Google Cloud with a service account, but I am not able to make this work. I have a running system with some pods and an ingress. I want to be able to update the images of deployments.
I would like to use something like this (remotely):
kubectl config set-cluster cluster --server="<IP>" --insecure-skip-tls-verify=true
kubectl config set-credentials foo --token="<TOKEN>"
kubectl config set-context my-context --cluster=cluster --user=foo --namespace=default
kubectl config use-context my-context
kubectl set image deployment/my-deployment boo=eu.gcr.io/project-123456/image:v1
So I created the service account and then get the secret token:
kubectl create serviceaccount foo
kubectl get secret foo-token-gqvgn -o yaml
But, when I try to update the image in any deployment, I receive:
error: You must be logged in to the server (Unauthorized)
As the API IP address, I use the address shown in the GKE console as the cluster endpoint.
Any suggestions? Thanks.
I have tried to recreate your problem.
Steps I have followed
kubectl create serviceaccount foo
kubectl get secret foo-token-* -o yaml
Then, I have tried to do what you have done
As the token, I used the base64-decoded value of the token field from the secret.
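For reference, the extract-and-decode step can be done in one line (the secret name `foo-token-gqvgn` is the one from the question):

```shell
# Extract the ServiceAccount token from the secret and base64-decode it
kubectl get secret foo-token-gqvgn \
  --output="jsonpath={.data.token}" | base64 --decode
```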
Then I tried this:
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:foo" cannot list pods in the namespace "default": Unknown user "system:serviceaccount:default:foo"
This gave an error as expected, because I need to grant permission to this ServiceAccount.
How can I grant permission to this ServiceAccount? I need to create a ClusterRole & ClusterRoleBinding with the necessary permissions.
Read more about role-based access control.
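A minimal sketch of such a grant, assuming read-only access to pods is enough (`pod-reader` and the binding name are made up for the example):

```shell
# Create a cluster role that can read pods, and bind it to the foo ServiceAccount
kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding foo-pod-reader \
  --clusterrole=pod-reader --serviceaccount=default:foo
```

After this, `kubectl get pods` with the foo token should succeed instead of returning Forbidden.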
Alternatively, I can do another thing:
$ kubectl config set-credentials foo --username="admin" --password="$PASSWORD"
This will grant you admin authorization.
You need to provide the cluster credentials:
Username: admin
Password: -----
You will find this info in the GCP Console -> Kubernetes Engine -> {cluster} -> Show credentials.