Unable to run 'kubectl' commands after using impersonation to fetch GKE cluster credentials - kubernetes

My Objective
I want to use GCP impersonation to fetch my GKE cluster credentials, and then run kubectl commands.
Initial Context
I have a GCP project called rakib-example-project
I have 2 ServiceAccounts in the project called:
owner@rakib-example-project.iam.gserviceaccount.com
it has the project-wide roles/owner role - so it can do anything and everything inside the GCP project
executor@rakib-example-project.iam.gserviceaccount.com
it only has the project-wide roles/iam.serviceAccountTokenCreator role - so it can impersonate the owner ServiceAccount in the GCP project
I have 1 GKE cluster in the project called my-gke-cluster
The Problem
✅ I have authenticated as the executor ServiceAccount:
$ gcloud auth activate-service-account --key-file=my_executor_sa_key.json
Activated service account credentials for: [executor@rakib-example-project.iam.gserviceaccount.com]
✅ I have fetched GKE cluster credentials by impersonating the owner:
$ gcloud container clusters get-credentials my-gke-cluster \
--zone asia-southeast1-a \
--project rakib-example-project \
--impersonate-service-account=owner@rakib-example-project.iam.gserviceaccount.com
WARNING: This command is using service account impersonation. All API calls will be executed as [owner@rakib-example-project.iam.gserviceaccount.com].
WARNING: This command is using service account impersonation. All API calls will be executed as [owner@rakib-example-project.iam.gserviceaccount.com].
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-gke-cluster.
❌ I am failing to list cluster nodes due to missing container.nodes.list permission:
$ kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "executor@rakib-example-project.iam.gserviceaccount.com" cannot list resource "nodes" in API group "" at the cluster scope: requires one of ["container.nodes.list"] permission(s).
But I have already impersonated the Owner ServiceAccount. Why would it still have missing permissions? 😧😧😧
My Limitations
It works well if I grant my executor ServiceAccount the roles/container.admin role. However, I am not allowed to grant such roles to my executor ServiceAccount due to compliance requirements. I can only impersonate the owner ServiceAccount and THEN do whatever I want through it - not directly.

If you have a look at your kubeconfig file at ~/.kube/config, you can see the list of authorizations and secrets, such as:
- name: gke_gdglyon-cloudrun_us-central1-c_knative
  user:
    auth-provider:
      config:
        access-token: ya29.<secret>-9XQmaTQodj4kS39w
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: "2020-08-25T17:48:39Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
You see external references (expiry-key and token-key) and a cmd-path. The command path is interesting because it is called whenever a new token needs to be generated.
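You can see what that helper returns by running it yourself. A rough sketch of the output, with the token redacted and unrelated fields elided; the token-key and expiry-key JSONPaths above point into exactly this structure:
$ gcloud config config-helper --format=json
{
  "configuration": { ... },
  "credential": {
    "access_token": "ya29.<secret>",
    "token_expiry": "2020-08-25T17:48:39Z"
  }
}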
However, you don't see any mention of the impersonation. You have to add it to the gcloud configuration so that it is used by default. For this, set it in your config like this:
gcloud config set auth/impersonate_service_account owner@rakib-example-project.iam.gserviceaccount.com
Now, every use of the gcloud CLI will use the impersonated service account, which is what you need to generate a valid access_token to reach your GKE cluster.
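You can check the property and undo it later (a quick sketch using standard gcloud config commands):
$ gcloud config get-value auth/impersonate_service_account
owner@rakib-example-project.iam.gserviceaccount.com
$ kubectl get nodes
# works now - the helper mints tokens as the owner ServiceAccount
# when you are done impersonating:
$ gcloud config unset auth/impersonate_service_account
One caveat: an access-token already cached in the kubeconfig keeps being used until its expiry, so the change may not take effect immediately.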

Related

How do I grant permission to my Kubernetes cluster to pull images from gcr.io?

In my GCP Container Registry I have the visibility set to Private.
When I create a pod on my cluster, the pod status ends in ImagePullBackOff, and when I describe the pod I see:
Failed to pull image "gcr.io/REDACTED": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/REDACTED, repository does not exist or may require 'docker login': denied: Permission denied for "v11" from request "/v2/REDACTED/manifests/v11".
I am certainly logged in.
docker login
Authenticating with existing credentials...
Login Succeeded
Now if I enable public access on my Container Registry things work fine and the pod deploys correctly. But I don't want my repository to be public. What is the correct way to keep my container repository private and still be able to deploy? I'm pretty sure this used to work a couple of weeks ago, unless I messed up something with my service account, although I don't know how to find out which service account is being used for these permissions.
If your GKE version is > 1.15, the Container Registry is in the same project, and GKE uses the default Compute Engine service account (SA), it should work out of the box.
If you are running the registry in another project, or using a different service account, you should give the SA the right permissions (e.g., roles/artifactregistry.reader).
A step-by-step tutorial covering all the different cases is available in the official documentation: https://cloud.google.com/artifact-registry/docs/access-control#gcp
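For the cross-project case, the grant could look roughly like this (a sketch: PROJECT_ID and REGISTRY_PROJECT are placeholder variables, and the node pools are assumed to run as the default Compute Engine SA):
PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} --format='value(projectNumber)')
gcloud projects add-iam-policy-binding ${REGISTRY_PROJECT} \
    --member=serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
    --role=roles/artifactregistry.reader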
To use gcr.io or any other private artifact registry, you'll need to create a Secret of type docker-registry in the k8s cluster. The secret will contain the credential details of your registry:
kubectl create secret docker-registry <secret-name> \
--docker-server=<server-name> \
--docker-username=<user-name> \
--docker-password=<user-password> \
--docker-email=<user-email-id>
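For gcr.io specifically, a commonly used variant of the command above is to pass a service account key file as the password (a sketch; gcr-pull-secret and key.json are placeholders):
kubectl create secret docker-registry gcr-pull-secret \
    --docker-server=gcr.io \
    --docker-username=_json_key \
    --docker-password="$(cat key.json)" \
    --docker-email=unused@example.com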
After this, you will need to specify the above secret in the imagePullSecrets property of your manifest so that k8s is able to authenticate and pull the image.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1
    image: gcr.io/<project-id>/pod1:latest
  imagePullSecrets:
  - name: <secret-name>
Check out this tutorial from container-solutions and the official k8s docs.
GKE uses the service account attached to the node pools to grant access to the registry; however, you must also make sure that the OAuth scope for your cluster includes https://www.googleapis.com/auth/devstorage.read_only.
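If in doubt, one way to inspect the scopes of an existing node pool (a sketch; POOL, CLUSTER, and ZONE are placeholders):
gcloud container node-pools describe ${POOL} \
    --cluster=${CLUSTER} --zone=${ZONE} \
    --format='value(config.oauthScopes)'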

Cannot change service account user gcloud gcp

I am still wondering how I am supposed to change the service account user. Let's say I have 2 service accounts (A and B), each with a different role in a different project. After being done using user B, when I want to change to service account A and access the resource, the gcloud command says:
Error from server (Forbidden): pods is forbidden: User "user-B@project.iam.gserviceaccount.com" cannot list resource "pods" in API group "" in the namespace "default": requires one of ["container.pods.list"] permission(s).
I have already changed my service account user with gcloud config set account [service-account], but gcloud still reads the other service account. Did I miss something?
Here's a contrived example of what I think you're doing:
# gcloud is using my regular User credentials
gcloud config get account
me@gmail.com
# Access GKE as me@gmail.com
kubectl get pods --namespace=default --output=name
pod/foo-c7b7995df-vxrmh
# Authenticate as a GCP Service Account with **no** permissions
EMAIL="{ACCOUNT}@{PROJECT}.iam.gserviceaccount.com"
gcloud auth activate-service-account ${EMAIL} \
--key-file=${KEY_FILE}
# gcloud is now using the Service Account credentials
gcloud config get account
${EMAIL}
# Using new GKE auth plugin
gke-gcloud-auth-plugin \
| jq -r .status.expirationTimestamp
2022-00-00T17:10:00Z
# Need to either delete the token
# Or wait until 17:10 for it to expire
# Then...
kubectl get pods --namespace=default
Error from server (Forbidden): pods is forbidden: User "{ACCOUNT}@{PROJECT}.iam.gserviceaccount.com" cannot list resource "pods" in API group "" in the namespace "default": requires one of ["container.pods.list"] permission(s).
One solution is to grant the GCP (!) Service Account one of the Kubernetes Engine roles that has permission to list Pods, i.e. container.pods.* which is part of roles/container.developer:
# Grant the Service Account Kubernetes Engine role
ROLE="roles/container.developer"
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
# Try again
kubectl get pods --namespace=default --output=name
pod/foo-c7b7995df-vxrmh
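To switch back to the regular user afterwards (a sketch; the same cached-token caveat applies in reverse):
gcloud config set account me@gmail.com
# once the cached token has expired (or been removed):
kubectl get pods --namespace=default --output=name
pod/foo-c7b7995df-vxrmh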

Azure Kubernetes Service RBAC Cluster Admin doesn't provide my user the cluster admin privilege

I have an AKS cluster running Kubernetes 1.21.2 with these options:
Kubernetes RBAC --> enable
AKS-managed AAD --> enable
Local accounts --> disabled
When I run az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin, it fails as expected. So far it's looking good.
I still want someone to be admin, so I give a user the following privileges:
Azure Kubernetes Service Cluster User Role
Azure Kubernetes Service RBAC Cluster Admin
I followed these steps to verify that my user is a cluster admin:
az login and input my user's credentials to log in
az aks get-credentials --resource-group <resource-group> --name <cluster-name> to download the clusterUser kubeconfig
kubectl get pods -A to list pods in all namespaces
it prompts me with something like: To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code DKXXXXX8T to authenticate.
I log in using my user's credentials again and allow the k8s application to connect to my user.
kubectl get pods -A fails:
Error from server (Forbidden): pods is forbidden: User "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" cannot list resource "pods" in API group "" at the cluster scope
According to what I have read so far, it should be working, as my user is Azure Kubernetes Service RBAC Cluster Admin. Could someone enlighten me on what I've missed or misunderstood?

GCP K8s (kubectl) error message (Required "container.leases.get" permission)

I am getting an error message after running some kubectl commands (GCP command line - gcloud). I have a K8S cluster created in GKE.
Example:
kubectl describe node gke_k8s_cluster_name
Error from server (Forbidden): leases.coordination.k8s.io "gke_k8s_cluster_name" is forbidden: User "MY_SERVICE_ACCOUNT" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": Required "container.leases.get" permission.
The point is that the "container.leases.get" permission is not listed in IAM (as a custom permission or in a regular role).
How could I grant that permission to the service account in GCP?
thanks,
Jose
You may need to grant additional permissions to yourself on the GCP IAM and GKE sides, for example:
PROJECT_ID=$(gcloud config get-value core/project)
USER_ID=$(gcloud config get-value core/account)
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=user:${USER_ID} --role=roles/container.admin
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user ${USER_ID}
See also GCP IAM & GKE RBAC integration.
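After granting the role, one way to confirm the permission is in place (a sketch using kubectl's built-in access review):
kubectl auth can-i get leases.coordination.k8s.io --namespace=kube-node-lease
yes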

Login to GKE via service account with token

I am trying to access my Kubernetes cluster on Google Cloud with a service account, but I am not able to make this work. I have a running system with some pods and ingress. I want to be able to update the images of deployments.
I would like to use something like this (remotely):
kubectl config set-cluster cluster --server="<IP>" --insecure-skip-tls-verify=true
kubectl config set-credentials foo --token="<TOKEN>"
kubectl config set-context my-context --cluster=cluster --user=foo --namespace=default
kubectl config use-context my-context
kubectl set image deployment/my-deployment boo=eu.gcr.io/project-123456/image:v1
So I created the service account and then got the secret token:
kubectl create serviceaccount foo
kubectl get secret foo-token-gqvgn -o yaml
But, when I try to update the image in any deployment, I receive:
error: You must be logged in to the server (Unauthorized)
As the IP address for the API, I used the address shown in the GKE administration console as the cluster endpoint IP.
Any suggestions? Thanks.
I have tried to recreate your problem.
Steps I have followed:
kubectl create serviceaccount foo
kubectl get secret foo-token-* -o yaml
Then, I tried to do what you have done. What I used as the token is the base64-decoded token.
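For reference, that decoding step might look like this (a sketch; the secret name is the one from the question):
TOKEN=$(kubectl get secret foo-token-gqvgn \
    --output=jsonpath='{.data.token}' | base64 --decode)
kubectl config set-credentials foo --token="${TOKEN}"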
Then I tried this:
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:foo" cannot list pods in the namespace "default": Unknown user "system:serviceaccount:default:foo"
This gave me an error, as expected, because I need to grant permissions to this ServiceAccount.
How can I grant permissions to this ServiceAccount? I need to create a ClusterRole and ClusterRoleBinding with the necessary permissions.
Read more about role-based access control in the Kubernetes docs.
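A minimal sketch of such a grant, scoped to the deployment-update use case from the question (the role and binding names are made up):
# allow reading and patching deployments cluster-wide
kubectl create clusterrole deployment-editor \
    --verb=get,list,patch,update --resource=deployments
# bind the role to the foo ServiceAccount in the default namespace
kubectl create clusterrolebinding foo-deployment-editor \
    --clusterrole=deployment-editor \
    --serviceaccount=default:foo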
Another thing I can do:
$ kubectl config set-credentials foo --username="admin" --password="$PASSWORD"
This will grant you admin authorization.
You need to provide the cluster credentials:
Username: admin
Password: -----
You will get this info in GCP Console -> Kubernetes Engine -> {cluster} -> Show credentials