Connect Kubernetes service account to Google Cloud service account

I'm developing a service running in Google Kubernetes Engine and I would like to use Google Cloud functionality from that service.
I have created a service account in Google Cloud with all the necessary roles and I would like to use these roles from the pod running my service.
I have read this: https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
and I was wondering if there is an easier way to "connect" the two kinds of service accounts (one defined in Kubernetes, the other in Google Cloud IAM)?
Thanks

I don't think there is any direct link. K8s service accounts are purely internal. You could try granting Google IAM permissions to serviceaccount:name, but that seems unlikely to work. More likely you would put the Google SA credentials in a Secret and then write an RBAC policy giving your K8s SA read access to it.
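A rough sketch of that approach, with placeholder names throughout (Workload Identity, mentioned in the next answer, is the cleaner option today):

# Export a key for the Google service account (placeholder names).
gcloud iam service-accounts keys create key.json \
    --iam-account=my-gsa@my-project.iam.gserviceaccount.com

# Put the key in a Kubernetes Secret that RBAC can scope to your K8s SA.
kubectl create secret generic gcp-sa-key --from-file=key.json

# In the pod spec, mount the Secret and point
# GOOGLE_APPLICATION_CREDENTIALS at the mounted key.json so the
# Google client libraries pick it up automatically.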

Read the documentation linked in the question: you need to enable Workload Identity on your cluster, and then you can annotate the Kubernetes service account with the Google IAM service account it should impersonate.
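That wiring looks roughly like the following sketch, assuming a cluster my-cluster, namespace default, Kubernetes service account my-ksa, and Google service account my-gsa@my-project.iam.gserviceaccount.com (all placeholder names):

# 1. Enable Workload Identity on the cluster (can also be set at creation;
#    node pools must also run with the GKE metadata server enabled).
gcloud container clusters update my-cluster \
    --workload-pool=my-project.svc.id.goog

# 2. Allow the Kubernetes SA to impersonate the Google SA.
gcloud iam service-accounts add-iam-policy-binding \
    my-gsa@my-project.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:my-project.svc.id.goog[default/my-ksa]"

# 3. Annotate the Kubernetes SA with the Google SA it maps to.
kubectl annotate serviceaccount my-ksa \
    --namespace default \
    iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com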

Related

Workload identity to connect a GKE cluster to a different GCP project

Is it possible to use Workload Identity to access a GCP service in another project from a GKE pod, i.e. a project different from the one in which the GKE cluster was created?
Thanks
Yes, you can. If the service account bound to your K8s service account is authorized to access resources in other projects, there is no issue. It's the same as with your user account or any other service account: grant the account access to the resources and that's enough!
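A rough sketch of the cross-project grant, assuming the workload's Google service account lives in my-project and the resources are in other-project (all names are placeholders):

# Grant the GKE workload's service account a role in the other project.
gcloud projects add-iam-policy-binding other-project \
    --member="serviceAccount:my-gsa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"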

How to get IAM/service account used by juicefs to access GCS in GKE?

I'm using juicefs-csi in GKE. I use PostgreSQL as the meta-store and GCS as storage. The corresponding settings are as follows:
node:
  # ...
storageClasses:
  - name: juicefs-sc
    enabled: true
    reclaimPolicy: Retain
    backend:
      name: juicefs
      metaurl: postgres://user:password@my-ec2-where-postgre-installed.ap-southeast-1.compute.amazonaws.com:5432/the-database?sslmode=disable
      storage: gs
      bucket: gs://my-bucket
  # ...
According to this documentation, I don't have to specify access key/secret (like in S3).
But unfortunately, whenever I try to write anything to the mounted volume (with juicefs-sc storage class), I always get this error:
AccessDeniedException: 403 Caller does not have storage.objects.create access to the Google Cloud Storage object.
I believe it is related to an IAM role.
My question is, how could I know which IAM user/service account is used by juicefs to access GCS, so that I can assign a sufficient role to it?
Thanks in advance.
EDIT
Step by step:
Download juicefs-csi helm chart
Add values as described in the question, apply
Create a pod that mounts a PV with the juicefs-sc storage class
Try to read/write file to the mount point
OK, I misunderstood you at the beginning.
When you are creating a GKE cluster, you can specify which GCP service account will be used by its nodes.
By default it's the Compute Engine default service account (71025XXXXXX-compute@developer.gserviceaccount.com), which lacks permissions for a few Cloud products (for Cloud Storage it only has read-only access). That is exactly what the error message you posted describes.
If you want to check which service account was set by default on a VM, you can do this via
Compute Engine > VM Instances > choose one of the VMs from this cluster > in the details, find API and identity management.
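The same check from the CLI, with placeholder cluster name and zone:

# Show the service account the cluster's default node pool runs as.
gcloud container clusters describe my-cluster \
    --zone=us-central1-a \
    --format="value(nodeConfig.serviceAccount)"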
You have three options to solve this issue:
1. During Cluster creation
In Node Pools > Security, you have Access scopes, where you can add additional permissions:
Allow full access to all Cloud APIs grants access to all listed Cloud APIs.
Set access for each API lets you configure each API individually.
In your case you could just use Set access for each API and change Storage to Full; a command-line equivalent is sketched below.
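A minimal sketch of the same thing at cluster-creation time, with placeholder cluster name and zone; the storage-full scope alias corresponds to setting Storage to Full in the UI:

gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --scopes=gke-default,storage-full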
2. Set permissions with a Service Account
You would need to create a new service account and grant it the proper permissions for Compute Engine and Storage. More details about how to create a SA can be found in Creating and managing service accounts.
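A rough sketch with placeholder names, granting the new service account object access to Cloud Storage and attaching it to a node pool:

# Create the service account.
gcloud iam service-accounts create juicefs-sa \
    --display-name="JuiceFS node SA"

# Grant it read/write access to Cloud Storage objects.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:juicefs-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"

# Create a node pool whose nodes run as that service account.
gcloud container node-pools create juicefs-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --service-account=juicefs-sa@my-project.iam.gserviceaccount.com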
3. Use Workload Identity
Enable Workload Identity on your Google Kubernetes Engine (GKE) cluster. Workload Identity allows workloads in your GKE clusters to impersonate Identity and Access Management (IAM) service accounts to access Google Cloud services (a sketch of this wiring appears earlier in this thread).
For more details you should check Using Workload Identity.
Useful links
Configuring Velero - Velero is backup-and-restore software, but steps 2 and 3 are described there; you would just need to adjust the commands/permissions to your scenario.
Authenticating to Google Cloud with service accounts

How to set up a GKE cluster whose pods communicate with Cloud SQL, with the Cloud SQL password stored in Google Cloud Secret Manager

I am trying to set up Google Kubernetes Engine, and its pods have to communicate with a Cloud SQL database. The Cloud SQL database credentials are stored in Google Cloud Secret Manager. How will the pods fetch the credentials from Secret Manager, and if the credentials in Secret Manager are updated, how will the pods pick up the new secret?
How do I set up the above? Can someone please help with this?
Thanks,
Anand
You can make your deployed application fetch the secret (password) programmatically from Google Cloud Secret Manager. You can find examples in many languages at the following link: https://cloud.google.com/secret-manager/docs/samples/secretmanager-access-secret-version
But first make sure that your GKE setup, more specifically your application, is able to authenticate to Google Cloud Secret Manager. The following links can help you choose the appropriate approach:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
You can find information regarding that particular solution in this doc.
There are also good examples on medium here and here.
To answer your question regarding updating the secrets:
Usually secrets are pulled when the container is being created, but if you expect the credentials to change often (or the pods to stick around for a long time), you can adjust the code to re-read the secret on every execution, as sketched below.
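A minimal CLI sketch, assuming the pod's identity has been granted roles/secretmanager.secretAccessor and a secret named db-password (a placeholder); asking for the latest version on each run always returns the current value:

# Read the newest version of the secret (placeholder names).
gcloud secrets versions access latest \
    --secret=db-password \
    --project=my-project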

What does "on-premise" mean when referring to istio or kubernetes?

I'm new to Istio, and I read the Istio docs (https://istio.io/docs/concepts/security/#istio-identity):
Istio service identities on different platforms:
Kubernetes: Kubernetes service account
GKE/GCE: may use GCP service account
GCP: GCP service account
AWS: AWS IAM user/role account
On-premises (non-Kubernetes): user account, custom service account, service name, Istio service account, or GCP service account. The custom service account refers to the existing service account just like the identities that the customer’s Identity Directory manages.
I can't figure out what "on-premises" means. Can anyone give me more detailed information about it? And how does it compare to Kubernetes?
Thanks.
"On Premises" simply means locally at your organization in contrast to remote / in the cloud. See https://en.wikipedia.org/wiki/On-premises_software

Can Namespace level permissions be set with Google Cloud IAM on GKE?

Kubernetes RBAC can be used to give permissions to a subject in a particular Namespace. Can the same be accomplished with Cloud IAM?
Not at the moment, no. IAM is used to assign and verify permissions when interacting with GCP APIs. IAM can only provide access to the GKE API, which does not take namespaces into account.
As you mentioned, RBAC is your option for more granular permissions within the cluster.
If I got your point correctly:
The IAM roles for a GKE Kubernetes cluster are very simple: Admin, Read/Write, Read.
But you need more fine-grained control over the Kubernetes cluster.
In this case:
There's a new "Alpha" feature in Google Cloud IAM that wasn't available previously.
Under IAM > Roles
You can now create custom IAM roles with your own subset of permissions.
You can create a minimal role which allows, for example, gcloud container clusters get-credentials to work but nothing else, leaving permissions within the Kubernetes cluster to be fully managed by RBAC.
This gives you more fine-grained access configuration for the Kubernetes cluster; a sketch follows below.
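A minimal sketch with placeholder names; container.clusters.get should be roughly the permission gcloud container clusters get-credentials needs, and the RoleBinding then grants namespace-scoped rights via RBAC:

# Custom IAM role that only allows fetching cluster endpoint/credentials.
gcloud iam roles create gke_minimal_access \
    --project=my-project \
    --title="GKE Minimal Access" \
    --permissions=container.clusters.get

# Namespace-level permissions are then handled inside the cluster by RBAC:
kubectl create rolebinding dev-edit \
    --clusterrole=edit \
    --user=dev@example.com \
    --namespace=dev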