I removed a bunch of IAM policies and think this is preventing me from creating k8s clusters in Google Cloud (through the UI).
Every time I click Create cluster, it processes for a bit before failing with the following error:
Create Kubernetes Engine cluster "standard-cluster-1"
Just now
MyProject
Google Compute Engine: Required 'compute.zones.get' permission for 'projects/<MY_PROJECT_ID>/zones/us-central1-a'.
I'm mainly doing this through my host shell (iTerm) and NOT through the interactive shell found on cloud.google.com.
Here's the IAM policy for a user (I use my google email address under the Member column):
Really hoping to get unblocked so I can start creating clusters in my shell again and not have to use the interactive shell on the Google Cloud website.
You are missing ServiceAgent roles. But only service accounts can be granted those roles.
1) First, copy your project number.
2) Create the following members for the Service Agents, replacing 77597574896 with your project number, and set the appropriate roles:
service-77597574896@container-engine-robot.iam.gserviceaccount.com - Kubernetes Engine Service Agent
service-77597574896@compute-system.iam.gserviceaccount.com - Compute Engine Service Agent
77597574896@cloudservices.gserviceaccount.com - Editor
This should work now; I've tested it with my own cluster.
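A sketch of the equivalent gcloud commands, assuming MY_PROJECT_ID is a placeholder for your project ID (the role IDs shown are the standard ones for these service agents):

```shell
# Look up the project number for the project.
PROJECT_NUMBER=$(gcloud projects describe MY_PROJECT_ID --format="value(projectNumber)")

# Kubernetes Engine Service Agent for the container-engine robot.
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:service-${PROJECT_NUMBER}@container-engine-robot.iam.gserviceaccount.com" \
  --role="roles/container.serviceAgent"

# Compute Engine Service Agent for the compute-system robot.
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:service-${PROJECT_NUMBER}@compute-system.iam.gserviceaccount.com" \
  --role="roles/compute.serviceAgent"

# Editor for the cloudservices account.
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudservices.gserviceaccount.com" \
  --role="roles/editor"
```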
To create a new cluster, just add a new role to your user in the IAM settings:
- Kubernetes Engine Admin
Please share your results.
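That role can also be granted from the shell; a sketch, with placeholder project ID and email:

```shell
# Grant the Kubernetes Engine Admin role (roles/container.admin) to your user.
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="user:me@example.com" \
  --role="roles/container.admin"
```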
Related
I've got a container inside a GKE cluster and I want it to be able to talk to the Kubernetes API of another GKE cluster to list some resources there.
This works well if I run the following command in a separate container to proxy the connection for me:
gcloud container clusters get-credentials MY_CLUSTER --region MY_REGION --project MY_PROJECT; kubectl --context MY_CONTEXT proxy --port=8001 --v=10
But this requires me to run a separate container that, due to the size of the gcloud CLI, is more than 1 GB.
Ideally I would like to talk directly from my primary container to the other GKE cluster, but I can't figure out how to determine the IP address and set up the authentication required for the connection.
I've seen a few questions:
How to Authenticate GKE Cluster on Kubernetes API Server using its Java client library
Is there a golang sdk equivalent of "gcloud container clusters get-credentials"
But it's still not really clear to me if/how this would work with the Java libraries, if at all possible.
Ideally I would write something like this.
var info = gkeClient.getClusterInformation(...);
var auth = gkeClient.getAuthentication(info);
...
// using the io.fabric8.kubernetes.client.ConfigBuilder / DefaultKubernetesClient
var config = new ConfigBuilder().withMasterUrl(info.url())
        .withNamespace(null)
        // certificate or other authentication mechanism
        .build();
return new DefaultKubernetesClient(config);
Does that make sense, is something like that possible?
There are multiple ways to connect to your cluster without using the gcloud CLI. Since you are trying to access the cluster from another cluster within Google Cloud, you can use the Workload Identity authentication mechanism. Workload Identity is the recommended way for workloads running on Google Kubernetes Engine (GKE) to access Google Cloud services in a secure and manageable way. For more information, refer to this official document, which details a step-by-step procedure for configuring Workload Identity and provides reference links for code libraries.
This is drafted based on information provided in Google's official documentation.
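A sketch of the cluster-side part of that setup, with placeholder cluster, pool, region, and project names:

```shell
# Enable Workload Identity on an existing cluster.
gcloud container clusters update MY_CLUSTER \
  --region=MY_REGION \
  --workload-pool=MY_PROJECT_ID.svc.id.goog

# Switch an existing node pool to the GKE metadata server,
# which Workload Identity requires.
gcloud container node-pools update MY_POOL \
  --cluster=MY_CLUSTER \
  --region=MY_REGION \
  --workload-metadata=GKE_METADATA
```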
I'm using juicefs-csi in GKE, with PostgreSQL as the metadata store and GCS as the object storage. The corresponding settings are as follows:
node:
  # ...
storageClasses:
- name: juicefs-sc
  enabled: true
  reclaimPolicy: Retain
  backend:
    name: juicefs
    metaurl: postgres://user:password@my-ec2-where-postgre-installed.ap-southeast-1.compute.amazonaws.com:5432/the-database?sslmode=disable
    storage: gs
    bucket: gs://my-bucket
# ...
According to this documentation, I don't have to specify access key/secret (like in S3).
But unfortunately, whenever I try to write anything to the mounted volume (with juicefs-sc storage class), I always get this error:
AccessDeniedException: 403 Caller does not have storage.objects.create access to the Google Cloud Storage object.
I believe it should be related to IAM role.
My question is, how could I know which IAM user/service account is used by juicefs to access GCS, so that I can assign a sufficient role to it?
Thanks in advance.
EDIT
Step by step:
Download the juicefs-csi Helm chart
Add the values described in the question and apply it
Create a pod that mounts a PV with the juicefs-sc storage class
Try to read/write a file at the mount point
Ok I misunderstood you at the beginning.
When you are creating GKE cluster you can specify which GCP Service Account will be used by this cluster, like below:
By default it's the Compute Engine default service account (71025XXXXXX-compute@developer.gserviceaccount.com), which lacks a few Cloud product permissions (for Cloud Storage, for example, it only has Read Only). It's even described in this message.
If you want to check which Service Account was set by default on a VM, you can do this via
Compute Engine > VM Instances > choose one of the VMs from this cluster > in the details, find API and identity management
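The node service account can also be read from the shell; a sketch with placeholder cluster and zone names:

```shell
# Show which service account the cluster's nodes run as.
gcloud container clusters describe MY_CLUSTER \
  --zone=MY_ZONE \
  --format="value(nodeConfig.serviceAccount)"
```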
So You have like 3 options to solve this issue:
1. During Cluster creation
In Node Pools > Security, you have Access scopes where you can add some additional permissions.
Allow full access to all Cloud APIs to allow access for all listed Cloud APIs
Set access for each API
In your case you could just use Set access for each API and change Storage to Full.
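The shell equivalent at creation time is the --scopes flag; a sketch with placeholder names (storage-full is the gcloud alias for the devstorage.full_control scope):

```shell
# Create a cluster whose nodes get the default GKE scopes
# plus full Cloud Storage access.
gcloud container clusters create MY_CLUSTER \
  --zone=MY_ZONE \
  --scopes=gke-default,storage-full
```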
2. Set permissions with a Service Account
You would need to create a new Service Account and provide proper permissions for Compute Engine and Storage. More details about how to create SA you can find in Creating and managing service accounts.
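A sketch of that step, assuming the account name gke-node-sa and the roles/storage.objectAdmin role are placeholders for whatever fits your setup:

```shell
# Create a dedicated node service account.
gcloud iam service-accounts create gke-node-sa \
  --display-name="GKE node service account"

# Grant it read/write access to Cloud Storage objects.
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:gke-node-sa@MY_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```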
3. Use Workload Identity
You can enable Workload Identity on your Google Kubernetes Engine (GKE) clusters. Workload Identity allows workloads in your GKE clusters to impersonate Identity and Access Management (IAM) service accounts to access Google Cloud services.
For more details you should check Using Workload Identity.
Useful links
Configuring Velero - Velero is backup-and-restore software, but steps 2 and 3 above are covered there. You would just need to adjust the commands/permissions to your scenario.
Authenticating to Google Cloud with service accounts
I'm developing a service running in Google Kubernetes Engine and I would like to use Google Cloud functionality from that service.
I have created a service account in Google Cloud with all the necessary roles and I would like to use these roles from the pod running my service.
I have read this: https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
and I was wondering if there is an easier way to "connect" the two kinds of service accounts (those defined in Kubernetes and those defined in Google Cloud IAM)?
Thanks
I don't think there is any direct link. K8s service accounts are purely internal. You could try granting GIAM permissions to serviceaccount:name but that seems unlikely to work. More likely you would put the Google SA credentials in a secret and then write an RBAC policy giving your K8s SA read access to it.
Read the topic I have shared. You need to enable Workload Identity on your cluster, and then you can annotate the Kubernetes service account with the IAM service account on the Google side.
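A sketch of that binding and annotation, assuming Workload Identity is already enabled on the cluster (all names are placeholders):

```shell
# Allow the Kubernetes service account to impersonate the Google service account.
gcloud iam service-accounts add-iam-policy-binding \
  MY_GSA@MY_PROJECT_ID.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:MY_PROJECT_ID.svc.id.goog[MY_NAMESPACE/MY_KSA]"

# Annotate the Kubernetes service account with the Google service account.
kubectl annotate serviceaccount MY_KSA \
  --namespace=MY_NAMESPACE \
  iam.gke.io/gcp-service-account=MY_GSA@MY_PROJECT_ID.iam.gserviceaccount.com
```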
gke-document
One of our Google Kubernetes Engine clusters has lost access to Google Cloud Platform via its main service account. It was not using the 'default' service account but a custom one, and that account is now gone. Is there a way to restore or change the service account for a GKE cluster after it has been created? Or are we just out of luck and do we have to re-create the cluster?
Good news! We found a way to solve the issue without having to re-create the entire cluster.
Create a new node-pool and make sure it has the default permissions to Google Cloud Platform (this is the case if you create the pool via the Console UI).
'Force' all workloads on the new node pool (e.g. by using node labels).
Re-deploy the workloads.
Remove the old (broken) node pool.
Hope this helps anyone with the same issue in the future.
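The steps above can be sketched on the command line; pool names are placeholders, and the new pool gets the default node permissions:

```shell
# 1. Create a replacement node pool with default permissions.
gcloud container node-pools create new-pool \
  --cluster=MY_CLUSTER \
  --zone=MY_ZONE

# 2./3. Push workloads off the broken pool so they reschedule onto the new one.
kubectl cordon -l cloud.google.com/gke-nodepool=old-pool
kubectl drain -l cloud.google.com/gke-nodepool=old-pool --ignore-daemonsets

# 4. Remove the old pool.
gcloud container node-pools delete old-pool \
  --cluster=MY_CLUSTER \
  --zone=MY_ZONE
```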
Looks like you are out of luck. According to the documentation, gcloud container clusters update command does not let you update service account.
Neither is possible: you can't restore the service account or update the cluster to use a new one. You can edit individual Compute Engine instances, but since the cluster's nodes are managed as a group, you can't edit them; and even if you could, with the autoscaler or the auto-repair node feature enabled, new nodes wouldn't get the new service account.
So, it seems you're out of luck, you will have to recreate the cluster.
We are unable to create Kubernetes clusters in our Google Cloud project. It was working a few weeks ago. We keep getting the following error:
Google Compute Engine: Required 'compute.zones.get' permission for 'projects/<project code>/zones/us-central1-a'
However, the role assigned to the user trying to create the cluster is Project/Owner, and the service account selected when creating the cluster has Project/Editor, which includes the compute.zones.get permission. Even if I give the service account Project/Owner it still gives the same error.
EDIT
When trying to create the cluster with gcloud we get a different (similar) error:
Google Compute Engine: Required 'compute.networks.get' permission for 'projects/<project code>/global/networks/default'
Not sure what went wrong, but the fix was to disable all the Compute services and then re-initialise the Kubernetes service.
You lack the cloudservices service account in IAM. A current workaround for this issue is to re-enable the Google Cloud Compute Engine API.
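A sketch of that workaround with gcloud (disabling the API first may be blocked by dependent services, as noted below):

```shell
# Confirm whether the Compute Engine API is currently enabled.
gcloud services list --enabled --filter="name:compute.googleapis.com"

# Re-enable the Compute Engine API, which recreates its service agents.
gcloud services enable compute.googleapis.com
```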
I wasn't able to disable the Compute API due to an apparent dependency loop, but creating a new GCP project "fixed" this for me.