We are unable to create Kubernetes clusters in our Google Cloud project. It was working a few weeks ago. We keep getting the following error:
Google Compute Engine: Required 'compute.zones.get' permission for 'projects/<project code>/zones/us-central1-a'
However, the role assigned to the user trying to create the cluster is Project/Owner, and the service account selected when creating the cluster has Project/Editor, which includes the compute.zones.get permission. Even if I give the service account Project/Owner it still gives the same error.
EDIT
When trying to create the cluster with gcloud we get a different (similar) error:
Google Compute Engine: Required 'compute.networks.get' permission for 'projects/<project code>/global/networks/default'
Not sure what went wrong, but the fix was to disable all the Compute services and then re-initialise the Kubernetes service.
You lack the cloudservices service account in IAM. A current workaround for this issue is to re-enable the Google Compute Engine API.
I wasn't able to disable the Compute API due to an apparent dependency loop, but creating a new GCP project "fixed" this for me.
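For reference, a minimal sketch of that disable/re-enable cycle with the gcloud CLI, assuming gcloud is authenticated against the affected project; the --force flag is what runs into (or works around) the dependency loop mentioned above, since other services depend on Compute Engine:

    # Disable the Compute Engine API; --force also disables dependent
    # services (such as Kubernetes Engine), so use it with care.
    gcloud services disable compute.googleapis.com --force

    # Re-enable Compute Engine, then Kubernetes Engine. Enabling an API
    # should re-create its service agents in IAM.
    gcloud services enable compute.googleapis.com
    gcloud services enable container.googleapis.com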
I am trying to set up Google Kubernetes Engine, and its pods have to communicate with a Cloud SQL database. The Cloud SQL database credentials are stored in Google Cloud Secret Manager. How will the pods fetch the credentials from Secret Manager, and if the Secret Manager credentials are updated, how will the pods pick up the new secret?
How do I set up the above requirement? Can someone please help with this?
Thanks,
Anand
You can make your deployed application fetch the secret (password) programmatically from Google Cloud Secret Manager. You can find examples in many languages at the following link: https://cloud.google.com/secret-manager/docs/samples/secretmanager-access-secret-version
But first, make sure that your GKE setup, and more specifically your application, is able to authenticate to Google Cloud Secret Manager. The following links can help you choose the appropriate approach:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
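As a rough sketch of the authorization side, assuming a hypothetical secret named db-password and a hypothetical Google service account my-gsa@my-project.iam.gserviceaccount.com that the pods will authenticate as:

    # Grant the service account read access to the secret.
    gcloud secrets add-iam-policy-binding db-password \
        --member="serviceAccount:my-gsa@my-project.iam.gserviceaccount.com" \
        --role="roles/secretmanager.secretAccessor"

    # Once authenticated, the application can fetch the current version;
    # the CLI equivalent of the programmatic call is:
    gcloud secrets versions access latest --secret=db-password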
You can find information about that particular solution in this doc.
There are also good examples on Medium, here and here.
To answer your question about updating the secrets:
Usually secrets are pulled when the container is created, but if you expect the credentials to change often (or the pods to live for a long time), you can adjust the code to re-read the secret on every execution.
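For example, a minimal entrypoint sketch that re-reads the secret on every container start; db-password and ./my-app are hypothetical names, and it assumes the gcloud CLI is available in the image:

    #!/bin/sh
    # Fetch the latest secret version each time the container starts,
    # so a restarted pod always picks up rotated credentials.
    DB_PASSWORD="$(gcloud secrets versions access latest --secret=db-password)"
    export DB_PASSWORD

    # Hand control over to the application.
    exec ./my-app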
Sorry to bother you, but I am having a serious issue with my online DevOps learning.
I am taking a DevOps course, and we are using Google Cloud Platform as the cloud. When I create my cluster with gcloud container clusters create xxx and then run the describe command, gcloud container clusters describe xxx, it works, but I get no information about the login and password for Kubernetes.
That is one of the problems.
After creating the cluster, I get no Kubernetes dashboard link from the command kubectl cluster-info. Normally I should have a Kubernetes dashboard to manage my app; instead of the Kubernetes dashboard, there is something called "Kubernetes system metrics".
Can somebody help me fix this problem, ideally someone who is used to practising on GCP?
Best regards
Can you please go through the Google Cloud Kubernetes dashboards docs [1]?
I'm able to see the Kubernetes dashboard in my console, so I don't know why you are not able to see it. I also checked the Google Cloud Status Dashboard [2], and there is no service outage affecting Kubernetes; it's working fine. So kindly go through the Kubernetes docs; they will give you a better understanding of working with Kubernetes on GCP.
If you're still facing any issue or abnormal behavior, please go to the public issue tracker [3] or to Support in the GCP console and raise a ticket.
[1]. https://cloud.google.com/kubernetes-engine/docs/concepts/dashboards
[2]. https://status.cloud.google.com/
[3]. https://cloud.google.com/support/docs/issue-trackers#trackers-list
When you visit the GCP dashboards doc, you should see a red warning at the top of the page, saying:
Warning: The open source Kubernetes Dashboard addon is deprecated for clusters on GKE and will be removed as an option in version 1.15. As an alternative, use the Cloud Console dashboards described in this guide.
Below that, it reads:
Starting with GKE v1.15, you will no longer be able to enable the Kubernetes Dashboard by using the add-on API. You will still be able to install Kubernetes Dashboard manually by following the instructions in the project's repository. For clusters in which you have already deployed the add-on, it will continue to function but you will need to manually apply any updates and security patches that are released.
To deploy it manually, follow the instructions in the Kubernetes Dashboard GitHub repository.
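At the time of writing, the manual deployment from that repository looks roughly like this; the URL below pins a specific release, so check the repository README for the currently recommended version:

    # Install the open source dashboard into the cluster.
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

    # Start a local proxy, then open the dashboard at:
    # http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
    kubectl proxy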
I'm developing a service running in Google Kubernetes Engine and I would like to use Google Cloud functionality from that service.
I have created a service account in Google Cloud with all the necessary roles and I would like to use these roles from the pod running my service.
I have read this: https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
and I was wondering whether there is an easier way to "connect" the two kinds of service accounts (those defined in Kubernetes and those defined in Google Cloud IAM)?
Thanks
I don't think there is any direct link. Kubernetes service accounts are purely internal. You could try granting Google IAM permissions to serviceAccount:name, but that seems unlikely to work. More likely, you would put the Google service account credentials in a Secret and then write an RBAC policy giving your Kubernetes service account read access to it.
Read the document linked below: you need to enable Workload Identity on your cluster, and then you can annotate the Kubernetes service account with the Google IAM service account.
gke-document
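In outline, the Workload Identity wiring from that document looks like this; all names here (my-cluster, my-project, my-gsa, my-ksa, the default namespace) are placeholders:

    # 1) Enable Workload Identity on the cluster.
    gcloud container clusters update my-cluster \
        --workload-pool=my-project.svc.id.goog

    # 2) Allow the Kubernetes service account to impersonate the
    #    Google service account.
    gcloud iam service-accounts add-iam-policy-binding \
        my-gsa@my-project.iam.gserviceaccount.com \
        --role="roles/iam.workloadIdentityUser" \
        --member="serviceAccount:my-project.svc.id.goog[default/my-ksa]"

    # 3) Annotate the Kubernetes service account so pods using it
    #    authenticate as the Google service account.
    kubectl annotate serviceaccount my-ksa \
        iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com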
I removed a bunch of IAM policies and think this is preventing me from creating k8s clusters in Google Cloud (through the UI).
Every time I click Create cluster, it processes for a bit, before hanging up and throwing the following error:
Create Kubernetes Engine cluster "standard-cluster-1"
Google Compute Engine: Required 'compute.zones.get' permission for 'projects/<MY_PROJECT_ID>/zones/us-central1-a'.
I'm mainly doing this through my host shell (iTerm) and NOT through the interactive shell found on cloud.google.com.
Here's the IAM policy for the user (I use my Google email address under the Member column):
Really hoping to get unblocked so I can start creating clusters in my shell again and not have to use the interactive shell on the Google Cloud website.
You are missing the Service Agent roles, and only service accounts can be granted those roles.
1) First, copy your project number.
2) Then create the following members for the service agents, replacing 77597574896 with your project number, and assign the appropriate roles:
service-77597574896@container-engine-robot.iam.gserviceaccount.com - Kubernetes Engine Service Agent
service-77597574896@compute-system.iam.gserviceaccount.com - Compute Engine Service Agent
77597574896@cloudservices.gserviceaccount.com - Editor
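If you prefer the CLI, here is a sketch of the same bindings with gcloud; PROJECT_ID and the project number are placeholders, and the role IDs correspond to the display names above:

    # Kubernetes Engine Service Agent:
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:service-77597574896@container-engine-robot.iam.gserviceaccount.com" \
        --role="roles/container.serviceAgent"

    # Compute Engine Service Agent:
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:service-77597574896@compute-system.iam.gserviceaccount.com" \
        --role="roles/compute.serviceAgent"

    # Editor for the legacy cloudservices account:
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:77597574896@cloudservices.gserviceaccount.com" \
        --role="roles/editor"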
This should work now; I've tested it with my own cluster.
In order to create a new cluster, just add a new role in your IAM settings:
- Kubernetes Engine Admin
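Equivalently, via the CLI; PROJECT_ID and the email are placeholders, and roles/container.admin is the role ID behind the Kubernetes Engine Admin display name:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:you@example.com" \
        --role="roles/container.admin"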
Please share your results.
One of our Google Kubernetes Engine clusters has lost access to Google Cloud Platform via its main service account. It was not using the 'default' service account but a custom one, which has since been deleted. Is there a way to restore or change the service account for a GKE cluster after it has been created, or are we out of luck and do we have to re-create the cluster?
Good news! We found a way to solve the issue without having to re-create the entire cluster.
1) Create a new node pool and make sure it has the default permissions on Google Cloud Platform (this is the case if you create the pool via the Console UI).
2) 'Force' all workloads onto the new node pool (e.g. by using node labels).
3) Re-deploy the workloads.
4) Remove the old (broken) node pool.
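A sketch of those steps with gcloud and kubectl, using placeholder names throughout (my-cluster, new-pool, old-pool, and a working service account):

    # 1) Create a replacement node pool with a working service account.
    gcloud container node-pools create new-pool \
        --cluster=my-cluster \
        --service-account=my-sa@my-project.iam.gserviceaccount.com

    # 2+3) Cordon and drain the broken pool so workloads reschedule
    #      onto the new nodes.
    for node in $(kubectl get nodes \
        -l cloud.google.com/gke-nodepool=old-pool -o name); do
      kubectl cordon "$node"
      kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
    done

    # 4) Remove the old node pool.
    gcloud container node-pools delete old-pool --cluster=my-cluster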
Hope this helps anyone with the same issue in the future.
Looks like you are out of luck. According to the documentation, the gcloud container clusters update command does not let you change the service account.
It's not possible: you can neither restore the service account nor update the cluster to use a new one. You can edit individual Compute Engine instances, but since the cluster's nodes are managed as a group, you can't edit them; and even if you could, with the autoscaler or node auto-repair enabled, new nodes wouldn't get the new service account.
So it seems you're out of luck and will have to recreate the cluster.