Unable to connect to Cloud Storage from instance - google-cloud-storage

I'm perfectly able to connect (using Go) to Cloud Storage from my local computer (using default credentials), but I keep getting this error when I try to connect to Storage from an instance:
googleapi: Error 403: User Rate Limit Exceeded, userRateLimitExceeded
This only happens on one project; it works just fine on all other projects.
On that project I can successfully connect to Datastore, the Logging service, everything except Cloud Storage.
When creating the GCE instance, I'm using the Compute Engine default service account with "Allow full access to all Cloud APIs".
The Storage API is enabled.
I tried running gsutil (while SSH'd into the instance) and I keep getting the same error.
I created a GitHub issue (https://github.com/GoogleCloudPlatform/gcloud-golang/issues/269) but no one there has an answer.
Any ideas?

The API Manager, in addition to enabling/disabling APIs, allows setting maximum quotas (total and per second).
Make sure these are > 0.
I had "requests per 100 seconds per user" set to 0.
Link to the Storage API quotas:
https://console.developers.google.com/apis/api/storage_api/quotas

Related

Google Cloud Composer Environment Setup Error: Connect to Google Cloud Storage

I am trying to create an environment in Google Cloud Composer. Link here
When creating the environment from scratch and selecting all the default fields, the following error appears:
CREATE operation on this environment failed 22 hours ago with the following error message:
CREATE operation failed. Composer Agent failed with: Cloud Storage Assertions Failed: Unable to write to GCS bucket.
GCS bucket write check failed.
I then created a Google Cloud Storage bucket within the same project to see if that would help, and the same error still appears.
Has anyone been able to successfully create a Google Cloud Composer environment? If so, please provide guidance on why this error message continues to appear.
Update: it seems I need to update permissions to allow access. Here is a screenshot of my permissions page, but it is not editable.
It seems like you haven't given the required IAM policies to the service account. I would advise you to read more about IAM policies on Google Cloud here.
When it comes to bucket permissions, roles like Storage Object Admin might fit your needs.
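For reference, a minimal sketch of granting such a role on the environment's bucket with the google-cloud-storage Python client; the bucket name and service account below are placeholders, and the same grant can also be made in the Cloud Console or with gcloud:

from google.cloud import storage

client = storage.Client()
# Hypothetical Composer environment bucket name.
bucket = client.get_bucket("us-central1-my-composer-bucket")

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectAdmin",
    "members": {"serviceAccount:composer-sa@my-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)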

Permission error: service account doesn't have access to Cloud ML platform

I am running a Kubeflow pipeline (Docker approach) and the cluster uses an endpoint to reach the dashboard. The cluster was created following the instructions in this link: Deploy Kubeflow. Everything was created successfully, the cluster generated the endpoints, and it's working perfectly.
The endpoint link looks something like this: https://appname.endpoints.projectname.cloud.goog.
Every workload in the pipeline works fine except the last one. In the last workload, I am trying to submit a job to Cloud ML Engine, but the logs show that the application has no access to the project. Here is the full image of the log.
ERROR:
(gcloud.ml-engine.versions.create) PERMISSION_DENIED: Request had
insufficient authentication scopes.
ERROR:
(gcloud.ml-engine.jobs.submit.prediction) User
[clustername@project_name.iam.gserviceaccount.com]
does not have permission to access project [project_name]
(or it may not exist): Request had insufficient authentication scopes.
From the logs, it's clear that this service account doesn't have access to the project itself. However, I tried giving this service account access to the Cloud ML service and it still throws the same error.
Are there any other ways to give Cloud ML service credentials to this application?
Check two things:
1) GCP IAM: whether clustername-user@projectname.iam.gserviceaccount.com has the ML Engine Admin permission.
2) Your pipeline DSL: if the cloud-ml engine step calls apply(gcp.use_gcp_secret('user-gcp-sa')), e.g. https://github.com/kubeflow/pipelines/blob/ea07b33b8e7173a05138d9dbbd7e1ce20c959db3/samples/tfx/taxi-cab-classification-pipeline.py#L67
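For point 2, a rough sketch of what that looks like in the pipeline DSL, based on the linked sample; the image name and arguments here are hypothetical:

import kfp.dsl as dsl
from kfp import gcp

@dsl.pipeline(name="cloudml-deploy-example")
def pipeline():
    # Hypothetical step: the image and arguments are placeholders.
    deploy = dsl.ContainerOp(
        name="deploy-to-cloudml",
        image="gcr.io/my-project/ml-engine-deploy:latest",
        arguments=["--project", "my-project", "--model", "my-model"],
    )
    # Mount the user-gcp-sa secret so gcloud ml-engine calls inside the
    # container run as that service account instead of relying on the node's
    # scope-limited default credentials.
    deploy.apply(gcp.use_gcp_secret("user-gcp-sa"))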

Use only a domain and disable https://storage.googleapis.com URL access

I am a newbie with cloud servers and I've opened a Google Cloud Storage bucket to host image files. I've verified my domain and configured it to serve images via my domain. The problem is that the same file is accessible both via my domain, example.com/images/tiny.png, and via storage.googleapis.com/example.com/images/tiny.png. Is there any solution to disable access via storage.googleapis.com and use only my domain?
Google Cloud Platform Support Version:
NOTE: This is the reply from Google Cloud Platform Support when contacted via email...
I understand that you have set up a domain name for one of your Cloud Storage buckets and you want to make sure only URLs starting with your domain name have access to this bucket.
I am afraid that this is not possible because of how Cloud Storage permissions work.
Making a Cloud Storage bucket publicly readable also gives each of its files a public link. And currently this public link can’t be disabled.
A workaround would be to implement a proxy program and run it on a Compute Engine virtual machine. This VM will need a static external IP so that you can map your domain to it. The proxy program will be in charge of returning the requested file from a predefined Cloud Storage bucket while the bucket itself remains inaccessible to the public.
You may find these documents helpful if you are interested in this workaround:
1. Quick start to set up a Linux VM (1).
2. Python API for accessing Cloud Storage files (2).
3. How to download service account keys to grant a program access to a set of services (3).
4. Pricing calculator for getting a picture on how much a VM may cost (4).
(1) https://cloud.google.com/compute/docs/quickstart-linux
(2) https://pypi.org/project/google-cloud-storage/
(3) https://cloud.google.com/iam/docs/creating-managing-service-account-keys
(4) https://cloud.google.com/products/calculator/
My Version:
It seems the solution to this question is really simple: just mount Google Cloud Storage on the VM instance with FUSE.
After mounting with FUSE, private files from GCS can be accessed through the VM's IP address. It makes the Google Cloud Storage bucket act like a directory.
The detailed documentation about how to set up FUSE in Google Cloud is here.
There is, but it requires you to do more work.
Your current setup works because you've made access to the GCS bucket (example.com) public and you're then DNS-aliasing it from your domain.
An alternative approach would be to limit access to the GCS bucket to one (or possibly several) accounts and then run a web server that uses one of those accounts to access your image files (a rough sketch follows below). You could then either permit access to your web server to anyone or limit that access as well.
More work for you (and possibly cost) but more control.
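A rough sketch of that proxy/web-server idea, using the Python client library the support reply points to; Flask, the route layout, and the bucket name are assumptions for illustration, not part of the original answers:

from flask import Flask, Response, abort
from google.cloud import storage

app = Flask(__name__)
client = storage.Client()              # uses the VM's service account credentials
bucket = client.bucket("example.com")  # the bucket from the question, now kept private

@app.route("/images/<path:name>")
def image(name):
    blob = bucket.get_blob("images/" + name)  # fetches metadata; None if missing
    if blob is None:
        abort(404)
    data = blob.download_as_bytes()
    return Response(data, mimetype=blob.content_type or "application/octet-stream")

if __name__ == "__main__":
    # DNS for example.com points at this VM's static external IP.
    app.run(host="0.0.0.0", port=80)

With the bucket no longer public, the storage.googleapis.com URLs stop working and files are reachable only through the domain that fronts this VM.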

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

Trying to get Google Cloud Storage working on my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
I was able to figure this out. I had been following Rails' guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
service: GCS
credentials: <%= Rails.root.join("path/to/keyfile.json") %>
project: ""
bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google
Compute Engine, the environment already provides a service account's
authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my rails app.
Another possible solution as of google-cloud-storage gem version 1.27 in August 2020 is documented here. My Google::Auth.get_application_default as in the documentation returned an empty object, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash in place of the project name in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid) and that the "issuer" string is correct: the full email address of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
roles/iam.serviceAccountTokenCreator is required to call the signBlob service, and you need roles/storage.admin to have ownership of the object you need to sign. I think these are project-wide rights; I couldn't get it to work with more fine-grained permissions, unfortunately (i.e. one app being admin for only a certain Storage bucket).
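The thread above is Ruby, but for comparison the same keyless, signBlob-backed signing looks roughly like this with the Python client; the bucket and object names are placeholders, and it assumes the ambient credentials are a service account holding the roles above:

import datetime

import google.auth
from google.auth.transport import requests as auth_requests
from google.cloud import storage

# Ambient, keyless credentials (e.g. the App Engine / GCE service account).
credentials, _ = google.auth.default()
credentials.refresh(auth_requests.Request())  # obtain an access token

client = storage.Client(credentials=credentials)
blob = client.bucket("my-bucket").blob("my-object")  # placeholders

# With no private key on hand, passing service_account_email and access_token
# makes the library sign through the IAM signBlob API, which is the call that
# needs roles/iam.serviceAccountTokenCreator.
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    service_account_email=credentials.service_account_email,
    access_token=credentials.token,
)
print(url)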

Working with Google Cloud Storage without gsutil

I have developed a piece of software in which the directories for saving files are configurable. I run it on Linux. These directories are specified in a config file.
I would like to use Compute Engine nodes because I need to increase its performance. Therefore, I would like to use Google Cloud Storage to save these files into a safe repository.
[1] shows how to mount a bucket as a file system. I tried it, but with no success; I receive an authentication error.
Can anyone help me successfully access my bucket from Compute Engine nodes?
[1] https://cloud.google.com/compute/docs/disks/gcs-buckets
Best regards,
It sounds like you did not start your GCE instance with a service account.
According to the docs you linked, you need to configure a service account or run gcloud auth login to configure your credentials for accessing Cloud Storage.
If you are trying to set up gcsfuse without running on GCE you will need to use the gcloud auth login approach.
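If it's unclear whether the instance has usable credentials at all, one quick check (a sketch using the Python client rather than gcsfuse itself; "my-bucket" is a placeholder) is to resolve Application Default Credentials and try a read:

import google.auth
from google.cloud import storage

# Resolves Application Default Credentials: the attached service account on a
# GCE instance, or whatever `gcloud auth application-default login` configured.
credentials, project = google.auth.default()
print("project:", project)
print("identity:", getattr(credentials, "service_account_email", "user credentials"))

# A simple read against the bucket you plan to mount will fail with a similar
# authentication/permission error if the instance's credentials or access
# scopes are insufficient.
client = storage.Client(credentials=credentials, project=project)
for blob in client.list_blobs("my-bucket", max_results=5):
    print(blob.name)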