500 Internal Server Error when using gsutil mb -l Location Constraint - google-cloud-storage

Has anyone successfully been able to create a location-specific storage bucket? I am unable to do so for granular US regions like US-CENTRAL1; attempting to do so results in a 500 Internal Server Error. I'm using the latest version of gsutil:
> gsutil mb -l US-CENTRAL1 gs://somebucketname =>
Failure: BotoServerError: 500 Internal Server Error; Code: InternalError; Message: We encountered an internal error. Please try again.

As the regional buckets documentation says, only Durable Reduced Availability storage is available for regional buckets. To specify DRA when creating the bucket:
gsutil mb -c DRA -l US-CENTRAL1 gs://some-bucket
I'll also open a ticket to provide a more informative response in this case instead of returning an HTTP 500.
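If you want to double-check the result, gsutil can print the bucket's metadata, including its location constraint and storage class (assuming the bucket name from the command above):
gsutil ls -L -b gs://some-bucket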

Related

Google cloud storage: Cannot reuse bucket name after deleting bucket

I deleted an existing bucket on google cloud storage using:
gsutil rm -r gs://www.<mydomain>.com
I then verify the bucket was deleted using:
gcloud storage ls gs://www.<mydomain>.com
And I get the expected response:
ERROR: (gcloud.storage.ls) gs://www.<mydomain>.com not found: 404.
I then verify again that the bucket was deleted using:
gsutil ls
And I get the expected empty response.
I then tried to recreate a new bucket with same name using:
gsutil mb -p <projectid> -c STANDARD -l US-EAST1 -b on gs://www.<mydomain>.com
I get the unexpected error below, indicating the bucket still exists:
www.<mydomain>.com
Creating gs://www.<mydomain>.com/...
ServiceException: 409 A Cloud Storage bucket named 'www.<mydomain>.com' already exists. Try another name. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
How can I reuse the bucket name for the bucket that I deleted?
I found the answer to my question here:
https://stackoverflow.com/a/44763841
Basically, I had deleted the project the bucket was in, either before or after (I'm not sure) deleting the bucket. For some reason this causes the bucket to still appear to exist even though it does not. The behavior does not seem quite right to me, but I believe that once the billing period completes and the project is fully deleted, the phantom bucket will go away. Unfortunately this means I have to wait 2 weeks. I will confirm this in 2 weeks.
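If you are in the same situation, one way to check whether the old project is still pending deletion (a sketch, assuming your account can still list the project and that gcloud still reports pending-deletion projects this way) is to filter the project list by lifecycle state:
gcloud projects list --filter="lifecycleState:DELETE_REQUESTED"
A project that shows up here is still in its deletion grace period, and names tied to it may still be reserved.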

GCloud custom image upload failure due to size or permissions

I've been trying to upload two custom images for some time now and I have failed repeatedly. During the import process the Google application always responds with the message that the Compute Engine default service account does not have the role 'roles/compute.storageAdmin'. However, I have assigned it using both the CLI and the web interface.
Notably, the application throws this error while resizing the disk. The original size of the disk is about 10GB; however, it tries to convert it to a 1024GB (!) disk. This got me thinking: could it be that this is too big for the application, and that's why it throws the error about lacking permissions?
As a follow-up question, I have not found any option to set the size of the disk (neither in the CLI nor in the web app). Does anybody know of such an option?
Here is the error message I have received:
ate-import-3ly9z": StatusMatch found: "Import: Resizing temp-translation-disk-3ly9z to 1024GB in projects/0000000000000/zones/europe-west4-a."
[import-and-translate]: 2020-05-01T07:46:30Z Error running workflow: step "import" run error: step "wait-for-signal" run error: WaitForInstancesSignal FailureMatch found for "inst-importer-import-and-translate-import-3ly9z": "ImportFailed: Failed to resize disk. The Compute Engine default service account needs the role: roles/compute.storageAdmin'"
[import-and-translate]: 2020-05-01T07:46:30Z Serial-output value -> target-size-gb:1024
[import-and-translate]: 2020-05-01T07:46:30Z Serial-output value -> source-size-gb:7
[import-and-translate]: 2020-05-01T07:46:30Z Serial-output value -> import-file-format:vmdk
[import-and-translate]: 2020-05-01T07:46:30Z Workflow "import-and-translate" cleaning up (this may take up to 2 minutes).
[import-and-translate]: 2020-05-01T07:47:34Z Workflow "import-and-translate" finished cleanup.
[import-image] 2020/05/01 07:47:34 step "import" run error: step "wait-for-signal" run error: WaitForInstancesSignal FailureMatch found for "inst-importer-import-and-translate-import-3ly9z": "ImportFailed: Failed to resize disk. The Compute Engine default service account needs the role: roles/compute.storageAdmin'"
ERROR
ERROR: build step 0 "gcr.io/compute-image-tools/gce_vm_image_import:release" failed: step exited with non-zero status: 1
ERROR: (gcloud.compute.images.import) build a9ccbeac-92c5-4457-a784-69d486e85c3b completed with status "FAILURE"
Thanks for your time!
EDIT: Not sure, but I'm fairly certain this is due to the 1024GB being too big. I've uploaded a 64GB image without any issues using the same method. For those who read this after me, that's most likely the issue (:
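If you want to see where the 1024GB figure comes from before importing, you can inspect the virtual (provisioned) size declared inside the image locally; a quick check, assuming qemu-img is installed and the file is named disk.vmdk:
qemu-img info disk.vmdk
The "virtual size" reported there is likely what the importer tries to resize the temporary disk to, even if the actual data is only a few GB.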
This error message with the import of virtual disks has 2 root causes:
1.- Cloud Build and/or Compute Engine and/or your user account did not have the correct IAM roles to perform these tasks. You can verify them here.
Cloud Build SA roles needed:
roles/iam.serviceAccountTokenCreator
roles/compute.admin
roles/iam.serviceAccountUser
Compute Engine SA roles needed:
roles/compute.storageAdmin
roles/storage.objectViewer
User Account roles needed:
roles/storage.admin
roles/viewer
roles/resourcemanager.projectIamAdmin
2.- " Not sure but I'm fairly certain this is due to the 1024GB being too big" The disk quota you have is less than 1T. The normal disk quota is 250-500 GB so that could be why by importing a 64 GB disk you encounter no problem.
You can check your quota in step 1 of this document; If you need to request more, you can follow steps 2 to 7.
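If any of the roles listed under the first root cause are missing, a minimal sketch of granting one from the CLI (PROJECT_ID is a placeholder; the Compute Engine default service account normally looks like PROJECT_NUMBER-compute@developer.gserviceaccount.com):
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/compute.storageAdmin"
Repeat with the other roles for the corresponding accounts.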

The mb command requires a URL that specifies a bucket

I'm attempting to use the mb command to create a bucket on Google Cloud Storage but am getting
CommandException: The mb command requires a URL that specifies a bucket.
The odd part is that while
gsutil mb gs://foo/bar1
returns this error,
gsutil ls gs://foo/bar2
correctly lists files in gs://foo/bar2. I don't see how gs://foo/bar2 can be a valid URL while gs://foo/bar1 isn't. Is anyone able to shed some light here?
gs://foo/bar1 is a URL that specifies an object, bar1, within a bucket, foo. The gsutil mb command requires a URL signifying a bucket, e.g. gs://foo. The gsutil ls command can accept both bucket and object URLs.
gsutil mb makes a bucket. "gs://foo" specifies a bucket, specifically the bucket 'foo'. "gs://foo/bar1" specifies an object rather than just a bucket. "foo/bar1" isn't a bucket.
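Put differently, you create the bucket first and then copy objects into whatever path you like; a minimal sketch, assuming a local file named file.txt:
gsutil mb gs://foo
gsutil cp file.txt gs://foo/bar1/file.txt
The second command creates an object whose name simply starts with the "bar1/" prefix; no separate "folder" needs to exist.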

gsutil acl set command AccessDeniedException: 403 Forbidden

I am following the steps for setting up Django on Google App Engine, and since Gunicorn does not serve static files, I have to store my static files in Google Cloud Storage.
I am at the line with "Create a Cloud Storage bucket and make it publically readable." on https://cloud.google.com/python/django/flexible-environment#run_the_app_on_your_local_computer. I ran the following commands as suggested:
$ gsutil mb gs://your-gcs-bucket
$ gsutil defacl set public-read gs://your-gcs-bucket
The first command is supposed to create a new storage bucket, and the second one sets its default ACL. When I run them, the second command returns an error.
Setting default object ACL on gs://your-gcs-bucket/...
AccessDeniedException: 403 Forbidden
I also tried other commands setting or getting acl, but all returns the same error, with no additional information.
I am a newbie with Google Cloud services; could anyone point out what the problem is?
I figured it out myself, and it is kind of silly. I didn't notice whether the first command succeeded or not, and apparently it did not.
For a newbie like me, it is important to note that things like bucket names are global across all of Google Cloud. What happened was that the name I used for the new bucket was already taken by someone else, so no wonder I did not have permission to access that bucket.
A better way to handle this is to choose bucket names wisely, e.g. by prefixing them with your project and application name.
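One way to make name collisions unlikely and to notice a failed mb immediately is to derive the bucket name from your project ID and chain the two commands, so the ACL step only runs if creation succeeded (a sketch; the "-django-static" suffix is just an example name):
PROJECT_ID=$(gcloud config get-value project)
gsutil mb "gs://${PROJECT_ID}-django-static" && \
    gsutil defacl set public-read "gs://${PROJECT_ID}-django-static"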

gsutil make bucket command [gsutil mb] is not working

I am trying to create a bucket using gsutil mb command:
gsutil mb -c DRA -l US-CENTRAL1 gs://some-bucket-to-my-gs
But I am getting this error message:
Creating gs://some-bucket-to-my-gs/...
BadRequestException: 400 Invalid argument.
I am following the documentation from here
What is the reason for this type of error?
I got the same error. It was because I used the wrong location.
The location parameter expects a region, without specifying which zone.
Eg.
gsutil mb -p ${TF_ADMIN} -l europe-west1-b gs://${TF_ADMIN}
Should have been
gsutil mb -p ${TF_ADMIN} -l europe-west1 gs://${TF_ADMIN}
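If you are unsure what valid region names look like, listing the Compute Engine regions is a reasonable guide, since regional bucket locations largely mirror them (note there is no -a/-b/-c zone suffix):
gcloud compute regions list --format="value(name)"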
One reason this error can occur (confirmed in chat with the question author) is that you have an invalid default_project_id configured in your .boto file. Ensure that ID matches your project ID in the Google Developers Console.
If you can make a bucket successfully using the Google Developers Console, but not using "gsutil mb", this is a good thing to check.
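For reference, the entry lives in the [GSUtil] section of the .boto file and should look something like this (the project ID below is a placeholder):
[GSUtil]
default_project_id = my-project-id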
I was receiving the same error for the same command while using gsutil as well as the web console. Interestingly enough, changing my bucket name from "google-gatk-test" to "gatk" allowed the request to go through. The original name does not appear to violate bucket naming conventions.
Playing with the bucket name is worth trying if anyone else is running into this issue.
Got this error and adding the default_project_id to the .boto file didn't work.
It took me some time, but in the end I deleted the credentials file from the "Global Config" directory and recreated the account.
Using it on Windows, by the way...
This can happen if you are logged into the management console (storage browser), possibly a locking/contention issue.
May be an issue if you add and remove buckets in batch scripts.
In particular, this was happening to me when creating regionally diverse (non-DRA) buckets:
gsutil mb -l EU gs://somebucket
Also watch out for underscores; the abstraction scheme seems to use them to map folders. All objects in the same project are stored at the same level (possibly as blobs in an abstracted database structure).
You can see this when downloading from the browser interface (at the moment, anyway).
An object copied to gs://somebucket/home/crap.txt might be downloaded via a browser (or curl) as home_crap.txt. As an aside (red herring), somefile.tar.gz can come down as somefile.tar.gz.tar, so a little bit of renaming may be required due to the vagaries of the headers returned from the browser interface. The minimum real support level is still $150/month.
I had this same issue when I created my bucket using the following commands:
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=MY_BUCKET_NAME_1
MY_REGION=us-central1
But when I added a dollar sign $ when referencing the variable, i.e. MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1, the error was cleared and I was able to create the bucket.
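For clarity, this is plain shell behavior: without the $, gsutil receives the literal string "MY_BUCKET_NAME_1", which is not a valid bucket name (uppercase letters are not allowed). A minimal sketch of the corrected sequence:
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1      # expands to quiceicklabs928322j22df
MY_REGION=us-central1
gsutil mb -l "${MY_REGION}" "gs://${MY_BUCKET_NAME_2}"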
I got this error when I had a capital letter in the bucket name:
$gsutil mb gs://CLIbucket-anu-100000
Creating gs://CLIbucket-anu-100000/...
BadRequestException: 400 Invalid bucket name: 'CLIbucket-anu-100000'
$gsutil mb -l ASIA-SOUTH1 -p single-archive-352211 gs://clibucket-anu-100
Creating gs://clibucket-anu-100/..
$