I have a bucket on Google Cloud Storage that I created. I wanted to test some of the built-in ACLs like public-read, public-read-write, etc. But once I changed the ACL using the gsutil setacl command, like so:
gsutil setacl public-read-write gs://mybucket
I seem to have lost the ability to set the ACL to anything else, and I also cannot get the current ACL. When I attempt either, I get the following message:
GSResponseError: status=403, code=AccessDenied, reason=Forbidden, detail=mybucket.
Not sure if this is a bug or I am just missing something obvious. How do I regain the ability to set the ACLs?
Go to https://code.google.com/apis/console/#:team and see whether you are a viewer, editor or owner of the project.
Contact one of the owners of the project.
Ask the owner to run gsutil chacl -u you@gmail.com:FC gs://mybucket, or else make you an owner of the project.
Project owners always have full control of buckets.
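For reference, newer gsutil releases replaced the old setacl/getacl/chacl commands with the acl subcommand. Once access has been restored (or you have been made a project owner), something along these lines should work again; the bucket and account names here are placeholders:
gsutil acl get gs://mybucket
gsutil acl ch -u you@gmail.com:O gs://mybucket
gsutil acl set private gs://mybucket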
We have uploaded files to Google Cloud Storage buckets and are planning to set up permissions so a number of people can access them. So far we can only filter/search files and folders in the directory we are currently in. Is it possible to search files recursively, though?
It seems what you are looking for is the following command for searching within a bucket recursively:
gsutil ls -r gs://bucket/**
Note: "bucket" is the name of the bucket you have set.
In the case you would like to search within a specific directory you can run the following:
gsutil ls -r gs://bucket/dir/**
Note: "dir" would be the directory in which you would like to search
You can find more information under "Directory By Directory, Flat, And Recursive Listings" in the gsutil ls documentation.
Update
If this is not what you meant, then I would like to mention another option. You can also retrieve information about the contents of a bucket through the JSON API: the Objects: list method returns a list of objects matching the specified criteria.
Note: In order for this API to work the user must have "READER" permission or above.
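If you want to try the API directly, a minimal sketch using curl would look roughly like this; the bucket name and prefix are placeholders, and it assumes you are already authenticated with gcloud:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" "https://storage.googleapis.com/storage/v1/b/bucket/o?prefix=dir/"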
Please let me know if this is what you were looking for.
I am following the steps for setting up Django on Google App Engine, and since Gunicorn does not serve static files, I have to store my static files in Google Cloud Storage.
I am at the line "Create a Cloud Storage bucket and make it publicly readable." on https://cloud.google.com/python/django/flexible-environment#run_the_app_on_your_local_computer. I ran the following commands as suggested:
$ gsutil mb gs://your-gcs-bucket
$ gsutil defacl set public-read gs://your-gcs-bucket
The first command is supposed to create a new storage bucket, and the second sets its default ACL. When I run them, the second command returns an error:
Setting default object ACL on gs://your-gcs-bucket/...
AccessDeniedException: 403 Forbidden
I also tried other commands for setting or getting ACLs, but they all return the same error, with no additional information.
I am a newbie with Google Cloud services; could anyone point out what the problem is?
I figured it out myself, and it is kind of silly. I didn't check whether the first command actually succeeded, and apparently it did not.
For a newbie like me, it is important to note that bucket names (like project IDs) are global across all of Google Cloud Storage. The name I used for my new bucket was already taken by someone else, so no wonder I did not have permission to access that bucket.
A better way to deal with this is to choose bucket names wisely, for example by prefixing them with your project and application names.
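A minimal sketch along those lines; the project ID and application name below are placeholders, and the defacl command should only be run once mb has actually succeeded:
gsutil mb gs://my-project-id-myapp-static
gsutil defacl set public-read gs://my-project-id-myapp-static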
I am developing a script which uses the REST API for an Oracle ZFS Storage appliance ("ZS3"). The script uses the API to make a snapshot and clone of a production environment for use as a temporary test environment. So far everything is great... except I can find no way to specify the "Share Level ACL" settings for the SMB protocol.
A manual clone (via the web UI) results in a default ACL of "everyone, full access". The ACL for the original share (the source for the snapshot/clone) has a specific user list with specific ACLs. I assume that this information is not in the ZFS snapshot but is maintained outside of ZFS, hence it is not present in the clone (Q: Is this correct?).
I've re-read the Oracle document "E56084.pdf" ("Oracle ZFS Storage Appliance RESTful API Guide, Release 2013.1.4.0") a few times. There are vague references to the "sharesmb" property, and nothing else related to SMB or ACLs. My script correctly sets the "sharesmb" value (used to enable SMB sharing) to "sharesmb=SHARENAME,abe=off,dfsroot=false" in the JSON payload passed to the API for creating a file system clone. However, I see no property that I can set for the actual ACL list. For NFS this is easy: it is the value passed in the "sharenfs" property.
The result of a "GET" of the source project and share does not contain any reference to the users listed in the "SMB Share Level ACL" as seen in the web UI.
So, how do I copy over, or explicitly set if necessary, the "SMB Share Level ACLs" on a share via the REST api?
Thanks!
The system has two different kinds of ACLs and both are stored inside your datasets:
ACLs on all files and directories (let's call them file ACLs): These are used for general Unix access and also are active when sharing the filesystem. They are stored with each file or directory (use /usr/bin/ls -V /pool/filesystem/yourFile or /usr/bin/ls -Vd /pool/filesystem/yourDir to see them).
ACLs on filesystems shared via SMB/CIFS protocol (let's call them share ACLs): These are only used when sharing the filesystem and can only be set for the whole filesystem, not individual files inside. Use /usr/bin/ls -V /pool/filesystem/.zfs/shares/yourShareName to see them.
Unfortunately I do not know how to do that over the REST API, but at least you know where your ACLs should end up.
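If you have shell access to the storage host (which a stock ZFS Storage Appliance may not expose), the share ACL can in principle be inspected and edited like any other NFSv4 ACL; a hedged sketch, with the pool, filesystem, share, and user names all placeholders:
/usr/bin/ls -V /pool/filesystem/.zfs/shares/yourShareName
/usr/bin/chmod A=user:someuser:full_set:allow /pool/filesystem/.zfs/shares/yourShareName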
When I try to copy my files to Google Cloud Storage using
gsutil cp file.gz gs://somebackup
Get this error:
Your "GCE" credentials are invalid. For more help, see "gsutil help creds", or re-run the gsutil config command (see "gsutil help config").
Failure: GCE credentials requested outside a GCE instance.
BTW, this was working until yesterday.
Just ran into this as well and contacted Google support. It's occurring because the instance was created with its Storage access scope set to Read Only, which is visible on the instance details page.
Apparently this can't be changed after the instance is created (!). Our solution was to mount a temporary disk, copy the file there, unmount it, then remount it on a second instance (with the proper Storage permissions) and do the gsutil copy from there.
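Another option at the time was simply to recreate the instance with a read-write storage scope; with gcloud that would look roughly like the following, where the instance name and zone are placeholders:
gcloud compute instances create my-instance --zone us-central1-a --scopes storage-rw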
Try running "gcloud auth login" from the command line.
I've had this before, and my problem was that I had set the wrong project.
Make sure you set the project ID, not the project name, when you run
gcloud config set project <projectID>
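A quick way to double-check which account and project gsutil will use is something like the following; the project ID here is a placeholder:
gcloud auth login
gcloud config list
gcloud config set project my-project-id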
I am trying to create a bucket using the gsutil mb command:
gsutil mb -c DRA -l US-CENTRAL1 gs://some-bucket-to-my-gs
But I am getting this error message:
Creating gs://some-bucket-to-my-gs/...
BadRequestException: 400 Invalid argument.
I am following the documentation from here
What is the reason for this type of error?
I got the same error. It was because I used the wrong location.
The location parameter expects a region, without specifying which zone.
E.g.
gsutil mb -p ${TF_ADMIN} -l europe-west1-b gs://${TF_ADMIN}
Should have been
gsutil mb -p ${TF_ADMIN} -l europe-west1 gs://${TF_ADMIN}
One reason this error can occur (confirmed in chat with the question author) is that you have an invalid default_project_id configured in your .boto file. Ensure that ID matches your project ID in the Google Developers Console.
If you can make a bucket successfully using the Google Developers Console, but not using "gsutil mb", this is a good thing to check.
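For reference, the setting lives in the [GSUtil] section of your .boto file and should look roughly like this; the ID below is a placeholder:
[GSUtil]
default_project_id = my-project-id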
I was receiving the same error for the same command while using gsutil as well as the web console. Interestingly enough, changing my bucket name from "google-gatk-test" to "gatk" allowed the request to go through. The original name did not appear to violate the naming conventions at first glance, but bucket names may not contain the word "google", which would explain the rejection.
Playing with the bucket name is worth trying if anyone else is running into this issue.
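If you want to see the effect yourself, a hedged example (the names here are purely illustrative):
gsutil mb gs://my-google-gatk-test   # likely rejected, since bucket names may not contain "google"
gsutil mb gs://my-gatk-test          # should be accepted, provided the name is not already taken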
Got this error, and adding the default_project_id to the .boto file didn't work.
It took me some time, but in the end I deleted the credentials file from the "Global Config" directory and recreated the account.
I'm using it on Windows, by the way.
This can happen if you are logged into the management console (storage browser); it is possibly a locking/contention issue.
It may be an issue if you add and remove buckets in batch scripts.
In particular, this was happening to me when creating regionally diverse (non-DRA) buckets:
gsutil mb -l EU gs://somebucket
Also watch out for underscores; the abstraction scheme seems to use them to map folders. All objects in the same bucket are stored at the same level (possibly as blobs in an abstracted database structure).
You can see this when downloading from the browser interface (at the moment anyway).
An object copied to gs://somebucket/home/crap.txt might be downloaded via a browser (or curl) as home_crap.txt. As an aside (and a red herring), somefile.tar.gz can come down as somefile.tar.gz.tar, so a little renaming may be required due to the vagaries of the headers returned from the browser interface. The minimum paid support level is still $150/month.
I had this same issue when I created my bucket using the following commands:
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=MY_BUCKET_NAME_1
MY_REGION=us-central1
But when I added a dollar sign ($) to reference the variable, as in MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1, the error was cleared and I was able to create the bucket.
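For anyone hitting the same thing, a sketch of the fix; the final mb line is added here for illustration, since only the variable assignments appear above:
# Without the "$", MY_BUCKET_NAME_2 holds the literal string "MY_BUCKET_NAME_1",
# which is not a valid bucket name (uppercase letters are not allowed)
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1
MY_REGION=us-central1
gsutil mb -l $MY_REGION gs://$MY_BUCKET_NAME_2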
I got this error when I had a capital letter in the bucket name:
$gsutil mb gs://CLIbucket-anu-100000
Creating gs://CLIbucket-anu-100000/...
BadRequestException: 400 Invalid bucket name: 'CLIbucket-anu-100000'
$gsutil mb -l ASIA-SOUTH1 -p single-archive-352211 gs://clibucket-anu-100
Creating gs://clibucket-anu-100/..
$