I ran gsutil.py acl set -R public-read gs://dsa-assets
and now I see that it overrode every user's permissions; I can't upload new files or even delete the bucket.
What can I do to reset the permissions on my bucket, or to delete it?
gsutil acl set -R public-read gs://bucketName will set the ACL for bucketName and all of the objects inside of bucketName to the canned ACL public-read. This ACL grants all users read access to the bucket and objects, and it grants FULL_CONTROL to the bucket or object owner.
Every ACL includes FULL_CONTROL for the bucket or object owner. The owner of the bucket will always have FULL_CONTROL of the bucket they own, no matter what ACL they apply.
If you find that you can no longer upload files to the bucket, it is likely that you are not using gsutil with an account that owns the bucket. Figure out which project owns the bucket, and make sure that your account is in the owners group of that project.
Alternately, you could switch which account you're using for gsutil to one that is a project owner temporarily. The easiest way to do this is by using the BOTO_CONFIG environment variable to control multiple profiles:
$> BOTO_CONFIG=/home/me/.boto.owner gsutil config
# Follow prompts to set up account, use an account that owns the bucket
$> BOTO_CONFIG=/home/me/.boto.owner gsutil acl ch -u otherAccount@gmail.com:FC gs://dsa-assets
$> gsutil ...  # then continue using gsutil with your original account
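Once gsutil is running as an account with OWNER rights on the bucket, you can also reset the ACLs back to the project defaults. A minimal sketch, assuming the bucket from the question and the project-private canned ACL:
$> BOTO_CONFIG=/home/me/.boto.owner gsutil acl set -R project-private gs://dsa-assets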
I am trying to overwrite existing export data in gcloud using:
gcloud firestore export gs://<PROJECT>/dir --collection-ids='tokens'
But I get this error:
(gcloud.firestore.export) INVALID_ARGUMENT: Path already exists: /fcm-test-firebase.appspot.com/dir/dir.overall_export_metadata
Is there any way to either delete the path or export with replace?
You can easily determine the list of available flags for any gcloud command.
Here are the variants of the command, and you can see that none of them offers an overwrite option:
gcloud firestore export
gcloud alpha firestore export
gcloud beta firestore export
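If you want to confirm this yourself, appending --help to any of these prints the full flag reference, for example:
gcloud firestore export --help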
Because the export is to a Google Cloud Storage (GCS) bucket, you can simply delete the path before attempting the export.
BE VERY CAREFUL with this command as it recursively deletes objects
gsutil rm -r gs://<PROJECT>/dir
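Once the old path has been removed, the export command from the question should run again without the INVALID_ARGUMENT error:
gcloud firestore export gs://<PROJECT>/dir --collection-ids='tokens'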
If you would like Google to consider adding an overwrite feature, consider filing a feature request on its public issue tracker.
I suspect that the command doesn't exist for various reasons:
GCS storage is cheap
Many backup copies are far better than no backup copies
It's easy to delete copies using gsutil
Because Firestore does not have a way to clone projects, I am attempting to achieve the equivalent by copying data from one project into a GCS bucket and read it into another project.
Specifically, using Cloud Shell I populate the bucket with data exported from Firestore project A and am attempting to import it into Firestore project B. The bucket belongs to Firestore project A.
I am able to export the data from Firestore project A without any issue. When I attempt to import into Firestore project B with the Cloud Shell command
gcloud beta firestore import gs://bucketname
I get the error message
project-b@appspot.gserviceaccount.com does not have storage.buckets.get access to bucketname
I have searched high and low for a way to grant the storage.buckets.get permission to project B, but am not finding anything that works.
Can anyone point me to how this is done? I have been through the Google docs half a dozen times and am either not finding the right information or not understanding the information that I find.
Many thanks in advance.
To import data from project A into project B, the service account in project B must have the right permissions on the Cloud Storage bucket in project A.
In your case, the service account is:
project-ID@appspot.gserviceaccount.com
To grant the right permissions you can use this command on the Cloud Shell of project B:
gsutil acl ch -u project-ID@appspot.gserviceaccount.com:OWNER gs://[BUCKET_NAME]
gsutil -m acl ch -r -u project-ID@appspot.gserviceaccount.com:OWNER gs://[BUCKET_NAME]
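If you want to verify the grant before importing, you can inspect the bucket ACL (a quick sanity check, not strictly required):
gsutil acl get gs://[BUCKET_NAME]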
Then, you can import using the firestore import:
gcloud beta firestore import gs://[BUCKET_NAME]/[EXPORT_PREFIX]
I was not able to get the commands provided by "sotis" to work; however, his answer certainly got me heading down the right path. The commands that eventually worked for me were:
gcloud config set project [SOURCE_PROJECT_ID]
gcloud beta firestore export gs://[BUCKET_NAME]
gcloud config set project [TARGET_PROJECT_ID]
gsutil acl ch -u [RIGHTS_RECIPIENT]:R gs://[BUCKET_NAME]
gcloud beta firestore import gs://[BUCKET_NAME]/[TIMESTAMPED_DIRECTORY]
Where:
* SOURCE_PROJECT_ID = the name of the project you are cloning
* TARGET_PROJECT_ID = the destination project for the cloning
* RIGHTS_RECIPIENT = the email address of the account to receive read rights
* BUCKET_NAME = the name of the bucket that stores the data. Please note, you have to manually create this bucket before you export to it, and make sure the bucket is in the same geographic region as the projects you are working with.
* TIMESTAMPED_DIRECTORY = the name of the data directory automatically created by the "export" command
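If you are unsure of the TIMESTAMPED_DIRECTORY name, listing the bucket after the export will show it, for example:
gsutil ls gs://[BUCKET_NAME]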
I am sure that this is not the only way to solve the problem, however it worked for me and appears to be the "shortest path" solution I have seen.
I want to set the Content-Type metadata to image/jpeg for all objects in a Google Cloud Storage bucket.
How can I do this?
Using gsutil and its setmeta command:
gsutil -m setmeta -h "Content-Type:image/jpeg" gs://YOUR_BUCKET/**/*.jpg
Use the -m flag to activate parallel updates, in case you have a lot of objects.
The /**/* pattern performs a recursive search across any folders that you may have in your bucket.
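If you want to spot-check the result on a single object, gsutil stat prints its metadata, including Content-Type (the object path below is only an example):
gsutil stat gs://YOUR_BUCKET/some/folder/photo.jpg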
I want to copy files between a directory on my local computer disk and my Google Cloud Storage bucket with the below conditions:
1) Copy all new files and folders.
2) Skip all existing files and folders irrespective of whether they have been modified or not.
I have tried to implement this using the Google ACL policy, but it doesn't seem to be working.
I am using Google Cloud Storage admin service account to copy my files to the bucket.
As @A.Queue commented, the solution to skip existing files is to use the gsutil cp command with the -n option. This option means no-clobber, so that all files and directories already present in the Cloud Storage bucket will not be overwritten, and only new files and directories will be added to the bucket.
If you run the following command:
gsutil cp -n -r . gs://[YOUR_BUCKET]
You will copy all files and directories (including the whole directory tree with all files and subdirectories underneath) that are not present in the Cloud Storage bucket, while all of those which are already present will be skipped.
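If you have a large number of files, you can also combine this with the top-level -m flag mentioned earlier to run the copy in parallel:
gsutil -m cp -n -r . gs://[YOUR_BUCKET]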
You can find more information about this command in the gsutil cp documentation.
I am new to Google Cloud Storage Nearline and am testing it. I intend to use Nearline for backups.
I wonder how to keep file timestamps when I do 'gsutil cp' between local storage and Nearline.
gsutil cp localfile gs://mybucket
Then, the uploaded file's timestamp is set to the upload time. I want to keep the original file's timestamp.
Sorry, you cannot specify the creation time of an object in GCS. The creation time is always the moment that the object is created in GCS.
You can, however, set extra user metadata on objects that you upload. If you'd like, you can record the original creation time of an object there:
$> gsutil cp -h "x-goog-meta-local-creation-time:Some Creation Time" localfile gs://mybucket
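You can read that custom metadata back later with gsutil ls -L, which lists the object's metadata (the object name below assumes the upload above):
$> gsutil ls -L gs://mybucket/localfile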
When I performed the copy with the following command, the timestamp (Linux "mtime") of each local file was automatically preserved as "goog-reserved-file-mtime" in the object metadata on Google Cloud Storage.
gsutil cp -r -P $LOCAL_DIR/* gs://$TARGET_BUCKET &
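Downloading with the same -P flag should restore the saved mtime on the local copies; a sketch of the reverse direction, assuming the same variables:
gsutil cp -r -P gs://$TARGET_BUCKET/* $LOCAL_DIR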