I want to set the Content-Type metadata to image/jpeg for all objects in a Google Cloud Storage bucket.
How can I do this?
Using gsutil and its setmeta command:
gsutil -m setmeta -h "Content-Type:image/jpeg" gs://YOUR_BUCKET/**/*.jpg
Use -m to enable parallel updates, in case you have a lot of objects.
The /**/* pattern performs a recursive search across any folders you may have in your bucket.
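If you want to spot-check that the new Content-Type took effect, you can inspect a single object afterwards (the object path here is just an example):
gsutil stat gs://YOUR_BUCKET/some/photo.jpg
The output includes the Content-Type line along with the rest of the object's metadata.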
I am trying to overwrite existing export data in gcloud using:
gcloud firestore export gs://<PROJECT>/dir --collection-ids='tokens'
But I get this error:
(gcloud.firestore.export) INVALID_ARGUMENT: Path already exists: /fcm-test-firebase.appspot.com/dir/dir.overall_export_metadata
Is there any way to either delete the path or export with replace?
You can easily determine the list of available flags for any gcloud command by running it with --help.
Here are the variants of the command, and you can see that there's no overwrite option:
gcloud firestore export
gcloud alpha firestore export
gcloud beta firestore export
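For example, to list the flags of the GA command locally:
gcloud firestore export --help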
Because the export is to a Google Cloud Storage (GCS) bucket, you can simply delete the path before attempting the export.
BE VERY CAREFUL with this command, as it recursively deletes objects:
gsutil rm -r gs://<PROJECT>/dir
If you would like Google to consider adding an overwrite feature, consider filing a feature request on its public issue tracker.
I suspect that the command doesn't exist for various reasons:
GCS storage is cheap
Many backup copies are infinitely better than no backup copies
It's easy to delete copies using gsutil
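Putting it together, a minimal sketch of an "overwrite" is to delete the old export path and then re-run the export (same bucket and path as in the question; the delete is irreversible):
# Remove the previous export (cannot be undone)
gsutil -m rm -r gs://<PROJECT>/dir
# Re-run the export into the now-free path
gcloud firestore export gs://<PROJECT>/dir --collection-ids='tokens'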
I want to copy files between a directory on my local disk and my Google Cloud Storage bucket, under the following conditions:
1) Copy all new files and folders.
2) Skip all existing files and folders irrespective of whether they have been modified or not.
I have tried to implement this using the Google ACL policy, but it doesn't seem to be working.
I am using a Google Cloud Storage admin service account to copy my files to the bucket.
As @A.Queue commented, the solution to skip existing files is to use the gsutil cp command with the -n option. This option means no-clobber: files and directories already present in the Cloud Storage bucket will not be overwritten, and only new files and directories will be added to the bucket.
If you run the following command:
gsutil cp -n -r . gs://[YOUR_BUCKET]
You will copy all files and directories (the whole directory tree, including every file and subdirectory underneath) that are not yet present in the Cloud Storage bucket, while those that are already present will be skipped.
You can find more information about this command in the gsutil cp documentation.
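Because of -n, the command is also safe to re-run: a second invocation only uploads files that are new locally, and everything already in the bucket is skipped. A rough sketch of a repeated, incremental copy:
# First run: uploads the whole tree
gsutil cp -n -r . gs://[YOUR_BUCKET]
# Later runs: only new local files are uploaded, existing objects are skipped
gsutil cp -n -r . gs://[YOUR_BUCKET]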
I have around 200 GB of data in a Google Cloud Storage Coldline bucket. When I try to remove it, it keeps preparing forever.
Is there any way to remove the bucket?
Try the gsutil tool if you have been using the Console and it did not work. To do so, you can just open Google Cloud Shell (the leftmost button in the top-right corner of the Console) and type a command like:
gsutil -m rm -r gs://[BUCKET_NAME]
It may take a while, but with the -r flag you first delete the contents of the bucket recursively and then delete the bucket itself. The -m flag performs parallel removes to speed up the process.
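If you prefer to empty the bucket first and remove it separately, an equivalent two-step variant (same warning: the deletes are irreversible) would be:
# Delete every object in the bucket, in parallel
gsutil -m rm gs://[BUCKET_NAME]/**
# Remove the now-empty bucket
gsutil rb gs://[BUCKET_NAME]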
I am new to Google Cloud Storage Nearline and am testing it. I intend to use Nearline for backups.
I wonder how to keep file timestamps when I do 'gsutil cp' between my local machine and Nearline.
gsutil cp localfile gs://mybucket
Then the uploaded file's timestamp is set to the upload time. I want to keep the original file timestamp.
Sorry, you cannot specify the creation time of an object in GCS. The creation time is always the moment that the object is created in GCS.
You can, however, set extra user metadata on objects that you upload. If you'd like, you can record the original creation time of an object there:
$> gsutil cp -h "x-goog-meta-local-creation-time:Some Creation Time" localfile gs://mybucket
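If you want the recorded value to be the file's actual modification time rather than a placeholder, you could fill it in from the local filesystem, for example with GNU stat (the metadata key itself is arbitrary; this just reuses the one above):
$> gsutil cp -h "x-goog-meta-local-creation-time:$(stat -c %y localfile)" localfile gs://mybucket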
When I performed the copy with the following command, the timestamp (Linux "mtime") of each local file was automatically preserved as "goog-reserved-file-mtime" in the object metadata on Google Cloud Storage.
gsutil cp -r -P $LOCAL_DIR/* gs://$TARGET_BUCKET &
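To check that the mtime was captured, you can inspect an uploaded object's metadata (the object name here is just an example); downloading later with gsutil cp -P should also restore the mtime on the local copy:
gsutil stat gs://$TARGET_BUCKET/somefile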
I ran gsutil.py acl set -R public-read gs://dsa-assets
and now I see that it overrode every user's permissions, and I can't upload new files or even delete the bucket.
What can I do to reset the permissions on my bucket, or to delete the bucket?
gsutil acl set -R public-read gs://bucketName will set the ACL for bucketName and all of the objects inside of bucketName to the canned ACL public-read. This ACL grants all users read access to the bucket and objects, and it grants FULL_CONTROL to the bucket or object owner.
Every ACL includes FULL_CONTROL for the bucket or object owner. The owner of a bucket always has FULL_CONTROL of the bucket they own, no matter what the ACL is set to.
If you find that you can no longer upload files to the bucket, it is likely that you are not using gsutil with an account that owns the bucket. Figure out which project owns the bucket, and make sure that your account is in the owners group of that project.
Alternately, you could switch which account you're using for gsutil to one that is a project owner temporarily. The easiest way to do this is by using the BOTO_CONFIG environment variable to control multiple profiles:
$> BOTO_CONFIG=/home/me/.boto.owner gsutil config
# Follow prompts to set up account, use an account that owns the bucket
$> BOTO_CONFIG=/home/me/.boto.owner gsutil acl ch -u otherAccount@gmail.com:FC gs://dsa-assets
$> gsutil do stuff with original account
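Once you are acting as an owner again, undoing the public-read grant is just a matter of setting a more restrictive canned ACL, e.g. back to private (bucket name taken from the question):
$> gsutil acl set -R private gs://dsa-assets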