Google Cloud Storage does not let me remove a bucket with a large amount of data - google-cloud-storage

I have around 200 GB of data in a Google Cloud Storage Coldline bucket. When I try to remove it, it keeps preparing forever.
Is there any way to remove the bucket?

If you have been trying with the Console and it did not work, try the gsutil tool. You can just open Google Cloud Shell (the leftmost button in the top-right corner of the Console) and type a command like:
gsutil -m rm -r gs://[BUCKET_NAME]
It may take a while, but with the -r flag the contents of the bucket are deleted recursively first and the bucket itself is deleted afterwards. The -m flag performs the removals in parallel, which speeds up the process.
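If your Cloud SDK version includes the newer gcloud storage commands, an equivalent call (a sketch, assuming the same bucket-name placeholder) should be:
gcloud storage rm --recursive gs://[BUCKET_NAME]
The --recursive flag removes the objects and then the bucket itself, and gcloud storage handles parallelism automatically.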

Related

Is there a way to tag or version Cloud Storage buckets?

I have a shell script which refreshes my emulator's data to the latest data from prod.
Part of the script is removing the existing bucket and then re-exporting it to avoid the 'Path already exists' error.
I know that I can manually add versioned paths like /firestore_data/v1, but that would require me to find out what the latest version is from the console and then update the shell script each time I need to refresh the emulator's data.
Ideally I would like to be able to run gsutil -m cp -r gs://my-app.appspot.com/firestore_data#latest
Is there any way to version storage buckets, or to leave tags that can be used when adding and copying down?
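One possible workaround, sketched here under the assumption that the exports are written to versioned prefixes such as firestore_data/v1, firestore_data/v2 inside the bucket from the question, is to let the script discover the newest prefix instead of hard-coding it:
# List the version prefixes, pick the highest one, and copy it down
LATEST=$(gsutil ls gs://my-app.appspot.com/firestore_data/ | sort -V | tail -n 1)
gsutil -m cp -r "$LATEST" ./emulator_data
The sort -V version sort and the ./emulator_data target directory are illustrative assumptions; only the bucket path comes from the question.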

Google Cloud Firestore: How to copy Firestore collection to Cloud Storage

Is writing code the only option to copy Firestore collections to Cloud Storage, or is there some kind of magic feature I can use?
I know about the new feature for importing Firestore collections into BigQuery that was announced in the Firestore talk during the Next conference. Is there something similar for Cloud Storage?
https://cloud.google.com/firestore/docs/manage-data/export-import. I am not sure whether this is a new feature, but I am going to try it out.
Yes, finally, Firebase enabled this feature.
Create a Cloud Storage bucket.
Install gcloud if it is not already installed: in a terminal, run curl https://sdk.cloud.google.com | bash
After the prompt Modify profile to update your $PATH and enable bash completion? (Y/n), type y and press Enter.
Next, run source .bash_profile
Afterwards, run: gcloud beta firestore export gs://[BUCKET-NAME]
In case you want to save the folder locally, simply run gsutil cp -r gs://[BUCKET-NAME] /path/to/folder
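Putting those steps together, a minimal sketch (assuming gcloud is already installed and authenticated, and using my-backup-bucket and ./firestore_backup as placeholder names) could look like this:
# Export all Firestore collections into the bucket
gcloud beta firestore export gs://my-backup-bucket
# Optionally pull the exported data down to the local machine
gsutil -m cp -r gs://my-backup-bucket ./firestore_backup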

How to skip existing files in gsutil rsync

I want to copy files between a directory on my local computer disk and my Google Cloud Storage bucket under the conditions below:
1) Copy all new files and folders.
2) Skip all existing files and folders irrespective of whether they have been modified or not.
I have tried to implement this using the Google ACL policy, but it doesn't seem to be working.
I am using a Google Cloud Storage admin service account to copy my files to the bucket.
As A.Queue commented, the way to skip existing files is to use the gsutil cp command with the -n option. This option means no-clobber: files and directories already present in the Cloud Storage bucket will not be overwritten, and only new files and directories will be added to the bucket.
If you run the following command:
gsutil cp -n -r . gs://[YOUR_BUCKET]
You will copy all files and directories (the whole directory tree, with all files and subdirectories underneath) that are not yet present in the Cloud Storage bucket, while those that are already present will be skipped.
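For large directory trees you can combine -n with the -m flag from the first answer to parallelize the uploads; a sketch, assuming the current directory is the tree to mirror:
gsutil -m cp -n -r . gs://[YOUR_BUCKET]
Objects that already exist in the bucket are skipped rather than re-uploaded, so re-running the command after adding local files only transfers the new ones.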
You can find more information about this command in the gsutil cp documentation.

Executed PHP Script Cannot Access GCS Mounted Drive on GCE

I was able to mount my Google Cloud Storage bucket using the command below:
gcsfuse -o allow_other -file-mode=660 -dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder
The group includes the user apache. Apache is able to write to the mounted drive like so:
sudo -u apache echo 'Some Test Text' > /path/to/domain/folder/hello.txt
hello.txt appears in the bucket as expected. However, when I execute the PHP script below, I get an error:
<?php file_put_contents('/path/to/domain/folder/hello.txt', 'Some Test Text');
PHP Error: failed to open stream: Permission denied
echo exec('whoami'); Returns apache
I assumed this was a common use case for mounting with gcsfuse, but I seem to be the only one on the internet with this issue. I do not know if it is an issue with the way I mounted it or with the security settings of httpd.
I came across a similar issue.
Use the --implicit-dirs flag while mounting the Google Cloud Storage bucket with gcsfuse.
Mounting the bucket as a folder makes the OS treat it like a regular folder that may contain files and folders, but a Google Cloud Storage bucket does not have a directory structure. For example, when you create a file named hello.txt in a folder named files inside a bucket, you are not actually creating a folder and putting the file in it; the object is created in the bucket with the name files/hello.txt.
To make the OS treat the GCS bucket like a hierarchical structure, you have to pass the --implicit-dirs flag to gcsfuse.
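Applied to the mount command from the question (keeping the questioner's flag values and placeholders unchanged), the call would look something like:
gcsfuse --implicit-dirs -o allow_other -file-mode=660 -dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder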
Note:
I wouldn't recommend using gcsfuse in production systems as it is beta-quality software.

How to keep timestamp when gsutil cp

I am new to Google Cloud Storage Nearline and am testing it. I intend to use Google Cloud Storage Nearline for backups.
I wonder how to keep a file's timestamp when I do 'gsutil cp' between local storage and Nearline.
gsutil cp localfile gs://mybucket
The uploaded file's timestamp is then set to the upload time. I want to keep the original file timestamp.
Sorry, you cannot specify the creation time of an object in GCS. The creation time is always the moment that the object is created in GCS.
You can, however, set extra user metadata on objects that you upload. If you'd like, you can record the original creation time of an object there:
$> gsutil cp -h "x-goog-meta-local-creation-time:Some Creation Time" localfile gs://mybucket
When I performed the copy with the following command, the -P flag (preserve POSIX attributes) caused the timestamp (Linux "mtime") of the local files to be automatically preserved as "goog-reserved-file-mtime" in the object metadata on Google Cloud Storage:
gsutil cp -r -P $LOCAL_DIR/* gs://$TARGET_BUCKET &
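To check that the timestamp was actually stored, you can inspect the object's metadata; a sketch, with the object path being a placeholder:
gsutil stat gs://$TARGET_BUCKET/somefile
The output should list goog-reserved-file-mtime among the metadata fields, and downloading with gsutil cp -P should restore that mtime on the local copy.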