I am new to Google Cloud Storage Nearline and am testing it; I intend to use it for backup.
I wonder how to keep the files' timestamps when I do 'gsutil cp' between local storage and Nearline.
gsutil cp localfile gs://mybucket
Then the uploaded file's timestamp is set to the upload time. I want to keep the original file's timestamp.
Sorry, you cannot specify the creation time of an object in GCS. The creation time is always the moment that the object is created in GCS.
You can, however, set extra user metadata on objects that you upload. If you'd like, you can record the original creation time of an object there:
$> gsutil cp -h "x-goog-meta-local-creation-time:Some Creation Time" localfile gs://mybucket
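Later, you should be able to read that metadata back with gsutil stat (a sketch, assuming the uploaded object ends up at gs://mybucket/localfile):
$> gsutil stat gs://mybucket/localfile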
When I perform the copy with the following command, the timestamp (Linux "mtime") of the local files is automatically preserved as "goog-reserved-file-mtime" in the object metadata on Google Cloud Storage.
gsutil cp -r -P $LOCAL_DIR/* gs://$TARGET_BUCKET &
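As a rough sketch of the reverse direction (assuming the same gsutil behavior applies on download), copying back with -P should restore the preserved mtime on the local files; $RESTORE_DIR is a hypothetical destination directory:
gsutil cp -r -P gs://$TARGET_BUCKET/* $RESTORE_DIR/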
I have a shell script which refreshes my emulator's data to the latest data from prod.
Part of the script removes the existing bucket path and then re-exports it, to avoid the "Path already exists" error.
I know that I can manually add versioned paths like /firestore_data/v1, but that would require me to find out what the last version is from the console and then update the shell script each time I need to refresh the emulator's data.
Ideally I would like to be able to run gsutil -m cp -r gs://my-app.appspot.com/firestore_data#latest
Is there any way to version storage buckets, or to leave tags that can be used when adding and copying down?
I am trying to overwrite existing export data in gcloud using:
gcloud firestore export gs://<PROJECT>/dir --collection-ids='tokens'
But I get this error:
(gcloud.firestore.export) INVALID_ARGUMENT: Path already exists: /fcm-test-firebase.appspot.com/dir/dir.overall_export_metadata
Is there any way to either delete the path or export with replace?
You can easily determine the list of available flags for any gcloud command.
Here are the variants of the command, and you can see that there's no overwrite option:
gcloud firestore export
gcloud alpha firestore export
gcloud beta firestore export
Because the export is to a Google Cloud Storage (GCS) bucket, you can simply delete the path before attempting the export.
BE VERY CAREFUL with this command, as it recursively deletes objects:
gsutil rm -r gs://<PROJECT>/dir
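A minimal sketch of a repeatable delete-then-export sequence, reusing the paths from the question:
gsutil rm -r gs://<PROJECT>/dir
gcloud firestore export gs://<PROJECT>/dir --collection-ids='tokens'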
If you would like Google to consider adding an overwrite feature, consider filing a feature request on its public issue tracker.
I suspect that the command doesn't exist for various reasons:
GCS storage is cheap
Having many backup copies is far better than having none
It's easy to delete copies using gsutil
I want to copy files from a directory on my local computer disk to my Google Cloud Storage bucket under the following conditions:
1) Copy all new files and folders.
2) Skip all existing files and folders irrespective of whether they have been modified or not.
I have tried to implement this using the Google ACL policy, but it doesn't seem to be working.
I am using a Google Cloud Storage admin service account to copy my files to the bucket.
As @A.Queue commented, the way to skip existing files is to use the gsutil cp command with the -n option. This option means no-clobber: files and directories already present in the Cloud Storage bucket will not be overwritten, and only new files and directories will be added to the bucket.
If you run the following command:
gsutil cp -n -r . gs://[YOUR_BUCKET]
You will copy the whole directory tree (all files and subdirectories underneath) that is not yet present in the Cloud Storage bucket, while everything that is already present will be skipped.
You can find more information about this command in the gsutil cp documentation.
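If the directory tree is large, the top-level -m flag runs the copies in parallel; a minimal sketch, assuming your files live under /path/to/local/dir:
gsutil -m cp -n -r /path/to/local/dir gs://[YOUR_BUCKET]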
I have around 200 GB of data in a Google Cloud Coldline bucket. When I try to remove it, it keeps preparing forever.
Any way to remove the bucket ?
Try the gsutil tool if you have been trying with the Console and it did not work. To do so, you can just open Google Cloud Shell (the leftmost button in the top-right corner of the Console) and type a command like:
gsutil -m rm -r gs://[BUCKET_NAME]
It may take a while, but with the -r flag you will first delete the contents of the bucket recursively and then delete the bucket itself. The -m flag performs the removes in parallel to speed up the process.
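Once the bucket has been emptied (or if it is already empty), removing just the bucket itself should also be possible with:
gsutil rb gs://[BUCKET_NAME]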
I was able to mount my Google Cloud Storage bucket using the command below:
gcsfuse -o allow_other -file-mode=660 -dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder
The group includes the user apache. Apache is able to write to the mounted drive like so:
sudo -u apache echo 'Some Test Text' > /path/to/domain/folder/hello.txt
hello.txt appears in the bucket as expected. However, when I execute the PHP script below, I get an error:
<?php file_put_contents('/path/to/domain/folder/hello.txt', 'Some Test Text');
PHP Error: failed to open stream: Permission denied
echo exec('whoami'); returns apache.
I assumed this is a common use case for mounting with gcsfuse, but I seem to be the only one on the internet with this issue. I do not know if it's an issue with the way I mounted the bucket or with httpd's service security.
I came across a similar issue.
Use the flag --implicit-dirs while mounting the Google Storage bucket using gcsfuse. See the gcsfuse documentation for details.
Mounting the bucket as a folder makes the OS treat it like a regular folder that may contain files and subfolders, but a Google Cloud Storage bucket doesn't have a directory structure. For example, when you create a file named hello.txt in a folder named files inside a Google Storage bucket, you are not actually creating a folder and putting the file in it; the object is simply created in the bucket with the name files/hello.txt. The GCS documentation on object naming covers this in more detail.
To make the OS treat the GCS bucket like a hierarchical structure, you have to pass the --implicit-dirs flag to gcsfuse.
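As a sketch, the mount command from the question would then become (same options and paths, only the new flag added):
gcsfuse --implicit-dirs -o allow_other -file-mode=660 -dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder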
Note:
I wouldn't recommend using gcsfuse in production systems, as it is beta-quality software.