Google Cloud Storage upload files modified today - google-cloud-storage

I am trying to figure out if I can use the cp command of gsutil on the Windows platform to upload files to Google Cloud Storage. I have 6 folders on my local computer that get new PDF documents added to them daily. Each folder contains around 2,500 files. All files are currently on Google Storage in their respective folders. Right now I mainly upload all the new files using Google Cloud Storage Manager. Is there a way to create a batch file and schedule it to run automatically every night so it grabs only the files that were scanned today and uploads them to Google Storage?
I tried this format:
python c:\gsutil\gsutil cp "E:\PIECE POs\64954.pdf" "gs://dompro/piece pos"
and it uploaded the file perfectly fine.
This command
python c:\gsutil\gsutil cp "E:\PIECE POs\*.pdf" "gs://dompro/piece pos"
will upload all of the files into a bucket. But how do I only grab files that were changed or generated today? Is there a way to do it?

One solution would be to use the -n parameter on the gsutil cp command:
python c:\gsutil\gsutil cp -n "E:\PIECE POs\*" "gs://dompro/piece pos/"
That will skip any objects that already exist on the server. You may also want to look at using gsutil's -m flag and see if that speeds the process up for you:
python c:\gsutil\gsutil -m cp -n "E:\PIECE POs\*" "gs://dompro/piece pos/"

Since you have Python available to you, you could write a small Python script to find the ctime (creation time) or mtime (modification time) of each file in a directory, see if that date is today, and upload it if so. You can see an example in this question which could be adapted as follows:
import datetime
import os

local_path_to_storage_bucket = [
    ('<local-path-1>', 'gs://bucket1'),
    ('<local-path-2>', 'gs://bucket2'),
    # ... add more here as needed
]

today = datetime.date.today()

for local_path, storage_bucket in local_path_to_storage_bucket:
    for filename in os.listdir(local_path):
        full_path = os.path.join(local_path, filename)
        ctime = datetime.date.fromtimestamp(os.path.getctime(full_path))
        mtime = datetime.date.fromtimestamp(os.path.getmtime(full_path))
        if today in (ctime, mtime):
            # Using the 'subprocess' library would be better, but this is
            # simpler to illustrate the example.
            os.system('gsutil cp "%s" "%s"' % (full_path, storage_bucket))
Alternatively, consider using the Google Cloud Storage Python API directly instead of shelling out to gsutil.
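For example, a minimal sketch using the google-cloud-storage client library (the bucket, object, and file names below are placeholders taken from the question, and authentication is assumed to be already configured):
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()                    # uses your default credentials
bucket = client.bucket('dompro')             # destination bucket
blob = bucket.blob('piece pos/64954.pdf')    # destination object name
blob.upload_from_filename(r'E:\PIECE POs\64954.pdf')  # local file to upload
An upload call like this could replace the os.system line inside the date-checking loop above.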

Related

GSUTIL CP using file size

I am trying to copy files from a directory on my Google Compute Engine instance to a Google Cloud Storage bucket. I have it working; however, there are ~35k files but only ~5k have any data in them.
Is there any way to only copy files above a certain size?
I've not tried this but...
You should be able to do this using a resumable transfer and setting the threshold to 5k (the default is 8 MiB). See: https://cloud.google.com/storage/docs/gsutil/commands/cp#resumable-transfers
It may be advisable to set BOTO_CONFIG specifically for this copy, (a) to be intentional and (b) to remind yourself how it works. See: https://cloud.google.com/storage/docs/boto-gsutil
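As a rough sketch in Python of pointing gsutil at a dedicated boto config for just this copy (the config file path, local directory, and bucket name here are placeholders, and it assumes that boto file sets resumable_threshold in its [GSUtil] section):
import os
import subprocess

# Hypothetical .boto file whose [GSUtil] section sets resumable_threshold.
env = dict(os.environ, BOTO_CONFIG='/home/me/.boto-5k')

# Run the copy with that config only, leaving your default .boto untouched.
subprocess.run(['gsutil', '-m', 'cp', '-r', './files', 'gs://my-bucket/'],
               env=env, check=True)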
Resumable uploads have the added benefit, of course, of resuming if there are any failures.
I recommend trying this on a small subset first and confirming it works to your satisfaction.
While it's not possible to do this with gsutil alone, you can do it by filtering the file list and feeding the matching names to cp with the -I flag. If you're using a Linux Compute Engine instance you can do it with the du and awk commands:
du -b * | awk '{if ($1 > 1000) print $2 }' | gsutil -m cp -I gs://bucket2
The command gets the size in bytes of each file in the current directory with du -b * and copies only the files larger than 1000 bytes to bucket2; you can change that value to adjust it to your needs.

Gatling: upload simulation.log to S3 bucket from simulation

I would like to upload the simulation.log file of a scenario to an S3 bucket once my simulation is finished.
I was thinking of adding the upload in the after block of my simulation.
I didn't find any example that does that. I only found scripts external to the simulation taking care of it.
Is there any reason why I shouldn't do it?
If not, how can I get the absolute path of the simulation.log?
Unfortunately, the 'after' block executes before the report is generated.
I would advise writing a script that runs after the test.
Go to the directory with the latest report.
cd target/gatling && cd "$(ls -td -- */ | head -n 1 | cut -d'/' -f1)"
Use the AWS CLI to upload to S3:
aws s3 cp simulation.log s3://my-bucket/

Nearline - Backup Solution - Versioning

I've setup some Nearline buckets and enabled versioning and object lifecycle management. The use-case is to replace my current backup solution, Crashplan.
Using gsutil I can see the different versions of a file using a command like gsutil ls -al gs://backup/test.txt.
First, is there any way of finding files that don't have a live version (e.g. deleted) but still have a version attached?
Second, is there any easier way of managing versions? For instance if I delete a file from my PC, it will no longer have a live version in my bucket but will still have the older versions associated. Say, if I didn't know the file name would I just have to do a recursive ls on the entire bucket and sift through the output?
Would love a UI that supported versioning.
Thanks.
To check whether an object currently has no live version, use the x-goog-if-generation-match header set to 0. For example:
gsutil -h x-goog-if-generation-match:0 cp file.txt gs://bucket/file.txt
will fail (PreconditionException: 412 Precondition Failed) if the file has a live version and will succeed if it has only archived versions.
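If you want to scan the whole bucket rather than probing one object at a time, here is a rough sketch using the google-cloud-storage Python client (the bucket name is a placeholder) that lists every version and prints the names that no longer have a live version:
from google.cloud import storage

client = storage.Client()
all_names, live_names = set(), set()
for blob in client.list_blobs('backup', versions=True):
    all_names.add(blob.name)
    if blob.time_deleted is None:   # this generation is the live version
        live_names.add(blob.name)

# Objects that only have archived (deleted or overwritten) versions left
for name in sorted(all_names - live_names):
    print(name)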
In order to automatically synchronize your local folder and a folder in the bucket (or the other way around) use gsutil rsync:
gsutil rsync -r -d ./test gs://bucket/test/
Notice the trailing / in gs://bucket/test/; without it you will receive
CommandException: arg (gs://graham-dest/test) does not name a directory, bucket, or bucket subdir.
-r synchronizes all the directories in ./test recursively to gs://bucket/test/
-d deletes all files from gs://bucket/test/ that are not found in ./test
Regarding a UI, there already exists a feature request. I don't know anything about third-party applications, however.

Google Cloud Storage : What is the easiest way to update timestamp of all files under all subfolders

I have date-wise folders in the form of root-dir/yyyy/mm/dd
under which there are many files present.
I want to update the timestamp of all the files falling under a certain date range,
for example 2 weeks, i.e. 14 folders, so that these files can be picked up by my file-streaming data ingestion process.
What is the easiest way to achieve this?
Is there a way to do it in the UI console, or is it done through gsutil?
Please help.
GCS objects are immutable, so the only way to "update" the timestamp would be to copy each object on top of itself, e.g., using:
gsutil cp gs://your-bucket/object1 gs://your-bucket/object1
(and looping over all objects you want to do this to).
This is a fast (metadata-only) operation, which will create a new generation of each object, with a current timestamp.
Note that if you have versioning enabled on the bucket, doing this will create an extra version of each file you copy this way.
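As a rough sketch of that loop (using the google-cloud-storage Python client rather than gsutil; the bucket name and date prefix are placeholders), you could copy every object under a prefix on top of itself:
from google.cloud import storage

client = storage.Client()
bucket = client.bucket('your-bucket')

# Copy each object under one day's prefix onto itself, which creates a
# new generation with a current timestamp (metadata-only, no re-upload).
for blob in client.list_blobs('your-bucket', prefix='root-dir/2016/06/20/'):
    bucket.copy_blob(blob, bucket, blob.name)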
When you say "folders in the form of root-dir/yyyy/mm/dd", do you mean that you're copying those objects into your bucket with names like gs://my-bucket/root-dir/2016/12/25/christmas.jpg? If not, see Mike's answer; but if they are named with that pattern and you just want to rename them, you could use gsutil's mv command to rename every object with that prefix:
$ export BKT=my-bucket
$ gsutil ls gs://$BKT/**
gs://my-bucket/2015/12/31/newyears.jpg
gs://my-bucket/2016/01/15/file1.txt
gs://my-bucket/2016/01/15/some/file.txt
gs://my-bucket/2016/01/15/yet/another-file.txt
$ gsutil -m mv gs://$BKT/2016/01/15 gs://$BKT/2016/06/20
[...]
Operation completed over 3 objects/12.0 B.
# We can see that the prefixes changed from 2016/01/15 to 2016/06/20
$ gsutil ls gs://$BKT/**
gs://my-bucket/2015/12/31/newyears.jpg
gs://my-bucket/2016/06/20/file1.txt
gs://my-bucket/2016/06/20/some/file.txt
gs://my-bucket/2016/06/20/yet/another-file.txt

Prevent downtime using lftp mirror

I'm using lftp to deploy a website via Travis CI. There is a build process before the deployment; for that reason, a build directory is present and pushed to the root of the FTP server.
lftp $FTP_URL -e "glob -d mirror build . --reverse --delete-first --parallel=10 && exit"
It works quite well, but I dislike having downtime / temporary PHP parse errors because of missing files on my website. What is the best way to work around that issue?
My first approach was an option to set a temporary directory, but the lftp man page says there is only an option for temporary files. I still tried the option but it didn't help.
My second approach was to use "mirror build temp" to upload to a temporary folder and then replace the root with it. The problem here is that I cannot exclude the temp folder while deleting the old files and folders with something like rm -rf *.
For small changes not involving adding/removing PHP files, setting xfer:use-temp-file should be sufficient. Also don't use --delete-first, as it causes lftp to delete obsolete files before uploading.
For larger changes I'd create a separate directory for each version of the site and redirect the web server to the directory using .htaccess mod_rewrite or some other configuration file. This technique allows an atomic switch to the new version (and back if needed). Besides, you will be able to do final pre-production testing of the new version if you redirect to it conditionally based on your IP address or some other rule.
If you don't want to re-upload the whole site for each new version and the FTP server supports FXP with itself, then you can copy the old version to a new directory using mirror old_directory ftp://user@example.com/new_directory, then update the new directory using mirror -eR local_dir new_directory.
This is a zero-downtime pattern; each placeholder should be replaced:
lftp $FTP_URL -e "mirror {SOURCE} {TARGET}-new-{TIMESTAMP} --reverse --delete-first;
mv {TARGET} {TARGET}-old-{TIMESTAMP};
mv {TARGET}-new-{TIMESTAMP} {TARGET};
rm -rf {TARGET}-old-{TIMESTAMP};
exit"