I have a Scala Play 2 app and I'm using the AWS S3 API to read files from S3. I need to determine the last modified timestamp for a file; what's the best way to do that? Is it getObjectMetadata, or perhaps listObjects, or something else? If possible, I'd like to determine the timestamps for multiple files in one call. Are there other open source libraries built on top of the AWS S3 APIs?
In the AWS Java SDK, an S3 object is represented by S3ObjectSummary, which has a getLastModified method that returns the last modified timestamp.
Ideally, just list all of the files using listObjects and then call getObjectSummaries on the returned listing.
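For comparison, here is a minimal sketch of the same pattern using the Python SDK (boto3); the Scala/Java SDK call sequence is the listObjects -> getObjectSummaries -> getLastModified chain described above, and the bucket and prefix names below are placeholders.
import boto3

# Sketch only: a single list call returns the last-modified timestamp for
# every object under the prefix, so no per-object metadata calls are needed.
s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket="my-bucket", Prefix="reports/")

for obj in response.get("Contents", []):
    print(obj["Key"], obj["LastModified"])  # LastModified is a datetime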
I am trying to rename a blob (which can be quite large) after having uploaded it to a temporary location in the bucket.
Reading the documentation, it says:
Warning: This method will first duplicate the data and then delete the old blob. This means that with very large objects renaming could be a very (temporarily) costly or a very slow operation. If you need more control over the copy and deletion, instead use google.cloud.storage.blob.Blob.copy_to and google.cloud.storage.blob.Blob.delete directly.
But I can find absolutely no reference to copy_to anywhere in the SDK (or elsewhere really).
Is there any way to rename a blob from A to B without the SDK copying the file? In my case it would overwrite B, but I can remove B first if that's easier.
The reason is checksum validation: I'll upload it under A first to make sure it was uploaded successfully (and doesn't trigger DataCorruption), and only then replace B (the live object).
GCS itself does not support renaming objects. Renaming with a copy+delete is done in the client as a helper, and there is no better way to rename an object at the moment.
Since you say your goal is checksum validation, there is a better solution: upload directly to your destination and use GCS's built-in checksum verification. How you do this depends on the API:
JSON API objects.insert: set the crc32c or md5Hash field on the object resource.
XML API PUT Object: set the x-goog-hash header.
Python SDK Blob.upload_from_* methods: set the checksum="crc32c" or checksum="md5" method parameter (see the sketch below).
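For example, with the Python client something like the following should work; it is only a sketch, and the bucket, object, and file names are placeholders.
from google.cloud import storage

# Upload directly to the destination blob and let GCS verify integrity:
# with checksum="crc32c" the client sends a CRC32C alongside the upload,
# and GCS rejects the write if the data doesn't match, so a corrupted
# upload never replaces the live object.
client = storage.Client()
bucket = client.bucket("my-bucket")
blob = bucket.blob("live/object-b")
blob.upload_from_filename("local-file.bin", checksum="crc32c")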
I am working on a TensorFlow model where I want to make use of the latest uploaded object, in order to get output from that uploaded object. Is there a way to access the latest object uploaded to a Google Cloud Storage bucket using Python?
Below is what I use for grabbing the most recently updated object.
Instantiate your client
from google.cloud import storage
# first establish your client
storage_client = storage.Client()
Define bucket_name and any additional paths via prefix
# get your blobs
bucket_name = 'your-glorious-bucket-name'
prefix = 'special-directory/within/your/bucket' # optional
Iterate the blobs returned by the client
Storing these as tuple records is quick and efficient.
blobs = [(blob, blob.updated) for blob in storage_client.list_blobs(
    bucket_name,
    prefix=prefix,
)]
Sort the list on the second tuple value
# sort and grab the latest value, based on the updated key
latest = sorted(blobs, key=lambda tup: tup[1])[-1][0]
string_data = latest.download_as_string()
Metadata key docs and Google Cloud Storage Python client docs.
One-liner
# assumes storage_client as above
# latest is a string formatted response of the blob's data
latest = sorted([(blob, blob.updated) for blob in storage_client.list_blobs(bucket_name, prefix=prefix)], key=lambda tup: tup[1])[-1][0].download_as_string()
There is no direct way to get the latest uploaded object from Google Cloud Storage. However, there is a workaround using the object's metadata.
Every object that is uploaded to Google Cloud Storage has metadata attached. For more information you can visit the Cloud Storage > Object Metadata documentation. One of the metadata fields is "Last updated". This value is a timestamp of the last time the object was updated, which can happen on only three occasions:
A) The object was uploaded for the first time.
B) The object was uploaded and replaced because it already existed.
C) The object's metadata changed.
If you are not updating the metadata of the object, then you can use this workaround:
Set a variable to a very old datetime object (1900-01-01 00:00:00.000000). There is no chance that an object has this updated timestamp.
Set a variable to store the latest blob's name and initialize it to "NONE".
List all the blobs in the bucket (Google Cloud Storage documentation).
For each blob, load the updated metadata and convert it to a datetime object.
If the blob's updated timestamp is greater than the one you have stored, then update the stored timestamp and save the current blob's name.
This process continues until you have checked all the blobs, and only the latest one is left in the variables. A compact sketch of these steps follows.
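The following sketch uses the Python client (the bucket name is a placeholder):
from datetime import datetime, timezone
from google.cloud import storage

client = storage.Client()

# Sentinel "very old" timestamp; blob.updated is timezone-aware, so the
# sentinel must be too.
latest_updated = datetime(1900, 1, 1, tzinfo=timezone.utc)
latest_name = None

for blob in client.list_blobs("your-bucket-name"):
    # blob.updated is the "Last updated" timestamp from the object metadata
    if blob.updated > latest_updated:
        latest_updated = blob.updated
        latest_name = blob.name

print(latest_name, latest_updated)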
I have done a little bit of coding myself, and this is my GitHub code example that worked for me. Take the logic and modify it based on your needs. I would also suggest testing it locally before using it in your code.
BUT, in case you update the blob's metadata manually, here is another workaround:
If you update any of the blob's metadata (see the Viewing and Editing Object Metadata documentation), then the "Last updated" timestamp of that blob is also updated, so running the above method will NOT give you the last uploaded object but the last modified one, which is different. Therefore you can add a custom metadata entry to your object every time you upload it, with the upload timestamp as its value. No matter what happens to the other metadata later, this custom metadata will always keep the time the object was uploaded. Then use the same method as above, but instead of reading blob.updated, read blob.metadata and use that date with the same logic.
Additional notes:
To set custom metadata over the XML API you need the x-goog-meta- prefix, as stated in the "Editing object metadata" section of the Viewing and Editing Object Metadata documentation.
So [CUSTOM_METADATA_KEY] should be something like x-goog-meta-uploaded and [CUSTOM_METADATA_VALUE] should be [CURRENT_TIMESTAMP_DURING_UPLOAD]. A Python sketch follows.
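This sketch uses the Python client; the metadata key "uploaded" and the bucket/object/file names are just examples, and in Python you don't type the x-goog-meta- prefix yourself (the client sends the custom metadata for you).
from datetime import datetime, timezone
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("your-bucket-name")

# At upload time: record the upload timestamp as custom metadata.
blob = bucket.blob("some/object.txt")
blob.metadata = {"uploaded": datetime.now(timezone.utc).isoformat()}
blob.upload_from_filename("local-file.txt")

# Later: pick the blob with the greatest custom "uploaded" value instead of
# blob.updated. ISO 8601 UTC strings sort chronologically, so string
# comparison is enough here.
blobs = client.list_blobs("your-bucket-name")
latest = max(
    (b for b in blobs if b.metadata and "uploaded" in b.metadata),
    key=lambda b: b.metadata["uploaded"],
)
print(latest.name, latest.metadata["uploaded"])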
I need to create a transformation that reads a JSON file from a directory on the system and renames the JSON fields (keys) based on the metadata inputs. Finally, it should write the modified JSON into a '.js' file using the JSON Output step. This conversion must be done using the ETL Metadata Injection step.
Since I am new to the Pentaho Data Integration tool, can anyone help me with sample '.ktr' files for the above scenario?
Thanks in advance.
The same use case is covered in the official Pentaho documentation here, except it does it with Excel files rather than JSON objects.
Now, the Metadata Injection step requires developing rather sophisticated machinery, whereas JSON is fairly simple to build with a bit of JavaScript. So, where do you get the "dictionary" (source field name -> target field name) from?
I'm currently writing code to use Amazon's S3 REST API and I notice different behavior where the only difference seems to be the Amazon endpoint URI that I use, e.g., https://s3.amazonaws.com vs. https://s3-us-west-2.amazonaws.com.
Examples of different behavior for the GET Bucket (List Objects) call:
Using one endpoint, it includes the "folder" in the results, e.g.:
/path/subfolder/
/path/subfolder/file1.txt
/path/subfolder/file2.txt
and, using the other endpoint, it does not include the "folder" in the results:
/path/subfolder/file1.txt
/path/subfolder/file2.txt
Using one endpoint, it represents "folders" using a trailing / as shown above and, using the other endpoint, it uses a trailing _$folder$:
/path/subfolder_$folder$
/path/subfolder/file1.txt
/path/subfolder/file2.txt
Why the differences? How can I make it return results in a consistent manner regardless of endpoint?
Note that I get these same odd results even if I use Amazon's own command-line AWS S3 client, so it's not my code.
And the contents of the buckets should be irrelevant anyway.
Your assertion notwithstanding, your issue is exactly about the content of the buckets, and not something S3 is doing -- the S3 API has no concept of folders. None. The S3 console can display folders, but this is for convenience -- the folders are not really there -- or if there are folder-like entities, they're irrelevant and not needed.
In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.
http://docs.aws.amazon.com/AmazonS3/latest/UG/FolderOperations.html
So why are you seeing this?
Either you've been using EMR/Hadoop, or some other code written by someone who took a bad example and ran with it... or code that has been doing things differently than it should for quite some time.
Amazon EMR is a web service that uses a managed Hadoop framework to process, distribute, and interact with data in AWS data stores, including Amazon S3. Because S3 uses a key-value pair storage system, the Hadoop file system implements directory support in S3 by creating empty files with the <directoryname>_$folder$ suffix.
https://aws.amazon.com/premiumsupport/knowledge-center/emr-s3-empty-files/
This may have been something the S3 console did many years ago, and apparently (since you don't report seeing them in the console) it still supports displaying such objects as folders in the console... but the S3 console no longer creates them this way, if it ever did.
I've mirrored the bucket "folder" layout exactly
If you create a folder in the console, an empty object with the key "foldername/" is created. This in turn is used to display a folder that you can navigate into, and upload objects with keys beginning with that folder name as a prefix.
The Amazon S3 console treats all objects that have a forward slash "/" character as the last (trailing) character in the key name as a folder
http://docs.aws.amazon.com/AmazonS3/latest/UG/FolderOperations.html
If you just create objects using the API, then "my/object.txt" appears in the console as "object.txt" inside folder "my" even though there is no "my/" object created... so if the objects are created with the API, you'd see neither style of "folder" in the object listing.
That is probably a bug in the API endpoint which includes the "folder" - S3 internally doesn't actually have a folder structure, but instead is just a set of keys associated with files, where keys (for convenience) can contain slash-separated paths which then show up as "folders" in the web interface. There is the option in the API to specify a prefix, which I believe can be any part of the key up to and including part of the filename.
EMR's S3 client is not the Apache one, so I can't speak accurately about it.
In ASF Hadoop releases (and HDP, CDH):
The older s3n:// client uses _$folder$ as its folder marker.
The newer s3a:// client uses / as its folder marker, but will handle _$folder$ if it's there. At least it used to; I can't see where in the code it does that now.
The S3A clients strip out all folder markers when you list things; S3A uses them to simulate empty dirs and deletes all parent markers when you create child file/dir entries.
Whatever you have that processes the GET results should just ignore entries ending with "/" or _$folder$.
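For example, a filter along these lines (a sketch with boto3; the bucket and prefix are placeholders) hides both marker styles so the listing looks the same from any endpoint:
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

keys = []
for page in paginator.paginate(Bucket="my-bucket", Prefix="path/subfolder/"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # zero-byte console markers end in "/", Hadoop-style markers in "_$folder$"
        if key.endswith("/") or key.endswith("_$folder$"):
            continue
        keys.append(key)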
As to why they are different: local EMRFS is a different codepath, using DynamoDB to implement consistency. At a guess, it doesn't need to mock empty dirs, as the DynamoDB tables will host all directory entries.
I want to use Tarantool database for logging user activity.
Are there any out of the box solutions to create web dashboard with nice charts based on the collected data?
A long time ago, using a very old version of Tarantool, I created a draft of tarbon, a time-series database with an interface identical to carbon-cache.
Since that time the protocol has changed, but the general idea is still the same: use spaces to store the data, a compact data organization and the right indexes to access spaces as time-series rows, and Lua for preparing the resulting JSON.
That solution had excellent performance (on both reads and writes), but that old version lacked disk storage, and without disk I was very limited in metrics capacity.
Tarantool has an embedded Lua language, so you could generate JSON from your data and use any charting library. For example, D3.js has a method to load JSON directly from a URL.
d3.json(url[, callback])
Creates a request for the JSON file at the specified url with the mime type "application/json". If a callback is specified, the request is immediately issued with the GET method, and the callback will be invoked asynchronously when the file is loaded or the request fails; the callback is invoked with two arguments: the error, if any, and the parsed JSON. The parsed JSON is undefined if an error occurs. If no callback is specified, the returned request can be issued using xhr.get or similar, and handled using xhr.on.
You could also look at c3.js, a simple facade for D3.