How long does a wormhole file transfer persist?

I am trying to use magic-wormhole to receive a file.
My partner and I are in different time zones, however.
If my partner types wormhole send filename, for how long will this file persist (i.e. how much later can I type wormhole receive keyword and still get the file)?

From the "Timing" section in the docs:
The program does not have any built-in timeouts, however it is expected that both clients will be run within an hour or so of each other ... Both clients must be left running until the transfer has finished.
So... maybe? Consider using some cloud storage instead, depending on the file. You could also encrypt it before uploading it to cloud storage if the contents of the file are private.
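If you do fall back to cloud storage and the file is private, a minimal sketch of encrypting it on your side first, assuming the third-party cryptography package and a hypothetical file name (the key has to reach your partner out of band, much like the wormhole code would):
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # send this key to your partner out of band
with open("report.pdf", "rb") as f:      # hypothetical file name; reads the whole file into memory
    token = Fernet(key).encrypt(f.read())
with open("report.pdf.enc", "wb") as f:
    f.write(token)
# upload report.pdf.enc to the cloud storage of your choice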

Related

Google Cloud Storage Python API: blob rename, where is copy_to

I am trying to rename a blob (which can be quite large) after having uploaded it to a temporary location in the bucket.
The documentation says:
Warning: This method will first duplicate the data and then delete the old blob. This means that with very large objects renaming could be a very (temporarily) costly or a very slow operation. If you need more control over the copy and deletion, instead use google.cloud.storage.blob.Blob.copy_to and google.cloud.storage.blob.Blob.delete directly.
But I can find absolutely no reference to copy_to anywhere in the SDK (or elsewhere really).
Is there any way to rename a blob from A to B without the SDK copying the file? In my case it would overwrite B, but I can remove B first if that's easier.
The reason is checksum validation: I'll upload it under A first to make sure it's uploaded successfully (and doesn't trigger DataCorruption) and only then replace B (the live object).
GCS itself does not support renaming objects. Renaming with a copy+delete is done in the client as a helper, and there is no better way to rename an object at the moment.
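If you do want control over the two steps, the copy+delete can be done explicitly with methods the Python client does expose, Bucket.copy_blob() and Blob.delete(). A minimal sketch, assuming the google-cloud-storage package and hypothetical bucket and object names:
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")      # hypothetical bucket name
src = bucket.blob("A")                   # the temporary upload

# Copy A to B, then delete A -- essentially what the rename helper does.
bucket.copy_blob(src, bucket, new_name="B")
src.delete()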
That said, since your goal is checksum validation, there is a better solution: upload directly to your destination and use GCS's built-in checksum verification. How you do this depends on the API:
JSON objects.insert: set the crc32c or md5Hash property on the object resource.
XML PUT object: Set x-goog-hash header.
Python SDK Blob.upload_from_* methods: Set checksum="crc32c" or checksum="md5" method parameter.
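A minimal sketch of the Python SDK variant, assuming the google-cloud-storage package and hypothetical names; if the bytes GCS receives do not match the supplied CRC32C, the upload fails instead of committing a corrupt object:
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")      # hypothetical bucket name
blob = bucket.blob("B")                  # upload straight to the live object

# The client computes and sends a CRC32C checksum with the upload,
# so GCS can verify the data before the object is committed.
blob.upload_from_filename("local-file", checksum="crc32c")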

Better way to "mutex" than with a .lock file over the network?

I have a small setup consisting of n clients (CL0, CL1, ... CLn) that access a Windows share on the server.
On that server a JSON file holds important data that needs to be readable and writable by all players in the game. It holds key-value pairs that are constantly read and changed:
{
"CurrentActive": "CL1",
"DataToProcess": false,
"NeedsReboot": false,
"Timestamp": "2020-05-25 16:10"
}
I already got the following done with PowerShell:
If a client writes the file, a lock file is generated that holds the hostname and the timestamp of the access. After the access, the lock file is removed. Each write "job" first checks whether there is a lock file and whether its timestamp is still valid, and only writes to the file once the lock is removed.
#Some pseudo code:
if (!(Test-Path $lockfile)) {
    # take the lock: write my hostname and a timestamp
    "$env:COMPUTERNAME $(Get-Date -Format s)" | Set-Content $lockfile
    # read, modify and write the JSON while holding the lock
    $data = Get-Content $json -Raw | ConvertFrom-Json
    # ...change values in $data...
    $data | ConvertTo-Json | Set-Content $json
    Remove-Item $lockfile
} else {
    # wait for some time and try again
    # check if the lock is from my own hostname
    # check if the timestamp in the lock is still valid
}
This works OK, but it is very complex to build, since I had to implement the lock mechanism myself and also a way to force-remove the lock when a client is not able to remove the file for various reasons, and so on (and I am sure I also included some errors...). Plus, in some cases reading the file returns an empty one. I assume this happens in the window while another client is writing the file, after it has been truncated but before the new content is written.
I also looked at other options such as a mutex, and that works like a charm on a single client with multiple threads, but since it relies on SafeHandles scoped to a single system, it does not work with multiple clients over the network. Is it possible to get a mutex working over the network?
I also stumbled upon AlphaFS, which would allow me to do transactional processing on the file system, but that doesn't fix the root cause of multiple clients accessing one file at the same time.
Is there a better way to store the data? I was thinking about the Windows Registry, but I could not find anything about using a mutex with it.
Any thoughts highly appreciated!

How to preserve the timestamp of an io.Reader when copying a file via a REST service in Go?

I am writing some microservices in Go which handle different files.
I want to transfer files from one service, the client, to another, the server, via a PUT request. The service works, but there is one small point that is not elegant: the files I transfer get a new modification date when I write them to the file system of the server.
At the moment I handle the http.Request at the server like this:
ensure that there is a file at the server
copy the body from the request into that file: io.Copy(myfile, r.Body)
When I do that, the file gets the current time as its modification date. To solve this I could transfer the timestamp of the original file and set it via os.Chtimes(). But request.Body implements the io.ReadCloser interface, so I think there must be a more elegant way to write the file onto the server. Is there a function that takes an io.Reader and preserves the timestamp of the file?
If not, is there a solution for REST services for this problem?

Tarantool shiny dashboard

I want to use Tarantool database for logging user activity.
Are there any out-of-the-box solutions for creating a web dashboard with nice charts based on the collected data?
A long time ago, using a very old version of Tarantool, I created a draft of tarbon, a time-series database with an interface identical to carbon-cache.
Since that time the protocol has changed, but the general idea is still the same: use spaces to store the data, a compact data organization and the right indexes to access the spaces as time-series rows, and Lua to prepare the resulting JSON.
That solution performed very well (both on reads and on writes), but that old version lacked disk storage, and without disk I was very limited in metrics capacity.
Tarantool has an embedded Lua language, so you could generate JSON from your data and use any charting library. For example, D3.js has a method to load JSON directly from a URL.
d3.json(url[, callback])
Creates a request for the JSON file at the specified url with the mime type "application/json". If a callback is specified, the request is immediately issued with the GET method, and the callback will be invoked asynchronously when the file is loaded or the request fails; the callback is invoked with two arguments: the error, if any, and the parsed JSON. The parsed JSON is undefined if an error occurs. If no callback is specified, the returned request can be issued using xhr.get or similar, and handled using xhr.on.
You could also look at c3.js, a simple facade for D3.

Mule: after delivering a message, save the current timestamp for later use. What's the correct idiom?

I'm connecting to a third-party web service to retrieve rows from the underlying database. I can optionally pass a parameter like this:
http://server.com/resource?createdAfter=[yyyy-MM-dd hh:ss]
to get only the rows created after a given date.
This means I have to store the current timestamp (using #[function:datestamp:...], no problem) in one message scope and then retrieve it in another.
It also implies the timestamp should be preserved in case of an outage.
Obviously, I could use a subflow containing a file endpoint, saving to a designated file on some path. But, intuitively, based on my (very!) limited experience, it feels hackish.
What's the correct idiom to solve this?
Thanks!
The Object Store Module is designed just for that: to allow you to save bits of information from your flows.
See:
http://mulesoft.github.io/mule-module-objectstore/mule/objectstore-config.html
https://github.com/mulesoft/mule-module-objectstore/