How to preserve the timestamp of an io.Reader when copying a file via a REST service in Go?

I am writing some microservices in Go which handle different files.
I want to transfer files from one service, the client, to another, the server, via a PUT request. The service works, but there is one small point that is not elegant: the files I transfer get a new modification date when I write them to the file system of the server.
At the moment I handle the http.Request on the server like this:
1. ensure that there is a file on the server
2. copy the body of the request into that file with io.Copy(myfile, r.Body)
When I do that, the file gets a modification date of now(). To solve this I could transfer a timestamp of the original file and set it via os.Chtimes(). But request.Body implements the io.ReadCloser interface, so I think there must be a more elegant way to write the file onto the server. Is there a function that takes an io.Reader and preserves the timestamp of the file?
If not, is there a solution for REST services for this problem?
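There is no such function in the standard library: an io.Reader only delivers bytes, so any file metadata has to travel out of band. A common REST pattern is to send the modification time in a request header and apply it with os.Chtimes() after the copy. Below is a minimal sketch of a server-side handler under that assumption; the X-Last-Modified header name, the /data target directory and the RFC 3339 time format are illustrative choices, not anything from the question.

// Sketch of a PUT handler that restores the original modification time.
// The "X-Last-Modified" header name, the /data target directory and the
// RFC 3339 time format are assumptions for illustration only.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"
	"time"
)

func putFile(w http.ResponseWriter, r *http.Request) {
	dst := filepath.Join("/data", filepath.Base(r.URL.Path))

	f, err := os.Create(dst)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer f.Close()

	// Same io.Copy as in the question; the body still only carries bytes.
	if _, err := io.Copy(f, r.Body); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Restore the original timestamp if the client sent one.
	if h := r.Header.Get("X-Last-Modified"); h != "" {
		if mtime, err := time.Parse(time.RFC3339, h); err == nil {
			_ = os.Chtimes(dst, mtime, mtime)
		}
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/files/", putFile)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

On the client side the matching step would be to read the time with os.Stat(path) and its ModTime(), format it with time.RFC3339, and set it as the X-Last-Modified header on the PUT request before streaming the file.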

Related

How long does a wormhole file transfer persist?

I am trying to use magic-wormhole to receive a file.
My partner and I are in different time zones, however.
If my partner types wormhole send filename, for how long will this file persist (i.e. how much later can I type wormhole receive keyword and still get the file)?
From the "Timing" section in the docs:
The program does not have any built-in timeouts, however it is expected that both clients will be run within an hour or so of each other ... Both clients must be left running until the transfer has finished.
So... maybe? Consider using some cloud storage instead, depending on the file. You could also encrypt it before uploading it to cloud storage if the contents of the file are private.

ADF decode and uncompress data on the fly

I have a pipeline in ADF v2 that calls a SOAP endpoint which returns a base64-encoded string, which is actually a zip file containing 2 files. I am only interested in file[1] (the 2nd one). I want to take this file and write it to a storage account.
What is the best way to do this in ADF without resorting to external things like a Functions call, etc.?
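For reference, the transformation the question describes (base64-decode the SOAP payload, open the resulting zip in memory, take the second entry) is sketched below in Go; this is only an illustration of the data handling, not an ADF-native solution, and the SOAP_PAYLOAD_B64 variable and function name are hypothetical.

// Illustrative sketch only: decode a base64 string into an in-memory zip
// archive and return the contents of its second entry (file[1]).
package main

import (
	"archive/zip"
	"bytes"
	"encoding/base64"
	"fmt"
	"io"
	"log"
	"os"
)

func secondFileFromBase64Zip(encoded string) ([]byte, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return nil, err
	}
	zr, err := zip.NewReader(bytes.NewReader(raw), int64(len(raw)))
	if err != nil {
		return nil, err
	}
	if len(zr.File) < 2 {
		return nil, fmt.Errorf("expected at least 2 files in the archive, got %d", len(zr.File))
	}
	rc, err := zr.File[1].Open() // file[1] is the 2nd entry, as in the question
	if err != nil {
		return nil, err
	}
	defer rc.Close()
	return io.ReadAll(rc)
}

func main() {
	// SOAP_PAYLOAD_B64 is a hypothetical stand-in for the SOAP response body.
	data, err := secondFileFromBase64Zip(os.Getenv("SOAP_PAYLOAD_B64"))
	if err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(data) // in practice this would be written to the storage account
}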

File Endpoint for Citrus Framework

I'm currently looking at using Citrus for our integration testing; however, our integration software uses, among other things, file messages: files are written to an inbound folder, picked up and processed, which results in a new file message being written to an outbound folder or data being written to SQL.
I was wondering if Citrus can write a file with a certain payload to an inbound folder and then monitor for a file to appear in a certain outbound folder and/or in a SQL table.
Example Test Case:
file()
    .folder(todoInboundFolder)
    .write()
    .payload(new ClassPathResource("templates/todo.xml"));
file()
    .folder(todoOutboundFolder)
    .read()
    .validate("/t:todo/t:correlationId", "${todocorrelationId}")
    .validate("/t:todo/t:title", "${todoName}");
query(todoDataSource)
    .statement("select count(*) as cnt from todo_entries where correlationid = '${todocorrelationId}'")
    .validate("cnt", "1");
Additionally, is there a way to specify the timeout to wait for the file/SQL entries to appear?
There is no direct implementation of a file endpoint in Citrus yet. There was a feature request, but it was closed due to inactivity: https://github.com/citrusframework/citrus/issues/151
You can solve this problem, though, by using a simple Apache Camel route to do the file transfer. Citrus is able to call the Camel route and use its outcome very easily. Read more about this here: https://citrusframework.org/citrus/reference/2.8.0/html/index.html#apache-camel
This is the workaround that can help right now. Other than that, you can reopen or contribute to the issue.

Mule: after delivering a message, save the current timestamp for later use. What's the correct idiom?

I'm connecting to a third-party web service to retrieve rows from the underlying database. I can optionally pass a parameter like this:
http://server.com/resource?createdAfter=[yyyy-MM-dd hh:ss]
to get only the rows created after a given date.
This means I have to store the current timestamp (using #[function:datestamp:...], no problem) in one message scope and then retrieve it in another.
It also implies the timestamp should be preserved in case of an outage.
Obviously, I could use a subflow containing a file endpoint, saving in a designated file on a path. But, intuitively, based on my (very!) limited experience, it feels hackish.
What's the correct idiom to solve this?
Thanks!
The Object Store Module is designed just for that: to allow you to save bits of information from your flows.
See:
http://mulesoft.github.io/mule-module-objectstore/mule/objectstore-config.html
https://github.com/mulesoft/mule-module-objectstore/

How do I set up a mock queue using mockrunner to test an xml filter?

I'm using the mockrunner package from http://mockrunner.sourceforge.net/ to set up a mock queue for JUnit testing an XML filter which operates like this:
1. sets recognized properties for an FTP server to put and get XML input, and for a JMS queue server that keeps track of jobs. A remote server waits and actually parses the XML once a queue message is received.
2. creates a remote directory using FTP and starts a queue connection, using MQConnectionFactory, to the given address of the queue server.
3. once the new queue entry is made in 2), the filter waits for a new queue message to appear, signifying the job has been completed by the remote server. The filter then grabs the modified XML file from the FTP server and passes it along to the next filter.
The JUnit test I am working on simply needs to emulate this environment: start a local FTP server and a mock queue server for the filter to connect to, wait for the filter to connect to the queue and put the new XML input file in a local directory via the local FTP server, wait for the queue message, then modify the XML input slightly, put the modified XML in a new directory, and post another message to the queue signifying that the job has completed.
All of the tutorials I have found on the net use EJB and JNDI to look up the queue server once it has been created. If possible, I'd like to sidestep that route by just creating a mock queue on my local machine and connecting to it in the simplest manner possible, without EJB and JNDI.
Thanks in advance!
I'm using MockEJB, and there are some examples, among them one for using mock queues, so take a look at the info and the example.
Hopefully it helps.
I'd recommend having a look at using Apache Camel to create your test case. Then it's really easy to switch your test case between any of the available components, and most importantly Camel comes with some really handy Mock Endpoints, which make it super easy to test complex routing logic, particularly with asynchronous operations.
If you also use Spring, then maybe start by trying out these Spring unit tests with mock endpoints in Camel, which let you inject the mock endpoints to perform assertions on, together with the ProducerTemplate object, making it really easy to fire messages for your test case (e.g. see the last example on that page).
Start off using simple endpoints like the SEDA endpoint; then, when you've got your head around the core Spring/mock framework, try using the JMS or FTP endpoints, etc.