REST file download that takes 5 minutes to complete

One of the API calls from my Angular client to my Web API 2 server dynamically generates an XLSX file via a ton of SQL queries and processing. That can take up to five minutes to generate all the data and return it as a file download to the client. Obviously that's bad, because Chrome times out and shows an error even though the server is still working.
It feels like this is where I'd use status code 202 to tell the client the request was received, but I'm not sure how to actually send the file back to the client after that.
The only thing I can think of is that the server spawns a background task that writes the file to a specific temp location once it has been created, and then another API call downloads that file if it exists and deletes it from the temp location.
Is that what I should do, with the client polling periodically for that file? Pre-generating the file isn't an option, as it has to contain real-time data (as of the point of request, of course).

It feels like this is where I'd use status code 202 to tell the client the request was received, but I'm not sure how to actually send the file back to the client after that.
Usually an HTTP 202 comes with a Location header and an indicator of where and when the resource will be available.
Another possibility is to add a link to a status monitor, as described here.
To achieve this, you could generate an id for that process and use it in the Location header URL to point to the result.
The client is then able to fetch that resource once it should be ready. This means you would need some short-term persistence.
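The flow above can be sketched in plain Java (the question concerns Web API 2, but the pattern is framework-agnostic). The class and method names here are assumptions for illustration, not part of any framework: `start()` backs the initial request that returns 202 with a Location header, and `poll(id)` backs the GET against that location, returning nothing until the background generation is done.

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory job store sketching the 202/Location flow:
// POST /exports      -> start(): returns a job id for the Location header
// GET  /exports/{id} -> poll(id): empty until the file is ready
class ExportJobStore {
    private final Map<String, CompletableFuture<byte[]>> jobs = new ConcurrentHashMap<>();

    // Called by the POST handler: kick off generation in the background and
    // return an id; the handler responds 202 with Location: /exports/{id}.
    public String start() {
        String id = UUID.randomUUID().toString();
        jobs.put(id, CompletableFuture.supplyAsync(this::generateXlsx));
        return id;
    }

    // Called by the GET handler: the bytes if done, empty (respond 202 again,
    // telling the client to retry) otherwise.
    public Optional<byte[]> poll(String id) {
        CompletableFuture<byte[]> job = jobs.get(id);
        if (job == null || !job.isDone()) {
            return Optional.empty();
        }
        jobs.remove(id); // short-term persistence: one successful download, then gone
        return Optional.of(job.join());
    }

    private byte[] generateXlsx() {
        // stand-in for the slow SQL queries and spreadsheet generation
        return new byte[] {0x50, 0x4B}; // "PK", the ZIP/XLSX magic bytes
    }
}
```

In a real deployment the in-memory map would need to survive restarts (or at least be cleaned up on a timer), which is the short-term persistence mentioned above.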

Related

Partial processing of a multipart/form-data request

Is it possible to partially process a multipart/form-data request? I am developing a REST API in which one of the resources is used to upload a large file. The application must decide whether to process the request based on the name of the file being uploaded, possibly sending back an alternate response if the file name fails validation.
If the application receives the large file and then performs the validation that triggers that alternate response, the time and resources used for the upload are both wasted. I would prefer to preempt the upload of the actual file if the filename validation fails.
How can I implement this? I have considered first sending a request using the HEAD method and supplying the filename, making the subsequent upload contingent on the response to that HEAD call. I would like to know if there are better alternatives.
Note: I am using Spring Boot to develop the RESTful application, although I imagine that will not significantly impact the answer I am seeking.
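The HEAD-then-upload idea comes down to a filename check that runs before any body is accepted. A minimal sketch of that check, with a wholly assumed validation policy (non-empty, no path separators, whitelisted extension — none of this is stated in the question):

```java
// Hypothetical preflight for the HEAD-then-upload approach: the client first
// sends only the file name; only on a 200 does it start the real upload.
// The policy below is an illustrative assumption, not from the question.
class FilenamePreflight {
    public static boolean isAcceptable(String filename) {
        if (filename == null || filename.isEmpty()) return false;
        // reject anything that looks like a path traversal attempt
        if (filename.contains("/") || filename.contains("\\")) return false;
        String lower = filename.toLowerCase();
        return lower.endsWith(".csv") || lower.endsWith(".xlsx");
    }
}
```

In Spring Boot this method would back both the HEAD endpoint and a re-check at the start of the real upload, since a client is free to skip the preflight.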

CQRS and REST best practice

Does anyone have a best-practice pattern for CQRS with PUT/POST, specifically when the client does a GET for the updated resource after it has sent a command/event? Would you allow/require the client to keep a local copy of the updated resource and send a last-updated timestamp with the GET request? Or ensure that the GET includes the unprocessed commands? Of course, if the same resource is retrieved by another client, it may or may not get the updated version.
What's worked best for you?
Would you contend with the added complexity of the GET also checking the command queue?
Does anyone have a best-practice pattern for CQRS with PUT/POST, specifically when the client does a GET for the updated resource after it has sent a command/event?
How would you do it on a web site?
Normally, you would do a GET to load the resource, and that would give you version 0, possibly with some validators in the meta data to let you know what version of the representation you received. If you tried to GET the resource again, the generic components could see from the headers that your copy was up to date, and would send you back a message to that effect (304 Not Modified).
When you POST to that resource, a successful response lets all of the intermediate components know that the previously cached copy of the resource has been invalidated, so the next GET request will retrieve a fresh copy, with all of the modifications.
This all works great, right up to the point where, in a CQRS setting, the read requests follow a different path than the write requests. The read side will update itself eventually, so the trick is how to avoid returning a stale representation to the client that knows it should have changed.
The analogy you are looking for is 202 Accepted; we want the write side to let the client know that the operation succeeded, and that there is a resource that can be used to get the change.
Which is to say, the write side returns a response indicating that the command was successful, and provides a link that includes data that the read model can use to determine if its copy is up to date.
The client's job is to follow the links, just like everywhere else in REST.
The link provided will of course be some safe operation, pointing to the read model. The read model compares the information in the link to the meta data of the currently available representation; if the read model copy is up to date, it returns that, otherwise it returns a message telling the client to retry (presumably after some interval).
In short, we use polling on the read model, waiting for it to catch up.
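The read-model comparison described above can be sketched as follows; the class and version scheme (a monotonically increasing number carried in the link the write side returns) are assumptions for illustration:

```java
import java.util.Optional;

// Sketch of the read-model check: the write side's link carries the version
// the command produced; the read model compares it to what it currently has.
class ReadModel {
    private long currentVersion;
    private String representation;

    public ReadModel(long version, String representation) {
        this.currentVersion = version;
        this.representation = representation;
    }

    // Backs e.g. GET /resource?minVersion=N: either the representation
    // (200 OK) or empty, meaning "stale, retry later" (e.g. 202 + Retry-After).
    public Optional<String> get(long minVersionFromLink) {
        if (currentVersion >= minVersionFromLink) {
            return Optional.of(representation);
        }
        return Optional.empty();
    }

    // Called when the projection catches up with the event stream.
    public void apply(long version, String newRepresentation) {
        this.currentVersion = version;
        this.representation = newRepresentation;
    }
}
```

The client polls the safe link until the first branch is taken, which is exactly the "wait for the read model to catch up" loop described above.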

Mirth Connect: how to create a channel that makes an HTTP request once a day

The requirement is that we need to download some certificates every day. For this we have a RESTful endpoint in our application; when a request is sent to that endpoint manually, the certificates are downloaded to our application folder.
Now I am looking to automate this by creating a channel in Mirth that makes an HTTP request to the RESTful endpoint every day.
In the Mirth channel, the destination is set to HTTP Sender and the other configuration is done, but I am not sure what configuration is needed for the Source.
Could anyone please suggest what the source should be, given this requirement?
Thanks in advance.
That's easy to do. Just use a JavaScript Reader as the source and return a dummy message. Literally just something like this would work:
return 'dummy';
The scheduling options available allow you to poll on a certain time interval, poll once a day at a specific time, or even specify a cron expression. Advanced options are also available that allow you to choose which days of the week/month to poll on.
Once you've made your request with the HTTP Sender, I imagine you're going to want to do something with the response. You can use the response from that destination in a subsequent destination. For example, you could use a Database Writer to grab values coming from the HTTP response and insert into a table. Or, you could use a Channel Writer to forward the response on to a completely different channel.
What operation do you want to perform in your source? If your main operation happens in the destination and you just need a dummy source, use a Channel Reader. If you elaborate on your query, I can answer more precisely.

RPC not working in addCloseHandler

I'm facing a weird problem in GWT. I generate an Excel file on the server side for users to download, but after the download the file should be deleted.
I have logic to delete it on the server side on two occasions: when the user logs out, and when the browser is closed.
When the user logs out, it works perfectly, as there is enough time to make a call to the server; in the case of addCloseHandler, the connection is lost and the file remains as it is,
i.e. the method on the server side does not get executed.
I tried to find another way to call the method directly by importing the package and inheriting it in gwt.xml, but an error was thrown at compile time, and rightly so, since server-side code can't be inherited there.
Please get me out of this.
Thanks in advance.
But after the download the file should get deleted. I have put logic
to delete it on the server side on 2 occasions.
This doesn't have anything to do with the client. I don't know exactly how your program works, but generally it should go like this:
The client makes a request.
The servlet generates the bytes (is there really a need to store the bytes in a file?).
The servlet streams the bytes to the client.
And that's it.
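A minimal sketch of that idea, with assumed names: generate the bytes straight onto the output stream, so there is never a temp file to clean up. In a servlet you would pass `response.getOutputStream()` here; a plain `OutputStream` keeps the sketch self-contained.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Hypothetical download handler body: write the generated report directly to
// the response stream instead of a file on disk.
class ExcelDownload {
    public static void writeReport(OutputStream out) {
        try {
            // stand-in for the real Excel generation
            // (e.g. an Apache POI workbook writing itself to `out`)
            out.write("col1,col2\n1,2\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

With this shape there is nothing to delete on logout or browser close, which sidesteps the unreliable close-handler RPC entirely.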

How to Update a resource with a large attachment with PUT request in JAX-RS?

I have a large file (a log file) that I want to upload to the server using a PUT request. The reason I chose PUT is simply that I can use it both to create a new resource and to update an existing one.
My problem is how to handle the situation when a server or network disruption happens during the PUT request.
That is to say, I have a huge file and the network fails during the transfer. When the network resumes, I don't want to restart the entire upload. How would I handle this?
I am using JAX-RS API with RESTeasy implementation.
Some people use the Content-Range header to achieve this, but many (like Mark Nottingham) state that it is not legal for requests. Please read the comments on this answer.
Besides, there is no support in JAX-RS for this scenario.
If you really have a recurring problem with broken PUT requests, I would simply let the client slice the file:
PUT /logs/{id}/1
PUT /logs/{id}/2
PUT /logs/{id}/3
GET /logs/{id} would then return the aggregation of all successfully submitted slices.
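Behind those endpoints, the server-side state is just an ordered map of slice number to slice body; the class and method names below are assumptions. The key property is that each slice PUT is idempotent, so a client can re-send slice 2 after a network failure without corrupting anything:

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Sketch of the store behind PUT /logs/{id}/{n} and GET /logs/{id}:
// slices are kept sorted by index, and re-PUTting a slice simply overwrites it.
class SlicedUpload {
    private final ConcurrentSkipListMap<Integer, String> slices = new ConcurrentSkipListMap<>();

    // PUT /logs/{id}/{n} — idempotent, safe to retry after a failure
    public void putSlice(int n, String body) {
        slices.put(n, body);
    }

    // GET /logs/{id} — concatenate the slices in index order
    public String aggregate() {
        StringBuilder sb = new StringBuilder();
        slices.values().forEach(sb::append);
        return sb.toString();
    }
}
```

In a JAX-RS resource each method would map to one of the endpoints above; a production version would also track which slice indices are still missing so the client knows what to resend.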