HTTP REST consumer timeout

I implemented an HTTPS/REST provider in Node.js using Express. The function calls a web service, transforms/enriches the data, and returns the transformed data as CSV in the response. Execution time of one GET request is between 4 minutes 30 seconds and 5 minutes. I want to test the implementation by calling the URL.
Problem:
Execution in Google Chrome fails since it runs too long. There is no option to increase the timeout value.
Execution in Mozilla Firefox: I changed network.http.response.timeout. Now the request is executed over and over again; it looks like the response is ignored completely.
Execution in Postman: I changed Settings -> General -> XHR timeout in ms (...). Nevertheless, execution stops every time after the same number of seconds with the message: "Could not get any response".
My question: which tool(s) can I use to reliably test long-running HTTP REST requests?

curl has a --max-time setting (in seconds) which should do what you want:
curl -m 330 http://your.url
But it might be worth creating a background job and polling for completion of the background job instead. HTTP isn't well suited to long-running tasks.

I suggest using Socket.IO to deliver the response asynchronously with pub/sub once the CSV file is ready. The client sends the request with a timeout of, say, 6 minutes; the server immediately returns an ack to confirm that file processing has started, and when the file is ready, it pushes the file to the client over Socket.IO. Socket.IO can be integrated with Express.
http://socket.io/

Do you have control over the server? If so, you should alter how it operates. Instead of the initial request expecting a response containing the answer, your API should emit a token (a URI) from where the status of the operation can be obtained. The status will either be "in progress" or "completed; here's your answer: ..."
You make the problem (the long-running operation) into its own first-class entity on your server.

Related

vert.x ResponseTimeHandler is not giving correct processing time of the request

We have a Vert.x server running, and we fetch req.response().headers().get("x-response-time");
after request.response().end();
We use the result as the response time taken for our APIs behind Vert.x.
For a few of the API requests I have seen an x-response-time of around 25000 ms, which is very high.
So I added one more metric to calculate the response time on my own: when the request enters the verticle is the start time, and once the response is sent is the end time. The difference of these two I took as my customRespTime.
Now I compared my customRespTime with the x-response-time added by Vert.x. They do not match, and Vert.x shows huge response times, like 25000 ms, for a few of the requests.
Can someone help?

Always use neg[.z.w] to ensure that all messages are asynchronous?

Consider the following definition on server:
f:{show "Received ",string x; neg[.z.w] (`mycallback; x+1)}
on client side:
q)mycallback:{show "Returned ",string x;}
q)neg[h] (`f; 42)
q)"Returned 43"
In Q for Mortals, the tip says:
When performing asynchronous messaging, always use neg[.z.w] to ensure
that all messages are asynchronous. Otherwise you will get a deadlock
as each process waits for the other.
Therefore I changed the definition on the server to:
f:{show "Received ",string x; .z.w (`mycallback; x+1)}
everything goes fine, and I haven't seen any deadlocks.
Can anyone give me an example to show why I should always use neg[.z.w]?
If I understand your question correctly, I think you're asking how sync and async messages work. The issue with the example you have provided is that x+1 is a very simple query that can be evaluated almost instantaneously. For a more illustrative example, consider changing this to a sleep (or a more strenuous calculation, e.g. a large database query).
On your server side define:
f:{show "Received ",string x;system "sleep 10"; neg[.z.w] (`mycallback; x+1)}
Then on your client side you can send the synchronous query:
h(`f; 42)
multiple times. Doing this you will see there is no longer a q prompt on the client side, as it must wait for a response. These requests can be queued and thus block both the client and the server for a significant amount of time.
Alternatively, if you were to call:
(neg h)(`f; 42)
on the client side, you will see the q prompt remain, as the client is not waiting for a response. This is an asynchronous call.
Now, in your server-side function you are looking at using either .z.w or neg .z.w. This follows the exact same principle, but from the server's perspective. If the response to the query is large enough, the messaging can take a significant amount of time. Consequently, by using neg this response can be sent asynchronously, so the server is not blocked during this process.
NOTE: If you are working on a windows machine you will need to swap out sleep for timeout or perhaps a while loop if you are following my examples.
Update: I suppose one way to cause such a deadlock would be to have two dependent processes attempting to synchronously call each other. For example:
q)\p 10002
q)h:hopen 10003
q)g:{h (`f1;`)}
q)h (`f;`)
on one side and
q)\p 10003
q)h:hopen 10002
q)f:{h (`g;`)}
q)f1:{show "test"}
on the other. This would result in both processes being stuck and thus test never being shown.
Joe's answer covers pretty much everything, but for your specific example, a deadlock happens if the client calls
h (`f; 42)
The client is waiting for a response from the server before processing the next request, but the server is also waiting for a response from the client before it completes the client's request.

REST: Processing error in long running operation, how to inform clients?

I am developing a web service that allows users to request validation reports. Report generation might take up to 20 hours per report. When a new validation request is posted, I return a 202 Accepted answer with Location set to a processing-queue resource (e.g. /queue/5). When the queue resource is polled, some processing information is provided:
<queueResponse>
<status>QUEUED</status>
<queuePosition>1</queuePosition>
</queueResponse>
Once processing completes successfully and the queue is polled, a 303 See Other will redirect to the created resource (e.g. at /reports/5).
However, if a processing error occurs on the server, I simply return my queueResponse without a redirect and with the status set to <status>ERROR</status>.
Is this the best way to communicate a processing error to the client? Or should instead a 500 Internal Server Error be returned when polling the queue for a failed validation task?
Your current solution is best. A 500 error for the queued process information would indicate that the request for that resource had failed, not the process it was reporting on.
Postscript: if your API is still being defined, I would suggest FAILED instead of ERROR, as it sounds more permanent. Errors are potentially recoverable situations; failures are not.
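Under that convention, the queue resource for a failed task might look like this (the errorMessage element is a hypothetical addition to the question's schema, shown only to illustrate carrying a reason alongside the status):
<queueResponse>
<status>FAILED</status>
<errorMessage>validation engine terminated unexpectedly</errorMessage>
</queueResponse>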

SSE Server Sent Events - Client keep sending requests (like polling)

How come every site explains that in SSE a single connection stays open between client and server ("With SSE, a client sends a standard HTTP request asking for an event stream, and the server responds initially with a standard HTTP response and holds the connection open"), and that the server then decides when it can send data to the client, while in my attempt to implement SSE I see in Fiddler requests being sent every couple of seconds?
To me it feels like long polling, not a single connection kept open.
Moreover, it is not that the server decides to send data to the client and sends it; it sends data only when the client sends the next request.
If I respond with "retry: 10000", then even though something has happened that the server wants to report right now, it will reach the client only on the next request (10 seconds from now), which to me does not really look like a connection that is kept open with the server sending data as soon as it wants to.
Your server is closing the connection immediately. SSE has a built-in retry function for when the connection is lost, so what you are seeing is:
Client connects to server
Server mysteriously dies
Client waits two seconds then auto-reconnects
Server mysteriously dies
Client waits two seconds then auto-reconnects
...
To fix the server-side script, you want to go against everything your parents taught you about right and wrong, and deliberately create an infinite loop. So, it will end up looking something like this:
validate user, set up database connection, etc.
while(true){
get next bit of data
send it to client
flush
sleep 2 seconds
}
Where "get next bit of data" might be polling a DB table for new records since the last poll, scanning a file system directory for new files, etc.
Alternatively, if the server-side process is a long-running data analysis, your script might instead look like this:
validate user, set-up, etc.
while(true){
calculate next 1000 digits of pi
send them to client
flush
}
This assumes that the calculate line takes at least half a second to run; any more frequently and you will start to clog up the socket with lots of small packets of data for no benefit (the user won't notice that they are getting 10 updates/second instead of 2 updates/second).

Gatling synchronous Http request/response chain

I have implemented a chain of executions; each execution sends an HTTP request to the server and checks whether the response status is 2xx. I need to implement a synchronous model in which the next execution in the chain is only triggered when the previous execution succeeds, i.e. the response status is 2xx.
Below is a snapshot of the execution chain:
feed(postcodeFeeder).
  exec(Seq(LocateStock.locateStockExecution, ReserveStock.reserveStockExecution, CancelOrder.cancelStockExecution,
    ReserveStock.reserveStockExecution, ConfirmOrder.confirmStockExecution, CancelOrder.cancelStockExecution))
Since Gatling has an asynchronous IO model, what I am currently observing is that the HTTP requests are sent to the server asynchronously by a number of users, and there is no real dependency between the executions with respect to a single user.
Also, I wanted to know: for an actor/user, if an execution in the chain fails its check, does it not proceed with the next execution in the chain?
there is no real dependency between the executions with respect to a single user
No, you are wrong. Except when using "resources", requests are sequential for a given user. If you want to stop the flow for a given user when it encounters an error, you can use exitBlockOnFail.
Gatling does not consider the failure response from the previous request before firing the next in the chain. You may need to wrap the entire block in exitBlockOnFail {} to stop Gatling from firing the next request.