vert.x ResponseTimeHandler is not giving correct processing time of the request - vert.x

We have a vert.x server running, and we fetch req.response().headers().get("x-response-time"); after request.response().end();
We use the result as the response time taken by our APIs behind vert.x.
For a few of the API requests I have seen x-response-time values as high as 25000ms, which is huge.
So I added one more metric to calculate the response time on my own: when a request enters the verticle I record the start time, and once the response is sent I record the end time. I took the difference of the two as my customRespTime.
Now I compared the customRespTime I added with the x-response-time added by vert.x. They do not match, and vert.x shows huge response times like 25000ms for a few of the requests.
Can someone help?
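For comparison, here is a minimal, framework-agnostic Java sketch of the custom timing approach described above; handleRequest is a hypothetical stand-in for a verticle's request handler, not a Vert.x API. Using a monotonic clock such as System.nanoTime() avoids distortions from wall-clock adjustments.

```java
// Sketch of per-request timing with a monotonic clock.
// handleRequest() stands in for a verticle's request handler; it is a
// hypothetical name, not part of the Vert.x API.
public class RequestTimer {

    // Simulated request handling: runs the work and returns the measured
    // elapsed time in milliseconds (the "customRespTime" from the question).
    static long handleRequest(Runnable doWork) {
        long startNanos = System.nanoTime();   // request enters the handler
        doWork.run();                          // process and "send" the response
        long endNanos = System.nanoTime();     // response has been written
        return (endNanos - startNanos) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsedMs = handleRequest(() -> {
            try {
                Thread.sleep(50); // stand-in for real request processing
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println("customRespTime=" + elapsedMs + "ms");
    }
}
```

Comparing a number measured this way against the header value at least tells you whether the discrepancy is in the measurement or in the server's actual behavior (e.g. the request being queued before the handler runs).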

Related

Sequential request processing in Talend

When sending requests to Talend in a certain sequence, with a small delay between requests, Talend processes these requests in a random order. I can see this from the data recorded in the database. How can this be fixed?
I was looking for a solution in the CXF workqueue, but I don't understand how to set it up or whether it is what I need.
Sorry for my English.

The fast way to execute rest requests that require incremented value (nonce)

I'm working with a REST API that requires an incremented parameter to be sent with each request. I use Unix milliseconds as the nonce and originally, naively, sent requests one after another, but even if I send one message before another, they can arrive in reversed order, which results in an error.
One solution would be to send the next request only after the previous one has come back, but that would be too slow. I'm thinking about a less strict solution, like measuring the latency over the last 10 requests and waiting for x% of that latency before sending the next message. I feel like this problem should already have been solved, but I can't find any good reference. I would appreciate any advice.
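One building block, sketched below under the assumption that the API only requires each nonce to be strictly greater than the previous one, is a thread-safe generator that uses Unix milliseconds but bumps the value whenever two requests land in the same millisecond. It does not solve network reordering by itself; the sends still need to be serialized or throttled as described above.

```java
import java.util.concurrent.atomic.AtomicLong;

// Thread-safe monotonic nonce generator: returns the current Unix time in
// milliseconds, bumped by one whenever two calls would otherwise collide,
// so every nonce is strictly greater than the previous one.
public class NonceGenerator {
    private final AtomicLong last = new AtomicLong(0);

    public long next() {
        return last.updateAndGet(prev -> Math.max(prev + 1, System.currentTimeMillis()));
    }

    public static void main(String[] args) {
        NonceGenerator gen = new NonceGenerator();
        long a = gen.next();
        long b = gen.next();
        System.out.println(a + " < " + b); // strictly increasing even within one ms
    }
}
```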

http REST consumer timeout

I implemented an https/REST provider in node.js using express. The function calls a web service, transforms/enhances the data, and returns the transformed data as CSV in the response. Execution time of one GET request is between 4 minutes 30 seconds and 5 minutes. I want to test the implementation by calling the URL.
Problem:
execution in Google Chrome fails since it runs too long. There is no option to increase the timeout value.
execution in Mozilla Firefox: network.http.response.timeout changed. Now the request is executed over and over again. It looks like the response is ignored completely.
execution in Postman: changed Settings -> General -> XHR timeout in ms (...). Nevertheless, execution stops every time after the same number of seconds with the message: "Could not get any response".
My question: which tool(s) can I use for reliable testing of long-running HTTP REST requests?
curl has a --max-time setting (in seconds) which should do what you want.
curl -m 330 http://your.url
But it might be worth creating a background job and polling for completion of the background job instead. HTTP isn't well suited to long-running tasks.
I suggest you use Socket.IO to deliver the response asynchronously via pub/sub when the CSV file is ready. In the client, send the request and set a timeout of 6 minutes, for example; the server returns an ack for the request to confirm that file processing has started, and when the file is ready, it returns the file via Socket.IO. Socket.IO can be integrated with express.
http://socket.io/
Do you have control over the server? If so, you should alter how it operates. Instead of the initial request expecting a response containing the answer, your API should emit a token (a URI) from where the status of the operation can be obtained. The status will either be "in progress" or "completed; here's your answer: ..."
You make the problem (the long-running operation) into its own first-class entity on your server.
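A minimal in-memory sketch of that idea, with illustrative names (JobTracker, submit, complete, status) not taken from any particular framework: the initial request registers the long-running operation and immediately hands back a token, and the status endpoint reads from the tracker until the worker has filled in the result.

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// In-memory job tracker backing a "token URI + status polling" style API.
// submit() returns a token immediately; complete() is called by the worker;
// status() is what the status endpoint would return to the polling client.
public class JobTracker {
    private final Map<String, Optional<String>> jobs = new ConcurrentHashMap<>();

    // Register a new long-running job and return its token (used in the status URI).
    public String submit() {
        String token = UUID.randomUUID().toString();
        jobs.put(token, Optional.empty()); // empty = still in progress
        return token;
    }

    // Called by the background worker once the answer is ready.
    public void complete(String token, String result) {
        jobs.put(token, Optional.of(result));
    }

    // "in progress" or "completed; here's your answer: ..."
    public String status(String token) {
        Optional<String> result = jobs.get(token);
        if (result == null) return "unknown token";
        return result.map(r -> "completed; here's your answer: " + r)
                     .orElse("in progress");
    }
}
```

In a real service the tracker would need persistence and expiry, but the shape of the API is the same: the long-running operation becomes an entity the client can address.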

Async POST requests on REST API by multi users and wait for them to complete all in Jmeter

I'm submitting multiple POSTs on a REST API using the same input JSON. That means multiple users (e.g. 10000) submit the same POST with the same JSON to measure the performance of the POST request, but I need to capture the completion result of each submission using a GET method and still measure the performance of the GET as well. This is an asynchronous process, as follows.
POST submit
generates an ID1
wait for processing
in next step another ID2 will be generated
wait for processing
in next step another ID3 will be generated
wait for processing
final step is completion.
So I need to create a JMeter test plan that can process these asynchronous POST submits by multiple users, wait for them to be processed, and finally capture the completion of each submission. I need to generate a graph and a table-format report showing latency and throughput. Sorry for my lengthy question. Thanks, Santana.
Based on your clarification in the comment, it looks to me like you have a fairly straightforward script, which could be expressed like this:
Thread Group
HTTP Sampler 1 (POST)
Post-processor: save ID1 as a variable ${ID1}
Timer: wait for next step to be available
HTTP Sampler 2 (GET, uses ${ID1})
Post-processor: save ID2 as a variable ${ID2}
Timer: wait for next step to be available
HTTP Sampler 3 (GET, uses ${ID1} and ${ID2})
Post-Processor: extract completion status
(Optional) Assertion: check completion status
I cannot say which Timer or Post-processor specifically to use; they depend on the specific requests you have.
You don't need to worry about multiple users from the JMeter perspective (the variables are always independent for each user), but of course you need to make sure that the multiple initial POSTs do not conflict with each other from the application's perspective (i.e. each POST should process independent data).
Latency is part of the standard interface used to save results to the file. But as JMeter's own documentation states, latency measurement is somewhat limited in JMeter:
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Throughput is available in some UI listeners, but can also be calculated in the same way as JMeter calculates it:
Throughput = (number of requests) / (total time)
using raw data in the file.
If you are planning to run 100-200 users (or for debugging purposes), use UI listeners; with higher load, use JMeter's non-UI mode and save results to CSV, which you can analyze later. I'd say get your test to pass in UI mode first with 100 users, and then set up a more robust multi-machine 10K-user test.
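The throughput formula above can be sketched in plain Java against JMeter-style samples (assuming the default timeStamp epoch-milliseconds and elapsed-milliseconds columns of a JMeter CSV; the exact columns depend on your jmeter.properties):

```java
// Compute throughput from JMeter-style samples: each row holds
// {timeStamp (epoch ms), elapsed (ms)}, as in JMeter's default CSV output.
// Throughput = (number of requests) / (total time), where total time runs
// from the first request's start to the last request's end.
public class ThroughputCalc {

    public static double throughputPerSecond(long[][] samples) {
        long firstStart = Long.MAX_VALUE;
        long lastEnd = Long.MIN_VALUE;
        for (long[] s : samples) {
            long start = s[0];        // timeStamp column
            long end = s[0] + s[1];   // start + elapsed
            firstStart = Math.min(firstStart, start);
            lastEnd = Math.max(lastEnd, end);
        }
        double totalSeconds = (lastEnd - firstStart) / 1000.0;
        return samples.length / totalSeconds;
    }

    public static void main(String[] args) {
        // Three samples: starting at t=0ms, 500ms, 1000ms; each takes 1000ms.
        // Total window is 2000ms, so throughput is 3 / 2.0 = 1.5 req/s.
        long[][] samples = { {0, 1000}, {500, 1000}, {1000, 1000} };
        System.out.println(throughputPerSecond(samples) + " req/s");
    }
}
```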

async call back using scala and play2 or spray

I have a systems design challenge that I would like to get some community feedback on.
Basic system structure:
[Client] ---HTTP-POST--> [REST Service] ---> [Queue] ---> [Processors]
[Client] POSTs json to [REST Service] for processing.
Based on the request, the [REST Service] sends data to various queues to be picked up by various processors written in various languages and running in different processes.
Work is parallelized in each processor but can still take up to 30 seconds to process. The time to process is a function of the complexity of the data and cannot be sped up.
The result cannot be streamed back to the client as it is completed because there is a final post processing step that can only be completed once all the sub steps are completed.
Key challenge: Once the post processing is complete, the client either needs to:
be sent the results after the client has been waiting
be notified async that the job is completed and passed an id to request the final result
Design requirements
I don't want to block the [REST Service]. It needs to take the incoming request, route the data to the appropriate queues for processing in other processes, and then be immediately available for the next incoming request.
Normally I would have used actors and/or futures/promises so the [REST Service] is not blocked when waiting for background workers to complete. The challenge here is the workers doing the background work are running in separate processes/VMs and written in various technology stacks. In order to pass these messages between heterogeneous systems and to ensure integrity of the request lifetime, a durable queue is being used (not in memory message passing or RPC).
Final point of consideration: in order to scale, there are load-balanced pools of [REST Services] and [Processors]. Therefore, since the messages from the [REST Service] to the [Processor] need to be sent asynchronously via a queue (and everything is running in separate processes), there is no way to correlate the work done in a background [Processor] back to its original calling [REST Service] instance in order to return the final processed data in a promise or actor message and finally pass the response back to the original client.
So, the question is: how to make this correlation? Once all the background processing is completed, I need to get the result back to the client, either via a long-waited response or a notification (I do not want to use something like UrbanAirship, as most of the clients are browsers or other services).
I hope this is clear, if not, please ask for clarification.
Edit: Possible solution - thoughts?
I think I can pass a spray RequestContext to any actor, which can then respond back to the client (it does not have to be the original actor that received the HTTP request). If this is true, can I cache the RequestContext and then use it later to asynchronously send the response to the appropriate client when the processing is completed?
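The correlation itself can be sketched independently of spray: keep a map from a correlation ID (generated when the POST arrives and carried through every queue message) to the pending client response, and complete it when the processor's result comes back on a reply queue. A hedged Java sketch of that registry follows; the names are illustrative, and in the spray design the cached RequestContext would play the role of the CompletableFuture here.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Correlation registry: maps a correlation ID (attached to every queue message)
// to the pending client response held by the REST-service instance that
// accepted the original POST.
public class CorrelationRegistry {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called when the client POST arrives: register the pending response and
    // return the correlation ID to attach to the outgoing queue messages.
    public String register(CompletableFuture<String> responsePromise) {
        String correlationId = UUID.randomUUID().toString();
        pending.put(correlationId, responsePromise);
        return correlationId;
    }

    // Called when a result arrives on this instance's reply queue.
    public boolean complete(String correlationId, String result) {
        CompletableFuture<String> promise = pending.remove(correlationId);
        if (promise == null) return false; // not ours, or already completed
        promise.complete(result);
        return true;
    }
}
```

This only routes correctly under load balancing if each REST-service instance listens on its own reply queue (or filters replies by an instance ID carried alongside the correlation ID), so the final result lands on the instance that still holds the pending response.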
Well, it's not ideal because it requires more work from your Client, but it sounds like you want to implement a webhook. So:
[Client] --- POST--> [REST Service] ---> [Calculations] ---> POST [Client]
[Client] --- GET
For explanation:
The Client sends a POST request to your service. Your service then does whatever processing is necessary. Upon completion, your service sends an HTTP POST to a URL that the Client has already set. With that POST data, the Client will then have the necessary information to do a GET request for the completed data.
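The completion callback can be sketched with the JDK's java.net.http client (Java 11+). The callback URL and payload below are illustrative, and the request is only built here, not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Build the webhook callback: an HTTP POST the server sends to the URL the
// Client registered, once processing is complete. Actually sending it would
// be HttpClient.newHttpClient().send(request, BodyHandlers.ofString()).
public class WebhookCallback {

    public static HttpRequest buildCallback(String callbackUrl, String resultId) {
        // Minimal JSON payload telling the Client which result to GET.
        String payload = "{\"status\":\"completed\",\"resultId\":\"" + resultId + "\"}";
        return HttpRequest.newBuilder(URI.create(callbackUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildCallback("http://client.example/hook", "job-123");
        System.out.println(req.method() + " " + req.uri());
    }
}
```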