I have implemented a chain of executions, where each execution sends an HTTP request to the server and checks whether the response status is 2xx. I need to implement a synchronous model in which the next execution in the chain is only triggered when the previous execution is successful, i.e. the response status is 2xx.
Below is a snapshot of the execution chain.
feed(postcodeFeeder).
  exec(Seq(LocateStock.locateStockExecution, ReserveStock.reserveStockExecution, CancelOrder.cancelStockExecution,
    ReserveStock.reserveStockExecution, ConfirmOrder.confirmStockExecution, CancelOrder.cancelStockExecution))
Since Gatling has an asynchronous I/O model, what I am currently observing is that the HTTP requests are sent to the server asynchronously by a number of users, and there is no real dependency between the executions with respect to a single user.
Also, I wanted to know: for an actor/user, if an execution in the chain fails its check, does it still proceed with the next execution in the chain?
there is no real dependency between the executions with respect to a single user
No, you are wrong. Except when using "resources", requests are sequential for a given user. If you want to stop the flow for a given user when it encounters an error, you can use exitBlockOnFail.
Gatling does not consider the failure response from the previous request before firing the next one in the chain. You need to wrap the entire block in exitBlockOnFail {} to stop Gatling from firing the next request.
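For example, a minimal sketch (Gatling 2 DSL, reusing the executions from the question; the scenario name is made up):

// Wrapping the chain in exitBlockOnFail makes a user skip the rest of the
// block as soon as a check inside it fails, instead of firing the next request.
val scn = scenario("Stock journey")
  .feed(postcodeFeeder)
  .exitBlockOnFail {
    exec(LocateStock.locateStockExecution)
      .exec(ReserveStock.reserveStockExecution)
      .exec(CancelOrder.cancelStockExecution)
      .exec(ReserveStock.reserveStockExecution)
      .exec(ConfirmOrder.confirmStockExecution)
      .exec(CancelOrder.cancelStockExecution)
  }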
I want to test a worker verticle that receives requests over the EventBus and sends the results back over the EventBus as well. A single request may result in 0, 1, 2, ... responses - in the general case we don't know how many responses we'll get.
The business logic is that requests are acked once the processing is complete; however, the responses are sent in a "fire and forget" manner - therefore we only know the responses were sent, not necessarily that they were delivered yet.
I am writing a test for this verticle.
The test code is planned to be like this:
1. set up consumer for responses
2. send a request
3. wait until request is acked by the worker verticle
4. wait until consumer finishes validating the responses
The problem here is step 4 - in general case we don't know if there are still some responses in flight or not.
A brute force solution is obviously to wait some reasonable time - a few milliseconds is usually enough. However, I'd prefer something more conceptual.
A solution that comes to my mind is this:
send some request for which we know for sure that there would be a single response;
wait until the consumer receives the corresponding response.
That should work, but I dislike the fact that I pump two messages through the SUT instead of just a single one.
A different solution would be to send one extra response from test code, once we have a confirmation that the request was processed - but would it be considered to be the same sender? The EventBus only guarantees delivery order from the same sender, not from different ones. The test doesn't run in cluster mode, all operations are performed on the same machine, though not necessarily in the same thread.
Yet another solution would be to somehow check that EventBus is now empty, but as I understand, this is not possible.
Is there any other (better) solution?
The solution I would choose now (after half a year more experience with vertx/EventBus) is to send two messages.
The second message would get acked only after the processing of the first one is complete.
This would only work if you have a single consumer so that your two messages can't be processed in parallel.
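A minimal sketch of the two-message trick (Vert.x 4 core API called from Scala; the addresses and payloads are assumed, not from the question):

import io.vertx.core.Vertx
import io.vertx.core.eventbus.Message

val vertx = Vertx.vertx()
val bus = vertx.eventBus()
val responses = scala.collection.mutable.Buffer.empty[String]

// 1. Collect the fire-and-forget responses under test.
bus.consumer[String]("worker.responses", (msg: Message[String]) => { responses += msg.body(); () })

// 2. Send the real request and the probe back-to-back. The worker acks each
//    request only once its processing is complete.
bus.request[String]("worker.requests", "real-request")
bus.request[String]("worker.requests", "probe-request").onSuccess { (_: Message[String]) =>
  // 3. With a single consumer the two requests cannot run in parallel, so the
  //    probe's ack means the real request's responses have all been sent.
  assert(responses.nonEmpty)
}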
I have a REST endpoint called CancelOrder. It comprises the following steps (in order):
Cancel fulfilment (Calls a downstream service).
Cancel the quote (Calls a downstream service).
Update the state of the order to cancelled (In our local database).
This is a PUT operation, and hence I am trying to make it idempotent and fail-safe.
Scenario 1:
Just a single call:
Cancels the fulfilment, cancels the quote, updates the state. All's good.
Scenario 2:
A call is midway when a different call is received. Assume no pessimistic locking is present:
The state of the order has not yet been changed to 'cancelled' by the first call, but the fulfilment has already been cancelled. Now, when the second call tries to cancel the fulfilment, it gets an error.
The ideal way to handle the above scenario is to make the API transactional by acquiring a write lock on the document on each call. But I don't want to do that.
How should I handle this scenario?
There are two ways (among a lot of other solutions) to deal with this scenario:
Solution A:
Add a new state on the order, isCanceling. After the server receives the first cancel request for an order, set this state to true. Once the cancel operation is finished, set it back to false.
If the server receives another cancel request for the same order but finds its status is isCanceling, the server returns 102 Processing to the client, indicating that the operation is in progress.
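A minimal sketch of Solution A (in Scala; the names are illustrative, and an in-memory set stands in for the order's persisted isCanceling flag):

import java.util.concurrent.ConcurrentHashMap

object CancelOrderHandlerA {
  // Orders whose cancellation is currently in flight.
  private val cancelling = ConcurrentHashMap.newKeySet[String]()

  // Returns the HTTP status code for a cancel request on orderId.
  def cancelOrder(orderId: String): Int =
    if (!cancelling.add(orderId)) {
      102 // Processing: another cancel on this order is already in flight
    } else {
      try {
        cancelFulfilment(orderId) // downstream call (stubbed below)
        cancelQuote(orderId)      // downstream call (stubbed below)
        markCancelled(orderId)    // local DB update (stubbed below)
        200
      } finally cancelling.remove(orderId)
    }

  // Stubs standing in for the real calls.
  private def cancelFulfilment(id: String): Unit = ()
  private def cancelQuote(id: String): Unit = ()
  private def markCancelled(id: String): Unit = ()
}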
Solution B:
Same as step 1 in Solution A.
Every time the server receives a cancel request (including the first one), a listener is added to that order's queue, waiting to be notified by the event "Cancel-OK" or "Cancel-Fail".
If the server receives a cancel request for an order but finds its status is isCanceling, the server does nothing except add the corresponding listener to the above queue.
Once the cancel operation is finished (success or fail), an event is fired. All the listeners in the queue get the message, and an HTTP response is returned for all the previously pending HTTP requests.
Personally, I prefer Solution B.
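A minimal sketch of Solution B (again Scala with illustrative names), where a shared Promise plays the role of the listener queue: every request for the same order awaits the same Promise, and completing it notifies all pending requests at once:

import java.util.concurrent.ConcurrentHashMap
import scala.concurrent.{Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global

object CancelOrderHandlerB {
  // One shared Promise per order with a cancel in flight.
  private val inFlight = new ConcurrentHashMap[String, Promise[Int]]()

  def cancelOrder(orderId: String): Future[Int] = {
    val fresh = Promise[Int]()
    val existing = inFlight.putIfAbsent(orderId, fresh)
    if (existing != null) existing.future // piggy-back on the in-flight cancel
    else {
      fresh.completeWith(Future {
        try doCancel(orderId) // the "Cancel-OK" / "Cancel-Fail" event, as a status code
        finally inFlight.remove(orderId)
      })
      fresh.future
    }
  }

  // Stub standing in for the three cancellation steps.
  private def doCancel(orderId: String): Int = 200
}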
I am confused about the usage of Action and Action.async, and what the appropriate condition is to use each.
I have written the method with Action.async with just a for loop, which takes 12 seconds to process:
import java.util.Calendar
import play.api.libs.json.Json
import play.api.mvc._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def asyncIndex() = Action.async {
  val time = Calendar.getInstance().get(Calendar.SECOND)
  Future {
    // CPU-bound loop; this occupies a thread of the execution context.
    for (i <- 0 to 20000000) {
      print(i)
    }
    Ok(Json.toJson(time))
  }
}
When I simultaneously make two requests to this method, the second request is blocked until the first one is completed.
PS: I think I have not understood the proper concept of async calls.
I am confused about Action and Action.async and what the appropriate condition is to use each
From the documentation:
Note: Both Action.apply and Action.async create Action objects that are handled internally in the same way. There is a single kind of Action, which is asynchronous, and not two kinds (a synchronous one and an asynchronous one). The .async builder is just a facility to simplify creating actions based on APIs that return a Future, which makes it easier to write non-blocking code.
when I simultaneously make two requests to this method, the second request is blocked until the first one is completed
Also from the documentation:
The web client will be blocked while waiting for the response, but nothing will be blocked on the server, and server resources can be used to serve other clients.
If your simultaneous requests are from the same synchronous client, one of the requests on the client side will be blocked until the other is completed. There is no blocking on the server side. To achieve parallel processing of requests to the same endpoint, use distinct clients to make those requests, or use a client that makes asynchronous HTTP calls. Also consider using a separate dispatcher for this endpoint, even if you're wrapping the processing inside a Future (more information on creating a custom dispatcher is in the linked documentation).
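For illustration, a hedged sketch (assuming Play 2.6+ with Akka; the dispatcher name and pool size are made up) of moving the blocking loop onto its own dispatcher:

// In conf/application.conf (assumed):
//   my-blocking-dispatcher {
//     type = Dispatcher
//     executor = "thread-pool-executor"
//     thread-pool-executor { fixed-pool-size = 8 }
//   }

import java.util.Calendar
import javax.inject.Inject
import akka.actor.ActorSystem
import play.api.libs.json.Json
import play.api.mvc._
import scala.concurrent.{ExecutionContext, Future}

class AsyncController @Inject()(system: ActorSystem, cc: ControllerComponents)
    extends AbstractController(cc) {

  // The CPU-bound work runs on its own pool, so Play's default dispatcher
  // stays free to serve other requests.
  implicit val blockingEc: ExecutionContext =
    system.dispatchers.lookup("my-blocking-dispatcher")

  def asyncIndex() = Action.async {
    val time = Calendar.getInstance().get(Calendar.SECOND)
    Future {
      for (i <- 0 to 20000000) print(i)
      Ok(Json.toJson(time))
    }
  }
}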
Blocking code creates that waiting time: the code inside the Future body is not fully non-blocking, because there is a CPU-bound loop in it before the Ok response is sent; so naturally, when you fire the calls, it takes some time to get the second response. If you remove the for loop and send a number of calls (through curl, for example) you will see the app responding without "waiting" time. Of course this has its limits, namely the specifications of your machine (CPU, RAM, etc.). So, using Action.async by itself while writing waiting/blocking code inside it does not make the whole code non-blocking.
When to use: there is a simple rule for it: if your controller's method body contains concurrent (Future-based) code, define the action as Action.async {...}; if not, as Action {...}.
Please note that in Play all actions are asynchronous.
I implemented an HTTPS/REST provider in Node.js using Express. The function calls a web service, transforms/enhances the data, and returns the transformed data as CSV in the response. The execution time of one GET request is between 4 minutes 30 seconds and 5 minutes. I want to test the implementation by calling the URL.
Problem:
Execution in Google Chrome fails since it runs too long; there is no option to increase the timeout value.
Execution in Mozilla Firefox: I changed network.http.response.timeout, but now the request is executed over and over again; it looks like the response is ignored completely.
Execution in Postman: I changed Settings -> General -> XHR timeout in ms (...). Nevertheless, execution stops every time after the same number of seconds with the message "Could not get any response".
My question: which tool(s) can I use for reliable testing of long-running HTTP REST requests?
curl has a --max-time setting (in seconds) which should do what you want:
curl -m 330 http://your.url
But it might be worth creating a background job and polling for completion of the background job instead. HTTP isn't best suited to long-running tasks.
I suggest you use Socket.IO to deliver the response asynchronously via pub/sub when the CSV file is ready. The client sends the request with a timeout of, for example, 6 minutes; the server returns an ack to confirm that file processing has started; and when the file is ready, the server returns it via Socket.IO. Socket.IO can be integrated with Express:
http://socket.io/
Do you have control over the server? If so, you should alter how it operates. Instead of the initial request expecting a response containing the answer, your API should emit a token (a URI) from where the status of the operation can be obtained. The status will either be "in progress" or "completed; here's your answer: ..."
You make the problem (the long-running operation) into its own first-class entity on your server.
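The question's server is Node/Express, but the shape of the pattern is the same in any stack; here is a hedged in-memory sketch in Scala (all names assumed):

import java.util.UUID
import java.util.concurrent.ConcurrentHashMap
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object JobStore {
  sealed trait Status
  case object InProgress extends Status
  final case class Completed(csv: String) extends Status

  private val jobs = new ConcurrentHashMap[String, Status]()

  // Start the long-running work; return the token (URI path segment) to poll.
  def submit(work: () => String): String = {
    val id = UUID.randomUUID().toString
    jobs.put(id, InProgress)
    Future { jobs.put(id, Completed(work())) } // runs for minutes in the background
    id
  }

  // What a GET on the status URI would report: in progress, completed, or unknown id.
  def status(id: String): Option[Status] = Option(jobs.get(id))
}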
I have a systems design challenge that I would like to get some community feedback on.
Basic system structure:
[Client] ---HTTP-POST--> [REST Service] ---> [Queue] ---> [Processors]
[Client] POSTs json to [REST Service] for processing.
Based on the request, the [REST Service] sends data to various queues to be picked up by various processors written in various languages and running in different processes.
Work is parallelized in each processor but can still take up to 30 seconds to process. The time to process is a function of the complexity of the data and cannot be sped up.
The result cannot be streamed back to the client as it completes, because there is a final post-processing step that can only run once all the sub-steps are finished.
Key challenge: Once the post processing is complete, the client either needs to:
be sent the results on the request it has kept open while waiting, or
be notified asynchronously that the job is completed and passed an id with which to request the final result
Design requirements
I don't want to block the [REST Service]. It needs to take the incoming request, route the data to the appropriate queues for processing in other processes, and then be immediately available for the next incoming request.
Normally I would have used actors and/or futures/promises so the [REST Service] is not blocked when waiting for background workers to complete. The challenge here is that the workers doing the background work are running in separate processes/VMs and written in various technology stacks. In order to pass these messages between heterogeneous systems and to ensure the integrity of the request lifetime, a durable queue is being used (not in-memory message passing or RPC).
Final point of consideration: in order to scale, there are load-balanced pools of [REST Services] and [Processors]. Since the messages from the [REST Service] to the [Processor] are sent asynchronously via a queue (and everything is running in separate processes), there is no way to correlate the work done in a background [Processor] back to its original calling [REST Service] instance in order to return the final processed data in a promise or actor message and finally pass the response back to the original client.
So, the question is: how to make this correlation? Once all the background processing is completed, I need to get the result back to the client, either via a long-waited response or a notification (I do not want to use something like UrbanAirship, as most of the clients are browsers or other services).
I hope this is clear; if not, please ask for clarification.
Edit: Possible solution - thoughts?
I think I can pass a spray RequestContext to any actor, which can then respond back to the client (it does not have to be the original actor that received the HTTP request). If this is true, can I cache the RequestContext and then use it later to asynchronously send the response to the appropriate client once the processing is completed?
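A sketch of what I have in mind (Spray 1.x; the message shapes are assumed, and the completed-work message would have to be routed back to the same [REST Service] instance that cached the context, e.g. via an instance-specific reply queue):

import akka.actor.Actor
import spray.routing.RequestContext

case class Park(correlationId: String, ctx: RequestContext)
case class WorkDone(correlationId: String, result: String)

class Correlator extends Actor {
  // Cached contexts, held until the processors report back.
  private var pending = Map.empty[String, RequestContext]

  def receive = {
    case Park(id, ctx) =>
      pending += id -> ctx
    case WorkDone(id, result) =>
      pending.get(id).foreach(_.complete(result)) // responds to the waiting client
      pending -= id
  }
}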
Well, it's not the best because it requires more work from your Client, but it sounds like you want to implement a webhook. So,
[Client] --- POST --> [REST Service] ---> [Calculations] --- POST --> [Client]
[Client] --- GET --> [REST Service]
For explanation:
The Client sends a POST request to your service. Your service then does whatever processing is necessary. Upon completion, your service sends an HTTP POST to a URL that the Client has registered in advance. With that POST data, the Client then has the necessary information to make a GET request for the completed data.
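For instance, the callback side could look like this sketch (Scala with the JDK 11 HTTP client; the payload shape and retry policy are assumed):

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Once post-processing finishes, POST the job id to the callback URL the
// client registered; the client then GETs the finished result using that id.
def notifyClient(callbackUrl: String, jobId: String): Unit = {
  val request = HttpRequest.newBuilder(URI.create(callbackUrl))
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString(s"""{"jobId":"$jobId"}"""))
    .build()
  // Fire-and-forget here; a real implementation would retry on failure.
  HttpClient.newHttpClient().sendAsync(request, HttpResponse.BodyHandlers.discarding())
  ()
}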