I am using Locust for performance testing of my application and have to test the following scenario. I want to know whether it's supported in Locust.
The Locust client sends a POST request to the application, setting a callback URL and a timer.
When the timer expires, the application, acting as a server, sends a request back to the callback URL.
The Locust tool has to handle that request and respond to the server.
Does Locust support handling server-initiated messages?
Please let me know if anyone has come across this kind of scenario.
Locust doesn't support that out of the box, but you may be able to set up a new endpoint on the Locust web UI that responds to the callbacks (using the pre-release 1.0b1 version you can do it this way: https://docs.locust.io/en/latest/extending-locust.html#adding-web-routes).
Your Flask response function would then have to log the callback by calling web_ui.environment.events.request_success.fire(request_type="callback", name="mycallbackthingy", response_time=<time measured by you>, response_length=0).
If you can pass the timestamp of the initial call as a parameter in the callback URL, you can calculate the response time that way.
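Putting those pieces together, a rough sketch might look like the following (the /callback route and the sent_at query parameter are my own assumptions, and the exact extension hooks may differ between Locust versions):

import time

from flask import request
from locust import events

@events.init.add_listener
def on_locust_init(environment, **kwargs):
    if environment.web_ui:
        # Register an extra route on the web UI's Flask app; the
        # application under test uses it as the callback URL.
        @environment.web_ui.app.route("/callback", methods=["POST"])
        def handle_callback():
            # Assumption: the initial request embedded its send time in
            # the callback URL as ?sent_at=<epoch seconds>, so the
            # end-to-end response time can be computed here.
            sent_at = float(request.args.get("sent_at", time.time()))
            environment.events.request_success.fire(
                request_type="callback",
                name="mycallbackthingy",
                response_time=(time.time() - sent_at) * 1000,
                response_length=0,
            )
            return "OK"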
I have the following case: I have a REST API that can only be accessed with credentials. I need the frontend to make requests directly to the API to get the data. Because I don't want to put the credentials somewhere in the frontend, I set up a proxy server that forwards my requests with Guzzle (http://docs.guzzlephp.org/en/stable/index.html) but adds the necessary authentication.
Now, that worked neatly for some time, but then I added a new view where I need to fetch from one more endpoint (so far it was three requests, running locally on MAMP).
Whenever I add a fourth API request (they are all executed right on page load), my local server crashes.
I assume it is linked to this topic here:
Guzzle async requests not really async?, specifically because I make a new request for every fetch.
First: do you think that could be the case? Could my local server really crash because of only three or four (probably simultaneous) requests?
Second: how could I approach this problem?
I don't really see a way to group the requests, because they just come in to the proxy URL, and every call of the proxy URL creates a new Guzzle client with its own request...
(I mean, how many things can a simple PHP server execute at the same time? And why would it not just add requests to the call stack and execute them in order?)
Thanks for any help on this issue.
I need to create some kind of health check in Splunk that calls a REST URL every hour, checks whether the response returns HTTP code 200, and sends an alert in case there is an error like HTTP code 400 or 500.
For example, Splunk should make an HTTP call to the URL of my application every hour and check whether it returns HTTP code 200. If the response has a code other than 200, it should send a notification email saying that something is wrong.
Is that possible?
Please help.
Check out the REST API Modular Input app at https://splunkbase.splunk.com/app/1546/.
You can also create a Python program that checks the URL and reports on its health. Schedule that program as a scripted input.
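A minimal sketch of such a scripted input, assuming the requests library is available and with a placeholder URL (the key=value output format is just one convenient choice for searching and alerting in Splunk):

import time

import requests

URL = "https://example.com/health"  # placeholder; use your application's URL

try:
    resp = requests.get(URL, timeout=10)
    status = resp.status_code
except requests.RequestException:
    status = 0  # connection-level failure

# Splunk indexes whatever the script prints, so emit key=value pairs
# that are easy to search on.
print("timestamp=%d url=%s status=%d healthy=%s"
      % (int(time.time()), URL, status, status == 200))

Scheduled hourly as a scripted input, a saved search alerting on status!=200 can then send the notification email.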
You can use the Website Monitoring app for Splunk (https://splunkbase.splunk.com/app/1493/) to get the return codes for your endpoints.
How is it possible to handle timeouts in time-consuming operations in a REST API? Let's take the following scenario as an example:
A client service sends a request to insert a resource through a REST API.
The timeout elapses. The client thinks the insertion failed.
The REST API keeps working and finishes the insertion.
The client is not notified of the resource insertion, and its status remains "Failed".
I can think of a solution with a message broker: send the orders to a queue and wait until they are resolved.
Any other workaround?
EDIT 1:
The POST-PUT pattern, as suggested in this thread.
A message broker (adds more complexity to the system).
A callback or webhook: pass a return URL in the request that the server API can call to let the client know the work is completed (see the sketch after this list).
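For the callback/webhook option, a rough sketch in Python with Flask (all endpoint and field names here are invented for illustration, not a production design):

import threading
import time
import uuid

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

def insert_resource(payload):
    time.sleep(30)  # stand-in for the time-consuming insertion
    return str(uuid.uuid4())

def insert_then_notify(payload, callback_url):
    resource_id = insert_resource(payload)
    # Notify the client that the work is done, regardless of whether its
    # original request timed out in the meantime.
    requests.post(callback_url, json={"id": resource_id, "status": "created"})

@app.route("/resources", methods=["POST"])
def create():
    body = request.get_json()
    # Accept immediately and do the slow work off the request thread.
    threading.Thread(
        target=insert_then_notify,
        args=(body["data"], body["callback_url"]),
    ).start()
    return jsonify(status="accepted"), 202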
HTTP defines a set of properties for its methods, primarily safety, idempotency, and cacheability. While the first one guarantees a client that no data is modified, the second one promises that a request can be reissued after connection issues, when the client does not know whether the initial request succeeded or only the response got lost midway. PUT, for example, provides idempotency.
A simple POST request to "insert" some data does not have any of these properties. A server receiving a POST request furthermore processes the payload according to its own semantics. The client does not know beforehand whether a resource will be created or whether the server will just ignore the request. In case the server created a resource, it will inform the client via the Location HTTP response header, which points to the actual location from which the client can retrieve information.
PUT is usually used only to "update" a resource, though according to the spec it can also be used to create a new resource if it does not yet exist. As with POST, on a successful resource creation the PUT response should include such a Location HTTP response header to inform the client that a resource was created.
The POST-PUT creation pattern separates the creation of the URI from the actual persistence of the representation: the client first fires off POST requests to the server until a response is received containing a Location HTTP response header. This header is then used in a PUT request to actually send the payload to the server. As PUT is idempotent, the client can simply reissue the request until it receives a valid response from the server.
On sending the initial POST request, a client can't be sure whether the request reached the server and only the response got lost, or whether the initial request never made it to the server. As the request is only used to create a new URI (without any content yet), the client may simply reissue it and, in the worst case, just create a new URI that points to nothing. The server may have a cleanup routine that frees unused URIs after a certain amount of time.
Once the client receives the URI, it can simply use PUT to reliably send data to the server. As long as it hasn't received a valid response, it can just reissue the request over and over until it gets one.
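A rough client-side sketch of the pattern, assuming a hypothetical API whose POST response carries the new URI in the Location header (the base URL and endpoint are made up):

import requests

BASE = "https://api.example.com"  # hypothetical service

def create_resource(payload, retries=5):
    # Step 1: POST just to mint a URI. If a response is lost, retrying at
    # worst creates an extra empty URI the server can clean up later.
    location = None
    for _ in range(retries):
        try:
            resp = requests.post(BASE + "/resources", timeout=5)
        except requests.RequestException:
            continue
        if "Location" in resp.headers:
            location = resp.headers["Location"]
            break
    if location is None:
        raise RuntimeError("could not obtain a resource URI")

    # Step 2: PUT the actual payload to that URI. PUT is idempotent, so
    # the request can be safely repeated until a valid response arrives.
    for _ in range(retries):
        try:
            resp = requests.put(location, json=payload, timeout=5)
        except requests.RequestException:
            continue
        if resp.ok:
            return location
    raise RuntimeError("payload upload was never confirmed")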
I therefore do not see the need to use a message-oriented middleware (MOM) using brokers and queues in order to guarantee reliable messaging.
You could also cache the data after a successful insertion together with a previously exchanged request_id or something of that sort. But I believe a message broker with some asynchronous task runner is a much better way to deal with the problem, especially if your request threads are a scarce resource. What I mean by that is: if you are receiving a good amount of requests all the time, it is a good idea to return your responses as quickly as possible so that workers stay available for incoming requests.
Web application operations are generally meant to be quick, to avoid long wait times for users. However, some operations a web application performs may be computationally intensive and take a fair bit of time. What is the best practice in REST for dealing with operations that may take several minutes yet require an immediate response to users? Is it okay for the web application to take several minutes to return the response to the HTTP request, or is it better to return a 202 response, process in the background somewhere else, and then provide some form of notification to the user?
Is it okay for the web application to take several minutes to return the response to the HTTP request?
No. Part of the problem with this approach is that if the server doesn't acknowledge the request in a timely fashion, the client won't know that it reached its intended destination.
is it better to return a 202 response, process in the background somewhere else, and then provide some form of notification to the user?
Yes. That's exactly what 202 Accepted is designed for:
The 202 response is intentionally noncommittal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The representation sent with this response ought to describe the request's current status and point to (or embed) a status monitor that can provide the user with an estimate of when the request will be fulfilled.
It can help, I think, to remember that we're talking about your integration domain; the client isn't talking to your app. It's instead talking to your API, which pretends to be a web site that the client can integrate with. So your client sends the request to the API, and the API responds with an accepted message accompanied by a bunch of links that will help the client continue with the protocol and eventually reach its goal.
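As a toy illustration of that protocol in Python with Flask (the /jobs routes and the in-memory job store are invented for the example):

import threading
import time
import uuid

from flask import Flask, jsonify, url_for

app = Flask(__name__)
jobs = {}  # job_id -> "running" | "done"; a real app would persist this

def slow_task(job_id):
    time.sleep(60)  # stand-in for the computationally intensive work
    jobs[job_id] = "done"

@app.route("/jobs", methods=["POST"])
def submit():
    job_id = str(uuid.uuid4())
    jobs[job_id] = "running"
    threading.Thread(target=slow_task, args=(job_id,)).start()
    # 202 Accepted: the work is queued, not finished. The Location header
    # points the client at a status monitor it can poll.
    resp = jsonify(status="running")
    resp.status_code = 202
    resp.headers["Location"] = url_for("status", job_id=job_id)
    return resp

@app.route("/jobs/<job_id>")
def status(job_id):
    return jsonify(status=jobs.get(job_id, "unknown"))

The client then polls the Location URI until the status changes, which keeps the original HTTP connection short-lived.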
I'm attempting to use SoapUI (5.0.0 beta) to test a RESTful web service which issues asynchronous responses to a supplied Callback URL.
So far, I am able to invoke the service and confirm the initial synchronous response received. I have also created a 'REST MockService' that issues an appropriate response to the callback received from the server, and I supply the endpoint for this as the Callback URL in the initial request.
What I am now struggling with is creating a test case that ties the two together, such that I can 'expect' the asynchronous callback before proceeding to another step in my test case. I tried adding a 'Mock Response' test step to my test case following the initial request. However, this just yields an unhelpful 'Missing SOAP Operations to Mock in Project' error message. I took a brief look at the SoapUI source behind this message and discovered that the method returning the error first checks whether I am using a WSDL interface. Why I would be doing so in a REST project is beyond me, but there you have it!
Appreciate any guidance on how to proceed!
After logging a query on the SoapUI community board, it seems there is currently no good mechanism for achieving this, but my query has at least prompted an enhancement request:
http://forum.soapui.org/viewtopic.php?f=5&t=23697