In JMeter, is there a way of testing an autocomplete that cancels requests?

I'll start this question with 'this is not the same as the previous one'. I can see straightaway that an almost identical question has been asked but the answer is not what I'm after. I will explain...
I need to test an autocomplete search box in a web page. Normally I'd just do a series of requests with the search term containing one extra letter each time (which is the answer to the other, similar question). The problem is, that's not how the page behaves. It does submit a new request each time I type a letter, but it cancels the previous one instead of letting it continue. Therefore the only one that actually gets an HTTP 200 response is the very last one.
This blog contains an example of what I'm seeing:
Autocomplete and request cancellation
But about halfway down it shows our test condition:
Client cancellation must also be supported by the search backend. Backend that doesn’t support cancellation continues processing request even after client disconnects.
I need to write a JMeter script that replicates a series of cancelled requests, followed by a single successful request, such that when I look at the backend I either see multiple running queries (bad) or just the last one (good).
Edit: I've also hit a follow-up issue: how to identify cancelled requests in web server logs. It looks like I'm only seeing requests if they are allowed to complete (i.e. if I pause between letters). If the requests are cancelled, they don't appear in the log at all. So how do I verify that they happened? And if we import the logs into a visualization tool, are we going to be missing the 'cancelled' requests?

"Request cancellation" is nothing more than closing the connection from the client side
The easiest way of implementing it in JMeter is setting the response timeout; the setting lives under the "Advanced" tab of the HTTP Request sampler (or, even better, HTTP Request Defaults).
Just set this timeout lower than the threshold configured in your frontend, and JMeter will close the connection, making the backend "think" that the autocomplete request has been aborted because the user is still typing.
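For illustration, here is a minimal sketch (in Python with the requests library, not JMeter) of what that configuration amounts to, assuming a hypothetical autocomplete endpoint and parameter name: every keystroke fires a request with a client-side timeout well below the backend's response time, so the client gives up and closes the connection, and only the final request is allowed to complete.

```python
# Minimal sketch of the cancellation behaviour (not JMeter itself); the
# endpoint, parameter name and timeout values are assumptions.
import requests

SEARCH_URL = "https://example.com/autocomplete"  # hypothetical endpoint
TYPED = "jmeter"

for i in range(1, len(TYPED)):
    try:
        # Timeout well below the backend's usual response time: the client
        # closes the connection early, which is what a "cancelled" request
        # looks like from the server's side.
        requests.get(SEARCH_URL, params={"q": TYPED[:i]}, timeout=0.2)
    except requests.exceptions.Timeout:
        pass  # expected - this simulates the user still typing

# Only the last request is given time to finish and should return HTTP 200.
final = requests.get(SEARCH_URL, params={"q": TYPED}, timeout=10)
print(final.status_code)
```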

Related

Which HTTP status should I return if the client tries to upload a new file while server is still processing the previous one?

My application has a button for allowing the user to upload a file. Uploading the file is very quick and we send the response to the client very quickly, but the service takes a while to process it.
The user can only upload a new file when the previous one is already processed. Therefore, if the user tries to upload a file while the server is still processing one, we should return an error to the client.
My question is which HTTP response status should I use? I checked all the options and these are the ones I believe are closest to my situation:
409 Conflict - Indicates that the request could not be processed because of conflict in the current state of the resource, such as an edit conflict between multiple simultaneous updates.
425 Too Early (RFC 8470) - Indicates that the server is unwilling to risk processing a request that might be replayed.
428 Precondition Required (RFC 6585) - The origin server requires the request to be conditional. Intended to prevent the 'lost update' problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict.
Which one do you believe is the most appropriate to this situation? Or, if none of them fits, should I use another one?
Which one do you believe is the most appropriate to this situation?
429 Too Many Requests
The 429 status code indicates that the user has sent too many requests in a given amount of time ("rate limiting").
The response representations SHOULD include details explaining the condition, and MAY include a Retry-After header indicating how long to wait before making a new request.
429 Too Many Requests
Retry-After: 60
Content-Type: text/plain

Why don't you wait a minute?
Why I'm not advocating for any of the codes you recommended:
425 Too Early - I would avoid this one, as it seems to be specific to the context of TLS early data.
428 Precondition Required - the core problem here is how to communicate to a general purpose client which precondition should be included in the request. Also, it's a little bit off, semantically.
409 Conflict - in practice, you might be able to make this one work. Semantically, the difficulty is that there really isn't a way for the client to resolve the conflict (e.g. reloading the page to get a fresh copy of the server's copy of the resource).
The important thing to recognize is that HTTP status codes are only incidentally for humans. Status codes are metadata of the transfer-of-documents-over-a-network domain; the intended audience is general purpose HTTP components (browsers, spiders, caches, proxies, etc).
Therefore, the "best" code to use is going to be that code which tells general purpose components the right thing. Specialization happens in the response body, where we use the payload to communicate the fine grained details.
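As a rough illustration of that split, here is a hedged sketch using Flask; the framework, the /files route and the in-memory busy flag are illustrative assumptions, not from the original thread. The status code and Retry-After header carry the metadata for general purpose components, while the fine-grained explanation goes in the body.

```python
# Hedged sketch only: Flask, the /files route and the busy flag are
# illustrative assumptions, not the asker's actual stack.
from flask import Flask, jsonify

app = Flask(__name__)
busy = {"processing": False}  # True while the previous file is being handled

@app.post("/files")
def upload():
    if busy["processing"]:
        # Metadata for general purpose HTTP components: 429 plus Retry-After.
        resp = jsonify(detail="The previous upload is still being processed.")
        resp.status_code = 429
        resp.headers["Retry-After"] = "60"
        return resp  # the fine-grained explanation lives in the body
    busy["processing"] = True
    # ... store the file and kick off processing, clearing the flag when done ...
    return jsonify(status="accepted"), 200
```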
How about receiving the request and placing it in a queue? Create a single consumer for the queue so that no request is processed until the previous one is finished. This way you can accept the request from the customer regardless of the current state and just return a standard acknowledgement response - 200, or whatever response you send upon success.
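A rough sketch of that queue-with-a-single-consumer idea using only Python's standard library; process_file and handle_upload are placeholder names, not anything from the question.

```python
# Rough sketch of the single-consumer queue; process_file and handle_upload
# are placeholders.
import queue
import threading

jobs = queue.Queue()

def process_file(job):
    ...  # the slow processing step goes here

def consumer():
    while True:
        job = jobs.get()       # blocks until there is work
        try:
            process_file(job)  # only ever one job in flight
        finally:
            jobs.task_done()

threading.Thread(target=consumer, daemon=True).start()

def handle_upload(file_bytes):
    # Accept the upload regardless of the current state and acknowledge it;
    # the single consumer guarantees sequential processing.
    jobs.put(file_bytes)
    return 200  # or whatever acknowledgement you send upon success
```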

REST API - "GET /user" changes user in database

We have a simple User API including "GET /user" to request user information. When processing the request we store the current datetime as "lastVisit" in our database. As a result we have a GET request updating the user in our database, which seems to be bad practice.
As we don't handle the login process ourselves, GET /user is the first request to our backend. We cannot use /login to retrieve and store "lastVisit".
Is it bad practice? How to solve the issue?
There's nothing wrong with updating your database when you receive a GET request - the uniform interface of HTTP constrains what the GET method token means, but you have a lot of freedom in how your server implements the handling of that request.
So that much is fine.
"lastVisit", however, may be a problem - which is to say, your interpretation of what it means that somebody asked for a copy of the page ignores various edge cases: a web spider following links to index the documents (think Google), or a smart browser that is trying to reduce latency by downloading a link before the user clicks on it.
You don't know, from the request, whether the fetch was triggered by the client, or by the general purpose agent acting in the client's stead. Similarly, you don't know about any requests for the resource that were intercepted and handled by a cache that had a valid copy of the resource.
It may be that using request handling time as a proxy for last visit is a good enough, cost-effective approximation to get by with, but you should keep in mind that it is an estimate, not the truth.

Why didn't Fiddler show this activity?

We have a Client Toolkit provided by our partner that allows us to access their web services. It started giving errors yesterday on any call and initially their support wanted us to provide a Fiddler log. I tried to do so, however there was no activity shown in Fiddler when the call was made.
From this I would have assumed that the error would have to have occurred before an actual web request was sent out. However, the issue turned out to be an update they did that requires an SSL connection. They rolled back the change but advised us to update our calls to use https so they can re-implement their update.
So if the change was on their end, that means that communications obviously were going on with their server. Why wouldn't that have shown up in Fiddler? Are there scenarios where communications occur but a request isn't fully created or something like that? I just assumed that if there was any communication whatsoever that "something" would show up in Fiddler.

Is there a way to stop reporting on a specific request when using locust.io?

I'm writing locust.io based performance test scenarios. As part of the user journey I have to navigate to a URL (which ends with session/{guid}) before getting a cookie for a session, which I can then continue to apply within the cookie headers to carry on with the other parts of the journey.
Now I want to avoid reporting on that initial URL ending in session/{guid}, because otherwise I'll be reporting on a different endpoint for every single request, since the {guid} keeps changing - it is a redirect URL that I get from another system.
So, is there a setting that I can use to stop locust.io from reporting on that specific request?
Hope the above question is clear enough about what I want to achieve.
Okay, actually, I avoided l.client.post and, for that one URL, used the Python requests API's post method instead, so Locust can't track it :)
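Roughly what that workaround looks like with the current Locust API; the host, paths and cookie handling below are hypothetical.

```python
# Hypothetical sketch of the workaround: host, paths and the guid are
# illustrative only. The plain requests call is invisible to Locust's stats.
import requests
from locust import HttpUser, task

class Journey(HttpUser):
    host = "https://example.com"

    @task
    def user_journey(self):
        # Sent with plain requests, so Locust never records the ever-changing
        # session/{guid} URL in its report.
        r = requests.get(f"{self.host}/session/0f8fad5b-d9cb-469f-a165-70867728950e")
        self.client.cookies.update(r.cookies)

        # Normal Locust calls - these are tracked and reported as usual.
        self.client.get("/search")
```

The trade-off is that the untracked request also doesn't count towards Locust's throughput or failure statistics.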

How can I implement a RESTful Progress Indicator?

I want my API to be RESTful.
Let's say I have started a long-running task with POST and now want to be informed about the progress.
What is the idiomatic REST way to do this?
Poll with GET every 10 seconds?
The response to your
POST /new/long/running/task
should include a Location header. That header will point to an endpoint the client can hit to find out the status of the task. I would suggest your response look something like:
Location: http://my.server/task-status/15
{
    "self": "/task-status/15",
    "status": "running",
    "expectedFinishedAt": <timestamp>
}
Then your client doesn't have to ping arbitrarily, because the server is giving it a hint as to when to check back. Later GETs to /task-status/15 will return updated timestamps. This saves you from having to blindly poll the server. Of course, this works much better if the server has some idea as to how long it will take to finish processing the task.
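A small client-side sketch of that polling loop, assuming the response shape shown above and treating expectedFinishedAt as an epoch timestamp in seconds.

```python
# Client-side sketch of the polling loop; assumes the response shape above
# and that expectedFinishedAt is an epoch timestamp in seconds.
import time
import requests

def start_task_and_wait(payload):
    resp = requests.post("http://my.server/new/long/running/task", json=payload)
    status_url = resp.headers["Location"]  # e.g. http://my.server/task-status/15

    while True:
        status = requests.get(status_url).json()
        if status["status"] != "running":
            return status
        # Wait until the server's own estimate instead of polling blindly.
        time.sleep(max(status["expectedFinishedAt"] - time.time(), 1))
```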
The way REST works, or rather the mechanism it uses - HTTP GET/POST/PUT/DELETE etc. - doesn't provide an event-driven way for the server to push data to the client. It is theoretically possible to build client/server functionality into both your server and your client, though I wouldn't personally endorse this design. So some sort of submit API - POST/PUT - followed by a status query mechanism - GET - would do the job.
The client should be the one giving you that information, showing you how many bytes have been sent already to the server. The server should not care about a partially uploaded resource.
That put aside, you will return a "Location" header indicating where the resource is once it is created, but not earlier. I mean, when you POST you don't know what the address of the resource is going to be (that is indicated later in the Location header), so there is no reasonable way of providing a URL to check the status of the upload, because there is no reasonable way of identifying it until it is done (you may try crazy things, but it is not recommended).
Again, the client should give you that feedback, not the server.