How to properly structure a JMeter test plan for a socket.io event "subscription" scenario

Scenario Outline
Test sends a REST API request to activate a game.
Website receives a socket.io event and displays an alert on the browser.
Question
Since I don't know when the event will be sent, do I need to run a WebSocket Sampler, or perhaps a WebSocket Single-Read Sampler, in a loop, until I get the matching message?
So far in my attempts, I can connect to the event server and get messages, but they are either empty frames or messages entirely different from the one below.
I expect a message like this, which I am able to verify manually using the browser debugger.
{
    "locationId": 110,
    "name": "GAME_STARTED",    <---------------------
    "payload": {
        "id": 146418,
        "boxId": 2002,
        "userId": 419,
        "createdAt": "2022-02-17T09:10:16",
        "lastModifiedAt": "2022-02-17T09:10:22.189",
        "completedAt": "2022-02-17T09:10:22.07",
        "activationMethod": "TAG",
        "nfcTagId": "123423423412342134",
        "gameCount": 1,
        "app": false
    }
}
Alternatively, would this work?
thread A:
    open socket
    while (true):
        read socket
        if message ~ 'GAME_STARTED':
            break

thread B:
    send HTTP REST API request    # triggers event to be sent
Here are the parameters used to connect and where I specify the response pattern, which needs wildcards or a JSON expression.

You can consider using the WebSocket Text Frame Filter.
If you add the filter configured to match the GAME_STARTED text, it will discard all the frames that don't contain it, so the WebSocket Single Read sampler will not "see" them. This way you can have just one sampler, without any loops or other logic.
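As a rough sketch of how the plan could be laid out (the element names come from the JMeter WebSocket Samplers plugin; the single-thread ordering and the filter placement are assumptions on my part, not something stated in the original post):

Thread Group
    WebSocket Open Connection         (connect to the socket.io event endpoint)
    WebSocket Text Frame Filter       (discard frames that do not contain GAME_STARTED)
    HTTP Request                      (the REST call that activates the game)
    WebSocket Single Read Sampler     (read timeout large enough to cover the event delay)
    WebSocket Close                   (optional cleanup)

With the filter in place, the read sampler only ever returns the matching frame (or times out), so neither the loop nor the second thread from the question is needed.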
More information:
Smart close with filter sample example test plan
JMeter WebSocket Samplers - A Practical Guide

Related

Bidirectional communication over Unix sockets

I'm trying to create a server that sets up a Unix socket and listens for clients which send/receive data. I've made a small repository to recreate the problem.
The server runs and can receive data from the clients that connect, but I can't get the client to read the server's response without an error occurring on the server.
I have commented out the offending code on the client and server. Uncomment both to recreate the problem.
When the code to respond to the client is uncommented, I get this error on the server:
thread '' panicked at 'called Result::unwrap() on an Err value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/main.rs:77:42
MRE Link
Your code calls set_read_timeout to set the timeout on the socket. Its documentation states that on Unix it results in a WouldBlock error in case of timeout, which is precisely what happens to you.
As to why your server times out, the likely reason is that it calls stream.read_to_string(&mut response), which reads the stream until end-of-file. On the other hand, your client calls write_all() followed by flush(), and (after uncommenting the offending code) attempts to read the response. But the attempt to read the response means that the stream is not closed, so the server will wait for EOF, and you have a deadlock on your hands. Note that none of this is specific to Rust; you would have the exact same issue in C++ or Python.
To fix the issue, you need to use a protocol in your communication. A very simple protocol could consist of first sending the message size (in a fixed format, perhaps 4 bytes in length) and only then the actual message. The code that reads from the stream would do the same: first read the message size and then the message itself. Even better than inventing your own protocol would be to use an existing one, e.g. to exchange messages using serde.
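As an illustration only (the helper names are made up, and this is a minimal sketch of the length-prefix idea rather than code from the repository), such framing could look like this, working over any Read/Write stream, including UnixStream:

use std::io::{Read, Write};

// Minimal length-prefixed framing: a 4-byte big-endian length, then the payload.
// Each side reads exactly one message instead of waiting for end-of-file,
// so neither end has to close its half of the stream to make progress.
fn send_message<W: Write>(stream: &mut W, msg: &[u8]) -> std::io::Result<()> {
    let len = u32::try_from(msg.len()).expect("message too large");
    stream.write_all(&len.to_be_bytes())?;
    stream.write_all(msg)?;
    stream.flush()
}

fn recv_message<R: Read>(stream: &mut R) -> std::io::Result<Vec<u8>> {
    let mut len_buf = [0u8; 4];
    stream.read_exact(&mut len_buf)?;
    let len = u32::from_be_bytes(len_buf) as usize;
    let mut msg = vec![0u8; len];
    stream.read_exact(&mut msg)?;
    Ok(msg)
}

The client would call send_message with its request and then recv_message for the reply; the server does the mirror image, and read_to_string (together with the EOF deadlock it causes) disappears.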

How to produce a response body with asynchronously created body chunks in Swift Vapor

I am looking into the Swift Vapor framework.
I am trying to create a controller class that maps data obtained over an SSL link to a third-party system (an Asterisk PBX server) into a response body that is sent down to the client over some period of time.
So I need to send the received text lines (obtained separately on the SSL connection) as they come in, without waiting for a 'complete response' to be constructed.
Seeing this example:
return Response(status: .ok) { chunker in
    for name in ["joe\n", "pam\n", "cheryl\n"] {
        sleep(1)
        try chunker.send(name)
    }
    try chunker.close()
}
I thought it might be the way to go.
But what I see when connecting to the Vapor server is that the REST call waits for the loop to complete before the three lines are received as the result.
How can I get try chunker.send(name) to send its characters back to the client without first waiting for the loop to complete?
In the real code the controller method can potentially keep an HTTP connection to the client open for a long time, sending Asterisk activity data to the client as soon as it is obtained. So each .send(name) should pass its data to the client immediately, not wait for the final .close() call.
Adding a try chunker.flush() did not produce any better result.
HTTP requests aren't really designed to work like that. Different browsers and clients will function differently depending on their implementations.
For instance, if you connect with telnet to the chunker example you pasted, you will see the data is sent every second. But Safari on the other hand will wait for the entire response before displaying.
If you want to send chunked data like this reliably, you should use a protocol like WebSockets that is designed for it.
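For what it's worth, here is a minimal sketch of that direction, assuming a current Vapor 4 API (the chunker closure above is from an older Vapor release) and using the question's sample names as a stand-in for the real Asterisk feed:

import Vapor

func routes(_ app: Application) throws {
    // A WebSocket route delivers each frame to the client as soon as it is sent;
    // there is no response body for browsers or intermediaries to buffer.
    app.webSocket("asterisk") { req, ws in
        Task {
            // Stand-in for the real PBX feed: in the actual controller these sends
            // would be driven by lines read from the Asterisk SSL connection.
            for name in ["joe\n", "pam\n", "cheryl\n"] {
                try? await Task.sleep(nanoseconds: 1_000_000_000)
                try? await ws.send(name)
            }
            try? await ws.close()
        }
    }
}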

Rails ActionCable success callback

I use the perform javascript call to perform an action on the server, like this:
subscription.perform('action', {...});
However, from what I've seen, there seems to be no built-in JavaScript "success" callback, i.e. one that lets me know the action has completed (or possibly failed) on the server side. I was thinking about sending a broadcast at the end of the action, like so:
def action(data)
  ...do_stuff
  ActionCable.server.broadcast "room", success_message...
end
But all clients subscribed to this "room" would receive that message, possibly resulting in false positives. In addition, from what I've heard, message order isn't guaranteed, so a previous broadcast inside this action could be delivered after the success message, possibly leading to further issues.
Any ideas on this or am I missing something completely?
Looking at https://github.com/xtian/action-cable-js/blob/master/dist/cable.js and https://developer.mozilla.org/en-US/docs/Web/API/WebSocket#send(), perform just executes WebSocket.send() and returns true or false, so there is no way to know whether your data has arrived. (That is just not possible with WebSockets, it seems.)
You could try using just an HTTP call (I recommend setting up an API with jbuilder), or indeed broadcasting back a success message.
You can solve the ordering of the messages by creating a timestamp on the server, sending it along with the message, and then sorting the messages with JavaScript.
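For the ordering part, a small sketch of what that could look like on the client (the channel name, the sent_at field, and the render function are made up for illustration):

const received = [];

App.cable.subscriptions.create("RoomChannel", {
  received(data) {
    // Assumes the server includes a numeric sent_at timestamp in each broadcast.
    received.push(data);
    received.sort((a, b) => a.sent_at - b.sent_at);
    render(received); // hypothetical rendering function
  }
});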
Good luck!
Maybe what you are looking for is the transmit method: https://api.rubyonrails.org/v6.1.3/classes/ActionCable/Channel/Base.html#method-i-transmit
It sends a message to the current connection being handled for a channel.
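A small sketch of that approach (the channel and the message shape are made up for illustration): transmit goes only to the connection that called perform, so other subscribers of the stream never see it, which avoids the false positives of a room-wide broadcast.

class GameChannel < ApplicationCable::Channel
  def action(data)
    # ...do_stuff
    # transmit sends only to the connection that performed this action
    transmit({ status: "ok", performed: "action" })
  rescue StandardError => e
    transmit({ status: "error", message: e.message })
  end
end

On the client, the subscription's received callback then acts as the missing "success" handler by checking the status field.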

Using HTTP response headers to communicate server-side errors from the backend to the front-end

I am working on a REST backend consumed by a javascript/ajax front-end.
I am trying to find a way to deal with invalid requests sent over by the front-end to the backend.
One of the issues I have is that HTTP status codes such as 400 or 409 are not fine-grained enough to cover business logic errors such as passwords not matching (in the case of a user changing his password) or an email being unknown to the system (in the case of a user trying to sign in to the application).
I am thinking of using HTTP response headers in order to communicate server-side errors from the backend to the front-end.
I could for instance have an Error enum (or a class with constants) as follows:
public enum Error {
    UNKNOWN_EMAIL,
    PASSWORDS_DONT_MATCH,
    // etc.
}
I would then use that enum in order to set the headers on the response as follows:
response.setHeader(Error.UNKNOWN_EMAIL.name(), "true");
... and deal with the error appropriately on the front-end.
Can the above architecture be improved? If so how?
Is my usage of HTTP response headers correct?
Should I use constants or enums?
Is my usage of HTTP response headers correct?
I do not think it is incorrect; however, I prefer to send an error message/code directly back in the response body. This is usually more convenient for the client to access and is more explicit. As part of consuming each response, the client can check the contents of the errors (you may have multiple) and act accordingly. The following is a little contrived, just to provide an example:
// ...
{
    "errors": {
        "username": "not found",
        "password": "no match"
    },
    "warnings": {
        "account": "expired"
    }
}
// ...
The above is quite a simple approach; your JSON message can be as sophisticated as you wish, but keep in mind that you should only expose the information the client needs to achieve its goal. This will also depend on whether you are publishing an API for third-party/public consumption or whether it is just for your own clients, i.e. your own website. If other parties consume it, put some thought into the format, because once you publish it you need to maintain it that way; otherwise you break those consumers.
Check out JSON API for some standardized guidance on handling errors.
Should I use constants or enums?
Since these form a related set of values, an enum is preferable to constants (I assume you are using Java).
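As a concrete (if simplified) sketch, assuming the Servlet API implied by the question's setHeader call and reusing the Error enum from the question, the same information can go into the body along with an appropriate status code; a real backend would normally serialize a DTO with a JSON library rather than build the string by hand:

import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

public final class ErrorResponses {

    private ErrorResponses() {
    }

    // Instead of response.setHeader(Error.UNKNOWN_EMAIL.name(), "true"),
    // report the business error in a JSON body with a 400 status.
    public static void writeError(HttpServletResponse response, Error error)
            throws IOException {
        response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
        response.setContentType("application/json");
        response.getWriter()
                .write("{\"errors\":{\"" + error.name() + "\":true}}");
    }
}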

Play Framework WebSocket disconnecting IE clients

I've asked on the Play Framework forums, but figured I'd ask here as well for the additional coverage:
Using Play Framework 2.3, I have a WebSocket handled with an actor that I'm using to push "StatusUpdate" messages to connected clients:
def updateSocket = WebSocket.tryAcceptWithActor[StatusUpdate, StatusUpdate] {
  implicit request =>
    authorized(Set.empty[SecurityRole]).map {
      case Right(user) =>
        Right({ upstream => DashboardListener.props(upstream, user.dblocations) })
      case Left(_) =>
        Left(Forbidden)
    }
}
Everything is working wonderfully, except...
When a user connects via Internet Explorer, and the IE window loses focus, within 20 or so seconds the WebSocket forcibly closes. Firefox, so far, seems not to exhibit this behavior. I used Fiddler to inspect the WebSocket traffic, and it looks like IE is sending a "pong" message after it loses focus:
{"doneTime": "02:08:39.462","messageType": "Pong","messageID": "Client.2",
"wsSession":"WSSession-1","payload": "", "requestPartCount": "1"}
Immediately, the server sends:
{"doneTime": "02:08:39.462","messageType": "Close","messageID": "Server.3",
"wsSession": "WSSession-1","payload": "03-EB-54-68-69-73-20-57-65-62-53-6F-
63-6B-65-74-20-64-6F-65-73-20-6E-6F-74-20-68-61-6E-64-6C-65-20-66-72-61-6D-
65-73-20-6F-66-20-74-68-61-74-20-74-79-70-65", "requestPartCount": "1"}
I'm assuming that this is because my WebSocket doesn't know how to handle pongs (since I've declared incoming and outgoing traffic to be of the StatusUpdate type). Moreover, the client receives a closeEvent with code 1003 (The connection is being terminated because the endpoint received data of a type it cannot accept). I've done some research, and it seems that this ping/pong is supposed to keep the connection alive, but not be exposed to the API. Has anyone run into this before or know of a potential solution?
If it matters, the clients only receive StatusUpdates via this socket -- at no point is any sort of message ever explicitly sent on it. The StatusUpdate messages originate from elsewhere in my Actor system.