I have a simple WebSocket application based on Akka HTTP/Reactive Streams, like this: https://github.com/calvinlfer/akka-http-streaming-response-examples/blob/master/src/main/scala/com/experiments/calvin/ws/WebSocketRoutes.scala#L82.
In other words, I have a Sink, a Source (produced from a Publisher), and the Flow:
Flow.fromSinkAndSource(incomingMessages, outgoingMessages)
When I produce more than 30 messages per second to the client, Akka closes the connection.
I cannot find the setting that configures this behaviour. I know about OverflowStrategy, but I don't configure it explicitly.
It seems that I effectively have OverflowStrategy.fail(), or at least my problem looks like it.
You can tune the internal buffers.
There are two ways to do it:
1) application.conf:
akka.stream.materializer.max-input-buffer-size = 1024
2) You can configure it explicitly for your Flow:
Flow.fromSinkAndSource(incomingMessages, outgoingMessages)
.addAttributes(Attributes.inputBuffer(initial = 1, max = 1024))
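For completeness, a hedged sketch of where such a tuned flow typically gets plugged into a route (the "ws" path and the route wiring are assumptions, not taken from the question; incomingMessages and outgoingMessages are the values from the question):

import akka.http.scaladsl.server.Directives._
import akka.stream.Attributes
import akka.stream.scaladsl.Flow

// handleWebSocketMessages upgrades the HTTP request to a WebSocket and runs the
// given flow against it; the attribute tunes that flow's internal input buffer.
val wsRoute =
  path("ws") {
    handleWebSocketMessages(
      Flow.fromSinkAndSource(incomingMessages, outgoingMessages)
        .addAttributes(Attributes.inputBuffer(initial = 1, max = 1024))
    )
  }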
I'm trying to close a RESTEasy client after a certain delay (e.g. 5 seconds), and the configuration I'm currently using does not seem to work at all.
HttpClient httpClient = HttpClientBuilder.create()
        .setConnectionTimeToLive(5, TimeUnit.SECONDS)
        .setDefaultRequestConfig(RequestConfig.custom()
                .setConnectionRequestTimeout(5 * 1000)
                .setConnectTimeout(5 * 1000)
                .setSocketTimeout(5 * 1000)
                .build())
        .build();
ApacheHttpClient43Engine engine = new ApacheHttpClient43Engine(httpClient, localContext);
ResteasyClient client = new ResteasyClientBuilder().httpEngine(engine).build();
According to the documentation, ConnectionTimeToLive should close the connection whether or not a payload is still being transferred.
Please see the linked documentation:
https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/developing_web_services_applications/index#jax_rs_client
In my specific case there is sometimes some latency and the payload is sent in chunks (each arriving within the socketTimeout interval), so the connection is kept alive and the client can stay active for hours.
My main goal is to kill the client and release the connection, but I feel there is something I'm missing in the configuration.
I'm using WireMock to replicate this specific scenario by sending the payload in chunks:
.withChunkedDribbleDelay
Any clue about the configuration?
You may try using .withFixedDelay(60000) instead of .withChunkedDribbleDelay().
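For example, a hedged sketch of such a WireMock stub (the endpoint path and body are made up for illustration): the whole response is held back for a single 60 s pause instead of being dribbled in chunks that keep resetting the socket timeout, so the client-side timeouts can actually fire.

import com.github.tomakehurst.wiremock.client.WireMock._

// One fixed 60 s delay before the full body is sent in one go.
stubFor(get(urlEqualTo("/slow-endpoint"))
  .willReturn(aResponse()
    .withStatus(200)
    .withBody("late payload")
    .withFixedDelay(60000)))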
Is it possible to write a Gatling script which connects to a WebSocket and then performs actions (e.g. new HTTP requests) when certain messages are received (preferably with out-of-the-box support for STOMP messages, but I could probably work around that)?
In other words, "real clients" should be simulated as closely as possible. The real clients (Angular applications) would load data based on certain WebSocket messages.
I'm looking for something similar to this (pseudo code, does not work):
val scn = scenario("WebSocket")
  .exec(http("Home").get("/"))
  .pause(1)
  .exec(session => session.set("id", "Steph" + session.userId))
  .exec(http("Login").get("/room?username=${id}"))
  .pause(1)
  .exec(
    ws("Connect WS")
      .open("/room/chat?username=${id}")
      // ---------------------------------------------------------------------
      // Is it possible to trigger/exec something (http call, full scenario)
      // as reaction to a STOMP/WebSocket message?
      .onMessage(check(perform some check, maybe regex?).as("idFromPayload"))
      .exec(http("STOMP reaction").get("/entity/${idFromPayload}"))
      // ---------------------------------------------------------------------
  )
  .exec(ws("Close WS").close)
  // ideally, closing the websocket should only be done once the full scenario is over
  // (or never, until the script terminates in "forever" scenarios)
Is this currently possible? If not, is it planned for future versions of Gatling?
To the best of my knowledge, this is not possible with Gatling.
I have since switched to k6, which supports writing and executing test scripts with this kind of logic/behavior.
We are using Akka HTTP to handle our WebSocket connections through the Akka Streams API, with a Flow that pipes the incoming messages to a "connection actor". A snippet of the code is below:
val connection = system.actorOf(ConnectionActor.props())

val in = Flow[Message]
  .to(Sink.actorRef[Message](connection, WebSocketClosed))

val out = Source
  .actorRef[Message](500, OverflowStrategy.fail)
  .mapMaterializedValue(ws => connection ! WebSocketOpened(ws))

Flow.fromSinkAndSource(in, out)
When the WebSocket is closed, the connection actor is sent the WebSocketClosed message and we clean up internal resources. We now have the requirement to know the reason the connection was closed, according to the standard WebSocket CloseEvent codes.
Is there a way to get the close code from Akka HTTP and send it on to the connection actor so it can take the appropriate action?
I was able to handle the client's (browser's) close/error code in an akka-http 10.2.6 server.
My use case was to pipe incoming messages to a Sink created by ActorSink.actorRef[T](). When creating the sink, two callbacks, onCompleteMessage and onFailureMessage, can be set to convert a normal WebSocket close (code=1000) or an error into our custom message types.
I suppose that a client close/error maps to Flow completion/failure, which means other sinks should be able to handle close/error in a similar way.
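A minimal sketch of that sink setup, assuming akka-stream-typed (the Protocol messages and the connection actor parameter are hypothetical names for illustration, not from the original code):

import akka.NotUsed
import akka.actor.typed.ActorRef
import akka.http.scaladsl.model.ws.Message
import akka.stream.scaladsl.{Flow, Sink}
import akka.stream.typed.scaladsl.ActorSink

sealed trait Protocol
final case class Incoming(msg: Message)        extends Protocol
case object WebSocketClosed                    extends Protocol // normal close (code = 1000)
final case class WebSocketFailed(t: Throwable) extends Protocol // error / abnormal close

def incoming(connection: ActorRef[Protocol]): Sink[Message, NotUsed] =
  Flow[Message]
    .map(Incoming)
    .to(ActorSink.actorRef[Protocol](
      ref               = connection,
      onCompleteMessage = WebSocketClosed,         // stream completed: client closed normally
      onFailureMessage  = t => WebSocketFailed(t)  // stream failed: client error
    ))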
As it turns out, this is not presently possible in Akka HTTP. See the following GitHub issue:
https://github.com/akka/akka-http/issues/2458
It looks as though this will need to be addressed before this is possible.
I noticed in the FAQ, in the Monitoring section, that it's not possible to get a list of connected peers or to be notified when peers connect/disconnect.
Does this imply that it's also not possible to know which topics a PUB/XPUB socket knows it should publish, from its upstream feedback? Or is there some way to access that data?
I know that ZMQ >= 3.0 "supports PUB/SUB filtering at the publisher", but what I really want is to filter at my application code, using the knowledge ZMQ has about which topics are subscribed to.
My use-case is that I want to publish info about the status of a robot. Some topics involve major hardware actions, like switching the select lines on an ADC to read IR values.
I have a publisher thread running on the bot that should only do that "read" to get IR data when there are actually subscribers. However, since I can only feed a string into my pub_sock.send, I always have to do the costly operation, even if ZMQ is about to drop that message when there are no subscribers.
I have an implementation that uses a backchannel REQ/REP socket to send topic information, which my app can check in its publish loop, thereby only collecting data that needs to be collected. This seems very inelegant though, since ZMQ must already have the data I need, as evidenced by its filtering at the publisher.
I noticed that in this mailing list message, the OP seems to be able to see subscribe messages being sent to an XPUB socket.
However, there's no mention of how they did that, and I'm not seeing any such ability in the docs (still looking). Maybe they were just using Wireshark (to see upstream subscribe messages to an XPUB socket).
Using the zmq.XPUB socket type, there is a way to detect new and departing subscribers. The following code sample shows how:
# Publisher side
import zmq

port_nr = 5556  # example port number

ctx = zmq.Context.instance()
xpub_socket = ctx.socket(zmq.XPUB)
xpub_socket.bind("tcp://*:%d" % port_nr)

poller = zmq.Poller()
poller.register(xpub_socket, zmq.POLLIN)

events = dict(poller.poll(1000))
if xpub_socket in events:
    # (Un)subscriptions arrive as messages on the XPUB socket itself:
    # first byte 0x01 = subscribe, 0x00 = unsubscribe, the rest is the topic.
    msg = xpub_socket.recv()
    if msg.startswith(b'\x01'):
        topic = msg[1:]
        print("Topic '%s': new subscriber" % topic.decode())
    elif msg.startswith(b'\x00'):
        topic = msg[1:]
        print("Topic '%s': subscriber left" % topic.decode())
Note that the zmq.XSUB socket type does not subscribe in the same manner as the "normal" zmq.SUB socket. Code sample:
# Subscriber side
import zmq

port_nr = 5556  # example port number, must match the publisher

ctx = zmq.Context.instance()

# Subscribing with a zmq.SUB socket
sub_socket = ctx.socket(zmq.SUB)
sub_socket.setsockopt(zmq.SUBSCRIBE, b"sometopic")  # OK
sub_socket.connect("tcp://localhost:%d" % port_nr)

# Subscribing with a zmq.XSUB socket
xsub_socket = ctx.socket(zmq.XSUB)
xsub_socket.connect("tcp://localhost:%d" % port_nr)
# xsub_socket.setsockopt(zmq.SUBSCRIBE, b"sometopic")  # NOK, raises zmq.error.ZMQError: Invalid argument
xsub_socket.send(b'\x01' + b'sometopic')  # OK, a single frame starting with 0x01 triggers the subscribe event on the publisher
I'd also like to point out the zmq.XPUB_VERBOSE socket option. If set, all subscription events are received on the socket. If not set, duplicate subscriptions are filtered. See also the following post: ZMQ: No subscription message on XPUB socket for multiple subscribers (Last Value Caching pattern)
At least for the XPUB/XSUB socket case, you can maintain subscription state yourself by forwarding and inspecting the subscription messages manually:
import zmq

context = zmq.Context()

xsub_socket = context.socket(zmq.XSUB)
xsub_socket.bind('tcp://*:10000')

xpub_socket = context.socket(zmq.XPUB)
xpub_socket.bind('tcp://*:10001')

poller = zmq.Poller()
poller.register(xpub_socket, zmq.POLLIN)
poller.register(xsub_socket, zmq.POLLIN)

while True:
    try:
        events = dict(poller.poll(1000))
    except KeyboardInterrupt:
        break

    if xpub_socket in events:
        message = xpub_socket.recv_multipart()
        # HERE goes some subscription handling code which inspects
        # the (un)subscribe message before it is forwarded upstream
        xsub_socket.send_multipart(message)

    if xsub_socket in events:
        message = xsub_socket.recv_multipart()
        xpub_socket.send_multipart(message)
(this is Python code but I guess C/C++ looks quite similar)
I'm currently working on this topic and I will add more information as soon as possible.
We are using hornetq-core 2.2.21.Final standalone. After reading a non-transactional message, the message still remains in the queue although it was acknowledged.
The session is created using:
sessionFactory.createSession(true, true, 0)
locator setting:
val transConf = new TransportConfiguration(classOf[NettyConnectorFactory].getName,map)
val locator = HornetQClient.createServerLocatorWithoutHA(transConf)
locator.setBlockOnDurableSend(false)
locator.setBlockOnNonDurableSend(false)
locator.setAckBatchSize(0) // also tried without this setting
locator.setConsumerWindowSize(0)// also tried without this setting
Messages are acknowledged using message.acknowledge().
I think the problem might be that there are two queues on the same address.
I also tried setting message expiration, but it didn't help; messages are still piling up in the queue.
Please advise.
It seems you are using the core API. Are you explicitly calling acknowledge() on the messages?
If you have two queues on the same address, acknowledging will only ack the message on the queue you are consuming from. In that case the system is behaving normally.
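A hedged sketch of that behaviour with the core API (the address and queue names are made up; locator is the one from the question):

val sf      = locator.createSessionFactory()
val session = sf.createSession(true, true, 0) // autoCommitSends, autoCommitAcks, ackBatchSize = 0

// Two queues bound to the same address: every message sent to the address
// is routed to both queues as independent copies.
session.createQueue("my.address", "queue.a", true)
session.createQueue("my.address", "queue.b", true)
session.start()

val consumerA = session.createConsumer("queue.a")
val msg = consumerA.receive(1000)
if (msg != null)
  msg.acknowledge() // removes the copy from queue.a only; the copy in queue.b
                    // stays there until its own consumer acknowledges it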