I've followed the instructions here:
https://code.kx.com/q/kb/websockets/#simple-websocket-client-example
And I'm able to open the websocket and I have streaming data coming in.
The issue is, the documentation doesn't state how to close the websocket.
I tried to do:
delete r from `.
which seems to delete the variable representing the websocket handle, but the stream keeps coming.
What am I doing wrong?
You should be able to use hclose on the connection handle, e.g. hclose h where h is the handle returned when you opened the websocket, as documented here:
https://code.kx.com/q/ref/hopen/
I have built a whole chat application using SignalR as the socket layer, with online and offline support. I am facing a few problems.
The SignalR connection always times out after a while. To work around that, I added a condition: if the hubconnection is not connected, create a new hubconnection (on app resume). Even so, hubconnection._callback keeps growing when I send messages and nothing reaches the server-side socket, so I need to refresh the whole app again.
Can someone tell me whether the problem is that there are a lot of operations going on and SignalR loses its connection because Flutter is single-threaded and cannot handle that much? Or should I use an Isolate or an InheritedWidget?
Summary of the problem:
I cannot send chat messages after some time. Every message gets stored in hubconnection._callback and never goes to the server.
Is there a better solution to keep the connection alive on both Android and iOS?
I have been using the https://pub.dev/packages/signalr_netcore package.
Please do not suggest Firebase.
Any other suggestions are appreciated.
Thank you.
I've been using a different package, https://pub.dev/packages/signalr_core, which works fine, with no particular issues that I have observed so far.
I'm only running about 10 listeners simultaneously, so I'm not sure if that is more or less than you. With the package I'm using, you can establish the connection with automatic reconnect. It looks like this:
HubConnectionBuilder().withAutomaticReconnect().withUrl(....)
It seems like your package has the same functionality. Have you tried that?
I am facing the same issue as Kafka Streams Deserialization Handler.
Even after using the logAndContinue handler, the corrupt messages still show up every time the server is restarted.
It looks like this jira issue is still open and needs to be addressed to fix the problem you are describing: https://issues.apache.org/jira/browse/KAFKA-6502
It only happens when you have a series of records in error, though. As soon as a good record comes in, the offset moves along. Therefore, as a workaround, you could send a good record that will not cause an error, as sketched below.
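Here is a minimal sketch of such a "nudge" producer in Scala, assuming a plain Kafka setup; the broker address, topic name, key and payload are placeholders, and the only real requirement is that the record deserializes cleanly in your Streams application:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object SendGoodRecord extends App {
  val props = new Properties()
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // assumed broker
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

  // One well-formed record; once the stream task deserializes it successfully,
  // the committed offset moves past the corrupt records.
  val producer = new KafkaProducer[String, String](props)
  producer.send(new ProducerRecord[String, String]("input-topic", "key", """{"ok":true}""")).get()
  producer.close()
}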
I'm new to Akka and Scala and am self-learning them for a small project with websockets. The end goal is simple: build a basic chat server that publishes and subscribes to messages on some webpage.
In fact, after perusing their docs, I already found the pages that are relevant to my goal, namely this and this.
Using dynamic junctions (aka MergeHub & BroadcastHub) and the Flow.fromSinkAndSource() method, I was able to achieve a very basic version of what I wanted. We can even get a kill switch using the example from the Akka docs, which I have shown below. The code looks like this:
private lazy val connHub: Flow[Message, Message, UniqueKillSwitch] = {
  val (sink, source) = MergeHub.source[Message].toMat(BroadcastHub.sink[Message])(Keep.both).run()
  Flow.fromSinkAndSourceCoupled(sink, source).joinMat(KillSwitches.singleBidi[Message, Message])(Keep.right)
}
However, I now see one issue. The above will return a Flow that will be used by Akka's websocket directive: akka.http.scaladsl.server.Directives.handleWebSocketMessages(/* FLOW GOES HERE */)
That means the Akka code itself will materialize this flow for me, so long as I provide it as the handler.
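For reference, the wiring on my side looks roughly like this (the chat path is just illustrative):

import akka.http.scaladsl.server.Directives._

// akka-http materializes connHub internally when a client connects to this route
val chatRoute =
  path("chat") {
    handleWebSocketMessages(connHub)
  }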
But let's say I wanted to arbitrarily kill one user's connection through a KillSwitch (maybe because their session has expired in my application). A user's websocket would be attached through the above handler, but since my code is not explicitly materializing that flow, I never get access to its KillSwitch. Therefore, I can't kill the connection; only the user can, by leaving the webpage.
It's strange to me that the docs would mention the kill switch method without showing how I would get one using the websocket api.
Can anyone suggest a solution as to how I could obtain the kill switch per connection? Do I have a fundamental misunderstanding of how this should work?
Thanks in advance.
I'm very happy to say that after a lot of time, research, and coding, I have an answer for this question. In order to do this, I had to post in the Akka Gitter as well as the Lightbend discussion forum. Please refer to the amazing answer I got there for some perspective on the problem and some solutions. I'll summarize that here.
In order to get the UniqueKillSwitch from the code I was using, I needed to use the mapMaterializedValue() method on the Flow I was returning. Here is the code I'm now using to return a Flow to the handleWebSocketMessages directive:
// Note: state will not be updated properly if cancellation events come through from the
// client side, as the user -> kill switch mapping may remain in the concurrent map even
// after the connection is closed.
Flow.fromSinkAndSourceCoupled(mergeHubSink, broadcastHubSource)
  .joinMat(KillSwitches.singleBidi[Message, Message])(Keep.right)
  .mapMaterializedValue { killSwitch =>
    connections.put(user, killSwitch) // side effect: store the kill switch once materialization makes it available
    NotUsed.notUsed()
  }
The above code lives in a Chatroom class I've created that has access to the materialized MergeHub sink and BroadcastHub source. It also has access to a concurrent hash map that maps each user to their kill switch. In this way, we now have access to the kill switch by querying that map, and from there you can call switch.shutdown() to kill the user's connection from the server side.
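As a rough illustration of that lookup, assuming connections is a ConcurrentHashMap[String, UniqueKillSwitch] keyed by a user id (the names here are mine, not from the linked answer):

// Server-side disconnect, e.g. when a session expires.
def expireSession(user: String): Unit =
  Option(connections.get(user)).foreach { killSwitch =>
    killSwitch.shutdown()      // completes the coupled flow, closing the websocket
    connections.remove(user)   // drop the stale mapping
  }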
My main issue was that I originally thought I could get the switch directly even though I didn't control the materialization. This doesn't seem possible. I suggest this method for when you know that the caller that requires your Flow doesn't care about the materialized value (aka the kill switch).
Please reference the answer I've linked for more scenarios and ways to handle this problem.
I need to send a stream of data to a Play server. The length of the stream is unknown, and I need to get a response on every line break (\n), or every few lines, rather than waiting for the whole body to be sent.
Think of the following use case:
Let's say I intend to write a console application that, when launched, connects to my web server; every piece of user input is sent to Play on each line break and responded to asynchronously. All of the above should happen over a single connection, i.e. I don't want to open a new connection for every request I send to Play (a good analogy would be two processes communicating through two pipes).
What is the best way to achieve this?
And is it possible to achieve this with a client that communicates with the server only via HTTP (over a single HTTP connection)?
EDIT:
My current thoughts on how to approach this are as follows:
I can define a new BodyParser[Future[String]], which is basically an Iteratee[Array[Byte], Future[String]]. While the parsing takes place, I can compute the result asynchronously, and the action can return the result as a ChunkedResult in the future's onComplete method.
Does this sound like the right approach?
Any suggestions on how to achieve this?
Maybe you should look at websockets.
Java: http://www.playframework.com/documentation/2.1-RC3/JavaWebSockets
Scala: http://www.playframework.com/documentation/2.0/ScalaWebSockets
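To make that concrete, a minimal sketch of the iteratee-based WebSocket API from those docs might look like this (the controller name and message format are illustrative):

import play.api.mvc._
import play.api.libs.iteratee.{Concurrent, Iteratee}

object LineEcho extends Controller {
  // One long-lived connection: every line the client sends is handled as it
  // arrives, and a reply is pushed back over the same socket.
  def socket = WebSocket.using[String] { request =>
    val (out, channel) = Concurrent.broadcast[String]
    val in = Iteratee.foreach[String] { line =>
      channel.push("received: " + line)
    }
    (in, out)
  }
}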
I have exactly one Node server, which is currently running some code. This code is now outdated. How can I switch to the new code without any server downtime? Do I need another server to act as a buffer?
Basically, you "kill" the old process and immediately start the server again; read the following article for more details and a code sample:
http://codegremlins.com/28/Graceful-restart-without-downtime