I'm writing a websocket server with Netty and I seem to have a race condition in my code.
I have a channel initializer that builds a pipeline consisting of:
ch.pipeline().addLast(new HttpServerCodec())
ch.pipeline().addLast(new HttpObjectAggregator(65536))
ch.pipeline().addLast(new MyServer())
And MyServer works as follows:
if it receives a websocket upgrade request, it tries to authenticate the request
if authentication fails, it returns a bad request response
if it succeeds, it adds the websocket handlers followed by my custom logic handler, finishes the handshake, and establishes the websocket connection
this is done using the following code:
awareLogger.debug(log"upgrading to websocket")(logContext)
ctx.pipeline()
  .addLast(new WebSocketServerProtocolHandler(route, true))
  .addLast(new WebSocketFrameAggregator(65536))
  .addLast(new MyWebsocketLogic(logContext))
ctx.fireChannelRead(httpRequest)
val _ = awareLogger.debug(log"upgraded to websocket")(logContext)
It's trying to fireChannelRead(httpRequest) in hopes that WebSocketServerProtocolHandler will intercept it and finish the handshake.
My issue is that the httpRequest sometimes seems to be propagated all the way down to the MyWebsocketLogic handler, and the connection and handshake are never established.
Am I doing something obviously wrong? It's almost like I have some kind of race condition in the code.
The issue was that:
awareLogger.debug(log"upgrading to websocket")(logContext)
ctx.pipeline()
  .addLast(new WebSocketServerProtocolHandler(route, true))
  .addLast(new WebSocketFrameAggregator(65536))
  .addLast(new MyWebsocketLogic(logContext))
ctx.fireChannelRead(httpRequest)
val _ = awareLogger.debug(log"upgraded to websocket")(logContext)
was called on a different thread than the one assigned to the given ctx.
I was able to fix this by applying the suggestion from Norman above, that is, switching the pipeline modification onto the channel's EventLoop:
ctx.channel().eventLoop().execute { () =>
  val _ = ctx
    .pipeline()
    .addLast(new WebSocketServerProtocolHandler(route, true))
    .addLast(new WebSocketFrameAggregator(65536))
    .addLast(buildWebsocketHandler(logContext, connectionHandler))
  val _ = ctx.fireChannelRead(msg)
}
This seems to work well.
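For anyone hitting the same thing, here is a slightly more defensive variant as a sketch (not authoritative): it only hops onto the event loop when the current thread is not already the channel's thread, using Netty's inEventLoop() check. The handler names are the ones from the snippets above.

import io.netty.channel.ChannelHandlerContext
import io.netty.handler.codec.http.websocketx.{WebSocketFrameAggregator, WebSocketServerProtocolHandler}

def upgradePipeline(ctx: ChannelHandlerContext, msg: AnyRef): Unit = {
  // Perform the pipeline mutation and re-fire the upgrade request.
  def doUpgrade(): Unit = {
    val _ = ctx
      .pipeline()
      .addLast(new WebSocketServerProtocolHandler(route, true))
      .addLast(new WebSocketFrameAggregator(65536))
      .addLast(buildWebsocketHandler(logContext, connectionHandler))
    val _ = ctx.fireChannelRead(msg)
  }

  val loop = ctx.channel().eventLoop()
  if (loop.inEventLoop()) doUpgrade()   // already on the channel's thread
  else loop.execute(() => doUpgrade())  // hop onto it first
}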
I have an akka-http WebSocket Route with code similar to:
private val wsReader: Route =
  path("v1" / "data" / "ws") {
    log.info("Opening websocket connecting ...")

    val testSource = Source
      .repeat("Hello")
      .throttle(1, 1.seconds)
      .map(x => {
        println(x)
        x
      })
      .map(TextMessage.Strict)
      .limit(1000)

    extractUpgradeToWebSocket { upgrade ⇒
      complete(upgrade.handleMessagesWithSinkSource(Sink.ignore, testSource))
    }
  }
Everything works fine (the client receives one test message every second). The only problem is that I don't understand how to stop/close the Source (testSource) when the client closes the WebSocket connection.
You can see that the source continues to produce elements (see the println) even after the WebSocket is down.
How can I detect a client disconnection?
handleMessagesWithSinkSource is implemented as:
/**
 * The high-level interface to create a WebSocket server based on "messages".
 *
 * Returns a response to return in a request handler that will signal the
 * low-level HTTP implementation to upgrade the connection to WebSocket and
 * use the supplied inSink to consume messages received from the client and
 * the supplied outSource to produce message to sent to the client.
 *
 * Optionally, a subprotocol out of the ones requested by the client can be chosen.
 */
def handleMessagesWithSinkSource(
  inSink:      Graph[SinkShape[Message], Any],
  outSource:   Graph[SourceShape[Message], Any],
  subprotocol: Option[String] = None): HttpResponse =
  handleMessages(Flow.fromSinkAndSource(inSink, outSource), subprotocol)
This means the sink and the source are independent, and indeed the source should keep producing elements even when the client closes the incoming side of the connection. It should stop when the client resets the connection completely, though.
To stop producing outgoing data as soon as the incoming connection is closed, you may use Flow.fromSinkAndSourceCoupled, so:
val socket = upgrade.handleMessages(
  Flow.fromSinkAndSourceCoupled(inSink, outSource),
  subprotocol = None
)
One way is to use KillSwitches to handle testSource shutdown.
private val wsReader: Route =
  path("v1" / "data" / "ws") {
    logger.info("Opening websocket connecting ...")

    val sharedKillSwitch = KillSwitches.shared("my-kill-switch")

    val testSource =
      Source
        .repeat("Hello")
        .throttle(1, 1.seconds)
        .map(x => {
          println(x)
          x
        })
        .map(TextMessage.Strict)
        .limit(1000)
        .via(sharedKillSwitch.flow)

    extractUpgradeToWebSocket { upgrade ⇒
      val inSink = Sink.onComplete(_ => sharedKillSwitch.shutdown())
      val outSource = testSource
      val socket = upgrade.handleMessagesWithSinkSource(inSink, outSource)
      complete(socket)
    }
  }
Is there any way I can trigger a job from the controller (and not wait for its completion) and display a message to the user that the job will be running in the background?
I have one controller method which takes quite a long time to run, so I want to make it run in the background and display a message to the user that it will be running in the background.
I tried Action.async as shown below, but processing the Future object still takes too long and the request times out.
def submit(id: Int) = Action.async(parse.multipartFormData) { implicit request =>
  val result = Future {
    // process the data
  }
  result map { res =>
    Redirect(routes.testController.list()).flashing(("success", s"Job(s) will be running in background."))
  }
}
You can also return a result without waiting for the future to complete, in a "fire and forget" way:
def submit(id: Int) = Action(parse.multipartFormData) { implicit request =>
  Future {
    // process the data
  }
  Redirect(routes.testController.list()).flashing(("success", s"Job(s) will be running in background."))
}
The docs state:
By giving a Future[Result] instead of a normal Result, we are able to quickly generate the result without blocking. Play will then serve the result as soon as the promise is redeemed.
The web client will be blocked while waiting for the response, but nothing will be blocked on the server, and server resources can be used to serve other clients.
You can configure your client code to use an ajax request and display a "Waiting for data" message for some part of the page without blocking the rest of the web page from loading.
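If the processing is heavy or blocking, it can also be worth pushing it onto a dedicated execution context so it does not starve Play's default dispatcher. A rough sketch, assuming Play 2.6+; the "contexts.background-jobs" dispatcher name and the class names are illustrative and the dispatcher would need to exist in application.conf:

import javax.inject.Inject
import scala.concurrent.Future
import akka.actor.ActorSystem
import play.api.libs.concurrent.CustomExecutionContext
import play.api.mvc._

// Hypothetical dedicated pool for background work; "contexts.background-jobs"
// must be configured in application.conf.
class BackgroundJobContext @Inject()(system: ActorSystem)
  extends CustomExecutionContext(system, "contexts.background-jobs")

class TestController @Inject()(bgCtx: BackgroundJobContext, cc: ControllerComponents)
  extends AbstractController(cc) {

  def submit(id: Int) = Action(parse.multipartFormData) { implicit request =>
    // Fire and forget on the dedicated pool instead of the default context.
    Future {
      // process the data
    }(bgCtx)
    Redirect(routes.testController.list())
      .flashing(("success", "Job(s) will be running in background."))
  }
}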
I also tried the "Futures.timeout" option. It seems to work fine, but I'm not sure whether it's the correct way to do it.
result.withTimeout(20.seconds)(futures).map { res =>
  Redirect(routes.testController.list()).flashing(("success", s"Job(s) will be updated in background."))
}.recover {
  case e: scala.concurrent.TimeoutException =>
    Redirect(routes.testController.list()).flashing(("success", s"Job(s) will be updated in background."))
}
Web Socket connections in Akka Http are treated as an Akka Streams Flow. This seems like it works great for basic request-reply, but it gets more complex when messages should also be pushed out over the websocket. The core of my server looks kind of like:
lazy val authSuccessMessage = Source.fromFuture(someApiCall)

lazy val messageFlow = requestResponseFlow
  .merge(updateBroadcastEventSource)

lazy val handler = codec
  .atop(authGate(authSuccessMessage))
  .join(messageFlow)

handleWebSocketMessages {
  handler
}
Here, codec is a (de)serialization BidiFlow and authGate is a BidiFlow that processes an authorization message and prevents outflow of any messages until authorization succeeds. Upon success, it sends authSuccessMessage as a reply. requestResponseFlow is the standard request-reply pattern, and updateBroadcastEventSource mixes in async push messages.
I want to be able to send an error message and terminate the connection gracefully in certain situations, such as bad authorization, someApiCall failing, or a bad request processed by requestResponseFlow. So basically, it seems like I want to be able to asynchronously complete messageFlow with one final message, even though its other constituent flows are still alive.
Figured out how to do this using a KillSwitch.
Updated version
The old version had the problem that it didn't seem to work when triggered by a BidiFlow stage higher up in the stack (such as my authGate). I'm not sure exactly why, but modeling the shutoff as a BidiFlow itself, placed further up the stack, resolved the issue.
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()

/**
 * Shutoff valve for the connection. It is triggered when `shutoffPromise`
 * completes, and sends a final optional termination message if that
 * promise resolves with one.
 */
val shutoffBidi = {
  val terminationMessageSource = Source
    .maybe[OutgoingWebsocketEvent]
    .mapMaterializedValue(_.completeWith(shutoffPromise.future))

  val terminationMessageBidi = BidiFlow.fromFlows(
    Flow[IncomingWebsocketEventOrAuthorize],
    Flow[OutgoingWebsocketEvent].merge(terminationMessageSource)
  )

  val terminator = BidiFlow
    .fromGraph(KillSwitches.singleBidi[IncomingWebsocketEventOrAuthorize, OutgoingWebsocketEvent])
    .mapMaterializedValue { killSwitch =>
      shutoffPromise.future.foreach { _ => println("Shutting down connection"); killSwitch.shutdown() }
    }

  terminationMessageBidi.atop(terminator)
}
Then I apply it just inside the codec:
val handler = codec
  .atop(shutoffBidi)
  .atop(authGate(authSuccessMessage))
  .join(messageFlow)
Old version
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()

/**
 * Shutoff valve for the flow of outgoing messages. It is triggered when
 * `shutoffPromise` completes, and sends a final optional termination
 * message if that promise resolves with one.
 */
val shutoffFlow = {
  val terminationMessageSource = Source
    .maybe[OutgoingWebsocketEvent]
    .mapMaterializedValue(_.completeWith(shutoffPromise.future))

  Flow
    .fromGraph(KillSwitches.single[OutgoingWebsocketEvent])
    .mapMaterializedValue { killSwitch =>
      shutoffPromise.future.foreach(_ => killSwitch.shutdown())
    }
    .merge(terminationMessageSource)
}
Then the handler looks like:
val handler = codec
  .atop(authGate(authSuccessMessage))
  .join(messageFlow via shutoffFlow)
I'm using RxJava to pull out values from RabbitMQ. Here's the code:
val amqp = new RabbitQueue("queueName")
val obs = Observable[String](subscr => while (true) subscr onNext amqp.next)
obs subscribe (
  s => println(s"String from rabbitmq: $s"),
  error => amqp.connection.close
)
It works fine but now I have a requirement that a value should be pulled at most once per second while all the values should be preserved (so debounce won't do since it drops intermediary values).
It should work like this: amqp.next blocks the thread so we're waiting... (RabbitMQ got two messages in the queue) pulled the 1st message... wait 1 second... pulled the 2nd message... wait indefinitely for the next message...
How can I achieve this using rx methods?
Alternatively, you could create an observable from a timer, like this. I personally find this more elegant.
RabbitQueue amqp = new RabbitQueue("queueName");

Observable.timer(0, 1, TimeUnit.SECONDS)
    .map(tick -> amqp.next())
    .subscribe(...)
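Since the question is in Scala, roughly the same idea with the RxScala bindings would look like the sketch below (assuming the rx.lang.scala API and that amqp.next blocks until a message is available):

import rx.lang.scala.Observable
import scala.concurrent.duration._

val amqp = new RabbitQueue("queueName")

// Tick once per second and pull the next message on each tick. If amqp.next
// blocks longer than a second, subsequent ticks are simply delayed behind it,
// so no messages are dropped.
val obs: Observable[String] =
  Observable.interval(1.second).map(_ => amqp.next)

obs.subscribe(
  s => println(s"String from rabbitmq: $s"),
  error => amqp.connection.close
)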
One option may be to use the Schedulers API in combination with a PublishSubject as the observable.
Unfortunately, I don't know Scala syntax but here is the Java version you should be able to convert:
RabbitQueue amqp = new RabbitQueue("queueName");
Scheduler.Worker worker = Schedulers.newThread().createWorker();
PublishSubject<String> obs = PublishSubject.create();

worker.schedulePeriodically(new Action0() {
    @Override
    public void call() {
        obs.onNext(amqp.next());
    }
}, 1, 1, TimeUnit.SECONDS);
Your subscribe code from above would remain the same:
obs subscribe (
  s => println(s"String from rabbitmq: $s"),
  error => amqp.connection.close
)
I am trying to create a jetty consumer. I am able to get it successfully running using the endpoint uri:
jetty:http://0.0.0.0:8080
However, when I modify the endpoint uri for https:
jetty:https://0.0.0.0:8443
The page times out trying to load. This seems odd because the camel documentation states it should function right out of the box.
I have since loaded a signed SSL certificate into Java's default keystore, with my attempted implementation to load it below (following http://camel.apache.org/jetty.html).
I have a basic Jetty instance using the akka-camel library with Akka and Scala, e.g.:
class RestActor extends Actor with Consumer {
  val ksp: KeyStoreParameters = new KeyStoreParameters();
  ksp.setPassword("...");

  val kmp: KeyManagersParameters = new KeyManagersParameters();
  kmp.setKeyStore(ksp);

  val scp: SSLContextParameters = new SSLContextParameters();
  scp.setKeyManagers(kmp);

  val jettyComponent: JettyHttpComponent = CamelExtension(context.system).context.getComponent("jetty", classOf[JettyHttpComponent])
  jettyComponent.setSslContextParameters(scp);

  def endpointUri = "jetty:https://0.0.0.0:8443/"

  def receive = {
    case msg: CamelMessage => {
      ...
    }
    ...
  }
  ...
}
This resulted in some progress, because the page does not time out anymore, but instead gives a "The connection was interrupted" error. I am not sure where to go from here because Camel is not throwing any exceptions, but rather failing silently somewhere (apparently).
Does anybody know what would cause this behavior?
When using Java's keytool I did not specify an output file. It didn't throw an error, so the certificate probably went somewhere. I created a new keystore and explicitly imported my crt into it. I then explicitly set the file path to that keystore, and everything works now!
If I had to speculate, it is possible things failed silently because I was adding the certs to Jetty's general bank of certs to use if eligible, instead of explicitly binding one as the SSL certificate for the endpoint.
class RestActor extends Actor with Consumer {
  val ksp: KeyStoreParameters = new KeyStoreParameters();
  ksp.setResource("/path/to/keystore");
  ksp.setPassword("...");

  val kmp: KeyManagersParameters = new KeyManagersParameters();
  kmp.setKeyStore(ksp);

  val scp: SSLContextParameters = new SSLContextParameters();
  scp.setKeyManagers(kmp);

  val jettyComponent: JettyHttpComponent = CamelExtension(context.system).context.getComponent("jetty", classOf[JettyHttpComponent])
  jettyComponent.setSslContextParameters(scp);

  def endpointUri = "jetty:https://0.0.0.0:8443/"

  def receive = {
    case msg: CamelMessage => {
      ...
    }
    ...
  }
  ...
}
Hopefully somebody in the future can use this code as a template for implementing Jetty over SSL with akka-camel (surprisingly, no examples seem to exist).