In the following code I turn a TCP socket into an Observable[Array[Byte]]:
import java.net.{InetSocketAddress, Socket}
import rx.lang.scala.Observable
import rx.lang.scala.schedulers.IOScheduler
val sock = new Socket
type Bytes = Array[Byte]
lazy val s: Observable[Bytes] = Observable.using[Bytes, Socket] {
sock.connect(new InetSocketAddress("10.0.2.2", 9002), 1000)
sock
}(
socket => Observable.from[Bytes] {
val incoming = socket.getInputStream
val buffer = new Bytes(1024)
Stream.continually {
val read = incoming.read(buffer, 0, 1024)
buffer.take(read)
}.takeWhile(_.nonEmpty)
},
socket => {
println("Socket disposed")
socket.close
s.retry // Does not work
})
.subscribeOn(IOScheduler.apply)
s.subscribe(bytes => println(new String(bytes, "UTF-8")), println)
The connection to a remote server may be interrupted at any moment, and in that case I'd like the Observable to try to reconnect automatically, but s.retry does not do anything. How can I achieve this? Also, can it be done "inside" the current Observable, without creating a new one and re-subscribing?
You want to set up a new socket connection on each new subscription. This is easiest with (A)SyncOnSubscribe, which has been available in RxScala since version 0.26.5. Once you have this observable you can use the normal error-handling operators such as .retry.
Something like this:
import java.net.{InetSocketAddress, Socket}
import scala.util.{Failure, Success, Try}
import rx.lang.scala.{Notification, Observable}
import rx.lang.scala.observables.SyncOnSubscribe

val socketObservable: Observable[Byte] = Observable.create(SyncOnSubscribe.singleState(
  generator = () => {
    // open a fresh socket for every subscription
    val sock = new Socket
    sock.connect(new InetSocketAddress("10.0.2.2", 9002), 1000)
    sock.getInputStream
  })(
  next = is => Try(is.read()) match {
    case Success(-1)   => Notification.OnCompleted()       // end of stream
    case Success(byte) => Notification.OnNext(byte.toByte) // read() returns an Int in 0-255
    case Failure(e)    => Notification.OnError(e)
  },
  onUnsubscribe = is => Try(is.close())
))
Note: this reads a single byte at a time and isn't terribly efficient. You can improve this with AsyncOnSubscribe or by having each event of your observable be an array of bytes.
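For example, a sketch of a chunked variant with the same structure (the 1024-byte buffer size is an arbitrary assumption), where every emitted element is the Array[Byte] actually read:
val chunkedSocketObservable: Observable[Array[Byte]] = Observable.create(SyncOnSubscribe.singleState(
  generator = () => {
    val sock = new Socket
    sock.connect(new InetSocketAddress("10.0.2.2", 9002), 1000)
    sock.getInputStream
  })(
  next = is => Try {
    val buffer = new Array[Byte](1024)
    val read = is.read(buffer) // blocks until some bytes arrive, returns -1 at end of stream
    if (read == -1) None else Some(buffer.take(read))
  } match {
    case Success(None)        => Notification.OnCompleted()
    case Success(Some(chunk)) => Notification.OnNext(chunk)
    case Failure(e)           => Notification.OnError(e)
  },
  onUnsubscribe = is => Try(is.close())
))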
Note: this is a cold observable and will create a new socket for each subscriber. For example this will open 2 sockets:
socketObservable.foreach(b => System.out.print(b))
socketObservable.buffer(1024).foreach(kiloByte => System.out.println(kiloByte))
If this is not what you want, you can turn it into a hot observable with .share.
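For completeness, a short usage sketch of both operators (the retry count of 3 is an arbitrary choice):
// Resubscribing on error reopens the socket, because the SyncOnSubscribe generator
// runs again for every new subscription. .retry alone retries forever; .retry(3) gives up
// after three attempts.
val resilient: Observable[Byte] = socketObservable.retry(3)
// .share turns the cold observable into a hot one: all subscribers share a single socket.
val shared: Observable[Byte] = socketObservable.share
shared.foreach(b => System.out.print(b))
shared.buffer(1024).foreach(kiloByte => System.out.println(kiloByte))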
Related
I want to implement a high-throughput server that accepts multiple clients. Every request should query a database, so I need some kind of async behavior.
I followed the ROUTER-to-REQ pattern from the documentation, plus Futures, so I ended up with this "architecture":
trait ZmqProtocol extends Protocol {
private val pool = Executors.newCachedThreadPool()
private implicit val ec: ExecutionContextExecutor = ExecutionContext.fromExecutor(pool)
val context: ZMQ.Context = ZMQ.context(1)
val socket: ZMQ.Socket = context.socket(ZMQ.ROUTER)
socket.bind("tcp://*:5555")
override def receiveMessages(): String = {
while (true) {
val address = socket.recv(0)
val empty = socket.recv(0)
val request = socket.recv(0)
Future {
val message = new String(request)
getResponseFromDb(message)
} onComplete {
case Success(response) =>
// Send reply back to client
socket.send(address, ZMQ.SNDMORE)
socket.send("".getBytes, ZMQ.SNDMORE)
socket.send(response.getBytes(), 0)
case Failure(ex) => println(ex)
}
}
"DONE"
}
}
I understand this won't work because I'm sharing the socket across Futures, so I need a better model. I know ZeroMQ sockets are fast, and creating several worker threads would be enough on the input side, but if the bottleneck is on the database side and I need to do other work while waiting for the DB, I presume all my threads would soon be exhausted.
Would it be too much overhead if I created a new socket and bound a ROUTER in every Future, or is there some better solution?
Also, for Scala developers: is there a way to force onComplete to be executed on the main thread (I suppose that would solve the issue)? Thanks!
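On the last question: onComplete runs its callback on whichever ExecutionContext is supplied, implicitly or explicitly, so the callbacks can at least be pinned to one dedicated thread. A minimal sketch with illustrative names follows; note that this alone does not make the ROUTER socket safe to share with the thread blocked in recv:
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}

// Stub standing in for the DB call from the question.
def getResponseFromDb(message: String): String = s"response for $message"

// The DB work still runs on a cached pool; every callback runs on this single thread.
val dbEc: ExecutionContext       = ExecutionContext.fromExecutor(Executors.newCachedThreadPool())
val callbackEc: ExecutionContext = ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())

def handle(message: String): Unit =
  Future(getResponseFromDb(message))(dbEc).onComplete {
    case Success(response) =>
      // Callbacks are now serialised onto one thread, but a ZeroMQ socket may still only be
      // touched by the thread that owns it, so the reply would have to be handed back to
      // the receive loop (for example via a queue or an inproc socket).
      println(s"DB answered: $response")
    case Failure(ex) => println(ex)
  }(callbackEc)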
I started playing around with Scala and came across this particular boilerplate for a WebSocket chatroom in Scala.
They use MergeHub.source and BroadcastHub.sink as the Source and Sink for sending the messages to all connected clients.
The example works fine for exchanging messages as it is.
private val (chatSink, chatSource) = {
// Don't log MergeHub$ProducerFailed as error if the client disconnects.
// recoverWithRetries -1 is essentially "recoverWith"
val source = MergeHub.source[WSMessage]
.log("source")
.recoverWithRetries(-1, { case _: Exception ⇒ Source.empty })
val sink = BroadcastHub.sink[WSMessage]
source.toMat(sink)(Keep.both).run()
}
private val userFlow: Flow[WSMessage, WSMessage, _] = {
Flow.fromSinkAndSource(chatSink, chatSource)
}
def chat(): WebSocket = {
WebSocket.acceptOrResult[WSMessage, WSMessage] {
case rh if sameOriginCheck(rh) =>
Future.successful(userFlow).map { flow =>
Right(flow)
}.recover {
case e: Exception =>
val msg = "Cannot create websocket"
logger.error(msg, e)
val result = InternalServerError(msg)
Left(result)
}
case rejected =>
logger.error(s"Request ${rejected} failed same origin check")
Future.successful {
Left(Forbidden("forbidden"))
}
}
}
I want to store the messages that are exchanged in the chatroom in a DB.
I tried adding map and fold functions to source and sink to get hold of the messages that are sent but I wasn't able to.
I tried adding a Flow stage between MergeHub and BroadcastHub like below
val flow = Flow[WSMessage].map(element => println(s"Message: $element"))
source.via(flow).toMat(sink)(Keep.both).run()
But it throws a compilation error saying that toMat cannot be referenced with such a signature.
Can someone help or point me to how I can get hold of the messages that are sent and store them in a DB?
Link for full template:
https://github.com/playframework/play-scala-chatroom-example
Let's look at your flow:
val flow = Flow[WSMessage].map(element => println(s"Message: $element"))
It takes elements of type WSMessage and emits Unit (the result of println). Here it is again with its full type:
val flow: Flow[WSMessage, Unit, NotUsed] = Flow[WSMessage].map(element => println(s"Message: $element"))
This will clearly not work as the sink expects WSMessage and not Unit.
Here's how you can fix the above problem:
val flow = Flow[WSMessage].map { element =>
println(s"Message: $element")
element
}
Note that for persisting messages in the database you will most likely want to use an async stage, roughly:
val flow = Flow[WSMessage].mapAsync(parallelism) { element =>
println(s"Message: $element")
// assuming DB.write() returns a Future[Unit]
DB.write(element).map(_ => element)
}
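With the flow emitting WSMessage again, the wiring you attempted in the question compiles; a sketch using the persisting flow above:
private val (chatSink, chatSource) = {
  val source = MergeHub.source[WSMessage]
    .log("source")
    .via(flow) // every chat message passes through here and gets persisted
    .recoverWithRetries(-1, { case _: Exception => Source.empty })
  val sink = BroadcastHub.sink[WSMessage]
  source.toMat(sink)(Keep.both).run()
}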
I'm trying to keep a count of each successful import. But here is the problem: the counter works if the router receives a message from its parent, but when one of its children sends a message, the router receives it yet doesn't appear to update the counter variable declared outside the receive block.
I know it sounds complicated. Let me show you the code.
Here is the router
class Watcher(size: Int) extends Actor {
var router = {
val routees = Vector.fill(size) {
val w = context.actorOf(
Props[Worker]
)
context.watch(w)
ActorRefRoutee(w)
}
Router(RoundRobinRoutingLogic(), routees)
}
var sent = 0
override def supervisorStrategy: SupervisorStrategy = OneForOneStrategy(maxNrOfRetries = 100) {
case _: DocumentNotFoundException => {
Resume
}
case _: Exception => Escalate
}
override def receive: Receive = {
case container: MessageContainer =>
router.route(container, sender)
case Success =>
sent += 1
case GetValue =>
sender ! sent
case Terminated(a) =>
router = router.removeRoutee(a) // removeRoutee returns a new Router, so keep it
val w = context.actorOf(Props[Worker])
context.watch(w)
router = router.addRoutee(w)
case undef =>
println(s"${this.getClass} received undefinable message: $undef")
}
}
Here is the worker
class Worker() extends Actor with ActorLogging {
var messages = Seq[MessageContainer]()
var received = 0
override def receive: Receive = {
case container: MessageContainer =>
try {
importMessage(container.message, container.repo)
context.parent ! Success
} catch {
case e: Exception =>
throw e
}
case e: Error =>
log.info(s"Error occurred $e")
sender ! e
case undef => println(s"${this.getClass} received undefinable message: $undef")
}
}
So on supervisor ? GetValue I get 0, but I'm supposed to get 1000. The strangest thing is that when I debug it with a breakpoint right on case Success => ... the value is incremented every time a new message arrives. But supervisor ? GetValue still returns 0.
Let's assume I want to count on case container: MessageContainer => ... and it will magically work; I'll get the desired number, but it won't tell me whether I actually imported anything. What's going on?
Here is the test case.
#Test
def testRouter(): Unit = {
val system = ActorSystem("RouterTestSystem")
// val serv = AddressFromURIString("akka.tcp://master#host:1334")
val supervisor = system.actorOf(Props(new Watcher(20)))//.withDeploy(akka.actor.Deploy(scope = RemoteScope(serv))))
val repo = coreSession.getRepositoryName
val containers = (0 until num)
.map(_ => MessageContainer(MessageFactory.generate("/"), repo))
val watch = Stopwatch.createStarted()
(0 until num).par
.foreach( i => {
supervisor ! containers.apply(i)
})
implicit val timeout = Timeout(60 seconds)
val future = supervisor ? GetValue
val result = Await.result(future, timeout.duration).asInstanceOf[Int]
val speed = result / (watch.elapsed(TimeUnit.MILLISECONDS) / 1000.0)
println(f"Import speed: $speed%.2f")
assertEquals(num, result)
}
Can you please explain this in detail? Why is it happening? Why only for messages received from the children? Is there another approach?
Well... there can be many potential problems hidden in the parts of code that you have not shared. But, for the sake of this discussion I will assume that everything else is fine and we will just discuss problems with your shared code.
Now, let me explain a bit about actors. To put it simply, every actor has a mailbox where it keeps messages in the order they were received, and it processes them one by one in that order. Since the mailbox is used like a Queue, we will refer to it as a Queue in this discussion.
Also... I don't know what containers.apply(i) is going to return, so I will refer to the return value of containers.apply(1) as MessageContainer__1.
In your test runner you are first creating an instance of Watcher,
val supervisor = system.actorOf(Props(new Watcher(20)))
Now, let's say that you are sending just 2 messages (num = 2) to supervisor.
So supervisor's mailbox will look something like,
Queue(MessageContainer__0, MessageContainer__1)
Then you send it another message GetValue so the mailbox will look like,
Queue(MessageContainer__0, MessageContainer__1, GetValue)
Now the actor will process the first message and pass it to one of the workers; the mailbox will look like,
Queue(MessageContainer__1, GetValue)
Now, even if your worker is ultra-fast and instantaneous in sending the reply, the mailbox will look like,
Queue(MessageContainer__1, GetValue, Success)
And since your worker is super-ultra-fast and instantaneously replies with a Success, the state after passing on the second MessageContainer will look like,
Queue(GetValue, Success, Success)
And... here is the root of your problem. The supervisor sees the GetValue message before any Success messages, no matter how fast your workers are.
And thus it will process GetValue and reply with the current value of sent, which is 0.
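One simple way around this in the test, a sketch rather than anything from the original post, is to keep asking until the count catches up instead of asking once right after firing the containers:
// Poll GetValue until the router has counted every Success (reuses the implicit timeout
// already defined in the test).
val deadline = 60.seconds.fromNow
def counted(): Int = Await.result((supervisor ? GetValue).mapTo[Int], timeout.duration)
var result = counted()
while (result < num && deadline.hasTimeLeft()) {
  Thread.sleep(100)
  result = counted()
}
assertEquals(num, result)
Alternatively, the Watcher itself could remember who asked and reply only once sent has reached an expected count passed along with the request.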
How can I keep the client (web) connection in a variable in memory and then send outgoing messages to that client when needed?
I already have some simple code that pushes a message back to the client once the server receives a message from it. How do I modify the code below to handle the outgoing messaging part?
implicit val actorSystem = ActorSystem("akka-system")
implicit val flowMaterializer = ActorMaterializer()
implicit val executionContext = actorSystem.dispatcher
val ip = "127.0.0.1"
val port = 32000
val route = get {
pathEndOrSingleSlash {
complete("Welcome to websocket server")
}
} ~
path("hello") {
get {
handleWebSocketMessages(echoService)
}
}
def sendMessageToClient(msg : String) {
// *** How to implement this?
// *** How to save the client connection when it is first connected?
// Then how to send message to this connection?
}
val echoService = Flow[Message].collect {
// *** Here the server push back messages when receiving msg from client
case tm : TextMessage => TextMessage(Source.single("Hello ") ++ tm.textStream)
case _ => TextMessage("Message type unsupported")
}
val binding = Http().bindAndHandle(route, ip, port)
You can tap into the message pipeline with a .map call. Inside the .map you can capture the value and then return the same message. For example:
Flow[Message].collect {
case tm : TextMessage =>
TextMessage(Source.single("Hello ") ++ tm.textStream.via(
Flow[String].map((message) => {println(message) /* capture value here*/; message})))
case _ => TextMessage("Message type unsupported")
}
Now, if your intention is to process those values and send out values later, what you want is not a single source-to-sink flow but two separate streams for the sink and the source, for which you can use Flow.fromSinkAndSource, e.g.:
Flow.fromSinkAndSource[Message, Message](
Flow[Message].collect { /* capture values */},
// Or send stream to other sink for more processing
source
)
In all likelihood, this source will be constructed either with the graph DSL or from a hand-rolled actor, or you can look into reusable helpers such as MergeHub.
Server code :
object EchoService {
def route: Route = path("ws-echo") {
get {
handleWebSocketMessages(flow)
}
} ~ path("send-client") {
get {
sourceQueue.map(q => {
println(s"Offering message from server")
q.offer(BinaryMessage(ByteString("ta ta")))
} )
complete("Sent from server successfully")
}
}
val (source, sourceQueue) = {
val p = Promise[SourceQueue[Message]]
val s = Source.queue[Message](100, OverflowStrategy.backpressure).mapMaterializedValue(m => {
p.trySuccess(m)
m
})
(s, p.future)
}
val flow =
Flow.fromSinkAndSourceMat(Sink.ignore, source)(Keep.right)
}
Client Code :
object Client extends App {
implicit val actorSystem = ActorSystem("akka-system")
implicit val flowMaterializer = ActorMaterializer()
val config = actorSystem.settings.config
val interface = config.getString("app.interface")
val port = config.getInt("app.port")
// print each incoming strict text message
val printSink: Sink[Message, Future[Done]] =
Sink.foreach {
case message: TextMessage.Strict =>
println(message.text)
case _ => println(s"received unknown message format")
}
val (source, sourceQueue) = {
val p = Promise[SourceQueue[Message]]
val s = Source.queue[Message](100, OverflowStrategy.backpressure).mapMaterializedValue(m => {
p.trySuccess(m)
m
})
(s, p.future)
}
val flow =
Flow.fromSinkAndSourceMat(printSink, source)(Keep.right)
val (upgradeResponse, sourceClosed) =
Http().singleWebSocketRequest(WebSocketRequest("ws://localhost:8080/ws-echo"), flow)
val connected = upgradeResponse.map { upgrade =>
// just like a regular http request we can get 404 NotFound,
// with a response body, that will be available from upgrade.response
if (upgrade.response.status == StatusCodes.SwitchingProtocols || upgrade.response.status == StatusCodes.OK ) {
Done
} else {
throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
}
}
connected.onComplete(println)
}
When I hit http://localhost:8080/send-client I see messages coming to the client, but after a while, if I try to send to the client again, I don't see any messages on the client side. I also tried source.concatMat(Source.maybe)(Keep.right) but no luck.
Edit: I tested with a JS client; somehow the connection/flow gets closed on the server end. Is there any way to prevent this? And how can I listen for this event when using the akka-http WebSocket client?
Hi,
The reason why it does not stay connected is that by default all
HTTP connections have an idle-timeout enabled, to keep the system
from leaking connections if clients disappear without any signal.
One way to overcome this limitation (and actually my recommended
approach) is to inject keep-alive messages on the client side
(messages that the server application otherwise ignores, but which
inform the underlying HTTP server that the connection is still live).
You can override the idle-timeouts in the HTTP server configuration to
a larger value but I don't recommend that.
If you are using stream-based clients, injecting heartbeats when
necessary is as simple as calling keepAlive and providing it a time
interval and a factory for the message you want to inject:
http://doc.akka.io/api/akka/2.4.7/index.html#akka.stream.scaladsl.Flow#keepAliveU>:Out:FlowOps.this.Repr[U]
That combinator will make sure that no silent period lasts longer
than T, as it will inject elements to keep this contract if necessary
(and will not inject anything if there is enough background traffic).
-Endre
Thank you Endre :), working snippet below:
// on client side
val (source, sourceQueue) = {
val p = Promise[SourceQueue[Message]]
val s = Source.queue[Message](Int.MaxValue, OverflowStrategy.backpressure).mapMaterializedValue(m => {
p.trySuccess(m)
m
}).keepAlive(FiniteDuration(1, TimeUnit.SECONDS), () => TextMessage.Strict("Heart Beat"))
(s, p.future)
}