With RxScala, we can subscribe to observables like this:
val stream = Observable.just(1, 2, 3)
stream.subscribe(x => doSomething(x))
stream.subscribe(x => doSomething(x))
stream.subscribe(x => doSomething(x))
stream.subscribe(x => doSomething(x))
Is it possible to unsubscribe all of the observers (there are four of them) from the stream side?
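One possible approach, sketched here as an assumption rather than taken from the post: route every subscription through a shared stop signal with takeUntil, so that emitting on the signal completes all subscribers at once.

import scala.concurrent.duration._
import rx.lang.scala.{Observable, Subject}

// Hypothetical sketch: a producer-side stop signal shared by all subscribers.
val stop = Subject[Unit]()
val stream = Observable.interval(100.millis).takeUntil(stop)

stream.subscribe(x => println(s"a: $x"))
stream.subscribe(x => println(s"b: $x"))

// Emitting on the stop signal completes every subscription at once,
// i.e. it unsubscribes all observers from the stream side.
stop.onNext(())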
Because I do some "complex" operations, I think my Actor has become synchronous. The main problem, I think, is that I use Await.result inside the method that returns the responses.
The actor:
def process(subscribers: Set[ActorRef]): Receive = {
  case Join(ref)  => context become process(subscribers + ref)
  case Leave(ref) => context become process(subscribers - ref)
  case Push(request) =>
    val filteredSubscribers = (subscribers - sender())
      .filter(s => exists(s, request)) // just some actor filters
    filteredSubscribers.foreach { subscriber =>
      // here I have a Map with each actor's requests
      val actorOptions = getActorOptions(subscriber)
      subscriber ? getResponse(actorOptions, request)
    }
}
The problem is inside getResponse (I think).
def getResponse(actorOptions: JsValue, request: SocketRequest): JsValue = {
  (actorOptions \ "dashboardId").asOpt[Int] match {
    case Some(id) =>
      val response = widgetsService.getByDashboadId(id) map { widgets =>
        val widgetsResponse: List[Future[String]] = widgets.map { w =>
          widgetsService.getDataById(w.id) map { data =>
            s"""{ "widgetId": ${w.id}, "data": $data }"""
          }
        }
        var responses: List[String] = List.empty
        widgetsResponse.foreach { f =>
          f.onComplete {
            case Success(value) => responses = value :: responses
            case Failure(e)     => println(s"Something happened: ${e.getMessage}")
          }
        }
        // first use of Await.result:
        // blocks to populate the responses list with data from all futures
        Await.result(Future.sequence(widgetsResponse), Duration.Inf)
        Json.parse(s"""{
          "dashboardId": $id,
          "widgets": [${responses.mkString(", ")}]
        }""")
      }
      // second use of Await.result:
      // blocks to return a JsValue instead of a Future[JsValue]
      Await.result(response, Duration.Inf)
    case None => buildDefaultJson // returns a default JSON value, unimportant for this example
  }
}
Because of that, in the frontend, if I have 2 socket clients, the response for the second one is sent only after the first.
I found that I can obtain a "fake" increase in performance if I wrap the getResponse call in a Future inside my Actor.
filteredSubscribers.foreach { subscriber =>
  val actorOptions = getActorOptions(subscriber)
  Future(subscriber ? getResponse(actorOptions, request))
}
So the action starts at the same time for both subscribers, but when the first one reaches Await.result, the second is blocked until the first is done.
I need to avoid Await.result there, but I don't know how to collect the results of a list of Futures without a for-comprehension (the list is built dynamically) in the first place where I use it.
Because the Akka ask operator (?) returns a Future[Any], I tried making my getResponse method return a JsValue directly, to be mapped afterwards into a Future[JsValue]. If I remove the second Await.result so that my method returns a Future[JsValue], then the actor ends up with a Future[Future[JsValue]], which doesn't seem right.
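(An editorial aside, not from the post: a nested Future can be flattened without blocking, which is what the map-based solution below ultimately relies on.)

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Minimal sketch: flatten a Future[Future[T]] instead of awaiting the inner one.
val nested: Future[Future[Int]] = Future(Future(42))
val flat: Future[Int] = nested.flatten // same as nested.flatMap(identity)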
After some more research and solutions found on SO, my code became:
Future.sequence(widgetsResponse) map { responses =>
  Json.parse(
    s"""
       |{
       |"dashboardId": $id,
       |"tableSourceId": $tableSourceId,
       |"widgets": [ ${responses.mkString(", ")}]
       |}""".stripMargin
  )
}
getResponse now returns a Future[JsValue], removing both Await.result calls, and the actor case becomes:
filteredSubscribers.foreach { subscriber =>
  val actorOptions = getActorOptions(subscriber)
  getResponse(actorOptions, request) map { data =>
    subscriber ? data
  }
}
I don't know why, but I still get synchronous behavior. Could this be due to my subscribers type, Set[ActorRef]? I tried a parallel foreach, and this looks like it solves my problem:
filteredSubscribers.par.foreach { subscriber =>
  val actorOptions = getActorOptions(subscriber)
  getResponse(actorOptions, request) map { data =>
    subscriber ? data
  }
}
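An editorial aside, not part of the original post: the idiomatic Akka way to hand a Future's result to an actor is pipeTo, which avoids both blocking and discarding the ask future inside map. A sketch reusing the names above:

import akka.pattern.pipe
import context.dispatcher // the actor's execution context

// Sketch: resolve getResponse asynchronously and pipe each JsValue
// straight to its subscriber, with no Await.result anywhere.
filteredSubscribers.foreach { subscriber =>
  val actorOptions = getActorOptions(subscriber)
  getResponse(actorOptions, request).pipeTo(subscriber)
}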
I'm using the pub-sub pattern in fs2. I dynamically create topics and subscribers while processing a stream of messages. For some reason, my subscribers receive only the initial message; further published messages never reach them.
def startPublisher2[In](inputStream: Stream[F, Event]): Stream[F, Unit] = {
  inputStream.through(processingPipe)
}
val processingPipe: Pipe[F, Event, Unit] = { inputStream =>
  inputStream.flatMap {
    case message: Message[_] =>
      initSubscriber(message).flatMap { topic => Stream.eval(topic.publish1(message)) }
  }
}
def initSubscriber[In](message: Message[In]): Stream[F, Topic[F, Event]] = {
  Option(sessions.get(message.sessionId)) match {
    case None =>
      println(s"=== Create new topic for sessionId=${message.sessionId}")
      val topic = Topic[F, Event](message)
      sessions.put(message.sessionId, topic)
      Stream.eval(topic).flatMap { t =>
        // TODO: Is there a better solution?
        Stream.empty.interruptWhen(interrupter) concurrently startSubscribers2(t)
      }
    case Some(topic) =>
      println(s"=== Existing topic for sessionId=${message.sessionId}")
      Stream.eval(topic)
  }
}
Subscriber code is simple:
def startSubscribers2(topic: Topic[F, Event]): Stream[F, Unit] = {
  def processEvent(): Pipe[F, Event, Unit] =
    _.flatMap {
      case e @ Text(_) =>
        Stream.eval(F.delay(println(s"Subscriber processing event: $e")))
      case Message(content, sessionId) =>
        // Thread.sleep(2000)
        Stream.eval(F.delay(println(s"Subscriber #$sessionId got message: $content")))
      case Quit =>
        println("Quit")
        Stream.eval(interrupter.set(true))
    }
  topic.subscribe(10).through(processEvent())
}
The output is the following:
=== Create new topic for sessionId=11111111-1111-1111-1111-111111111111
Subscriber #11111111-1111-1111-1111-111111111111 got message: 1
=== Existing topic for sessionId=11111111-1111-1111-1111-111111111111
=== Create new topic for sessionId=22222222-2222-2222-2222-222222222222
Subscriber #22222222-2222-2222-2222-222222222222 got message: 1
=== Create new topic for sessionId=33333333-3333-3333-3333-333333333333
Subscriber #33333333-3333-3333-3333-333333333333 got message: 1
=== Existing topic for sessionId=22222222-2222-2222-2222-222222222222
=== Existing topic for sessionId=22222222-2222-2222-2222-222222222222
I don't see messages published to the existing topics.
Also, I'm wondering if there is a better way to start an async stream of subscribers, instead of Stream.empty.interruptWhen(interrupter) concurrently startSubscribers2(t).
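One hedged alternative, assuming fs2 1.x and assuming sessions stores the created Topic itself rather than the effect that builds it (this is a sketch, not the post's code): run each new subscriber through a queue of streams joined in parallel, so subscriptions stay alive past the first publish.

// Assumed to exist: subscriberQueue: fs2.concurrent.Queue[F, Stream[F, Unit]]
def initSubscriber2[In](message: Message[In]): Stream[F, Topic[F, Event]] =
  Option(sessions.get(message.sessionId)) match {
    case None =>
      for {
        topic <- Stream.eval(Topic[F, Event](message))
        _     <- Stream.eval(F.delay(sessions.put(message.sessionId, topic)))
        // Hand the subscriber stream off to be run concurrently elsewhere.
        _     <- Stream.eval(subscriberQueue.enqueue1(startSubscribers2(topic)))
      } yield topic
    case Some(topic) => Stream.emit(topic)
  }

// At the top level, run the publisher alongside all dynamically added subscribers:
// startPublisher2(input).concurrently(subscriberQueue.dequeue.parJoinUnbounded)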
I am currently playing around with Akka Streams and tried the following example:
Get the first element from Kafka when requesting a certain HTTP endpoint.
This is the code I wrote, and it's working:
get {
  path("ticket" / IntNumber) { ticketNr =>
    val future = Consumer.plainSource(consumerSettings, Subscriptions.topics("tickets"))
      .take(1)
      .completionTimeout(5.seconds)
      .runWith(Sink.head)
    onComplete(future) {
      case Success(record) => complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, record.value()))
      case _               => complete(HttpResponse(StatusCodes.NotFound))
    }
  }
}
I am just wondering if this is the idiomatic way of working with (Akka) streams.
So is there a more "direct" way of connecting the Kafka stream to the HTTP response stream?
For example, when POSTing I do this:
val kafkaTicketsSink = Flow[String]
  .map(new ProducerRecord[Array[Byte], String]("tickets", _))
  .toMat(Producer.plainSink(producerSettings))(Keep.right)

post {
  path("ticket") {
    (entity(as[Ticket]) & extractMaterializer) { (ticket, mat) =>
      val f = Source.single(ticket).map(t => t.description).runWith(kafkaTicketsSink)(mat)
      onComplete(f) { _ =>
        val locationHeader = headers.Location(s"/ticket/${ticket.id}")
        complete(HttpResponse(StatusCodes.Created, headers = List(locationHeader)))
      }
    }
  }
}
Maybe this can also be improved?
You could keep a single, backpressured stream alive using Sink.queue. You can pull an element from the materialized queue every time a request is received. This should give you back one element if available, and backpressure otherwise.
Something along the lines of:
val queue = Consumer.plainSource(consumerSettings, Subscriptions.topics("tickets"))
  .runWith(Sink.queue())

get {
  path("ticket" / IntNumber) { ticketNr =>
    val future: Future[Option[ConsumerRecord[String, String]]] = queue.pull()
    onComplete(future) {
      case Success(Some(record)) => complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, record.value()))
      case _                     => complete(HttpResponse(StatusCodes.NotFound))
    }
  }
}
More info on Sink.queue can be found in the docs.
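One caveat worth adding, not in the original answer: a SinkQueue allows only one outstanding pull() at a time (the next pull is only allowed once the previous future has completed), so if requests can arrive concurrently you may need to serialize access to the queue, for example behind an actor.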
I've been banging my head against the wall for quite some time, as I can't figure out how to add an error flow for an Akka HTTP WebSocket flow. What I'm trying to achieve is:
Message comes in from the WS client
It's parsed from JSON with circe
If the message was the right format, send the parsed message to an actor
If the message was the wrong format, return an error message to the client
The actor can additionally send messages to the client
Without the error handling this was quite easy, but I can't figure out how to add the errors. Here's what I have:
type GameDecodeResult =
  Either[(String, io.circe.Error), GameLobby.LobbyRequest]

val errorFlow =
  Flow[GameDecodeResult]
    .mapConcat {
      case Left(err) => err :: Nil
      case Right(_)  => Nil
    }
    .map { case (message, error) =>
      logger.info(s"failed to parse message $message", error)
      TextMessage(Error(error.toString).asJson.spaces2)
    }

val normalFlow = {
  val normalFlowSink =
    Flow[GameDecodeResult]
      .mapConcat {
        case Right(msg) => msg :: Nil
        case Left(_)    => Nil
      }
      .map(req => GameLobby.IncomingMessage(userId, req))
      .to(Sink.actorRef[GameLobby.IncomingMessage](gameLobby, PoisonPill))

  val normalFlowSource: Source[Message, NotUsed] =
    Source.actorRef[GameLobby.OutgoingMessage](10, OverflowStrategy.fail)
      .mapMaterializedValue { outActor =>
        gameLobby ! GameLobby.UserConnected(userId, outActor)
        NotUsed
      }
      .map(outMessage => TextMessage(Ok(outMessage.message).asJson.spaces2))

  Flow.fromSinkAndSource(normalFlowSink, normalFlowSource)
}

val incomingMessageParser =
  Flow[Message]
    .flatMapConcat {
      case tm: TextMessage =>
        tm.textStream
      case bm: BinaryMessage =>
        bm.dataStream.runWith(Sink.ignore)
        Source.empty
    }
    .map { message =>
      decode[GameLobby.LobbyRequest](message).left.map(err => message -> err)
    }
These are my flows, and I think they should be good enough, but I have no idea how to assemble them, and the complexity of the Akka Streams API doesn't help. Here's what I tried:
val x: Flow[Message, Message, NotUsed] =
  GraphDSL.create(incomingMessageParser, normalFlow, errorFlow)((_, _, _)) {
    implicit builder => (incoming, normal, error) =>
      import GraphDSL.Implicits._
      val partitioner = builder.add(Partition[GameDecodeResult](2, {
        case Right(_) => 0
        case Left(_)  => 1
      }))
      val merge = builder.add(Merge[Message](2))
      incoming.in ~> partitioner ~> normal ~> merge
      partitioner ~> error ~> merge
  }
but admittedly I have absolutely no idea how GraphDSL.create works, where I can use the ~> arrow, or what I'm doing in general in the last part. It just won't type check, and the error messages are not helping me one bit.
A few things need to be fixed in the Flow you're building with the GraphDSL:
There is no need to pass the 3 subflows to the GraphDSL.create method; that is only needed to customize the materialized value of your graph, and you have already decided the materialized value is going to be NotUsed.
When connecting incoming using the ~> operator, you need to connect its outlet (.out) to the partition stage.
Every GraphDSL definition block needs to return the shape of your graph, i.e. its external ports. You do that by returning a FlowShape that has incoming.in as input and merge.out as output. These define the blueprint of your custom flow.
Because in the end you want to obtain a Flow, you're missing a last call to create one from the graph you defined: Flow.fromGraph(...).
Code example below:
val x: Flow[Message, Message, NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit builder =>
    import GraphDSL.Implicits._

    val partitioner = builder.add(Partition[GameDecodeResult](2, {
      case Right(_) => 0
      case Left(_)  => 1
    }))
    val merge = builder.add(Merge[Message](2))
    val incoming = builder.add(incomingMessageParser)

    incoming.out ~> partitioner
    partitioner  ~> normalFlow ~> merge
    partitioner  ~> errorFlow  ~> merge

    FlowShape(incoming.in, merge.out)
  })
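For completeness, a hypothetical usage sketch, not part of the original answer, that plugs the combined flow into an Akka HTTP route (the path is made up):

// Sketch: handleWebSocketMessages upgrades the HTTP connection and runs
// the Flow[Message, Message, NotUsed] defined above for each session.
val route =
  path("game" / "ws") {
    handleWebSocketMessages(x)
  }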
In the following code I turn a TCP socket into an Observable[Array[Byte]]:
import java.net.{InetSocketAddress, Socket}
import rx.lang.scala.Observable
import rx.lang.scala.schedulers.IOScheduler

val sock = new Socket
type Bytes = Array[Byte]

lazy val s: Observable[Bytes] = Observable.using[Bytes, Socket] {
  sock.connect(new InetSocketAddress("10.0.2.2", 9002), 1000)
  sock
}(
  socket => Observable.from[Bytes] {
    val incoming = socket.getInputStream
    val buffer = new Bytes(1024)
    Stream.continually {
      val read = incoming.read(buffer, 0, 1024)
      buffer.take(read)
    }.takeWhile(_.nonEmpty)
  },
  socket => {
    println("Socket disposed")
    socket.close()
    s.retry // Does not work
  })
  .subscribeOn(IOScheduler.apply)

s.subscribe(bytes => println(new String(bytes, "UTF-8")), println)
The connection to a remote server may be interrupted at any moment, and in that case I'd like the Observable to try to reconnect automatically, but s.retry does not do anything. How can I achieve this? Also, can it be done "inside" the current Observable, without creating a new one and re-subscribing?
You want to set up a new socket connection on each new subscription. This is easiest with (A)SyncOnSubscribe, ported to RxScala as of version 0.26.5. Once you have this observable, you can use the normal error-control methods like .retry.
Something like this:
import java.net.InetSocketAddress
import scala.util.{Failure, Success, Try}
import rx.lang.scala.{Notification, Observable}
import rx.lang.scala.observables.SyncOnSubscribe

val socketObservable: Observable[Byte] = Observable.create(SyncOnSubscribe.singleState(
  generator = () => {
    // connect returns Unit, so connect first and then take the input stream
    sock.connect(new InetSocketAddress("10.0.2.2", 9002), 1000)
    sock.getInputStream
  }
)(
  next = is => Try(is.read()) match {
    case Success(-1)   => Notification.OnCompleted()
    case Success(byte) => Notification.OnNext(byte.toByte) // read() returns an Int
    case Failure(e)    => Notification.OnError(e)
  },
  onUnsubscribe = is => Try(is.close())
))
Note: this reads a single byte at a time and isn't terribly efficient. You can improve this with ASyncOnSubscribe or by having each event of your observable be an array of bytes.
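For illustration, a hedged sketch of the Array[Byte]-per-event variant, under the same assumptions as the snippet above (not the original answer's code):

// Sketch: emit Array[Byte] chunks instead of single bytes.
val chunked: Observable[Array[Byte]] = Observable.create(SyncOnSubscribe.singleState(
  generator = () => {
    sock.connect(new InetSocketAddress("10.0.2.2", 9002), 1000)
    sock.getInputStream
  }
)(
  next = is => {
    val buffer = new Array[Byte](1024)
    Try(is.read(buffer)) match {
      case Success(-1) => Notification.OnCompleted()
      case Success(n)  => Notification.OnNext(buffer.take(n))
      case Failure(e)  => Notification.OnError(e)
    }
  },
  onUnsubscribe = is => Try(is.close())
))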
Note: this is a cold observable and will create a new socket for each subscriber. For example, this will open 2 sockets:
socketObservable.foreach(b => System.out.print(b))
socketObservable.buffer(1024).foreach(kiloByte => System.out.println(kiloByte))
If this is not what you want, you can turn it into a hot observable with .share.
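A short usage sketch of that option (an illustration, not from the original answer):

// .share multicasts one underlying subscription, and thus one socket,
// to all current subscribers.
val hot: Observable[Byte] = socketObservable.share
hot.foreach(b => System.out.print(b))
hot.buffer(1024).foreach(kiloByte => System.out.println(kiloByte)) // same socket this time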