Scala: infinite Rx Observable creation - how to do this properly?

I recently started playing with rxjava-scala, and I wanted to create a (possibly) infinite stream Observable. Looking at the code and the open issues on GitHub, I found out that an "out of the box" solution hasn't been implemented yet (usecase06 in the issue says it's not even implemented for Java).
So I tried to come up with my own implementation. Consider the following:
def getIterator: Iterator[String] = {
  def fib(a: BigInt, b: BigInt): Stream[BigInt] = a #:: fib(b, a + b)
  fib(1, 1).iterator.map { bi =>
    Thread.sleep(100)
    s"next fibonacci: ${bi}"
  }
}
and a helper method:
def startOnThread(body: => Unit): Thread = {
  val t = new Thread {
    override def run = body
  }
  t.start()
  t
}
and the example code:
val observable: Observable[String] = Observable(
  observer => {
    var cancelled = false
    val fs = getIterator
    val t = startOnThread {
      while (!cancelled) { observer.onNext(fs.next) }
      observer.onCompleted()
    }
    Subscription(new rx.Subscription {
      override def unsubscribe() = {
        cancelled = true
        t.join()
      }
    })
  }
)
val observer = Observer(new rx.Observer[String] {
  def onNext(args: String) = println(args)
  def onError(e: Throwable) = logger.error(e.getMessage)
  def onCompleted() = println("DONE!")
})
val subscription = observable.subscribe(observer)
Thread.sleep(5000)
subscription.unsubscribe()
This seems to work fine, but I'm not happy with it. First of all, I'm creating a new Thread, which could be bad. But even if I used some kind of thread pool, it would still feel wrong. So I'm thinking I should use a Scheduler, which sounds like the proper solution, only I can't figure out how to use it in such a scenario. I tried supplying rx.lang.scala.concurrency.Schedulers.threadPoolForIO to the observeOn method, but it seems like I'm doing it wrong: the observable's code won't compile with it. Any help would be greatly appreciated. Thanks!

First of all, there are already adapters to convert an Iterable to an Observable: the "from" function.
Second, the iterator won't return control, so your sleep and unsubscribe won't be called. You need to execute the subscription on a dedicated thread with "subscribeOn(NewThreadScheduler())":
def getIterator: Iterator[String] = {
  def fib(a: BigInt, b: BigInt): Stream[BigInt] = a #:: fib(b, a + b)
  fib(1, 1).iterator.map { bi =>
    Thread.sleep(1000)
    s"next fibonacci: ${bi}"
  }
}
val sub = Observable.from(getIterator.toIterable)
  .subscribeOn(NewThreadScheduler())
  .subscribe(println(_))

readLine()
sub.unsubscribe()
println("fib complete")
readLine()
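If you want to avoid managing a blocking iterator at all, another option is to let a timer drive the emission and index into the memoized fibonacci Stream. This is only a sketch, and it assumes the Observable.interval and map operators of the rx.lang.scala API of that era:

import scala.concurrent.duration._
import rx.lang.scala.Observable

def fib(a: BigInt, b: BigInt): Stream[BigInt] = a #:: fib(b, a + b)
val fibs = fib(1, 1)   // Stream memoizes values it has already computed

// one value every 100ms, forever, on a scheduler managed by Rx
val ticking: Observable[String] =
  Observable.interval(100.millis).map(i => s"next fibonacci: ${fibs(i.toInt)}")

val sub2 = ticking.subscribe(println(_))
Thread.sleep(5000)
sub2.unsubscribe()   // stops the timer; no hand-rolled thread to join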

Related

MVar tryPut returns true and isEmpty also returns true

I wrote a simple callback (handler) function which I pass to an async API, and I want to wait for the result:
object Handlers {
  val logger: Logger = Logger("Handlers")
  implicit val cs: ContextShift[IO] =
    IO.contextShift(ExecutionContext.Implicits.global)

  class DefaultHandler[A] {
    val response: IO[MVar[IO, A]] = MVar.empty[IO, A]

    def onResult(obj: Any): Unit = {
      obj match {
        case obj: A =>
          println(response.flatMap(_.tryPut(obj)).unsafeRunSync())
          println(response.flatMap(_.isEmpty).unsafeRunSync())
        case _ => logger.error("Wrong expected type")
      }
    }

    def getResponse: A = {
      response.flatMap(_.take).unsafeRunSync()
    }
  }
}
But for some reason both tryPut and isEmpty (when I manually call the onResult method) return true, and therefore when I call getResponse it sleeps forever.
This is my test:
class HandlersTest extends FunSuite {
  test("DefaultHandler.test") {
    val handler = new DefaultHandler[Int]
    handler.onResult(3)
    val response = handler.getResponse
    assert(response != 0)
  }
}
Can somebody explain why tryPut returns true but nothing is put? And what is the right way to use MVar/channels in Scala?
IO[X] means that you have a recipe to create some X. So in your example, you are putting into one MVar and then asking another one.
Here is how I would do it.
object Handlers {
  trait DefaultHandler[A] {
    def onResult(obj: Any): IO[Unit]
    def getResponse: IO[A]
  }

  object DefaultHandler {
    def apply[A : ClassTag]: IO[DefaultHandler[A]] =
      MVar.empty[IO, A].map { response =>
        new DefaultHandler[A] {
          override def onResult(obj: Any): IO[Unit] = obj match {
            case obj: A =>
              for {
                r1 <- response.tryPut(obj)
                _  <- IO(println(r1))
                r2 <- response.isEmpty
                _  <- IO(println(r2))
              } yield ()

            case _ =>
              IO(logger.error("Wrong expected type"))
          }

          override def getResponse: IO[A] =
            response.take
        }
      }
  }
}
The "unsafe" is sort of a hint, but every time you call unsafeRunSync, you should basically think of it as an entire new universe. Before you make the call, you can only describe instructions for what will happen, you can't actually change anything. During the call is when all the changes occur. Once the call completes, that universe is destroyed, and you can read the result but no longer change anything. What happens in one unsafeRunSync universe doesn't affect another.
You need to call it exactly once in your test code. That means your test code needs to look something like:
val test = for {
  handler  <- Handlers.DefaultHandler[Int]
  _        <- handler.onResult(3)
  response <- handler.getResponse
} yield response

assert(test.unsafeRunSync() == 3)
Note this doesn't really buy you much over just using the MVar directly. I think you're trying to mix side effects inside IO and outside it, but that doesn't work. All the side effects need to be inside.

Submitting operations in created future

I have a lazy val Future that obtains some object, and a function which submits operations on that Future.
class C {
  def printLn(s: String) = println(s)
}

lazy val futureC: Future[C] = Future { Thread.sleep(3000); new C() }

def func(s: String): Unit = {
  futureC.foreach { c => c.printLn(s) }
}
The problem is that when the Future completes, it executes the operations in the reverse order from which they were submitted. So, for example, if I execute sequentially
func("A")
func("B")
func("C")
I get, after the Future completes:
C
B
A
This order is important for me. Is there a way to preserve it?
Of course I could use an actor that asks for the future and stashes the strings while the future is not ready, but that seems redundant to me.
lazy val futureC: Future[C]
lazy vals in Scala are compiled into code that uses a synchronized block for thread safety.
Here, when func("A") is called, it will obtain the lock for the lazy val and that thread will go to sleep.
Therefore func("B") and func("C") will be blocked by the lock.
When those blocked threads are resumed, the order cannot be guaranteed.
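For reference, this is roughly the encoding the compiler produces for the lazy val (a simplified sketch; the real code uses a bitmap field and double-checked locking, and these field names are made up):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object CompiledLazyValSketch {
  private var _futureC: Future[C] = _
  @volatile private var _initialized = false

  def futureC: Future[C] = {
    if (!_initialized) this.synchronized {
      if (!_initialized) {
        _futureC = Future { Thread.sleep(3000); new C() }  // initializer runs once, while holding the lock
        _initialized = true
      }
    }
    _futureC
  }
}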
If you do it like below, you'll get the order you expect. This is because the for comprehension creates a flatMap- and map-based chain that gets executed sequentially.
lazy val futureC: Future[C] = Future {
  Thread.sleep(1000)
  new C()
}

def func(s: String): Future[Unit] = {
  futureC.map { c => c.printLn(s) }
}

val x = for {
  _ <- func("A")
  _ <- func("B")
  _ <- func("C")
} yield ()
The order is preserved even without the lazy keyword. You can remove the lazy keyword unless it is really necessary.
Hope this helps.
You can use Future.traverse to ensure the order of execution.
Something like this. I'm not sure how your func has a reference to the correct futureC, so I moved it inside.
def func(s: String): Future[Unit] = {
  lazy val futureC = Future { Thread.sleep(3000); new C() }
  futureC.map { c => c.printLn(s) }
}

def traverse[A, B](xs: Seq[A])(fn: A => Future[B]): Future[Seq[B]] =
  xs.foldLeft(Future(Seq[B]())) { (acc, item) =>
    acc.flatMap { accValue =>
      fn(item).map { itemValue =>
        accValue :+ itemValue
      }
    }
  }
traverse(Seq("A","B","C"))(func)
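One caveat: the standard library's scala.concurrent.Future.traverse is not a drop-in replacement for this hand-rolled traverse, because it creates all the Futures up front and only sequences their results, so the printLn side effects could still interleave. The foldLeft/flatMap version above only invokes fn(item) after the previous Future has completed, which is what guarantees the order. A small usage sketch, reusing func and traverse from above:

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val ordered: Future[Seq[Unit]] = traverse(Seq("A", "B", "C"))(func)
Await.result(ordered, 15.seconds)   // prints A, B, C in that order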

Closing an Akka stream from inside a GraphStage (Akka 2.4.2)

In Akka Stream 2.4.2, PushStage has been deprecated. For Streams 2.0.3 I was using the solution from this answer:
How does one close an Akka stream?
which was:
import akka.stream.stage._

val closeStage = new PushStage[Tpe, Tpe] {
  override def onPush(elem: Tpe, ctx: Context[Tpe]) = elem match {
    case elem if shouldCloseStream ⇒
      // println("stream closed")
      ctx.finish()
    case elem ⇒
      ctx.push(elem)
  }
}
How would I close a stream in 2.4.2 immediately, from inside a GraphStage / onPush() ?
Use something like this:
val closeStage = new GraphStage[FlowShape[Tpe, Tpe]] {
  val in = Inlet[Tpe]("closeStage.in")
  val out = Outlet[Tpe]("closeStage.out")
  override val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes) = new GraphStageLogic(shape) {
    setHandler(in, new InHandler {
      override def onPush() = grab(in) match {
        case elem if shouldCloseStream ⇒
          // println("stream closed")
          completeStage()
        case msg ⇒
          push(out, msg)
      }
    })
    setHandler(out, new OutHandler {
      override def onPull() = pull(in)
    })
  }
}
It is more verbose, but on the one hand you can define this logic in a reusable way, and on the other hand you no longer have to worry about differences between the stream elements, because the GraphStage can be handled the same way a flow would be handled:
val flow: Flow[Tpe] = ???
val newFlow = flow.via(closeStage)
Posting for other people's reference. sschaef's answer is correct procedurally, but the connection was kept open for a minute and eventually would time out and throw a "no activity" exception, closing the connection.
In reading the docs further, I noticed that the connection was closed when all upstream flows completed. In my case, I had more than one upstream.
For my particular use case, the fix was to add eagerComplete = true to close the stream as soon as any (rather than all) upstream completes. Something like:
... = builder.add(Merge[MyObj](3,eagerComplete = true))
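For context, here is a minimal self-contained sketch of what that looks like (the sources and names are illustrative, not from the original question): with eagerComplete = true the merged stream completes as soon as the finite input finishes, even though the other inputs never complete on their own.

import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ClosedShape}
import akka.stream.scaladsl._

object EagerCompleteDemo extends App {
  implicit val system = ActorSystem("demo")
  implicit val mat = ActorMaterializer()

  val graph = RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._
    // completes when ANY upstream completes (instead of waiting for all three)
    val merge = b.add(Merge[Int](3, eagerComplete = true))
    Source(1 to 5)    ~> merge.in(0)
    Source.maybe[Int] ~> merge.in(1)   // never completes on its own
    Source.maybe[Int] ~> merge.in(2)   // never completes on its own
    merge.out ~> Sink.foreach[Int](println)
    ClosedShape
  })

  graph.run()   // prints 1..5, then the whole stream completes
  // remember to terminate the ActorSystem in real code
}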
Hope this helps someone.

scala, transform a callback pattern to a functional style internal iterator

Suppose this API is given and we cannot change it:
object ProviderAPI {
trait Receiver[T] {
def receive(entry: T)
def close()
}
def run(r: Receiver[Int]) {
new Thread() {
override def run() {
(0 to 9).foreach { i =>
r.receive(i)
Thread.sleep(100)
}
r.close()
}
}.start()
}
}
In this example, ProviderAPI.run takes a Receiver, calls receive(i) 10 times and then closes. Typically, ProviderAPI.run would call receive(i) based on a collection which could be infinite.
This API is intended to be used in imperative style, like an external iterator. If our application needs to filter, map and print this input, we need to implement a Receiver which mixes all these operations:
object Main extends App {

  class MyReceiver extends ProviderAPI.Receiver[Int] {
    def receive(entry: Int) {
      if (entry % 2 == 0) {
        println("Entry#" + entry)
      }
    }
    def close() {}
  }

  ProviderAPI.run(new MyReceiver())
}
Now, the question is how to use the ProviderAPI in functional style, as an internal iterator (without changing the implementation of ProviderAPI, which is given to us). Note that ProviderAPI could also call receive(i) an infinite number of times, so it is not an option to collect everything in a list (also, we should handle each result one by one, instead of collecting all the input first and processing it afterwards).
I am asking how to implement such a ReceiverToIterator, so that we can use the ProviderAPI in functional style:
object Main extends App {
  val iterator = new ReceiverToIterator[Int]   // how to implement this?
  ProviderAPI.run(iterator)

  iterator
    .view
    .filter(_ % 2 == 0)
    .map("Entry#" + _)
    .foreach(println)
}
Update
Here are four solutions:
IteratorWithSemaphorSolution: The workaround solution I proposed first attached to the question
QueueIteratorSolution: Using the BlockingQueue[Option[T]] based on the suggestion of nadavwr.
It allows the producer to continue producing up to queueCapacity before being blocked by the consumer.
PublishSubjectSolution: Very simple solution, using PublishSubject from Netflix RxJava-Scala API.
SameThreadReceiverToTraversable: Very simple solution, by relaxing the constraints of the question
Updated: BlockingQueue of 1 entry
What you've implemented here is essentially Java's BlockingQueue, with a queue size of 1.
Main characteristic: uber-blocking. A slow consumer will kill your producer's performance.
Update: #gzm0 mentioned that BlockingQueue doesn't cover EOF. You'll have to use BlockingQueue[Option[T]] for that.
Update: Here's a code fragment. It can be made to fit with your Receiver.
Some of it inspired by Iterator.buffered. Note that peek is a misleading name, as it may block -- and so will hasNext.
// fairness enabled -- you probably want to preserve order...
// alternatively, disable fairness and increase buffer to be 'big enough'
private val queue = new java.util.concurrent.ArrayBlockingQueue[Option[T]](1, true)

// the following block provides you with a potentially blocking peek operation
// it should `queue.take` when the previous peeked head has been invalidated
// specifically, it will `queue.take` and block when the queue is empty
private var head: Option[T] = _
private var headDefined: Boolean = false

private def invalidateHead() { headDefined = false }

private def peek: Option[T] = {
  if (!headDefined) {
    head = queue.take()
    headDefined = true
  }
  head
}

def iterator = new Iterator[T] {
  // potentially blocking; only false upon taking `None`
  def hasNext = peek.isDefined

  // peeks and invalidates head; throws NoSuchElementException as appropriate
  def next: T = {
    val opt = peek; invalidateHead()
    if (opt.isEmpty) throw new NoSuchElementException
    else opt.get
  }
}
Alternative: Iteratees
Iterator-based solutions will generally involve more blocking. Conceptually, you could use continuations on the thread doing the iteration to avoid blocking the thread, but continuations mess with Scala's for-comprehensions, so no joy down that road.
Alternatively, you could consider an iteratee-based solution. Iteratees are different than iterators in that the consumer isn't responsible for advancing the iteration -- the producer is. With iteratees, the consumer basically folds over the entries pushed by the producer over time. Folding each next entry as it becomes available can take place in a thread pool, since the thread is relinquished after each fold completes.
You won't get nice for-syntax for iteration, and the learning curve is a little challenging, but if you feel confident using a foldLeft you'll end up with a non-blocking solution that does look reasonable on the eye.
To read more about iteratees, I suggest taking a peek at PlayFramework 2.X's iteratee reference. The documentation describes their stand-alone iteratee library, which is 100% usable outside the context of Play. Scalaz 7 also has a comprehensive iteratee library.
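To make the idea concrete without pulling in Play or Scalaz, here is a tiny hand-rolled sketch of the push/fold style (illustrative only, not a real iteratee API): the producer pushes entries into the receiver, and the receiver folds them into its state, so no consumer thread ever blocks waiting for the next element.

// Sketch: a Receiver that folds pushed entries into a state of type S.
trait FoldingReceiver[T, S] extends ProviderAPI.Receiver[T] {
  protected var state: S                 // current fold state (supplied by the instance)
  def step(s: S, t: T): S                // fold one pushed entry into the state
  def done(s: S): Unit                   // called once, when the producer closes
  final def receive(entry: T): Unit = { state = step(state, entry) }
  final def close(): Unit = done(state)
}

// Usage: print even entries as they arrive and count them.
ProviderAPI.run(new FoldingReceiver[Int, Int] {
  protected var state: Int = 0
  def step(count: Int, i: Int): Int =
    if (i % 2 == 0) { println("Entry#" + i); count + 1 } else count
  def done(count: Int): Unit = println("saw " + count + " even entries")
})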
IteratorWithSemaphorSolution
The first workaround solution that I proposed attached to the question.
I moved it here as an answer.
import java.util.concurrent.Semaphore

object Main extends App {
  val iterator = new ReceiverToIterator[Int]
  ProviderAPI.run(iterator)

  iterator
    .filter(_ % 2 == 0)
    .map("Entry#" + _)
    .foreach(println)
}

class ReceiverToIterator[T] extends ProviderAPI.Receiver[T] with Iterator[T] {

  var lastEntry: T = _
  var waitingToReceive = new Semaphore(1)
  var waitingToBeConsumed = new Semaphore(1)
  var eof = false

  waitingToReceive.acquire()

  def receive(entry: T) {
    println("ReceiverToIterator.receive(" + entry + "). START.")
    waitingToBeConsumed.acquire()
    lastEntry = entry
    waitingToReceive.release()
    println("ReceiverToIterator.receive(" + entry + "). END.")
  }

  def close() {
    println("ReceiverToIterator.close().")
    eof = true
    waitingToReceive.release()
  }

  def hasNext = {
    println("ReceiverToIterator.hasNext().START.")
    waitingToReceive.acquire()
    waitingToReceive.release()
    println("ReceiverToIterator.hasNext().END.")
    !eof
  }

  def next = {
    println("ReceiverToIterator.next().START.")
    waitingToReceive.acquire()
    if (eof) { throw new NoSuchElementException }
    val entryToReturn = lastEntry
    waitingToBeConsumed.release()
    println("ReceiverToIterator.next().END.")
    entryToReturn
  }
}
QueueIteratorSolution
The second workaround solution that I proposed attached to the question. I moved it here as an answer.
Solution using the BlockingQueue[Option[T]] based on the suggestion of nadavwr.
It allows the producer to continue producing up to queueCapacity before being blocked by the consumer.
I implement a QueueToIterator that uses an ArrayBlockingQueue with a given capacity.
BlockingQueue has a take() method, but no peek or hasNext, so I need an OptionNextToIterator as follows:
trait OptionNextToIterator[T] extends Iterator[T] {
  def getOptionNext: Option[T]   // abstract
  def hasNext = { ... }
  def next = { ... }
}
Note: I am using a synchronized block inside OptionNextToIterator, and I am not sure it is totally correct.
Solution:
import java.util.concurrent.ArrayBlockingQueue

object Main extends App {
  val receiverToIterator = new ReceiverToIterator[Int](queueCapacity = 3)
  ProviderAPI.run(receiverToIterator)
  Thread.sleep(3000)   // test that ProviderAPI.run can produce 3 items ahead before being blocked by the consumer
  receiverToIterator.filter(_ % 2 == 0).map("Entry#" + _).foreach(println)
}

class ReceiverToIterator[T](val queueCapacity: Int = 1) extends ProviderAPI.Receiver[T] with QueueToIterator[T] {
  def receive(entry: T) { queuePut(entry) }
  def close() { queueClose() }
}

trait QueueToIterator[T] extends OptionNextToIterator[T] {
  val queueCapacity: Int
  val queue = new ArrayBlockingQueue[Option[T]](queueCapacity)
  var queueClosed = false

  def queuePut(entry: T) {
    if (queueClosed) { throw new IllegalStateException("The queue has already been closed.") }
    queue.put(Some(entry))
  }

  def queueClose() {
    queueClosed = true
    queue.put(None)
  }

  def getOptionNext = queue.take
}
trait OptionNextToIterator[T] extends Iterator[T] {
  def getOptionNext: Option[T]

  var answerReady: Boolean = false
  var eof: Boolean = false
  var element: T = _

  def hasNext = {
    prepareNextAnswerIfNecessary()
    !eof
  }

  def next = {
    prepareNextAnswerIfNecessary()
    if (eof) { throw new NoSuchElementException }
    val retVal = element
    answerReady = false
    retVal
  }

  def prepareNextAnswerIfNecessary() {
    if (answerReady) {
      return
    }
    synchronized {
      getOptionNext match {
        case None => eof = true
        case Some(e) => element = e
      }
      answerReady = true
    }
  }
}
PublishSubjectSolution
A very simple solution using PublishSubject from Netflix RxJava-Scala API:
// libraryDependencies += "com.netflix.rxjava" % "rxjava-scala" % "0.20.7"
import rx.lang.scala.subjects.PublishSubject

class MyReceiver[T] extends ProviderAPI.Receiver[T] {
  val channel = PublishSubject[T]()
  def receive(entry: T) { channel.onNext(entry) }
  def close() { channel.onCompleted() }
}

object Main extends App {
  val myReceiver = new MyReceiver[Int]()
  ProviderAPI.run(myReceiver)
  myReceiver.channel.filter(_ % 2 == 0).map("Entry#" + _).subscribe { n => println(n) }
}
ReceiverToTraversable
This Stack Overflow question came up when I wanted to list and process an SVN repository using the svnkit.com API as follows:
SvnList svnList = new SvnOperationFactory().createList();
svnList.setReceiver(new ISvnObjectReceiver<SVNDirEntry>() {
    public void receive(SvnTarget target, SVNDirEntry dirEntry) throws SVNException {
        // do something with dirEntry
    }
});
svnList.run();
The API used a callback function, and I wanted to use a functional style instead, as follows:
svnList
  .filter(e => "pom.xml".compareToIgnoreCase(e.getName()) == 0)
  .map(_.getURL)
  .map(getMavenArtifact)
  .foreach(insertArtifact)
I thought of having a class ReceiverToIterator[T] extends ProviderAPI.Receiver[T] with Iterator[T], but this required the svnkit API to run in another thread.
That's why I asked how to solve this problem with a ProviderAPI.run method that runs in a new thread. But that was not very wise: if I had explained the real case, someone might have found a better solution sooner.
Solution
If we tackle the real problem (so there is no need to use a thread for the svnkit),
a simpler solution is to implement a scala.collection.Traversable instead of a scala.collection.Iterator.
While Iterator requires next and hasNext defs, Traversable requires a foreach def,
which is very similar to the svnkit callback!
Note that by using view, we make the transformers lazy, so elements are passed one by one through the whole chain to foreach(println).
This allows processing an infinite collection.
object ProviderAPI {

  trait Receiver[T] {
    def receive(entry: T)
    def close()
  }

  // Later I found out that I don't need a thread
  def run(r: Receiver[Int]) {
    (0 to 9).foreach { i => r.receive(i); Thread.sleep(100) }
  }
}

object Main extends App {
  new ReceiverToTraversable[Int](r => ProviderAPI.run(r))
    .view
    .filter(_ % 2 == 0)
    .map("Entry#" + _)
    .foreach(println)
}

class ReceiverToTraversable[T](val runProducer: (ProviderAPI.Receiver[T] => Unit)) extends Traversable[T] {
  override def foreach[U](f: (T) => U) = {
    object MyReceiver extends ProviderAPI.Receiver[T] {
      def receive(entry: T) = f(entry)
      def close() = {}
    }
    runProducer(MyReceiver)
  }
}

Scala Actors + Console.withOut possible bug

I found some strange behavior when Console.withOut is used within an actor. For this code:
case object I

val out = new PipedOutputStream
val pipe = new PipedInputStream(out)

def read: String = ??? // read from the `pipe` stream (implementation in the UPDATE below)

class A extends Actor {
  var b: Actor = _

  Console.withOut(out) {
    b = actor { loop { self react {
      case I => println("II")
    }}}
  }

  def act = {
    loop { self react {
      case I =>
        println("I")
        b ! I
    }}
  }
}

def main(args: Array[String]): Unit = {
  val a = new A
  a.start
  a ! I
  Thread sleep 100
  println("!!\n" + read + "!!")
}
I got the following output:
!!
I
II
!!
Any idea why the output from actor A's act method is also redirected? Thank you for your answers.
UPDATE:
Here is read function:
@tailrec
def read(instream: InputStream, acc: List[Char] = Nil): String =
  if (instream.available > 0) read(instream, acc :+ instream.read.toChar)
  else acc mkString ""

def read: String = read(pipe)
It seems to me, on the contrary, that neither actor has its output redirected, since withOut will have finished executing long before println("II") is called. Since this is all based on DynamicVariable, however, I'm not willing to bet on it. :-) The absence of working code precludes any testing as well.
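For what it's worth, the DynamicVariable mechanism can be probed in isolation (a small sketch, independent of actors): Console.withOut is backed by a scala.util.DynamicVariable, which stores its value in an InheritableThreadLocal, so a thread constructed while withValue (or withOut) is active inherits the bound value even after the block exits, while a thread constructed afterwards does not.

import scala.util.DynamicVariable

object DynVarProbe extends App {
  val dyn = new DynamicVariable[String]("outer")

  dyn.withValue("inner") {
    // constructed (and therefore inheriting) while the value is bound
    new Thread { override def run() = println("spawned inside:  " + dyn.value) }.start()
  }

  // constructed after withValue has restored the previous value
  new Thread { override def run() = println("spawned outside: " + dyn.value) }.start()

  Thread.sleep(100)
}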