I'm doing some basic tests with a queue, and I want to add a service time before dequeuing each element.
The service time is not fixed: it might have a mean of, say, 3 seconds, but it may vary from element to element.
Can someone provide some enlightenment?
Here is what I am trying to do: for element 1 it may take 3s, for element 2 it may take 5s, for element 3 it may take 4s, and so on.
The idea is that the mean service time is, for example, 4 seconds.
import cats.effect.{ExitCode, IO, IOApp, Timer}
import fs2._
import fs2.concurrent.Queue
import scala.concurrent.duration._

class StreamTypeIntToDouble(q: Queue[IO, (Double, String)])(
    implicit timer: Timer[IO]
) {

  def storeInQueue: Stream[IO, Unit] = {
    Stream(1, 2, 3, 4)
      .covary[IO]
      .evalTap(n => IO.delay(println(s"Pushing $n to Queue")))
      .map { n =>
        (n.toDouble, "Service")
      }
      .through(q.enqueue)
  }

  def getFromQueue: Stream[IO, Unit] = {
    q.dequeue.evalMap(n => IO.delay(println(s"Pulling from queue $n")))
  }
}

object Five extends IOApp {
  override def run(args: List[String]): IO[ExitCode] = {
    val program = for {
      q <- Queue.bounded[IO, (Double, String)](10)
      b = new StreamTypeIntToDouble(q)
      _ <- b.storeInQueue.compile.drain.start
      _ <- b.getFromQueue.compile.drain
    } yield ()
    program.as(ExitCode.Success)
  }
}
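One way this could be done is a minimal sketch along the following lines, assuming fs2 2.x and cats-effect 2 as in the code above: sleep for a randomly drawn service time on the dequeue side before handling each element. The serviceTime helper and its 1-5 second range are illustrative assumptions, not part of the original code.

import cats.effect.{IO, Timer}
import cats.implicits._
import fs2.Stream
import fs2.concurrent.Queue
import scala.concurrent.duration._
import scala.util.Random

// Hypothetical helper: draw a service time roughly around a 3-second mean
// (uniform between 1 and 5 seconds here, purely for illustration).
def serviceTime: FiniteDuration = Random.between(1, 6).seconds

def getFromQueue(q: Queue[IO, (Double, String)])(implicit timer: Timer[IO]): Stream[IO, Unit] =
  q.dequeue.evalMap { n =>
    // Sleep for the per-element service time before "serving" the element.
    timer.sleep(serviceTime) *> IO.delay(println(s"Pulling from queue $n"))
  }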
As mentioned in the jump-start guide, mapN should run all the futures in parallel, so I created the simple Scala program below. However, a sample run shows diff to be 9187 ms and diffN to be 9106 ms, so it looks like mapN is also running the futures sequentially. Am I missing something?
package example

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import java.time.LocalDateTime
import java.time.Duration
import scala.util.Failure
import scala.util.Success
import java.time.ZoneOffset
import cats.instances.future._
import cats.syntax.apply._

object FutureEx extends App {

  val before = LocalDateTime.now()
  val sum = for {
    a <- getA
    b <- getB
    c <- getC
  } yield (a + b + c)

  sum onComplete {
    case Failure(ex) => println(s"Error: ${ex.getMessage()}")
    case Success(value) =>
      val after = LocalDateTime.now()
      println(s"before: $before")
      println(s"after: $after")
      val diff = getDiff(before, after)
      println(s"diff: $diff")
      println(s"sum: $value")
  }

  // let the above finish
  Thread.sleep(20000)

  val beforeN = LocalDateTime.now()
  val usingMapN = (getA, getB, getC).mapN(add)
  usingMapN onComplete {
    case Failure(ex) => println(s"Error: ${ex.getMessage()}")
    case Success(value) =>
      val afterN = LocalDateTime.now()
      println(s"beforeN: $beforeN")
      println(s"afterN: $afterN")
      val diff = getDiff(beforeN, afterN)
      println(s"diffN: $diff")
      println(s"sum: $value")
  }

  def getA: Future[Int] = {
    println("inside A")
    Thread.sleep(3000)
    Future.successful(2)
  }

  def getB: Future[Int] = {
    println("inside B")
    Thread.sleep(3000)
    Future.successful(3)
  }

  def getC: Future[Int] = {
    println("inside C")
    Thread.sleep(3000)
    Future.successful(4)
  }

  def add(a: Int, b: Int, c: Int) = a + b + c

  def getDiff(before: LocalDateTime, after: LocalDateTime): Long = {
    Duration.between(before.toInstant(ZoneOffset.UTC), after.toInstant(ZoneOffset.UTC)).toMillis()
  }
}
Because you have the sleep outside the Future, it should be:
def getA: Future[Int] = Future {
  println("inside A")
  Thread.sleep(3000)
  2
}
That way you start an asynchronous Future with apply. Future.successful, on the other hand, wraps an already computed value, which means the sleep executes on the calling thread before the Future is even created.
The time is spent before mapN is ever called.
(getA, getB, getC).mapN(add)
This expression creates the tuple (sequentially) and then calls mapN on it. So it calls getA, then getB, then getC, and since each of them blocks for 3 seconds before returning its Future, the 9 seconds are spent before mapN even runs.
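Putting both answers together, a hedged sketch of how the methods could look so that mapN actually observes parallel execution. The object name and the timing comments are my own assumptions, not measured output; the imports mirror the ones in the question.

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import cats.instances.future._
import cats.syntax.apply._

object ParallelFutureEx extends App {
  // Each call now returns immediately; the sleep runs on the execution context.
  def getA: Future[Int] = Future { Thread.sleep(3000); 2 }
  def getB: Future[Int] = Future { Thread.sleep(3000); 3 }
  def getC: Future[Int] = Future { Thread.sleep(3000); 4 }

  def add(a: Int, b: Int, c: Int) = a + b + c

  // Building the tuple still calls getA, getB, getC one after another, but all
  // three futures are already running, so mapN finishes after roughly 3 seconds
  // instead of 9.
  val start = System.currentTimeMillis()
  val sum = Await.result((getA, getB, getC).mapN(add), 10.seconds)
  println(s"sum: $sum, elapsed: ${System.currentTimeMillis() - start} ms")
}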
I'm exploring the fs2 library and, for testing, I have a Stream[IO, Int] whose elements are pushed into a Queue[IO, (Double, String, String, String)]. I need to record the entry time to the queue as well as the exit time.
The problem is that the processing time may vary among elements; to simulate this I thought of using .metered(Random.between(1, 10).seconds).
The issue is that I cannot figure out how to store the entry time at the moment of enqueuing and the exit time after the delay introduced by .metered(Random.between(1, 10).seconds).
Here is what I have tried:
import cats.effect.{ExitCode, IO, IOApp, Timer}
import fs2._
import fs2.concurrent.Queue
import scala.concurrent.duration._
import scala.util.Random

class StreamTypeIntToDouble(q: Queue[IO, (Double, String, String, String)])(
    implicit timer: Timer[IO]
) {
  import core.Processing._

  def storeInQueue: Stream[IO, Unit] = {
    Stream(1, 2, 3)
      .covary[IO]
      .evalTap(n => IO.delay(println(s"Pushing $n to Queue")))
      .map { n =>
        val entryTime = currentTimeNow
        val exitTime = currentTimeNow
        (n.toDouble, "Service", entryTime, exitTime)
      }
      .metered(Random.between(1, 20).seconds)
      .through(q.enqueue)
    //.metered((Random.between(1, 10).seconds))
  }

  def getFromQueue: Stream[IO, Unit] = {
    q.dequeue
      .evalMap(n => IO.delay(println(s"Pulling from queue $n")))
  }
}

object Five extends IOApp {
  override def run(args: List[String]): IO[ExitCode] = {
    val program = for {
      q <- Queue.bounded[IO, (Double, String, String, String)](10)
      b = new StreamTypeIntToDouble(q)
      _ <- b.storeInQueue.compile.drain.start
      _ <- b.getFromQueue.compile.drain
    } yield ()
    program.as(ExitCode.Success)
  }
}
The method used to get the current time is implemented as follows:

import java.text.SimpleDateFormat
import java.util.Calendar

def currentTimeNow: String = {
  val format = new SimpleDateFormat("dd-MM-yy hh:mm:ss")
  format.format(Calendar.getInstance().getTime())
}
Question: how can I keep track of the entry and exit times in order to construct the new output stream?
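One possible approach, as a minimal sketch assuming fs2 2.x and cats-effect 2 as in the code above: enqueue only the entry time, and stamp the exit time when the element is dequeued, using the Clock available through Timer[IO]. The epoch-millis timestamps and the helper signatures are assumptions for illustration, not part of the original code.

import cats.effect.{IO, Timer}
import fs2.Stream
import fs2.concurrent.Queue
import scala.concurrent.duration._
import scala.util.Random

// Each element carries its entry time (epoch millis); the exit time is taken on dequeue.
def storeInQueue(q: Queue[IO, (Double, String, Long)])(implicit timer: Timer[IO]): Stream[IO, Unit] =
  Stream(1, 2, 3)
    .covary[IO]
    .evalMap { n =>
      timer.clock.realTime(MILLISECONDS).map(entry => (n.toDouble, "Service", entry))
    }
    .metered(Random.between(1, 10).seconds) // simulated variable processing time
    .through(q.enqueue)

def getFromQueue(q: Queue[IO, (Double, String, Long)])(implicit timer: Timer[IO]): Stream[IO, (Double, String, Long, Long)] =
  q.dequeue.evalMap { case (n, service, entry) =>
    for {
      exit <- timer.clock.realTime(MILLISECONDS)
      _    <- IO.delay(println(s"$service $n: entered at $entry, exited at $exit (${exit - entry} ms in queue)"))
    } yield (n, service, entry, exit)
  }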
I'd like to measure elapsed time inside an IO container. It's relatively easy to do with plain calls or futures (e.g. something like the code below):
import org.scalatest.{FunSuite, Matchers}
import org.scalatest.concurrent.ScalaFutures
import org.scalatest.time.{Milliseconds, Span}
import scala.concurrent.Future

class MonitoringComponentSpec extends FunSuite with Matchers with ScalaFutures {
  import scala.concurrent.ExecutionContext.Implicits.global

  def meter[T](future: Future[T]): Future[T] = {
    val start = System.currentTimeMillis()
    future.onComplete(_ => println(s"Elapsed ${System.currentTimeMillis() - start}ms"))
    future
  }

  def call(): Future[String] = Future {
    Thread.sleep(500)
    "hello"
  }

  test("metered call") {
    whenReady(meter(call()), timeout(Span(550, Milliseconds))) { s =>
      s should be("hello")
    }
  }
}
But I'm not sure how to wrap an IO call:
def io_meter[T](effect: IO[T]): IO[T] = {
  val start = System.currentTimeMillis()
  ???
}

def io_call(): IO[String] = IO.pure {
  Thread.sleep(500)
  "hello"
}

test("metered io call") {
  whenReady(meter(call()), timeout(Span(550, Milliseconds))) { s =>
    s should be("hello")
  }
}
Thank you!
Cats Effect has a Clock type class that allows pure time measurement, as well as injecting your own implementation for testing when you just want to simulate the passing of time. The example from the documentation is:
def measure[F[_], A](fa: F[A])
                    (implicit F: Sync[F], clock: Clock[F]): F[(A, Long)] = {
  for {
    start  <- clock.monotonic(MILLISECONDS)
    result <- fa
    finish <- clock.monotonic(MILLISECONDS)
  } yield (result, finish - start)
}
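A hedged usage sketch for the snippet above, assuming cats-effect 2.x: the explicit Clock.create call and the rewritten io_call are my own additions (inside an IOApp, a Clock[IO] is already derivable from the implicit Timer[IO]).

import cats.effect.{Clock, IO, Sync}
import cats.implicits._                        // measure's for-comprehension needs cats syntax for F
import scala.concurrent.duration.MILLISECONDS  // the time unit used inside measure

// An explicit Clock[IO] for use outside an IOApp.
implicit val clock: Clock[IO] = Clock.create[IO]

// io_call from the question, but with the sleep suspended in IO:
// IO.pure would evaluate the block (and the sleep) eagerly, at construction time.
def io_call(): IO[String] = IO { Thread.sleep(500); "hello" }

val (result, elapsedMs) = measure(io_call()).unsafeRunSync()
println(s"$result took roughly $elapsedMs ms")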
In Cats Effect 3, you can use .timed. For example:
import cats.effect.IO
import cats.effect.unsafe.implicits.global
import cats.implicits._
import scala.concurrent.duration._
val twoSecondsLater = IO.sleep(2.seconds) *> IO.println("Hi")
val elapsedTime = twoSecondsLater.timed.map(_._1)
elapsedTime.unsafeRunSync()
// would give something like this.
Hi
res0: FiniteDuration = 2006997667 nanoseconds
I want my stream to fail if the time between finishing the processing of one element and beginning the processing of the next element exceeds a specific amount.
None of the current timeout methods seem to deal with this case. How would I do this?
This is the closest I have come to a solution (try it out here):
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import concurrent.duration._
import concurrent.Await
import concurrent.ExecutionContext.Implicits.global
import concurrent.Future

implicit class StreamTakeWithinTime[Out, Mat](src: Source[Out, Mat]) {
  // keepAlive injects a None whenever no element has arrived within maxIdleTime;
  // takeWhile then completes the stream on the first None, and collect unwraps
  // the real elements.
  def takeWithinTime(maxIdleTime: FiniteDuration): Source[Out, Mat] =
    src
      .map(Option.apply)
      .keepAlive(maxIdleTime, () => None)
      .takeWhile {
        case Some(_) => true
        case None    => false
      }
      .collect {
        case Some(x) => x
      }
}

implicit val actorSystem = ActorSystem("test")
implicit val actorMaterializer = ActorMaterializer()

var delay = 0
def tick = {
  delay += 500
  Thread.sleep(delay)
  "tick"
}

val maxIdleTime = 2.seconds

val pipeline = Source
  .fromIterator(() =>
    new Iterator[String] {
      override def hasNext: Boolean = true
      override def next(): String = tick
    })
  .map { s =>
    println("Long processing function...")
    Thread.sleep(3000)
    s
  }
  .takeWithinTime(maxIdleTime)

val res = Await.result(pipeline.runForeach(println), 30.seconds)

println("done")
which prints:
Long processing function...
tick
Long processing function...
tick
Long processing function...
tick
Long processing function...
done
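Since the question asks for the stream to fail rather than complete, a hedged variant of the same keepAlive trick could throw instead of ending the stream; the failOnIdle name and the TimeoutException are my own, not part of the original answer.

import akka.stream.scaladsl.Source
import scala.concurrent.duration.FiniteDuration
import java.util.concurrent.TimeoutException

implicit class StreamFailWithinTime[Out, Mat](src: Source[Out, Mat]) {
  // Same idea as takeWithinTime, but the injected None is turned into a failure,
  // so downstream sees a TimeoutException instead of normal completion.
  def failOnIdle(maxIdleTime: FiniteDuration): Source[Out, Mat] =
    src
      .map(Option.apply)
      .keepAlive(maxIdleTime, () => None)
      .map {
        case Some(x) => x
        case None    => throw new TimeoutException(s"no element for $maxIdleTime")
      }
}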
The objective of this code is to take a sequence of futures, process them with Future.sequence, generate another sequence of futures, and process that again with another Future.sequence.
The problem is that it doesn't print anything. What's wrong with this code?
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object TestFutures extends App {
  def future(i: Int) = Future { i }
  def future2(i: Int) = Future { i * 20 }

  val futureResult = (1 to 10).map { x =>
    future(x)
  }

  var futureInts = Seq[Future[Int]]()
  Future.sequence(futureResult).map { list =>
    list.foreach(y => futureInts = futureInts :+ future2(y))
  }

  Future.sequence(futureInts).map { list2 =>
    list2.foreach(z => println(z))
  }

  Thread.sleep(5000)
}
It does work; you just have a race condition between the Thread.sleep and the futures finishing execution (the second Future.sequence runs on a still-empty futureInts before the first one has populated it). You can wait for the futures' completion with Await.result:
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits._
import scala.concurrent.duration._

def future(i: Int) = Future.successful(i)
def future2(i: Int) = Future.successful(i * 20)

val futureResult = (1 to 10).map(x => future(x))

val firstResult =
  Future
    .sequence(futureResult)
    .flatMap(list => Future.sequence(list.map(future2)))

val sequence = Await.result(firstResult, 10.seconds)
sequence.foreach(println)
Note that you should not synchronously block on futures in production code; this is merely to demonstrate that the futures do what you want.
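For completeness, a hedged sketch of a fully non-blocking variant of the same composition, using onComplete instead of Await; the names mirror the answer above.

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

def future(i: Int) = Future.successful(i)
def future2(i: Int) = Future.successful(i * 20)

val firstResult: Future[Seq[Int]] =
  Future
    .sequence((1 to 10).map(future))
    .flatMap(list => Future.sequence(list.map(future2)))

// React to the result asynchronously instead of blocking the calling thread.
firstResult.onComplete {
  case Success(values) => values.foreach(println)
  case Failure(ex)     => println(s"Error: ${ex.getMessage}")
}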