I am trying to measure the time spent on each request so I can send it to a remote system in my Play application.
The TimeAction case class below works well with asynchronous actions but not with synchronous ones.
//Helper
case class TimeAction(label: String) {
def logging[A](action: Action[A])(implicit ec: ExecutionContext) =
Action.async(action.parser) { request =>
MetricLogger.timeFuture(s"controllers.action.$label") {
action(request)
}
}
}
//controller
def test =
TimeAction("toto.test.sync").logging {
Action { _ =>
Thread.sleep(4000)
Ok
}
}
// expected output 4003 milliseconds but got 3 milliseconds
def testAsync =
TimeAction("toto.test.async").logging {
Action.async { _ =>
Future {
Thread.sleep(4000)
Ok
}
}
}
// got 4003 milliseconds, as expected
//calculate time elapsed during a future
def timeImpl[T](f: Future[T],
onError: (Duration, Throwable) => Unit,
onSuccess: (Duration, T) => Unit)(implicit ec: ExecutionContext): Future[T] = {
val start = System.nanoTime
f.onComplete {
case Success(d) => onSuccess(calculateTimeDiff(start, System.nanoTime()), d)
case Failure(e) => onError(calculateTimeDiff(start, System.nanoTime()), e)
}
f
}
I suppose the action is executed before it gets to my TimeAction and is only wrapped in an instant (already-completed) Future. Do you know where? What am I missing?
I am using Play 2.3
Related
I'm trying to wrap my head around how async tasks are chained together, for example Futures and flatMap:
val a: Future[Int] = Future { 123 }
val b: Future[Int] = a.flatMap(a => Future { a + 321 })
How would I implement something similar, where I build a new Future by waiting for a first one and applying a function f to its result? I tried looking at Scala's source code but got stuck here: https://github.com/scala/scala/blob/2.13.x/src/library/scala/concurrent/Future.scala#L216
I would imagine some code like:
def myFlatMap(a: Future[Int])(f: Int => Future[Int]): Future[Int] = {
Future {
// somehow wait for a to complete and then apply 'f'
a.onComplete {
case Success(x) => f(x)
}
}
}
But I guess the above will return a Future[Unit] instead. I would just like to understand the "pattern" of how async tasks are chained together.
(Disclaimer: I'm the main maintainer of Scala Futures)
You wouldn't want to implement flatMap/recoverWith/transformWith yourself—it is a very complex feature to implement safely (stack-safe, memory-safe, and concurrency-safe).
You can see what I am talking about here:
https://github.com/scala/scala/blob/2.13.x/src/library/scala/concurrent/impl/Promise.scala#L442
https://github.com/scala/scala/blob/2.13.x/src/library/scala/concurrent/impl/Promise.scala#L307
https://github.com/scala/scala/blob/2.13.x/src/library/scala/concurrent/impl/Promise.scala#L274
There is more reading on the topic here: https://viktorklang.com/blog/Futures-in-Scala-2.12-part-9.html
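For intuition only, here is a deliberately naive sketch of the Promise-based pattern behind flatMap (nothing like the stack-safe machinery linked above, and the helper name is mine):
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.util.{Failure, Success}

// Naive sketch: complete a Promise from inside onComplete callbacks.
// Not stack-safe, and f(a) is not guarded against throwing.
def naiveFlatMap[A, B](fa: Future[A])(f: A => Future[B])(implicit ec: ExecutionContext): Future[B] = {
  val p = Promise[B]()
  fa.onComplete {
    case Success(a) => f(a).onComplete(p.complete)
    case Failure(e) => p.failure(e)
  }
  p.future
}
The real implementation additionally trampolines callbacks and turns a throwing f(a) into a failed Future, which is exactly the complexity referred to above.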
I found that looking at the source for Scala 2.12 was easier for me to understand how Future/Promise is implemented. I learned that a Future is in fact a Promise under the hood, and by looking at the implementation I now understand how "chaining" async tasks can be implemented.
I'm posting the code that I wrote to help me better understand what was going on under the hood.
import java.util.concurrent.{ScheduledThreadPoolExecutor, TimeUnit}
import scala.collection.mutable.ListBuffer
trait MyAsyncTask[T] {
type Callback = T => Unit
var result: Option[T]
// register a new callback to be called when task is completed.
def onComplete(callback: Callback): Unit
// complete the given task, and invoke all registered callbacks.
def complete(value: T): Unit
// create a new task, that will wait for the current task to complete and apply
// the provided map function, and complete the new task when completed.
def flatMap[B](f: T => MyAsyncTask[B]): MyAsyncTask[B] = {
val wrapper = new DefaultAsyncTask[B]()
onComplete { x =>
f(x).onComplete { y =>
wrapper.complete(y)
}
}
wrapper
}
def map[B](f: T => B): MyAsyncTask[B] = flatMap(x => CompletedAsyncTask(f(x)))
}
/**
* task with fixed pre-calculated result.
*/
case class CompletedAsyncTask[T](value: T) extends MyAsyncTask[T] {
override var result: Option[T] = Some(value)
override def onComplete(callback: Callback): Unit = {
// we already have the result just call the callback.
callback(value)
}
override def complete(value: T): Unit = () // noop nothing to complete.
}
class DefaultAsyncTask[T] extends MyAsyncTask[T] {
override var result: Option[T] = None
var isCompleted = false
var listeners = new ListBuffer[Callback]()
/**
* register callback, to be called when task is completed.
*/
override def onComplete(callback: Callback): Unit = {
if (isCompleted) {
// already completed just invoke callback
callback(result.get)
} else {
// add the listener
listeners.addOne(callback)
}
}
/**
* trigger all registered callbacks to `onComplete`
*/
override def complete(value: T): Unit = {
result = Some(value)
// mark as completed so callbacks registered later are invoked immediately
isCompleted = true
listeners.foreach { listener =>
listener(value)
}
}
}
object MyAsyncTask {
def apply[T](body: (T => Unit) => Unit): MyAsyncTask[T] = {
val task = new DefaultAsyncTask[T]()
// pass in `complete` as callback.
body(task.complete)
task
}
}
object MyAsyncFlatMap {
/**
* helper to simulate async task.
*/
def delayedCall(body: => Unit, delay: Int): Unit = {
val scheduler = new ScheduledThreadPoolExecutor(1)
val run = new Runnable {
override def run(): Unit = {
body
}
}
scheduler.schedule(run, delay, TimeUnit.SECONDS)
}
def main(args: Array[String]): Unit = {
val getAge = MyAsyncTask[Int] { cb =>
delayedCall({ cb(66) }, 1)
}
val getName = MyAsyncTask[String] { cb =>
delayedCall({ cb("John") }, 2)
}
// same as: getAge.flatMap(age => getName.map(name => (age, name)))
val result: MyAsyncTask[(Int, String)] = for {
age <- getAge
name <- getName
} yield (age, name)
result.onComplete {
case (age, name) => println(s"hello $name $age")
}
}
}
In my Play application, I service my requests using cats-effect's IO instead of Future in the controller, like this (super-simplified):
def handleServiceResult(serviceResult: ServiceResult): Result = ...
def serviceMyRequest(request: Request): IO[ServiceResult] = ...
def myAction = Action { request =>
handleServiceResult(
serviceMyRequest(request).unsafeRunSync()
)
}
Requests are then processed (asynchronously) on Play's default thread pool. Now, I want to implement multiple thread pools to handle different sorts of requests. Were I using Futures, I could do this:
val myCustomExecutionContext: ExecutionContext = ...
def serviceMyRequest(request: Request): Future[ServiceResult] = ...
def myAction = Action.async { request =>
Future(serviceMyRequest(request))(myCustomExecutionContext)
.map(handleServiceResult)(defaultExecutionContext)
}
But I'm not using Futures; I'm using IO, and I'm not sure about the right way to go about implementing it. This looks promising, but seems a bit clunky:
def serviceMyRequest(request: Request): IO[ServiceResult] = ...
def myAction = Action { request =>
val ioServiceResult = for {
_ <- IO.shift(myCustomExecutionContext)
serviceResult <- serviceMyRequest(request)
_ <- IO.shift(defaultExecutionContext)
} yield {
serviceResult
}
handleServiceResult(ioServiceResult.unsafeRunSync())
}
Is this the right way to implement it? Is there a best practice here? Am I screwing up badly? Thanks.
Ok, so since this doesn't seem to be well-trodden ground, this is what I ended up implementing:
trait PlayIO { self: BaseControllerHelpers =>
implicit class IOActionBuilder[A](actionBuilder: ActionBuilder[Request, A]) {
def io(block: Request[A] => IO[Result]): Action[A] = {
actionBuilder.apply(block.andThen(_.unsafeRunSync()))
}
def io(executionContext: ExecutionContext)(block: Request[A] => IO[Result]): Action[A] = {
val shiftedBlock = block.andThen(IO.shift(executionContext) *> _ <* IO.shift(defaultExecutionContext))
actionBuilder.apply(shiftedBlock.andThen(_.unsafeRunSync()))
}
}
}
Then (using the framework from the question) if I mix PlayIO into the controller, I can do this,
val myCustomExecutionContext: ExecutionContext = ...
def handleServiceResult(serviceResult: ServiceResult): Result = ...
def serviceMyRequest(request: Request): IO[ServiceResult] = ...
def myAction = Action.io(myCustomExecutionContext) { request =>
serviceMyRequest(request).map(handleServiceResult)
}
such that I execute the action's code block on myCustomExecutionContext and then, once complete, thread-shift back to Play's default execution context.
Update:
This is a bit more flexible:
trait PlayIO { self: BaseControllerHelpers =>
implicit class IOActionBuilder[R[_], A](actionBuilder: ActionBuilder[R, A]) {
def io(block: R[A] => IO[Result]): Action[A] = {
actionBuilder.apply(block.andThen(_.unsafeRunSync()))
}
def io(executionContext: ExecutionContext)(block: R[A] => IO[Result]): Action[A] = {
if (executionContext == defaultExecutionContext) io(block) else {
val shiftedBlock = block.andThen(IO.shift(executionContext) *> _ <* IO.shift(defaultExecutionContext))
io(shiftedBlock)
}
}
}
}
Update2:
Per the comment above, this will ensure we always shift back to the default thread pool:
trait PlayIO { self: BaseControllerHelpers =>
implicit class IOActionBuilder[R[_], A](actionBuilder: ActionBuilder[R, A]) {
def io(block: R[A] => IO[Result]): Action[A] = {
actionBuilder.apply(block.andThen(_.unsafeRunSync()))
}
def io(executionContext: ExecutionContext)(block: R[A] => IO[Result]): Action[A] = {
if (executionContext == defaultExecutionContext) io(block) else {
val shiftedBlock = block.andThen { ioResult =>
IO.shift(executionContext).bracket(_ => ioResult)(_ => IO.shift(defaultExecutionContext))
}
io(shiftedBlock)
}
}
}
}
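For reference, if the cats-effect version in use provides IO#guarantee, the always-shift-back behavior can be written without bracket. A standalone sketch with hypothetical names:
import cats.effect.IO
import cats.syntax.apply._
import scala.concurrent.ExecutionContext

// Sketch only: shift to the worker pool, run the IO, and always shift back
// to the default pool afterwards, whether the IO succeeds or fails.
def onPoolThenShiftBack[A](workerEc: ExecutionContext, defaultEc: ExecutionContext)(io: IO[A]): IO[A] =
  (IO.shift(workerEc) *> io).guarantee(IO.shift(defaultEc))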
private def responseValidationFlow[T](responsePair: ResponsePair)(implicit evidence: FromByteStringUnmarshaller[T]) = responsePair match {
case (Success(response), _) => {
response.entity.dataBytes
.via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 8192))
.mapAsyncUnordered(Runtime.getRuntime.availableProcessors()) { body =>
if (response.status == OK) {
val obj: Future[T] = Unmarshal(body).to[T]
obj.foreach(x => log.debug("Received {}: {}.", x.getClass.getSimpleName, x))
obj.map(Right(_))
} else {
val reason = body.utf8String
log.error("Non 200 response status: {}, body: {}.", response.status.intValue(), reason)
Future.successful(reason)
.map(Left(_))
}
}
}
case (Failure(t), _) => {
Source.single(Left(t.getMessage))
}
}
What I'd like to do is parameterize both sides of the Either. That's not hard to do, but what I'm having trouble with is creating a Left or Right that doesn't have a value. In that case, the body should be consumed fully and discarded. I tried using ClassTags, but the compiler thinks that the type is Any, not S or T. A sample invocation of this method would look like responseValidationFlow[String, Unit], producing a Source[Either[String, Unit]].
I believe you can just define an implicit unmarshaller to Unit in scope:
implicit val unitUnmarshaller: FromByteStringUnmarshaller[Unit] =
Unmarshaller.strict(_ => ())
Based on what @Kolmar suggested, here is the working code.
private def responseValidationFlow[L, R](responsePair: ResponsePair)(
implicit ev1: FromByteStringUnmarshaller[L], ev2: FromByteStringUnmarshaller[R]
): Source[Either[L, R], Any] = {
responsePair match {
case (Success(response), _) => {
response.entity.dataBytes
.via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 8192))
.mapAsyncUnordered(Runtime.getRuntime.availableProcessors()) { body =>
if (response.status == OK) {
val obj: Future[R] = Unmarshal(body).to[R]
obj.foreach(x => log.debug("Received {}.", x.getClass.getSimpleName))
obj.map(Right(_))
} else {
log.error("Non 200 response status: {}.", response.status.intValue())
Unmarshal(body).to[L]
.map(Left(_))
}
}
}
case (Failure(t), _) => {
log.error(t, "Request failed.")
Source.empty
}
}
}
If the method is invoked like responseValidationFlow[Status, Unit], then a FromByteStringUnmarshaller[Unit] is made available at the call site. The compiler uses the implicit evidence to find the required Unmarshaller.
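For completeness, a sketch of the kind of instances one might keep in scope at the call site (the String instance here is illustrative, not part of the original code):
import akka.http.scaladsl.unmarshalling.{FromByteStringUnmarshaller, Unmarshaller}

object ResponseUnmarshallers {
  // Illustrative instances; the compiler picks these up when resolving the
  // implicit evidence for responseValidationFlow[String, Unit].
  implicit val stringUnmarshaller: FromByteStringUnmarshaller[String] =
    Unmarshaller.strict(_.utf8String)

  implicit val unitUnmarshaller: FromByteStringUnmarshaller[Unit] =
    Unmarshaller.strict(_ => ())
}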
Suppose I have the following code:
val futureInt1 = getIntAsync1();
val futureInt2 = getIntAsync2();
val futureSum = for {
int1 <- futureInt1
int2 <- futureInt2
} yield (int1 + int2)
val sum = Await.result(futureSum, 60 seconds)
Now suppose one of getIntAsync1 or getIntAsync2 takes a very long time, and it leads to Await.result throwing an exception:
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [60 seconds]
How am I supposed to know which one of getIntAsync1 or getIntAsync2 was still pending and actually led to the timeout?
Note that here I'm merging two futures with zip, and this is a simple example for the question, but in my app I have this kind of code at different levels (i.e. getIntAsync1 itself can use Future.zip or Future.sequence, map/flatMap/applicative).
Somehow, what I'd like is to be able to log the stack traces of the pending concurrent operations when a timeout occurs on my main thread, so that I can know where the bottlenecks are in my system.
I have an existing legacy API backend which is not fully reactive yet and won't be so soon. I'm trying to improve response times by using concurrency. But since using this kind of code, it has become way more painful to understand why something takes a lot of time in my app. I would appreciate any tips you can provide to help me debug such issues.
The key is realizing that the Future doesn't time out in your example; it's your calling thread which pauses for at most X time.
So, if you want to model time in your Futures you should use zipWith on each branch and zip with a Future which will contain a value within a certain amount of time. If you use Akka then you can use akka.pattern.after for this, together with Future.firstCompletedOf.
Now, even if you do, how do you figure out why any of your futures weren't completed in time? Perhaps they depended on other futures which didn't complete.
The question boils down to: Are you trying to do some root-cause analysis on throughput? Then you should monitor your ExecutionContext, not your Futures. Futures are only values.
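As a minimal sketch of that idea (assuming an Akka ActorSystem is at hand; the helper name is mine), each branch can be raced against a delayed failure that carries its own label:
import java.util.concurrent.TimeoutException
import akka.actor.ActorSystem
import akka.pattern.after
import scala.concurrent.duration._
import scala.concurrent.{ExecutionContext, Future}

// Race the real future against a delayed failure so that each branch reports
// its own timeout, instead of one opaque Await timeout at the top.
def withTimeout[T](f: Future[T], label: String, limit: FiniteDuration)
                  (implicit system: ActorSystem, ec: ExecutionContext): Future[T] =
  Future.firstCompletedOf(Seq(
    f,
    after(limit, system.scheduler)(
      Future.failed(new TimeoutException(s"$label timed out after $limit")))
  ))
Wrapping getIntAsync1() and getIntAsync2() with such a helper tells you which branch blew its budget, rather than getting a single timeout on the combined result.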
The proposed solution wraps each future from the for block in a TimelyFuture, which requires a timeout and a name. Internally it uses Await to detect individual timeouts.
Please bear in mind this style of using futures is not intended for production code as it uses blocking. It is for diagnostics only to find out which futures take time to complete.
package xxx
import java.util.concurrent.TimeoutException
import scala.concurrent.{Future, _}
import scala.concurrent.duration.Duration
import scala.util._
import scala.concurrent.duration._
class TimelyFuture[T](f: Future[T], name: String, duration: Duration) extends Future[T] {
override def onComplete[U](ff: (Try[T]) => U)(implicit executor: ExecutionContext): Unit = f.onComplete(x => ff(x))
override def isCompleted: Boolean = f.isCompleted
override def value: Option[Try[T]] = f.value
@scala.throws[InterruptedException](classOf[InterruptedException])
@scala.throws[TimeoutException](classOf[TimeoutException])
override def ready(atMost: Duration)(implicit permit: CanAwait): TimelyFuture.this.type = {
Try(f.ready(atMost)(permit)) match {
case Success(v) => this
case Failure(e) => this
}
}
@scala.throws[Exception](classOf[Exception])
override def result(atMost: Duration)(implicit permit: CanAwait): T = {
f.result(atMost)
}
override def transform[S](ff: (Try[T]) => Try[S])(implicit executor: ExecutionContext): Future[S] = {
val p = Promise[S]()
Try(Await.result(f, duration)) match {
case s@Success(_) => ff(s) match {
case Success(v) => p.success(v)
case Failure(e) => p.failure(e)
}
case Failure(e) => e match {
case e: TimeoutException => p.failure(new RuntimeException(s"future ${name} has timed out after ${duration}"))
case _ => p.failure(e)
}
}
p.future
}
override def transformWith[S](ff: (Try[T]) => Future[S])(implicit executor: ExecutionContext): Future[S] = {
val p = Promise[S]()
Try(Await.result(f, duration)) match {
case s@Success(_) => ff(s).onComplete({
case Success(v) => p.success(v)
case Failure(e) => p.failure(e)
})
case Failure(e) => e match {
case e: TimeoutException => p.failure(new RuntimeException(s"future ${name} has timed out after ${duration}"))
case _ => p.failure(e)
}
}
p.future
}
}
object Main {
import scala.concurrent.ExecutionContext.Implicits.global
def main(args: Array[String]): Unit = {
val f = Future {
Thread.sleep(5);
1
}
val g = Future {
Thread.sleep(2000);
2
}
val result: Future[(Int, Int)] = for {
v1 <- new TimelyFuture(f, "f", 10 milliseconds)
v2 <- new TimelyFuture(g, "g", 10 milliseconds)
} yield (v1, v2)
val sum = Await.result(result, 1 seconds) // as expected, this throws an exception: "RuntimeException: future g has timed out after 10 milliseconds"
}
}
If you are merely looking for informational metrics on which individual future was taking a long time (or in combination with others), your best bet is to use a wrapper when creating the futures to log the metrics:
object InstrumentedFuture {
def now() = System.currentTimeMillis()
def apply[T](name: String)(code: => T): Future[T] = {
val start = now()
val f = Future {
code
}
f.onComplete {
case _ => println(s"Future ${name} took ${now() - start} ms")
}
f
}
}
val future1 = InstrumentedFuture("Calculator") { /*...code...*/ }
val future2 = InstrumentedFuture("Differentiator") { /*...code...*/ }
You can check whether a future has completed by calling its isCompleted method:
if (futureInt1.isCompleted) { /* futureInt2 must be the culprit */ }
if (futureInt2.isCompleted) { /* futureInt1 must be the culprit */ }
As a first approach, I would suggest lifting your Future[Int] into a Future[Try[Int]]. Something like this:
object impl {
def checkException[T](in: Future[T]): Future[Try[T]] =
in.map(Success(_)).recover {
case e: Throwable => {
Failure(new Exception("Error in future: " + in))
}
}
implicit class FutureCheck(s: Future[Int]) {
def check = checkException(s)
}
}
Then a small function to combine the results, something like this:
object test {
import impl._
val futureInt1 = Future{ 1 }
val futureInt2 = Future{ 2 }
def combine(a: Try[Int], b: Try[Int])(f: (Int, Int) => (Int)) : Try[Int] = {
if(a.isSuccess && b.isSuccess) {
Success(f(a.get, b.get))
}
else
Failure(new Exception("Error adding results"))
}
val futureSum = for {
int1 <- futureInt1.check
int2 <- futureInt2.check
} yield combine(int1, int2)(_ + _)
}
In futureSum you will have a Try[Int] with the integer, or a Failure with the exception corresponding to the possible error.
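As a small usage sketch (assuming futureSum and an ExecutionContext are in scope), the combined result can be consumed like this:
import scala.util.{Failure, Success}

// futureSum is a Future[Try[Int]] here, so we match on the inner Try.
futureSum.foreach {
  case Success(sum) => println(s"sum = $sum")
  case Failure(e)   => println("failed: " + e.getMessage)
}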
Maybe this can be useful:
val futureInt1 = getIntAsync1();
val futureInt2 = getIntAsync2();
val futureSum = for {
int1 <- futureInt1
int2 <- futureInt2
} yield (int1 + int2)
Try(Await.result(futureSum, 60 seconds)) match {
case Success(sum) => println(sum)
case Failure(e) => println("we got a timeout. the unfinished futures are: " + List(futureInt1, futureInt2).filter(!_.isCompleted))
}
Here is the problem: I have a library with a blocking method that returns a Try[T]. Since it's blocking, I would like to make it non-blocking by using a Future[T]. In the future block, I also would like to compute something that depends on the original blocking method's return value.
But if I use something like the code below, then my nonBlocking will return a Future[Try[T]], which is less convenient since a Future[T] can already represent failure. I would rather propagate the exception into the Future[T] itself.
def blockMethod(x: Int): Try[Int] = Try {
// Some long operation to get an Int from network or IO
throw new Exception("Network Exception")
}
def nonBlocking(x: Int): Future[Try[Int]] = future {
blockMethod(x).map(_ * 2)
}
Here is what I tried: I just use the .get method in the future {} block, but I'm not sure if this is the best way to do it.
def blockMethod(x: Int): Try[Int] = Try {
// Some long operation to get an Int from network or IO
throw new Exception("Network Exception")
}
def nonBlocking(x: Int): Future[Int] = future {
blockMethod(x).get * 2
}
Is this the correct way to do it? Or is there a more idiomatic Scala way to convert a Try[T] to a Future[T]?
Here's an example that doesn't block; note that you probably want to use your own execution context and not Scala's global context:
import scala.util._
import scala.concurrent._
import scala.concurrent.duration._
import ExecutionContext.Implicits.global
object Main extends App {
def blockMethod(x: Int): Try[Int] = Try {
// Some long operation to get an Int from network or IO
Thread.sleep(10000)
100
}
def tryToFuture[A](t: => Try[A]): Future[A] = {
future {
t
}.flatMap {
case Success(s) => Future.successful(s)
case Failure(fail) => Future.failed(fail)
}
}
// Initiate long operation
val f = tryToFuture(blockMethod(1))
println("Waiting... 10 seconds to complete")
// Should return before 20 seconds...
val res = Await.result(f, 20 seconds)
println(res) // prints 100
}
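As an aside, on Scala 2.11+ the flatMap step can lean on the standard library's Future.fromTry. A sketch of the same idea (my naming):
import scala.concurrent.{ExecutionContext, Future}
import scala.util.Try

// Run the blocking Try-producing call off the calling thread, then flatten
// the Try into the Future with the standard Future.fromTry.
def tryToFutureViaFromTry[A](t: => Try[A])(implicit ec: ExecutionContext): Future[A] =
  Future(t).flatMap(Future.fromTry)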
In my opinion, Try and Future are monadic constructions, and the idiomatic way to combine them is monadic composition (for-comprehensions).
For that you need to define a monad transformer for Future[Try[_]] (code for your library):
case class FutureT[R](run : Future[Try[R]])(implicit e: ExecutionContext) {
def map[B](f : R => B): FutureT[B] = FutureT(run map { _ map f })
def flatMap[B](f : R => FutureT[B]): FutureT[B] = {
val p = Promise[Try[B]]()
run onComplete {
case Failure(e) => p failure e
case Success(Failure(e)) => p failure e
case Success(Success(v)) => f(v).run onComplete {
case Failure(e) => p failure e
case Success(s) => p success s
}
}
FutureT(p.future)
}
}
object FutureT {
def futureTry[R](run : => Try[R])(implicit e: ExecutionContext) =
new FutureT(future { run })
implicit def toFutureT[R](run : Future[Try[R]]) = FutureT(run)
implicit def fromFutureT[R](futureT : FutureT[R]) = futureT.run
}
and usage example:
def blockMethod(x: Int): Try[Int] = Try {
Thread.sleep(5000)
if(x < 10) throw new IllegalArgumentException
else x + 1
}
import FutureT._
// idiomatic way :)
val async = for {
x <- futureTry { blockMethod(15) }
y <- futureTry { blockMethod(25) }
} yield (x + y) * 2 // possible thanks to the monad transformer
println("Waiting... 10 seconds to complete")
val res = Await.result(async, 20 seconds)
println(res)
// example with Exception
val asyncWithError = for {
x <- futureTry { blockMethod(5) }
y <- futureTry { blockMethod(25) }
} yield (x + y) * 2 // possible thanks to the monad transformer
// Can't use Await because we would get an exception
// when extracting the value from FutureT(Failure(java.lang.IllegalArgumentException));
// there is no difference between a Failure produced by Future or by Try
asyncWithError onComplete {
case Failure(e) => println(s"Got exception: $e.msg")
case Success(res) => println(res)
}
// Output:
// Got exception: java.lang.IllegalArgumentException.msg