Akka Streams: How to update one field with the result of the future - scala

I have an entity passing through an Akka stream, and one of its fields has to be updated in one of the flows.
Let's say case class Entity(f: Int)
The value to update the entity with comes from a Future.
Flow[Entity]
  .map({ entity =>
    entity.copy(
      f = // get result of the future
    )
  })
Several options come to mind.
The first is to Await the result of the future. But in that case I would have to provide it with its own execution context, etc. How do I use the graph's execution context within a flow?
The second is to pass a tuple of (Entity, Future[Int]) to the next stage. But it would be easier to transform it into Future[(Entity, Int)] and then mapAsync it. Is there a way to turn a tuple containing a Future into a Future of a tuple within an Akka stream?
What would be an ideal solution to this simple problem?

How about something like:
Flow[Entity]
  .mapAsync(1) { entity =>
    createIntFuture.map { int =>
      entity.copy(f = int)
    }
  }
?
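For reference, here is a minimal self-contained sketch of that approach. createIntFuture is just a stub standing in for whatever actually produces the value, and the example object and its names are mine, not from the question. mapAsync needs a parallelism argument; 1 is used here for simplicity, and mapAsync preserves element order regardless of the parallelism.
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

import scala.concurrent.Future

object UpdateFieldExample extends App {
  implicit val system: ActorSystem = ActorSystem("example")
  implicit val mat: ActorMaterializer = ActorMaterializer()
  import system.dispatcher // execution context for mapping the Future

  case class Entity(f: Int)

  def createIntFuture: Future[Int] = Future.successful(42) // stub for the real async call

  val updateFlow: Flow[Entity, Entity, NotUsed] =
    Flow[Entity].mapAsync(1) { entity =>
      createIntFuture.map(int => entity.copy(f = int))
    }

  Source(List(Entity(1), Entity(2)))
    .via(updateFlow)
    .runWith(Sink.foreach(println)) // prints Entity(42), Entity(42)
}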

Related

Order of execution of Future - Making sequential inserts in a db non-blocking

A simple scenario here. I am using Akka Streams to read from Kafka and write into an external store, in my case Cassandra.
The Akka Streams (reactive-kafka) library equips me with backpressure and other nifty things to make this possible.
With Kafka as the Source and Cassandra as the Sink, I get a bunch of events through Kafka that are, in effect, Cassandra queries and are supposed to be executed sequentially (e.g. an INSERT, an UPDATE and a DELETE that must run in that order).
I cannot use mapAsync and execute the statements together: Futures are eager, and there is a chance that the DELETE or UPDATE gets executed before the INSERT.
I am forced to use Cassandra's execute as opposed to executeAsync, which is non-blocking.
There seems to be no fully asynchronous solution to this, but is there a more elegant way to do it?
For example: make the Futures lazy and sequential and offload them to a different execution context of sorts.
mapAsync gives a parallelism option as well.
Can Monix Task be of help here?
This is a general design question: what approaches can one take?
UPDATE:
Flow[In].mapAsync(3)(input => {
  input match {
    case INSERT => // do insert - returns future
    case UPDATE => // do update - returns future
    case DELETE => // delete - returns future
  }
})
The scenario is a little more complex. There could be thousands of inserts, updates and deletes coming in order for specific key(s) (in Kafka).
I would ideally want to execute the futures for a single key in sequence. I believe Monix's Task can help?
If you process things with parallelism of 1, they will get executed in strict sequence, which will solve your problem.
But that's not interesting. If you want, you can run operations for different keys in parallel - if processing for different keys is independent, which, I assume from your description, is possible. To do this, you have to buffer the incoming values and then regroup them. Let's see some code:
import monix.reactive.Observable
import scala.concurrent.duration._
import monix.eval.Task

// Your domain logic - I'll use these stubs
trait Event
trait Acknowledgement // whatever your DB functions return, if you need it

def toKey(e: Event): String = ???

def processOne(event: Event): Task[Acknowledgement] = Task.deferFuture {
  event match {
    case _ => ??? // insert/update/delete
  }
}

// Monix Task.traverse is strictly sequential, which is what you need
def processMany(evs: Seq[Event]): Task[Seq[Acknowledgement]] =
  Task.traverse(evs)(processOne)

def processEventStreamInParallel(source: Observable[Event]): Observable[Acknowledgement] =
  source
    // Process a bunch of events, but don't wait too long for whole 100. Fine-tune for your data source
    .bufferTimedAndCounted(2.seconds, 100)
    .concatMap { batch =>
      Observable
        .fromIterable(batch.groupBy(toKey).values) // Standard collection methods FTW
        .mapAsync(3)(processMany) // processing up to 3 different keys in parallel - tho 3 is not necessary, probably depends on your DB throughput
        .flatMap(Observable.fromIterable) // flattening it back
    }
The concatMap operator here will ensure that your chunks are processed sequentially as well. So even if one buffer has key1 -> insert, key1 -> update and the other has key1 -> delete, that causes no problems. In Monix, this is the same as flatMap, but in other Rx libraries flatMap might be an alias for mergeMap which has no ordering guarantee.
This can be done with Futures too, though there's no standard "sequential traverse", so you have to roll your own, something like:
def processMany(evs: Seq[Event]): Future[Seq[Acknowledgement]] =
  evs.foldLeft(Future.successful(Vector.empty[Acknowledgement])) { (acksF, ev) =>
    for {
      acks <- acksF
      next <- processOne(ev)
    } yield acks :+ next
  }
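If you want to stay within Akka Streams instead of Monix, a rough analogue of the buffering approach above might look like the following sketch. Here groupedWithin stands in for bufferTimedAndCounted, mapAsync(1) over whole batches plays the role of concatMap (the next batch only starts once the previous one is fully done), and processOne/processMany are assumed to be the Future-based versions from the fold just above.
import akka.NotUsed
import akka.stream.scaladsl.Flow

import scala.concurrent.duration._
import scala.concurrent.{ExecutionContext, Future}

def processFlow(implicit ec: ExecutionContext): Flow[Event, Acknowledgement, NotUsed] =
  Flow[Event]
    .groupedWithin(100, 2.seconds)   // batch, like bufferTimedAndCounted
    .mapAsync(1) { batch =>          // one batch at a time, like concatMap
      // keys run in parallel (Futures are eager), events of one key run sequentially
      Future.traverse(batch.groupBy(toKey).values.toList)(processMany)
    }
    .mapConcat(_.flatten)            // flatten back to single acknowledgements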
You can use akka-streams subflows to group by key, then merge the substreams if you want to do something with what you get from your database operations:
def databaseOp(input: In): Future[Out] = input match {
  case INSERT => ...
  case UPDATE => ...
  case DELETE => ...
}

val databaseFlow: Flow[In, Out, NotUsed] =
  Flow[In].groupBy(Int.MaxValue, _.key).mapAsync(1)(databaseOp).mergeSubstreams
Note that the order of the input source won't be kept in the output, as it would be with a plain mapAsync, but all operations on the same key will still be in order.
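For completeness, a hypothetical wiring of that flow; kafkaSource here is just a placeholder for whatever Source[In, Consumer.Control] reactive-kafka gives you, and is not part of the original answer.
// kafkaSource: Source[In, Consumer.Control]; an ActorSystem/materializer is assumed in scope
val done: Future[Done] =
  kafkaSource
    .via(databaseFlow)
    .runWith(Sink.ignore)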
You are looking for Future.flatMap:
def doSomething: Future[Unit]
def doSomethingElse: Future[Unit]
val result = doSomething.flatMap { _ => doSomethingElse }
This executes the first function, and then, when its Future is satisfied, starts the second one. The result is a new Future that completes when the result of the second execution is satisfied.
The result of the first future is passed into the function you give to .flatMap, so the second function can depend on the result of the first one. For example:
def getUserID: Future[Int]
def getUser(id: Int): Future[User]
val userName: Future[String] = getUserID.flatMap(getUser).map(_.name)
You can also write this as a for-comprehension:
for {
  id <- getUserID
  user <- getUser(id)
} yield user.name

How to create an Akka flow with backpressure and Control

I need to create a function with the following Interface:
import akka.kafka.scaladsl.Consumer.Control
object ItemConversionFlow {
  def build(config: StreamConfig): Flow[Item, OtherItem, Control] = {
    // Implementation goes here
  }
}
My problem is that I don't know how to define the flow in a way that fits the interface above.
When I do something like this
val flow = Flow[Item]
  .map(item => doConversion(item))
  .filter(_.isDefined)
  .map(_.get)
the resulting type is Flow[Item, OtherItem, NotUsed]. I haven't found anything in the Akka documentation so far. Also, the functions on akka.stream.scaladsl.Flow only offer NotUsed instead of Control. It would be great if someone could point me in the right direction.
Some background: I need to set up several pipelines which differ only in the conversion part. These pipelines are sub-streams of a main stream which might be stopped for some reason (a corresponding message arrives in some Kafka topic). Therefore I need the Control part. The idea would be to create a graph template where I just insert the mentioned flow as an argument (a factory returning it). For a specific case we have a solution which works; to generalize it I need this kind of flow.
You actually have backpressure. However, think about what you really need backpressure for... you are not using asynchronous stages to increase your throughput, for example. Backpressure prevents fast producers from overwhelming subscribers: https://doc.akka.io/docs/akka/2.5/stream/stream-rate.html. In your sample, don't worry about it; your stream will request new elements from the publisher depending on how long doConversion takes to complete.
In case you want to obtain the result of the stream, use toMat or viaMat. For example, if your stream emits Item and transforms these into OtherItem:
val str = Source.fromIterator(() => List(Item(Some(1))).toIterator)
  .map(item => doConversion(item))
  .filter(_.isDefined)
  .map(_.get)
  .toMat(Sink.fold(List[OtherItem]())((a, b) => {
    // Examine the result of your stream
    b :: a
  }))(Keep.right)
  .run()
str will be Future[List[OtherItem]]. Try to extrapolate this to your case.
Or using toMat with KillSwitches: "Creates a new [[Graph]] of [[FlowShape]] that materializes to an external switch that allows external completion of that unique materialization. Different materializations result in different, independent switches."
def build(config: StreamConfig): Flow[Item, OtherItem, UniqueKillSwitch] = {
  Flow[Item]
    .map(item => doConversion(item))
    .filter(_.isDefined)
    .map(_.get)
    .viaMat(KillSwitches.single)(Keep.right)
}
val stream =
  Source.fromIterator(() => List(Item(Some(1))).toIterator)
    .viaMat(build(StreamConfig(1)))(Keep.right)
    .toMat(Sink.ignore)(Keep.both).run()

// This stops the stream
stream._1.shutdown()

// When it finishes
stream._2 onComplete (_ => println("Done"))

Nesting Futures in Play Action

I'm using Play and have an action in which I want to do two things:
firstly check my cache for a value
secondly, call a web service with the value
Since the WS API returns a Future, I'm using Action.async.
My Redis cache module also returns a Future.
Assume I'm using another ExecutionContext appropriately for the potentially long running tasks.
Q. Can someone confirm whether I'm on the right track with the following? I know I have not catered for the exceptional cases below - just keeping it simple for brevity.
def token = Action.async { implicit request =>
  // 1. Get Future for read on cache
  val cacheFuture = scala.concurrent.Future {
    cache.get[String](id)
  }

  // 2. Map inside cache Future to call web service
  cacheFuture.map { result =>
    WS.url(url).withQueryString("id" -> result).get().map { response =>
      // process response
      Ok(responseData)
    }
  }
}
My concern is that this may not be the most efficient way of doing things because I assume different threads may handle the task of completing each of the Futures.
Any recommendations for a better approach are greatly appreciated.
That's not specific to Play. I suggest you have a look at the documentation explaining how Futures work.
val x: Future[FutureOp2ResType] = futureOp1(???).flatMap { res1 => futureOp2(res1, ???) }
Or with for-comprehension
val x: Future[TypeOfRes] = for {
  res1 <- futureOp1(???)
  res2 <- futureOp2(res1, ???)
  // ...
} yield res
As for how the Futures are executed (using threads), it depends on which ExecutionContext you use (e.g. the global one, the Play one, ...).
Since WS.get returns a Future, it should not be called within cacheFuture.map, or it will return a Future[Future[...]]; use flatMap as shown above.
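Applied to the action from the question, that would look roughly like the sketch below (same cache, id, url and WS as above; only map is replaced by flatMap so the result is a single Future[Result], and the response handling is just an example).
def token = Action.async { implicit request =>
  // read from the cache as before
  val cacheFuture = scala.concurrent.Future {
    cache.get[String](id)
  }
  // flatMap instead of map, so we don't end up with Future[Future[Result]]
  cacheFuture.flatMap { result =>
    WS.url(url).withQueryString("id" -> result).get().map { response =>
      Ok(response.body) // process the response as needed
    }
  }
}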

Losing types on sequencing Futures

I'm trying to do this:
case class ConversationData(members: Seq[ConversationMemberModel], messages: Seq[MessageModel])
val membersFuture: Future[Seq[ConversationMemberModel]] = ConversationMemberPersistence.searchByConversationId(conversationId)
val messagesFuture: Future[Seq[MessageModel]] = MessagePersistence.searchByConversationId(conversationId)
Future.sequence(List(membersFuture, messagesFuture)).map { result =>
  // some magic here
  self ! ConversationData(members, messages)
}
But when I sequence the two futures, the compiler loses the types. It says that the type of result is List[Seq[Product with Serializable]]. At the beginning I expected to do something like
Future.sequence(List(membersFuture, messagesFuture)).map{ members, messages => ...
But it looks like sequencing futures doesn't work like this... I also tried using a collect inside the map, but I get similar errors.
Thanks for your help
When using Future.sequence, the assumption is that the underlying types produced by the multiple Futures are the same (or extend from the same parent type). With sequence, you basically invert a Seq of Futures for a particular type to a single Future for a Seq of that particular type. A concrete example is probably more illustrative of that point:
val f1:Future[Foo] = ...
val f2:Future[Foo] = ...
val f3:Future[Foo] = ...
val futures:List[Future[Foo]] = List(f1, f2, f3)
val aggregateFuture:Future[List[Foo]] = Future.sequence(futures)
So you can see that I went from a List of Future[Foo] to a single Future wrapping a List[Foo]. You use this when you already have a bunch of Futures for results of the same type (or base type) and you want to aggregate all of the results for the next processing step. The sequence method produces a new Future that won't be completed until all of the aggregated Futures are done, and it will then contain the aggregated results of all of those Futures. This works especially well when you have an indeterminate or variable number of Futures to process.
For your case, it seems that you have a fixed number of Futures to handle. As #Zoltan suggested, a simple for comprehension is probably a better fit here because the number of Futures is known. So solving your problem like so:
for {
  members <- membersFuture
  messages <- messagesFuture
} {
  self ! ConversationData(members, messages)
}
is probably the best way to go for this specific example.
What are you trying to achieve with the sequence call? I'd just use a for-comprehension instead:
val membersFuture: Future[Seq[ConversationMemberModel]] = ConversationMemberPersistence.searchByConversationId(conversationId)
val messagesFuture: Future[Seq[MessageModel]] = MessagePersistence.searchByConversationId(conversationId)
for {
  members <- membersFuture
  messages <- messagesFuture
} yield (self ! ConversationData(members, messages))
Note that it is important that you declare the two futures outside the for-comprehension, because otherwise your messagesFuture wouldn't be submitted until the membersFuture is completed.
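For contrast, a version like the following sketch would run the two queries one after the other instead of in parallel, because the second call is only made inside the flatMap of the first (it reuses the persistence calls from the question).
// sequential: the messages query is only submitted after the members query completes
for {
  members  <- ConversationMemberPersistence.searchByConversationId(conversationId)
  messages <- MessagePersistence.searchByConversationId(conversationId)
} yield self ! ConversationData(members, messages)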
You could also use zip:
membersFuture.zip(messagesFuture).map {
  case (members, messages) => self ! ConversationData(members, messages)
}
but I'd prefer the for-comprehension.

Sharing database session between multiple methods in Slick 3

I have recently switched from Slick-2 to Slick-3. Everything is working very well with slick-3. However, I am having some issues when it comes to transaction.
I have seen different questions and sample code in which transactionally and withPinnedSession are used to handle the transaction. But my case is slightly different. Both transactionally and withPinnedSession can be applied to a Query. But what I want to do is to pass the same session to another method which will do some operations, and wrap multiple methods in the same transaction.
I have the below slick-2 code, I am not sure how this can be implemented with Slick-3.
def insertWithTransaction(row: TTable#TableElementType)(implicit session: Session) = {
  val entity = (query returning query.map(obj => obj) += row).asInstanceOf[TEntity]
  // do some operations after insert
  // eg: invoke another method for sending the notification
  entity
}

override def insert(row: TTable#TableElementType) = {
  db.withSession {
    implicit session => {
      insertWithTransaction(row)
    }
  }
}
Now, if someone is not interested in having transactions, they can just invoke the insert() method.
If we need to do some transaction, it can be done by using insertWithTransaction() in db.withTransaction block.
For example:
db.withTransaction { implicit session =>
  insertWithTransaction(row1)
  insertWithTransaction(row2)
  // check some condition, invoke session.rollback if something goes wrong
}
But with slick-3, transactionally can be applied to a query only.
That means logic that needs to run after the insertion can no longer be handled centrally; every developer needs to handle those scenarios explicitly if they are using transactions. I believe this could potentially cause errors. I am trying to abstract the whole logic into the insert operation so that implementors only need to worry about the transaction's success or failure.
Is there any other way in slick-3 to pass the same session to multiple methods so that everything can be done in a single DB session?
You are missing something: .transactionally doesn't apply to a Query, but to a DBIOAction.
Then, a DBIOAction can be composed of multiple queries by using monadic composition.
Here is an example coming from the documentation:
val action = (for {
  ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
  _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally
action is composed of a select query and as many delete queries as rows returned by the first query. All of that creates a DBIOAction that will be executed in a transaction.
Then, to run the action against the database, you have to call db.run, so, like this:
val f: Future[Unit] = db.run(action)
Now, to come back to your example: let's say you want to apply an update query after your insert. You can create an action this way:
val action = (for {
  entity <- (query returning query.map(obj => obj) += row)
  _ <- query.map(_.foo).update(newFoo)
} yield entity).transactionally
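As with the first example, you then execute the composed action with db.run, for instance (a sketch, assuming the same db as in the question):
val insertedEntity = db.run(action) // Future of the inserted entity, run in a single transaction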
Hope it helps.