Akka Streams Inlets and Outlets match - scala

Here is the simplest graph using a Partition and Merge that I could come up with, but when run it gives the following error:
requirement failed: The inlets [] and outlets [] must correspond to the inlets [Merge.in0, Merge.in1] and outlets [Partition.out0, Partition.out1]
I understand that the message indicates that I either have more outputs than inputs or an unconnected flow, but I can't seem to see in this simple example where the mismatch is.
Any help is appreciated.
The graph:
def createGraph()(implicit actorSystem: ActorSystem): Graph[ClosedShape, Future[Done]] = {
  GraphDSL.create(Sink.ignore) { implicit builder: GraphDSL.Builder[Future[Done]] => s =>
    import GraphDSL.Implicits._
    val inputs: List[Int] = List(1, 2, 3, 4)
    val source: Source[Int, NotUsed] = Source(inputs)
    val messageSplit: UniformFanOutShape[Int, Int] = builder.add(Partition[Int](2, i => i % 2))
    val messageMerge: UniformFanInShape[Int, Int] = builder.add(Merge[Int](2))
    val processEven: Flow[Int, Int, NotUsed] = Flow[Int].map(rc => {
      actorSystem.log.debug(s"even: $rc")
      rc
    })
    val processOdd: Flow[Int, Int, NotUsed] = Flow[Int].map(rc => {
      actorSystem.log.debug(s"odd: $rc")
      rc
    })
    source ~> messageSplit.in
    messageSplit.out(0) -> processEven -> messageMerge.in(0)
    messageSplit.out(1) -> processOdd -> messageMerge.in(1)
    messageMerge.out ~> s
    ClosedShape
  }
}
The test:
import akka.actor.ActorSystem
import akka.stream._
import akka.stream.scaladsl.{Flow, GraphDSL, Merge, Partition, RunnableGraph, Sink, Source}
import akka.{Done, NotUsed}
import org.scalatest.FunSpec
import scala.concurrent.Future

class RoomITSpec extends FunSpec {
  implicit val actorSystem: ActorSystem = ActorSystem("RoomITSpec")
  implicit val actorCreator: ActorMaterializer = ActorMaterializer()

  describe("graph") {
    it("should run") {
      val graph = createGraph()
      RunnableGraph.fromGraph(graph).run
    }
  }
}

Small syntactic mistake.
// Notice the curly arrows
messageSplit.out(0) ~> processEven ~> messageMerge.in(0)
messageSplit.out(1) ~> processOdd ~> messageMerge.in(1)
Instead of what you wrote:
// Straight arrows
messageSplit.out(0) -> processEven -> messageMerge.in(0)
messageSplit.out(1) -> processOdd -> messageMerge.in(1)
You ended up generating (and throwing away) tuples instead of adding to the graph.
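For reference, here is the full graph with the curly arrows in place (a sketch; everything else is left as in the question):
def createGraph()(implicit actorSystem: ActorSystem): Graph[ClosedShape, Future[Done]] = {
  GraphDSL.create(Sink.ignore) { implicit builder: GraphDSL.Builder[Future[Done]] => s =>
    import GraphDSL.Implicits._
    val source: Source[Int, NotUsed] = Source(List(1, 2, 3, 4))
    val messageSplit = builder.add(Partition[Int](2, i => i % 2))
    val messageMerge = builder.add(Merge[Int](2))
    val processEven = Flow[Int].map { rc => actorSystem.log.debug(s"even: $rc"); rc }
    val processOdd = Flow[Int].map { rc => actorSystem.log.debug(s"odd: $rc"); rc }
    source ~> messageSplit.in
    messageSplit.out(0) ~> processEven ~> messageMerge.in(0) // ~> wires into the graph; -> just built a tuple
    messageSplit.out(1) ~> processOdd ~> messageMerge.in(1)
    messageMerge.out ~> s
    ClosedShape
  }
}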

Related

Adding retry to future sequence for running Databricks notebooks in parallel in Scala

I use the code below from Databricks itself on how to run notebooks in parallel in Scala: https://docs.databricks.com/notebooks/notebook-workflows.html#run-multiple-notebooks-concurrently. I am trying to add a retry feature: if one of the notebooks in the sequence fails, it should be retried based on the retry value I pass in.
Here is the parallel notebook code from Databricks:
//parallel notebook code
import scala.concurrent.{Future, Await}
import scala.concurrent.duration._
import scala.util.control.NonFatal

case class NotebookData(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String])

def parallelNotebooks(notebooks: Seq[NotebookData]): Future[Seq[String]] = {
  import scala.concurrent.{Future, blocking, Await}
  import java.util.concurrent.Executors
  import scala.concurrent.ExecutionContext
  import com.databricks.WorkflowException

  val numNotebooksInParallel = 5
  // If you create too many notebooks in parallel the driver may crash when you submit all of the jobs at once.
  // This code limits the number of parallel notebooks.
  implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(numNotebooksInParallel))
  val ctx = dbutils.notebook.getContext()

  Future.sequence(
    notebooks.map { notebook =>
      Future {
        dbutils.notebook.setContext(ctx)
        if (notebook.parameters.nonEmpty)
          dbutils.notebook.run(notebook.path, notebook.timeout, notebook.parameters)
        else
          dbutils.notebook.run(notebook.path, notebook.timeout)
      }
      .recover {
        case NonFatal(e) => s"ERROR: ${e.getMessage}"
      }
    }
  )
}
This is an example of how I am calling the above code to run multiple examples notebooks:
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.language.postfixOps
val notebooks = Seq(
  NotebookData("Notebook1", 0, Map("client" -> client)),
  NotebookData("Notebook2", 0, Map("client" -> client))
)
val res = parallelNotebooks(notebooks)
Await.result(res, 3000000 seconds) // this is a blocking call.
res.value
Here is one attempt. Since your code does not compile, I inserted a few dummy classes.
Also, you did not fully specify the desired behavior, so I made some assumptions. At most five retries are made for each notebook. If any of the Futures is still failing after five retries, the entire combined Future fails. Both of these behaviors can be changed, but since you did not specify, I am not sure exactly what you want.
If you have questions or would like me to make an alteration to the program, let me know in the comments section.
object TestNotebookData extends App {
  //parallel notebook code
  import scala.concurrent.{Future, Await}
  import scala.concurrent.duration._
  import scala.util.control.NonFatal

  case class NotebookData(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String])

  // Dummy stand-ins so the example compiles outside of Databricks
  case class Context()
  case class Notebook() {
    def getContext(): Context = Context()
    def setContext(ctx: Context): Unit = ()
    def run(path: String, timeout: Int, parameters: Map[String, String] = Map()): Seq[String] = Seq()
  }
  case class Dbutils(notebook: Notebook)
  val dbutils = Dbutils(Notebook())

  def parallelNotebooks(notebooks: Seq[NotebookData]): Future[Seq[Seq[String]]] = {
    import scala.concurrent.{Future, blocking, Await}
    import java.util.concurrent.Executors
    import scala.concurrent.ExecutionContext

    // This code limits the number of parallel notebooks.
    val numNotebooksInParallel = 5
    implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(numNotebooksInParallel))
    val ctx = dbutils.notebook.getContext()
    val isRetryable = true
    val retries = 5

    def runNotebook(notebook: NotebookData): Future[Seq[String]] = {
      def retryWrapper(retry: Boolean, current: Int, max: Int): Future[Seq[String]] = {
        val fut = Future { runNotebookInner() }
        if (retry && current < max) fut.recoverWith { case _ => retryWrapper(retry, current + 1, max) }
        else fut
      }

      def runNotebookInner() = {
        dbutils.notebook.setContext(ctx)
        if (notebook.parameters.nonEmpty)
          dbutils.notebook.run(notebook.path, notebook.timeout, notebook.parameters)
        else
          dbutils.notebook.run(notebook.path, notebook.timeout)
      }

      retryWrapper(isRetryable, 0, retries)
    }

    Future.sequence(
      notebooks.map { notebook =>
        runNotebook(notebook)
      }
    )
  }

  val notebooks = Seq(
    NotebookData("Notebook1", 0, Map("client" -> "client")),
    NotebookData("Notebook2", 0, Map("client" -> "client"))
  )

  val res = parallelNotebooks(notebooks)
  Await.result(res, 3000000 seconds) // this is a blocking call.
  res.value
}
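The recursion through recoverWith is the heart of the retry. Pulled out on its own, it becomes a small generic helper; a minimal sketch (the name retryFuture is mine, not part of the Databricks API):
import scala.concurrent.{ExecutionContext, Future}

// Re-runs a Future-producing block up to maxRetries additional times on failure.
def retryFuture[T](maxRetries: Int)(block: => Future[T])(implicit ec: ExecutionContext): Future[T] =
  block.recoverWith {
    case _ if maxRetries > 0 => retryFuture(maxRetries - 1)(block)
  }

// usage: retryFuture(5)(Future { dbutils.notebook.run(notebook.path, notebook.timeout) })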
I found this to work:
import scala.util.{Try, Success, Failure}

def tryNotebookRun(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String]): Try[Any] = {
  Try(
    if (parameters.nonEmpty) {
      dbutils.notebook.run(path, timeout, parameters)
    } else {
      dbutils.notebook.run(path, timeout)
    }
  )
}
//parallel notebook code
import scala.concurrent.{Future, Await}
import scala.concurrent.duration._
import scala.util.control.NonFatal

def runWithRetry(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String], maxRetries: Int = 2) = {
  var numRetries = 0
  while (numRetries < maxRetries) {
    tryNotebookRun(path, timeout, parameters) match {
      case Success(_) => numRetries = maxRetries
      case Failure(_) => numRetries = numRetries + 1
    }
  }
}
case class NotebookData(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String])

def parallelNotebooks(notebooks: Seq[NotebookData]): Future[Seq[Any]] = {
  import scala.concurrent.{Future, blocking, Await}
  import java.util.concurrent.Executors
  import scala.concurrent.ExecutionContext
  import com.databricks.WorkflowException

  val numNotebooksInParallel = 5
  // If you create too many notebooks in parallel the driver may crash when you submit all of the jobs at once.
  // This code limits the number of parallel notebooks.
  implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(numNotebooksInParallel))
  val ctx = dbutils.notebook.getContext()

  Future.sequence(
    notebooks.map { notebook =>
      Future {
        dbutils.notebook.setContext(ctx)
        runWithRetry(notebook.path, notebook.timeout, notebook.parameters)
      }
      .recover {
        case NonFatal(e) => s"ERROR: ${e.getMessage}"
      }
    }
  )
}
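Note that runWithRetry above discards the notebook's return value and gives up silently once maxRetries is exhausted. If you need the result (or the final failure) propagated, a variant along these lines could work (a sketch reusing the same tryNotebookRun; it assumes maxRetries >= 1):
def runWithRetryResult(path: String, timeout: Int,
                       parameters: Map[String, String] = Map.empty[String, String],
                       maxRetries: Int = 2): Any = {
  var attempt = 0
  var result: Option[Any] = None
  var lastError: Option[Throwable] = None
  while (result.isEmpty && attempt < maxRetries) {
    tryNotebookRun(path, timeout, parameters) match {
      case Success(value) => result = Some(value)              // keep the notebook's return value
      case Failure(e)     => lastError = Some(e); attempt += 1 // remember the failure and retry
    }
  }
  result.getOrElse(throw lastError.get)                        // rethrow the last failure if all attempts failed
}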

Put elements in stream and return an object

In Akka, I want to push elements into a stream and get an object back for each one. I know the elements could be used as a Source to run a graph, but how can I push an element in and get an object back at runtime?
import akka.actor.ActorSystem
import akka.stream.QueueOfferResult.{Dropped, Enqueued, Failure, QueueClosed}
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{Keep, Sink, Source}
import scala.Array.range
import scala.util.Success

object StreamElement {
  implicit val system = ActorSystem("StreamElement")
  implicit val materializer = ActorMaterializer()
  implicit val executionContext = system.dispatcher

  def main(args: Array[String]): Unit = {
    val (queue, value) = Source
      .queue[Int](10, OverflowStrategy.backpressure)
      .map(x => {
        x * x
      })
      .toMat(Sink.asPublisher(false))(Keep.both)
      .run()

    range(0, 10)
      .map(x => {
        queue.offer(x).onComplete {
          case Success(Enqueued) => {
          }
          case Success(Dropped) => {}
          case _ => {
            println("others")
          }
        }
      })
  }
}
How can I get the value returned?
Actually, you want to return the Int value for each element.
So you could create the flow once, then connect it to a Source and Sink for each element.
package tech.parasol.scala.akka

import akka.actor.ActorSystem
import akka.stream.QueueOfferResult.{Dropped, Enqueued, Failure, QueueClosed}
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{Flow, Keep, Sink, Source}
import scala.Array.range
import scala.util.Success

object StreamElement {
  implicit val system = ActorSystem("StreamElement")
  implicit val materializer = ActorMaterializer()
  implicit val executionContext = system.dispatcher

  val flow = Flow[Int]
    .buffer(16, OverflowStrategy.backpressure)
    .map(x => x * x)

  def main(args: Array[String]): Unit = {
    range(0, 10)
      .map { x =>
        Source.single(x).via(flow).runWith(Sink.head)
          .map(v => println("v ===> " + v))
      }
  }
}
It's unclear to me why the Scala collection isn't fed to the Stream as a Source in your sample code. Given that you've already composed a Stream with materialized values to be captured in a Source Queue and a publisher Sink, you could create a subscriber Source using Source.fromPublisher to collect the wanted values, as shown below:
import akka.actor.ActorSystem
import akka.stream.scaladsl._
import akka.stream._
implicit val system = ActorSystem("system")
implicit val materializer = ActorMaterializer() // Not needed for Akka 2.6+
val (queue, pub) = Source
  .queue[Int](10, OverflowStrategy.backpressure)
  .map(x => x * x)
  .toMat(Sink.asPublisher(false))(Keep.both)
  .run()

val fromQueue = Source(0 until 10).runForeach(queue.offer(_))

val source = Source.fromPublisher(pub)
source.runForeach(x => print(x + " "))
// Output:
// 0 1 4 9 16 25 36 49 64 81
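As an aside, neither snippet ever completes the queue, so the materialized stream stays open. If you want the downstream to finish (for example, to get a completion Future you can await), complete the queue once all offers are done; a sketch, reusing the queue from above:
import system.dispatcher

// offer the elements one at a time (safer with OverflowStrategy.backpressure),
// then complete the queue so the downstream can finish
Source(0 until 10)
  .mapAsync(parallelism = 1)(queue.offer)
  .runWith(Sink.ignore)
  .foreach(_ => queue.complete())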

Streaming CSV with akka-http in scala

I am very new to akka-http, and I would like to stream a csv with an arbitrary number of lines.
For instance, I would like to return :
a,1
b,2
c,3
with the following code
implicit val actorSystem = ActorSystem("system")
implicit val actorMaterializer = ActorMaterializer()

val map = new mutable.HashMap[String, Int]()
map.put("a", 1)
map.put("b", 2)
map.put("c", 3)

val `text/csv` = ContentType(MediaTypes.`text/csv`, `UTF-8`)

val route =
  path("test") {
    complete {
      HttpEntity(`text/csv`, ??? using map)
    }
  }

Http().bindAndHandle(route, "localhost", 8080)
Thanks for your help
EDIT: Thanks to Ramon J Romero y Vigil
package test

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpCharsets.`UTF-8`
import akka.http.scaladsl.model._
import akka.http.scaladsl.server.Directives._
import akka.stream._
import akka.util.ByteString
import scala.collection.mutable

object Test {
  def main(args: Array[String]) {
    implicit val actorSystem = ActorSystem("system")
    implicit val actorMaterializer = ActorMaterializer()

    val map = new mutable.HashMap[String, Int]()
    map.put("a", 1)
    map.put("b", 2)
    map.put("c", 3)

    val mapStream = Stream.fromIterator(() => map.toIterator)
      .map((k: String, v: Int) => s"$k,$v")
      .map(ByteString.apply)

    val `text/csv` = ContentType(MediaTypes.`text/csv`, `UTF-8`)

    val route =
      path("test") {
        complete {
          HttpEntity(`text/csv`, mapStream)
        }
      }

    Http().bindAndHandle(route, "localhost", 8080)
  }
}
With this code I have two compile errors:
Error:(29, 28) value fromIterator is not a member of object scala.collection.immutable.Stream
val mapStream = Stream.fromIterator(() => map.toIterator)
Error:(38, 11) overloaded method value apply with alternatives:
(contentType: akka.http.scaladsl.model.ContentType,file: java.io.File,chunkSize: Int)akka.http.scaladsl.model.UniversalEntity <and>
(contentType: akka.http.scaladsl.model.ContentType,data: akka.stream.scaladsl.Source[akka.util.ByteString,Any])akka.http.scaladsl.model.HttpEntity.Chunked <and>
(contentType: akka.http.scaladsl.model.ContentType,data: akka.util.ByteString)akka.http.scaladsl.model.HttpEntity.Strict <and>
(contentType: akka.http.scaladsl.model.ContentType,bytes: Array[Byte])akka.http.scaladsl.model.HttpEntity.Strict <and>
(contentType: akka.http.scaladsl.model.ContentType.NonBinary,string: String)akka.http.scaladsl.model.HttpEntity.Strict
cannot be applied to (akka.http.scaladsl.model.ContentType.WithCharset, List[akka.util.ByteString])
HttpEntity(`text/csv`, mapStream)
I used a List of tuples to get around the first issue (however, I do not know how to stream a Map in Scala).
No idea for the second.
Thanks for your help.
(I am using scala 2.11.8)
Use the apply function on HttpEntity that takes a Source[ByteString, Any]; this overload creates a Chunked entity. You can read your file using code based on the documentation for streaming file IO with an Akka Streams Source:
import java.nio.file.Paths
import akka.stream.scaladsl._

val file = Paths.get("yourFile.csv")
val entity = HttpEntity(`text/csv`, FileIO.fromPath(file))
The stream will break your file up into chunks; the default chunk size is currently 8192 bytes.
To stream the map that you've created you can use a similar trick:
val mapStream = Source.fromIterator(() => map.toIterator)
  .map { case (k, v) => s"$k,$v" }
  .map(ByteString.apply)

val mapEntity = HttpEntity(`text/csv`, mapStream)
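Putting it together, a minimal route serving the map as CSV could look like this (a sketch; note the added newline so each entry ends up on its own line):
val mapSource = Source.fromIterator(() => map.toIterator)
  .map { case (k, v) => s"$k,$v\n" } // one CSV line per map entry
  .map(ByteString.apply)

val route =
  path("test") {
    complete(HttpEntity(`text/csv`, mapSource))
  }

Http().bindAndHandle(route, "localhost", 8080)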

Akka Stream: Can not write to file sink

I am trying to run a simple Akka Stream file Sink example, but without success. I can create a Source, run the Flow and the file is created, but the ByteStrings are never written to it. If I print the flow output to the console instead, that works. Am I missing something here?
import akka.stream._
import akka.stream.scaladsl._
import akka.{NotUsed, Done}
import akka.actor.ActorSystem
import akka.util.ByteString
import scala.concurrent._
import scala.concurrent.duration._
import java.nio.file.Paths

object First extends App {
  val source: Source[Int, NotUsed] = Source(1 to 100)

  implicit val system = ActorSystem("QuickStart")
  implicit val materializer = ActorMaterializer()

  // works: prints 1-100
  //source.runForeach(println)(materializer)

  val factorials = source.scan(BigInt(1))((acc, next) => acc * next)

  // there is no content in the Sink (file)
  /**val result =
    factorials
      .map(num => ByteString(s"${num}\n"))
      .runWith(FileIO.toPath(Paths.get("factorials.txt")))
  **/

  def lineSink(fileName: String): Sink[String, Future[IOResult]] =
    Flow[String]
      .map(s => ByteString(s + "\n"))
      .toMat(FileIO.toPath(Paths.get(fileName)))(Keep.right)

  // There is no content in the Sink.
  factorials.map(_.toString).runWith(lineSink("factorials.txt"))

  system.terminate()
}
build.sbt has:
name := "akkaGuide"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-stream" % "2.4.10"
)
Thanks in advance for your time.
I think you may just be terminating too early. Try waiting until the Future completes:
val result = factorials.map(_.toString).runWith(lineSink("factorials.txt"))
import system.dispatcher
result.onComplete { _ => system.terminate() }
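For completeness, a sketch of the tail of the program with termination moved into the completion handler (IOResult carries the number of bytes written):
import scala.util.{Failure, Success}

val result: Future[IOResult] = factorials.map(_.toString).runWith(lineSink("factorials.txt"))

import system.dispatcher
result.onComplete {
  case Success(IOResult(count, _)) => println(s"Wrote $count bytes"); system.terminate()
  case Failure(e)                  => println(s"Write failed: ${e.getMessage}"); system.terminate()
}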
Have a look at this working example below:
package ru.io

import java.io.File
import akka.actor.ActorSystem
import akka.stream.scaladsl._
import akka.stream.{ActorMaterializer, ClosedShape}
import akka.util.ByteString
import scala.util.{Failure, Success}

object WriteStreamApp extends App {
  implicit val actorSystem = ActorSystem()
  implicit val flowMaterializer = ActorMaterializer()
  import actorSystem.dispatcher

  // Source
  val source = Source(1 to 10000).filter(isPrime)

  // Sink
  val sink = FileIO.toFile(new File("src/main/resources/prime.txt"))

  // output for file
  val fileSink = Flow[Int]
    .map(i => ByteString(i.toString))
    .toMat(sink)((_, bytesWritten) => bytesWritten)

  val consoleSink = Sink.foreach[Int](println)

  // using the Graph API, send the integers to both sinks: file and console
  val graph = GraphDSL.create(fileSink, consoleSink)((file, _) => file) { implicit builder => (file, console) =>
    import GraphDSL.Implicits._
    val broadCast = builder.add(Broadcast[Int](2))
    source ~> broadCast ~> file
              broadCast ~> console
    ClosedShape
  }

  val materialized = RunnableGraph.fromGraph(graph).run()

  // make sure the system is terminated
  materialized.onComplete {
    case Success(_) =>
      actorSystem.terminate()
    case Failure(e) =>
      println(s"Failure: ${e.getMessage}")
      actorSystem.terminate()
  }

  def isPrime(n: Int): Boolean = {
    if (n <= 1) false
    else if (n == 2) true
    else !(2 until n).exists(x => n % x == 0)
  }
}

Split Akka Stream Source into two

I have an Akka Streams Source which I want to split into two sources according to a predicate.
E.g. having a source (types are simplified intentionally):
val source: Source[Either[Throwable, String], NotUsed] = ???
And two methods:
def handleSuccess(source: Source[String, NotUsed]): Future[Unit] = ???
def handleFailure(source: Source[Throwable, NotUsed]): Future[Unit] = ???
I would like to be able to split the source according to _.isRight predicate and pass the right part to handleSuccess method and left part to handleFailure method.
I tried using Broadcast splitter but it requires Sinks at the end.
Although you can choose which side of the Source you want to retrieve items from, it's not possible to create a single Source that yields two outputs, which is what it seems you ultimately want.
Given the GraphStage below which essentially splits the left and right values into two outputs...
/**
  * Fans out left and right values of an Either
  * @tparam L left value type
  * @tparam R right value type
  */
class EitherFanOut[L, R] extends GraphStage[FanOutShape2[Either[L, R], L, R]] {
  import akka.stream.{Attributes, Outlet}
  import akka.stream.stage.GraphStageLogic

  override val shape: FanOutShape2[Either[L, R], L, R] = new FanOutShape2[Either[L, R], L, R]("EitherFanOut")

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new GraphStageLogic(shape) {

    var out0demand = false
    var out1demand = false

    setHandler(shape.in, new InHandler {
      override def onPush(): Unit = {
        if (out0demand && out1demand) {
          grab(shape.in) match {
            case Left(l) =>
              out0demand = false
              push(shape.out0, l)
            case Right(r) =>
              out1demand = false
              push(shape.out1, r)
          }
        }
      }
    })

    setHandler(shape.out0, new OutHandler {
      @scala.throws[Exception](classOf[Exception])
      override def onPull(): Unit = {
        if (!out0demand) {
          out0demand = true
        }
        if (out0demand && out1demand) {
          pull(shape.in)
        }
      }
    })

    setHandler(shape.out1, new OutHandler {
      @scala.throws[Exception](classOf[Exception])
      override def onPull(): Unit = {
        if (!out1demand) {
          out1demand = true
        }
        if (out0demand && out1demand) {
          pull(shape.in)
        }
      }
    })
  }
}
... you can route them so that you only receive one side:
val sourceRight: Source[String, NotUsed] = Source.fromGraph(GraphDSL.create(source) { implicit b => s =>
  import GraphDSL.Implicits._
  val eitherFanOut = b.add(new EitherFanOut[Throwable, String])
  s ~> eitherFanOut.in
  eitherFanOut.out0 ~> Sink.ignore
  SourceShape(eitherFanOut.out1)
})

Await.result(sourceRight.runWith(Sink.foreach(println)), Duration.Inf)
... or, probably more desirable, route them to two separate Sinks:
val leftSink = Sink.foreach[Throwable](s => println(s"FAILURE: $s"))
val rightSink = Sink.foreach[String](s => println(s"SUCCESS: $s"))

val flow = RunnableGraph.fromGraph(GraphDSL.create(source, leftSink, rightSink)((_, _, _)) { implicit b => (s, l, r) =>
  import GraphDSL.Implicits._
  val eitherFanOut = b.add(new EitherFanOut[Throwable, String])
  s ~> eitherFanOut.in
  eitherFanOut.out0 ~> l.in
  eitherFanOut.out1 ~> r.in
  ClosedShape
})

val r = flow.run()
Await.result(Future.sequence(List(r._2, r._3)), Duration.Inf)
(Imports and initial setup)
import akka.NotUsed
import akka.stream.scaladsl.{GraphDSL, RunnableGraph, Sink, Source}
import akka.stream.stage.{GraphStage, InHandler, OutHandler}
import akka.stream._
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration.Duration
val classLoader = getClass.getClassLoader
implicit val system = ActorSystem("QuickStart", ConfigFactory.load(classLoader), classLoader)
implicit val materializer = ActorMaterializer()
val values: List[Either[Throwable, String]] = List(
  Right("B"),
  Left(new Throwable),
  Left(new RuntimeException),
  Right("B"),
  Right("C"),
  Right("G"),
  Right("I"),
  Right("F"),
  Right("T"),
  Right("A")
)
val source: Source[Either[Throwable, String], NotUsed] = Source.fromIterator(() => values.toIterator)
Edit: this other answer with divertTo is a better solution than mine, IMO. I'll leave my answer as-is for posterity.
original answer:
This is implemented in akka-stream-contrib as PartitionWith. Add this dependency to SBT to pull it in to your project:
libraryDependencies += "com.typesafe.akka" %% "akka-stream-contrib" % "0.9"
PartitionWith is shaped like a Broadcast(2), but with potentially different types for each of the two outlets. You provide it with a predicate to apply to each element and, depending on the outcome, the element is routed to the applicable outlet. You can then attach a Sink or Flow to each of these outlets independently as appropriate. Building on cessationoftime's example (https://stackoverflow.com/a/39744355/147806), with the Broadcast replaced with a PartitionWith:
val eitherSource: Source[Either[Throwable, String], NotUsed] = Source.empty
val leftSink = Sink.foreach[Throwable](s => println(s"FAILURE: $s"))
val rightSink = Sink.foreach[String](s => println(s"SUCCESS: $s"))

val flow = RunnableGraph.fromGraph(GraphDSL.create(eitherSource, leftSink, rightSink)((_, _, _)) { implicit b => (s, l, r) =>
  import GraphDSL.Implicits._
  val pw = b.add(
    PartitionWith.apply[Either[Throwable, String], Throwable, String](identity)
  )
  s ~> pw.in
  pw.out0 ~> l.in
  pw.out1 ~> r.in
  ClosedShape
})

val r = flow.run()
Await.result(Future.sequence(List(r._2, r._3)), Duration.Inf)
For this you can use a broadcast, then filter and map the streams within the GraphDSL:
val leftSink = Sink.foreach[Throwable](s => println(s"FAILURE: $s"))
val rightSink = Sink.foreach[String](s => println(s"SUCCESS: $s"))

val flow = RunnableGraph.fromGraph(GraphDSL.create(eitherSource, leftSink, rightSink)((_, _, _)) { implicit b => (s, l, r) =>
  import GraphDSL.Implicits._
  val broadcast = b.add(Broadcast[Either[Throwable, String]](2))
  s ~> broadcast.in
  broadcast.out(0).filter(_.isLeft).map(_.left.get) ~> l.in
  broadcast.out(1).filter(_.isRight).map(_.right.get) ~> r.in
  ClosedShape
})

val r = flow.run()
Await.result(Future.sequence(List(r._2, r._3)), Duration.Inf)
I expect you will be able to run the functions you want from within the map.
In the meantime this has been introduced into standard Akka Streams:
https://doc.akka.io/api/akka/current/akka/stream/scaladsl/Partition.html.
You can split the input stream with a predicate and then use collect on each output to get only the types you are interested in, as sketched below.
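A sketch of that approach, reusing the eitherSource, leftSink and rightSink from the examples above (the predicate sends Lefts to port 0 and Rights to port 1):
val flow = RunnableGraph.fromGraph(GraphDSL.create(eitherSource, leftSink, rightSink)((_, _, _)) { implicit b => (s, l, r) =>
  import GraphDSL.Implicits._
  val partition = b.add(Partition[Either[Throwable, String]](2, e => if (e.isLeft) 0 else 1))
  s ~> partition.in
  partition.out(0).collect { case Left(t)  => t } ~> l.in // only Throwables reach the left sink
  partition.out(1).collect { case Right(v) => v } ~> r.in // only Strings reach the right sink
  ClosedShape
})

val r = flow.run()
Await.result(Future.sequence(List(r._2, r._3)), Duration.Inf)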
You can use divertTo to attach an alternative Sink to the flow to handle Lefts: https://doc.akka.io/docs/akka/current/stream/operators/Source-or-Flow/divertTo.html
source
  .divertTo(handleFailureSink, _.isLeft)
  .map(rightEither => handleSuccess(rightEither.right.get))
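A slightly fuller sketch of that approach (the sink name and println handlers are placeholders, not from the question):
val failureSink = Sink.foreach[Either[Throwable, String]](e => println(s"FAILURE: ${e.left.get}"))

val done = source                            // Future[Done] that completes when the Rights are exhausted
  .divertTo(failureSink, _.isLeft)           // Lefts are diverted to the alternative sink
  .collect { case Right(value) => value }    // only Rights continue downstream
  .runForeach(value => println(s"SUCCESS: $value"))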