Akka Streams - Understanding when and how materialisation works - scala

An app that I am developing gives users the ability to create and define arbitrary streams at runtime. I understand that in Akka Streams in particular
Materialisation = Execute or Run
My questions:
1) Should materialisation for a stream be done only once? i.e. if it is already materialised, can I reuse the value for subsequent runs?
2) Or, as said above, maybe I have misunderstood the term materialisation. If a stream has to run, is it materialised each time?
I am confused because the docs say that materialisation actually creates the resources needed for stream execution, so my immediate understanding was that it has to be done only once, just like a JDBC connection to a database. Can someone please explain in non-Akka terminology?

Yes, a stream can be materialized multiple times. And yes, if a stream is run multiple times, it is materialized each time. From the documentation:
Since a stream can be materialized multiple times, the materialized value will also be calculated anew for each such materialization, usually leading to different values being returned each time. In the example below we create two running materialized instances of the stream that we described in the runnable variable, and both materializations give us a different Future from the map even though we used the same sink to refer to the future:
// connect the Source to the Sink, obtaining a RunnableGraph
val sink = Sink.fold[Int, Int](0)(_ + _)
val runnable: RunnableGraph[Future[Int]] =
  Source(1 to 10).toMat(sink)(Keep.right)
// get the materialized value of the FoldSink
val sum1: Future[Int] = runnable.run()
val sum2: Future[Int] = runnable.run()
// sum1 and sum2 are different Futures!
Think of a stream as a reusable blueprint that can be run/materialized multiple times. A materializer is required to materialize a stream, and Akka Streams provides a materializer called ActorMaterializer. The materializer allocates the necessary resources (actors, etc.) and executes the stream. While it is common to use the same materializer for different streams and multiple materializations, each materialization of a stream triggers the resource allocation needed to run the stream. In the example above, sum1 and sum2 use the same blueprint (runnable) and the same materializer, but they are the results of distinct materializations that incurred distinct resource allocations.
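For reference, here is a minimal sketch of the implicit setup the docs snippet above assumes (the system name "demo" is just a placeholder); note that each run() is a separate materialization that allocates its own actors:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, RunnableGraph, Sink, Source}
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("demo") // hosts the stream actors
implicit val materializer: ActorMaterializer = ActorMaterializer() // allocates resources on every run()

val sink = Sink.fold[Int, Int](0)(_ + _)
val runnable: RunnableGraph[Future[Int]] =
  Source(1 to 10).toMat(sink)(Keep.right)

val sum1: Future[Int] = runnable.run() // first materialization
val sum2: Future[Int] = runnable.run() // second, independent materialization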


How to use SparkContext.submitJob to call REST API

Can someone please provide an example of the submitJob method call?
Found reference here: How to execute async operations (i.e. returning a Future) from map/filter/etc.?
I believe I can implement it for my use case.
In my current implementation I am using partitions to invoke parallel calls, but each call waits for the response before invoking the next call:
Dataframe.rdd.repartition(TPS allowed on API)
  .map(row => {
    val response = callApi(row)
    parse(response)
  })
But as there is latency at the API end, I wait 10 seconds for the response before parsing, and only then make the next call. I am allowed 100 TPS, but with the current logic I see only 4-7 TPS.
If someone has used SparkContext.submitJob to make asynchronous calls, please provide an example, as I am new to Spark and Scala.
I want to invoke the calls without waiting for the response, ensuring 100 TPS, and then once I receive the responses I want to parse them and create a Dataframe on top of it.
I had previously tried collecting the rows and invoking the API calls from the master node, but that seems to be limited by the hardware for creating a large thread pool.
submitJob[T, U, R](rdd: RDD[T], processPartition: (Iterator[T]) ⇒ U, partitions: Seq[Int], resultHandler: (Int, U) ⇒ Unit, resultFunc: ⇒ R): SimpleFutureAction[R]
rdd - the rdd out of my Dataframe
partitions - my rdd is already partitioned; do I provide the range 0 to the number of partitions in my rdd?
processPartition - is it my callApi()?
resultHandler - not sure what is to be done here
resultFunc - I believe this would be parsing my response
How do I create a Dataframe after the SimpleFutureAction?
Can someone please assist
submitJob won't make your API calls automatically faster. It is part of the low-level implementation of Spark's parallel processing - Spark splits actions into jobs and then submits them to whatever cluster scheduler is in place. Calling submitJob is like starting a Java thread - the job will run asynchronously, but not faster than if you simply call the action on the dataframe/RDD.
IMHO your best option is to use mapPartitions which allows you to run a function within the context of each partition. You already have your data partitioned so to ensure maximum concurrency, just make sure you have enough Spark executors to actually have those partitions running simultaneously:
df.rdd.repartition(#concurrent API calls)
  .mapPartitions(partition => {
    partition.map(row => {
      val response = callApi(row)
      parse(response)
    })
  })
  .toDF("col1", "col2", ...)
mapPartitions expects a function that maps an Iterator[T] (all the data in a single partition) to an Iterator[U] (the transformed partition); the call itself returns an RDD[U]. Converting back to a dataframe is a matter of chaining a call to toDF() with the appropriate column names.
You may wish to implement some sort of per-thread rate limiting in callApi to make sure no single executor fires a large number of requests per second. Keep in mind that executors may run in both separate threads and/or separate JVMs.
Of course, just calling mapPartitions does nothing. You need to trigger an action on the resulting dataframe for the API calls to actually fire.
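If you want the per-thread rate limiting to live next to the API calls, here is a minimal sketch of pacing requests inside each partition; callApi and parse are the hypothetical helpers from the question, and maxRps is a per-partition budget you pick yourself:
import org.apache.spark.sql.Row

// Spaces out calls so that a single partition never exceeds maxRps requests per second.
def throttled[A](rows: Iterator[Row], maxRps: Int)(call: Row => A): Iterator[A] = {
  val minIntervalNanos = 1000000000L / maxRps
  var lastCall = 0L
  rows.map { row =>
    val waitNanos = lastCall + minIntervalNanos - System.nanoTime()
    if (waitNanos > 0) Thread.sleep(waitNanos / 1000000L) // crude pause between requests
    lastCall = System.nanoTime()
    call(row)
  }
}

// Usage, reusing the question's hypothetical callApi/parse helpers:
// df.rdd.repartition(numConcurrentCalls)
//   .mapPartitions(p => throttled(p, maxRps = 25)(row => parse(callApi(row))))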

Calling a rest service from Spark

I'm trying to figure out the best approach to call a Rest endpoint from Spark.
My current approach (solution [1]) looks something like this -
val df = ... // some dataframe
val repartitionedDf = df.repartition(numberPartitions)
lazy val restEndPoint = new restEndPointCaller() // lazy evaluation of the object which creates the connection to REST. lazy vals are also initialized once per JVM (executor)
val enrichedDf = repartitionedDf
  .map(rec => restEndPoint.getResponse(rec)) // calls the rest endpoint for every record
  .toDF
I know I could have used .mapPartitions() instead of .map(), but looking at the DAG, it looks like spark optimizes the repartition -> map to a mapPartition anyway.
In this second approach (solution [2]), a connection is created once for every partition and reused for all records within the partition.
val newDs = myDs.mapPartitions(partition => {
  val restEndPoint = new restEndPointCaller /* creates a connection per partition */
  val newPartition = partition.map(record => {
    restEndPoint.getResponse(record)
  }).toList // consumes the iterator, thus calls getResponse for every record
  restEndPoint.close() // close the connection here
  newPartition.iterator // create a new iterator
})
In this third approach (solution [3]), a connection is created once per JVM (executor) reused across all partitions processed by the executor.
lazy val connection = new DbConnection /* intended: one connection per JVM (executor) */
val newDs = myDs.mapPartitions(partition => {
  val newPartition = partition.map(record => {
    readMatchingFromDB(record, connection)
  }).toList // consumes the iterator, thus calls readMatchingFromDB
  newPartition.iterator // create a new iterator
})
connection.close() // close the db connection here
[a] With Solutions [1] and [3], which are very similar, is my understanding of how lazy vals work correct? The intention is to restrict the number of connections to 1 per executor/JVM and reuse the open connection for processing subsequent requests. Will I be creating 1 connection per JVM or 1 connection per partition?
[b] Are there any other ways by which I can control the number of requests (RPS) we make to the rest endpoint ?
[c] Please let me know if there are better and more efficient ways to do this.
Thanks!
IMO the second solution with mapPartitions is better. First, it explicitly tells what you're expecting to achieve. The name of the transformation and the implemented logic tell it pretty clearly. For the first option you need to be aware of how Apache Spark optimizes the processing. And it may be obvious to you just now, but you should also think about the people who will work on your code, or simply about yourself in 6 months, 1 year, 2 years and so forth. They will understand mapPartitions better than repartition + map.
Moreover, maybe the optimization of repartition with map will change internally (I don't believe it will, but you can still consider it a valid point) and at that moment your job will perform worse.
Finally, with the 2nd solution you avoid a lot of problems that you can encounter with serialization. In the code you wrote, the driver will create one instance of the endpoint object, serialize it and send it to the executors. So yes, maybe it'll be a single instance, but only if it's serializable.
[edit]
Thanks for the clarification. You can achieve what you are looking for in different ways. To have exactly 1 connection per JVM you can use a design pattern called singleton. In Scala it's expressed pretty easily as an object (the first link I found on Google: https://alvinalexander.com/scala/how-to-implement-singleton-pattern-in-scala-with-object).
And that's pretty good because you don't need to serialize anything. Singletons are read directly from the classpath on the executor side. With one you're sure to have exactly one instance of a given object.
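Here is a minimal sketch of that singleton approach, reusing the hypothetical restEndPointCaller/getResponse names from the question; nothing is serialized, and each executor JVM initializes the object lazily the first time a task touches it:
// Must be declared at top level (or inside another top-level object), not inside a closure.
object RestClientHolder {
  // Initialized at most once per JVM, i.e. once per executor.
  lazy val client = new restEndPointCaller()
}

val enrichedDs = myDs.mapPartitions { partition =>
  // Each executor resolves RestClientHolder from its own classpath; no serialization involved.
  partition.map(record => RestClientHolder.client.getResponse(record))
}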
[a] With Solutions [1] and [3], which are very similar, is my understanding of how lazy vals work correct? The intention is to restrict the number of connections to 1 per executor/JVM and reuse the open connection for processing subsequent requests. Will I be creating 1 connection per JVM or 1 connection per partition?
It'll create 1 connection per partition. You can execute this small test to see that:
import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.FlatSpec

class SerializationProblemsTest extends FlatSpec {

  val conf = new SparkConf().setAppName("Spark serialization problems test").setMaster("local")
  val sparkContext = SparkContext.getOrCreate(conf)

  "lazy object" should "be created once per partition" in {
    lazy val restEndpoint = new NotSerializableRest()
    sparkContext.parallelize(0 to 120).repartition(12)
      .mapPartitions(numbers => {
        //val restEndpoint = new NotSerializableRest()
        numbers.map(nr => restEndpoint.enrich(nr))
      })
      .collect()
  }
}

class NotSerializableRest() {
  println("Creating REST instance")
  def enrich(id: Int): String = s"${id}"
}
It should print "Creating REST instance" 12 times (the number of partitions).
[b] Are there ways by which I can control the number of requests (RPS) we make to the rest endpoint?
To control the number of requests you can use an approach similar to database connection pools: HTTP connection pool (one quickly found link: HTTP connection pooling using HttpClient).
But maybe another valid approach would be to process smaller subsets of data? So instead of taking 30000 rows to process, you can split them into smaller micro-batches (if it's a streaming job). It should give your web service a little bit more "rest".
Otherwise you can also try to send bulk requests (Elasticsearch does it to index/delete multiple documents at once https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html). But it's up to the web service to allow you to do so.
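As a minimal sketch of the bulk idea, assuming a hypothetical bulkEnrich(batch) call that your web service would have to support, each partition's iterator can be grouped into fixed-size batches so one HTTP request covers many records:
val enriched = myDs.mapPartitions { partition =>
  // (if myDs is a Dataset, an implicit Encoder for the result type must be in scope)
  partition.grouped(100).flatMap { batch => // 100 records per request is an arbitrary choice
    bulkEnrich(batch) // assumed: one HTTP call returning one response per input record
  }
}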

Akka streams - can it be scaled like regular actors, or some other way?

I have code that executes a pipeline using Akka Streams.
My question is: what is the best way to scale it out? Can it also be done using Akka Streams?
Or does it need to be converted into actors/some other approach?
The code snippet is:
val future = SqsSource(sqsEndpoint)(awsSqsClient)
  .takeWhile(_ => true)
  .map { m: Message =>
    (m, Ack())
  }.runWith(SqsAckSink(sqsEndpoint)(awsSqsClient))
If you modify your code a bit then your stream will be materialized into multiple Actor values. These materialized Actors will get you the concurrency you are looking for:
val future =
  SqsSource(sqsEndpoint)(awsSqsClient)          //Actor 1
    .via(Flow[Message] map (m => (m, Ack())))   //Actor 2
    .to(SqsAckSink(sqsEndpoint)(awsSqsClient))  //Actor 3
    .run()
Note the use of via and to. These are important because they indicate that those stages of the stream should be materialized into separate Actors. In your example code you are using map and runWith on the Source which would result in only 1 Actor being created because of operator fusion.
Flows that Ask External Actors
If you're looking to extend to even more Actors then you can use Flow#mapAsync to query an external Actor to do more work, similar to this example.
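Here is a minimal sketch of that pattern, assuming a hypothetical EnrichActor; mapAsync keeps several asks in flight at once while preserving element order:
import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.util.Timeout
import scala.concurrent.duration._

class EnrichActor extends Actor {
  def receive: Receive = { case msg: String => sender() ! msg.toUpperCase }
}

implicit val system: ActorSystem = ActorSystem("demo")
implicit val materializer: ActorMaterializer = ActorMaterializer()
implicit val timeout: Timeout = 3.seconds

val enricher = system.actorOf(Props[EnrichActor], "enricher")

// Up to 4 asks are in flight at the same time; results are emitted in the original order.
val enrichFlow = Flow[String].mapAsync(4)(msg => (enricher ? msg).mapTo[String])

Source(List("a", "b", "c")).via(enrichFlow).runWith(Sink.foreach(println))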

Using Futures within Spark

A Spark job makes a remote web service for every element in an RDD. A simple implementation might look something like this:
def webServiceCall(url: String) = scala.io.Source.fromURL(url).mkString
val rdd2 = rdd1.map(x => webServiceCall(x.field1))
(The above example has been kept simple and does not handle timeouts).
There is no interdependency between any of the results for different elements of the RDD.
Would the above be improved by using Futures to optimise performance by making parallel calls to the web service for each element of the RDD? Or does Spark itself have that level of optimization built in, so that it will run the operations on each element in the RDD in parallel?
If the above can be optimized by using Futures, does anyone have some code examples showing the correct way to use Futures within a function passed to a Spark RDD.
Thanks
Or does Spark itself have that level of optimization built in, so that it will run the operations on each element in the RDD in parallel?
It doesn't. Spark parallelizes tasks at the partition level but by default every partition is processed sequentially in a single thread.
Would the above be improved by using Futures
It could be an improvement, but it is quite hard to do right. In particular:
every Future has to be completed in the same stage before any reshuffle takes place.
given the lazy nature of the Iterators used to expose partition data, you cannot do it with high-level primitives like map (see for example Spark job with Async HTTP call).
you can build your custom logic using mapPartitions but then you have to deal with all the consequences of non-lazy partition evaluation.
I couldn't find an easy way to achieve this. But after several iterations of retries, this is what I did, and it is working for a huge list of queries. Basically we used this to run a batch operation for a huge query as multiple sub-queries.
// Break down your huge workload into smaller chunks; in this case a huge query string is broken
// down into a small set of subqueries.
// Here, if needed to optimize further, you can provide an optimal partition count when parallelizing.
val queries = sqlContext.sparkContext.parallelize[String](subQueryList.toSeq)

// Then map each one of those to a Spark task; in this case it's a Future that returns a string.
val tasks: RDD[Future[String]] = queries.map(query => {
  val task = makeHttpCall(query) // Method returns the http call response as a Future[String]
  task.recover {
    case ex => logger.error("recover: " + ex.printStackTrace())
  }
  task onFailure {
    case t => logger.error("execution failed: " + t.getMessage)
  }
  task
})

// Note: the http call is still not invoked; you are including this as part of the lineage.

// Then in each partition you combine all the Futures (there could be several tasks in each partition), sequence them,
// and Await the result; this way you block until every Future in that sequence is resolved.
val contentRdd = tasks.mapPartitions[String] { f: Iterator[Future[String]] =>
  val searchFuture: Future[Iterator[String]] = Future sequence f
  Await.result(searchFuture, threadWaitTime.seconds)
}

// Note: at this point you can apply any transformation to this rdd and it will be appended to the lineage.
// When you perform an action on that rdd, the mapPartitions step is evaluated,
// firing all the sub-query http requests in parallel and
// collecting the data in a single rdd.
I'm reposting it from my original answer here

ExecutionContext to use with mapAsync in Akka-Streams

I am just getting started with Akka Stream and I am trying to figure something out:
Currently, in my flows I am using mapAsync() to integrate with my rest services, as recommended here.
I have been wondering, what execution context should the mapAsync() be using?
Should it be the dispatcher of my ActorSystem? The global?
Are there any non-obvious consequences in either case?
I realize it's probably a silly question, but I've never dealt with Akka previously, and in any scala apps involving Futures, I've only ever used the global execution context.
The mapAsync stage doesn't need an execution context, it only requires you to map the current stream element to a Future. The future's execution context depends on who creates it, the flow doesn't know anything about it.
More generally, a Future[A] is an abstraction that doesn't require you to know where it's running. It could even be a precomputed value that doesn't need an execution context:
def mappingFunction(x: Int) = Future.successful(x * 2)
Source(List(1, 2, 3)).mapAsync(1)(mappingFunction)
You only need to worry about ExecutionContexts when you create the Future, but in the case of mapAsync you're just returning one from a function. How to create the future is the function's responsibility. As far as the mapAsync stage is concerned, it just gets the future as the return value of the function, i.e. it doesn't create it.
Flows are run with a Materializer. Its current implementation is the ActorMaterializer, which materializes streams using an ActorSystem (and its dispatchers). You are not required to know the details of stream materialization, though; streams work on a more abstract level, and hypothetically you could have a different Materializer that doesn't work with an ActorSystem.
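Coming back to the original question, here is a minimal sketch of supplying your own ExecutionContext for the Futures you return from mapAsync; the dispatcher name "blocking-io-dispatcher" is an assumption that you would have to configure in application.conf yourself:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source
import scala.concurrent.{ExecutionContext, Future}

implicit val system: ActorSystem = ActorSystem("demo")
implicit val materializer: ActorMaterializer = ActorMaterializer()

// The ExecutionContext is only needed where the Future is created, not by the mapAsync stage itself.
val blockingEc: ExecutionContext = system.dispatchers.lookup("blocking-io-dispatcher") // assumed to exist in config

def callRestService(x: Int): Future[Int] =
  Future { x * 2 /* a blocking HTTP call would go here */ }(blockingEc)

Source(1 to 10).mapAsync(parallelism = 4)(callRestService).runForeach(println)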