Spark broadcast isn't being saved in the executors' memory - Scala

I used spark-shell on EMR - Spark version 2.2.0 / 2.1.0.
While trying to broadcast a simple object (my CSV file contains only one column and is less than 2 MB), I noticed it is being kept only in the driver's memory and not in each executor's memory, although it should be, as suggested in the documentation: https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-TorrentBroadcast.html
Attached are screenshots taken before the broadcast (i.e. sc.broadcast(arr_collected)) and after it, which show my conclusion. I also checked the worker machines' memory usage and, same as in the Spark UI, it does not change after the broadcast.
1 - screenshot before the broadcast
2 - screenshot after the broadcast
Attached is the log of the broadcast process after adding 'log4j.logger.org.apache.spark.storage.BlockManager=TRACE', as suggested here -
https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-blockmanager.html
3 - screenshot of the broadcast logging
Below is the code:
val input = "s3://bucketName/pathToFile.csv"
val df = spark.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .option("delimiter", ",")
  .load(input)
val df_2 = df.withColumn("is_exist", lit("true").cast("Boolean"))
val arr_collected = df_2.collect()
val broadcast_map_fraud_locations4 = sc.broadcast(arr_collected)
Any ideas?

Can you try actually using the broadcast variable to join the data or in some other operation? It might be lazy, so it isn't using any memory until it is referenced.
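For example, a minimal sketch (the parallelize/count job is just a placeholder to force every executor to dereference the broadcast; it is not part of the original job):
val touched = sc.parallelize(1 to 1000, 8).map { i =>
  val rows = broadcast_map_fraud_locations4.value  // the first access on each executor fetches the TorrentBroadcast blocks
  rows.length + i                                  // placeholder work that depends on the broadcast data
}.count()
After running an action like this, the executors' memory figures should change, because each executor now holds its own copy of the broadcast blocks.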

Related

Akka stream hangs when starting more than 15 external processes using ProcessBuilder

I'm building an app that has the following flow:
There is a source of items to process
Each item should be processed by an external command (it will be ffmpeg in the end, but for this simple reproducible use case it is just cat, so data passes straight through it)
In the end, the output of the external command is saved somewhere (again, for the sake of this example it is just written to a local text file)
So I'm doing the following operations:
Prepare a source with items
Make an Akka graph that uses Broadcast to fan-out the source items into individual flows
Each individual flow uses ProcessBuilder in conjunction with Flow.fromSinkAndSource to build a flow around the external process execution
End the individual flows with a sink that saves the data to a file.
Complete code example:
import akka.actor.ActorSystem
import akka.stream.scaladsl.GraphDSL.Implicits._
import akka.stream.scaladsl._
import akka.stream.ClosedShape
import akka.util.ByteString
import java.io.{BufferedInputStream, BufferedOutputStream}
import java.nio.file.Paths
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext, Future}

object MyApp extends App {
  // When this is changed to something above 15, the graph just stops
  val PROCESSES_COUNT = Integer.parseInt(args(0))
  println(s"Running with ${PROCESSES_COUNT} processes...")

  implicit val system = ActorSystem("MyApp")
  implicit val globalContext: ExecutionContext = ExecutionContext.global

  def executeCmdOnStream(cmd: String): Flow[ByteString, ByteString, _] = {
    val convertProcess = new ProcessBuilder(cmd).start
    val pipeIn  = new BufferedOutputStream(convertProcess.getOutputStream)
    val pipeOut = new BufferedInputStream(convertProcess.getInputStream)
    Flow
      .fromSinkAndSource(StreamConverters.fromOutputStream(() => pipeIn), StreamConverters.fromInputStream(() => pipeOut))
  }

  val source = Source(1 to 100)
    .map(element => {
      println(s"--emit: ${element}")
      ByteString(element)
    })

  val sinksList = (1 to PROCESSES_COUNT).map(i => {
    Flow[ByteString]
      .via(executeCmdOnStream("cat"))
      .toMat(FileIO.toPath(Paths.get(s"process-$i.txt")))(Keep.right)
  })

  val graph = GraphDSL.create(sinksList) { implicit builder => sinks =>
    val broadcast = builder.add(Broadcast[ByteString](sinks.size))
    source ~> broadcast.in
    for (i <- broadcast.outlets.indices) {
      broadcast.out(i) ~> sinks(i)
    }
    ClosedShape
  }

  Await.result(Future.sequence(RunnableGraph.fromGraph(graph).run()), Duration.Inf)
}
Run this with the following command:
sbt "run PROCESSES_COUNT"
e.g.
sbt "run 15"
This all works quite well until I raise the number of "external processes" (PROCESSES_COUNT in the code). When it's 15 or fewer, all goes well, but when it's 16 or more the following things happen:
The whole execution just hangs after the first 16 items are emitted (16 is Akka's default buffer size, AFAIK)
I can see that the cat processes are started in the system (all 16 of them)
When I manually kill one of these cat processes in the system, something frees up and processing continues (of course, as a result, one file is empty because I killed its processing command)
I checked that this is definitely caused by the external execution (not, for example, a limit of Akka's Broadcast itself).
I recorded a video showing these two situations (first, 15 items working fine and then 16 items hanging and freed up by killing one process) - link to the video
Both the code and video are in this repo
I'd appreciate any help or suggestions on where to look for a solution.
This is an interesting problem, and it looks like the stream is deadlocking. Increasing the number of threads may fix the symptom but not the underlying problem.
The problem is the following code:
Flow
  .fromSinkAndSource(
    StreamConverters.fromOutputStream(() => pipeIn),
    StreamConverters.fromInputStream(() => pipeOut)
  )
Both fromInputStream and fromOutputStream use the same default-blocking-io-dispatcher, as you correctly noticed. The reason a dedicated thread pool is used at all is that both make Java API calls that block the running thread.
Here is part of a thread stack trace of fromInputStream that shows where the blocking happens:
at java.io.FileInputStream.readBytes(java.base@11.0.13/Native Method)
at java.io.FileInputStream.read(java.base@11.0.13/FileInputStream.java:279)
at java.io.BufferedInputStream.read1(java.base@11.0.13/BufferedInputStream.java:290)
at java.io.BufferedInputStream.read(java.base@11.0.13/BufferedInputStream.java:351)
- locked <merged>(a java.lang.ProcessImpl$ProcessPipeInputStream)
at java.io.BufferedInputStream.read1(java.base@11.0.13/BufferedInputStream.java:290)
at java.io.BufferedInputStream.read(java.base@11.0.13/BufferedInputStream.java:351)
- locked <merged>(a java.io.BufferedInputStream)
at java.io.FilterInputStream.read(java.base@11.0.13/FilterInputStream.java:107)
at akka.stream.impl.io.InputStreamSource$$anon$1.onPull(InputStreamSource.scala:63)
Now, you're running 16 simultaneous Sinks that are connected to a single Source. To support back-pressure, a Source will only produce an element when all Sinks send a pull command.
What happens next is that you have 16 simultaneous calls to FileInputStream.readBytes, and they immediately block all threads of the default-blocking-io-dispatcher. There are then no threads left for fromOutputStream to write any data from the Source, or to perform any other work. Thus, you have a deadlock.
The problem can be worked around by increasing the number of threads in the pool, but that only removes the symptom.
The correct solution is to run fromOutputStream and fromInputStream on two separate thread pools. Here is how you can do it:
Flow
  .fromSinkAndSource(
    StreamConverters.fromOutputStream(() => pipeIn).async("blocking-1"),
    StreamConverters.fromInputStream(() => pipeOut).async("blocking-2")
  )
with the following configuration:
blocking-1 {
  type = "Dispatcher"
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = 2
  }
}

blocking-2 {
  type = "Dispatcher"
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = 2
  }
}
Because they don't share the pools anymore, both fromOutputStream and fromInputStream can perform their tasks independently.
Also note that I just assigned 2 threads per pool to show that it's not about the thread count but about the pool separation.
I hope this helps to understand akka streams better.
It turns out this was a limit at the Akka configuration level for the blocking IO dispatcher.
Changing that value to something bigger than the number of streams fixed the issue:
akka.actor.default-blocking-io-dispatcher.thread-pool-executor.fixed-pool-size = 50

Can a Flink Map be Called as and When Required (Not Activated by the Input Stream)?

I have a map in Flink that gets activated once data comes through a stream.
I want to call that map even if no data comes through.
I moved the map into a function (an infinite function call), but then the Flink job never runs. And if I put it inside a map, it only gets activated if and when data comes through.
The idea is to have one map in an infinite loop checking some shared variable, and another Flink stream monitoring a Kafka queue; if data comes in, it is processed, a shared variable that affects the infinite loop in some way is changed, and processing continues.
How do I call an infinite-loop map and run the Flink maps together?
I tried creating a CollectionMap with random data to activate the stream, and a map that calls the infinite loop, but it exits almost immediately even though there is a while(true) condition inside the map.
In the IDE it works; when I push it to my local Flink installation it exits almost immediately, not staying in the loop.
Stream 1:
val data_stream = env.addSource(myConsumer)
  .map(x => { process(x) })
Stream 2:
val elements = List[String]("Start")
var read = env.fromElements(elements).map(x => ProcessData.infinteLoop())
How do I call an infinite-loop map and run the Flink maps together?
You can create a window with a trigger and have the map called whenever the trigger fires (for example every x seconds, or every N elements as in the example below).
You can find the documentation here: https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html
Example:
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows
import org.apache.flink.streaming.api.windowing.triggers.{CountTrigger, PurgingTrigger}
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow
import org.apache.flink.util.Collector

val data_stream = env.addSource(myConsumer)
  .map(x => { process(x) })

val window: DataStream[String] = data_stream
  .windowAll(GlobalWindows.create())
  .trigger(PurgingTrigger.of(CountTrigger.of[GlobalWindow](5)))
  .apply((w: GlobalWindow, x: Iterable[(Integer, String)], y: Collector[String]) => {})

Refresh cached values in Spark Streaming without restarting the batch job

Maybe the question is too simple, or at least it looks that way, but I have the following problem:
A. I run spark-submit for a Spark Streaming process:
ccc.foreachRDD(rdd => {
  rdd.repartition(20).foreachPartition(p => {
    val repo = getReposX
    t foreach (g => {
      .................
B. getReposX is a function that makes a query to MongoDB and retrieves a Map with the key/value pairs needed by every executor of the process.
C. For each g inside the foreach, I use this "cached" map.
The problem, or the question, is: when anything changes in the Mongo collection, I don't detect the change, so I am working with an outdated Map. How can I pick up the changes? Yes, I know that if I restart the spark-submit and the driver runs again it is fine, but otherwise I will never see the updates in my Map.
Any ideas or suggestions?
Regards.
Finally, I developed a solution. First let me explain the question in more detail, because what I really wanted to know was how to implement an object or "cache" that is refreshed every so often, or on some kind of command, without needing to restart the Spark Streaming process; that is, it refreshes while the job is alive.
In my case this "cache", or refreshed object, is a singleton object that connects to a MongoDB collection to retrieve a HashMap, which is used by each executor and kept in memory, as a good singleton should be. The problem is that once the Spark Streaming job is submitted, the object is cached in memory but never refreshed unless the process is restarted. Think of a broadcast variable refreshed, counter-style, when the variable reaches 1000, but broadcasts are read-only and cannot be modified. Think of an accumulator, but accumulators can only be read by the driver.
Finally, my solution was to implement the following inside the initialization block of the object that loads the Mongo collection and the cache:
import java.util.concurrent.{ScheduledThreadPoolExecutor, TimeUnit}

// Initialization block
{
  val ex = new ScheduledThreadPoolExecutor(1)
  val task = new Runnable {
    def run() = {
      logger.info("Refresh - Initialization")
      initCache
    }
  }
  val f = ex.scheduleAtFixedRate(task, 0, TIME_REFRES, TimeUnit.SECONDS)
}
initCache is nothing more than a function that connects to Mongo and loads the collection:
var cache = mutable.HashMap[String, Description]()

def initCache(): mutable.HashMap[String, Description] = {
  val serverAddresses = Functions.getMongoServers(SERVER, PORT)
  val mongoConnectionFactory = new MongoCollectionFactory(serverAddresses, DATABASE, COLLECTION, USERNAME, PASSWORD)
  val collection = mongoConnectionFactory.getMongoCollection()
  val docs = collection.find.iterator()
  cache.clear()
  while (docs.hasNext) {
    var doc = docs.next
    cache.put(...............
  }
  cache
}
This way, once the Spark Streaming job has been submitted, each executor starts one additional scheduled task, which refreshes the value of the singleton collection every X amount of time (1 or 2 hours in my case), so the instantiated value that is retrieved is always current:
def getCache(): mutable.HashMap[String, Description] = {
  cache
}
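For completeness, a usage sketch on the executor side (RepoCache is just a placeholder name for the singleton object described above; ccc and repartition(20) come from the question):
ccc.foreachRDD(rdd => {
  rdd.repartition(20).foreachPartition(p => {
    // Each executor JVM initializes its own RepoCache; the scheduled task in its
    // initialization block keeps refreshing it in place every TIME_REFRES seconds.
    val repo = RepoCache.getCache()
    p.foreach(g => {
      // ... look up g against repo ...
    })
  })
})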

How to continuously read resources (configs) in Spark

The background is that we need to read a file that is used as a global config for data calculation, and the file changes every hour, so it needs to be reloaded. Our confusion is about how to reload the config once the 'for-loop' reaches its end, and how to notify the main process that the file has changed; or could the Spark engine handle this on its own? Sample code:
// init streamingContext
val alertConfigRDD: RDD[String] = sc.textFile("alert-config.json")
val alertConfigs = alertConfigRDD.collect()
for (config <- alertConfigs) {
  // spark streaming process: select window duration according to config details.
}
streamingContext.start()
streamingContext.awaitTermination()
Thanks in advance for any solutions.
If it's vital for data processing that this resource stays up to date, then load it from one place (be it HDFS, S3, or any other storage accessible from all executors) on each batch, before the actual processing.
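A minimal sketch of that suggestion (inputStream, record, and the processing body are placeholders; "alert-config.json" is the path from the question): reload the config on the driver once per micro-batch and ship that snapshot to the executors for the batch.
inputStream.foreachRDD { rdd =>
  // Driver side, once per batch: re-read the shared config so hourly changes are picked up.
  val alertConfigs = sc.textFile("alert-config.json").collect()
  val configBc = sc.broadcast(alertConfigs)

  rdd.foreachPartition { partition =>
    val configs = configBc.value  // executors see this batch's snapshot of the config
    partition.foreach { record =>
      // ... apply the current configs to record ...
    }
  }
}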

Consuming RabbitMQ messages with Spark streaming

I'm new to scala and trying to hack my way around sending serialized Java objects over a RabbitMQ queue to a Spark Streaming application.
I can successfully enqueue my objects which have been serialized with an ObjectOutputStream. To receive my objects on the Spark end I have downloaded a custom RabbitMQ InputDStream and Receiver implementation from here - https://github.com/Stratio/rabbitmq-receiver
However, in my understanding that codebase only supports String messages, not binary. So I started hacking on that code to make it able to read a binary message and store it as a byte array, so that I can deserialize it on the Spark side. That attempt is here - https://github.com/llevar/rabbitmq-receiver
I then have the following code in my Spark driver program:
val conf = new SparkConf().setMaster("local[6]").setAppName("NetworkWordCount")
val ssc = new StreamingContext(conf, Seconds(1))

val receiverStream: ReceiverInputDStream[scala.reflect.ClassTag[AnyRef]] =
  RabbitMQUtils.createStreamFromAQueue(ssc,
    "localhost",
    5672,
    "mappingQueue",
    StorageLevel.MEMORY_AND_DISK_SER_2)

val parsedStream = receiverStream.map { m =>
  SerializationUtils.deserialize(m.asInstanceOf[Array[Byte]]).asInstanceOf[SAMRecord]
}

parsedStream.print()

ssc.start()
Unfortunately this does not seem to work. The data is consumed off the queue, and I don't get any errors, but I don't get any of the output I expect either.
This is all I get.
2015-07-24 23:33:38 WARN BlockManager:71 - Block input-0-1437795218845 replicated to only 0 peer(s) instead of 1 peers
2015-07-24 23:33:38 WARN BlockManager:71 - Block input-0-1437795218846 replicated to only 0 peer(s) instead of 1 peers
2015-07-24 23:33:38 WARN BlockManager:71 - Block input-0-1437795218847 replicated to only 0 peer(s) instead of 1 peers
2015-07-24 23:33:38 WARN BlockManager:71 - Block input-0-1437795218848 replicated to only 0 peer(s) instead of 1 peers
I was able to successfully deserialize my objects before calling the store() method here - https://github.com/llevar/rabbitmq-receiver/blob/master/src/main/scala/com/stratio/receiver/RabbitMQInputDStream.scala#L106
just by invoking SerializationUtils on the data from the delivery.getBody call, but I don't seem to be able to get the same data from the DStream in my main program.
Any help is appreciated.
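For what it's worth, here is a generic sketch (not the Stratio receiver's actual code) of the shape the modified receiver would need: it is parameterized with Array[Byte] and stores the raw payload untouched, so the DStream in the driver really carries byte arrays. fetchNextBody() is a hypothetical placeholder for the RabbitMQ delivery.getBody call.
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

class RawBytesReceiver(storageLevel: StorageLevel) extends Receiver[Array[Byte]](storageLevel) {

  override def onStart(): Unit = {
    new Thread("raw-bytes-receiver") {
      override def run(): Unit = {
        while (!isStopped()) {
          val body: Array[Byte] = fetchNextBody()  // e.g. delivery.getBody in the RabbitMQ consumer loop
          store(body)                              // hand the untouched bytes to Spark
        }
      }
    }.start()
  }

  override def onStop(): Unit = {}

  private def fetchNextBody(): Array[Byte] = ???  // placeholder for the actual RabbitMQ consumption
}
With a receiver shaped like this, ssc.receiverStream(new RawBytesReceiver(StorageLevel.MEMORY_AND_DISK_SER_2)) yields a ReceiverInputDStream[Array[Byte]], and the driver-side map can call SerializationUtils.deserialize on each element without the asInstanceOf[Array[Byte]] cast.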