Which design pattern is my Scala application using?

I have a Scala app with a trait that implements some function(s) and a class that extends that trait.
The class also has a function which calls the function defined in the parent trait, passing it its parameter.
I observed this in a Spark + Kafka implementation written in Scala. I'm guessing this is some kind of design pattern, but I don't know which one. Is it the Cake Pattern? Dependency Injection? Or something else?
Below is the code I'm referring to:
trait SparkApplication {
  def sparkConfig: Map[String, String]

  def withSparkContext(f: SparkContext => Unit): Unit = {
    val conf = new SparkConf()
    sparkConfig.foreach { case (k, v) => conf.setIfMissing(k, v) }
    val sc = new SparkContext(conf)
    f(sc)
  }
}
trait SparkStreamingApplication extends SparkApplication {
  def withSparkStreamingContext(f: (SparkContext, StreamingContext) => Unit): Unit = {
    withSparkContext { sc =>
      val ssc = new StreamingContext(sc, Seconds(streamingBatchDuration.toSeconds))
      ssc.checkpoint(streamingCheckpointDir)
      f(sc, ssc)
      ssc.start()
      ssc.awaitTermination()
    }
  }
}

What is being used here (albeit with a possible error) is the so-called Loan Pattern, so called because it is useful when you want to manage the lifecycle of a resource (in your case a SparkContext) while letting the caller decide how the resource is going to be used.
A classical example of this is files: you want to open a file, read its contents and then close it as soon as you are done, without letting the user make a mistake and forget to close the resource. You may implement this as follows:
import scala.io.Source

// Read a file at `path` and allow the caller to pass a function that iterates over its lines
def consume[A](path: String)(f: Iterator[String] => A): A = {
  val source = Source.fromFile(path)
  try {
    f(source.getLines)
  } finally {
    source.close()
  }
}
Then you'd use this as follows (in the example, to just print all the lines paired with their numbers):
consume("/path/to/some/file")(_.zipWithIndex.foreach(println))
As you may notice, there is something very close to this going on in your code, with the only difference that the resource whose lifecycle you are managing is a SparkContext.
The possible error I mentioned initially is that you are loaning a SparkContext that you never stop. That is probably fine, but the main point of the Loan Pattern is precisely to minimize the error surface when it comes to managing resources. You may be interested in doing something like the following (note the last line in the method):
def withSparkContext(f: SparkContext => Unit): Unit = {
  val conf = new SparkConf()
  sparkConfig.foreach { case (k, v) => conf.setIfMissing(k, v) }
  val sc = new SparkContext(conf)
  f(sc)
  sc.stop() // shut down the context after the user is done
}
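A slightly more defensive variant (a sketch of my own, mirroring the try/finally used in the file example above) stops the context even when f throws:
def withSparkContext(f: SparkContext => Unit): Unit = {
  val conf = new SparkConf()
  sparkConfig.foreach { case (k, v) => conf.setIfMissing(k, v) }
  val sc = new SparkContext(conf)
  try {
    f(sc)
  } finally {
    sc.stop() // always shut the context down, even if f fails
  }
}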
You may read more regarding this pattern here.
As a side note, you may be interested in this project that creates a very nice and idiomatic interface around managed resources.
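As a further aside, if you are on Scala 2.13 or later, the standard library ships scala.util.Using, which packages the same loan idea; a minimal sketch of the file example above written with it (it returns a Try instead of throwing):
import scala.io.Source
import scala.util.{Try, Using}

// The source is closed automatically once the function returns or throws
def consume[A](path: String)(f: Iterator[String] => A): Try[A] =
  Using(Source.fromFile(path)) { source =>
    f(source.getLines())
  }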

Related

Understanding closures or best way to take udf registrations' code out of main and put in utils

This is more of a Scala question than a Spark one. I have this Spark initialization code:
object EntryPoint {
  val spark = SparkFactory.createSparkSession(...
  val funcsSingleton = ContextSingleton[CustomFunctions] { new CustomFunctions(Some(hashConf)) }
  lazy val funcs = funcsSingleton.get
  // this part I want moved to another place since there are many, many UDFs
  spark.udf.register("funcName", udf { funcName _ })
}
The other class, CustomFunctions, looks like this:
class CustomFunctions(val hashConfig: Option[HashConfig], spark: Option[SparkSession] = None) {
  val funcUdf = udf { funcName _ }
  def funcName(colValue: String) = withDefinedOpt(hashConfig) { c =>
    ...}
}
The class above is wrapped in a Serializable interface using ContextSingleton, which is defined like so:
class ContextSingleton[T: ClassTag](constructor: => T) extends AnyRef with Serializable {
  val uuid = UUID.randomUUID.toString
  @transient private lazy val instance = ContextSingleton.pool.synchronized {
    ContextSingleton.pool.getOrElseUpdate(uuid, constructor)
  }
  def get = instance.asInstanceOf[T]
}

object ContextSingleton {
  private val pool = new TrieMap[String, Any]()
  def apply[T: ClassTag](constructor: => T): ContextSingleton[T] = new ContextSingleton[T](constructor)
  def poolSize: Int = pool.size
  def poolClear(): Unit = pool.clear()
}
Now to my problem: I don't want to have to explicitly register the UDFs as done in the EntryPoint app. I create all the UDFs I need in my CustomFunctions class and want to dynamically register only the ones I read from a user-provided config. What would be the best way to achieve this? Also, I want to register the required UDFs outside the main app, but that throws the infamous Task not serializable exception. Serializing the big CustomFunctions is not a good idea, hence I wrapped it in ContextSingleton, but my problem of registering UDFs outside cannot be solved that way. Please suggest the right approach.
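To make the goal concrete, here is a minimal sketch of the "register only the UDFs named in the config" part; every name below is hypothetical, and it does not address the serialization question:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions.udf

// Hypothetical: expose the UDFs by name, then register only those listed in the user config
class CustomFunctionsSketch {
  val byName: Map[String, UserDefinedFunction] = Map(
    "funcName" -> udf((colValue: String) => colValue.trim) // placeholder logic
  )
}

def registerFromConfig(spark: SparkSession, funcs: CustomFunctionsSketch, enabled: Seq[String]): Unit =
  enabled.foreach(name => funcs.byName.get(name).foreach(spark.udf.register(name, _)))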

Logging within Akka TestKit outside Actors

I have been trying to log things within my ScalaTest spec, like so:
class ChangeSetActorTest extends PersistenceSpec(ActorSystem("Persistent-test-System")) with PersistenceCleanup {
  val log = Logging(system, this)
Basically, let's just say that ChangeSetActorTest inherits from TestKit(system).
Unfortunately, Logging(system, this) does not work with this.
I get the following error:
[error] /Users/maatary/Dev/IdeaProjects/PoolpartyConnector/src/test/scala/org/iadb/poolpartyconnector/changepropagation/ChangeSetActorTest.scala:22:
        Cannot find LogSource for org.iadb.poolpartyconnector.changepropagation.ChangeSetActorTest
        please see ScalaDoc for LogSource for how to obtain or construct one.
[error]   val log = Logging(system, this)
I believe the relevant point in the Akka Logging docs is the following:
and in all other cases a compile error occurs unless an implicit LogSource[T] is in scope for the type in question.
In other words, there is no LogSource[TestKit].
I would like the simplest solution to deal with that issue, with minimal additional configuration. So far what I did is the following, and everything works as expected:
class ChangeSetActorTest extends PersistenceSpec(ActorSystem("Persistent-test-System")) with PersistenceCleanup {
  val log = system.log
From there I just go and do things like
val received = chgtFetcher.receiveWhile((requestInterval + ProcessingLag).*(3)) {
  case msg: FetchNewChangeSet => log.info(s"received: ${msg}"); chgtFetcher.reply(NoAvailableChangeSet); msg
}
My question: is this the recommended approach? So far the messages coming from my actor and the ones from the test are well ordered.
What is the recommended approach to log in a unified way:
from the test class (e.g. above) and the actor at the same time,
when external classes need to log as well and we want one unified, asynchronous logging setup?
Have a look at this comment:
https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/event/Logging.scala#L196-L237
I believe a more straightforward approach would be to define your implicit LogSource[ChangeSetActorTest] locally, i.e.:
val log = {
  implicit val logSource = new LogSource[ChangeSetActorTest] {
    override def genString(t: ChangeSetActorTest) = "ChangeSetActorTest"
  }
  Logging(system, this)
}
The simplest way to log in a TestKit is to do either of the following:
Get the logger from the underlyingActor:
val mockActor = TestActorRef(new XXXActor)
val log = mockActor.underlyingActor.log
Use FeatureSpecLike
http://doc.scalatest.org/3.0.1-2.12/org/scalatest/FeatureSpecLike.html
class ChangeSetActorTest extends PersistenceSpec(ActorSystem("Persistent-test-System")) with PersistenceCleanup with FeatureSpecLike {
  //...
  alert("Something like warning")
  info("Infos")
  note("Green infos")
  markup("documents")
}

convert `com.ning.http.client.ListenableFuture[Any]` into `scala.concurrent.Future[Any]`

Is there any way I can convert a variable of type com.ning.http.client.ListenableFuture[A] into one of type scala.concurrent.Future[A]?
In other words, what would be the content of the function
def toFuture[A](a: com.ning.http.client.ListenableFuture[A]):scala.concurrent.Future[A] = ???
I am specifically interested in the case where A = com.ning.http.client.Response.
Note that com.ning.http.client.ListenableFuture[A] is not the same as com.google.common.util.concurrent.ListenableFuture (and hence this proposed duplicate does not solve the issue)
The idea is the same as with Guava's ListenableFuture, although a little more cumbersome due to the more constrained signature.
First, you need a java.util.concurrent.Executor in order to add a callback. Since your Scala code interacts with a Java library, I'd suggest building your scala.concurrent.ExecutionContext on top of a Java Executor; that way you can keep a reference to both the Executor and the ExecutionContext, something like the following:
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext
val executor = Executors.newFixedThreadPool(5) // use it for Java futures
implicit val executionContext = ExecutionContext.fromExecutor(executor) // use it for Scala futures
The above steps are not needed if you want to process everything in different pools. In case you want to use an existing ExecutionContext, here's a snippet I googled.
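That snippet is not reproduced here, but for illustration, one way to view an existing ExecutionContext as a java.util.concurrent.Executor (my own sketch, not the linked code):
import java.util.concurrent.Executor
import scala.concurrent.ExecutionContext

// ExecutionContext.global is already an Executor (it is an ExecutionContextExecutor);
// any other ExecutionContext can be adapted like this
def asJavaExecutor(ec: ExecutionContext): Executor = new Executor {
  def execute(command: Runnable): Unit = ec.execute(command)
}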
Then, to convert the ListenableFuture into a Future, I'd do something like this (taking into account the exception semantics of java.util.concurrent.Future):
import java.util.concurrent.ExecutionException
import scala.concurrent.{Future, Promise}
import com.ning.http.client.ListenableFuture

def toFuture[A](a: ListenableFuture[A]): Future[A] = {
  val promise = Promise[A]()
  a.addListener(new Runnable {
    def run() = {
      try {
        promise.success(a.get) // the future has already completed when the listener fires
      } catch {
        case ex: ExecutionException => promise.failure(ex.getCause) // unwrap the real cause
        case ex: Throwable          => promise.failure(ex)
      }
    }
  }, executor)
  promise.future
}
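A usage sketch, assuming AsyncHttpClient from the same com.ning library (its execute() returns a ListenableFuture[Response]) and the implicit executionContext defined above:
import com.ning.http.client.{AsyncHttpClient, Response}
import scala.concurrent.Future

val client = new AsyncHttpClient()
val response: Future[Response] = toFuture(client.prepareGet("http://example.com").execute())
response.foreach(r => println(r.getStatusCode)) // runs on the implicit executionContext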

Task not serializable: java.io.NotSerializableException when calling function outside closure only on classes not objects

I am getting strange behavior when calling a function outside of a closure:
when the function is in an object, everything works
when the function is in a class, I get:
Task not serializable: java.io.NotSerializableException: testing
The problem is that I need my code in a class and not an object. Any idea why this is happening? Are Scala objects serializable by default?
This is a working code example:
object working extends App {
  val list = List(1, 2, 3)
  val rddList = Spark.ctx.parallelize(list)
  // calling function outside closure
  val after = rddList.map(someFunc(_))
  def someFunc(a: Int) = a + 1
  after.collect().map(println(_))
}
This is the non-working example :
object NOTworking extends App {
  new testing().doIT
}

// adding extends Serializable won't help
class testing {
  val list = List(1, 2, 3)
  val rddList = Spark.ctx.parallelize(list)

  def doIT = {
    // again calling the function someFunc
    val after = rddList.map(someFunc(_))
    // this will crash (Spark is lazy)
    after.collect().map(println(_))
  }

  def someFunc(a: Int) = a + 1
}
RDDs extend the Serializable interface, so this is not what's causing your task to fail. Now, this doesn't mean that you can serialise an RDD with Spark and avoid NotSerializableException.
Spark is a distributed computing engine and its main abstraction is a resilient distributed dataset (RDD), which can be viewed as a distributed collection. Basically, RDD's elements are partitioned across the nodes of the cluster, but Spark abstracts this away from the user, letting the user interact with the RDD (collection) as if it were a local one.
Not to get into too many details, but when you run different transformations on an RDD (map, flatMap, filter and others), your transformation code (closure) is:
serialized on the driver node,
shipped to the appropriate nodes in the cluster,
deserialized,
and finally executed on the nodes
You can of course run this locally (as in your example), but all those phases (apart from shipping over the network) still occur. This lets you catch any bugs even before deploying to production.
What happens in your second case is that you are calling a method, defined in class testing from inside the map function. Spark sees that and since methods cannot be serialized on their own, Spark tries to serialize the whole testing class, so that the code will still work when executed in another JVM. You have two possibilities:
Either you make class testing serializable, so the whole class can be serialized by Spark:
import org.apache.spark.{SparkContext, SparkConf}

object Spark {
  val ctx = new SparkContext(new SparkConf().setAppName("test").setMaster("local[*]"))
}

object NOTworking extends App {
  new Test().doIT
}

class Test extends java.io.Serializable {
  val rddList = Spark.ctx.parallelize(List(1, 2, 3))

  def doIT() = {
    val after = rddList.map(someFunc)
    after.collect().foreach(println)
  }

  def someFunc(a: Int) = a + 1
}
or you make someFunc a function instead of a method (functions are objects in Scala), so that Spark will be able to serialize it:
import org.apache.spark.{SparkContext, SparkConf}

object Spark {
  val ctx = new SparkContext(new SparkConf().setAppName("test").setMaster("local[*]"))
}

object NOTworking extends App {
  new Test().doIT
}

class Test {
  val rddList = Spark.ctx.parallelize(List(1, 2, 3))

  def doIT() = {
    val after = rddList.map(someFunc)
    after.collect().foreach(println)
  }

  val someFunc = (a: Int) => a + 1
}
A similar, but not identical, problem with class serialization may also be of interest to you; you can read about it in this Spark Summit 2013 presentation.
As a side note, you can rewrite rddList.map(someFunc(_)) as rddList.map(someFunc); they are exactly the same. Usually the second form is preferred, as it's less verbose and cleaner to read.
EDIT (2015-03-15): SPARK-5307 introduced SerializationDebugger, and Spark 1.3.0 is the first version to use it. It adds the serialization path to a NotSerializableException: when a NotSerializableException is encountered, the debugger visits the object graph to find the path towards the object that cannot be serialized, and constructs information to help the user find that object.
In the OP's case, this is what gets printed to stdout:
Serialization stack:
    - object not serializable (class: testing, value: testing@2dfe2f00)
    - field (class: testing$$anonfun$1, name: $outer, type: class testing)
    - object (class testing$$anonfun$1, <function1>)
Grega's answer is great in explaining why the original code does not work, and it offers two ways to fix the issue. However, that solution is not very flexible; consider the case where your closure includes a method call on a non-Serializable class that you have no control over. You can neither add the Serializable tag to that class nor change the underlying implementation to turn the method into a function.
Nilesh presents a great workaround for this, but the solution can be made both more concise and more general:
def genMapper[A, B](f: A => B): A => B = {
  val locker = com.twitter.chill.MeatLocker(f)
  x => locker.get.apply(x)
}
This function-serializer can then be used to automatically wrap closures and method calls:
rdd map genMapper(someFunc)
This technique also has the benefit of not requiring the additional Shark dependencies in order to access KryoSerializationWrapper, since Twitter's Chill is already pulled in by core Spark
A complete talk fully explaining the problem, which proposes a great paradigm-shifting way to avoid these serialization problems: https://github.com/samthebest/dump/blob/master/sams-scala-tutorial/serialization-exceptions-and-memory-leaks-no-ws.md
The top-voted answer basically suggests throwing away an entire language feature, that is, no longer using methods and only using functions. Indeed, in functional programming, methods in classes should be avoided, but turning them into functions isn't solving the design issue here (see the above link).
As a quick fix in this particular situation, you could just use the @transient annotation to tell Spark not to try to serialise the offending value (here, Spark.ctx is a custom class, not Spark's, following the OP's naming):
@transient
val rddList = Spark.ctx.parallelize(list)
You can also restructure code so that rddList lives somewhere else, but that is also nasty.
The Future is Probably Spores
In the future, Scala will include these things called "spores" that should allow us fine-grained control over what does and does not get pulled in by a closure. Furthermore, this should turn all mistakes of accidentally pulling in non-serializable types (or any unwanted values) into compile errors, rather than the current situation of horrible runtime exceptions / memory leaks.
http://docs.scala-lang.org/sips/pending/spores.html
A tip on Kryo serialization
When using Kryo, make it so that registration is required; this will mean you get errors instead of memory leaks:
"Finally, I know that kryo has kryo.setRegistrationOptional(true) but I am having a very difficult time trying to figure out how to use it. When this option is turned on, kryo still seems to throw exceptions if I haven't registered classes."
Strategy for registering classes with kryo
Of course this only gives you type-level control not value-level control.
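For reference, in Spark the registration-required behaviour can be switched on in the configuration; a sketch (the registered classes are placeholders for your own):
import org.apache.spark.SparkConf

// Fail fast on unregistered classes instead of silently falling back
val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrationRequired", "true")
  .registerKryoClasses(Array(classOf[String])) // register your own classes here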
... more ideas to come.
I faced a similar issue, and what I understand from Grega's answer is:
object NOTworking extends App {
  new testing().doIT
}

// adding extends Serializable won't help
class testing {
  val list = List(1, 2, 3)
  val rddList = Spark.ctx.parallelize(list)

  def doIT = {
    // again calling the function someFunc
    val after = rddList.map(someFunc(_))
    // this will crash (Spark is lazy)
    after.collect().map(println(_))
  }

  def someFunc(a: Int) = a + 1
}
Your doIT method is trying to serialize the someFunc(_) method, but since methods are not serializable, it tries to serialize the class testing, which is again not serializable.
So, to make your code work, you should define someFunc inside the doIT method. For example:
def doIT = {
  // function definition is local to doIT
  def someFunc(a: Int) = a + 1
  val after = rddList.map(someFunc(_))
  after.collect().map(println(_))
}
And if there are multiple functions coming into the picture, then all those functions should be available in the parent context.
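For instance, a sketch of the same idea with two helpers, both defined inside doIT next to the closure that uses them (reusing the Spark object from the earlier answers):
class testing {
  val rddList = Spark.ctx.parallelize(List(1, 2, 3))

  def doIT = {
    // every helper the closure needs lives in the same local scope as the closure
    def someFunc(a: Int) = a + 1
    def otherFunc(a: Int) = a * 2
    val after = rddList.map(x => otherFunc(someFunc(x)))
    after.collect().foreach(println)
  }
}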
I solved this problem using a different approach. You simply need to serialize the objects before passing them through the closure, and de-serialize them afterwards. This approach just works, even if your classes aren't Serializable, because it uses Kryo behind the scenes. All you need is some curry. ;)
Here's an example of how I did it:
def genMapper(kryoWrapper: KryoSerializationWrapper[(Foo => Bar)])
             (foo: Foo): Bar = {
  kryoWrapper.value.apply(foo)
}

val mapper = genMapper(KryoSerializationWrapper(new Blah(abc))) _
rdd.flatMap(mapper).collectAsMap()

class Blah(abc: ABC) extends (Foo => Bar) {
  def apply(foo: Foo): Bar = {
    ??? // this is the real function
  }
}
Feel free to make Blah as complicated as you want: a class, a companion object, nested classes, references to multiple third-party libs.
KryoSerializationWrapper refers to: https://github.com/amplab/shark/blob/master/src/main/scala/shark/execution/serialization/KryoSerializationWrapper.scala
I'm not entirely certain that this applies to Scala but, in Java, I solved the NotSerializableException by refactoring my code so that the closure did not access a non-serializable final field.
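The Scala equivalent of that refactoring (a sketch with hypothetical names) is to copy the needed value into a local val before building the closure, so that the closure captures only that value and not the enclosing instance:
import org.apache.spark.rdd.RDD

class Pipeline(prefix: String) { // hypothetical class, not marked Serializable
  def run(rdd: RDD[String]): RDD[String] = {
    val localPrefix = prefix        // copy the field into a local val on the driver
    rdd.map(s => localPrefix + s)   // the closure captures localPrefix, not `this`
  }
}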
Scala methods defined in a class are not serializable; however, methods can be converted into functions to resolve the serialization issue.
Method syntax:
def func_name(x: String): String = {
  ...
  return x
}
Function syntax:
val func_name = { (x: String) =>
  ...
  x
}
FYI, in Spark 2.4 a lot of you will probably encounter this issue. Kryo serialization has gotten better, but in many cases you cannot use spark.kryo.unsafe=true or the naive Kryo serializer.
For a quick fix, try changing the following in your Spark configuration:
spark.kryo.unsafe="false"
OR
spark.serializer="org.apache.spark.serializer.JavaSerializer"
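These settings can also be applied programmatically; a sketch using the SparkSession builder (the application name is a placeholder):
import org.apache.spark.sql.SparkSession

// Apply the configuration changes suggested above when building the session
val spark = SparkSession.builder()
  .appName("kryo-fallback")
  .config("spark.kryo.unsafe", "false")
  .config("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
  .getOrCreate()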
I modify the custom RDD transformations that I encounter or personally write by using explicit broadcast variables and the new built-in twitter-chill API, converting them from rdd.map(row => ...) functions to rdd.mapPartitions(partition => { ... }) functions.
Example
Old (not-great) Way
val sampleMap = Map("index1" -> 1234, "index2" -> 2345)
val outputRDD = rdd.map(row => {
  val value = sampleMap.get(row._1)
  value
})
Alternative (better) Way
import com.twitter.chill.MeatLocker

val sampleMap = Map("index1" -> 1234, "index2" -> 2345)
val brdSerSampleMap = spark.sparkContext.broadcast(MeatLocker(sampleMap))

rdd.mapPartitions(partition => {
  val deSerSampleMap = brdSerSampleMap.value.get // unwrap once per partition
  partition.map(row => {
    val value = deSerSampleMap.get(row._1)
    value
  })
})
This new way will only unwrap the broadcast variable once per partition, which is better. You will still need to use Java serialization if you do not register classes.
I had a similar experience.
The error was triggered when I initialized a variable on the driver (master) but then tried to use it on one of the workers.
When that happens, Spark Streaming tries to serialize the object to send it over to the worker, and fails if the object is not serializable.
I solved the error by making the variable static.
Previous non-working code
private final PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance();
Working code
private static final PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance();
Credits:
https://learn.microsoft.com/en-us/answers/questions/35812/sparkexception-job-aborted-due-to-stage-failure-ta.html ( The answer of pradeepcheekatla-msft)
https://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/javaionotserializableexception.html
def upper(name: String): String = {
  val upperName: String = name.toUpperCase()
  upperName
}

val toUpperName = udf { (EmpName: String) => upper(EmpName) }
val emp_details = """[{"id": "1","name": "James Butt","country": "USA"},
{"id": "2", "name": "Josephine Darakjy","country": "USA"},
{"id": "3", "name": "Art Venere","country": "USA"},
{"id": "4", "name": "Lenna Paprocki","country": "USA"},
{"id": "5", "name": "Donette Foller","country": "USA"},
{"id": "6", "name": "Leota Dilliard","country": "USA"}]"""
val df_emp = spark.read.json(Seq(emp_details).toDS())
val df_name=df_emp.select($"id",$"name")
val df_upperName= df_name.withColumn("name",toUpperName($"name")).filter("id='5'")
display(df_upperName)
This will give the error:
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
Solution:
import java.io.Serializable

object obj_upper extends Serializable {
  def upper(name: String): String = {
    val upperName: String = name.toUpperCase()
    upperName
  }

  val toUpperName = udf { (EmpName: String) => upper(EmpName) }
}
val df_upperName =
  df_name.withColumn("name", obj_upper.toUpperName($"name")).filter("id='5'")
display(df_upperName)
My solution was to add a companion object that holds all the methods that are not serializable within the class.
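A sketch of that arrangement (all names hypothetical): the class keeps its state, while the functions used inside Spark closures live in the companion object, so no instance of the class has to be serialized.
import org.apache.spark.rdd.RDD

class Enricher(config: Map[String, String]) { // driver-side state stays here
  def enrich(rdd: RDD[String]): RDD[String] =
    rdd.map(Enricher.normalize) // only the companion's function value is shipped
}

object Enricher {
  // helpers used inside closures live in the companion object
  val normalize: String => String = _.trim.toLowerCase
}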

Syntactic sugar for compile-time object creation in Scala

Let's say I have:
trait fooTrait[T] {
  def fooFn(x: T, y: T): T
}
I want to enable users to quickly declare new instances of fooTrait with their own defined bodies for fooFn. Ideally, I'd want something like
val myFoo: fooTrait[T] = newFoo((x: T, y: T) => x + y)
to work. However, I can't just do
def newFoo[T](f: (T, T) => T) = new fooTrait[T] { def fooFn(x: T, y: T): T = f(x, y) }
because this uses closures, and so results in different objects when the program is run multiple times. What I really need is to be able to get the classOf of the object returned by newFoo and then have that be constructible on a different machine. What do I do?
If you're interested in the use case: I'm trying to write a Scala wrapper for Hadoop that allows you to execute
IO("Data") --> ((x: Int, y: Int) => (x, x + y)) --> IO("Out")
The thing in the middle needs to be turned into a class that implements a particular interface and can then be instantiated on different machines (executing the same jar file) from just the class name.
Note that Scala does the right thing with the syntactic sugar that converts (x: Int) => x + 5 into an instance of Function1. My question is whether I can replicate this without hacking the Scala internals. If this were Lisp (as I'm used to), this would be a trivial compile-time macro... :sniff:
Here's a version that matches the syntax of what you list in the question and serializes/executes the anonymous function. Note that this serializes the state of the Function2 object so that the serialized version can be restored on another machine; just the class name is insufficient, as illustrated below the solution.
You should write your own encode/decode functions, if only to include your own Base64 implementation (rather than relying on Sun's HotSpot internals).
object SHadoopImports {
  import java.io._

  implicit def functionToFooString[T](f: (T, T) => T) = {
    val baos = new ByteArrayOutputStream()
    val oo = new ObjectOutputStream(baos)
    oo.writeObject(f)
    new sun.misc.BASE64Encoder().encode(baos.toByteArray())
  }

  implicit def stringToFun(s: String) = {
    val decoder = new sun.misc.BASE64Decoder()
    val bais = new ByteArrayInputStream(decoder.decodeBuffer(s))
    val oi = new ObjectInputStream(bais)
    val f = oi.readObject()
    new {
      def fun[T](x: T, y: T): T = f.asInstanceOf[Function2[T, T, T]](x, y)
    }
  }
}
// I don't really know what this is supposed to do,
// just supporting the given syntax
case class IO(src: String) {
  import SHadoopImports._

  def -->(s: String) = new {
    def -->(to: IO) = {
      val IO(snk) = to
      println("From: " + src)
      println("Applying (4,5): " + s.fun(4, 5))
      println("To: " + snk)
    }
  }
}
object App extends Application {
  import SHadoopImports._

  IO("MySource") --> ((x: Int, y: Int) => x + y) --> IO("MySink")
  println()
  IO("Here") --> ((x: Int, y: Int) => x * y + y) --> IO("There")
}
/*
From: MySource
Applying (4,5): 9
To: MySink
From: Here
Applying (4,5): 25
To: There
*/
To convince yourself that the class name is insufficient to use the function on another machine, consider the code below, which creates 100 different functions. Count the classes on the filesystem and compare.
object App extends Application {
  import SHadoopImports._

  for (i <- 1 to 100) {
    IO(i + ": source") --> ((x: Int, y: Int) => (x * i) + y) --> IO("sink")
  }
}
Quick suggestion: why don't you try to create an implicit def transforming a FunctionN object into the trait expected by the --> method?
I do hope you won't have to use any macro for this!
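For what that suggestion could look like, here is a minimal sketch (it keeps the question's trait; note that it still builds the instance from a closure, so the original concern about re-instantiating it on another machine from just the class name is not addressed):
import scala.language.implicitConversions

trait fooTrait[T] {
  def fooFn(x: T, y: T): T
}

object fooTrait {
  // implicit view: any (T, T) => T can be used where a fooTrait[T] is expected
  implicit def fromFunction2[T](f: (T, T) => T): fooTrait[T] = new fooTrait[T] {
    def fooFn(x: T, y: T): T = f(x, y)
  }
}

val myFoo: fooTrait[Int] = (x: Int, y: Int) => x + y
println(myFoo.fooFn(2, 3)) // prints 5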