Memory error in standalone Spark cluster: "shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[Remote]" - scala

I get the following memory error in my standalone Spark cluster after about 140 iterations of my code. How can I run my code without hitting this memory fault?
I have 7 nodes with 8 GB RAM each, of which 6 GB is allocated to the worker on every node. The master also has 8 GB RAM.
[error] application - Remote calculator (Actor[akka.tcp://Remote#127.0.0.1:44545/remote/akka.tcp/NotebookServer#127.0.0.1:50778/user/$c/$a#872469007]) has been terminated !!!!!
[info] application - View notebook 'kamaruddin/PSOAANN_BreastCancer_optimized.snb', presentation: 'None'
[info] application - Closing websockets for kernel 6c8e8090-cbeb-430e-9d45-5710ce60b984
Uncaught error from thread [Remote-akka.actor.default-dispatcher-6] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[Remote]
Exception in thread "Thread-36" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.jar.Attributes.read(Attributes.java:394)
at java.util.jar.Manifest.read(Manifest.java:199)
at java.util.jar.Manifest.<init>(Manifest.java:69)
at java.util.jar.JarFile.getManifestFromReference(JarFile.java:186)
at java.util.jar.JarFile.getManifest(JarFile.java:167)
at sun.misc.URLClassPath$JarLoader$2.getManifest(URLClassPath.java:779)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:416)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.bindError(SparkIMain.scala:1041)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1347)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at notebook.kernel.Repl$$anonfun$3.apply(Repl.scala:173)
at notebook.kernel.Repl$$anonfun$3.apply(Repl.scala:173)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at scala.Console$.withOut(Console.scala:126)
at notebook.kernel.Repl.evaluate(Repl.scala:172)
at notebook.client.ReplCalculator$$anonfun$10$$anon$1$$anonfun$24.apply(ReplCalculator.scala:364)
at notebook.client.ReplCalculator$$anonfun$10$$anon$1$$anonfun$24.apply(ReplCalculator.scala:361)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
Uncaught error from thread [Remote-akka.remote.default-remote-dispatcher-445] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[Remote]
java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:535)
at java.lang.StringBuffer.append(StringBuffer.java:322)
at java.io.StringWriter.write(StringWriter.java:94)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._flushBuffer(WriterBasedJsonGenerator.java:1879)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._writeString(WriterBasedJsonGenerator.java:916)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._writeFieldName(WriterBasedJsonGenerator.java:213)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator.writeFieldName(WriterBasedJsonGenerator.java:104)
at play.api.libs.json.JsValueSerializer$$anonfun$serialize$2.apply(JsValue.scala:319)
at play.api.libs.json.JsValueSerializer$$anonfun$serialize$2.apply(JsValue.scala:318)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at play.api.libs.json.JsValueSerializer.serialize(JsValue.scala:318)
at play.api.libs.json.JsValueSerializer$$anonfun$serialize$1.apply(JsValue.scala:312)
at play.api.libs.json.JsValueSerializer$$anonfun$serialize$1.apply(JsValue.scala:311)
at scala.collection.immutable.List.foreach(List.scala:318)
at play.api.libs.json.JsValueSerializer.serialize(JsValue.scala:311)
at play.api.libs.json.JsValueSerializer$$anonfun$serialize$2.apply(JsValue.scala:320)
at play.api.libs.json.JsValueSerializer$$anonfun$serialize$2.apply(JsValue.scala:318)
at scala.collection.immutable.List.foreach(List.scala:318)
at play.api.libs.json.JsValueSerializer.serialize(JsValue.scala:318)
at play.api.libs.json.JsValueSerializer.serialize(JsValue.scala:302)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902)
at play.api.libs.json.JacksonJson$.generateFromJsValue(JsValue.scala:494)
at play.api.libs.json.Json$.stringify(Json.scala:51)
at play.api.libs.json.JsValue$class.toString(JsValue.scala:80)
at play.api.libs.json.JsObject.toString(JsValue.scala:166)
at java.util.Formatter$FormatSpecifier.printString(Formatter.java:2838)
at java.util.Formatter$FormatSpecifier.print(Formatter.java:2718)
Uncaught error from thread [Remote-akka.remote.default-remote-dispatcher-446] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[Remote]
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "appclient-receive-and-reply-threadpool-0" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "appclient-receive-and-reply-threadpool-2" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "appclient-receive-and-reply-threadpool-4" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "appclient-receive-and-reply-threadpool-6" java.lang.OutOfMemoryError: GC overhead limit exceeded
[error] application - Process exited with an error: 255 (Exit value: 255)
org.apache.commons.exec.ExecuteException: Process exited with an error: 255 (Exit value: 255)
at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:404)
at org.apache.commons.exec.DefaultExecutor.access$200(DefaultExecutor.java:48)
at org.apache.commons.exec.DefaultExecutor$1.run(DefaultExecutor.java:200)
at java.lang.Thread.run(Thread.java:745)

Maybe you can try to use checkpointing.
Data checkpointing - saving of the generated RDDs to reliable storage. This is necessary in some stateful transformations that combine data across multiple batches. In such transformations, the generated RDDs depend on RDDs of previous batches, which causes the length of the dependency chain to keep increasing with time. To avoid such unbounded increases in recovery time (proportional to the dependency chain), intermediate RDDs of stateful transformations are periodically checkpointed to reliable storage (e.g. HDFS) to cut off the dependency chain.
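A minimal sketch of what that could look like for an iterative job (the step function, the starting RDD and the 10-iteration interval are placeholders, not taken from the question):

sc.setCheckpointDir("hdfs:///checkpoints")            // reliable storage, e.g. HDFS

var current = initialRdd                              // placeholder starting RDD
for (i <- 1 to numIterations) {
  current = step(current).persist()                   // placeholder per-iteration transformation
  if (i % 10 == 0) {
    current.checkpoint()                              // cut the lineage periodically
    current.count()                                   // an action forces the checkpoint to be written
  }
}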

Related

Program takes a long time to finish because of this warning: Executor: Issue communicating with driver in heartbeater

I have a standalone Spark cluster with one master and four workers that must read a large Oracle table and process it. Each node has 30 GB of RAM. On average the program takes about 1.7 hours, but sometimes it takes much longer because of this warning:
WARN Executor: Issue communicating with driver in heartbeater
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10000 milliseconds]. This timeout is controlled by spark.executor.heartbeatInterval
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:47)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:62)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:58)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:103)
at org.apache.spark.executor.Executor.reportHeartBeat(Executor.scala:996)
at org.apache.spark.executor.Executor.$anonfun$heartbeater$1(Executor.scala:212)
at org.apache.spark.executor.Executor$$Lambda$356/321695195.apply$mcV$sp(Unknown Source)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
at org.apache.spark.Heartbeater$$anon$1.run(Heartbeater.scala:46)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:259)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:263)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:293)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
... 14 more
I searched and added these options to the spark-submit command, but I still receive the same warning:
--conf spark.sql.broadcastTimeout=3600 --conf spark.rpc.message.maxSize=1024 --conf spark.rpc.askTimeout=600s
Also, I set 28 GB for driver memory and 24 GB for executor memory. Moreover, I read this post:
https://stackoverflow.com/a/54038675/6640504
As that answer says, it is not right to increase spark.network.timeout and spark.executor.heartbeatInterval to the same value of 10000000. I am sure the program takes more time to run with that setting.
Would you please guide me on how to solve this problem and improve the speed of the program?
Any help is really appreciated.
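For what it's worth, the usual rule of thumb (not something stated in this post) is to keep spark.executor.heartbeatInterval well below spark.network.timeout rather than raising both to the same huge value. A minimal sketch with example values only:

import org.apache.spark.SparkConf

// Example values only: the heartbeat interval should stay much smaller
// than the overall network timeout.
val conf = new SparkConf()
  .set("spark.network.timeout", "800s")
  .set("spark.executor.heartbeatInterval", "60s")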

spring-kafka java.lang.OutOfMemoryError: GC overhead limit exceeded

We haven't upgraded the Kafka client library version in a while:
kafka-clients-2.0.1.jar
Stacktrace:
2022-01-20 12:06:23,937 ERROR [kafka-coordinator-heartbeat-thread | prod] internals.AbstractCoordinator$HeartbeatThread (AbstractCoordinator.java:1083) - [Consumer clientId=consumer-2, groupId=prod] Heartbeat thread failed due to unexpected error
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:3332) ~[?:1.8.0_271]
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) ~[?:1.8.0_271]
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) ~[?:1.8.0_271]
at java.lang.StringBuilder.append(StringBuilder.java:136) ~[?:1.8.0_271]
at org.apache.kafka.common.utils.LogContext$AbstractKafkaLogger.addPrefix(LogContext.java:66) ~[kafka-clients-2.0.1.jar:?]
at org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger.writeLog(LogContext.java:434) [kafka-clients-2.0.1.jar:?]
at org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger.info(LogContext.java:382) [kafka-clients-2.0.1.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.markCoordinatorUnknown(AbstractCoordinator.java:729) ~[kafka-clients-2.0.1.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.markCoordinatorUnknown(AbstractCoordinator.java:724) ~[kafka-clients-2.0.1.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1031) [kafka-clients-2.0.1.jar:?]
Could there be a memory leak in this version of kafka-clients? Is an upgrade needed?
On the redundant Tomcat server (which became primary after the primary server crashed due to the OOM), we noticed that the blocked-count on the Kafka consumer threads is huge, on the order of 4,580,000, over a JVM uptime of about 5 days 12 hours. We got these numbers from Java Mission Control.
Is it normal to see such numbers for the blocked-count?

Spark streaming error: Issue communicating with driver in heartbeater

I'm experiencing an issue with heartbeating when running my Spark Streaming app.
I know what the heartbeat means, and I have tried to increase its value via "spark.executor.heartbeatInterval", but the issue still remains.
My config is:
4 executors
4 cores per executor
6GB RAM per executor
Spark streaming time window: 30s
Each batch takes between 2s and 28s to complete
In the logs I can see how, suddenly, executors start to log "Issue communicating with driver in heartbeater", and when it happens X times, the executor shuts down (as the Spark docs say).
In the logs I can't see any exception (such as an OOM or anything about GC). Simply, at some point (some hours after starting), the heartbeater fails.
I have read about repartitioning the data to try to solve the issue, but I can't, because it is a Kafka direct-stream application and each partition is partially ordered, so I never repartition.
This is the trace I can see:
2018/12/16 13:44:26:317 WARN org.apache.spark.executor.Executor: Issue communicating with driver in heartbeater
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:47)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:62)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:58)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:785)
at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply$mcV$sp(Executor.scala:814)
at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:814)
at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:814)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1988)
at org.apache.spark.executor.Executor$$anon$2.run(Executor.scala:814)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
... 14 more
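One way to check whether long GC pauses are behind these heartbeat failures (a diagnostic sketch, not something from the original post; the flags assume a Java 8 runtime) is to enable GC logging on the executors:

import org.apache.spark.SparkConf

// Example only: surface GC activity in the executor logs
val conf = new SparkConf()
  .set("spark.executor.extraJavaOptions",
       "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps")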

Getting an out-of-memory error while reading a Parquet file in a spark-submit job

[Stage 0:> (0 + 0) / 8]SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[Stage 1:=====================================================> (43 + 3) / 46]17/11/16 13:11:18 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 54)
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:84)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
17/11/16 13:11:18 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-4,5,main]
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:84)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
17/11/16 13:11:18 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 54, localhost): java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:84)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This is my code:
val sqlContext = new SQLContext(sc)
//sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")
log.setLevel(Level.INFO)
val config = HBaseConfiguration.create()
// Read the Parquet file and register it as a temp table
val newDataDF = sqlContext.read.parquet(file)
newDataDF.registerTempTable("newDataDF")
//sqlContext.cacheTable("newDataDF")
val result = sqlContext.sql("SELECT rec FROM newDataDF")
// Pull every matching row back to the driver
val rows = result.map(t => t(0)).collect()
//val rows = result.map(t => t.getAs[String]("rec"))
It throws the out-of-memory error at the line below:
val rows = result.map(t => t(0)).collect()
I have tried all the memory-tuning options and increasing executor/driver memory, but nothing seems to work.
Any advice would be greatly appreciated.
Well, by calling collect on your DataFrame, you tell Spark to gather ALL of the data onto the driver. For larger datasets this will indeed drown the driver and cause OOMs.
Spark is a framework for distributed computing intended to be used on large datasets that will not fit on a single machine. Only in very few cases do you ever want to call collect on a DataFrame: when you are debugging (on small datasets), or when you know that your dataset has been reduced vastly in size by some filtering or aggregation transformations.
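If the goal is only to look at a few records, or to keep the work on the cluster, something along these lines (a sketch reusing the result DataFrame from the question; the row count and output path are illustrative) avoids pulling everything onto the driver:

// Bring back only a bounded number of rows
val sample = result.take(100)

// Or keep the computation distributed and write the result out instead of collecting it
result.write.parquet("hdfs:///output/rec")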
You have to increase spark.driver.memory, whose default value is 1g. You can check the driver and executor memory using the --verbose option. For more information, check this link and set the memory as per your requirements: https://spark.apache.org/docs/latest/configuration.html
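For example (the values, class name and jar name below are placeholders, not recommendations from this answer):
spark-submit --driver-memory 4g --executor-memory 4g --verbose --class your.main.Class your-app.jar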

ERROR ContextCleaner: Error in cleaning thread

I have a project with Spark 1.4.1 and Scala 2.11. When I run it with sbt run (sbt 0.13.12), it displays the following error:
16/12/22 15:36:43 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:175)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:172)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:67)
16/12/22 15:36:43 ERROR Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:996)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:317)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
Exception: sbt.TrapExitSecurityException thrown from the UncaughtExceptionHandler in thread "run-main-0"
16/12/22 15:36:43 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:175)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:172)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:67)
Note that I stop the Spark context (sc.stop()) at the end of my code, but I still get the same error. Thinking there might be insufficient memory, I changed the configuration to set the executor memory (rather than the driver memory), as follows:
val conf = new SparkConf().setAppName("Simple project").setMaster("local[*]").set("spark.executor.memory", "2g")
val sc = new SparkContext(conf)
But I still get the same error.
Can you help me with some ideas about where exactly my error is: in the memory configuration or somewhere else?
Note that I stop the Spark context (sc.stop()) at the end of my code, but I still get the same error.
Stopping the Spark context (sc.stop()) without waiting for the job to complete could be the reason for this. Make sure you call sc.stop() only after all your Spark actions have finished.
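A minimal sketch of that ordering, reusing the configuration from the question (the input path and the count action are just examples):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Simple project").setMaster("local[*]")
val sc = new SparkContext(conf)

val data = sc.textFile("data.txt")   // placeholder input
val total = data.count()             // run all actions first
println(total)

sc.stop()                            // stop the context only after the actions have finished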