I found that a Mockito mock throws ClassNotFoundException when used in Spark. Here is a minimal example:
import org.apache.spark.{SparkConf, SparkContext}
import org.mockito.{Matchers, Mockito}
import org.scalatest.FlatSpec
import org.scalatest.mockito.MockitoSugar

trait MyTrait {
  def myMethod(a: Int): Int
}

class MyTraitTest extends FlatSpec with MockitoSugar {
  "Mock" should "work in Spark" in {
    val m = mock[MyTrait](Mockito.withSettings().serializable())
    Mockito.when(m.myMethod(Matchers.any())).thenReturn(1)

    val conf = new SparkConf().setAppName("testApp").setMaster("local")
    val sc = new SparkContext(conf)
    assert(sc.makeRDD(Seq(1, 2, 3)).map(m.myMethod).first() == 1)
  }
}
which throws the following exception:
[info] MyTraitTest:
[info] Mock
[info] - should work in Spark *** FAILED ***
[info] org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ClassNotFoundException: MyTrait$$EnhancerByMockitoWithCGLIB$$6d9e95a8
[info] at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
[info] at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[info] at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
[info] at java.lang.Class.forName0(Native Method)
[info] at java.lang.Class.forName(Class.java:348)
[info] at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
[info] at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1819)
[info] at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1986)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
[info] at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
[info] at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
[info] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
[info] at org.apache.spark.scheduler.Task.run(Task.scala:99)
[info] at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
The stack trace hints that this is related to dynamic class loading, but I don't know how to fix it.
Update:
Apparently, changing
val m = mock[MyTrait](Mockito.withSettings().serializable())
to
val m = mock[MyTrait](Mockito.withSettings().serializable(SerializableMode.ACROSS_CLASSLOADERS))
(with import org.mockito.mock.SerializableMode) makes the exception disappear. However, I don't follow why this fix is necessary. I thought that in Spark local mode a single JVM hosts both the driver and the executor. So a different ClassLoader must be used to load the deserialized class on the executor?
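One way to see why two loaders can disagree inside a single JVM (a minimal sketch, no Spark required; the interpretation of Spark's executor behavior here is my assumption, not something stated in the stack trace): Spark deserializes tasks through the executor's own classloader, while a CGLIB-generated mock class is known only to the loader that defined it, so resolving its name through a different loader fails exactly like the trace shows.

```scala
// Minimal sketch: two classloaders coexist in one JVM, and Class.forName
// only succeeds through a loader that defined the class or delegates to one
// that did. Resolving a dynamically generated class (like a Mockito/CGLIB
// proxy) through an unrelated loader raises ClassNotFoundException.
object LoaderDemo extends App {
  val appLoader = getClass.getClassLoader
  val ctxLoader = Thread.currentThread().getContextClassLoader
  println(s"application loader: $appLoader")
  println(s"context loader:     $ctxLoader")

  // Resolving a name explicitly through a chosen loader, as Spark's
  // JavaDeserializationStream does via Class.forName:
  val cls = Class.forName("scala.Option", false, ctxLoader)
  println(cls.getName)
}
```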
I am upgrading a Spark 2.4 project to Spark 3.x. We are hitting a snag with some existing Spark-ml code:
var stringIndexers = Array[StringIndexer]()
for (featureColumn <- FEATURE_COLS) {
  stringIndexers = stringIndexers :+
    new StringIndexer().setInputCol(featureColumn).setOutputCol(featureColumn + "_index")
}
val pipeline = new Pipeline().setStages(stringIndexers)
val dfWithNumericalFeatures = pipeline.fit(decoratedDf).transform(decoratedDf)
Specifically, this line: val dfWithNumericalFeatures = pipeline.fit(decoratedDf).transform(decoratedDf) now results in this cryptic exception in Spark 3:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 238.0 failed 1 times, most recent failure: Lost task 0.0 in stage 238.0 (TID 5589) (executor driver): com.esotericsoftware.kryo.KryoException: Unable to find class: org.apache.spark.util.collection.OpenHashMap$mcJ$sp$$Lambda$13346/2134122295
[info] Serialization trace:
[info] org$apache$spark$util$collection$OpenHashMap$$grow (org.apache.spark.util.collection.OpenHashMap$mcJ$sp)
[info] at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
[info] at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
[info] at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
[info] at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:118)
[info] at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
[info] at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
[info] at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:396)
[info] at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:307)
[info] at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
[info] at org.apache.spark.serializer.KryoSerializerInstance.deserialize(KryoSerializer.scala:397)
[info] at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
[info] at org.apache.spark.sql.execution.aggregate.ComplexTypedAggregateExpression.deserialize(TypedAggregateExpression.scala:271)
[info] at org.apache.spark.sql.catalyst.expressions.aggregate.TypedImperativeAggregate.merge(interfaces.scala:568)
[info] at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$1.$anonfun$applyOrElse$3(AggregationIterator.scala:199)
[info] at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$1.$anonfun$applyOrElse$3$adapted(AggregationIterator.scala:199)
[info] at org.apache.spark.sql.execution.aggregate.AggregationIterator.$anonfun$generateProcessRow$7(AggregationIterator.scala:213)
[info] at org.apache.spark.sql.execution.aggregate.AggregationIterator.$anonfun$generateProcessRow$7$adapted(AggregationIterator.scala:207)
[info] at org.apache.spark.sql.execution.aggregate.ObjectAggregationIterator.processInputs(ObjectAggregationIterator.scala:151)
[info] at org.apache.spark.sql.execution.aggregate.ObjectAggregationIterator.<init>(ObjectAggregationIterator.scala:77)
[info] at org.apache.spark.sql.execution.aggregate.ObjectHashAggregateExec.$anonfun$doExecute$2(ObjectHashAggregateExec.scala:107)
[info] at org.apache.spark.sql.execution.aggregate.ObjectHashAggregateExec.$anonfun$doExecute$2$adapted(ObjectHashAggregateExec.scala:85)
[info] at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2(RDD.scala:885)
[info] at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2$adapted(RDD.scala:885)
[info] at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[info] at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[info] at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[info] at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[info] at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[info] at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[info] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
[info] at org.apache.spark.scheduler.Task.run(Task.scala:131)
[info] at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
[info] at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
[info] at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info] at java.lang.Thread.run(Thread.java:750)
[info] Caused by: java.lang.ClassNotFoundException: org.apache.spark.util.collection.OpenHashMap$mcJ$sp$$Lambda$13346/2134122295
[info] at java.lang.Class.forName0(Native Method)
[info] at java.lang.Class.forName(Class.java:348)
[info] at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:154)
[info] ... 36 more
I have searched around and the only relevant hit I've found is this unanswered SO question with the same error: Spark Kryo Serialization issue.
OpenHashMap is not used anywhere in my code, so it seems likely there is a bug in the KryoSerializer triggered during the Pipeline.fit() call. Any ideas how to get around this? Thanks!
EDIT: I also just attempted removing usage of the KryoSerializer during my unit tests:
spark = SparkSession
  .builder
  .master("local[*]")
  .appName("UnitTest")
  .config("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
  .config("spark.driver.bindAddress", "127.0.0.1")
  .getOrCreate()
I confirmed that the JavaSerializer is in use: println(spark.conf.get("spark.serializer")) outputs org.apache.spark.serializer.JavaSerializer. The same failure still occurs, however, even without the KryoSerializer.
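As an aside on the code being upgraded (not a fix for the Kryo error itself): the per-column loop can be written without a mutable var, and in Spark 3 StringIndexer can index several columns in a single stage via setInputCols/setOutputCols. A sketch, assuming FEATURE_COLS: Seq[String] as in the question:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.StringIndexer

// One stage per column, built without a var:
val stringIndexers: Array[StringIndexer] = FEATURE_COLS.map { c =>
  new StringIndexer().setInputCol(c).setOutputCol(c + "_index")
}.toArray

// Or, Spark 3 only: a single multi-column stage.
val multiIndexer = new StringIndexer()
  .setInputCols(FEATURE_COLS.toArray)
  .setOutputCols(FEATURE_COLS.map(_ + "_index").toArray)

val pipeline = new Pipeline().setStages(Array(multiIndexer))
```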
I'm trying this Scala microbenchmark plugin, sbt-jmh, and I am getting an error.
package play.twirl.benchmarks

import play.twirl.parser._
import play.twirl.parser.TreeNodes._
import org.openjdk.jmh.annotations.Benchmark

class TwirlBenchmark {
  @Benchmark
  def simpleParse(): Template = {
    val parser = new TwirlParser(false)
    val template = "<h1>hello</h1>#someVar"
    parser.parse(template) match {
      case parser.Success(tmpl, input) =>
        if (!input.atEnd) sys.error("Template parsed but not at source end")
        tmpl
      case parser.Error(_, _, errors) =>
        sys.error("Template failed to parse: " + errors.head.str)
    }
  }
}
It compiles fine, but when running the benchmark:
jmh:run
I get these errors:
[info] # Warmup Iteration 1: <failure>
[info] java.lang.NoClassDefFoundError: scala/util/parsing/input/Position
[info] at play.twirl.benchmarks.TwirlBenchmark.simpleParse(TwirlBenchmarks.scala:23)
[info] at play.twirl.benchmarks.generated.TwirlBenchmark_simpleParse_jmhTest.simpleParse_thrpt_jmhStub(TwirlBenchmark_simpleParse_jmhTest.java:119)
[info] at play.twirl.benchmarks.generated.TwirlBenchmark_simpleParse_jmhTest.simpleParse_Throughput(TwirlBenchmark_simpleParse_jmhTest.java:83)
[info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[info] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[info] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[info] at java.lang.reflect.Method.invoke(Method.java:498)
[info] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
[info] at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[info] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
[info] Caused by: java.lang.ClassNotFoundException: scala.util.parsing.input.Position
[info] at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
[info] at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[info] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
[info] at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
[info] ... 15 more
[info] # Run complete. Total time: 00:00:02
Not sure how to proceed. Help?
You may be missing org.scala-lang.modules:scala-parser-combinators: https://mvnrepository.com/artifact/org.scala-lang.modules/scala-parser-combinators
Be sure to include the version appropriate for your Scala and Play versions.
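A build.sbt sketch of that dependency (the version number below is an assumption; pick the one matching your Scala and Play versions):

```scala
// build.sbt: scala-parser-combinators is no longer bundled with the Scala
// standard library, so it must be declared explicitly.
libraryDependencies +=
  "org.scala-lang.modules" %% "scala-parser-combinators" % "1.1.2"
```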
I need some help understanding errors generated by the Scala class RandomForestAlgorithm.scala (https://github.com/PredictionIO/PredictionIO/blob/develop/examples/scala-parallel-classification/custom-attributes/src/main/scala/RandomForestAlgorithm.scala).
I am building the project as-is (the custom-attributes classification template) in PredictionIO and am getting a pio build error:
hduser#hduser-VirtualBox:~/PredictionIO/classTest$ pio build --verbose
[INFO] [Console$] Using existing engine manifest JSON at /home/hduser/PredictionIO/classTest/manifest.json
[INFO] [Console$] Using command '/home/hduser/PredictionIO/sbt/sbt' at the current working directory to build.
[INFO] [Console$] If the path above is incorrect, this process will fail.
[INFO] [Console$] Uber JAR disabled. Making sure lib/pio-assembly-0.9.5.jar is absent.
[INFO] [Console$] Going to run: /home/hduser/PredictionIO/sbt/sbt package assemblyPackageDependency
[INFO] [Console$] [info] Loading project definition from /home/hduser/PredictionIO/classTest/project
[INFO] [Console$] [info] Set current project to template-scala-parallel-classification (in build file:/home/hduser/PredictionIO/classTest/)
[INFO] [Console$] [info] Compiling 1 Scala source to /home/hduser/PredictionIO/classTest/target/scala-2.10/classes...
[INFO] [Console$] [error] /home/hduser/PredictionIO/classTest/src/main/scala/RandomForestAlgorithm.scala:28: class RandomForestAlgorithm needs to be abstract, since method train in class P2LAlgorithm of type (sc: org.apache.spark.SparkContext, pd: com.test1.PreparedData)com.test1.PIORandomForestModel is not defined
[INFO] [Console$] [error] class RandomForestAlgorithm(val ap: RandomForestAlgorithmParams) // CHANGED
[INFO] [Console$] [error] ^
[INFO] [Console$] [error] one error found
[INFO] [Console$] [error] (compile:compile) Compilation failed
[INFO] [Console$] [error] Total time: 6 s, completed Jun 8, 2016 4:37:36 PM
[ERROR] [Console$] Return code of previous step is 1. Aborting.
So when I address the line causing the error and make the class abstract:
// extends P2LAlgorithm because MLlib's RandomForestModel doesn't
// contain an RDD.
abstract class RandomForestAlgorithm(val ap: RandomForestAlgorithmParams) // CHANGED
  extends P2LAlgorithm[PreparedData, PIORandomForestModel, // CHANGED
    Query, PredictedResult] {

  def train(data: PreparedData): PIORandomForestModel = { // CHANGED
    // Empty categoricalFeaturesInfo indicates all features are continuous.
    val categoricalFeaturesInfo = Map[Int, Int]()
    val m = RandomForest.trainClassifier(
      data.labeledPoints,
      ap.numClasses,
      categoricalFeaturesInfo,
      ap.numTrees,
      ap.featureSubsetStrategy,
      ap.impurity,
      ap.maxDepth,
      ap.maxBins)
    new PIORandomForestModel(
      gendersMap = data.gendersMap,
      educationMap = data.educationMap,
      randomForestModel = m
    )
  }
}
pio build succeeds, but training fails because the abstract class cannot be instantiated:
[INFO] [Engine] Extracting datasource params...
[INFO] [WorkflowUtils$] No 'name' is found. Default empty String will be used.
[INFO] [Engine] Datasource params: (,DataSourceParams(6))
[INFO] [Engine] Extracting preparator params...
[INFO] [Engine] Preparator params: (,Empty)
[INFO] [Engine] Extracting serving params...
[INFO] [Engine] Serving params: (,Empty)
[WARN] [Utils] Your hostname, hduser-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface eth0)
[WARN] [Utils] Set SPARK_LOCAL_IP if you need to bind to another address
[INFO] [Remoting] Starting remoting
[INFO] [Remoting] Remoting started; listening on addresses :[akka.tcp://sparkDriver#10.0.2.15:59444]
[WARN] [MetricsSystem] Using default name DAGScheduler for source because spark.app.id is not set.
Exception in thread "main" java.lang.InstantiationException
at sun.reflect.InstantiationExceptionConstructorAccessorImpl.newInstance(InstantiationExceptionConstructorAccessorImpl.java:48)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at io.prediction.core.Doer$.apply(AbstractDoer.scala:52)
at io.prediction.controller.Engine$$anonfun$1.apply(Engine.scala:171)
at io.prediction.controller.Engine$$anonfun$1.apply(Engine.scala:170)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at io.prediction.controller.Engine.train(Engine.scala:170)
at io.prediction.workflow.CoreWorkflow$.runTrain(CoreWorkflow.scala:65)
at io.prediction.workflow.CreateWorkflow$.main(CreateWorkflow.scala:247)
at io.prediction.workflow.CreateWorkflow.main(CreateWorkflow.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
So, two questions:
1. Why is the following model not considered defined during the build?
class PIORandomForestModel(
  val gendersMap: Map[String, Double],
  val educationMap: Map[String, Double],
  val randomForestModel: RandomForestModel
) extends Serializable
2. How can I define PIORandomForestModel in a way that does not throw a pio build error and lets training re-assign attributes to the object?
I have posted this question in the PredictionIO Google group but have not gotten a response.
Thanks in advance for your help.
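For what it's worth, the original compile error says P2LAlgorithm expects a two-argument train(sc: SparkContext, pd: PreparedData). Implementing that signature, instead of declaring the class abstract, should let PredictionIO's Doer instantiate the algorithm at training time. A sketch, not a tested fix; all domain types (PreparedData, PIORandomForestModel, RandomForestAlgorithmParams, Query, PredictedResult) are taken from the question:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.tree.RandomForest

// Concrete (non-abstract) class: implement the train overload that the
// compiler error asked for, so reflective instantiation can succeed.
class RandomForestAlgorithm(val ap: RandomForestAlgorithmParams)
  extends P2LAlgorithm[PreparedData, PIORandomForestModel, Query, PredictedResult] {

  def train(sc: SparkContext, data: PreparedData): PIORandomForestModel = {
    // Empty categoricalFeaturesInfo indicates all features are continuous.
    val categoricalFeaturesInfo = Map[Int, Int]()
    val m = RandomForest.trainClassifier(
      data.labeledPoints, ap.numClasses, categoricalFeaturesInfo,
      ap.numTrees, ap.featureSubsetStrategy, ap.impurity, ap.maxDepth, ap.maxBins)
    new PIORandomForestModel(
      gendersMap = data.gendersMap,
      educationMap = data.educationMap,
      randomForestModel = m)
  }
}
```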
So, I am studying the Play 2 framework with Slick. The code is a simple Slick query against the database, and I get an exception that I don't understand.
My controller:
class IndexController @Inject()(taskRepo: TaskRepo) extends Controller {
  def index = Action.async { implicit rs =>
    taskRepo.all().map(tasks => Ok(views.html.index(tasks)))
  }
}
And the exception:
[info] ! #6pp163f7m - Internal server error, for (GET) [/] ->
[info]
[info] play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[PrivilegedActionException: null]]
[info] at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:269)
[info] at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:195)
[info] at play.core.server.Server$class.logExceptionAndGetResult$1(Server.scala:45)
[info] at play.core.server.Server$class.getHandlerFor(Server.scala:65)
[info] at play.core.server.NettyServer.getHandlerFor(NettyServer.scala:45)
[info] at play.core.server.netty.PlayRequestHandler.handle(PlayRequestHandler.scala:81)
[info] at play.core.server.netty.PlayRequestHandler.channelRead(PlayRequestHandler.scala:162)
[info] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:307)
[info] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:293)
[info] at com.typesafe.netty.http.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:129)
[info] Caused by: java.security.PrivilegedActionException: null
[info] at java.security.AccessController.doPrivileged(Native Method)
[info] at play.runsupport.Reloader$.play$runsupport$Reloader$$withReloaderContextClassLoader(Reloader.scala:39)
[info] at play.runsupport.Reloader.reload(Reloader.scala:336)
[info] at play.core.server.DevServerStart$$anonfun$mainDev$1$$anon$1$$anonfun$get$1.apply(DevServerStart.scala:118)
[info] at play.core.server.DevServerStart$$anonfun$mainDev$1$$anon$1$$anonfun$get$1.apply(DevServerStart.scala:116)
[info] at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
[info] at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
[info] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
[info] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
[info] at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
[info] Caused by: java.util.concurrent.TimeoutException: Futures timed out after [300000 milliseconds]
[info] at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
[info] at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
[info] at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
[info] at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
[info] at scala.concurrent.Await$.result(package.scala:190)
[info] at play.forkrun.ForkRun$$anonfun$askForReload$1.apply(ForkRun.scala:128)
[info] at play.forkrun.ForkRun$$anonfun$askForReload$1.apply(ForkRun.scala:126)
[info] at play.runsupport.Reloader$$anonfun$reload$1.apply(Reloader.scala:338)
[info] at play.runsupport.Reloader$$anon$3.run(Reloader.scala:43)
[info] at java.security.AccessController.doPrivileged(Native Method)
What am I doing wrong?
The problem was the "Futures timed out after [300000 milliseconds]".
In build.sbt, change fork in run := true to fork in run := false.
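As a minimal build.sbt sketch of that change (using the pre-sbt-1.x `fork in run` syntax that matches this Play version):

```scala
// build.sbt: run the app inside the sbt JVM instead of a forked process,
// so Play's dev-mode reloader does not stall waiting on the forked JVM.
fork in run := false
```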
This code just calls getPackage and getName on a class (no mocking techniques are used yet), but it fails.
Has anyone seen this problem before?
Code:
import mai.MyScala1
import org.junit.Test
import org.junit.runner.RunWith
import org.powermock.modules.junit4.PowerMockRunner
import org.scalatest.junit.JUnitSuite

@RunWith(classOf[PowerMockRunner])
class MyTest extends JUnitSuite {
  @Test def test1() {
    classOf[MyScala1].getPackage         // this one returns null
    classOf[MyScala1].getPackage.getName // raises java.lang.NullPointerException
  }
}
Error logs:
[info] - test1 *** FAILED ***
[info] java.lang.NullPointerException:
[info] at org.apache.tmp.MyTest.test1(MyTest.scala:15)
[info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[info] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[info] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[info] at java.lang.reflect.Method.invoke(Method.java:497)
[info] at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
[info] at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
[info] at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
[info] at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
[info] at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
[info] ...
I think I found the answer, but I don't understand why it works.
Answer: inside sbt, first run "set fork := true", and then everything works fine.
The problem might be related to running in a separate process vs. an in-process thread.
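A possible explanation (my assumption, not verified against the sbt sources): Class.getPackage returns the Package object registered by the defining classloader, and sbt's in-process classloader may never call definePackage, so getPackage yields null; a forked JVM loads test classes with a standard URLClassLoader, which does define packages. The setting can also be made permanent in build.sbt:

```scala
// build.sbt sketch: fork the test JVM so classes are loaded by a standard
// URLClassLoader that registers Package objects, making getPackage non-null.
fork in Test := true
```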