Spray-Json java.lang.OutOfMemoryError when calling parseJson - scala

I'm using spray-json 1.3.0. I have a simple piece of JSON that I am asking spray to parse; here it is:
import org.scalatest.{FlatSpec, MustMatchers}
import spray.json._

class BlockCypherOutputMarshallerTest extends FlatSpec with MustMatchers {

  val expectedOutput = """{"value":7454642,
    |"script":"76a9148d5968ad26f9e277849ff9f8f39920f28944467388ac",
    |"addresses":["mtQLgLiqmytKkgE9sVGwypAFsLvkxBQ6XX"],
    |"script_type":"pay-to-pubkey-hash}""".stripMargin

  val json = expectedOutput.parseJson

  "BlockCypherOutputMarshaller" must "parse an output from the blockcypher api" in {
    //test case
  }
}
However, I am getting an error on the line where val json = expectedOutput.parseJson is called. Here is the error message:
> last test:testOnly
[debug] Running TaskDef(com.blockcypher.api.marshallers.BlockCypherOutputMarshallerTest, org.scalatest.tools.Framework$$anon$1#4178c07b, false, [SuiteSelector])
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:622)
at java.lang.StringBuilder.append(StringBuilder.java:202)
at spray.json.JsonParser.appendSB(JsonParser.scala:179)
at spray.json.JsonParser.char(JsonParser.scala:138)
at spray.json.JsonParser.string(JsonParser.scala:129)
at spray.json.JsonParser.value(JsonParser.scala:62)
at spray.json.JsonParser.members$1(JsonParser.scala:80)
at spray.json.JsonParser.object(JsonParser.scala:84)
at spray.json.JsonParser.value(JsonParser.scala:59)
at spray.json.JsonParser.parseJsValue(JsonParser.scala:43)
at spray.json.JsonParser$.apply(JsonParser.scala:28)
at spray.json.PimpedString.parseJson(package.scala:45)
at com.blockcypher.api.marshallers.BlockCypherOutputMarshallerTest.<init>(BlockCypherOutputMarshallerTest.scala:16)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at java.lang.Class.newInstance(Class.java:442)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:641)
at sbt.TestRunner.runTest$1(TestFramework.scala:76)
at sbt.TestRunner.run(TestFramework.scala:85)
at sbt.TestFramework$$anon$2$$anonfun$$init$$1$$anonfun$apply$8.apply(TestFramework.scala:202)
at sbt.TestFramework$$anon$2$$anonfun$$init$$1$$anonfun$apply$8.apply(TestFramework.scala:202)
at sbt.TestFramework$.sbt$TestFramework$$withContextLoader(TestFramework.scala:185)
at sbt.TestFramework$$anon$2$$anonfun$$init$$1.apply(TestFramework.scala:202)
at sbt.TestFramework$$anon$2$$anonfun$$init$$1.apply(TestFramework.scala:202)
at sbt.TestFunction.apply(TestFramework.scala:207)
at sbt.Tests$$anonfun$9.apply(Tests.scala:216)
at sbt.Tests$$anonfun$9.apply(Tests.scala:216)
[error] Could not run test com.blockcypher.api.marshallers.BlockCypherOutputMarshallerTest: java.lang.OutOfMemoryError: Java heap space
[debug] Summary for ScalaCheck not available.
[debug] Summary for specs2 not available.
[info] ScalaTest
[info] Run completed in 1 second, 541 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[error] Error: Total 0, Failed 0, Errors 0, Passed 0
[error] Error during tests:
[error] com.blockcypher.api.marshallers.BlockCypherOutputMarshallerTest
[error] (test:testOnly) sbt.TestsFailedException: Tests unsuccessful
I'm unsure why I am running out of heap space; the piece of JSON seems simple enough, and I am not having issues with other JSON test cases of a similar size.

You need to close the quote in your last line: "pay-to-pubkey-hash is never terminated, so
"script_type":"pay-to-pubkey-hash}"""
should be
"script_type":"pay-to-pubkey-hash"}"""
Because the string literal is never closed, the parser keeps appending characters to its internal StringBuilder while looking for the closing quote (the appendSB and expandCapacity frames in your stack trace), which is most likely why the heap eventually runs out.

Related

Spark 3 KryoSerializer issue - Unable to find class: org.apache.spark.util.collection.OpenHashMap

I am upgrading a Spark 2.4 project to Spark 3.x. We are hitting a snag with some existing Spark ML code:
var stringIndexers = Array[StringIndexer]()
for (featureColumn <- FEATURE_COLS) {
  stringIndexers = stringIndexers :+ new StringIndexer().setInputCol(featureColumn).setOutputCol(featureColumn + "_index")
}
val pipeline = new Pipeline().setStages(stringIndexers)
val dfWithNumericalFeatures = pipeline.fit(decoratedDf).transform(decoratedDf)
Specifically, this line: val dfWithNumericalFeatures = pipeline.fit(decoratedDf).transform(decoratedDf) now results in this cryptic exception in Spark 3:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 238.0 failed 1 times, most recent failure: Lost task 0.0 in stage 238.0 (TID 5589) (executor driver): com.esotericsoftware.kryo.KryoException: Unable to find class: org.apache.spark.util.collection.OpenHashMap$mcJ$sp$$Lambda$13346/2134122295
[info] Serialization trace:
[info] org$apache$spark$util$collection$OpenHashMap$$grow (org.apache.spark.util.collection.OpenHashMap$mcJ$sp)
[info] at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
[info] at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
[info] at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
[info] at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:118)
[info] at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
[info] at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
[info] at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:396)
[info] at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:307)
[info] at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
[info] at org.apache.spark.serializer.KryoSerializerInstance.deserialize(KryoSerializer.scala:397)
[info] at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
[info] at org.apache.spark.sql.execution.aggregate.ComplexTypedAggregateExpression.deserialize(TypedAggregateExpression.scala:271)
[info] at org.apache.spark.sql.catalyst.expressions.aggregate.TypedImperativeAggregate.merge(interfaces.scala:568)
[info] at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$1.$anonfun$applyOrElse$3(AggregationIterator.scala:199)
[info] at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$1.$anonfun$applyOrElse$3$adapted(AggregationIterator.scala:199)
[info] at org.apache.spark.sql.execution.aggregate.AggregationIterator.$anonfun$generateProcessRow$7(AggregationIterator.scala:213)
[info] at org.apache.spark.sql.execution.aggregate.AggregationIterator.$anonfun$generateProcessRow$7$adapted(AggregationIterator.scala:207)
[info] at org.apache.spark.sql.execution.aggregate.ObjectAggregationIterator.processInputs(ObjectAggregationIterator.scala:151)
[info] at org.apache.spark.sql.execution.aggregate.ObjectAggregationIterator.<init>(ObjectAggregationIterator.scala:77)
[info] at org.apache.spark.sql.execution.aggregate.ObjectHashAggregateExec.$anonfun$doExecute$2(ObjectHashAggregateExec.scala:107)
[info] at org.apache.spark.sql.execution.aggregate.ObjectHashAggregateExec.$anonfun$doExecute$2$adapted(ObjectHashAggregateExec.scala:85)
[info] at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2(RDD.scala:885)
[info] at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2$adapted(RDD.scala:885)
[info] at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[info] at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[info] at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[info] at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[info] at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[info] at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[info] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
[info] at org.apache.spark.scheduler.Task.run(Task.scala:131)
[info] at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
[info] at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
[info] at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info] at java.lang.Thread.run(Thread.java:750)
[info] Caused by: java.lang.ClassNotFoundException: org.apache.spark.util.collection.OpenHashMap$mcJ$sp$$Lambda$13346/2134122295
[info] at java.lang.Class.forName0(Native Method)
[info] at java.lang.Class.forName(Class.java:348)
[info] at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:154)
[info] ... 36 more
I have searched around, and the only relevant issue I've found is this unanswered SO question with the same problem: Spark Kryo Serialization issue.
OpenHashMap is not used in my code, so it seems likely that there is a bug involving the KryoSerializer during this Pipeline.fit() call. Any ideas how to get around this? Thanks!
EDIT: I also just attempted removing the KryoSerializer from my unit tests:
spark = SparkSession
  .builder
  .master("local[*]")
  .appName("UnitTest")
  .config("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
  .config("spark.driver.bindAddress", "127.0.0.1")
  .getOrCreate()
I confirmed that I am using the JavaSerializer:
println(spark.conf.get("spark.serializer")) outputs org.apache.spark.serializer.JavaSerializer. However, the same issue persists even when not using the KryoSerializer.
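Not a fix for the Kryo failure itself, but Spark 3's StringIndexer can index multiple columns in a single stage; a minimal sketch of the equivalent pipeline built that way, assuming FEATURE_COLS is an Array[String] (whether it avoids the OpenHashMap serialization path here is untested):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.StringIndexer

// One StringIndexer stage covering all feature columns (multi-column API added in Spark 3.0).
val indexer = new StringIndexer()
  .setInputCols(FEATURE_COLS)
  .setOutputCols(FEATURE_COLS.map(_ + "_index"))

val pipeline = new Pipeline().setStages(Array(indexer))
val dfWithNumericalFeatures = pipeline.fit(decoratedDf).transform(decoratedDf)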

sbt test fails due to "Forked test harness failed"

I've got a Scala project which I'm building with sbt. When running sbt test, the tests themselves pass, but then the command fails with "Forked test harness failed: java.io.EOFException".
The build.sbt file does not specify fork in Test.
Example of the error it fails with after running sbt test:
[info] Run completed in 5 seconds, 494 milliseconds.
[info] Total number of tests run: 1
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 1, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[error] Error during tests:
[error] Forked test harness failed: java.io.EOFException
[error] at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2959)
[error] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1539)
[error] at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)
[error] at sbt.React.react(ForkTests.scala:177)
[error] at sbt.ForkTests$Acceptor$1$.run(ForkTests.scala:108)
[error] at java.lang.Thread.run(Thread.java:748)
[error] (serverTests / Test / test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 14 s, completed Mar 12, 2020 4:35:26 PM
Minimal example of a test which fails:
package com.example

import akka.http.scaladsl.testkit.ScalatestRouteTest
import org.scalatest.FreeSpecLike

class ForkedTestHarnessFailedForNoReasonSpec extends FreeSpecLike with ScalatestRouteTest {
  "This test" - {
    "should not fail" in {
      assert("Foo" == "Foo")
    }
  }
}
What does this error indicate and how should one resolve it?
The cause in my case was that Akka shuts down the JVM on coordinated shutdown. Put this in your test config (src/test/resources/reference.conf in my case):
akka.coordinated-shutdown.exit-jvm = off
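If you prefer keeping the setting next to the spec rather than in a config file, the akka-http testkit's testConfigSource hook can be overridden instead; a minimal sketch, assuming that hook is available in your testkit version:

import akka.http.scaladsl.testkit.ScalatestRouteTest
import org.scalatest.FreeSpecLike

class ForkedTestHarnessSpec extends FreeSpecLike with ScalatestRouteTest {
  // Disable Akka's coordinated-shutdown JVM exit for this spec only.
  override def testConfigSource: String =
    "akka.coordinated-shutdown.exit-jvm = off"

  "This test" - {
    "should not fail" in {
      assert("Foo" == "Foo")
    }
  }
}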

Scala test with sbt cross compile platforms (Native,JVM,JS)

I have an sbt cross project and I'm trying to run the test and testOnly commands to run Scala tests. Currently I have a JVM test and a native one; the JVM test always runs successfully while the native one fails with the exception below. Any idea?
[IJ]> test
[info] Processing resources
[info] Linking (5527 ms)
[info] Discovered 2325 classes and 15319 methods
[info] JvmTest:
[info] name
[info] - should the name is set correctly in constructor
[info] JvmTest2:
[info] age
[info] - should be 28
[info] Run completed in 11 seconds, 488 milliseconds.
[info] Total number of tests run: 2
[info] Suites: completed 2, aborted 0
[info] Tests: succeeded 2, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Optimizing (debug mode) (8718 ms)
[info] Generating intermediate code (1943 ms)
[info] Produced 68 files
[info] Compiling to native code (9007 ms)
[info] Linking native code (boehm gc) (538 ms)
[info] Starting process '/home/naseem/Documents/SFreely NEW/CrossCompilePlatforms3June/myProject/nativeUbuntu18/target/scala-2.11/myprojectnativeubuntu18-out' on port '30651'.
[error] Uncaught exception when running tests: java.io.InvalidClassException: sbt.testing.TaskDef; local class incompatible: stream classdesc serialVersionUID = -7417691495999416204, local class serialVersionUID = 2663134200025980977
[info] Run completed in 531 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[error] Error during tests:
[error] Forked test harness failed: java.net.SocketException: Connection reset
[error] at java.net.SocketInputStream.read(SocketInputStream.java:210)
[error] at java.net.SocketInputStream.read(SocketInputStream.java:141)
[error] at java.net.SocketInputStream.read(SocketInputStream.java:224)
[error] at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2631)
[error] at java.io.ObjectInputStream$BlockDataInputStream.readBlockHeader(ObjectInputStream.java:2825)
[error] at java.io.ObjectInputStream$BlockDataInputStream.refill(ObjectInputStream.java:2895)
[error] at java.io.ObjectInputStream$BlockDataInputStream.skipBlockData(ObjectInputStream.java:2797)
[error] at java.io.ObjectInputStream.skipCustomData(ObjectInputStream.java:2229)
[error] at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1871)
[error] at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1745)
[error] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2033)
[error] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
[error] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
[error] at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:557)
[error] at java.lang.Throwable.readObject(Throwable.java:914)
[error] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[error] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[error] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[error] at java.lang.reflect.Method.invoke(Method.java:498)
[error] at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
[error] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
[error] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
[error] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
[error] at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
[error] at sbt.React.react(ForkTests.scala:122)
[error] at sbt.ForkTests$$anonfun$mainTestTask$1$Acceptor$2$.run(ForkTests.scala:76)
[error] at java.lang.Thread.run(Thread.java:748)
[error] (myProjectNativeUbuntu18/nativetest:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 35 s, completed Jul 10, 2019 11:57:19 AM

Mockito's mock throw ClassNotFoundException in Spark application

I found that a mock object created with Mockito throws a ClassNotFoundException when used in Spark. Here is a minimal example:
import org.apache.spark.{SparkConf, SparkContext}
import org.mockito.{Matchers, Mockito}
import org.scalatest.FlatSpec
import org.scalatest.mockito.MockitoSugar

trait MyTrait {
  def myMethod(a: Int): Int
}

class MyTraitTest extends FlatSpec with MockitoSugar {
  "Mock" should "work in Spark" in {
    val m = mock[MyTrait](Mockito.withSettings().serializable())
    Mockito.when(m.myMethod(Matchers.any())).thenReturn(1)

    val conf = new SparkConf().setAppName("testApp").setMaster("local")
    val sc = new SparkContext(conf)
    assert(sc.makeRDD(Seq(1, 2, 3)).map(m.myMethod).first() == 1)
  }
}
which would throw the following exception:
[info] MyTraitTest:
[info] Mock
[info] - should work in Spark *** FAILED ***
[info] org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ClassNotFoundException: MyTrait$$EnhancerByMockitoWithCGLIB$$6d9e95a8
[info] at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
[info] at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[info] at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
[info] at java.lang.Class.forName0(Native Method)
[info] at java.lang.Class.forName(Class.java:348)
[info] at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
[info] at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1819)
[info] at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1986)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
[info] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
[info] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
[info] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
[info] at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
[info] at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
[info] at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
[info] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
[info] at org.apache.spark.scheduler.Task.run(Task.scala:99)
[info] at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
[info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] at java.lang.Thread.run(Thread.java:745)
The stack trace hints that this is related to dynamic class loading, but I don't know how to fix it.
Update:
Apparently, changing
val m = mock[MyTrait](Mockito.withSettings().serializable())
to
val m = mock[MyTrait](Mockito.withSettings().serializable(SerializableMode.ACROSS_CLASSLOADERS))
makes the exception disappear. However, I don't follow why this fix is necessary. I thought that in Spark local mode a single JVM hosts both the driver and the executor, so it must be that a different ClassLoader is used to load the deserialized class on the executor?
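For completeness, a minimal sketch of the working mock creation together with the import the fix requires:

import org.mockito.Mockito
import org.mockito.mock.SerializableMode

// Serializable across class loaders, so the class loader on the executor side
// can resolve the generated mock class after deserialization.
val m = mock[MyTrait](
  Mockito.withSettings().serializable(SerializableMode.ACROSS_CLASSLOADERS)
)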

Scalatest Suites do not have detailed test status output

When I run the following simple test using sbt I get the output I would expect:
import org.scalatest.{FlatSpec, Matchers, Suites}
class TestSimple extends FlatSpec with Matchers {
  "a" should "do" in {
    Array(1,3) should equal (Array(1,2))
  }
}
Output:
[info] TestSimple:
[info] a
[info] - should do *** FAILED ***
[info] Array(1, 3) did not equal Array(1, 2) (SimpleTest.scala:5)
[info] ScalaTest
[info] Run completed in 980 milliseconds.
[info] Total number of tests run: 1
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 0, failed 1, canceled 0, ignored 0, pending 0
[info] *** 1 TEST FAILED ***
[error] Failed: Total 1, Failed 1, Errors 0, Passed 0
[error] Failed tests:
[error] TestSimple
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
When the test is included in a Suites instance and annotated with @DoNotDiscover like so:
import org.scalatest.{DoNotDiscover, FlatSpec, Matchers, Suites}

class FullTestSuite extends Suites(new TestSimple)

@DoNotDiscover
class TestSimple extends FlatSpec with Matchers {
  "a" should "do" in {
    Array(1,3) should equal (Array(1,2))
  }
}
then the output does not include the per-test successes and failures but instead has just the overall results:
[info] ScalaTest
[info] Run completed in 975 milliseconds.
[info] Total number of tests run: 1
[info] Suites: completed 2, aborted 0
[info] Tests: succeeded 0, failed 1, canceled 0, ignored 0, pending 0
[info] *** 1 TEST FAILED ***
[error] Failed: Total 1, Failed 1, Errors 0, Passed 0
[error] Failed tests:
[error] FullTestSuite
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
How can I get tests run inside a Suites instance to output where and how they have failed?
Thanks
I guess you are facing bug #916. You should also try a version >= 3.0.0-M15 and provide your feedback to the developers.
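For example, bumping the ScalaTest dependency in build.sbt to the milestone mentioned above (or any newer 3.x release):

libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.0-M15" % Test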