I've written unit tests based on the DataframeGenerator example, which lets you generate mock DataFrames on the fly.
After executing the following commands successfully:
sbt clean
sbt update
sbt compile
I get the errors shown in the output below when running either of the following commands:
sbt assembly
sbt test -- -oF
Output:
...
[info] SearchClicksProcessorTest:
17/11/24 14:19:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/24 14:19:07 WARN SparkContext: Using an existing SparkContext; some configuration may not take effect.
17/11/24 14:19:18 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/11/24 14:19:18 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
17/11/24 14:19:19 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
[info] - testExplodeMap *** FAILED ***
[info] ExceptionInInitializerError was thrown during property evaluation.
[info] Message: "None"
[info] Occurred when passed generated values (
[info]
[info] )
[info] - testFilterByClicks *** FAILED ***
[info] NoClassDefFoundError was thrown during property evaluation.
[info] Message: Could not initialize class org.apache.spark.rdd.RDDOperationScope$
[info] Occurred when passed generated values (
[info]
[info] )
[info] - testGetClicksData *** FAILED ***
[info] NoClassDefFoundError was thrown during property evaluation.
[info] Message: Could not initialize class org.apache.spark.rdd.RDDOperationScope$
[info] Occurred when passed generated values (
[info]
[info] )
...
[info] *** 3 TESTS FAILED ***
[error] Failed: Total 6, Failed 3, Errors 0, Passed 3
[error] Failed tests:
[error] com.company.spark.ml.pipelines.search.SearchClicksProcessorTest
[error] (root/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 73 s, completed 24 Nov, 2017 2:19:28 PM
Things that I've tried unsuccessfully:
Running sbt test with the -oF flag to show the full stacktrace (as shown above, no stacktrace output appears; see the build.sbt sketch after this list)
Re-building the project in IntelliJ IDEA
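For completeness, my understanding (untested on my side, so treat it as an assumption) is that ScalaTest's -oF flag can also be wired in permanently through build.sbt:
testOptions in Test += Tests.Argument(TestFrameworks.ScalaTest, "-oF")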
My questions are:
What could be the possible cause of this error?
How can I enable the stack-trace output in SBT to be able to debug it?
EDIT-1
My unit-test class contains several test methods like the one below:
import com.holdenkarau.spark.testing.DataframeGenerator
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}
import org.scalacheck.Prop
import org.scalatest.FunSuite
import org.scalatest.prop.Checkers

class SearchClicksProcessorTest extends FunSuite with Checkers {
  // `spark` (a SparkSession) is assumed to be provided elsewhere, e.g. by a shared test trait
  import spark.implicits._

  test("testGetClicksData") {
    val schemaIn = StructType(List(
      StructField("rank", IntegerType),
      StructField("city_id", IntegerType),
      StructField("target", IntegerType)
    ))
    val schemaOut = StructType(List(
      StructField("clicked_res_rank", IntegerType),
      StructField("city_id", IntegerType)
    ))
    val dataFrameGen = DataframeGenerator.arbitraryDataFrame(spark.sqlContext, schemaIn)
    val property = Prop.forAll(dataFrameGen.arbitrary) { dfIn: DataFrame =>
      dfIn.cache()
      val dfOut: DataFrame = dfIn.transform(SearchClicksProcessor.getClicksData)
      dfIn.schema === schemaIn &&
        dfOut.schema === schemaOut &&
        dfIn.filter($"target" === 1).count() === dfOut.count()
    }
    check(property)
  }
}
while build.sbt looks like this:
// core settings
organization := "com.company"
scalaVersion := "2.11.11"
name := "repo-name"
version := "0.0.1"

// cache options
offline := false
updateOptions := updateOptions.value.withCachedResolution(true)

// aggregate options
aggregate in assembly := false
aggregate in update := false

// fork options
fork in Test := true

// common libraryDependencies
libraryDependencies ++= Seq(
  scalaTest,
  typesafeConfig,
  ...
  scalajHttp
)
libraryDependencies ++= allAwsDependencies
libraryDependencies ++= SparkDependencies.allSparkDependencies

assemblyMergeStrategy in assembly := {
  case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard
  ...
  case _ => MergeStrategy.first
}

// note: hyphens aren't legal in plain Scala identifiers, hence the backticks
lazy val `module-1` = project in file("directory-1")

lazy val `module-2` = (project in file("directory-2"))
  .dependsOn(`module-1`)
  .aggregate(`module-1`)

lazy val root = (project in file("."))
  .dependsOn(`module-2`)
  .aggregate(`module-2`)
P.S. Please read the comments on the original question before reading this answer.
Even the popular solution of overriding SBT's transitive dependency on fasterxml.jackson didn't work for me as-is; some more changes were required (the ExceptionInInitializerError was gone, but another error cropped up).
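For reference, that override is usually written like this in build.sbt (the 2.6.5 version is illustrative only; it has to match the Jackson version your Spark release expects):
dependencyOverrides ++= Set(
  "com.fasterxml.jackson.core" % "jackson-core" % "2.6.5",
  "com.fasterxml.jackson.core" % "jackson-databind" % "2.6.5",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.6.5"
)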
Finally (in addition to the above-mentioned fix) I ended up creating DataFrames in a different way, as opposed to the StructType-based generator used here. I created them as
spark.sparkContext.parallelize(Seq(myInstance)).toDF()
where myInstance is an instance of a case class MyType matching the schema of the DataFrame.
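Concretely, for the schemaIn above this looks something like the following (Click is a hypothetical name; note the case class must be defined at top level, not inside a test method):
// hypothetical case class mirroring schemaIn
case class Click(rank: Int, city_id: Int, target: Int)

import spark.implicits._
val dfIn = spark.sparkContext
  .parallelize(Seq(Click(1, 42, 1), Click(2, 42, 0)))
  .toDF()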
While implementing this solution I ran into a small problem: while the datatypes of the schema generated from the case class were correct, the nullability of the fields often mismatched; the fix for this issue was found here.
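I can't reproduce the linked fix here, but the general shape of such a fix (an assumption on my part) is to re-apply an explicit schema carrying the nullability you want:
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// rebuild the DataFrame against a schema with explicit nullability
val desiredSchema = StructType(List(
  StructField("rank", IntegerType, nullable = false),
  StructField("city_id", IntegerType, nullable = false),
  StructField("target", IntegerType, nullable = false)
))
val dfFixed = spark.createDataFrame(dfIn.rdd, desiredSchema)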
Here I'm openly admitting that I'm not sure what the correct fix was: the fasterxml.jackson dependency override or the alternate way of creating DataFrames. So please feel free to fill in the gaps in understanding / investigating the issue.
I have had a similar problem, and after investigating I found that adding a lazy before a val solved my issue. My guess is that running a Scala program under ScalaTest triggers a slightly different initialization sequence: whereas normal Scala execution initializes vals top-down in source-line order (with nested object { ... } blocks initialized the same way), the same code under ScalaTest initializes the vals inside nested object { ... } blocks before the vals that appear above the object { ... } in the source.
This is admittedly vague, I know, but deferring initialization by prefixing the vals with lazy could solve the test issue here (a minimal sketch follows the snippet below).
The crucial thing is that it doesn't occur in normal execution, only in test execution, and in my case it only occurred when using lambdas with taps of this form:
...
.tap(x =>
  hook_feld_erweiterungen_hook(
    abc = theProblematicVal
  )
)
...
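A minimal sketch of the workaround, with hypothetical names:
object Hooks {
  // hypothetical, ordering-sensitive initializer
  private def computeIt(): String = "..."

  // was a plain `val`; `lazy` defers evaluation until first access,
  // which sidesteps the different initialization order under ScalaTest
  lazy val theProblematicVal: String = computeIt()
}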
My problem is somewhat like the one described here.
Part of the code (actually taken from the Apache site) is below:
val httpHosts = new java.util.ArrayList[HttpHost]
httpHosts.add(new HttpHost("127.0.0.1", 9200, "http"))
httpHosts.add(new HttpHost("10.2.3.1", 9200, "http"))

val esSinkBuilder = new ElasticsearchSink.Builder[String](
  httpHosts,
  new ElasticsearchSinkFunction[String] {
    def createIndexRequest(element: String): IndexRequest = {
      val json = new java.util.HashMap[String, String]
      json.put("data", element)
      Requests.indexRequest()
        .index("my-index")
        .`type`("my-type")
        .source(json)
    }
    // ...
If I add these three import statements:
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
I get these errors:
object elasticsearch is not a member of package org.apache.flink.streaming.connectors
object elasticsearch6 is not a member of package org.apache.flink.streaming.connectors
If I do not add those import statements, I get these errors:
Compiling 1 Scala source to E:\sar\scala\practice\readstbdata\target\scala-2.11\classes ...
[error] E:\sar\scala\practice\readstbdata\src\main\scala\example\readcsv.scala:35:25: not found: value ElasticsearchSink
[error] val esSinkBuilder = new ElasticsearchSink.Builder[String](
[error] ^
[error] E:\sar\scala\practice\readstbdata\src\main\scala\example\readcsv.scala:37:7: not found: type ElasticsearchSinkFunction
[error] new ElasticsearchSinkFunction[String] {
[error] ^
[error] two errors found
[error] (Compile / compileIncremental) Compilation failed
[error] Total time: 1 s, completed 10 Feb, 2020 2:15:04 PM
In the Stack Overflow question I referred to above, some functionality had been extended. My understanding is that flink.streaming.connectors.elasticsearch has to be extended with REST libraries. 1) Is my understanding correct? 2) If yes, can I have the complete extensions? 3) If my understanding is wrong, please give me a solution.
Note: I added the following statements to build.sbt:
libraryDependencies += "org.elasticsearch.client" % "elasticsearch-rest-high-level-client" % "7.5.2"
libraryDependencies += "org.elasticsearch" % "elasticsearch" % "7.5.2"
The streaming connectors are not part of the Flink binary distribution; you have to package them with your application.
For elasticsearch6 you need to add flink-connector-elasticsearch6_2.11, which you can do as follows:
libraryDependencies += "org.apache.flink" %% "flink-connector-elasticsearch6" % "1.6.0"
Once this jar is part of your build, the compiler will find the missing classes. However, I don't know whether this ES6 connector will work with Elasticsearch 7.5.2.
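If you eventually need Elasticsearch 7.x, later Flink releases ship a separate connector artifact for it; something like the following (the 1.10.0 version is my assumption, match it to your Flink release):
libraryDependencies += "org.apache.flink" %% "flink-connector-elasticsearch7" % "1.10.0"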
Flink Elasticsearch Connector 7
Please look at the working and detailed answer I have provided here, which is written in Scala.
I have an application in Scala, and I need to use AOP for one piece of functionality, so I used the sbt-aspectj plugin. Everything works fine when I run via the sbt console; however, I am not able to make it work when using the executable jar. I tried the sample code provided on the sbt-aspectj GitHub page, but I am getting errors such as:
[warn] warning incorrect classpath: D:\source\jvm\modules\scala\frameworks\aspectjTracer\target\scala-2.11\classes
[warn] Missing message: configure.invalidClasspathSection in: org.aspectj.ajdt.ajc.messages
[error] error no sources specified
[trace] Stack trace suppressed: run 'last aspectjTracer/aspectj:ajc' for the full output.
[error] (aspectjTracer/aspectj:ajc) org.aspectj.bridge.AbortException: ABORT
[error] Expected project ID
[error] Expected configuration
[error] Expected ':' (if selecting a configuration)
[error] Expected key
[error] Not a valid key: aspectjTracker (similar: aspectjSource, aspectj-source, aspectjDirectory)
[error] last aspectjTracker/aspectj:ajc
[error]
My Build.scala is given below:
object frameworkBuild extends Build {
  import Dependencies._

  val akkaV = "2.3.6"
  val sprayV = "1.3.1"
  val musterV = "0.3.0"

  val common_settings = Defaults.defaultSettings ++
    Seq(
      version := "1.3-SNAPSHOT",
      organization := "com.reactore",
      scalaVersion in ThisBuild := "2.11.2",
      scalacOptions ++= Seq("-unchecked", "-feature", "-deprecation"),
      libraryDependencies := frameworkDependencies ++ testLibraryDependencies,
      publishMavenStyle := true,
      connectInput in run := true
    )

  lazy val aspectJTracer = Project(
    "aspectjTracer",
    file("aspectjTracer"),
    settings = common_settings ++ aspectjSettings ++ Seq(
      // input compiled scala classes
      inputs in Aspectj <+= compiledClasses,
      // ignore warnings
      lintProperties in Aspectj += "invalidAbsoluteTypeName = ignore",
      lintProperties in Aspectj += "adviceDidNotMatch = ignore",
      // replace regular products with compiled aspects
      products in Compile <<= products in Aspectj
    )
  )

  // test that the instrumentation works
  lazy val instrumented = Project(
    "instrumented",
    file("instrumented"),
    dependencies = Seq(aspectJTracer),
    settings = common_settings ++ aspectjSettings ++ Seq(
      // add the compiled aspects from tracer
      binaries in Aspectj <++= products in Compile in aspectJTracer,
      // weave this project's classes
      inputs in Aspectj <+= compiledClasses,
      products in Compile <<= products in Aspectj,
      products in Runtime <<= products in Compile
    )
  )

  lazy val frameworks = Project(id = "frameworks", base = file("."), settings = common_settings)
    .aggregate(core, baseDomain, aspectJTracer, instrumented)

  lazy val core = Project(id = "framework-core", base = file("framework-core"), settings = common_settings)

  lazy val baseDomain = Project(id = "framework-base-domain", base = file("framework-base-domain"), settings = common_settings)
    .dependsOn(core, aspectJTracer, instrumented)
}
Does anyone know how to fix this? I posted this on the sbt-aspectj GitHub page and am waiting for a response there as well, but I am in a bit of a hurry to fix this. Your help will be really appreciated.
Finally the problem is resolved. I had added the javaagent in Build.scala, but when running with sbt-one-jar it was not picking that jar up, so I manually provided the javaagent as the aspectjweaver jar file and it worked.
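For reference, "manually providing the javaagent" amounts to launching the jar with an explicit agent flag; a sketch with placeholder paths and version (not my actual ones):
java -javaagent:lib/aspectjweaver-1.8.5.jar -jar target/scala-2.11/myapp_2.11-1.3-SNAPSHOT-one-jar.jar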
However, it takes almost 3-4 minutes to start the jar file with the aspects, and sometimes even 15 minutes, due to the aspectjweaver. I am not sure whether this is a problem with aspectj or sbt-one-jar; I guess it's with one-jar.
Has anyone else faced the same issue? I don't see any activity on sbt-one-jar, so I'm asking it here.
The following test should pass, but it doesn't:
import javax.script.{ScriptEngine, ScriptEngineManager}
import org.scalatest.FunSuite

class EngineTest extends FunSuite {
  test("engine should not be null") {
    val manager: ScriptEngineManager = new ScriptEngineManager
    val engine: ScriptEngine = manager.getEngineByName("nashorn")
    assert(engine != null)
  }
}
manager.getEngineFactories() appears to be empty. Why? How do I initialize the context?
You need to explicitly pass a ClassLoader to the ScriptEngineManager constructor. If you don't, it uses Thread.currentThread().getContextClassLoader(), which is set to something unusual when running under SBT. We just pass in null in our code to get it to work. You can also pass in getClass.getClassLoader:
import javax.script.{ScriptEngine, ScriptEngineManager}
import org.scalatest.FunSuite

class EngineTest extends FunSuite {
  test("engine should not be null - null classloader") {
    val manager: ScriptEngineManager = new ScriptEngineManager(null)
    val engine: ScriptEngine = manager.getEngineByName("nashorn")
    assert(engine != null)
  }

  test("engine should not be null - getClass.getClassLoader classloader") {
    val manager: ScriptEngineManager = new ScriptEngineManager(getClass.getClassLoader)
    val engine: ScriptEngine = manager.getEngineByName("nashorn")
    assert(engine != null)
  }
}
Both of those tests pass for me:
[info] EngineTest:
[info] - engine should not be null - null classloader
[info] - engine should not be null - getClass.getClassLoader classloader
[info] Run completed in 186 milliseconds.
[info] Total number of tests run: 2
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 2, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
What versions are you using? This is sbt 0.13.
> console
[info] Starting scala interpreter...
[info]
Welcome to Scala version 2.11.0 (OpenJDK 64-Bit Server VM, Java 1.7.0_25).
Type in expressions to have them evaluated.
Type :help for more information.
scala> import javax.script._
import javax.script._
scala> new ScriptEngineManager().getEngineByName("scala")
res0: javax.script.ScriptEngine = scala.tools.nsc.interpreter.IMain@7078c799
scala> new ScriptEngineManager().getEngineByName("rhino")
res1: javax.script.ScriptEngine = com.sun.script.javascript.RhinoScriptEngine@5c854934
scala> new ScriptEngineManager().getEngineFactories
res2: java.util.List[javax.script.ScriptEngineFactory] = [com.sun.script.javascript.RhinoScriptEngineFactory@454ee4c0, scala.tools.nsc.interpreter.IMain$Factory@354e3bce]
Wait, you asked about test context --
Well, before I lost interest in decoding more sbt, adding to libraryDependencies:
"org.scala-lang" % "scala-compiler" % scalaVersion.value % "test",
enables locating the Scala script engine:
@Test def engines: Unit = {
  import javax.script._
  val all = new ScriptEngineManager().getEngineFactories
  Console println s"Found ${all.size}: $all"
  assert(all.size > 0)
}
No doubt there's a simple way to add runtime:full-classpath to test:full-classpath directly. Because it's the simple build tool, right?
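Presumably something like this, though I haven't verified it:
fullClasspath in Test ++= (fullClasspath in Runtime).value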
For Nashorn on Java 8, note the location:
> set fullClasspath in Test += Attributed.blank(file(s"${util.Properties.javaHome}/lib/ext/nashorn.jar"))
[info] Defining test:fullClasspath
[info] The new value will be used by test:console, test:executeTests and 5 others.
[info] Run `last` for details.
[info] Reapplying settings...
[info] Set current project to goofy (in build file:/home/apm/goofy/)
> test
Found 1: [jdk.nashorn.api.scripting.NashornScriptEngineFactory@7fa2239d]
[info] Passed: Total 10, Failed 0, Errors 0, Passed 10
Update: https://github.com/sbt/sbt/issues/1214
Also I guess it's still considered black art:
// Somehow required to get a js engine in tests (https://github.com/sbt/sbt/issues/1214)
fork in Test := true
The only thing I had to change was to use:
fork in Test := true
as mentioned above - from here: https://github.com/sbt/sbt/issues/1214
I have a Play application, and I need to ignore my functional tests when I compile and build, then later run only the integration tests.
This is my test code:
ApplicationSpec.scala
@RunWith(classOf[JUnitRunner])
class ApplicationSpec extends Specification {
  "Application" should {
    "send 404 on a bad request" in new WithApplication {
      route(FakeRequest(GET, "/boum")) must beNone
    }
  }
}
IntegrationSpec.scala
@RunWith(classOf[JUnitRunner])
class IntegrationSpec extends Specification {
  "Application" should {
    "work from within a browser" in {
      running(TestServer(9000), FIREFOX) { browser =>
        browser.goTo("http://localhost:9000/app")
        browser.pageSource must contain("Your new application is ready.")
      }
    }
  } section "integration"
}
The docs tell me I can use something like this from the command line:
play "test-only -- exclude integration"
The only problem is that this doesn't actually exclude any tests: my integration tests invoke Firefox and start running. What am I doing wrong? How can I exclude the integration tests and then later run them by themselves?
sbt's Testing documentation describes several ways to define separate test configurations.
For instance, the Additional test configurations with shared sources example could be used in this situation:
In this approach, the sources are compiled together using the same classpath and are packaged together. However, different tests are run depending on the configuration.
In a default Play Framework 2.2.2 application created with play new, add a file named project/Build.scala with this content:
import sbt._
import Keys._

object B extends Build {
  lazy val root =
    Project("root", file("."))
      .configs(FunTest)
      .settings(inConfig(FunTest)(Defaults.testTasks) : _*)
      .settings(
        libraryDependencies += specs,
        testOptions in Test := Seq(Tests.Filter(unitFilter)),
        testOptions in FunTest := Seq(Tests.Filter(itFilter))
      )

  def itFilter(name: String): Boolean = (name endsWith "IntegrationSpec")
  def unitFilter(name: String): Boolean = (name endsWith "Spec") && !itFilter(name)

  lazy val FunTest = config("fun") extend(Test)
  lazy val specs = "org.specs2" %% "specs2" % "2.0" % "test"
}
To run standard unit tests:
$ sbt test
Which produces:
[info] Loading project definition from /home/fernando/work/scratch/so23160453/project
[info] Set current project to so23160453 (in build file:/home/fernando/work/scratch/so23160453/)
[info] ApplicationSpec
[info] Application should
[info] + send 404 on a bad request
[info] + render the index page
[info] Total for specification ApplicationSpec
[info] Finished in 974 ms
[info] 2 examples, 0 failure, 0 error
[info] Passed: Total 2, Failed 0, Errors 0, Passed 2
[success] Total time: 3 s, completed Apr 22, 2014 12:42:37 PM
To run tests for the added configuration (here, "fun"), prefix it with the configuration name:
$ sbt fun:test
Which produces:
[info] Loading project definition from /home/fernando/work/scratch/so23160453/project
[info] Set current project to so23160453 (in build file:/home/fernando/work/scratch/so23160453/)
[info] IntegrationSpec
[info] Application should
[info] + work from within a browser
[info] Total for specification IntegrationSpec
[info] Finished in 0 ms
[info] 1 example, 0 failure, 0 error
[info] Passed: Total 1, Failed 0, Errors 0, Passed 1
[success] Total time: 6 s, completed Apr 22, 2014 12:43:17 PM
You can also test only a particular class, like this:
$ sbt "fun:testOnly IntegrationSpec"
See this example in GitHub.
I recently added a dependency on Specs2 to a project and noticed that some existing tests written with ScalaTest and Mockito failed. These tests passed again once Specs2 was removed. Why does this happen?
lazy val scalatestandspecscoexisting = Project(
  id = "scalatest-and-specs-coexisting",
  base = file("."),
  settings = Project.defaultSettings ++
    GraphPlugin.graphSettings ++
    Seq(
      name := "Scalatest-And-Specs-Coexisting",
      organization := "com.bifflabs",
      version := "0.1",
      scalaVersion := "2.9.2",
      // libraryDependencies ++= Seq(scalaTest, mockito) // Tests pass, no specs2
      libraryDependencies ++= Seq(scalaTest, specs2, mockito) // Tests fail
    )
)
The tests that failed all used Mockito, and all set up a mock method with two different parameters. One of the calls to the mock does not return the value it was set up with. The example below fails. A further requirement was that the parameter type must be a Function1 (or have an apply method).
import org.scalatest.FunSuite
import org.scalatest.mock.MockitoSugar
import org.mockito.Mockito.when

trait MockingBird {
  // behavior only reproduces when the input is a Function1 (Set extends Function1)
  def sing(input: Set[String]): String
}

class MockSuite extends FunSuite with MockitoSugar {
  val iWannaRock = Set("I wanna Rock")
  val rock = "Rock!"
  val wereNotGonnaTakeIt = Set("We're not gonna take it")
  val no = "No! We ain't gonna take it"

  test("A mock should match on parameter but isn't") {
    val mockMockingBird = mock[MockingBird]
    when(mockMockingBird.sing(iWannaRock)).thenReturn(rock)
    // appears to return this whenever any Set is passed to sing
    when(mockMockingBird.sing(wereNotGonnaTakeIt)).thenReturn(no)

    // succeeds because it was set up last
    assert(mockMockingBird.sing(wereNotGonnaTakeIt) === no)
    // fails because the mock returns "No! We ain't gonna take it"
    assert(mockMockingBird.sing(iWannaRock) === rock)
  }
}
Output:
[info] MockSuite:
[info] - A mock should match on parameter but isn't *** FAILED ***
[info] "[No! We ain't gonna take it]" did not equal "[Rock!]" (MockSuite.scala:38)
[error] Failed: : Total 1, Failed 1, Errors 0, Passed 0, Skipped 0
EDIT - according to Eric's comment below, this is a bug in Specs2 ≤ 1.12.2. Should be fixed in 1.12.3.
It turns out that Specs2 redefines some of the behavior in Mockito in order to get by-name parameters to match.
Eric answered my question:
"I don't like this, but that's the only way I found to match byname
parameters: http://bit.ly/UF9bVC . You might want that."
From the Specs2 documentation
Byname
Byname parameters can be verified but this will not work if the specs2
jar is not put first on the classpath, before the mockito jar. Indeed
specs2 redefines a Mockito class for intercepting method calls so that
byname parameters are properly handled.
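For context, a by-name parameter is one declared as => T, so the argument is evaluated at each use site rather than at the call; a hypothetical signature of the kind the quoted passage is about:
trait Notifier {
  // msg is by-name: evaluated lazily inside send, which is what plain Mockito
  // argument matching trips over and what specs2's redefined class handles
  def send(msg: => String): Unit
}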
In order to get my tests to pass again, I did the opposite of what was suggested in the specs2 documentation and added the Specs2 dependency after Mockito. I have not tried it, but I would expect by-name parameter matching to fail with this ordering.
lazy val scalatestandspecscoexisting = Project(
  id = "scalatest-and-specs-coexisting",
  base = file("."),
  settings = Project.defaultSettings ++
    GraphPlugin.graphSettings ++
    Seq(
      name := "Scalatest-And-Specs-Coexisting",
      organization := "com.bifflabs",
      version := "0.1",
      scalaVersion := "2.9.2",
      // libraryDependencies ++= Seq(scalaTest, mockito)         // Tests pass
      libraryDependencies ++= Seq(scalaTest, mockito, specs2)    // Tests pass
      // libraryDependencies ++= Seq(scalaTest, specs2, mockito) // Tests fail
    )
)
My tests now pass:
[info] MockSuite:
[info] - A mock should match on parameter but isn't
[info] Passed: : Total 1, Failed 0, Errors 0, Passed 1, Skipped 0