I am trying to use ScalaTest to test a class. This is the test I am trying to run:
@RunWith(classOf[JUnitRunner])
class CategorizationSpec extends FlatSpec with BeforeAndAfter with Matchers {

  var ss: SparkSession = _

  before {
    System.setProperty("hadoop.home.dir", "/opt/spark-2.0.0-bin-hadoop2.7")
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "localhost")
      .set("spark.sql.crossJoin.enabled", "true")
      .set("spark.executor.memory", "4g")
    Logger.getLogger("org").setLevel(Level.OFF)
    Logger.getLogger("akka").setLevel(Level.OFF)
    ss = SparkSession
      .builder()
      .master("local")
      .appName("categorization")
      .config(conf)
      .getOrCreate()
    ss.conf.set("spark.cassandra.connection.host", "localhost")
  }

  "DictionaryPerUser" should "be empty" in {
    val dpuc = new DictionaryPerUserController(ss)
    dpuc.truncate()
    dpuc.getDictionary shouldBe null
  }
}
but I get the following error:
[ERROR] /path/project/datasystem/CategorizationSpec.scala:3: error: object controllers is not a member of package path.project.datasystem
[ERROR] import path.project.datasystem.controllers.DictionaryPerUserController
But I have that class in /src/main/scala/path/project and the test class is in /src/test/scala/path/project
Do you have any idea what is wrong?
(Posted solution on behalf of the OP).
Solved; sorry, it was a problem in my pom!
Related
I am trying to test the behaviour of a class which consumes and processes DataFrames.
Following this previous question: How to write unit tests in Spark 2.0+?, I tried to use the loan pattern to run my tests in the following way:
I have a SparkSession provider trait:
/**
 * This trait allows using Spark in unit tests.
 * https://stackoverflow.com/questions/43729262/how-to-write-unit-tests-in-spark-2-0
 */
trait SparkSetup {

  def withSparkSession(testMethod: SparkSession => Any): Unit = {
    val conf = new SparkConf()
      .setMaster("local")
      .setAppName("Spark test")
    val sparkSession = SparkSession
      .builder()
      .config(conf)
      .enableHiveSupport()
      .getOrCreate()
    try {
      testMethod(sparkSession)
    }
    // finally sparkSession.stop()
  }
}
Which I use in my test class:
class InnerNormalizationStrategySpec
  extends WordSpec
  with Matchers
  with BeforeAndAfterAll
  with SparkSetup {

  ...

  "A correct contact message" should {
    "be normalized without errors" in withSparkSession { ss => {
      import ss.implicits._

      val df = ss.createDataFrame(
        ss.sparkContext.parallelize(Seq[Row](Row(validContact))),
        StructType(List(StructField("value", StringType, nullable = false))))

      val result = target.innerTransform(df)

      val collectedResult: Array[NormalizedContactHistoryMessage] = result
        .where(result.col("contact").isNotNull)
        .as[NormalizedContactHistoryMessage]
        .collect()

      collectedResult.isEmpty should be(false) // There should be something
      collectedResult.length should be(1) // There should be exactly 1 message...
      collectedResult.head.contact.isDefined should be(true) // ... of type contact.
    }}
  }

  ...
}
When I run my tests from IntelliJ, all tests written in this manner work (running the whole Spec class at once); however, running sbt test from the terminal makes all the tests fail.
I also thought it was caused by parallelism, so I added
concurrentRestrictions in Global += Tags.limit(Tags.Test, 1)
to my sbt settings, but it didn't work.
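For reference, the sbt settings that control test parallelism look like this (a sketch in sbt 0.13-style syntax; the Tags.limit line is the one I added, the other two lines are additional options, not something I tried):
// build.sbt
concurrentRestrictions in Global += Tags.limit(Tags.Test, 1) // at most one test task at a time
parallelExecution in Test := false                           // run test classes sequentially
fork in Test := true                                         // run tests in a forked JVM, isolating Spark state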
Here is the stack trace I receive: https://pastebin.com/LNTd3KGW
Any help?
Thanks
I have tried to write a transform method from DataFrame to DataFrame.
And I also want to test it with ScalaTest.
As you know, in Spark 2.x with Scala API, you can create SparkSession object as follows:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .config("spark.master", "local[2]")
  .getOrCreate()
This code works fine with unit tests.
But when I run this code with spark-submit, the cluster options do not take effect.
For example,
spark-submit --master yarn --deploy-mode client --num-executors 10 ...
does not create any executors.
I have found that the spark-submit arguments are applied when I remove the config("spark.master", "local[2]") part of the above code.
But without the master setting, the unit test code does not work.
I tried to split the spark (SparkSession) object creation into separate parts for tests and for main.
But there are so many code blocks that need spark, for example import spark.implicits._ and spark.createDataFrame(rdd, schema).
Is there any best practice for writing code that creates the spark object both for tests and for spark-submit?
One way is to create a trait which provides the SparkContext/SparkSession, and use that in your test cases, like so:
trait SparkTestContext {

  private val master = "local[*]"
  private val appName = "testing"

  System.setProperty("hadoop.home.dir", "c:\\winutils\\")

  private val conf: SparkConf = new SparkConf()
    .setMaster(master)
    .setAppName(appName)
    .set("spark.driver.allowMultipleContexts", "false")
    .set("spark.ui.enabled", "false")

  val ss: SparkSession = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
  val sc: SparkContext = ss.sparkContext
  val sqlContext: SQLContext = ss.sqlContext
}
And your test class header then looks like this for example:
class TestWithSparkTest extends BaseSpec with SparkTestContext with Matchers{
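A small self-contained sketch of such a test (here FlatSpec stands in for BaseSpec, and the RDD assertion is only illustrative):
import org.scalatest.{FlatSpec, Matchers}

class TestWithSparkTest extends FlatSpec with SparkTestContext with Matchers {
  "the shared SparkContext" should "count a small RDD" in {
    val rdd = sc.parallelize(Seq(1, 2, 3))
    rdd.count() shouldBe 3
  }
}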
I made a version where Spark will close correctly after tests.
import org.apache.spark.sql.{SQLContext, SparkSession}
import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.{BeforeAndAfterAll, FunSuite, Matchers}

trait SparkTest extends FunSuite with BeforeAndAfterAll with Matchers {

  var ss: SparkSession = _
  var sc: SparkContext = _
  var sqlContext: SQLContext = _

  override def beforeAll(): Unit = {
    val master = "local[*]"
    val appName = "MyApp"
    val conf: SparkConf = new SparkConf()
      .setMaster(master)
      .setAppName(appName)
      .set("spark.driver.allowMultipleContexts", "false")
      .set("spark.ui.enabled", "false")
    ss = SparkSession.builder().config(conf).getOrCreate()
    sc = ss.sparkContext
    sqlContext = ss.sqlContext
    super.beforeAll()
  }

  override def afterAll(): Unit = {
    sc.stop()
    super.afterAll()
  }
}
The spark-submit parameter --master yarn sets YARN as the master.
This conflicts with master("x") in your code, even if you use something like master("yarn").
If you want to use import sparkSession.implicits._ for toDF, toDS, or other functions,
you can just use a local sparkSession variable created like below:
val spark = SparkSession.builder().appName("YourName").getOrCreate()
that is, without setting master("x") in the code; let spark-submit --master yarn supply the master when you are not running on a local machine.
My advice: do not use a global sparkSession in your code. That may cause errors or exceptions.
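A minimal sketch of a main entry point written this way (the object name and job logic are illustrative):
import org.apache.spark.sql.SparkSession

object MyJob {
  def main(args: Array[String]): Unit = {
    // No .master(...) here: spark-submit --master yarn (or local[*]) supplies it.
    val spark = SparkSession.builder().appName("YourName").getOrCreate()
    import spark.implicits._ // toDF, toDS, etc. become available here

    // ... job logic using spark ...

    spark.stop()
  }
}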
Hope this helps.
Good luck!
How about defining an object in which a method creates a singleton instance of SparkSession, like MySparkSession.get(), and passing it as a parameter to each of your unit tests?
In your main method, you can create a separate SparkSession instance, which can have different configurations.
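A sketch of what that could look like (MySparkSession.get() is the name suggested above; the local master and app name are assumptions for tests, and your main method can build its own session):
import org.apache.spark.sql.SparkSession

object MySparkSession {
  // Lazily created singleton shared by the unit tests.
  private lazy val instance: SparkSession =
    SparkSession.builder()
      .master("local[*]")
      .appName("unit-tests")
      .getOrCreate()

  def get(): SparkSession = instance
}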
I'm struggling to write a basic unit test for creation of a data frame, using the example text file provided with Spark, as follows.
class dataLoadTest extends FunSuite with Matchers with BeforeAndAfterEach {

  private val master = "local[*]"
  private val appName = "data_load_testing"

  private var spark: SparkSession = _

  override def beforeEach() {
    spark = new SparkSession.Builder().appName(appName).getOrCreate()
  }

  import spark.implicits._

  case class Person(name: String, age: Int)

  val df = spark.sparkContext
    .textFile("/Applications/spark-2.2.0-bin-hadoop2.7/examples/src/main/resources/people.txt")
    .map(_.split(","))
    .map(attributes => Person(attributes(0), attributes(1).trim.toInt))
    .toDF()

  test("Creating dataframe should produce data from of correct size") {
    assert(df.count() == 3)
    assert(df.take(1).equals(Array("Michael", 29)))
  }

  override def afterEach(): Unit = {
    spark.stop()
  }
}
I know that the code itself works (from spark.implicits._ through .toDF()) because I have verified this in the Spark Scala shell, but inside the test class I'm getting lots of errors; the IDE does not recognise import spark.implicits._ or toDF(), and therefore the tests don't run.
I am using SparkSession which automatically creates SparkConf, SparkContext and SQLContext under the hood.
My code simply uses the example code from the Spark repo.
Any ideas why this is not working? Thanks!
NB. I have already looked at the Spark unit test questions on StackOverflow, like this one: How to write unit tests in Spark 2.0+?
I have used this to write the test but I'm still getting the errors.
I'm using Scala 2.11.8, and Spark 2.2.0 with SBT and IntelliJ. These dependencies are correctly included within the SBT build file. The errors on running the tests are:
Error:(29, 10) value toDF is not a member of org.apache.spark.rdd.RDD[dataLoadTest.this.Person]
possible cause: maybe a semicolon is missing before `value toDF'?
.toDF()
Error:(20, 20) stable identifier required, but dataLoadTest.this.spark.implicits found.
import spark.implicits._
IntelliJ won't recognise import spark.implicits._ or the .toDF() method.
I have imported:
import org.apache.spark.sql.SparkSession
import org.scalatest.{BeforeAndAfterEach, FlatSpec, FunSuite, Matchers}
You need to assign sqlContext to a val for implicits to work. Since your sparkSession is a var, implicits won't work with it.
So you need to do:
val sQLContext = spark.sqlContext
import sQLContext.implicits._
Moreover, you can restructure your tests so that your test class looks like the following:
class dataLoadTest extends FunSuite with Matchers with BeforeAndAfterEach {

  private val master = "local[*]"
  private val appName = "data_load_testing"

  var spark: SparkSession = _

  override def beforeEach() {
    spark = new SparkSession.Builder().appName(appName).master(master).getOrCreate()
  }

  test("Creating dataframe should produce data from of correct size") {
    val sQLContext = spark.sqlContext
    import sQLContext.implicits._

    val df = spark.sparkContext
      .textFile("/Applications/spark-2.2.0-bin-hadoop2.7/examples/src/main/resources/people.txt")
      .map(_.split(","))
      .map(attributes => Person(attributes(0), attributes(1).trim.toInt))
      .toDF()

    assert(df.count() == 3)
    assert(df.take(1)(0)(0).equals("Michael"))
  }

  override def afterEach() {
    spark.stop()
  }
}

case class Person(name: String, age: Int)
There are many libraries for unit testing Spark; one of the most widely used is
spark-testing-base, by Holden Karau.
This library provides sc as the SparkContext. Below is a simple example:
class TestSharedSparkContext extends FunSuite with SharedSparkContext {

  val expectedResult = List(("a", 3), ("b", 2), ("c", 4))

  test("Word counts should be equal to expected") {
    verifyWordCount(Seq("c a a b a c b c c"))
  }

  def verifyWordCount(seq: Seq[String]): Unit = {
    assertResult(expectedResult)(new WordCount().transform(sc.makeRDD(seq)).collect().toList)
  }
}
Here, everything is prepared with sc as a SparkContext.
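For reference, spark-testing-base is added as a test dependency in sbt; the version below is only illustrative and should be matched to your Spark version:
libraryDependencies += "com.holdenkarau" %% "spark-testing-base" % "2.2.0_0.8.0" % "test"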
Another approach is to create a TestSparkWrapper trait and use it for multiple test cases, as below:
import org.apache.spark.sql.SparkSession

trait TestSparkWrapper {
  lazy val sparkSession: SparkSession =
    SparkSession.builder().master("local").appName("spark test example").getOrCreate()
}
And use this TestSparkWrapper for all the tests with ScalaTest, combining it with BeforeAndAfterAll and BeforeAndAfterEach as needed.
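For example, a test mixing in the trait could look like this (a sketch; the class name and assertion are illustrative):
import org.scalatest.{BeforeAndAfterAll, FunSuite, Matchers}

class MyTransformSpec extends FunSuite with BeforeAndAfterAll with Matchers with TestSparkWrapper {

  test("a small Dataset has the expected size") {
    import sparkSession.implicits._
    val ds = Seq(1, 2, 3).toDS()
    ds.count() shouldBe 3
  }

  override def afterAll(): Unit = {
    sparkSession.stop()
    super.afterAll()
  }
}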
Hope this helps!
I am writing Test Cases for Spark using ScalaTest.
import org.apache.spark.sql.SparkSession
import org.scalatest.{BeforeAndAfterAll, FlatSpec}

class ClassNameSpec extends FlatSpec with BeforeAndAfterAll {

  var spark: SparkSession = _
  var className: ClassName = _

  override def beforeAll(): Unit = {
    spark = SparkSession.builder().master("local").appName("class-name-test").getOrCreate()
    className = new ClassName(spark)
  }

  it should "return data" in {
    import spark.implicits._

    val result = className.getData(input)

    assert(result.count() == 3)
  }

  override def afterAll(): Unit = {
    spark.stop()
  }
}
When I try to compile the test suite it gives me following error:
stable identifier required, but ClassNameSpec.this.spark.implicits found.
[error] import spark.implicits._
[error] ^
[error] one error found
[error] (test:compileIncremental) Compilation failed
I am not able to understand why I cannot import spark.implicits._ in a test suite.
Any help is appreciated !
To do an import you need a "stable identifier" as the error message says. This means that you need to have a val, not a var.
Since you defined spark as a var, Scala can't import from it correctly.
To solve this you can simply do something like:
val spark2 = spark
import spark2.implicits._
or instead change the original var to val, e.g.:
lazy val spark: SparkSession = SparkSession.builder().master("local").appName("class-name-test").getOrCreate()
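Applied to the suite above, the first workaround looks like this (a sketch; input is whatever your test already provides):
it should "return data" in {
  val spark2 = spark // a val, and therefore a stable identifier
  import spark2.implicits._

  val result = className.getData(input)

  assert(result.count() == 3)
}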
I am trying to refactor a ScalaTest FunSuite test to avoid boilerplate code for initializing and destroying the Spark session.
The problem is that I need to import implicit functions, but with the before/after approach only variables (var fields) can be used, while an import requires a value (val field).
The idea is to have a new, clean Spark session for every test execution.
I tried to do something like this:
import org.apache.spark.SparkContext
import org.apache.spark.sql.{SQLContext, SparkSession}
import org.scalatest.{BeforeAndAfter, FunSuite}

object SimpleWithBeforeTest extends FunSuite with BeforeAndAfter {

  var spark: SparkSession = _
  var sc: SparkContext = _
  implicit var sqlContext: SQLContext = _

  before {
    spark = SparkSession.builder
      .master("local")
      .appName("Spark session for testing")
      .getOrCreate()
    sc = spark.sparkContext
    sqlContext = spark.sqlContext
  }

  after {
    spark.sparkContext.stop()
  }

  test("Import implicits inside the test 1") {
    import sqlContext.implicits._
    // Here other stuff
  }

  test("Import implicits inside the test 2") {
    import sqlContext.implicits._
    // Here other stuff
  }
}
But on the line import sqlContext.implicits._ I get an error:
Cannot resolve symbol sqlContext
How can I resolve this problem, or how should I implement the test class?
You can also use spark-testing-base, which pretty much handles all the boilerplate code.
Here is a blog post by the creator, explaining how to use it.
And here is a simple example from their wiki:
class test extends FunSuite with DatasetSuiteBase {

  test("simple test") {
    val sqlCtx = sqlContext
    import sqlCtx.implicits._

    val input1 = sc.parallelize(List(1, 2, 3)).toDS
    assertDatasetEquals(input1, input1) // equal

    val input2 = sc.parallelize(List(4, 5, 6)).toDS
    intercept[org.scalatest.exceptions.TestFailedException] {
      assertDatasetEquals(input1, input2) // not equal
    }
  }
}
Define a new immutable variable for the Spark session and assign the var to it before importing implicits.
class MyCassTest extends FlatSpec with BeforeAndAfter {

  var spark: SparkSession = _

  before {
    val sparkConf: SparkConf = new SparkConf()
    spark = SparkSession
      .builder()
      .config(sparkConf)
      .master("local[*]")
      .getOrCreate()
  }

  after {
    spark.stop()
  }

  "myFunction()" should "return 1.0 blab bla bla" in {
    val sc = spark
    import sc.implicits._

    // assert ...
  }
}