Scala mocking the behaviour

When I mock a method on an object, the actual behaviour is still executed.
The expected output should be 10, since I stubbed Calculate.add to return 10.
trait Base {
  def add(param1: Int, param2: Int): Int
  def fetch(): Any
  def compute(): Any
}

object Calculate extends Base {
  def add(param1: Int, param2: Int): Int = {
    fetch()
    compute()
    param1 + param2 // the real result (assumed; the original snippet did not return an Int)
  }
  def fetch(): Any = {
    // making some api1 call
  }
  def compute(): Any = {
    // making some api2 call
  }
}
object Engine {
  def execute(): Int = {
    Calculate.add(10, 20)
  }
}
Test
import org.scalamock.scalatest.MockFactory
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers

class TestEngine extends AnyFlatSpec with Matchers with MockFactory {
  it should "compute" in {
    val res1: Int = 10
    val calculate: Base = stub[Base]
    (calculate.add _).when(10, 20).returns(res1)
    // Expected output is 10 (res1): add is stubbed to return 10, so the real
    // Calculate.fetch / Calculate.compute behaviour should not be called.
    Engine.execute()
  }
}

Though a Base stub is created and its add method is stubbed, the actual add is still executed: Engine.execute references the Calculate object directly, and since Calculate is itself a singleton object, the real code runs.
object Engine {
  def execute(): Int = {
    Calculate.add(10, 20)
  }
}
A simple way to make Calculate mockable in Engine.execute is to pass a Base instance as a method parameter instead of referring to the Calculate object directly:
object Engine {
  def execute(base: Base): Int = {
    base.add(10, 20)
  }
}
class TestEngine extends AnyFlatSpec with Matchers with MockFactory {
  it should "compute" in {
    val res1: Int = 10
    val calculate: Base = stub[Base]
    (calculate.add _).when(10, 20).returns(res1)
    // pass the stub (the "mocked calculate") into Engine.execute
    Engine.execute(calculate) shouldBe res1
  }
}
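If you also want to assert that the stub was called with the expected arguments, ScalaMock stubs support verification after the call; a minimal sketch:
(calculate.add _).verify(10, 20) // fails the test if add(10, 20) was never invoked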

Related

Is there any way to rewrite the below code using Scala value class or other concept?

I need to write two functions to get the output format and the output index for file conversion. As part of this, I wrote a TransformSettings class for these methods and set the default values. In the Transformer class, I create a new TransformSettings object to get the default values for each job run. I also have another class called ParquetTransformer that extends Transformer, where I want to change these default values. So I implemented it like below.
class TransformSettings {
  def getOuputFormat: String = {
    "orc"
  }
  def getOuputIndex(table: AWSGlueDDL.Table): Option[String] = {
    table.StorageDescriptor.SerdeInfo.Parameters.get("orc.column.index.access")
  }
}
class Transformer {
  def getTransformSettings: TransformSettings = {
    new TransformSettings
  }
  def posttransform(table: AWSGlueDDL.Table): DataFrame = {
    val indexAccess = getTransformSettings.getOuputIndex(table)
    ???
  }
}
class ParquetTransformer extends Transformer {
  override def getTransformSettings: TransformSettings = {
    new TransformSettings {
      override def getOuputFormat: String = {
        "parquet"
      }
      override def getOuputIndex(table: AWSGlueDDL.Table): Option[String] = {
        table.StorageDescriptor.SerdeInfo.Parameters.get("parquet.column.index.access")
      }
    }
  }
}
Is there a way to avoid creating a brand new TransformSettings object in the Transformer class every time this is called?
Also, is there a way to rewrite the code using a Scala value class?
As @Dima proposed in the comments, try to make TransformSettings a field / constructor parameter (a val) of the Transformer class and instantiate it outside:
class TransformSettings {
  def getOuputFormat: String = {
    "orc"
  }
  def getOuputIndex(table: AWSGlueDDL.Table): Option[String] = {
    table.StorageDescriptor.SerdeInfo.Parameters.get("orc.column.index.access")
  }
}
class Transformer(val transformSettings: TransformSettings) {
  def posttransform(table: AWSGlueDDL.Table): DataFrame = {
    val indexAccess = transformSettings.getOuputIndex(table)
    ???
  }
}
val parquetTransformSettings = new TransformSettings {
  override def getOuputFormat: String = {
    "parquet"
  }
  override def getOuputIndex(table: AWSGlueDDL.Table): Option[String] = {
    table.StorageDescriptor.SerdeInfo.Parameters.get("parquet.column.index.access")
  }
}
class ParquetTransformer extends Transformer(parquetTransformSettings)
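A quick usage sketch (hypothetical, assuming an AWSGlueDDL.Table value named table as in the original post):
// the single, shared settings instance is reused on every call
val transformer = new ParquetTransformer
val format = transformer.transformSettings.getOuputFormat // "parquet"
val index = transformer.transformSettings.getOuputIndex(table)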
You don't seem to need value classes (... extends AnyVal) here. They are about avoiding boxing, not about life-cycle management. In any case, TransformSettings and Transformer can't be value classes because they are not final (you're extending them in class ParquetTransformer extends Transformer... and in new TransformSettings { ... }). By the way, value classes have many limitations:
https://failex.blogspot.com/2017/04/the-high-cost-of-anyval-subclasses.html
https://github.com/scala/bug/issues/12271
Besides value classes, there is the scala-newtype library in Scala 2, and there are opaque types in Scala 3.
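For illustration, a minimal Scala 3 opaque-type sketch (the OutputFormat name is hypothetical, not from the original post):
object Formats:
  // At compile time OutputFormat is a distinct type from String;
  // at runtime it is erased to a plain String, so there is no boxing.
  opaque type OutputFormat = String

  object OutputFormat:
    def apply(s: String): OutputFormat = s

  extension (f: OutputFormat)
    def asString: String = f

// Usage: Formats.OutputFormat("orc") is not interchangeable with a raw String.
val orc = Formats.OutputFormat("orc")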

Mocking a Scala object called by another object

I am trying to write a unit test for a function under Object1.
object Object1 {
  def main(sysArgs: Array[String]): Unit = {
    val inputDF: DataFrame = UtilObject.getInput()
  }
}

object UtilObject {
  def getInput(): DataFrame = {
    ...
  }
}
To write the unit test, I am using MockitoSugar.
"object1Main" should "should make correct calls" in {
val inputArgs = Array("abc")
val util = mock[UtilObject.type]
when(util.getInput().thenReturn(inputData))
Object1.main(inputArgs)
}
While running the test, it doesn't use the util mock and just executes the real getInput() function.
I think I am missing some sort of injection here. Any ideas?
Thanks in advance!
Conceptually speaking, mocking Scala objects should be impossible: an object in Scala is a pure singleton, meaning there can only be one member of that type at any time.
However, mockito-scala can mock Scala objects via reflection. I'll use a result type of String instead of a DataFrame, but the idea is the same:
object UtilObject {
  def getInput(): String = {
    // ...
    "done"
  }
}

object Object1 {
  def main(sysArgs: Array[String]): String = {
    val inputDF: String = UtilObject.getInput()
    inputDF
  }
}
// in test file:
"object1Main" should {
  "should make correct calls" in {
    val inputArgs = Array("abc")
    withObjectMocked[UtilObject.type] {
      UtilObject.getInput() returns "mocked!"
      Object1.main(inputArgs) shouldBe "mocked!"
    }
    Object1.main(inputArgs) shouldBe "done"
  }
}
This mocks the singleton's method only inside the block of withObjectMocked.
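Note that withObjectMocked comes from mockito-scala and relies on Mockito's inline mock maker; a sketch of the likely build setup (artifact version is an assumption, check for the latest):
// build.sbt
libraryDependencies += "org.mockito" %% "mockito-scala" % "1.17.30" % Test
It also typically needs a src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker resource file containing the single line mock-maker-inline.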
Such powerful techniques tend to be overused or misused, so I don't generally recommend them unless the design cannot be refactored.
Luckily, yours can be: the easiest way is to use dependency injection with a class or a function. For DI with a class, you need to convert the object being mocked into a class:
class UtilObject {
  def getInput(): String = {
    // ...
    "done"
  }
}

object Object1 {
  def main(sysArgs: Array[String], ut: UtilObject): String = {
    val inputDF: String = ut.getInput()
    inputDF
  }
}
// in test file:
"object1Main" should {
  "should make correct calls" in {
    val inputArgs = Array("abc")
    val util = mock[UtilObject]
    when(util.getInput()).thenReturn("mocked!")
    Object1.main(inputArgs, util) shouldBe "mocked!"
  }
}
For DI with a function, you need to lift the method you want to mock into a function:
object UtilObject {
  def getInput(): String = {
    // ...
    "done"
  }
}

object Object1 {
  def main(sysArgs: Array[String], f: () => String = UtilObject.getInput): String = {
    val inputDF: String = f()
    inputDF
  }
}
// in test file:
"object1Main" should {
  "should make correct calls" in {
    val inputArgs = Array("abc")
    val f = mock[() => String]
    when(f()).thenReturn("mocked!")
    Object1.main(inputArgs, f) shouldBe "mocked!"
  }
}
Since the function takes no arguments, you could also convert it into a by-name parameter.
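A sketch of what that by-name variant might look like (my assumption of the exercise; note a by-name parameter cannot be mocked directly, but a test can simply pass a value):
object Object1 {
  // `input` is only evaluated where it is referenced in the body
  def main(sysArgs: Array[String], input: => String = UtilObject.getInput()): String = {
    val inputDF: String = input
    inputDF
  }
}
// in test file:
Object1.main(Array("abc"), "mocked!") shouldBe "mocked!"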
Lastly, another way is to create a trait with the method you want to mock and extend it with the object. But now Object1 needs to be a class and hold a reference to the object being mocked:
object UtilObject extends Utils {
  def getInput(): String = {
    // ...
    "done"
  }
}

trait Utils {
  def getInput(): String
}

class Object1 {
  val uo: Utils = UtilObject
  def main(sysArgs: Array[String]): String = {
    val inputDF: String = uo.getInput()
    inputDF
  }
}
// in test file:
"object1Main" should {
  "should make correct calls" in {
    val classUnderTest = new Object1 {
      override val uo = mock[Utils]
    }
    val inputArgs = Array("abc")
    when(classUnderTest.uo.getInput()).thenReturn("mocked!")
    classUnderTest.main(inputArgs) shouldBe "mocked!"
  }
}
As you can see, there are a few ways to go, and none of them is inherently wrong. It mostly depends on your requirements (e.g. you can't afford to add a dependency just for one unit test), needs (e.g. does the object under test really need to be an object, or can it be a class?), guidelines (e.g. your team has decided to avoid powerful reflection-based testing frameworks and to use DI as much as possible instead), and personal preference.

Is it possible to extend a method within an extended scala class?

Assume I have the following Scala classes. Is it possible to extend a function in SecondClass and add more code to it (possibly chained to another function within the function I'd like to extend)?
package io.gatling.http.check

class FirstClass {
  def login() = {
    exec(http("Login")
      .post("anonymous/login")
      .body(ElFileBody("rest/UserAnonymousLogin.json")).asJson)
  }
}
I would like to extend the login function with .check(status is 200):
class SecondClass extends FirstClass {
  def login() = {
    .check(status is 200)
  }
}
Is that possible?
The syntax you're looking for is:
class X {
  def go(a: Int) = ???
}

class Y extends X {
  override def go(a: Int) = {
    val u = super.go(a)
    // do things with `u` and return whatever
  }
}
You'll need to do a little refactoring to get your code into this shape. I envisage:
class FirstClass {
  def body = ElFileBody("rest/UserAnonymousLogin.json")
  // stuff that calls `body`
}

class SecondClass extends FirstClass {
  override def body = super.body.check(status is 200)
  // no need to redefine stuff as it's inherited from `FirstClass`
}
Given that the check method accepts a variable number of HttpCheck arguments, that is, HttpCheck*:
def check(checks: HttpCheck*): HttpRequestBuilder
consider refactoring FirstClass to
class FirstClass {
  def login(checks: HttpCheck*) = {
    exec(http("Login")
      .post("anonymous/login")
      .body(ElFileBody("rest/UserAnonymousLogin.json")).asJson
      .check(checks: _*)
    )
  }
}
Note how by default we pass no checks when calling (new FirstClass).login().
Now derived classes can pass in the checks to be performed, like so:
class SecondClass extends FirstClass {
  def loginWithStatusCheck() = {
    super.login(status is 200)
  }
}
Note we are not actually overriding FirstClass.login here.
Another approach, instead of overriding, is functional composition: for example, we can break the problem down into smaller functions
val buildClient: String => Http = http(_)
val buildPostRequest: Http => HttpRequestBuilder = _.post("anonymous/login").body(ElFileBody("rest/UserAnonymousLogin.json")).asJson
val checkStatus: HttpRequestBuilder => HttpRequestBuilder = _.check(status is 200)
and then compose them
exec((buildClient andThen buildPostRequest andThen checkStatus)("Login"))
Now we can simply add further steps to the composition instead of worrying about class hierarchies and overriding.
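For instance, a further step can be appended to the pipeline; a sketch assuming Gatling's bodyString check is available in scope:
// hypothetical extra step: also require a non-empty response body
val checkBody: HttpRequestBuilder => HttpRequestBuilder = _.check(bodyString.exists)
exec((buildClient andThen buildPostRequest andThen checkStatus andThen checkBody)("Login"))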

Access Spark broadcast variable in different classes

I am broadcasting a value in a Spark Streaming application, but I am not sure how to access that variable in a class other than the one where it was broadcast.
My code looks as follows:
object AppMain {
  def main(args: Array[String]) {
    //...
    val broadcastA = sc.broadcast(a)
    //..
    lines.foreachRDD(rdd => {
      val obj = AppObject1
      rdd.filter(p => obj.apply(p))
      rdd.count
    })
  }
}
object AppObject1 {
  def apply(str: String): Boolean = {
    AnotherObject.process(str)
  }
}
object AnotherObject {
  // I want to use the broadcast variable in this object
  val B = broadcastA.value // compilation error here
  def process(str: String): Boolean = {
    // need to use B inside this method
  }
}
Can anyone suggest how to access broadcast variable in this case?
Ignoring possible serialization issues, there is nothing particularly Spark-specific here. If you want to use some object, it has to be available in the current scope, and you can achieve this the same way as usual:
you can define your helpers in a scope where broadcast is already defined:
{
  ...
  val x = sc.broadcast(1)
  object Foo {
    def foo = x.value
  }
  ...
}
you can use it as a constructor argument:
case class Foo(x: org.apache.spark.broadcast.Broadcast[Int]) {
  def foo = x.value
}
...
Foo(sc.broadcast(1)).foo
or pass it as a method argument:
case class Foo() {
  def foo(x: org.apache.spark.broadcast.Broadcast[Int]) = x.value
}
...
Foo().foo(sc.broadcast(1))
or even mix in your helpers like this:
trait Foo {
  val x: org.apache.spark.broadcast.Broadcast[Int]
  def foo = x.value
}

object Main extends Foo {
  val sc = new SparkContext("local", "test", new SparkConf())
  val x = sc.broadcast(1)
  def main(args: Array[String]) {
    sc.parallelize(Seq(None)).map(_ => foo).first
    sc.stop
  }
}
Just a short take on the performance considerations that were introduced earlier.
The options proposed by zero233 are indeed a very elegant way of doing this kind of thing in Scala. At the same time, it is important to understand the implications of using certain patterns in a distributed system.
Using the mixin approach, or any logic that relies on enclosing-class state, is not the best idea. Whenever you use the state of an enclosing class within a lambda, Spark has to serialize the outer object. This is not always a problem, but you are better off writing safer code than one day accidentally blowing up the whole cluster.
Being aware of this, I would personally go for explicit argument passing to the methods, as this does not force serialization of the outer class (the method-argument approach).
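To illustrate the pitfall with a minimal sketch (hypothetical names, not code from the answers above):
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

class Pipeline(sc: SparkContext) {
  val suffix = "!" // instance state

  // Referencing `suffix` in the lambda captures `this`, so Spark
  // must serialize the entire Pipeline instance with the task.
  def bad(rdd: RDD[String]): RDD[String] = rdd.map(_ + suffix)

  // Copying the field into a local val first means the closure
  // captures only that local value, not the enclosing instance.
  def good(rdd: RDD[String]): RDD[String] = {
    val s = suffix
    rdd.map(_ + s)
  }
}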
Alternatively, you can use classes and pass the broadcast variable to them. Your pseudocode would look like this:
object AppMain {
  def main(args: Array[String]) {
    //...
    val broadcastA = sc.broadcast(a)
    //..
    lines.foreachRDD(rdd => {
      val obj = new AppObject1(broadcastA)
      rdd.filter(p => obj.apply(p))
      rdd.count
    })
  }
}
class AppObject1(bc: Broadcast[String]) {
  val anotherObject = new AnotherObject(bc)
  def apply(str: String): Boolean = {
    anotherObject.process(str)
  }
}

class AnotherObject(bc: Broadcast[String]) {
  // the broadcast variable can now be used in this class
  def process(str: String): Boolean = {
    val a = bc.value // use the broadcast value inside this method
    true
  }
}

Do I have to avoid using objects if I want to unit-test them?

Say I'm using some JSON library which provides an object Json:
object Json {
  def unapply[T](jsonStr: String)(implicit converter: Converter[T]): T = { ... }
}
I can use it in my Scala code like this:
class MyLoginController {
  def login(request: Request) = {
    val loginInfo = Json.unapply[LoginInfo](request.body)
    // check(loginInfo.email)
    // check(loginInfo.password)
  }
}
It works perfectly, but I soon found I can't test it easily with mocks: I can't find a way to mock Json.unapply.
So I have to change my code to provide a trait and use dependency injection:
trait JsonParsable {
  def parse[T](jsonStr: String)(implicit converter: Converter[T]): T
}

object JsonParser extends JsonParsable {
  def parse[T](jsonStr: String)(implicit converter: Converter[T]): T = Json.unapply(jsonStr)
}

class MyLoginController(jsonParser: JsonParsable) {
  def login(request: Request) = {
    val loginInfo = jsonParser.parse[LoginInfo](request.body)
    // check(loginInfo.email)
    // check(loginInfo.password)
  }
}
And when I write a unit test, I mock a JsonParsable for the MyLoginController:
val fakeParser = mock[JsonParsable]
// ...
val controller = new MyLoginController(fakeParser)
// ...
My question is: do I have to avoid using objects like this, just in order to make the code testable?
I find the previous code simple and easy, but the latter one more complex :(