ScalaMock: Can't handle methods with more than 22 parameters (yet) - scala

ScalaMock rejects my mocking attempt, saying that it doesn't yet support more than 22 methods.
The reason is that there are more than 22 methods, all together, in the class I am trying to mock (2 are mine, 20+ are mixed in from Akka JSON support).
Is there any way to get around this limitation that doesn't involve rethinking the mixin part?
I used it this way, with ScalaTest 3.0.2:
override val apiClient: ApiClient = mock[ApiClient]
(apiClient.getById _).when(15538).returns("data")
Thank you !

I assume you don't actually want to test those JSON and other mixin functions, so I would suggest creating an abstract trait that defines your new, testable signatures and mixing it into your new class. This way you wouldn't need to change your design, and your clients of this ApiClient class could even decouple fully by using the trait type.
trait MyFunctionality {
  def foo(): Unit
  def somethingElse(i: Int): Int
}

class ApiClient extends Baseclass with Stuff with MoreStuff with MyFunctionality {
  // ...
}
then
val m = mock[MyFunctionality]
(m.foo _).expects().once()
// etc
That way you additionally guard against any code in your class (or base class) constructors being run at unit-test time.
Hope that helps.
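As a sketch of why the trait extraction helps (the names come from the answer above): once the testable surface lives in MyFunctionality, a test can substitute any implementation, a ScalaMock mock or a plain hand-written stub alike, and the 22-method limit never applies because only this small trait is faked. A minimal library-free illustration:

```scala
trait MyFunctionality {
  def foo(): Unit
  def somethingElse(i: Int): Int
}

// A hand-rolled stub: records interactions itself and returns canned
// answers, with no mocking framework involved at all.
class StubFunctionality extends MyFunctionality {
  var fooCalls = 0
  def foo(): Unit = fooCalls += 1
  def somethingElse(i: Int): Int = i * 2
}

val stub = new StubFunctionality
stub.foo()
assert(stub.fooCalls == 1)
assert(stub.somethingElse(21) == 42)
```

With ScalaMock the same trait would be faked via `mock[MyFunctionality]`, exactly as in the answer's snippet.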

I came to the same solution in the end, but I don't really like the noise it adds to my very concise class. "C'est la vie" :)

Related

Mocking case classes as properties of an object

Consider a case class:
case class configuredThing[A, B](param: String) {
  val ...
  def ...
}
Tests could only be partially written for configuredThing, which has some methods that make external calls to other services.
This case class is used elsewhere:
object actionObject {
  private lazy val thingA = configuredThing[A, B]("A")
  private lazy val thingB = configuredThing[C, D]("B")
  def ...
}
Here the types A, B, C, and D are actually concrete; they are not native Scala types but are defined in a third-party package we are leveraging to interface with some external services.
In trying to write tests for this object, the requirement is that no external calls be made, so as to test the logic in actionObject. This led to looking into how to mock out configuredThing within actionObject, to be able to make assertions on the different interactions with the configuredThing instances. However, it is unclear how to do this.
Looking at ScalaMock's documentation, this would need to be done with the generated mocks system, which is said to "rely on the ScalaMock compiler plugin". However, that plugin seems not to have been released since Scala 2.9.2, see here.
So, my question is this: how can this be tested?
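No answer was recorded for this question here, but one common way to make such code testable is to inject the dependencies instead of constructing them inside the object. This is a hedged sketch, not from the thread: ConfiguredThing, ThingFactory, ActionLogic, and fetch are all hypothetical names standing in for the asker's types.

```scala
// Hypothetical stand-in for the third-party-backed case class.
class ConfiguredThing[A, B](val param: String) {
  // in real code this would make an external call
  def fetch(): String = s"remote:${param}"
}

// A factory trait lets tests decide how instances are built.
trait ThingFactory {
  def make[A, B](param: String): ConfiguredThing[A, B]
}

// The object's logic moves into a class taking the factory as input,
// so no external-call-making instance is ever hard-wired in.
class ActionLogic(factory: ThingFactory) {
  private lazy val thingA = factory.make[Int, String]("A")
  def describe(): String = thingA.fetch()
}

// In tests, supply a factory that returns a stub:
val stubFactory = new ThingFactory {
  def make[A, B](param: String) = new ConfiguredThing[A, B](param) {
    override def fetch(): String = s"stub:${param}"
  }
}
assert(new ActionLogic(stubFactory).describe() == "stub:A")
```

This sidesteps mocking frameworks entirely: the production code passes a factory that builds real ConfiguredThing instances, while tests pass one that builds stubs.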

How to get name of all methods in a scala trait

I need all the method names of a Scala trait I've defined. I know this sounds like a trivial problem, but I could not find any answers relating to traits; they all revolved around classes.
To be specific, I need the names of all the abstract methods, but if I can get the names of all methods, abstract or not, that works too.
Say I have this trait A
trait A {
  def myDefinedInt: Int = 2
  def myAbstractString: String
}
I need a list of all the methods (or preferably just the abstract ones).
I'm relatively new to Scala, so although I understand classes and interfaces, traits are new to me.
Thanks in advance!
You can get all methods with getDeclaredMethods and then filter for the abstract ones:
import java.lang.reflect.Modifier

classOf[A]
  .getDeclaredMethods
  .filter(m => Modifier.isAbstract(m.getModifiers))
  .map(_.getName)
  .foreach(println)
It prints: myAbstractString.
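An alternative worth knowing (not in the original answer): Scala's own runtime reflection sees the trait's Scala-level signatures rather than the compiled Java methods, which can matter once erasure or synthetic members get involved. A sketch using scala-reflect, which ships with the Scala distribution:

```scala
import scala.reflect.runtime.universe._

trait A {
  def myDefinedInt: Int = 2
  def myAbstractString: String
}

// decls lists the trait's own members; isAbstract picks out the
// methods that have no implementation
val abstractNames = typeOf[A].decls.collect {
  case m if m.isMethod && m.isAbstract => m.name.toString
}.toList

assert(abstractNames == List("myAbstractString"))
```

Dropping the `m.isAbstract` guard yields all declared method names, abstract or not.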

Should I create an extra type when using ad-hoc polymorphism in scala?

I'm currently writing some code using ad-hoc polymorphism in Scala.
For example, one of my implicit classes enriches Stream[Boolean] with a method:
implicit class BooleanStreamOps(s: Stream[Boolean]) {
  def toByteStream: Stream[Byte] = ???
  // ...
}
Now so far this is good, but when the stream gets printed (for example by specs2), the whole stream is printed if it was already evaluated, spamming the console output.
Now I am trying to override toString, but I imagine I'll have to create an extra class for that which holds the Stream, like so:
final case class BooleanStream(unwrap: Stream[Boolean]) {
  override def toString: String = s"Stream(${unwrap.size})"
}

implicit class BooleanStreamOps(s: BooleanStream) {
  def toByteStream: Stream[Byte] = ???
}
This has the nice effect that users of the library who also use Stream[Boolean] will not accidentally call one of my methods.
The downside is that whenever I use the actual stream I have to call unwrap on the object, which clutters the code quite a bit.
I think my options are these:
Teach specs2 to use cats' Show[T] instead of toString. Is that possible? I've seen there is a specs2-cats package, but I can't seem to find it for Scala 2.12.
Do as noted above, use unwrap on every usage of the stream
Which of the above options do you think works best in my case?
Non-options:
Extending my own class from Stream: it's sealed for good reason.
Writing forwarding versions of all of Stream's methods that delegate to unwrap: too much effort IMO.
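A further option the question doesn't list (a hedged sketch, not from the thread): keep the wrapper, but put an implicit conversion back to Stream[Boolean] in its companion object, so most call sites never need to spell out .unwrap:

```scala
import scala.language.implicitConversions

final case class BooleanStream(unwrap: Stream[Boolean]) {
  // note: .size forces the stream, so this suits finite streams only
  override def toString: String = s"BooleanStream(${unwrap.size})"
}

object BooleanStream {
  // found automatically via the companion, no extra import needed
  implicit def toStream(bs: BooleanStream): Stream[Boolean] = bs.unwrap
}

val bs = BooleanStream(Stream(true, false, true))
assert(bs.toString == "BooleanStream(3)")

val plain: Stream[Boolean] = bs   // implicit unwrap at the use site
assert(plain.head)
```

The wrapper still keeps library users' Stream[Boolean] values from accidentally picking up the extension methods, since those are defined only on BooleanStream.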

Self-type traits with abstract classes

I'm facing a design issue that my poor Scala level cannot handle.
I have an abstract class, let's say:
abstract class AbstractJob {
  def run: Long
  implicit val someVal: String
}
Then, I have "platforms" on which jobs can be run.
I want to declare subclasses containing real "jobs" (no need to know what they actually do). Ideally, I want them to be declared this way:
class OneJob extends AbstractJob with Platform1 with Platform2 {
  override def run: Long = {
    // ... some code returning a Long result ...
  }
  override implicit val someVal = "foo"
}
Indeed, jobs can have multiple platforms on which they can be run.
However, the run method of a job must be launched by the platforms, so I tried to use a self-type:
trait Platform1 { self: AbstractJob =>
  def runOnPlatform = someFunctionRelatedToPlatform(run(someVal))
}
Unfortunately, when accessing the someVal value (in any job extending AbstractJob), I get a null value. I came to the conclusion that the self-type in a trait refers to the declared class (and not the actual class, which is a subtype in my case).
I tried to define another type, type Jobs >: AbstractJob, for the self-type in the trait, but that didn't work.
I have a limited number of backup solutions, but I want to use the "full power" of Scala and spare developers from writing a lot of redundant "plumbing" code (related to the platforms and the AbstractJob class in my example):
1. Have AbstractJob call runOnPlatform directly, with an "abstract" function to be implemented in concrete jobs. In this case my users would only write the business code they need, but I'm quite sure I can do better using Scala concepts; I feel like I'm just using Java (and generally OOP) concepts in Scala...
2. Let the users write a hell of a lot of redundant code... Obviously I'm avoiding this as much as possible!!
I hope I'm clear enough!
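For what it's worth, the null observed above is usually an initialization-order problem rather than a self-type problem: code runs before the subclass's val has been assigned. A hedged sketch of the common fixes, using the names from the question (someFunctionRelatedToPlatform is elided; runOnPlatform just calls run here):

```scala
abstract class AbstractJob {
  def run: Long
  implicit val someVal: String
}

trait Platform1 { self: AbstractJob =>
  // reading someVal inside a def (not at trait-construction time)
  // keeps us clear of the early-initialization window
  def runOnPlatform: Long = {
    require(someVal != null, "someVal not yet initialized")
    run
  }
}

class OneJob extends AbstractJob with Platform1 {
  override def run: Long = 42L
  // a lazy val (or a def) also dodges initialization-order NPEs
  override implicit lazy val someVal: String = "foo"
}

val job = new OneJob
assert(job.runOnPlatform == 42L)
assert(job.someVal == "foo")
```

The self-type itself works fine: inside Platform1, run and someVal resolve to whatever the concrete job provides.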

Scala generic: require method to use class's type

I'm pretty new to Scala. I'm trying to write an abstract class whose methods will be required to be implemented by subclasses. I want to use generics to enforce that the method takes a parameter of the current (implementing) class.
abstract class MySuper {
  def doSomething: MyInput[thisclass] => MyResult
}

class MySub extends MySuper {
  override def doSomething: MyInput[MySub] => MyResult
}
I know that thisclass above is invalid, but I think it kind of expresses what I want to say. Basically I want to reference the implementing class. What would be the valid way to go about this?
You can do this with a neat little trick:
trait MySuper[A <: MySuper[A]] {
  def doSomething(that: A): Unit
}

class Limited extends MySuper[Limited] {
  def doSomething(that: Limited): Unit = ()
}
There are other approaches but I find this one works fairly well at expressing what you'd like.
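Applied to the question's original signature, the same F-bounded trick might look like this (MyInput and MyResult are the question's placeholder types, given trivial definitions here just so the sketch is self-contained):

```scala
trait MyInput[A]
trait MyResult

abstract class MySuper[A <: MySuper[A]] {
  def doSomething: MyInput[A] => MyResult
}

class MySub extends MySuper[MySub] {
  // the F-bounded type parameter lets the signature mention the
  // implementing class itself, which is what `thisclass` was after
  override def doSomething: MyInput[MySub] => MyResult =
    _ => new MyResult {}
}

val result = new MySub().doSomething(new MyInput[MySub] {})
assert(result.isInstanceOf[MyResult])
```

The compiler now rejects any subclass whose doSomething accepts input for a different class, e.g. MyInput[OtherSub] inside MySub.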