I'm new to Mockito and I'm dealing with a very strange problem with matchers.
def fetchById[T: Format](elasticDetails: ESDetails): F[ESResult[T]]
I have this def in my Elasticsearch client, and the problem starts with the generic Format context bound on T that I have to pass.
.fetchById[JsValue](anyObject[ESDetails])
.returns(IO.pure(ESResult(200, value = Some(fakeResponse))))
org.mockito.exceptions.misusing.InvalidUseOfMatchersException:
Invalid use of argument matchers!
2 matchers expected, 1 recorded:
-> at org.specs2.mock.mockito.MockitoMatchers.anyObject(MockitoMatchers.scala:49)
This exception may occur if matchers are combined with raw values:
//incorrect:
someMethod(anyObject(), "raw String");
When using matchers, all arguments have to be provided by matchers.
For example:
//correct:
someMethod(anyObject(), eq("String by matcher"));
This is the matcher error I'm getting when I run my tests.
You are implicitly passing a real instance of Format[T] alongside the matcher for ESDetails. Mockito requires that all arguments be either real values or matchers and does not allow you to mix them, as you can see from this error message.
The simplest fix is to turn the implicit argument into a matcher as well, e.g.
.fetchById[JsValue](anyObject[ESDetails])(eq(implicitly[Format[JsValue]]))
See also "What precisely is a Scala evidence parameter" and "How to stub a method call with an implicit matcher in Mockito and Scala".
As mentioned by Mateusz, you will probably be better off using ScalaMock than Mockito; it handles this situation much better, since it was designed for Scala.
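For completeness, here is roughly what the stubbing could look like with every argument supplied as a matcher. This is only a sketch: esClient, fakeResponse and the Play JSON types (Format, JsValue) are assumed from the question's test setup, and anyObject[Format[JsValue]] is used instead of eq(implicitly[...]) simply to match any Format instance.
```
// Sketch: both the explicit ESDetails argument and the implicit
// Format[JsValue] are supplied as matchers, so Mockito records two matchers.
esClient
  .fetchById[JsValue](anyObject[ESDetails])(anyObject[Format[JsValue]])
  .returns(IO.pure(ESResult(200, value = Some(fakeResponse))))
```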
Related
I want to test a function that writes output from an RDD in Scala Spark.
Part of this test is mocking a map on an RDD, using jMock:
val customerRdd = mockery.mock(classOf[RDD[Customer]], "rdd1")
val transformedRddToWrite = mockery.mock(classOf[RDD[TransformedCustomer]], "rdd2")
mockery.checking(new Expectations() {{
// ...
oneOf(customerRdd).map(
`with`(Expectations.any(classOf[Customer => TransformedCustomer]))
)
will(Expectations.returnValue(transformedRddToWrite))
// ...
}})
However, whenever I try to run this test, I get the following error:
"not all parameters were given explicit matchers: either all parameters must be specified by matchers or all must be specified by values, you cannot mix matchers and values", despite the fact that I have specified matchers for all parameters to .map.
How do I fix this? Can jMock support matching on Scala functional arguments with implicit classtags?
I thought jMock had been abandoned since 2012, but if you like it, more power to you. One of the issues is that map also requires a ClassTag[U], according to its signature:
def map[U: ClassTag](f: T => U): RDD[U]
where U is the return type of your function.
I am going to heavily assume here: if you want to make this work with a Java mocking framework, go under the assumption that map's effective signature is public <U> RDD<U> map(scala.Function1<T, U>, scala.reflect.ClassTag<U>); and give the ClassTag parameter a matcher as well, as sketched below.
Hope that works.
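Under that assumption, the expectation needs a matcher for the ClassTag argument as well. A rough sketch, reusing the mocks from the question (the exact jMock incantation may need tweaking):
```
// Sketch: match both the mapping function and the implicit ClassTag that
// the Scala compiler passes as map's second argument.
oneOf(customerRdd).map(
  `with`(Expectations.any(classOf[Customer => TransformedCustomer])),
  `with`(Expectations.any(classOf[scala.reflect.ClassTag[TransformedCustomer]]))
)
will(Expectations.returnValue(transformedRddToWrite))
```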
I'm trying to create a list of error listener classes that are instantiated at a later time.
The expression in question is:
import configs.syntax._
import akka.actor.Actor
private val errorListeners = applicationConfig
.get[Seq[Class[_ <: Actor]]]("connectors.event-listeners")
.valueOrElse(Seq.empty)
Which causes the following error upon compilation:
EventListenerProvider.scala:12:33: Seq[Class[_ <: akka.actor.Actor]] is abstract but not sealed
[error] .get[Seq[Class[_ <: Actor]]]("connectors.event-listeners")
This list explicitly enumerates all the types that are supported by get. There is no reason whatsoever to expect that a library that provides a thin layer of syntactic sugar for reading strings using the Config library would all of a sudden return instances of type Class[_]. It can do some basic transformation of stringly-typed config values into Strings, Ints, Lists, and some simple case classes, but it is not meant as a generic way to deserialize arbitrary classes from strings. I assume that the error message comes from a macro that attempts to interpret Class as a sealed trait, but then fails because it is abstract but not sealed.
Don't expect too much from the library: it's for reading strings, not for dealing with ClassLoaders. Read the names of the classes as a list of strings, then map them using Class.forName or something like that, as sketched below.
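A sketch of that approach, keeping the configs syntax and imports from the question; the asSubclass cast is an assumption about how the classes are meant to be used:
```
// Sketch: read the listener class names as plain strings, then resolve them
// via the ClassLoader and narrow them to Actor subclasses.
private val errorListeners: Seq[Class[_ <: Actor]] = applicationConfig
  .get[Seq[String]]("connectors.event-listeners")
  .valueOrElse(Seq.empty)
  .map(name => Class.forName(name).asSubclass(classOf[Actor]))
```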
I'm trying to write a test for the following function in Finatra HttpClient.
def executeJson[T: Manifest](
request: Request,
expectedStatus: Status = Status.Ok): Future[T] = {...}
According to another question answered on Stack Overflow, mocking generic scala method in mockito, this is shorthand for:
def executeJson[T](
request: Request,
expectedStatus: Status = Status.Ok)
(implicit T: Manifest[T]): Future[T] = {...}
So I tried,
verify(httpClientMock, times(1)).executeJson[JiraProject]
(argThat(new RequestMatcher(req)))(Matchers.any())
Unfortunately, it didn't solve my problem. I still got the following error.
Invalid use of argument matchers!
0 matchers expected, 1 recorded:
-> at org.specs.mock.MockitoStubs$class.argThat(Mockito.scala:331)
This exception may occur if matchers are combined with raw values:
//incorrect:
someMethod(anyObject(), "raw String");
When using matchers, all arguments have to be provided by matchers.
For example:
//correct:
someMethod(anyObject(), eq("String by matcher"));
I also tried Matchers.eq(Manifest[JiraProject]), but it complains that value Manifest of type scala.reflect.ManifestFactory.type does not take type parameters.
I'm new to Scala and Mockito, is there anything I did wrong or I misunderstood?
Thanks in advance for any help!
Found out the problem! executeJson actually takes 3 params: request, expectedStatus, and a Manifest. However, because expectedStatus is optional, I didn't explicitly pass it, and that's why it complained. So the final code should be:
verify(httpClientMock, times(1)).executeJson[JiraProject](argThat(new RequestMatcher(req)), Matchers.any[Status])(Matchers.any())
Your problem is that you're using equality for the first parameter and a matcher for the second one. Try using matchers for all arguments.
But a bigger problem I see here is that you're trying to mock a 3rd-party library, which is something you should avoid; fixing that would also solve your problem. Here's some extra reading about it: TDD best practices: don't mock others. A sketch of the idea follows.
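What that could look like: put a thin trait of your own in front of the third-party client and mock that trait in tests instead. The names JiraClient, HttpJiraClient and fetchProject are illustrative, not from the original code; JiraProject is the caller's own type from the question.
```
import com.twitter.finagle.http.Request
import com.twitter.finatra.httpclient.HttpClient
import com.twitter.util.Future

// A thin abstraction that the application code depends on.
trait JiraClient {
  def fetchProject(request: Request): Future[JiraProject]
}

// Production implementation that delegates to Finatra's HttpClient.
class HttpJiraClient(httpClient: HttpClient) extends JiraClient {
  override def fetchProject(request: Request): Future[JiraProject] =
    httpClient.executeJson[JiraProject](request)
}
```
In tests you then mock JiraClient, a trait you own with no implicit Manifest parameter, instead of the library's HttpClient.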
I am trying to write some use cases for Apache Flink. One error I run into pretty often is
could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[SomeType]
My problem is that I can't really nail down when they happen and when they don't.
The most recent example of this would be the following
...
val largeJoinDataGen = new LargeJoinDataGen(dataSetSize, dataGen, hitRatio)
val see = StreamExecutionEnvironment.getExecutionEnvironment
val newStreamInput = see.addSource(largeJoinDataGen)
...
where LargeJoinDataGen extends GeneratorSource[(Int, String)] and GeneratorSource[T] extends SourceFunction[T], both defined in separate files.
When trying to build this I get
Error:(22, 39) could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[(Int, String)]
val newStreamInput = see.addSource(largeJoinDataGen)
1. Why is there an error in the given example?
2. What would be a general guideline when these errors happen and how to avoid them in the future?
P.S.: This is my first Scala project and first Flink project, so please be patient.
You can add an import instead of defining the implicits yourself:
import org.apache.flink.streaming.api.scala._
That will also help.
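Applied to the snippet from the question, that looks roughly like this; the wildcard import brings the implicit TypeInformation derivation into scope:
```
import org.apache.flink.streaming.api.scala._

val largeJoinDataGen = new LargeJoinDataGen(dataSetSize, dataGen, hitRatio)
val see = StreamExecutionEnvironment.getExecutionEnvironment
// TypeInformation[(Int, String)] is now derived implicitly via the import above.
val newStreamInput = see.addSource(largeJoinDataGen)
```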
This mostly happens when you have user code, i.e. a source or a map function or something of that nature that has a generic parameter. In most cases you can fix that by adding something like
implicit val typeInfo = TypeInformation.of(classOf[(Int, String)])
If your code is inside another method that has a generic parameter you can also try adding a context bound to the generic parameter of the method, as in
def myMethod[T: TypeInformation](input: DataStream[Int]): DataStream[T] = ...
My problem is that I can't really nail down when they happen and when they don't.
They happen when an implicit parameter is required. If we look at the method definition we see:
def addSource[T: TypeInformation](function: SourceFunction[T]): DataStream[T]
But we don't see any implicit parameter defined, where is it?
When you see a polymorphic method where the type parameter is of the form
def foo[T : M](param: T)
where T is the type parameter and M is a context bound, it means that the creator of the method is requesting an implicit parameter of type M[T]. It is equivalent to:
def foo[T](param: T)(implicit ev: M[T])
In the case of your method, it is actually expanded to:
def addSource[T](function: SourceFunction[T])(implicit evidence: TypeInformation[T]): DataStream[T]
This is why you see the compiler complaining, as it can't find the implicit parameter the method is requiring.
If we go to the Apache Flink Wiki, under Type Information, we can see why this happens:
No Implicit Value for Evidence Parameter Error
In the case where TypeInformation could not be created, programs fail to compile with an error stating “could not find implicit value for evidence parameter of type TypeInformation”.
A frequent reason is that the code that generates the TypeInformation has not been imported. Make sure to import the entire flink.api.scala package.
import org.apache.flink.api.scala._
For generic methods, you'll need to require them to generate a TypeInformation at the call-site as well:
For generic methods, the data types of the function parameters and return type may not be the same for every call and are not known at the site where the method is defined. The code above will result in an error that not enough implicit evidence is available.
In such cases, the type information has to be generated at the invocation site and passed to the method. Scala offers implicit parameters for that.
Note that import org.apache.flink.streaming.api.scala._ may also be necessary.
For your types this means that if the invoking method is generic, it also needs to request the context bound for its type parameter, as in the sketch below.
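For example, a calling method that is itself generic has to forward the evidence, roughly like this (the helper name addTypedSource is made up for illustration):
```
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.scala._

// The context bound [T: TypeInformation] asks the caller of addTypedSource
// for the evidence, which is then forwarded implicitly to addSource.
def addTypedSource[T: TypeInformation](env: StreamExecutionEnvironment,
                                       source: SourceFunction[T]): DataStream[T] =
  env.addSource(source)
```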
Note also that different Scala versions (2.11, 2.12, etc.) are not binary compatible.
The following is a wrong configuration, even if you use import org.apache.flink.api.scala._:
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<scala.version>2.12.8</scala.version>
<scala.binary.version>2.11</scala.binary.version>
</properties>
Correct configuration in Maven:
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<scala.version>2.12.8</scala.version>
<scala.binary.version>2.12</scala.binary.version>
</properties>
Simple question: I have a problem where using mapTo on the result of ask results in a compiler error along the lines of:
not found: value ClassTag
For example:
(job ? "Run").mapTo[Result]
^
I don't understand why it needs a ClassTag to do the cast. If I substitute a standard class from Predef like String, as in (job ? "Run").mapTo[String], it compiles OK.
Even when I define the class right above the line in question, as in:
class Result {}
(job ? "Run").mapTo[Result]
I still get the same problem.
Thanks, Jason.
I should also state that I'm using Scala 2.10.0 and Akka 2.1.0 (if that makes a difference).
This seems to be a particular problem with Scala 2.10.0.
After adding
import reflect.ClassTag
the ClassTag parameter that mapTo uses implicitly should be found.
Either that, or update to a newer version of Akka/Scala (which should be preferred if possible).
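A minimal sketch of the fix, assuming job is an ActorRef; everything except class Result and the mapTo call is scaffolding added here so the snippet is self-contained:
```
import scala.reflect.ClassTag      // per the answer above: needed on Scala 2.10.0
import scala.concurrent.Future
import scala.concurrent.duration._
import akka.actor.ActorRef
import akka.pattern.ask
import akka.util.Timeout

class Result

def runJob(job: ActorRef): Future[Result] = {
  implicit val timeout: Timeout = Timeout(5.seconds)
  // With ClassTag imported, the implicit ClassTag[Result] for mapTo resolves.
  (job ? "Run").mapTo[Result]
}
```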