How to override an implicit conversion in my Scala test?

I am working on a Scala test that exercises code which uses implicit conversion methods. I don't want those implicit conversions to apply in the test and would like to mock/override them. Is it possible to do that?
implicit class Typeconverter(objA: typeA) {
  def asTypeB = {
    // return a typeB object
  }
}

def methodA(request: typeA) {
  ...............
  request.asTypeB
  ...............
}
While testing methodA, I want "asTypeB" to be mocked/overridden rather than having the real conversion called.

As with any other dependency, you make m testable by passing the conversion in:
def m(request: A)(implicit cv: A => B) = ???
Then the test can supply arbitrary conversions either explicitly or implicitly.
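For illustration, here is a minimal sketch of that approach; the types A and B and the conversions are invented for the example:
case class A(s: String)
case class B(n: Int)

// The method under test takes the conversion as an implicit parameter.
def m(request: A)(implicit cv: A => B): B = cv(request)

// The test can pass a stubbed conversion explicitly...
val stub: A => B = _ => B(42)
assert(m(A("ignored"))(stub) == B(42))

// ...or bring one into implicit scope so the call site stays unchanged.
implicit val testConversion: A => B = _ => B(0)
assert(m(A("whatever")) == B(0))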
But an implicit used inside an already-compiled method was resolved at compile time. To substitute a custom test version, you would have to provide a binary-compatible replacement for whichever conversion implicit search selected. That could be tricky and, to quote the other answer, doesn't sound like a good idea. If the implicit is neatly packaged, it might be feasible.

That doesn't sound like a good idea, but if you have a method or function with the same name and type as the original implicit in a closer scope, it will shadow the previous one. This trick is used by e.g. rapture https://github.com/propensive/rapture/blob/dev/json-argonaut/shared/src/main/scala/rapture/json-argonaut/package.scala#L21 https://github.com/propensive/rapture/blob/dev/json-circe/shared/src/main/scala/rapture/json-circe/package.scala#L21
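As a rough illustration of that shadowing trick (the names here are invented, not taken from rapture): a definition with the same name in a closer scope hides the imported implicit, so the local version wins at the call site.
import scala.language.implicitConversions

object ProdConversions {
  implicit def intToLabel(i: Int): String = s"prod:$i"
}

object MyTest {
  import ProdConversions._

  // Same name in a closer scope shadows the imported implicit.
  implicit def intToLabel(i: Int): String = s"test:$i"

  val label: String = 42 // uses the local intToLabel, yielding "test:42"
}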

Related

How to obtain a tree for a higher-kinded type parameter in a scala macro

I'm trying to write a macro to simplify some monad-related code (I'm using cats 1.6.0 for the Monads). For now I just want to be able to write lift[F](a) where F is a unary type constructor, and have that expand to a.pure[F]. Seems simple enough, but I can't get it working.
For now I have this code to help with type inference:
import scala.language.experimental.macros

object Macros {
  class LiftPartiallyApplied[F[_]] {
    def apply[A](a: A): F[A] = macro MacroImpl.liftImpl[F, A]
  }
  def lift[F[_]] = new LiftPartiallyApplied[F]
}
And for the actual implementation of the macro:
import scala.reflect.macros.blackbox

object MacroImpl {
  def liftImpl[F[_], A](c: blackbox.Context)(a: c.Tree)(implicit tt: c.WeakTypeTag[F[_]]): c.Tree = {
    import c.universe._
    q"$a.pure[${tt.tpe.typeConstructor}]"
  }
}
Now I can call the macro like this: lift[List](42), and it'll expand to 42.pure[List], great. But when I call it with a more complicated type, like lift[({type F[A] = Either[String, A]})#F](42), it'll expand to 42.pure[Either], which is obviously broken, because Either is a binary type constructor, not a unary one. The problem is I just don't know what to put there instead of ${tt.tpe.typeConstructor}…
// edit: since people apparently have trouble reproducing the problem, I've made a complete repository:
https://github.com/mberndt123/macro-experiment
I will now try to figure out what the difference between Dmytro's and my own project is.
Don't put Main and Macros in the same compilation unit.
But when I call it with a more complicated type, like lift[({type F[A] = Either[String, A]})#F](42), it'll expand to 42.pure[Either]
Can't reproduce.
For me lift[List](42) produces (with scalacOptions += "-Ymacro-debug-lite")
Warning:scalac: 42.pure[List]
TypeApply(Select(Literal(Constant(42)), TermName("pure")), List(TypeTree()))
at compile time and List(42) at runtime.
lift[({ type F[A] = Either[String, A] })#F](42) produces
Warning:scalac: 42.pure[[A]scala.util.Either[String,A]]
TypeApply(Select(Literal(Constant(42)), TermName("pure")), List(TypeTree()))
at compile time and Right(42) at runtime.
This is my project https://gist.github.com/DmytroMitin/334c230a4f2f1fd3fe9e7e5a3bb10df5
Why do you need macros? Why can't you write
import cats.Applicative
import cats.syntax.applicative._
class LiftPartiallyApplied[F[_]: Applicative] {
  def apply[A](a: A): F[A] = a.pure[F]
}
def lift[F[_]: Applicative] = new LiftPartiallyApplied[F]
?
Alright, I found out what the problem was.
Macros need to be compiled separately from their use sites. I thought this meant that Macros needed to be compiled separately from MacroImpl, so I put those in separate sbt subprojects and called the macro in the project where Macros is defined. But what it in fact means is that calls to the macro need to be compiled separately from its definition. So I put MacroImpl and Macros in one subproject, called the macro from another, and it worked perfectly.
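For reference, here is a minimal sketch of the sbt layout described above; the subproject names and versions are just illustrative:
// build.sbt
lazy val commonSettings = Seq(
  scalaVersion := "2.12.8",
  libraryDependencies += "org.typelevel" %% "cats-core" % "1.6.0"
)

// MacroImpl and Macros live here.
lazy val macros = (project in file("macros"))
  .settings(commonSettings)
  .settings(libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value)

// The macro is *called* from here, so these call sites are compiled
// in a separate run, after `macros`.
lazy val app = (project in file("app"))
  .settings(commonSettings)
  .dependsOn(macros)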
Thanks to Dmytro for taking the time to demonstrate how to do it right!
// edit: looks like Dmytro beat me to it with his comment :-)

How to Provide Specialized Implementations of Generic Methods in Scala?

I'm working on an existing code base with a wrapper class around Slick 2.1.0 (I know). This wrapper has a method named transaction that is generic - it takes an (f: => T) parameter (so it's pass-by-name). I need to mock this class for a unit test. We're also using Mockito 1.10.19 (again, I know), which won't let me mock a by-name parameter (I believe...). So I'm stuck implementing the underlying trait that this wrapper class is built on.
The immediate problem is this: I want to mock this transaction method so it does nothing. The code I'm testing passes in a (f: => Unit). So I want to implement this method to return a Future.Done. (Did I mention we're using Finagle and not Scala futures?) But this method is generic. How do I properly specialize?
This is my current attempt:
val mockDBM = new DatabaseManager {
  override def transaction[@specialized(Unit) T](f: => T): Future[T] = Future.value(f)
  def transaction(f: => Unit): Future[Unit] = Future.Done
}
Of course, I get a "have same type after erasure" error upon compilation. Obviously I have no idea how @specialized works.
What do I do? Maybe I can use Mockito after all? Or do I need to learn what specializing a generic method actually means?
I found this, which probably contains the answer, but I have no formal background in FP, and I don't grok this at all: How can one provide manually specialized implementations with Scala specialization?
@specialized doesn't let you provide specializations; it just generates its own. The answer provided in the linked question would require changing the signature. From the question it looks like you can't change it, in which case you are out of luck. If you can... you may still be out of luck, depending on how exactly this code is going to be called.
OTOH, the solution for "I want to disregard f, but can only return Future.Done if the generic is for a Unit type" is far simpler:
class Default[A] {
  var x: A = _
}
object Default {
  def apply[A]() = (new Default[A]).x
}

val mockDBM = new DatabaseManager {
  override def transaction[T](f: => T): Future[T] = {
    Future.value(Default[T]())
  }
}
That is assuming you need a successful future but don't care about the value; if you just need any future, override def transaction[T](f: => T): Future[T] = Future.???.
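For illustration, assuming DatabaseManager declares just the transaction method being mocked and Future is Finagle's com.twitter.util.Future (as in the question), the mock above can be exercised like this:
import com.twitter.util.Await

// The Unit case from the question: yields a completed Future[Unit],
// and the by-name block (with its side effects) is never evaluated.
Await.result(mockDBM.transaction { println("never printed") })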

Can Scala infer the actual type from the return type actually expected by the caller?

I have the following question. Our project has a lot of test code in Scala, and there is a lot of code that fills in fields like this:
production.setProduct(new Product)
production.getProduct.setUuid("b1253a77-0585-291f-57a4-53319e897866")
production.setSubProduct(new SubProduct)
production.getSubProduct.setUuid("89a877fa-ddb3-3009-bb24-735ba9f7281c")
Eventually I grew tired of this code. Since all those fields are actually instances of subclasses of a base class that has the uuid field, after thinking a while I wrote an auxiliary function like this:
def createUuid[T <: GenericEntity](uuid: String)(implicit m: Manifest[T]): T = {
  val constructor = m.runtimeClass.getConstructors()(0)
  val instance = constructor.newInstance().asInstanceOf[T]
  instance.setUuid(uuid)
  instance
}
Now my code is half as long, since I can write something like this:
production.setProduct(createUuid[Product]("b1253a77-0585-291f-57a4-53319e897866"))
production.setSubProduct(createUuid[SubProduct]("89a877fa-ddb3-3009-bb24-735ba9f7281c"))
That's good, but I am wondering if I could somehow implement createUuid so that the last bit would look like this:
// Is that really possible?
production.setProduct(createUuid("b1253a77-0585-291f-57a4-53319e897866"))
production.setSubProduct(createUuid("89a877fa-ddb3-3009-bb24-735ba9f7281c"))
Can the Scala compiler guess that setProduct expects not just a generic entity but actually something like Product (or its subclass)? Or is there no way in Scala to make this even shorter?
The Scala compiler won't infer/propagate the type from the outside in. You could, however, create implicit conversions like:
implicit def stringToSubProduct(uuid: String): SubProduct = {
  val n = new SubProduct
  n.setUuid(uuid)
  n
}
and then just call
production.setSubProduct("89a877fa-ddb3-3009-bb24-735ba9f7281c")
and the compiler will automatically use the stringToSubProduct because it has applicable types on the input and output.
Update: To have the code better organized I suggest wrapping the implicit defs in a companion object, like:
case class EntityUUID(uuid: String) {
  require(uuid.matches("[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}")) // basic uuid format check
}

object EntityUUID {
  implicit def toProduct(e: EntityUUID): Product = {
    val p = new Product
    p.setUuid(e.uuid)
    p
  }
  implicit def toSubProduct(e: EntityUUID): SubProduct = {
    val p = new SubProduct
    p.setUuid(e.uuid)
    p
  }
}
and then you'd do
production.setProduct(EntityUUID("b1253a77-0585-291f-57a4-53319e897866"))
so anyone reading this could have an intuition where to find the conversion implementation.
Regarding your comment about some generic approach (having 30 types), I won't say it's not possible, but I just do not see how to do it. The reflection you used bypasses the type system. If all 30 cases are the same piece of code, maybe you should reconsider your object design. You can still implement the 30 implicit defs by calling some method that uses reflection, similar to what you have provided; then you at least keep the option to change it in the future in this one place (well, 30 places).
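To make that concrete, here is a rough sketch (assuming the GenericEntity base class and the entity types from the question): each implicit def stays a one-liner and delegates to a single reflective helper, so the reflection lives in one place.
object EntityConversions {
  private def fromUuid[T <: GenericEntity](uuid: String)(implicit m: Manifest[T]): T = {
    val instance = m.runtimeClass.getConstructors()(0).newInstance().asInstanceOf[T]
    instance.setUuid(uuid)
    instance
  }

  // One thin implicit per entity type; only the target type differs.
  implicit def stringToProduct(uuid: String): Product = fromUuid[Product](uuid)
  implicit def stringToSubProduct(uuid: String): SubProduct = fromUuid[SubProduct](uuid)
  // ... and so on for the remaining entity types
}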

Scala: Why use implicit on function argument?

I have the following function:
def getIntValue(x: Int)(implicit y: Int ) : Int = {x + y}
I see the above declaration everywhere. I understand what the above function is doing: it is a curried function which takes two arguments in separate parameter lists. If you omit the second argument, an implicit Int value in scope will be supplied instead. So I think it is something very similar to defining a default value for the argument.
implicit val temp = 3
scala> getIntValue(3)
res8: Int = 6
I was wondering what the benefits of the above declaration are?
Here's my "pragmatic" answer: you typically use currying as more of a "convention" than anything else meaningful. It comes in really handy when your last parameter happens to be a "call by name" parameter (for example: : => Boolean):
def transaction(conn: Connection)(codeToExecuteInTransaction: => Boolean) = {
  conn.startTransaction // start transaction
  val booleanResult = codeToExecuteInTransaction // invoke the code block they passed in
  // deal with errors and rollback if necessary, or commit
  // return connection to connection pool
}
What this is saying is "I have a function called transaction, its first parameter is a Connection and its second parameter will be a code-block".
This allows us to use this method like so (using the "I can use curly braces instead of parentheses" rule):
transaction(myConn) {
  // code to execute in a transaction
  // the code block's last executable statement must be a Boolean as per the second
  // parameter of the transaction method
}
If you didn't curry that transaction method, it would look pretty unnatural doing this:
transaction(myConn, {
  // code block
})
How about implicit? Yes, it can seem like a very ambiguous construct, but you get used to it after a while, and the nice thing about implicits is that they follow scoping rules. This means that for production you might define an implicit that gets the database connection from the PROD database, while in your integration test you'll define an implicit that supersedes the PROD version and gets a connection from a DEV database instead, for use in your test.
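A rough sketch of that idea (Connection and the connection-string values here are placeholders, not real Slick/JDBC code):
trait Connection { def url: String }

object ProdConnections {
  implicit def connection: Connection = new Connection { val url = "jdbc:postgresql://prod/db" }
}

object TestConnections {
  implicit def connection: Connection = new Connection { val url = "jdbc:h2:mem:test" }
}

def whereAmI(implicit conn: Connection): String = conn.url

// Production code does `import ProdConnections._`, the integration test does
// `import TestConnections._`; whichever implicit is in scope at the call site wins.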
As an example, how about we add an implicit parameter to the transaction method?
def transaction(codeToExecuteInTransaction: => Boolean)(implicit conn: Connection) = {
  // note: the implicit parameter list has to come last in Scala
}
Now, assuming I have an implicit function somewhere in my code base that returns a Connection, like so:
implicit def getConnectionFromPool(): Connection = { ... }
I can execute the transaction method like so:
transaction {
  // code to execute in transaction
}
and Scala will translate that to:
transaction {
  // code to execute in transaction
}(getConnectionFromPool)
In summary, implicits are a pretty nice way to avoid making the developer provide a value for a required parameter when that parameter is, 99% of the time, going to be the same everywhere the function is used. In the 1% of cases where you need a different Connection, you can provide your own by passing in a value explicitly instead of letting Scala figure out which implicit provides it.
In your specific example there are no practical benefits. In fact using implicits for this task will only obfuscate your code.
The standard use case of implicits is the Type Class Pattern. I'd say that it is the only use case that is practically useful. In all other cases it's better to have things explicit.
Here is an example of a typeclass:
// A typeclass
trait Show[a] {
  def show(a: a): String
}

// Some data type
case class Artist(name: String)

// An instance of the `Show` typeclass for that data type
implicit val artistShowInstance =
  new Show[Artist] {
    def show(a: Artist) = a.name
  }

// A function that works for any type `a`, which has an instance of a class `Show`
def showAListOfShowables[a](list: List[a])(implicit showInstance: Show[a]): String =
  list.view.map(showInstance.show).mkString(", ")

// The following code outputs `Beatles, Michael Jackson, Rolling Stones`
val list = List(Artist("Beatles"), Artist("Michael Jackson"), Artist("Rolling Stones"))
println(showAListOfShowables(list))
This pattern originates from a functional programming language named Haskell and has turned out to be more practical than standard OO practices for writing modular and decoupled software. The main benefit is that it allows you to extend already existing types with new functionality without changing them.
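For instance, following the definitions above, you can give Show an instance for a type you don't own, such as Int, without touching Int itself (a small sketch):
implicit val intShowInstance: Show[Int] =
  new Show[Int] {
    def show(a: Int) = a.toString
  }

// prints `1, 2, 3`
println(showAListOfShowables(List(1, 2, 3)))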
There are plenty of details left unmentioned, like syntactic sugar, def instances, etc. It is a huge subject and fortunately it has great coverage throughout the web. Just google for "scala type class".
There are many benefits, outside of your example.
I'll give just one; at the same time, this is also a trick that you can use on certain occasions.
Imagine you create a trait that is a generic container for other values, like a list, a set, a tree or something like that.
trait MyContainer[A] {
  def containedValue: A
}
Now, at some point, you find it useful to iterate over all elements of the contained value.
Of course, this only makes sense if the contained value is of an iterable type.
But because you want your class to be useful for all types, you don't want to restrict A to be of a Seq type, or Traversable, or anything like that.
Basically, you want a method that says: "I can only be called if A is of a Seq type."
And if someone calls it on, say, MyContainer[Int], that should result in a compile error.
That's possible.
What you need is some evidence that A is of a sequence type.
And you can do that with Scala and implicit arguments:
trait MyContainer[A] {
  def containedValue: A
  def aggregate[B](f: (B, B) => B)(implicit ev: A => Seq[B]): B =
    ev(containedValue) reduce f
}
So, if you call this method on a MyContainer[Seq[Int]], the compiler will look for an implicit Seq[Int] => Seq[B] (with B inferred as Int, so effectively Seq[Int] => Seq[Int]).
That's really simple for the compiler to resolve, because Predef provides an implicit identity-style conversion from every type to itself (Predef.$conforms, whose type A <:< A is a subtype of A => A), and it is always in scope.
It simply returns whatever argument is passed to it.
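A small usage sketch under those definitions (with the (B, B) => B signature shown above):
val ints = new MyContainer[Seq[Int]] { def containedValue = Seq(1, 2, 3) }
ints.aggregate[Int](_ + _) // 6: the implicit Seq[Int] => Seq[Int] is found automatically

val single = new MyContainer[Int] { def containedValue = 42 }
// single.aggregate[Int](_ + _) // does not compile: no implicit Int => Seq[Int] in scope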
I don't know what this pattern is called. (Can anyone help out?)
But I think it's a neat trick that comes in handy sometimes.
You can see a good example of that in the Scala library if you look at the method signature of Seq.sum.
In the case of sum, another implicit parameter type is used; in that case, the implicit parameter is evidence that the contained type is numeric, and therefore, a sum can be built out of all contained values.
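Roughly, sum's signature is def sum[B >: A](implicit num: Numeric[B]): B, so for example:
List(1, 2, 3).sum     // 6, because an implicit Numeric[Int] is in scope
// List("a", "b").sum // does not compile: no implicit Numeric[String]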
That's not the only use of implicits, and certainly not the most prominent, but I'd say it's an honorable mention. :-)

How to test type conformance of higher-kinded types in Scala

I am trying to test whether two "containers" use the same higher-kinded type. Look at the following code:
import scala.reflect.runtime.universe._
class Funct[A[_],B]
class Foo[A : TypeTag](x: A) {
  def test[B[_]](implicit wt: WeakTypeTag[B[_]]) =
    println(typeOf[A] <:< weakTypeOf[Funct[B,_]])

  def print[B[_]](implicit wt: WeakTypeTag[B[_]]) = {
    println(typeOf[A])
    println(weakTypeOf[B[_]])
  }
}
val x = new Foo(new Funct[Option,Int])
x.test[Option]
x.print[Option]
The output is:
false
Test.Funct[Option,Int]
scala.Option[_]
However, I expect the conformance test to succeed. What am I doing wrong? How can I test for higher-kinded types?
Clarification
In my case, the values I am testing (the x: A in the example) come in a List[c.Expr[Any]] in a macro, so any solution relying on static resolution (like the one I have given below) will not solve my problem.
It's the mixup between underscores used in type parameter definitions and elsewhere. The underscore in TypeTag[B[_]] means an existential type, hence you get a tag not for B, but for an existential wrapper over it, which is pretty much useless without manual postprocessing.
Consequently typeOf[Funct[B, _]] that needs a tag for raw B can't make use of the tag for the wrapper and gets upset. By getting upset I mean it refuses to splice the tag in scope and fails with a compilation error. If you use weakTypeOf instead, then that one will succeed, but it will generate stubs for everything it couldn't splice, making the result useless for subtyping checks.
Looks like in this case we really hit the limits of Scala in the sense that there's no way for us to refer to raw B in WeakTypeTag[B], because we don't have kind polymorphism in Scala. Hopefully something like DOT will save us from this inconvenience, but in the meanwhile you can use this workaround (it's not pretty, but I haven't been able to come up with a simpler approach).
import scala.reflect.runtime.universe._
object Test extends App {
  class Foo[B[_], T]

  // NOTE: ideally we'd be able to write this, but since it's not valid Scala
  // we have to work around by using an existential type
  // def test[B[_]](implicit tt: WeakTypeTag[B]) = weakTypeOf[Foo[B, _]]
  def test[B[_]](implicit tt: WeakTypeTag[B[_]]) = {
    val ExistentialType(_, TypeRef(pre, sym, _)) = tt.tpe
    // attempt #1: just compose the type manually
    // but what do we put there instead of question marks?!
    // appliedType(typeOf[Foo], List(TypeRef(pre, sym, Nil), ???))
    // attempt #2: reify a template and then manually replace the stubs
    val template = typeOf[Foo[Hack, _]]
    val result = template.substituteSymbols(List(typeOf[Hack[_]].typeSymbol), List(sym))
    println(result)
  }

  test[Option]
}

// has to be top-level, otherwise the substitution magic won't work
class Hack[T]
An astute reader will notice that I used WeakTypeTag in the signature of test, even though I should be able to use TypeTag. After all, we call test on Option, which is a well-behaved type, in the sense that it doesn't involve unresolved type parameters or local classes that pose problems for TypeTags. Unfortunately, it's not that simple because of https://issues.scala-lang.org/browse/SI-7686, so we're forced to use a weak tag even though we shouldn't need to.
The following is an answer that works for the example I have given (and might help others), but does not apply to my (non-simplified) case.
Stealing from @pedrofurla's hint, and using type classes:
trait ConfTest[A, B] {
  def conform: Boolean
}

trait LowPrioConfTest {
  implicit def ctF[A, B] = new ConfTest[A, B] { val conform = false }
}

object ConfTest extends LowPrioConfTest {
  implicit def ctT[A, B](implicit ev: A <:< B) =
    new ConfTest[A, B] { val conform = true }
}
And add this to Foo:
def imp[B[_]](implicit ct: ConfTest[A, Funct[B, _]]) =
  println(ct.conform)
Now:
x.imp[Option] // --> true
x.imp[List] // --> false