Exclude a specific implicit from a Scala project

How can I prevent the usage of a specific implicit in my Scala code?
For example, I was recently bitten by the default Codec provided by https://github.com/scala/scala/blob/68bad81726d15d03a843dc476d52cbbaf52fb168/src/library/scala/io/Codec.scala#L76.
Is there a way to ensure that any code that calls for an implicit codec: Codec never uses the one provided by fallbackSystemCodec?
Alternatively, is it possible to block all implicit Codecs?
Is this something that should be doable using scalafix?

Scalafix can inspect implicit arguments using SemanticTree. Here is an example solution that defines a custom Scalafix rule.
Given
import scala.io.Codec

object Hello {
  def foo(implicit codec: Codec) = 3
  foo
}
we can define a custom rule
class ExcludedImplicitsRule(config: ExcludedImplicitsRuleConfig)
    extends SemanticRule("ExcludedImplicitsRule") {
  ...
  override def fix(implicit doc: SemanticDocument): Patch = {
    doc.tree.collect {
      case term: Term if term.synthetic.isDefined => // TODO: Use ApplyTree(func, args)
        val struct = term.synthetic.structure
        val isImplicit = struct.contains("implicit")
        val excludedImplicit = config.blacklist.find(struct.contains)
        if (isImplicit && excludedImplicit.isDefined)
          Patch.lint(ExcludedImplicitsDiagnostic(term, excludedImplicit.getOrElse(config.blacklist.mkString(","))))
        else
          Patch.empty
    }.asPatch
  }
}
and corresponding .scalafix.conf
rule = ExcludedImplicitsRule
ExcludedImplicitsRuleConfig.blacklist = [
fallbackSystemCodec
]
should enable sbt scalafix to raise the diagnostic
[error] /Users/mario/IdeaProjects/scalafix-exclude-implicits/example-project/scalafix-exclude-implicits-example/src/main/scala/example/Hello.scala:7:3: error: [ExcludedImplicitsRule] Attempting to pass excluded implicit fallbackSystemCodec to foo'
[error] foo
[error] ^^^
[error] (Compile / scalafix) scalafix.sbt.ScalafixFailed: LinterError
Note the output of println(term.synthetic.structure):
Some(ApplyTree(
  OriginalTree(Term.Name("foo")),
  List(
    IdTree(SymbolInformation(scala/io/LowPriorityCodecImplicits#fallbackSystemCodec. => implicit lazy val method fallbackSystemCodec: Codec))
  )
))
Clearly, the above solution is not efficient since it searches strings; however, it should give some direction. Matching on ApplyTree(func, args) would likely be more robust; a rough sketch of that follows.
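A rough, hedged sketch of that ApplyTree-based matching, assuming the scalafix.v1 ApplyTree/IdTree node shapes shown in the printed structure above; verify the exact SemanticTree API against your Scalafix version:
// Rough sketch only: assumes the ApplyTree / IdTree node shapes from
// scalafix.v1 and that blacklist entries are substrings of the SemanticDB
// symbol (e.g. "fallbackSystemCodec").
override def fix(implicit doc: SemanticDocument): Patch =
  doc.tree.collect {
    case term: Term =>
      term.synthetic match {
        case Some(ApplyTree(_, args)) =>
          args.collect {
            case id: IdTree if config.blacklist.exists(id.info.symbol.value.contains) =>
              Patch.lint(ExcludedImplicitsDiagnostic(term, id.info.symbol.value))
          }.asPatch
        case _ =>
          Patch.empty
      }
  }.asPatch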
scalafix-exclude-implicits-example shows how to configure the project to use ExcludedImplicitsRule.

You can do this by using a new type altogether; that way, nothing in your dependencies will be able to provide it for you. It's essentially the answer I posted about creating an ambiguous low-priority implicit.
It may not be practical, though, if for example you can't change the type. A sketch of the idea is below.
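For illustration, a minimal sketch of the new-type idea; MyCodec, explicitUtf8 and IoUtil are hypothetical names, not part of any library:
import scala.io.Codec

// Wrapping Codec in our own type means the standard library's
// fallbackSystemCodec can never satisfy the implicit parameter.
final case class MyCodec(underlying: Codec)

object MyCodec {
  // The only implicit instances are the ones you define yourself.
  implicit val explicitUtf8: MyCodec = MyCodec(Codec.UTF8)
}

object IoUtil {
  def readAll(path: String)(implicit codec: MyCodec): String =
    scala.io.Source.fromFile(path)(codec.underlying).mkString
}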

Related

Slick 3.1.1 ambiguous implicit values involving filter and implicit CanBeQueryCondition

I have been trying to create a generic DAO over Slick 3.1.1; it includes a generic filter comparable to JPA's findByExample. See the following files:
GenericDaoImpl.scala Generic level reusable across all Models
UserDao.scala Generic plus customizations for the User model
UserService.scala Wraps the UserDao into more services level functionality
In this last file I try to use the generic filter function to find a user by its registered email, like this:
// this will implicitly exec and wait indefinitely for the
// db.run Future to complete
import dao.ExecHelper._

def findByEmail(email: String): Option[UserRow] = {
  userDao.filter(_.email === email).headOption
}
but this produces the compiler error:
[error] /home/bravegag/code/play-authenticate-usage-scala/app/services/UserService.scala:35: value === is not a member of String
[error] userDao.filter(email === _.email).headOption
[error] ^
[error] /home/bravegag/code/play-authenticate-usage-scala/app/services/UserService.scala:35: ambiguous implicit values:
[error] both value BooleanOptionColumnCanBeQueryCondition in object CanBeQueryCondition of type => slick.lifted.CanBeQueryCondition[slick.lifted.Rep[Option[Boolean]]]
[error] and value BooleanCanBeQueryCondition in object CanBeQueryCondition of type => slick.lifted.CanBeQueryCondition[Boolean]
[error] match expected type slick.lifted.CanBeQueryCondition[Nothing]
[error] userDao.filter(email === _.email).headOption
[error] ^
Can anyone advise how the implicit declaration of the filter function below can be improved to solve this compiler error?
The implementation of the filter function (found in GenericDaoImpl.scala) is:
// T is defined above as T <: Table[E] with IdentifyableTable[PK]
override def filter[C <: Rep[_]](expr: T => C)
    (implicit wt: CanBeQueryCondition[C]): Future[Seq[E]] =
  db.run(tableQuery.filter(expr).result)
As far as I can see, you are simply lacking your profile API import in UserService.
Just add this import there: import profile.api._ and it should work. A minimal sketch of why follows.
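For illustration, a minimal, self-contained sketch of why that import matters, using H2 and hypothetical Users/UserRepo definitions (in Slick 3.1.x the import is slick.driver.H2Driver.api._; newer Slick versions use slick.jdbc.H2Profile.api._):
import slick.driver.H2Driver.api._ // brings ===, Rep and the CanBeQueryCondition instances into scope
import scala.concurrent.Future

class Users(tag: Tag) extends Table[(Long, String)](tag, "users") {
  def id    = column[Long]("id", O.PrimaryKey)
  def email = column[String]("email")
  def *     = (id, email)
}

class UserRepo(db: Database) {
  private val users = TableQuery[Users]

  // Compiles only because the profile api import above is in scope here;
  // without it, === is unresolved and the CanBeQueryCondition evidence is missing.
  def findByEmail(email: String): Future[Option[(Long, String)]] =
    db.run(users.filter(_.email === email).result.headOption)
}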
EDIT: BTW, I see many people building their own version of base CRUDs for Slick. Did you try any of the existing thin libraries that do just that, e.g. https://github.com/VirtusLab/unicorn ? It's not really related to this question, but it may be worth a look.

Scala wrapping method of a parametrized class (spark-cassandra-connector)

I am writing a set of methods that extend Spark RDD's API.
I have to implement a general method for storing the RDDs, and for a start I tried to wrap spark-cassandra-connector's saveAsCassandraTable, without success.
Here's the "extending RDD's API" part:
object NewRDDFunctions {
  implicit def addStorageFunctions[T](rdd: RDD[T]): RDDStorageFunctions[T] =
    new RDDStorageFunctions(rdd)
}

class RDDStorageFunctions[T](rdd: RDD[T]) {
  def saveResultsToCassandra() {
    rdd.saveAsCassandraTable("ks_name", "table_name") // this line produces errors!
  }
}
...and importing the object as: import ...NewRDDFunctions._.
The marked line produces the following errors:
Error:(99, 29) could not find implicit value for parameter rwf: com.datastax.spark.connector.writer.RowWriterFactory[T]
rdd.saveAsCassandraTable("ks_name", "table_name")
^
Error:(99, 29) not enough arguments for method saveAsCassandraTable: (implicit connector: com.datastax.spark.connector.cql.CassandraConnector, implicit rwf: com.datastax.spark.connector.writer.RowWriterFactory[T], implicit columnMapper: com.datastax.spark.connector.mapper.ColumnMapper[T])Unit.
Unspecified value parameters rwf, columnMapper.
rdd.saveAsCassandraTable("ks_name", "table_name")
^
I don't get why this doesn't work since saveAsCassandraTable is designed to work on any RDD. Any suggestions?
I had a similar problem with the example in the spark-cassandra-connector docs:
case class WordCount(word: String, count: Long)
val collection = sc.parallelize(Seq(WordCount("dog", 50), WordCount("cow", 60)))
collection.saveAsCassandraTable("test", "words_new", SomeColumns("word", "count"))
...and the solution was to move the case class definition out of the "main" function (but I don't really know whether this applies to the problem above...). A sketch of that fix is below.
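For illustration, a hedged sketch of that fix; WordCountApp is a hypothetical name, and the working assumption is that the connector can only derive its implicits for a case class that is not local to a method:
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

// Top-level case class, not defined inside main.
case class WordCount(word: String, count: Long)

object WordCountApp {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("wordcount-example"))
    val collection = sc.parallelize(Seq(WordCount("dog", 50), WordCount("cow", 60)))
    // RowWriterFactory and ColumnMapper now resolve because WordCount is top level.
    collection.saveAsCassandraTable("test", "words_new", SomeColumns("word", "count"))
  }
}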
saveAsCassandraTable needs 3 implicit parameters. The first one (connector) has a default value; the last two (rwf and columnMapper) are not in implicit scope in your saveResultsToCassandra method, and as a consequence your method doesn't compile.
Look at this answer on another question if you need some more information about implicits.
Turning your saveResultsToCassandra into the function below should work, provided you have defined your tables (TableDef) beforehand; a usage sketch follows the snippet.
import com.datastax.spark.connector.mapper.ColumnMapper
import com.datastax.spark.connector.writer.RowWriterFactory

def saveResultsToCassandra()(
    // implicit parameters as a separate list!
    implicit rwf: RowWriterFactory[T],
    columnMapper: ColumnMapper[T]
) {
  rdd.saveAsCassandraTable("ks_name", "table_name")
}
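A hedged usage sketch (Record, SaveExample, saveCounts and saveAny are hypothetical names): with a concrete, top-level case class the connector's default implicits resolve at the call site, while a caller that is itself generic has to forward the same context bounds:
import com.datastax.spark.connector.mapper.ColumnMapper
import com.datastax.spark.connector.writer.RowWriterFactory
import org.apache.spark.rdd.RDD
import NewRDDFunctions._

case class Record(key: String, value: Long) // top level, so the defaults can be derived

object SaveExample {
  // Concrete element type: rwf and columnMapper are filled in automatically.
  def saveCounts(rdd: RDD[Record]): Unit =
    rdd.saveResultsToCassandra()

  // Still generic: the implicits must be forwarded, just as in saveResultsToCassandra.
  def saveAny[T: RowWriterFactory: ColumnMapper](rdd: RDD[T]): Unit =
    rdd.saveResultsToCassandra()
}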

How to evaluate an expression inside a Scala macro?

I'm trying to evaluate an Expr inside a macro using the Context#eval method:
// Dummy implementation
def evalArrayTree(c: Context)(a: c.Expr[ArrayTree]): c.Expr[Array[Double]] = {
  import c.universe._
  println(c.eval(a))
  val tree = reify(Array(0.0, 0.0, 0.0)).tree
  c.Expr[Array[Double]](tree)
}
However, the compiler complains with:
[error] /home/falcone/prg/sbt-example-paradise/core/src/main/scala/Test.scala:20: exception during macro expansion:
[error] scala.tools.reflect.ToolBoxError: reflective toolbox has failed
I found in the scala-user ML that the problem could be solved using resetAllAttrs. However:
I don't understand how I am supposed to use it.
This function seems to be deprecated.
So is there a way to solve my problem ?
The rest of the code:
object ArrayEval {
  import scala.language.experimental.macros

  def eval(a: ArrayOps.ArrayTree): Array[Double] = macro Macros.evalArrayTree
}

object ArrayOps {
  sealed trait ArrayTree {
    def +(that: ArrayTree) = Plus(this, that)
  }
  implicit class Ary(val ary: Array[Double]) extends ArrayTree
  case class Plus(left: ArrayTree, right: ArrayTree) extends ArrayTree
}
The docs for c.eval indeed tell to use c.resetAllAttrs, however this function has a number of known issues that sometimes make it to irreparably corrupt the tree it processes (that's why we're planning to remove it in Scala 2.11 - I just submitted a pull request that does that: https://github.com/scala/scala/pull/3485).
What you could try instead is c.resetLocalAttrs, which has smaller potential for tree corruption. Unfortunately it's still a bit broken. We plan to fix it (https://groups.google.com/forum/#!topic/scala-internals/TtCTPlj_qcQ), however in Scala 2.10.x and 2.11.0 there's going to be no way to make c.eval work reliably.
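For what it's worth, a sketch of that swap applied to the question's macro; the same caveats apply, and in later Scala versions resetLocalAttrs was superseded by c.untypecheck:
def evalArrayTree(c: Context)(a: c.Expr[ArrayTree]): c.Expr[Array[Double]] = {
  import c.universe._
  // Strip local symbols/types before evaluating; eval retypechecks the tree.
  val cleaned = c.Expr[ArrayTree](c.resetLocalAttrs(a.tree))
  println(c.eval(cleaned))
  c.Expr[Array[Double]](reify(Array(0.0, 0.0, 0.0)).tree)
}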
Well, I figured out what they meant by using resetAllAttrs. My example is simplified for an Int input, but I was able to replicate and fix the error you described by doing the following:
import scala.language.experimental.macros
import scala.reflect.runtime.universe._
import scala.reflect.macros.BlackboxContext

def _evalMacro(c: BlackboxContext)(a: c.Expr[Int]) = {
  import c.universe._
  val treeReset = c.resetAllAttrs(a.tree) // Reset the symbols in the tree for 'a'
  val newExpr = c.Expr(treeReset)         // Construct a new expression for the updated tree
  println(c.eval(newExpr))                // Perform evaluation on the newly constructed expression
  ... // Do what you do
}

def evalMacro(a: Int) = macro _evalMacro
I'm going to guess that you're fine using resetAllAttrs, at least until some future version of Scala comes out. 2.11 doesn't even give a deprecation warning for its use.
Note: I'm using Scala 2.11. I believe this should be identical in 2.10, except you'll be using Context instead of BlackboxContext.

Scala Pickling and type parameters

I'm using Scala Pickling, an automatic serialization framework for Scala.
According to the author's slides, any type T can be pickled as long as there is an implicit Pickler[T] in scope.
Here, I'm assuming she means scala.tools.nsc.io.Pickler.
However, the following does not compile:
import scala.pickling._
import scala.pickling.binary._
import scala.tools.nsc.io.Pickler

object Foo {
  def bar[T: Pickler](t: T) = t.pickle
}
The error is:
[error] exception during macro expansion:
[error] scala.ScalaReflectionException: type T is not a class
[error] at scala.reflect.api.Symbols$SymbolApi$class.asClass(Symbols.scala:323)
[error] at scala.reflect.internal.Symbols$SymbolContextApiImpl.asClass(Symbols.scala:73)
[error] at scala.pickling.PickleMacros$class.pickleInto(Macros.scala:381)
[error] at scala.pickling.Compat$$anon$17.pickleInto(Compat.scala:33)
[error] at scala.pickling.Compat$.PickleMacros_pickleInto(Compat.scala:34)
I'm using Scala 2.10.2 with scala-pickling 0.8-SNAPSHOT.
Is this a bug or user error?
EDIT 1: The same error arises with both scala.pickling.SPickler and scala.pickling.DPickler.
EDIT 2: It looks like this is a bug: https://github.com/scala/pickling/issues/31
Yep, as Andy pointed out:
you need either a scala.pickling.SPickler or a scala.pickling.DPickler (static and dynamic, respectively) in order to pickle a particular type.
Those both already come in the scala.pickling package, so it's enough to just use them in your generic method signature.
You're absolutely correct that you can add an SPickler context-bound to your generic method. The only additional thing which you need (admittedly it's a bit ugly, and we're thinking about removing it) is to add a FastTypeTag context bound as well. (This is necessary for the pickling framework to know what type it's trying to pickle, as it handles primitives differently, for example.)
This is what you'd need to do to provide generic pickling/unpickling methods:
Note that for the unbar method, you need to provide an Unpickler context-bound rather than a SPickler context-bound.
import scala.pickling._
import binary._

object Foo {
  def bar[T: SPickler: FastTypeTag](t: T) = t.pickle
  def unbar[T: Unpickler: FastTypeTag](bytes: Array[Byte]) = bytes.unpickle[T]
}
Testing this in the REPL, you get:
scala> Foo.bar(42)
res0: scala.pickling.binary.BinaryPickle =
BinaryPickle([0,0,0,9,115,99,97,108,97,46,73,110,116,0,0,0,42])
scala> Foo.unbar[Int](res0.value)
res1: Int = 42
Looking at the project, it seems you need either a scala.pickling.SPickler or a scala.pickling.DPickler (static and dynamic, respectively) in order to pickle a particular type.
The pickle methods are macros. I suspect that if you pickle with an SPickler, the macro will require the compile time type of your class to be known.
Thus, you may need to do something similar to:
object Foo {
  def bar(t: SomeClass1) = t.pickle
  def bar(t: SomeClass2) = t.pickle
  def bar(t: SomeClass3) = t.pickle
  // etc
}
Alternatively, a DPickler may do the trick. I suspect that you'll still have to write some custom pickling logic for your specific types.

Enumeration and mapping with Scala 2.10

I'm trying to port my application to Scala 2.10.0-M2. I'm seeing some nice improvements with better warnings from the compiler. But I also got a bunch of errors, all related to mapping over Enumeration.values.
I'll give you a simple example. I'd like to have an enumeration, then pre-create a bunch of objects and build a map that uses the enumeration values as keys and matching objects as values. For example:
object Phrase extends Enumeration {
  type Phrase = Value
  val PHRASE1 = Value("My phrase 1")
  val PHRASE2 = Value("My phrase 2")
}

class Entity(text: String)

object Test {
  val myMapWithPhrases = Phrase.values.map(p => (p -> new Entity(p.toString))).toMap
}
Now this used to work just fine on Scala 2.8 and 2.9, but 2.10.0-M2 gives me the following error:
[ERROR] common/Test.scala:21: error: diverging implicit expansion for type scala.collection.generic.CanBuildFrom[common.Phrase.ValueSet,(common.Phrase.Value, common.Entity),That]
[INFO] starting with method newCanBuildFrom in object SortedSet
[INFO] val myMapWithPhrases = Phrase.values.map(p => (p -> new Entity(p.toString))).toMap
^
What's causing this and how do you fix it?
It's basically a type mismatch error. You can work around it by first converting it to a list:
scala> Phrase.values.toList.map(p => (p, new Entity(p.toString))).toMap
res15: scala.collection.immutable.Map[Phrase.Value,Entity] = Map(My phrase 1 -> Entity@d0e999, My phrase 2 -> Entity@1987acd)
For more information, see the answers to What's a “diverging implicit expansion” scalac message mean? and What is a diverging implicit expansion error?
As you can see from your error, the ValueSet that holds the enums became a SortedSet at some point. It wants to produce a SortedSet on map, but can't sort on your Entity.
Something like this works with case class Entity (a fuller sketch follows the snippet):
implicit object orderingOfEntity extends Ordering[Entity] {
  def compare(e1: Entity, e2: Entity) = e1.text compare e2.text
}