What is the best way to avoid clashing between two typeclass definitions in shapeless

Shapeless has a neat typeclass derivation mechanism that lets you define a typeclass and have instances derived automatically for your types.
To use the derivation mechanism as a user of a typeclass, you would use the following syntax
import MyTypeClass.auto._
which as far as I understand it is equivalent to
import MyTypeClass.auto.derive
An issue arises when you try to use multiple typeclasses this way within the same scope. It would appear that the Scala compiler only considers the last definition of derive, even though the two versions of the function are "overloaded" on their implicit arguments.
There are a couple ways I can think of to fix this. Instead of listing them here, I will mark them as answers that you can vote on to confirm sanity as well as propose any better solution.
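For concreteness, the clash looks something like this (a sketch using the codec objects from the answer below; exact behavior may vary by shapeless version):

import AutoEncodeJson.auto._ // brings a `derive` into scope
import AutoDecodeJson.auto._ // brings another `derive` into scope

// Only the `derive` from the second import appears to be considered,
// so EncodeJson instances can no longer be materialized in this scope.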

I raised this question back in April and proposed two solutions: defining the method yourself (as you suggest):
object AutoCodecJson {
  implicit def deriveEnc[T] = macro deriveProductInstance[EncodeJson, T]
  implicit def deriveDec[T] = macro deriveProductInstance[DecodeJson, T]
}
Or using aliasing imports:
import AutoEncodeJson.auto.{ derive => deriveEnc }
import AutoDecodeJson.auto.{ derive => deriveDec }
I'd strongly suggest going with aliasing imports; Miles himself said "hadn't anticipated that macro being reused that way: not sure I approve" about the deriveProductInstance approach.
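With the aliased imports in place, both derivations can coexist in one scope. A hypothetical usage sketch (Person is an invented example type):

import AutoEncodeJson.auto.{ derive => deriveEnc }
import AutoDecodeJson.auto.{ derive => deriveDec }

case class Person(name: String, age: Int)

// Renaming an implicit on import keeps it implicit, so both searches succeed:
val enc = implicitly[EncodeJson[Person]]
val dec = implicitly[DecodeJson[Person]]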

Instead of inheriting from the Companion trait, define the auto object and apply method yourself within your companion object and name them distinctively. A possible drawback is that two separate libraries using shapeless could each define a derive method with the same name, and the user would again be unable to use the derivation process for both typeclasses within the same scope in his project.
Another possible drawback is that by dealing with the macro call yourself, you may be more sensitive to shapeless API changes.

Modify/fix the Scala compiler to accept two different methods overloaded on their implicit parameters.
Is there any reason why this is impossible in theory?

Related

When should I use an implicit class?

For me, I would use an implicit class under the following scenarios:
I don't have access to the underlying type, so I can't add the method I want directly.
The method I want doesn't make sense in a "global" sense.
I am splitting the functionality out into another library of "extensions".
Actually converting to a new type adds semantic/readability value (the new type means something).
However, I am fairly new to Scala (<6 months) and I'm noticing the developers around me use implicit classes in cases that don't fit the scenarios above. When I asked why, the answer was "because that's what I've always done".
So my question is, is there an official recommendation for when one should use an implicit class over a normal function added to the class definition? (I couldn't find anything here: https://docs.scala-lang.org/overviews/core/implicit-classes.html)
As per the SIP,
Motivation for the implicit class was that the popular extension method pattern, sometimes called the Pimp My Library pattern, was used in Scala to extend pre-existing classes with new methods, fields, and interfaces.
There was also another common ‘extension’ use case known as type traits or type classes (see scala.math.Numeric). Type classes offered an alternative to pure inheritance hierarchies that was very similar to the extension method pattern.
The main drawback to both of these techniques was that they suffered the creation of an extra object at every invocation to gain the convenient syntax. This made these useful patterns unsuitable for use in performance-critical code. In these situations it was common to remove use of the pattern and resort to using an object with static helper methods.
And implicit class syntax was thus added to solve these issues.
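For illustration, here is the resulting idiom, combined with a value class so that not even the wrapper is allocated (a minimal sketch with invented names):

object Syntax {
  // Extension method without a wrapper allocation: implicit class + AnyVal.
  implicit class IntOps(private val n: Int) extends AnyVal {
    def squared: Int = n * n
  }
}

import Syntax._
3.squared // 9; compiles to a static method call, no IntOps instance is created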
They rock. They let you create your own DSLs. Take a look at the Spray code, one of our classic and beloved projects:
trait TransformerPipelineSupport {
  ...
  implicit class WithTransformation[A](value: A) {
    def ~>[B](f: A ⇒ B): B = f(value)
  }
  ...
}
The ~> operator allows you to compose Spray directives... There are many more examples.
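Given the WithTransformation class above, usage might look like this trivial sketch:

// Chain transformations left to right with ~>.
val result = 2 ~> (_ * 3) ~> (_.toString) // "6"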

Is it possible to achieve functionality provided by implicit classes via macros?

We are pretty familiar with implicits in Scala by now, but macros are a pretty undiscovered area (at least for me) and, despite the presence of some great articles by Eugene Burmako, it is still not easy material to just dive into.
In this particular question I'd like to find out whether it is possible to achieve functionality analogous to the following code using just macros:
implicit class Nonsense(val s: String) {
  def ##(i: Int) = s.charAt(i)
}
So "asd" ## 0 will return 'a', for example. Can I implement macros that use infix notation? The reason to this is I'm writing a DSL for some already existing project and implicits allow making the API clear and concise, but whenever I write a new implicit class, I feel like introducing a new speed-reducing factor. And yes, I do know about value classes and stuff, I just think it would be really great if my DSL transformed into the underlying library API calls during compilation rather than in runtime.
TL;DR: can I replace implicits with macros while not changing the API? Can I write macros in infix form? Is there something even more suitable for this case? Is the trouble worth it?
UPD. To those advocating value classes: in my case I have a little more than just a simple wrapper; the wrappers are often stacked. For example, I have an implicit class that takes some parameters and returns a lambda wrapping those parameters (i.e. a partially applied function), and a second implicit class made specifically for wrapping that type of function. I can achieve something like this:
a --> x ==> b
where the first class wraps a and adds the --> method, and the second one wraps the return type of a --> x and defines ==>(b). Plus, it may really be the case that the user creates a considerable number of objects in this fashion. I just don't know if this will be efficient, so if you could tell me that value classes cover this case, I'd be really glad to know.
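For what it's worth, a rough sketch of that chain with implicit value classes (invented names, with tuples standing in for the wrapped lambdas): the wrappers themselves are not allocated, although the intermediate result of --> still is.

object Dsl {
  // Adds --> to any value; being a value class, no ArrowOps instance is allocated.
  implicit class ArrowOps[A](private val a: A) extends AnyVal {
    def -->[B](x: B): (A, B) = (a, x)
  }
  // Adds ==> to the result of -->; again, no wrapper allocation.
  implicit class ImpliesOps[A, B](private val ab: (A, B)) extends AnyVal {
    def ==>[C](c: C): (A, B, C) = (ab._1, ab._2, c)
  }
}

import Dsl._
val chained = "a" --> 1 ==> 'b' // ("a", 1, 'b')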
Back in the day (2.10.0-RC1) I had trouble using implicit classes for macros (sorry, I don't recall why exactly), but the solution was to use:
an implicit def macro to convert to a class
a def macro in that class to define the infix operator
So something like the following might work for you:
implicit def toNonsense(s: String): Nonsense = macro ...
...
class Nonsense(...) {
  ...
  def ##(...): ... = macro ...
  ...
}
That was pretty painful to implement. That being said, macros have become easier to implement since.
If you want to check what I did, because I'm not sure that applies to what you want to do, refer to this excerpt of my code (non-idiomatic style).
I won't address the relevance of that here, as it's been commented by others.

Infer multiple generic types in an abstract class that should be available to the compiler

I am working on an abstract CRUD DAO for my play2/slick2 project. To have convenient type-safe primary IDs I am using Unicorn as an additional abstraction and convenience layer on top of Slick's MappedTo & ColumnBaseType.
Unicorn provides a basic CRUD DAO class BaseIdRepository, which I want to extend further for project-specific needs. The signature of the class is
class BaseIdRepository[I <: BaseId, A <: WithId[I], T <: IdTable[I, A]]
    (tableName: String, val query: TableQuery[T])
    (implicit val mapping: BaseColumnType[I])
  extends BaseIdQueries[I, A, T]
This leads to DAO implementations looking something like
class UserDao extends
  BaseIdRepository[UserId, User, Users]("USERS", TableQuery[Users])
This seems awfully redundant to me. I was able to supply tableName and query from T, giving me the following signature on my own Abstract DAO
abstract class AbstractIdDao[I <: BaseId, A <: WithId[I], T <: IdTable[I, A]]
  extends BaseIdRepository[I, A, T](TableQuery[T].baseTableRow.tableName, TableQuery[T])
Is it possible in Scala to somehow infer the types I and A to make a signature like the following possible? (Users is a class extending IdTable)
class UserDao extends AbstractIdDao[Users]
Is this possible without runtime reflection? If only by runtime reflection: how do I use the Manifest in a class definition, and how big is the performance impact in a reactive application?
Also, since I am fairly new to the language and work on my own: is this good practice in Scala at all?
Thank you for your help. Feel free to criticize my question and English. Improvements will of course be submitted to the Unicorn git repo.
EDIT:
Actually, TableQuery[T].baseTableRow.tableName, TableQuery[T] does not work, due to the error "class type required but T found". IDEA was superficially fine with it; scalac wasn't.
As for your first question, I've encountered this when working with Slick too. But if you think about it, you'll see you cannot do this at compile time. This is because this type information is necessary to specify the relations between your type parameters. If you did not provide it, you would be able to construct instances of BaseIdRepository where the types don't make sense, such as IdTables where the table doesn't represent the projection.
Since you need names for each of these relations, you need 3 named type parameters. If you omit the first one, it is possible to construct an IdRepository without a projection containing an Id; if you omit the second one, it is possible to have a table without an ID column; and if you omit the third one, it is possible to query tables that do not have this combination of a table and a projection with an ID. You might not have types defined in your application that would break any of these rules at present, but the compiler doesn't know that. Supplying the proper type information is unavoidable.
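A stripped-down sketch of those relations (simplified stand-ins, not the actual Unicorn API):

trait BaseId
trait WithId[I <: BaseId] { def id: Option[I] }
trait IdTable[I <: BaseId, A <: WithId[I]]

// The bounds encode the relations, so all three names must be supplied:
abstract class AbstractIdDao[I <: BaseId, A <: WithId[I], T <: IdTable[I, A]]

case class UserId(value: Long) extends BaseId
case class User(id: Option[UserId], name: String) extends WithId[UserId]
class Users extends IdTable[UserId, User]

class UserDao extends AbstractIdDao[UserId, User, Users] // the full triple is required
// class BadDao extends AbstractIdDao[Users] // does not compile: three type arguments expected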
As for your second question, it is very inadvisable to employ reflection just because you think the syntax is verbose. If you can make guarantees about type safety simply by providing type parameters, I would advise you to do so. It is in very bad taste and style to write Scala in such a way. It would be ironic to employ type-safe IDs with Unicorn and later hack around their type safety with reflection.
Furthermore, a Manifest is not what you want: a manifest doesn't allow you to provide less type information to the compiler, it only allows you to be more flexible to specify where you do so. It allows you to leverage the compiler's knowledge of types at compile time to circumvent some issues that type erasure introduces. The problem you face here has nothing to do with type erasure, so Manifests won't work. Lastly, runtime reflection won't help you much here because Slick's internal functions won't allow you to compile if you don't already supply the type information.
So yeah, what you want is impossible. Scala (and Slick) need complete information at compile time and no trick is going to be effective in circumventing that.

Why does the Scala API have two strategies for organizing types?

I've noticed that the Scala standard library uses two different strategies for organizing classes, traits, and singleton objects.
Using packages whose members are then imported. This is, for example, how you get access to scala.collection.mutable.ListBuffer. This technique is familiar coming from Java, Python, etc.
Using type members of traits. This is, for example, how you get access to the Parser type. You first need to mix in scala.util.parsing.combinator.Parsers. This technique is not familiar coming from Java, Python, etc., and isn't much used in third-party libraries.
I guess one advantage of (2) is that it organizes both methods and types, but in light of Scala 2.8's package objects the same can be done using (1). Why have both these strategies? When should each be used?
The nomenclature of note here is path-dependent types. That's option number 2 you speak of, and I'll speak only of it. Unless you happen to have a problem solved by it, you should always take option number 1.
What you miss is that the Parser class makes reference to things defined in the Parsers class. In fact, the Parser class itself depends on the Input type defined on Parsers:
abstract class Parser[+T] extends (Input => ParseResult[T])
The type Input is defined like this:
type Input = Reader[Elem]
And Elem is abstract. Consider, for instance, RegexParsers and TokenParsers. The former defines Elem as Char, while the latter defines it as Token. That means the Parser for each is different. More importantly, because Parser is an inner class of Parsers, the Scala compiler will make sure at compile time you aren't passing a RegexParsers Parser to TokenParsers or vice versa. As a matter of fact, you won't even be able to pass the Parser of one instance of RegexParsers to another instance of it.
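A small sketch of that compile-time guarantee (assuming the parser combinators library is on the classpath):

import scala.util.parsing.combinator.RegexParsers

object P1 extends RegexParsers { val word: Parser[String] = "[a-z]+".r }
object P2 extends RegexParsers { def run(p: Parser[String]) = parseAll(p, "abc") }

// P2.run(P1.word) // does not compile: found P1.Parser[String], required P2.Parser[String]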
The second is also known as the Cake pattern.
It has the benefit that code inside a class that has the trait mixed in becomes independent of the particular implementation of the methods and types in that trait. It allows you to use the members of the trait without knowing their concrete implementation.
trait Logging {
  def log(msg: String)
}

trait App extends Logging {
  log("My app started.")
}
Above, the Logging trait is the requirement for the App (requirements can also be expressed with self-types). Then, at some point in your application you can decide what the implementation will be and mix the implementation trait into the concrete class.
trait ConsoleLogging extends Logging {
  def log(msg: String) = println(msg)
}

object MyApp extends App with ConsoleLogging
This has an advantage over imports, in the sense that the requirements of your piece of code aren't bound to the implementation defined by the import statement. Furthermore, it allows you to build and distribute an API which can be used in a different build somewhere else provided that its requirements are met by mixing in a concrete implementation.
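For completeness, the self-type variant of the requirement mentioned above might look like this (a sketch reusing the Logging traits):

// The self-type says "whoever mixes this in must also be a Logging",
// without GreetingService itself becoming a Logging subtype.
trait GreetingService { self: Logging =>
  def greet(): Unit = log("Hello!")
}

object Greeter extends GreetingService with ConsoleLogging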
However, there are a few things to be careful with when using this pattern.
All of the classes defined inside the trait will have a reference to the outer class. This can be an issue where performance is concerned, or when you're using serialization (when the outer class is not serializable, or worse, if it is, but you don't want it to be serialized).
If your 'module' gets really large, you will either have a very big trait and a very big source file, or will have to distribute the module trait code across several files. This can lead to some boilerplate.
It can force you to write your entire application using this paradigm. Before you know it, every class will have to have its requirements mixed in.
The concrete implementation must be known at compile time, unless you use some sort of hand-written delegation. You cannot mix in an implementation trait dynamically based on a value available at runtime.
I guess the library designers didn't regard any of the above as an issue where Parsers are concerned.

How does Scala's apply() method magic work?

In Scala, if I define a method called apply in a class or a top-level object, that method will be called whenever I append a pair of parentheses to an instance of that class and put the appropriate arguments for apply() between them. For example:
class Foo(x: Int) {
  def apply(y: Int) = {
    x * x + y * y
  }
}

val f = new Foo(3)
f(4) // returns 25
So basically, object(args) is just syntactic sugar for object.apply(args).
How does Scala do this conversion?
Is there a globally defined implicit conversion going on here, similar to the implicit type conversions in the Predef object (but different in kind)? Or is it some deeper magic? I ask because it seems like Scala strongly favors consistent application of a smaller set of rules, rather than many rules with many exceptions. This initially seems like an exception to me.
I don't think there's anything deeper going on than what you have originally said: it's just syntactic sugar whereby the compiler converts f(a) into f.apply(a) as a special syntax case.
This might seem like a special-cased rule, but there are only a few of these (update, for example, works similarly), and they allow for DSL-like constructs and libraries.
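For reference, update is the mirror-image sugar: assignment to an applied expression desugars to an update call.

import scala.collection.mutable

val m = mutable.Map.empty[String, Int]
m("answer") = 42 // sugar for m.update("answer", 42)
m("answer")      // sugar for m.apply("answer"); returns 42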
It is actually the other way around: an object or class with an apply method is the normal case, and a function is a way to implicitly construct an object with an apply method. In fact, every function you define is an instance of one of the FunctionN traits (where N is the number of arguments).
Refer to section 6.6: Function Applications of the Scala Language Specification for more information on the topic.
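To make that concrete, a short sketch:

// A function literal is just an object with an apply method.
val double: Int => Int = x => x * 2
val sameThing = new Function1[Int, Int] { def apply(x: Int): Int = x * 2 }
double(3)    // sugar for double.apply(3); returns 6
sameThing(3) // likewise returns 6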
I ask because it seems like Scala strongly favors consistent application of a smaller set of rules, rather than many rules with many exceptions.
Yes. And this rule belongs to this smaller set.