Currently, I am learning generic (meta) programming with Shapeless. I am fascinated by the ability to check logic during compilation instead of at runtime. There are two amazing examples:
1. Checksum: https://gist.github.com/travisbrown/3763016
2. TowersOfHanoi: https://gist.github.com/jrudolph/66925
However, those two examples thrust me rather quickly into the deeper areas of this feature. All I know so far are a few tricks, as follows:
1: Using context bounds or implicit evidence to make sure a type satisfies the correct constraint. I think that is the reason we have to lift primitive values, like ints, to the type level as classes, so the compiler knows what the type is (I am not sure about this argument; see the Peano sketch after this list). Code:
object bla {
  // compiles only because the compiler can prove that String <: AnyRef
  implicitly[String <:< AnyRef]
}
2: using macros.
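To make trick 1 concrete, here is a minimal sketch (my own, not taken from the gists above) of lifting integers to the type level with a Peano encoding, so the compiler can check facts about them during compilation:
object Peano {
  sealed trait Nat
  sealed trait Zero extends Nat
  sealed trait Succ[N <: Nat] extends Nat

  type _0 = Zero
  type _1 = Succ[_0]
  type _2 = Succ[_1]

  // the compiler verifies this equality while compiling
  implicitly[_2 =:= Succ[Succ[Zero]]]
  // implicitly[_1 =:= _2] // would fail to compile
}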
Are there other magic tricks that achieve this?
Many thanks in advance
I'm facing issues due to two Scala compiler limitations.
It's not possible to refer to the type of this (this.type is not it, for obvious reasons) in cases where you want to write traits that implement common behavior requiring construction of the concrete type. The recommended pattern to work around this is
trait Foo[SelfType <: Foo[SelfType]] {
  this: SelfType =>
  final def foo: SelfType = newInstance(??? /* do some work */)
  protected def newInstance(i: Int): SelfType
}
The Scala compiler (fixed in Dotty, AFAIK) does not keep track of bounds in existential types (I've found at least three bugs filed on the Scala GitHub project, going back to 2009). This means that, given the above type, if I were to write a method that takes some unknown Foo, such as
def process(f: Foo[_])
I may get some weird buggy compiler behavior in certain cases. Instead, I need to manually express to the compiler that the _ here obeys the type bounds to work around its limitations. For that, I'd need to do something like
def process[F <: Foo[F]](f: Foo[F])
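For illustration, given a hypothetical concrete subtype Bar (my own, not from the question), the call site can leave F entirely to inference:
final class Bar(i: Int) extends Foo[Bar] {
  protected def newInstance(i: Int): Bar = new Bar(i)
}

process(new Bar(0)) // F is inferred as Bar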
Invoked without explicit type parameters, this hopefully captures the bound correctly. However, it can make def signatures quite cumbersome if Foo takes other type parameters, placing a lot of burden on clients. So my question is: is it possible to use type definitions as a shorthand to express such self-referential types correctly? I've tried all sorts of things, e.g.
type SomeFoo = Foo[SelfType] with SelfType forSome { type SelfType <: Foo[SelfType] }
but that doesn't quite seem to do it. Somehow the compiler doesn't realize that SelfType is the same unknown type as SomeFoo.
I hope this made sense. Thanks for your help. P.S. feel free to suggest renaming the question as it's not super clear right now.
For me, I would use an implicit class in the following scenarios:
I don't have access to the underlying type, so I can't add the method I want directly.
The method I want doesn't make sense in a "global" sense.
I am splitting the functionality out into another library of "extensions".
Actually converting to a new type adds semantic/readability value (the new type actually means something).
However, I am fairly new to Scala (<6 months), and I'm noticing that the developers around me use implicit classes even when doing so breaks the scenarios above. When I asked why, the answer was "because that's what I've always done".
So my question is, is there an official recommendation for when one should use an implicit class over a normal function added to the class definition? (I couldn't find anything here: https://docs.scala-lang.org/overviews/core/implicit-classes.html)
As per the SIP,
Motivation for the implicit class was that the popular extension method pattern, sometimes called the Pimp My Library pattern was used in Scala to extend pre-existing classes with new methods, fields, and interfaces.
There was also another common ‘extension’ use case known as type traits or type classes (see scala.math.Numeric). Type classes offered an alternative to pure inheritance hierarchies that was very similar to the extension method pattern.
The main drawback to both of these techniques was that they suffered the creation of an extra object at every invocation to gain the convenient syntax. This made these useful patterns unsuitable for use in performance-critical code. In these situations it was common to remove use of the pattern and resort to using an object with static helper methods.
And implicit class syntax was thus added to solve these issues.
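For example, here is a minimal sketch (my own, not from the SIP) of an extension method that avoids the extra allocation by combining an implicit class with AnyVal:
object IntSyntax {
  // a value class: the call below compiles to a static method call,
  // so no RichInt wrapper is allocated
  implicit class RichInt(val n: Int) extends AnyVal {
    def squared: Int = n * n
  }
}

object Demo {
  import IntSyntax._
  val nine: Int = 3.squared // 9
}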
They rock. They allow you to build your own DSLs. Take a look at the Spray code, one of our classic and beloved projects:
trait TransformerPipelineSupport {
  ...
  implicit class WithTransformation[A](value: A) {
    def ~>[B](f: A ⇒ B): B = f(value)
  }
  ...
}
The ~> allows you to compose Spray directives... There are many more examples.
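For instance, a self-contained sketch of the same pipeline idea (the values here are made up):
object PipelineDemo {
  implicit class WithTransformation[A](value: A) {
    def ~>[B](f: A => B): B = f(value)
  }
  val answer: Int = "42" ~> (_.toInt) ~> (_ + 1) // 43
}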
I am reading Foundations of Path-Dependent Types. On the first page, in the right column, it is written:
Our motivation is twofold. First, we believe objects with type members
are not fully understood. It is not clear what causes the complexity,
which pieces of complexity are essential to the concept or accidental
to a language implementation or calculus that tries to achieve
something else. Second, we believe objects with type members are
really useful. They can encode a variety of other, usually separate
type system features. Most importantly, they unify concepts from
object and module systems, by adding a notion of nominality to otherwise structural systems.
Could someone clarify/explain what "object vs. module system" means?
Or, in general, what does
"they (objects with type members) unify concepts from
object and module systems, by adding a notion of nominality to otherwise structural systems."
mean?
What concepts? From where?
Nominality in the object names/values?
Structure in the types? Or the other way around?
Where do type members belong here? To the module system? The object system? How? Why?
EDIT:
How does this unification relate to path-dependent types? It seems to me that they (objects with type members) are what allow this unification to happen. Is that so?
If yes, how?
Could you give a simple example of what that means? (I.e., how do path-dependent types allow the unification of module and object systems, and why would the unification not be possible if we did not have path-dependent types?)
EDIT 2:
From the paper:
To make any use of type members, programmers need a way to refer to
them. This means that types must be able to refer to objects, i.e.
contain terms that serve as static approximation of a set of dynamic
objects. In other words, some level of dependent types is required;
the usual notion is that of path-dependent types.
So my understanding so far (with the help of Jesper's answer):
This paragraph partially answers some of the questions above. The main point seems to be that objects with type members require path-dependent types: objects are dynamic (runtime-dependent) while types are static (defined at compile time), so merely having objects that lead to type members would not work, because those type members would not be clearly defined at compile time.
Path-dependent types help here by pinning down, at compile time, the path that leads to a type member (by requiring that the objects on the path are already known at compile time). So even though the path goes via objects, if those objects are fixed at compile time, then their type members have a clear meaning at compile time too.
I'm not sure I fully understand what your question is, but I'll take a stab at it. :) I think the authors are mainly referring to ML-style modules, where a signature corresponds to a Scala trait and a structure corresponds to a Scala object. Scala unifies the concepts of record values, objects and modules, which in most other languages (like ML, Rust, etc.) are separate concepts. The main benefit is that in Scala modules/objects can be passed around as normal function arguments (while in ML you have to use special functors for this).
In ML a module is checked for compatibility with a signature (trait in Scala) based on its structure (similar to structural typing in Scala), but in Scala the module must implement the trait by name (nominal typing). So even if two modules/objects have the same structure in Scala they might not be compatible with each other depending on their super type hierarchy.
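A tiny sketch of that nominal behavior (my own example):
trait M { def x: Int }
object M1 { def x: Int = 1 }           // same structure as M, but does not extend it
object M2 extends M { def x: Int = 2 }

object NominalDemo {
  def useM(m: M): Int = m.x
  val ok = useM(M2) // fine: M2 implements M by name
  // useM(M1)       // does not compile, despite the identical structure
}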
A really powerful feature regarding type members in Scala is that you can use a trait even if you don't know the exact type of its type members as long as you do it in a type safe way (I think this is also possible in ML modules), for example:
trait A {
  type X
  def getX: X
  def setX(x: X): Unit
}
def foo(a: A) = a.setX(a.getX)
In foo, the Scala compiler doesn't know the exact type of a.X, but a value of that type can still be used in a way the compiler knows is safe. This is not possible in Rust, for example.
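As a concrete illustration (the IntA instance is mine, not from the answer):
object IntA extends A {
  type X = Int
  private var current = 0
  def getX: X = current
  def setX(x: X): Unit = current = x
}

foo(IntA) // fine: inside foo, a.X stays abstract, yet the round-trip is type-safe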
The next version of the Scala compiler, Dotty, will be based on the theory described in the paper you reference. This unification of modules and objects combined with subtyping, traits and type members is one reason that Scala is unique and very powerful.
EDIT: To expand a bit on why path-dependent types increase the flexibility of Scala's module/object system, let's extend the example above with:
def bar(a: A, b: A) = a.setX(b.getX)
This will result in a compilation error:
error: type mismatch;
 found   : b.X
 required: a.X
       def bar(a: A, b: A) = a.setX(b.getX)
                                    ^
and correctly so, because a.X and b.X could resolve to different types. You can fix it by using a path-dependent type:
def bar(a: A)(b: A { type X = a.X }) = a.setX(b.getX)
Or add a type parameter:
def bar[T](a: A { type X = T }, b: A { type X = T }) = a.setX(b.getX)
So path-dependent types eliminate some need for type parameters, and they also let us express existential types efficiently (corresponding to A[_] or A[T] forSome { type T } if A had a type parameter instead of a type member).
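A quick usage sketch of the path-dependent version of bar from above (the instance is made up):
val intA = new A { type X = Int; def getX = 1; def setX(x: Int): Unit = () }
bar(intA)(intA) // compiles: both arguments are known to share the same X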
To clarify: I am NOT asking what I can use the Singleton design pattern for. The question is about the largely undocumented Singleton trait provided in Scala.
What is this trait for? The only concrete use case I could find so far was to limit a trait to objects only, as seen in this question: Restricting a trait to objects?
The question Is scala.Singleton pure compiler fiction? sheds some light on the issue, but clearly there was another use case as well!
Is there some obvious use that I can't think of, or is it just mainly compiler magicks?
I think the question is answered by Martin Odersky's comment on the mailing list thread linked from the question above:
The type Singleton is essentially an encoding trick for existentials
with values. I.e.
T forSome { val x: T }
is turned into
[x.type := X] T forSome { type X <: T with Singleton }
Singleton types are usually not used directly…
In other words, there is no intended use beyond guiding the typer phase of the compiler. The Scala Language Specification has this bit in §3.2.10, and §3.2.1 indicates that this trait might be used by the compiler to declare that a type is stable.
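One observable effect is that Singleton as an upper bound on a type parameter asks the typer to keep the most specific stable type instead of widening it. A small sketch (my own; the literal-type inference shown needs a later compiler, e.g. Scala 2.13):
def keepStable[T <: Singleton](t: T): T = t
val a = keepStable(42) // a: 42  -- T is kept as the literal type 42
val b = identity(42)   // b: Int -- without the bound, 42 is widened to Int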
You can also see this with the following (Scala 2.11):
(new {}).isInstanceOf[Singleton]
<console>:54: warning: fruitless type test: a value of type AnyRef cannot also
be a Singleton
(new {}).isInstanceOf[Singleton]
^
res27: Boolean = true
So you cannot even use that trait in a meaningful test.
(This is not a definite answer, just my observation)
We are pretty familiar with implicits in Scala by now, but macros are a pretty undiscovered area (at least for me), and despite the presence of some great articles by Eugene Burmako, it is still not easy material to just dive into.
In this particular question I'd like to find out whether it is possible to achieve functionality analogous to the following code using just macros:
implicit class Nonsense(val s: String) {
  def ##(i: Int) = s.charAt(i)
}
So "asd" ## 0 will return 'a', for example. Can I implement macros that use infix notation? The reason to this is I'm writing a DSL for some already existing project and implicits allow making the API clear and concise, but whenever I write a new implicit class, I feel like introducing a new speed-reducing factor. And yes, I do know about value classes and stuff, I just think it would be really great if my DSL transformed into the underlying library API calls during compilation rather than in runtime.
TL;DR: can I replace implicits with macros while not changing the API? Can I write macros in infix form? Is there something even more suitable for this case? Is the trouble worth it?
UPD. To those advocating value classes: in my case I have a little more than a simple wrapper; the wrappers are often stacked. For example, I have an implicit class that takes some parameters and returns a lambda wrapping those parameters (i.e. a partially applied function), and a second implicit class made specifically for wrapping functions of that type. I can achieve something like this:
a --> x ==> b
where the first class wraps a and adds the --> method, and the second one wraps the return type of a --> x and defines ==>(b) (see the sketch below). Moreover, it may well be the case that a user creates a considerable number of objects in this fashion. I just don't know whether this will be efficient, so if you could tell me that value classes cover this case, I'd be really glad to know.
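For concreteness, here is a minimal sketch of that stacked-wrapper shape (all names and semantics are invented; only the shape matters):
object Dsl {
  // first wrapper: `a --> x` captures its operands and returns a function
  implicit class ArrowOps[A](val a: A) {
    def -->[B](x: B): B => (A, B, B) = b => (a, x, b)
  }
  // second wrapper: adds `==>` to the function type produced by `-->`
  implicit class ApplyOps[A, B](val f: B => (A, B, B)) {
    def ==>(b: B): (A, B, B) = f(b)
  }
}

object DslDemo {
  import Dsl._
  val r = 1 --> 2 ==> 3 // (1, 2, 3); note the two wrapper allocations along the way
}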
Back in the day (2.10.0-RC1) I had trouble using implicit classes for macros (sorry, I don't recollect why exactly), but the solution was to use:
an implicit def macro to convert to a class
a def macro in that class to define the infix operator
So something like the following might work for you:
implicit def toNonsense(s: String): Nonsense = macro ...
...
class Nonsense(...) {
  ...
  def ##(...): ... = macro ...
  ...
}
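Fleshed out, such a sketch might look like the following (Scala 2.11+ blackbox macros; everything beyond the shape above is my own guesswork, and the macro implementations must be compiled before the code that uses them):
import scala.language.experimental.macros
import scala.language.implicitConversions
import scala.reflect.macros.blackbox

class Nonsense(val s: String) {
  def ##(i: Int): Char = macro NonsenseMacros.hashImpl
}

object NonsenseMacros {
  // rewrites `nonsense ## i` into `nonsense.s.charAt(i)` at compile time
  def hashImpl(c: blackbox.Context)(i: c.Tree): c.Tree = {
    import c.universe._
    q"${c.prefix.tree}.s.charAt($i)"
  }

  // rewrites the implicit conversion into a plain constructor call
  def toNonsenseImpl(c: blackbox.Context)(s: c.Tree): c.Tree = {
    import c.universe._
    q"new Nonsense($s)"
  }
}

object NonsenseSyntax {
  implicit def toNonsense(s: String): Nonsense = macro NonsenseMacros.toNonsenseImpl
}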
That was pretty painful to implement. That being said, macros have become easier to implement since.
If you want to check what I did (I'm not sure it applies to what you want to do), refer to this excerpt of my code (non-idiomatic style).
I won't address the relevance of that here, as it's been commented on by others.