I am looking over the documentation for Scalatra and noticed an interesting snippet of code using syntax I haven't seen before, at http://www.scalatra.org/2.2/guides/persistence/introduction.html
Specifically, it's this bit:
trait DatabaseSessionSupport { this: ScalatraBase =>
import DatabaseSessionSupport._
Everything here makes sense except for the this: ScalatraBase => segment. What significance does it have here? Does it apply only to the import below it, or to the entire trait?
That is called a "self-type annotation," and it requires that any instantiable class using trait DatabaseSessionSupport also be accompanied by ("mixed in with") a type consistent with ScalatraBase. It applies to the entire trait body, not just the import below it. I have not looked at this specific code, but it is most likely a use of the so-called "Cake Pattern."
You can find many treatments of this concept on Stack Overflow, in various blogs, and in a classic paper by Odersky et al. titled "Scalable Component Abstractions."
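As a minimal, self-contained illustration of what the annotation does (the names here are invented stand-ins, not Scalatra's real classes):

trait Base {
  def config: String = "some configuration"
}

// The self-type applies to the whole trait body: "whatever concrete class
// I end up in must also be a Base."
trait DatabaseSupport { this: Base =>
  def connect(): Unit = println("connecting with " + config)  // Base's members are in scope
}

class App extends Base with DatabaseSupport   // compiles: Base is mixed in
// class Broken extends DatabaseSupport       // does not compile: self-type not satisfied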
For me, I would use an implicit class in the following scenarios:
I don't have access to the underlying type, so I can't add the method I want directly (see the sketch after this list).
The method I want doesn't make sense in a "global" sense.
I am splitting the functionality out into another library of "extensions".
Actually converting to a new type adds semantic/readability value (the new type actually means something).
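The sketch mentioned in the first item, adding a method to a type I don't control (all names are made up):

object StringExtensions {
  // String is final and not ours to modify, so the method is added via an implicit class.
  implicit class RichWords(s: String) {
    def wordCount: Int = s.split("\\s+").count(_.nonEmpty)
  }
}

import StringExtensions._
"hello scala world".wordCount   // 3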
However, I am fairly new to Scala (<6 months) and I'm noticing that the developers around me use implicit classes even when that breaks the scenarios above. When I asked why, the answer was "because that's what I've always done".
So my question is, is there an official recommendation for when one should use an implicit class over a normal function added to the class definition? (I couldn't find anything here: https://docs.scala-lang.org/overviews/core/implicit-classes.html)
As per the SIP,
Motivation for the implicit class was that the popular extension method pattern, sometimes called the Pimp My Library pattern, was used in Scala to extend pre-existing classes with new methods, fields, and interfaces.
There was also another common ‘extension’ use case known as type traits or type classes (see scala.math.Numeric). Type classes offered an alternative to pure inheritance hierarchies that was very similar to the extension method pattern.
The main drawback to both of these techniques was that they suffered the creation of an extra object at every invocation to gain the convenient syntax. This made these useful patterns unsuitable for use in performance-critical code. In these situations it was common to remove use of the pattern and resort to using an object with static helper methods.
And implicit class syntax was thus added to solve these issues.
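Roughly, the resulting pattern looks like this; by also extending AnyVal the wrapper becomes a value class, so in most call sites no extra object is allocated (a sketch, not code from the SIP):

object IntSyntax {
  // An extension method via an implicit class; extending AnyVal makes it a
  // value class, which usually avoids allocating the wrapper at the call site.
  implicit class IntOps(val n: Int) extends AnyVal {
    def squared: Int = n * n
  }
}

import IntSyntax._
5.squared   // 25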
They rock. They let you build your own DSLs. Take a look at the Spray code, one of our classic and beloved projects:
trait TransformerPipelineSupport {
  ...
  implicit class WithTransformation[A](value: A) {
    def ~>[B](f: A ⇒ B): B = f(value)
  }
  ...
}
The ~> lets you compose Spray directives... There are many more examples.
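For instance, with that implicit class in scope you can chain plain functions like this (a made-up usage example restating the implicit class without the elided parts, not Spray code):

trait TransformerPipelineSupport {
  implicit class WithTransformation[A](value: A) {
    def ~>[B](f: A => B): B = f(value)
  }
}

object PipelineDemo extends TransformerPipelineSupport {
  val trim: String => String  = _.trim
  val shout: String => String = _.toUpperCase

  // Each ~> wraps the value on the left and applies the function on the right.
  val result: String = "  hello  " ~> trim ~> shout   // "HELLO"
}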
I was reading up on best practices for implementing Slick, and was examining this example. In it, there is this construct:
trait BankRepository extends BankTable { this: DBComponent =>
... //A bunch of code
}
I don't understand the this: DBComponent => part. In this case, DBComponent is a simple trait defined elsewhere (you can find it in the above link). What I don't understand is:
What does the this: DBComponent => construct do? My IDE doesn't complain, but it also doesn't link to any function being executed by the =>. My intuition is that it's saying the rest of the code is a value that gets returned, but I'm not clear on what is invoking it, or what value is returned exactly.
What do I even call this construct? As with many symbol-heavy constructs, it's hard to look up or find documentation for, because it's so dependent on context, and even describing the context is difficult. What is this construct called?
It's called a self type. It's basically a contract that says any class extending this trait (mixing it in) must also include DBComponent. And, as such, the compiler can assume DBComponent's members are in scope in the following code.
Here's a link to a description of it from Programming in Scala, Odersky et al., 1st Edition (a little dated but still accurate on most topics).
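A stripped-down sketch of the same contract (simplified stand-ins, not the actual code from the linked example):

trait DBComponent {
  def db: String = "a database handle"   // stand-in for a real Slick database
}

trait BankTable                          // simplified stand-in

trait BankRepository extends BankTable { this: DBComponent =>
  // The self type puts DBComponent's members in scope here...
  def findAll(): String = "querying " + db
}

// ...and forces any concrete mix-in to also provide DBComponent:
object BankRepositoryImpl extends BankRepository with DBComponent
// object Broken extends BankRepository  // does not compile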
We are pretty familiar with implicits in Scala by now, but macros are still fairly unexplored territory (at least for me) and, despite some great articles by Eugene Burmako, they are still not easy material to just dive into.
In this particular question I'd like to find out whether it's possible to achieve functionality analogous to the following code using just macros:
implicit class Nonsense(val s: String) {
  def ##(i: Int) = s.charAt(i)
}
So "asd" ## 0 will return 'a', for example. Can I implement macros that use infix notation? The reason to this is I'm writing a DSL for some already existing project and implicits allow making the API clear and concise, but whenever I write a new implicit class, I feel like introducing a new speed-reducing factor. And yes, I do know about value classes and stuff, I just think it would be really great if my DSL transformed into the underlying library API calls during compilation rather than in runtime.
TL;DR: can I replace implicits with macros while not changing the API? Can I write macros in infix form? Is there something even more suitable for this case? Is the trouble worth it?
UPD. To those advocating value classes: in my case I have a little more than a simple wrapper - the wrappers are often stacked. For example, I have one implicit class that takes some parameters and returns a lambda wrapping those parameters (i.e. a partially applied function), and a second implicit class made specifically for wrapping that type of function. I can achieve something like this:
a --> x ==> b
where the first class wraps a and adds the --> method, and the second one wraps the return type of a --> x and defines ==>(b). Plus it may really be the case that a user creates a considerable number of objects in this fashion. I just don't know whether this will be efficient, so if you could tell me that value classes cover this case, I'd be really glad to know.
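For what it's worth, here is a sketch of how that stacked shape can be expressed with value classes (all names and bodies are invented stand-ins); whether boxing is actually avoided in real code depends on how the intermediate values are used:

object DslSyntax {
  // First wrapper: adds --> and returns a lambda capturing both operands.
  implicit class ArrowOps[A](val a: A) extends AnyVal {
    def -->[X](x: X): String => String = b => s"$a --> $x ==> $b"
  }

  // Second wrapper: wraps that function type and adds ==>.
  implicit class ApplyOps[X, R](val f: X => R) extends AnyVal {
    def ==>(x: X): R = f(x)
  }
}

import DslSyntax._
1 --> "x" ==> "b"   // "1 --> x ==> b"  (parsed as (1 --> "x") ==> "b")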
Back in the day (2.10.0-RC1) I had trouble using implicit classes for macros (sorry, I don't recall why exactly), but the workaround was to:
use an implicit def macro to convert to a class
define the infix operator as a def macro in that class
So something like the following might work for you:
implicit def toNonsense(s: String): Nonsense = macro ...
...
class Nonsense(...) {
  ...
  def ##(...): ... = macro ...
  ...
}
That was pretty painful to implement. That being said, macros have become easier to implement since then.
If you want to check what I did (I'm not sure it applies to what you want to do), refer to this excerpt of my code (non-idiomatic style).
I won't address the relevance of doing this here, as others have already commented on it.
I found this code example in Programming in Scala, 2nd Ed. (Chapter 25, Listing 25.11):
object PrefixMap extends {
  def empty[T] = ...
  def apply[T](kvs: (String, T)*): PrefixMap[T] = ...
  ...
}
Why is the extends clause there without a superclass name? It looks like extending an anonymous class, but for what purpose? The accompanying text doesn't explain or even mention this construct anywhere. The code actually compiles and apparently works perfectly with or without it.
OTOH I found the exact same code on several web pages, including this (which looks like the original version of the chapter in the book). I doubt that a typo could have slipped under the radar of so many readers up to now... so am I missing something?
I tried to google it, but struggled even to find proper search terms for it. So could someone explain whether this construct has a name and/or practical use in Scala?
Looks like a print error to me. It will work all the same, though, which probably helped hide it all this time.
Anyway, that object is extending a structural type, though it could also be an early initialization if you had with XXX at the end. Hmmm. It looks more like an early initialization without any class or trait to be initialized afterwards, actually... structural types do not contain code, I think.
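A quick way to convince yourself: as far as I can tell, these two definitions compile and behave identically (at least on the Scala versions the book targets); the braces after extends are simply parsed as the object's body:

object A { val x = 1 }
object B extends { val x = 1 }

A.x + B.x   // 2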
In Martin Odersky's recent post about levels of programmer ability in Scala, in the Expert library designer section, he includes the term "early initializers".
These are not mentioned in Programming in Scala. What are they?
Early initializers are the part of a subclass's constructor that is intended to run before its superclass's constructor. For example:
abstract class X {
  val name: String
  val size = name.size
}

class Y extends {
  val name = "class Y"
} with X
If the code was written instead as
class Z extends X {
  val name = "class Z"
}
then a NullPointerException would occur when Z got initialized, because size is initialized before name under the normal ordering of initialization (superclass before subclass).
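Concretely (REPL-style, using the classes above):

new Y().size   // 7: the early initializer sets name to "class Y" before X's
               //    constructor computes size
new Z()        // throws NullPointerException: X's constructor reads name,
               //    which Z has not yet initialized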
As far as I can tell, the motivation (as given in the link above) is:
"Naturally when a val is overridden, it is not initialized more than once. So though x2 in the above example is seemingly defined at every point, this is not the case: an overridden val will appear to be null during the construction of superclasses, as will an abstract val."
I don't see why this is natural at all. It is completely possible that the r.h.s. of an assignment might have a side effect. Note that such code structure is completely impossible in either C++ or Java (and I will guess Smalltalk, although I can't speak for that language). In fact you have to make such dual assignments implicit...ticilpmi...EXplicit in those languages via constructors. In the light of the r.h.s. side effect uncertainty, it really doesn't seem like much of a motivation at all: the ability to sidestep superclass side effects (thereby voiding superclass invariants) via ASSIGNMENT? Ick!
Are there other "killer" motivations for allowing such unsafe code structure? Object-oriented languages have done without such a mechanism for about 40 years (30-odd years, if you count from the creation of the language), so why include it now?
It...just...seems...dangerous.
On second thought, a year later...
This is just cake. Literally.
Not an early ANYTHING. Just cake (mixins).
Cake is a term/pattern coined by The Grand Pooh-bah himself, one that employs Scala's trait system, which is halfway between a class and an interface. It is far better than Java's decorator pattern.
The so-called "interface" is merely an unnamed base class, and what used to be the base class is acting as a trait (which I frankly did not know could be done). It is unclear to me whether a "with'd" class can take arguments (traits can't); I will try it and report back.
This question and its answer have stepped into one of Scala's coolest features. Read up on it and be in awe.