Quasiquotes simplify many things when writing macros in Scala. However, I noticed that macros containing quasiquotes can be recompiled every time SBT triggers a compilation, even though neither the macro implementation nor any of its call sites have changed and need recompilation.
This doesn't seem to happen if the code in the quasiquotes is fairly simple; it appears to happen only if there's a dependency on another class. I noticed that rewriting everything with reify seems to avoid the recompilation problem, but I can't manage to rewrite the last part without quasiquotes...
My macro avoids reflection on startup by creating wrapper functions during compilation.
I have the following classes:
object ExportedFunction {
def apply[R: Manifest](f: Function0[R], fd: FunctionDescription): ExportedExcelFunction = new ExcelFunction0[R] {
def apply: R = f()
val functionDescription = fd
}
def apply[T1: Manifest, R: Manifest](f: Function1[T1, R], fd: FunctionDescription): ExportedExcelFunction = new ExcelFunction1[T1, R] {
def apply(t1: T1): R = f(t1)
val functionDescription = fd
}
// ... and so on, up to Function17 ...
}
I then analyze an object and export any member function using the described interface like so:
def export(registrar: FunctionRegistrar,
root: Object,
<...more args...>) = macro exportImpl
def exportImpl(c: Context)(registrar: c.Expr[FunctionRegistrar],
root: c.Expr[Object],
<...>): c.Expr[Any] = {
import c.universe._
<... the following is simplified ...>
root.typeSignature.members.flatMap {
case x if x.isMethod =>
val method = x.asMethod
val callee = c.Expr(method)
val desc = q"""FunctionDescription(<...result from reflective lookup...>)"""
val export = q"ExportedFunction($callee _, $desc)"
q"$registrar.+=({$export})"
I can rewrite the first and last lines with reify, but I can't manage to rewrite the second line; my best attempt still uses quasiquotes:
val export = reify {
...
ExportedFunction(c.Expr(q"""$callee _"""), desc)
...
}.tree
But this results in:
overloaded method value apply with alternatives... cannot be applied to (c.Expr[Nothing], c.universe.Expr[FunctionDescription])
I think the compiler is missing the implicits, or maybe this code can only work for a function with a fixed number of arguments, since it would need to know at macro compile time how many arguments the method has? However, it all works if everything is written using quasiquotes...
From the description of the SBT problem I can assume that you're using macro paradise for 2.10.x and are facing https://github.com/scalamacros/paradise/issues/11. I was planning to address this issue this week, so the fix should arrive really soon. In the meantime you can use the workaround described on the issue page.
As for the reify problem, not all quasiquotes can be rewritten using reify. Limitations such as the one you have faced here were a very strong motivator towards shifting our focus to quasiquotes in Scala 2.11.
For the record, these SBT settings (upgrading to a newer version) fixed it:
...
settings = Seq(
libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value,
libraryDependencies += "org.scalamacros" % "quasiquotes" % "2.0.0-M3" cross CrossVersion.full,
autoCompilerPlugins := true,
addCompilerPlugin("org.scalamacros" % "paradise" % "2.0.0-M3" cross CrossVersion.full)
)
Related
I'm learning the Cats-Effect FP framework and am wondering how best to implement logging. I'm struggling to find an elegant way to use logging in methods that return non-side-effect types, particularly class constructors. For example:
case class Limit(name: String, value: Int)
object Limit {
def max(name: String): Limit = Limit(name, Int.MaxValue)
}
Now, if I want to add logging to the construction, which is obviously a side-effect, I would have to change the method signature to something like:
object Limit {
def max[F[_] : Sync](name: String): F[Limit] = {
for {
_ <- Slf4jLogger.getLogger[F].debug("Creating max limit.")
} yield {
Limit(name, Int.MaxValue)
}
}
}
Such a change breaks code compatibility, and now every place that uses Limit.max needs to be refactored to deal with the type signatures, mapping, and/or calling .unsafeRun. Quite a radical change for something as small as adding a log line.
Is there some way to do this more elegantly? Do I just drop log4cats and use the regular, unsafe SLF4J? Or do I just need to accept that in the FP/Cats-Effect world it's safer to return F[..] from everything everywhere, no matter how small or trivial the method is?
I think it would be better to use the log4cats library. If you use cats-effect, your Main class probably returns IO[ExitCode], and almost all of your logic shouldn't require any unsafe calls.
But at the same time I've seen a lot of projects that use plain logging without wrapping logs inside IO. It depends on you and/or your team/project. I personally prefer the FP way.
I left an example of using log4cats here:
import cats.effect.Sync
import org.typelevel.log4cats.Logger
import org.typelevel.log4cats.slf4j.Slf4jLogger

implicit def logger[F[_]: Sync]: Logger[F] = Slf4jLogger.getLogger[F]
case class Limit(name: String, value: Int)
object Limit {
def max[F[_] : Sync: Logger](name: String): F[Limit] = {
for {
_ <- Logger[F].info("Creating max limit.")
} yield {
Limit(name, Int.MaxValue)
}
}
}
build.sbt file:
libraryDependencies ++= Seq(
"org.typelevel" %% "log4cats-core" % "2.5.0", // Only if you want to Support Any Backend
"org.typelevel" %% "log4cats-slf4j" % "2.5.0", // Direct Slf4j Support - Recommended
// plus a logging backend such as Logback
)
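For completeness, here is a minimal usage sketch (not part of the original answer): it assumes the Limit and implicit logger definitions above are in scope and cats-effect 3 is on the classpath, and LimitApp is just an illustrative name. The effect stays suspended until the IOApp runtime interprets it, so no unsafeRun* calls appear in application code.
import cats.effect.{IO, IOApp}

object LimitApp extends IOApp.Simple {
  // Limit.max[IO] only describes the logging + construction; the runtime runs it
  val run: IO[Unit] =
    Limit.max[IO]("requests").flatMap(limit => IO.println(limit))
}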
I'm trying to build a simple typeclass IsEnum[T] using a macro.
I use knownDirectSubclasses to get all the direct subclasses of T, ensure T is a sealed trait, and check that all subclasses are case objects (using subSymbol.asClass.isModuleClass && subSymbol.asClass.isCaseClass).
Now I'm trying to build a Seq of the case objects referred to by those subclass symbols.
It's working, using a workaround:
Ident(subSymbol.asInstanceOf[scala.reflect.internal.Symbols#Symbol].sourceModule.asInstanceOf[Symbol])
But I copied that from some other question, and it seems hacky and wrong. Why does it work? And is there a cleaner way to achieve this?
In 2.13 you can materialize scala.ValueOf
val instanceTree = c.inferImplicitValue(appliedType(typeOf[ValueOf[_]].typeConstructor, subSymbol.asClass.toType))
q"$instanceTree.value"
The generated tree will be different
sealed trait A
object A {
case object B extends A
case object C extends A
}
//scalac: Seq(new scala.ValueOf(A.this.B).value, new scala.ValueOf(A.this.C).value)
but at runtime it's still Seq(B, C).
In 2.12 shapeless.Witness can be used instead of ValueOf
val instanceTree = c.inferImplicitValue(appliedType(typeOf[Witness.Aux[_]].typeConstructor, subSymbol.asClass.toType))
q"$instanceTree.value"
//scalac: Seq(Witness.mkWitness[App.A.B.type](A.this.B.asInstanceOf[App.A.B.type]).value, Witness.mkWitness[App.A.C.type](A.this.C.asInstanceOf[App.A.C.type]).value)
libraryDependencies += "com.chuusai" %% "shapeless" % "2.4.0-M1" // in 2.3.3 it doesn't work
In shapeless they use something like
subSymbol.asClass.toType match {
case ref # TypeRef(_, sym, _) if sym.isModuleClass => mkAttributedQualifier(ref)
}
https://github.com/milessabin/shapeless/blob/master/core/src/main/scala/shapeless/singletons.scala#L230
or in our case simply
mkAttributedQualifier(subSymbol.asClass.toType)
but their mkAttributedQualifier also uses downcasting to compiler internals and the tree obtained is like Seq(A.this.B, A.this.C).
Also
Ident(subSymbol.companionSymbol)
seems to work (the tree is Seq(B, C)) but .companionSymbol is deprecated (the Scaladoc says it "may return unexpected results for module classes", i.e. for objects).
Following an approach similar to the one used by @MateuszKubuszok in his library enumz, you can also try
val objectName = symbol.fullName
c.typecheck(c.parse(s"$objectName"))
and the tree is Seq(App.A.B, App.A.C).
Finally, if you're interested in the tree Seq(B, C) (and not some more complicated tree) it seems you can replace
Ident(subSymbol.asInstanceOf[scala.reflect.internal.Symbols#Symbol].sourceModule.asInstanceOf[Symbol/*ModuleSymbol*/])
with more conventional
Ident(subSymbol.owner.info.decl(subSymbol.name.toTermName)/*.asModule*/)
or (the shortest option)
Ident(subSymbol.asClass.module/*.asModule*/)
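Putting the pieces together, here is a minimal, untested sketch of what the materializer could look like with that last option; it assumes IsEnum[T] is a simple trait with a values member (the exact shape isn't given in the question):
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

trait IsEnum[T] { def values: Seq[T] }

object IsEnum {
  implicit def materialize[T]: IsEnum[T] = macro materializeImpl[T]

  def materializeImpl[T: c.WeakTypeTag](c: blackbox.Context): c.Tree = {
    import c.universe._
    val tpe = weakTypeOf[T]
    val cls = tpe.typeSymbol.asClass
    if (!(cls.isTrait && cls.isSealed))
      c.abort(c.enclosingPosition, s"$tpe is not a sealed trait")
    // the "is every subclass a case object" validation from the question is omitted for brevity
    val idents = cls.knownDirectSubclasses.toList.map(subSymbol => Ident(subSymbol.asClass.module))
    // note: IsEnum must be resolvable at the expansion site; a fully qualified name is safer in real code
    q"new IsEnum[$tpe] { def values: Seq[$tpe] = Seq(..$idents) }"
  }
}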
Edit: I updated the question to be more descriptive.
Note: I use the Scala 2.11 compiler, because that is the compiler version used by the LMS tutorial project.
I am porting a DSL written in Haskell to Scala. The DSL is an imperative language, so I used monads with do-notation, namely WriterT [Stmt] (State Label) a. I had trouble porting this to Scala, but worked around it by using the ReaderWriterState monad and just using Unit for the Reader component.

I then went looking for an alternative to Haskell's do-notation. For-comprehensions are supposed to be this alternative in Scala, but they are tailored towards sequences; for example, I could not pattern-match on a tuple without the compiler inserting a call to filter. So I started looking at other options and found multiple libraries: effectful, monadless, and each. I tried effectful first, which does exactly what I wanted (I even prefer it over Haskell's do-notation), and it worked well with the ReaderWriterState monad from ScalaZ I had been using.

In my DSL I have actions like Drop() (a case class) that I want to be able to use directly as a statement. I hoped to use implicits for this, but because the ! method of effectful (or the monadless equivalent, for that matter) is too general, I cannot get Scala to automatically convert my Action case classes to something of type Stmt (the ReaderWriterState that returns Unit).
So if not implicits, would there be a different way to achieve it?
As shown in Main.passes2, I did figure out a workaround that I would not mind using, but I am curious whether what I stumbled upon is a limitation of the language, or simply my lack of experience with Scala.
With Main.fails1 I get the following error message: Implicit not found: scalaz.Unapply[scalaz.Monad, question.Label]. Unable to unapply type question.Label into a type constructor of kind M[_] that is classified by the type class scalaz.Monad. Check that the type class is defined by compiling implicitly[scalaz.Monad[type constructor]] and review the implicits in object Unapply, which only cover common type 'shapes.'
Which comes from ScalaZ its Unapply: https://github.com/scalaz/scalaz/blob/d2aba553e444f951adc847582226d617abae24da/core/src/main/scala/scalaz/Unapply.scala#L50
And with Main.fails2 I will just get: value ! is not a member of question.Label
I assume it is just a matter of writing the missing implicit definition, but I am not quite sure which one Scala wants me to write.
The most important bits of my build.sbt are the version:
scalaVersion := "2.11.2",
And the dependencies:
libraryDependencies += "org.scala-lang" % "scala-compiler" % "2.11.2",
libraryDependencies += "org.scala-lang" % "scala-library" % "2.11.2",
libraryDependencies += "org.scala-lang" % "scala-reflect" % "2.11.2",
libraryDependencies += "org.pelotom" %% "effectful" % "1.0.1",
Here is the relevant code, containing the things I tried and the code necessary to run those things:
package question
import scalaz._
import Scalaz._
import effectful._
object DSL {
type GotoLabel = Int
type Expr[A] = ReaderWriterState[Unit, List[Action], GotoLabel, A]
type Stmt = Expr[Unit]
def runStmt(stmt: Stmt, startLabel: GotoLabel): (Action, GotoLabel) = {
val (actions, _, updatedLabel) = stmt.run(Unit, startLabel)
val action = actions match {
case List(action) => action
case _ => Seq(actions)
}
(action, updatedLabel)
}
def runStmt(stmt: Stmt): Action = runStmt(stmt, 0)._1
def getLabel(): Expr[GotoLabel] =
ReaderWriterState((_, label) => (Nil, label, label))
def setLabel(label: GotoLabel): Stmt =
ReaderWriterState((_, _) => (Nil, Unit, label))
implicit def actionStmt(action: Action): Stmt =
ReaderWriterState((_, label) => (List(action), Unit, label))
}
import DSL._
final case class Label(label: String) extends Action
final case class Goto(label: String) extends Action
final case class Seq(seq: List[Action]) extends Action
sealed trait Action {
def stmt(): Stmt = this
}
object Main {
def freshLabel(): Expr[String] = effectfully {
val label = getLabel.! + 1
setLabel(label).!
s"ants-$label"
}
def passes0() =
freshLabel()
.flatMap(before => Label(before))
.flatMap(_ => freshLabel())
.flatMap(after => Label(after));
def passes1() = effectfully {
unwrap(actionStmt(Label(unwrap(freshLabel()))))
unwrap(actionStmt(Label(unwrap(freshLabel()))))
}
def fails1() = effectfully {
unwrap(Label(unwrap(freshLabel())))
unwrap(Label(unwrap(freshLabel())))
}
def passes2() = effectfully {
Label(freshLabel.!).stmt.!
Label(freshLabel.!).stmt.!
}
def fails2() = effectfully {
Label(freshLabel.!).!
Label(freshLabel.!).!
}
def main(args: Array[String]): Unit = {
runStmt(passes0())
}
}
Question quality
I want to start by complaining about the question quality. You provide almost no textual description of what you are trying to achieve and then just show us a wall of code, without explicitly referencing your dependencies. This is far from a Minimal, Complete, and Verifiable example, and you typically have a much better chance of getting answers if you present a clear problem that is easy to understand and reproduce.
Back to the business
When you write something like
unwrap(Label(unwrap(freshLabel())))
you ask too much from the Scala compiler. In particular, unwrap can unwrap only something that is wrapped in some Monad, but Label is not a monad. It is not just that there is no Monad[Label] instance; it is the fact that Label structurally doesn't fit that kills you.

Simply speaking, an instance of ScalaZ's Unapply is an object that allows you to split an applied generic type MonadType[SpecificType] (or other FunctorType[SpecificType]) into an "unapplied"/"partially-applied" MonadType[_] and SpecificType, even when (as in your case) MonadType[_] is actually something complicated like ReaderWriterState[Unit, List[Action], GotoLabel, _]. So the error says that there is no known way to split Label into some MonadType[_] and SpecificType.

You probably expected your implicit actionStmt to do the trick and auto-convert Label into a Stmt, but this step is too much for the Scala compiler, because for it to work the compiler would also have to split the composite type. Note that this conversion is far from obvious to the compiler, since unwrap is itself a generic method that can handle any Monad. Actually, in some sense ScalaZ needs Unapply exactly because the compiler can't do such things automatically. Still, if you help the compiler just a little bit by specifying the generic type, it can do the rest of the work:
def fails1() = effectfully {
// fails
// unwrap(Label(unwrap(freshLabel())))
// unwrap(Label(unwrap(freshLabel())))
// works
unwrap[Stmt](Label(unwrap(freshLabel())))
unwrap[Stmt](Label(unwrap(freshLabel())))
}
There is also another possible solution but it is a quite dirty hack: you can roll out your custom Unapply to persuade the compiler that Label is actually the same as Expr[Unit] which can be split as Expr[_] and Unit:
implicit def unapplyAction[AC <: Action](implicit TC0: Monad[Expr]): Unapply[Monad, AC] {
type M[X] = Expr[X]
type A = Unit
} = new Unapply[Monad, AC] {
override type M[X] = Expr[X]
override type A = Unit
override def TC: Monad[Expr] = TC0
// This can't be implemented because Leibniz really witness only exactly the same types rather than some kind of isomorphism
// Luckily effectful doesn't use leibniz implementation
override def leibniz: AC === Expr[Unit] = ???
}
The obvious reason why this is a dirty hack is that Label is actually not the same as Expr[Unit], and you can see that from the fact that you can't implement leibniz in your Unapply. Anyway, if you import unapplyAction, even your original fails1 will compile and work, because effectful doesn't use leibniz internally.
As for your fails2, I don't think you can make it work in any simple way. Probably the only thing you could try is to write another macro that converts an in-place call to ! on your action (or its implicit wrapper) into effectful's ! on action.stmt. This might work, but I haven't tried it.
I am trying to convert the tutorial here http://atnos-org.github.io/eff/org.atnos.site.Introduction.html into a running Scala program inside IntelliJ-IDEA. The code runs in the command line REPL, but not in the IDE.
I have simply copied and pasted all the code into one file and added an object Intro extends App.
Here is the code:
class Tutorial{
}
object Intro extends App {
import cats._
import cats.data._
import org.atnos.eff._
type ReaderInt[A] = Reader[Int, A]
type WriterString[A] = Writer[String, A]
type Stack = Fx.fx3[WriterString, ReaderInt, Eval]
import org.atnos.eff.all._
import org.atnos.eff.syntax.all._
// useful type aliases showing that the ReaderInt and the WriterString effects are "members" of R
// note that R could have more effects
type _readerInt[R] = ReaderInt |= R
type _writerString[R] = WriterString |= R
def program[R: _readerInt : _writerString : _eval]: Eff[R, Int] = for {
// get the configuration
n <- ask[R, Int]
// log the current configuration value
_ <- tell("the required power is " + n)
// compute the nth power of 2
a <- delay(math.pow(2, n.toDouble).toInt)
// log the result
_ <- tell("the result is " + a)
} yield a
println(program[Stack].runReader(6).runWriter.runEval.run)
}
The error reported in the IDE is Cannot resolve symbol run on the last line.
Here is my build.sbt file, following the instructions for the library:
name := "Tutorial"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.typelevel" %% "cats" % "0.9.0"
libraryDependencies += "org.atnos" %% "eff" % "3.1.0"
// to write types like Reader[String, ?]
addCompilerPlugin("org.spire-math" %% "kind-projector" % "0.9.3")
// to get types like Reader[String, ?] (with more than one type parameter) correctly inferred
// this plugin is not necessary with Scala 2.12
addCompilerPlugin("com.milessabin" % "si2712fix-plugin_2.11.8" % "1.2.0")
Edmund Noble answered my question here: https://github.com/atnos-org/eff/issues/80#issuecomment-287667353
The reason why IntelliJ cannot figure out your code in particular looks to be because IntelliJ is not capable of simulating implicit-directed type inference; the Member implicits have a type member, Out, inside them which represents the remaining effect stack. If the IDE cannot figure it out, it subs in a fresh type variable and thus the run ops constructor cannot be called because the inferred type according to IntelliJ is Eff[m.Out, A] and not Eff[NoFx, A].
FIX: I was able to compile the file separately and then run it, even though the error is still highlighted in the IDE.
Unless there is some feature in IntelliJ IDEA that I haven't enabled, this looks like a limitation and/or bug in IntelliJ IDEA.
I have the following example build.sbt that uses sbt-assembly. (My assembly.sbt and project/assembly.sbt are set up as described in the readme.)
import AssemblyKeys._
organization := "com.example"
name := "hello-sbt"
version := "1.0"
scalaVersion := "2.10.3"
val hello = taskKey[Unit]("Prints hello")
hello := println(s"hello, ${assembly.value.getName}")
val hey = taskKey[Unit]("Prints hey")
hey <<= assembly map { (asm) => println(s"hey, ${asm.getName}") }
//val hi = taskKey[Unit]("Prints hi")
//hi <<= assembly { (asm) => println(s"hi, $asm") }
Both hello and hey are functionally equivalent, and when I run either task from sbt, they run assembly first and print a message with the same filename. Is there a meaningful difference between the two? (It seems like the definition of hello is "slightly magical", since the dependency on assembly is only implied there, not explicit.)
Lastly, I'm trying to understand why hey needs the map call. Obviously it results in a different object getting passed into asm, but I'm not quite sure how to fix this type error in the definition of hi:
sbt-hello/build.sbt:21: error: type mismatch;
found : Unit
required: sbt.Task[Unit]
hi <<= assembly { (asm) => println(s"hi, $asm") }
^
[error] Type error in expression
It looks like assembly here is an sbt.TaskKey[java.io.File], but I don't see a map method defined there, so I can't quite figure out what's happening in the type of hey above.
sbt 0.12 syntax vs sbt 0.13 syntax
Is there a meaningful difference between the two?
By meaningful difference, if you mean a semantic difference as in an observable difference in the behavior of the compiled code, then no: they are the same.
If you mean any intended difference in the code, it's about the style difference between sbt 0.12 syntax and sbt 0.13 syntax. Conceptually, I think the sbt 0.13 syntax makes it easier to learn and code, since you deal with T directly instead of Initialize[T]. Using a macro, sbt 0.13 expands x.value into the sbt 0.12 equivalent.
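To make this concrete, the two task definitions from the question are essentially the 0.13 and 0.12 spellings of the same dependency on assembly (a rough sketch, not the literal macro expansion):
// sbt 0.13 style: the .value macro hides the Initialize/Task plumbing
hello := println(s"hello, ${assembly.value.getName}")
// roughly the equivalent sbt 0.12 style, spelled out with <<= and map
hey <<= assembly map { (asm) => println(s"hey, ${asm.getName}") }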
anatomy of <<=
I'm trying to understand why hey needs the map call.
That's actually one of the differences the macro is now able to handle automatically.
To understand why map is needed in the sbt 0.12 style, you need to understand the type of an sbt DSL expression, which is Setting[_]. As the Getting Started guide puts it:
Instead, the build definition creates a huge list of objects with type Setting[T] where T is the type of the value in the map. A Setting describes a transformation to the map, such as adding a new key-value pair or appending to an existing value.
For tasks, the type of the DSL expression is Setting[Task[T]]. To turn a setting key into a Setting[T], or to turn a task key into a Setting[Task[T]], you use the <<= method defined on the respective keys. This is implemented in Structure.scala (the sbt 0.12 code base has a simpler implementation of <<=, so I'll use that as the reference):
sealed trait SettingKey[T] extends ScopedTaskable[T] with KeyedInitialize[T] with Scoped.ScopingSetting[SettingKey[T]] with Scoped.DefinableSetting[T] with Scoped.ListSetting[T, Id] { ... }
sealed trait TaskKey[T] extends ScopedTaskable[T] with KeyedInitialize[Task[T]] with Scoped.ScopingSetting[TaskKey[T]] with Scoped.ListSetting[T, Task] with Scoped.DefinableTask[T] { ... }
object Scoped {
sealed trait DefinableSetting[T] {
final def <<= (app: Initialize[T]): Setting[T] = setting(scopedKey, app)
...
}
sealed trait DefinableTask[T] { self: TaskKey[T] =>
def <<= (app: Initialize[Task[T]]): Setting[Task[T]] = Project.setting(scopedKey, app)
...
}
}
Note the types of the app parameters. The setting key's <<= takes Initialize[T], whereas the task key's <<= takes Initialize[Task[T]]. In other words, depending on the type of the lhs of an <<= expression, the type of the rhs changes. This requires sbt 0.12 users to be aware of the setting/task difference in the keys.
Suppose you have a setting key like description on the lhs, and suppose you want to depend on the name setting to create a description. To create a setting dependency expression you use apply:
description <<= name { n => n + " is good." }
apply for a single key is implemented in Settings.scala:
sealed trait Keyed[S, T] extends Initialize[T]
{
def transform: S => T
final def apply[Z](g: T => Z): Initialize[Z] = new GetValue(scopedKey, g compose transform)
}
trait KeyedInitialize[T] extends Keyed[T, T] {
final val transform = idFun[T]
}
Next, instead of description, suppose you want to create a setting for jarName in assembly. This is a task key, so the rhs of <<= takes Initialize[Task[T]], and apply won't do. This is where map comes in:
jarName in assembly <<= name map { n => n + ".jar" }
This is implemented in Structure.scala as well:
final class RichInitialize[S](init: Initialize[S]) {
def map[T](f: S => T): Initialize[Task[T]] = init(s => mktask(f(s)) )
}
Because a setting key extends KeyedInitialize[T], which is Initialize[T], and because there's an implicit conversion from Initialize[T] to RichInitialize[T], the above is available to name. This is an odd way of defining map, since map usually preserves the structure.
It might make more sense if you look at the similar enrichment class for task keys:
final class RichInitializeTask[S](i: Initialize[Task[S]]) extends RichInitTaskBase[S, Task] {...}
sealed abstract class RichInitTaskBase[S, R[_]] {
def map[T](f: S => T): Initialize[R[T]] = mapR(f compose successM)
}
So for tasks, map maps from a task of type S to one of type T. For settings, we can think of it this way: map is not defined on a setting, so it implicitly converts itself into a task and maps that. In any case, this lets sbt 0.12 users think: use apply for settings, map for tasks. Note that apply never goes away for task keys, since they extend Keyed[Task[T], Task[T]]. This should explain:
sbt-hello/build.sbt:21: error: type mismatch;
found : Unit
required: sbt.Task[Unit]
Then there's the tuple issue. So far I've discussed dependencies on a single setting. If you want to depend on more, sbt implicitly adds apply and map to Tuple2..N to handle it. It has now been expanded to 15, but it used to go only up to Tuple9. Seen from a new user's point of view, the idea of invoking map on a Tuple9 of settings so that it generates a task-like Initialize[Task[T]] would appear alien. Without changing the underlying mechanism, sbt 0.13 provides a much cleaner surface to get started with.
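For illustration, here is a rough sketch of that tuple style next to its sbt 0.13 counterpart, reusing the jarName in assembly key from above (not taken from the original answer):
// sbt 0.12: depend on two settings through the implicit map added to the tuple of keys
jarName in assembly <<= (name, version) map { (n, v) => s"$n-$v.jar" }
// sbt 0.13: the same dependency written with the := macro and .value
jarName in assembly := s"${name.value}-${version.value}.jar"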