Get annotations from class in Scala 3 macros

I am writing a macro to get annotations from a Class:
inline def getAnnotations(clazz: Class[?]): Seq[Any] = ${ getAnnotationsImpl('clazz) }
def getAnnotationsImpl(expr: Expr[Class[?]])(using Quotes): Expr[Seq[Any]] =
  import quotes.reflect.*
  val cls = expr.valueOrError // error: value value is not a member of quoted.Expr[Class[?]]
  val tpe = TypeRepr.typeConstructorOf(cls)
  val annotations = tpe.typeSymbol.annotations.map(_.asExpr)
  Expr.ofSeq(annotations)
But I get an error when I try to get the Class value from the expr parameter:
@main def test(): Unit =
  val cls = getCls
  val annotations = getAnnotations(cls)

def getCls: Class[?] = Class.forName("Foo")
Is it possible to get the annotations of a Class at compile time with a macro like this?

By the way, eval for Class[_] doesn't work even in Scala 2 macros: c.eval(c.Expr[Class[_]](clazz)) produces
java.lang.ClassCastException:
scala.reflect.internal.Types$ClassNoArgsTypeRef cannot be cast to java.lang.Class.
Class[_] is too much of a runtime thing. How would you extract its value from its tree (an Expr is a wrapper over a tree)?
If you already have a Class[?], you should use Java reflection rather than Scala 3 macros (with TASTy reflection).
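For the runtime route, here is a minimal sketch using plain Java reflection (note that only runtime-retained annotations, typically Java ones, are visible this way; runtimeAnnotations is just an illustrative helper name):
def runtimeAnnotations(clazz: Class[?]): Seq[java.lang.annotation.Annotation] =
  clazz.getAnnotations.toSeq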
Actually, you can try to evaluate a tree from its source code (hacking multi-staging programming and implementing our own eval instead of the forbidden staging.run). It's a little similar to context.eval in Scala 2 macros (but we evaluate from source code rather than from a tree).
import scala.quoted.*
object Macro {
inline def getAnnotations(clazz: Class[?]): Seq[Any] = ${getAnnotationsImpl('clazz)}
def getAnnotationsImpl(expr: Expr[Class[?]])(using Quotes): Expr[Seq[Any]] = {
import quotes.reflect.*
val str = expr.asTerm.pos.sourceCode.getOrElse(
report.errorAndAbort(s"No source code for ${expr.show}")
)
val cls = Eval[Class[?]](str)
val tpe = TypeRepr.typeConstructorOf(cls)
val annotations = tpe.typeSymbol.annotations.map(_.asExpr)
Expr.ofSeq(annotations)
}
}
import dotty.tools.dotc.core.Contexts.Context
import dotty.tools.dotc.{Driver, util}
import dotty.tools.io.{VirtualDirectory, VirtualFile}
import java.net.URLClassLoader
import java.nio.charset.StandardCharsets
import dotty.tools.repl.AbstractFileClassLoader
object Eval {
def apply[A](str: String): A = {
val content =
s"""
|package $$generated
|
|object $$Generated {
| def run = $str
|}""".stripMargin
val sourceFile = util.SourceFile(
VirtualFile(
name = "$Generated.scala",
content = content.getBytes(StandardCharsets.UTF_8)),
codec = scala.io.Codec.UTF8
)
val files = this.getClass.getClassLoader.asInstanceOf[URLClassLoader].getURLs
val depClassLoader = new URLClassLoader(files, null)
val classpathString = files.mkString(":")
val outputDir = VirtualDirectory("output")
class DriverImpl extends Driver {
private val compileCtx0 = initCtx.fresh
val compileCtx = compileCtx0.fresh
.setSetting(
compileCtx0.settings.classpath,
classpathString
).setSetting(
compileCtx0.settings.outputDir,
outputDir
)
val compiler = newCompiler(using compileCtx)
}
val driver = new DriverImpl
given Context = driver.compileCtx
val run = driver.compiler.newRun
run.compileSources(List(sourceFile))
val classLoader = AbstractFileClassLoader(outputDir, depClassLoader)
val clazz = Class.forName("$generated.$Generated$", true, classLoader)
val module = clazz.getField("MODULE$").get(null)
val method = module.getClass.getMethod("run")
method.invoke(module).asInstanceOf[A]
}
}
package mypackage
import scala.annotation.experimental
@experimental
class Foo
Macro.getAnnotations(Class.forName("mypackage.Foo"))
// new scala.annotation.internal.SourceFile("/path/to/src/main/scala/mypackage/Foo.scala"), new scala.annotation.experimental()
scalaVersion := "3.1.3"
libraryDependencies += scalaOrganization.value %% "scala3-compiler" % scalaVersion.value
How to compile and execute scala code at run-time in Scala3?
(compile time of the code expanding macros is the runtime of macros)
Actually, there is even a way to evaluate a tree itself (not just its source code). Such functionality exists in the Scala 3 compiler but is deliberately blocked because of the phase consistency principle. So for this to work, the code expanding the macro should be compiled with a patched compiler:
https://github.com/DmytroMitin/dotty-patched
scalaVersion := "3.2.1"
libraryDependencies += scalaOrganization.value %% "scala3-staging" % scalaVersion.value
// custom Scala settings
managedScalaInstance := false
ivyConfigurations += Configurations.ScalaTool
libraryDependencies ++= Seq(
scalaOrganization.value % "scala-library" % "2.13.10",
scalaOrganization.value %% "scala3-library" % "3.2.1",
"com.github.dmytromitin" %% "scala3-compiler-patched-assembly" % "3.2.1" % "scala-tool"
)
import scala.quoted.{Expr, Quotes, staging, quotes}
object Macro {
inline def getAnnotations(clazz: Class[?]): Seq[String] = ${impl('clazz)}
def impl(expr: Expr[Class[?]])(using Quotes): Expr[Seq[String]] = {
import quotes.reflect.*
given staging.Compiler = staging.Compiler.make(this.getClass.getClassLoader)
val tpe = staging.run[Any](expr).asInstanceOf[TypeRepr]
val annotations = Expr(tpe.typeSymbol.annotations.map(_.asExpr.show))
report.info(s"annotations=${annotations.show}")
annotations
}
}
Normally, for expr: Expr[A], staging.run(expr) returns a value of type A. But Class is special. For expr: Expr[Class[_]] inside macros it returns a value of type dotty.tools.dotc.core.Types.CachedAppliedType <: TypeRepr. That's why I had to cast.
In Scala 2 this would similarly be c.eval(c.Expr[Any](/*c.untypecheck*/(clazz))).asInstanceOf[Type].typeSymbol.annotations, because for Class[_] c.eval returns scala.reflect.internal.Types$ClassNoArgsTypeRef <: Type.
https://github.com/scala/bug/issues/12680
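For reference, a minimal Scala 2 sketch of that eval-based approach (untested here; it relies on the behaviour of c.eval for Class[_] trees described above, and Scala2Macro/impl are illustrative names):
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

object Scala2Macro {
  def getAnnotations(clazz: Class[_]): Seq[String] = macro impl

  def impl(c: blackbox.Context)(clazz: c.Expr[Class[_]]): c.Expr[Seq[String]] = {
    import c.universe._
    // for a Class[_] argument, c.eval yields an internal Type rather than a java.lang.Class
    val tpe = c.eval(c.Expr[Any](/*c.untypecheck*/(clazz.tree))).asInstanceOf[Type]
    val annotations = tpe.typeSymbol.annotations.map(_.toString)
    c.Expr[Seq[String]](q"_root_.scala.Seq(..$annotations)")
  }
}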

Related

What is Scala 3 equivalent to this Scala 2 code that uses Enumeration and play-json?

I have some code that works in Scala 2.{10,11,12,13} that I'm now trying to convert to Scala 3. Scala 3 does Enumeration differently than Scala 2. I'm trying to figure out how to convert the following code that interacts with play-json so that it will work with Scala 3. Any tips or pointers to code from projects that have already crossed this bridge?
// Scala 2.x style code in EnumUtils.scala
import play.api.libs.json._
import scala.language.implicitConversions
// see: http://perevillega.com/enums-to-json-in-scala
object EnumUtils {
def enumReads[E <: Enumeration](enum: E): Reads[E#Value] =
new Reads[E#Value] {
def reads(json: JsValue): JsResult[E#Value] = json match {
case JsString(s) => {
try {
JsSuccess(enum.withName(s))
} catch {
case _: NoSuchElementException =>
JsError(s"Enumeration expected of type: '${enum.getClass}', but it does not appear to contain the value: '$s'")
}
}
case _ => JsError("String value expected")
}
}
implicit def enumWrites[E <: Enumeration]: Writes[E#Value] = new Writes[E#Value] {
def writes(v: E#Value): JsValue = JsString(v.toString)
}
implicit def enumFormat[E <: Enumeration](enum: E): Format[E#Value] = {
Format(EnumUtils.enumReads(enum), EnumUtils.enumWrites)
}
}
// ----------------------------------------------------------------------------------
// Scala 2.x style code in Xyz.scala
import play.api.libs.json.{Reads, Writes}
object Xyz extends Enumeration {
type Xyz = Value
val name, link, unknown = Value
implicit val enumReads: Reads[Xyz] = EnumUtils.enumReads(Xyz)
implicit def enumWrites: Writes[Xyz] = EnumUtils.enumWrites
}
As an option you can switch to jsoniter-scala.
It supports enums for Scala 2 and Scala 3 out of the box.
Also it has handy derivation of safe and efficient JSON codecs.
You just need to add the required libraries to your dependencies:
libraryDependencies ++= Seq(
// Use the %%% operator instead of %% for Scala.js and Scala Native
"com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-core" % "2.13.5",
// Use the "provided" scope instead when the "compile-internal" scope is not supported
"com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-macros" % "2.13.5" % "compile-internal"
)
And then derive a codec and use it:
import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros._
implicit val codec: JsonValueCodec[Xyz.Xyz] = JsonCodecMaker.make
println(readFromString[Xyz.Xyz]("\"name\""))
BTW, you can run the full code on Scastie: https://scastie.scala-lang.org/Evj718q6TcCZow9lRhKaPw
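If you migrate to a native Scala 3 enum instead of Enumeration, the derivation looks much the same. A minimal sketch, assuming a Scala 3 counterpart of the Xyz enumeration above:
import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros._

enum Xyz:
  case name, link, unknown

given JsonValueCodec[Xyz] = JsonCodecMaker.make

@main def demo(): Unit =
  println(writeToString(Xyz.name)) // should print the case name as a JSON string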

Extending DefaultParamsReadable and DefaultParamsWritable not allowing reading of custom model

Good day,
I have been struggling for a few days to save a custom transformer that is part of a large pipeline of stages. I have a transformer that is completely defined by its params. I have an estimator which in its fit method will generate a matrix and then set the transformer parameters accordingly, so that I can use DefaultParamsReadable and DefaultParamsWritable to take advantage of the serialisation/deserialisation already present in util.ReadWrite.scala.
My summarised code is as follows (includes important aspects):
...
import org.apache.spark.ml.util._
...
// trait to implement in Estimator and Transformer for params
trait NBParams extends Params {
final val featuresCol= new Param[String](this, "featuresCol", "The input column")
setDefault(featuresCol, "_tfIdfOut")
final val labelCol = new Param[String](this, "labelCol", "The labels column")
setDefault(labelCol, "P_Root_Code_Index")
final val predictionsCol = new Param[String](this, "predictionsCol", "The output column")
setDefault(predictionsCol, "NBOutput")
final val ratioMatrix = new Param[DenseMatrix](this, "ratioMatrix", "The transformation matrix")
def getfeaturesCol: String = $(featuresCol)
def getlabelCol: String = $(labelCol)
def getPredictionCol: String = $(predictionsCol)
def getRatioMatrix: DenseMatrix = $(ratioMatrix)
}
// Estimator
class CustomNaiveBayes(override val uid: String, val alpha: Double)
extends Estimator[CustomNaiveBayesModel] with NBParams with DefaultParamsWritable {
def copy(extra: ParamMap): CustomNaiveBayes = {
defaultCopy(extra)
}
def setFeaturesCol(value: String): this.type = set(featuresCol, value)
def setLabelCol(value: String): this.type = set(labelCol, value)
def setPredictionCol(value: String): this.type = set(predictionsCol, value)
def setRatioMatrix(value: DenseMatrix): this.type = set(ratioMatrix, value)
override def transformSchema(schema: StructType): StructType = {...}
override def fit(ds: Dataset[_]): CustomNaiveBayesModel = {
...
val model = new CustomNaiveBayesModel(uid)
model
.setRatioMatrix(ratioMatrix)
.setFeaturesCol($(featuresCol))
.setLabelCol($(labelCol))
.setPredictionCol($(predictionsCol))
}
}
// companion object for Estimator
object CustomNaiveBayes extends DefaultParamsReadable[CustomNaiveBayes]{
override def load(path: String): CustomNaiveBayes = super.load(path)
}
// Transformer
class CustomNaiveBayesModel(override val uid: String)
extends Model[CustomNaiveBayesModel] with NBParams with DefaultParamsWritable {
def this() = this(Identifiable.randomUID("customnaivebayes"))
def copy(extra: ParamMap): CustomNaiveBayesModel = {defaultCopy(extra)}
def setFeaturesCol(value: String): this.type = set(featuresCol, value)
def setLabelCol(value: String): this.type = set(labelCol, value)
def setPredictionCol(value: String): this.type = set(predictionsCol, value)
def setRatioMatrix(value: DenseMatrix): this.type = set(ratioMatrix, value)
override def transformSchema(schema: StructType): StructType = {...}
def transform(dataset: Dataset[_]): DataFrame = {...}
}
// companion object for Transformer
object CustomNaiveBayesModel extends DefaultParamsReadable[CustomNaiveBayesModel]
When I add this Model as part of a pipeline and fit the pipeline, all runs ok. When I save the pipeline, there are no errors. However, when I attempt to load the pipeline in I get the following error:
NoSuchMethodException: $line3b380bcad77e4e84ae25a6bfb1f3ec0d45.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$$$6fa979eb27fa6bf89c6b6d1b271932c$$$$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CustomNaiveBayesModel.read()
To save the pipeline, which includes a number of other transformers related to NLP pre-processing, I run
fittedModelRootCode.write.save("path")
and to then load it (where the failure occurs) I run
import org.apache.spark.ml.PipelineModel
val fittedModelRootCode = PipelineModel.load("path")
The model itself appears to be working well but I cannot afford to retrain the model on a dataset every time I wish to use it. Does anyone have any ideas why even with the companion object, the read() method appears to be unavailable?
Notes:
I am running on Databricks Runtime 8.3 (Spark 3.1.1, Scala 2.12)
My model is in a separate package so is external to Spark
I have reproduced this based on a number of existing examples all of which appear to work fine so I am unsure why my code is failing
I am aware there is a Naive Bayes model available in Spark ML, however, I have been tasked with making a large number of customizations so it is not worth modifying the existing version (plus I would like to learn how to get this right)
Any help would be greatly appreciated.
Since you extend the CustomNaiveBayesModel companion object with DefaultParamsReadable, I think you should use the companion object CustomNaiveBayesModel for loading the model. Here is some code for saving and loading models, and it works properly:
import org.apache.spark.SparkConf
import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.sql.SparkSession
import path.to.CustomNaiveBayesModel
object SavingModelApp extends App {
val spark: SparkSession = SparkSession.builder().config(
new SparkConf()
.setMaster("local[*]")
.setAppName("Test app")
.set("spark.driver.host", "localhost")
.set("spark.ui.enabled", "false")
).getOrCreate()
val training = spark.createDataFrame(Seq(
(0L, "a b c d e spark", 1.0),
(1L, "b d", 0.0),
(2L, "spark f g h", 1.0),
(3L, "hadoop mapreduce", 0.0)
)).toDF("id", "text", "label")
val fittedModelRootCode: PipelineModel = new Pipeline().setStages(Array(new CustomNaiveBayesModel())).fit(training)
fittedModelRootCode.write.save("path/to/model")
val mod = PipelineModel.load("path/to/model")
}
I think your mistake is using PipelineModel.load for loading the concrete model.
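For illustration, a minimal sketch of that suggestion (the path is hypothetical): load the single stage through its own DefaultParamsReadable companion instead:
val single: CustomNaiveBayesModel = CustomNaiveBayesModel.load("path/to/single/stage")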
My environment:
scalaVersion := "2.12.6"
scalacOptions := Seq(
"-encoding", "UTF-8", "-target:jvm-1.8", "-deprecation",
"-feature", "-unchecked", "-language:implicitConversions", "-language:postfixOps")
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.1.1",
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.1.1"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "3.1.1"

Spark Scala - compile errors

I have a script in Scala; when I run it in Zeppelin it works well, but when I try to compile it with sbt, it doesn't work. I believe it is something related to the versions, but I'm not able to identify the problem.
These three variants all return the same error:
val catMap = catDF.rdd.map((row: Row) => (row.getAs[String](1)->row.getAs[Integer](0))).collect.toMap
val catMap = catDF.select($"description", $"id".cast("int")).as[(String, Int)].collect.toMap
val catMap = catDF.rdd.map((row: Row) => (row.getAs[String](1)->row.getAs[Integer](0))).collectAsMap()
Returning an error: "value rdd is not a member of Unit"
val bizCat = bizCatRDD.rdd.map(t => (t.getAs[String](0),catMap(t.getAs[String](1)))).toDF
Returning an error: "value toDF is not a member of org.apache.spark.rdd.RDD[U]"
Scala version: 2.12
Sbt Version: 1.3.13
UPDATE:
The whole class is:
package importer
import org.apache.spark.sql.{Row, SaveMode, SparkSession}
import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import udf.functions._
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.Column
object BusinessImporter extends Importer{
def importa(spark: SparkSession, inputDir: String): Unit = {
import spark.implicits._
val bizDF = spark.read.json(inputDir).cache
// categories
val explode_categories = bizDF.withColumn("categories", explode(split(col("categories"), ",")))
val sort_categories = explode_categories.select(col("categories").as("description"))
.distinct
.coalesce(1)
.orderBy(asc("categories"))
// Create sequence column
val windowSpec = Window.orderBy("description")
val categories_with_sequence = sort_categories.withColumn("id",row_number.over(windowSpec))
val categories = categories_with_sequence.select("id","description")
val catDF = categories.write.insertInto("categories")
// business categories
//val catMap = catDF.rdd.map((row: Row) => (row.getAs[String](1)->row.getAs[Integer](0))).collect.toMap
//val catMap = catDF.select($"description", $"id".cast("int")).as[(String, Int)].collect.toMap
val catMap = catDF.rdd.map((row: Row) => (row.getAs[String](1)->row.getAs[Integer](0))).collectAsMap()
val auxbizCatRDD = bizDF.withColumn("categories", explode(split(col("categories"), ",")))
val bizCatRDD = auxbizCatRDD.select("business_id","categories")
val bizCat = bizCatRDD.rdd.map(t => (t.getAs[String](0),catMap(t.getAs[String](1)))).toDF
bizCat.write.insertInto("business_category")
// Business
val businessDF = bizDF.select("business_id","categories","city","address","latitude","longitude","name","is_open","review_count","stars","state")
businessDF.coalesce(1).write.insertInto("business")
// Hours
val bizHoursDF = bizDF.select("business_id","hours.Sunday","hours.Monday","hours.Tuesday","hours.Wednesday","hours.Thursday","hours.Friday","hours.Saturday")
val bizHoursDF_structs = bizHoursDF
.withColumn("Sunday",struct(
split(col("Sunday"),"-").getItem(0).as("Open"),
split(col("Sunday"),"-").getItem(1).as("Close")))
.withColumn("Monday",struct(
split(col("Monday"),"-").getItem(0).as("Open"),
split(col("Monday"),"-").getItem(1).as("Close")))
.withColumn("Tuesday",struct(
split(col("Tuesday"),"-").getItem(0).as("Open"),
split(col("Tuesday"),"-").getItem(1).as("Close")))
.withColumn("Wednesday",struct(
split(col("Wednesday"),"-").getItem(0).as("Open"),
split(col("Wednesday"),"-").getItem(1).as("Close")))
.withColumn("Thursday",struct(
split(col("Thursday"),"-").getItem(0).as("Open"),
split(col("Thursday"),"-").getItem(1).as("Close")))
.withColumn("Friday",struct(
split(col("Friday"),"-").getItem(0).as("Open"),
split(col("Friday"),"-").getItem(1).as("Close")))
.withColumn("Saturday",struct(
split(col("Saturday"),"-").getItem(0).as("Open"),
split(col("Saturday"),"-").getItem(1).as("Close")))
bizHoursDF_structs.coalesce(1).write.insertInto("business_hour")
}
def singleSpace(col: Column): Column = {
trim(regexp_replace(col, " +", " "))
}
}
sbt file:
name := "yelp-spark-processor"
version := "1.0"
scalaVersion := "2.12.12"
libraryDependencies += "org.apache.spark" % "spark-core_2.12" % "3.0.1"
libraryDependencies += "org.apache.spark" % "spark-sql_2.12" % "3.0.1"
libraryDependencies += "org.apache.spark" % "spark-hive_2.12" % "3.0.1"
Can someone please give me some pointers about what is wrong?
Many thanks,
Xavy
The issue here is that in Scala this line returns type Unit:
val catDF = categories.write.insertInto("categories")
Unit in Scala is like void in Java: it's returned by functions that don't return anything meaningful. So at this point catDF is not a DataFrame and you can't treat it as one. You probably want to keep using categories instead of catDF in the lines that follow.
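For example, a minimal sketch of that fix, assuming the column layout from your snippet (categories has the columns "id" and "description"):
categories.write.insertInto("categories") // side effect only, returns Unit

val catMap = categories.rdd
  .map((row: Row) => row.getAs[String](1) -> row.getAs[Integer](0))
  .collectAsMap()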

scala.meta parent of parent of Defn.Object

Let it be the following hierarchy:
object X extends Y{
...
}
trait Y extends Z {
...
}
trait Z {
def run(): Unit
}
I parse the Scala file containing X and I want to know if its parent or grandparent is Z.
I can check for parent as follows:
Given that x: Defn.Object is the X class I parsed,
x
.children.collect { case c: Template => c }
.flatMap(p => p.children.collectFirst { case c: Init => c })
will give Y.
Question: Any idea how I can get the parent of the parent of X (which is Z in the above example)?
Loading Y (the same way I loaded X) and finding its parent doesn't seem like a good idea, since the above is part of a scan procedure where, among all files under src/main/scala, I'm trying to find all classes which extend Z and implement run. I don't see an easy and performant way to create a graph with all intermediate classes so as to load them in the right order and check their parents.
It seems you want Scalameta to process your sources not syntactically but semantically. Then you need SemanticDB. Probably the most convenient way to work with SemanticDB is Scalafix
rules/src/main/scala/MyRule.scala
import scalafix.v1._
import scala.meta._
class MyRule extends SemanticRule("MyRule") {
override def isRewrite: Boolean = true
override def description: String = "My Rule"
override def fix(implicit doc: SemanticDocument): Patch = {
doc.tree.traverse {
case q"""..$mods object $ename extends ${template"""
{ ..$stats } with ..$inits { $self => ..$stats1 }"""}""" =>
val initsParents = inits.collect(_.symbol.info.map(_.signature) match {
case Some(ClassSignature(_, parents, _, _)) => parents
}).flatten
println(s"object: $ename, parents: $inits, grand-parents: $initsParents")
}
Patch.empty
}
}
in/src/main/scala/App.scala
object X extends Y{
override def run(): Unit = ???
}
trait Y extends Z {
}
trait Z {
def run(): Unit
}
Output of sbt out/compile
object: X, parents: List(Y), grand-parents: List(AnyRef, Z)
build.sbt
name := "scalafix-codegen"
inThisBuild(
List(
//scalaVersion := "2.13.2",
scalaVersion := "2.11.12",
addCompilerPlugin(scalafixSemanticdb),
scalacOptions ++= List(
"-Yrangepos"
)
)
)
lazy val rules = project
.settings(
libraryDependencies += "ch.epfl.scala" %% "scalafix-core" % "0.9.16",
organization := "com.example",
version := "0.1",
)
lazy val in = project
lazy val out = project
.settings(
sourceGenerators.in(Compile) += Def.taskDyn {
val root = baseDirectory.in(ThisBuild).value.toURI.toString
val from = sourceDirectory.in(in, Compile).value
val to = sourceManaged.in(Compile).value
val outFrom = from.toURI.toString.stripSuffix("/").stripPrefix(root)
val outTo = to.toURI.toString.stripSuffix("/").stripPrefix(root)
Def.task {
scalafix
.in(in, Compile)
.toTask(s" --rules=file:rules/src/main/scala/MyRule.scala --out-from=$outFrom --out-to=$outTo")
.value
(to ** "*.scala").get
}
}.taskValue
)
project/plugins.sbt
addSbtPlugin("ch.epfl.scala" % "sbt-scalafix" % "0.9.16")
Other examples:
https://github.com/olafurpg/scalafix-codegen (semantic)
https://github.com/DmytroMitin/scalafix-codegen (semantic)
https://github.com/DmytroMitin/scalameta-demo (syntactic)
Is it possible to using macro to modify the generated code of structural-typing instance invocation? (semantic)
Scala conditional compilation (syntactic)
Macro annotation to override toString of Scala function (syntactic)
How to merge multiple imports in scala? (syntactic)
You can avoid Scalafix, but then you'll have to work with the internals of SemanticDB manually:
import scala.meta._
import scala.meta.interactive.InteractiveSemanticdb
import scala.meta.internal.semanticdb.{ClassSignature, Range, SymbolInformation, SymbolOccurrence, TypeRef}
val source: String =
"""object X extends Y{
| override def run(): Unit = ???
|}
|
|trait Y extends Z
|
|trait Z {
| def run(): Unit
|}""".stripMargin
val textDocument = InteractiveSemanticdb.toTextDocument(
InteractiveSemanticdb.newCompiler(List(
"-Yrangepos"
)),
source
)
implicit class TreeOps(tree: Tree) {
val occurence: Option[SymbolOccurrence] = {
val treeRange = Range(tree.pos.startLine, tree.pos.startColumn, tree.pos.endLine, tree.pos.endColumn)
textDocument.occurrences
.find(_.range.exists(occurrenceRange => treeRange == occurrenceRange))
}
val info: Option[SymbolInformation] = occurence.flatMap(_.symbol.info)
}
implicit class StringOps(symbol: String) {
val info: Option[SymbolInformation] = textDocument.symbols.find(_.symbol == symbol)
}
source.parse[Source].get.traverse {
case tree @ q"""..$mods object $ename extends ${template"""
{ ..$stats } with ..$inits { $self => ..$stats1 }"""}""" =>
val initsParents = inits.collect(_.info.map(_.signature) match {
case Some(ClassSignature(_, parents, _, _)) =>
parents.collect {
case TypeRef(_, symbol, _) => symbol
}
}).flatten
println(s"object = $ename = ${ename.info.map(_.symbol)}, parents = $inits = ${inits.map(_.info.map(_.symbol))}, grand-parents = $initsParents")
}
Output:
object = X = Some(_empty_/X.), parents = List(Y) = List(Some(_empty_/Y#)), grand-parents = List(scala/AnyRef#, _empty_/Z#)
build.sbt
//scalaVersion := "2.13.3"
scalaVersion := "2.11.12"
lazy val scalametaV = "4.3.18"
libraryDependencies ++= Seq(
"org.scalameta" %% "scalameta" % scalametaV,
"org.scalameta" % "semanticdb-scalac" % scalametaV cross CrossVersion.full
)
The SemanticDB code seems to be working in Scala 3:
https://scastie.scala-lang.org/DmytroMitin/3QQwsDG2Rqm71qa6mMMkTw/36 (at Scastie, -Dscala.usejavacp=true didn't help with "object scala.runtime in compiler mirror not found", so I used Coursier to guarantee that scala-library is on the path; locally it works without Coursier)

create an ambiguous low priority implicit

Consider the default codec as offered in the io package.
implicitly[io.Codec].name //res0: String = UTF-8
It's a "low priority" implicit so it's easy to override without ambiguity.
implicit val betterCodec: io.Codec = io.Codec("US-ASCII")
implicitly[io.Codec].name //res1: String = US-ASCII
It's also easy to raise its priority level.
import io.Codec.fallbackSystemCodec
implicit val betterCodec: io.Codec = io.Codec("US-ASCII")
implicitly[io.Codec].name //won't compile: ambiguous implicit values
But can we go in the opposite direction? Can we create a low level implicit that disables ("ambiguates"?) the default? I've been looking at the priority equation and playing around with low priority implicits but I've yet to create something ambiguous to the default.
If I understand correctly, you want to check at compile time that there is a local ("higher-priority") implicit io.Codec, or produce a compile error otherwise. This can be done with macros (using compiler internals).
import scala.language.experimental.macros
import scala.reflect.macros.{contexts, whitebox}
object Macros {
def localImplicitly[A]: A = macro impl[A]
def impl[A: c.WeakTypeTag](c: whitebox.Context): c.Tree = {
import c.universe._
val context = c.asInstanceOf[contexts.Context]
val global: context.universe.type = context.universe
val analyzer: global.analyzer.type = global.analyzer
val callsiteContext = context.callsiteTyper.context
val tpA = weakTypeOf[A]
val localImplicit = new analyzer.ImplicitSearch(
tree = EmptyTree.asInstanceOf[global.Tree],
pt = tpA.asInstanceOf[global.Type],
isView = false,
context0 = callsiteContext.makeImplicit(reportAmbiguousErrors = true),
pos0 = c.enclosingPosition.asInstanceOf[global.Position]
) {
override def searchImplicit(
implicitInfoss: List[List[analyzer.ImplicitInfo]],
isLocalToCallsite: Boolean
): analyzer.SearchResult = {
if (isLocalToCallsite)
super.searchImplicit(implicitInfoss, isLocalToCallsite)
else analyzer.SearchFailure
}
}.bestImplicit
if (localImplicit.isSuccess)
localImplicit.tree.asInstanceOf[c.Tree]
else c.abort(c.enclosingPosition, s"no local implicit $tpA")
}
}
localImplicitly[io.Codec].name // doesn't compile
// Error: no local implicit scala.io.Codec
implicit val betterCodec: io.Codec = io.Codec("US-ASCII")
localImplicitly[Codec].name // US-ASCII
import io.Codec.fallbackSystemCodec
localImplicitly[Codec].name // UTF-8
import io.Codec.fallbackSystemCodec
implicit val betterCodec: io.Codec = io.Codec("US-ASCII")
localImplicitly[Codec].name // doesn't compile
//Error: ambiguous implicit values:
// both value betterCodec in object App of type => scala.io.Codec
// and lazy value fallbackSystemCodec in trait LowPriorityCodecImplicits of type => scala.io.Codec
// match expected type scala.io.Codec
Tested in 2.13.0.
libraryDependencies ++= Seq(
scalaOrganization.value % "scala-reflect" % scalaVersion.value,
scalaOrganization.value % "scala-compiler" % scalaVersion.value
)
Still working in Scala 2.13.10.
Scala 3 implementation
import scala.quoted.{Expr, Quotes, Type, quotes}
import dotty.tools.dotc.typer.{Implicits => dottyImplicits}
inline def localImplicitly[A]: A = ${impl[A]}
def impl[A: Type](using Quotes): Expr[A] = {
  import quotes.reflect.*
  given c: dotty.tools.dotc.core.Contexts.Context =
    quotes.asInstanceOf[scala.quoted.runtime.impl.QuotesImpl].ctx
  val typer = c.typer
  val search = new typer.ImplicitSearch(
    TypeRepr.of[A].asInstanceOf[dotty.tools.dotc.core.Types.Type],
    dotty.tools.dotc.ast.tpd.EmptyTree,
    Position.ofMacroExpansion.asInstanceOf[dotty.tools.dotc.util.SourcePosition].span
  )
  def eligible(contextual: Boolean): List[dottyImplicits.Candidate] =
    if contextual then
      if c.gadt.isNarrowing then
        dotty.tools.dotc.core.Contexts.withoutMode(dotty.tools.dotc.core.Mode.ImplicitsEnabled) {
          c.implicits.uncachedEligible(search.wildProto)
        }
      else c.implicits.eligible(search.wildProto)
    else search.implicitScope(search.wildProto).eligible
  val searchImplicitMethod = classOf[typer.ImplicitSearch]
    .getDeclaredMethod("searchImplicit", classOf[List[dottyImplicits.Candidate]], classOf[Boolean])
  searchImplicitMethod.setAccessible(true)
  def implicitSearchResult(contextual: Boolean) =
    searchImplicitMethod.invoke(search, eligible(contextual), contextual)
      .asInstanceOf[dottyImplicits.SearchResult]
      .tree.asInstanceOf[ImplicitSearchResult]
  implicitSearchResult(true) match {
    case success: ImplicitSearchSuccess => success.tree.asExprOf[A]
    case failure: ImplicitSearchFailure =>
      report.errorAndAbort(s"no local implicit ${Type.show[A]}: ${failure.explanation}")
  }
}
Scala 3.2.0.
Sort of, yes.
You can do this by creating a 'newtype', i.e. a type that is simply a proxy to io.Codec and wraps the instance. This means that you also need to change all your implicit arguments from io.Codec to CodecWrapper, which may not be possible.
trait CodecWrapper {
  def orphan: io.Codec
}

object CodecWrapper {
  /* because it's in the companion, this will have the highest implicit resolution priority. */
  implicit def defaultInstance: CodecWrapper =
    new CodecWrapper {
      def orphan: io.Codec = io.Codec("UTF-8") // your default implementation here
    }
}
import io.Codec.fallbackSystemCodec
implicitly[CodecWrapper].orphan // io.Codec we defined above - no ambiguity
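A minimal usage sketch (asciiWrapper is a hypothetical local instance): a locally supplied wrapper can still replace the companion default where a different codec is needed.
implicit val asciiWrapper: CodecWrapper = new CodecWrapper {
  def orphan: io.Codec = io.Codec("US-ASCII")
}
implicitly[CodecWrapper].orphan.name // US-ASCII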