I'm using sbt to build part of the RISC-V BOOM from source, but sbt complains that it "could not find implicit value for parameter valName: freechips.rocketchip.diplomacy.ValName". The detailed error message is as follows:
[error] F:\hiMCU\my_proj\src\main\scala\freechips\rocketchip\tile\BaseTile.scala:170:42: could not find implicit value for parameter valName: freechips.rocketchip.diplomacy.ValName
[error] Error occurred in an application involving default arguments.
[error] protected val tlMasterXbar = LazyModule(new TLXbar)
The code sbt complains about is shown below:
abstract class BaseTile private (val crossing: ClockCrossingType, q: Parameters)
extends LazyModule()(q)
with CrossesToOnlyOneClockDomain
with HasNonDiplomaticTileParameters
{
// Public constructor alters Parameters to supply some legacy compatibility keys
def this(tileParams: TileParams, crossing: ClockCrossingType, lookup: LookupByHartIdImpl, p: Parameters) = {
this(crossing, p.alterMap(Map(
TileKey -> tileParams,
TileVisibilityNodeKey -> TLEphemeralNode()(ValName("tile_master")),
LookupByHartId -> lookup
)))
}
def module: BaseTileModuleImp[BaseTile]
def masterNode: TLOutwardNode
def slaveNode: TLInwardNode
def intInwardNode: IntInwardNode // Interrupts to the core from external devices
def intOutwardNode: IntOutwardNode // Interrupts from tile-internal devices (e.g. BEU)
def haltNode: IntOutwardNode // Unrecoverable error has occurred; suggest reset
def ceaseNode: IntOutwardNode // Tile has ceased to retire instructions
def wfiNode: IntOutwardNode // Tile is waiting for an interrupt
protected val tlOtherMastersNode = TLIdentityNode()
protected val tlSlaveXbar = LazyModule(new TLXbar)
protected val tlMasterXbar = LazyModule(new TLXbar)
protected val intXbar = LazyModule(new IntXbar)
....
}
The LazyModule object is defined as follows:
object LazyModule
{
protected[diplomacy] var scope: Option[LazyModule] = None
private var index = 0
def apply[T <: LazyModule](bc: T)(implicit valName: ValName, sourceInfo: SourceInfo): T = {
// Make sure the user put LazyModule around modules in the correct order
// If this require fails, probably some grandchild was missing a LazyModule
// ... or you applied LazyModule twice
require (scope.isDefined, s"LazyModule() applied to ${bc.name} twice ${sourceLine(sourceInfo)}")
require (scope.get eq bc, s"LazyModule() applied to ${bc.name} before ${scope.get.name} ${sourceLine(sourceInfo)}")
scope = bc.parent
bc.info = sourceInfo
if (!bc.suggestedNameVar.isDefined) bc.suggestName(valName.name)
bc
}
}
I think sbt should be able to find a val of type freechips.rocketchip.diplomacy.ValName, but it can't find one.
You need to have an object of type ValName in the scope where your LazyModules are instantiated:
implicit val valName = ValName("MyXbars")
For more details on Scala's implicits, please see https://docs.scala-lang.org/tutorials/tour/implicit-parameters.html
You generally shouldn't need to create a ValName manually; the Scala compiler can materialize one automatically based on the name of the val you're assigning the LazyModule to. You didn't include your imports in your example, but can you try importing ValName?
import freechips.rocketchip.diplomacy.ValName
In most rocket-chip code, this is done via a wildcard import of everything in the diplomacy package:
import freechips.rocketchip.diplomacy._
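As a minimal sketch (my own example, not your BaseTile, and assuming your rocket-chip version keeps Parameters, TLXbar and the diplomacy classes at these paths), the implicit ValName is then derived from the name of the val on the left-hand side:
import freechips.rocketchip.config.Parameters
import freechips.rocketchip.diplomacy._
import freechips.rocketchip.tilelink.TLXbar
// Hypothetical wrapper: with the wildcard import in scope, the compiler
// materializes ValName("tlMasterXbar") from the val's name, so no
// explicit `implicit val valName` is needed.
class MyXbarWrapper(implicit p: Parameters) extends LazyModule {
  protected val tlMasterXbar = LazyModule(new TLXbar)
  lazy val module = new LazyModuleImp(this) {}
}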
Scala application use case:
We have a Scala-based module that reads data from a global cache (Redis) and saves it into a local cache (Caffeine LoadingCache). Since we want this data to be refreshed asynchronously, we are using LoadingCache with refreshAfterWrite set to a refresh window of 2.second.
Question:
Not exactly a question, but I need help with the following code, which produces a warning and two compile-time errors.
Warning: for the build method, it warns "Implements member load in CacheLoader (com.github.benmanes.caffeine.cache)".
Compile time error 1: type arguments [Int,redisToCaffeine.DataObject] conform to the bounds of none of the overloaded alternatives of value build: [K1 <: Object, V1 <: Object](x$1: com.github.benmanes.caffeine.cache.CacheLoader[_ >: K1, V1])com.github.benmanes.caffeine.cache.LoadingCache[K1,V1] <and> [K1 <: Object, V1 <: Object]()com.github.benmanes.caffeine.cache.Cache[K1,V1] .build[Int, DataObject](key => loader(key))
Compile time error 2: wrong number of type parameters for overloaded method value build with alternatives: [K1 <: Object, V1 <: Object](x$1: com.github.benmanes.caffeine.cache.CacheLoader[_ >: K1, V1])com.github.benmanes.caffeine.cache.LoadingCache[K1,V1] <and> [K1 <: Object, V1 <: Object]()com.github.benmanes.caffeine.cache.Cache[K1,V1] .build[Int, DataObject](key => loader(key))
Code:
package redisToCaffeine
import scala.concurrent.duration._
import com.github.benmanes.caffeine.cache.{ CacheLoader, Caffeine, LoadingCache }
import com.twitter.finagle.stats.InMemoryStatsReceiver
import javax.annotation.Nullable
import redisToCaffeine.CacheImplicits.StatsReportingCaffeineCache
class LocalDealService {
class DataObject(data: String) {
override def toString: String = {
"[ 'data': '" + this.data + "' ]"
}
}
val defaultCacheExpireDuration: FiniteDuration = 2.second
val stats: InMemoryStatsReceiver = new InMemoryStatsReceiver
// loader helper
@Nullable
@throws[Exception]
protected def loader(key: Int): DataObject = { // this will replace to read the data from Redis Cache
new DataObject(s"LOADER_HELPER_$key")
}
def initCache(maximumSize: Int = 5): LoadingCache[Int, DataObject] = {
Caffeine
.newBuilder()
.maximumSize(maximumSize)
.refreshAfterWrite(defaultCacheExpireDuration.length, defaultCacheExpireDuration.unit)
.recordStats()
.build[Int, DataObject](key => loader(key))
.enableCacheStatsReporting("deal-service", stats)
}
}
I'm new to both Scala and Caffeine, so I'm not sure what I'm doing wrong; I tried the different ways mentioned here and here to write loader, but nothing worked (they are mainly in Java). A little research into Scala bounds didn't help either. Kindly help.
It depends on which Scala version is being used here.
Although Scala functions (2.12 and later) support conversion to Java SAM types, the conversion is applied only when explicitly required. So if you are using Scala 2.12 or later, you can explicitly ask the compiler to convert the Scala function to a SAM:
Also, don't use Int as the key type for the cache. Although it will work because of the implicit conversion to Integer, it is not good practice.
def initCache(maximumSize: Int = 5): LoadingCache[Integer, DataObject] = {
Caffeine
.newBuilder()
.maximumSize(maximumSize)
.refreshAfterWrite(defaultCacheExpireDuration.length, defaultCacheExpireDuration.unit)
.recordStats()
.build[Integer, DataObject]((key => loader(key)): CacheLoader[Integer, DataObject])
.enableCacheStatsReporting("deal-service", stats)
}
And if you are dealing with an older Scala version, then just forget that SAM conversion exists and do it the old-fashioned way:
def initCache(maximumSize: Int = 5): LoadingCache[Integer, DataObject] = {
Caffeine
.newBuilder()
.maximumSize(maximumSize)
.refreshAfterWrite(defaultCacheExpireDuration.length, defaultCacheExpireDuration.unit)
.recordStats()
.build[Integer, DataObject](new CacheLoader[Integer, DataObject] {
override def load(key: Integer): DataObject = loader(key)
})
.enableCacheStatsReporting("deal-service", stats)
}
I have a situation where I'm trying to use implicit resolution on a singleton type. This works perfectly fine if I know that singleton type at compile time:
object Main {
type SS = String with Singleton
trait Entry[S <: SS] {
type out
val value: out
}
implicit val e1 = new Entry["S"] {
type out = Int
val value = 3
}
implicit val e2 = new Entry["T"] {
type out = String
val value = "ABC"
}
def resolve[X <: SS](s: X)(implicit x: Entry[X]): x.value.type = {
x.value
}
def main(args: Array[String]): Unit = {
resolve("S") //implicit found! No problem
}
}
However, if I don't know this type at compile time, then I run into issues.
def main(args: Array[String]): Unit = {
val string = StdIn.readLine()
resolve(string) //Can't find an implicit because the singleton type isn't known at compile time.
}
Is there any way I can get around this? Maybe some method that takes a String and returns the singleton type of that string?
def getSingletonType[T <: SS](string: String): T = ???
Then maybe I could do
def main(args: Array[String]): Unit = {
val string = StdIn.readLine()
resolve(getSingletonType(string))
}
Or is this just not possible? Maybe you can only do this sort of thing if you know all of the information at compile-time?
If you knew about all possible implementations of Entry at compile time - which would only be possible if it were sealed - then you could use a macro to create a map or partial function String -> Entry[_].
Since this is open to extension, I'm afraid that at best some runtime reflection would have to scan the whole classpath to find all possible implementations.
But even then you would have to somehow embed the String literal into each implementation, because JVM bytecode knows nothing about the mapping between singleton types and implementations - only the Scala compiler does. You would then use that to check whether, among all implementations, there is one (and exactly one) that matches your value. With implicits, compilation fails if two of them appear in the same scope at once, but you can have more than one implementation as long as they don't appear together in the same scope; runtime reflection would be global, so it wouldn't be able to avoid such conflicts.
So no, there is no good way to make this compile-time dispatch dynamic. You could create such a dispatch yourself, e.g. by writing a Map[String, Entry[_]] by hand and using its get function to handle missing pieces, as sketched below.
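A minimal sketch of that hand-written dispatch, assuming it lives inside your Main object (or imports Main._) so that SS, e1 and e2 are in scope; note that the result type necessarily widens to Any:
// Illustrative only: every implementation has to be registered by hand,
// so nothing is checked at compile time any more.
val dispatch: Map[String, Entry[_ <: SS]] = Map(
  "S" -> e1,
  "T" -> e2
)
def resolveDynamic(s: String): Option[Any] =
  dispatch.get(s).map(_.value)
resolveDynamic("S") // Some(3)
resolveDynamic("X") // None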
Normally implicits are resolved at compile time, but val string = StdIn.readLine() becomes known only at runtime. In principle, you can postpone implicit resolution until runtime, but then you can only apply the results of that resolution at runtime, not at compile time (static types etc.):
object Entry {
implicit val e1 = ...
implicit val e2 = ...
}
import scala.reflect.runtime.universe._
import scala.reflect.runtime
import scala.tools.reflect.ToolBox
val toolbox = ToolBox(runtime.currentMirror).mkToolBox()
def resolve(s: String): Any = {
val typ = appliedType(
typeOf[Entry[_]].typeConstructor,
internal.constantType(Constant(s))
)
val instanceTree = toolbox.inferImplicitValue(typ, silent = false)
val instance = toolbox.eval(toolbox.untypecheck(instanceTree)).asInstanceOf[Entry[_]]
instance.value
}
resolve("S") // 3
val string = StdIn.readLine()
resolve(string)
// 3 if you enter S
// ABC if you enter T
// "scala.tools.reflect.ToolBoxError: implicit search has failed" otherwise
Please notice that I put the implicits into the companion object of the type class in order to make them available in the implicit scope, and therefore in the toolbox scope. Otherwise, the code needs to be modified slightly:
object EntryImplicits {
implicit val e1 = ...
implicit val e2 = ...
}
// val instanceTree = toolbox.inferImplicitValue(typ, silent = false)
// should be replaced with
val instanceTree =
q"""
import path.to.EntryImplicits._
implicitly[$typ]
"""
In your code import path.to.EntryImplicits._ is import Main._.
OUTLINE
I have an API that looks something like this:
package com.example
object ExternalApi {
def create[T <: SpecialElement](elem: T): TypeConstructor[T] =
TypeConstructor(elem)
def create1[T <: SpecialElement](elem: T): TypeConstructor[T] =
TypeConstructor(elem)
def create2[T <: SpecialElement](elem: T): TypeConstructor[T] =
TypeConstructor(elem)
//...
}
object MyApi {
def process[T <: TypeConstructor[_ <: SpecialElement]](
l: T,
metadata: List[String]): T = {
println("I've been called!")
//do some interesting stuff with the List's type parameter here
l
}
}
case class TypeConstructor[E](elem: E)
trait SpecialElement
The ExternalApi (which is actually external to my lib, so no modifying that) has a series of calls that I'd like to automatically wrap with MyApi.process calls, with the metadata argument derived from the final type of T.
To illustrate, the calls to be wrapped could have any form, including nested calls and calls within other AST subtree types (such as Blocks), e.g.:
package com.example.test
import com.example.{ExternalApi, SpecialElement}
object ApiPluginTest extends App {
//one possible form
val targetList = ExternalApi.create(Blah("name"))
//and another
ExternalApi.create2(ExternalApi.create1(Blah("sth")).elem)
//and yet another
val t = {
val sub1 = ExternalApi.create(Blah("anything"))
val sub2 = ExternalApi.create1(sub1.elem)
sub2
}
}
case class Blah(name: String) extends SpecialElement
Since compiler plugins handle matching structures within ASTs recursively "for free", I've decided to go with them.
However, because I need to match a specific type signature, the plugin runs after the typer phase.
Here's the code of the PluginComponent:
package com.example.plugin
import com.example.{SpecialElement, TypeConstructor}
import scala.tools.nsc.Global
import scala.tools.nsc.plugins.PluginComponent
import scala.tools.nsc.transform.Transform
class WrapInApiCallComponent(val global: Global)
extends PluginComponent
with Transform {
protected def newTransformer(unit: global.CompilationUnit) =
WrapInApiCallTransformer
val runsAfter: List[String] = List("typer") //since we need the type
val phaseName: String = WrapInApiCallComponent.Name
import global._
object WrapInApiCallTransformer extends Transformer {
override def transform(tree: global.Tree) = {
val transformed = super.transform(tree)
transformed match {
case call @ Apply(_, _) =>
if (call.tpe != null && call.tpe.finalResultType <:< typeOf[
TypeConstructor[_ <: SpecialElement]]) {
println(s"Found relevant call $call")
val typeArguments = call.tpe.typeArgs.map(_.toString).toList
val listSymbOf = symbolOf[List.type]
val wrappedFuncSecondArgument =
q"$listSymbOf.apply(..$typeArguments)"
val apiObjSymbol = symbolOf[com.example.MyApi.type]
val wrappedCall =
q"$apiObjSymbol.process[${call.tpe.finalResultType}]($call, $wrappedFuncSecondArgument)"
//explicit typing, otherwise later phases throw NPEs etc.
val ret = typer.typed(wrappedCall)
println(showRaw(ret))
println("----")
ret
} else {
call
}
case _ => transformed
}
}
}
}
object WrapInApiCallComponent {
val Name = "api_embed_component"
}
This seems to resolve identifiers, as well as types, correctly, outputting e.g.:
Apply(TypeApply(Select(TypeTree().setOriginal(Ident(com.example.MyApi)), TermName("process")), List(TypeTree())), List(Apply(TypeApply(Select(Select(Select(Ident(com), com.example), com.example.MyApi), TermName("create")), List(TypeTree())), List(Apply(Select(Ident(com.example.test.Blah), TermName("apply")), List(Literal(Constant("name")))))), Apply(TypeApply(Select(TypeTree().setOriginal(Ident(scala.collection.immutable.List)), TermName("apply")), List(TypeTree())), List(Literal(Constant("com.example.test.Blah"))))))
Unfortunately, I get an error during compilation starting with:
scala.reflect.internal.FatalError:
[error]
[error] Unexpected tree in genLoad: com.example.MyApi.type/class scala.reflect.internal.Trees$TypeTree at: RangePosition([projectpath]/testPluginAutoWrap/compiler_plugin_test/src/main/scala/com/example/test/ApiPluginTest.scala, 108, 112, 112)
[error] while compiling: [projectpath]/testPluginAutoWrap/compiler_plugin_test/src/main/scala/com/example/test/ApiPluginTest.scala
[error] during phase: jvm
[error] library version: version 2.12.4
[error] compiler version: version 2.12.4
[error] reconstructed args: -Xlog-implicits -classpath [classpath here]
[error]
[error] last tree to typer: TypeTree(class String)
[error] tree position: line 23 of [projectpath]/testPluginAutoWrap/compiler_plugin_test/src/main/scala/com/example/test/ApiPluginTest.scala
QUESTION
It looks like I'm screwing something up with the type definitions, but what is it?
Specifically:
How do I correctly wrap every ExternalApi.createX call with an MyApi.process call, constrained by the requirements provided above?
NOTES
Given the amount of boilerplate required, I've set up a complete example project. It's available here.
The answer does not have to define a compiler plugin. If you're able to cover all the relevant calls with a macro, that is fine as well.
Originally the wrapped call was to something like: def process[T <: TypeConstructor[_ <: SpecialElement] : TypeTag](l: T): T, the setup here is actually a workaround. So if you are able to generate a wrapped call of this type, i.e. one that includes a runtime TypeTag[T], that's fine as well.
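For reference, a hedged sketch of what that TypeTag-based signature could look like; deriving the metadata from typeArgs is my assumption, not your actual implementation:
import scala.reflect.runtime.universe._
import com.example.{SpecialElement, TypeConstructor}
// Hypothetical object, not part of the example project.
object MyTaggedApi {
  def process[T <: TypeConstructor[_ <: SpecialElement] : TypeTag](l: T): T = {
    // derive the metadata from the reified type instead of passing a List[String]
    val metadata = typeOf[T].typeArgs.map(_.toString)
    println(s"I've been called with $metadata")
    l
  }
}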
I ran into a weird and puzzling NPE. Consider the following use case:
Writing a generic algorithm (binary search in my case), where you'd want to generalize the type, but need some extras.
E.g. maybe you want to cut a range in half, and for that you need some generic "consts" such as two.
The Integral typeclass is not enough, since it only offers one and zero, so I came up with:
trait IntegralConsts[N] {
val tc: Integral[N]
val two = tc.plus(tc.one,tc.one)
val four = tc.plus(two,two)
}
object IntegralConsts {
implicit def consts[N : Integral] = new IntegralConsts[N] {
override val tc = implicitly[Integral[N]]
}
}
and used it as follows:
def binRangeSearch[N : IntegralConsts]( /* irrelevant args */ ) = {
val consts = implicitly[IntegralConsts[N]]
val math = consts.tc
// some irrelevant logic, which contain expressions like:
val halfRange = math.quot(range, consts.two)
// ...
}
At runtime, this throws a puzzling NullPointerException on the line val two = tc.plus(tc.one,tc.one).
As a workaround, I just added lazy to the typeclass' vals, and it all worked out:
trait IntegralConsts[N] {
val tc: Integral[N]
lazy val two = tc.plus(tc.one,tc.one)
lazy val four = tc.plus(two,two)
}
But I would like to know why I got this weird NPE. The initialization order should be well defined, and tc should already have been initialized by the time val two is reached ...
Initialization order should be known, and tc should have already been
instantiated when reaching val two
Not according to the specification. What really happens is that while the anonymous class is being constructed, IntegralConsts[N] is initialized first, and only then is the overriding tc evaluated in the derived anonymous class, which is why you're experiencing the NullPointerException.
The specification section §5.1 (Templates) says:
Template Evaluation
Consider a template sc with mt1 with ... with mtn { stats }.
If this is the template of a trait then its mixin-evaluation consists of an evaluation of the statement sequence stats.
If this is not a template of a trait, then its evaluation consists of the following steps:
First, the superclass constructor sc is evaluated.
Then, all base classes in the template's linearization up to the template's superclass denoted by sc are mixin-evaluated. Mixin-evaluation happens in reverse order of occurrence in the linearization.
Finally the statement sequence stats is evaluated.
We can verify this by looking at the compiled code with -Xprint:typer:
final class $anon extends AnyRef with IntegralConsts[N] {
def <init>(): <$anon: IntegralConsts[N]> = {
$anon.super.<init>();
()
};
private[this] val tc: Integral[N] = scala.Predef.implicitly[Integral[N]](evidence$1);
override <stable> <accessor> def tc: Integral[N] = $anon.this.tc
};
We see that first, super.<init> is invoked, and only then is the val tc initialized.
Adding to that, let's look at "Why is my abstract or overridden val null?":
A ‘strict’ or ‘eager’ val is one which is not marked lazy.
In the absence of “early definitions” (see below), initialization of
strict vals is done in the following order:
Superclasses are fully initialized before subclasses.
Otherwise, in declaration order.
Naturally when a val is overridden, it is not initialized more than once ... This is not the case: an overridden val will appear to be null during the construction of superclasses, as will an abstract val.
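A tiny standalone demo of the same effect (my own example, not from the linked FAQ):
// The trait's initializer runs before the anonymous subclass assigns x,
// so x is still null when len is computed.
trait A { val x: String; val len = x.length }
object Demo extends App {
  new A { val x = "hi" } // throws NullPointerException during construction
}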
We can also verify this by passing the -Xcheckinit flag to scalac:
> set scalacOptions := Seq("-Xcheckinit")
[info] Defining *:scalacOptions
[info] The new value will be used by compile:scalacOptions
[info] Reapplying settings...
[info] Set current project to root (in build file:/C:/)
> console
> :pa // paste code here
defined trait IntegralConsts
defined module IntegralConsts
binRangeSearch: [N](range: N)(implicit evidence$2: IntegralConsts[N])Unit
scala> binRangeSearch(100)
scala.UninitializedFieldError: Uninitialized field: <console>: 16
at IntegralConsts$$anon$1.tc(<console>:16)
at IntegralConsts$class.$init$(<console>:9)
at IntegralConsts$$anon$1.<init>(<console>:15)
at IntegralConsts$.consts(<console>:15)
at .<init>(<console>:10)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
As you've noted, since this is an anonymous class, adding lazy to the definition avoids the initialization quirk altogether. An alternative would be to use an early definition:
object IntegralConsts {
implicit def consts[N : Integral] = new {
override val tc = implicitly[Integral[N]]
} with IntegralConsts[N]
}
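Early definitions are deprecated in Scala 2.13 (and dropped in Scala 3), so if you're free to change IntegralConsts, a third option is to avoid the abstract val entirely and take the Integral as a constructor parameter, which is initialized before the class body runs. A sketch under that assumption:
// Sketch: tc is a constructor parameter, so it is already set
// by the time `two` and `four` are evaluated.
class IntegralConsts[N](val tc: Integral[N]) {
  val two  = tc.plus(tc.one, tc.one)
  val four = tc.plus(two, two)
}
object IntegralConsts {
  implicit def consts[N](implicit ev: Integral[N]): IntegralConsts[N] =
    new IntegralConsts(ev)
}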
I am currently reading Scala in Depth and I'm struggling with a point about existential types.
I'm using this source: https://github.com/jsuereth/scala-in-depth-source/blob/master/chapter6/existential-types/existential.scala
with OpenJDK 7 and Scala 2.10.3.
The following instructions give me an error:
val x = new VariableStore[Int](12)
val d = new Dependencies {}
val t = x.observe(println)
d.addHandle(t)
<console>:14: error: method addHandle in trait Dependencies cannot be accessed in types.Dependencies
Access to protected method addHandle not permitted because
enclosing object $iw is not a subclass of
trait Dependencies in package types where target is defined
d.addHandle(t)
^
And I can't figure out why or how I arrive at this error.
Edit 1:
I added the following code from Kihyo's answer:
class MyDependencies extends Dependencies {
override def addHandle(handle: Ref) = super.addHandle(handle)
}
val x = new VariableStore[Int](12)
val d = new MyDependencies
val t = x.observe(println)
d.addHandle(t) //compiles
It makes addHandle public instead of protected.
Now I have the following error message:
type mismatch; found : x.Handle (which expands to) x.HandleClass required: d.Ref (which
expands to) x.Handle forSome { val x: sid.types.obs.Observable }
HandleClass is a Handle, and Ref is a Handle of any Observable (if I get it right), so the value t should be accepted as a correct type for the expression.
In the trait Dependencies, addHandle is defined like this:
protected def addHandle(handle : Ref) : Unit
protected means that only subclasses can access this method, and that's why you get the error (which basically tells you exactly that).
Your code can work if you create a subclass that makes addHandle public:
class MyDependencies extends Dependencies {
override def addHandle(handle: Ref) = super.addHandle(handle)
}
val x = new VariableStore[Int](12)
val d = new MyDependencies
val t = x.observe(println)
d.addHandle(t) //compiles
But I have no idea about that example and what you want to do with it.
Edit 1:
I get the same error as you, but I can't explain why. It worked for me when I extended App instead of having a main method:
object TestObs extends App {
val x = new VariableStore[Int](12)
val d = new MyDependencies
val t = x.observe(println)
d.addHandle(t)
}
Maybe someone else has some insight on this.