In Scala 3, is it possible to use the declared type of an object at runtime?

In Scala 2, most generic type information of an object is erased at runtime. At the moment, all three binary execution environments (JVM, JavaScript, and LLVM) abide by this behaviour; they differ only in minor details of their metadata formats.
In the rare case where this incurs critical data loss, or triggers a binary error, a mechanism can be used to preserve the declared type information in an adjoint data structure. The following code gives a short example of such a data structure in Scala 2:
import scala.reflect.runtime.universe
import scala.collection.concurrent.TrieMap
import scala.language.implicitConversions

case class Unerase[T](self: T)(
    implicit
    ev: universe.TypeTag[T]
) {
  import Unerase._
  cache += {
    val inMemoryId = System.identityHashCode(this)
    inMemoryId -> ev
  }
}

object Unerase {
  lazy val cache = TrieMap.empty[Int, universe.TypeTag[_]]

  def get[T](v: T): Option[universe.TypeTag[T]] = {
    val inMemoryId = System.identityHashCode(v)
    cache.get(inMemoryId).map { tt =>
      tt.asInstanceOf[universe.TypeTag[T]]
    }
  }

  implicit def unbox[T](v: Unerase[T]): T = v.self

  implicit def box[T](v: T)(
      implicit
      ev: universe.TypeTag[T]
  ): Unerase[T] = Unerase(v)
}
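For illustration (this usage snippet is mine, not part of the original question), the recorded tag can be read back at runtime from the side cache:

val xs = Unerase(List(1, 2, 3))                 // records TypeTag[List[Int]] in the cache
Unerase.get(xs).foreach(tt => println(tt.tpe))  // prints List[Int]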
Any variable declared with type Unerase[T] instead of T is guaranteed to have its full declared type visible at runtime. Unfortunately, this example no longer works in Scala 3:
implicitly[TypeTag[Int]] // works in Scala 2
summon[Type[Int]] // doesn't work in Scala 3: No given instance of type quoted.Quotes was found for parameter x$1 ...
Is there a mechanism in Scala 3 that I can use to achieve the same thing and fully mitigate type erasure?
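One possible direction (an assumption on my part, not something the question confirms) is the third-party izumi-reflect library (e.g. "dev.zio" %% "izumi-reflect"), whose Tag / LightTypeTag act as a Scala 3 stand-in for TypeTag. A minimal sketch of the same adjoint-cache idea, assuming that library:

import izumi.reflect.Tag
import izumi.reflect.macrortti.LightTypeTag
import scala.collection.concurrent.TrieMap

// Same trick as above, keyed by the wrapper's identity hash code.
case class Unerase3[T](self: T)(using ev: Tag[T]):
  Unerase3.cache += System.identityHashCode(this) -> ev.tag

object Unerase3:
  val cache = TrieMap.empty[Int, LightTypeTag]
  def get(v: Unerase3[?]): Option[LightTypeTag] =
    cache.get(System.identityHashCode(v))

A LightTypeTag preserves the full declared generic type for comparison and printing, though unlike a Scala 2 TypeTag it is not backed by a runtime mirror, so whether this counts as "fully" mitigating erasure depends on the use case.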

Related

In Scala 2.13, why is it possible to summon an unqualified TypeTag for an abstract type?

Consider the following code:
import scala.reflect.api.Universe

object UnqualifiedTypeTag {

  val RuntimeUniverse = scala.reflect.runtime.universe

  trait HasUniverse {
    val universe: Universe with Singleton

    def uType: RuntimeUniverse.TypeTag[universe.type] = implicitly
  }

  object HasRuntime extends HasUniverse {
    override val universe: RuntimeUniverse.type = RuntimeUniverse
  }

  def main(args: Array[String]): Unit = {
    println(HasRuntime.uType)
  }
}
Ideally, this part of the program should yield TypeTag[HasRuntime.universe.type], or at least fail to compile, because implicitly can only see universe.type, which is not known at the call site (in contrast, WeakTypeTag[universe.type] should work).
Surprisingly, the above program yields TypeTag[HasUniverse.this.universe.type]. This apparently breaks several contracts, namely:
TypeTag cannot be initialised from abstract types, unlike WeakTypeTag
TypeTag can always be erased to a Class
What's the purpose of this design, and what contract does TypeTag provide? In addition, is this the reason why ClassTag was supposed to be superseded after Scala 2.11 but was instead kept as-is until now?
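For contrast, a small sketch (mine, not from the question) of the usually expected behaviour for an ordinary abstract type member: only a WeakTypeTag can be materialised for it.

import scala.reflect.runtime.universe._

trait HasElem {
  type Elem
  // def elemTag: TypeTag[Elem] = implicitly      // does not compile: No TypeTag available for Elem
  def weakElemTag: WeakTypeTag[Elem] = implicitly // compiles; Elem is recorded as a "free type"
}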

Scala resolving Class/Type at runtime + type class constraint

I have a generic function that requires an implicit HasMoveCapability instance for the type T (type class pattern):
import scala.reflect.runtime.universe.{TypeTag, typeTag}

trait HasMoveCapability[T]

def doLogic[T: TypeTag: HasMoveCapability](): Unit = println(typeTag[T].tpe)
Then I have these two classes, each with an implicit instance of HasMoveCapability:
case class Bird()
object Bird {
  implicit val hasMoveCapability = new HasMoveCapability[Bird] {}
}

case class Lion()
object Lion {
  implicit val hasMoveCapability = new HasMoveCapability[Lion] {}
}
My question is the following:
I need to resolve the type (Lion or Bird) at runtime, depending on an argument, and call the function doLogic with the right type.
I tried
val input: String = "bird" // known at runtime

val resolvedType: TypeTag[_] = input match {
  case "bird" => typeTag[Bird]
  case "lion" => typeTag[Lion]
}

doLogic()(resolvedType) // doesn't compile:
// `Unspecified value parameters: hasMoveCapability$T$1: HasMoveCapability[NotInferredT]`
What I would like to do is something like:
val resolvedType: TypeTag[_: HasMoveCapability] = input match{...}
The workaround that I am using so far is to call the function in the pattern match:
input match {
  case "bird" => doLogic[Bird]
  case "lion" => doLogic[Lion]
}
But with many functions, the pattern match gets duplicated and becomes hard to maintain.
I am open to changing the design if you have any suggestions :D
You should describe your problem better. Currently your type class HasMoveCapability doesn't seem to do anything useful; what you are doing looks like a roundabout way to transform the string "bird" into "Bird" and "lion" into "Lion".
If you control the code of doLogic, you don't seem to need TypeTag. TypeTag / ClassTag is a way to persist information from compile time to runtime. You seem to want to go in the reverse direction.
Type classes / implicits are resolved at compile time. You can't resolve something at compile time based on runtime information (there is no time machine taking you from the future, i.e. runtime, to the past, i.e. compile time). Most probably you need ordinary pattern matching rather than type classes (TypeTag, HasMoveCapability).
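As an aside (this sketch is mine, not part of the original answer), the duplication the question complains about can be reduced by writing the pattern match once and bundling the evidence for the resolved type; Resolved and resolve below are hypothetical names, and the question's own definitions are repeated so the snippet is self-contained:

import scala.reflect.runtime.universe.{TypeTag, typeTag}

object EvidenceBundleDemo {
  trait HasMoveCapability[T]

  case class Bird(); object Bird { implicit val ev: HasMoveCapability[Bird] = new HasMoveCapability[Bird] {} }
  case class Lion(); object Lion { implicit val ev: HasMoveCapability[Lion] = new HasMoveCapability[Lion] {} }

  def doLogic[T: TypeTag: HasMoveCapability](): Unit = println(typeTag[T].tpe)

  // Bundles both pieces of evidence for some concrete but statically unknown T.
  trait Resolved {
    type T
    def tt: TypeTag[T]
    def move: HasMoveCapability[T]
  }
  object Resolved {
    def apply[A](implicit tt0: TypeTag[A], m0: HasMoveCapability[A]): Resolved =
      new Resolved { type T = A; val tt = tt0; val move = m0 }
  }

  // The string -> type mapping lives in exactly one place.
  def resolve(input: String): Resolved = input match {
    case "bird" => Resolved[Bird]
    case "lion" => Resolved[Lion]
  }

  def main(args: Array[String]): Unit = {
    val r = resolve("bird")
    doLogic[r.T]()(r.tt, r.move) // prints the resolved type (Bird)
  }
}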
In principle you can run the compiler at runtime; then you'll have a new compile time inside the runtime, and you'll be able to infer types, resolve implicits, etc.:
import scala.tools.reflect.ToolBox
import scala.reflect.runtime.currentMirror
import scala.reflect.runtime.universe.{TypeTag, typeTag}

object App {
  trait HasMoveCapability[T]

  def doLogic[T: TypeTag: HasMoveCapability](): Unit = println(typeTag[T].tpe)

  case class Bird()
  object Bird {
    implicit val hasMoveCapability = new HasMoveCapability[Bird] {}
  }

  case class Lion()
  object Lion {
    implicit val hasMoveCapability = new HasMoveCapability[Lion] {}
  }

  val input: String = "bird" // known at runtime

  val tb = currentMirror.mkToolBox()

  tb.eval(tb.parse(s"import App._; doLogic[${input.capitalize}]")) // App.Bird

  def main(args: Array[String]): Unit = ()
}

How to write a cache loader for Caffeine LoadingCache in "Scala" for "refreshAfterWrite" to work

Scala application use case:
We have a Scala-based module that reads data from a global cache (Redis) and saves it into a local cache (Caffeine LoadingCache). Since we want this data to be refreshed asynchronously, we are using LoadingCache with refreshAfterWrite set to a refresh window of 2 seconds.
Question:
Not exactly a question, but I need help with the following code, which produces a warning as well as compile-time errors.
Warning: for the build method, it warns: Implements member load in CacheLoader (com.github.benmanes.caffeine.cache).
Compile time error 1: type arguments [Int,redisToCaffeine.DataObject] conform to the bounds of none of the overloaded alternatives of value build: [K1 <: Object, V1 <: Object](x$1: com.github.benmanes.caffeine.cache.CacheLoader[_ >: K1, V1])com.github.benmanes.caffeine.cache.LoadingCache[K1,V1] <and> [K1 <: Object, V1 <: Object]()com.github.benmanes.caffeine.cache.Cache[K1,V1] .build[Int, DataObject](key => loader(key))
Compile time error 2: wrong number of type parameters for overloaded method value build with alternatives: [K1 <: Object, V1 <: Object](x$1: com.github.benmanes.caffeine.cache.CacheLoader[_ >: K1, V1])com.github.benmanes.caffeine.cache.LoadingCache[K1,V1] <and> [K1 <: Object, V1 <: Object]()com.github.benmanes.caffeine.cache.Cache[K1,V1] .build[Int, DataObject](key => loader(key))
Code:
package redisToCaffeine

import scala.concurrent.duration._
import com.github.benmanes.caffeine.cache.{ CacheLoader, Caffeine, LoadingCache }
import com.twitter.finagle.stats.InMemoryStatsReceiver
import javax.annotation.Nullable
import redisToCaffeine.CacheImplicits.StatsReportingCaffeineCache

class LocalDealService {

  class DataObject(data: String) {
    override def toString: String = {
      "[ 'data': '" + this.data + "' ]"
    }
  }

  val defaultCacheExpireDuration: FiniteDuration = 2.second
  val stats: InMemoryStatsReceiver = new InMemoryStatsReceiver

  // loader helper
  @Nullable
  @throws[Exception]
  protected def loader(key: Int): DataObject = { // this will be replaced by a read from the Redis cache
    new DataObject(s"LOADER_HELPER_$key")
  }

  def initCache(maximumSize: Int = 5): LoadingCache[Int, DataObject] = {
    Caffeine
      .newBuilder()
      .maximumSize(maximumSize)
      .refreshAfterWrite(defaultCacheExpireDuration.length, defaultCacheExpireDuration.unit)
      .recordStats()
      .build[Int, DataObject](key => loader(key))
      .enableCacheStatsReporting("deal-service", stats)
  }
}
I'm new to both Scala and Caffeine, so I'm not sure what I'm doing wrong; I tried the different ways mentioned here and here to write the loader, but nothing worked (mainly they are in Java). A little research around Scala bounds didn't help either. Kindly help.
It depends on which Scala version is being used here.
Although Scala functions (2.12 and later) support conversion to Java SAM types, these conversions are applied only where explicitly required. So if you are using Scala 2.12 or later, you can explicitly ask the compiler to convert the Scala function to a SAM.
Also, don't use Int as the key type for the cache. Although it will work because of the implicit conversion to Integer, it is not good practice:
def initCache(maximumSize: Int = 5): LoadingCache[Integer, DataObject] = {
  Caffeine
    .newBuilder()
    .maximumSize(maximumSize)
    .refreshAfterWrite(defaultCacheExpireDuration.length, defaultCacheExpireDuration.unit)
    .recordStats()
    .build[Integer, DataObject]((key => loader(key)): CacheLoader[Integer, DataObject])
    .enableCacheStatsReporting("deal-service", stats)
}
And if you are dealing with older Scala versions, then just forget that SAM conversion exists and do it the old-fashioned way:
def initCache(maximumSize: Int = 5): LoadingCache[Integer, DataObject] = {
  Caffeine
    .newBuilder()
    .maximumSize(maximumSize)
    .refreshAfterWrite(defaultCacheExpireDuration.length, defaultCacheExpireDuration.unit)
    .recordStats()
    .build[Integer, DataObject](new CacheLoader[Integer, DataObject] {
      override def load(key: Integer): DataObject = loader(key)
    })
    .enableCacheStatsReporting("deal-service", stats)
}
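For reference, a minimal standalone sketch of the SAM-based variant in isolation (my own compression of the answer, without the question's custom enableCacheStatsReporting extension), assuming Scala 2.12+ and Caffeine on the classpath:

import java.util.concurrent.TimeUnit
import com.github.benmanes.caffeine.cache.{ CacheLoader, Caffeine, LoadingCache }

object CaffeineSamDemo {
  def main(args: Array[String]): Unit = {
    val cache: LoadingCache[Integer, String] =
      Caffeine
        .newBuilder()
        .maximumSize(5)
        .refreshAfterWrite(2, TimeUnit.SECONDS)
        // Scala lambda explicitly ascribed to the SAM type CacheLoader
        .build[Integer, String]((key => s"value-$key"): CacheLoader[Integer, String])

    println(cache.get(1)) // runs the loader and prints "value-1"
  }
}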

Calling method via reflection in Scala

I want to call an arbitrary public method of an arbitrary object via reflection. That is, let's say I want to write a method extractMethod to be used like:
class User { def setAvatar(avatar: Avatar): Unit = …; … }
val m = extractMethod(someUser, "setAvatar")
m(someAvatar)
From the Reflection Overview document in the Scala docs, I see the following direct way to do that:
import scala.reflect.ClassTag
import scala.reflect.runtime.universe._

def extractMethod[Stuff: ClassTag: TypeTag](
    stuff: Stuff,
    methodName: String): MethodMirror = {
  val stuffTypeTag = typeTag[Stuff]
  val mirror = stuffTypeTag.mirror
  val stuffType = stuffTypeTag.tpe
  val methodSymbol = stuffType
    .member(TermName(methodName)).asMethod
  mirror.reflect(stuff)
    .reflectMethod(methodSymbol)
}
However, what bothers me about this solution is that I need to pass implicit ClassTag[Stuff] and TypeTag[Stuff] parameters (the first is needed for calling reflect, the second for getting stuffType). This may be quite cumbersome, especially if extractMethod is called from generics that are called from other generics, and so on. I'd accept this as a necessity for languages that strongly lack runtime type information, but Scala is based on the JRE, which allows the following:
// Plain Java reflection; the result takes a Seq of arguments,
// since Scala cannot express a varargs function type directly.
def extractMethod[Stuff](
    stuff: Stuff,
    methodName: String,
    parameterTypes: Array[Class[_]]): Seq[Object] => Object = {
  val unboundMethod = stuff.getClass()
    .getMethod(methodName, parameterTypes: _*)
  arguments => unboundMethod.invoke(stuff.asInstanceOf[Object], arguments: _*)
}
I understand that Scala reflection allows one to get more information than basic Java reflection. Still, here I just need to call a method. Is there a way to reduce the requirements (e.g. the ClassTag and TypeTag) of the Scala-reflection-based extractMethod version (without falling back to pure Java reflection), assuming that performance doesn't matter to me?
Yes, there is.
First, according to this answer, TypeTag[Stuff] is a strictly stronger requirement than ClassTag[Stuff]. Although we don't automatically get an implicit ClassTag[Stuff] from an implicit TypeTag[Stuff], we can construct it manually as ClassTag[Stuff](stuffTypeTag.mirror.runtimeClass(stuffTypeTag.tpe)) and then pass it, implicitly or explicitly, to reflect, which needs it:
import scala.reflect.ClassTag
import scala.reflect.runtime.universe._

def extractMethod[Stuff: TypeTag](
    stuff: Stuff,
    methodName: String): MethodMirror = {
  val stuffTypeTag = typeTag[Stuff]
  val mirror = stuffTypeTag.mirror
  val stuffType = stuffTypeTag.tpe
  val stuffClassTag = ClassTag[Stuff](mirror.runtimeClass(stuffType))
  val methodSymbol = stuffType
    .member(TermName(methodName)).asMethod
  mirror.reflect(stuff)(stuffClassTag)
    .reflectMethod(methodSymbol)
}
Second, mirror and stuffType can be obtained from stuff.getClass():
import scala.reflect.ClassTag
import scala.reflect.runtime.universe._

def extractMethod[Stuff](stuff: Stuff, methodName: String): MethodMirror = {
  val stuffClass = stuff.getClass()
  val mirror = runtimeMirror(stuffClass.getClassLoader)
  val stuffType = mirror.classSymbol(stuffClass).toType
  val stuffClassTag = ClassTag[Stuff](mirror.runtimeClass(stuffType))
  val methodSymbol = stuffType
    .member(TermName(methodName)).asMethod
  mirror.reflect(stuff)(stuffClassTag)
    .reflectMethod(methodSymbol)
}
Therefore we obtain Scala-style reflection entities (ultimately a MethodMirror) without requiring a ClassTag and/or TypeTag to be passed, explicitly or implicitly, from the caller. I am not sure, however, how it compares with the approaches described in the question (i.e. passing tags from outside, or pure Java reflection) in terms of performance.
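As a quick illustration (the Greeter class below is a hypothetical example of mine, not from the answer), the resulting MethodMirror can be applied directly:

class Greeter {
  def greet(name: String): String = s"Hello, $name"
}

val m = extractMethod(new Greeter, "greet") // no tags supplied by the caller
println(m("world"))                         // prints "Hello, world"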

Scala: Implicit parameter resolution precedence

Suppose we have implicit parameter lookup concerning only local scopes:
trait CanFoo[A] {
  def foos(x: A): String
}

object Def {
  implicit object ImportIntFoo extends CanFoo[Int] {
    def foos(x: Int) = "ImportIntFoo:" + x.toString
  }
}

object Main {
  def test(): String = {
    implicit object LocalIntFoo extends CanFoo[Int] {
      def foos(x: Int) = "LocalIntFoo:" + x.toString
    }
    import Def._
    foo(1)
  }
  def foo[A: CanFoo](x: A): String = implicitly[CanFoo[A]].foos(x)
}
In the above code, LocalIntFoo wins over ImportIntFoo.
Could someone explain how it's considered more specific using "the rules of static overloading resolution (§6.26.3)"?
Edit:
The name binding precedence is a compelling argument, but several issues remain unresolved.
First, Scala Language Reference says:
If there are several eligible arguments which match the implicit parameter’s type, a most specific one will be chosen using the rules of static overloading resolution (§6.26.3).
Second, name binding precedence is about resolving a known identifier x to a particular member pkg.A.B.x when there are several variables/methods/objects named x in scope. ImportIntFoo and LocalIntFoo are not named the same.
Third, I can show that name binding precedence alone is not in play as follows:
trait CanFoo[A] {
  def foos(x: A): String
}

object Def {
  implicit object ImportIntFoo extends CanFoo[Int] {
    def foos(x: Int) = "ImportIntFoo:" + x.toString
  }
}

object Main {
  def test(): String = {
    implicit object LocalAnyFoo extends CanFoo[Any] {
      def foos(x: Any) = "LocalAnyFoo:" + x.toString
    }
    // implicit object LocalIntFoo extends CanFoo[Int] {
    //   def foos(x: Int) = "LocalIntFoo:" + x.toString
    // }
    import Def._
    foo(1)
  }
  def foo[A: CanFoo](x: A): String = implicitly[CanFoo[A]].foos(x)
}

println(Main.test)
Put this in test.scala and run scala test.scala, and it prints out ImportIntFoo:1.
This is because static overloading resolution (§6.26.3) says the more specific type wins.
If we pretend that all eligible implicit values are named the same, LocalAnyFoo should have masked ImportIntFoo.
Related:
Where does Scala look for implicits?
This is a great summary of implicit parameter resolution, but it quotes Josh's nescala presentation instead of the spec. His talk is what motivated me to look into this.
Compiler Implementation
rankImplicits
I wrote my own answer in the form of a blog post revisiting implicits without import tax.
Update: Furthermore, comments from Martin Odersky on the above post revealed that Scala 2.9.1's behavior of LocalIntFoo winning over ImportIntFoo is in fact a bug. See implicit parameter precedence again.
1) Implicits visible to the current invocation scope via local declarations, imports, outer scope, inheritance, or a package object, accessible without a prefix.
2) Implicit scope, which contains all sorts of companion objects and package objects that bear some relation to the type of the implicit we search for (i.e. the package object of the type, the companion object of the type itself, of its type constructor if any, of its parameters if any, and also of its supertypes and supertraits); see the sketch after this list.
If at either stage we find more than one implicit, the static overloading rule is used to resolve them.
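As a small illustration of stage 2 (the names Show, Price, and render below are hypothetical, mine rather than the post's), an instance placed in a companion object is found without any import:

trait Show[A] { def show(a: A): String }

final case class Price(cents: Int)
object Price {
  // lives in the companion object of Price, i.e. in the implicit scope of Show[Price]
  implicit val showPrice: Show[Price] = new Show[Price] {
    def show(p: Price): String = s"${p.cents / 100.0} EUR"
  }
}

object ImplicitScopeDemo {
  def render[A](a: A)(implicit s: Show[A]): String = s.show(a)

  def main(args: Array[String]): Unit =
    println(render(Price(199))) // found via the implicit scope, no import needed
}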
Update 2: When I asked Josh about Implicits without Import Tax, he explained to me that he was referring to name binding rules for implicits that are named exactly the same.
From http://www.scala-lang.org/docu/files/ScalaReference.pdf, Chapter 2:
Names in Scala identify types, values, methods, and classes which are collectively called entities. Names are introduced by local definitions and declarations (§4), inheritance (§5.1.3), import clauses (§4.7), or package clauses (§9.2) which are collectively called bindings.
Bindings of different kinds have a precedence defined on them:
1. Definitions and declarations that are local, inherited, or made available by a package clause in the same compilation unit where the definition occurs have highest precedence.
2. Explicit imports have next highest precedence.
3. Wildcard imports have next highest precedence.
4. Definitions made available by a package clause not in the compilation unit where the definition occurs have lowest precedence.
I may be mistaken, but the call to foo(1) is in the same compilation unit as LocalIntFoo, resulting in that implicit taking precedence over ImportIntFoo.
Could someone explain how it's considered more specific using "the rules of static overloading resolution (§6.26.3)"?
There's no method overload, so 6.26.3 is utterly irrelevant here.
Overloading refers to multiple methods with the same name but different parameters being defined on the same class. For example, the method f in example 6.26.1 is overloaded:
class A extends B {}
def f(x: B, y: B) = . . .
def f(x: A, y: B) = . . .
val a: A
val b: B
Implicit parameter resolution precedence is a completely different rule, and one which has a question and answer already on Stack Overflow.