I ran into a weird and puzzling NPE. Consider the following use case:
writing a generic algorithm (binary search in my case), where you want to generalize over the element type, but need some extras.
E.g. maybe you want to cut a range in half, and you need a generic half, or two "consts".
The Integral typeclass is not enough, since it only offers one and zero, so I came up with:
trait IntegralConsts[N] {
  val tc: Integral[N]
  val two = tc.plus(tc.one, tc.one)
  val four = tc.plus(two, two)
}

object IntegralConsts {
  implicit def consts[N : Integral] = new IntegralConsts[N] {
    override val tc = implicitly[Integral[N]]
  }
}
and used it as follows:
def binRangeSearch[N : IntegralConsts]( /* irrelevant args */ ) = {
  val consts = implicitly[IntegralConsts[N]]
  val math = consts.tc
  // some irrelevant logic, which contains expressions like:
  val halfRange = math.quot(range, consts.two)
  // ...
}
At runtime, this throws a puzzling NullPointerException on this line: val two = tc.plus(tc.one, tc.one).
As a workaround, I just added lazy to the typeclass' vals, and it all worked out:
trait IntegralConsts[N] {
  val tc: Integral[N]
  lazy val two = tc.plus(tc.one, tc.one)
  lazy val four = tc.plus(two, two)
}
But I'd like to know why I got this weird NPE. Initialization order should be known, and tc should have already been instantiated when reaching val two ...
Initialization order should be known, and tc should have already been
instantiated when reaching val two
Not according to the specification. What really happens is that while constructing the anonymous class, first IntegralConsts[N] is initialized, and only then is the overriding tc evaluated in the derived anonymous class, which is why you're experiencing the NullPointerException.
The specification section §5.1 (Templates) says:
Template Evaluation
Consider a template sc with mt1 with ... with mtn { stats }.
If this is the template of a trait then its mixin-evaluation consists of an evaluation of the statement sequence stats.
If this is not a template of a trait, then its evaluation consists of the following steps:
First, the superclass constructor sc is evaluated.
Then, all base classes in the template's linearization up to the template's superclass denoted by sc are mixin-evaluated. Mixin-evaluation happens in reverse order of occurrence in the linearization.
Finally the statement sequence stats is evaluated.
We can verify this by looking at the compiled code with -Xprint:typer:
final class $anon extends AnyRef with IntegralConsts[N] {
def <init>(): <$anon: IntegralConsts[N]> = {
$anon.super.<init>();
()
};
private[this] val tc: Integral[N] = scala.Predef.implicitly[Integral[N]](evidence$1);
override <stable> <accessor> def tc: Integral[N] = $anon.this.tc
};
We see that first, super.<init> is invoked, and only then is the val tc initialized.
Adding to that, let's look at "Why is my abstract or overridden val null?":
A ‘strict’ or ‘eager’ val is one which is not marked lazy.
In the absence of “early definitions” (see below), initialization of
strict vals is done in the following order:
Superclasses are fully initialized before subclasses.
Otherwise, in declaration order.
Naturally when a val is overridden, it is not initialized more than once ... This is not the case: an overridden val will appear to be null during the construction of superclasses, as will an abstract val.
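Here's a minimal standalone sketch of that behaviour (my own example, not from the question): the trait's initializer runs before the subclass supplies the val, so a dependent val sees the default value:

trait A {
  val x: Int      // abstract val, supplied by the anonymous subclass below
  val y = x + 1   // runs during A's initializer, while x still holds its default 0
}

val a = new A { val x = 10 }
// a.x == 10, but a.y == 1: x was still 0 when y was computed.
// With a reference type like Integral[N] instead of Int, the default is null, hence the NPE.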
We can also verify this by passing the -Xcheckinit flag to scalac:
> set scalacOptions := Seq("-Xcheckinit")
[info] Defining *:scalacOptions
[info] The new value will be used by compile:scalacOptions
[info] Reapplying settings...
[info] Set current project to root (in build file:/C:/)
> console
> :pa // paste code here
defined trait IntegralConsts
defined module IntegralConsts
binRangeSearch: [N](range: N)(implicit evidence$2: IntegralConsts[N])Unit
scala> binRangeSearch(100)
scala.UninitializedFieldError: Uninitialized field: <console>: 16
at IntegralConsts$$anon$1.tc(<console>:16)
at IntegralConsts$class.$init$(<console>:9)
at IntegralConsts$$anon$1.<init>(<console>:15)
at IntegralConsts$.consts(<console>:15)
at .<init>(<console>:10)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
As you've noted, since this is an anonymous class, adding lazy to the definitions avoids the initialization quirk altogether. An alternative would be to use early definitions:
object IntegralConsts {
  implicit def consts[N : Integral] = new {
    override val tc = implicitly[Integral[N]]
  } with IntegralConsts[N]
}
I am trying to convert from Guice to MacWire as a dependency injection framework. It is going fine apart from this Silhouette module, where I am getting a compilation error (shown at the bottom).
Working Module in Guice:
class SilhouetteModule @Inject()(environment: play.api.Environment,
    configuration: Configuration) extends AbstractModule with ScalaModule {

  override def configure() = {
    val iamConfig = IAMConfiguration
      .fromConfiguration(configuration)
      .fold(throw _, identity)
    val htPasswdFile = File.apply(configuration.get[String]("file"))
    bind[IdentityService[User]].toInstance(SimpleIdentityService.fromConfig(iamConfig))
    bind[Silhouette[BasicAuthEnv]].to[SilhouetteProvider[BasicAuthEnv]]
    bind[RequestProvider].to[BasicAuthProvider].asEagerSingleton()
    bind[PasswordHasherRegistry].toInstance(PasswordHasherRegistry(new BCryptPasswordHasher()))
    bind[AuthenticatorService[DummyAuthenticator]].toInstance(new DummyAuthenticatorService)
    bind[AuthInfoRepository].toInstance(HtpasswdAuthInfoRepository.fromFile(htPasswdFile))
    bind[SecuredErrorHandler].to[RestHttpSecuredErrorHandler]
  }

  @Provides
  def provideEnvironment(identityService: IdentityService[User],
      authenticatorService: AuthenticatorService[DummyAuthenticator],
      eventBus: EventBus,
      requestProvider: RequestProvider): Environment[BasicAuthEnv] =
    Environment[BasicAuthEnv](
      identityService,
      authenticatorService,
      Seq(requestProvider),
      eventBus
    )
}
Equivalent attempt in MacWire:
trait SilhouetteModule extends BuiltInComponents {
  import com.softwaremill.macwire._

  val iamConfig = IAMConfiguration
    .fromConfiguration(configuration)
    .fold(throw _, identity)
  val htPasswdFile = File.apply(configuration.get[String]("file"))

  lazy val identityService: IdentityService[User] =
    SimpleIdentityService.fromConfig(iamConfig)
  lazy val basicAuthEnv: Silhouette[BasicAuthEnv] = wire[SilhouetteProvider[BasicAuthEnv]]
  lazy val requestProvider: RequestProvider = wire[BasicAuthProvider]
  lazy val passwordHasherRegistry: PasswordHasherRegistry =
    PasswordHasherRegistry(new BCryptPasswordHasher())
  lazy val authenticatorService: AuthenticatorService[DummyAuthenticator] =
    new DummyAuthenticatorService
  lazy val authInfoRepo: AuthInfoRepository =
    HtpasswdAuthInfoRepository.fromFile(htPasswdFile)
  lazy val errorHandler: SecuredErrorHandler = wire[RestHttpSecuredErrorHandler]
  lazy val env: Environment[BasicAuthEnv] = Environment[BasicAuthEnv](
    identityService,
    authenticatorService,
    Seq(requestProvider),
    eventBus
  )

  def eventBus: EventBus
}
The MacWire example does not compile; I get this error:
Cannot find a value of type: [com.mohiva.play.silhouette.api.actions.SecuredAction]
lazy val basicAuthEnv: Silhouette[BasicAuthEnv] = wire[SilhouetteProvider[BasicAuthEnv]]
Sorry, it's a lot of code, but I thought a side-by-side comparison would be more helpful.
Any help would be great!
MacWire doesn't magically create values: if it needs to construct a value, it looks at what values the constructor takes, and if, by looking at all the values available in scope, it can unambiguously resolve all of the constructor's parameters, the macro generates the code new Class(resolvedArg1, resolvedArg2, ...).
So:
all of these values have to be in scope; they can be constructed by MacWire or be abstract members implemented by some mixin, but you still have to write them down explicitly
if you have two values of the same type in scope, MacWire cannot generate the code, because how would it know which value to pick? (Well, if one of the values is "closer" than the other it can, but if they are equally close the ambiguity cannot be resolved)
So if you get the error:
Cannot find a value of type: [com.mohiva.play.silhouette.api.actions.SecuredAction]
it means that you haven't declared any value of type com.mohiva.play.silhouette.api.actions.SecuredAction in SilhouetteModule or in BuiltInComponents.
If this is something that is provided by another trait, you can add an abstract declaration here:
val securedAction: SecuredAction // abstract val
and implement it somewhere else (be careful to avoid circular dependencies!).
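For illustration, here's a minimal sketch of how wire resolves constructor parameters from values in scope. Database and UserService are hypothetical names for this sketch, not Silhouette types:

import com.softwaremill.macwire._

class Database
class UserService(db: Database)

trait UserModule {
  lazy val db: Database = wire[Database]          // no-arg constructor, nothing to resolve
  lazy val users: UserService = wire[UserService] // expands to: new UserService(db)
  // Without a Database value in scope, wire[UserService] would fail to compile
  // with: Cannot find a value of type: [Database]
}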
I'm using sbt to build some of the RISC-V BOOM from source, but sbt complains that it "could not find implicit value for parameter valName: freechips.rocketchip.diplomacy.ValName". The detailed error message is as follows:
[error] F:\hiMCU\my_proj\src\main\scala\freechips\rocketchip\tile\BaseTile.scala:170:42: could not find implicit value for parameter valName: freechips.rocketchip.diplomacy.ValName
[error] Error occurred in an application involving default arguments.
[error] protected val tlMasterXbar = LazyModule(new TLXbar)
The code sbt complains about is as follows:
abstract class BaseTile private (val crossing: ClockCrossingType, q: Parameters)
    extends LazyModule()(q)
    with CrossesToOnlyOneClockDomain
    with HasNonDiplomaticTileParameters
{
  // Public constructor alters Parameters to supply some legacy compatibility keys
  def this(tileParams: TileParams, crossing: ClockCrossingType, lookup: LookupByHartIdImpl, p: Parameters) = {
    this(crossing, p.alterMap(Map(
      TileKey -> tileParams,
      TileVisibilityNodeKey -> TLEphemeralNode()(ValName("tile_master")),
      LookupByHartId -> lookup
    )))
  }

  def module: BaseTileModuleImp[BaseTile]

  def masterNode: TLOutwardNode
  def slaveNode: TLInwardNode
  def intInwardNode: IntInwardNode   // Interrupts to the core from external devices
  def intOutwardNode: IntOutwardNode // Interrupts from tile-internal devices (e.g. BEU)
  def haltNode: IntOutwardNode       // Unrecoverable error has occurred; suggest reset
  def ceaseNode: IntOutwardNode      // Tile has ceased to retire instructions
  def wfiNode: IntOutwardNode        // Tile is waiting for an interrupt

  protected val tlOtherMastersNode = TLIdentityNode()
  protected val tlSlaveXbar = LazyModule(new TLXbar)
  protected val tlMasterXbar = LazyModule(new TLXbar)
  protected val intXbar = LazyModule(new IntXbar)
  // ...
}
The code of the LazyModule object is as follows:
object LazyModule
{
  protected[diplomacy] var scope: Option[LazyModule] = None
  private var index = 0

  def apply[T <: LazyModule](bc: T)(implicit valName: ValName, sourceInfo: SourceInfo): T = {
    // Make sure the user put LazyModule around modules in the correct order
    // If this require fails, probably some grandchild was missing a LazyModule
    // ... or you applied LazyModule twice
    require (scope.isDefined, s"LazyModule() applied to ${bc.name} twice ${sourceLine(sourceInfo)}")
    require (scope.get eq bc, s"LazyModule() applied to ${bc.name} before ${scope.get.name} ${sourceLine(sourceInfo)}")
    scope = bc.parent
    bc.info = sourceInfo
    if (!bc.suggestedNameVar.isDefined) bc.suggestName(valName.name)
    bc
  }
}
I think sbt should find some val of type freechips.rocketchip.diplomacy.ValName, but it doesn't find such a val.
You need to have an object of type ValName in the scope where your LazyModules are instantiated:
implicit val valName = ValName("MyXbars")
For more details on Scala implicits, please see https://docs.scala-lang.org/tutorials/tour/implicit-parameters.html.
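For illustration, here's a minimal sketch of how such an implicit parameter gets picked up; ValName and LazyModuleLike below are simplified stand-ins of my own, not the real rocket-chip definitions:

case class ValName(name: String)

object LazyModuleLike {
  def apply[T](bc: T)(implicit valName: ValName): T = {
    println(s"module named: ${valName.name}") // uses the implicitly supplied name
    bc
  }
}

object Demo extends App {
  implicit val valName: ValName = ValName("MyXbars") // satisfies the implicit parameter
  LazyModuleLike(42)                                 // prints: module named: MyXbars
}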
You generally shouldn't need to create a ValName manually; the Scala compiler can materialize one automatically based on the name of the val you're assigning the LazyModule to. You didn't include your imports in your example, but can you try importing ValName?
import freechips.rocketchip.diplomacy.ValName
In most rocket-chip code, this is done via a wildcard import of everything in the diplomacy package:
import freechips.rocketchip.diplomacy._
Let's say I have the following code:
class Context {
  def compute() = Array(1.0)
}
val ctx = new Context
val data = ctx.compute
Now we are running this code in Spark:
val rdd = sc.parallelize(List(1,2,3))
rdd.map(_ + data(0)).count()
The code above throws org.apache.spark.SparkException: Task not serializable. I'm not asking how to fix it, by extending Serializable or making it a case class; I want to understand why the error happens.
The thing that I don't understand is why it complains about the Context class not being Serializable, though it's not part of the lambda rdd.map(_ + data(0)). data here is an Array of values which should be serialized, but it seems that the JVM captures the ctx reference as well, which, in my understanding, should not happen.
As I understand it, in the shell Spark should clean the lambda of references to the REPL context. If we print the tree after the delambdafy phase, we see these pieces:
object iw extends Object {
...
private[this] val ctx: $line11.iw$Context = _;
<stable> <accessor> def ctx(): $line11.iw$Context = iw.this.ctx;
private[this] val data: Array[Double] = _;
<stable> <accessor> def data(): Array[Double] = iw.this.data;
...
}
class anonfun$1 ... {
final def apply(x$1: Int): Double = anonfun$1.this.apply$mcDI$sp(x$1);
<specialized> def apply$mcDI$sp(x$1: Int): Double = x$1.+(iw.this.data().apply(0));
...
}
So the decompiled lambda code that is sent to the worker node is x$1.+(iw.this.data().apply(0)). The iw.this part belongs to the Spark shell session, so, as I understand it, it should be cleared by the ClosureCleaner, since it has nothing to do with the logic and shouldn't be serialized. Anyway, calling iw.this.data() returns the Array[Double] value of the data variable, which is initialized in the constructor:
def <init>(): type = {
iw.super.<init>();
iw.this.ctx = new $line11.iw$Context();
iw.this.data = iw.this.ctx().compute(); // <== here
iw.this.res4 = ...
()
}
In my understanding the ctx value has nothing to do with the lambda; it's not part of the closure, hence it shouldn't be serialized. What am I missing or misunderstanding?
This has to do with what Spark considers it can safely use as a closure. This is in some cases not very intuitive, since Spark uses reflection and in many cases can't recognize some of Scala's guarantees (it's not a full compiler or anything) or the fact that some variables in the same object are irrelevant. For safety, Spark will attempt to serialize any objects referenced, which in your case includes iw, which is not serializable.
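To see why merely referencing data drags in iw, here's a minimal plain-Scala sketch; Holder is my own stand-in for the REPL's iw wrapper:

class Holder { // not Serializable, like iw
  val data = Array(1.0)
  // `data` is a field, so the lambda body is really `x => x + this.data(0)`:
  // it closes over the whole Holder instance, not just the array.
  val f: Int => Double = x => x + data(0)
}
// Serializing f therefore means serializing the Holder instance it captured.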
The code inside ClosureCleaner has a good example:
For instance, transitive cleaning is necessary in the following
scenario:
class SomethingNotSerializable {
def someValue = 1
def scope(name: String)(body: => Unit) = body
def someMethod(): Unit = scope("one") {
def x = someValue
def y = 2
scope("two") { println(y + 1) }
}
}
In this example, scope "two" is not serializable because it references scope "one", which references SomethingNotSerializable. Note that, however, the body of scope "two" does not actually depend on SomethingNotSerializable. This means we can safely null out the parent pointer of a cloned scope "one" and set it the parent of scope "two", such that scope "two" no longer references SomethingNotSerializable transitively.
Probably the easiest fix is to create a local variable, in the same scope, that extracts the value from your object, such that there is no longer any reference to the encapsulating object inside the lambda:
val rdd = sc.parallelize(List(1,2,3))
val data0 = data(0)
rdd.map(_ + data0).count()
I have this piece of code that loads Properties from a file:
class Config {
  val properties: Properties = {
    val p = new Properties()
    p.load(Thread.currentThread().getContextClassLoader.getResourceAsStream("props"))
    p
  }

  val forumId = properties.get("forum_id")
}
This seems to be working fine.
I have tried moving the initialization of properties into another val, loadedProperties, like this:
class Config {
  val properties: Properties = loadedProps
  val forumId = properties.get("forum_id")

  private val loadedProps = {
    val p = new Properties()
    p.load(Thread.currentThread().getContextClassLoader.getResourceAsStream("props"))
    p
  }
}
But it doesn't work! (properties is null in properties.get("forum_id")).
Why would that be? Isn't loadedProps evaluated when referenced by properties?
Secondly, is this a good way to initialize variables that require non-trivial processing? In Java, I would declare them final fields, and do the initialization-related operations in the constructor.
Is there a pattern for this scenario in Scala?
Thank you!
Vals are initialized in the order they are declared (well, precisely speaking, non-lazy vals are), so properties is getting initialized before loadedProps. In other words, loadedProps is still null when properties is getting initialized.
The simplest solution here is to define loadedProps before properties:
class Config {
  private val loadedProps = {
    val p = new Properties()
    p.load(Thread.currentThread().getContextClassLoader.getResourceAsStream("props"))
    p
  }

  val properties: Properties = loadedProps
  val forumId = properties.get("forum_id")
}
You could also make loadedProps lazy, meaning that it will be initialized on its first access:
class Config {
  val properties: Properties = loadedProps
  val forumId = properties.get("forum_id")

  private lazy val loadedProps = {
    val p = new Properties()
    p.load(Thread.currentThread().getContextClassLoader.getResourceAsStream("props"))
    p
  }
}
Using lazy val has the advantage that your code is more robust to refactoring, as merely changing the declaration order of your vals won't break your code.
Also, in this particular occurrence, you can just turn loadedProps into a def (as suggested by @NIA), as it is only used once anyway.
I think loadedProps can be turned into a method here, simply by replacing val with def:
private def loadedProps = {
  // Tons of code
}
In this case you can be sure that it is evaluated when you call it.
But I'm not sure whether this counts as a pattern for this case.
Just an addition with a little more explanation:
Your properties field is initialized earlier than the loadedProps field here. null is a field's value before initialization, and that's why you get it. In the def case it's just a method call instead of an access to a field, so everything is fine (a method's body may be evaluated several times; no initialization is involved). See http://docs.scala-lang.org/tutorials/FAQ/initialization-order.html. You may use def or lazy val to fix it.
Why is def so different? Because a def may be called several times, but a val only once (so its first and only call is actually the initialization of the field).
A lazy val is initialized only when you first access it, so it would also help.
Another, simpler example of what's going on:
scala> class A {val a = b; val b = 5}
<console>:7: warning: Reference to uninitialized value b
class A {val a = b; val b = 5}
^
defined class A
scala> (new A).a
res2: Int = 0 //null
Speaking more generally, Scala could theoretically analyze the dependency graph between fields (which field needs which other field) and start initialization from the leaf nodes. But in practice every module is compiled separately, and the compiler might not even know those dependencies (it might even be Java calling Scala calling Java), so it just does sequential initialization.
So, because of that, it can't even detect simple loops:
scala> class A {val a: Int = b; val b: Int = a}
<console>:7: warning: Reference to uninitialized value b
class A {val a: Int = b; val b: Int = a}
^
defined class A
scala> (new A).a
res4: Int = 0
scala> class A {lazy val a: Int = b; lazy val b: Int = a}
defined class A
scala> (new A).a
java.lang.StackOverflowError
Actually, such a loop (inside one module) could theoretically be detected in a separate build step, but it wouldn't help much, as it's pretty obvious anyway.
I am using a Scala 2.10.0 snapshot (dated 2012-05-22) and have the following Scala files:
this one defines the typeclass and a basic typeclass instance:
package com.netgents.typeclass.hole
case class Rabbit()

trait Hole[A] {
  def findHole(x: A): String
}

object Hole {
  def apply[A: Hole] = implicitly[Hole[A]]

  implicit val rabbitHoleInHole = new Hole[Rabbit] {
    def findHole(x: Rabbit) = "Rabbit found the hole in Hole companion object"
  }
}
this is the package object:
package com.netgents.typeclass
package object hole {
  def findHole[A: Hole](x: A) = Hole[A].findHole(x)

  implicit val rabbitHoleInHolePackage = new Hole[Rabbit] {
    def findHole(x: Rabbit) = "Rabbit found the hole in Hole package object"
  }
}
and here is the test:
package com.netgents.typeclass.hole
object Test extends App {
  implicit val rabbitHoleInOuterTest = new Hole[Rabbit] {
    def findHole(x: Rabbit) = "Rabbit found the hole in outer Test object"
  }

  {
    implicit val rabbitHoleInInnerTest = new Hole[Rabbit] {
      def findHole(x: Rabbit) = "Rabbit found the hole in inner Test object"
    }

    println(findHole(Rabbit()))
  }
}
As you can see, Hole is a simple typeclass that defines a method which a Rabbit is trying to find. I am trying to figure out the implicit resolution rules on it.
with all four typeclass instances uncommented, scalac complains about ambiguities on rabbitHoleInHolePackage and rabbitHoleInHole. (Why?)
if I comment out rabbitHoleInHole, scalac compiles and I get back "Rabbit found the hole in Hole package object". (Shouldn't implicits in the local scope take precedence?)
if I then comment out rabbitHoleInHolePackage, scalac complains about ambiguities on rabbitHoleInOuterTest and rabbitHoleInInnerTest. (Why? In the article by eed3si9n, linked below, he found that implicits in inner and outer scopes can take different precedence.)
if I then comment out rabbitHoleInInnerTest, scalac compiles and I get back "Rabbit found the hole in outer Test object".
As you can see, the above behaviors do not follow the rules I've read about implicit resolution at all. I've only described a fraction of the combinations you can get by commenting/uncommenting instances, and most of them are very strange indeed; and I haven't gotten into imports and subclasses yet.
I've read and watched the presentation by suereth, the Stack Overflow answer by sobral, and a very elaborate revisit by eed3si9n, but I am still completely baffled.
Let's start with the implicits in the package object and the type class companion disabled:
package rabbit {
  trait TC

  object Test extends App {
    implicit object testInstance1 extends TC { override def toString = "test1" }

    {
      implicit object testInstance2 extends TC { override def toString = "test2" }
      println(implicitly[TC])
    }
  }
}
Scalac looks for any in-scope implicits and finds testInstance1 and testInstance2. The fact that one is in a tighter scope is only relevant if they have the same name; the normal rules of shadowing apply. We've chosen distinct names, and neither implicit is more specific than the other, so an ambiguity is correctly reported.
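For contrast, a minimal sketch of my own showing the shadowing rule with the same TC trait: if the inner implicit reuses the outer one's name, the outer is shadowed and no ambiguity arises:

package rabbit {
  trait TC

  object Test extends App {
    implicit object tcInstance extends TC { override def toString = "outer" }

    {
      // Same name as the outer implicit: it shadows it instead of competing with it
      implicit object tcInstance extends TC { override def toString = "inner" }
      println(implicitly[TC]) // prints "inner"
    }
  }
}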
Let's try another example, this time we'll play off an implicit in the local scope against one in the package object.
package rabbit {
  object `package` {
    implicit object packageInstance extends TC { override def toString = "package" }
  }

  trait TC

  object Test extends App {
    {
      implicit object testInstance2 extends TC { override def toString = "test2" }
      println(implicitly[TC])
    }
  }
}
What happens here? The first phase of the implicit search, as before, considers all implicits in scope at the call site. In this case, we have testInstance2 and packageInstance. These are ambiguous, but before reporting that error, the second phase kicks in, and searches the implicit scope of TC.
But what is in the implicit scope here? TC doesn't even have a companion object! We need to review the precise definition, in §7.2 of the Scala Reference.
The implicit scope of a type T consists of all companion modules
(§5.4) of classes that are associated with the implicit parameter’s
type. Here, we say a class C is associated with a type T, if it
is a base class (§5.1.2) of some part of T.
The parts of a type T are:
if T is a compound type T1 with ... with Tn,
the union of the parts of T1, ..., Tn, as well as T itself,
if T is a parameterized type S[T1, ..., Tn], the union of the parts of S and
T1,...,Tn,
if T is a singleton type p.type, the parts of the type of p,
if T is a type projection S#U, the parts of S as well as T itself,
in all other cases, just T itself.
We're searching for rabbit.TC. From a type system perspective, this is shorthand for rabbit.type#TC, where rabbit.type is a type representing the package, as though it were a regular object. Invoking rule 4 gives us the parts rabbit.type and TC itself.
So, what does that all mean? Simply, implicit members in the package object are part of the implicit scope, too!
In the example above, this gives us an unambiguous choice in the second phase of the implicit search.
The other examples can be explained in the same way.
In summary:
Implicit search proceeds in two phases. The usual rules of importing and shadowing determine a list of candidates.
Implicit members in an enclosing package object may also be in scope, assuming you are using nested packages.
If there is more than one candidate, the rules of static overloading are used to see if there is a winner. As an additional tiebreaker, the compiler prefers one implicit over another that is defined in a superclass of the first.
If the first phase fails, the implicit scope is consulted in much the same way. (A difference is that implicit members from different companions may have the same name without shadowing each other.)
Implicits in package objects of enclosing packages are also part of this implicit scope.
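As a final illustration of the second phase (my own sketch, same TC trait): an instance in the companion object is found with no import at all:

package rabbit {
  trait TC

  object TC {
    // Lives in TC's implicit scope, so phase two finds it without any import
    implicit object companionInstance extends TC { override def toString = "companion" }
  }

  object Test extends App {
    println(implicitly[TC]) // prints "companion"
  }
}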
UPDATE
In Scala 2.9.2, the behaviour is different and wrong.
package rabbit {
  trait TC

  object Test extends App {
    implicit object testInstance1 extends TC { override def toString = "test1" }

    {
      implicit object testInstance2 extends TC { override def toString = "test2" }
      // wrongly considered non-ambiguous in 2.9.2. The sub-class rule
      // incorrectly considers:
      //
      //   isProperSubClassOrObject(value <local Test>, object Test)
      //   isProperSubClassOrObject(value <local Test>, {object Test}.linkedClassOfClass)
      //   isProperSubClassOrObject(value <local Test>, <none>)
      //   (value <local Test>) isSubClass <none>
      //   <notype> baseTypeIndex <none> >= 0
      //   0 >= 0
      //   true
      //   true
      //   true
      //   true
      //
      // 2.10.x correctly reports the ambiguity, since the fix for
      //
      //   https://issues.scala-lang.org/browse/SI-5354?focusedCommentId=57914#comment-57914
      //   https://github.com/scala/scala/commit/6975b4888d
      //
      println(implicitly[TC])
    }
  }
}
}