I am unable to understand how to work with nested classes in Scala, especially since I encountered the error below:
class Action {
  val entityModelVar = new EntityModel
}

class EntityModel {
  class EntityLabel {
    ....
  }
}
The above code snippet gives an idea of my class structure. Here are two code blocks that puzzle me as to how they work.
val actionList=Array[Action](Action1,Action2)
..
val newLabels=actionList(i).test(doc)
actionList(i).retrain(newLabels) //error pointed here
Error: type mismatch:
found    : Seq[a.entityModelVar.EntityLabel]
required : Seq[_13.entityModelVar.EntityLabel] where _13: Action
However, the following code compiles without any error:
//This works fine
val a=actionList(i)
val newLabels=a.test(doc2)
a.retrain(newLabels)
Also, here is the definition of the retrain function:
def retrain(labels:Seq[entityModelVar.EntityLabel])={
entityModelVar.retrain(labels)
}
and the signature of EntityModel.retrain function:
def retrain(testLabels:Seq[EntityLabel]):Unit
The problem is that the inner class has got to belong to the same instance of the outer class. But is actionList(i) guaranteed to be the same instance between two calls? The compiler doesn't know for certain (maybe another thread fiddles with it? who knows what apply does anyway?), so it complains. The _13 is its name for a temporary variable that it wishes were there to assure that it is the same instance.
Your next snippet works because the compiler can see that you call actionList(i) once, store that instance, get the inner-class values from it, and then hand them back to that very same instance.
So, moral of the story is: you need to make it abundantly obvious to the compiler that your inner class instances match up to their proper outer class, and the best way to do that is to store that outer class in a val where it can't change without you (or the compiler) noticing.
(You can also specify types of individual variables if you break up parameter blocks. So, for instance, def foo(m: EntityModel)(l: m.EntityLabel) would be a way to write a function that takes an outer-class instance and an inner one corresponding to it.)
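For concreteness, here is a minimal sketch of both fixes (the method bodies are stubs invented for illustration; only the types matter):

class EntityModel {
  class EntityLabel
  def retrain(testLabels: Seq[EntityLabel]): Unit = ()                // stub
}

class Action {
  val entityModelVar = new EntityModel
  def test(doc: String): Seq[entityModelVar.EntityLabel] = Seq.empty  // stub
  def retrain(labels: Seq[entityModelVar.EntityLabel]): Unit =
    entityModelVar.retrain(labels)
}

object PathDemo extends App {
  val actionList = Array(new Action, new Action)

  // Fix 1: pin the outer instance in a val so the compiler knows both calls
  // refer to the same instance, and hence to the same EntityLabel type.
  val a = actionList(0)
  a.retrain(a.test("doc"))

  // Fix 2 (Scala 2.10+): a dependent method type ties the labels to their own model.
  def retrainWith(m: EntityModel)(labels: Seq[m.EntityLabel]): Unit = m.retrain(labels)
  retrainWith(a.entityModelVar)(a.test("doc"))
}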
Related
I have come across this method definition and need an explanation of what exactly happens here.
Parent trait
sealed trait Generic{
def name : String = name // what is the body of this function call?
def id : Int = id
def place : String = place
}
Child case classes
case class Capital(
countryName : String,
override val id: Int,
override val place:String
) extends Generic
warning: method place in trait Generic does nothing other than call itself recursively
I get this warning message. Is there anything wrong with using these types of methods?
How exactly does the compiler treat these kinds of function calls, e.g. def name : String = name?
Does this mean the body is just the method's own name?
You are providing default implementations in the trait that are infinite loops, very much like in the following example:
def infiniteLoop: Unit = infiniteLoop
This is arguably the most useless and dangerous code that you could possibly put in a method of a trait. You could only make it worse by making it non-deterministic. Fortunately, the compiler gives you a very clear and precise warning:
warning: method place in trait Generic does nothing other than call itself recursively
"Is there anything wrong in using these types of methods"?: having unproductive infinite loops in your code is usually considered wrong, unless your goal is to produce as much heat as possible using computer hardware.
"How exactly compiler treat these type of function calls"?: Just like any other tail recursive function, but additionally it outputs the above warning, because it sees that it is obviously not what you want.
"Is it this call treats its body as its method name?": The body of each method declaration is what follows the =-sign. In your case, the otherwise common curly braces around the function body are omitted, and the entire function body consists only of the recursive call to itself.
If you don't want to have any unnecessary infinite loops around, simply leave the methods unimplemented:
sealed trait Generic{
def name: String
def id: Int
def place: String
}
This also has the additional advantage that the compiler can warn you if you forget to implement one of these methods in a subclass.
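For reference, a minimal sketch of the fixed version, assuming the case class fields are meant to implement the trait members (countryName is renamed to name here so that name actually gets an implementation):

sealed trait Generic {
  def name: String
  def id: Int
  def place: String
}

case class Capital(
  name: String,   // a case class val implements the abstract def; override is optional here
  id: Int,
  place: String
) extends Generic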
OK, so in your trait you define the method bodies via recursion. This means that these methods, if not overridden (and there is no reason to expect them to be, since you have already given them a definition), will call themselves recursively until a StackOverflowError happens. For example, you did not override the name method in Capital, so in this case you get a StackOverflowError at runtime:
val c = Capital("countryName", 1, "place")
c.name
So, you are warned that you have a recursive definition. The trait is sealed, so at least it cannot be extended in other places, but such a definition is still like planting mines on your own road and relying on your memory not to forget about them (and on anybody else being careful enough to check the trait definition before extending it).
I'm writing a script to automatically configure sharding for some specific MongoDB collections when the app is being deployed on a fresh cluster. The application is using the Lift framework and basically every sharded collection is mapped to a MongoRecord class extending a particular "ShardedCollection" trait. I need to call a particular method on those classes in order to get their collection name.
So the first step is to find those specific classes in the code, and for that I use ClassUtil. Then I need a way to instantiate them, and for that I thought Java reflection should be able to do it. It works, but only if those classes do not belong to an outer class.
The configuration in this specific edge case is like:
class X {
  class Y extends ShardedCollection {
  }
}
So after reading some documentation I found that I had to call YConstructor.newInstance(XObject), where newInstance takes an XObject (an instance of X) as its first argument when Y is an inner class of X. My strategy is to recursively instantiate the enclosing classes until I get the one that has the ShardedCollection trait.
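Here is roughly the working (class inside class) case, simplified; the real classes live in the application code base, so these names are just placeholders:

class X {
  class Y /* extends ShardedCollection */ {
    def collectionName: String = "y"
  }
}

object ReflectDemo extends App {
  val outer = new X
  // At the JVM level, the constructor of an inner class takes the enclosing instance
  // as its first argument.
  val yCtor = classOf[X#Y].getDeclaredConstructor(classOf[X])
  val y = yCtor.newInstance(outer)
  println(y.collectionName)
}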
The problem arises when X is no longer a class but a trait: then there is no constructor that I can use for it, but I still need to feed an XObject to newInstance. Tricky :(
To be very concise, from the Java doc:
If the constructor's declaring class is an inner class in a non-static context, the first argument to the constructor needs to be the enclosing instance
What do I do when the enclosing "thing" is a trait? (Assuming that I can't modify anything in the code base.)
How can you make code in a Scala library call type-specific code for objects supplied by a caller to that library, where the decision about which type-specific code to call is made at compile-time (statically), not at run-time?
To illustrate the concept, suppose I want to make a library function that prints objects one way if there's a CanMakeDetailedString defined for them, or just as .toString if not. See nicePrint in this example code:
import scala.language.implicitConversions

trait CanMakeDetailedString[A] extends (A => String)

def noDetailedString[A] = new CanMakeDetailedString[A] {
  def apply(a: A) = a.toString
}

object Util {
  def nicePrint[A](a: A)
    (implicit toDetail: CanMakeDetailedString[A] = noDetailedString[A])
    : Unit = println(toDetail(a))
  def doStuff[A](a: A)
    : Unit = { /* stuff goes here */ nicePrint(a) }
}
Now here is some test code:
object Main {
  import Util._

  case class Rototiller(name: String)

  implicit val rototillerDetail = new CanMakeDetailedString[Rototiller] {
    def apply(r: Rototiller) = s"The rototiller named ${r.name}."
  }

  val r = Rototiller("R51")
  nicePrint(r)
  doStuff(r)
}
Here's the output in Scala 2.11.2:
The rototiller named R51.
Rototiller(R51)
When I call nicePrint from the same scope where rototillerDetail is defined, the Scala compiler finds rototillerDetail and passes it implicitly to nicePrint. But when, from the same scope, I call a function in a different scope (doStuff) that calls nicePrint, the Scala compiler doesn't find rototillerDetail.
No doubt there are good reasons for that. I'm wondering, though, how can I tell the Scala compiler "If an object of the needed type exists, use it!"?
I can think of two workarounds, neither of which is satisfactory:
Supply an implicit toDetail argument to doStuff. This works, but it requires me to add an implicit toDetail argument to every function that might, somewhere lower in the call stack, have a use for a CanMakeDetailedString object. That is going to massively clutter my code.
Scrap the implicit approach altogether and do this in object-oriented style, making Rototiller inherit from CanMakeDetailedString by overriding a special new method like .toDetail.
Is there some technique, trick, or command-line switch that could enable the Scala compiler to statically resolve the right implicit object? (Rather than figuring it out dynamically, when the program is running, as in the object-oriented approach.) If not, this seems like a serious limitation on how much use library code can make of "typeclasses" or implicit arguments. In other words, what's a good way to do what I've done badly above?
Clarification: I'm not asking how this can be done with implicit val. I'm asking how you can get the Scala compiler to statically choose type-appropriate functions in library code, without explicitly listing, in every library function, an implicit argument for every function that might get called lower in the stack. It doesn't matter to me if it's done with implicits or anything else. I just want to know how to write generic code that chooses type-specific functions appropriately at compile-time.
Implicits are resolved at compile time, so inside doStuff the compiler doesn't know what A is and can't pick an instance for it without more information.
That information can be provided through an extra implicit parameter or a base type / interface as you suggested.
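For example, a hedged sketch of the extra-parameter option, reusing CanMakeDetailedString and noDetailedString from the question: doStuff requests the same evidence and therefore forwards the caller's instance on to nicePrint.

object Util {
  def nicePrint[A](a: A)
    (implicit toDetail: CanMakeDetailedString[A] = noDetailedString[A]): Unit =
    println(toDetail(a))

  def doStuff[A](a: A)
    (implicit toDetail: CanMakeDetailedString[A] = noDetailedString[A]): Unit = {
    /* stuff goes here */
    nicePrint(a)   // toDetail is an implicit value in scope here, so it is passed along
  }
}

With this version, doStuff(r) picks up rototillerDetail at the call site in Main, and both lines print "The rototiller named R51."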
You could also use reflection on the A object: get its runtime type, cast the object to that type, and call a predefined function named after the type that writes the string details for you. I don't really recommend it, as any OOP or FP solution is better IMHO.
I am a newbie to Scala.
Here is a Models.scala I am trying to write.
When I run sbt package it gives the error:
Models.scala:25: models.Session.Network.type does not take parameters
[error] network : Network = Network() ,
I don't understand why this error is taking place; I am not passing any parameters when doing Network(). Can someone please help me?
Here is a smaller piece of code that reproduces your problem:
case class A(b:B = B(3, 5))
case class B(i: Int, j: Int)
object A {
val B = "whatever"
}
On the first line, we get
too many arguments for method apply: (index: Int)Char in class StringOps
What happens is that when you define the signature of the case class, you are both defining the signature of the constructor (when you call with new), and of the apply method in the companion object (when you call without new).
When you put a default value on an argument (Network() in your code, and B(3, 5) in mine), that expression is compiled both in the context of the constructor and in the context of the apply method of the companion object.
As you have defined a companion object Session, the apply method is automatically added into this object. It happens that Network() in your companion object means Network.apply() on the Network object you have defined there, and it means the string B with value "whatever" in my code.
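For reference, here is a guess at a minimal Models.scala that reproduces that kind of message (the real file is not shown, so the member inside object Session is invented for illustration):

package models

case class Network(hosts: List[String] = Nil)   // hypothetical fields

case class Session(
  network: Network = Network()   // fails only in the copy compiled into object Session's apply
)

object Session {
  object Network   // shadows the case class companion inside this scope, and has no apply()
}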
What is really weird, then, is that the default expression can have two different meanings, both correct, one in the context of the constructor and one in the context of the apply method. In that case, you may get different behavior depending on whether you call with or without new.
Here is an example :
case class A(b:B = bb)
case class B(i: Int, j: Int)
object bb extends B(3, 4)
object A {
val bb = new B(7, 2)
}
object Test extends App {
println(A())
println(new A())
}
Running Test will print
A(B(7,2))
A(B(3,4))
For your specific problem, there are easy workarounds.
network: Network = models.Network(),
will work, obviously, because it is then clear that you want Network in the package and not in object Session.
network: Network = new Network(),
will work too, because with the new, the compiler will look for a Network type and not a Network value. In the companion object Session, the Network value is shadowed by the local declaration, but the Network type is not.
IMO, the former (models.Network) is clearer.
PS. I checked the specification and I believe this weird behavior is in line with it. Namely, (5.3.2) an apply method is generated inside the companion object with the same parameter list as the constructor. That includes the default values, which would then be compiled inside the companion object.
It looks like you may have some import overrides going on. Do you have an import Sessions._ someplace in the code? Notice your error refers to Session.Network, which is your implicit BSonDocument class. You're probably trying to construct the plain case class.
Try using Network explicitly: network: models.Network = models.Network()
OK, I'll explain why I ask this question. I have begun reading the Lift 2.2 source code these days.
It helps if you have happened to read the Lift source code before.
In Lift, I found that inner classes and inner traits are used very heavily.
object Menu has 2 inner traits and 4 inner classes. object Loc has 18 inner classes, 5 inner traits, 7 inner objects.
There are tons of code written like this. I want to know why the author writes like this.
Is it because of the author's personal taste, or is it a powerful use of a language feature?
Is there any trade-off for this kind of usage?
Before 2.8, you had to choose between packages and objects. The problem with packages is that they cannot contain methods or vals on their own. So you have to put all those inside another object, which can get awkward. Observe:
object Encrypt {
  private val magicConstant = 0x12345678
  def encryptInt(i: Int) = i ^ magicConstant
  class EncryptIterator(ii: Iterator[Int]) extends Iterator[Int] {
    def hasNext = ii.hasNext
    def next = encryptInt(ii.next)
  }
}
Now you can import Encrypt._ and gain access to the method encryptInt as well as the class EncryptIterator. Handy!
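For example, a quick usage sketch (say, in the REPL):

import Encrypt._

val n  = encryptInt(42)                          // the method is in scope directly
val it = new EncryptIterator(Iterator(1, 2, 3))  // and so is the nested class
println(it.toList.map(encryptInt))               // XOR is its own inverse, so this prints List(1, 2, 3)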
In contrast,
package encrypt {
  object Encrypt {
    private[encrypt] val magicConstant = 0x12345678
    def encryptInt(i: Int) = i ^ magicConstant
  }
  class EncryptIterator(ii: Iterator[Int]) extends Iterator[Int] {
    def hasNext = ii.hasNext
    def next = Encrypt.encryptInt(ii.next)
  }
}
It's not a huge difference, but it makes the user import both encrypt._ and encrypt.Encrypt._ or have to keep writing Encrypt.encryptInt over and over. Why not just use an object instead, as in the first pattern? (There's really no performance penalty, since nested classes aren't actually Java inner classes under the hood; they're just regular classes as far as the JVM knows, but with fancy names that tell you that they're nested.)
In 2.8, you can have your cake and eat it too: call the thing a package object, and the compiler will rewrite the code for you so it actually looks like the second example under the hood (except the object Encrypt is actually called package internally), but behaves like the first example in terms of namespace--the vals and defs are right there without needing an extra import.
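A sketch of the 2.8 package-object version (same content as the second example, but encryptInt is visible throughout the package and to anyone who imports encrypt._):

package object encrypt {
  private[encrypt] val magicConstant = 0x12345678
  def encryptInt(i: Int) = i ^ magicConstant
}

package encrypt {
  class EncryptIterator(ii: Iterator[Int]) extends Iterator[Int] {
    def hasNext = ii.hasNext
    def next = encryptInt(ii.next)   // no extra import needed inside the package
  }
}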
Thus, projects that were started pre-2.8 often use objects to enclose lots of stuff as if they were a package. Post-2.8, one of the main motivations has been removed. (But just to be clear, using an object still doesn't hurt; it's more that it's conceptually misleading than that it has a negative impact on performance or whatnot.)
(P.S. Please, please don't try to actually encrypt anything that way except as an example or a joke!)
Putting classes, traits and objects in an object is sometimes required when you want to use abstract type variables, see e.g. http://programming-scala.labs.oreilly.com/ch12.html#_parameterized_types_vs_abstract_types
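For example, a small sketch of that pattern (the names are illustrative, not taken from the book): a nested class can refer to an abstract type member of its enclosing trait, and each concrete object pins that type down.

trait Repository {
  type Key                                     // abstract type member
  class Entry(val key: Key, val data: String)  // nested class uses the abstract type
  def lookup(k: Key): Option[Entry]
}

object UserRepo extends Repository {
  type Key = String
  private var entries = Map.empty[Key, Entry]
  def lookup(k: Key): Option[Entry] = entries.get(k)
  def add(k: String, data: String): Unit = entries += k -> new Entry(k, data)
}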
It can be both. Among other things, an instance of an inner class/trait has access to the variables of its parent. Inner classes have to be created with a parent instance, which is an instance of the outer type.
In other cases, it's probably just a way of grouping closely related things, as in your object example. Note that the trait LocParam is sealed, which means that all subclasses have to be in the same compile unit/file.
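For example (a made-up sketch, not Lift's actual members):

object Loc {
  sealed trait LocParam
  case object Hidden extends LocParam               // fine: same file as the sealed trait
  case class Title(text: String) extends LocParam
}
// In another file, `class Custom extends Loc.LocParam` would not compile,
// because LocParam is sealed.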
sblundy has a decent answer. One thing to add is that only with Scala 2.8 do you have package objects which let you group similar things in a package namespace without making a completely separate object. For that reason I will be updating my Lift Modules proposal to use a package object instead of a simple object.