Protected abstract vars in Scala can be implemented public?

Could someone explain why Scala allows a public variable to satisfy the implementation of an abstract member declared protected? My first assumption was that the compiler would complain, but I created a small test to see if this worked, and to my surprise it does. Is there an advantage to this? (Perhaps this is normal in OOP?) Are there any ways to avoid the accidental pitfall?
object NameConflict extends App {
  abstract class A {
    protected[this] var name: String
    def speak = println(name)
  }

  class B(var name: String) extends A { // notice we've declared a public var
  }

  val t = new B("Tim")
  t.speak
  println(t.name) // name is exposed now?
}

It's normal, and it works as in Java: sometimes it's desirable to widen the visibility of a member.
You can't do it the other way around and narrow visibility in a subclass, because the member can, by definition, be accessed through the supertype.
If invoking a method has terrible consequences, keep the method private and use a template method that can be overridden; the default implementation would invoke the dangerous method.
abstract class A {
  private[this] def dangerous = ???
  final protected def process: Int = {
    dangerous
    template
  }
  protected def template: Int = ???
}

class B extends A {
  override def template = 5
}
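For completeness, here is a minimal sketch (the class and method names are made up) showing that the compiler rejects the opposite direction, narrowing visibility in an override:
abstract class Parent {
  def greet: String = "hi" // public in the supertype
}

class Child extends Parent {
  // Does not compile if uncommented; scalac reports weaker access
  // privileges in the overriding method:
  // protected override def greet: String = "hello"
}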

Related

Force inheriting class to implement methods as protected

I've got a trait:
trait A {
  def some: Int
}
and an object mixing it in:
object B extends A {
  def some = 1
}
The question is: is there a way to declare some in A such that all inheriting objects have to declare the some method as protected, for example? Something that would make the compiler yell at the above implementation of some in B?
UPDATE:
Just a clarification on the purpose of my question: within an organization, there are software development standards that are agreed upon. These standards, for example 'the some method is always to be declared private when inheriting from trait A', are generally communicated via specs or documents listing all the standards, or via tools such as Jenkins, etc. I am wondering if we could go even further and enforce these standards right in the code, which would save a lot of time correcting issues raised by Jenkins, for example.
UPDATE 2:
A solution I could think of is as follows:
abstract class A(
  protected val some: Int
) {
  protected def none: String
}
Use an abstract class instead of a trait and have the functions or values that I need to be protected by default passed in the constructor:
object B extends A(some = 1) {
  def none: String = "none"
}
Note that in this case, some is protected by default, unless the developer decides to expose it through another method. However, there is no guarantee that none will be protected by default as well.
This works for the use case I described above. The problem with this implementation is that if we have a hierarchy of abstract classes, we would have to add all the constructor parameters of the parent to every inheriting child in the hierarchy. For example:
abstract class A(
  protected val some: Int
)

abstract class B(
  someImp: Int,
  protected val none: String
) extends A(some = someImp)

object C extends B(
  someImp = 1,
  none = "none"
)
In contrast, using traits, we would have been able to simply write:
trait A {
  protected val some: Int
}

trait B extends A {
  protected val none: String
}

object C extends B {
  val some = 1
  val none = "none"
}
I don't see any straightforward way to restrict subclasses from choosing a wider visibility for inherited members.
It depends on why you want to hide the field some, but if the purpose is just to forbid end users from accessing the field, you can use a slightly modified form of the cake pattern:
trait A {
  trait A0 {
    protected def some: Int
  }
  def instance: A0
}

object B extends A {
  def instance = new A0 {
    def some = 5
  }
}
Yeah, it looks nasty, but the compiler will yell when someone tries to do:
B.instance.some
Another version of this solution is to do things as in your example (adding protected to the member some in A), but to never directly expose a reference of type B (always return references of type A instead); a sketch of that follows below.
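A minimal sketch of that second variant (the apply factory is an assumption, not from the question):
trait A {
  protected val some: Int
}

object A {
  // The concrete implementation widens `some`, but it never escapes as its own type.
  private class B extends A {
    val some = 1
  }

  // Only the A-typed view is handed out, so `some` stays inaccessible to callers.
  def apply(): A = new B
}

// A().some  // does not compile: value some in trait A cannot be accessed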

How to solve "Implementation restriction: trait ... accesses protected method ... inside a concrete trait method."

A Java library class I'm using declares
protected Page getPage() { ... }
Now I want to write a helper Scala mixin to add features that I often use. I don't want to extend the class, because the Java class has different subclasses that I want to extend in different places. The problem is that if I use getPage() in my mixin trait, I get this error:
Implementation restriction: trait MyMixin accesses protected method getPage inside a concrete trait method.
Is there a solution how to make it work, without affecting my subclasses? And why is there this restriction?
So far, I have come up with a work-around: I override the method in the trait as
override def getPage(): Page = super.getPage()
This seems to work, but I'm not completely satisfied. Luckily I don't need to override getPage() in my subclasses, but if I did, I'd get two overrides of the same method and this work-around wouldn't work.
The problem is that even though the trait extends the Java class, the implementation is not actually in something that extends the Java class. Consider
class A { def f = "foo" }
trait T extends A { def g = f + "bar" }
class B extends T { def h = g + "baz" }
In the actual bytecode for B we see
public java.lang.String g();
  Code:
   0:   aload_0
   1:   invokestatic   #17; //Method T$class.g:(LT;)Ljava/lang/String;
   4:   areturn
which means it just forwards to something called T$class, which it turns out is
public abstract class T$class extends java.lang.Object {
  public static java.lang.String g(T);
    Code:
     ...
So the body of the code isn't called from a subclass of A at all.
Now, with Scala that's no problem, because Scala simply omits the protected flag from the bytecode. But Java enforces that only subclasses can call protected methods.
And thus you have the problem, and the message.
You cannot easily get around this problem, though the error message suggests what is perhaps the best alternative:
public class JavaProtected {
  protected int getInt() { return 5; }
}
scala> trait T extends JavaProtected { def i = getInt }
<console>:8: error: Implementation restriction: trait T accesses
protected method getInt inside a concrete trait method.
Add an accessor in a class extending class JavaProtected as a workaround.
Note the last line.
class WithAccessor extends JavaProtected { protected def myAccessor = getInt }
trait T extends WithAccessor { def i = myAccessor }
works.
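To round this off, a hedged sketch of how a concrete class would then use the mixin (MyPage is a made-up name; WithAccessor and T are from the workaround above):
class MyPage extends WithAccessor with T

object Demo {
  def main(args: Array[String]): Unit =
    println(new MyPage().i) // prints 5: i goes through myAccessor instead of calling getInt directly
}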

Delaying trait initialization

I need a smart mechanism for component composition which allows mixed in traits to initialize after the composed component. The following throws a NullPointerException:
class Component {
  def addListener(pf: PartialFunction[Any, Unit]) {}
}

trait DynamicComponent {
  protected def component: Component
  component.addListener {
    case x =>
  }
}

class Foo extends DynamicComponent {
  protected val component = new Component
}

new Foo // -> NullPointerException
The following things are not options for me:
Using protected lazy val component; that would produce an avalanche of dozens of vals needing to become lazy, something I do not want.
Putting addListener in a method, e.g. initDynamic(); because I will be mixing in many traits, and I don't want to have to remember to call half a dozen initFoo() methods.
Using DelayedInit. This doesn't work with traits, at least according to the Scaladocs.
I could live with a single init() call, but only under the following conditions:
all mixed-in traits can easily declare that they want to be invoked in this one single call
it is a compile error to forget the init() statement.
You can delay the initialization of a trait by using early definitions (see section 5.1.6 of the Scala Language Specification). The early field is evaluated before the DynamicComponent constructor body runs, so component is no longer null when addListener is called:
class Foo extends {
  protected val component = new Component
} with DynamicComponent
It's even clunkier than your solution, but you can always require the creation of a val that must be set with the init() method. You could choose not to call it last and get an error at runtime, but at least you won't forget it entirely:
class Component {
  def addListener(pf: PartialFunction[Any, Unit]) {
    println("Added")
  }
}

trait Dyn {
  protected def component: Component
  protected val initialized: Init
  class Init private () {}
  private object Init { def apply() = new Init() }
  def init() = { component.addListener { case x => }; Init() }
}

class Foo extends Dyn {
  protected val component = new Component
  protected val initialized = init()
}
No cheating!:
> class Bar extends Dyn { protected val component = new Component }
<console>:12: error: class Bar needs to be abstract, since value
initialized in trait Dyn of type Bar.this.Init is not defined
class Bar extends Dyn { protected val component = new Component }
The advantage of this is that it also works if you need multiple things to be in place before you can initialize all of them cooperatively, or if your Component class is final, so that you can't mix anything else into it.
An idea could be to use the trick described here:
Cake pattern: how to get all objects of type UserService provided by components
All the components that should be initialized could be registered in some Seq[InitializableComponent]. Then you could initialize all registered components with a foreach, as in the sketch below.
No component will be forgotten in that Seq, because they are registered automatically, but you can still forget to call the foreach anyway...
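A minimal sketch of that registration idea (InitializableComponent and the registry trait are assumptions, not taken from the linked answer):
trait InitializableComponent {
  def init(): Unit
}

trait ComponentRegistry {
  // Components add themselves here at construction time, so none is forgotten.
  private var components = Vector.empty[InitializableComponent]
  protected def register(c: InitializableComponent): Unit = components :+= c

  // One call initializes everything that registered itself...
  // ...but nothing forces you to remember to call it.
  def initAll(): Unit = components.foreach(_.init())
}

class Foo extends ComponentRegistry {
  register(new InitializableComponent {
    def init(): Unit = println("listener attached")
  })
}

// new Foo().initAll() // prints "listener attached"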
Here is one idea (I am happy to read about other suggestions):
class Component {
  def addListener(pf: PartialFunction[Any, Unit]) {
    println("Added")
  }
}

trait DynamicComponentHost {
  protected def component: Component with DynamicPeer

  protected trait DynamicPeer {
    _: Component =>
    addListener {
      case x =>
    }
  }
}

class Foo extends DynamicComponentHost {
  protected val component = new Component with DynamicPeer
}

new Foo
So basically I am forcing the component to mix in a type that can only be provided by the mixed-in trait. Reasonable? It looks a bit too complicated in my eyes.

Is there any advantage to defining a val over a def in a trait?

In Scala, a val can override a def, but a def cannot override a val.
So, is there an advantage to declaring a trait e.g. like this:
trait Resource {
  val id: String
}
rather than this?
trait Resource {
  def id: String
}
The follow-up question is: how does the compiler treat calling vals and defs differently in practice, and what kind of optimizations does it actually do with vals? The compiler insists on the fact that vals are stable; what does that mean in practice for the compiler? Suppose the subclass is actually implementing id with a val. Is there a penalty for having it specified as a def in the trait?
If my code itself does not require stability of the id member, can it be considered good practice to always use defs in these cases, and to switch to vals only when a performance bottleneck has been identified here, however unlikely that may be?
Short answer:
As far as I can tell, the values are always accessed through an accessor method. Using def defines a simple method that returns the value. Using val defines a private [*] final field plus an accessor method. So in terms of access, there is very little difference between the two. The difference is conceptual: a def is re-evaluated each time it is called, while a val is evaluated only once. This can obviously have an impact on performance.
[*] private at the Java level
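As a quick illustration of the override direction mentioned in the question (a minimal sketch):
trait HasId { def id: String }

class Stable extends HasId {
  val id = "fixed" // fine: a val can implement/override a def
}

// The reverse is rejected:
// trait HasVal { val id: String }
// class Broken extends HasVal {
//   def id = "nope" // error: method id needs to be a stable, immutable value
// }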
Long answer:
Let's take the following example:
trait ResourceDef {
  def id: String = "5"
}

trait ResourceVal {
  val id: String = "5"
}
ResourceDef and ResourceVal produce essentially the same interface code, ignoring initializers (note the extra setter generated for the val):
public interface ResourceVal extends ScalaObject {
  volatile void foo$ResourceVal$_setter_$id_$eq(String s);
  String id();
}

public interface ResourceDef extends ScalaObject {
  String id();
}
For the subsidiary classes produced (which contain the implementations of the methods), what ResourceDef produces is as you would expect, noting that the method is static:
public abstract class ResourceDef$class {
  public static String id(ResourceDef $this) {
    return "5";
  }

  public static void $init$(ResourceDef resourcedef) {}
}
and for the val, we simply call the initialiser in the containing class:
public abstract class ResourceVal$class {
  public static void $init$(ResourceVal $this) {
    $this.foo$ResourceVal$_setter_$id_$eq("5");
  }
}
When we start extending:
class ResourceDefClass extends ResourceDef {
  override def id: String = "6"
}

class ResourceValClass extends ResourceVal {
  override val id: String = "6"
  def foobar() = id
}

class ResourceNoneClass extends ResourceDef
Where we override, we get a method in the class which just does what you expect. The def is a simple method:
public class ResourceDefClass implements ResourceDef, ScalaObject {
  public String id() {
    return "6";
  }
}
and the val defines a private field and accessor method:
public class ResourceValClass implements ResourceVal, ScalaObject {
  public String id() {
    return id;
  }

  private final String id = "6";

  public String foobar() {
    return id();
  }
}
Note that even foobar() doesn't use the field id, but uses the accessor method.
And finally, if we don't override, then we get a method which calls the static method in the trait auxiliary class:
public class ResourceNoneClass implements ResourceDef, ScalaObject {
  public volatile String id() {
    return ResourceDef$class.id(this);
  }
}
I've cut out the constructors in these examples.
So, the accessor method is always used. I assume this is to avoid complications when extending multiple traits which could implement the same methods. It gets complicated really quickly.
Even longer answer:
Josh Suereth did a very interesting talk on Binary Resilience at Scala Days 2012, which covers the background to this question. The abstract for this is:
This talk focuses on binary compatibility on the JVM and what it means to be binary compatible. An outline of the machinations of binary incompatibility in Scala is described in depth, followed by a set of rules and guidelines that will help developers ensure their own library releases are both binary compatible and binary resilient.
In particular, this talk looks at:
Traits and binary compatibility
Java Serialization and anonymous classes
The hidden creations of lazy vals
Developing code that is binary resilient
The difference is mainly that you can implement/override a def with a val, but not the other way around. Moreover, a val is evaluated only once, while a def is evaluated every time it is used; using def in the abstract definition gives the code that mixes in the trait more freedom in how to handle and/or optimize the implementation. So my point is: use defs whenever there isn't a clear, good reason to force a val.
A val expression is evaluated once, at variable declaration; it is strict and immutable.
A def is re-evaluated each time you call it.
def is evaluated by name and val by value. This means, more or less, that a val must always hold an actual value, while a def is more like a promise that you will get a value when evaluating it. For example, if you have a function
def trace(s: => String) { if (level == "trace") println(s) } // note the => in the parameter definition
that logs an event only if the log level is set to trace, and you want to log an object's toString: if you have overridden toString with a value, then you need to pass that value to the trace function; if toString however is a def, it will only be evaluated once it's sure that the log level is trace, which could save you some overhead.
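A runnable sketch of that idea (the Logger object and the level var are made up for illustration):
object Logger {
  var level = "info"

  // The by-name parameter is only evaluated when the level matches.
  def trace(s: => String): Unit = { if (level == "trace") println(s) }
}

object Demo extends App {
  def expensive: String = { println("computed!"); "details" }

  Logger.trace(expensive) // level is "info": `expensive` is never evaluated
  Logger.level = "trace"
  Logger.trace(expensive) // prints "computed!" and then "details"
}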
def gives you more flexibility, while val is potentially faster
Compiler-wise, traits are compiled to Java interfaces, so when defining a member on a trait it makes no difference whether it's a val or a def. The difference in performance would depend on how you choose to implement it.
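On the "stable" point from the question: only stable members (vals, lazy vals, objects) can serve as the prefix of a path-dependent type; a def cannot. A minimal sketch (the names are made up):
class Registry { class Key }

trait Config {
  val registry: Registry // stable, so registry.Key is a legal type below
  def makeKey: registry.Key = new registry.Key
  // Had registry been a def, `registry.Key` would be rejected with
  // "stable identifier required".
}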

How to implement intermediate types for implicit methods?

Assume I want to offer method foo on an existing type A outside of my control. As far as I know, the canonical way to do this in Scala is to implement an implicit conversion from A to some type that implements foo. Now I basically see two options.
Define a separate, maybe even hidden, class for the purpose:
protected class Fooable(a: A) {
  def foo(...) = { ... }
}

implicit def a2fooable(a: A) = new Fooable(a)
Define an anonymous class inline:
implicit def a2fooable(a: A) = new { def foo(...) = { ... } }
Variant 2) is certainly less boilerplate, especially when lots of type parameters are involved. On the other hand, I think it should create more overhead, since (conceptually) one class per conversion is created, as opposed to one class globally in 1).
Is there a general guideline? Or is there no difference, because the compiler/VM gets rid of the overhead of 2)?
Using a separate class is better for performance, as the alternative uses reflection.
Consider that
new { def foo(...) = { ... } }
is really
new AnyRef { def foo(...) = { ... } }
Now, AnyRef doesn't have a method foo. In Scala, this type is actually AnyRef { def foo(...): ... }, which, if you remove AnyRef, you should recognize as a structural type.
At compile time, this type can be passed back and forth, and everywhere it will be known that the method foo is callable. However, there's no structural type on the JVM, and adding an interface would require a proxy object, which would cause problems such as breaking referential equality (i.e., an object would not be equal to the structural-type version of itself).
The way found around that was to use cached reflection calls for structural types.
So, if you want to use the Pimp My Library pattern in any performance-sensitive application, declare a class; a sketch follows.
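A minimal sketch of the "declare a class" approach (RichString, shout, and Conversions are made-up names; in Scala 2.10+ an implicit class achieves the same with less boilerplate):
// A named conversion target: calls to `shout` compile to ordinary method calls,
// with no structural types and no reflection involved.
class RichString(s: String) {
  def shout: String = s.toUpperCase + "!"
}

object Conversions {
  implicit def stringToRich(s: String): RichString = new RichString(s)
}

object Demo extends App {
  import Conversions._
  println("hello".shout) // prints HELLO!
}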
I believe 1 and 2 get compiled to the same bytecode (except for the class name that gets generated in case 2).
If Fooable exists only so that you can implicitly convert A to Fooable (and you're never going to directly create and use a Fooable), then I would go with option 2.
However, if you control A (meaning A is not a Java library class that you can't subclass), I would consider using a trait instead of implicit conversions to add behaviour to A.
UPDATE:
I have to reconsider my answer. I would use variant 1 of your code, because variant 2 turns out to use reflection (Scala 2.8.1 on Linux).
I compiled these two versions of the same code, decompiled them to Java with jd-gui, and here are the results:
source code with named class
class NamedClass { def Foo: String = "foo" }

object test {
  implicit def StrToFooable(a: String) = new NamedClass
  def main(args: Array[String]) { println("bar".Foo) }
}
source code with anonymous class
object test {
  implicit def StrToFooable(a: String) = new { def Foo: String = "foo" }
  def main(args: Array[String]) { println("bar".Foo) }
}
The "named" version generates a NamedClass.class that decompiles to this Java:
public class NamedClass
  implements ScalaObject
{
  public String Foo()
  {
    return "foo";
  }
}
The anonymous version generates a test$$anon$1 class that decompiles to the following Java:
public final class test$$anon$1
{
  public String Foo()
  {
    return "foo";
  }
}
So they are almost identical, except that the anonymous one is final (apparently they want to make extra sure you won't go out of your way to try and subclass an anonymous class...).
However, at the call site I get this Java for the "named" version:
public void main(String[] args)
{
  Predef..MODULE$.println(StrToFooable("bar").Foo());
}
and this for the anonymous one:
public void main(String[] args) {
  Object qual1 = StrToFooable("bar");
  Object exceptionResult1 = null;
  try {
    exceptionResult1 = reflMethod$Method1(qual1.getClass()).invoke(qual1, new Object[0]);
    Predef..MODULE$.println((String)exceptionResult1);
    return;
  } catch (InvocationTargetException localInvocationTargetException) {
    throw localInvocationTargetException.getCause();
  }
}
I googled a little and found that others have reported the same thing, but I haven't found any more insight as to why this is the case.