How can I meaningfully use precondition contracts in D interfaces?

When I override functions in D that have "in" contracts, the inherited "in" contracts are checked first. If they fail, the overriding "in" contracts are checked. If I don't specify any "in" contract, it is treated as an empty "in" contract. So the following code compiles and runs successfully.
module main;

import std.stdio;

interface I
{
    void write( int i )
    in
    {
        assert( i > 0 );
    }
}

class C : I
{
    void write( int i )
    {
        writeln( i );
    }
}

int main()
{
    I i = new C;
    i.write( -5 );
    getchar();
    return 0;
}
I only want the precondition of I.write() to be checked when I call i.write(), since that is what the compiler statically knows to be sufficient for I.write() to run correctly. Checking all preconditions after dynamic dispatch strikes me as odd from an OO perspective, since encapsulation is lost.
I could repeat the precondition, or write in { assert( false ); } in every class implementing the interface (sketched below), but that's a pain. Is that a design error in the D language? Or is there a proper, scalable way to do this?
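The assert( false ) variant, for instance, would mean writing something like this in every implementing class:
class C : I
{
    void write( int i )
    in
    {
        // never satisfiable on its own, so only the inherited
        // contract from I.write can admit a call
        assert( false );
    }
    body
    {
        writeln( i );
    }
}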

http://dlang.org/dbc.html
If a function in a derived class overrides a function in its super class, then only one of the in contracts of the function and its base functions must be satisfied. Overriding functions then becomes a process of loosening the in contracts.
A function without an in contract means that any values of the function parameters are allowed. This implies that if any function in an inheritance hierarchy has no in contract, then in contracts on functions overriding it have no useful effect.
Conversely, all of the out contracts need to be satisfied, so overriding functions becomes a process of tightening the out contracts.
It is actually a difficult design puzzle where polymorphic behavior comes into question. Look, for example, at this bug report with its related long discussion: http://d.puremagic.com/issues/show_bug.cgi?id=6857
Regarding the question of how to achieve the desired behavior: mixins always work when copy-paste needs to be avoided, but I am not sure it is acceptable from the point of view of the Design by Contract paradigm. Unfortunately, this needs advice from someone more theoretically competent on the topic.
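For what it's worth, an untested sketch of the mixin route (checkedWrite and writeImpl are made-up names): the contract and the forwarding body live in one string mixin, and each implementing class pastes it in instead of copying the contract by hand.
enum checkedWrite = q{
    void write( int i )
    in
    {
        assert( i > 0 );
    }
    body
    {
        writeImpl( i );
    }
};

class C : I
{
    mixin( checkedWrite );
    private void writeImpl( int i )
    {
        writeln( i );
    }
}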

A precondition in D is a requirement for the function to run correctly. If you override the function, you write new code for it; the old precondition, which was a requirement for the old code, is not necessarily a requirement for the new code.

So this issue, while not discussed directly in terms of interfaces, is covered by http://d.puremagic.com/issues/show_bug.cgi?id=6856
This might be a hard one to get in, though; Walter is big on the no-breaking-changes thing.

Dependency Injection before generation

This is a follow-up question to my previous question (Difference between "new" and "gen").
Is there a way to pass dependencies into a struct before generation occurs?
I'm interested in trying to write my code in a way that is easily tested. Currently, our codebase uses get_enclosing_unit() frequently to acquire pointers to helper structs such as a translator or params. This creates lots of bidirectional dependencies in our codebase, which makes it hard to test pieces independently of the other structs.
Here is an example of what I am trying to avoid.
pregenerate() is also {
    var translator : my_translator_s = get_enclosing_unit(some_enclosing_unit).get_translator_pointer();
};
I'm trying to avoid depending on some_enclosing_unit, since it doesn't relate to my struct and gets in the way of unit testing.
With the lack of a constructor in e, I'm lost as to how to pass a dependency in from the calling unit/struct without using get_enclosing_unit(). "new... with" seems like it might help, but as I learned in my last question, it doesn't generate the underlying fields, and "gen... keeping" doesn't set my generation-time dependencies until after generation has completed.
There is no easy answer because your architecture seems to be entangled already.
You are right to be suspicious of these bidirectional, vertical dependencies in your instance tree. In general, one should follow the constraints-from-above (CFA) strategy, where you pass dependencies down the hierarchy, as in:
unit child_u {
    p_tr: translator_s;
    keep soft p_tr == NULL; // safety catch, in case you forget to constrain it
};

unit parent_u {
    tr: translator_s;
    child: child_u is instance;
    keep child.p_tr == tr;
};
Also, I recommend not having generation dependencies between units. This way you can keep all your pointers to units non-generatable and connect them in the connect_pointers() method of a unit, which is called after generation (see the documentation):
extend child_u {
    !p_parent: parent_u;
};

extend parent_u {
    connect_pointers() is also {
        child.p_parent = me;
    };
};
But then of course you can't have constraints in child that point to parent.
In the case where you absolutely need a generated pointer, use keep soft <ptr> == NULL to provoke failure in case you forgot to constrain it.
Just my 2 cents.

Benefit of explicitly providing the method return type or variable type in Scala

This question may be very silly, but I am a little confused about which is the best way to do this in Scala.
In Scala, the compiler does type inference and assigns the closest (or maybe the most restrictive) type to each variable or method.
I am new to Scala, and from many code samples and libraries I have noticed that much of the time people do not explicitly provide the types. But in most of the code I wrote, I was (and still am) providing the types explicitly. For example:
val someVal: String = "def"

def getMeResult(): List[String] = {
  val list: List[String] = List("abc", "def")
  list
}
The reason I started writing this, especially for method return types, is that when I write a method, I know what it should return, so providing the return type explicitly lets me catch my own mistakes. I also find it easier to understand what a method returns by reading the return type itself; otherwise I have to work out the type of the last statement.
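For instance, with the method above, the explicit annotation acts as a guard:
def getMeResult(): List[String] = {
  val list = List("abc", "def")
  // If I accidentally returned list.mkString(",") here, the explicit
  // List[String] annotation would turn the slip into a compile error
  // instead of silently changing the method's type to String.
  list
}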
So my questions/doubts are:
1. Does it take less compilation time, since the compiler doesn't have to infer as much? Or does it not matter much?
2. What is the normal standard in the Scala world?
From "Scala in Depth" chapter 4.5:
For a human reading a nontrivial method implementation, inferring the
return type can be troubling. It’s best to explicitly document and
enforce return types in public APIs.
From "Programming in Scala" chapter 2:
Sometimes the Scala compiler will require you to specify the result
type of a function. If the function is recursive, for example, you
must explicitly specify the function’s result type.
It is often a good idea to indicate function result types explicitly.
Such type annotations can make the code easier to read, because the
reader need not study the function body to figure out the inferred
result type.
From "Scala in Action" chapter 2.2.3:
It’s a good practice to specify the return type for the users of the
library. If you think it’s not clear from the function what its return
type is, either try to improve the name or specify the return type.
From "Programming Scala" chapter 1:
Recursive functions are one exception where the execution scope
extends beyond the scope of the body, so the return type must be
declared.
For simple functions perhaps it’s not that important to show it
explicitly. However, sometimes the inferred type won’t be what’s
expected. Explicit return types provide useful documentation for the
reader. I recommend adding return types, especially in public APIs.
You have to provide explicit return types in the following cases:
When you explicitly call return in a method.
When a method is recursive (see the sketch after this list).
When two or more methods are overloaded and one of them calls another; the calling method needs a return type annotation.
When the inferred return type would be more general than you intended, e.g., Any.
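For example, a minimal sketch of the recursive case (a made-up factorial):
// This does not compile without the explicit ": Int"; the compiler
// cannot infer a result type from the method's own recursive call.
def factorial(n: Int): Int =
  if (n <= 1) 1 else n * factorial(n - 1)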
Another reason, which has not yet been mentioned in the other answers, is the following. You probably know that it is a good idea to program to an interface, not an implementation.
In the case of return values of functions or methods, that means that you don't want users of the function or method to know what specific implementation of some interface (or trait) the function returns - that's an implementation detail you want to hide.
If you write a method like this:
trait Example
class ExampleImpl1 extends Example { ... }
class ExampleImpl2 extends Example { ... }
def example() = new ExampleImpl1
then the return type of the method will be inferred to be ExampleImpl1 - so, it is exposing the fact that it is returning a specific implementation of trait Example. You can use an explicit return type to hide this:
def example(): Example = new ExampleImpl1
The standard rule is to use explicit types for API (in order to specify the type precisely and as a guard against refactoring) and also for implicits (especially because implicits without an explicit type may be ignored if the definition site is after the use site).
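A minimal sketch of the implicit case (Point and byX are made-up names):
object Orderings {
  case class Point(x: Int, y: Int)

  // Without the explicit Ordering[Point] annotation, an implicit that is
  // defined textually after its use site in the same file may silently
  // fail to be found.
  implicit val byX: Ordering[Point] = Ordering.by(_.x)
}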
To the first question, type inference can be a significant tax, but that is balanced against the ease of both writing and reading expressions.
In the example, the type on the local list is not even "better Java." It's just visual clutter.
However, it should be easy to read the inferred type. Occasionally, I have to fire up the IDE just to tell me what is inferred.
By implication, methods should be short enough so that it's easy to scan for the result type.
Sorry for the lack of references. Maybe someone else will step forward; the topic is frequent on MLs and SO.
2. The Scala style guide says:
Use type inference where possible, but put clarity first, and favour explicitness in public APIs.
You should almost never annotate the type of a private field or a local variable, as their type will usually be immediately evident in their value:
private val name = "Daniel"
However, you may wish to still display the type where the assigned value has a complex or non-obvious form.
All public methods should have explicit type annotations. Type inference may break encapsulation in these cases, because it depends on internal method and class details. Without an explicit type, a change to the internals of a method or val could alter the public API of the class without warning, potentially breaking client code. Explicit type annotations can also help to improve compile times.
The twitter scala style guide says of method return types:
While Scala allows these to be omitted, such annotations provide good documentation: this is especially important for public methods. Where a method is not exposed and its return type obvious, omit them.
I think there's a broad consensus that explicit types should be used for public APIs, and shouldn't be used for most local variable declarations. When to use explicit types for "internal" methods is less clear-cut and more a matter of judgement; different organizations have different standards.
1. Type inference doesn't seem to visibly affect compilation time for the line where the inference happens (aside from a few rare cases with implicits which are basically compiler bugs) - after all, the compiler still has to check the type, which is pretty much the same calculation it would use to infer it. But if a method return type is inferred then anything using that method has to be recompiled when that method changes.
So inferring a method (or public variable) that's used in many places can slow down compilation (particularly if you're using incremental compilation). But inferring local or private variables, private methods, or public methods that are only used in one or two places, makes no (significant) difference.

Parentheses for non-pure functions

I know that, by convention, I should use () if a method has side effects:
def method1(a: String): Unit = {
  //.....
}

// or

def method2(): Unit = {
  //.....
}
Do I have to do the same thing if a method doesn't have side effects but isn't pure either: it takes no parameters and, of course, returns a different result each time it's called?
def method3() = getRemoteSessionId("login", "password")
Edit: After reviewing Luigi Plinge's comment, I came to think that I should rewrite the answer. This still isn't a clear yes/no answer, just some suggestions.
First: The case regarding var is an interesting one. Declaring a var foo gives you a getter foo without parentheses. Obviously it is an impure call, but it does not have a side effect (it does not change any state observable by the caller).
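A minimal illustration (Counter is a made-up name):
class Counter {
  var count = 0 // generates a parameterless getter `count` and a setter `count_=`
}

object Demo extends App {
  val c = new Counter
  println(c.count) // impure: the value may differ between calls...
  c.count = 1      // ...but mutation happens in the setter, not the getter
}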
Second, regarding your question: I would now argue not that the problem with getRemoteSessionId is that it is impure, but that it actually makes the server maintain a session login for you, so you clearly interfere destructively with the environment. method3() should therefore be written with parentheses because of this side-effecting nature.
A third example: Getting the contents of a directory should thus be written as file.children and not file.children(), because again it is an impure function but should not have side effects (other than, perhaps, read-only access to your file system).
A fourth example: Given the above, you should write System.currentTimeMillis. I do tend to write System.currentTimeMillis(), however...
Using this fourth case, my tentative answer would be: parentheses are preferable when the function either has a side effect, or is impure and depends on state not under the control of your program.
With this definition, it would not matter whether getRemoteSessionId has known side effects or not. On the other hand, it implies going back to writing file.children()...
The Scala style guide recommends:
Methods which act as accessors of any sort (either encapsulating a field or a logical property) should be declared without parentheses except if they have side effects.
It doesn't mention any other use case besides accessors. So the question boils down to whether you regard this method as an accessor, which in turn depends on how the rest of the class is set up, and perhaps also on the (intended) call sites.
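To make that concrete, a hedged sketch (all names are made up):
class RemoteFs(dir: java.io.File) {
  // accessor-like and side-effect free (a read-only lookup): no parentheses
  def children: Seq[String] = Option(dir.list()).toSeq.flatten

  // interferes with the environment (the server starts maintaining a
  // session for you): keep the parentheses
  def getRemoteSessionId(): Long = ??? // stand-in for the real remote call
}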

In Scala, what is an "early initializer"?

In Martin Odersky's recent post about levels of programmer ability in Scala, in the Expert library designer section, he includes the term "early initializers".
These are not mentioned in Programming in Scala. What are they?
Early initializers are the part of a subclass's constructor that is intended to run before its superclass's constructor. For example:
abstract class X {
  val name: String
  val size = name.size
}

class Y extends {
  val name = "class Y"
} with X
If the code were instead written as
class Z extends X {
  val name = "class Z"
}
then a null pointer exception would occur when Z was initialized, because size is initialized before name in the normal ordering of initialization (superclass before subclass).
As far as I can tell, the motivation (as given in the link above) is:
"Naturally when a val is overridden, it is not initialized more than once. So though x2 in the above example is seemingly defined at every point, this is not the case: an overridden val will appear to be null during the construction of superclasses, as will an abstract val."
I don't see why this is natural at all. It is entirely possible that the r.h.s. of an assignment has a side effect. Note that such a code structure is completely impossible in either C++ or Java (and I would guess Smalltalk, although I can't speak for that language). In fact, those languages force you to make such dual assignments implicit...ticilpmi...EXplicit via constructors. Given the uncertainty about r.h.s. side effects, it really doesn't seem like much of a motivation at all: the ability to sidestep superclass side effects (thereby voiding superclass invariants) via ASSIGNMENT? Ick!
Are there other "killer" motivations for allowing such an unsafe code structure? Object-oriented languages have done without such a mechanism for about 40 years (30-odd years, if you count from the creation of the language), so why include it now?
It...just...seems...dangerous.
On second thought, a year later...
This is just cake. Literally.
Not an early ANYTHING. Just cake (mixins).
Cake is a term/pattern coined by the Grand Pooh-bah himself, one that employs Scala's trait system, which is halfway between a class and an interface. It is far better than Java's decorator pattern.
The so-called "interface" is merely an unnamed base class, and what used to be the base class is acting as a trait (which I frankly did not know could be done). It is unclear to me whether a "with'd" class can take arguments (traits can't); I will try it and report back.
This question and its answer have stepped into one of Scala's coolest features. Read up on it and be in awe.

When should I (and should I not) use Scala's @inline annotation?

I believe I understand the basics of inline functions: instead of a function call resulting in parameters being placed on the stack and an invoke operation occurring, the definition of the function is copied at compile time to where the invocation was made, saving the invocation overhead at runtime.
So I want to know:
Does scalac use smarts to inline some functions (e.g. a private def) without hints from annotations?
How do I judge when it would be a good idea to hint to scalac that it should inline a function?
Can anyone share examples of functions or invocations that should or shouldn't be inlined?
Never @inline anything whose implementation might reasonably change and which is going to be a public part of a library.
When I say "implementation change", I mean that the logic actually might change. For example:
object TradeComparator extends java.lang.Comparator[Trade] {
  @inline def compare(t1: Trade, t2: Trade): Int = t1.time compare t2.time
}
Let's say the "natural comparison" then changed to be based on an atomic counter. You may find that an application ends up with two components, each built and inlined against a different version of the comparison code.
Personally, I use @inline for aliases:
class A(param: Param) {
  @inline def a = param.a
  def a2() = a * a
}
Now, I couldn't find a way to know whether it does anything (I tried to jad the generated .class files, but couldn't conclude anything).
My goal is to make explicit what I want the compiler to do, but to let it decide what's best, or simply do what it's capable of. If it doesn't do it, maybe a later compiler version will.