I want to declare a class like this:
class StringSetCreate(val s: String*) {
  // ...
}
and call that in Java. The problem is that the constructor is of type
public StringSetCreate(scala.collection.Seq)
So in Java, you need to fiddle around with Scala sequences, which is ugly.
I know that there is the @annotation.varargs annotation which, if used on a method, generates a second method that takes Java varargs.
This annotation does not work on constructors, or at least I don't know where to put it. I found the Scala issue SI-8383, which reports this problem. As far as I understand, there is currently no solution. Is this right? Are there any workarounds? Can I somehow define that second constructor by hand?
The bug is already filed as https://issues.scala-lang.org/browse/SI-8383 .
For a workaround I'd recommend using a factory method (perhaps on the companion object), where @varargs should work:
import scala.annotation.varargs

object StringSetCreate {
  @varargs def build(s: String*) = new StringSetCreate(s: _*)
}
Then in Java you call StringSetCreate.build("a", "b") rather than using new.
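If you specifically want a second constructor rather than a factory, a possible hand-written workaround (just a sketch, not taken from the issue discussion) is an auxiliary constructor that accepts an array; Java callers then pass an explicit array instead of varargs:

class StringSetCreate(val s: String*) {
  // Auxiliary constructor: Java can call
  //   new StringSetCreate(new String[] {"a", "b"})
  // without touching scala.collection.Seq. Note this is not true Java
  // varargs; the caller has to build the array explicitly.
  def this(s: Array[String]) = this(s: _*)
}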
Imagine this:
val myObject = if (someCondition) {
  new Whatever with Trait1
} else if (otherCondition) {
  new Whatever with Trait2 with Trait3 with Trait4
} else {
  new Whatever with Trait5
}
Is the myObject object "composed" at runtime, or is the scala compiler smart enough to generate the appropriate code at compile time? What kind of performance impact will it have on the code if you have multiple places that are applying traits like in the above code?
It's composed at compile time.
The traits will be added as interfaces to the resulting type, and any concrete methods from those traits will (usually) be copied to the class in their entirety.
Occasionally, the compiler may have to provide concrete implementations via forwarders to static methods, but this isn't usually the case.
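As a quick illustration (the trait and class names below are made up), the mixed-in trait ends up as an ordinary JVM interface on the compiler-generated anonymous class:

trait Trait1 { def hello: String = "hi" }
class Whatever

object MixinDemo extends App {
  val myObject = new Whatever with Trait1

  // The anonymous class generated at compile time implements Trait1
  // directly, so this is a plain JVM instanceof check:
  println(myObject.isInstanceOf[Trait1]) // true

  // Prints the directly implemented interfaces, including Trait1:
  println(myObject.getClass.getInterfaces.map(_.getSimpleName).mkString(", "))
}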
Scala will create three anonymous classes, one for each branch.
Note: these classes are named by the order in which they appear in their enclosing scope, so the three above would be OuterClass$$anon$1 through OuterClass$$anon$3. Avoid relying on these anonymous classes in any long-term Java serialization, as that locks down the order of anonymous classes in your code.
Other than that, it's an awesome convenience feature!
Is it possible to create an AOP-like interceptor using Scala's new Dynamic type feature? For example: would it be possible to create a generic stopwatch interceptor that could be mixed in with arbitrary types to profile my code? Or would I still have to use AspectJ?
I'm pretty sure Dynamic is only used when the object you're selecting on doesn't already have what you're selecting:
From the nightly scaladoc:
Instances x of this trait allow calls x.meth(args) for arbitrary method names meth and argument lists args. If a call is not natively supported by x, it is rewritten to x.invokeDynamic("meth", args)
Note that since the documentation was written, the method has been renamed applyDynamic.
No.
In order for a dynamic object to be supplied as a parameter, it'll need to have the expected type - which means inheriting from the class you want to proxy, or from the appropriate superclass / interface.
As soon as you do this, it'll have the relevant methods statically provided, so applyDynamic would never be considered.
I think your odds are bad. Scala will call applyDynamic only if there is no static match on the method call:
class Slow {
  def doStuff = { /* slow stuff */ }
}
var slow = new Slow with DynamicTimer
slow.doStuff
In the example above, scalac won't call applyDynamic because it statically resolved your call to doStuff. It will only fall through to applyDynamic if the method you are calling matches none of the names of methods on the type.
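A small sketch of that behaviour (the names here are invented): applyDynamic is only reached for selections that don't match any statically known member.

import scala.language.dynamics

class Timed extends Dynamic {
  def applyDynamic(name: String)(args: Any*): Unit =
    println(s"intercepted call to '$name'")

  def doStuff(): Unit = println("statically resolved, not intercepted")
}

object DynamicDemo extends App {
  val t = new Timed
  t.doStuff()      // matches a real member, so applyDynamic never runs
  t.profileMe(42)  // no such member: rewritten to t.applyDynamic("profileMe")(42)
}

So a mixed-in stopwatch trait would never see calls to methods the class already defines, which is exactly the AOP use case.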
class Foo {
  @SomeAnnotation
  var bar: String = _
}
@SomeAnnotation is a Java annotation with runtime retention and a METHOD target. The code compiles, but at runtime the bar() and bar_=() methods that the compiler generates are not annotated.
Assuming this is not a bug, is there a clean way of annotating the generated getter method without needing to def the method explicitly?
This mailing list post might be of use:
http://old.nabble.com/-scala--field-annotations,-getters-setters-and-BeanProperty-td24970781.html
Yes, you need to use the meta-annotations in scala.annotation.target. See the documentation in https://lampsvn.epfl.ch/trac/scala/browser/scala/trunk/src/library/scala/annotation/target/getter.scala
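In current Scala versions the meta-annotations live in scala.annotation.meta (scala.annotation.target is the older location). A minimal sketch, reusing the question's SomeAnnotation, that routes the annotation to the generated getter:

import scala.annotation.meta.getter

class Foo {
  // SomeAnnotation is the question's own Java annotation; @getter tells
  // the compiler to place it on the generated bar() method instead of
  // the underlying field.
  @(SomeAnnotation @getter)
  var bar: String = _
}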
Can I specify interfaces when I declare a member?
After thinking about this question for a while, it occurred to me that a static-duck-typed language might actually work. Why can't predefined classes be bound to an interface at compile time? Example:
public interface IMyInterface
{
    public void MyMethod();
}

public class MyClass //Does not explicitly implement IMyInterface
{
    public void MyMethod() //But contains a compatible method definition
    {
        Console.WriteLine("Hello, world!");
    }
}

...

public void CallMyMethod(IMyInterface m)
{
    m.MyMethod();
}

...

MyClass obj = new MyClass();
CallMyMethod(obj); // Automatically recognize that MyClass "fits"
                   // IMyInterface, and force a type-cast.
Do you know of any languages that support such a feature? Would it be helpful in Java or C#? Is it fundamentally flawed in some way? I understand you could subclass MyClass and implement the interface or use the Adapter design pattern to accomplish the same thing, but those approaches just seem like unnecessary boilerplate code.
A brand new answer to this question: Go has exactly this feature. I think it's really cool and clever (though I'll be interested to see how it plays out in real life), and kudos for thinking of it.
As documented in the official documentation (as part of the Tour of Go, with example code):
Interfaces are implemented implicitly
A type implements an interface by implementing its methods. There is
no explicit declaration of intent, no "implements" keyword.
Implicit interfaces decouple the definition of an interface from its
implementation, which could then appear in any package without
prearrangement.
How about using templates in C++?
#include <iostream>

class IMyInterface // Inheritance from this is optional
{
public:
    virtual void MyMethod() = 0;
};

class MyClass // Does not explicitly implement IMyInterface
{
public:
    void MyMethod() // But contains a compatible method definition
    {
        std::cout << "Hello, world!" "\n";
    }
};

template<typename MyInterface>
void CallMyMethod(MyInterface& m)
{
    m.MyMethod(); // instantiation succeeds iff MyInterface has MyMethod
}

int main()
{
    MyClass obj;
    CallMyMethod(obj); // Automatically generate code with MyClass as
                       // MyInterface
}
I haven't actually compiled this code, but I believe it's workable and a pretty trivial C++-ization of the original proposed (but nonworking) code.
Statically-typed languages, by definition, check types at compile time, not run time. One of the obvious problems with the system described above is that the compiler is going to check types when the program is compiled, not at run time.
Now, you could build more intelligence into the compiler so it could derive types, rather than having the programmer explicitly declare them; the compiler might be able to see that MyClass implements a MyMethod() method and handle this case accordingly, without the need to explicitly declare interfaces (as you suggest). Such a compiler could use a type-inference algorithm such as Hindley-Milner.
Of course, some statically typed languages like Haskell already do something similar to what you suggest; the Haskell compiler is able to infer types (most of the time) without the need to explicitly declare them. But obviously, Java/C# don't have this ability.
I don't see the point. Why not be explicit that the class implements the interface and have done with it? Implementing the interface is what tells other programmers that this class is supposed to behave in the way that interface defines. Simply having the same name and signature on a method conveys no guarantees that the intent of the designer was to perform similar actions with the method. That may be, but why leave it up for interpretation (and misuse)?
The reason you can "get away" with this successfully in dynamic languages has more to do with TDD than with the language itself. In my opinion, if the language offers the facility to give this sort of guidance to others who use/view the code, you should use it. It actually improves clarity and is worth the few extra characters. In the case where you don't have access to do this, then an Adapter serves the same purpose of explicitly declaring how the interface relates to the other class.
F# supports static duck typing, though with a catch: you have to use member constraints. Details are available in this blog entry.
Example from the cited blog:
let inline speak (a: ^a) =
    let x = (^a : (member speak: unit -> string) (a))
    printfn "It said: %s" x
    let y = (^a : (member talk: unit -> string) (a))
    printfn "Then it said %s" y

type duck() =
    member x.speak() = "quack"
    member x.talk() = "quackity quack"

type dog() =
    member x.speak() = "woof"
    member x.talk() = "arrrr"

let x = new duck()
let y = new dog()

speak x
speak y
TypeScript!
Well, ok... So it's a JavaScript superset and maybe does not constitute a "language", but this kind of static duck-typing is vital in TypeScript.
Most of the languages in the ML family support structural types with inference and constrained type schemes, which is the geeky language-designer terminology for what you most likely mean by "static duck-typing" in the original question.
The more popular languages in this family that spring to mind include: Haskell, Objective Caml, F# and Scala. The one that most closely matches your example, of course, would be Objective Caml. Here's a translation of your example:
open Printf

class type iMyInterface = object
  method myMethod: unit
end

class myClass = object
  method myMethod = printf "Hello, world!"
end

let callMyMethod: #iMyInterface -> unit = fun m -> m#myMethod

let myClass = new myClass
callMyMethod myClass
Note: some of the names you used have to be changed to comply with OCaml's notion of identifier case semantics, but otherwise, this is a pretty straightforward translation.
Also, worth noting, neither the type annotation in the callMyMethod function nor the definition of the iMyInterface class type is strictly necessary. Objective Caml can infer everything in your example without any type declarations at all.
Crystal is a statically duck-typed language. Example:
def add(x, y)
  x + y
end

add(true, false)
The call to add causes this compilation error:
Error in foo.cr:6: instantiating 'add(Bool, Bool)'
add(true, false)
^~~
in foo.cr:2: undefined method '+' for Bool
x + y
^
A pre-release design for Visual Basic 9 had support for static duck typing using dynamic interfaces, but the feature was cut in order to ship on time.
Boo definitely is a static duck-typed language: http://boo.codehaus.org/Duck+Typing
An excerpt:
Boo is a statically typed language, like Java or C#. This means your boo applications will run about as fast as those coded in other statically typed languages for .NET or Mono. But using a statically typed language sometimes constrains you to an inflexible and verbose coding style, with the sometimes necessary type declarations (like "x as int", but this is not often necessary due to boo's Type Inference) and sometimes necessary type casts (see Casting Types). Boo's support for Type Inference and eventually generics help here, but...

Sometimes it is appropriate to give up the safety net provided by static typing. Maybe you just want to explore an API without worrying too much about method signatures or maybe you're creating code that talks to external components such as COM objects. Either way the choice should be yours not mine.

Along with the normal types like object, int, string... boo has a special type called "duck". The term is inspired by the ruby programming language's duck typing feature ("If it walks like a duck and quacks like a duck, it must be a duck").
New versions of C++ move in the direction of static duck typing. You can some day (today?) write something like this:
auto plus(auto x, auto y) {
    return x + y;
}
and it would fail to compile if there's no matching function call for x+y.
As for your criticism:
A new "CallMyMethod" is created for each different type you pass to it, so it's not really type inference.
But it IS type inference (you can say foo(bar) where foo is a templated function), and has the same effect, except it's more time-efficient and takes more space in the compiled code.
Otherwise, you would have to look up the method during runtime. You'd have to find a name, then check that the name has a method with the right parameters.
Or you would have to store all that information about matching interfaces, and look into every class that matches an interface, then automatically add that interface.
In either case, that allows you to implicitly and accidentally break the class hierarchy, which is bad for a new feature because it goes against what C#/Java programmers are used to. With C++ templates, you already know you're in a minefield (and C++ is also adding "concepts" to allow restrictions on template parameters).
Structural types in Scala do something like this.
See Statically Checked “Duck Typing” in Scala
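For example, a Scala 2 sketch of the question's scenario using a structural type (member access through a structural type is implemented with reflection at runtime, hence the feature import):

import scala.language.reflectiveCalls

object StructuralDemo extends App {
  // Any value with a matching myMethod conforms; no "implements" anywhere.
  type MyInterface = { def myMethod(): Unit }

  class MyClass { // does not declare any interface
    def myMethod(): Unit = println("Hello, world!")
  }

  def callMyMethod(m: MyInterface): Unit = m.myMethod()

  callMyMethod(new MyClass) // compiles: MyClass structurally conforms
}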
D (http://dlang.org) is a statically compiled language and provides duck-typing via wrap() and unwrap() (http://dlang.org/phobos-prerelease/std_typecons.html#.unwrap).
Sounds like Mixins or Traits:
http://en.wikipedia.org/wiki/Mixin
http://www.iam.unibe.ch/~scg/Archive/Papers/Scha03aTraits.pdf
The latest version of my programming language, Heron, supports something similar through a structural-subtyping coercion operator called as. So instead of:
MyClass obj = new MyClass();
CallMyMethod(obj);
You would write:
MyClass obj = new MyClass();
CallMyMethod(obj as IMyInterface);
Just like in your example, in this case MyClass does not have to explicitly implement IMyInterface, but if it did the cast could happen implicitly and the as operator could be omitted.
I wrote a bit more about the technique which I call explicit structural sub-typing in this article.