How do Frege classes work?

It seems that Frege's ideas about type-classes differ significantly from Haskell. In particular:
1. The syntax appears to be different, for no obvious reason.
2. Function types cannot have class instances. (Seems a rather odd rule...)
3. The language spec says something about implementing superclasses in a subclass instance declaration. (But not if you have diamond inheritance... it won't be an error, but it's not guaranteed to work somehow?)
4. Frege is less fussy about what an instance looks like. (Type aliases are allowed, type variables are not required to be distinct, etc.)
5. Methods can be declared as native, though it is not completely clear what the meaning of this is.
6. It appears that you can write type.method to access a method. Again, no indication as to what this means or why it's useful.
7. Subclass declarations can provide default implementations for superclass methods. (?)
In short, it would be useful if somebody who knows about this stuff could write an explanation of how this stuff works. It's listed in the language spec, but the descriptions are a little bit terse.
(Regarding the syntax: I think Haskell's instance syntax is more logical. "If X is an instance of Y and Z, then it is also an instance of Q in the following way..." Haskell's class syntax has always seemed a bit strange to me. If X implements Eq, that does not imply that it implements Ord, it implies that it could implement Ord if it wants to. I'm not sure what a better symbol would be though...)
Per Ingo's answer:
I'm assuming that providing a default implementation for a superclass method only works if you declare your instances "all at once"?
For example, suppose Foo is a superclass of Bar. Suppose each class has three methods (foo1, foo2, foo3, bar1, bar2, bar3), and Bar provides a default implementation for foo1. That should mean that
instance Bar FB where
    foo2 = ...
    foo3 = ...
    bar1 = ...
    bar2 = ...
    bar3 = ...
should work. But would this work:
instance Foo FB where
    foo2 = ...
    foo3 = ...

instance Bar FB where
    bar1 = ...
    bar2 = ...
    bar3 = ...
So if I declare a method as native in a class declaration, that just sets the default implementation for that method?
So if I do something like
class Foobar f where
    foo :: f -> Int
    native foo
    bar :: f -> String
    native bar
then that just means that if I write an empty instance declaration for some Java native class, then foo maps to object.foo() in Java?
In particular, if a class method is declared as native, I can still provide some other implementation for it if I choose to?
Every type [constructor] is a namespace. I get how that would be helpful for the infamous named fields problem. I'm not sure why you'd want to declare other things in the scope of this namespace...

You seem to have read the language spec very carefully. Great. But, no, type classes/instances do not differ substantially from Haskell 2010. Just a bit, and that bit is notational.
Your points:
ad 1. Yes. The rule is that the constraints, if any, are attached to the type and the class name follows the keyword. But this will change soon in favor of the Haskell syntax when multi param type classes are added to the language.
ad 2. Meanwhile, function types are fully supported. This will be included in the next release. The current release has only support for (a->b), though.
ad 3. Yes. Consider our categorical class hierarchy Functor -> Applicative -> Monad. You can just write the following instead of 3 separate instances:
instance Monad Foo where
    -- implementation of all methods due to Monad, Applicative, Functor
ad 4. Yes, currently. There will be changes with multi param type classes, however. The lang spec recommends staying with the Haskell 2010 rules.
ad 5. You'd need that if you model Java class hierarchies with type classes. native function declarations are nothing special for type classes/instances. Because you can have an annotation and a default implementation in a class (just like in Haskell 2010), you can have this in the form of a native declaration, which gives a) the type and b) the implementation (by referring to a Java method).
ad 6. It's orthogonality. Just as you can write M.foo where M is a module, you can write T.foo when T is a type (constructor), because both are namespaces. In addition, if you have a "record", you may need to write T.f x when Frege cannot infer the type of x.
foo x = x.a + x.b    -- this doesn't work, type of x is unknown

-- remedy 1: provide a type signature
foo :: Record -> Int    -- Record being some data type

-- remedy 2: access the field getter functions directly
foo x = Record.a x + Record.b x
ad 7. Yes, for example, Ord has a default implementation for (==) in terms of compare. Hence you can make an Ord instance of something without implementing (==).
Hope this helps. Generally, it must be said, the lang spec needs a) completion and b) updates. If only the day had 36 hours .....
The syntactic issue is also discussed here: https://groups.google.com/forum/?fromgroups#!topic/frege-programming-language/2mCNWMVg5eY
---- Second part ------------
Your example would not work, because if you define instance Foo FB, then this instance must hold on its own, irrespective of other instances and subclasses. The default foo1 method in Bar will be used only if no explicit Foo instance exists.
then that just means that if I write an empty instance declaration for some Java native class, then foo maps to object.foo() in Java?
Yes, but it depends on the native declaration; it doesn't have to be a Java instance method of that Java class, it could also be a static method or a method of another class, or just a member access, etc.
In particular, if a class method is declared as native, I can still provide some other implementation for it if I choose to?
Sure, just like with any other default class method. Say a default class method is implemented using pattern guards; that does not mean that you must use pattern guards for your own implementation.
Look,
native [pure] foo "javaspec" :: a -> b -> c
just means: please make me a Frege function foo with type a -> b -> c that happens to use javaspec for its implementation. (How exactly this works is supposed to be described in Chapter 6 of the language reference. It's not done yet. Sorry.)
For example:
native pure int2long "(long)" :: Int -> Long
The compiler will see that this is syntactically a cast operation, and when it sees:
... int2long val ...
it will generate java code like:
((long)(unbox(val)))
Apart from that, it will also make a wrapper, so that you can, for example:
map int2long [1,2,4]
The point is that, if I tell you: there is a function X.Y.z, you're not able to tell whether this is a native or a regular one without looking at the source code. Hence, native is the way to lift Java methods, operators and so forth to the Frege realm. Practically everything that is known as "primOp" in Haskell is just a native function in Frege. For example,
pure native + :: Int -> Int -> Int
(It's not always that easy, of course.)
Every type [constructor] is a namespace. I get how that would be helpful for the infamous named fields problem. I'm not sure why you'd want to declare other things in the scope of this namespace...
It gives you somewhat more control regarding the top namespace. Apart from that, you don't have to define other things there. I just did not see a reason to forbid it once I committed to this simple approach to tackle the record field problem.

Related

Overloading vs. Overriding in Julia

I am not familiar with Julia, but it appears to allow you to define functions multiple times with different signatures, such as this:
FK5Coords{e}(ra::T, dec::T) where {e,T<:AbstractFloat} = FK5Coords{e, T}(ra, dec)
FK5Coords{e}(ra::Real, dec::Real) where {e} =
    FK5Coords{e}(promote(float(ra), float(dec))...)
To me it looks like this allows you to call FK5Coords with two different signatures.
So I'm wondering (a) if that is true, if Julia allows overloading functions like this, and (b) if Julia allows something like super in a function, which seems like it would conflict with overloading. And (c), what an example snippet of Julia code looks like that shows (1) overloading in one example, and (2) overriding in the other.
The reason I'm asking is because I am wondering how Julia solves the problem of having both super and function overloading, because both require defining the function again and it seems you would have to flag it with some metadata or something to say "in this case I am overriding" or "in this case I am overloading".
Note: If that was not an example of overloading, then (from Wikipedia) this was what I was imagining Julia supported (along these lines):
// volume of a cylinder
double volume(const double r, const int h)
{
    return 3.1415926*r*r*static_cast<double>(h);
}

// volume of a cuboid
long volume(const long l, const int b, const int h)
{
    return l*b*h;
}
So I'm wondering (a) if that is true, if Julia allows overloading functions like this
Julia allows you to write different versions of the same function (different "methods" for the function) that differ in the type/number of arguments. That's pretty similar to overloading, except that overloading usually means the function to be called is decided based on the compile-time type of the arguments, whereas in Julia it's decided based on the run-time type of the arguments. This is commonly called dynamic dispatch. See this C++ example to see what overloading lacks and dispatch gives you.
(b) if Julia allows something like super in a function, which seems like it would conflict with overloading
The reason I'm asking is because I am wondering how Julia solves the problem of having both super and function overloading, because both require defining the function again and it seems you would have to flag it with some metadata or something to say "in this case I am overriding" or "in this case I am overloading".
I'm not sure why you think overloading would conflict with super. In C++, overriding involves having exactly the same argument numbers and types, whereas overloading requires that either the number or the type of the arguments differ. Compilers are smart enough to easily distinguish between those two cases, and AFAICT C++ could have a super method despite having both overloading and overriding, except that it also has multiple inheritance. I believe (with my limited C++ knowledge) that multiple inheritance, not overloading, is the reason C++ doesn't have a super call.
Anyway, if you peel back behind the Object-oriented curtain and look into method signatures, you'll see that all overriding is really a particular type of overloading: Dog::run(int dist, int dir) can override Animal::run(int dist, int dir) (assume Dog inherits from Animal), but that's equivalent to overloading a run(Animal a, int dist, int dir) function with a run(Dog d, int dist, int dir) definition. (If run was a virtual function, this would be dynamic dispatch instead of overloading, but that's a separate discussion.)
In Julia we do this explicitly, so the definitions would be run(d::Dog, dist::Int, dir::Int) and run(a::Animal, dist::Int, dir::Int). However, in Julia, you can only inherit from abstract types, so here the supertype Animal would be an abstract type, so you can't really call the second method with an Animal instance - the second method definition is really a shorthand way of saying "call this method for any instance of some concrete subtype of Animal, unless that subtype has its own separate method definition" (which Dog does, in this case). I'm not aware of any easy way of calling the second method run(Animal... from the first run(Dog..., which would be the equivalent of a super call.
(You can also 'override' a method from another module with import, but if it has completely the same parameters and parameter types, you'd probably be committing type piracy, which is usually a bad idea. I'm not aware of any way of getting back the original method after this type of overriding. "Overloading" (using dispatch) by defining and using your own types is much more common anyway.)
(c), what an example snippet of Julia code looks like that shows (1) overloading in one example, and (2) overriding in the other.
The first code snippet you posted is an example of using dispatch (which is what Julia uses instead of overloading). For another example, let's first define our base type and function:
abstract type Vehicle end

function move(v::Vehicle, dist::Float64)
    println("Moving by $dist meters")
end
Now we can create another method of this function for dispatch ("overload" it) this way:
# LightYears is not defined in the original answer; assume a simple wrapper type:
struct LightYears; value::Float64; end

function move(v::Vehicle, dist::LightYears)
    println("Blazing across $(dist.value) light years")
end
We can do an object-oriented style "override" too (though at the language level this is just seen as another method for dispatch):
struct Car <: Vehicle
    model::String
end

function move(c::Car, dist::Float64)
    println("$(c.model) is moving $dist meters")   # the field must be accessed via c
end
This is the equivalent of overriding Vehicle.move(float dist) in a derived class as Car.move(float dist).
And just for the heck of it, the volume function from the question:
# volume of a cylinder
volume(r::Float64, h::Int) = π*r*r*h

# volume of a cuboid
volume(l::Int, b::Int, h::Int) = l*b*h
Now the correct volume method to call will be decided based on the number (and type) of arguments passed (and the return type is automatically inferred by the compiler, Float64 for the first method and Int for the second one).

Scala Nothing datatype

I know Scala Nothing is the bottom type. When I see the API it extends from "Any" which is the top in the hierarchy.
Now, since Scala does not support multiple inheritance, how can we say that it is the bottom type? In other words, it does not directly inherit from all the classes or traits like Seq, List, String, Int and so on. If that is the case, how can we say that it is the bottom of all types?
What I meant is: if we are able to assign List[Nothing] (Nil) to List[String] because List is covariant in Scala, how is that possible when there is no direct relation between the Nothing and String types? We know Nothing is a bottom type, but I am having a little difficulty seeing the relation between String and Nothing, as I stated in the above example.
tl;dr summary: Nothing is a subtype of every type because the spec says so. It cannot be explained from within the language. Every language (or at least almost every language) has some things at the very core that cannot be explained from within the language, e.g. java.lang.Object having no superclass even though every class has a superclass, since even if we don't write an extends clause, the class will implicitly get a superclass. Or the "bootstrap paradox" in Ruby, Object being an instance of Class, but Class being a subclass of Object, and thus Object being an indirect instance of itself (and even more directly: Class being an instance of Class).
I know Scala Nothing is the bottom type. When I see the API it extends from "Any" which is the top in the hierarchy.
Now since Scala does not support multiple inheritance, how can we say that it is the bottom type.
There are two possible answers to this.
The simple and short answer is: because the spec says so. The spec says Nothing is a subtype of all types, so Nothing is the subtype of all types. How? We don't care. The spec says it is so, so that's what it is. Let the compiler designers worry about how to represent this fact within their compiler. Do you care how Any is able to have no superclass? Do you care how def is represented internally in the compiler?
The slightly longer answer is: Yes, it's true, Nothing inherits from Any and only from Any. But! Inheritance is not the same thing as subtyping. In Scala, inheritance and subtyping are closely tied together, but they are not the same thing. The fact that Nothing can only inherit from one class does not mean that it cannot be the subtype of more than one type. A type is not the same thing as a class.
In fact, to be very specific, the spec does not even say that Nothing is a subtype of all types. It only says that Nothing conforms to all types.
In other words it is not inheriting directly all the classes or traits like Seq, List, String, Int and so on. If that is the case how can we say that it is the bottom of all type ?
Again, we can say that, because the spec says we can say that.
How can we say that def defines a method? Because the spec says so. How can we say that a b c means the same thing as a.b(c) and a b_: c means the same thing as { val __some_unforgeable_id__ = a; c.b_:(__some_unforgeable_id__) }? Because the spec says so. How can we say that "" is a string and '' is a character? Because the spec says so.
What I meant is that if we are able to assign List[Nothing] (Nil) to List[String] as List is covariant in scala how it is possible because there is no direct correlation between Nothing and String type.
Yes, there is a direct correlation between the types Nothing and String. Nothing is a subtype of String because Nothing is a subtype of all types, including String.
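For illustration, a minimal sketch of what this lets the compiler accept (a hypothetical snippet, not from the original answer):

val nil: List[Nothing] = Nil
val xs: List[String] = nil   // compiles: List is covariant and Nothing <: String

def fail: Nothing = throw new Exception("boom")
val s: String = fail         // type-checks for the same reason (evaluating it throws, of course)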
As we know Nothing is a bottom type but I am having little difficulty in seeing the relation between String and Nothing as I stated in the above example.
The relation between String and Nothing is that Nothing is a subtype of String. Why? Because the spec says so.
The compiler knows Nothing is a subtype of String the same way it knows 1 is an instance of Int and has a + method, even though if you look at the source code of the Scala standard library, the Int class is actually abstract and all its methods have no implementation.
Someone, somewhere wrote some code within the compiler that knows how to handle adding two numbers, even though those numbers are actually represented as JVM primitives and don't even exist inside the Scala object system. The same way, someone, somewhere wrote some code within the compiler that knows that Nothing is a subtype of all types even though this fact is not represented (and is not even representable) in the source code of Nothing.
Now since Scala does not support multiple inheritance
Scala does support multiple inheritance, using trait mixin. This is currently not commutative, i.e. the type A with B is not identical to B with A (commutativity will come with Dotty), but it's still a form of multiple inheritance, and indeed one of Scala's strong points, as it solves the diamond problem through its linearisation rules.
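As a sketch of how the linearisation rules resolve a diamond (the trait names are made up for illustration):

trait A { def msg: String = "A" }
trait B extends A { override def msg: String = "B" + super.msg }
trait C extends A { override def msg: String = "C" + super.msg }
class D extends B with C

// the linearisation of D is D, C, B, A, so the super calls chain through it:
println((new D).msg)   // prints "CBA"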
By the way, Null is another bottom type, a subtype of all reference types, inherited from Java's null (Java could also be said to have a Nothing-like bottom type, because you can throw a runtime exception in any possible place).
I think you need to distinguish between class inheritance and type bounds. There is no contradiction in defining Nothing as a bottom type, although it does not "explicitly" inherit from any type you want, such as List. It's more like a capability, the capability to throw an exception.
if we are able to assign List[Nothing] (Nil) to List[String] as List is covariant in scala how it is possible because there is no direct correlation between Nothing and String type
Yes, the idea of the bottom type is that Nothing is also (among many other things) a sub-type of String. So you can write
def foo: String = throw new Exception("No")
This only works because Nothing (the type of throwing an exception) is more specific than the declared return type String.

Closures in Scala vs Closures in Java

Some time ago Oracle decided that adding closures to Java 8 would be a good idea. I wonder how the design problems are solved there, in comparison to Scala, which has had closures since day one.
Citing the Open Issues from javac.info:
1. Can Method Handles be used for Function Types?
It isn't obvious how to make that work. One problem is that Method Handles reify type parameters, but in a way that interferes with function subtyping.
2. Can we get rid of the explicit declaration of "throws" type parameters?
The idea would be to use disjunctive type inference whenever the declared bound is a checked exception type. This is not strictly backward compatible, but it's unlikely to break real existing code. We probably can't get rid of "throws" in the type argument, however, due to syntactic ambiguity.
3. Disallow #Shared on old-style loop index variables.
4. Handle interfaces like Comparator that define more than one method, all but one of which will be implemented by a method inherited from Object.
The definition of "interface with a single method" should count only methods that would not be implemented by a method in Object and should count multiple methods as one if implementing one of them would implement them all. Mainly, this requires a more precise specification of what it means for an interface to have only a single abstract method.
5. Specify mapping from function types to interfaces: names, parameters, etc.
We should fully specify the mapping from function types to system-generated interfaces precisely.
6. Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters. Similarly, the subtype relationships used by the closure conversion should be reflected as well.
7. Elided exception type parameters to help retrofit exception transparency.
Perhaps make elided exception type parameters mean the bound. This enables retrofitting existing generic interfaces that don't have a type parameter for the exception, such as java.util.concurrent.Callable, by adding a new generic exception parameter.
8. How are class literals for function types formed?
Is it #void().class? If so, how does it work if object types are erased? Is it #?(?).class?
9. The system class loader should dynamically generate function type interfaces.
The interfaces corresponding to function types should be generated on demand by the bootstrap class loader, so they can be shared among all user code. For the prototype, we may have javac generate these interfaces so prototype-generated code can run on stock (JDK5-6) VMs.
10. Must the evaluation of a lambda expression produce a fresh object each time?
Hopefully not. If a lambda captures no variables from an enclosing scope, for example, it can be allocated statically. Similarly, in other situations a lambda could be moved out of an inner loop if it doesn't capture any variables declared inside the loop. It would therefore be best if the specification promises nothing about the reference identity of the result of a lambda expression, so such optimizations can be done by the compiler.
As far as I understand, 2., 6. and 7. aren't a problem in Scala, because Scala doesn't use checked exceptions as a sort of "shadow type system" the way Java does.
What about the rest?
1) Can Method Handles be used for Function Types?
Scala targets JDK 5 and 6 which don't have method handles, so it hasn't tried to deal with that issue yet.
2) Can we get rid of the explicit declaration of "throws" type parameters?
Scala doesn't have checked exceptions.
3) Disallow #Shared on old-style loop index variables.
Scala doesn't have loop index variables. Still, the same idea can be expressed with a certain kind of while loop. Scala's semantics are pretty standard here: symbol bindings are captured, and if the symbol happens to map to a mutable reference cell, then on your own head be it.
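A short sketch of what "on your own head be it" means here (a hypothetical snippet, not from the original answer):

var i = 0
val readers = (1 to 3).map(_ => () => i)   // each closure captures the variable i itself
i = 42
readers.foreach(f => println(f()))         // prints 42 three times: all share one cell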
4) Handle interfaces like Comparator that define more than one method, all but one of which come from Object.
Scala users tend to use functions (or implicit functions) to coerce functions of the right type to an interface. e.g.
import java.util.Comparator

// `implicit` is optional here, depending on whether you want the conversion applied automatically
implicit def toComparator[A](f: (A, A) => Int) = new Comparator[A] {
    def compare(x: A, y: A) = f(x, y)
}
5) Specify mapping from function types to interfaces:
Scala's standard library includes FunctionN traits for 0 <= N <= 22, and the spec says that function literals create instances of those traits.
6) Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters.
Since Scala doesn't have checked exceptions, it can punt on this whole issue.
7) Elided exception type parameters to help retrofit exception transparency.
Same deal, no checked exceptions.
8) How are class literals for function types formed? Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?
classOf[A => B] //or, equivalently,
classOf[Function1[A,B]]
Type erasure is type erasure. The above literals produce scala.Function1 regardless of the choice for A and B. If you prefer, you can write
classOf[ _ => _ ] // or
classOf[Function1[ _,_ ]]
9) The system class loader should dynamically generate function type interfaces.
Scala arbitrarily limits the number of arguments to be at most 22 so that it doesn't have to generate the FunctionN classes dynamically.
10) Must the evaluation of a lambda expression produce a fresh object each time?
The Scala specification does not say that it must. But as of 2.8.1, the compiler does not optimize the case where a lambda does not capture anything from its environment. I haven't tested with 2.9.0 yet.
I'll address only number 4 here.
One of the things that distinguishes Java "closures" from closures found in other languages is that they can be used in place of an interface that does not describe a function -- for example, Runnable. This is what is meant by SAM, Single Abstract Method.
Java does this because these interfaces abound in the Java library, and they abound in the Java library because Java was created without function types or closures. In their absence, all code that needed inversion of control had to resort to SAM interfaces.
For example, Arrays.sort takes a Comparator object that will perform comparison between members of the array to be sorted. By contrast, Scala can sort a List[A] by receiving a function (A, A) => Int, which is easily passed through a closure. See note 1 at the end, however.
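To make the contrast concrete, a minimal sketch of the two styles (a hypothetical snippet, not from the original answer):

import java.util.{Arrays, Comparator}

// Java style: inversion of control through a SAM interface
val boxed: Array[Integer] = Array(3, 1, 2).map(Int.box)
Arrays.sort(boxed, new Comparator[Integer] {
    def compare(a: Integer, b: Integer): Int = a.compareTo(b)
})

// Scala style: just pass a closure
val sorted = List(3, 1, 2).sortWith(_ < _)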
So, because Scala's library was created for a language with function types and closures, there is no need to support such a thing as SAM closures in Scala.
Of course, there's a question of Scala/Java interoperability -- while Scala's library might not need something like SAM, Java library does. There are two ways that can be solved. First, because Scala supports closures and function types, it is very easy to create helper methods. For example:
def runnable(f: () => Unit) = new Runnable {
    def run() = f()
}

runnable { () => println("Hello") } // creates a Runnable
Actually, this particular example can be made even shorter by use of Scala's by-name parameters, as sketched below, but that's beside the point. Anyway, this is something that, arguably, Java could have done instead of what it is going to do. Given the prevalence of SAM interfaces, it is not all that surprising.
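For instance, a by-name variant might look like this (a sketch, not code from the original answer):

def runnable(body: => Unit) = new Runnable {
    def run() = body
}

runnable(println("Hello")) // the argument is not evaluated until run() is called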
The other way Scala handles this is through implicit conversions. By just prepending implicit to the runnable method above, one creates a method that gets automatically (note 2) applied whenever a Runnable is required but a function () => Unit is provided.
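A sketch of the implicit version in action (the name asRunnable is made up for illustration):

import scala.language.implicitConversions

implicit def asRunnable(f: () => Unit): Runnable = new Runnable {
    def run() = f()
}

val f = () => println("Hello")
val r: Runnable = f // the conversion is applied automatically
r.run()             // prints "Hello"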
Implicits are a very distinctive feature, however, and still controversial to some extent.
Note 1: Actually, this particular example was chosen with some malice... Comparator has two abstract methods instead of one, which is the whole problem with it. Since one of its methods can be implemented in terms of the other, I think they'll just "subtract" defender methods from the abstract list.
And, on the Scala side, even though there's a sort method that uses (A, A) => Boolean, not (A, A) => Int, the standard sorting method calls for an Ordering object, which is quite similar to Java's Comparator! In Scala's case, though, Ordering performs the role of a type class.
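To illustrate Ordering acting as a type class (Point is a made-up example type):

case class Point(x: Int)

// a type class instance, picked up implicitly by sorted
implicit val pointOrdering: Ordering[Point] = Ordering.by(_.x)

List(Point(2), Point(1)).sorted // List(Point(1), Point(2))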
Note 2: Implicits are automatically applied, once they have been imported into scope.

In Scala, what is an "early initializer"?

In Martin Odersky's recent post about levels of programmer ability in Scala, in the Expert library designer section, he includes the term "early initializers".
These are not mentioned in Programming in Scala. What are they?
Early initializers are part of the constructor of a subclass that is intended to run before its superclass. For example:
abstract class X {
    val name: String
    val size = name.size
}

class Y extends {
    val name = "class Y"
} with X
If the code was written instead as
class Z extends X {
    val name = "class Z"
}
then a null pointer exception would occur when Z got initialized, because size is initialized before name in the normal ordering of initialization (superclass before subclass).
As far as I can tell, the motivation (as given in the link above) is:
"Naturally when a val is overridden, it is not initialized more than once. So though x2 in the above example is seemingly defined at every point, this is not the case: an overridden val will appear to be null during the construction of superclasses, as will an abstract val."
I don't see why this is natural at all. It is completely possible that the r.h.s. of an assignment might have a side effect. Note that such code structure is completely impossible in either C++ or Java (and I will guess Smalltalk, although I can't speak for that language). In fact you have to make such dual assignments implicit...ticilpmi...EXplicit in those languages via constructors. In the light of the r.h.s. side effect uncertainty, it really doesn't seem like much of a motivation at all: the ability to sidestep superclass side effects (thereby voiding superclass invariants) via ASSIGNMENT? Ick!
Are there other "killer" motivations for allowing such unsafe code structure? Object-oriented languages have done without such a mechanism for about 40 years (30-odd years, if you count from the creation of the language). Why include it now?
It...just...seems...dangerous.
On second thought, a year later...
This is just cake. Literally.
Not an early ANYTHING. Just cake (mixins).
Cake is a term/pattern coined by The Grand Pooh-bah himself, one that employs Scala's trait system, which is halfway between a class and an interface. It is far better than Java's decoration pattern.
The so-called "interface" is merely an unnamed base class, and what used to be the base class is acting as a trait (which I frankly did not know could be done). It is unclear to me if a "with'd" class can take arguments (traits can't); I will try it and report back.
This question and its answer have stepped into one of Scala's coolest features. Read up on it and be in awe.

Practical uses for Structural Types?

Structural types are one of those "wow, cool!" features of Scala. However, for every example I can think of where they might help, implicit conversions and dynamic mixin composition often seem like better matches. What are some common uses for them and/or advice on when they are appropriate?
Aside from the rare case of classes which provide the same method but aren't related nor do implement a common interface (for example, the close() method -- Source, for one, does not extend Closeable), I find no use for structural types with their present restriction. If they were more flexible, however, I could well write something like this:
def add[T: { def +(x: T): T }](a: T, b: T) = a + b
which would neatly handle numeric types. Every time I think structural types might help me with something, I hit that particular wall.
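Until structural types get that flexible, the same thing is expressible today with a type class instead, e.g. the standard Numeric (a sketch, not part of the original answer):

def add[T](a: T, b: T)(implicit num: Numeric[T]): T = num.plus(a, b)

add(1, 2)     // 3
add(1.5, 2.5) // 4.0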
EDIT
However unuseful I find structural types myself, the compiler, however, uses it to handle anonymous classes. For example:
implicit def toTimes(count: Int) = new {
    def times(block: => Unit) = 1 to count foreach { _ => block }
}

5 times { println("This uses structural types!") }
The object resulting from (the implicit) toTimes(5) is of type { def times(block: => Unit) }, ie, a structural type.
I don't know if Scala does that for every anonymous class -- perhaps it does. Alas, that is one reason why doing pimp my library that way is slow, as structural types use reflection to invoke the methods. Instead of an anonymous class, one should use a real class to avoid performance issues in pimp my library.
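For instance, the named-class variant of the example above might look like this (a sketch, not from the original answer):

class Times(count: Int) {
    def times(block: => Unit) = 1 to count foreach { _ => block }
}

implicit def toTimes(count: Int) = new Times(count) // a plain class, so no reflection is needed

5 times { println("No structural types involved!") }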
Structural types are very cool constructs in Scala. I've used them to represent multiple unrelated types that share an attribute upon which I want to perform a common operation without a new level of abstraction.
I have heard one argument against structural types from people who are strict about an application's architecture. They feel it is dangerous to apply a common operation across types without an associated trait or parent type, because you then leave the rule of what type the method should apply to open-ended. Daniel's close() example is spot on, but what if you have another type that requires different behavior? Someone who doesn't understand the architecture might use it and cause problems in the system.
I think structural types are one of these features that you don't need that often, but when you need it, it helps you a lot. One area where structural types really shine is "retrofitting", e.g. when you need to glue together several pieces of software you have no source code for and which were not intended for reuse. But if you find yourself using structural types a lot, you're probably doing it wrong.
[Edit]
Of course implicits are often the way to go, but there are cases when you can't: imagine you have a mutable object you can modify with methods, but which hides important parts of its state, a kind of "black box". Then you have to work somehow with this object.
Another use case for structural types is when code relies on naming conventions without a common interface, e.g. in machine-generated code. In the JDK we can find such things as well, like the StringBuffer / StringBuilder pair (where the common interfaces Appendable and CharSequence are way too general).
Structural types give some benefits of dynamic languages to a statically linked language, specifically loose coupling. If you want a method foo() to call instance methods of class Bar, you don't need an interface or base class that is common to both foo() and Bar. You can define a structural type that foo() accepts, and of whose existence Bar has no clue. As long as Bar contains methods that match the structural type's signatures, foo() will be able to call them.
It's great because you can put foo() and Bar in distinct, completely unrelated libraries, that is, with no common referenced contract. This reduces linkage requirements and thus further contributes to loose coupling.
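A small sketch of this decoupling (foo and Bar are the hypothetical names from the paragraph above):

import scala.language.reflectiveCalls

class Bar {
    def greet(name: String): String = "Hello, " + name
}

// foo knows nothing about Bar, only the shape of what it needs
def foo(x: { def greet(name: String): String }): Unit =
    println(x.greet("world"))

foo(new Bar) // works: Bar structurally conforms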
In some situations, a structural type can be used as an alternative to the Adapter pattern, because it offers the following advantages:
Object identity is preserved (there is no separate object for the adapter instance, at least at the semantic level).
You don't need to instantiate an adapter - just pass a Bar instance to foo().
You don't need to implement wrapper methods - just declare the required signatures in the structural type.
The structural type doesn't need to know the actual instance class or interface, while the adapter must know Bar so it can call its methods. This way, a single structural type can be used for many actual types, whereas with adapter it's necessary to code multiple classes - one for each actual type.
The only drawback of structural types compared to adapters is that a structural type can't be used to translate method signatures. So, when signatures don't match, you must use adapters that have some translation logic. I particularly don't like to code "intelligent" adapters, because many times they are more than just adapters and cause increased complexity. If a class client needs some additional method, I prefer to simply add such a method, since it usually doesn't affect footprint.