Overloading vs. Overriding in Julia

I am not familiar with Julia, but I have noticed that it appears to allow you to define functions multiple times with different signatures, such as this:
FK5Coords{e}(ra::T, dec::T) where {e,T<:AbstractFloat} = FK5Coords{e, T}(ra, dec)
FK5Coords{e}(ra::Real, dec::Real) where {e} =
    FK5Coords{e}(promote(float(ra), float(dec))...)
To me it looks like this allows you to call FK5Coords with two different signatures.
So I'm wondering (a) if that is true, if Julia allows overloading functions like this, and (b) if Julia allows something like super in a function, which seems like it would conflict with overloading. And (c), what an example snippet of Julia code looks like that shows (1) overloading in one example, and (2) overriding in the other.
The reason I'm asking is because I am wondering how Julia solves the problem of having both super and function overloading, because both require defining the function again and it seems you would have to flag it with some metadata or something to say "in this case I am overriding" or "in this case I am overloading".
Note: If that was not an example of overloading, then (from Wikipedia) this was what I was imagining Julia supported (along these lines):
// volume of a cylinder
double volume(const double r, const int h)
{
    return 3.1415926 * r * r * static_cast<double>(h);
}

// volume of a cuboid
long volume(const long l, const int b, const int h)
{
    return l * b * h;
}

So I'm wondering (a) if that is true, if Julia allows overloading functions like this
Julia allows you to write different versions of the same function (different "methods" for the function) that differ in the type/number of arguments. That's pretty similar to overloading, except that overloading usually means the function to be called is decided based on the compile-time type of the arguments, whereas in Julia it's decided based on the run-time type of the arguments. This is commonly called dynamic dispatch. See this C++ example for what overloading lacks and dispatch gives you.
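To see the run-time aspect concretely, here is a minimal sketch (the describe function is just an illustration):

describe(x::Integer) = "an integer"
describe(x::AbstractFloat) = "a float"

# The container's element type is Any, so nothing useful is known at
# compile time; the method is still chosen by each element's run-time type:
for x in Any[1, 2.5]
    println(describe(x))   # "an integer", then "a float"
end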
(b) if Julia allows something like super in a function, which seems like it would conflict with overloading
The reason I'm asking is because I am wondering how Julia solves the problem of having both super and function overloading, because both require defining the function again and it seems you would have to flag it with some metadata or something to say "in this case I am overriding" or "in this case I am overloading".
I'm not sure why you think overloading would conflict with super. In C++, overriding requires exactly the same number and types of arguments, whereas overloading requires that either the number or the type of the arguments differ. Compilers easily distinguish between those two cases, and AFAICT C++ could have a super call despite having both overloading and overriding; what it also has is multiple inheritance. I believe (with my limited C++ knowledge) that multiple inheritance, not overloading, is the reason C++ has no super call.
Anyway, if you peel back the object-oriented curtain and look at method signatures, you'll see that all overriding is really a particular kind of overloading: Dog::run(int dist, int dir) can override Animal::run(int dist, int dir) (assume Dog inherits from Animal), but that's equivalent to overloading a run(Animal a, int dist, int dir) function with a run(Dog d, int dist, int dir) definition. (If run were a virtual function, this would be dynamic dispatch instead of overloading, but that's a separate discussion.)
In Julia we do this explicitly, so the definitions would be run(d::Dog, dist::Int, dir::Int) and run(a::Animal, dist::Int, dir::Int). However, in Julia you can only inherit from abstract types, so the supertype Animal here would be abstract, and you can't construct an Animal instance to call the second method with - the second definition is really a shorthand for "call this method for any instance of a concrete subtype of Animal, unless that subtype has its own method definition" (which Dog does, in this case). The closest equivalent of a super call is Julia's built-in invoke function, which calls the method matching a signature you specify rather than the most specific applicable one.
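Here is a minimal sketch of that (using move instead of run, since run is already exported from Base and would need an explicit import to extend):

abstract type Animal end
struct Dog <: Animal end

move(a::Animal, dist::Int) = println("Animal moving $dist")

function move(d::Dog, dist::Int)
    println("Dog moving $dist")
    # invoke skips this more specific method and calls the one matching
    # the signature we give it - the closest thing to a super call:
    invoke(move, Tuple{Animal,Int}, d, dist)
end

move(Dog(), 5)   # prints "Dog moving 5", then "Animal moving 5"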
(You can also 'override' a method from another module with import, but if it has exactly the same argument types, you'd probably be committing type piracy, which is usually a bad idea. I'm not aware of an easy way to get the original method back after this kind of overriding. "Overloading" (using dispatch) by defining and using your own types is much more common anyway.)
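As a rough sketch of that distinction (the Money type here is hypothetical):

struct Money
    cents::Int
end

# Fine: adding a method to Base.:+ for a type you own
Base.:+(a::Money, b::Money) = Money(a.cents + b.cents)

# Type piracy: redefining behavior for types you don't own, e.g.
# Base.:+(a::Int, b::Int) = 0   # don't - this would break unrelated code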
(c), what an example snippet of Julia code looks like that shows (1) overloading in one example, and (2) overriding in the other.
The first code snippet you posted is an example of using dispatch (which is what Julia uses instead of overloading). For another example, let's first define our base type and function:
abstract type Vehicle end
function move(v::Vehicle, dist::Float64)
    println("Moving by $dist meters")
end
Now we can create another method of this function for dispatch ("overload" it) this way, defining a small LightYears type to dispatch on:
struct LightYears value::Float64 end

function move(v::Vehicle, dist::LightYears)
    println("Blazing across $(dist.value) light years")
end
We can do an object-oriented style "override" too (though at the language level this is just seen as another method for dispatch):
struct Car <: Vehicle
    model::String
end
function move(c::Car, dist::Float64)
    println("$(c.model) is moving $dist meters")
end
This is the equivalent of overriding Vehicle.move(float dist) in a derived class as Car.move(float dist).
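Dispatch now picks the most specific method for the run-time types:

move(Car("Tesla"), 10.0)              # "Tesla is moving 10.0 meters"
move(Car("Tesla"), LightYears(4.2))   # falls back to the Vehicle method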
And just for the heck of it, the volume function from the question:
# volume of a cylinder
volume(r::Float64, h::Int) = π*r*r*h
volume(l::Int, b::Int, h::Int) = l*b*h
Now the correct volume method to call will be decided based on the number (and type) of arguments passed (and the return type is automatically inferred by the compiler, Float64 for the first method and Int for the second one).
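For example:

volume(2.0, 3)    # cylinder method: ≈ 37.699 (a Float64)
volume(2, 3, 4)   # cuboid method: 24 (an Int)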

Related

How do Frege classes work?

It seems that Frege's ideas about type-classes differ significantly from Haskell. In particular:
The syntax appears to be different, for no obvious reason.
Function types cannot have class instances. (Seems a rather odd rule...)
The language spec says something about implementing superclasses in a subclass instance declaration. (But not if you have diamond inheritance... it won't be an error, but it's not guaranteed to work somehow?)
Frege is less fussy about what an instance looks like. (Type aliases are allowed, type variables are not required to be distinct, etc.)
Methods can be declared as native, though it is not completely clear what the meaning of this is.
It appears that you can write type.method to access a method. Again, no indication as to what this means or why it's useful.
Subclass declarations can provide default implementations for superclass methods. (?)
In short, it would be useful if somebody who knows about this stuff could write an explanation of how this stuff works. It's listed in the language spec, but the descriptions are a little bit terse.
(Regarding the syntax: I think Haskell's instance syntax is more logical. "If X is an instance of Y and Z, then it is also an instance of Q in the following way..." Haskell's class syntax has always seemed a bit strange to me. If X implements Eq, that does not imply that it implements Ord, it implies that it could implement Ord if it wants to. I'm not sure what a better symbol would be though...)
Per Ingo's answer:
I'm assuming that providing a default implementation for a superclass method only works if you declare your instances "all at once"?
For example, suppose Foo is a superclass of Bar. Suppose each class has three methods (foo1, foo2, foo3, bar1, bar2, bar3), and Bar provides a default implementation for foo1. That should mean that
instance Bar FB where
    foo2 = ...
    foo3 = ...
    bar1 = ...
    bar2 = ...
    bar3 = ...
should work. But would this work:
instance Foo FB where
    foo2 = ...
    foo3 = ...
instance Bar FB where
    bar1 = ...
    bar2 = ...
    bar3 = ...
So if I declare a method as native in a class declaration, that just sets the default implementation for that method?
So if I do something like
class Foobar f where
    foo :: f -> Int
    native foo
    bar :: f -> String
    native bar
then that just means that if I write an empty instance declaration for some Java native class, then foo maps to object.foo() in Java?
In particular, if a class method is declared as native, I can still provide some other implementation for it if I choose to?
Every type [constructor] is a namespace. I get how that would be helpful for the infamous named fields problem. I'm not sure why you'd want to declare other things in the scope of this namespace...
You seem to have read the language spec very carefully. Great. But, no, type classes/instances do not differ substantially from Haskell 2010. Just a bit, and that bit is notational.
Your points:
ad 1. Yes. The rule is that the constraints, if any, are attached to the type and the class name follows the keyword. But this will change soon in favor of the Haskell syntax when multi param type classes are added to the language.
ad 2. Meanwhile, function types are fully supported. This will be included in the next release. The current release has only support for (a->b), though.
ad 3. Yes. Consider our categorical class hierarchy Functor -> Applicative -> Monad. Instead of 3 separate instances, you can just write:
instance Monad Foo where
    -- implementation of all methods that are due Monad, Applicative, Functor
ad 4. Yes, currently. There will be changes with multi param type classes, however. The lang spec recommends to stay with the Haskell 2010 rules.
ad 5. You'd need that if you model Java class hierarchies with type classes. native function declarations are nothing special for type classes/instances. Because you can have an annotation and a default implementation in a class (just as in Haskell 2010), you can have this in the form of a native declaration, which gives a) the type and b) the implementation (by referring to a Java method).
ad 6. It's orthogonality. Just as you can write M.foo where M is a module, you can write T.foo when T is a type (constructor), because both are namespaces. In addition, if you have a "record", you may need to write T.f x when Frege cannot infer the type of x.
foo x = x.a + x.b -- this doesn't work, type of x is unknown
-- remedy 1: provide a type signature
foo :: Record -> Int -- Record being some data type
-- remedy 2: access the field getter functions directly
foo x = Record.a x + Record.b x
ad 7. Yes, for example, Ord has a default implementation for (==) in terms of compare. Hence you can make an Ord instance of something without implementing (==).
Hope this helps. Generally, it must be said, the lang spec needs a) completion and b) updates. If only the day had 36 hours .....
The syntactic issue is also discussed here: https://groups.google.com/forum/?fromgroups#!topic/frege-programming-language/2mCNWMVg5eY
---- Second part ------------
Your example would not work because, if you define instance Foo FB, then this must hold irrespective of other instances and subclasses. The default foo1 method in Bar will be used only if no Foo instance exists.
then that just means that if I write an empty instance declaration for some Java native class, then foo maps to object.foo() in Java?
Yes, but it depends on the native declaration; it doesn't have to be a Java instance method of that Java class - it could also be a static method, a method of another class, or just a member access, etc.
In particular, if a class method is declared as native, I can still provide some other implementation for it if I choose to?
Sure, just like with any other default class method. If a default class method is implemented using pattern guards, that does not mean that you must use pattern guards in your implementation.
Look,
native [pure] foo "javaspec" :: a -> b -> c
just means: please make me a Frege function foo with type a -> b -> c that happens to use javaspec for its implementation. (Exactly how this works is supposed to be described in Chapter 6 of the language reference. It's not done yet. Sorry.)
For example:
native pure int2long "(long)" :: Int -> Long
The compiler will see that this is syntactically a cast operation, and when it sees:
... int2long val ...
it will generate java code like:
((long)(unbox(val)))
Apart from that, it will also make a wrapper, so that you can, for example:
map int2long [1,2,4]
The point is that, if I tell you: there is a function X.Y.z, you're not able to tell whether this is a native or a regular one without looking at the source code. Hence, native is the way to lift Java methods, operators and so forth to the Frege realm. Practically everything that is known as "primOp" in Haskell is just a native function in Frege. For example,
pure native + :: Int -> Int -> Int
(It's not always that easy, of course.)
Every type [constructor] is a namespace. I get how that would be
helpful for the infamous named fields problem. I'm not sure why you'd
want to declare other things in the scope of this namespace...
It gives you somewhat more control regarding the top namespace. Apart from that, you don't have to define other things there. I just did not see a reason to forbid it once I committed to this simple approach to tackle the record field problem.

function currying vs an ordinary callback method

I'm trying to understand how the programming technique known as currying differs from an ordinary callback interface (such as the Observer/Observable interfaces in Java, or the classic Visitor design pattern).
I understand what currying is, I just don't understand why it's uniquely useful to the point that it requires its own terminology and language support.
Could someone explain a programming situation that is better solved by currying than by a callback method? What's the practical significance of the fact that currying uses a separate function for each argument?
[update:] to summarize the answers I got: currying comes part and parcel with the fact that functions are "first-class" citizens, i.e. objects that can be created and passed around like any other object reference. This makes it possible to return a function from a function, in other words currying.
As for why currying is useful: it provides a syntax to concisely decorate function calls, so that derived functions can be created with minimal boilerplate. Whereas in Java you might create several overloaded or "wrapper" methods for each partial parameter set, which ultimately invoke a master method containing all the parameters, currying provides a lighter syntax that lets you generate these "function wrappers" as needed in your code.
Currying and callbacks are two completely different techniques.
Callbacks are essentially a synonym for "passing a function to a function" (i.e. a higher-order function that consumes a function); currying is a form of partial application: a function which isn't passed all of the parameters it expects returns a new function that expects only the remaining parameters.
Accordingly, they are not alternatives at all.
Currying is useful because it makes it much easier to concisely create functions that can be used as, for example, callbacks, or in a pointfree programme. It also means that you can, e.g. pass a callback to a function like map, and have a new function that applies your callback to every element of any list you care to pass to it.
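To make that concrete, here is a minimal sketch in Julia (the language of the question at the top of this page; the names are illustrative):

add(x) = y -> x + y       # curried: each call consumes one argument

add5 = add(5)             # partially applied - still waiting for y
add5(3)                   # 8

map(add(10), [1, 2, 3])   # [11, 12, 13] - a callback built on the fly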
Well, it's a point of language support.
In Java, for example, you can define all sorts of callback interfaces: one for parameterless methods, one for methods with one argument, one for methods with two arguments, and so forth.
But when functions are first-class citizens, one does not need this: single-argument functions will do the job, because functions can be returned. Hence, one important interface in all "functional Java" projects will be some interface of the form:
interface Fun<A,B> {
    public B apply(A a);
}
or the like that covers this pattern.
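For contrast, in a language with first-class functions (Julia again, to match the question at the top of the page) no interface is needed at all; a higher-order function takes the function directly:

apply_twice(f, x) = f(f(x))    # f plays the role of Fun<A,B>
apply_twice(x -> x + 3, 10)    # 16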

Why making a difference between methods and functions in Scala?

I have been reading about methods and functions in Scala. Jim's post and Daniel's complement to it do a good job of explaining what the differences between these are. Here is what I took with me:
functions are objects, methods are not;
as a consequence, functions can be passed as arguments, but methods cannot;
methods can be type-parametrised, functions cannot;
methods are faster.
I also understand the difference between def, val and var.
Now I have actually two questions:
Why can't we parametrise the apply method of a function to parametrise the function? And
Why can't the method be called by the function object to run faster? Or the caller of the function be made to call the original method directly?
Looking forward to your answers and many thanks in advance!
1 - Parameterizing functions.
It is theoretically possible for a compiler to parameterize the type of a function; one could add that as a feature. It isn't entirely trivial, though, because functions are contravariant in their argument and covariant in their return value:
trait Function1[-T, +R] { ... }
which means that a function that can accept a wider range of arguments counts as a subtype (since it can process anything the supertype's function can process), and if it produces a smaller set of results, that's okay too (since it will still obey the supertype's contract). But how do you encode
def fn[A](a: A) = a
in that framework? The whole point is that the return type is equal to the type passed in, whatever that type has to be. You'd need
Function1[ ThisCanBeAnything, ThisHasToMatch ]
as your function type. "This can be anything" is well-represented by Any if you want a single type, but then you could return anything as the original type is lost. This isn't to say that there is no way to implement it, but it doesn't fit nicely into the existing framework.
2 - Speed of functions.
This is really simple: a function is the apply method on another object. You have to have that object in order to call its method. This will always be slower (or at least no faster) than calling your own method, since you already have yourself.
As a practical matter, JVMs can do a very good job inlining functions these days; there is often no difference in performance as long as you're mostly using your method or function, not creating the function object over and over. If you're deeply nesting very short loops, you may find yourself creating way too many functions; moving them out into vals outside of the nested loops may save time. But don't bother until you've benchmarked and know that there's a bottleneck there; typically the JVM does the right thing.
Think about the type signature of a function. It explicitly says what types it takes. So then type-parameterizing apply() would be inconsistent.
A function is an object, which must be created, initialized, and then garbage-collected. When apply() is called, it has to grab the function object in addition to the parent.

Closures in Scala vs Closures in Java

Some time ago Oracle decided that adding closures to Java 8 would be a good idea. I wonder how the design problems are solved there in comparison to Scala, which has had closures since day one.
Citing the Open Issues from javac.info:
Can Method Handles be used for Function Types?
It isn't obvious how to make that work. One problem is that Method Handles reify type parameters, but in a way that interferes with function subtyping.
Can we get rid of the explicit declaration of "throws" type parameters?
The idea would be to use disjunctive type inference whenever the declared bound is a checked exception type. This is not strictly backward compatible, but it's unlikely to break real existing code. We probably can't get rid of "throws" in the type argument, however, due to syntactic ambiguity.
Disallow #Shared on old-style loop index variables
Handle interfaces like Comparator that define more than one method, all but one of which will be implemented by a method inherited from Object.
The definition of "interface with a single method" should count only methods that would not be implemented by a method in Object and should count multiple methods as one if implementing one of them would implement them all. Mainly, this requires a more precise specification of what it means for an interface to have only a single abstract method.
Specify mapping from function types to interfaces: names, parameters, etc.
We should fully specify the mapping from function types to system-generated interfaces precisely.
Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters. Similarly, the subtype relationships used by the closure conversion should be reflected as well.
Elided exception type parameters to help retrofit exception transparency.
Perhaps make elided exception type parameters mean the bound. This enables retrofitting existing generic interfaces that don't have a type parameter for the exception, such as java.util.concurrent.Callable, by adding a new generic exception parameter.
How are class literals for function types formed?
Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?
The system class loader should dynamically generate function type interfaces.
The interfaces corresponding to function types should be generated on demand by the bootstrap class loader, so they can be shared among all user code. For the prototype, we may have javac generate these interfaces so prototype-generated code can run on stock (JDK5-6) VMs.
Must the evaluation of a lambda expression produce a fresh object each time?
Hopefully not. If a lambda captures no variables from an enclosing scope, for example, it can be allocated statically. Similarly, in other situations a lambda could be moved out of an inner loop if it doesn't capture any variables declared inside the loop. It would therefore be best if the specification promises nothing about the reference identity of the result of a lambda expression, so such optimizations can be done by the compiler.
As far as I understand, 2., 6. and 7. aren't a problem in Scala, because Scala doesn't use checked exceptions as a sort of "shadow type system" the way Java does.
What about the rest?
1) Can Method Handles be used for Function Types?
Scala targets JDK 5 and 6 which don't have method handles, so it hasn't tried to deal with that issue yet.
2) Can we get rid of the explicit declaration of "throws" type parameters?
Scala doesn't have checked exceptions.
3) Disallow #Shared on old-style loop index variables.
Scala doesn't have loop index variables. Still, the same idea can be expressed with a certain kind of while loop. Scala's semantics are pretty standard here: symbol bindings are captured, and if a symbol happens to map to a mutable reference cell, then on your own head be it.
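To illustrate what capturing a binding means, a small sketch (in Julia, for consistency with the question at the top of the page):

fs = Function[]
for i in 1:3
    push!(fs, () -> i)   # each iteration gets a fresh binding of i
end
[f() for f in fs]        # [1, 2, 3]

r = Ref(0)               # a mutable reference cell
g = () -> r[]            # g captures the cell, not a snapshot
r[] = 42
g()                      # 42 - "on your own head be it"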
4) Handle interfaces like Comparator that define more than one method all but one of which come from Object
Scala users tend to use functions (or implicit functions) to coerce functions of the right type to an interface, e.g.
[implicit] def toComparator[A](f: (A, A) => Int) = new Comparator[A] {
    def compare(x: A, y: A) = f(x, y)
}
5) Specify mapping from function types to interfaces:
Scala's standard library includes FunctionN traits for 0 <= N <= 22, and the spec says that function literals create instances of those traits.
6) Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters.
Since Scala doesn't have checked exceptions, it can punt on this whole issue.
7) Elided exception type parameters to help retrofit exception transparency.
Same deal, no checked exceptions.
8) How are class literals for function types formed? Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?
classOf[A => B] //or, equivalently,
classOf[Function1[A,B]]
Type erasure is type erasure. The above literals produce scala.Function1 regardless of the choice for A and B. If you prefer, you can write
classOf[ _ => _ ] // or
classOf[Function1[ _,_ ]]
9) The system class loader should dynamically generate function type interfaces.
Scala arbitrarily limits the number of arguments to be at most 22 so that it doesn't have to generate the FunctionN classes dynamically.
10) Must the evaluation of a lambda expression produce a fresh object each time?
The Scala specification does not say that it must. But as of 2.8.1, the compiler does not optimize the case where a lambda does not capture anything from its environment. I haven't tested with 2.9.0 yet.
I'll address only number 4 here.
One of the things that distinguishes Java "closures" from closures found in other languages is that they can be used in place of an interface that does not describe a function - for example, Runnable. This is what is meant by SAM: Single Abstract Method.
Java does this because these interfaces abound in the Java library, and they abound in the Java library because Java was created without function types or closures. In their absence, all code that needed inversion of control had to resort to using a SAM interface.
For example, Arrays.sort takes a Comparator object that will perform comparison between members of the array to be sorted. By contrast, Scala can sort a List[A] by receiving a function (A, A) => Int, which is easily passed through a closure. See note 1 at the end, however.
So, because Scala's library was created for a language with function types and closures, there is no need to support such a thing as SAM closures in Scala.
Of course, there's a question of Scala/Java interoperability -- while Scala's library might not need something like SAM, Java library does. There are two ways that can be solved. First, because Scala supports closures and function types, it is very easy to create helper methods. For example:
def runnable(f: () => Unit) = new Runnable {
    def run() = f()
}

runnable { () => println("Hello") } // creates a Runnable
Actually, this particular example can be made even shorter by use of Scala's by-name parameters, but that's beside the point. Anyway, this is something that, arguably, Java could have done instead of what it is going to do. Given the prevalence of SAM interfaces, it is not all that surprising.
The other way Scala handles this is through implicit conversions. By just prepending implicit to the runnable method above, one creates a method that gets automatically (note 2) applied whenever a Runnable is required but a function () => Unit is provided.
Implicits are rather unique, however, and still controversial to some extent.
Note 1: Actually, this particular example was chosen with some malice... Comparator has two abstract methods instead of one, which is the whole problem with it. Since one of its methods can be implemented in terms of the other, I think they'll just "subtract" defender methods from the abstract list.
And, on the Scala side, even though there's a sort method that uses (A, A) => Boolean rather than (A, A) => Int, the standard sorting method calls for an Ordering object, which is quite similar to Java's Comparator! In Scala's case, though, Ordering performs the role of a type class.
Note 2: Implicits are automatically applied, once they have been imported into scope.

Do you know of any examples of elegant solutions in dynamically typed languages?

Imagine two languages which (apart from the type information) have exactly the same syntax, but one is statically typed while the other uses dynamic typing. Then, for every program written in the statically typed language, one can derive an equivalent dynamically typed program by removing all type information. As this is not necessarily possible the other way around, the class of dynamically typed programs is strictly larger than the class of statically typed programs. Let's call those dynamically typed programs for which there is no mapping of variables to types that makes them statically typed "real dynamically typed programs".
As both language families are definitely Turing-complete, we can be sure that for every such real dynamically typed program there exists a statically typed program doing exactly the same thing, but I often read that "experienced programmers are able to write very elegant code in dynamically typed languages". I thus ask myself: are there any good examples of real dynamically typed programs for which any equivalent statically typed program clearly is much more complex / much less "elegant" (whatever that may mean)?
Do you know of any such examples?
I am sure that for many "elegance" problems of static languages, static type checking itself isn't to blame, but rather the lack of expressiveness of the static type system implemented in the language and the limited capabilities of the compiler. If this is done "righter" (like in Haskell, for example), then suddenly the programs turn out to be terse, elegant... and safer than their dynamic counterparts.
Here's an illustration (C++ specific, sorry): C++ is so powerful that it implements a metalanguage with its template system. But still, a very simple function is hard to declare:
template<class X, class Y>
? max(X x, Y y)
There is an astounding number of possible solutions, like ? = boost::variant<X,Y>, or computing ? = is_convertible(X,Y) ? Y : (is_convertible(Y,X) ? X : error), none of them really satisfying.
But now imagine a preprocessor that could transform an input program into its equivalent continuation-passing-style (CPS) form, where each continuation is a callable object which accepts all possible argument types. A CPS version of max would look like this:
template<class X, class Y, class C>
void cps_max(X x, Y y, C cont) // cont is an object which can be called with X or Y
{
    if (x > y) cont(x); else cont(y);
}
The problem is gone: max calls a continuation which accepts X or Y. So there is a solution for max with static type checking, but we can't express max in its non-CPS form; untransform(cps_max) is undefined, so to speak. So we have some argument that max can be done right, but we don't have the means to do so. This is lack of expressiveness.
Update for 2501:
Assume there are some unrelated types X and Y, and there is a bool operator<(X,Y). What should max(x, y) return? Let us further assume that X and Y both have a member function foo(). How could we make it possible to write:
void f(X x, Y y) {
    max(x, y).foo();
}
Returning either X or Y and invoking foo() on the result is no problem for a dynamic language, but close to impossible for most static languages. However, we can get the intended functionality by rewriting f() to use cps_max:
struct call_foo {
    template<class T> void operator()(const T &t) const { t.foo(); }
};

void f(X x, Y y) {
    cps_max(x, y, call_foo());
}
So this can't be a problem for static type checking, but it looks very ugly and does not scale well beyond simple examples. So something is missing from this static language, given that we cannot provide a static and readable solution.
Yes, check out Eric Raymond's Python Success Story. Basically, it is about how much easier reflection-type tasks are with dynamically typed programming languages.
Groovy and XML
Groovy and Domain-specific language