Why can't code in #:fallbacks refer to the generic methods? - racket

This code:
(require racket/generic)
;; A holder that assigns ids to the things it holds. Some callers want to know
;; the id that was assigned when adding a thing to the holder, and others don't.
(define-generics holder
  (add-new-thing+id holder new-thing) ; returns: holder id (two values)
  (add-new-thing holder new-thing)    ; returns: holder
  #:fallbacks
  [(define (add-new-thing holder new-thing) ; probably the same code for all holder structs
     (let-values ([(holder _) (add-new-thing+id holder new-thing)])
       holder))])
produces this error message:
add-new-thing+id: method not implemented in: (add-new-thing+id holder new-thing)
I'm able to fix it by adding a define/generic inside the fallbacks, like this:
#:fallbacks
[(define/generic add add-new-thing+id)
 (define (add-new-thing holder new-thing)
   (let-values ([(holder _) (add holder new-thing)])
     holder))])
but this seems to add complexity without adding value, and I don't understand why one works and the other doesn't.
As I understand #:fallbacks, the idea is that the generic definition can build methods out of the most primitive methods, so structs that implement the generic interface don't always need to reimplement the same big set of methods that usually just call the core methods with identical code—but they can override those "derived" methods if needed, say, for optimization. That's a very useful thing*—but have I misunderstood fallbacks?
It seems strange that fallback code can't refer to the generic methods. Isn't the main value of fallbacks to call them? The documentation for define/generic says that it's a syntax error to use it outside a #:methods clause in a struct definition, so I'm probably misusing it. Anyway, can someone explain the rules for code in a #:fallbacks clause? How are you supposed to write it?
* The Clojure world has something similar, in the potemkin library's def-abstract-type and deftype+, but not as well integrated into the language. potemkin/def-map-type illustrates very nicely why fallbacks—as I understand them, anyway—are such a valuable feature.

The second version of your code is correct.
The first version of your code would work if you had a fallback definition of add-new-thing+id; but since you are referring to a definition of that method that lives outside the fallback scope, you need to import it.
It does feel a bit repetitive to have to name the generic again inside the fallback clause. The reason is that #:fallbacks works the same way as #:methods, and therefore has the same behavior of shadowing the generic methods with its own definitions.
To refer explicitly to the generic method rather than to a local definition, you need to "import" it inside your clause using define/generic (which doesn't really define anything; it just binds the generic in the local context).
As the documentation for define/generic says:
When used inside the method definitions associated with the #:methods keyword, binds local-id to the generic for method-id. This form is useful for method specializations to use generic methods (as opposed to the local specialization) on other values.
Then in define-generics:
The syntax of the fallback-impls is the same as the methods provided for the #:methods keyword for struct.
Which means #:fallbacks has the same behavior as using #:methods in a struct.
Why?
The logic behind that behavior is that method-definition blocks, like #:methods and #:fallbacks, have access to their own definitions of all the generics, so it's easy to refer to your own context. To explicitly use a generic from outside this context, you need define/generic.

Related

Scheme (Kawa) - How to force macro expansion inside another macro

I want to make a macro that, when used in a class definition, creates a field, its public setter, and an annotation. However, it seems the macro is not expanding, most likely because it's used inside another (class-definition) macro.
Here is an example how to define a class with one field:
(define-simple-class test-class ()
  (foo :: java.util.List))
My macro (only defines field as of now):
(define-syntax autowire
  (syntax-rules ()
    ((autowire class id)
     (id :: class))))
However, if I try to use it:
(define-simple-class test-class ()
  (autowire java.util.List foo))
and query the fields of the new class via reflection, I can see that it creates a field named autowire, and foo is nowhere to be seen. It looks like an issue with the order in which the macros are expanded.
Yes, macros are expanded “from the outside in”. After expanding define-simple-class, the subform (autowire java.util.List foo) does not exist anymore.
If you want this kind of behaviour modification, you need to define your own define-not-so-simple-class macro, which might expand to a define-simple-class form.
However, please step back before making such a minor tweak to something that is standard, and ask yourself whether it is worth it. The upside might be syntax that is slightly better aligned to the way you think, but the downside is that it might be worse aligned to the way others think (who might need to understand your code). There is a maxim for maintainable and readable coding: “be conventional”.

Overloading vs. Overriding in Julia

I am not familiar with Julia, but it appears to allow you to define functions multiple times with different signatures, such as this:
FK5Coords{e}(ra::T, dec::T) where {e,T<:AbstractFloat} = FK5Coords{e, T}(ra, dec)
FK5Coords{e}(ra::Real, dec::Real) where {e} =
    FK5Coords{e}(promote(float(ra), float(dec))...)
To me it looks like this allows you to call FK5Coords with two different signatures.
So I'm wondering (a) if that is true, if Julia allows overloading functions like this, and (b) if Julia allows something like super in a function, which seems like it would conflict with overloading. And (c), what an example snippet of Julia code looks like that shows (1) overloading in one example, and (2) overriding in the other.
The reason I'm asking is because I am wondering how Julia solves the problem of having both super and function overloading, because both require defining the function again and it seems you would have to flag it with some metadata or something to say "in this case I am overriding" or "in this case I am overloading".
Note: If that was not an example of overloading, then (from Wikipedia) this was what I was imagining Julia supported (along these lines):
// volume of a cylinder
double volume(const double r, const int h)
{
    return 3.1415926*r*r*static_cast<double>(h);
}
// volume of a cuboid
long volume(const long l, const int b, const int h)
{
    return l*b*h;
}
So I'm wondering (a) if that is true, if Julia allows overloading functions like this
Julia allows you to write different versions of the same function (different "methods" for the function) that differ in the type/number of arguments. That's pretty similar to overloading, except that overloading usually means the function to be called is decided based on the compile-time type of the arguments, whereas in Julia it's decided based on the run-time type of the arguments. This is commonly called dynamic dispatch. See this C++ example to see what overloading lacks and dispatch gives you.
(b) if Julia allows something like super in a function, which seems like it would conflict with overloading
The reason I'm asking is because I am wondering how Julia solves the problem of having both super and function overloading, because both require defining the function again and it seems you would have to flag it with some metadata or something to say "in this case I am overriding" or "in this case I am overloading".
I'm not sure why you think overloading will conflict with super. In C++, overriding involves having the exact same argument numbers and types, whereas overloading requires having either the number or the type of arguments be different. Compilers are smart enough to easily distinguish between those two cases, and AFAICT C++ can have a super method despite having both overloading and overriding, except that it also has multiple inheritance. I believe (with my limited C++ knowledge) that multiple inheritance is the reason C++ doesn't have a super method call, not overloading.
Anyway, if you peel back behind the Object-oriented curtain and look into method signatures, you'll see that all overriding is really a particular type of overloading: Dog::run(int dist, int dir) can override Animal::run(int dist, int dir) (assume Dog inherits from Animal), but that's equivalent to overloading a run(Animal a, int dist, int dir) function with a run(Dog d, int dist, int dir) definition. (If run was a virtual function, this would be dynamic dispatch instead of overloading, but that's a separate discussion.)
In Julia we do this explicitly, so the definitions would be run(d::Dog, dist::Int, dir::Int) and run(a::Animal, dist::Int, dir::Int). However, in Julia, you can only inherit from abstract types, so here the supertype Animal would be an abstract type, so you can't really call the second method with an Animal instance - the second method definition is really a shorthand way of saying "call this method for any instance of some concrete subtype of Animal, unless that subtype has its own separate method definition" (which Dog does, in this case). I'm not aware of any easy way of calling the second method run(Animal... from the first run(Dog..., which would be the equivalent of a super call.
(You can also 'override' a method from another module with import, but if it has completely the same parameters and parameter types, you'd probably be committing type piracy, which is usually a bad idea. I'm not aware of any way of getting back the original method after this type of overriding. "Overloading" (using dispatch) by defining and using your own types is much more common anyway.)
(c), what an example snippet of Julia code looks like that shows (1) overloading in one example, and (2) overriding in the other.
The first code snippet you posted is an example of using dispatch (which is what Julia uses instead of overloading). For another example, let's first define our base type and function:
abstract type Vehicle end
function move(v::Vehicle, dist::Float64)
    println("Moving by $dist meters")
end
Now we can create another method of this function for dispatch ("overload" it) this way:
function move(v::Vehicle, dist::LightYears)
    println("Blazing across $dist light years")
end
We can do an object-oriented style "override" too (though at the language level this is just seen as another method for dispatch):
struct Car <: Vehicle
    model::String
end
function move(c::Car, dist::Float64)
    println("$(c.model) is moving $dist meters")
end
This is the equivalent of overriding Vehicle.move(float dist) in derived class as Car.move(float dist).
And just for the heck of it, the volume function from the question:
# volume of a cylinder
volume(r::Float64, h::Int) = π*r*r*h
volume(l::Int, b::Int, h::Int) = l*b*h;
Now the correct volume method to call will be decided based on the number (and type) of arguments passed (and the return type is automatically inferred by the compiler, Float64 for the first method and Int for the second one).

Why are FunctionN types in Scala created as a subtype of AnyRef, not AnyVal?

If I understand correctly, under the JVM, every time I use a lambda expression, an Object has to be created.
Why the overhead? Why did the Scala creators choose to extend AnyRef instead of AnyVal when designing FunctionN types? I mean, they don't have any real 'values' in them by themselves, so shouldn't it be possible for functions to be value objects with an underlying Unit representation (or a long containing some hash for equality checking or whatever)? I can imagine not allocating an object per every lambda can lead to performance boosts in some codebases.
One obvious disadvantage that comes to my mind of extending AnyVal is that it would prohibit subclassing function types. Maybe that alone would be sufficient to be not extending AnyVal, but what other reasons can there be?
--Edit
I understand that functions need to close over other variables, but I think it would be more natural to model those as arguments to the apply method, not as field members of FunctionN objects (thus removing the need for a java.lang.Object on this part) -- after all, isn't the set of closed-over variables known at compile time?
--Edit again
I found out about it; what I had in mind was 'lambda lifting'.
The only ways to call a method are the bytecode operations invokevirtual (virtual dispatch on the class), invokeinterface (same, but on interfaces), invokespecial (invoke exactly the given method, ignoring virtual lookup; used for private methods, super calls, and constructors), and invokestatic (call a static method). invokespecial is out, because calling exactly some function is the antithesis of abstracting over a function. invokestatic is out, too, because it's essentially an invokespecial clone that doesn't need a this argument. invokevirtual and invokeinterface are similar enough for our purposes to be considered the same.
There's no way to pass a plain function pointer like you might see in C, and even if you could, you could never call it, as there is no instruction that can jump to an arbitrary point in code. (All jump targets inside a method are restricted to that method, and all references to the outside boil down to strings. (The JVM, of course, is free to optimize that representation once a file is loaded.))
In order to invoke a method with either instruction, the JVM must look up the method inside the target object's virtual dispatch table. If you tried to dummy out the object with () (AnyVal subclasses didn't exist until 2.10, but let's suspend our disbelief), then the JVM would get horribly confused when you tried to call a (presumably interesting) method on something that's as close to "featureless blob" as you can get.
Also remember that an object's methods are totally determined by its class. If a.getClass == b.getClass, then a and b have the exact same methods, code and all. In order to get around this, we need to create subclasses of the FunctionN traits, such that each subclass represents one function, and each instance of each class contains a reference to its class which contains a reference to the code associated with that function. Even with invokedynamic, this is still true, as the LambdaMetaFactory's current implementation creates an inner class at runtime.
Finally, the assumption that functions need no state is false, as @Oleg points out. Closures need to keep references to their environment, and that is only possible with an object.
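To make that last point concrete, here is a minimal Scala sketch (the names are illustrative, not from the question): the lambda closes over a local value, and that captured value has to live somewhere; on the JVM it ends up stored in the generated function object.
// Minimal sketch: `addTax` closes over `rate`, so the compiled
// Function1 instance must carry that captured value as state.
def taxed(rate: Double): Double => Double = {
  val addTax: Double => Double = price => price * (1 + rate) // captures `rate`
  addTax
}

val withVat = taxed(0.2) // a Function1 object holding the captured rate
println(withVat(100.0))  // 120.0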

function currying vs an ordinary callback method

I'm trying to understand how the programming technique known as currying differs from an ordinary callback interface (such as the Observer/Observable interfaces in Java, or the classic Visitor design pattern).
I understand what currying is, I just don't understand why it's uniquely useful to the point that it requires its own terminology and language support.
Could someone explain a programming situation that is better solved by currying than by a callback method? What's the practical significance of the fact that currying uses a separate function for each argument?
[update:] To summarize the answers I got: currying comes part and parcel with the fact that functions are "first class" citizens, i.e., objects that can be created and passed around like any other object reference. This makes it possible to return a function from a function, which is what currying does.
As for the reason why currying is useful: currying provides a syntax that lets you concisely decorate function calls so that derived functions can be created with minimal boilerplate code. Whereas in Java you might create several overloaded or "wrapper" methods for each partial parameter set, which ultimately invoke a master method containing all the parameters, currying provides a lighter syntax that lets you generate these "function wrappers" as needed in your code.
Currying and callbacks are two completely different technologies.
Callbacks are essentially a synonym for "passing a function to a function" (i.e. higher-order function that consumes a function); currying is a form of partial application, i.e. a function which isn't passed all of the parameters it expects returns a new function that only expects the free parameters.
Accordingly, they are not alternatives at all.
Currying is useful because it makes it much easier to concisely create functions that can be used as, for example, callbacks, or in a point-free program. It also means that you can, for example, pass a callback to a function like map and get back a new function that applies your callback to every element of any list you care to pass to it.
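A minimal Scala sketch of that idea (the helper name mapWith is made up for illustration): currying lets you fix the callback once and get back a reusable function over lists.
// Supply the callback first; get back a function that still expects a list.
def mapWith[A, B](f: A => B): List[A] => List[B] = xs => xs.map(f)

val double: Int => Int = _ * 2
val doubleAll = mapWith(double)   // partially applied: still needs a list
println(doubleAll(List(1, 2, 3))) // List(2, 4, 6)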
Well, it's a point of language support.
In Java, for example, you can define all sorts of callback interfaces: one for parameterless methods, one for methods with one argument, one for methods with two arguments, and so forth.
But when functions are first-class citizens, one does not need this: single-argument functions will do the job, because functions can be returned. Hence, one important interface in all "functional java" projects will be some interface of the form:
interface Fun<A,B> {
    public B apply(A a);
}
or the like that covers this pattern.
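As a sketch of that point (written in Scala rather than Java, using the same shape of interface): a two-argument callback can be expressed with nothing but single-argument functions, because apply can return another Fun.
trait Fun[A, B] { def apply(a: A): B }

// A "binary" callback (A, B) -> C becomes Fun[A, Fun[B, C]].
val add: Fun[Int, Fun[Int, Int]] = new Fun[Int, Fun[Int, Int]] {
  def apply(a: Int): Fun[Int, Int] = new Fun[Int, Int] {
    def apply(b: Int): Int = a + b
  }
}

println(add.apply(2).apply(3)) // 5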

scala - is it possible to force immutability on an object?

I mean, is there some declarative way to prevent an object from changing any of its members?
In the following example
class student(var name:String)
val s = new student("John")
"s" has been declared as a val, so it will always point to the same student.
But is there some way to prevent s.name from being changed just by declaring it immutable?
Or is the only solution to declare everything as val and manually enforce immutability?
No, it's not possible to declare something immutable. You have to enforce immutability yourself by not allowing anyone to change it, that is, by removing all ways of modifying the class.
Someone can still modify it using reflection, but that's another story.
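A minimal sketch of doing that by hand (the class name here is illustrative): declare the field as a val, so the class simply has no setter to call.
// Immutable by construction: `name` is a val, so it has no setter.
class Student(val name: String)

val s = new Student("John")
// s.name = "Jane" // does not compile: reassignment to val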
Scala doesn't enforce that, so there is no way to know. There is, however, an interesting compiler-plugin project named pusca (I guess it stands for Pure-Scala). Pure is defined there as not mutating a non-local variable and being side-effect free (e.g. not printing to the console)—so that calling a pure method repeatedly will always yield the same result (what is called referentially transparent).
I haven't tried out that plug-in myself, so I can't say whether it's stable or usable yet.
There is no way that Scala could do this generally.
Consider the following hypothetical example:
class Student(var name : String, var course : Course)
def stuff(course : Course) {
  magically_pure_val s = new Student("Fredzilla", course)
  someFunctionOfStudent(s)
  genericHigherOrderFunction(s, someFunctionOfStudent)
  course.someMethod()
}
The pitfalls for any attempt to actually implement that magically_pure_val keyword are:
someFunctionOfStudent takes an arbitrary student, and isn't implemented in this compilation unit. It was written/compiled knowing that Student consists of two mutable fields. How do we know it doesn't actually mutate them?
genericHigherOrderFunction is even worse; it's going to take our Student and a function of Student, but it's written polymorphically. Whether or not it actually mutates s depends on what its other arguments are; determining that at compile time with full generality requires solving the Halting Problem.
Let's assume we could get around that (maybe we could set some secret flags that mean exceptions get raised if the s object is actually mutated, though personally I wouldn't find that good enough). What about that course field? Does course.someMethod() mutate it? That method call isn't invoked from s directly.
Worse than that, we only know that we'll have passed in an instance of Course or some subclass of Course. So even if we are able to analyze a particular implementation of Course and Course.someMethod and conclude that this is safe, someone can always add a new subclass of Course whose implementation of someMethod mutates the Course.
There's simply no way for the compiler to check that a given object cannot be mutated. The pusca plugin mentioned by 0__ appears to detect purity the same way Mercury does: by ensuring that every method is known from its signature to be either pure or impure, and by raising a compiler error if the implementation of anything declared to be pure does anything that could cause impurity (unless the programmer promises that the method is pure anyway).[1]
This is quite different from simply declaring a value to be completely (and deeply) immutable and expecting the compiler to notice if any of the code that could touch it could mutate it. It's also not a perfect inference, just a conservative one.
[1] The pusca README claims that it can infer the impurity of methods whose last expression is a call to an impure method. I'm not quite sure how it can do this, as checking whether that last expression is an impure call requires checking whether it's calling a not-declared-impure method that should be declared impure by this rule, and the implementation might not be available to the compiler at that point (and indeed could be changed later even if it is). But all I've done is look at the README and think about it for a few minutes, so I might be missing something.