Reloading multimethods via Slime - emacs

I'm having trouble reloading multimethods when developing in Emacs with a Slime repl.
Redefining the defmethod forms works fine, but if I change the dispatch function I don't seem to be able to reload the defmulti form. In my case I had specifically added or removed dispatch function parameters.
As a workaround I've been able to ns-unmap the multimethod var, reload the defmulti form, and then reload all the defmethod forms.
Presumably this is a "limitation" of the way Clojure implements multimethods, i.e. we're sacrificing some dynamism for execution speed, but are there any idioms or development practices that help work around this?
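For concreteness, here is roughly what that workaround looks like at the REPL (the multimethod and namespace names below are just illustrative):
(ns-unmap 'shapes 'area)                                 ; drop the old multimethod var
(defmulti area (fn [shape unit] [(:kind shape) unit]))   ; re-evaluate the changed defmulti
;; ...then re-evaluate every defmethod form again, e.g.
(defmethod area [:circle :m2] [{:keys [r]} _] (* Math/PI r r))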

The short answer is that your way of dealing with this is exactly correct. If you find yourself updating a multimethod in order to change the dispatch function particularly frequently, (1) I think that's unusual :-), (2) you could write a suite of functions / macros to help with the reloading. I sketch two untested (!) macros to help with (2) further below.
Why?
First, however, a brief discussion of the "why". Dispatch function lookup for a multimethod as currently implemented requires no synchronization -- the dispatch fn is stored in a final field of the MultiFn object. This of course means that you cannot just change the dispatch function for a given multimethod -- you have to recreate the multimethod itself. That, as you point out, necessitates re-registration of all previously defined methods, which is a hassle.
The current behaviour lets you reload namespaces with defmethod forms in them without losing all your methods at the cost of making it slightly more cumbersome to replace the actual multimethod when that is indeed what you want to do.
If you really wanted to, the dispatch fn could be changed via reflection, but that has problematic semantics, particularly in multi-threaded scenarios (see Java Language Specification 17.5.3 for information on reflective updates to final fields after construction).
Hacks (non-reflective)
One approach to (2) would be to automate re-adding the methods after redefinition with a macro along the lines of (untested)
(defmacro redefmulti [multifn & defmulti-tail]
  `(let [mt# (methods ~multifn)]
     (ns-unmap (.ns (var ~multifn)) '~multifn)
     (defmulti ~multifn ~@defmulti-tail)
     (doseq [[dispval# meth#] mt#]
       (.addMethod ~multifn dispval# meth#))))
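For instance, a hypothetical usage (the multimethod name and keys are made up): change the dispatch fn in place while keeping every method already registered on it.
(defmulti render :type)
(defmethod render :text [msg] (:body msg))
;; later: dispatch on a different key without re-evaluating the defmethods
(redefmulti render :message/type)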
An alternative design would use a macro called, say, with-method-reregistration, taking a seqable of multifn names and a body and promising to reregister the methods after executing the body; here's a sketch (again, untested):
(defmacro with-method-reregistration [multifns & body]
  `(let [mts# (doall (zipmap ~(vec (map (partial list 'var) multifns))
                             (map methods ~multifns)))]
     ~@body
     (doseq [[v# mt#] mts#
             [dispval# meth#] mt#]
       (.addMethod @v# dispval# meth#))))
You'd use it to say (with-method-reregistration [my-multi-1 my-multi-2] (require :reload 'ns1 'ns2)). Not sure this is worth the loss of clarity.

Examples of non-trivial fexpr usage

I'm looking for (real world) uses of fexprs, where they are used in a way different to what can be accomplished with lazy evaluation.
Most examples that I could find use fexprs only to implement conditional evaluation, like for a short circuit "and" operative (Evaluate first argument, if false, don't evaluate second and directly return false).
I'm looking for "useful" uses, that is where using fexpr leads to code that is "better" (cleaner) than what could be done without fexprs.
There are two main reasons you would want to use fexprs.
The first one is because they allow you to evaluate the arguments an arbitrary number of times. This makes it possible to implement operators that evaluate their arguments lazily like you suggested. Constructs built this way are also capable of evaluating their arguments more than once. This makes it possible to implement loops through fexprs!
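Clojure has macros rather than fexprs, but the same idea -- the operator receives its argument expressions unevaluated and decides how many times to evaluate them -- is what lets you build a loop. A rough sketch:
(defmacro my-while [test & body]
  `(loop []
     (when ~test
       ~@body
       (recur))))
;; (def counter (atom 0))
;; (my-while (< @counter 3) (swap! counter inc))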
The other case is for transformation. Transforming code is basically a way of writing a compiler on top of your existing Lisp. Although it uses macros and not fexprs, cl-who is a great example of the kind of transformations that can be made.
Fexprs are somewhat orthogonal to lazy/eager evaluation.
The usual function approach is to eval the arguments to a function then call it on the result. Lazy eval still behaves like this, it just delays the evaluation until immediately before the parameter is used.
The usual macro approach is to pass the unevaluated arguments into a template which evaluates anything that isn't quoted. The resulting piece of AST is injected into the call site where it is usually evaluated again. This works much the same with lazy eval.
The historically insane fexpr approach is to pass unevaluated arguments to the function, which does as it pleases with them. The result is injected directly into the call site and usually not evaluated automatically.
The fexpr is pretty close to an arbitrary transform. So you can implement macros and lambdas with them. You can also implement whatever hybrid of eager/lazy evaluation you wish. Likewise, you could implement fexprs given default lazy eval and explicit calls to eval() in various places to force eager behaviour.
I don't think I would characterise fexprs as an easy solution to implementing lazy eval, though, in a "cure is worse than the disease" sense.

What is the difference between Clojure REPL and Scala REPL?

I've been working with the Scala language for a few months and I've already created a couple of projects in Scala. I've found the Scala REPL (at least its IntelliJ worksheet implementation) quite convenient for quick development. I can write code, see what it does, and it's nice. But I only do this for functions (not the whole program). I can't start my application and change it on the spot. Or at least I don't know how (so if you do, you're welcome to offer a piece of advice).
Several days ago my associate told me about the Clojure REPL. He uses Emacs for development and he can change code on the spot and see the results without restarting. For example, he starts the process, and if he changes the implementation of a function, his code will change its behavior without a restart. I would like to have the same thing with the Scala language.
P.S. I want to discuss neither which language is better nor whether functional programming is better than object-oriented programming. I want to find a good solution. If Clojure is the better language for the task, so be it.
The short answer is that Clojure was designed to use a very simple, single-pass compiler which reads and compiles a single s-expression or form at a time. For better or worse there is no global type information, no global type inference and no global analysis or optimization. Clojure uses clojure.lang.Var instances to create global bindings through a series of hashmaps from textual symbols to transactional values. def forms all create bindings at global scope in this global binding map. So where in Scala a "function" (method) will be resolved to an instance or static method on a given JVM class, in Clojure a "function" (def) is really just a reference to an entry in the table of var bindings. When a function is invoked, there isn't a static link to another class; instead the var is referenced by symbolic name, then dereferenced to get an instance of a clojure.lang.IFn object which is then invoked.
This layer of indirection means that it is possible to re-evaluate only a single definition at a time, and that re-evaluation becomes globally visible to all clients of the re-defined var.
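A minimal REPL illustration of that indirection (any Clojure REPL will do; the function names are made up):
(defn greet [] "hi")
(defn caller [] (greet))
(caller)                 ;=> "hi"
(defn greet [] "hello")  ; re-evaluate only this one definition
(caller)                 ;=> "hello" -- caller goes through the #'greet var, so it sees the new fn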
In comparison, when a definition in Scala changes, scalac must reload the changed file, macroexpand, type infer, type check, and compile. Then due to the semantics of classloading on the JVM, scalac must also reload all classes which depend on methods in the class which changed. Also all values which are instances of the changed class become trash.
Both approaches have their strengths and weaknesses. Obviously Clojure's approach is simpler to implement, however it pays an ongoing performance cost due to continual function lookup operations, to say nothing of correctness concerns due to the lack of static types and what have you. This is arguably suitable for contexts in which lots of change is happening in a short timeframe (interactive development) but less suitable for contexts in which code is mostly static (deployment, hence Oxcart). Some work I did suggests that the slowdown on Clojure programs from lack of static method linking is on the order of 16-25%. This is not to call Clojure slow or Scala fast, they just have different priorities.
Scala chooses to do more work up front so that the compiled application will perform better which is arguably more suitable for application deployment when little or no reloading will take place, but proves a drag when you want to make lots of small changes.
Some material I have on hand about compiling Clojure code, more or less chronological by publication order, since Nicholas influenced my GSoC work a lot:
Clojure Compilation [Nicholas]
Clojure Compilation: Full Disclojure [Nicholas]
Why is Clojure bootstrapping so slow? [Nicholas]
Oxcart and Clojure [me]
Of Oxen, Carts and Ordering [me]
Which I suppose leaves me in the unhappy place of saying simply "I'm sorry, Scala wasn't designed for that the way Clojure was" with regards to code hot swapping.

Would the ability to declare Lisp functions 'pure' be beneficial?

I have been reading a lot about Haskell lately, and the benefits that it derives from being a purely functional language. (I'm not interested in discussing monads for Lisp) It makes sense to me to (at least logically) isolate functions with side-effects as much as possible. I have used setf and other destructive functions plenty, and I recognize the need for them in Lisp and (most of) its derivatives.
Here we go:
Would something like (declare pure) potentially help an optimizing compiler? Or is this a moot point because it already knows?
Would the declaration help in proving a function or program, or at least a subset that was declared as pure? Or is this again something that is unnecessary because it's already obvious to the programmer and compiler and prover?
If for nothing else, would it be useful to a programmer for the compiler to enforce purity for functions with this declaration and add to the readability/maintainability of Lisp programs?
Does any of this make any sense? Or am I too tired to even think right now?
I'd appreciate any insights here. Info on compiler implementation or provability is welcome.
EDIT
To clarify, I didn't intend to restrict this question to Common Lisp. It clearly (I think) doesn't apply to certain derivative languages, but I'm also curious if some features of other Lisps may tend to support (or not) this kind of facility.
You have two answers, but neither touches on the real problem.
First, yes, it would obviously be good to know that a function is pure. There's a ton of compiler level things that would like to know that, as well as user level things. Given that lisp languages are so flexible, you could twist things a bit: instead of a "pure" declaration that asks the compiler to try harder or something, you just make the declaration restrict the code in the definition. This way you can guarantee that the function is pure.
You can even do that with additional supporting facilities -- I mentioned two of them in a comment I made to johanbev's answer: add the notion of immutable bindings and immutable data structures. I know that in Common Lisp these are very problematic, especially immutable bindings (since CL loads code by "side-effecting" it into place). But such features will help simplify things, and they're not inconceivable (see for example the Racket implementation, which has immutable pairs and other data structures, and has immutable bindings).
But the real question is what can you do in such restricted functions. Even a very simple looking problem would be infested with issues. (I'm using Scheme-like syntax for this.)
(define-pure (foo x)
  (cons (+ x 1) (bar)))
Seems easy enough to tell that this function is indeed pure -- it doesn't do anything obviously impure. Also, it seems that having define-pure restrict the body and allow only pure code would work fine in this case, and would allow this definition.
Now start with the problems:
It's calling cons, so it assumes that it is also known to be pure. In addition, as I mentioned above, it should rely on cons being what it is, so assume that the cons binding is immutable. Easy, since it's a known builtin. Do the same with bar, of course.
But cons does have a side effect (even if you're talking about Racket's immutable pairs): it allocates a new pair. This seems like a minor and ignorable point, but, for example, if you allow such things to appear in pure functions, then you won't be able to auto-memoize them. The problem is that someone might rely on every foo call returning a new pair -- one that is not-eq to any other existing pair. Seems that to make it fine you need to further restrict pure functions to deal not only with immutable values, but also values where the constructor doesn't always create a new value (eg, it could hash-cons instead of allocate).
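To put that point in Clojure terms (only an analogue, since the example above is Scheme-like): the observable difference memoization makes is object identity, not value equality.
(defn foo [x] (cons (+ x 1) '(tail)))
(= (foo 1) (foo 1))                     ;=> true  -- equal values
(identical? (foo 1) (foo 1))            ;=> false -- but two distinct allocations
(def foo-memo (memoize foo))
(identical? (foo-memo 1) (foo-memo 1))  ;=> true  -- memoization changes identity behaviour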
But that code also calls bar -- so now you need to make the same assumptions about bar: it must be known to be a pure function, with an immutable binding. Note specifically that bar receives no arguments -- so in that case the compiler could not only require that bar is a pure function, it could also use that information and pre-compute its value. After all, a pure function with no inputs could be reduced to a plain value. (Note BTW that Haskell doesn't have zero-argument functions.)
And that brings another big issue in. What if bar is a function of one input? In that case you'd have an error, and some exception will get thrown ... and that's no longer pure. Exceptions are side-effects. You now need to know the arity of bar in addition to everything else, and you need to avoid other exceptions. Now, how about that input x -- what happens if it isn't a number? That will throw an exception too, so you need to avoid it too. This means that you now need a type system.
Change that (+ x 1) to (/ 1 x) and you can see that not only do you need a type system, you need one that is sophisticated enough to distinguish 0s.
Alternatively, you could re-think the whole thing and have new pure arithmetic operations that never throw exceptions -- but with all the other restrictions you're now quite a long way from home, with a language that is radically different.
Finally, there's one more side-effect that remains a PITA: what if the definition of bar is (define-pure (bar) (bar))? It certainly is pure according to all of the above restrictions... But diverging is a form of a side effect, so even this is no longer kosher. (For example, if you did make your compiler optimize nullary functions to values, then for this example the compiler itself would get stuck in an infinite loop.) (And yes, Haskell doesn't deal with that, it doesn't make it less of an issue.)
Given a Lisp function, knowing if it is pure or not is undecidable in general. Of course, necessary conditions and sufficient conditions can be tested at compile time. (If there are no impure operations at all, then the function must be pure; if an impure operation gets executed unconditionally, then the function must be impure; for more complicated cases, the compiler could try to prove that the function is pure or impure, but it will not succeed in all cases.)
If the user can manually annotate a function as pure, then the compiler could either (a.) try harder to prove that the function is pure, ie. spend more time before giving up, or (b.) assume that it is and add optimizations which would not be correct for impure functions (like, say, memoizing results). So, yes, annotating functions as pure could help the compiler if the annotations are assumed to be correct.
Apart from heuristics like the "trying harder" idea above, the annotation would not help to prove stuff, because it's not giving any information to the prover. (In other words, the prover could just assume that the annotation is always there before trying.) However, it could make sense to attach to pure functions a proof of their purity.
The compiler could either (a.) check whether pure functions are indeed pure at compile time, but this is undecidable in general, or (b.) add code to try to catch side effects in pure functions at runtime and report those as an error. (a.) would probably be helpful with simple heuristics (like "an impure operation gets executed unconditionally"), (b.) would be useful for debugging.
No, it seems to make sense. Hopefully this answer also does.
The usual goodies apply when we can assume purity and referential transparency. We can automatically memoize hotspots. We can automatically parallelize computation. We can do away with a lot of race conditions. We can also use structure sharing with data that we know cannot be modified, for instance the (quasi) primitive cons() does not need to copy the cons-cells in the list it's consing to. These cells are not affected in any way by having another cons-cell pointing to them. This example is kinda obvious, but compilers are often good performers in figuring out more complex structure sharing.
However, actually determining whether a lambda (a function) is pure or has referential transparency is very tricky in Common Lisp. Remember that a funcall (foo bar) starts by looking at (symbol-function 'foo). So in this case
(defun foo (bar)
  (cons 'zot bar))
foo() is pure.
The next lambda is also pure.
(defun quux ()
  (mapcar #'foo '(zong ding flop)))
However, later on we can redefine foo:
(let ((accu -1))
  (defun foo (bar)
    (incf accu)))
The next call to quux() is no longer pure! The old pure foo() has been redefined to an impure lambda. Yikes. This example is maybe somewhat contrived, but it's not that uncommon to lexically redefine some functions, for instance with a let block. In that case it's not possible to know what will happen at compile time.
Common Lisp has very dynamic semantics, so actually being able to determine control flow and data flow ahead of time (for instance when compiling) is very hard, and in most useful cases entirely undecidable. This is quite typical of languages with dynamic type systems. There are a lot of common idioms in Lisp you cannot use if you must use static typing. It's mainly these that foul any attempt to do much meaningful static analysis. We can do it for primitives like cons and friends. But for lambdas involving other things than primitives we are in much deeper water, especially in the cases where we need to look at complex interplay between functions. Remember that a lambda is only pure if all the lambdas it calls are also pure.
Off the top of my head, it could be possible, with some deep macrology, to do away with the redefinition problem. In a sense, each lambda gets an extra argument which is a monad that represents the entire state of the Lisp image (we can obviously restrict ourselves to what the function will actually look at). But it's probably more useful to be able to declare purity ourselves, in the sense that we promise the compiler that this lambda is indeed pure. The consequences if it isn't are then undefined, and all sorts of mayhem could ensue...

Closures and dynamic scope?

I think I understand why there is a danger in allowing closures in a language using dynamic scope. That is, it seems you will be able to close over the variable OK, but when trying to read it you will only get the value at the top of the global stack. This might be dangerous if other functions use the same name in the interim.
Have I missed some other subtlety?
I realize I'm years late answering this, but I just ran across this question while doing a web search and I wanted to correct some misinformation that is posted here.
"Closure" just means a callable object that contains both code and an environment that provides bindings for free variables within that code. That environment is usually a lexical environment, but there is no technical reason why it can't be a dynamic environment.
The trick is to close the code over the environment and not the particular values. This is what Lisp 1.5 did, and also what MACLisp did for "downward funargs."
You can see how Lisp 1.5 did this by reading the Lisp 1.5 manual at http://www.softwarepreservation.org/projects/LISP/book
Pay particular attention in Appendix B to how eval handles FUNCTION and how apply handles FUNARG.
You can get the basic flavor of programming using dynamic closures from http://c2.com/cgi/wiki?DynamicClosure
You can get an in depth introduction to the implementation issues from ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-199.pdf
Modern dynamically scoped languages generally use shallow binding, where the current value of each variable is kept in one global location, and function calls save old values away on the stack. One way of implementing dynamic closures with shallow binding is described at http://www.pipeline.com/~hbaker1/ShallowBinding.html
Yes, that's the basic problem. The term "closure" is short for "lexical closure", though, which by definition captures its lexical scope. I'd call the things in a dynamically scoped language something else, like LAMBDA. Lambdas are perfectly safe in a dynamically scoped language as long as you don't try to return them.
(For an interesting thought experiment, compare the problem of returning a dynamically scoped lambda in Emacs Lisp to the problem of returning a reference to a stack-allocated variable in C, and how both are impossible in Scheme.)
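A Clojure sketch of the same pitfall, using a dynamic var to stand in for dynamic scope (names made up):
(def ^:dynamic *x* :global)
(defn make-getter []
  (fn [] *x*))                                ; reads whatever binding is current at call time
(def getter (binding [*x* :inner] (make-getter)))
(getter) ;=> :global -- the :inner binding was popped before we called it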
A long time ago, back when languages with dynamic scope were much less rare than today, this was known as the funarg problem. The problem you mention is the upward funarg problem.

Which Clojure library interface design is best?

I want to provide multiple implementations of a message reader/writer. What is the best approach?
Here is some pseudo-code of what I'm currently thinking:
just have a set of functions that all implementations must provide and leave it up to the caller to hold onto the right streams
(ns x-format)
(defn read-message [stream] ...)
(defn write-message [stream message] ...)
return a map with two closures holding onto the stream
(ns x-format)
(defn make-formatter [socket]
  {:read (fn [] (.read (.getInputStream socket)))
   :write (fn [message] (.write (.getOutputStream socket) message))})
something else?
I think the first option is better. It's more extensible, depending how these objects are going to be used. It's easier to add or change a new function that works on an existing object if the functions and objects are separate. In Clojure there usually isn't much reason to bundle functions along with the objects they work on, unless you really want to hide implementation details from users of your code.
If you're writing an interface for which you expect many implementations, consider using multimethods also. You can have the default throw a "not implemented" exception, to force implementors to implement your interface.
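A rough sketch of that suggestion -- how the format tag is carried is an assumption, here it's a :format key on the stream map:
(ns x-format)
(defmulti read-message :format)
(defmulti write-message (fn [stream message] (:format stream)))
(defmethod read-message :default [stream]
  (throw (ex-info "read-message not implemented" {:format (:format stream)})))
(defmethod write-message :default [stream message]
  (throw (ex-info "write-message not implemented" {:format (:format stream)})))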
As Gutzofter said, if the only reason you're considering the second option is to allow people not to have to type a parameter on every function call, you could consider having all of your functions use some var as the default socket object and writing a with-socket macro which uses binding to set that var's value. See the builtin printing methods which default to using the value of *out* as the output stream, and with-out-str which binds *out* to a string writer, as a Clojure example.
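A sketch of that approach -- the *socket* var and with-socket macro are made-up names, analogous to *out* and with-out-str:
(def ^:dynamic *socket* nil)
(defmacro with-socket [socket & body]
  `(binding [*socket* ~socket]
     ~@body))
(defn read-message []
  (.read (.getInputStream *socket*)))
;; (with-socket sock (read-message))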
This article may interest you; it compares and contrasts some OOP idioms with Clojure equivalents.
I think that read-message and write-message are utility functions. What you need to do is encapsulate your functions in a with- macro (or macros). See with-output-to-string in Common Lisp to see what I mean.
Edit:
When you use a with- macro you can have error handling and resource allocation in the macro expansion.
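For example, a hypothetical with-message-stream macro could allocate the stream, run the body, and guarantee cleanup even on error:
(defmacro with-message-stream [[sym socket] & body]
  `(let [~sym (.getInputStream ~socket)]
     (try
       ~@body
       (finally
         (.close ~sym)))))
;; (with-message-stream [in sock]
;;   (.read in))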
I'd go with the first option and make all those functions multimethods.