Simulate a where in Scheme with defmac - macros

I'm currently facing a problem in Scheme, and I just don't have any idea how to solve it. It is pretty simple to understand and, I guess, fairly easy for any Scheme expert. I have to simulate Haskell's where expression with the defmac facility in Scheme by defining a macro "operation". For example, I want to execute code like the following:
> (operation (+ x y)
             where ([x 1]
                    [y (+ x 32)]))
34
I'm somewhat familiar with representing simple objects in Scheme with macros (defmac), but now I'm really stuck on this problem.
Any help or ideas would be really welcome.
Thank you in advance.

If I understand correctly, you want to transform that code into something like:
(let* ((x 1)
       (y (+ x 32)))
  (+ x y))
(define-syntax operation
  (syntax-rules (where)
    ((operation expression where body)
     (let* body expression))))
This should do it, but only when where comes right after the expression.

Sounds like this should do the trick (using this definition of defmac):
(defmac (operation expr
                   where (binding ...))
  #:keywords where
  (let* (binding ...)
    expr))
It simply converts your operation form into the equivalent let*, so that your example would become:
(let* ((x 1)
       (y (+ x 32)))
  (+ x y))

Related

let over lambda doesn't seem to work in elisp

In Common Lisp this sort of thing works fine
(let ((x 7))
  (defun g (y) (* y x)))

(g 16)
In elisp this errors, saying x is not defined, as if the lexical closure did not happen. This is something I have not encountered in other Lisps. What is happening here?
Ah, I see. It works after
(setq lexical-binding t)
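For a file, the usual way is to turn on lexical binding with a file-local variable on the first line; a minimal sketch, reusing the question's example:

;;; -*- lexical-binding: t; -*-

;; With lexical binding enabled, the lambda behind defun closes over x.
(let ((x 7))
  (defun g (y) (* y x)))

(g 16)  ; => 112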

Macro expanding to same operator

Many Lisp-family languages have a little bit of syntax sugar for things like addition or comparison: allowing more than two operands, optionally omitting the alternative branch of if, and so on. There would be something to be said for implementing these with macros that expand (+ a b c) to (+ a (+ b c)) etc.; this would make the actual runtime code cleaner, simpler and slightly faster (because the logic that checks for extra arguments would not have to run every time you add a pair of numbers).
However, the usual macro expansion algorithm is 'keep expanding the outermost form over and over until you get a non-macro result'. So that means e.g. + had better not be a macro that expands to +, even a reduced version, or you get an infinite loop.
Is there any existing Lisp that solves this problem at macro expansion time? If so, how does it do it?
Common Lisp provides compiler macros.
These can be used for such optimizations. A compiler macro can conditionally decline to provide an expansion by just returning the form itself.
This is an addendum to Rainer's answer: this answer really just gives some examples.
First of all, compiling things like arithmetic operations is a hairy business, because there's a particular incentive to turn as much as possible into operations the machine understands, and failing to do that can result in enormous slowdowns in numerically intensive code. So typically the compiler has a lot of knowledge of how to compile things, and it is also allowed a lot of freedom: for instance, in CL the compiler can turn (+ a 2 b 3) into (+ 5 a b): it is allowed to reorder and coalesce things (but not to change the evaluation order: it can turn (+ (f a) (g b)) into something like (let ((ta (f a)) (tb (g b))) (+ tb ta)), but not into (+ (g b) (f a))).
So arithmetic is usually pretty magic. But it's still interesting to look at how you can do this with macros and why you need compiler macros in CL.
(Note: all of the macros below are things I wrote without much thought: they may be semantically wrong.)
Macros: the wrong answer
So, addition, in CL. One obvious trick is to have a 'primitive-two-arg' function (which presumably the compiler can inline into assembly in good cases), and then to have the public interface be a macro which expands into that.
So, here is that
(defun plus/2 (a b)
  ;; just link to the underlying CL arithmetic
  (+ a b))
And you can then write the general function in terms of that in the obvious way:
(defun plus/many (a &rest bcd)
  (if (null bcd)
      a
      (reduce #'plus/2 bcd :initial-value a)))
And now you can write the public interface, plus as a macro on top of this:
(defmacro plus (a &rest bcd)
  (cond ((null bcd)
         a)
        ((null (rest bcd))
         `(plus/2 ,a ,(first bcd)))
        (t
         `(plus/2 (plus/2 ,a ,(first bcd))
                  (plus ,@(rest bcd))))))
And you can see that:
(plus a b) expands to (plus/2 a b);
(plus a b c) expands to (plus/2 (plus/2 a b) (plus c)) and thence to (plus/2 (plus/2 a b) c).
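You can confirm this at the REPL with macroexpand-1 (a sketch; the exact printing of the two return values varies by implementation):

(macroexpand-1 '(plus a b c))
;; => (PLUS/2 (PLUS/2 A B) (PLUS C)), T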
And we can do better than this:
(defmacro plus (a &rest bcd)
  (multiple-value-bind (numbers others)
      (loop for thing in (cons a bcd)
            if (numberp thing)
              collect thing into numbers
            else
              collect thing into things
            finally (return (values numbers things)))
    (cond ((null others)
           (reduce #'plus/2 numbers :initial-value 0))
          ((null (rest others))
           `(plus/2 ,(reduce #'plus/2 numbers :initial-value 0)
                    ,(first others)))
          (t
           `(plus/2 ,(reduce #'plus/2 numbers :initial-value 0)
                    ,(reduce (lambda (x y)
                               `(plus/2 ,x ,y))
                             others))))))
And now you can expand, for instance, (plus 1 x y 2.0 3 z 4 a) into (plus/2 10.0 (plus/2 (plus/2 (plus/2 x y) z) a)), which looks OK to me.
But this is hopeless. It's hopeless because what happens if I say (apply #'plus ...)? Doom: plus needs to be a function, it can't be a macro.
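To make the problem concrete, here is roughly what goes wrong (a sketch; the exact error message depends on the implementation):

;; PLUS is a macro here, so it has no function binding to pass around:
(apply #'plus '(1 2 3))
;; => error: PLUS names a macro, not a function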
Compiler macros: the right answer
And this is where compiler macros come in. Let's start again, but this time the function (never used above) plus/many will just be plus:
(defun plus/2 (a b)
  ;; just link to the underlying CL arithmetic
  (+ a b))

(defun plus (a &rest bcd)
  (if (null bcd)
      a
      (reduce #'plus/2 bcd :initial-value a)))
And now we can write a compiler macro for plus, which is a special macro which may be used by the compiler:
The presence of a compiler macro definition for a function or macro indicates that it is desirable for the compiler to use the expansion of the compiler macro instead of the original function form or macro form. However, no language processor (compiler, evaluator, or other code walker) is ever required to actually invoke compiler macro functions, or to make use of the resulting expansion if it does invoke a compiler macro function. – CLHS 3.2.2.1.3
(define-compiler-macro plus (a &rest bcd)
  (multiple-value-bind (numbers others)
      (loop for thing in (cons a bcd)
            if (numberp thing)
              collect thing into numbers
            else
              collect thing into things
            finally (return (values numbers things)))
    (cond ((null others)
           (reduce #'plus/2 numbers :initial-value 0))
          ((null (rest others))
           `(plus/2 ,(reduce #'plus/2 numbers :initial-value 0)
                    ,(first others)))
          (t
           `(plus/2 ,(reduce #'plus/2 numbers :initial-value 0)
                    ,(reduce (lambda (x y)
                               `(plus/2 ,x ,y))
                             others))))))
Note that the body of this compiler macro is identical to the second definition of plus as a macro above: it's identical because for this function there are no cases where the macro wants to decline the expansion.
You can check the expansion with compiler-macroexpand:
> (compiler-macroexpand '(plus 1 2 3 x 4 y 5.0 z))
(plus/2 15.0 (plus/2 (plus/2 x y) z))
t
The second value indicates that the compiler macro did not decline the expansion. And
> (apply #'plus '(1 2 3))
6
So that looks good.
Unlike ordinary macros, a compiler macro like this can decline to expand, and it does so by returning the whole macro form unchanged. For instance, here's a version of the above compiler macro which only deals with very simple cases:
(define-compiler-macro plus (&whole form a &rest bcd)
  (cond ((null bcd)
         a)
        ((null (rest bcd))
         `(plus/2 ,a ,(first bcd)))
        (t                              ;cop out
         form)))
And now
> (compiler-macroexpand '(plus 1 2 3 x 4 y 5.0 z))
(plus 1 2 3 x 4 y 5.0 z)
nil
but
> (compiler-macroexpand '(plus 1 2))
(plus/2 1 2)
t
OK.

let vs let* in LISP - is there a difference in efficiency?

This should be a quick one: I've often been asking myself whether there's a difference in efficiency between the Lisp special forms let and let*. For instance, are they equivalent when creating only one variable?
As Barmar pointed out, there shouldn't be any performance difference in "production ready" Lisps.
For CLISP, both of these produce the same (bytecode) assembly:
(defun foo (x) (let ((a x) (b (* x 2))) (+ a b)))
(defun bar (x) (let* ((a x) (b (* x 2))) (+ a b)))
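If you want to check this yourself, a minimal sketch (disassemble is standard CL, but the exact output depends on the implementation):

;; Compile both definitions and compare the generated code;
;; in CLISP the bytecode should come out the same for foo and bar.
(compile 'foo)
(compile 'bar)
(disassemble #'foo)
(disassemble #'bar)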
Though for non-optimizing, simple interpreters (or compilers) there could very well be a difference, e.g. because let* and let could be implemented as simple macros, and a single lambda with multiple parameters is probably more efficient than multiple lambdas with a single parameter each:
;; Possible macro expansion for foo's body
(funcall #'(lambda (a b) (+ a b)) x (* x 2))
;; Possible macro expansion for bar's body
(funcall #'(lambda (a) (funcall #'(lambda (b) (+ a b)) (* x 2))) x)
Having multiple lambdas, as well as the (avoidable) closing over a could make the second expansion less "efficient".
When used with only one binding, then there shouldn't be any difference even then, though.
But if you're using an implementation that isn't optimizing let* (or let), then there's probably no point discussing performance at all.
There shouldn't be any performance difference. The only difference between them is the scope of the variables, which is dealt with at compile time. If there's only one variable, there's absolutely no difference.
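To illustrate that the difference really is one of scope rather than performance, a small sketch:

;; With LET, the init forms see the outer x;
;; with LET*, each init form sees the bindings established before it.
(let ((x 1))
  (let ((x 2) (y x))
    (list x y)))    ; => (2 1)

(let ((x 1))
  (let* ((x 2) (y x))
    (list x y)))    ; => (2 2)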

a `let' binding is not available for subsequent `let' bindings?

I'm learning Emacs Lisp because I want to customize my editor, and to be clear, I'm a little bit stuck on how dynamic binding works.
Here is an example:
(setq y 2)

(let ((y 1)
      (z y))
  (list y z))
==> (1 2)
As a result, I get back (1 2).
Could someone please explain what is actually going on? I tried to explain it to myself using the concept of frames, where each frame creates a local binding, but it seems to work differently here.
Why doesn't it take the closest value of y from the nearest frame?
If you could describe in detail what is going on here, I would be very happy.
Thanks in advance. Nick.
In Emacs Lisp (as in many Lisps), the values that will be bound in let are computed in parallel, in the environment "outside" the let.
As an example, the following is (approximately) equivalent:
(let ((a b)
      (b a))
  ...)
=>
(funcall (lambda (a b) ...) b a)
If you want to bind things in sequence, you should use let*, which does what you expected let to do.
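Conceptually, let* behaves roughly like one nested let per binding, so each init form can see the bindings established before it; a rough sketch:

;; This let* form...
(let* ((y 1)
       (z y))
  (list y z))

;; ...behaves roughly like nested lets:
(let ((y 1))
  (let ((z y))
    (list y z)))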
Your example seems to be taken straight from the Emacs Lisp Reference. If you scroll down to let*, you'll get the explanation:
This special form is like let, but it binds each variable right after computing its local value, before computing the local value for the next variable. Therefore, an expression in bindings can refer to the preceding symbols bound in this let* form. Compare the following example with the example above for let:
(setq y 2)
     ⇒ 2

(let* ((y 1)
       (z y))  ; Use the just-established value of y.
  (list y z))
     ⇒ (1 1)
The problem is solved if you use let*, which lets you refer to the variables bound earlier in the same form:

(setq y 2)

(let* ((y 1)
       (z y))
  (list y z))
==> (1 1)
The value of y you are setting in the let is only in effect in the BODY of the let; it is not yet in effect when you set z in the same let statement.

Racket Lisp : comparison between new-if and if

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x)
                 x)))

(define (improve guess x)
  (average guess (/ x guess)))

(define (average x y)
  (/ (+ x y) 2))

(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.0001))

(define (square x)
  (* x x))

(define (sqrt-g x)
  (sqrt-iter 1.0 x))
This is a program for sqrt. The question is what happens when you attempt to replace if with new-if:
(define (sqrt-iter guess x)
  (new-if (good-enough? guess x)
          guess
          (sqrt-iter (improve guess x)
                     x)))
This is new-if:
(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))
My opinion was that the results of the two programs would be the same, because new-if and if can produce the same results.
However, the new-if version proved me wrong: it goes into an infinite loop when I try it.
So, why?
new-if is a function. All the arguments to a function are evaluated before calling the function. But sqrt-iter is a recursive function, and you need to avoid making the recursive call when the argument is already good enough.
The built-in if is syntax, and only evaluates the then-branch or else-branch, depending on the value of the condition.
You can use a macro to write new-if.
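For example, a minimal sketch of new-if as a macro: because the expansion is substituted in place, only the branch selected by the predicate is ever evaluated, just as with if.

;; A macro version of new-if; cond evaluates only the clause whose test is true.
(define-syntax-rule (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))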
This is the perfect example for demonstrating the algebraic stepper!
In the algebraic stepper you can see how the course of the computation differs from your expectation. Here you should pay attention to the differences in the evaluation of, say, (new-if 1 2 3) and (if 1 2 3).
If you haven't tried the algebraic stepper before, see this answer to see what it looks like.
Since Racket uses applicative-order evaluation, the third argument to new-if, (sqrt-iter (improve guess x) x), is evaluated before new-if is called. Since sqrt-iter is recursive, that argument never finishes evaluating, so you never get into the body of new-if at all.