How does a Lisp macro extend the syntax and semantics of the Lisp programming language? - lisp

A book [1] that I am reading says this:
One of the most interesting developments in programming languages has
been the creation of extensible languages—languages whose syntax and
semantics can be changed within a program. One of the earliest and
most commonly proposed schemes for language extension is the macro
definition.
Would you give an example (along with an explanation) of a Lisp macro that extends the syntax and semantics of the Lisp programming language, please?
[1] The Theory of Parsing, Translation, and Compiling, Volume 1 Parsing by Aho and Ullman, page 58.

Picture the scene: it is 1958, and FORTRAN has just been invented. Lit only by the afterglow of atomic tests, primitive Lisp programmers are writing loops in primitive Lisp the way primitive FORTRAN programmers always had:
(prog ((i 0))            ;i is 0
 start                   ;label beginning of loop
   (if (>= i 10)
       (go end))         ;skip to end when finished
   (do-hard-sums-on i)   ;hard sums!
   (setf i (+ i 1))      ;increment i
   (go start)            ;jump to start
 end)                    ;end
(Except, of course this would all be in CAPITALS because lower-case had not been invented then, and the thing I wrote as setf would be something uglier, because setf (a macro!) also had not been invented then).
Enter, in clouds of only-slightly toxic smoke from their jetpack, another Lisp programmer who had escaped to 1958 from the future. 'Look', they said, 'we could write this strange future thing':
(defmacro sloop ((var init limit &optional (step 1)) &body forms)
  (let ((<start> (make-symbol "START"))  ;avoid hygiene problems ...
        (<end> (make-symbol "END"))
        (<limit> (make-symbol "LIMIT"))  ;... and multiple evaluation problems
        (<step> (make-symbol "STEP")))
    `(prog ((,var ,init)
            (,<limit> ,limit)
            (,<step> ,step))
        ,<start>
        (if (>= ,var ,<limit>)
            (go ,<end>))
        ,@forms
        (setf ,var (+ ,var ,<step>))
        (go ,<start>)
        ,<end>)))
'And now', they say, 'you can write this':
(sloop (i 0 10)
(do-hard-sums i))
And thus were simple loops invented.
Back here in the future we can see what this loop expands into:
(sloop (i 0 10)
(format t "~&i = ~D~%" i))
->
(prog ((i 0) (#:limit 10) (#:step 1))
#:start (if (>= i #:limit) (go #:end))
(format t "~&i = ~D~%" i)
(setf i (+ i #:step))
(go #:start)
#:end)
Which is the code that the primitive Lisp programmers used to type in by hand. And we can run this:
> (sloop (i 0 10)
(format t "~&i = ~D~%" i))
i = 0
i = 1
i = 2
i = 3
i = 4
i = 5
i = 6
i = 7
i = 8
i = 9
nil
And in fact this is how loops work in Lisp today. If I try a simple do loop, one of Common Lisp's predefined macros, we can see what it expands into:
(do ((i 0 (+ i 1)))
((>= i 10))
(format t "~&i = ~D~%" i))
->
(block nil
  (let ((i 0))
    (declare (ignorable i))
    (declare)
    (tagbody
     #:g1481 (if (>= i 10) (go #:g1480))
             (tagbody (format t "~&i = ~D~%" i)
                      (setq i (+ i 1)))
             (go #:g1481)
     #:g1480)))
Well, this expansion is not the same and it uses constructs I haven't talked about, but you can see the important thing: this loop has been rewritten to use GO. And, although Common Lisp does not define the expansion of its looping macros, it is almost certainly the case that all the standard ones expand into something like this (but more complicated in general).
In other words: Lisp doesn't have any primitive looping constructs, but all such constructs are added to the language by macros. These macros, as well as other macros that extend the language in other ways, can be written by users: they do not have to be provided by the language itself.
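As a tiny further illustration of that point, here is a sketch of a while macro a user could define for themselves (while is not a standard Common Lisp operator; the name and details here are made up), expanding into the same low-level block/tagbody/go machinery as the loops above:
(defmacro while (test &body body)
  ;; Repeat BODY for as long as TEST is true.
  (let ((<start> (make-symbol "START")))
    `(block nil
       (tagbody
        ,<start>
        (unless ,test (return nil))
        ,@body
        (go ,<start>)))))
So, for example, (let ((i 0)) (while (< i 3) (print i) (incf i))) prints 0, 1 and 2.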
Lisp is a programmable programming language.

Well, perhaps the explanations will be terse, but you could look at the macros used in the lisp language itself, defun, for example.
http://clhs.lisp.se/Body/m_defun.htm
In lisp, macros are a big part of the language itself, basically allowing you to rewrite code before it's compiled.
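For example, you can ask any Common Lisp to show you what a macro form rewrites to (a sketch; the exact expansion of standard macros such as defun is implementation-specific):
(macroexpand-1 '(defun square (x) (* x x)))
;; returns the implementation's rewriting of the DEFUN form, built from
;; lower-level special operators and functions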

There's more than just the macros defined by defmacro. There are also Reader Macros! As Paul Graham said in On Lisp:
The three big moments in a Lisp expression’s life are read-time,
compile-time, and runtime. Functions are in control at runtime. Macros
give us a chance to perform transformations on programs at
compile-time. …read-macros… do their work at read-time.
Macros and read-macros see your program at different stages. Macros
get hold of the program when it has already been parsed into Lisp
objects by the reader, and read-macros operate on a program while it
is still text. However, by invoking read on this text, a read-macro
can, if it chooses, get parsed Lisp objects as well. Thus read-macros
are at least as powerful as ordinary macros.
With reader macros, you can define new semantics well beyond ordinary macros, for example:
adding support for string interpolation (cl-interpol)
adding support for JSON directly in the language: see this article to learn more.
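As a small, self-contained illustration (a sketch, unrelated to either library above): the following reader macro teaches the reader a bracket syntax, so that [1 2 (+ 1 2)] reads as (list 1 2 (+ 1 2)) and therefore evaluates to (1 2 3):
(defun bracket-reader (stream char)
  ;; Read forms up to the matching ] and wrap them in a call to LIST.
  (declare (ignore char))
  (cons 'list (read-delimited-list #\] stream t)))

(set-macro-character #\[ #'bracket-reader)
;; make ] a terminating macro character, like ), so it ends tokens and lists
(set-macro-character #\] (get-macro-character #\)))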

Related

lisp: concat arbitrary number of lists

In my ongoing quest to recreate lodash in lisp as a way of getting familiar with the language I am trying to write a concat-list function that takes an initial list and an arbitrary number of additional lists and concatenates them.
I'm sure that this is just a measure of getting familiar with lisp convention, but right now my loop is just returning the second list in the argument list, which makes sense since it is the first item of other-lists.
Here's my non-working code (edit: refactored):
(defun concat-list (input-list &rest other-lists)
;; takes an arbitrary number of lists and merges them
(loop
for list in other-lists
append list into input-list
return input-list
)
)
Trying to run (concat-list '(this is list one) '(this is list two) '(this is list three)) and have it return (this is list one this is list two this is list three).
How can I spruce this up to return the final, merged list?
The signature of your function is a bit unfortunate, it becomes easier if you don't treat the first list specially.
The easy way:
(defun concat-lists (&rest lists)
(apply #'concatenate 'list lists))
A bit more lower level, using loop:
(defun concat-lists (&rest lists)
(loop :for list :in lists
:append list))
Going lower, using dolist:
(defun concat-lists (&rest lists)
(let ((result ()))
(dolist (list lists (reverse result))
(setf result (revappend list result)))))
Going even lower would maybe entail implementing revappend yourself.
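For the curious, a sketch of what that might look like (my-revappend is just an illustrative name; the standard function is revappend):
(defun my-revappend (list tail)
  ;; Cons the elements of LIST onto TAIL in reverse order,
  ;; leaving LIST itself untouched.
  (dolist (item list tail)
    (setf tail (cons item tail))))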
It's actually good style in Lisp not to use LABELS-based iteration, since a) it's basically a goto-like, low-level iteration style and b) it relies on tail-call optimization, which is not supported everywhere: for example the ABCL implementation of Common Lisp on the JVM does not support TCO, last I looked. Lisp has wonderful iteration facilities, which make the iteration intention clear:
CL-USER 217 > (defun my-append (&rest lists &aux result)
(dolist (list lists (nreverse result))
(dolist (item list)
(push item result))))
MY-APPEND
CL-USER 218 > (my-append '(1 2 3) '(4 5 6) '(7 8 9))
(1 2 3 4 5 6 7 8 9)
Some pedagogical solutions to this problem
If you just want to do this, then use append, or nconc (destructive), which are the functions which do it.
If you want to learn how do to it, then learning about loop is not how to do that, assuming you want to learn Lisp: (loop for list in ... append list) really teaches you nothing but how to write a crappy version of append using arguably the least-lispy part of CL (note I have nothing against loop & use it a lot, but if you want to learn lisp, learning loop is not how to do that).
Instead why not think about how you would write this if you did not have the tools to do it, in a Lispy way.
Well, here's how you might do that:
(defun append-lists (list &rest more-lists)
  (labels ((append-loop (this more results)
             (if (null this)
                 (if (null more)
                     (nreverse results)
                     (append-loop (first more) (rest more) results))
                 (append-loop (rest this) more (cons (first this) results)))))
    (append-loop list more-lists '())))
There's a dirty trick here: I know that results is completely fresh so I am using nreverse to reverse it, which does so destructively. Can we write nreverse? Well, it's easy to write reverse, the non-destructive variant:
(defun reverse-nondestructively (list)
  (labels ((r-loop (tail reversed)
             (if (null tail)
                 reversed
                 (r-loop (rest tail) (cons (first tail) reversed)))))
    (r-loop list '())))
And it turns out that a destructive reversing function is only a little harder:
(defun reverse-destructively (list)
  (labels ((rd-loop (tail reversed)
             (if (null tail)
                 reversed
                 (let ((rtail (rest tail)))
                   (setf (rest tail) reversed)
                   (rd-loop rtail tail)))))
    (rd-loop list '())))
And you can check it works:
> (let ((l (make-list 1000 :initial-element 1)))
(time (reverse-destructively l))
(values))
Timing the evaluation of (reverse-destructively l)
User time = 0.000
System time = 0.000
Elapsed time = 0.000
Allocation = 0 bytes
0 Page faults
Why I think this is a good approach to learning Lisp
[This is a response to a couple of comments which I thought was worth adding to the answer: it is, of course, my opinion.]
I think that there are at least three different reasons for wanting to solve a particular problem in a particular language, and the approach you might want to take depends very much on what your reason is.
The first reason is because you want to get something done. In that case you want first of all to find out if it has been done already: if you want to do x and the language has a built-in mechanism for doing x then use that. If x is more complicated but there is some standard or optional library which does it then use that. If there's another language you could use easily which does x then use that. Writing a program to solve the problem should be something you do only as a last resort.
The second reason is because you've fallen out of the end of the first reason, and you now find yourself needing to write a program. In that case what you want to do is use all of the tools the language provides in the best way to solve the problem, bearing in mind things like maintainability, performance and so on. In the case of CL, then if you have some problem which naturally involves looping, then, well, use loop if you want to. It doesn't matter whether loop is 'not lispy' or 'impure' or 'hacky': just do what you need to do to get the job done and make the code maintainable. If you want to print some list of objects, then by all means write (format t "~&~{~A~^, ~}~%" things).
The third reason is because you want to learn the language. Well, assuming you can program in some other language there are two approaches to doing this.
the first is to say 'I know how to do this thing (write loops, say) in languages I know – how do I do it in Lisp?', and then iterate this for all the things you already know how to do in some other language;
the second is to say 'what is it that makes Lisp distinctive?' and try and understand those things.
These approaches result in very different approaches to learning. In particular I think the first approach is often terrible: if the language you know is, say, Fortran, then you'll end up writing Fortran dressed up as Lisp. And, well, there are perfectly adequate Fortran compilers out there: why not use them? Even worse, you might completely miss important aspects of the language and end up writing horrors like
(defun sum-list (l)
(loop for i below (length l)
summing (nth i l)))
And you will end up thinking that Lisp is slow and pointless and return to the ranks of the heathen where you will spread such vile calumnies until, come the great day, the golden Lisp horde sweeps it all away. This has happened.
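For the record, the idiomatic version of that example is a one-liner (a sketch):
(defun sum-list (l)
  ;; walk the list once instead of calling NTH repeatedly
  (reduce #'+ l))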
The second approach is to ask, well, what are the things that are interesting about Lisp? If you can program already, I think this is a much better approach than the first, because learning the interesting and distinctive features of a language first will help you understand, as quickly as possible, whether it's a language you might actually want to know.
Well, there will inevitably be argument about what the interesting & distinctive features of Lisp are, but here's a possible, partial, set.
The language has a recursively-defined data structure (S expressions or sexprs) at its heart, which is used among other things to represent the source code of the language itself. This representation of the source is extremely low-commitment: there's nothing in the syntax of the language which says 'here's a block' or 'this is a conditional' or 'this is a loop'. This low commitment can make the language hard to read, but it has huge advantages.
Recursive processes are therefore inherently important and the language is good at expressing them. Some variants of the language take this to the extreme by noticing that iteration is simply a special case of recursion and have no iterative constructs at all (CL does not do this).
There are symbols, which are used as names for things both in the language itself and in programs written in the language (some variants take this more seriously than others: CL takes it very seriously).
There are macros. This really follows from the source code of the language being represented as sexprs and this structure having a very low commitment to what it means. Macros, in particular, are source-to-source transformations, with the source being represented as sexprs, written in the language itself: the macro language of Lisp is Lisp, without restriction. Macros allow the language itself to be seamlessly extended: solving problems in Lisp is done by designing a language in which the problem can be easily expressed and solved.
The end result of this is, I think, two things:
recursion, in addition to and sometimes instead of iteration, is an unusually important technique in Lisp;
in Lisp, programming means building a programming language.
So, in the answer above I've tried to give you examples of how you might think about solving problems involving a recursive data structure recursively: by defining a local function (append-loop) which then recursively calls itself to process the lists. As Rainer pointed out, that's probably not a good way of solving this problem in Common Lisp, as it tends to be hard to read and it also relies on the implementation to turn tail calls into iteration, which is not guaranteed in CL. But, if your aim is to learn to think the way Lisp wants you to think, I think it is useful: there's a difference between code you might want to write for production use, and code you might want to read and write for pedagogical purposes: this is pedagogical code.
Indeed, it's worth looking at the other half of how Lisp might want you to think to solve problems like this: by extending the language. Let's say that you were programming in 1960, in a flavour of Lisp which has no iterative constructs other than GO TO. And let's say you wanted to process some list iteratively. Well, you might write this (this is in CL, so it is not very like programming in an ancient Lisp would be: in CL tagbody establishes a lexical environment in the body of which you can have tags – symbols – and then go will go to those tags):
(defun print-list-elements (l)
  ;; print the elements of a list, in order, using GO
  (let* ((tail l)
         (current (first tail)))
    (tagbody
     next
       (if (null tail)
           (go done)
           (progn
             (print current)
             (setf tail (rest tail)
                   current (first tail))
             (go next)))
     done)))
And now:
> (print-list-elements '(1 2 3))
1
2
3
nil
Let's program like it's 1956!
So, well, let's say you don't like writing this sort of horror. Instead you'd like to be able to write something like this:
(defun print-list-elements (l)
;; print the elements of a list, in order, using GO
(do-list (e l)
(print e)))
Now, if you were using most other languages, you would need to spend several weeks mucking around with the compiler to do this. But in Lisp you spend a few minutes writing this:
(defmacro do-list ((v l &optional (result-form nil)) &body forms)
  ;; Iterate over a list.  May be buggy.
  (let ((tailn (make-symbol "TAIL"))
        (nextn (make-symbol "NEXT"))
        (donen (make-symbol "DONE")))
    `(let* ((,tailn ,l)
            (,v (first ,tailn)))
       (tagbody
        ,nextn
        (if (null ,tailn)
            (go ,donen)
            (progn
              ,@forms
              (setf ,tailn (rest ,tailn)
                    ,v (first ,tailn))
              (go ,nextn)))
        ,donen)
       ,result-form)))
And now your language has an iteration construct which it previously did not have. (In real life this macro is called dolist).
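A quick check that it behaves as intended (a sketch, assuming the macro above has been defined):
(do-list (e '(1 2 3))
  (print e))
;; prints 1, 2 and 3, then returns NIL since no result form was given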
And you can go further: given our do-list macro, let's see how we can collect things into a list:
(defun collect (thing)
  ;; global version: just signal an error
  (declare (ignorable thing))
  (error "not collecting"))

(defmacro collecting (&body forms)
  ;; Within the body of this macro, (collect x) will collect x into a
  ;; list, which is returned from the macro.
  (let ((resultn (make-symbol "RESULT"))
        (rtailn (make-symbol "RTAIL")))
    `(let ((,resultn '())
           (,rtailn nil))
       (flet ((collect (thing)
                (if ,rtailn
                    (setf (rest ,rtailn) (list thing)
                          ,rtailn (rest ,rtailn))
                    (setf ,resultn (list thing)
                          ,rtailn ,resultn))
                thing))
         ,@forms)
       ,resultn)))
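Used directly it looks like this (a sketch):
(collecting
  (collect 1)
  (dolist (x '(2 3))
    (collect x)))
;; => (1 2 3)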
And now we can write the original append-lists function entirely in terms of constructs we've invented:
(defun append-lists (list &rest more-lists)
(collecting
(do-list (e list) (collect e))
(do-list (l more-lists)
(do-list (e l)
(collect e)))))
If that's not cool then nothing is.
In fact we can get even more carried away. My original answer above used labels to do iteration. As Rainer has pointed out, this is not safe in CL since CL does not mandate TCO. I don't particularly care about that (I am happy to use only CL implementations which mandate TCO), but I do care about the problem that using labels this way is hard to read. Well, you can, of course, hide this in a macro:
(defmacro looping ((&rest bindings) &body forms)
  ;; A sort-of special-purpose named-let.
  (multiple-value-bind (vars inits)
      (loop for b in bindings
            for var = (typecase b
                        (symbol b)
                        (cons (car b))
                        (t (error "~A is hopeless" b)))
            for init = (etypecase b
                         (symbol nil)
                         (cons (unless (null (cddr b))
                                 (error "malformed binding ~A" b))
                               (second b))
                         (t
                          (error "~A is hopeless" b)))
            collect var into vars
            collect init into inits
            finally (return (values vars inits)))
    `(labels ((next ,vars
                ,@forms))
       (next ,@inits))))
And now:
(defun append-lists (list &rest more-lists)
(collecting
(looping ((tail list) (more more-lists))
(if (null tail)
(unless (null more)
(next (first more) (rest more)))
(progn
(collect (first tail))
(next (rest tail) more))))))
And, well, I just think it is astonishing that I get to use a programming language where you can do things like this.
Note that both collecting and looping are intentionally 'unhygienic': they introduce a binding (for collect and next respectively) which is visible to code in their bodies and which would shadow any other function definition of that name. That's fine, in fact, since that's their purpose.
This kind of iteration-as-recursion is certainly cool to think about, and as I've said I think it really helps you to think about how the language can work, which is my purpose here. Whether it leads to better code is a completely different question. Indeed there is a famous quote by Guy Steele from one of the 'lambda the ultimate ...' papers:
procedure calls may be usefully thought of as GOTO statements which also pass parameters
And that's a lovely quote, except that it cuts both ways: procedure calls, in a language which optimizes tail calls, are pretty much GOTO, and you can do almost all the horrors with them that you can do with GOTO. But GOTO is a problem, right? Well, it turns out so are procedure calls, for most of the same reasons.
So, pragmatically, even in a language (or implementation) where procedure calls do have all these nice characteristics, you end up wanting constructs which can express iteration and not recursion rather than both. So, for instance, Racket which, being a Scheme-family language, does mandate tail-call elimination, has a whole bunch of macros with names like for which do iteration.
And in Common Lisp, which does not mandate tail-call elimination but which does have GOTO, you also need to build macros to do iteration, in the spirit of my do-list above. And, of course, a bunch of people then get hopelessly carried away and the end point is a macro called loop: loop didn't exist (in its current form) in the first version of CL, and it was common at that time to simply obtain a copy of it from somewhere, and make sure it got loaded into the image. In other words, loop, with all its vast complexity, is just a macro which you can define in a CL which does not have it already.
OK, sorry, this is too long.
(loop for list in (cons '(1 2 3)
'((4 5 6) (7 8 9)))
append list)

For-loop macro in Racket

This macro to implement a C-like for-loop in Lisp is mentioned on this page: https://softwareengineering.stackexchange.com/questions/124930/how-useful-are-lisp-macros
(defmacro for-loop [[sym init check change :as params] & steps]
  `(loop [~sym ~init value# nil]
     (if ~check
       (let [new-value# (do ~@steps)]
         (recur ~change new-value#))
       value#)))
So than one can use following in code:
(for-loop [i 0 , (< i 10) , (inc i)]
(println i))
How can I convert this macro to be used in Racket language?
I am trying following code:
(define-syntax (for-loop) (syntax-rules (parameterize ((sym) (init) (check) (change)) & steps)
`(loop [~sym ~init value# nil]
(if ~check
(let [new-value# (do ~@steps)]
(recur ~change new-value#))
value#))))
But it gives a "bad syntax" error.
The snippet of code you have included in your question is written in Clojure, which is one of the many dialects of Lisp. Racket, on the other hand, is descended from Scheme, which is quite a different language from Clojure! Both have macros, yes, but the syntax is going to be a bit different between the two languages.
The Racket macro system is quite powerful, but syntax-rules is actually a slightly simpler way to define macros. Fortunately, for this macro, syntax-rules will suffice. A more or less direct translation of the Clojure macro to Racket would look like this:
(define-syntax-rule (for-loop [sym init check change] steps ...)
(let loop ([sym init]
[value #f])
(if check
(let ([new-value (let () steps ...)])
(loop change new-value))
value)))
It could subsequently be invoked like this:
(for-loop [i 0 (< i 10) (add1 i)]
(println i))
There are a number of changes from the Clojure code:
The Clojure example uses ` and ~ (pronounced “quasiquote” and “unquote” respectively) to “interpolate” values into the template. The syntax-rules form performs this substitution automatically, so there is no need to explicitly perform quotation.
The Clojure example uses names that end in a hash (value# and new-value#) to prevent name conflicts, but Racket’s macro system is hygienic, so that sort of escaping is entirely unnecessary—identifiers bound within macros automatically live in their own scope by default.
The Clojure code uses loop and recur, but Racket supports tail recursion, so the translation just uses “named let”, which is really just some extremely simple sugar for an immediately invoked lambda that calls itself.
There are a few other minor syntactic differences, such as using let instead of do, using ellipses instead of & steps to mark multiple occurrences, the syntax of let, and the use of #f instead of nil to represent the absence of a value.
Finally, commas are not used in the actual use of the for-loop macro because , means something different in Racket. In Clojure, it is treated as whitespace, so it’s totally optional there, too, but in Racket, it would be a syntax error.
A full macro tutorial is well outside the scope of a single Stack Overflow post, though, so if you’re interested in learning more, take a look at the Macros section of the Racket guide.
It’s also worth noting that an ordinary programmer would not need to implement this sort of macro themselves, given that Racket already provides a set of very robust for loops and comprehensions built into the language. In truth, though, they are just defined as macros themselves—there is no special magic just because they are builtins.
Racket’s for loops do not look like traditional C-style for loops, however, because C-style for loops are extremely imperative. On the other hand, Scheme, and therefore Racket, tends to favor a functional style, which avoids mutation and often looks more declarative. Therefore, Racket’s loops attempt to describe higher-level iteration patterns, such as looping through a range of numbers or iterating through a list, rather than low-level semantics like describing how a value should be updated. Of course, if you really want something like that, Racket provides the do loop, which is almost identical to the for-loop macro defined above, albeit with some minor differences.
I want to expand on Alexis's excellent answer a bit. Here's an example usage that demonstrates what she means by do being almost identical to your for-loop:
(do ([i 0 (add1 i)])
((>= i 10) i)
(println i))
This do expression actually expands to the following code:
(let loop ([i 0])
(if (>= i 10)
i
(let ()
(println i)
(loop (add1 i)))))
The above version uses a named let, which is considered the conventional way to write loops in Scheme.
Racket also provides for comprehensions, also mentioned in Alexis's answer, which are also considered conventional, and here's what it would look like:
(for ([i (in-range 10)])
(println i))
(except that this doesn't actually return the final value of i).
I want to rewrite Alexis's excellent answer and Chris Jester-Young's excellent answer for people not familiar with let.
#lang racket
(define-syntax-rule (for-loop [var init check change] expr ...)
(local [(define (loop var value)
(if check
(loop change (begin expr ...))
value))]
(loop init #f)))
(for-loop [i 0 (< i 10) (add1 i)]
(println i))

Why does lisp use gensym and other languages don't?

Correct me if I'm wrong, but there is nothing like gensym in Java, C, C++, Python, Javascript, or any of the other languages I've used, and I've never seemed to need it. Why is it necessary in Lisp and not in other languages? For clarification, I'm learning Common Lisp.
Common Lisp has a powerful macro system. You can make new syntactic patterns that behave exactly the way you want them to behave. Macros are themselves written in Lisp, so everything in the language is available for transforming the code from what you want to write into something that CL actually understands. All languages with powerful macro systems either provide gensym or do it implicitly in their macro implementation.
In Common Lisp you use gensym when you want to generate code in which a symbol must not clash with symbols used anywhere else in the result. Without it there is no guarantee that a user won't happen to use the same symbol that the macro implementer used, in which case the two interfere and the result is something other than the intended behavior. It also makes sure that nested expansions of the same macro don't interfere with previous expansions. With the Common Lisp macro system it's possible to build more restrictive macro systems similar to Scheme's syntax-rules and syntax-case.
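For example (a sketch; swap-badly and swap-safely are made-up names, and Common Lisp already provides rotatef for this job): without a fresh symbol, a variable introduced by a macro can capture one of the user's variables:
(defmacro swap-badly (a b)
  ;; TMP here can capture a user variable that happens to be named TMP
  `(let ((tmp ,a))
     (setf ,a ,b
           ,b tmp)))

(defmacro swap-safely (a b)
  ;; GENSYM guarantees a symbol that cannot clash with user code
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (setf ,a ,b
             ,b ,tmp))))

;; (let ((tmp 1) (x 2)) (swap-badly tmp x)  (list tmp x))  => (1 2), the swap silently fails
;; (let ((tmp 1) (x 2)) (swap-safely tmp x) (list tmp x))  => (2 1), as intended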
In Scheme there are several macro systems. One with pattern matching where new introduced symbols act automatically as if they are made with gensym. syntax-case will also by default make new symbols as if they were made with gensym and there is also a way to reduce hygiene. You can make CL defmacro with syntax-case but since Scheme doesn't have gensym you wouldn't be able to make hygienic macros with it.
Java, C, C++, Python, and Javascript are all Algol dialects, and none of them has anything beyond simple template-based macros. Thus they don't have gensym because they don't need it: the only way to introduce new syntax in these languages is to hope the next version of the language will provide it.
There are two Algol dialects with powerful macros that come to mind: Nemerle and Perl 6. Both of them take a hygienic approach, meaning variables introduced by a macro behave as if they were made with gensym.
In CL, Scheme, Nemerle, and Perl 6 you don't need to wait for language features: you can make them yourself! The new features announced for Java and PHP could easily be implemented with macros in any of these languages, were they not already available.
Can't say which languages have an equivalent of GENSYM. Many languages don't have a first-class symbol data type (with interned and uninterned symbols) and many are not providing similar code generation (macros, ...) facilities.
An interned symbol is registered in a package. An uninterned symbol is not. If the reader (the reader is the Lisp subsystem which takes textual s-expressions as input and returns data) sees two interned symbols in the same package and with the same name, it treats them as the same symbol:
CL-USER 35 > (eq 'cl:list 'cl:list)
T
If the reader sees an uninterned symbol, it creates a new one:
CL-USER 36 > (eq '#:list '#:list)
NIL
Uninterned symbols are written with #: in front of the name.
GENSYM is used in Lisp to create numbered uninterned symbols, because it is sometimes useful in code generation and then debugging this code. Note that the symbols are always new and not eq to anything else. But the symbol name could be the same as the name of another symbol. The number gives a clue to the human reader about the identity.
An example using MAKE-SYMBOL
make-symbol creates a new uninterned symbol using a string argument as its name.
Let's see this function generating some code:
CL-USER 31 > (defun make-tagbody (exp test)
(let ((start-symbol (make-symbol "start"))
(exit-symbol (make-symbol "exit")))
`(tagbody ,start-symbol
,exp
(if ,test
(go ,start-symbol)
(go ,exit-symbol))
,exit-symbol)))
MAKE-TAGBODY
CL-USER 32 > (pprint (make-tagbody '(incf i) '(< i 10)))
(TAGBODY
#:|start| (INCF I)
(IF (< I 10) (GO #:|start|) (GO #:|exit|))
#:|exit|)
The generated code above uses uninterned symbols. Both #:|start| symbols are actually the same symbol. We would see this if we set *print-circle* to T, since the printer would then clearly label identical objects. But here we don't get that added information. Now if you nest this code, you would see more than one start and one exit symbol, each of which is used in two places.
An example using GENSYM
Now let's use gensym. Gensym also creates an uninterned symbol. Optionally this symbol is named by a string. A number (see the variable CL:*GENSYM-COUNTER*) is added.
CL-USER 33 > (defun make-tagbody (exp test)
(let ((start-symbol (gensym "start"))
(exit-symbol (gensym "exit")))
`(tagbody ,start-symbol
,exp
(if ,test
(go ,start-symbol)
(go ,exit-symbol))
,exit-symbol)))
MAKE-TAGBODY
CL-USER 34 > (pprint (make-tagbody '(incf i) '(< i 10)))
(TAGBODY
#:|start213051| (INCF I)
(IF (< I 10) (GO #:|start213051|) (GO #:|exit213052|))
#:|exit213052|)
Now the number is an indicator that the two uninterned #:|start213051| symbols are actually the same. When the code would be nested, the new version of the start symbol would have a different number:
CL-USER 7 > (pprint (make-tagbody `(progn
(incf i)
(setf j 0)
,(make-tagbody '(incf ij) '(< j 10)))
'(< i 10)))
(TAGBODY
#:|start2756| (PROGN
(INCF I)
(SETF J 0)
(TAGBODY
#:|start2754| (INCF IJ)
(IF (< J 10)
(GO #:|start2754|)
(GO #:|exit2755|))
#:|exit2755|))
(IF (< I 10) (GO #:|start2756|) (GO #:|exit2757|))
#:|exit2757|)
Thus it helps understanding generated code, without the need to turn *print-circle* on, which would label the identical objects:
CL-USER 8 > (let ((*print-circle* t))
(pprint (make-tagbody `(progn
(incf i)
(setf j 0)
,(make-tagbody '(incf ij) '(< j 10)))
'(< i 10))))
(TAGBODY
#3=#:|start1303| (PROGN
(INCF I)
(SETF J 0)
(TAGBODY
#1=#:|start1301| (INCF IJ)
(IF (< J 10) (GO #1#) (GO #2=#:|exit1302|))
#2#))
(IF (< I 10) (GO #3#) (GO #4=#:|exit1304|))
#4#)
The above is readable for the Lisp reader (the subsystem which reads s-expressions from textual representations), but a bit less so for the human reader.
I believe that symbols (in the Lisp sense) are mostly useful in homoiconic languages (those in which the syntax of the language is representable as a data of that language).
Java, C, C++, Python, Javascript are not homoiconic.
Once you have symbols, you want some way to dynamically create them. gensym is a possibility, but you can also intern them.
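The difference in one line each (a sketch; the counter in the gensym name varies):
(intern "FOO")    ; => FOO         the same interned symbol on every call
(gensym "FOO")    ; => #:FOO1234   a fresh uninterned symbol every time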
BTW, MELT is a lisp-like dialect, it does not create symbols with gensym or by interning strings but with clone_symbol. (actually MELT symbols are instances of predefined CLASS_SYMBOL, ...).
gensym is available as a predicate in most Prolog interpreters. You can find it in the eponymous library.

Translating this to Common Lisp

I've been reading an article by Olin Shivers titled Stylish Lisp programming techniques and found the second example there (labeled "Technique n-1") a bit puzzling. It describes a self-modifying macro that looks like this:
(defun gen-counter macro (x)
(let ((ans (cadr x)))
(rplaca (cdr x)
(+ 1 ans))
ans))
It's supposed to get its calling form as argument x (i.e. (gen-counter <some-number>)). The purpose of this is to be able to do something like this:
> ;this prints out the numbers from 0 to 9.
(do ((n 0 (gen-counter 1)))
((= n 10) t)
(princ n))
0.1.2.3.4.5.6.7.8.9.T
>
The problem is that this syntax with the macro symbol after the function name is not valid in Common Lisp. I've been unsuccessfully trying to obtain similar behavior in Common Lisp. Can someone please provide a working example of analogous macro in CL?
Why the code works
First, it's useful to consider the first example in the paper:
> (defun element-generator ()
(let ((state '(() . (list of elements to be generated)))) ;() sentinel.
(let ((ans (cadr state))) ;pick off the first element
(rplacd state (cddr state)) ;smash the cons
ans)))
ELEMENT-GENERATOR
> (element-generator)
LIST
> (element-generator)
OF
> (element-generator)
This works because there's one literal list
(() . (list of elements to be generated))
and it's being modified. Note that this is actually undefined behavior in Common Lisp, but you'll get the same behavior in some Common Lisp implementations. See Unexpected persistence of data and some of the other linked questions for a discussion of what's happening here.
Approximating it in Common Lisp
Now, the paper and code you're citing actually has some useful comments about what this code is doing:
(defun gen-counter macro (x) ;X is the entire form (GEN-COUNTER n)
(let ((ans (cadr x))) ;pick the ans out of (gen-counter ans)
(rplaca (cdr x) ;increment the (gen-counter ans) form
(+ 1 ans))
ans)) ;return the answer
The way that this is working is not quite like an &rest argument, as in Rainer Joswig's answer, but actually a &whole argument, where the entire form can be bound to a variable. This is using the source of the program as the literal value that gets destructively modified! Now, in the paper, this is used in this example:
> ;this prints out the numbers from 0 to 9.
(do ((n 0 (gen-counter 1)))
((= n 10) t)
(princ n))
0.1.2.3.4.5.6.7.8.9.T
However, in Common Lisp, we'd expect the macro to be expanded just once. That is, we expect (gen-counter 1) to be replaced by some piece of code. We can still generate a piece of code like this, though:
(defmacro make-counter (&whole form initial-value)
(declare (ignore initial-value))
(let ((text (gensym (string 'text-))))
`(let ((,text ',form))
(incf (second ,text)))))
CL-USER> (macroexpand '(make-counter 3))
(LET ((#:TEXT-1002 '(MAKE-COUNTER 3)))
(INCF (SECOND #:TEXT-1002)))
Then we can recreate the example with do
CL-USER> (do ((n 0 (make-counter 1)))
((= n 10) t)
(princ n))
023456789
Of course, this is undefined behavior, since it's modifying literal data. It won't work in all Lisps (the run above is from CCL; it didn't work in SBCL).
But don't miss the point
The whole article is sort of interesting, but recognize that it's sort of a joke, too. It's pointing out that you can do some funny things in an evaluator that doesn't compile code. It's mostly satire that's pointing out the inconsistencies of Lisp systems that have different behaviors under evaluation and compilation. Note the last paragraph:
Some short-sighted individuals will point out that these programming
techniques, while certainly laudable for their increased clarity and
efficiency, would fail on compiled code. Sadly, this is true. At least
two of the above techniques will send most compilers into an infinite
loop. But it is already known that most lisp compilers do not
implement full lisp semantics -- dynamic scoping, for instance. This
is but another case of the compiler failing to preserve semantic
correctness. It remains the task of the compiler implementor to
adjust his system to correctly implement the source language, rather
than the user to resort to ugly, dangerous, non-portable, non-robust
``hacks'' in order to program around a buggy compiler.
I hope this provides some insight into the nature of clean, elegant
Lisp programming techniques.
—Olin Shivers
Common Lisp:
(defmacro gen-counter (&rest x)
(let ((ans (car x)))
(rplaca x (+ 1 ans))
ans))
But the above only works in the interpreter, not with a compiler.
With compiled code, the macro call is gone - it is expanded away - and there is nothing to modify.
Note to unsuspecting readers: you might want to read the paper by Olin Shivers very careful and try to find out what he actually means...

Can I use Common Lisp for SICP or is Scheme the only option?

Also, even if I can use Common Lisp, should I? Is Scheme better?
You have several answers here, but none is really comprehensive (and I'm not talking about having enough details or being long enough). First of all, the bottom line: you should not use Common Lisp if you want to have a good experience with SICP.
If you don't know much Common Lisp, then just take it as that. (Obviously you can disregard this advice as anything else, some people only learn the hard way.)
If you already know Common Lisp, then you might pull it off, but at considerable effort, and with considerable damage to your overall learning experience. There are some fundamental issues that separate Common Lisp and Scheme, which make trying to use the former with SICP a pretty bad idea. In fact, if you have the knowledge level to make it work, then you're likely above the level of SICP anyway. I'm not saying that it's not possible -- it is of course possible to implement the whole book in Common Lisp (for example, see Bendersky's pages) just as you can do so in C or Perl or whatever. It's just going to be harder with languages that are further apart from Scheme. (For example, ML is likely to be easier to use than Common Lisp, even when its syntax is very different.)
Here are some of these major issues, in increasing order of importance. (I'm not saying that this list is exhaustive in any way, I'm sure that there are a whole bunch of additional issues that I'm omitting here.)
NIL and related issues, and different names.
Dynamic scope.
Tail call optimization.
Separate namespace for functions and values.
I'll expand now on each of these points:
The first point is the most technical. In Common Lisp, NIL is used both as the empty list and as the false value. In itself, this is not a big issue, and in fact the first edition of SICP had a similar assumption -- where the empty list and false were the same value. However, Common Lisp's NIL is still different: it is also a symbol. So, in Scheme you have a clear separation: something is either a list, or one of the primitive types of values -- but in Common Lisp, NIL is not only false and the empty list: it is also a symbol. In addition to this, you get a host of slightly different behavior -- for example, in Common Lisp the head and the tail (the car and cdr) of the empty list is itself the empty list, while in Scheme you'll get a runtime error if you try that. To top it off, you have different names and naming convention, for example -- predicates in Common Lisp end by convention with P (eg, listp) while predicates in Scheme end in a question mark (eg, list?); mutators in Common Lisp have no specific convention (some have an N prefix), while in Scheme they almost always have a suffix of !. Also, plain assignment in Common Lisp is usually setf and it can operate on combinations too (eg, (setf (car foo) 1)), while in Scheme it is set! and limited to setting bound variables only. (Note that Common Lisp has the limited version too, it's called setq. Almost nobody uses it though.)
The second point is a much deeper one, and possibly one that will lead to completely incomprehensible behavior of your code. The thing is that in Common Lisp, function arguments are lexically scoped, but variables that are declared with defvar are dynamically scoped. There is a whole range of solutions that rely on lexically scoped bindings -- and in Common Lisp they just won't work. Of course, the fact that Common Lisp has lexical scope means that you can get around this by being very careful about new bindings, and possibly using macros to get around the default dynamic scope -- but again, this requires a much more extensive knowledge than a typical newbie has. Things get even worse than that: if you declare a specific name with a defvar, then that name will be bound dynamically even if they're arguments to functions. This can lead to some extremely difficult to track bugs which manifest themselves in an extremely confusing way (you basically get the wrong value, and you'll have no clue why that happens). Experienced Common Lispers know about it (especially those that have been burnt by it), and will always follow the convention of using stars around dynamically scoped names (eg, *foo*). (And by the way, in Common Lisp jargon, these dynamically scoped variables are called just "special variables" -- which is another source of confusion for newbies.)
The third point was also discussed in some of the previous comments. In fact, Rainer had a pretty good summary of the different options that you have, but he didn't explain just how hard it can make things. The thing is that proper tail-call-optimization (TCO) is one of the fundamental concepts in Scheme. It is important enough that it is a language feature rather than merely an optimization. A typical loop in Scheme is expressed as a tail-calling function (for example, (define (loop) (loop))) and proper Scheme implementations are required to implement TCO which will guarantee that this is, in fact, an infinite loop rather than running for a short while until you blow up the stack space. This is all the essence of Rainer's first non solution, and the reason he labeled it as "BAD".
His third option -- rewriting functional loops (expressed as recursive functions) as Common Lisp loops (dotimes, dolist, and the infamous loop) can work for a few simple cases, but at a very high cost: the fact that Scheme is a language that does proper TCO is not only fundamental to the language -- it is also one of the major themes in the book, so by doing so, you will have lost that point completely. In addition, there are some cases that you just cannot translate Scheme code into a Common Lisp loop construct -- for example, as you work your way through the book, you'll get to implement a meta-circular-interpreter which is an implementation of a mini-Scheme language. It takes a certain click to realize that this meta evaluator implements a language that is itself doing TCO if the language that you implement this evaluator in is itself doing TCO. (Note that I'm talking about the "simple" interpreters -- later in the book you implement this evaluator as something close to a register machine, where you kind of explicitly make it do TCO.) The bottom line to all of this, is that this evaluator -- when implemented in Common Lisp -- will result in a language that is itself not doing TCO. People who are familiar with all of this should not be surprised: after all, the "circularity" of the evaluator means that you're implementing a language with semantics that are very close to the host language -- so in this case you "inherit" the Common Lisp semantics rather than the Scheme TCO semantics. However, this means that your mini-evaluator is now crippled: it has no TCO, so it has no way of doing loops! To get loops in, you will need to implement new constructs in your interpreter, which will usually use the iteration constructs in Common Lisp. But now you're going further away from what's in the book, and you're investing considerable effort in approximately implementing the ideas in SICP to the different language. Note also that all of this is related to the previous point I raised: if you follow the book, then the language that you implement will be lexically scoped, taking it further away from the Common Lisp host language. So overall, you completely lose the "circular" property in what the book calls "meta circular evaluator". (Again, this is something that might not bother you, but it will damage the overall learning experience.) All in all, very few languages get close to Scheme in being able to implement the semantics of the language inside the language as a non-trivial (eg, not using eval) evaluator that easily.
In fact, if you do go with a Common Lisp, then in my opinion, Rainer's second suggestion -- use a Common Lisp implementation that supports TCO -- is the best way to go. However, in Common Lisp this is fundamentally a compiler optimization: so you will likely need to (a) know about the knobs in the implementation that you need to turn to make TCO happen, (b) you will need to make sure that the Common Lisp implementation is actually doing proper TCO, and not just optimization of self calls (which is the much simpler case that is not nearly as important), (c) you would hope that the Common Lisp implementation that does TCO can do so without damaging debugging options (again, since this is considered an optimization in Common Lisp, then turning this knob on, might also be taken by the compiler as saying "I don't care much for debuggability").
Finally, my last point is not too hard to overcome, but it is conceptually the most important one. In Scheme, you have a uniform rule: identifiers have a value, which is determined lexically -- and that's it. It's a very simple language. In Common Lisp, in addition to the historical baggage of sometimes using dynamic scope and sometimes using lexical scope, you have symbols that have two different values -- there's the function value that is used whenever a variable appears at the head of an expression, and there is a different value that is used otherwise. For example, in (foo foo), each of the two instances of foo is interpreted differently -- the first is the function value of foo and the second is its variable value. Again, this is not hard to overcome -- there are a number of constructs that you need to know about to deal with all of this. For example, instead of writing (lambda (x) (x x)) you need to write (lambda (x) (funcall x x)), which makes the function that is being called appear in a variable position, therefore the same value will be used there; another example is (map car something) which you will need to translate to (map #'car something) (or more accurately, you will need to use mapcar which is Common Lisp's equivalent of the map function); yet another thing that you'll need to know is that let binds the value slot of the name, and labels binds the function slot (and has a very different syntax, just like defun and defvar).
But the conceptual result of all of this is that Common Lispers tend to use higher-order code much less than Schemers, and that goes all the way from the idioms that are common in each language, to what implementations will do with it. (For example, many Common Lisp compilers will never optimize this call: (funcall foo bar), while Scheme compilers will optimize (foo bar) like any function call expression, because there is no other way to call functions.)
Finally, I'll note that much of the above is very good flamewar material: throw any of these issues into a public Lisp or Scheme forum (in particular comp.lang.lisp and comp.lang.scheme), and you'll most likely see a long thread where people explain why their choice is far better than the other, or why some "so called feature" is actually an idiotic decision that was made by language designers that were clearly very drunk at the time, etc etc. But the thing is that these are just differences between the two languages, and eventually people can get their job done in either one. It just happens that if the job is "doing SICP" then Scheme will be much easier considering how it hits each of these issues from the Scheme perspective. If you want to learn Common Lisp, then going with a Common Lisp textbook will leave you much less frustrated.
Using SICP with Common Lisp is possible and fun
You can use Common Lisp for learning with SICP without many problems. The Scheme subset that is used in the book is not very sophisticated. SICP does not use macros and it uses no continuations. There are DELAY and FORCE, which can be written in Common Lisp in a few lines.
Also for a beginner using (function foo) and (funcall foo 1 2 3) is actually better (IMHO !), because the code gets clearer when learning the functional programming parts. You can see where variables and lambda functions are being called/passed.
Tail call optimization in Common Lisp
There is only one big area where using Common Lisp has a drawback: tail call optimization (TCO). Common Lisp does not support TCO in its standard (because of unclear interaction with the rest of the language, not all computer architectures support it directly (think JVM), not all compilers support it (some Lisp Machine), it makes some debugging/tracing/stepping harder, ...).
There are three ways to live with that:
Hope that the stack does not blow out. BAD.
Use a Common Lisp implementation that supports TCO. There are some. See below.
Rewrite the functional loops (and similar constructs) into loops (and similar constructs) using DOTIMES, DO, LOOP, ...
Personally I would recommend 2 or 3.
Common Lisp has excellent and easy to use compilers with TCO support (SBCL, LispWorks, Allegro CL, Clozure CL, ...) and as a development environment use either the built-in ones or GNU Emacs/SLIME.
For use with SICP I would recommend SBCL, since it compiles always by default, has TCO support by default and the compiler catches a lot of coding problems (undeclared variables, wrong argument lists, a bunch of type errors, ...). This helps a lot during learning. Generally make sure the code is compiled, since Common Lisp interpreters will usually not support TCO.
Sometimes it might also be helpful to write one or two macros and provide some Scheme function names to make the code look a bit more like Scheme. For example you could have a DEFINE macro in Common Lisp.
For the more advanced users, there is an old Scheme implementation written in Common Lisp (called Pseudo Scheme), that should run most of the code in SICP.
My recommendation: if you want to go the extra mile and use Common Lisp, do it.
To make it easier to understand the necessary changes, I've added a few examples - remember, it needs a Common Lisp compiler with support for tail call optimization:
Example
Let's look at this simple code from SICP:
(define (factorial n)
(fact-iter 1 1 n))
(define (fact-iter product counter max-count)
(if (> counter max-count)
product
(fact-iter (* counter product)
(+ counter 1)
max-count)))
We can use it directly in Common Lisp with a DEFINE macro:
(defmacro define ((name &rest args) &body body)
`(defun ,name ,args ,@body))
Now you should use SBCL, CCL, Allegro CL or LispWorks. These compilers support TCO by default.
Let's use SBCL:
* (define (factorial n)
(fact-iter 1 1 n))
; in: DEFINE (FACTORIAL N)
; (FACT-ITER 1 1 N)
;
; caught STYLE-WARNING:
; undefined function: FACT-ITER
;
; compilation unit finished
; Undefined function:
; FACT-ITER
; caught 1 STYLE-WARNING condition
FACTORIAL
* (define (fact-iter product counter max-count)
(if (> counter max-count)
product
(fact-iter (* counter product)
(+ counter 1)
max-count)))
FACT-ITER
* (factorial 1000)
40238726007709....
Another Example: symbolic differentiation
SICP has a Scheme example for differentiation:
(define (deriv exp var)
(cond ((number? exp) 0)
((variable? exp)
(if (same-variable? exp var) 1 0))
((sum? exp)
(make-sum (deriv (addend exp) var)
(deriv (augend exp) var)))
((product? exp)
(make-sum
(make-product (multiplier exp)
(deriv (multiplicand exp) var))
(make-product (deriv (multiplier exp) var)
(multiplicand exp))))
(else
(error "unknown expression type -- DERIV" exp))))
Making this code run in Common Lisp is easy:
some functions have different names, number? is numberp in CL
CL:COND uses T instead of else
CL:ERROR uses CL format strings
Let's define Scheme names for some functions. Common Lisp code:
(loop for (scheme-symbol fn) in
'((number? numberp)
(symbol? symbolp)
(pair? consp)
(eq? eq)
(display-line print))
do (setf (symbol-function scheme-symbol)
(symbol-function fn)))
Our define macro from above:
(defmacro define ((name &rest args) &body body)
`(defun ,name ,args ,@body))
The Common Lisp code:
(define (variable? x) (symbol? x))
(define (same-variable? v1 v2)
(and (variable? v1) (variable? v2) (eq? v1 v2)))
(define (make-sum a1 a2) (list '+ a1 a2))
(define (make-product m1 m2) (list '* m1 m2))
(define (sum? x)
(and (pair? x) (eq? (car x) '+)))
(define (addend s) (cadr s))
(define (augend s) (caddr s))
(define (product? x)
(and (pair? x) (eq? (car x) '*)))
(define (multiplier p) (cadr p))
(define (multiplicand p) (caddr p))
(define (deriv exp var)
(cond ((number? exp) 0)
((variable? exp)
(if (same-variable? exp var) 1 0))
((sum? exp)
(make-sum (deriv (addend exp) var)
(deriv (augend exp) var)))
((product? exp)
(make-sum
(make-product (multiplier exp)
(deriv (multiplicand exp) var))
(make-product (deriv (multiplier exp) var)
(multiplicand exp))))
(t
(error "unknown expression type -- DERIV: ~a" exp))))
Let's try it in LispWorks:
CL-USER 19 > (deriv '(* (* x y) (+ x 3)) 'x)
(+ (* (* X Y) (+ 1 0)) (* (+ (* X 0) (* 1 Y)) (+ X 3)))
Streams example from SICP in Common Lisp
See the book code in chapter 3.5 in SICP. We use the additions to CL from above.
SICP mentions delay, the-empty-stream and cons-stream, but does not implement them. We provide here an implementation in Common Lisp:
(defmacro delay (expression)
`(lambda () ,expression))
(defmacro cons-stream (a b)
`(cons ,a (delay ,b)))
(define (force delayed-object)
(funcall delayed-object))
(defparameter the-empty-stream (make-symbol "THE-EMPTY-STREAM"))
Now comes portable code from the book:
(define (stream-null? stream)
(eq? stream the-empty-stream))
(define (stream-car stream) (car stream))
(define (stream-cdr stream) (force (cdr stream)))
(define (stream-enumerate-interval low high)
(if (> low high)
the-empty-stream
(cons-stream
low
(stream-enumerate-interval (+ low 1) high))))
Now Common Lisp differs in stream-for-each:
we need to use cl:progn instead of begin
function parameters need to be called with cl:funcall
Here is a version:
(defmacro begin (&body body) `(progn ,@body))
(define (stream-for-each proc s)
(if (stream-null? s)
'done
(begin (funcall proc (stream-car s))
(stream-for-each proc (stream-cdr s)))))
We also need to pass functions using cl:function:
(define (display-stream s)
(stream-for-each (function display-line) s))
But then the example works:
CL-USER 20 > (stream-enumerate-interval 10 20)
(10 . #<Closure 1 subfunction of STREAM-ENUMERATE-INTERVAL 40600010FC>)
CL-USER 21 > (display-stream (stream-enumerate-interval 10 1000))
10
11
12
...
997
998
999
1000
DONE
Do you already know some Common Lisp? I assume that is what you mean by 'Lisp'. In that case you might want to use it instead of Scheme. If you don't know either, and you are working through SICP solely for the learning experience, then probably you are better off with Scheme. It has much better support for new learners, and you won't have to translate from Scheme to Common Lisp.
There are differences; specifically, SICP's highly functional style is wordier in Common Lisp because you have to quote functions when passing them around and use funcall to call a function bound to a variable.
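For example (a sketch, using a hypothetical square function):
(defun square (x) (* x x))

;; Scheme (as in SICP) would write:  (map square '(1 2 3))  and  (f 5)
(mapcar #'square '(1 2 3))   ; => (1 4 9); the function is quoted with #'
(let ((f #'square))
  (funcall f 5))             ; => 25; a function held in a variable is called via FUNCALL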
However, if you want to use Common Lisp, you can try using Eli Bendersky's Common Lisp translations of the SICP code under the tag SICP.
They are similar but not the same.
I believe if you go with Scheme it would be easier.
Edit: Nathan Sanders' comment is correct. It's clearly been a while since I last read the book, but I just checked and it does not use call/cc directly. I've upvoted Nathan's answer.
Whatever you use needs to implement continuations, which SICP uses a lot. Not even all Scheme interpreters implement them, and I'm not aware of any Common Lisp that does.