I keep reading that Lisp macros are one of the most powerful features of the language. But reading over the specifications and manuals, they are just functions whose arguments are unevaluated.
Given any macro (defmacro example (arg1 ... argN) (body-forms)) I could just write (defun example (arg1 ... argN) ... (body-forms)) with the last body-form turned into a list and then call it like (eval (example 'arg1 ... 'argN)) to emulate the same behavior of the macro. If this were the case, then macros would just be syntactic sugar, but I doubt that syntactic sugar would be called a powerful language feature. What am I missing? Are there cases where I cannot carry out this procedure to emulate a macro?
I can't speak to "powerful" because that is a little subjective, but macros are regular Lisp functions that work on Lisp data, so they are as expressive as other functions. This isn't the case with templates or generic functions in other languages, which rely more on static types and are (on purpose) more restricted.
In some ways, yes, macros are simple syntactic facilities, but your emulation focuses on the dynamic semantics of macros, i.e. how you can run code that evaluates macros at runtime. However:
the code using eval is not equivalent to expanded code
the preprocessing/compile-time aspect of macros is not emulated
Lexical scope
Functions, like +, do not inherit the lexical scope of their callers:
(let ((x 30))
  (+ 3 4))
Inside the definition of +, you cannot access x. Being able to do so is what "dynamic scope" is about (more precisely, see dynamic extent and indefinite scope variables). But nowadays relying on dynamic scope is quite the exception. Most functions use lexical scope, and this is the case for eval too.
The eval function evaluates a form in the null lexical environment, and it never has access to the surrounding lexical bindings. As such, it behaves like any regular function.
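For instance, this minimal sketch (standard Common Lisp, no invented names) shows the effect directly:

(let ((x 30))
  (eval '(+ x 4)))
;; => error: the variable X is unbound (unless X happens to be
;; globally special), because EVAL cannot see the LET binding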
So, in your example, calling eval on the transformed source code will not work, since arg1 to argN will probably be unbound (it depends on what your macro does).
In order to have an equivalent form, you have to inject bindings in the transformed code, or expand at a higher level:
(defun expand-square (var)
  (list '* var var))

;; instead of:
(defun foo (x) (eval (expand-square 'x)))  ;; x unbound during eval

;; inject bindings
(defun foo (x) (eval `(let ((z ,x)) ,(expand-square 'z))))

;; or expand the top-level form
(eval `(defun foo (x) ,(expand-square 'x)))
Note that macros (in Common Lisp) also have access to the lexical environment through &environment parameters in their lambda-list. The use of this environment is implementation dependent, but can be used to access the declarations associated with a variable, for example.
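As a small, hedged sketch of that mechanism (SHOW-EXPANSION is a made-up name, not a standard macro):

(defmacro show-expansion (form &environment env)
  ;; expand FORM in the macro environment ENV, so MACROLET
  ;; definitions visible at the use site are honored
  `',(macroexpand form env))

;; (macrolet ((twice (x) `(* 2 ,x)))
;;   (show-expansion (twice 5)))   ; => (* 2 5)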
Notice also how in the eval-around-defun example above, the code is evaluated when defining the function, and not when running it. This is the second important thing about macros.
Expansion time
In order to emulate macros you could locally replace a call to a macro by a form that emulates it at runtime (using let to capture all the bindings you want to see inside the expanded code, which is tedious), but then you would miss the useful aspect of macros: generating code ahead of time.
The last example above shows how you can quote defun and wrap it in eval, and basically you would need to do that for all functions if you wanted to emulate the preprocessing work done by macros.
The macro system is a way to integrate this preprocessing step in the language in a way that is simple to use.
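A small sketch of that timing difference (NOISY-SQUARE is a made-up macro; note that the FORMAT call sits in the expander, not in the expansion):

(defmacro noisy-square (x)
  (format t "~&expanding (noisy-square ~S)~%" x)  ; runs at expansion time
  `(* ,x ,x))                                     ; this is the code that runs later

(defun use-it (n)
  (noisy-square n))
;; the message is printed once, when USE-IT is expanded/compiled,
;; not each time (use-it 3) is called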
Conclusion
Macros themselves are a nice way to abstract things when functions can't. For example you can have a more human-friendly, stable syntax that hides implementation details. That's how you define pattern-matching abilities in Common Lisp that make it look like they are part of the language, without too much runtime penalty or verbosity.
They rely on simple term-rewriting functions that are integrated in the language, but you can emulate their behavior either at compile-time or runtime yourself if you want. They can be used to perform different kinds of abstraction that are usually missing or more cumbersome to do in other languages, but are also limited: they don't "understand" code by themselves, they don't give access to all the facilities of the compiler (type propagation, etc.). If you want more you can use more advanced libraries or compiler tools (see deftransform), but macros at least are portable.
Macros are not just functions whose arguments are unevaluated. Macros are functions between programming languages. In other words a macro is a function whose argument is a fragment of source code of a programming language which includes the macro, and whose value is a fragment of source code of a language which does not include the macro (or which includes it in a simpler way).
In very ancient, very rudimentary, Lisps, before people really understood what macros were, you could simulate macros with things called FEXPRs combined with EVAL. A FEXPR was simply a function which did not evaluate its arguments. This worked in such Lisps only because they were completely dynamically scoped, and the cost of it working was that compilation of such things was not possible at all. Those are two enormous costs.
In any modern Lisp, this won't work at all. You can write a toy version of FEXPRs as a macro (this may be buggy):
(defmacro deffex (fx args &body body)
  (assert (every (lambda (arg)
                   (and (symbolp arg)
                        (not (member arg lambda-list-keywords))))
                 args)
      (args) "not a simple lambda list")
  `(defmacro ,fx ,args
     `(let ,(mapcar (lambda (argname argval)
                      `(,argname ',argval))
                    ',args (list ,@args))
        ,@',body)))
So now we could try to write a trivial binding construct I'll call with using this thing:
(deffex with (var val form)
  (eval `(let ((,var ,val)) ,form)))
And this seems to work:
> (with a 1 a)
1
Of course, we're paying the cost that no code which uses this construct can ever be compiled so all our programs will be extremely slow, but perhaps that is a cost we're willing to accept (it's not, but never mind).
Except, of course, it doesn't work, at all:
> (with a 1
    (with b 2
      (+ a b)))
Error: The variable a is unbound.
Oh dear.
Why doesn't it work? It doesn't work because Common Lisp is lexically scoped, and eval is a function: it can't see the lexical bindings.
So not only does this kind of approach prevent compilation in a modern Lisp, it doesn't work at all.
People often, at this point, suggest some kind of kludge solution which would allow eval to see lexical bindings. The cost of such a solution is that all the lexical bindings need to exist in compiled code: no variable can ever be compiled away, not even its name. That's essentially saying that no good compiler can ever be used, even for the small part of your programs you could otherwise compile in a language which, like CL, makes extensive use of macros. For instance, if you ever use defun you're not going to be able to compile the code in its body. People do use defun occasionally, I think.
So this approach simply won't work: it worked by happenstance in very old Lisps but it can't work, even at the huge cost of preventing compilation, in any modern Lisp.
More to the point this approach obfuscates the understanding of what macros are: as I said at the start, macros are functions between programming languages, and understanding that is critical. When you are designing macros you are implementing a new programming language.
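To make that concrete, here's a minimal sketch (MY-UNLESS is a made-up name): MACROEXPAND-1 simply applies the macro's expander function to the source fragment.

(defmacro my-unless (test &body body)
  `(if ,test nil (progn ,@body)))     ; source code in -> source code out

(macroexpand-1 '(my-unless (> x 0) (print x)))
;; => (IF (> X 0) NIL (PROGN (PRINT X)))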
Say I have macros foo and bar. If I write (foo (bar)) my understanding is that in most (all?) lisps foo is going to be given '(bar), not whatever bar would have expanded to had it been expanded first. Some lisps have something like local-expand where the implementation of foo can explicitly request the expansion of its argument before continuing, but why isn't that the default? It seems more natural to me. Is this an accident of history or is there a strong reason to do it the way most lisps do it?
I've been noticing in Rust that I want the macros to work this way. I'd like to be able to wrap a bunch of declarations inside a macro call so that the macro can then crawl the declarations and generate reflection information. But if I use a macro to generate the definitions that I want crawled, my macro wrapping the declarations sees the macro invocations that would generate the declarations rather than the actual declarations.
If I write (foo (bar)) my understanding is that in most (all?) lisps foo is going to be given '(bar), not whatever bar would have expanded to had it been expanded first.
That would restrict Lisp such that (bar) would need to be something that can be expanded -> probably something which is written in the Lisp language.
Lisp developers would like macros where the inner stuff can be a completely new language with different syntax rules; for example, something where FOO does not expand its subforms, but transpiles/compiles a fully or partially different language to Lisp. Something which does not have the usual prefix expression syntax:
Examples
(postfix (a b +) sin)
-> (sin (+ a b))
Here the + in the macro form is in postfix position, not in Lisp's usual prefix position.
or
(query-all (person name)
  where (person = "foo") in database DB)
Lisp macros don't work on language parse trees, but arbitrary, possibly nested, s-expressions. Those don't need to be valid Lisp code outside of that macro -> don't need to follow the usual syntax / semantics.
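As a hedged sketch, the POSTFIX example above could be implemented along these lines (the name and exact behavior are assumptions for illustration):

(defmacro postfix (expr &rest wrappers)
  (labels ((conv (e)
             ;; move the trailing operator of each list to the front
             (if (consp e)
                 (cons (first (last e)) (mapcar #'conv (butlast e)))
                 e)))
    ;; wrap the converted expression in the extra operators
    (reduce (lambda (acc w) (list w acc))
            wrappers :initial-value (conv expr))))

;; (macroexpand-1 '(postfix (a b +) sin))  => (SIN (+ A B))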
Common Lisp has the functions MACROEXPAND and MACROEXPAND-1, so that the outer macro can expand inner code if it wants to do so during its own macro expansion:
CL-USER 26 > (defmacro bar (a) `(* ,a ,a))
BAR
CL-USER 27 > (bar 10)
100
CL-USER 28 > (defmacro foo (a &environment e)
               (let ((f (macroexpand a e)))
                 (print (list a '-> f))
                 `(+ ,(second f) ,(third f))))
FOO
CL-USER 29 > (foo (bar 10))
((bar 10) -> (* 10 10))
20
In the above macro, if FOO saw only an expanded form, it could not print both the source and the expansion.
This works also with scoped macros. Here the BAR macro gets locally redefined and the MACROEXPAND generates different code inside FOO for the same form:
CL-USER 30 > (macrolet ((bar (a)
                          `(expt ,a ,a)))
               (foo (bar 10)))
((bar 10) -> (EXPT 10 10))
20
If foo is a macro then (foo (bar)) must pass the raw syntax (bar) to the foo macro expander. This is absolutely essential.
This is because foo can give any meaning whatsoever to bar.
Consider the defmacro macro itself:
(defmacro foo (bar) body)
Here, the argument (bar) is a parameter list ("macro lambda list") and not a form (Common Lisp jargon for to-be-evaluated expression). It says that the macro shall have a single parameter called bar. Therefore it is nonsensically wrong to try to expand (bar) before handing it to defmacro's expander.
Only if we know that an expression is going to be evaluated is it legitimate to expand it as a macro. But we don't know that about an expression which is the argument to a macro.
Other counterexamples are easy to come up with. (defstruct point (x 0) (y 0)): (x 0) isn't a call to an operator x, but a slot x whose default value is 0. (dolist (x list) ...): x is a variable stepped over the elements of list.
That said, there are implementation choices regarding the timing of macro expansion.
A Lisp implementation can macro-expand an entire top-level form before evaluating or compiling any of it. Or it can expand incrementally, so that for instance when (+ x y) is being processed, x is macro-expanded and evaluated or compiled into some intermediate form already before y is even looked at.
A pure syntax tree interpreter for Lisp which always keeps the code in the original form and always expands (and re-expands) the code as it is evaluating has certain interactivity advantages. Any macro that you rewrite goes instantly "live" in all the existing code that you have input into the REPL, like existing function definitions. It is obviously quite inefficient in terms of execution speed, but any code that you call uses the latest definition of your macros without any hassle of telling the system to reload that code to have it expanded again. That also eliminates the risk that you're testing something that is still based on the old, buggy version of some macro that you fixed. If you're ever writing a Lisp, the range of timing choices for expansion is good to keep in mind, so that you consciously reject the choices you don't go with.
In turn, that said, there are some constraints on the timing of macro expansion. Conceivably, a Lisp interpreter or compiler, when processing an entire file, could go through all the top level forms and expand all of them at once before processing any of them. The implementor will quickly learn that this is bad, because some of the later forms depend on the side effects of the earlier forms. Such as, oh, macros being defined! If the first form defines a macro, which the second one uses, then we cannot expand the second form without evaluating the effect of the first.
It makes sense, in a Lisp, to split up physical top-level forms into logical ones. Suppose that someone writes (or uses a macro to generate) code like (progn (defmacro foo ...) (foo)). This entire progn cannot be macro-expanded up front before evaluation; it won't work! There has to be a rule such as "whenever a top-level form is based on the progn operator, then the children of the progn operator are considered top-level forms by all the processing which treats top-level forms specially, and this rule is applied recursively." The top-level entry point into the macro-expanding code walker then has to contain special-case hacks to do this recognition of logical top-level forms, breaking them up and recursing into a lower-level expander which doesn't do those checks any more.
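A hedged sketch of that rule, where PROCESS-FORM stands in for whatever expands and then evaluates or compiles a single form:

(defun process-toplevel (form)
  (if (and (consp form) (eq (first form) 'progn))
      ;; treat the children of a top-level PROGN as top-level forms,
      ;; recursively, so earlier DEFMACROs affect later siblings
      (mapc #'process-toplevel (rest form))
      (process-form form)))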
I've been noticing in Rust that I want the macros to work this way. I'd like to be able to wrap a bunch of declarations inside a macro call so that the macro can then crawl the declarations and generate reflection information.
It does sound like local-expand is the right tool for that job.
However, an alternative approach would be something like this:
Suppose that wrapper is our outer macro, and that the intended syntax is:

(wrapper decl1 decl2 ...)

where each decl is a declaration that potentially uses some standard form declare.
We can let
(wrapper decl1 decl2 ...)
expand to
(let-syntax ([declare our-declare])
  decl1 decl2 ...
  (post-process-reflection-information))
where our-declare is a helper macro that expands both to the standard declaration and to a form that stores the reflection information, and post-process-reflection-information is another macro that does any needed post-processing.
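A hedged Common Lisp translation of the same idea, using MACROLET; DECLARE-FIELD and *REFLECTION-INFO* are names invented for this sketch, and DEFVAR stands in for whatever the "real" expansion of a declaration would be:

(defvar *reflection-info* '())

(defmacro wrapper (&body decls)
  `(macrolet ((declare-field (name type)
                ;; expand to the "real" declaration plus a form that
                ;; records reflection data when the code runs
                `(progn
                   (defvar ,name)
                   (push (list ',name ',type) *reflection-info*))))
     ,@decls))

;; (wrapper (declare-field width integer)
;;          (declare-field label string))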
I think you are trying to use macros for something they are not designed to solve. Macros are primarily a text/code substitution mechanism, and in the case of Lisp this looks a lot like a simplified term-rewriting system (see also How does term-rewriting based evaluation work?). There are different strategies possible for how to substitute a pattern of code, and in which order, but in C/C++ preprocessor macros, in LaTeX, and in Lisp, the process is typically done by computing the expansion until the form is no longer expandable, starting from the topmost terms. This order is quite natural and because it is distinct from normal evaluation rules, it can be used to implement things the normal evaluation rules cannot.
In your case, you are interested in getting access to all the declarations of some object/type, something which falls under the introspection/reflection category (as you said yourself). But implementing reflection/introspection with macros doesn't look totally doable, since macros work on abstract syntax trees and this might be a poor way to access the metadata you want.
Typically the compiler parses/analyzes the struct definitions and builds the definitive, canonical representation of the struct, even if there are different ways to express it syntactically; it may even use prior information not available directly as source code to compute more interesting metadata (e.g. with inheritance, there could be a set of properties inherited from a type defined in another module; I don't think this applies to Rust).
I think Rust currently does not offer compile-time or runtime introspection facilities, which explains why you are going the macro route. In Common Lisp, macros are definitely not used for introspection; the actual values obtained after evaluation (at different times) are used to gain information about an object. For example, defclass expands into a set of instructions that register a class in the language, but in order to get all the slots of a class, you ask the language to give them to you, e.g.:
(defclass foo () (x)) ;; define class foo with slot X
(defclass bar () (y)) ;; define class bar with slot Y
(defclass zot (foo bar) ()) ;; define class zot with foo and bar as superclasses
USER> (c2mop:class-slots (find-class 'zot))
(#<SB-MOP:STANDARD-EFFECTIVE-SLOT-DEFINITION X>
#<SB-MOP:STANDARD-EFFECTIVE-SLOT-DEFINITION Y>)
I don't know what the solution for your problem is, but in addition to the other answers, I think it is not specifically a fault of the macro system. If a macro system is defined, as is usual, as only a term-rewriting system, it will always have difficulty performing some tasks at the semantic level. But Rust is still evolving, so there might be better ways to do things in the future.
Are there any practical differences between special forms and macros? In what do they differ?
The terms aren't quite synonymous, but they aren't exclusive either (this answer assumes Scheme):
A special form (also known as a syntax in the Scheme Reports) is an expression that's not evaluated according to the default rule for function application. (The default rule, just to be explicit, is to eval all of the subexpressions, and then apply the result of the first one to the list of the results of the others.)
The macro system is a language feature that allows definition of new special forms within the language itself. A macro is a special form defined using the macro system.
So you could say that "special form" is a term that pertains to interface or semantics, whereas "macro" is a term that pertains to implementation. "Special form" means "these expressions are evaluated with a special rule," while "macro" means "here's an implementation of a special rule for evaluating some expressions."
Now one important thing is that most Scheme special forms can be defined as macros from a really small core of primitives: lambda, if and macros. A minimal Scheme implementation that provides only these can still implement the rest as macros; recent Scheme Reports have made that distinction by referring to such special forms as "library syntax" that can be defined in terms of macros. In practice, however, practical Scheme systems often implement a richer set of forms as primitives.
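For example, here is a minimal sketch, in Common Lisp syntax with the made-up name MY-AND, of building and from the if primitive:

(defmacro my-and (&rest forms)
  (cond ((null forms) t)                    ; (my-and) => T
        ((null (rest forms)) (first forms)) ; the last form decides the value
        (t `(if ,(first forms)
                (my-and ,@(rest forms))     ; recurse via re-expansion
                nil))))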
Semantically speaking, the only thing that matters about an expression is what rule is used to evaluate it, not how that rule is implemented. So in that sense, it's not important whether a special form is implemented as a macro or a primitive. But on the other hand, the implementation details of a Scheme system often "leak," so you may find yourself caring about it...
Lisp has certain language primitives, which make up Lisp forms:
literal data: numbers, strings, structures, ...
function calls, like (sin 2.1) or like ((lambda (a b) (+ a b 2)) 3 4)
special operators used in special forms. These are the primitive built-in language elements. See Special Operators in Common Lisp. These need to be implemented in the interpreter and compiler. Common Lisp provides no way for the developer to introduce new special operators or to provide your own version of these. A code parsing tool will need to understand these special operators; these tools are usually called 'code walkers' in the Lisp community. During the definition of the Common Lisp standard, it was made sure that the number is very small and that all extensions otherwise are done via new functions and new macros.
macros: macros are functions which are transforming source code. The transformation will happen recursively until no macro is left in the source code. Common Lisp has built-in macros and allows the user to write new ones.
So the most important practical difference between special forms and macros is this: special operators are built-in syntax and semantics. They can't be written by the developer. Macros can be written by the developer.
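The recursive transformation described above can be sketched as a loop over MACROEXPAND-1 (EXPAND-FULLY is a made-up name). This is a simplification that only expands the topmost form; a full expansion would also walk into subforms:

(defun expand-fully (form)
  ;; keep expanding the head of FORM until it is no longer a macro call
  (loop
    (multiple-value-bind (expansion expanded-p)
        (macroexpand-1 form)
      (unless expanded-p
        (return expansion))
      (setf form expansion))))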
In contrast to special forms, macro forms can be macroexpanded:
CL-USER(1): (macroexpand '(with-slots (x y z) foo
                            (format t "~&X = ~A" x)))
(LET ((#:G925 FOO))
  (DECLARE (IGNORABLE #:G925))
  (DECLARE (SB-PCL::%VARIABLE-REBINDING #:G925 FOO))
  #:G925
  (SYMBOL-MACROLET ((X (SLOT-VALUE #:G925 'X))
                    (Y (SLOT-VALUE #:G925 'Y))
                    (Z (SLOT-VALUE #:G925 'Z)))
    (FORMAT T "~&X = ~A" X)))
T
For me the most practical difference has been in the debugger: Macros don't show up in the debugger; instead, the (typically) obscure code from the macro's expansion shows up in the debugger. It is a real pain to debug such code and a good reason to ensure your macros are rock solid before you start relying upon them.
The super short answer for the lazy
You can write your own macros any time you want, though you can't add special forms without recompiling Clojure.
Why is the function/macro dichotomy present in Common Lisp?
What are the logical problems in allowing the same name representing both a macro (taking precedence when found in function position in compile/eval) and a function (usable for example with mapcar)?
For example having second defined both as a macro and as a function would allow to use
(setf (second x) 42)
and
(mapcar #'second L)
without having to create any setf trickery.
Of course it's clear that macros can do more than functions, so the analogy cannot be complete (and I don't think, of course, that every macro should also be a function), but why forbid it by making both share a single namespace when it could be potentially useful?
I hope I'm not offending anyone, but I don't really find a "Why do that?" response pertinent... I'm looking for why this is a bad idea. Imposing an arbitrary limitation because no good use is known is IMO somewhat arrogant (it sort of assumes perfect foresight).
Or are there practical problems in allowing it?
Macros and Functions are two very different things:
macros are using source (!!!) code and are generating new source (!!!) code
functions are parameterized blocks of code.
Now we can look at this from several angles, for example:
a) how do we design a language where functions and macros are clearly identifiable and are looking different in our source code, so we (the human) can easily see what is what?
or
b) how do we blend macros and functions in a way that the result is most useful and has the most useful rules controlling its behavior? For the user it should not make a difference to use a macro or a function.
We really need to convince ourselves that b) is the way to go and we would like to use a language where macros and functions usage looks the same and is working according to similar principles. Take ships and cars. They look different, their use case is mostly different, they transport people - should we now make sure that the traffic rules for them are mostly identical, should we make them different or should we design the rules for their special usage?
For functions we have problems like: defining a function, scope of functions, lifetime of functions, passing functions around, returning functions, calling functions, shadowing of functions, extension of functions, removing the definition of a function, compilation and interpretation of functions, ...
If we would make macros appear mostly similar to functions, we need to address most or all above issues for them.
In your example you mention a SETF form. SETF is a macro that analyses the enclosed form at macro expansion time and generates code for a setter. It has little to do with SECOND being a macro or not. Having SECOND being a macro would not help at all in this situation.
So, what is a problem example?
(defmacro foo (a b)
  (if (and (numberp b) (zerop b))
      a
      `(- ,a ,b)))

(defun bar (x list)
  (mapcar #'foo (list x x x x) '(1 2 3 4)))
Now what should that do? Intuitively it looks easy: map FOO over the lists. But it isn't. When Common Lisp was designed, I would guess, it was not clear what that should do and how it should work. If FOO is a function, then it was clear: Common Lisp took the ideas behind Scheme's lexically scoped first-class functions and integrated them into the language.
But first-class macros? After the design of Common Lisp, a bunch of research went into this problem and investigated it. But at the time of Common Lisp's design there was no widespread use of first-class macros and no experience with design approaches. Common Lisp standardized what was known at the time and what the language users thought necessary to develop software with (the object system CLOS is somewhat novel, based on earlier experience with similar object systems). Common Lisp was not designed to be the theoretically most pleasing Lisp dialect - it was designed to be a powerful Lisp which allows the efficient implementation of software.
We could work around this and say, passing macros is not possible. The developer would have to provide a function under the same name, which we pass around.
But then (funcall #'foo 1 2) and (foo 1 2) would invoke different machineries? In the first case we use the function foo, and in the second case we use the macro foo to generate code for us? Really? Do we (as human programmers) want this? I think not - it looks like it makes programming much more complicated.
From a pragmatic point of view: Macros and the mechanism behind it are already complicated enough that most programmers have difficulties dealing with it in real code. They make debugging and code understanding much harder for a human. On the surface a macro makes code easier to read, but the price is the need to understand the code expansion process and result.
Finding a way to further integrate macros into the language design is not an easy task.
readscheme.org has some pointers to Macro-related research wrt. Scheme: Macros
What about Common Lisp
Common Lisp provides functions which can be first-class (stored, passed around, ...) and lexically scoped naming for them (DEFUN, FLET, LABELS, FUNCTION, LAMBDA).
Common Lisp provides global macros (DEFMACRO) and local macros (MACROLET).
Common Lisp provides global compiler macros (DEFINE-COMPILER-MACRO).
With compiler macros it is possible to have a function or macro for a symbol AND a compiler macro. The Lisp system can decide to prefer the compiler macro over the macro or function. It can also ignore them entirely. This mechanism is mostly used for the user to program specific optimizations. Thus it does not solve any macro related problems, but provides a pragmatic way to program global optimizations.
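A minimal sketch of that mechanism (SQUARE is a made-up function; note that the compiler is always free to ignore a compiler macro):

(defun square (x) (* x x))

(define-compiler-macro square (&whole form x)
  (if (numberp x)      ; fold calls with a literal-number argument
      (* x x)
      form))           ; returning FORM unchanged means "decline to expand"

;; (square 4) may now compile to the constant 16, while
;; (mapcar #'square '(1 2 3)) still uses the ordinary function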
I think that Common Lisp's two namespaces (functions and values), rather than three (macros, functions, and values), is a historical contingency.
Early Lisps (in the 1960s) represented functions and values in different ways: values as bindings on the runtime stack, and functions as properties attached to symbols in the symbol table. This difference in implementation led to the specification of two namespaces when Common Lisp was standardized in the 1980s. See Richard Gabriel's paper Technical Issues of Separation in Function Cells and Value Cells for an explanation of this decision.
Macros (and their ancestors, FEXPRs, functions which do not evaluate their arguments) were stored in many Lisp implementations in the symbol table, in the same way as functions. It would have been inconvenient for these implementations if a third namespace (for macros) had been specified, and would have caused backwards-compatibility problems for many programs.
See Kent Pitman's paper Special Forms in Lisp for more about the history of FEXPRs, macros and other special forms.
(Note: Kent Pitman's website is not working for me, so I've linked to the papers via archive.org.)
Because then the exact same name would represent two different objects, depending on the context. It makes the program unnecessarily difficult to understand.
My TXR Lisp dialect allows a symbol to be simultaneously a macro and function. Moreover, certain special operators are also backed by functions.
I put a bit of thought into the design, and haven't run into any problems. It works very well and is conceptually clean.
Common Lisp is the way it is for historic reasons.
Here is a brief rundown of the system:
When a global macro is defined for symbol X with defmacro, the symbol X does not become fboundp. Rather, what becomes fboundp is the compound function name (macro X).
The name (macro X) is then known to symbol-function, trace and in other situations. (symbol-function '(macro X)) retrieves the two-argument expander function which takes the form and an environment.
It's possible to write a macro using (defun (macro X) (form env) ...).
There are no compiler macros; regular macros do the job of compiler macros.
A regular macro can return the unexpanded form to indicate that it's declining to expand. If a lexical macrolet declines to expand, the opportunity goes to a more lexically outer macrolet, and so on up to the global defmacro. If the global defmacro declines to expand, the form is considered expanded, and thus is necessarily either a function call or special form.
If we have both a function and macro called X, we can call the function definition using (call (fun X) ...) or (call 'X ...), or else using the Lisp-1-style dwim evaluator (dwim X ...) that is almost always used through its [] syntactic sugar as [X ...].
For a sort of completeness, the functions mboundp, mmakunbound and symbol-macro are provided, which are macro analogs of fboundp, fmakunbound and symbol-function.
The special operators or, and, if and some others have function definitions also, so that code like [mapcar or '(nil 2 t) '(1 0 3)] -> (1 2 t) is possible.
Example: apply constant folding to sqrt:
1> (sqrt 4.0)
2.0
2> (defmacro sqrt (x :env e :form f)
     (if (constantp x e)
         (sqrt x)
         f))
** warning: (expr-2:1) defmacro: defining sqrt, which is also a built-in defun
sqrt
3> (sqrt 4.0)
2.0
4> (macroexpand '(sqrt 4.0))
2.0
5> (macroexpand '(sqrt x))
(sqrt x)
However, no, (set (second x) 42) is not implemented via a macro definition for second. That would not work very well. The main reason is that it would be too much of a burden. The programmer may want to have, for a given function, a macro definition which has nothing to do with implementing assignment semantics!
Moreover, if (second x) implements place semantics, what happens when it is not embedded in an assignment operation, such that the semantics is not required at all? Basically, to hit all the requirements would require concocting a scheme for writing macros whose complexity would equal or exceed that of existing logic for handling places.
TXR Lisp does, in fact, feature a special kind of macro called a "place macro". A form is only recognized as a place macro invocation when it is used as a place. However, place macros do not implement place semantics themselves; they just do a straightforward rewrite. Place macros must expand down to a form that is recognized as a place.
Example: specify that (foo x), when used as a place, behaves as (car x):
1> (define-place-macro foo (x) ^(car ,x))
foo
2> (macroexpand '(foo a)) ;; not a macro!
(foo a)
3> (macroexpand '(set (foo a) 42)) ;; just a place macro
(sys:rplaca a 42)
If foo expanded to something which is not a place, things would fail:
4> (define-place-macro foo (x) ^(bar ,x))
foo
5> (macroexpand '(foo a))
(foo a)
6> (macroexpand '(set (foo a) 42))
** (bar a) is not an assignable place
I've learned Clojure previously and really like the language. I also love Emacs and have hacked some simple stuff with Emacs Lisp. There is one thing which prevents me mentally from doing anything more substantial with Elisp though. It's the concept of dynamic scoping. I'm just scared of it since it's so alien to me and smells like semi-global variables.
So with variable declarations I don't know which things are safe to do and which are dangerous. From what I've understood, variables set with setq fall under dynamic scoping (is that right?) What about let variables? Somewhere I've read that let allows you to do plain lexical scoping, but somewhere else I read that let vars also are dynamically scoped.
I guess my biggest worry is that my code (using setq or let) accidentally breaks some variables from platform or third-party code that I call, or that after such a call my local variables are messed up accidentally. How can I avoid this?
Are there a few simple rules of thumb that I can just follow and know exactly what happens with the scope without being bitten in some weird, hard-to-debug way?
It isn't that bad.
The main problems can appear with 'free variables' in functions.
(defun foo (a)
  (* a b))
In the above function, a is a local variable. b is a free variable. In a system with dynamic binding like Emacs Lisp, b will be looked up at runtime. There are now three cases:
b is not defined -> error
b is a local variable bound by some function call in the current dynamic scope -> take that value
b is a global variable -> take that value
The problems can then be:
a bound value (global or local) is shadowed by a function call, possibly unwanted
an undefined variable is NOT shadowed -> error on access
a global variable is NOT shadowed -> picks up the global value, which might be unwanted
In a Lisp with a compiler, compiling the above function might generate a warning that there is a free variable. Typically Common Lisp compilers will do that. An interpreter won't provide that warning; one just sees the effect at runtime.
Advice:
make sure that you don't use free variables accidentally
make sure that global variables have a special name, so that they are easy to spot in source code, usually *foo-var*
Don't write
(defun foo (a b)
  ...
  (setq c (* a b))  ; where c is a free variable
  ...)
Write:
(defun foo (a b)
  ...
  (let ((c (* a b)))
    ...)
  ...)
Bind all the variables you want to use, so that you can be sure they are not bound somewhere else.
That's basically it.
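A short sketch of both pieces of advice together (Common Lisp syntax; the same pattern applies in Emacs Lisp):

(defvar *scale* 1)          ; earmuffs mark the global/special variable

(defun scaled (x)
  (* x *scale*))            ; *SCALE* is free here, looked up dynamically

(scaled 10)                 ; => 10
(let ((*scale* 3))
  (scaled 10))              ; => 30, the LET shadows the global binding
(scaled 10)                 ; => 10 again after the LET exits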
Since GNU Emacs version 24 lexical binding is supported in its Emacs Lisp. See: Lexical Binding, GNU Emacs Lisp Reference Manual.
In addition to the last paragraph of Gilles' answer, here is how RMS argues in favor of dynamic scoping in an extensible system:
Some language designers believe that dynamic binding should be avoided, and explicit argument passing should be used instead. Imagine that function A binds the variable FOO, and calls the function B, which calls the function C, and C uses the value of FOO. Supposedly A should pass the value as an argument to B, which should pass it as an argument to C.

This cannot be done in an extensible system, however, because the author of the system cannot know what all the parameters will be. Imagine that the functions A and C are part of a user extension, while B is part of the standard system. The variable FOO does not exist in the standard system; it is part of the extension. To use explicit argument passing would require adding a new argument to B, which means rewriting B and everything that calls B. In the most common case, B is the editor command dispatcher loop, which is called from an awful number of places.

What's worse, C must also be passed an additional argument. B doesn't refer to C by name (C did not exist when B was written). It probably finds a pointer to C in the command dispatch table. This means that the same call which sometimes calls C might equally well call any editor command definition. So all the editing commands must be rewritten to accept and ignore the additional argument. By now, none of the original system is left!
Personally, I think that if there is a problem with Emacs-Lisp, it is not dynamic scoping per se, but that it is the default, and that it is not possible to achieve lexical scoping without resorting to extensions. In CL, both dynamic and lexical scoping can be used, and -- except for top-level (which is addressed by several deflex implementations) and globally declared special variables -- the default is lexical scoping. In Clojure, too, you can use both lexical and dynamic scoping.
To quote RMS again:
It is not necessary for dynamic scope to be the only scope rule provided, just useful for it to be available.
Are there a few simple rules of thumb that I can just follow and know exactly what happens with the scope without being bitten in some weird, hard-to-debug way?
Read the Emacs Lisp Reference; you'll find many details like this one:
Special Form: setq [symbol form]...
This special form is the most common method of changing a
variable's value. Each SYMBOL is given a new value, which is the
result of evaluating the corresponding FORM. The most-local
existing binding of the symbol is changed.
Here is an example :
(defun foo () (setq tata "foo"))
(defun bar (tata) (setq tata "bar"))
(foo)
(message tata)
===> "foo"
(bar tata)
(message tata)
===> "foo"
As Peter Ajtai pointed out:
Since emacs-24.1 you can enable lexical scoping on a per file basis by putting
;; -*- lexical-binding: t -*-
on top of your elisp file.
First, elisp has separate variable and function bindings, so some pitfalls of dynamic scoping are not relevant.
Second, you can still use setq to set variables, but the value set does not survive the exit of the dynamic scope it was set in. This isn't fundamentally different from lexical scoping; the difference is that with dynamic scoping a setq in a function you call can affect the value you see after the function call.
There's lexical-let, a macro that (essentially) imitates lexical bindings (I believe it does this by walking the body and changing all occurrences of the lexically let variables to a gensymmed name, eventually uninterning the symbol), if you absolutely need to.
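For instance, a hedged sketch of a closure-producing counter with lexical-let:

(require 'cl)                      ; lexical-let comes from the cl package

(defun make-counter ()
  (lexical-let ((count 0))
    (lambda () (incf count))))     ; the lambda captures COUNT lexically

;; (setq c (make-counter))
;; (funcall c) => 1, (funcall c) => 2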
I'd say "write code as normal". There are times when the dynamic nature of elisp will bite you, but I've found that in practice that is surprisingly seldom.
Here's an example of what I was saying about setq and dynamically-bound variables (recently evaluated in a nearby scratch buffer):
(let ((a nil))
  (list (let ((a nil))
          (setq a 'value)
          a)
        a))
⇒ (value nil)
Everything that has been written here is worthwhile. I would add this: get to know Common Lisp -- if nothing else, read about it. CLTL2 presents lexical and dynamic binding well, as do other books. And Common Lisp integrates them well in a single language.
If you "get it" after some exposure to Common Lisp then things will be clearer for you for Emacs Lisp. Emacs 24 uses lexical scoping to a greater extent by default than older versions, but Common Lisp's approach will still be clearer and cleaner (IMHO). Finally, it is definitely the case that dynamic scope is important for Emacs Lisp, for the reasons that RMS and others have emphasized.
So my suggestion is to get to know how Common Lisp deals with this. Try to forget about Scheme, if that is your main mental model of Lisp -- it will limit you more than help you in understanding scoping, funargs, etc. in Emacs Lisp. Emacs Lisp, like Common Lisp, is "dirty and low-down"; it is not Scheme.
Dynamic and lexical scoping have different behaviors when a piece of code is used in a different scope than the one it was defined in. In practice, there are two patterns that cover most troublesome cases:
A function shadows a global variable, then calls another function that uses that global variable.
(defvar x 3)
(defun foo ()
  x)
(defun bar (x)
  (+ (foo) x))
(bar 0) ⇒ 0
This doesn't come up often in Emacs because local variables tend to have short names (often single words) whereas global variables tend to have long names (often prefixed by packagename-). Many standard functions have names that are tempting to use as local variables, like list and point, but functions and variables live in separate namespaces and local functions are not used very often.
A function is defined in one lexical context and used outside this lexical context because it's passed to a higher-order function.
(let ((cl-y 10))
  (mapcar* (lambda (elt) (* cl-y elt)) '(1 2 3)))
⇒ (10 20 30)

(let ((cl-x 10))
  (mapcar* (lambda (elt) (* cl-x elt)) '(1 2 3)))
⇑ (wrong-type-argument number-or-marker-p (1 2 3))
The error is due to the use of cl-x as a variable name in mapcar* (from the cl package). Note that the cl package uses cl- as a prefix even for its local variables in higher-order functions. This works reasonably well in practice, as long as you take care not to use the same variable as a global name and as a local name, and you don't need to write a recursive higher-order function.
P.S. Emacs Lisp's age isn't the only reason why it's dynamically scoped. True, in those days, lisps tended towards dynamic scoping — Scheme and Common Lisp hadn't really taken on yet. But dynamic scoping is also an asset in a language targeted towards extending a system dynamically: it lets you hook into more places without any special effort. With great power comes great rope to hang yourself: you risk accidentally hooking into a place you didn't know about.
The other answers are good at explaining the technical details on how to work with dynamic scoping, so here's my non-technical advice:
Just do it
I've been tinkering with Emacs lisp for 15+ years and don't know that I've ever been bitten by any problems due to the differences between lexical/dynamic scope.
Personally, I've not found the need for closures (I love 'em, just don't need them for Emacs). And I generally try to avoid global variables (whether the scoping is lexical or dynamic).
So I suggest jumping in and writing customizations that suit your needs/desires, chances are you won't have any problems.
I entirely feel your pain. I find the lack of lexical binding in emacs rather annoying - especially not being able to use lexical closures, which seems to be a solution I think of a lot, coming from more modern languages.
While I don't have any more advice on working around the lacking features that the previous answers didn't cover yet, I'd like to point out the existence of an emacs branch called `lexbind', implementing lexical binding in a backward-compatible way. In my experience lexical closures are still a little buggy in some circumstances, but that branch appears to be a promising approach.
Just don't.
Emacs 24 lets you use lexical scope. Just run
(setq lexical-binding t)
or add
;; -*- lexical-binding: t -*-
at the beginning of your file.