Is the distinction between the 3 different let forms (as in Scheme's let, let*, and letrec) useful in practice?
I am currently in the midst of developing a Lisp-style language that supports all 3 forms, yet I have found:
regular "let" is the most inefficient form, effectively having to translate to an immediately called lambda form and the instructions generated are nearly identical. Additionally, I haven't found myself needing this form very often.
let* (sequential binding) seems to be the most practically useful and most often used. This form can be translated to a sequence of nested "lets", each environment storing a single variable. But this again is highly inefficient, wasting space and lookup time.
letrec (recursive binding) can be efficiently implemented, given that no initializer expression refers to an unbound variable. Typically all initializers are lambda expressions, and this condition holds.
The question is: since letrec can be efficiently implemented and subsumes the behavior of let*, and since regular let is not often used and can be converted to a lambda form with no great loss of efficiency, why not give the default "let" the behavior of the current "letrec" and be rid of the original "let"?
This [let*] form can be translated to a sequence of nested "lets", each environment storing a single variable. But this again is highly inefficient, wasting space and lookup time.
While what you are saying here is not incorrect, in fact there is no need for such a transformation. A compiling strategy for the simple let can handle the semantics of let* with just simple modifications (possibly supporting both with just a flag passed to common code).
let* just alters the scoping rules, which are settled at compile time; it's mostly a matter of which compile-time environment object is used when compiling a given variable init form.
A compiler can use a single environment object for the sequential bindings of a let*, and destructively update it as it compiles the variable init forms, so that each successive init form sees a more and more extended version of that environment which contains more and more variables. At the end of that, the complete environment is available with all the variables, for doing the code generation for generating the frame and whatnot.
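A minimal sketch of that idea (a hypothetical toy, not the answer's actual compiler; compile-bindings and check-references are invented names, and "compiling" an init form is reduced to a scope check):

(defun compile-bindings (bindings env &key sequential)
  (let ((new-env (copy-list env)))
    (loop for (var init) in bindings
          ;; LET* checks each init form against the growing environment,
          ;; plain LET against the unchanged outer one.
          do (check-references init (if sequential new-env env))
             (push var new-env))
    new-env))

(defun check-references (form env)
  ;; Toy stand-in for compiling FORM: warn about unknown variables.
  (cond ((and form (symbolp form) (not (keywordp form)))
         (unless (member form env)
           (warn "~S is not in scope" form)))
        ((consp form)
         (dolist (f (rest form)) (check-references f env)))))

;; (compile-bindings '((a 1) (b a)) '())               ; warns: A not in scope (LET)
;; (compile-bindings '((a 1) (b a)) '() :sequential t) ; no warning (LET*)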
One issue to watch out for is that a flat environment representation for let* means that lexical closures captured during the variable binding phase can capture future variables which are lexically invisible to them:
(let* ((past 42)
       (present (lambda () (do-something-with past)))
       (future (construct-huge-cumbersome-object)))
  ...)
If there is a single run-time environment object here containing the compiled versions of the variables past, present and future, then the lambda must capture that environment, which means that although ostensibly the lambda "sees" only the past variable (future is not in scope), it has de facto captured future.
Thus, garbage collection will consider the huge-cumbersome-object to be reachable for as long as the lambda remains reachable.
There are ways to address this, like accompanying the environmental reference emanating from the lambda with some kind of frame index which says, "I'm only referencing part of the environment vector up to index 13". Then when the garbage collector traverses this fenced reference, it will only mark the indicated part of the environment vector: cells 0 to 13.
Anyway, about whether to implement both let and let*. I suspect that if Lisp were being "green field" designed from scratch today, many designers would likely reach for the sequentially binding version to be called let. The parallel construct would be the one available under the special name let*. The situations when you actually need let to be parallel are fewer. For instance, let allows us to re-bind a pair of variable symbols such that their contents appear exchanged (see the example below); but this is rarely something that comes up in application programming. In some programming language cultures, variable shadowing is frowned upon entirely; GNU C has a -Wshadow warning against it, for instance.
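To make the exchange example concrete (an added illustration, not from the original answer):

(let ((a 1) (b 2))
  (let ((a b) (b a))   ; parallel: both init forms see the outer A and B
    (list a b)))       ; => (2 1), whereas LET* here would yield (2 2)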
Note how in ANSI Common Lisp, which has let and let*, the optional parameters of a function behave sequentially, like let*, and this is the only binding strategy supported! So that is to say:
(lambda (required &optional opt1 (opt2 opt1)) ...)
Here the value of opt2 is defaulted from whatever the value of opt1 is at the time of the call. The initialization expression of opt2 has the opt1 parameter in scope.
Also, in the same Lisp dialect, the regular setf is sequential; if you want parallel assignment you must use psetf, which is the longer name of the two.
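For instance (an added illustration):

(let ((a 1) (b 2))
  (setf a b b a)    ; sequential: A := 2, then B := A, i.e. 2
  (list a b))       ; => (2 2)

(let ((a 1) (b 2))
  (psetf a b b a)   ; parallel: the values are swapped
  (list a b))       ; => (2 1)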
Common Lisp already shows evidence that design decisions more recent than let tend to favor sequential operation and to designate the parallel variant as the extraordinary one.
Think of metaprogramming. If your default let sequentially creates nested scopes, you'll have to make sure that none of the initialiser expressions refer to names from the wrong scopes. You have such a guarantee with a regular let. Control over name scoping is very important when you're generating code.
Letrec is even worse: it introduces very complicated scoping rules that cannot be easily reasoned about.
In my ongoing quest to learn lisp, I'm running into a conceptual problem. It's somewhat akin to the question here, but maybe it's thematically appropriate to lisp that my question is a level of abstraction up.
As a rule, when should you create a macro vs. a function? It seems to me, maybe naively, that there would be very few cases where you must create a macro instead of a function, and that in most of the remaining cases a function would generally suffice. Of these remaining cases, it seems like the main additional value of a macro would be clarity of syntax. And if that's the case, then it seems like not just the decision to opt for macro use but also the design of their structures might be fundamentally idiosyncratic to the individual programmer.
Is this wrong? Is there a general case outlining when to use macros over functions? Am I right that the cases where a macro is required by the language are generally few? And lastly, is there a general syntactic form that's expected of macros, or are they generally used as shorthands by programmers?
I found a detailed answer, from Paul Graham's On Lisp, bold emphases added:
Macros can do two things that functions can’t: they can control (or prevent) the evaluation of their arguments, and they are expanded right into the calling context. Any application which requires macros requires, in the end, one or both of these properties.
...
Macros use this control in four major ways:
Transformation. The Common Lisp setf macro is one of a class of macros which pick apart their arguments before evaluation. A built-in access function will often have a converse whose purpose is to set what the access function retrieves. The converse of car is rplaca, of cdr, rplacd, and so on. With setf we can use calls to such access functions as if they were variables to be set, as in (setf (car x) 'a), which could expand into (progn (rplaca x 'a) 'a).
To perform this trick, setf has to look inside its first argument. To know that the case above requires rplaca, setf must be able to see that the first argument is an expression beginning with car. Thus setf, and any other operator which transforms its arguments, must be written as a macro.
Binding. Lexical variables must appear directly in the source code. The first argument to setq is not evaluated, for example, so anything built on setq must be a macro which expands into a setq, rather than a function which calls it. Likewise for operators like let, whose arguments are to appear as parameters in a lambda expression, for macros like do which expand into lets, and so on. Any new operator which is to alter the lexical bindings of its arguments must be written as a macro.
Conditional evaluation. All the arguments to a function are evaluated. In constructs like when, we want some arguments to be evaluated only under certain conditions. Such flexibility is only possible with macros.
Multiple evaluation. Not only are the arguments to a function all evaluated, they are all evaluated exactly once. We need a macro to define a construct like do, where certain arguments are to be evaluated repeatedly.
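To illustrate the last two points (an added sketch, not from the book; my-when and my-while are invented names):

;; MY-WHEN relies on conditional evaluation: a function would have
;; evaluated BODY before being called.
(defmacro my-when (test &body body)
  `(if ,test (progn ,@body) nil))

;; MY-WHILE evaluates TEST and BODY repeatedly, which no function can
;; arrange for its arguments.
(defmacro my-while (test &body body)
  `(loop while ,test do (progn ,@body)))

(my-when nil (error "never evaluated"))   ; => NIL, no error signaled

(let ((n 3))
  (my-while (plusp n)
    (print n)
    (decf n)))   ; prints 3 2 1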
There are also several ways to take advantage of the inline expansion of macros. It’s important to emphasize that the expansions thus appear in the lexical context of the macro call, since two of the three uses for macros depend on that fact. They are:
Using the calling environment. A macro can generate an expansion containing a variable whose binding comes from the context of the macro call. The behavior of the following macro:
(defmacro foo (x) `(+ ,x y))
depends on the binding of y where foo is called.
This kind of lexical intercourse is usually viewed more as a source of contagion than a source of pleasure. Usually it would be bad style to write such a macro. The ideal of functional programming applies as well to macros: the preferred way to communicate with a macro is through its parameters. Indeed, it is so rarely necessary to use the calling environment that most of the time it happens, it happens by mistake...
Wrapping a new environment. A macro can also cause its arguments to be evaluated in a new lexical environment. The classic example is let, which could be implemented as a macro on lambda. Within the body of an expression like (let ((y 2)) (+ x y)), y will refer to a new variable.
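That construction can be sketched as follows (an addition, not from the book; my-let is an invented name):

(defmacro my-let (bindings &body body)
  `((lambda ,(mapcar #'first bindings) ,@body)
    ,@(mapcar #'second bindings)))

(my-let ((y 2)) (+ 40 y))   ; expands to ((lambda (y) (+ 40 y)) 2) => 42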
Saving function calls. The third consequence of the inline insertion of macro expansions is that in compiled code there is no overhead associated with a macro call. By runtime, the macro call has been replaced by its expansion. (The same is true in principle of functions declared inline.)
...
What about those operators which could be written either way [i.e. as a function or a macro]?... Here are several points to consider when we face such choices:
THE PROS
Computation at compile-time. A macro call involves computation at two times: when the macro is expanded, and when the expansion is evaluated. All the macro expansion in a Lisp program is done when the program is compiled, and every bit of computation which can be done at compile-time is one bit that won’t slow the program down when it’s running. If an operator could be written to do some of its work in the macro expansion stage, it will be more efficient to make it a macro, because whatever work a smart compiler can’t do itself, a function has to do at runtime. Chapter 13 describes macros like avg which do some of their work during the expansion phase.
Integration with Lisp. Sometimes, using macros instead of functions will make a program more closely integrated with Lisp. Instead of writing a program to solve a certain problem, you may be able to use macros to transform the problem into one that Lisp already knows how to solve. This approach, when possible, will usually make programs both smaller and more efficient: smaller because Lisp is doing some of your work for you, and more efficient because production Lisp systems generally have had more of the fat sweated out of them than user programs. This advantage appears mostly in embedded languages, which are described starting in Chapter 19.
Saving function calls. A macro call is expanded right into the code where it appears. So if you write some frequently used piece of code as a macro, you can save a function call every time it’s used. In earlier dialects of Lisp, programmers took advantage of this property of macros to save function calls at runtime. In Common Lisp, this job is supposed to be taken over by functions declared inline.
By declaring a function to be inline, you ask for it to be compiled right into the calling code, just like a macro. However, there is a gap between theory and practice here; CLTL2 (p. 229) says that “a compiler is free to ignore this declaration,” and some Common Lisp compilers do. It may still be reasonable to use macros to save function calls, if you are compelled to use such a compiler...
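The avg macro mentioned under "Computation at compile-time" can be written essentially as in the book (shown here as an added illustration):

(defmacro avg (&rest args)
  `(/ (+ ,@args) ,(length args)))   ; the count is fixed at expansion time

(avg 1 2 3)   ; expands to (/ (+ 1 2 3) 3) => 2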
THE CONS
Functions are data, while macros are more like instructions to the compiler. Functions can be passed as arguments (e.g. to apply), returned by functions, or stored in data structures. None of these things are possible with macros.
In some cases, you can get what you want by enclosing the macro call within a lambda-expression. This works, for example, if you want to apply or funcall certain macros:

> (funcall #'(lambda (x y) (avg x y)) 1 3)
2

However, this is an inconvenience. It doesn't always work, either: even if, like avg, the macro has an &rest parameter, there is no way to pass it a varying number of arguments.
Clarity of source code. Macro definitions can be harder to read than the equivalent function definitions. So if writing something as a macro would only make a program marginally better, it might be better to use a function instead.
Clarity at runtime. Macros are sometimes harder to debug than functions. If you get a runtime error in code which contains a lot of macro calls, the code you see in the backtrace could consist of the expansions of all those macro calls, and may bear little resemblance to the code you originally wrote.
And because macros disappear when expanded, they are not accountable at runtime. You can’t usually use trace to see how a macro is being called. If it worked at all, trace would show you the call to the macro’s expander function, not the macro call itself.
Recursion. Using recursion in macros is not so simple as it is in functions. Although the expansion function of a macro may be recursive, the expansion itself may not be. Section 10.4 deals with the subject of recursion in macros...
Having considered what can be done with macros, the next question to ask is: in what sorts of applications can we use them? The closest thing to a general description of macro use would be to say that they are used mainly for syntactic transformations. This is not to suggest that the scope for macros is restricted. Since Lisp programs are made from lists, which are Lisp data structures, “syntactic transformation” can go a long way indeed...
Macro applications form a continuum between small general-purpose macros like while, and the large, special-purpose macros defined in the later chapters. On one end are the utilities, the macros resembling those that every Lisp has built-in. They are usually small, general, and written in isolation. However, you can write utilities for specific classes of programs too, and when you have a collection of macros for use in, say, graphics programs, they begin to look like a programming language for graphics. At the far end of the continuum, macros allow you to write whole programs in a language distinctly different from Lisp. Macros used in this way are said to implement embedded languages.
Yes, the first rule is: don't use a macro where a function will do.
There are a few things you can't do with functions, for example conditional evaluation of code. Others become quite unwieldy.
In general I am aware of three recurring use cases for macros (which doesn't mean that there aren't any others):
Defining forms (e. g. defun, defmacro, define-frobble-twiddle)
These often have to take some code snippet, wrap it (e. g. in a lambda form), and register it somewhere, maybe even in multiple places. The users (programmers) should only concern themselves with the code snippet. This is thus mostly about removing boilerplate. Additionally, the macro can process the body, e. g. registering docstrings, handling declarations etc.
Example: Imagine that you are writing a sort of event mini-framework. Your event handlers are pure functions that take some input and produce an effect declaration (think re-frame from the Clojure world). You want these functions to be normal named functions so that you can just test them with the usual testing frameworks, but also register them in a lookup table for your event loop mechanism. You'd maybe want to have something like a define-handler macro:
(defvar *handlers* (make-hash-table)) ; internal for the framework

(defmacro define-handler (&whole whole name lambda-list &body body)
  `(progn (defun ,@(rest whole))
          (setf (gethash ',name *handlers*)
                (lambda ,lambda-list ,@body)))) ; could also be #',name
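Hypothetical usage (handle-click and the event shape are invented for illustration):

(define-handler handle-click (event)
  (list :highlight (getf event :target)))

;; The handler is an ordinary, testable function and is also registered:
(funcall (gethash 'handle-click *handlers*) '(:target 42))
;; => (:HIGHLIGHT 42)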
Control constructs (e. g. case, cond, switch, some->)
These use conditional evaluation and convenient re-arrangement of the expression.
With- style wrappers
This is an idiom for providing unwind-protect functionality for some arbitrary resource. The difference from a general with construct (as in Clojure) is that the resource type can be anything; you don't have to reify it with something like a Closeable interface.
Example:
(defmacro with-foo-bar-0 (&body body)
  (let ((foo-bar (gensym "FOO-BAR")))
    `(let (,foo-bar)
       (shiftf ,foo-bar (aref (gethash :foo *buzz*) 0) 0)
       (unwind-protect (progn ,@body)
         (setf (aref (gethash :foo *buzz*) 0) ,foo-bar)))))
This sets something inside a nested data structure to 0, and ensures that it is reset to the value it had before on any, even non-local, exit.
[This is a much-reduced version of a longer, incomplete answer which I decided was not appropriate for SE.]
There are no cases where you must use a macro. Indeed, there are no cases where you must use a programming language at all: if you are happy to learn the order code for the machine you are using and competent with a keypunch then you can program that way.
Most of us are not happy doing that: we like to use programming languages. These have two obvious benefits and one less-obvious but far more important one. The two obvious benefits:
programming languages make programming easier;
programming languages make programs portable across machines.
The more important reason is that building languages is an enormously successful approach to problem solving for human beings. It's so successful that we do it all the time, without even thinking we are doing it. Every time we invent some new term for something we are in fact inventing a language; every time a mathematician invents some new bit of notation they are inventing a language. People like to sneer at these languages by calling them 'jargon', 'slang' or 'dialect' but, famously: a shprakh iz a dialekt mit an armey un flot (translated: a language is a dialect with an army and navy).
The same thing is true for programming languages as is true for natural languages, except that programming languages are designed to communicate both with other humans and with a machine, and the machine requires very precise instructions. This means that it can be rather hard to build programming languages, so people tend to stick with the languages they know.
Except that they don't: the approach of building a language to describe some problem is so powerful that people in fact do this anyway. But they don't know that they are doing it and they don't have the tools to do it so what they end up with tends to be a hideous monster stitched together from pieces of other things with the robustness and readability of custard. We've all dealt with such things. A common characteristic is 'language in a string' where one language appears within strings of another language, with constructs of this inner language being put together by string operations in the outer language. If you are really lucky this will go several levels deep (I have seen three).
These things are abominations, but they are still the best way of dealing with large problem areas. Well, they are the best way if you live in a world where constructing a new programming language is so hard that only special clever people can do it.
But it's hard only because if your only tool is C then everything looks like a PDP-11. If instead we used a tool which made the incremental construction of programming languages easy by allowing them to be defined in terms of simpler versions of themselves in a lightweight way, then we could just construct whole families of programming languages in which to talk about various problems, each of which would simply be a point in the space of possible languages. And anyone could do this: it would be a little bit harder than just writing functions, because working out grammar rules is a little bit harder than thinking up new words, but it would not be a lot harder.
And that's what macros do: they let you define programming languages to talk about a particular problem area in a way which is extremely lightweight. One such language is Common Lisp, but it's just one starting point in the space of Lisp-family languages: a point from which you can build the language you actually want (and people, of course, will belittle these languages by calling them 'dialects': well, a programming language is only a dialect with a standards committee).
Functions let you add to the vocabulary of the language you are building. Macros let you add to the grammar of the language. Between them they let you define a new language in which to talk about the problem area you are interested in. And doing that is the whole point of programming in Lisp: Lisp is about building languages to talk about problem areas.
As soon as you are a little familiar with macros, you will wonder why you ever had this question. :-)
Macros are in no way alternatives to functions, nor vice versa. It just seems that way if you are working at the REPL, because macro expansion, compilation and running all happen within the moment you press [enter].
Macros run at compile time, so any macro processing is finished as soon as your definition runs. There is no way to "call" a macro at the runtime of the definition that involves this very macro.
Macros just compute S-expressions that will be passed to the compiler.
Just think of a macro as something that writes code for you.
This is easier to understand with a little more code in your editor than with small definitions at the REPL. Good luck!
I'm learning blocks in Common Lisp and did this example to see how blocks and the return-from command work:
(block b1
  (print 1)
  (print 2)
  (print 3)
  (block b2
    (print 4)
    (print 5)
    (return-from b1)
    (print 6))
  (print 7))
It will print 1, 2, 3, 4, and 5, as expected. Changing the return-from to (return-from b2), it'll print 1, 2, 3, 4, 5, and 7, as one would expect.
Then I tried to turn this into a function and parametrize the label in the return-from:
(defun test-block (arg)
  (block b1
    (print 1)
    (print 2)
    (print 3)
    (block b2
      (print 4)
      (print 5)
      (return-from (eval arg))
      (print 6))
    (print 7)))
and using (test-block 'b1) to see if it works, but it doesn't. Is there a way to do this without conditionals?
Using a conditional like CASE to select a block to return from
The recommended way to do it is using case or similar. Common Lisp does not support computed returns from blocks. It also does not support computed gos.
Using a case conditional expression:
(defun test-block (arg)
  (block b1
    (print 1)
    (print 2)
    (print 3)
    (block b2
      (print 4)
      (print 5)
      (case arg
        (b1 (return-from b1))
        (b2 (return-from b2)))
      (print 6))
    (print 7)))
One can't compute lexical go tags, return blocks or local functions from names
CLTL2 says about the restriction for the go construct:
Compatibility note: The ``computed go'' feature of MacLisp is not supported. The syntax of a computed go is idiosyncratic, and the feature is not supported by Lisp Machine Lisp, NIL (New Implementation of Lisp), or Interlisp. The computed go has been infrequently used in MacLisp anyway and is easily simulated with no loss of efficiency by using a case statement each of whose clauses performs a (non-computed) go.
Since features like go and return-from are lexically scoped constructs, computing the targets is not supported. Common Lisp has no way to access lexical environments at runtime and query those. This is for example also not supported for local functions. One can't take a name and ask for a function object with that name in some lexical environment.
Dynamic alternative: CATCH and THROW
The typically less efficient and dynamically scoped alternative is catch and throw. There the tags are computed.
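For example (an added sketch mirroring the question's function):

(defun test-catch (arg)
  (catch 'b1
    (print 1)
    (catch 'b2
      (print 4)
      (throw arg nil)   ; the tag is a run-time value
      (print 6))
    (print 7)))

;; (test-catch 'b1) prints 1 and 4; (test-catch 'b2) prints 1, 4 and 7.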
I think these sorts of things boil down to the different types of namespaces, bindings and environments in Common Lisp.
One first point is that a slightly more experienced novice learning Lisp might try to modify your attempted function to say (eval (list 'return-from arg)) instead. This seems to make more sense but still does not work.
Namespaces
A common beginner mistake in a language like Scheme is having a variable called list, as this shadows the top-level definition of list as a function and stops the programmer from being able to make lists inside the scope of this binding. The corresponding mistake in Common Lisp is trying to use a symbol as a function when it is only bound as a variable.
In Common Lisp there are namespaces which are mappings from names to things. Some namespaces are:
The functions. To get the corresponding thing, either call it: (foo a b c ...), or get the function for a static symbol with (function foo) (aka #'foo), or for a dynamic symbol with (fdefinition 'foo). Function names are either symbols or lists of setf and one symbol (e.g. (setf bar)). Symbols may alternatively be bound to macros in this namespace, in which case function and fdefinition signal errors.
The variables. This maps symbols to the values in the corresponding variable. This also maps symbols to constants. Get the value of a variable by writing it down, foo, or dynamically as (symbol-value 'foo). A symbol may also be bound as a symbol-macro, in which case special macro expansion rules apply.
Go tags. This maps symbols to labels to which one can go (like goto in other languages).
Blocks. This maps symbols to places you can return from.
Catch tags. This maps objects to the places which catch them. When you throw to an object, the implementation effectively looks up the corresponding catch in this namespace and unwinds the stack to it.
classes (and structs, conditions). Every class has a name which is a symbol (so different packages may have a point class)
packages. Each package is named by a string and possibly some nicknames. This string is normally the name of a symbol and therefore usually in uppercase
types. Every type has a name which is a symbol. Naturally a class definition also defines a type.
declarations. Introduced with declare, declaim, proclaim
there might be more. These are all the ones I can think of.
The catch-tag and declarations namespaces aren’t like the others as they don’t really map symbols to things but they do have bindings and environments in the ways described below (note that I have used declarations to refer to the things that have been declared, like the optimisation policy or which variables are special, rather than the namespace in which e.g. optimize, special, and indeed declaration live which seems too small to include).
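To see several of these namespaces coexisting (an added illustration): below, the symbol x is simultaneously a block name, a go tag, a variable and a function, with no conflict.

(block x                       ; X as a block name
  (tagbody
   x                           ; X as a go tag
     (let ((x 1))              ; X as a variable
       (flet ((x () (* x 10))) ; X as a function
         (return-from x (x)))))) ; calls the function, exits the block => 10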
Now let’s talk about the different ways that this mapping may happen.
The binding of a name to a thing in a namespace is the way in which they are associated, in particular, how it may come to be and how it may be inspected.
The environment of a binding is the place where the binding lives. It says how long the binding lives for and where it may be accessed from. Environments are searched for to find the thing associated with some name in some namespace.
static and dynamic bindings
We say a binding is static if the name that is bound is fixed in the source code and a binding is dynamic if the name can be determined at run time. For example let, block and tags in a tagbody all introduce static bindings whereas catch and progv introduce dynamic bindings.
Note that my definition for dynamic binding is different from the one in the spec. The spec definition corresponds to my dynamic environment below.
Top level environment
This is the environment where names are searched for last, and it is where toplevel definitions go: for example defvar, defun and defclass operate at this level. If a function or variable binding cannot be found at a closer level, then this level is searched. References can sometimes be made to bindings at this level before they are defined, although they may signal warnings. That is, you may define a function bar which calls foo before you have defined foo. In other cases references are not allowed; for example, you can't try to intern or read a symbol foo::bar before the package FOO has been defined. Many namespaces only allow bindings in the top level environment. These are:
constants (within the variables namespace)
classes
packages
types
Although (excepting proclaim) all bindings are static, they can effectively be made dynamic by calling eval which evaluates forms at the top level.
Functions (and [compiler] macros) and special variables (and symbol macros) may also be defined top level. Declarations can be defined toplevel either statically with the macro declaim or dynamically with the function proclaim.
Dynamic environment
A dynamic environment exists for a region of time during the programs execution. In particular, a dynamic environment begins when control flow enters some (specific type of) form and ends when control flow leaves it, either by returning normally or by some nonlocal transfer of control like a return-from or go. To look up a dynamically bound name in a namespace, the currently active dynamic environments are searched (effectively, ie a real system wouldn’t be implemented this way) from most recent to oldest for that name and the first binding wins.
Special variables and catch tags are bound in dynamic environments. Catch tags are bound dynamically using catch, while special variables are bound statically using let and dynamically using progv. As we shall discuss later, let can make two different kinds of binding, and it knows to treat a symbol as special if it has been defined with defvar or defparameter or if it has been declared as special.
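For instance (an added sketch), progv binds special variables whose names are computed at run time:

(defvar *x* 0)   ; proclaims *X* special

(defun show-x () *x*)   ; reads the current dynamic binding

(progv (list '*x*) (list 42)
  (show-x))   ; => 42
(show-x)      ; => 0 again once the PROGV exits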
Lexical environment
A lexical environment corresponds to a region of source code as it is written and a specific runtime instantiation of it. It (slightly loosely) begins at an opening parenthesis and ends at the corresponding closing parenthesis, and is instantiated when control flow hits the opening parenthesis. This description is a little complicated, so let's have an example with variables, which are bound in a lexical environment (unless they are special; by convention the names of special variables are wrapped in * characters):
(defun foo ()
  (let ((x 10))
    (bar (lambda () x))))

(defun bar (f)
  (let ((x 20))
    (funcall f)))
Now what happens when we call (foo)? Well if x were bound in a dynamic environment (in foo and bar) then the anonymous function would be called in bar and the first dynamic environment with a binding for x would have it bound to 20.
But this call returns 10 because x is bound in a lexical environment so even though the anonymous function gets passed to bar, it remembers the lexical environment corresponding to the application of foo which created it and in that lexical environment, x is bound to 10. Let’s now have another example to show what I mean by ‘specific runtime instantiation’ above.
(defun baz (islast)
  (let ((x (if islast 10 20)))
    (let ((lx (lambda () x)))
      (if islast
          lx
          (frob lx (baz t))))))

(defun frob (a b)
  (list (funcall a) (funcall b)))
Now running (baz nil) will give us (20 10) because the first function passed to frob remembers the lexical environment for the outer call to baz (where islast is nil) whilst the second remembers the environment for the inner call.
For variables which are not special, let creates static lexical bindings. Block names (introduced statically by block), go tags (scoped inside a tagbody), functions (by flet or labels), macros (macrolet), and symbol macros (symbol-macrolet) are all bound statically in lexical environments. Bindings from a lambda list are also lexically bound. Declarations can be created lexically using (declare ...) in one of the allowed places or by using (locally (declare ...) ...) anywhere.
We note that all lexical bindings are static. The eval trick described above does not work because eval happens in the toplevel environment but references to lexical names happen in the lexical environment. This allows the compiler to optimise references to them to know exactly where they are without running code having to carry around a list of bindings or accessing global state (e.g. lexical variables can live in registers and the stack). It also allows the compiler to work out which bindings can escape or be captured in closures or not and optimise accordingly. The one exception is that the (symbol-)macro bindings can be dynamically inspected in a sense as all macros may take an &environment parameter which should be passed to macroexpand (and other expansion related functions) to allow the macroexpander to search the compile-time lexical environment for the macro definitions.
Another thing to note is that without lambda-expressions, lexical and dynamic environments would behave the same way. But note that if there were only a top level environment then recursion would not work as bindings would not be restored as control flow leaves their scope.
Closure
What happens to a lexical binding captured by an anonymous function when that function escapes the scope it was created in? There are two things that can happen:
Trying to access the binding results in an error
The anonymous function keeps the lexical environment alive for as long as the functions referencing it are alive and they can read and write it as they please.
The second case is called a closure and happens for functions and variables. The first case happens for control flow related bindings because you can’t return from a form that has already returned. Neither happens for macro bindings as they cannot be accessed at run time.
Nonlocal control flow
In a language like Java, control (that is, program execution) flows from one statement to the next, branching for if and switch statements and looping for others, with special statements like break and return for certain kinds of jumping. For functions, control flow goes into the function until it eventually comes out again when the function returns. The one nonlocal way to transfer control is by using throw and try/catch, where if you execute a throw then the stack is unwound piece by piece until a suitable catch is found.
In C there is no throw or try/catch, but there is goto. The structure of C programs is secretly flat, with the nesting just specifying that "blocks" end in the opposite order to the order they start. What I mean by this is that it is perfectly legal to have a while loop in the middle of a switch with cases inside the loop, and it is legal to goto the middle of a loop from outside of that loop. There is a way to do nonlocal control transfer in C: you use setjmp to save the current control state somewhere (with the return value indicating whether you have successfully saved the state or just nonlocally returned there) and longjmp to return control flow to a previously saved state. No real cleanup or freeing of memory happens as the stack unwinds, and there needn't be checks that you still have the function which called setjmp on the call stack, so the whole thing can be quite dangerous.
In Common Lisp there’s a range of ways to do nonlocal control transfer but the rules are more strict. Lisp doesn’t really have statements but rather everything is built out of a tree of expressions and so the first rule is that you can’t nonlocally transfer control into a deeper expression, you may only transfer out. Let’s look at how these different methods of control transfer work.
block and return-from
You’ve already seen how these work inside a single function but recall that I said block names are lexically scoped. So how does this interact with anonymous functions?
Well suppose you want to search some big nested data structure for something. If you were writing this function in Java or C then you might implement a special search function to recurse through your data structure until it finds the right thing and then return it all the way up. If you were implementing it in Haskell then you would probably want to do it as some kind of fold and rely on lazy evaluation to not do too much work. In Common Lisp you might have a function which applies some other function passed as a parameter to each item in the data structure. And now you can call that with a searching function. How might you get the result out? Well just return-from to the outer block.
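A sketch of that idiom (added here; map-tree and find-in-tree are invented names):

;; Apply FN to every leaf of a nested list structure.
(defun map-tree (fn tree)
  (if (consp tree)
      (progn (map-tree fn (car tree))
             (map-tree fn (cdr tree)))
      (when tree (funcall fn tree))))

;; RETURN-FROM inside the closure unwinds straight out of MAP-TREE.
(defun find-in-tree (pred tree)
  (block found
    (map-tree (lambda (leaf)
                (when (funcall pred leaf)
                  (return-from found leaf)))
              tree)
    nil))

;; (find-in-tree #'evenp '(1 (3 (4 5)) 7)) => 4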
tagbody and go
A tagbody is like a progn, except that single symbols in the body are not evaluated: they are called tags, and any expression within the tagbody can go to them to transfer control there. This is partly like goto, if you're still in the same function; but if your go expression happens inside some anonymous function then it's like a safe, lexically scoped longjmp.
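For example (an added sketch):

(defun countdown (n)
  (tagbody
   top                  ; a tag: not evaluated, just a label
     (when (plusp n)
       (print n)
       (decf n)
       (go top))))      ; transfer control back to TOP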
catch and throw
These are most similar to the Java model. The key difference between block and catch is that block uses lexical scoping and catch uses dynamic scoping. Therefore their relationship is like that between special and regular variables.
Finally
In Java one can execute code to tidy things up if the stack has to unwind through it as an exception is thrown. This is done with try/finally. The Common Lisp equivalent is called unwind-protect which ensures a form is executed however control flow may leave it.
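For instance (an added illustration):

(block b
  (unwind-protect
       (return-from b :result)
    (print "cleaning up")))   ; the cleanup runs despite the nonlocal exit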
Errors
It’s perhaps worth looking a little at how errors work in Common Lisp. Which of these methods do they use?
Well it turns out that errors do not, in general, start by unwinding the stack; they start by calling functions. First the implementation looks up all the possible restarts (ways to deal with an error) and saves them somewhere. Next it looks up all applicable handlers (a list of handlers could, for example, be stored in a special variable, as handlers have dynamic scope) and tries each one at a time. A handler is just a function, so it might return (i.e. not want to handle the error) or it might not return. A handler might not return if it invokes a restart. But restarts are just normal functions, so why might these not return? Well, restarts are created in a dynamic environment below the one where the error was raised, and so they can transfer control straight out of the handler and the code that threw the error to some code that tries to do something and then carries on. Restarts can transfer control using go or return-from. It is worth noting that lexical scope is important here. A recursive function could define a restart on each successive call, and so it is necessary to have lexical scope for variables and tags/block names so that we can make sure we transfer control to the right level on the call stack with the right state.
I am not sure I understand why older versions of Lisp implemented only dynamic scoping, not static scoping. Sussman and Guy L. Steele Jr., who invented Scheme, implemented only static scoping in Scheme.
I sometimes find static variables more convenient to use, since they can serve as perfect state holders, although we should be careful to avoid undesired name collisions as an unwanted side effect.
I know that static scoping is resolved at compile time, while dynamic scoping is resolved only at run time. And dynamic scoping is considered difficult to debug, and sometimes difficult to reason about.
If we put the above-mentioned facts aside, I am not sure that I understand WHY static scoping is often considered better than dynamic scoping.
The fundamental problem with dynamic scoping is that it is not compositional, and thereby violates abstraction. In particular, the behaviour of a piece of code (say, a function) generally depends on where it is called from, and what definitions are visible at the caller's site. Thus, a caller has to be careful not to define names that clash with (non-local) ones a callee uses. Consequently, the caller has to know the implementation details of every callee. That makes for terrible modularity. In particular, a change to the implementation of a function may break all callers.
Lexical scoping is better for the programmer because it enables them to reason better about their program. In particular, they can reason about a program from its source code (locally), not from its dynamic behavior.
Also, you can add a constrained form of dynamic scoping to a language using dynamic binding features like parameters. SRFI-39 is a general proposal for Scheme. Also see parameters in the Racket guide. This lets you get most of the benefits of dynamic scoping in a lexically scoped language (which is the better default).
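In Common Lisp the analogous facility is special variables (an added illustration, not SRFI-39 itself): dynamic scope is opt-in, per variable, in an otherwise lexically scoped language.

(defvar *depth* 0)   ; DEFVAR proclaims *DEPTH* special (dynamic)

(defun show-depth () (format t "depth = ~a~%" *depth*))

(show-depth)          ; depth = 0
(let ((*depth* 1))    ; dynamic rebinding, visible down the call chain
  (show-depth))       ; depth = 1
(show-depth)          ; depth = 0 again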
Consider the following definition:
(define (make-adder n)
  (lambda (x) (+ x n)))
Now, when you call (make-adder 2), you get back a function. What does this function do? Let's have a look:
(let ((add2 (make-adder 2)))
  (let ((n 10))
    (add2 n)))
;=> ?
What will the above code evaluate to? It depends on the scoping rules:
In a lexically-scoped Scheme, the result will be 12, since the n in the definition of make-adder is distinct from the n in the sample code.
In a (hypothetical) dynamically-scoped Scheme, the result will be 20, since n has been rebound to 10 at the time + is called.
Now, both kinds of behavior follow simple, predictable rules. But note that in the lexically-scoped case, you can look at the definition of make-adder and are able to relate the reference to n to its declaration locally, without knowing about the context in which make-adder will be used. This is not true in the dynamically-scoped case. (In fact, the argument to make-adder is completely superfluous under dynamic scoping.)
Because of this, lexical scoping is an advantage when reasoning about the behavior of make-adder, which is why it is usually (but not always) preferred.
I have seen an answer to the Stack Overflow question How does Lisp let you redefine the language itself? (answered by Noah Lavine):
Macros aren't quite a complete redefinition of the language, at least as far as I know (I'm actually a Schemer; I could be wrong), because there is a restriction. A macro can only take a single subtree of your code, and generate a single subtree to replace it. Therefore you can't write whole-program-transforming macros, as cool as that would be.
After reading this I am curious about whether there are "whole-program-transforming macros" in Lisp or Scheme (or some other language).
If not, then why not?
Is it not useful, or never required?
Could the same thing be achieved in some other way?
Is it impossible to implement, even in Lisp?
Is it possible, but never tried or implemented?
Update
One kind of use case
e.g.
As in the stumpwm code:
there are functions spread across different Lisp source files that
use a dynamic/global defvar variable *screen-list*, which is defined in primitives.lisp but used in screen.lisp, user.lisp, and window.lisp.
(Here each file has the functions, classes, and variables related to one aspect or object.)
Now I want to define these functions inside a closure where the *screen-list* variable is made available by a let form rather than being a dynamic/global variable, but without moving all these functions into one place (because I do not want these functions to lose their place in their related files), so that this variable will be accessible only to these functions.
The above example applies equally to labels and flet, so it would further be possible to arrange that only the required variables and functions are available to the code that requires them.
Note: one way might be to implement and use some macro defun_with_context for defun, whose first argument is a context in which let and flet variables are defined. But apart from that, could it be achieved by a reader macro, as Vatine and Gareth Rees answered?
You quoted Noah Lavine as saying:
A macro can only take a single subtree of your code, and generate a single subtree to replace it
This is the case for ordinary macros, but reader macros get access to the input stream and can do whatever they like with it.
See the Hyperspec section 2.2 and the set-macro-character function.
In Racket, you can implement whole-program-transforming macros. See the section in the documentation about defining new languages. There are many examples of this in Racket, for example the lazy language and Typed Racket.
Off the top of my head, a few approaches:
First, you can. Norvig points out that:
We can write a compiler as a set of macros.
so you can transform an entire program, if you want to. I've only seen it done rarely, because typically the intersection between "things you want to do to every part of your program" and "things that you need macro/AST-type transformations for" is a pretty small set. One example is Parenscript, which transforms your Lisp code ("an extended subset of CL") into Javascript. I've used it to compile entire files of Lisp code into Javascript which is served directly to web clients. It's not my favorite environment, but it does what it advertises.
Another related feature is "advice", which Yegge describes as:
Great systems also have advice. There's no universally accepted name for this feature. Sometimes it's called hooks, or filters, or aspect-oriented programming. As far as I know, Lisp had it first, and it's called advice in Lisp. Advice is a mini-framework that provides before, around, and after hooks by which you can programmatically modify the behavior of some action or function call in the system.
Another is special variables. Typically macros (and other constructs) apply to lexical scope. By declaring a variable to be special, you're telling it to apply to dynamic scope (I think of it as "temporal scope"). I can't think of any other language that lets you (the programmer) choose between these two. And, apart from the compiler case, these two really span the space that I'm interested in as a programmer.
A typical approach is to write your own module system. If you just want access to all the code, you can have some sort of pre-processor or reader extension wrap source files with your own module annotation. If you then write your own require or import form, you will ultimately be able to see all the code in scope.
To get started, you could write your own module form that lets you define several functions which you then compile in some clever way before emitting optimized code.
There's always the choice of using compiler macros (they can do whole-function transformation based on a slew of criteria, but shouldn't change the value returned, as that would be confusing).
There are reader macros; they transform the input "as it is read" (or "before it is read", if you prefer). I haven't done much large-scale reader-macro hacking, but I have written some code to allow elisp source to be (mostly) read in Common Lisp, with quite a few subtle differences in syntactic sugar between the two.
I believe those sorts of macros are called code-walking macros. I haven't implemented a code walker myself, so I am not familiar with the limits.
In Common Lisp, at least, you may wrap top-level forms in PROGN and they still retain their status as top-level forms (see CLTL2, section 5.3). Therefore, the limitation of a macro generating a single subtree is not much of a limitation, since it could wrap any number of resulting subtrees within PROGN. This makes whole-program macros quite possible.
E.g.
(my-whole-program-macro ...)
which expands to:
(progn
  (load-system ...)
  (defvar ...)
  (defconstant ...)
  (defmacro ...)
  (defclass ...)
  (defstruct ...)
  (defun ...)
  (defun ...)
  ...)