All Lisp developers seem to know what an S-Expression is. But can anybody explain this for non Lisp developers?
There is already a Wikipedia entry (https://en.wikipedia.org/wiki/S-expression). But that is not really helpful if you don't want to go deeply into detail.
What is an S-expression? What can I express with an S-expression? For what purposes does Lisp normally use S-expressions? Are S-expressions only relevant to Lisp developers?
The term S-expression refers to the printed form(s) of a Lisp object. For instance, the integer zero object can appear as a written S-expression like 0, 000, or #x0. The text (0 . 1) is the S-expression denoting a cons cell object whose fields are the integers zero and one. In Common Lisp, under the default read table, the tokens Foo, fOO, FOO, |FOO| and foo, are all S-expressions denoting the same symbol. They are different read syntax, equivalent through their semantics of denoting the same object.
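For instance, at a Common Lisp REPL you can check that several different printed spellings denote the very same object (results shown in comments):

```lisp
;; Different S-expressions (read syntax) can denote one object.
(eq 'Foo '|FOO|)             ; => T   (the default readtable upcases Foo)
(eql #x10 16)                ; => T   (hex and decimal syntax, same integer)
(equal '(0 . 1) (cons 0 1))  ; => T   (printed and constructed cons)
```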
Why don't we just call those things expressions? Firstly, there are times when we do, when it is clear from the context that we are speaking about character syntax.
The term expression is ambiguous for that reason: it can sometimes refer to a textual, printed expression that, for instance, someone typed into a text file or interactive listener. Most of the time, expression refers to a Lisp object in memory representing code.
We could say printed expression instead of S-expression, but the term is historically entrenched, dating back to a time when Lisp also had M-expressions. Plus, printed expression would only have the same meaning as S-expression when we know we are already talking about nothing but Lisp. The term S-expression in a context outside of Lisp means something like "one of the printed object notations coming from the Lisp family, with symbols written without quotes, and nested lists with parentheses in which items are separated by only whitespace."
Note that the ANSI Common Lisp standard doesn't use the terms S-expression or symbolic expression. No such terms appear in the Glossary, only expression, which is defined like this:
expression n. 1. an object, often used to emphasize the use of the object to encode or represent information in a specialized format, such as program text. "The second expression in a let form is a list of bindings." 2. the textual notation used to notate an object in a source file. "The expression 'sample is equivalent to (quote sample)."
S-expression is more or less the (2) meaning, with historic ties and a broader interpretation outside of any one Lisp dialect. For instance, Ron Rivest, perhaps best known as one of the authors of the RSA cryptosystem, wrote an Internet Draft describing a form of S-expressions for data exchange.
An S-expression is the fundamental unit of storage in Lisp. By the original definition, an S-expression is one of two things.
An atom, or
a cons cell
An atom is the base case. In classical Lisp (the original language proposed by John McCarthy), an atom is just a distinct unit, that we conventionally designate with a name. Conceptually, you can think of it as a string, even though that's not how any modern Lisp would store it internally. So foobar is an atom, and so is potato. They're just strings that are atomic, in the sense that they don't recursively contain any more S-expressions.
Note that modern Lisp dialects extend the definition of "atom" to include things like numbers, so in Common Lisp, 1.0 would be a valid atom which represents a number.
A cons cell is the fundamental unit of composition in Lisp. A cons cell is a structure that points to two other S-expressions. We call the first of these S-expressions the car and the second the cdr. These names are archaic and were originally references to how cons cells were stored on old computers, but Lisp programmers today still use them. You'll hear some people refer to the car as the "first" or "head", and you'll hear some people refer to the cdr as the "tail" or "rest". (Try not to refer to the cdr as the "second" term, as that's ambiguous and could be interpreted as something else, which we'll talk about in a minute)
Now, we write cons cells in parentheses with a dot between them. So a cons cell where the car and cdr are both atoms would look like
(foo . bar)
This is a cons cell whose car is the atom foo and whose cdr is the atom bar. We can also nest cons cells.
((foo . bar) . (baz . potato))
And then we end up with a sort of binary-tree-like structure, where each branch has a left and a right (a car and a cdr, in our terminology), and each leaf is an atom.
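In Common Lisp, cons builds a cell and car and cdr read its two fields:

```lisp
;; CONS builds a cell; CAR and CDR read its two fields.
(cons 'foo 'bar)          ; => (FOO . BAR)
(car (cons 'foo 'bar))    ; => FOO
(cdr (cons 'foo 'bar))    ; => BAR
```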
So what can we do with this? Well, for one, we can store linked lists. There are several ways to do this, but the prevailing convention in the Lisp community is to use the car to store the current value and the cdr to store the cons cell pointing to the rest of the list. Then, when we reach the end of the list (where we might store a null pointer if we were doing this in C or Java), we pick out a particular atom, called NIL. There's nothing special about the NIL atom in the definition above; we just picked one out because we needed one to use as a convention.
So to represent the list [a, b, c, d], we would store it as
(a . (b . (c . (d . NIL))))
The car of the outermost cons cell is the first element of the list, or a. The cdr stores the rest of the list. The car of the cdr is the second element, b, and so on. (This is why I said not to call the cdr the "second" element, as "second" is often used to mean "car of cdr")
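Common Lisp even has an accessor named second that does exactly "car of cdr", which is why reserving "second" for that meaning matters:

```lisp
(let ((lst '(a . (b . (c . (d . NIL))))))
  (car lst)          ; => A   (the first element)
  (car (cdr lst))    ; => B   ("car of cdr": the second element)
  (second lst))      ; => B   (Common Lisp's name for exactly that)
```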
In fact, we do this so often that there's another notational convention in Lisp. If the cdr is another cons cell, then we simply drop the . and the parentheses and understand what it means. So, in general, we say that the following two are equivalent, for any S-expressions a, b, and c.
(a . (b . c)) === (a b . c)
Again, I haven't changed the definition. There's still only two valid S-expressions: atoms and cons cells. I've just invented a more compact way to write them.
Likewise, since we're going to be using NIL a lot to end lists, we simply drop it. If we have a NIL as the cdr of a cons cell, then by convention we remove the . and the NIL. So the following are equivalent for any S-expression a.
(a . NIL) === (a)
Again, I'm just inventing new, compact ways to write things, not changing the definitions.
Finally, as a notational convenience, we might sometimes write the atom NIL as a pair of empty parentheses, since it's supposed to look like the empty list.
NIL === ()
Now, looking at our list from earlier
(a . (b . (c . (d . NIL))))
we can use these rules to simplify it
(a . (b . (c . (d . NIL))))
(a b . (c . (d . NIL)))
(a b c . (d . NIL))
(a b c d . NIL)
(a b c d)
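You can verify at any Common Lisp REPL that these are merely different spellings of the same object; the reader produces identical structure for each:

```lisp
;; The reader treats every spelling as the same object:
(equal '(a . (b . (c . (d . NIL)))) '(a b c d))  ; => T
(equal '(a . NIL) '(a))                          ; => T
(eq 'NIL '())                                    ; => T
```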
And now this looks remarkably like Lisp syntax. And that's the beauty of S-expressions. The Lisp code you're writing is just a bunch of S-expressions. For example, consider the following Lisp code
(mapcar (lambda (x) (+ x 1)) my-list)
This is ordinary Lisp code, the kind you would see in any everyday program. In Common Lisp, it adds one to each element of my-list. But the beauty is that it's just a big S-expression. If we remove all of the syntax sugar, we get
(mapcar . ((lambda . ((x . NIL) . ((+ . (x . (1 . NIL))) . NIL))) . (my-list . NIL)))
Not pretty, at least aesthetically, but now it's easier to see how this really is just a bunch of cons cells terminated in atoms. Your entire Lisp syntax tree is just that: a binary tree full of code. And you can manipulate it as such. You can write macros that take this tree, as a data structure, and do whatever on earth they want with it. The abstract syntax tree of your Lisp program isn't some opaque construct internal to the language; it's just a tree: an incredibly simple data structure that you already use in everyday programming anyway. The same lists and other structures that you use to store data in your Lisp program are also used to store code.
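To see this concretely, you can quote the code and pick it apart with the same car/cdr machinery used for any other data:

```lisp
;; Code is a tree of conses we can inspect like any other data:
(let ((code '(mapcar (lambda (x) (+ x 1)) my-list)))
  (car code)            ; => MAPCAR
  (car (second code)))  ; => LAMBDA
```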
Modern Lisp dialects extend this with new conventions and, in some cases, new data types. Common Lisp, for instance, adds an array type, so #(1 2 3 4 5) is an array of five elements. It's not a linked list (since, in practice, linked lists are slow for random access), it's something else entirely. Likewise, Lisp dialects add new conventions on top of the NIL ones we've already discussed. In most Lisp dialects, the apostrophe, or single quote, is used to represent a call to the quote special form. That is,
'x === (quote x) === (quote . (x . NIL))
For any S-expression x. Different dialects add different features to the original McCarthy definition, but the core question remains the same: what is the absolute minimum definition we need to be able to comfortably store the code and data of our Lisp program?
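This quote equivalence is easy to check: ''x reads as a two-element list whose head is the symbol quote, which is in turn just a pair of cons cells.

```lisp
(equal ''x '(quote x))                   ; => T
(equal ''x (cons 'quote (cons 'x NIL)))  ; => T
```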
The other answers are very Lisp-specific, but actually S-expressions are useful outside of the Lisp world.
An S-expression is a (convenient) way of representing a tree whose leaves are symbols (names, strings, call them what you like). Each parenthesized part of an S-expression is a node, containing exactly the list of its children.
Example: (this (s expression) (could (be represented)) as (this tree))
    [*]
     ├─ this
     ├─ [*]
     │   ├─ s
     │   └─ expression
     ├─ [*]
     │   ├─ could
     │   └─ [*]
     │       ├─ be
     │       └─ represented
     ├─ as
     └─ [*]
         ├─ this
         └─ tree

(each [*] is an internal node; the leaves are the symbols)
In Lisp, the tree represented by S-expression corresponds to the Concrete Syntax Tree, which is why Lisp is so easy to parse.
However, since this representation of trees is convenient (it's relatively compact, very human-friendly, and straightforward both to parse and to produce for a machine), it's also used in other contexts. For instance, OCaml's Core library (which is an alternative standard library for that language) provides serialization and deserialization as S-expressions.
Besides this, Lisp also names some of its data structures S-expressions. This goes well with Lisp's homoiconicity, that is, the fact that the code can be manipulated pretty much like data.
So, to answer your questions:
S-expressions are both a syntactic way to represent trees and a tree data structure in Lisp.
With S-expressions you can express trees; the meaning you attach to the tree (its interpretation, if you will) is not specific to S-expressions. S-expression tell you how to write a tree, not what it means — and, in fact, people use them for different purposes, with different meanings.
Lisp uses S-expressions both to represent its own source code, to print values and as a data structure, recursively built from nil and cons (the exact details of all of this vary a lot between all the Lisp dialects).
S-expressions are not only relevant to Lisp developers; see for example the OCaml serializing / deserializing library Sexp. In practice, other ways of representing data, some with stronger typing, are more commonly used where S-expressions could be, such as JSON.
s-expressions are short for Symbolic Expressions.
Basically they are Symbols and nested lists of Symbols.
A Symbol is made of alphanumeric characters.
Examples for symbols and nested lists of symbols:
foo
berlin
fruit
de32211
(apple peach)
(fruit (seller fruit-co))
((apple one) (peach two))
These lists are made of cons cells, written as (one . two), with nil as the empty list.
Examples:
(a . (b . nil)) -> (a b)
((a . nil) (b . nil)) -> ((a) (b))
The programming language Lisp (short for List Processor) was designed to process these lists. Lisp contains all kinds of basic operations for dealing with nested lists. In Lisp, the elements of s-expressions can also be numbers, characters, strings, arrays and other data structures.
Symbolic Expressions have the same purpose as JSON and XML: they encode data.
Symbolic Expressions in Lisp also have the purpose to encode Lisp programs themselves.
Example:
((lambda (a b)
(+ a (* 2 b)))
10
20)
Above is both an s-expression and a valid Common Lisp / Scheme program.
Symbolic Expressions were thought of as a universal notation for humans and machines to read/write/process all kinds of data in some calculus.
For example, s-expressions could encode a mathematical formula, a Lisp program, a logic expression or data about the configuration of a planning problem. What was missing at the time was a way to declaratively describe valid data schemas; s-expressions were typically processed and validated procedurally.
How are s-expressions used in Lisp?
for encoding source code
for all kinds of data
mixed source code and data
Are S-Expressions only relevant for Lisp developers?
Mostly, but sometimes code or data exists in the form of s-expressions and programs written in languages other than Lisp want to process it. Sometimes developers not using Lisp even choose s-expressions as a data representation format.
Generally the usage of s-expressions outside of Lisp is rare. Still, there are a few examples. XML and JSON became much more popular than s-expressions.
Related
In Lisps that have vectors, why are cons cells still necessary? As I understand it, a cons cell is:
A structure with exactly 2 elements
Ordered
Access is O(1)
All these also apply to a 2-vector, though. So what's the difference? Are cons cells just a vestige from before Lisps had vectors? Or are there other differences I'm unaware of?
Although, physically, conses resemble any other two-element aggregate structure, they are not simply an obsolete form of a 2-vector.
Firstly, all types in Lisp are partitioned into cons and atom. Only conses are of type cons; everything else is an atom. A vector is an atom!
Conses form the representational basis for nested lists, which of course are used to write code. They have a special printed notation, such that the object produced by (cons 1 (cons 2 nil)) conveniently prints as (1 2) and the object produced by (cons 1 (cons 2 3)) prints as (1 2 . 3).
The cons versus atom distinction is important in the syntax, because an expression which satisfies the consp test is treated as a compound form, whereas non-symbol atoms evaluate to themselves (and symbols are treated as variables, except for keyword symbols, t and nil, which evaluate to themselves).
To get the list itself instead of the value of the compound form, we use quote, for which we have a nice shorthand.
It's useful to have a vector type which is free from being entangled into the evaluation semantics this way: whose instances are just self-evaluating atoms.
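The cons/atom partition is directly observable at the REPL; note in particular that a vector is an atom and self-evaluates:

```lisp
(consp (cons 1 2))   ; => T
(consp #(1 2))       ; => NIL  (a vector is an atom)
(atom #(1 2))        ; => T
#(1 2)               ; => #(1 2)  (self-evaluating, not a compound form)
```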
Cons cells are not a vestige from before Lisps had vectors. Firstly, there was almost no such time. The Lisp 1 manual from 1960 already describes arrays. Secondly, new dialects since then still have conses.
Objects that have a similar representation are not simply redundant for each other. Type distinctions are important. For instance, we would not consider the following two to be redundant for each other just because they both have three slots:
(defstruct name first initial last)
(defstruct bank-transaction account type amount)
In the TXR Lisp dialect, I once had it so that the syntactic sugar a..b denoted (cons a b) for ranges. But this meant that ranges were consp, which was silly due to the ambiguity against lists. I eventually changed it so that a..b denotes (rcons a b): a form which constructs a range object, which prints as #R(x y) (and can be specified that way as a literal). This creates a useful nuance because we can distinguish whether a function argument is a range (rangep) or a list (consp), just as we care whether some object is a bank-transaction or a name. Range objects are represented exactly like conses and allocated from the same heaps; they just have a different type, which makes them atoms. If evaluated as forms, they evaluate to themselves.
Basically, we must regard type as if it were an extra slot. A two-element vector really has (at least) three properties, not just two: a first element, a second element and a type. The vector #(1 1) differs from the cons cell (1 . 1) in exactly this third aspect: the type is not the same.
The immutable properties of an object which it shares with all other objects of its kind can still be regarded as "slots". Effectively, all objects have a "type slot". So conses are actually three-property objects having a car, cdr and type:
(car '(a . b)) -> A
(cdr '(a . b)) -> B
(type-of '(a . b)) -> CONS
Here is a fourth "slot":
(class-of '(a . b)) -> #<BUILT-IN-CLASS CONS>
We can't look at objects in terms of just their per-instance storage vector allocated on the heap.
By the way, the 1960s MacLisp dialect extended the concept of a cons into fixed-size aggregate objects that had more named fields in addition to the car and cdr: the cxr-s. These objects were called "hunks" and are documented in Kent Pitman's MacLisp manual. Hunks do not satisfy the predicate consp, but hunkp; i.e. they are considered atoms. However, they extend the cons notation with multiple dots.
In a typical Common Lisp implementation, a cons cell will be represented as "two machine words" (one for the car pointer, one for the cdr pointer; the fact that it's a cons cell is encoded in the pointer constructed to reference it). However, arrays are more complicated objects, and unless you have a dedicated "two-element-only vector of type T", you'd end up with an array header, containing type information and size, in addition to the storage needed for the elements (probably hard to squeeze into less than "four machine words").
So while it would be eminently possible to use two-element vectors/arrays as cons cells, there's some efficiency to be had by using a dedicated type, based on the fact that cons cells and lists are so frequently used in existing Lisp code.
I think that they are different data structures; for example, Java has vector and list classes. One is suitable for random access, while lists are more suitable for sequential access. So in any language vectors and lists can coexist.
As for implementing a Lisp using your approach, I believe it is possible; it depends on your implementation details, but for ANSI Common Lisp there is a convention, because there is no primitive list datatype:
CL-USER> (type-of (list 1 2 3))
CONS
This is a CONS, and the convention says something similar to this (looking at the Common Lisp HyperSpec):
list n.
1. a chain of conses in which the car of each cons is an element of the list, and the cdr of each cons is either the next link in the
chain or a terminating atom. See also proper list, dotted list, or
circular list.
2. the type that is the union of null and cons.
So if you create a Lisp using vectors instead of conses, it will not be ANSI CL.
So you can create lists by "consing" things; nil is a list, and there are different types of lists that you can create with consing:
normally you create a proper list:
(list 1 2 3) = (cons 1 (cons 2 (cons 3 nil))) = '(1 2 3)
When the list does not end with nil, it is a dotted list; a circular list has a reference to itself.
So, for example, if we create a string, Common Lisp implements it as a simple-array, which is faster for random access than a list:
CL-USER> (type-of "this is a string")
(SIMPLE-ARRAY CHARACTER (16))
Land of Lisp (a great book about Common Lisp) describes cons cells as the glue for building Common Lisp data and for processing lists, so of course if you replace cons with something similar you will build something similar to Common Lisp.
Finally, the Common Lisp sequence types form a tree; the complete type hierarchy can be found in the HyperSpec.
Are cons cells just a vestige from before Lisps had vectors?
Exactly. cons, car and cdr were the constructor and accessors of the only compound data structure in the first Lisp. To distinguish conses from the only atomic type, symbols, there was the predicate atom, which was true for symbols but false for conses. This was extended to other types in Lisp 1.5, including vectors (called arrays; see page 35). Common Lisp was a combination of commercial Lisps that all built upon Lisp 1.5. Perhaps things would have been different if both types had existed from the beginning.
If you were to make a Common Lisp implementation, you wouldn't need two different ways to make them, as long as your implementation works according to the spec. If I remember correctly, Racket actually implements struct with vectors, and vector? is overloaded to return #f for those vectors that really represent an object. In CL you could implement defstruct the same way, then implement a cons struct and the functions it needs to be compatible with the HyperSpec. You might be using vectors when you create conses in your favorite implementation without even knowing it.
Historically you still have the old functions, so that John McCarthy's code still works even 58 years after the first Lisp. It didn't have to be that way, but it doesn't hurt to have a little legacy in a language that had features modern languages are only getting today.
If you used two-element vectors you would store their size (and type) in every node of the list.
This is ridiculously wasteful.
You can get around this wastefulness by introducing a special 2-element vector type whose elements can be anything.
Or in other words: by re-introducing the cons cell.
On one hand this is an "implementation detail": given vectors, one can implement cons cells (and thus linked lists) using vectors of length 2.
On the other hand this is a fairly important detail: the ANSI Common Lisp standard specifies that the types vector and cons are disjoint, so, in fact, you cannot use the trick to implement an ANSI CL.
I'm trying to emulate Lisp-like list in JavaScript (just an exercise with no practical reason), but I'm struggling to figure out how to best represent an empty list.
Is an empty list just a nil value, or is it stored in a cons cell under the hood?
I can:
(car '())
NIL
(cdr '())
NIL
but an empty list for sure can not be (cons nil nil), because it would be indistinguishable from a list storing a single nil. It would need to store some other special value.
On the other hand, if an empty list is not built from a cons cell, it seems impossible to have a consistent high-level interface for appending a single value to an existing list. A function like:
(defun append-value (list value) ...
Would modify its argument, but only if it is not an empty list, which seems ugly.
Believe it or not, this is actually a religious question.
There are dialects that people dare to refer to as some kind of Lisp in which empty lists are conses or aggregate objects of some kind, rather than just an atom like nil.
For example, in "MatzLisp" (better known as Ruby) lists are actually arrays.
In NewLisp, lists are containers: objects of list type which contain a linked list of the items, so empty lists are empty containers.
In Lisp languages that aren't spectacular cluster-fumbles of this sort, empty lists are atoms, and non-empty lists are binary cells with a field which holds the first item, and another field that holds the rest of the list. Lists can share suffixes. Given a list like (1 2 3) we can use cons to create (a 1 2 3) and (b c 1 2 3) both of which share the storage for (1 2 3).
(In ANSI Common Lisp, the empty list atom () is the same object as the symbol nil, which evaluates to itself and also serves as Boolean false. In Scheme, () isn't a symbol, and is distinct from the Boolean false #f object. However Scheme lists are still made up of pairs, and terminated by an atom.)
The ability to evaluate (car nil) does not automatically follow from the cons-and-nil representation of lists, and if we look at ancient Lisp documentation, such as the Lisp 1.5 manual from early 1960-something, we will find that this was absent. Initially, car was strictly a way to access a field of the cons cell, and required strictly a cons cell argument.
Good ideas like allowing (car nil) to Just Work (so that hackers could trim many useless lines of code from their programs) didn't appear overnight. The idea of allowing (car nil) may have appeared in InterLisp. In any case, the Evolution of Lisp paper claims that MacLisp (one of the important predecessors of Common Lisp, unrelated to the Apple Macintosh, which came twenty years later) imitated this feature from InterLisp (another one of the significant predecessors).
Little details like this make the difference between pleasant programming and swearing at the monitor: see for instance A Short Ballad Dedicated to the Growth of Programs inspired by one Lisp programmer's struggle with a bletcherous dialect in which empty lists cannot be accessed with car, and do not serve as a boolean false.
An empty list is simply the nil symbol (and symbols, by definition, are not conses). car and cdr are defined to return nil if given nil.
As for list-mutation functions, they return a value that you are supposed to reassign to your variable. For example, look at the specification for the nreverse function: it may modify the given list, or not, and you are supposed to use the return value, and not rely on it to be modified in-place.
Even nconc, the quintessential destructive-append function, works that way: its return value is the appended list that you're supposed to use. It is specified to modify the given lists (except the last one) in-place, but if you give it nil as the first argument, it can't very well modify that, so you still have to use the return value.
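A short REPL sketch of that discipline: reassign the variable from the return value, and note that nconc on nil has nothing to modify in place.

```lisp
;; Always use the return value of destructive list functions:
(let ((head (list 1 2 3)))
  (setf head (nconc head (list 4)))
  head)                     ; => (1 2 3 4)

(nconc nil (list 'a))       ; => (A)  -- nil itself cannot be modified
```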
NIL is somewhat a strange beast in Common Lisp because
it's a symbol (meaning that symbolp returns T)
is a list
is NOT a cons cell (consp returns NIL)
you can take CAR and CDR of it anyway
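All four properties can be checked directly:

```lisp
(symbolp nil)  ; => T    (it's a symbol)
(listp nil)    ; => T    (it's a list)
(consp nil)    ; => NIL  (but not a cons cell)
(car nil)      ; => NIL  (CAR and CDR work anyway)
(cdr nil)      ; => NIL
```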
Note that the reasons behind this are probably also historical and you shouldn't think that this is the only reasonable solution. Other Lisp dialects made different choices.
Try it with your Lisp interpreter:
(eq nil '())
=> t
Several operations are special-cased to do unorthogonal (or even curious :-) things when operating on nil / an empty list. The behavior of car and cdr you were investigating is one of those things.
The identity of nil as the empty list is one of the first things you learn about Lisp. I tried to come up with a good Google hit, but I'll just pick one because there are so many: http://www.cs.sfu.ca/CourseCentral/310/pwfong/Lisp/1/tutorial1.html
Is it good style to use cons for pairs of things or would it be preferable to stick to lists?
like for instance questions and answers:
(list
(cons
"Favorite color?"
"red")
(cons
"Favorite number?"
"123")
(cons
"Favorite fruit?"
"avocado"))
I mean, some things come naturally in pairs; there is no need for something that can hold more than two, so I feel like cons would be the natural choice. However, I also feel like I should be sticking to one thing (lists).
What would be the better or more accepted style?
What you have there is an association list (alist). Alist entries are, indeed, often simple conses rather than lists (though that is a matter of preference: some people use lists for alist entries too), so what you have is fine. Though, I usually prefer to use literal syntax:
'(("Favorite color?" . "red")
("Favorite number?" . "123")
("Favorite fruit?" . "avocado"))
Alists usually use a symbol as the key, because symbols are interned, and so symbol alists can be looked up using assq instead of assoc. Here's how it might look:
'((color . "red")
(number . "123")
(fruit . "avocado"))
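In Common Lisp (where the standard lookup function is assoc rather than assq), looking up an entry in such an alist is a one-liner; assoc returns the whole entry cons and cdr extracts the value:

```lisp
(let ((answers '((color . "red") (number . "123") (fruit . "avocado"))))
  (cdr (assoc 'color answers)))   ; => "red"
```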
The default data structure for such a case should be a HASH-TABLE.
An association list of cons pairs is also a possible variant and was widely used historically. It is a valid variant, because of tradition and simplicity. But you should not use it when the number of pairs exceeds several (10 is probably a good threshold), because search time is linear, while in a hash-table it is constant.
Using a list for this task is also possible, but will be both ugly and inefficient.
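A sketch of the hash-table variant with the string keys from the question (note the :test #'equal, needed so that string keys compare by contents):

```lisp
(let ((table (make-hash-table :test #'equal)))  ; EQUAL test for string keys
  (setf (gethash "Favorite color?" table) "red")
  (setf (gethash "Favorite fruit?" table) "avocado")
  (gethash "Favorite color?" table))            ; => "red", T
```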
You would need to decide for yourself based upon circumstances. There isn't a universal answer. Different tasks work differently with structures. Consider the following:
It is faster to search for keys in a hash-table than in an alist.
It is easier to have an iterator and save its state when working with an alist (a hash-table would need to export all of its keys as an array or a list and keep a pointer into that list, while it is enough to remember just a pointer into the alist to be able to restore the iterator's state and continue the iteration).
Alist vs plain list: they use the same number of conses for an even number of elements, given all other elements are atoms. When using a plain list instead of an alist, you would thus have to make sure there isn't an odd number of elements (and you may discover the problem too late), which is bad.
But there are a lot more functions, including built-in ones, which work on proper lists and don't work on alists. For example, nth will signal an error on an alist if it hits a cdr which is not a list.
Sometimes certain macros will not function as you'd like them to with alists; for example, this:
(destructuring-bind (a b c d)
'((100 . 200) (300 . 400))
(format t "~&~{~s~^,~}" (list a b c d)))
will not work as you might've expected.
On the other hand, certain procedures may be "tricked" into doing something which they don't do for proper lists. For instance, when copying an alist with copy-list, only the conses, whose cdr is a list will be copied anew (depending upon the circumstances this may be a desired result).
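A sketch of that sharing behavior: copy-list copies only the top-level conses, so the entry conses are shared, and mutating an entry of the copy is visible through the original.

```lisp
(let* ((alist (list (cons 'color "red")))
       (copy  (copy-list alist)))
  ;; COPY-LIST copies the top-level conses only; the entry
  ;; conses are shared between ALIST and COPY:
  (setf (cdr (first copy)) "blue")
  (cdr (first alist)))   ; => "blue"
```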
I'm working my way through Graham's book "On Lisp" and can't understand the following example at page 37:
If we define exclaim so that its return value
incorporates a quoted list,
(defun exclaim (expression)
  (append expression '(oh my)))
> (exclaim '(lions and tigers and bears))
(LIONS AND TIGERS AND BEARS OH MY)
> (nconc * '(goodness))
(LIONS AND TIGERS AND BEARS OH MY GOODNESS)
could alter the list within the function:
> (exclaim '(fixnums and bignums and floats))
(FIXNUMS AND BIGNUMS AND FLOATS OH MY GOODNESS)
To make exclaim proof against such problems, it should be written:
(defun exclaim (expression)
  (append expression (list 'oh 'my)))
Does anyone understand what's going on here? This is seriously screwing with my mental model of what quoting does.
nconc is a destructive operation that alters its first argument by changing its tail. In this case, it means that the constant list '(oh my) gets a new tail.
To hopefully make this clearer. It's a bit like this:
; Hidden variable inside exclaim
oh_my = oh → my → nil
(exclaim '(lions and tigers and bears)) =
lions → and → tigers → and → bears → oh_my
(nconc * '(goodness)) destructively appends goodness to the last result:
lions → and → tigers → and → bears → oh → my → goodness → nil
so now, oh_my = oh → my → goodness → nil
Replacing '(oh my) with (list 'oh 'my) fixes this because there is no longer a constant being shared by all and sundry. Each call to exclaim generates a new list (the list function's purpose in life is to create brand new lists).
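A sketch of why the fix works: with (list 'oh 'my), each call conses a fresh tail, so destructively extending one result cannot leak into the result of another call.

```lisp
(defun exclaim (expression)
  (append expression (list 'oh 'my)))   ; fresh tail on every call

(let ((a (exclaim (list 'lions)))
      (b (exclaim (list 'tigers))))
  (nconc a (list 'goodness))  ; destructively extend A only
  b)                          ; => (TIGERS OH MY), unaffected
```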
The observation that your mental model of quoting may be flawed is an excellent one—although it may or may not apply depending on what that mental model is.
First, remember that there are various stages to program execution. A Lisp environment must first read the program text into data structures (lists, symbols, and various literal data such as strings and numbers). Next, it may or may not compile those data structures into machine code or some sort of intermediary format. Finally, the resulting code is evaluated (in the case of machine code, of course, this may simply mean jumping to the appropriate address).
Let's put the issue of compilation aside for now and focus on the reading and evaluation stages, assuming (for simplicity) that the evaluator's input is the list of data structures read by the reader.
Consider a form (QUOTE x) where x is some textual representation of an object. This may be a symbol literal as in (QUOTE ABC), a list literal as in (QUOTE (A B C)), a string literal as in (QUOTE "abc"), or any other kind of literal. In the reading stage, the reader will read the form as a list (call it form1) whose first element is the symbol QUOTE and whose second element is the object x' whose textual representation is x. Note that I'm specifically saying that the object x' is stored within the list that represents the expression, i.e. in a sense, it's stored as a part of the code itself.
Now it's the evaluator's turn. The evaluator's input is form1, which is a list. So it looks at the first element of form1, and, having determined that it is the symbol QUOTE, it returns as the result of the evaluation the second element of the list. This is the crucial point. The evaluator returns the second element of the list to be evaluated, which is what the reader read in in the first execution stage (prior to compilation!). That's all it does. There's no magic to it, it's very simple, and significantly, no new objects are created and no existing ones are copied.
Therefore, whenever you modify a “quoted list”, you're modifying the code itself. Self-modifying code is a very confusing thing, and in this case, the behaviour is actually undefined (because ANSI Common Lisp permits implementations to put code in read-only memory).
Of course, the above is merely a mental model. Implementations are free to implement the model in various ways, and in fact, I know of no implementation of Common Lisp that, like my explanation, does no compilation at all. Still, this is the basic idea.
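For readers who don't know Lisp, here is a minimal Python sketch of that mental model (hypothetical, nowhere near a real Lisp): code is represented as nested lists, and evaluating a QUOTE form returns the very object stored inside the code, with no copying.

```python
# Sketch of the read/evaluate model above (hypothetical, not a real Lisp).
# Code is nested Python lists; symbols are strings.

def evaluate(form):
    """Evaluate a form; (QUOTE x) returns the stored object x itself."""
    if isinstance(form, list) and form and form[0] == "QUOTE":
        return form[1]          # no copy: the object that lives inside the code
    raise NotImplementedError("only QUOTE is sketched here")

code = ["QUOTE", ["A", "B", "C"]]   # what the reader built from (QUOTE (A B C))
result = evaluate(code)

print(result is code[1])            # True: the same object as in the code
result.append("D")                  # "modifying a quoted list"...
print(code)                         # ...changed the code itself
```

This is why mutating a quoted list is self-modifying code: the evaluator handed you a piece of the program's own data structure.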
In Common Lisp.
Remember:
'(1 2 3 4)
Above is a literal list. Constant data.
(list 1 2 3 4)
LIST is a function that, when called, returns a fresh new list with its arguments as list elements.
Avoid modifying literal lists. The effects are not standardized. Imagine a Lisp that compiles all constant data into a read only memory area. Imagine a Lisp that takes constant lists and shares them across functions.
(defun a () '(1 2 3))
(defun b () '(1 2 3))
A Lisp compiler may create one list that is shared by both functions.
If you modify the list returned by function a
it might not be changed
it might be changed
it might be an error
it might also change the list returned by function b
Implementations have the freedom to do what they like. This leaves room for optimizations.
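Python's closest everyday analogue to this hazard is the mutable default argument: like a literal list in compiled Lisp code, the default is created once and then shared by every call. A small sketch (function names hypothetical):

```python
def remember(item, seen=[]):      # the default list is built ONCE, at def time
    seen.append(item)             # ...and mutated in place on every call
    return seen

print(remember("oh"))             # ['oh']
print(remember("my"))             # ['oh', 'my'] -- state leaked between calls

def remember_fixed(item, seen=None):
    if seen is None:
        seen = []                 # a fresh list per call, like (list 'oh 'my)
    seen.append(item)
    return seen

print(remember_fixed("oh"))       # ['oh']
print(remember_fixed("my"))       # ['my']
```

The fix is the same in spirit as replacing '(oh my) with (list 'oh 'my): construct a fresh object on each call instead of sharing one literal.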
Reading Paul Graham's essays on programming languages one would think that Lisp macros are the only way to go. As a busy developer, working on other platforms, I have not had the privilege of using Lisp macros. As someone who wants to understand the buzz, please explain what makes this feature so powerful.
Please also relate this to something I would understand from the worlds of Python, Java, C# or C development.
To give the short answer, macros are used for defining language syntax extensions to Common Lisp or Domain Specific Languages (DSLs). These languages are embedded right into the existing Lisp code. Now, the DSLs can have syntax similar to Lisp (like Peter Norvig's Prolog Interpreter for Common Lisp) or completely different (e.g. Infix Notation Math for Clojure).
Here is a more concrete example: Python has list comprehensions built into the language. This gives a simple syntax for a common case. The line
divisibleByTwo = [x for x in range(10) if x % 2 == 0]
yields a list containing all even numbers between 0 and 9. Back in the Python 1.5 days there was no such syntax; you'd use something more like this:
divisibleByTwo = []
for x in range(10):
    if x % 2 == 0:
        divisibleByTwo.append(x)
These are both functionally equivalent. Let's invoke our suspension of disbelief and pretend Lisp has a very limited loop macro that just does iteration and no easy way to do the equivalent of list comprehensions.
In Lisp you could write the following. I should note this contrived example is picked to be identical to the Python code, not a good example of Lisp code.
;; the following two functions just make equivalent of Python's range function
;; you can safely ignore them unless you are running this code
(defun range-helper (x)
  (if (= x 0)
      (list x)
      (cons x (range-helper (- x 1)))))

(defun range (x)
  (reverse (range-helper (- x 1))))
;; equivalent to the python example:
;; define a variable
(defvar divisibleByTwo nil)
;; loop from 0 up to and including 9
(loop for x in (range 10)
      ;; test for divisibility by two
      if (= (mod x 2) 0)
      ;; append to the list
      do (setq divisibleByTwo (append divisibleByTwo (list x))))
Before I go further, I should better explain what a macro is. It is a transformation performed on code by code. That is, a piece of code, read by the interpreter (or compiler), which takes in code as an argument, manipulates it, and then returns the result, which is then run in place.
Of course that's a lot of typing, and programmers are lazy. So we could define a DSL for doing list comprehensions. In fact, we're using one macro already (the loop macro).
Lisp defines a couple of special syntax forms. The quote (') indicates the next token is a literal. The quasiquote or backtick (`) indicates the next token is a literal with escapes. Escapes are indicated by the comma operator. The literal '(1 2 3) is the equivalent of Python's [1, 2, 3]. You can assign it to another variable or use it in place. You can think of `(1 2 ,x) as the equivalent of Python's [1, 2, x] where x is a variable previously defined. This list notation is part of the magic that goes into macros. The second part is macro expansion, in which the Lisp system substitutes the code a macro generates for the macro call, but that is best illustrated below:
So we can define a macro called lcomp (short for list comprehension). Its syntax will be exactly like the python that we used in the example [x for x in range(10) if x % 2 == 0] - (lcomp x for x in (range 10) if (= (% x 2) 0))
(defmacro lcomp (expression for var in list conditional conditional-test)
  ;; create a unique variable name for the result
  (let ((result (gensym)))
    ;; the arguments are really code so we can substitute them
    ;; store nil in the unique variable name generated above
    `(let ((,result nil))
       ;; var is a variable name
       ;; list is the list literal we are supposed to iterate over
       (loop for ,var in ,list
             ;; conditional is if or unless
             ;; conditional-test is (= (mod x 2) 0) in our examples
             ,conditional ,conditional-test
             ;; and this is the action from the earlier lisp example
             ;; result = result + [x] in python
             do (setq ,result (append ,result (list ,expression))))
       ;; return the result
       ,result)))
Now we can execute at the command line:
CL-USER> (lcomp x for x in (range 10) if (= (mod x 2) 0))
(0 2 4 6 8)
Pretty neat, huh? And it doesn't stop there. You have a mechanism, or a paintbrush, if you like. You can have any syntax you could possibly want. Like Python or C#'s with syntax. Or .NET's LINQ syntax. In the end, this is what attracts people to Lisp: ultimate flexibility.
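For Python readers, the expansion step itself can be sketched outside Lisp. A macro is just a function from code to code, where "code" arrives pre-parsed as a nested list rather than as a string (the representation below is hypothetical, not real Lisp):

```python
# (lcomp x for x in (range 10) if (= (mod x 2) 0)) arrives pre-parsed as:
call = ["lcomp", "x", "for", "x", "in", ["range", 10],
        "if", ["=", ["mod", "x", 2], 0]]

def expand_lcomp(form):
    """Sketch of a macro expander: take code as a list, return new code."""
    _, expression, _for, var, _in, lst, conditional, test = form
    # build the code of the equivalent loop out of plain lists
    return ["loop", "for", var, "in", lst, conditional, test,
            "do", ["collect", expression]]

print(expand_lcomp(call))
# ['loop', 'for', 'x', 'in', ['range', 10],
#  'if', ['=', ['mod', 'x', 2], 0], 'do', ['collect', 'x']]
```

No string parsing happens anywhere: the expander only rearranges list elements, which is exactly what defmacro bodies do with backquote and comma.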
You will find a comprehensive debate around Lisp macros here.
An interesting subset of that article:
In most programming languages, syntax is complex. Macros have to take apart program syntax, analyze it, and reassemble it. They do not have access to the program's parser, so they have to depend on heuristics and best-guesses. Sometimes their cut-rate analysis is wrong, and then they break.
But Lisp is different. Lisp macros do have access to the parser, and it is a really simple parser. A Lisp macro is not handed a string, but a preparsed piece of source code in the form of a list, because the source of a Lisp program is not a string; it is a list. And Lisp programs are really good at taking apart lists and putting them back together. They do this reliably, every day.
Here is an extended example. Lisp has a macro, called "setf", that performs assignment. The simplest form of setf is
(setf x whatever)
which sets the value of the symbol "x" to the value of the expression "whatever".
Lisp also has lists; you can use the "car" and "cdr" functions to get the first element of a list or the rest of the list, respectively.
Now what if you want to replace the first element of a list with a new value? There is a standard function for doing that, and incredibly, its name is even worse than "car". It is "rplaca". But you do not have to remember "rplaca", because you can write
(setf (car somelist) whatever)
to set the car of somelist.
What is really happening here is that "setf" is a macro. At compile time, it examines its arguments, and it sees that the first one has the form (car SOMETHING). It says to itself "Oh, the programmer is trying to set the car of something. The function to use for that is 'rplaca'." And it quietly rewrites the code in place to:
(rplaca somelist whatever)
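A hedged Python sketch of that rewrite, with code represented as nested lists (the names rplaca and set and the shapes of the forms follow the example above; a real setf handles many more place forms):

```python
def expand_setf(place, value):
    """Sketch of setf's compile-time rewrite, dispatching on the place's shape."""
    if isinstance(place, list) and place[0] == "car":
        # (setf (car x) v)  ->  (rplaca x v)
        return ["rplaca", place[1], value]
    # plain variable: ordinary assignment
    return ["set", place, value]

print(expand_setf(["car", "somelist"], "whatever"))
# ['rplaca', 'somelist', 'whatever']
print(expand_setf("x", "whatever"))
# ['set', 'x', 'whatever']
```

Because the argument is already a parsed list, inspecting its shape is a simple `place[0]` check rather than string analysis.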
Common Lisp macros essentially extend the "syntactic primitives" of your code.
For example, in C, the switch/case construct only works with integral types and if you want to use it for floats or strings, you are left with nested if statements and explicit comparisons. There's also no way you can write a C macro to do the job for you.
But, since a lisp macro is (essentially) a lisp program that takes snippets of code as input and returns code to replace the "invocation" of the macro, you can extend your "primitives" repertoire as far as you want, usually ending up with a more readable program.
To do the same in C, you would have to write a custom pre-processor that eats your initial (not-quite-C) source and spits out something that a C compiler can understand. It's not a wrong way to go about it, but it's not necessarily the easiest.
Lisp macros allow you to decide when (if at all) any part or expression will be evaluated. To put a simple example, think of C's:
expr1 && expr2 && expr3 ...
What this says is: Evaluate expr1, and, should it be true, evaluate expr2, etc.
Now try to make this && into a function... that's right, you can't. Calling something like:
and(expr1, expr2, expr3)
will evaluate all three exprs before yielding an answer, regardless of whether expr1 was false!
With lisp macros you can code something like:
(defmacro && (expr1 &rest exprs)
  (if (null exprs)
      expr1                ; only one expression left: its value is the answer
      `(if ,expr1
           (&& ,@exprs)    ; ,@ splices the remaining expressions and recurses
           nil)))
Now you have an &&, which you can call just like a function; it evaluates the forms you pass to it one at a time, left to right, and stops as soon as one of them is false.
To see how this is useful, contrast:
(&& (very-cheap-operation)
(very-expensive-operation)
(operation-with-serious-side-effects))
and:
and(very_cheap_operation(),
very_expensive_operation(),
operation_with_serious_side_effects());
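In a language without macros, the closest you can get is to delay evaluation by hand, wrapping every argument in a zero-argument function that the callee invokes on demand. A Python sketch (all names hypothetical):

```python
def and_(*thunks):
    """Short-circuit AND over zero-argument callables (a function, not a macro)."""
    result = True
    for thunk in thunks:
        result = thunk()
        if not result:
            return result        # stop: later thunks are never called
    return result

calls = []
def op(name, value):
    calls.append(name)           # record that this operation really ran
    return value

print(and_(lambda: op("cheap", True),
           lambda: op("expensive", False),
           lambda: op("side-effects", True)))   # False
print(calls)                     # ['cheap', 'expensive'] -- third never ran
```

This works, but every caller must remember to write the lambdas; the macro version moves that boilerplate into the language itself.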
Other things you can do with macros are creating new keywords and/or mini-languages (check out the (loop ...) macro for an example), integrating other languages into lisp, for example, you could write a macro that lets you say something like:
(setvar *rows* (sql select count(*)
                    from some-table
                    where column1 = "Yes"
                    and column2 like "some%string%"))
And that's not even getting into reader macros.
Hope this helps.
I don't think I've ever seen Lisp macros explained better than by this fellow: http://www.defmacro.org/ramblings/lisp.html
A Lisp macro takes a program fragment as input. This program fragment is represented as a data structure which can be manipulated and transformed any way you like. In the end, the macro outputs another program fragment, and this fragment is what is executed at runtime.
C# does not have a macro facility; however, an equivalent would be if the compiler parsed the code into a CodeDOM tree and passed that to a method, which transformed it into another CodeDOM, which is then compiled into IL.
This could be used to implement "sugar" syntax like the foreach statement, the using clause, LINQ select expressions and so on, as macros that transform into the underlying code.
If Java had macros, you could implement Linq syntax in Java, without needing Sun to change the base language.
Here is pseudo-code for how a lisp-style macro in C# for implementing using could look:
define macro "using":
using ($type $varname = $expression) $block
into:
$type $varname;
try {
$varname = $expression;
$block;
} finally {
$varname.Dispose();
}
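To see what that expansion produces, here is a hedged Python sketch that performs the substitution at the string level. Note that this is exactly the fragile text-templating approach: real Lisp macros work on parsed expressions instead, which is why they are more robust than this.

```python
# Naive string-level expansion of the pseudo-macro above (illustrative only).
TEMPLATE = (
    "{typ} {var};\n"
    "try {{\n"
    "    {var} = {expr};\n"
    "    {block};\n"
    "}} finally {{\n"
    "    {var}.Dispose();\n"
    "}}"
)

def expand_using(typ, var, expr, block):
    """Substitute the captured pieces of a 'using' form into the template."""
    return TEMPLATE.format(typ=typ, var=var, expr=expr, block=block)

print(expand_using("StreamReader", "r", "File.OpenText(path)", "Process(r)"))
```

A string template cannot tell an identifier from an expression or respect nesting; a macro that receives pre-parsed code can.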
Since the existing answers give good concrete examples explaining what macros achieve and how, perhaps it'd help to collect together some of the thoughts on why the macro facility is a significant gain in relation to other languages; first from these answers, then a great one from elsewhere:
... in C, you would have to write a custom pre-processor [which would probably qualify as a sufficiently complicated C program] ...
—Vatine
Talk to anyone that's mastered C++ and ask them how long they spent learning all the template fudgery they need to do template metaprogramming [which is still not as powerful].
—Matt Curtis
... in Java you have to hack your way with bytecode weaving, although some frameworks like AspectJ allows you to do this using a different approach, it's fundamentally a hack.
—Miguel Ping
DOLIST is similar to Perl's foreach or Python's for. Java added a similar kind of loop construct with the "enhanced" for loop in Java 1.5, as part of JSR-201. Notice what a difference macros make. A Lisp programmer who notices a common pattern in their code can write a macro to give themselves a source-level abstraction of that pattern. A Java programmer who notices the same pattern has to convince Sun that this particular abstraction is worth adding to the language. Then Sun has to publish a JSR and convene an industry-wide "expert group" to hash everything out. That process--according to Sun--takes an average of 18 months. After that, the compiler writers all have to go upgrade their compilers to support the new feature. And even once the Java programmer's favorite compiler supports the new version of Java, they probably ''still'' can't use the new feature until they're allowed to break source compatibility with older versions of Java. So an annoyance that Common Lisp programmers can resolve for themselves within five minutes plagues Java programmers for years.
—Peter Seibel, in "Practical Common Lisp"
Think of what you can do in C or C++ with macros and templates. They're very useful tools for managing repetitive code, but they're limited in quite severe ways.
Limited macro/template syntax restricts their use. For example, you can't write a template which expands to something other than a class or a function. Macros and templates can't easily maintain internal data.
The complex, very irregular syntax of C and C++ makes it difficult to write very general macros.
Lisp and Lisp macros solve these problems.
Lisp macros are written in Lisp. You have the full power of Lisp to write the macro.
Lisp has a very regular syntax.
Talk to anyone that's mastered C++ and ask them how long they spent learning all the template fudgery they need to do template metaprogramming. Or all the crazy tricks in (excellent) books like Modern C++ Design, which are still tough to debug and (in practice) non-portable between real-world compilers even though the language has been standardised for a decade. All of that melts away if the language you use for metaprogramming is the same language you use for programming!
I'm not sure I can add some insight to everyone's (excellent) posts, but...
Lisp macros work great because of the nature of Lisp syntax.
Lisp is an extremely regular language (think: everything is a list); macros enable you to treat data and code as the same (no string parsing or other hacks are needed to modify Lisp expressions). Combine these two features and you have a very clean way to modify code.
Edit: What I was trying to say is that Lisp is homoiconic, which means that the data structure for a lisp program is written in lisp itself.
So, you end up with a way of creating your own code generator on top of the language using the language itself with all its power (eg. in Java you have to hack your way with bytecode weaving, although some frameworks like AspectJ allows you to do this using a different approach, it's fundamentally a hack).
In practice, with macros you end up building your own mini-language on top of lisp, without the need to learn additional languages or tooling, and with using the full power of the language itself.
Lisp macros represent a pattern that occurs in almost any sizeable programming project. Eventually, in a large program, you have a certain section of code where you realize it would be simpler and less error-prone to write a program that outputs source code as text, which you can then just paste in.
In Python, objects have two methods: __repr__ and __str__. __str__ is simply the human-readable representation. __repr__ returns a representation that is valid Python code, which is to say, something that can be entered into the interpreter as valid Python. This way you can create little snippets of Python that generate valid code that can be pasted into your actual source.
In Lisp this whole process has been formalized by the macro system. Sure, it enables you to create extensions to the syntax and do all sorts of fancy things, but its actual usefulness is summed up by the above. Of course it helps that the Lisp macro system allows you to manipulate these "snippets" with the full power of the entire language.
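The round trip described above can be checked directly in Python, at least for simple built-in values (not every object's repr is valid source):

```python
data = [1, "two", (3, 4)]
code = repr(data)                # a string containing valid Python source
print(code)                      # [1, 'two', (3, 4)]
print(eval(code) == data)        # True: the printed form round-trips

# str, by contrast, only promises readability, not valid source:
print(str("hello"))              # hello     -- not quoted, not valid code
print(repr("hello"))             # 'hello'   -- valid Python source
```

Lisp's reader and printer make this round trip the norm for code itself, which is what the macro system builds on.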
In short, macros are transformations of code. They allow you to introduce many new syntax constructs. E.g., consider LINQ in C#. In Lisp, there are similar language extensions that are implemented by macros (e.g., the built-in loop construct, iterate). Macros significantly decrease code duplication. Macros allow embedding «little languages» (e.g., where in C#/Java one would use XML for configuration, in Lisp the same thing can be achieved with macros). Macros can hide the difficulties of library usage.
E.g., in lisp you can write
(iter (for (id name) in-clsql-query "select id, name from users" on-database *users-database*)
(format t "User with ID of ~A has name ~A.~%" id name))
and this hides all the database stuff (transactions, proper connection closing, fetching data, etc.) whereas in C# this requires creating SqlConnections, SqlCommands, adding SqlParameters to SqlCommands, looping on SqlDataReaders, properly closing them.
While the above all explains what macros are and even gives cool examples, I think the key difference between a macro and a normal function is that Lisp evaluates all the parameters first before calling a function. With a macro it's the reverse: Lisp passes the parameters unevaluated to the macro. For example, if you pass (+ 1 2) to a function, the function receives the value 3. If you pass it to a macro, it receives the list (+ 1 2). This can be used to do all kinds of incredibly useful stuff.
Adding a new control structure, e.g. loop or the deconstruction of a list
Measure the time it takes to execute a function passed in. With a function the parameter would be evaluated before control is passed to the function. With the macro, you can splice your code between the start and stop of your stopwatch. The below has the exact same code in a macro and a function and the output is very different. Note: This is a contrived example and the implementation was chosen so that it is identical to better highlight the difference.
(defmacro working-timer (b)
  (let ((start (get-universal-time))
        (result (eval b)))   ;; b is the unevaluated form; not splicing here to keep stuff simple
    (- (get-universal-time) start)))

(defun my-broken-timer (b)
  (let ((start (get-universal-time))
        (result (eval b)))   ;; b is already a value here, so this doesn't even need eval
    (- (get-universal-time) start)))
(working-timer (sleep 10)) => 10
(my-broken-timer (sleep 10)) => 0
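The same contrast can be sketched in Python (names hypothetical): a function receives the already-computed value, so the only way to time the work is to receive the code itself, here approximated by a zero-argument callable.

```python
import time

def my_broken_timer(value):
    # By the time we are called, the work already happened in the caller.
    start = time.time()
    value                        # just a value; nothing left to execute
    return time.time() - start   # roughly zero, regardless of the work

def working_timer(thunk):
    # Receives the "code" unevaluated (as a callable), like a macro gets a form.
    start = time.time()
    thunk()                      # the work runs between start and stop
    return time.time() - start

print(my_broken_timer(time.sleep(0.2)) < 0.1)          # True
print(working_timer(lambda: time.sleep(0.2)) >= 0.2)   # True
```

The lambda plays the role of the unevaluated Lisp form; the macro version just spares the caller from writing it.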
One-liner answer:
Minimal syntax => Macros over Expressions => Conciseness => Abstraction => Power
Lisp macros do nothing more than write code programmatically. That is, after expanding the macros, you get nothing but Lisp code without macros. So, in principle, they achieve nothing new.
However, they differ from macros in other programming languages in that they write code at the level of expressions, whereas other languages' macros write code at the level of strings. This is unique to Lisp thanks to its parentheses; or, put more precisely, its minimal syntax, which is possible thanks to its parentheses.
As shown in many examples in this thread, and also in Paul Graham's On Lisp, Lisp macros can be a tool to make your code much more concise. When conciseness reaches a certain point, it offers new levels of abstraction that make code much cleaner. Going back to the first point: in principle, macros offer nothing new, but that's like saying that since paper and pencil (almost) form a Turing machine, we don't need an actual computer.
If one knows some math, think about why functors and natural transformations are useful ideas. In principle, they do not offer anything new. However by expanding what they are into lower-level math you'll see that a combination of a few simple ideas (in terms of category theory) could take 10 pages to be written down. Which one do you prefer?
I got this from the Common Lisp Cookbook, and I think it explains why Lisp macros are useful.
"A macro is an ordinary piece of Lisp code that operates on another piece of putative Lisp code, translating it into (a version closer to) executable Lisp. That may sound a bit complicated, so let's give a simple example. Suppose you want a version of setq that sets two variables to the same value. So if you write
(setq2 x y (+ z 3))
when z=8 both x and y are set to 11. (I can't think of any use for this, but it's just an example.)
It should be obvious that we can't define setq2 as a function. If x=50 and y=-5, this function would receive the values 50, -5, and 11; it would have no knowledge of what variables were supposed to be set. What we really want to say is: when you (the Lisp system) see (setq2 v1 v2 e), treat it as equivalent to (progn (setq v1 e) (setq v2 e)). Actually, this isn't quite right, but it will do for now. A macro allows us to do precisely this, by specifying a program for transforming the input pattern (setq2 v1 v2 e) into the output pattern (progn ...)."
If you thought this was nice you can keep on reading here:
http://cl-cookbook.sourceforge.net/macros.html
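The setq2 transformation the cookbook describes is again just a list-to-list rewrite, which can be sketched in Python (hypothetical representation of the forms):

```python
def expand_setq2(form):
    """(setq2 v1 v2 e)  ->  (progn (setq v1 e) (setq v2 e)), as nested lists."""
    _, v1, v2, e = form
    return ["progn", ["setq", v1, e], ["setq", v2, e]]

print(expand_setq2(["setq2", "x", "y", ["+", "z", 3]]))
# ['progn', ['setq', 'x', ['+', 'z', 3]], ['setq', 'y', ['+', 'z', 3]]]
```

Note that this naive version duplicates the expression e, so e would be evaluated twice; that is the "isn't quite right" the cookbook alludes to, and a real macro would bind e to a temporary first.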
In Python you have decorators: you basically have a function that takes another function as input. You can do whatever you want: call the function, do something else, wrap the function call in a resource acquire/release, etc., but you don't get to peek inside that function. Say we wanted to make it more powerful; say your decorator received the code of the function as a list. Then you could not only execute the function as-is, but you could execute parts of it, reorder lines of the function, etc.
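A short sketch of that limitation: a decorator sees the function only as an opaque callable, so it can wrap, time, or skip the call, but it can never rewrite the function's body the way a macro rewrites a form.

```python
def logged(fn):
    """A decorator can wrap a call, but cannot inspect or rewrite fn's body."""
    def wrapper(*args, **kwargs):
        print(f"calling {fn.__name__}")   # we can act around the call...
        return fn(*args, **kwargs)        # ...but fn itself is a black box
    return wrapper

@logged
def add(a, b):
    return a + b

print(add(1, 2))   # prints "calling add", then 3
```

A Lisp macro, by contrast, receives (add a b) as a list it is free to take apart and reassemble before any of it runs.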