Someone is trying to sell Lisp to me, as a super powerful language that can do everything ever, and then some.
Is there a practical code example of Lisp's power? (Preferably alongside equivalent logic coded in a regular language.)
I like macros.
Here's code to stuff away attributes for people from LDAP. I just happened to have that code lying around and figured it'd be useful for others.
Some people are confused over a supposed runtime penalty of macros, so I've added an attempt at clarifying things at the end.
In The Beginning, There Was Duplication
(defun ldap-users ()
(let ((people (make-hash-table :test 'equal)))
(ldap:dosearch (ent (ldap:search *ldap* "(&(telephonenumber=*) (cn=*))"))
(let ((mail (car (ldap:attr-value ent 'mail)))
(uid (car (ldap:attr-value ent 'uid)))
(name (car (ldap:attr-value ent 'cn)))
(phonenumber (car (ldap:attr-value ent 'telephonenumber))))
(setf (gethash uid people)
(list mail name phonenumber))))
people))
You can think of a "let binding" as a local variable that disappears outside the LET form. Notice the form of the bindings -- they are very similar, differing only in the attribute of the LDAP entity and the name ("local variable") to bind the value to. Useful, but a bit verbose, and it contains duplication.
On the Quest for Beauty
Now, wouldn't it be nice if we didn't have to have all that duplication? A common idiom is WITH-... macros, which bind values based on an expression you can grab the values from. Let's introduce our own macro that works like that, WITH-LDAP-ATTRS, and use it in our original code.
(defun ldap-users ()
(let ((people (make-hash-table :test 'equal))) ; equal so strings compare equal!
(ldap:dosearch (ent (ldap:search *ldap* "(&(telephonenumber=*) (cn=*))"))
(with-ldap-attrs (mail uid name phonenumber) ent
(setf (gethash uid people)
(list mail name phonenumber))))
people))
Did you see how a bunch of lines suddenly disappeared and were replaced with a single line? How is this done? Using macros, of course -- code that writes code! Macros in Lisp are a totally different animal from the ones you find in C/C++ via the preprocessor: here, you can run real Lisp code (not the #define fluff of cpp) that generates Lisp code before the rest of the code is compiled. Macros can use any real Lisp code, i.e., ordinary functions. There are essentially no limits.
Getting Rid of Ugly
So, let's see how this was done. To replace one attribute, we define a function.
(defun ldap-attr (entity attr)
`(,attr (car (ldap:attr-value ,entity ',attr))))
The backquote syntax looks a bit hairy, but what it does is easy. When you call LDAP-ATTR, it'll spit out a list containing the value of attr (that's what the comma does), followed by a call to car ("first element in the list" -- of a cons pair, actually; there is also a function called first you can use), which takes the first value in the list returned by ldap:attr-value. Because this isn't code we want to run when we compile the program (getting the attribute values is what we want to do when we run it), we don't add a comma before that call.
Anyway. Moving along, to the rest of the macro.
(defmacro with-ldap-attrs (attrs ent &rest body)
`(let ,(loop for attr in attrs
collecting `,(ldap-attr ent attr))
,@body))
The ,@-syntax splices the contents of a list into the surrounding form, instead of inserting the list itself.
Result
You can easily verify that this gives you the right thing. Macros are often written this way: you start off with the code you want to end up with (the output) and the code you want to write instead (the input), and then you mold the macro until your input produces the correct output. The function macroexpand-1 will tell you whether your macro is correct:
(macroexpand-1 '(with-ldap-attrs (mail phonenumber) ent
(format t "~a with ~a" mail phonenumber)))
evaluates to
(let ((mail (car (trivial-ldap:attr-value ent 'mail)))
(phonenumber (car (trivial-ldap:attr-value ent 'phonenumber))))
(format t "~a with ~a" mail phonenumber))
If you compare the LET-bindings of the expanded macro with the code in the beginning, you'll find that it is in the same form!
Compile-time vs Runtime: Macros vs Functions
A macro is code that is run at compile-time, with the added twist that they can call any ordinary function or macro as they please! It's not much more than a fancy filter, taking some arguments, applying some transformations and then feeding the compiler the resulting s-exps.
Basically, it lets you write your code in verbs that can be found in the problem domain, instead of in low-level primitives from the language! As a silly example, consider the following (if when weren't already a built-in):
(defmacro my-when (test &rest body)
`(if ,test
(progn ,@body)))
if is a built-in primitive that will only let you execute one form in each branch, and if you want more than one, well, you need to use progn:
;; one form
(if (numberp 1)
(print "yay, a number"))
;; two forms
(if (numberp 1)
(progn
(assert-world-is-sane t)
(print "phew!"))))
With our new friend, my-when, we can both a) use the more appropriate verb when we don't have a false branch, and b) get an implicit sequencing operator, i.e. progn:
(my-when (numberp 1)
(assert-world-is-sane t)
(print "phew!"))
The compiled code will never contain my-when, though, because in the first pass, all macros are expanded so there is no runtime penalty involved!
Lisp> (macroexpand-1 '(my-when (numberp 1)
(print "yay!")))
(if (numberp 1)
(progn (print "yay!")))
Note that macroexpand-1 only does one level of expansion; it's possible (likely, in fact!) that the expansion continues further down. Eventually, though, you'll hit compiler-specific implementation details, which are often not very interesting. Continuing to expand the result will either give you more details or just hand you your input s-exp back.
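For example (a sketch building on my-when above; my-unless is just an example name), defining a second macro in terms of the first shows the difference: macroexpand-1 performs a single step, while macroexpand keeps expanding the top-level form until it is no longer a macro call.
(defmacro my-unless (test &rest body)
  `(my-when (not ,test) ,@body))

(macroexpand-1 '(my-unless (oddp 2) (print "even")))
;; => (MY-WHEN (NOT (ODDP 2)) (PRINT "even"))

(macroexpand '(my-unless (oddp 2) (print "even")))
;; => (IF (NOT (ODDP 2)) (PROGN (PRINT "even")))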
Hope that clarifies things. Macros are a powerful tool, and one of the features of Lisp I like the most.
The best example I can think of that is widely available is the book by Paul Graham, On Lisp. The full PDF can be downloaded from the link I just gave. You could also try Practical Common Lisp (also fully available on the web).
I have a lot of impractical examples. I once wrote a program in about 40 lines of Lisp which could parse itself, treat its source as a Lisp list, do a tree traversal of the list, and build an expression that evaluated to WALDO if the waldo identifier existed in the source, or to nil if it was not present. The returned expression was constructed by adding calls to car/cdr to the original source that was parsed. I have no idea how to do this in other languages in 40 lines of code. Perhaps Perl can do it in even fewer lines.
You may find this article helpful: http://www.defmacro.org/ramblings/lisp.html
That said, it's very, very hard to give short, practical examples of Lisp's power because it really shines only in non-trivial code. When your project grows to a certain size, you will appreciate Lisp's abstraction facilities and be glad that you've been using them. Reasonably short code samples, on the other hand, will never give you a satisfying demonstration of what makes Lisp great because other languages' predefined abbreviations will look more attractive in small examples than Lisp's flexibility in managing domain-specific abstractions.
There are plenty of killer features in Lisp, but macros are one I love particularly, because there's no longer really a barrier between what the language defines and what I define. For example, Common Lisp doesn't have a while construct. I once implemented it in my head, while walking. It's straightforward and clean:
(defmacro while (condition &body body)
`(if ,condition
(progn
,@body
(do nil ((not ,condition))
,@body))))
Et voilà! You just extended the Common Lisp language with a new fundamental construct. You can now do:
(let ((foo 5))
(while (not (zerop (decf foo)))
(format t "still not zero: ~a~%" foo)))
Which would print:
still not zero: 4
still not zero: 3
still not zero: 2
still not zero: 1
Doing that in any non-Lisp language is left as an exercise for the reader...
Actually, a good practical example is the Lisp LOOP Macro.
http://www.ai.sri.com/pkarp/loop.html
The LOOP macro is simply that -- a Lisp macro. Yet it basically defines a mini looping DSL (Domain Specific Language).
When you browse through that little tutorial, you can see (even as a novice) that it's difficult to know what part of the code is part of the Loop macro, and which is "normal" Lisp.
And that's one of the key components of Lisp's expressiveness: the new code really can't be distinguished from the system.
While in, say, Java, you may not (at a glance) be able to know what part of a program comes from the standard Java library versus your own code, or even a 3rd party library, you DO know what part of the code is the Java language rather than simply method calls on classes. Granted, it's ALL the "Java language", but as programmer, you are limited to only expressing your application as a combination of classes and methods (and now, annotations). Whereas in Lisp, literally everything is up for grabs.
Consider the Common SQL interface to connect Common Lisp to SQL. Here, http://clsql.b9.com/manual/loop-tuples.html, they show how the CL Loop macro is extended to make the SQL binding a "first class citizen".
You can also observe constructs such as "[select [first-name] [last-name] :from [employee] :order-by [last-name]]". This is part of the CL-SQL package and implemented as a "reader macro".
See, in Lisp, not only can you use macros to create new constructs, like data structures, control structures, etc., but you can even change the syntax of the language through a reader macro. Here, they're using a reader macro (in this case, on the '[' character) to drop into an SQL mode, making SQL work like embedded SQL rather than just raw strings as in many other languages.
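If you're curious what installing such a syntax looks like, here is a minimal, made-up sketch (not CL-SQL's actual implementation): a reader macro on [ that reads everything up to the matching ] and turns it into an ordinary list form.
;; Make ] behave like ) so READ-DELIMITED-LIST can stop on it.
(set-macro-character #\] (get-macro-character #\)))

;; Make [a b c] read as (list a b c).
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    (cons 'list (read-delimited-list #\] stream t))))

;; Now the reader itself understands the new syntax:
;; [1 2 (+ 1 2)] reads as (LIST 1 2 (+ 1 2)) and evaluates to (1 2 3).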
As application developers, our task is to convert our processes and constructs into a form that the processor can understand. That means we, inevitably, have to "talk down" to the computer language, since it "doesn't understand" us.
Common Lisp is one of the few environments where we can not only build our application from the top down, but where we can lift the language and environment up to meet us half way. We can code at both ends.
Mind, as elegant as this can be, it's no panacea. Obviously there are other factors that influence language and environment choice. But it's certainly worth learning and playing with. I think learning Lisp is a great way to advance your programming, even in other languages.
I like Common Lisp Object System (CLOS) and multimethods.
Most, if not all, object-oriented programming languages have the basic notions of classes and methods. The following snippet in Python defines the classes PeelingTool and Vegetable (something similar to the Visitor pattern):
class PeelingTool:
"""I'm used to peel things. Mostly fruit, but anything peelable goes."""
def peel(self, veggie):
veggie.get_peeled(self)
class Veggie:
"""I'm a defenseless Veggie. I obey the get_peeled protocol
used by the PeelingTool"""
def get_peeled(self, tool):
pass
class FingerTool(PeelingTool):
...
class KnifeTool(PeelingTool):
...
class Banana(Veggie):
def get_peeled(self, tool):
if type(tool) == FingerTool:
self.hold_and_peel(tool)
elif type(tool) == KnifeTool:
self.cut_in_half(tool)
You put the peel method in the PeelingTool and have the Banana accept it. But the method must belong to the PeelingTool class, so it can only be used if you have an instance of the PeelingTool class.
The Common Lisp Object System version:
(defclass peeling-tool () ())
(defclass knife-tool (peeling-tool) ())
(defclass finger-tool (peeling-tool) ())
(defclass veggie () ())
(defclass banana (veggie) ())
(defgeneric peel (veggie tool)
(:documentation "I peel veggies, or actually anything that wants to be peeled"))
;; It might be possible to peel any object using any tool,
;; but I have no idea how. Left as an exercise for the reader
(defmethod peel (veggie tool)
...)
;; Bananas are easy to peel with our fingers!
(defmethod peel ((veggie banana) (tool finger-tool))
(with-hands (left-hand right-hand) *me*
(hold-object left-hand veggie)
(peel-with-fingers right-hand tool veggie)))
;; Slightly different using a knife
(defmethod peel ((veggie banana) (tool knife-tool))
(with-hands (left-hand right-hand) *me*
(hold-object left-hand veggie)
(cut-in-half tool veggie)))
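At the call site, CLOS picks the right method from the classes of both arguments (a sketch; it assumes the classes above and pretends helper functions such as with-hands exist):
(peel (make-instance 'banana) (make-instance 'finger-tool)) ; banana + finger-tool method
(peel (make-instance 'banana) (make-instance 'knife-tool))  ; banana + knife-tool method
(peel (make-instance 'veggie) (make-instance 'knife-tool))  ; falls back to the unspecialized method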
Anything can be written in any language that's Turing complete; the difference between the languages is how many hoops you have to jump through to get the equivalent result.
A powerful language like Common Lisp, with functionality such as macros and CLOS, allows you to achieve results quickly and easily, without jumping through so many hoops that you either settle for a subpar solution or find yourself becoming a kangaroo.
I found this article quite interesting:
Programming Language Comparison: Lisp vs C++
The author of the article, Brandon Corfman, writes about a study that compares solutions in Java, C++ and Lisp to a programming problem, and then writes his own solution in C++. The benchmark solution is Peter Norvig's 45 lines of Lisp (written in 2 hours).
Corfman finds that it is difficult to reduce his solution to less than 142 lines of C++/STL. His analysis of why is an interesting read.
The thing that I like most about Lisp (and Smalltalk) systems, is that they feel alive. You can easily probe & modify Lisp systems while they are running.
If this sounds mysterious, start Emacs, and type some Lisp code. Type C-M-x and voilà! You just changed Emacs from within Emacs. You can go on and redefine all Emacs functions while it is running.
Another thing is that the code = list equivalence make the frontier between code and data very thin. And thanks to macros, it is very easy to extend the language and make quick DSLs.
For instance, it is possible to code a basic HTML builder with which the code looks very close to the HTML it produces:
(html
(head
(title "The Title"))
(body
(h1 "The Headline" :class "headline")
(p "Some text here" :id "content")))
=>
<html>
<head>
<title>The Title</title>
</head>
<body>
<h1 class="headline">The Headline</h1>
<p id="content">Some text here</p>
</body>
</html>
In the Lisp code, auto-indentation makes the code look like the output, except that there aren't any closing tags.
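For a rough idea of how such a builder can work, here is a minimal sketch of my own (it ignores attributes and pretty-printing, so it is far simpler than a real HTML-generation library): a helper function walks the tree, and a small macro quotes the top-level form for it.
(defun emit-html (form)
  ;; A string is printed verbatim; a list (tag child ...) becomes <tag>...</tag>.
  (if (stringp form)
      (princ form)
      (destructuring-bind (tag &rest children) form
        (format t "<~(~a~)>" tag)
        (mapc #'emit-html children)
        (format t "</~(~a~)>" tag))))

(defmacro html (&rest children)
  `(emit-html '(html ,@children)))

;; (html (head (title "The Title")) (body (p "Some text here")))
;; prints <html><head><title>The Title</title></head><body><p>Some text here</p></body></html>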
One thing I like is the fact that I can upgrade code at runtime without losing application state. It's only useful in some cases, but when it is useful, having it already there (or available for only a minimal cost during development) is MUCH cheaper than having to implement it from scratch.
I like this macro example from http://common-lisp.net/cgi-bin/viewcvs.cgi/cl-selenium/?root=cl-selenium It's a Common Lisp binding to Selenium (a web browser test framework), but instead of mapping every method, it reads Selenium's own API definition XML document at compile time and generates the mapping code using macros. You can see the generated API here: common-lisp.net/project/cl-selenium/api/selenium-package/index.html
This is essentially driving macros with external data, which happens to be an XML document in this case, but could be as complex as reading from a database or over the network. This is the power of having the entire Lisp environment available to you at compile time.
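A toy version of the same idea, with the external data inlined as a list so the sketch stays self-contained (in cl-selenium the list would instead come from parsing the XML document at macroexpansion time; send-command is a made-up placeholder):
(defmacro define-api-wrappers ()
  ;; Pretend this spec was read from an external API description at compile time.
  (let ((spec '((click 1) (type-text 2) (wait-for-page 1))))
    `(progn
       ,@(loop for (name nargs) in spec
               collect `(defun ,name (&rest args)
                          (assert (= (length args) ,nargs))
                          (send-command ',name args))))))

;; (define-api-wrappers) expands into three ordinary DEFUNs, one per entry in the spec.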
See how you can extend Common Lisp with XML templating: cl-quasi-quote XML example, project page,
(babel:octets-to-string
(with-output-to-sequence (*html-stream*)
<div (constantAttribute 42
someJavaScript `js-inline(print (+ 40 2))
runtimeAttribute ,(concatenate 'string "&foo" "&bar"))
<someRandomElement
<someOther>>>))
=>
"<div constantAttribute=\"42\"
someJavaScript=\"javascript: print((40 + 2))\"
runtimeAttribute=\"&foo&bar\">
<someRandomElement>
<someOther/>
</someRandomElement>
</div>"
This is basically the same thing as Lisp's backtick reader (which is for list quasi quoting), but it also works for various other things like XML (installed on a special <> syntax), JavaScript (installed on `js-inline), etc.
To make it clear, this is implemented in a user library! And it compiles the static XML, JavaScript, etc. parts into UTF-8 encoded literal byte arrays that are ready to be written to the network stream. With a simple , (comma) you can get back to lisp and interleave runtime generated data into the literal byte arrays.
This is not for the faint of heart, but this is what the library compiles the above into:
(progn
(write-sequence
#(60 100 105 118 32 99 111 110 115 116 97 110 116 65 116 116 114 105 98
117 116 101 61 34 52 50 34 32 115 111 109 101 74 97 118 97 83 99 114
105 112 116 61 34 106 97 118 97 115 99 114 105 112 116 58 32 112 114
105 110 116 40 40 52 48 32 43 32 50 41 41 34 32 114 117 110 116 105
109 101 65 116 116 114 105 98 117 116 101 61 34)
*html-stream*)
(write-quasi-quoted-binary
(let ((*transformation*
#<quasi-quoted-string-to-quasi-quoted-binary {1006321441}>))
(transform-quasi-quoted-string-to-quasi-quoted-binary
(let ((*transformation*
#<quasi-quoted-xml-to-quasi-quoted-string {1006326E51}>))
(locally
(declare (sb-ext:muffle-conditions sb-ext:compiler-note))
(let ((it (concatenate 'string "runtime calculated: " "&foo" "&bar")))
(if it
(transform-quasi-quoted-xml-to-quasi-quoted-string/attribute-value it)
nil))))))
*html-stream*)
(write-sequence
#(34 62 10 32 32 60 115 111 109 101 82 97 110 100 111 109 69 108 101 109
101 110 116 62 10 32 32 32 32 60 115 111 109 101 79 116 104 101 114 47
62 10 32 32 60 47 115 111 109 101 82 97 110 100 111 109 69 108 101 109
101 110 116 62 10 60 47 100 105 118 62 10)
*html-stream*)
+void+)
For reference, the two big byte vectors in the above look like this when converted to string:
"<div constantAttribute=\"42\"
someJavaScript=\"javascript: print((40 + 2))\"
runtimeAttribute=\""
And the second one:
"\">
<someRandomElement>
<someOther/>
</someRandomElement>
</div>"
And it combines well with other Lisp structures like macros and functions. Now, compare this to JSPs...
I was an AI student at MIT in the 1970s. Like every other student, I thought language was paramount. Nevertheless, Lisp was the primary language. These are some things I still think it is pretty good for:
Symbolic math. It is easy and instructive to write symbolic differentiation of an expression, and algebraic simplification. I still do those, even though I do them in C-whatever.
Theorem proving. Every now & then I go on a temporary AI binge, like trying to prove that insertion sort is correct. For that I need to do symbolic manipulation, and I usually fall back on Lisp.
Little domain-specific-languages. I know Lisp isn't really practical, but if I want to try out a little DSL without having to get all wrapped up in parsing, etc., Lisp macros make it easy.
Little play algorithms like minimax game tree search can be done in like three lines.
Want to try lambda calculus? It's easy in Lisp.
Mainly what Lisp does for me is mental exercise. Then I can carry that over into more practical languages.
P.S. Speaking of lambda calculus, what also started in the 1970s, in that same AI milieu, was that OO started invading everybody's brains, and somehow interest in what it is seems to have crowded out much of the interest in what it is good for. I.e., work on machine learning, natural language, vision, and problem solving all sort of went to the back of the room, while classes, messages, types, polymorphism, etc. went to the front.
Have you taken a look at this explanation of why macros are powerful and flexible? No examples in other languages though, sorry, but it might sell you on macros.
@Mark,
While there is some truth to what you are saying, I believe it is not always as straight forward.
Programmers, and people in general, don't always take the time to evaluate all the possibilities before deciding to switch languages. Often it's the managers who decide, or the schools that teach the first languages... and programmers never invest enough time to reach the level where they could decide that this language saves them more time than that language.
Plus you have to admit that languages that have the backing of huge commercial entities such as Microsoft or Sun will always have an advantage in the market compared to languages without such backing.
In order to answer the original question, Paul Graham tries to give an example here even though I admit it is not necessarily as practical as I would like :-)
One specific thing that impressed me is the ability to write your own object-oriented programming extension, if you happen not to like the included CLOS.
One of them is in Garnet, and one in Paul Graham's On Lisp.
There's also a package called Screamer that allows nondeterministic programming (which I haven't evaluated).
Any language that allows you to change it to support different programming paradigms has to be flexible.
You might find this post by Eric Normand helpful. He describes how as a codebase grows, Lisp helps by letting you build the language up to your application. While this often takes extra effort early on, it gives you a big advantage later.
The simple fact that it's a multi-paradigm language makes it very very flexible.
John Ousterhout made this interesting observation regarding Lisp in 1994:
Language designers love to argue about why this language or that language
must be better or worse a priori, but none of these arguments really
matter a lot. Ultimately all language issues get settled when users vote
with their feet.
If [a language] makes people more productive then they will use
it; when some other language comes along that is better (or if it is
here already), then people will switch to that language. This is The
Law, and it is good. The Law says to me that Scheme (or any other Lisp
dialect) is probably not the "right" language: too many people have
voted with their feet over the last 30 years.
http://www.vanderburg.org/OldPages/Tcl/war/0009.html
AI is implemented in a variety of different languages, e.g., Python, C/C++, Java, so could someone please explain to me how exactly using Lisp allows one to perform #5 (mentioned here by Peter Norvig):
[Lisp allows for..] A macro system that let developers create a domain-specific level of
abstraction in which to build the next level. ... today, (5) is the
only remaining feature in which Lisp excels compared to other
languages.
Source: https://www.quora.com/Is-it-true-that-Lisp-is-highly-used-programming-language-in-AI
I'm basically confused by what it means to create a domain-specific level of abstraction. Could someone please provide a concrete example/application of when/how this would be useful and, just in general, what it means? I tried reading http://lambda-the-ultimate.org/node/4765 but didn't really "get the big picture." However, I felt like there is some sort of magic here, in that Lisp allows you to write the kind of code that other procedural/OOP/functional languages won't let you. Then I came across this post: https://www.quora.com/Which-programming-language-is-better-for-artificial-intelligence-C-or-C++, where the top answer states:
For generic Artificial Intelligence, I would choose neither and
program in LISP. A real AI would have a lot of self-modifying code
(you don't think a real AI would take what its programmer wrote as The
Last Word, would you?).
This got me even more intrigued, which led me to wonder:
What exactly would it mean for an AI to have "self-inspecting, self-modifying code" (source: Why is Lisp used for AI?) and, again, why/how this is useful? It sounds very cool (almost like as if the AI is self-conscious about its own operations, so to speak), and it sounds like using Lisp allows one to accomplish these kinds of things, where other languages wouldn't even dream of it (sorry if this comes off as naively jolly, I have absolutely no experience in Lisp, but am excited to get started). I read a little bit of: What are the uses of self modifying code? and immediately became interested in the prospect of specific AI applications and future frontiers of self-modifying code.
Anyway, I can definitely see the connection between having the ability to write self-modifying code, and having the ability to tailor the language atop of your specific research problem domain (which is what I assume Peter Norvig implies in his post), but I really am quite unsure in what any of this really means, and I would like to understand the nuts-and-bolts (or even just the essence), of these two aspects presented above ("domain-specific level of abstraction" and "self-inspecting, self-modifying code") in a clear way.
[Lisp allows for..] A macro system that let developers create a domain-specific level of abstraction in which to build the next level. ... today, (5) is the only remaining feature in which Lisp excels compared to other languages.
This might be too broad for Stack Overflow, but the concept of domain-specific abstraction is one that appears in lots of languages; it's just much easier in Common Lisp. For instance, in C, when you make a call to open(), you get back a file descriptor. It's a small integer, but if you're adhering to the domain model, you don't care that it's an integer; you care that it's a file descriptor, and that it makes sense to use it where a file descriptor is intended. It's a leaky abstraction, though, because those calls tend to signal errors by returning negative integers, so you do actually have to care about the fact that a file descriptor is an integer, so that you can reliably compare the result and figure out whether it was an error or not. In C, you can define structures, or record types, that bundle some information together. That provides a slightly higher amount of abstraction.
The idea of abstraction is that you can think about how something is supposed to be used, and what it represents, and think in terms of the domain, not in terms of the representation. All programming languages support this in some sense, but Common Lisp makes it much easier to build up language constructs that look just like the builtins of the language, and help to avoid redundant (and error-prone) boilerplate.
For instance, if you're writing a natural-deduction-style theorem prover, you'll need to define a bunch of inference rules and make them available to a proof system. Some of those rules will be simpler and won't need to know about the current proof scope. For instance, to check whether a use of conjunction elimination (from A∧B, infer A (or B)) is legal, you just need to check the forms of the premise and the conclusion. Without abstraction, you might have to write:
(defun check-conjunction-elimination (premises conclusion context)
(declare (ignore context))
(and (= (length premises) 1)
(typep (first premises) 'conjunction)
(member conclusion (conjunction-conjuncts (first premises))
:test 'proposition=)))
(register-inference-rule "conjunction elimination" 'check-conjunction-elimination)
With the ability to define abstractions, you can write a pattern matcher that could simplify this to:
(defun check-conjunction-elimination (premises conclusion context)
(declare (ignore context))
(proposition-case (premises conclusion)
(((and A B) A) t)
(((and A B) B) t)))
(register-inference-rule "conjunction elimination" 'check-conjunction-elimination)
(Sure, some languages have pattern matching built in (Haskell, Prolog (in a sense)), but the point is that pattern matching is a procedural process, and you can implement it in any language. However, it's a code generation process, and in most languages, you'd have to do the code generation as a separate pass during compilation. With Common Lisp, it's part of the language.)
You could abstract that pattern into:
(define-simple-inference-rule "conjunction elimination" (premises conclusion)
((and A B) A)
((and A B) B))
And you'd still be generating the original code. This kind of abstraction saves a lot of space, and it means that when someone else comes along, they don't need to know all of Common Lisp; they just need to know how to use define-simple-inference-rule. (Of course, that does add some overhead, since it's something else they need to learn.) But the idea is still there: the code corresponds to the way you talk about the domain, not to the way the programming language works.
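For the curious, here is one way such a macro could expand into the longer version shown earlier (a sketch that assumes the proposition-case macro and register-inference-rule function from above):
(defmacro define-simple-inference-rule (name (premises conclusion) &rest patterns)
  (let ((checker (gensym "RULE-")))
    `(progn
       (defun ,checker (,premises ,conclusion context)
         (declare (ignore context))
         (proposition-case (,premises ,conclusion)
           ,@(loop for pattern in patterns
                   collect `(,pattern t))))
       (register-inference-rule ,name ',checker))))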
As to "self modifying code", I think it's a term that you'll hear more than you'll actually see good uses of. In the sense of macroexpansion described above, there's a kind of self-modifying code (in that the macroexpansion code knows how to "modify" or transform the code into the something else, but I don't think that's a great example). Since you have access to eval, you can modify code as an object and evaluate it, but I don't think many people really advocate for that. Being able to redefine code on the fly can be handy, but again, I think you'll see people doing that much more in the REPL than in programs.
However, being able to return closures (something which more and more languages are supporting) is a big help. For instance, trace is "sort of" self modifying. You could implement it something like this:
(defmacro trace (name)
(let ((f (symbol-function name)))
(setf (symbol-function name)
(lambda (&rest args)
(print (list* name args))
(apply f args)))))
You'd need to do something more to support untrace, but I think the point is fairly clear; you can do things that change the behavior of functions, etc., in predictable ways, at run time. trace and logging facilities are an easy example, but if a system decided to profile some of the methods that it knows are important, it could dynamically decide to start caching some of the results, or doing other interesting things. That's a kind of "self modification" that could be quite helpful.
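To sketch the untrace half as well (renamed to my-trace/my-untrace, since redefining the standard trace isn't portable, and with a hash table of my own invention to stash the original functions so they can be restored):
(defvar *traced-originals* (make-hash-table))

(defmacro my-trace (name)
  `(let ((original (symbol-function ',name)))
     (setf (gethash ',name *traced-originals*) original)
     (setf (symbol-function ',name)
           (lambda (&rest args)
             (print (list* ',name args))
             (apply original args)))))

(defmacro my-untrace (name)
  `(setf (symbol-function ',name)
         (gethash ',name *traced-originals*)))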
I am thinking about learning Clojure, but coming from the C-syntax-based (Java, PHP, C#) world of imperative languages, that's going to be a challenge, so one naturally asks oneself: is it really worth it? And while such a question can be very subjective and hard to manage, there is one specific trait of Clojure (and, more generally, the Lisps) that I keep reading about, that is supposed to make it the most flexipowerful language ever: the macros.
Do you have any good examples of macro usage in Clojure for purposes which, in other mainstream languages (consider any of C++, PHP, Perl, Python, Groovy/Java, C#, JavaScript), would require much less elegant solutions, a lot of unnecessary abstraction, hacks, etc.?
I find macros pretty useful for defining new language features. In most languages you would need to wait for a new release of the language to get new syntax - in Lisp you can just extend the core language with macros and add the features yourself.
For example, Clojure doesn't have an imperative C-style for (i=0; i<10; i++) loop, but you can easily add one with a macro:
(defmacro for-loop [[sym init check change :as params] & steps]
(cond
(not (vector? params))
(throw (Error. "Binding form must be a vector for for-loop"))
(not= 4 (count params))
(throw (Error. "Binding form must have exactly 4 arguments in for-loop"))
:default
`(loop [~sym ~init value# nil]
(if ~check
(let [new-value# (do ~@steps)]
(recur ~change new-value#))
value#))))
Usage as follows:
(for-loop [i 0, (< i 10), (inc i)]
(println i))
Whether it is a good idea to add an imperative loop to a functional language is a debate we should probably avoid here :-)
There are a lot of macros in the base of Clojure that you don't think about... which is a sign of a good macro: they let you extend the language in ways that make life easier. Without macros, life would be much less exciting. For instance, if we didn't have
(with-out-str (somebody else's code that prints to screen))
then you would need to modify their code in ways that you may not have access to.
Another great example is
(with-open [fh (open-a-file-code ...)]
(do (stuff assured that the file won't leak)))
The whole with-something-do pattern of macros has really added to the Clojure ecosystem.
The other side of the proverbial macro coin is that I spend essentially all of my (current) professional Clojure time using a very macro-heavy library, and thus I spend a lot of time working around the fact that macros don't compose well and are not first class. The authors of this library are going to great lengths in the next version to make all the functionality available without going through the macros, to allow people like me to use it in higher-order functions like map and reduce.
Macros improve the world when they make life easier. They can have the opposite effect when they are the only interface to a library. Please don't use macros as interfaces.
It is hard in general to get the shape of your data truly correct. If, as a library author, you have the data structured well for how you envision your library being used, it may very well be that there is a way to restructure things to allow users to employ your library in new and unimagined ways. In this case the structure of the fantastic library in question was really quite good; it allowed for things the authors had not intended. Unfortunately, a great library was restricted because its interface was a set of macros, not a set of functions. The library was better than its macros, so they held it back. This is not to say that macros are in any way to blame, just that programming is hard and they are another tool that can have many effects, and all the pieces must be used together to work well.
There's also a more esoteric use case for macros that I sometimes use: writing concise, readable code that is also fully optimized. Here's a trivial example:
(defmacro str* [& ss] (apply str (map eval ss)))
What this does is concatenate strings at compile time (they have to be compile-time constants, of course). The regular string-concatenation function in Clojure is str, so wherever in tight-loop code I have a long string that I would like to break up into several string literals, I just add the star to str and change runtime concatenation into compile-time concatenation. Usage:
(str* "I want to write some very lenghty string, most often it will be a complex"
" SQL query. I'd hate if it meant allocating the string all over every time"
" this is executed.")
Another, less trivial example:
(defmacro jprint [& xs] `(doto *out* ~@(for [x xs] `(.append ~x))))
The & means it accepts a variable number of arguments (varargs, a variadic function). In Clojure a variadic function call uses a heap-allocated collection to pass the arguments (like in Java, which uses an array). This is not very optimal, but if I use a macro like the one above, then there's no function call at all. I use it like this:
(jprint \" (json-escape item) \")
It compiles into three invocations of PrintWriter.append (basically an unrolled loop).
Finally, I would like to show you something even more radically different. You can use a macro to assist you in defining a class of similar functions, eliminating vast amounts of boilerplate. Take this familiar example: in an HTTP-client library we want a separate function for each of the HTTP methods. Every function definition is quite complex, as it has four overloaded signatures. Also, each function involves a different request class from the Apache HttpClient library, but everything else is exactly the same for all HTTP methods. Look how little code I need to handle this.
(defmacro- def-http-method [name]
`(defn ~name
([~'url ~'headers ~'opts ~'body]
(handle (~(symbol (str "Http" (s/capitalize name) ".")) ~'url) ~'headers ~'opts ~'body))
([~'url ~'headers ~'opts] (~name ~'url ~'headers ~'opts nil))
([~'url ~'headers] (~name ~'url ~'headers nil nil))
([~'url] (~name ~'url nil nil nil))))
(doseq [m ['GET 'POST 'PUT 'DELETE 'OPTIONS 'HEAD]]
(eval `(def-http-method ~m)))
Some actual code from a project I was working on -- I wanted nested for-esque loops using unboxed ints.
(defmacro dofor
[[i begin end step & rest] & body]
(when step
`(let [end# (long ~end)
step# (long ~step)
comp# (if (< step# 0)
>
<)]
(loop [~i ~begin]
(when (comp# ~i end#)
~@(if rest
`((dofor ~rest ~@body))
body)
(recur (unchecked-add ~i step#)))))))
Used like
(dofor [i 2 6 2
j i 6 1]
(println i j))
Which prints out
2 2
2 3
2 4
2 5
4 4
4 5
It compiles to something very close to the raw loop/recurs that I was originally writing out by hand, so there is basically no runtime performance penalty, unlike the equivalent
(doseq [i (range 2 6 2)
j (range i 6 1)]
(println i j))
I think that the resulting code compares rather favorably to the java equivalent:
for (int i = 2; i < 6; i+=2) {
for (int j = i; j < 6; j++) {
System.out.println(i+" "+j);
}
}
A simple example of a useful macro that is rather hard to re-create without macros is doto. It evaluates its first argument and then evaluates the following forms, inserting the result of the evaluation as their first argument. This might not sound like much, but...
With doto this:
(let [tmpObject (produceObject)]
(do
(.setBackground tmpObject GREEN)
(.setThis tmpObject foo)
(.setThat tmpObject bar)
(.outputTo tmpObject objectSink)))
Becomes that:
(doto (produceObject)
(.setBackground GREEN)
(.setThis foo)
(.setThat bar)
(.outputTo objectSink))
The important thing is that doto is not magic - you can (re-)build it yourself using the standard features of the language.
Macros are part of Clojure, but IMHO they are not the reason you should or should not learn Clojure. Data immutability, good constructs for handling concurrent state, and the fact that it's a JVM language that can harness Java code are three better reasons. If you can find no other reason to learn Clojure, consider the fact that a functional programming language will probably positively affect how you approach problems in any language.
To look at macros, I suggest you start with Clojure's threading macros: thread-first (->) and thread-last (->>). Then visit this page, and the many blogs that discuss Clojure.
Good luck, and have fun.
I realize that the first rule of Macro Club is Don't Use Macros, so the following question is intended more as an exercise in learning Clojure than anything else (I realize this isn't necessarily the best use of macros).
I want to write a simple macro which acts as a wrapper around a regular (defn) macro and winds up adding some metadata to the defined function. So I'd like to have something like this:
(defn-plus f [x] (inc x))
...expand out to something like this:
(defn #^{:special-metadata :fixed-value} f [x] (inc x))
In principle this doesn't seem that hard to me, but I'm having trouble nailing down the specifics of getting the [args] and other forms in the defined function to be parsed out correctly.
As a bonus, if possible I'd like the macro to be able to handle all of the disparate forms of defn (ie, with or without docstrings, multiple arity definitions, etc). I saw some things in the clojure-contrib/def package that looked possibly helpful, but it was difficult to find sample code which used them.
Updated:
The previous version of my answer was not very robust. This seems like a simpler and more proper way of doing it, stolen from clojure.contrib.def:
(defmacro defn-plus [name & syms]
`(defn ~(vary-meta name assoc :some-key :some-value) ~@syms))
user> (defn-plus ^Integer f "Docstring goes here" [x] (inc x))
#'user/f
user> (meta #'f)
{:ns #<Namespace user>, :name f, :file "NO_SOURCE_PATH", :line 1, :arglists ([x]), :doc "Docstring goes here", :some-key :some-value, :tag java.lang.Integer}
#^{} and with-meta are not the same thing. For an explanation of the difference between them, see Rich's discussion on the Clojure mailing list. It's all a bit confusing and it's come up a bunch of times on the mailing list; see also here for example.
Note that def is a special form and it handles metadata a bit oddly compared with some other parts of the language. It sets the metadata of the var you're deffing to the metadata of the symbol that names the var; that's the only reason the above works, I think. See the DefExpr class in Compiler.java in the Clojure source if you want to see the guts of it all.
Finally, page 216 of Programming Clojure says:
You should generally avoid reader macros in macro expansions, since reader macros are evaluated at read time, before macro expansion begins.
I've read The Nature of Lisp. The only thing I really got out of that was "code is data." But without defining what these terms mean and why they are usually thought to be separate, I gain no insight. My initial reaction to "code is data" is, so what?
The old fashioned view: 'it' is interactive computation with symbolic expressions.
Lisp enables easy representation of all kinds of expressions:
an English sentence
(the man saw the moon)
math
(2 * x ** 3 + 4 * x ** 2 - 3 * x + 3)
rules
(<- (likes Kim ?x) (likes ?x Lee) (likes ?x Kim))
and also Lisp itself
(mapcar (function sqr) (quote (1 2 3 4 5)))
and many many many more.
Lisp now allows you to write programs that compute with such expressions:
(translate (quote (the man saw the moon)) (quote german))
(solve (quote (2 * x ** 3 + 4 * x ** 2 - 3 * x + 3)) (quote (x . 3)))
(show-all (quote (<- (likes Kim ?x) (likes ?x Lee) (likes ?x Kim))))
(eval (quote (mapcar (function sqr) (quote (1 2 3 4 5)))))
Interactive means that programming is a dialog with Lisp. You enter an expression, and Lisp computes its side effects (for example, output) and its value.
So your programming session is like 'talking' with the Lisp system. You work with it until you get the right answers.
What are these expressions? They are sentences in some language. They are part descriptions of turbines. They are theorems describing a floating point engine of an AMD processor. They are computer algebra expressions in physics. They are descriptions of circuits. They are rules in a game. They are descriptions of behavior of actors in games. They are rules in a medical diagnosis system.
Lisp allows you to write down facts, rules, and formulas as symbolic expressions. It allows you to write programs that work with these expressions. You can compute the value of a formula. But you can equally easily write programs that compute new formulas from formulas (symbolic math: integrate, differentiate, ...). That is what Lisp was designed for.
As a side effect Lisp programs are represented as such expressions too. Then there is also a Lisp program that evaluates or compiles other Lisp programs. So the very idea of Lisp, the computation with symbolic expressions, has been applied to Lisp itself. Lisp programs are symbolic expressions and the computation is a Lisp expression.
Alan Kay (of Smalltalk fame) calls the original definition of Lisp evaluation in Lisp the Maxwell's equations of programming.
Write Lisp code. The only way to really 'get' Lisp (or any language, for that matter) is to roll up your sleeves and implement some things in it. Like anything else, you can read all you want, but if you want to really get a firm grasp on what's going on, you've got to step outside the theoretical and start working with the practical.
The way you "get" any language is by trying to write some code in it.
About the "data is code" thing, in most languages there is a clear separation between the code that gets executed, and the data that is processed.
For example, the following simple C-like function:
void foo(int i){
int j;
if (i % 42 == 0){
bar(i-2);
}
for (j = 0; j < i; ++j){
baz();
}
}
the actual control flow is determined once, statically, while writing the code. The function bar isn't going to change, and the if statement at the beginning of the function isn't going to disappear. This code is not data; it cannot be manipulated by the program.
All that can be manipulated is the initial value of i. And on the other hand, that value cannot be executed the way code can. You can call the function foo, but you can't call the variable i. So i is data, but it is not code.
Lisp does not have this distinction. The program code is data that can be manipulated too. Your code can, at runtime, take the function foo, and perhaps add another if statement, perhaps change the condition in the for-loop, perhaps replace the call to baz with another function call. All your code is data that can be inspected and manipulated as simply as the above function can inspect and manipulate the integer i.
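A small Common Lisp illustration of that last point (bar and baz are placeholders, mirroring the C example): the definition is just a list, so the program can rewrite it and install the new version at runtime.
(defparameter *foo-source*
  '(defun foo (i)
     (when (zerop (mod i 42))
       (bar (- i 2)))))

;; Swap the call to BAR for a call to BAZ in the source tree,
;; then install the rewritten definition.
(eval (subst 'baz 'bar *foo-source*))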
I would highly recommend Structure and Interpretation of Computer Programs, which actually uses Scheme, but that is a dialect of Lisp. It will help you "get" Lisp by having you do many different exercises, and it goes on to show some of the ways that Lisp is so useful.
I think you have to have more empathy for compiler writers to understand how fundamental the code is data thing is. I'll admit, I've never taken a compilers course, but converting any sufficiently high-level language into machine code is a hard problem, and LISP, in many ways, resembles an intermediate step in this process. In the same way that C is "close to the metal", LISP is close to the compiler.
This worked for me:
Read "The Little Schemer". It's the shortest path to get you thinking in Lisp mode (minus the macros). As a bonus, it's relatively short/fun/inexpensive.
Find a good book/tutorial to get you started with macros. I found chapter 8 of "The Scheme Programming Language" to be a good starting point for Scheme.
http://www.ccs.neu.edu/home/matthias/BTLS/
http://www.scheme.com/tspl3/syntax.html
By watching the legendary Structure and Interpretation of Computer Programs lectures?
In Common Lisp, "code is data" boils down to this. When you write, for example:
(add 1 2)
your Lisp system will parse that text and generate a list with three elements: the symbol ADD, and the numbers 1 and 2. So now they're data. You can do whatever you want with them, replace elements, insert other stuff, etc.
The fun part is that you can pass this data on to the compiler and, because you can manipulate these data structures using Lisp itself, this means you can write programs that write other programs. This is not as complicated as it sounds, and Lispers do it all the time using macros. So, just get a book about Lisp, and try it out.
Okay, I'm going to take a crack at this. I'm new to Lisp myself, just having arrived from the world of python. I haven't experienced that sudden moment of enlightenment that all the old Lispers talk about, but I'll tell you what I am seeing so far.
First, look at this random bit of python code:
def is_palindrome(st):
l = len(st)//2
return True if st[:l] == st[:-l-1:-1] else False
Now look at this:
"""
def is_palindrome(st):
l = len(st)//2
return True if st[:l] == st[:-l-1:-1] else False
"""
What do you, as a programmer, see? The code is identical, FYI.
If you are like me, you'll tend to think of the first as active code. It consists of a number of syntactic elements.
The second, despite its similarity, is a single syntactic item. It's a string. You interact with it as a single entity. To deal with it as code - to handle it comfortably along its syntactic boundaries - you will have to do some parsing. To execute it, you need to invoke an interpreter. It's not the same thing at all as the first.
So when we do code generation in most languages what are we dealing with? Strings. When I generate HTML or SQL with python I use python strings as the interface between the two languages. Even if I generate python with python, strings are the tool.*
Doesn't the thought of that just... make you want to dance with joy? There's always this grotesque mismatch between that which you are working with and that which you are working on. I sensed that the first time that I generated SQL with perl. Differences in escaping. Differences in formatting: think about trying to get a generated html document to look tidy. Stuff isn't easy to reuse. Etc.
To solve the problem we serially create templating libraries. Scads of them. Why so many? My guess is that they're never quite satisfactory. By the time they start getting powerful enough they've turned into monstrosities. Granted, some of them - such as SQLAlchemy and Genshi in the python world - are very beautiful and admirable monstrosities. Let's... um... avoid mention of PHP.
Because strings make an awkward interface between the worked-on language and the worked-with, we create a third language - templates - to avoid them. ** This also tends to be a little awkward.
Now let's look at a block of quoted Lisp code:
'(loop for i from 1 to 8 do (print i))
What do you see? As a new Lisp coder, I've caught myself looking at that as a string. It isn't. It is inactive Lisp code. You are looking at a bunch of lists and symbols. Try to evaluate it after turning one of the parentheses around. The language won't let you do it: syntax is enforced.
Using quasiquote, we can shoehorn our own values into this inactive Lisp code:
`(loop for i from 1 to ,whatever do (print i))
Note the nature of the shoehorning: one item has been replaced with another. We aren't formatting our value into a string. We're sliding it into a slot in the code. It's all neat and tidy.
In fact, if you want to directly edit the text of the code, you are in for a hassle. For example, if you are inserting a name <varname> into the code and you also want to use <varname>-tmp in the same code, you can't do it directly like you can with a template string ("%s-tmp = %s"). You have to extract the name into a string, rewrite the string, turn it into a symbol again, and finally insert it.
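Concretely, that dance looks something like this in Common Lisp (with-tmp-copy is a made-up example):
(defmacro with-tmp-copy (varname &rest body)
  ;; Build the symbol FOO-TMP from the symbol FOO, then splice both in.
  (let ((tmp (intern (concatenate 'string (symbol-name varname) "-TMP"))))
    `(let ((,tmp ,varname))
       ,@body)))

;; (with-tmp-copy foo (print foo-tmp))
;; expands to (LET ((FOO-TMP FOO)) (PRINT FOO-TMP))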
If you want to grasp the essence of Lisp, I think that you might gain more by ignoring defmacro and gensyms and all that window dressing for the moment. Spend some time exploring the potential of the quasiquote, including the ,@ thing. It's pretty accessible. Defmacro itself only provides an easy way to execute the result of quasiquotes. ***
What you should notice is that the hermetic string/template barrier between the worked-on and the worked-with is all but eliminated in Lisp. As you use it, you'll find that your sense of two distinct layers - active and passive - tends to dissipate. Functions call macros which call macros or functions which have functions (or macros!) passed in with their arguments. It's kind of a big soup - a little shocking for the newcomer. That said, I don't find that the distinction between macros and functions is as seamless as some Lisp people say. Mostly it's ok, but every so often as I wander in the soup I find myself bumping up against the ghost of that old barrier - and it really creeps me out!
I'll get over it, I'm sure. No matter. The convenience pays for the scare.
Now that's Lisp working on Lisp. What about working on other languages? I'm not quite there yet, personally, but I think I see the light at the end of the tunnel. You know how Lisp people keep going on about S-expressions being the same thing as a parse tree? I think the idea is to parse the foreign language into S-expressions, work on them in the amazing comfort of the Lisp environment, then send them back to native code. In theory, every language out there could be turned into S-expressions, or even executable lisp code. You're not working in a first language combined with a third language to produce code in a second language. It is all - while you are working on it - Lisp, and you can generate it all with quasiquotes.
Have a look at this (borrowed from PCL):
(define-html-macro :mp3-browser-page ((&key title (header title)) &body body)
`(:html
(:head
(:title ,title)
(:link :rel "stylesheet" :type "text/css" :href "mp3-browser.css"))
(:body
(standard-header)
(when ,header (html (:h1 :class "title" ,header)))
,@body
(standard-footer))))
Looks like an S-expression version of HTML, doesn't it? I have a feeling that Lisp works just fine as its own templating library.
I've started to wonder about an S-expression version of python. Would it qualify as a Lisp? It certainly wouldn't be Common Lisp. Maybe it would be nicer - for python programmers at least. Hey, and what about P-expressions?
* Python now has something called AST, which I haven't explored. Also a person could use python lists to represent other languages. Relative to Lisp, I suspect that both are a bit of a hack.
** SQLAlchemy is kind of an exception. It's done a nice job of turning SQL directly into python. That said, it appears to have involved significant effort.
*** Take it from a newbie. I'm sure I'm glossing over something here. Also, I realize that quasiquote is not the only way to generate code for macros. It's certainly a nice one, though.
Data is code is an interesting paradigm that supports treating a data structure as a command. Treating data in this way allows you to process and manipulate the structure in various ways - e.g. traversal - by evaluating it. Moreover, the 'data is code' paradigm obviates the need in many cases to develop custom parsers for data structures; the language parser itself can be used to parse the structures.
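For example, a configuration "file format" can simply be s-expressions, and the built-in reader is the whole parser (the keys here are made up):
(with-input-from-string (in "(:host \"example.org\" :port 8080 :debug t)")
  (let ((config (read in)))
    (getf config :port)))   ; => 8080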
The first step is forgetting everything you have learned with all the C and Pascal-like languages. Empty your mind. This is the hardest step.
Then, take a good introduction to programming that uses Lisp. Don't try to correlate what you see with anything that you know beforehand (when you catch yourself doing that, repeat step 1). I liked Structure and Interpretation of Computer Programs (uses Scheme), Practical Common Lisp, Paradigms of Artificial Intelligence Programming, Casting Spels in Lisp, among others. Be sure to write out the examples. Also try the exercises, but limit yourself to the constructs you have learned in that book. If you find yourself trying to find, for example, some function to set a variable or some statement that resembles a for loop, repeat step 1, then revisit the chapters before to find out how it is done in Lisp.
Read and understand the legendary page 13 of the Lisp 1.5 Programmer's Manual
According to Alan Kay, at least.
One of the reasons that some university computer science programs use Lisp for their intro courses is that it's generally true that a novice can learn functional, procedural, or object-oriented programming more or less equally well. However, it's much harder for someone who already thinks in procedural statements to begin thinking like a functional programmer than to do the inverse.
When I tried to pick up Lisp, I did it "with a C accent." set! and begin were my friends and constant companions. It is surprisingly easy to write Lisp code without ever writing any functional code, which isn't the point.
You may notice that I'm not answering your question, which is true. I merely wanted to let you know that it's awfully hard to get your mind thinking in a functional style, and it'll be an exciting practice that will make you a stronger programmer in the long run.
Kampai!
P.S. Also, you'll finally understand that "my other car is a cdr" bumper sticker.
To truly grok lisp, you need to write it.
Learn to love car, cdr, and cons. Don't iterate when you can recurse. Start out writing some simple programs (factorial, list reversal, dictionary lookup), and work your way up to more complex ones (sorting sets of items, pattern matching).
On the code-is-data and data-is-code thing, I wouldn't worry about it at this point. You'll understand it eventually, and it's not critical to learning Lisp.
I would suggest checking out some of the newer variants of Lisp like Arc or Clojure. They clean up the syntax a little and are smaller and thus easier to understand than Common Lisp. Clojure would be my choice. It is written on the JVM and so you don't have the issues with various platform implementations and library support that exist with some Lisp implementations like SBCL.
Read On Lisp and Paradigms of Artificial Intelligence Programming. Both of these have excellent coverage of Lisp macros - which really make the code-is-data concept real.
Also, when writing Lisp, don't iterate when you can recurse or map (learn to love mapcar).
It's important to see that data is code AND code is data. This feeds the eval/apply loop. Recursion is also fun.
I'd suggest that is a horrible introduction to the language. There are better places to start and better people/articles/books than the one you cited.
Are you a programmer? What language(s)?
To help you with your question more background might be helpful.
About the whole "code is data" thing:
Isn't that due to the "von Neumann architecture"? If code and data were located in physically separate memory locations, the bits in the data memory could not be executed whereas the bits in the program memory could not be interpreted as anything but instructions to the CPU.
Do I understand this correctly?
I think to learn anything you have to have a purpose for it, such as a simple project.
For Lisp, a good simple project is a symbolic differentiator, so for example
(diff 'x 'x) -> 1
(diff 'a 'x) -> 0
(diff `(+ ,xx ,yy) 'x) where xx and yy are subexpressions
-> `(+ ,(diff xx 'x) ,(diff yy 'x))
etc. etc.
and then you need a simplifier, such as
(simp `(+ ,x 0)) -> x
(simp `(* ,x 0)) -> 0
etc. etc.
so if you start with a math expression, you can eval it to get its value, and you can eval its derivative to get its derivative.
I hope this illustrates what can happen when program code manipulates program code.
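To make that concrete, here is a minimal sketch of such a differentiator (my own; it handles only variables, constants, and binary + and *):

(defun diff (expr var)
  (cond ((eq expr var) 1)     ; d(x)/dx = 1
        ((atom expr) 0)       ; constants and other variables
        ((eq (first expr) '+)
         `(+ ,(diff (second expr) var) ,(diff (third expr) var)))
        ((eq (first expr) '*) ; product rule
         (let ((u (second expr)) (v (third expr)))
           `(+ (* ,(diff u var) ,v) (* ,u ,(diff v var)))))
        (t (error "Don't know how to differentiate ~S" expr))))

(diff '(+ (* x x) x) 'x)   ; => (+ (+ (* 1 X) (* X 1)) 1)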
As Marvin Minsky observed, computer math is always worried about accuracy and roundoff error, right? Well, symbolic differentiation like this is either exactly right or completely wrong!
You can get LISP in many ways, the most common is by using Emacs or working next to somebody who has developed LISP already.
Sadly, once you get LISP, it's hard to get rid of it, antibiotics won't work.
BTW: I also recommend The Adventures of a Pythonista in Schemeland.
This may be helpful: http://www.defmacro.org/ramblings/fp.html (isn't about LISP but about functional programming as a paradigm)
The way I think about it is that the best part of "code is data" is the fact that functions are, well, functionally no different from any other variable. The fact that you can write code that writes code is one of the single most powerful (and often overlooked) features of Lisp. Functions can accept other functions as parameters, and even return functions as a result.
This lets one code at a much higher level of abstraction than, say, Java. It makes many tasks elegant and concise, and therefore, makes the code easier to modify, maintain, and read, or at least in theory.
I would say that the only way to truly "get" Lisp is to spend a lot of time with it -- once you get the hang of it, you'll wish you had some of the features of Lisp in your other programming languages.
Reading Paul Graham's essays on programming languages one would think that Lisp macros are the only way to go. As a busy developer, working on other platforms, I have not had the privilege of using Lisp macros. As someone who wants to understand the buzz, please explain what makes this feature so powerful.
Please also relate this to something I would understand from the worlds of Python, Java, C# or C development.
To give the short answer, macros are used for defining language syntax extensions to Common Lisp or Domain Specific Languages (DSLs). These languages are embedded right into the existing Lisp code. Now, the DSLs can have syntax similar to Lisp (like Peter Norvig's Prolog Interpreter for Common Lisp) or completely different (e.g. Infix Notation Math for Clojure).
Here is a more concrete example: Python has list comprehensions built into the language. This gives a simple syntax for a common case. The line
divisibleByTwo = [x for x in range(10) if x % 2 == 0]
yields a list containing all even numbers between 0 and 9. Back in the Python 1.5 days there was no such syntax; you'd use something more like this:
divisibleByTwo = []
for x in range(10):
    if x % 2 == 0:
        divisibleByTwo.append(x)
These are both functionally equivalent. Let's invoke our suspension of disbelief and pretend Lisp has a very limited loop macro that just does iteration and no easy way to do the equivalent of list comprehensions.
In Lisp you could write the following. I should note this contrived example is picked to be identical to the Python code, not to be a good example of Lisp code.
;; the following two functions just make an equivalent of Python's range function
;; you can safely ignore them unless you are running this code
(defun range-helper (x)
  (if (= x 0)
      (list x)
      (cons x (range-helper (- x 1)))))

(defun range (x)
  (reverse (range-helper (- x 1))))

;; equivalent to the python example:
;; define a variable
(defvar divisibleByTwo nil)

;; loop from 0 upto and including 9
(loop for x in (range 10)
      ;; test for divisibility by two
      if (= (mod x 2) 0)
      ;; append to the list
      do (setq divisibleByTwo (append divisibleByTwo (list x))))
Before I go further, I should better explain what a macro is. It is a transformation performed on code by code. That is, a piece of code, read by the interpreter (or compiler), which takes in code as an argument, manipulates it, and then returns the result, which is then run in-place.
Of course that's a lot of typing and programmers are lazy. So we could define a DSL for doing list comprehensions. In fact, we're using one macro already (the loop macro).
Lisp defines a couple of special syntax forms. The quote (') indicates the next token is a literal. The quasiquote or backtick (`) indicates the next token is a literal with escapes. Escapes are indicated by the comma operator. The literal '(1 2 3) is the equivalent of Python's [1, 2, 3]. You can assign it to another variable or use it in place. You can think of `(1 2 ,x) as the equivalent of Python's [1, 2, x] where x is a variable previously defined. This list notation is part of the magic that goes into macros. The second part is the Lisp reader which intelligently substitutes macros for code but that is best illustrated below:
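A quick illustration of those quoting forms at the REPL (nothing macro-specific yet, just the notation):

'(1 2 3)        ; => (1 2 3), a literal list
(let ((x 42))
  `(1 2 ,x))    ; => (1 2 42), the comma escapes back into evaluated code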
So we can define a macro called lcomp (short for list comprehension). Its syntax will be exactly like the python that we used in the example [x for x in range(10) if x % 2 == 0] - (lcomp x for x in (range 10) if (= (% x 2) 0))
(defmacro lcomp (expression for var in list conditional conditional-test)
  ;; create a unique variable name for the result
  (let ((result (gensym)))
    ;; the arguments are really code so we can substitute them
    ;; store nil in the unique variable name generated above
    `(let ((,result nil))
       ;; var is a variable name
       ;; list is the list literal we are supposed to iterate over
       (loop for ,var in ,list
             ;; conditional is if or unless
             ;; conditional-test is (= (mod x 2) 0) in our examples
             ,conditional ,conditional-test
             ;; and this is the action from the earlier lisp example
             ;; result = result + [x] in python
             do (setq ,result (append ,result (list ,expression))))
       ;; return the result
       ,result)))
Now we can execute at the command line:
CL-USER> (lcomp x for x in (range 10) if (= (mod x 2) 0))
(0 2 4 6 8)
Pretty neat, huh? Now it doesn't stop there. You have a mechanism, or a paintbrush, if you like. You can have any syntax you could possibly want. Like Python or C#'s with syntax. Or .NET's LINQ syntax. In the end, this is what attracts people to Lisp - ultimate flexibility.
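If you are curious what the macro call actually turned into, macroexpand-1 shows the generated code (the #:G42 below stands for the gensym'd result variable; its exact name will differ):

CL-USER> (macroexpand-1 '(lcomp x for x in (range 10) if (= (mod x 2) 0)))
(LET ((#:G42 NIL))
  (LOOP FOR X IN (RANGE 10)
        IF (= (MOD X 2) 0)
        DO (SETQ #:G42 (APPEND #:G42 (LIST X))))
  #:G42)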
You will find a comprehensive debate around lisp macros here.
An interesting subset of that article:
In most programming languages, syntax is complex. Macros have to take apart program syntax, analyze it, and reassemble it. They do not have access to the program's parser, so they have to depend on heuristics and best-guesses. Sometimes their cut-rate analysis is wrong, and then they break.
But Lisp is different. Lisp macros do have access to the parser, and it is a really simple parser. A Lisp macro is not handed a string, but a preparsed piece of source code in the form of a list, because the source of a Lisp program is not a string; it is a list. And Lisp programs are really good at taking apart lists and putting them back together. They do this reliably, every day.
Here is an extended example. Lisp has a macro, called "setf", that performs assignment. The simplest form of setf is
(setf x whatever)
which sets the value of the symbol "x" to the value of the expression "whatever".
Lisp also has lists; you can use the "car" and "cdr" functions to get the first element of a list or the rest of the list, respectively.
Now what if you want to replace the first element of a list with a new value? There is a standard function for doing that, and incredibly, its name is even worse than "car". It is "rplaca". But you do not have to remember "rplaca", because you can write
(setf (car somelist) whatever)
to set the car of somelist.
What is really happening here is that "setf" is a macro. At compile time, it examines its arguments, and it sees that the first one has the form (car SOMETHING). It says to itself "Oh, the programmer is trying to set the car of something. The function to use for that is 'rplaca'." And it quietly rewrites the code in place to:
(rplaca somelist whatever)
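A quick check at the REPL (my own example) that the two spellings really do the same thing:

(defvar *somelist* (list 1 2 3))
(setf (car *somelist*) 99)   ; behaves like (rplaca *somelist* 99)
*somelist*                   ; => (99 2 3)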
Common Lisp macros essentially extend the "syntactic primitives" of your code.
For example, in C, the switch/case construct only works with integral types and if you want to use it for floats or strings, you are left with nested if statements and explicit comparisons. There's also no way you can write a C macro to do the job for you.
But, since a lisp macro is (essentially) a lisp program that takes snippets of code as input and returns code to replace the "invocation" of the macro, you can extend your "primitives" repertoire as far as you want, usually ending up with a more readable program.
To do the same in C, you would have to write a custom pre-processor that eats your initial (not-quite-C) source and spits out something that a C compiler can understand. It's not a wrong way to go about it, but it's not necessarily the easiest.
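As a sketch of what such an extension could look like (string-case and every detail of it are my own invention here, not a standard operator): a case-like construct that dispatches on strings, which C's switch cannot do.

(defmacro string-case (keyform &rest clauses)
  (let ((key (gensym "KEY")))
    `(let ((,key ,keyform))
       (cond
         ,@(loop for (value . body) in clauses
                 collect (if (eq value t)
                             `(t ,@body)                     ; default clause
                             `((string= ,key ,value) ,@body)))))))

;; usage:
(string-case "no"
  ("yes" :accepted)
  ("no"  :rejected)
  (t     :unknown))   ; => :REJECTED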
Lisp macros allow you to decide when (if at all) any part or expression will be evaluated. To put a simple example, think of C's:
expr1 && expr2 && expr3 ...
What this says is: Evaluate expr1, and, should it be true, evaluate expr2, etc.
Now try to make this && into a function... that's right, you can't. Calling something like:
and(expr1, expr2, expr3)
Will evaluate all three exprs before yielding an answer regardless of whether expr1 was false!
With lisp macros you can code something like:
(defmacro && (expr1 &rest exprs)
  ;; with only one expression left there is nothing to short-circuit
  (if exprs
      `(if ,expr1
           (&& ,@exprs)   ; recurse on the remaining expressions
           nil)
      expr1))
Now you have an &&, which you can call just like a function, and it will stop evaluating the forms you pass to it as soon as one of them is false.
To see how this is useful, contrast:
(&& (very-cheap-operation)
(very-expensive-operation)
(operation-with-serious-side-effects))
and:
and(very_cheap_operation(),
very_expensive_operation(),
operation_with_serious_side_effects());
Other things you can do with macros include creating new keywords and/or mini-languages (check out the (loop ...) macro for an example) and integrating other languages into lisp. For example, you could write a macro that lets you say something like:
(setvar *rows* (sql select count(*)
                    from some-table
                    where column1 = "Yes"
                    and column2 like "some%string%"))
And that's not even getting into Reader macros.
Hope this helps.
I don't think I've ever seen Lisp macros explained better than by this fellow: http://www.defmacro.org/ramblings/lisp.html
A lisp macro takes a program fragment as input. This program fragment is represented as a data structure which can be manipulated and transformed any way you like. In the end the macro outputs another program fragment, and this fragment is what is executed at runtime.
C# does not have a macro facility, however an equivalent would be if the compiler parsed the code into a CodeDOM-tree, and passed that to a method, which transformed this into another CodeDOM, which is then compiled into IL.
This could be used to implement "sugar" syntax like the foreach statement, the using clause, LINQ select expressions, and so on, as macros that transform into the underlying code.
If Java had macros, you could implement Linq syntax in Java, without needing Sun to change the base language.
Here is pseudo-code for how a lisp-style macro in C# for implementing using could look:
define macro "using":
using ($type $varname = $expression) $block
into:
$type $varname;
try {
$varname = $expression;
$block;
} finally {
$varname.Dispose();
}
Since the existing answers give good concrete examples explaining what macros achieve and how, perhaps it'd help to collect together some of the thoughts on why the macro facility is a significant gain in relation to other languages; first from these answers, then a great one from elsewhere:
... in C, you would have to write a custom pre-processor [which would probably qualify as a sufficiently complicated C program] ...
—Vatine
Talk to anyone that's mastered C++ and ask them how long they spent learning all the template fudgery they need to do template metaprogramming [which is still not as powerful].
—Matt Curtis
... in Java you have to hack your way with bytecode weaving, although some frameworks like AspectJ allows you to do this using a different approach, it's fundamentally a hack.
—Miguel Ping
DOLIST is similar to Perl's foreach or Python's for. Java added a similar kind of loop construct with the "enhanced" for loop in Java 1.5, as part of JSR-201. Notice what a difference macros make. A Lisp programmer who notices a common pattern in their code can write a macro to give themselves a source-level abstraction of that pattern. A Java programmer who notices the same pattern has to convince Sun that this particular abstraction is worth adding to the language. Then Sun has to publish a JSR and convene an industry-wide "expert group" to hash everything out. That process--according to Sun--takes an average of 18 months. After that, the compiler writers all have to go upgrade their compilers to support the new feature. And even once the Java programmer's favorite compiler supports the new version of Java, they probably still can't use the new feature until they're allowed to break source compatibility with older versions of Java. So an annoyance that Common Lisp programmers can resolve for themselves within five minutes plagues Java programmers for years.
—Peter Seibel, in "Practical Common Lisp"
Think of what you can do in C or C++ with macros and templates. They're very useful tools for managing repetitive code, but they're limited in quite severe ways.
Limited macro/template syntax restricts their use. For example, you can't write a template which expands to something other than a class or a function. Macros and templates can't easily maintain internal data.
The complex, very irregular syntax of C and C++ makes it difficult to write very general macros.
Lisp and Lisp macros solve these problems.
Lisp macros are written in Lisp. You have the full power of Lisp to write the macro.
Lisp has a very regular syntax.
Talk to anyone that's mastered C++ and ask them how long they spent learning all the template fudgery they need to do template metaprogramming. Or all the crazy tricks in (excellent) books like Modern C++ Design, which are still tough to debug and (in practice) non-portable between real-world compilers even though the language has been standardised for a decade. All of that melts away if the language you use for metaprogramming is the same language you use for programming!
I'm not sure I can add some insight to everyone's (excellent) posts, but...
Lisp macros work great because of the Lisp syntax nature.
Lisp is an extremely regular language (think: everything is a list); macros enable you to treat data and code as the same (no string parsing or other hacks are needed to modify lisp expressions). You combine these two features and you have a very clean way to modify code.
Edit: What I was trying to say is that Lisp is homoiconic, which means that the data structure for a lisp program is written in lisp itself.
So, you end up with a way of creating your own code generator on top of the language using the language itself with all its power (e.g. in Java you have to hack your way with bytecode weaving; although some frameworks like AspectJ allow you to do this using a different approach, it's fundamentally a hack).
In practice, with macros you end up building your own mini-language on top of lisp, without the need to learn additional languages or tooling, and with using the full power of the language itself.
Lisp macros represents a pattern that occurs in almost any sizeable programming project. Eventually in a large program you have a certain section of code where you realize it would be simpler and less error prone for you to write a program that outputs source code as text which you can then just paste in.
In Python objects have two methods __repr__ and __str__. __str__ is simply the human readable representation. __repr__ returns a representation that is valid Python code, which is to say, something that can be entered into the interpreter as valid Python. This way you can create little snippets of Python that generate valid code that can be pasted into your actual source.
In Lisp this whole process has been formalized by the macro system. Sure it enables you to create extensions to the syntax and do all sorts of fancy things, but its actual usefulness is summed up by the above. Of course it helps that the Lisp macro system allows you to manipulate these "snippets" with the full power of the entire language.
In short, macros are transformations of code. They allow you to introduce many new syntax constructs. E.g., consider LINQ in C#. In lisp, there are similar language extensions that are implemented by macros (e.g., the built-in loop construct, iterate). Macros significantly decrease code duplication. Macros allow embedding «little languages» (e.g., where in C#/Java one would use XML for configuration, in lisp the same thing can be achieved with macros). Macros may hide the difficulties of using libraries.
E.g., in lisp you can write
(iter (for (id name) in-clsql-query "select id, name from users" on-database *users-database*)
      (format t "User with ID of ~A has name ~A.~%" id name))
and this hides all the database stuff (transactions, proper connection closing, fetching data, etc.) whereas in C# this requires creating SqlConnections, SqlCommands, adding SqlParameters to SqlCommands, looping on SqlDataReaders, properly closing them.
While the above answers all explain what macros are and even have cool examples, I think the key difference between a macro and a normal function is that LISP evaluates all the parameters first before calling the function. With a macro it's the reverse: LISP passes the parameters unevaluated to the macro. For example, if you pass (+ 1 2) to a function, the function will receive the value 3. If you pass it to a macro, the macro will receive the list (+ 1 2). This can be used to do all kinds of incredibly useful stuff.
Adding a new control structure, e.g. loop or the deconstruction of a list
Measure the time it takes to execute a form passed in. With a function the parameter would be evaluated before control is passed to the function. With the macro, you can splice your code between the start and stop of your stopwatch. The code below is nearly identical in the macro and the function, yet the output is very different. Note: This is a contrived example, and the implementation was chosen to keep the two as similar as possible in order to highlight the difference.
(defmacro working-timer (b)
  ;; B arrives unevaluated, so the expansion can splice it between
  ;; the two readings of the clock
  `(let ((start (get-universal-time)))
     ,b
     (- (get-universal-time) start)))

(defun broken-timer (b)
  ;; B has already been evaluated by the time the function runs,
  ;; so the clock starts after the work is done
  (let ((start (get-universal-time)))
    b
    (- (get-universal-time) start)))
(working-timer (sleep 10)) => 10
(broken-timer (sleep 10)) => 0
One-liner answer:
Minimal syntax => Macros over Expressions => Conciseness => Abstraction => Power
Lisp macros do nothing more than write code programmatically. That is, after expanding the macros, you get nothing more than Lisp code without macros. So, in principle, they achieve nothing new.
However, they differ from macros in other programming languages in that they write code at the level of expressions, whereas other languages' macros write code at the level of strings. This is unique to lisp thanks to its parentheses; or, put more precisely, its minimal syntax, which is possible thanks to its parentheses.
As shown in many examples in this thread, and also in Paul Graham's On Lisp, lisp macros can then be a tool to make your code much more concise. When conciseness reaches a certain point, it offers new levels of abstraction that make code much cleaner. Going back to the first point again, in principle they do not offer anything new, but that's like saying that since paper and pencil (almost) form a Turing machine, we do not need an actual computer.
If one knows some math, think about why functors and natural transformations are useful ideas. In principle, they do not offer anything new. However by expanding what they are into lower-level math you'll see that a combination of a few simple ideas (in terms of category theory) could take 10 pages to be written down. Which one do you prefer?
I got this from the Common Lisp Cookbook and I think it explains why lisp macros are useful.
"A macro is an ordinary piece of Lisp code that operates on another piece of putative Lisp code, translating it into (a version closer to) executable Lisp. That may sound a bit complicated, so let's give a simple example. Suppose you want a version of setq that sets two variables to the same value. So if you write
(setq2 x y (+ z 3))
when z=8 both x and y are set to 11. (I can't think of any use for this, but it's just an example.)
It should be obvious that we can't define setq2 as a function. If x=50 and y=-5, this function would receive the values 50, -5, and 11; it would have no knowledge of what variables were supposed to be set. What we really want to say is: When you (the Lisp system) see (setq2 v1 v2 e), treat it as equivalent to (progn (setq v1 e) (setq v2 e)). Actually, this isn't quite right, but it will do for now. A macro allows us to do precisely this, by specifying a program for transforming the input pattern (setq2 v1 v2 e) into the output pattern (progn ...)."
If you thought this was nice you can keep on reading here:
http://cl-cookbook.sourceforge.net/macros.html
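For completeness, a minimal sketch of the setq2 macro the cookbook describes (using the simple progn expansion from the quote; note that e gets evaluated twice here, which is presumably part of why the quote says it "isn't quite right"):

(defmacro setq2 (v1 v2 e)
  `(progn (setq ,v1 ,e)
          (setq ,v2 ,e)))

(let ((x 0) (y 0) (z 8))
  (setq2 x y (+ z 3))
  (list x y))   ; => (11 11)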
In Python you have decorators: you basically have a function that takes another function as input. You can do whatever you want: call the function, do something else, wrap the function call in a resource acquire/release, etc., but you don't get to peek inside that function. Say we wanted to make it more powerful; say your decorator received the code of the function as a list. Then you could not only execute the function as-is, but you could also execute parts of it, reorder lines of the function, etc.
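The Lisp analogue of that more powerful decorator is just an ordinary macro: it literally receives the forms as list structure and can duplicate or reorder them. A toy sketch:

(defmacro do-twice (&body body)
  ;; the macro sees BODY as plain list structure and splices it in twice
  `(progn ,@body ,@body))

(do-twice (print "hi"))   ; prints "hi" two times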