How far can LISP macros go? [closed]

I have read a lot about how LISP can redefine syntax on the fly, presumably with macros. I am curious how far this actually goes. Can you redefine the language structure so much that it borderline becomes a compiler for another language? For example, could you change the functional nature of LISP into more object-oriented syntax and semantics, perhaps with syntax closer to something like Ruby?
In particular, is it possible to get rid of the parenthesis hell using macros? I have learned enough (Emacs-)LISP to customize Emacs with my own micro-features, but I am very curious how far macros can go in customizing the language.

That's a really good question.
I think it's nuanced but definitely answerable:
Macros are not stuck in s-expressions. See the LOOP macro for a very complex language written using keywords (symbols). So, while you may start and end the loop with parentheses, inside it has its own syntax.
Example:
(loop for x from 0 below 100
      when (evenp x)
      collect x)
That being said, most simple macros just use s-expressions. And you'd be "stuck" using them.
But s-expressions, like Sergio has answered, start to feel right. The syntax gets out of the way and you start coding in the syntax tree.
As for reader macros, yes, you could conceivably write something like this:
#R{
ruby.code.goes.here
}
But you'd need to write your own Ruby syntax parser.
You can also mimic some of the Ruby constructs, like blocks, with macros that compile to the existing Lisp constructs.
#B(some lisp (code goes here))
would translate to
(lambda () (some lisp (code goes here)))
See this page for how to do it.
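For illustration, here is a minimal sketch of how such a #B dispatching reader macro could be defined in standard Common Lisp. (Caveat: #B is already standard syntax for binary integers such as #b1010, so a real version would pick an unused dispatch character.)

;; Sketch only: this shadows the standard #B binary-integer syntax.
(set-dispatch-macro-character
 #\# #\B
 (lambda (stream subchar arg)
   (declare (ignore subchar arg))
   ;; read the following form and wrap it in a zero-argument closure
   `(lambda () ,(read stream t nil t))))

;; After this, #B(+ 1 2) reads as (lambda () (+ 1 2)).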

Yes, you can redefine the syntax so that Lisp becomes a compiler for another language. You do this using "reader macros," which are different from the ordinary macros (defined with DEFMACRO) that you're probably thinking of.
Common Lisp has the built-in facility to define new syntax for the reader and reader macros to process that syntax. This processing is done at read-time (which comes before compile or eval time). To learn more about defining reader macros in Common Lisp, see the Common Lisp Hyperspec -- you'll want to read Ch. 2, "Syntax" and Ch. 23, "Reader". (I believe Scheme has the same facility, but I'm not as familiar with it -- see the Scheme sources for the Arc programming language).
As a simple example, let's suppose you want Lisp to use curly braces rather than parentheses. This requires something like the following reader definitions:
;; { and } become list delimiters, along with ( and ).
(set-syntax-from-char #\{ #\( )

(defun lcurly-brace-reader (stream inchar) ; this was way too easy to do.
  (declare (ignore inchar))
  (read-delimited-list #\} stream t))

(set-macro-character #\{ #'lcurly-brace-reader)
(set-macro-character #\} (get-macro-character #\) ))
(set-syntax-from-char #\} #\) )

;; un-lisp -- make parens meaningless
(set-syntax-from-char #\) #\] ) ; ( and ) become ordinary characters
(set-syntax-from-char #\( #\[ )
You're telling Lisp that the { is like a ( and that the } is like a ). Then you create a function (lcurly-brace-reader) that the reader will call whenever it sees a {, and you use set-macro-character to assign that function to the {. Then you tell Lisp that ( and ) are like [ and ] (that is, not meaningful syntax).
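With just the curly-brace half of those definitions loaded (before the parens are "un-lisped"), a REPL session would look something like this:

CL-USER> {+ 1 2}
3
CL-USER> {list {* 2 3} {* 4 5}}
(6 20)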
Other things you could do include, for example, creating a new string syntax or using [ and ] to enclose in-fix notation and process it into S-expressions.
You can also go far beyond this, redefining the entire syntax with your own macro characters that will trigger actions in the reader, so the sky really is the limit. This is just one of the reasons why Paul Graham and others keep saying that Lisp is a good language in which to write a compiler.

I'm not a Lisp expert, heck I'm not even a Lisp programmer, but after a bit of experimenting with the language I came to the conclusion that after a while the parentheses start becoming 'invisible' and you start seeing the code as you want it to be. You start paying more attention to the syntactic constructs you create via s-exprs and macros, and less to the lexical form of the text of lists and parentheses.
This is especially true if you take advantage of a good editor that helps with the indentation and syntax coloring (try setting the parentheses to a color very similar to the background).
You might not be able to replace the language completely and get 'Ruby' syntax, but you don't need to. Thanks to the language's flexibility you could end up with a dialect that feels like you are following the 'Ruby style of programming' if you want, whatever that might mean to you.
I know this is just an empirical observation, but I think I had one of those Lisp enlightenment moments when I realized this.

Over and over again, newcomers to Lisp want to "get rid of all the parentheses." It lasts for a few weeks. No project to build a serious general-purpose programming syntax on top of the usual S-expression parser ever gets anywhere, because programmers invariably wind up preferring what you currently perceive as "parenthesis hell." It takes a little getting used to, but not much! Once you do get used to it, and you can really appreciate the plasticity of the default syntax, going back to languages where there's only one way to express any particular programming construct is really grating.
That being said, Lisp is an excellent substrate for building Domain Specific Languages. Just as good as, if not better than, XML.
Good luck!

The best explanation of Lisp macros I have ever seen is at
https://www.youtube.com/watch?v=4NO83wZVT0A
starting at about 55 minutes in. This is a video of a talk given by Peter Seibel, the author of "Practical Common Lisp", which is the best Lisp textbook there is.
The motivation for Lisp macros is usually hard to explain, because they really come into their own in situations that are too lengthy to present in a simple tutorial. Peter comes up with a great example; you can grasp it completely, and it makes good, proper use of Lisp macros.
You asked: "could you change the functional nature of LISP into a more object oriented syntax and semantics". The answer is yes. In fact, Lisp originally didn't have any object-oriented programming at all, not surprising since Lisp has been around since way before object-oriented programming! But when we first learned about OOP in 1978, we were able to add it to Lisp easily, using, among other things, macros. Eventually the Common Lisp Object System (CLOS) was developed, a very powerful object-oriented programming system that fits elegantly into Lisp. The whole thing can be loaded as an extension -- nothing is built-in! It's all done with macros.
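As a toy illustration of the idea (not CLOS itself, just a hypothetical sketch), a macro can graft an object-like construct onto plain Lisp by expanding into closures:

(defmacro define-simple-class (name fields)
  ;; Expands into a constructor whose "instances" are closures
  ;; that dispatch on a message keyword -- objects built from
  ;; nothing but macros and lambdas.
  `(defun ,(intern (format nil "MAKE-~a" name)) ,fields
     (lambda (msg)
       (ecase msg
         ,@(mapcar (lambda (field)
                     `(,(intern (symbol-name field) :keyword) ,field))
                   fields)))))

(define-simple-class point (x y))
(funcall (make-point 3 4) :x) ; => 3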
Lisp has an entirely different feature, called "reader macros", that can be used to extend the surface syntax of the language. Using reader macros, you can make sublanguages that have C-like or Ruby-like syntax. They transform the text into Lisp, internally. These are not used widely by most real Lisp programmers, mainly because it is hard to extend the interactive development environment to understand the new syntax. For example, Emacs indentation commands would be confused by a new syntax. If you're energetic, though, Emacs is extensible too, and you could teach it about your new lexical syntax.

Regular macros operate on lists of objects. Most commonly, these objects are other lists (thus forming trees) and symbols, but they can be other objects such as strings, hashtables, user-defined objects, etc. These structures are called s-exps.
So, when you load a source file, your Lisp compiler will parse the text and produce s-exps. Macros operate on these. This works great and it's a marvellous way to extend the language within the spirit of s-exps.
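For a quick taste of what that looks like, here is a hypothetical MY-UNLESS macro and its expansion (the symbols CLEANUP and LOG-IT are just data here; they don't need to exist):

(defmacro my-unless (test &body body)
  ;; TEST and BODY arrive as plain lists and symbols;
  ;; the macro returns a brand-new list.
  `(if ,test nil (progn ,@body)))

(macroexpand-1 '(my-unless done (cleanup) (log-it)))
;; => (IF DONE NIL (PROGN (CLEANUP) (LOG-IT)))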
Additionally, the aforementioned parsing process can be extended through "reader macros" that let you customize the way your compiler turns text into s-exps. I suggest, however, that you embrace Lisp's syntax instead of bending it into something else.
You sound a bit confused when you mention Lisp's "functional nature" and Ruby's "object-oriented syntax". I'm not sure what "object-oriented syntax" is supposed to be, but Lisp is a multi-paradigm language and it supports object-oriented programming extremely well.
BTW, when I say Lisp, I mean Common Lisp.
I suggest you put your prejudices away and give Lisp an honest go.

Parenthesis hell? I see no more parenthesis in:
(function toto)
than in:
function(toto);
And in
(if tata (toto)
  (titi)
  (tutu))
no more than in:
if (tata)
    toto();
else
{
    titi();
    tutu();
}
I see fewer brackets and ';'s, though.

What you are asking is somewhat like asking how to become an expert chocolatier so that you can remove all that hellish brown stuff from your favourite chocolate cake.

Yes, you can fundamentally change the syntax, and even escape "the parentheses hell". For that you will need to define a new reader syntax. Look into reader macros.
I do suspect, however, that to reach the level of Lisp expertise needed to program such macros, you will need to immerse yourself in the language to such an extent that you will no longer consider parentheses "hell". I.e. by the time you know how to avoid them, you will have come to accept them as a good thing.

If you want Lisp to look like Ruby, use Ruby.
It's possible to use Ruby (and Python) in a very Lisp-like way, which is one of the main reasons they have gained acceptance so quickly.

See this example of how reader macros can extend the Lisp reader with complex tasks like XML templating:
http://common-lisp.net/project/cl-quasi-quote/present-class.html
This user library compiles the static parts of the XML into UTF-8 encoded literal byte arrays at compile time, ready to be WRITE-SEQUENCE'd into the network stream. And they are usable in normal Lisp macros; they are orthogonal. The placement of the comma character determines which parts are constant and which should be evaluated at runtime.
More details are available at: http://common-lisp.net/project/cl-quasi-quote/
Another project, for Common Lisp syntax extensions: http://common-lisp.net/project/cl-syntax-sugar/

#sparkes
Sometimes LISP is the clear language choice, namely Emacs extensions. I'm sure I could use Ruby to extend Emacs if I wanted to, but Emacs was designed to be extended with LISP, so it seems to make sense to use it in that situation.

It's a tricky question. Since Lisp is already structurally so close to a parse tree, the difference between a large number of macros and implementing your own mini-language in a parser generator isn't very clear. But, except for the opening and closing parens, you could very easily end up with something that looks nothing like Lisp.

One of the uses of macros that blew my mind was the compile-time verification of SQL queries against the database.
Once you realize you have the full language at hand at compile-time, it opens up interesting new perspectives. Which also means you can shoot yourself in the foot in interesting new ways (like rendering compilation not reproducible, which can very easily turn into a debugging nightmare).
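To sketch the flavor of that technique (everything here -- the schema, the SELECT-COLUMN macro, and RUN-QUERY -- is a hypothetical stand-in, not the library mentioned above):

(defparameter *schema* '(:users (id name email)))

(defmacro select-column (table column)
  ;; This check runs at macroexpansion time, i.e. while compiling:
  ;; a bad column name breaks the build, not the production run.
  (unless (member column (getf *schema* table))
    (error "No column ~a in table ~a" column table))
  ;; RUN-QUERY is a hypothetical query-execution function.
  `(run-query ,(format nil "SELECT ~(~a~) FROM ~(~a~)" column table)))

;; (select-column :users name)     -- compiles fine
;; (select-column :users nickname) -- fails at compile time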

Related

Lisp source code rewriting system

I would like to take Emacs Lisp code that has been macro expanded and unmacro expand it. I have asked this on the Emacs forum with no success. See:
https://emacs.stackexchange.com/questions/35913/program-rewriting-systems-unexpanded-a-defmacro-given-a-list-of-macros-to-undo
However, one would think that this kind of thing, S-expression transformation, is right up Lisp's alley. And defmacro is, I believe, available in Common Lisp just as it is in Emacs Lisp.
So surely there are program transformation systems, or term-rewriting systems that can be adapted here.
Ideally, in certain situations such a tool would be able to work directly off the defmacro to do its pattern find-and-replace on. However, even if I have to come up with specific search-and-replace patterns manually to add to the transformation system, having such a framework to work in would still be useful.
Summary of results so far: Although there have been a few answers that explore interesting possibilities, right now there is nothing definitive. So I think best to leave this open. I'll summarize some of the suggestions. (I've upvoted all the answers that were in fact answers instead of commentary on the difficulty.)
First, many people suggested considering the special form of macros that do expansion only, or as Drew puts it:
macro-expansion (i.e., not expansion followed by Lisp evaluation).
Macro-expansion is another way of saying reduction semantics, or
rewriting.
The current front-runner to my mind is phils' post, where he uses a pattern-matching facility that seems specific to Emacs: pcase. I will be exploring this and will post the results of my findings. If anyone else has thoughts on this, please chime in.
Drew wrote a program called FTOC whose purpose was to convert Franz Lisp to Common Lisp; googling turns up a comp.lang.lisp posting.
I found a Common Lisp package called optima with fare-quasiquote. Paulo thinks, however, that this might not be powerful enough, since it doesn't handle backtracking out of the box, though that might be programmed in by hand. (Although the generality of backtracking might be nice, I'm not convinced I need it for the most-used situations.)
Side note: Some seem put off by the specific application causing my initial interest. (But note that in research, it is not uncommon for good solutions to get applied in ways not initially envisioned.)
So in that spirit, here are a couple of suggestions for changing the end application. A good solution for these would probably translate to a solution for Emacs Lisp. (And if it helps you to pretend I'm not interested in Emacs Lisp, that's okay with me.) Instead of a decompiler for Emacs Lisp, suppose I want to write a decompiler for Clojure or some Common Lisp system. Or, as suggested by Sylwester's answer, suppose I would like to automatically refactor my code by taking into account the benefit of using more concise macros that exist or that have been improved. Recall that at one time Emacs Lisp didn't have "when" or "unless" macros.
30-some years ago I did something similar, using macrolet.
(Actually, I used defmacro because we had only an early implementation of Common Lisp, which did not yet have macrolet. But macrolet is the right thing to use.)
I didn't translate macro-expanded code to what it was expanded from, but the idea is pretty much the same. You will come across some different difficulties, I expect, since your translation is even farther away from one-to-one.
I wrote a translator from (what was then) Franz Lisp to Common Lisp, to help with porting lots of existing code to a Lisp+Prolog-machine project. Franz Lisp back then was only dynamically scoped, while Common Lisp is (in general) lexically scoped.
And yes, obviously there is no general way to automatically translate Lisp code (in particular), especially considering that it can generate and then evaluate other code - but even ignoring that special case. Many functions are quite similar, but there is the lexical/dynamic difference, as well as significant differences in the semantics of some seemingly similar functions.
All of that has to be understood and taken for granted from the outset, by anyone wanting to make use of the results of translation.
Still, much that is useful can be done. And if the resulting code is self-documenting, telling you what it was derived from etc., then when in the resulting context you can decide just what to do with this or that bit that might be tricky (e.g., rewrite it manually, from scratch or just tweak it). In practice, lots of code was easily converted from Franz to Common - it saved much reprogramming effort.
The translator program was written in Common Lisp. It could be used interactively as well as in batch. When used interactively it provided, in effect, a Franz Lisp interpreter on top of Common Lisp.
The program used only macro-expansion (i.e., not expansion followed by Lisp evaluation). Macro-expansion is another way of saying reduction semantics, or rewriting.
Input Franz-Lisp code was macro-expanded via function-definition mapping macros to produce Common-Lisp code. Code that was problematic for translation was flagged (in code) with a description/analysis that described the situation.
The program was called FTOC. I think you can still find it, or at least references to it, by googling (ftoc lisp). (It was the first Lisp program I wrote, and I still have fond memories of the experience. It was a good way to learn both Lisp dialects and to learn Lisp in general.)
Have fun!
In general, I don't think you can do this. The expansion of a Lisp macro is Turing complete, so you would have to be able to predict the output of a program which could have arbitrary input.
There are some simple things that you could do. defmacros with backquoted forms produce output that looks fairly similar to the backquoted form, and might be detected. This sort of heuristic would probably get you a long way.
What I don't understand is your use case. The macro-expanded version of a piece of code is usually only present in the compiled (or, in Emacs Lisp, byte-compiled) form.
OK, so other people have pointed out the fact that this problem is impossible in general. There are two hard parts to this problem: one is that it could be a lot of work to find a preimage of some code fragment through a macro; the other is that it is impossible to determine whether a macro was called or not, since one may write code which could have come from a macro without using that macro. Imagine for the sake of illustration a sha macro which expands to the SHA hash of the string literal passed to it. Then if you see some sha hash in your expanded code, it would obviously be silly to try to unexpand it. But it may be that the hash was put into the code as a literal, e.g. referencing a specific point in the history of a git repository, so it would also be unhelpful to unexpand the macro.
Tractable subproblems
Let me preface this by saying that whilst these may be a little tractable, I still wouldn’t try to solve this problem.
Let's ignore all the macros that do weird things (like the example above), all the macros that are just as likely not to have been used in the original (e.g. cond vs if), and all the macros which generate complex code which seems like it would be difficult to unravel (e.g. loop, do, and backquote; annoyingly, these difficult cases are some of those which you would perhaps most want to unexpand). The type this leaves us with (that I'd like to focus on) are macros which basically just reduce boilerplate, e.g. save-excursion or with-XXXX. These are macros whose implementation consists of possibly making some fresh symbols (via gensym) and then having a big simple backquoted block of code. I still think it would be too hard to automatically go from defmacro to a function for unexpansion, but I think you could attack some of these on a case-by-case basis. Do this by looking for the forms generated by the macro that delimit (i.e. begin/end) the expanded code. I can't really offer much beyond that. This is still a hard problem and I don't think any existing solutions (to other problems) will get you very far on your way.
A further complication I understand is that you do not start at the macroexpanded code but rather at the bytecode. Without knowing anything about the elisp compiler, I worry that more information would be lost in the compilation step and you would have to undo that as well, e.g. perhaps it is hard to determine which code goes inside a let or even when a let begins, or bytecode starts using goto type features even though elisp doesn’t have them.
You suggest that the reason you would like to unexpand macros is so you can decompile bytecode which sometimes comes up in the Emacs debugger and that this would be useful as even though the source code is available in theory, it isn’t always at your fingertips. I put it to you that if you want to make your life debugging elisp easier it would be more worthwhile to figure out how to have the Emacs debugger always take you to the source code for internal functions. This might involve installing extra debugging related packages or downloading the Emacs source code and setting some variable so Emacs knows where to find it or compiling Emacs yourself from source. I don’t really know about that but I bet getting thrown into bytecode instead of source would have been enough of a problem for Emacs developers over the past thirty years that a solution to that problem does exist.
If however what you really want to do is to try to implement a decompiler for elisp then I suppose that’s what you should do. A final observation is that while Lisp provides facilities which make manipulating Lisp code easy, this doesn’t help much with decompiling as all these facilities can be used in compilation so there are infinitely more patterns one might want to detect than in e.g. a C decompiler. Perhaps scheme style macros would be easier to unexpand, although they would still be hard.
If you’re decompiling because you want to give a better idea of which exact subexpression rather than line is being evaluated (normally Lisp debuggers work on expressions not lines anyway) in the debugger then perhaps it would actually be useful to see the code at the expanded level rather than the unexpanded one. Or perhaps it would be best to see both and maybe in between as well. Keeping track of what’s what through forwards macroexpansion is already difficult and fiddly. Doing it in reverse certainly won’t be easier. Good luck!
Edit: seeing as you're not currently using Lisp anyway, I wonder if you might have more success using something like Prolog for your unexpanding. You'd still have to write rules manually, but I think it would be a large amount of work to try to derive rules from macro definitions.
I would like to take Emacs Lisp code that has been macro expanded and unmacro expand it.
Macros generate arbitrary expressions, which may contain macros recursively. You have no general way to revert the transformations, because it's not pattern-based.
Even if macros were pattern-based, they could still be infinite.
Even if macros were not infinite, they can certainly contain bugs in expansions of patterns that never matched. Given arbitrary code to try to unwind, it could match an expansion that looks like the code and try to revert to its pattern. Without bugs, you could still abuse this.
Even if you could revert macro expansion, some macros expand to the same code. An approach could be signalling a warning with a restart when all reversions expand equally minus the operator, such that if the restart doesn't handle the signal, it would choose the first expansion; and otherwise signalling an error with a restart, such that if the restart doesn't handle the signal, it errors. Or you could configure it to choose certain macros under certain conditions, such as in which package the code was found.
In practice, there are very few cases where reverting an expansion makes any sense. It could be a useful development tool that suggests macros, but I wouldn't generally rely on it for whole source transformations.
One way you could achieve what you want is through a controlled pattern matching. You could initially create patterns manually, which would already handle cases you care about directly, such as the ones you mention:
(if (not <cond>) <expr>) and (if (not <cond>) (progn <&expr>)) to (unless <cond> <&expr>)
You'd have to decide whether null would be equivalent to not. I personally don't mix the boolean meaning of nil with that of empty list or something else, e.g. no result, nothing found, null object, a designator, etc. But perhaps Lisp code as old as that in Emacs just uses them interchangeably.
(if <cond> <expr>) and (if <cond> (progn <&expr>)) to (when <cond> <&expr>)
If you feel like improving code overall, include cond with a single condition. And be careful with cond clauses with only the condition.
You should have a few dozen more, to see how the pattern matching behaves with more patterns to match in terms of time (CPU) and space (memory).
From the description of fare-quasiquote, optima doesn't support backtracking, which you probably want.
But you can do backtracking with optima by yourself, using recursion on complex inner patterns, and if nothing matches, return a control value to keep searching for matching patterns from the outer input.
Another approach is to treat a pattern as a description of a state machine, and handle each new token to advance the current state machines until one of them reaches the end, discarding the state machines that couldn't advance. This approach may consume more memory, depending on the amount of patterns, the similarity between patterns (if many have the same starting token, many state machines will be generated on a matching token), the length of the patterns and, last but not least, the length of the input (s-expression).
An advantage of this approach is that you can use it interactively to see which patterns have matched the most tokens, and you can give weights to patterns instead of just taking the first that matches.
A disadvantage is that, most probably, you'll have to spend effort to develop it.
EDIT: I just lousily described a kind of trie or radix tree.
Once you got something working, maybe try to obtain patterns automatically. This is really hard, you must probably limit it to simple backquoting and accept the fact you can't generalize for anything that contains more complex code.
I believe the hardest part will be code walking, which is hard enough with source code, but much harder with macro-expanded code. Perhaps if you could expand the whole picture a bit further so we understand the goal, someone could suggest a better approach than operating on macro-expanded code.
However one would think that this kind of thing, S-expression transformation, is right up Lisp's alley. And defmacro is I believe available in Lisp as it is in Emacs Lisp.
So surely there are program transformation systems, or term-rewriting systems that can be adapted here.
There's a huge step from expanding code with defmacro and all that generality. Most Lisp developers will know about hygienic macros, at least in terms of symbols as variables.
But there's still hygienic macros in terms of symbols as operators1, code walking, interaction with a containing macro (usually using macrolet), etc. It's way too complex.
1.
Common Lisp evaluates the operator in a compound form in the lexical environment, and probably everyone makes macros that assume that the global macro or function definition of a symbol will be used.
But it might not be so:
(defmacro my-macro-1 ()
  `1)

(defmacro my-macro-2 ()
  `(my-function (my-macro-1)))

(defun my-function (n)
  (* n 100))

(macrolet ((my-macro-1 ()
             `2))
  (flet ((my-function (n)
           (* n 1000)))
    (my-macro-2)))
That last form will expand to (my-function (my-macro-1)), which will be recursively expanded to (my-function 2). When evaluated, it will yield 2000.
For proper operator hygiene, you'd have to do something like this:
(defmacro my-macro-2 ()
  ;; capture global bindings of my-macro-1 and my-function by name
  (flet ((my-macro-1-global (form env)
           (funcall (macro-function 'my-macro-1) form env))
         (my-function-global (&rest args)
           ;; hope the compiler can optimize this
           (apply 'my-function args)))
    ;; store them globally in uninterned symbols
    ;; hopefully, no one will mess with them
    (let ((my-macro-1-symbol (gensym (symbol-name 'my-macro-1)))
          (my-function-symbol (gensym (symbol-name 'my-function))))
      (setf (macro-function my-macro-1-symbol) #'my-macro-1-global)
      (setf (symbol-function my-function-symbol) #'my-function-global)
      `(,my-function-symbol (,my-macro-1-symbol)))))
With this definition, the example will yield 100.
Common Lisp has some restrictions to avoid this, but it only states the consequences are undefined when (re)defining symbols in the common-lisp package, globally or locally. It doesn't require errors or warnings to be signaled.
I don't think it is possible to do this in general, but you can undo a pattern back into a macro use for every match, if you supply code for each unmacroing. Code that mixed cond and if will end up as just if after expansion, and your unmacroing would turn every if back into cond, making the reverse not the same as the starting point. The more macros you have, and the more they expand into each other, the more uncertain the end result will be of the starting point.
You could have rules such that if is not translated into cond unless it uses one of cond's features, like more than one predicate or an implicit progn, but you have no idea whether the coder actually did use cond everywhere simply because he liked its consistency. Thus your unmacroing will actually be more of a simplification.
I don't believe there's a general solution to that, and you certainly can't guarantee that the structure of the output would match that of the original code, and I'm not going near the idea of auto-generating patterns and desired transformations from macro definitions; but you might achieve a simple version of this with Emacs' own pcase pattern-matching facility.
Here's the simplest example I could think of:
With reference to the definition of when:
(defmacro when (cond &rest body)
  (list 'if cond (cons 'progn body)))
We can transform code using a pcase pattern like so:
(let ((form '(if (and foo bar baz) (progn do (all the) things))))
  (pcase form
    (`(if ,cond (progn . ,body))
     `(when ,cond ,@body))
    (_ form)))
=> (when (and foo bar baz) do (all the) things)
Obviously if the macro definitions change, then your patterns will cease to work (but that's a pretty safe kind of failure).
Caveat: This is the first time I've written a pcase form, and I don't know what I don't know. It seems to work as intended, though.

How to define infix notation macro in Lisp, without enclosing it in Lisp-like syntax [closed]

So, having watched 3 hours of YouTube videos and spent equally long reading about Lisp, I've yet to see these "magic macros" that allow one to write DSLs, or even do simple things like 4 + 5, without nesting them inside some parentheses.
There is some discussion here: Common lisp: is there a less painful way to input math expressions? but the syntax doesn't look any nicer, and still requires some enclosing fluff to define where the macro starts and ends.
So here is the challenge, define an infix macro, then use it without having to enclose it with some kind of additional syntax. I.e.
Some macro definition here
1 + 2
NOT
Some macro definition here
(my-macro 1 + 2)
NOT
Some macro definition here
ugly-syntax 1 + 2 end-ugly-syntax
If this is not possible in Lisp, then what is all the fuss about? It's kinda like saying "you have awesome powerful macros that allow for cool DSLs, provided those DSLs are wrapped in Lisp like syntax".
Of course you can implement what you're talking about in Lisp.
One big difference between Lisp and other languages is that nothing is fixed, and you have control over what the reader does with every single character of the input source code. Literally.
Consider for example the project CL-Python and its mixed-syntax Lisp/Python mode.
Basically in that mode if what you type starts with an open parenthesis then it's considered Lisp, otherwise it's considered Python (including multi-line constructs).
...HOWEVER...
this kind of macro/reader-macro library is rarely implemented, because one of the main advantages of the s-expression approach is that it composes well: you can build other macros on top of existing macros, raising the level of the language.
Once you abandon the regularity of the s-expression approach, writing macros becomes annoying, because to write code that manipulates code you need to consider the several different constructs of the language, precedence rules, and special syntax forms.
Languages that are not based on s-expressions sometimes provide real macro-processing capability, but working at the AST level, i.e. after some parsing has already been done in a fixed way. Moreover, the macro code in these languages looks really weird, because the code the macro is manipulating, inspecting or building doesn't look like real code.
Sometimes in other languages instead you only find text-based macros that are basically search-replace. To see an example of how ugly things can get consider the Python standard library implementation of collections.namedtuple (around line 284) and its set of absurd limitations induced just because of that implementation.
Another example of how things can get horribly complex once you force yourself to a template-only approach to avoid the complexity of manipulating an irregular and special cased language is C++ template metaprogramming.
A simple s-expression based language and quasi-quotation instead makes macro code much easier to write and read and understand, and that's why Lisp code doesn't move away from it. Not because it cannot, but because it doesn't make sense to go to a worse syntax for no reason.
In Lisp you "bend" the language a little by adding the abstractions that are really needed without however breaking everything else and, most important, without dropping the ability to do more bending in the future if needed. Writing a macro/reader-macro that makes the expression parsing the nightmare of say C++ and at the same time removes the ability to write further macros and add more constructs (or makes it impossibly hard) would be a nonsensical suicide.
Even a macro like (infix x + y * z) is just an exercise... I doubt that any lisper would use that to write real code... why on earth would someone reintroduce the absurd function/operator duality and the nightmare of precedence/associativity rules? If you don't like Lisp then just don't use Lisp.
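For the record, here is roughly what that exercise looks like: a sketch of an INFIX macro handling only the four binary arithmetic operators, with the usual precedence and left associativity (INFIX->PREFIX is a helper name invented for this sketch):

(defun infix->prefix (tokens)
  ;; Split on the lowest-precedence operator first, scanning from
  ;; the right so that operators associate to the left.
  (labels ((parse (toks levels)
             (if (null levels)
                 (if (rest toks)
                     (error "Cannot parse ~s" toks)
                     (let ((tok (first toks)))
                       (if (consp tok)
                           (infix->prefix tok) ; parenthesized subexpression
                           tok)))
                 (let ((pos (position-if (lambda (tok)
                                           (member tok (first levels)))
                                         toks :from-end t)))
                   (if pos
                       (list (elt toks pos)
                             (parse (subseq toks 0 pos) levels)
                             (parse (subseq toks (1+ pos)) levels))
                       (parse toks (rest levels)))))))
    (parse tokens '((+ -) (* /)))))

(defmacro infix (&rest tokens)
  (infix->prefix tokens))

;; (infix 2 + 3 * 6) expands to (+ 2 (* 3 6)) => 20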
For a lisper it's not the (infix and ) part that is ugly... it's what is in the middle.
Also why do you think that 2+3*6 is "naturally" 20? Because the teacher hit you on the palms of your hands with a stick when you were a kid until you got it right?

Macros: What's the benefit?

Paul Graham writes:
For example, types seem to be an inexhaustible source of research papers, despite the fact that static typing seems to preclude true macros -- without which, in my opinion, no language is worth using.
What's the big deal with macros? I haven't spent a whole lot of time with them, but from the legacy C/C++ I've worked with they appear to be mostly used as a hack before templates/generics existed.
It's hard to imagine that
DECLARELIST(StrList, string);
StrList slist;
is somehow preferable to
List<String> slist;
Am I missing something?
Then there's the usage as a pseudo-function, like MAKEPOINTS:
POINTS MAKEPOINTS(
DWORD dwValue
);
Why not define it as a function instead? Is this some optimization, where you avoid code duplication without having the added overhead of another stack frame?
Then there's also tricky control flow things involving GOTO, which seem to be of dubious value.
What's so great about macros? They're less type safe (in C and C++) (right?). Why won't Paul Graham program without them?
LISP macros are an entirely different beast. C/C++ macros can merely replace a piece of text with another piece of text using an extremely basic language, whereas a Lisp program (after "reading") is a Lisp data structure and can therefore be manipulated using the whole language.
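A small example of that difference in practice (a sketch, not from the original answer): a Lisp SWAP macro builds a new expression out of its argument forms, and can even invent a fresh variable name, something blind text substitution cannot do safely:

(defmacro swap (a b)
  ;; A and B arrive as forms (data), not text; GENSYM produces a
  ;; temporary name guaranteed not to collide with anything in A or B.
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (swap x y) expands, roughly, to:
;; (LET ((#:TMP123 X)) (SETF X Y) (SETF Y #:TMP123))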
With such macros, you could (given you're a really clever hacker) vastly extend the language, and everybody could use your extension relatively easily, since you did it with macros. Take for example the Common Lisp Object System. At its core, the language has nothing even remotely like objects. It is entirely implemented in the language itself, including a relatively simple syntax for use - using macros.
Of course macros are less necessary when the language has most things you'd ever want built in. OTOH, the Lisp fans are of the opinion that a sufficiently simple language (Lisp) with sufficiently powerful metaprogramming capabilities (macros) is better, since new concepts can be incorporated into the language without changing the spec or working implementations. But the most compelling example of macro usage is the DSL area. Ruby on Rails and others show every day how useful DSLs can be. Yes, Ruby doesn't have macros; it just exploits how far Ruby syntax can be bent. In other languages, or when even Ruby's syntax isn't flexible enough, you need macros or a full-blown parser/interpreter to implement a complex DSL.
Macros are really only good for two things in C/C++, and should generally be the tool of last resort (if you can accomplish something without using macros, do so).
Creating new syntactic structures or abstractions that do not exist in the language.
Eliminating duplication, especially between things that must be in sync with each other.
It's almost never appropriate to use a macro as a function.
You also have to realize that LISP macros are not C/C++ macros.

What features is lisp lacking? [closed]

I have read that most languages are becoming more and more like lisp, adopting features that lisp has had for a long time. I was wondering, what are the features, old or new, that lisp does not have? By lisp I mean the most common dialects like Common Lisp and Scheme.
This question has been asked a million times, but here goes. Common Lisp was created at a time when humans were considered cheap and machines were considered expensive. Common Lisp made things easier for humans at the expense of making it harder for computers. Lisp machines were expensive; PCs with DOS were cheap. This was not good for its popularity: it was cheaper to have a few more humans making mistakes in less expressive languages than to buy a better computer.
Fast forward 30 years, and it turns out that this isn't true. Humans are very, very expensive (and in very short supply; try hiring a programmer), and computers are very, very cheap. Cheaper than dirt, even. What today's world needs is exactly what Common Lisp offered; if Lisp were invented now, it would become very popular. Since it's 30-year-old (plus!) technology, though, nobody thought to look at it, and instead created their own languages with similar concepts. Those are the ones you're using today. (Java + garbage collection is one of the big innovations. For years, GC was looked down upon for being "too slow", but of course, a little research and now it's faster than managing your own memory. And easier for humans, too. How times change...)
Pass-by-reference (C++/C#)
String interpolation (Perl/Ruby) (though a feature of CL21)
Nice infix syntax (though it's not clear that it's worth it) (Python)
Monadic 'iteration' construct which can be overloaded for other uses (Haskell/C#/F#/Scala)
Static typing (though it's not clear that it's worth it) (many languages)
Type inference (not in the standard at least) (Caml and many others) (though CL does some type inference, unlike Python)
Abstract Data Types (Haskell/F#/Caml)
Pattern matching (Haskell/F#/Caml/Scala/others) (in CL, there are libs like optima)
Backtracking (though it's not clear that it's worth it) (Prolog)
ad-hoc polymorphism (see Andrew Myers' answer)
immutable data structures (many languages) (available through libraries, like FSet)
lazy evaluation (Haskell) (available through libraries, like clazy or a cl21 module)
(Please add to this list, I have marked it community wiki.)
This just refers to the Common Lisp and Scheme standards, because particular implementations have added a lot of these features independently. In fact, the question is kind of mistaken. It's so easy to add features to Lisp that it's better to have a core language without many features. That way, people can customize their language to perfectly fit their needs.
Of course, some implementations package the core Lisp with a bunch of these features as libraries. At least for Scheme, PLT Scheme provides all of the above features*, mostly as libraries. I don't know of an equivalent for Common Lisp, but there may be one.
*Maybe not infix syntax? I'm not sure, I never looked for it.
For Common Lisp, I think the following features would be worth adding to a future standard, in the ridiculously unlikely hypothetical situation that another standard is produced. All of these are things that are provided by pretty much every actively maintained CL implementation in subtly incompatible ways, or exist in widely used and portable libraries, so having a standard would provide significant benefits to users while not making life unduly difficult for implementors.
Some features for working with an underlying OS, like invoking other programs or handling command line arguments. Every implementation of CL I've used has something like this, and all of them are pretty similar.
Underlying macros or special forms for BACKQUOTE, UNQUOTE and UNQUOTE-SPLICING.
The meta-object protocol for CLOS.
A protocol for user-defined LOOP clauses. There are some other ways LOOP could be enhanced that probably wouldn't be too painful, either, like clauses to bind multiple values, or iterate over a generic sequence (instead of requiring different clauses for LISTs and VECTORs).
A system-definition facility that integrates with PROVIDE and REQUIRE, while undeprecating PROVIDE and REQUIRE.
Better and more extensible stream facilities, allowing users to define their own stream classes. This might be a bit more painful because there are two competing proposals out there, Gray streams and "simple streams", both of which are implemented by some CL implementations.
Better support for "environments", as described in CLTL2.
A declaration for merging tail calls and a description of the situations where calls that look like tail calls aren't (because of UNWIND-PROTECT forms, DYNAMIC-EXTENT declarations, special variable bindings, etc.).
Undeprecate REMOVE-IF-NOT and friends. Eliminate the :TEST-NOT keyword argument and SET.
Weak references and weak hash tables.
User-provided hash-table tests.
PARSE-FLOAT. Currently if you want to turn a string into a floating point number, you either have to use READ (which may do all sorts of things you don't want) or roll your own parsing function. This is silly.
Here are some more ambitious features that I still think would be worthwhile.
A protocol for defining sequence classes that will work with the standard generic sequence functions (like MAP, REMOVE and friends). Adding immutable strings and conses alongside their mutable kin might be nice, too.
Provide a richer set of associative array/"map" data types. Right now we have ad-hoc stuff built out of conses (alists and plists) and hash-tables, but no balanced binary trees. Provide generic sequence functions to work with these.
Fix DEFCONSTANT so it does something less useless.
Better control of the reader. It's a very powerful tool, but it has to be used very carefully to avoid doing things like interning new symbols. Also, it would be nice if there were better ways to manage readtables and custom reader syntaxes.
A read syntax for "raw strings", similar to what Python offers.
Some more options for CLOS classes and slots, allowing for more optimizations and better performance. Some examples are "primary" classes (where you can only have one "primary class" in a class's list of superclasses), "sealed" generic functions (so you can't add more methods to them, allowing the compiler to make a lot more assumptions about them) and slots that are guaranteed to be bound.
Thread support. Most implementations either support SMP now or will support it in the near future.
Nail down more of the pathname behavior. There are a lot of gratuitously annoying incompatibilities between implementations, like CLISP's insistence on signaling an error when you use PROBE-FILE on a directory, or indeed the fact that there's no standard function that tells you whether a pathname names a directory or not.
Support for network sockets.
A common foreign function interface. It would be unavoidably lowest-common-denominator, but I think having something you could portably rely upon would be a real advantage even if using some of the cooler things some implementations provide would still be relegated to the realm of extensions.
This is in response to the discussion in comments under Nathan Sanders' reply. This is a bit much for a comment, so I'm adding it here. I hope this isn't violating Stack Overflow etiquette.
Ad-hoc polymorphism is defined as different implementations based on specified types. In Common Lisp, using generic functions, you can define something like the following, which gives you exactly that.
; This is unnecessary and created implicitly if not defined.
; It can be explicitly provided to define an interface.
(defgeneric what-am-i? (thing))

; Provide an implementation that works for any type.
(defmethod what-am-i? (thing)
  (format t "My value is ~a~%" thing))

; Specialize on THING being an integer.
(defmethod what-am-i? ((thing integer))
  (format t "I am an integer!~%")
  (call-next-method))

; Specialize on THING being a string.
(defmethod what-am-i? ((thing string))
  (format t "I am a string!~%")
  (call-next-method))
CL-USER> (what-am-i? 25)
I am an integer!
My value is 25
NIL
CL-USER> (what-am-i? "Andrew")
I am a string!
My value is Andrew
NIL
It can be harder than in more popular languages to find good libraries.
It is not purely functional the way Haskell is.
Whole-program transformations. (It would be just like macros, but for everything. You could use it to implement declarative language features.) Equivalently, the ability to write add-ons to the compiler. (At least, Scheme is missing this. CL may not be.)
Built-in theorem prover / proof checker for proving assertions about your program.
Of course, I don't know of any other language that has these, so I don't think there's much competition in terms of features.
You are asking the wrong question. The language with the most features isn't the best. A language needs a goal.
We could add all of this and more
* Pass-by-reference (C++/C#)
* String interpolation (Perl/Ruby)
* Nice infix syntax (though it's not clear that it's worth it) (Python)
* Monadic 'iteration' construct which can be overloaded for other uses (Haskell/C#/F#/Scala)
* Static typing (though it's not clear that it's worth it) (many languages)
* Type inference (not in the standard at least) (Caml and many others)
* Abstract Data Types (Haskell/F#/Caml)
* Pattern matching (Haskell/F#/Caml/Scala/others)
* Backtracking (though it's not clear that it's worth it) (Prolog)
* ad-hoc polymorphism (see Andrew Myers' answer)
* immutable data structures (many languages)
* lazy evaluation (Haskell)
but that would not make a good language. A language is not functional if you use call-by-reference.
If you look at the new Lisp, Clojure: some of these are implemented, but others that CL has are not, and that makes for a good language.
Clojure for example added some:
ad-hoc polymorphism
lazy evaluation
immutable data structures
Type inference (most dynamic languages have compilers that do that)
My answer is:
Scheme should stay as it is.
CL could add some ideas to the standard if they were ever to make a new one.
It's Lisp; most things can be added with libraries.
Decent syntax. (Someone had to say it.) It may be simple/uniform/homoiconic/macro-able/etc, but as a human, I just loathe looking at it :)
It's missing a great IDE

How do I 'get' Lisp? [closed]

I've read The Nature of Lisp. The only thing I really got out of that was "code is data." But without defining what these terms mean and why they are usually thought to be separate, I gain no insight. My initial reaction to "code is data" is, so what?
The old fashioned view: 'it' is interactive computation with symbolic expressions.
Lisp enables easy representation of all kinds of expressions:
English sentence
(the man saw the moon)
math
(2 * x ** 3 + 4 * x ** 2 - 3 * x + 3)
rules
(<- (likes Kim ?x) (likes ?x Lee) (likes ?x Kim))
and also Lisp itself
(mapcar (function sqr) (quote (1 2 3 4 5)))
and many many many more.
Lisp then allows you to write programs that compute with such expressions:
(translate (quote (the man saw the moon)) (quote german))
(solve (quote (2 * x ** 3 + 4 * x ** 2 - 3 * x + 3)) (quote (x . 3)))
(show-all (quote (<- (likes Kim ?x) (likes ?x Lee) (likes ?x Kim))))
(eval (quote (mapcar (function sqr) (quote (1 2 3 4 5)))))
Interactive means that programming is a dialog with Lisp. You enter an expression and Lisp computes the side effects (for example output) and the value.
So your programming session is like 'talking' with the Lisp system. You work with it until you get the right answers.
What are these expressions? They are sentences in some language. They are part descriptions of turbines. They are theorems describing a floating point engine of an AMD processor. They are computer algebra expressions in physics. They are descriptions of circuits. They are rules in a game. They are descriptions of behavior of actors in games. They are rules in a medical diagnosis system.
Lisp allows you to write down facts, rules, and formulas as symbolic expressions. It allows you to write programs that work with these expressions. You can compute the value of a formula. But you can equally easily write programs that compute new formulas from formulas (symbolic math: integrate, derive, ...). That is what Lisp was designed for.
As a side effect Lisp programs are represented as such expressions too. Then there is also a Lisp program that evaluates or compiles other Lisp programs. So the very idea of Lisp, the computation with symbolic expressions, has been applied to Lisp itself. Lisp programs are symbolic expressions and the computation is a Lisp expression.
Alan Kay (of Smalltalk fame) calls the original definition of Lisp evaluation in Lisp the Maxwell's equations of programming.
Write Lisp code. The only way to really 'get' Lisp (or any language, for that matter) is to roll up your sleeves and implement some things in it. Like anything else, you can read all you want, but if you want to really get a firm grasp on what's going on, you've got to step outside the theoretical and start working with the practical.
The way you "get" any language is by trying to write some code in it.
About the "data is code" thing, in most languages there is a clear separation between the code that gets executed, and the data that is processed.
For example, the following simple C-like function:
void foo(int i){
    int j;
    if (i % 42 == 0){
        bar(i-2);
    }
    for (j = 0; j < i; ++j){
        baz();
    }
}
the actual control flow is determined once, statically, while writing the code. The function bar isn't going to change, and the if statement at the beginning of the function isn't going to disappear. This code is not data, it can not be manipulated by the program.
All that can be manipulated is the initial value of i. And on the other hand, that value can not be executed the way code can. You can call the function foo, but you can't call the variable i. So i is data, but it is not code.
Lisp does not have this distinction. The program code is data that can be manipulated too. Your code can, at runtime, take the function foo, and perhaps add another if statement, perhaps change the condition in the for-loop, perhaps replace the call to baz with another function call. All your code is data that can be inspected and manipulated as simply as the above function can inspect and manipulate the integer i.
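A sketch of what that looks like in Lisp terms (BAR and BAZ are stand-in stubs matching the example above):

(defun bar (n) (format t "bar ~a~%" n))
(defun baz () (format t "baz~%"))

;; The definition of FOO is an ordinary list...
(defparameter *foo-source*
  '(defun foo (i)
     (when (zerop (mod i 42))
       (bar (- i 2)))))

;; ...so the program can edit it like one: append a call to BAZ to
;; the body, then hand the modified code back to the evaluator.
(eval (append *foo-source* '((baz))))

(foo 42) ; prints "bar 40" then "baz"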
I would highly recommend Structure and Interpretation of Computer Programs, which actually uses Scheme, but that is a dialect of Lisp. It will help you "get" Lisp by having you do many different exercises, and it goes on to show some of the ways that Lisp is so useful.
I think you have to have more empathy for compiler writers to understand how fundamental the code is data thing is. I'll admit, I've never taken a compilers course, but converting any sufficiently high-level language into machine code is a hard problem, and LISP, in many ways, resembles an intermediate step in this process. In the same way that C is "close to the metal", LISP is close to the compiler.
This worked for me:
Read "The Little Schemer". It's the shortest path to get you thinking in Lisp mode (minus the macros). As a bonus, it's relatively short/fun/inexpensive.
Find a good book/tutorial to get you started with macros. I found chapter 8 of "The Scheme Programming Language" to be a good starting point for Scheme.
http://www.ccs.neu.edu/home/matthias/BTLS/
http://www.scheme.com/tspl3/syntax.html
By watching the legendary Structure and Interpretation of Computer Programs lectures?
In Common Lisp, "code is data" boils down to this. When you write, for example:
(add 1 2)
your Lisp system will parse that text and generate a list with three elements: the symbol ADD, and the numbers 1 and 2. So now they're data. You can do whatever you want with them, replace elements, insert other stuff, etc.
The fun part is that you can pass this data on to the compiler and, because you can manipulate these data structures using Lisp itself, this means you can write programs that write other programs. This is not as complicated as it sounds, and Lispers do it all the time using macros. So, just get a book about Lisp, and try it out.
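A minimal sketch of that whole round trip, from text to data to execution:

(let* ((form (read-from-string "(add 1 2)")) ; text -> the list (ADD 1 2)
       (fixed (cons '+ (rest form))))        ; swap ADD for + by list surgery
  (eval fixed))                              ; => 3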
Okay, I'm going to take a crack at this. I'm new to Lisp myself, just having arrived from the world of python. I haven't experienced that sudden moment of enlightenment that all the old Lispers talk about, but I'll tell you what I am seeing so far.
First, look at this random bit of python code:
def is_palindrome(st):
    l = len(st)/2
    return True if st[:l] == st[:-l-1:-1] else False
Now look at this:
"""
def is_palindrome(st):
l = len(st)/2
return True if st[:l] == st[:-l-1:-1] else False
"""
What do you, as a programmer, see? The code is identical, FYI.
If you are like me, you'll tend to think of the first as active code. It consists of a number of syntactic elements.
The second, despite its similarity, is a single syntactic item. It's a string. You interact with it as a single entity. To deal with it as code - to handle it comfortably along its syntactic boundaries - you will have to do some parsing. To execute it, you need to invoke an interpreter. It's not the same thing at all as the first.
So when we do code generation in most languages what are we dealing with? Strings. When I generate HTML or SQL with python I use python strings as the interface between the two languages. Even if I generate python with python, strings are the tool.*
Doesn't the thought of that just... make you want to dance with joy? There's always this grotesque mismatch between that which you are working with and that which you are working on. I sensed that the first time that I generated SQL with perl. Differences in escaping. Differences in formatting: think about trying to get a generated html document to look tidy. Stuff isn't easy to reuse. Etc.
To solve the problem we serially create templating libraries. Scads of them. Why so many? My guess is that they're never quite satisfactory. By the time they start getting powerful enough they've turned into monstrosities. Granted, some of them - such as SQLAlchemy and Genshi in the python world - are very beautiful and admirable monstrosities. Let's... um... avoid mention of PHP.
Because strings make an awkward interface between the worked-on language and the worked-with, we create a third language - templates - to avoid them. ** This also tends to be a little awkward.
Now let's look at a block of quoted Lisp code:
'(loop for i from 1 to 8 do (print i))
What do you see? As a new Lisp coder, I've caught myself looking at that as a string. It isn't. It is inactive Lisp code. You are looking at a bunch of lists and symbols. Try to evaluate it after turning one of the parentheses around. The language won't let you do it: syntax is enforced.
Using quasiquote, we can shoehorn our own values into this inactive Lisp code:
`(loop for i from 1 to ,whatever do (print i))
Note the nature of the shoehorning: one item has been replaced with another. We aren't formatting our value into a string. We're sliding it into a slot in the code. It's all neat and tidy.
In fact, if you want to directly edit the text of the code, you are in for a hassle. For example, if you are inserting a name <varname> into the code and you also want to use <varname>-tmp in the same code, you can't do it directly the way you can with a template string: "%s-tmp = %s". You have to extract the name into a string, rewrite the string, then turn it into a symbol again, and finally insert it.
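In Lisp that round trip is explicit; here's a sketch (make-tmp-symbol is a made-up helper name):
(defun make-tmp-symbol (varname)
  ;; extract the name as a string, rewrite it, and intern it back into a symbol
  (intern (concatenate 'string (symbol-name varname) "-TMP")))

(make-tmp-symbol 'counter) ; => COUNTER-TMP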
If you want to grasp the essence of Lisp, I think that you might gain more by ignoring defmacro and gensyms and all that window dressing for the moment. Spend some time exploring the potential of the quasiquote, including the ,@ thing. It's pretty accessible. Defmacro itself only provides an easy way to execute the result of quasiquotes. ***
What you should notice is that the hermetic string/template barrier between the worked-on and the worked-with is all but eliminated in Lisp. As you use it, you'll find that your sense of two distinct layers - active and passive - tends to dissipate. Functions call macros which call macros or functions which have functions (or macros!) passed in with their arguments. It's kind of a big soup - a little shocking for the newcomer. That said, I don't find that the distinction between macros and functions is as seamless as some Lisp people say. Mostly it's ok, but every so often as I wander in the soup I find myself bumping up against the ghost of that old barrier - and it really creeps me out!
I'll get over it, I'm sure. No matter. The convenience pays for the scare.
Now that's Lisp working on Lisp. What about working on other languages? I'm not quite there yet, personally, but I think I see the light at the end of the tunnel. You know how Lisp people keep going on about S-expressions being the same thing as a parse tree? I think the idea is to parse the foreign language into S-expressions, work on them in the amazing comfort of the Lisp environment, then send them back to native code. In theory, every language out there could be turned into S-expressions, or even executable lisp code. You're not working in a first language combined with a third language to produce code in a second language. It is all - while you are working on it - Lisp, and you can generate it all with quasiquotes.
Have a look at this (borrowed from PCL):
(define-html-macro :mp3-browser-page ((&key title (header title)) &body body)
  `(:html
    (:head
     (:title ,title)
     (:link :rel "stylesheet" :type "text/css" :href "mp3-browser.css"))
    (:body
     (standard-header)
     (when ,header (html (:h1 :class "title" ,header)))
     ,@body
     (standard-footer))))
Looks like an S-expression version of HTML, doesn't it? I have a feeling that Lisp works just fine as its own templating library.
I've started to wonder about an S-expression version of python. Would it qualify as a Lisp? It certainly wouldn't be Common Lisp. Maybe it would be nicer - for python programmers at least. Hey, and what about P-expressions?
* Python now has something called AST, which I haven't explored. Also a person could use python lists to represent other languages. Relative to Lisp, I suspect that both are a bit of a hack.
** SQLAlchemy is kind of an exception. It's done a nice job of turning SQL directly into python. That said, it appears to have involved significant effort.
*** Take it from a newbie. I'm sure I'm glossing over something here. Also, I realize that quasiquote is not the only way to generate code for macros. It's certainly a nice one, though.
Data is code is an interesting paradigm that supports treating a data structure as a command. Treating data in this way allows you to process and manipulate the structure in various ways - e.g. traversal - by evaluating it. Moreover, the 'data is code' paradigm obviates the need in many cases to develop custom parsers for data structures; the language parser itself can be used to parse the structures.
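For instance, the built-in reader will happily parse a structure like this with no custom parser in sight (the config shape here is made up):
(with-input-from-string (s "(server (port 8080) (host \"localhost\"))")
  (read s)) ; => the nested list (SERVER (PORT 8080) (HOST "localhost"))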
The first step is forgetting everything you have learned with all the C and Pascal-like languages. Empty your mind. This is the hardest step.
Then, take a good introduction to programming that uses Lisp. Don't try to correlate what you see with anything that you know beforehand (when you catch yourself doing that, repeat step 1). I liked Structure and Interpretation of Computer Programs (uses Scheme), Practical Common Lisp, Paradigms of Artificial Intelligence Programming, Casting Spels in Lisp, among others. Be sure to write out the examples. Also try the exercises, but limit yourself to the constructs you have learned in that book. If you find yourself trying to find, for example, some function to set a variable or some statement that resembles a for loop, repeat step 1, then revisit the chapters before to find out how it is done in Lisp.
Read and understand the legendary page 13 of the Lisp 1.5 Programmer's Manual.
According to Alan Kay, at least.
One of the reasons that some university computer science programs use Lisp for their intro courses is that a novice can generally learn functional, procedural, or object-oriented programming more or less equally well. However, it's much harder for someone who already thinks in procedural statements to begin thinking like a functional programmer than it is to go the other way.
When I tried to pick up Lisp, I did it "with a C accent." set! and begin were my friends and constant companions. It is surprisingly easy to write Lisp code without ever writing any functional code, which isn't the point.
You may notice that I'm not answering your question, which is true. I merely wanted to let you know that it's awfully hard to get your mind thinking in a functional style, and it'll be an exciting practice that will make you a stronger programmer in the long run.
Kampai!
P.S. Also, you'll finally understand that "my other car is a cdr" bumper sticker.
To truly grok lisp, you need to write it.
Learn to love car, cdr, and cons. Don't iterate when you can recurse. Start out writing some simple programs (factorial, list reversal, dictionary lookup), and work your way up to more complex ones (sorting sets of items, pattern matching).
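As a sketch, the first two warm-ups might come out like this:
(defun factorial (n)
  (if (<= n 1)
      1
      (* n (factorial (- n 1)))))

(defun my-reverse (lst)
  (if (null lst)
      nil
      (append (my-reverse (cdr lst)) (list (car lst)))))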
On the whole "code is data and data is code" thing, I wouldn't worry about it at this point. You'll understand it eventually, and it's not critical to learning lisp.
I would suggest checking out some of the newer variants of Lisp like Arc or Clojure. They clean up the syntax a little and are smaller, and thus easier to understand, than Common Lisp. Clojure would be my choice. It runs on the JVM, so you don't have the issues with various platform implementations and library support that exist with some Lisp implementations like SBCL.
Read On Lisp and Paradigms of Artificial Intelligence Programming. Both of these have excellent coverage of Lisp macros - which really make the "code is data" concept concrete.
Also, when writing Lisp, don't iterate when you can recurse or map (learn to love mapcar).
It's important to see that data is code AND code is data. This feeds the eval/apply loop. Recursion is also fun.
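A tiny taste of that loop at the REPL:
(apply #'+ '(1 2 3)) ; APPLY feeds a list of data to a function => 6
(eval '(+ 1 2 3))    ; EVAL treats the very same list as code   => 6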
I'd suggest that is a horrible introduction to the language. There are better places to start and better people/articles/books than the one you cited.
Are you a programmer? What language(s)?
To help you with your question, more background might be helpful.
About the whole "code is data" thing:
Isn't that due to the "von Neumann architecture"? If code and data were located in physically separate memory locations, the bits in the data memory could not be executed whereas the bits in the program memory could not be interpreted as anything but instructions to the CPU.
Do I understand this correctly?
I think to learn anything you have to have a purpose for it, such as a simple project.
For Lisp, a good simple project is a symbolic differentiator, so for example
(diff 'x 'x) -> 1
(diff 'a 'x) -> 0
(diff `(+ ,xx ,yy) 'x) where xx and yy are subexpressions
-> `(+ ,(diff xx 'x) ,(diff yy 'x))
etc. etc.
and then you need a simplifier, such as
(simp `(+ ,x 0)) -> x
(simp `(* ,x 0)) -> 0
etc. etc.
so if you start with a math expression, you can eval it to get its value, and you can eval its derivative to get its derivative.
I hope this illustrates what can happen when program code manipulates program code.
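If you want a head start, here's a minimal sketch of diff that handles only variables, +, and * (simp is left as the exercise it deserves to be):
(defun diff (expr var)
  (cond ((eq expr var) 1)       ; d/dx x = 1
        ((atom expr) 0)         ; constants and other variables
        ((eq (first expr) '+)   ; sum rule
         `(+ ,(diff (second expr) var) ,(diff (third expr) var)))
        ((eq (first expr) '*)   ; product rule: (uv)' = u'v + uv'
         (let ((u (second expr)) (v (third expr)))
           `(+ (* ,(diff u var) ,v) (* ,u ,(diff v var)))))
        (t (error "Don't know how to differentiate ~a" expr))))

(diff '(+ x x) 'x)        ; => (+ 1 1)
(eval (diff '(+ x x) 'x)) ; => 2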
As Marvin Minsky observed, computer math is always worried about accuracy and roundoff error, right? Well, symbolic math like this has no roundoff: the answer is either exactly right or completely wrong!
You can get LISP in many ways, the most common is by using Emacs or working next to somebody who has developed LISP already.
Sadly, once you get LISP, it's hard to get rid of it, antibiotics won't work.
BTW: I also recommend The Adventures of a Pythonista in Schemeland.
This may be helpful: http://www.defmacro.org/ramblings/fp.html (it isn't about LISP specifically, but about functional programming as a paradigm).
The way I think about it is that the best part of "code is data" is the fact that functions are, well, functionally no different from any other variable. The fact that you can write code that writes code is one of the most powerful (and often overlooked) features of Lisp. Functions can accept other functions as parameters, and even return functions as a result.
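A small sketch of both directions (compose is hand-rolled here; it isn't part of standard Common Lisp):
(defun compose (f g)
  ;; takes two functions in, returns a brand-new function out
  (lambda (x) (funcall f (funcall g x))))

(funcall (compose #'1+ #'abs) -5) ; => 6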
This lets you code at a much higher level of abstraction than, say, Java. It makes many tasks elegant and concise, which in turn makes the code easier to modify, maintain, and read - at least in theory.
I would say that the only way to truly "get" Lisp is to spend a lot of time with it -- once you get the hang of it, you'll wish you had some of the features of Lisp in your other programming languages.