Racket: enable scribble language in sub module

#lang racket/base
(module x scribble/text
  @(display 123))
It seems like #lang statements are not valid in nested sub-modules, and the expanded module version above is missing something:
error: module: no #%module-begin binding in the module's language
update:
looks like this more or less works, but is there a better way? does scribble do something with output ports that isn't being handled?
#lang racket/base
(module x scribble/text/lang
  (#%module-begin
   #reader scribble/reader @list{
     hi
     @(+ 1 456)
   }))

First, your code has a redundant #%module-begin which can be removed.
#lang does several things -- one is to control the semantics of the file
by determining the set of initially imported bindings, and that's
something that the module form did before #lang came along. With
submodules, it became possible to use module for parts of a file too.
However, #lang can also determine the reader that parses the file, and
that's not possible to do with submodules, so you're stuck with only one
toplevel #lang to set the parser for the whole file.
(Sidenote: There is a technical reason for that. A #lang reader reads
the rest of the file until it reaches an eof value, so a nested
#lang would require getting an eof value before getting the end of
the file, or adding a new kind of eof-like value. That means that it's
a change that should be done carefully -- it's possible to do, of
course, but the need didn't come up often enough. Hopefully it will, in
the future.)
But in your case you don't want a completely new concrete syntax, just
an extension for s-expressions -- and an extension that was chosen to
have a minimal impact on regular code. So in almost all cases it's fine
to just enable the @-form syntax for the whole file, and then use
@-forms where you want them. Since it's just an alternative way of
reading sexprs, you can even use that with module, leading to this
code that doesn't need to use #reader:
#lang at-exp racket/base

@module[x scribble/text/lang]{
  hi
  @(+ 1 456)
}

(require 'x)
One thing that is a bit strange here is using scribble/text/lang and
not just scribble/text. Usually, #lang foo is exactly the same as
(module x foo ...) after reading the code with foo's reader. But in
the case of the scribble/text language there is another difference:
using it as a #lang makes the semantics of the module body be "output
each thing". The idea is that as a language you'll want to spit out
mostly-text files, but as a library you'll want to write code in it
and do the printout yourself.
Since this code uses module, using scribble/text means that you're
not getting the spit-all-out functionality, which is why you need to
explicitly switch to scribble/text/lang. But you could instead have
just done the spitting yourself using the language's output, which would
give you this code:
#lang at-exp racket/base

(module x racket/base
  (require scribble/text)
  (output @list{
    hi
    @(+ 1 456)}))

(require 'x)
Note that scribble/text is not used as a language here, since it
doesn't provide enough to be used as one outside of a #lang.
(Which you've found out, leading to that redundant #%module-begin...)
This version is slightly more verbose, but I'm guessing that it makes
more sense in your case, since using it for some part of the code means
that you want to use it as a library.
Finally, if you really don't want to read the whole file with the @
syntax, only some parts, then the #reader that you've found is
perfectly fine. (And this is made simple with scribble/text that
treats lists as concatenated outputs, so you need just one wrapper for
each chunk of text.)
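For completeness, here is a rough, untested sketch of that last option, simply combining the question's #reader trick with the output function from scribble/text (so this is an illustration of the pattern, not code from the answer):
#lang racket/base

(require scribble/text)

;; one @list{...} wrapper per chunk of text, printed explicitly with output
(output #reader scribble/reader @list{
  hi
  @(+ 1 456)
})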

Related

require vs load vs include vs import in Racket

The Racket docs indicate that Racket has separate forms for: require, load, include, and import. Many other languages contain only one of these, and the terms are generally used synonymously (although obviously with language-specific differences, such as #include in C and import in Java).
Since Racket has all four of these, what is the difference between each one, and which should I be using in general? Also, if each has a specific use, when should I use an alternative? Also, this question seems to indicate that require (paired with provide) is preferred -- why?
1. Require
You are correct, the default one you want is almost always require (paired with a provide). These two forms go hand in hand with Racket's modules and allow you to more easily determine which variables should be scoped in which files. For example, the following file defines three variables, but only exports two.
#lang racket ; a.rkt
(provide b c)
(define a 1)
(define b 2)
(define c 3)
As per the Racket style guide, the provide should ideally be the first form in your file after the #lang so that you can easily tell what a module provides. There are a few cases where this isn't possible, but you probably won't encounter those until you start making your own Racket libraries you intend for public distribution. Personally, I still put a file's require before its provide, but I do sometimes get flack for it.
In a repl, or another module, you can now require this file and see the variables it provides.
Welcome to Racket v6.12.
> (require "a.rkt")
> c
3
> b
2
> a
; a: undefined;
; cannot reference undefined identifier
; [,bt for context]
There are ways to get around this, but this serves as a way for a module to communicate what its explicit exports are.
2. Load
This is a more dynamic variant of require. In general you should not use it; when you need to load a module dynamically, use dynamic-require instead. In this case, load is effectively a primitive that require uses behind the scenes. If you are explicitly looking to emulate the top level (which, to be clear, you almost never want to do), then load is a fine option, although even in those rare cases I would direct you to the racket/load language, which behaves as if each form were entered into the REPL directly.
#lang racket/load
(define x 5)
(displayln x) ; => prints 5
(define x 6)
(displayln x) ; => prints 6
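For the dynamic case mentioned above, a small sketch using dynamic-require (reusing the a.rkt module from part 1) might look like this:
#lang racket/base
;; fetch the exported binding b from a.rkt at run time
(define b (dynamic-require "a.rkt" 'b))
(displayln b) ; => prints 2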
3. Include
Include is similar to #include in C. There are even fewer cases where you should use it. The include form grabs the s-expression syntax of the given path and puts it directly in the file where the include form was. At first, this can appear to be a nice way to split a single module into multiple files, or to have a module 'piece' you want to put in multiple files. There are, however, better ways to do both of those things without using include that also don't come with the confusing side effects you get with include.1 One thing to remember, if you still insist on using include, is that the file you are including probably shouldn't have a #lang line unless you explicitly want to embed a submodule (in which case you will also need a require form in addition to the include).
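If you do still want include, a minimal sketch looks like the following (the file name helpers.rktl is hypothetical; it would contain bare s-expressions and no #lang line):
#lang racket/base
(require racket/include)

;; splice the raw s-expressions from helpers.rktl into this module
(include "helpers.rktl")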
4. Import
Finally, import is not actually a core part of Racket, but is part of its unit system. Units are similar in some ways to modules, but allow for circular dependencies (unit A can depend on unit B while unit B depends on unit A). They have fallen out of favor in recent years due to the syntactic overhead they carry.
Also, unlike the other forms, import (and its counterpart export) takes signatures and relies on an external linker to decide which actual units get linked together. Units are themselves a complex topic and deserve their own question on how to create and link them.
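For a taste of what import and export look like in the unit system, here is a small hedged sketch; the signature and unit names (printer^, printer@) are made up for illustration:
#lang racket

;; a signature declares what a unit imports or exports
(define-signature printer^ (print-it))

;; a unit implementing that signature, importing nothing
(define-unit printer@
  (import)
  (export printer^)
  (define (print-it x) (displayln x)))

;; link and invoke the unit, binding its exports in this module
(define-values/invoke-unit printer@ (import) (export printer^))
(print-it "hello from a unit") ; prints: hello from a unit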
Conclusion (tldr)
TLDR; Use require and provide. They are the best supported and easiest to understand. The other forms do have their uses, but should be thought of as 'advanced uses' only.
1These side effects are the same as you would expect from #include in C, such as order being important and expressions getting intermixed in very unpredictable ways.

Reading unknown symbols as strings in at-exp languages

I have created a module which provides various functions, including #%module-begin. I want to use it with at-exp syntax, which I can do using the following #lang line:
#lang at-exp s-exp "my-library.rkt"
However, this does not read unknown symbols as strings, as would happen for example when using the scribble/text language. How can I provide this functionality from my library, to save me writing quote marks around all my strings?
I think it may have something to do with the #%top function. Perhaps I can require it somehow from scribble/text and then provide it from my library?
What scribble/text does is start reading the file in "text" mode, whereas at-exp starts reading it in "racket" mode. Messing with #%top is not what you want here. To do the same thing as scribble/text, you would need a version of at-exp that starts in text mode. That doesn't exist (yet), though.
The function read-syntax-inside from scribble/reader does this. However, you will have to define your own #lang language that uses it. For that, you might find this documentation helpful, but there's no quick answer.
Update:
I looked at the implementation of scribble/text, and the answer seems a lot quicker than I thought it would be. Something like this should work:
my-library/lang/reader.rkt:
#lang s-exp syntax/module-reader
my-library/main
#:read scribble:read-inside
#:read-syntax scribble:read-syntax-inside
#:whole-body-readers? #t
#:info (scribble-base-reader-info)
(require (prefix-in scribble: scribble/reader)
         (only-in scribble/base/reader scribble-base-reader-info))
Testing it:
#lang my-library
This is text
@(list 'but 'this 'is 'not)
I tested this with my-library/main.rkt re-providing everything from racket/base.
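That main.rkt isn't shown in the answer; a minimal sketch of such a module, assuming you simply want to pass racket/base through, might be:
#lang racket/base
;; my-library/main.rkt
(require racket/base)
(provide (all-from-out racket/base))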

Should macros have side effects?

Can (or should) a macro expansion have side effects? For example, here is a macro which actually goes and grabs the contents of a webpage at compile time:
#lang racket
(require (for-syntax net/url))
(require (for-syntax racket/port))
(define-syntax foo
  (lambda (syntx)
    (datum->syntax #'lex
                   (port->string
                    (get-pure-port
                     (string->url
                      (car (cdr (syntax->datum syntx)))))))))
Then, I can do (foo "http://www.pointlesssites.com/") and it will be replaced with "\r\n<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0 Transitional//EN\"\r\n\t <and so on>"
Is this good practice, or not? Am I guaranteed that Racket will only run this code once? If I add a (display "running...") line to the macro, it only prints once, but I would hate to generalize from one example ...
PS - the reason I'm asking is because I actually think this could be really useful sometimes. For example this is a library which allows you to load (at compile-time) a discovery document from the Google API Discovery service and automatically create wrappers for it. I think it would be really cool if the library actually fetched the discovery document from the web, instead of from a local file.
Also, to give an example of a macro with a different kind of side effects: I once built a macro which translated a small subset of Racket into (eta-expanded) lambda calculus (which is, of course, still runnable in Racket). Whenever the macro finished translating a function, it would store the result in a dictionary so that later invocations of the macro could make use of that function definition in their own translations.
The short answer
It's fine for macros to have side effects, but you should make sure that your program doesn't change behavior when it's compiled ahead of time.
The longer answer
Macros with side effects are a powerful tool, and can let you do things that make programs much easier to write, or enable things that wouldn't be possible at all. But there are pitfalls to be aware of when you use side effects in macros. Fortunately, Racket provides all the tools to make sure that you can do this correctly.
The simplest kind of macro side effect is where you use some external state to find the code you want to generate. The examples you list in the question (reading Google API description) are of this kind. An even simpler example is the include macro:
#lang racket
(include "my-file.rktl")
This reads the contents of my-file.rktl and drops it in place right where the include form is used.
Now, include isn't a good way to structure your program, but this is a quite benign sort of side effect in the macro. It works the same if you compile the file ahead of time as if you don't, since the result of the include is part of the file.
Another simple example that's not good is something like this:
#lang racket
(define-syntax (show-file stx)
  (printf "using file ~a\n" (syntax-source stx))
  #'(void))
(show-file)
That's because the printf gets executed only at compile time, so if you compile your program that uses show-file ahead of time (as with raco make) then the printf will happen then, and won't happen when the program is run, which probably isn't the intention.
Fortunately, Racket has a technique for letting you write macros like show-file effectively. The basic idea is to leave residual code that actually performs the side effect. In particular, you can use Racket's begin-for-syntax form for this purpose. Here's how I would write show-file:
#lang racket
(define-syntax (show-file stx)
  #`(begin-for-syntax
      (printf "using file ~a\n" #,(syntax-source stx))))
(show-file)
Now, instead of happening when the show-file macro is expanded, the printf happens in the code that show-file generates, with the source embedded in the expanded syntax. That way your program keeps working correctly in the presence of ahead-of-time compilation.
There are other uses of macros with side effects, too. One of the most prominent in Racket is for inter-module communication -- because require doesn't produce values that the requiring module can get, to communicate between modules the most effective way is to use side effects. To make this work in the presence of compilation requires almost exactly the same trick with begin-for-syntax.
This is a topic that the Racket community, and I in particular, have thought a lot about, and there are several academic papers talking about how this works:
Composable and Compilable Macros: You want it when?, Matthew Flatt, ICFP 2002
Advanced Macrology and the Implementation of Typed Scheme, Ryan Culpepper, Sam Tobin-Hochstadt, and Matthew Flatt, Scheme Workshop 2007
Languages as Libraries, Sam Tobin-Hochstadt, Ryan Culpepper, Vincent St-Amour, Matthew Flatt, and Matthias Felleisen, PLDI 2011
In Common Lisp, eval-when (a special operator, not a function) lets you decide when a piece of code -- including macro-generating code -- is evaluated: at compile time, load time, or run time.

Writing an Eval Procedure in Scheme?

My problem isn't with the built-in eval procedure but with how to create a simplistic version of it. Just for starters, I would like to be able to take in '(+ 1 2) and have it evaluate the + expression, where the quote usually turns off evaluation.
I have been thinking about this and found a couple things that might be useful:
Unquote: ,
(quasiquote)
(apply)
My main problem is regaining the value of + as a procedure and not a symbol. Once I get that I think I should just be able to use it with the other contents of the list.
Any tips or guidance would be much appreciated.
Firstly, if you're doing what you're doing, you can't go wrong reading at least the first chapter of the Metalinguistic Abstraction section of Structure and Interpretation of Computer Programs.
Now for a few suggestions from myself.
The usual thing to do with a symbol for a Scheme (or, indeed, any Lisp) interpreter is to look it up in some sort of "environment". If you're going to write your own eval, you will likely want to provide your own environment structures to go with it. The one thing for which you could fall back to the Scheme system you're building your eval on top of is the initial environment containing bindings for things like +, cons etc.; this can't be achieved in a 100% portable way, as far as I know, due to various Scheme systems providing different means of getting at the initial environment (including the-environment special form in MIT Scheme and interaction-environment in (Petite) Chez Scheme... and don't ask me why this is so), but the basic idea stays the same:
(define (my-eval form env)
  (cond ((self-evaluating? form) form)
        ((symbol? form)
         ;; note the following calls PCS's built-in eval
         (if (my-kind-of-env? env)
             (my-lookup form env)
             ;; apparently we're dealing with an environment
             ;; from the underlying Scheme system, so fall back to that
             ;; (note we call the built-in eval here)
             (eval form env)))
        ;; "applicative forms" follow
        ;; -- special forms, macro / function calls
        ...))
Note that you will certainly want to check whether the symbol names a special form (lambda and if are necessary -- or you could use cond in place of if -- but you're likely to want more and possibly allow for extensions to the basic set, i.e. macros). With the above skeleton eval, this would have to take place in what I called the "applicative form" handlers, but you could also handle this where you deal with symbols, or maybe put special form handlers first, followed by regular symbol lookup and function application.
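To make the "applicative form" part more concrete, here is one possible shape for it as a hedged Racket sketch. The helpers (self-evaluating?, my-lookup, my-apply) are simplistic stand-ins -- hash tables as environments, a tagged list as a closure -- not the answer's actual design:
#lang racket/base

;; a much simplified sketch -- no environment fallback, only a few special forms
(define (self-evaluating? f) (or (number? f) (string? f) (boolean? f)))
(define (my-lookup sym env) (hash-ref env sym))

(define (my-eval form env)
  (cond ((self-evaluating? form) form)
        ((symbol? form) (my-lookup form env))
        ((eq? (car form) 'quote) (cadr form))
        ((eq? (car form) 'if)
         (if (my-eval (cadr form) env)
             (my-eval (caddr form) env)
             (my-eval (cadddr form) env)))
        ((eq? (car form) 'lambda)
         ;; a closure is a tagged list: parameters, body, defining environment
         (list 'closure (cadr form) (cddr form) env))
        (else
         ;; ordinary application: evaluate operator and operands, then apply
         (my-apply (my-eval (car form) env)
                   (map (lambda (e) (my-eval e env)) (cdr form))))))

(define (my-apply proc args)
  (if (and (pair? proc) (eq? (car proc) 'closure))
      ;; extend the closure's environment with the parameter bindings,
      ;; then evaluate its body forms, returning the last value
      (let ((env (for/fold ((e (cadddr proc))) ((p (cadr proc)) (a args))
                   (hash-set e p a))))
        (for/last ((b (caddr proc))) (my-eval b env)))
      ;; otherwise assume it is a procedure from the underlying Scheme
      (apply proc args)))

;; quick check, with + taken from the host system as the answer suggests:
(my-eval '((lambda (x y) (+ x y)) 1 2) (hash '+ +)) ; => 3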

What makes Lisp macros so special?

Reading Paul Graham's essays on programming languages one would think that Lisp macros are the only way to go. As a busy developer, working on other platforms, I have not had the privilege of using Lisp macros. As someone who wants to understand the buzz, please explain what makes this feature so powerful.
Please also relate this to something I would understand from the worlds of Python, Java, C# or C development.
To give the short answer, macros are used for defining language syntax extensions to Common Lisp or Domain Specific Languages (DSLs). These languages are embedded right into the existing Lisp code. Now, the DSLs can have syntax similar to Lisp (like Peter Norvig's Prolog Interpreter for Common Lisp) or completely different (e.g. Infix Notation Math for Clojure).
Here is a more concrete example: Python has list comprehensions built into the language. This gives a simple syntax for a common case. The line
divisibleByTwo = [x for x in range(10) if x % 2 == 0]
yields a list containing all even numbers between 0 and 9. Back in the Python 1.5 days there was no such syntax; you'd use something more like this:
divisibleByTwo = []
for x in range( 10 ):
   if x % 2 == 0:
      divisibleByTwo.append( x )
These are both functionally equivalent. Let's invoke our suspension of disbelief and pretend Lisp has a very limited loop macro that just does iteration and no easy way to do the equivalent of list comprehensions.
In Lisp you could write the following. I should note this contrived example is picked to be identical to the Python code, not to be a good example of Lisp code.
;; the following two functions just make an equivalent of Python's range function
;; you can safely ignore them unless you are running this code
(defun range-helper (x)
  (if (= x 0)
      (list x)
      (cons x (range-helper (- x 1)))))

(defun range (x)
  (reverse (range-helper (- x 1))))

;; equivalent to the python example:
;; define a variable
(defvar divisibleByTwo nil)

;; loop from 0 up to and including 9
(loop for x in (range 10)
      ;; test for divisibility by two
      if (= (mod x 2) 0)
      ;; append to the list
      do (setq divisibleByTwo (append divisibleByTwo (list x))))
Before I go further, I should better explain what a macro is. It is a transformation performed on code by code. That is, a piece of code, read by the interpreter (or compiler), which takes in code as an argument, manipulates it, and then returns the result, which is then run in place.
Of course that's a lot of typing and programmers are lazy. So we could define a DSL for doing list comprehensions. In fact, we're using one macro already (the loop macro).
Lisp defines a couple of special syntax forms. The quote (') indicates the next token is a literal. The quasiquote or backtick (`) indicates the next token is a literal with escapes. Escapes are indicated by the comma operator. The literal '(1 2 3) is the equivalent of Python's [1, 2, 3]. You can assign it to another variable or use it in place. You can think of `(1 2 ,x) as the equivalent of Python's [1, 2, x] where x is a variable previously defined. This list notation is part of the magic that goes into macros. The second part is the Lisp reader which intelligently substitutes macros for code but that is best illustrated below:
So we can define a macro called lcomp (short for list comprehension). Its syntax will be exactly like the Python we used in the example [x for x in range(10) if x % 2 == 0] - (lcomp x for x in (range 10) if (= (mod x 2) 0))
(defmacro lcomp (expression for var in list conditional conditional-test)
  ;; create a unique variable name for the result
  (let ((result (gensym)))
    ;; the arguments are really code so we can substitute them
    ;; store nil in the unique variable name generated above
    `(let ((,result nil))
       ;; var is a variable name
       ;; list is the list literal we are supposed to iterate over
       (loop for ,var in ,list
             ;; conditional is if or unless
             ;; conditional-test is (= (mod x 2) 0) in our examples
             ,conditional ,conditional-test
             ;; and this is the action from the earlier lisp example
             ;; result = result + [x] in python
             do (setq ,result (append ,result (list ,expression))))
       ;; return the result
       ,result)))
Now we can execute at the command line:
CL-USER> (lcomp x for x in (range 10) if (= (mod x 2) 0))
(0 2 4 6 8)
Pretty neat, huh? And it doesn't stop there. You have a mechanism, or a paintbrush, if you like. You can have any syntax you could possibly want. Like Python or C#'s with syntax. Or .NET's LINQ syntax. In the end, this is what attracts people to Lisp - ultimate flexibility.
You will find a comprehensive debate around lisp macro here.
An interesting subset of that article:
In most programming languages, syntax is complex. Macros have to take apart program syntax, analyze it, and reassemble it. They do not have access to the program's parser, so they have to depend on heuristics and best-guesses. Sometimes their cut-rate analysis is wrong, and then they break.
But Lisp is different. Lisp macros do have access to the parser, and it is a really simple parser. A Lisp macro is not handed a string, but a preparsed piece of source code in the form of a list, because the source of a Lisp program is not a string; it is a list. And Lisp programs are really good at taking apart lists and putting them back together. They do this reliably, every day.
Here is an extended example. Lisp has a macro, called "setf", that performs assignment. The simplest form of setf is
(setf x whatever)
which sets the value of the symbol "x" to the value of the expression "whatever".
Lisp also has lists; you can use the "car" and "cdr" functions to get the first element of a list or the rest of the list, respectively.
Now what if you want to replace the first element of a list with a new value? There is a standard function for doing that, and incredibly, its name is even worse than "car". It is "rplaca". But you do not have to remember "rplaca", because you can write
(setf (car somelist) whatever)
to set the car of somelist.
What is really happening here is that "setf" is a macro. At compile time, it examines its arguments, and it sees that the first one has the form (car SOMETHING). It says to itself "Oh, the programmer is trying to set the car of something. The function to use for that is 'rplaca'." And it quietly rewrites the code in place to:
(rplaca somelist whatever)
Common Lisp macros essentially extend the "syntactic primitives" of your code.
For example, in C, the switch/case construct only works with integral types and if you want to use it for floats or strings, you are left with nested if statements and explicit comparisons. There's also no way you can write a C macro to do the job for you.
But, since a lisp macro is (essentially) a lisp program that takes snippets of code as input and returns code to replace the "invocation" of the macro, you can extend your "primitives" repertoire as far as you want, usually ending up with a more readable program.
To do the same in C, you would have to write a custom pre-processor that eats your initial (not-quite-C) source and spits out something that a C compiler can understand. It's not a wrong way to go about it, but it's not necessarily the easiest.
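As a purely illustrative sketch of the kind of "new primitive" this answer describes, written in Racket (the dialect this page is tagged with) rather than Common Lisp: string-case is a made-up name, and Racket's built-in case can in fact already compare strings, so treat this only as a demonstration of the technique of extending the syntax yourself.
#lang racket

;; a case-like form that dispatches on strings
(define-syntax string-case
  (syntax-rules (else)
    ((_ e (else body ...)) (begin e body ...))
    ((_ e ((str ...) body ...) clause ...)
     (let ((v e))
       (if (member v (list str ...) string=?)
           (begin body ...)
           (string-case v clause ...))))))

(string-case "cat"
  (("dog" "wolf") 'canine)
  (("cat" "lion") 'feline)
  (else 'unknown)) ; => 'feline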
Lisp macros allow you to decide when (if at all) any part or expression will be evaluated. To give a simple example, think of C's:
expr1 && expr2 && expr3 ...
What this says is: Evaluate expr1, and, should it be true, evaluate expr2, etc.
Now try to make this && into a function... that's right, you can't. Calling something like:
and(expr1, expr2, expr3)
Will evaluate all three exprs before yielding an answer regardless of whether expr1 was false!
With lisp macros you can code something like:
(defmacro && (expr1 &rest exprs)
  `(if ,expr1       ;` Warning: I have not tested
       (&& ,@exprs) ; this and might be wrong!
       nil))
now you have an &&, which you can call just like a function and it won't evaluate any forms you pass to it unless they are all true.
To see how this is useful, contrast:
(&& (very-cheap-operation)
    (very-expensive-operation)
    (operation-with-serious-side-effects))
and:
and(very_cheap_operation(),
    very_expensive_operation(),
    operation_with_serious_side_effects());
Other things you can do with macros include creating new keywords and/or mini-languages (check out the (loop ...) macro for an example) and integrating other languages into lisp. For example, you could write a macro that lets you say something like:
(setvar *rows* (sql select count(*)
                    from some-table
                    where column1 = "Yes"
                    and column2 like "some%string%"))
And that's not even getting into reader macros.
Hope this helps.
I don't think I've ever seen Lisp macros explained better than by this fellow: http://www.defmacro.org/ramblings/lisp.html
A lisp macro takes a program fragment as input. This program fragment is represented as a data structure which can be manipulated and transformed any way you like. In the end the macro outputs another program fragment, and this fragment is what is executed at runtime.
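As a small Racket-flavored illustration of "code as a data structure" (show-input is a made-up macro, used only to show what the macro receives):
#lang racket

;; the macro receives its whole use site as a syntax object, and
;; syntax->datum turns that into an ordinary list we can inspect
(define-syntax (show-input stx)
  (printf "got: ~s\n" (syntax->datum stx)) ; runs at expansion time
  #'(void))

(show-input (+ 1 (* 2 3))) ; prints: got: (show-input (+ 1 (* 2 3)))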
C# does not have a macro facility; an equivalent would be if the compiler parsed the code into a CodeDOM tree and passed that to a method, which transformed it into another CodeDOM tree that is then compiled into IL.
This could be used to implement "sugar" syntax like the foreach statement, the using clause, LINQ select expressions and so on, as macros that transform into the underlying code.
If Java had macros, you could implement Linq syntax in Java, without needing Sun to change the base language.
Here is pseudo-code for how a lisp-style macro in C# for implementing using could look:
define macro "using":
using ($type $varname = $expression) $block
into:
$type $varname;
try {
$varname = $expression;
$block;
} finally {
$varname.Dispose();
}
Since the existing answers give good concrete examples explaining what macros achieve and how, perhaps it'd help to collect together some of the thoughts on why the macro facility is a significant gain in relation to other languages; first from these answers, then a great one from elsewhere:
... in C, you would have to write a custom pre-processor [which would probably qualify as a sufficiently complicated C program] ...
—Vatine
Talk to anyone that's mastered C++ and ask them how long they spent learning all the template fudgery they need to do template metaprogramming [which is still not as powerful].
—Matt Curtis
... in Java you have to hack your way with bytecode weaving, although some frameworks like AspectJ allows you to do this using a different approach, it's fundamentally a hack.
—Miguel Ping
DOLIST is similar to Perl's foreach or Python's for. Java added a similar kind of loop construct with the "enhanced" for loop in Java 1.5, as part of JSR-201. Notice what a difference macros make. A Lisp programmer who notices a common pattern in their code can write a macro to give themselves a source-level abstraction of that pattern. A Java programmer who notices the same pattern has to convince Sun that this particular abstraction is worth adding to the language. Then Sun has to publish a JSR and convene an industry-wide "expert group" to hash everything out. That process--according to Sun--takes an average of 18 months. After that, the compiler writers all have to go upgrade their compilers to support the new feature. And even once the Java programmer's favorite compiler supports the new version of Java, they probably still can't use the new feature until they're allowed to break source compatibility with older versions of Java. So an annoyance that Common Lisp programmers can resolve for themselves within five minutes plagues Java programmers for years.
—Peter Seibel, in "Practical Common Lisp"
Think of what you can do in C or C++ with macros and templates. They're very useful tools for managing repetitive code, but they're limited in quite severe ways.
Limited macro/template syntax restricts their use. For example, you can't write a template which expands to something other than a class or a function. Macros and templates can't easily maintain internal data.
The complex, very irregular syntax of C and C++ makes it difficult to write very general macros.
Lisp and Lisp macros solve these problems.
Lisp macros are written in Lisp. You have the full power of Lisp to write the macro.
Lisp has a very regular syntax.
Talk to anyone that's mastered C++ and ask them how long they spent learning all the template fudgery they need to do template metaprogramming. Or all the crazy tricks in (excellent) books like Modern C++ Design, which are still tough to debug and (in practice) non-portable between real-world compilers even though the language has been standardised for a decade. All of that melts away if the language you use for metaprogramming is the same language you use for programming!
I'm not sure I can add some insight to everyone's (excellent) posts, but...
Lisp macros work great because of the nature of Lisp syntax.
Lisp is an extremely regular language (think: everything is a list); macros enable you to treat data and code as the same (no string parsing or other hacks are needed to modify lisp expressions). You combine these two features and you have a very clean way to modify code.
Edit: What I was trying to say is that Lisp is homoiconic, which means that the data structure for a lisp program is written in lisp itself.
So, you end up with a way of creating your own code generator on top of the language, using the language itself with all its power (e.g., in Java you have to hack your way with bytecode weaving; although some frameworks like AspectJ allow you to do this using a different approach, it's fundamentally a hack).
In practice, with macros you end up building your own mini-language on top of lisp, without the need to learn additional languages or tooling, and with the full power of the language itself.
Lisp macros represent a pattern that occurs in almost any sizeable programming project. Eventually, in a large program, you have a certain section of code where you realize it would be simpler and less error prone to write a program that outputs source code as text which you can then just paste in.
In Python, objects have two methods: __repr__ and __str__. __str__ is simply the human-readable representation. __repr__ returns a representation that is valid Python code, which is to say, something that can be entered into the interpreter as valid Python. This way you can create little snippets of Python that generate valid code that can be pasted into your actual source.
In Lisp this whole process has been formalized by the macro system. Sure it enables you to create extensions to the syntax and do all sorts of fancy things, but its actual usefulness is summed up by the above. Of course it helps that the Lisp macro system allows you to manipulate these "snippets" with the full power of the entire language.
In short, macros are transformations of code. They allow you to introduce many new syntax constructs. E.g., consider LINQ in C#. In lisp, there are similar language extensions that are implemented by macros (e.g., the built-in loop construct, iterate). Macros significantly decrease code duplication. Macros allow embedding «little languages» (e.g., where in C#/Java one would use XML for configuration, in lisp the same thing can be achieved with macros). Macros can hide the difficulties of using libraries.
E.g., in lisp you can write
(iter (for (id name) in-clsql-query "select id, name from users" on-database *users-database*)
      (format t "User with ID of ~A has name ~A.~%" id name))
and this hides all the database stuff (transactions, proper connection closing, fetching data, etc.) whereas in C# this requires creating SqlConnections, SqlCommands, adding SqlParameters to SqlCommands, looping on SqlDataReaders, properly closing them.
While the above all explains what macros are and even has cool examples, I think the key difference between a macro and a normal function is that Lisp evaluates all the parameters first before calling the function. With a macro it's the reverse: Lisp passes the parameters unevaluated to the macro. For example, if you pass (+ 1 2) to a function, the function will receive the value 3. If you pass this to a macro, it will receive the list (+ 1 2). This can be used to do all kinds of incredibly useful stuff.
Adding a new control structure, e.g. loop or the deconstruction of a list.
Measuring the time it takes to execute a function passed in. With a function, the parameter would be evaluated before control is passed to the function. With a macro, you can splice your code between the start and stop of your stopwatch. The below has the exact same code in a macro and a function, and the output is very different. Note: This is a contrived example and the implementation was chosen so that it is identical, to better highlight the difference.
(defmacro working-timer (b)
  `(let ((start (get-universal-time))
         (result (eval ,b)))   ;; not splicing here to keep stuff simple
     (- (get-universal-time) start)))

(defun broken-timer (b)
  (let ((start (get-universal-time))
        (result (eval b)))     ;; doesn't even need eval
    (- (get-universal-time) start)))

(working-timer (sleep 10)) => 10
(broken-timer (sleep 10))  => 0
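For readers following along in Racket (this page's tag), here is a hedged sketch of the same contrast; it assumes nothing beyond racket/base's current-inexact-milliseconds and sleep:
#lang racket

;; the macro drops the unevaluated expression between the two clock reads,
;; so the work is timed; the function only ever sees the finished value
(define-syntax-rule (working-timer expr)
  (let ((start (current-inexact-milliseconds)))
    expr
    (- (current-inexact-milliseconds) start)))

(define (broken-timer value)
  (let ((start (current-inexact-milliseconds)))
    (- (current-inexact-milliseconds) start)))

(working-timer (sleep 1)) ; => roughly 1000 (milliseconds)
(broken-timer (sleep 1))  ; => roughly 0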
One-liner answer:
Minimal syntax => Macros over Expressions => Conciseness => Abstraction => Power
Lisp macros do nothing more than write code programmatically. That is, after expanding the macros, you get nothing more than Lisp code without macros. So, in principle, they achieve nothing new.
However, they differ from macros in other programming languages in that they write code at the level of expressions, whereas other languages' macros work at the level of strings. This is unique to Lisp thanks to its parentheses -- or, put more precisely, its minimal syntax, which is possible thanks to those parentheses.
As shown in many examples in this thread, and also in Paul Graham's On Lisp, lisp macros can then be a tool to make your code much more concise. When conciseness reaches a point, it offers new levels of abstraction for code to be much cleaner. Going back to the first point again, in principle they do not offer anything new, but that's like saying since paper and pencils (almost) form a Turing machine, we do not need an actual computer.
If one knows some math, think about why functors and natural transformations are useful ideas. In principle, they do not offer anything new. However by expanding what they are into lower-level math you'll see that a combination of a few simple ideas (in terms of category theory) could take 10 pages to be written down. Which one do you prefer?
I got this from the common lisp cookbook and I think it explained why lisp macros are useful.
"A macro is an ordinary piece of Lisp code that operates on another piece of putative Lisp code, translating it into (a version closer to) executable Lisp. That may sound a bit complicated, so let's give a simple example. Suppose you want a version of setq that sets two variables to the same value. So if you write
(setq2 x y (+ z 3))
when z=8 both x and y are set to 11. (I can't think of any use for this, but it's just an example.)
It should be obvious that we can't define setq2 as a function. If x=50 and y=-5, this function would receive the values 50, -5, and 11; it would have no knowledge of what variables were supposed to be set. What we really want to say is: when you (the Lisp system) see (setq2 v1 v2 e), treat it as equivalent to (progn (setq v1 e) (setq v2 e)). Actually, this isn't quite right, but it will do for now. A macro allows us to do precisely this, by specifying a program for transforming the input pattern (setq2 v1 v2 e) into the output pattern (progn ...)."
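The transformation described in that quote is easy to try out; here is a hedged Racket sketch of it (set! standing in for setq, begin for progn), keeping the same evaluates-e-twice caveat the text mentions:
#lang racket

(define-syntax-rule (setq2 v1 v2 e)
  (begin (set! v1 e) (set! v2 e)))  ; note: e is evaluated twice, as the text says

(define x 50)
(define y -5)
(define z 8)
(setq2 x y (+ z 3))
(list x y) ; => '(11 11)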
If you thought this was nice you can keep on reading here:
http://cl-cookbook.sourceforge.net/macros.html
In Python you have decorators: basically, a function that takes another function as input. You can do whatever you want: call the function, do something else, wrap the function call in a resource acquire/release, etc., but you don't get to peek inside that function. Say we wanted to make it more powerful: if your decorator received the code of the function as a list, then you could not only execute the function as-is, but also execute parts of it, reorder its lines, and so on.