I want to use Lisp's read-time conditionalization feature to merge two versions of my code, old and new. I have something like:
'(ant bee #+new cat #+new dog #+new eel fish)
so the old version is:
'(ant bee fish)
and the new version, when the feature new is defined, is:
'(ant bee cat dog eel fish)
Is there some way of writing this more succinctly, with only one occurrence of #+new?
If you can also use a backquote, here's a solution that works in at least three Common Lisp implementations. You can simply backquote the list, and then use the conditional before a comma-at spliced list:
CL-USER> `(a b #+new ,@'(c d e) f)
(A B F)
CL-USER> (push :new *features*)
...
CL-USER> `(a b #+new ,@'(c d e) f)
(A B C D E F)
In the documentation section 2.4.8.17 Sharpsign Plus, we read that (emphasis added):
#+ operates by first reading the feature expression and then skipping over the form if the feature expression fails. While reading the test,
the current package is the KEYWORD package. Skipping over the form is
accomplished by binding *read-suppress* to true and then calling read.
There's no universal implementation of backquote, so I'm not absolutely certain that this is completely portable. We're depending on the assumption that ,@<something> will be read as one form so that it's what's skipped over. If it's possible for it to be processed as more than one form, this won't work. I've tested it in Clozure CL, SBCL, and CLISP, and it works in all of those.
Will something like this help:
#+new
(defvar *new-items* '(cat dog eel))
#-new
'(ant bee fish)
#+new
(append '(ant bee fish) *new-items*)
to be more portable? It's not exactly one use of #+new, but it makes the conditionalization less scattered.
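A related sketch (not from the answers above): keep a single occurrence of #+new and let append assemble the list at load time, so nothing depends on how backquote forms are read:
(defparameter *items*
  (append '(ant bee)
          #+new '(cat dog eel)
          '(fish)))
;; without :new  => (ANT BEE FISH)
;; with :new     => (ANT BEE CAT DOG EEL FISH)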
I'm attempting to implement a lisp version of a domain specific language that uses the binary operators and comparators on discrete probability distributions. Is there a safe(ish) and concise way to extend the functionality of these atoms < <= = >= > + - / * to work with new types, without completely bricking the language?
I realize just making new operator names would be the easiest solution, but that answer is not compelling nor fun, and certainly doesn't live up to lisp's reputation as the programmable programming language.
For example, say we have the hash-table:
((4 . 1/4) (3 . 1/4) (2 . 1/4) (1 . 1/4))
which we create with the macro (d 4), and we want to be able to compare it to a number like so
(< (d 4) 3)
which we would want to return a hash-table representing a 50% chance it's true and 50% chance it's false, say in this form:
((0 . 1/2) (1 . 1/2))
but currently it produces the error:
*** - <: #S(HASH-TABLE :TEST FASTHASH-EQL (4 . 1/4) (3 . 1/4) (2 . 1/4) (1 . 1/4)) is not a real number
There are three answers to this:
no, you can't do it because + &c are not generic functions in CL and you may not redefine symbols exported from the CL package ...
... but yes, of course you can do it if you're willing to accept a small amount of inconvenience, because Lisp is a programmable programming language ...
... but no, in fact you can't completely do it if you want to preserve the semantics of your extended language, and there isn't anything you can do about that.
So let's look at these in order.
(1): The arithmetic operators are not generic functions in CL ...
This means you can't extend them: they work on numbers. And you can't redefine them because the language forbids that.
(2): ... but of course you can get around that ...
So, the first trick is that we want to define a package in which the arithmetic operators are not the CL versions of them. We can do this manually, but we'll use Tim Bradshaw's conduits system to make life easier:
(in-package :org.tfeb.clc-user)
(defpackage :cl/generic-arith
  ;; The package people will use
  (:use)
  (:extends/excluding :cl
   #:< #:<= #:= #:>= #:> #:+ #:- #:/ #:*)
  (:export
   #:< #:<= #:= #:>= #:> #:+ #:- #:/ #:*))

(defpackage :cl/generic-arith/user
  ;; User package
  (:use :cl/generic-arith))

(defpackage :cl/generic-arith/impl
  ;; Implementation package
  (:use :cl/generic-arith))
So now:
> (in-package :cl/generic-arith/user)
#<The CL/GENERIC-ARITH/USER package, 0/16 internal, 0/16 external>
> (describe '*)
* is a symbol
name "*"
value #<unbound value>
function #<unbound function>
plist nil
package #<The CL/GENERIC-ARITH package, 0/1024 internal, 978/1024 external>
So now we can get on with defining these things. There are some problems: because we've created an environment where, for instance, * is not cl:*, anything that uses * as a symbol will not work. For instance, cl:* is bound to the value of the last form evaluated in the REPL, and typing * isn't going to get that symbol: you'll need to type cl:*.
Similarly the + method combination will not work: you'd have to explicitly say
(defgeneric foo (...)
  ...
  (:method-combination cl:+)
  ...)
So there is some pain. But not, probably, that much pain.
And now we can start implementing things. The obvious approach is to define generic functions which are 2-arg versions of the functions we need, and then define the functions on top of them. That means we can use things like compiler macros for the actual operators without fear of breaking the underlying generic functions.
So we start with:
(in-package :cl/generic-arith/impl)
(defgeneric +/2 (a b)
  (:method ((a number) (b number))
    (cl:+ a b)))
...
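As a sketch of the next step (my own illustration, not part of the answer): the n-ary + can then be layered on +/2, and the zero-argument case is exactly where point (3) below bites:
(defun + (&rest args)
  ;; zero arguments: which additive identity?  See point (3) below.
  (if (null args)
      0
      (reduce #'+/2 args)))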
And now you think for a bit and realise that ...
(3) ... but there are problems you can't get around, in any language
The problem is at least this: + is one of the two operators on a field, and the way it is defined in CL in terms of some two-argument function like +/2 above is something like this (this may be wrong implementationally but it's how the maths would go):
(+ a) is a;
(+ a b) is (+/2 a b);
(+ a b ...) is (+/2 a (+ b ...)), and this is fine by associativity (but beware of floats).
And that looks fine but we've missed a case:
(+) is the additive identity of the field (and similarly (*) is the multiplicative identity).
Oh, wait: what field? For cl:+ this is fine: it's the identities of the field of numbers. But now? Who knows? And there's nothing that can be done about that: if you want + to work the way it does in CL, except on other fields, you can't have that.
In general, if + and * are to represent the two operations of a field, and zero-argument versions of those operators are allowed, then those versions should return the two identities of the field (which they do in CL). But that means that all fields they are defined for must share those two identities (which, again, they do in CL: the fields of rationals & complex numbers share identities). This means in turn that (+ x 3) (from (+ x (*) (*) (*))) must be allowed (and so on). So you either have to disallow the zero-argument case, or give up the idea that + and * represent the field operations, or both.
You can do this by using generic-cl, which does exactly that and is extensible.
(= "hello" y) ;; would throw an error in standard CL
You would :use :generic-cl instead of :cl in a package definition, and you would use :generic-cl-user at the REPL.
To extend the operators to new types, define methods on the generic functions equalp and lessp.
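As a hedged sketch of what that might look like for the question's distributions (the dist struct and its alist slot are stand-ins for the hash-table representation, and I'm assuming generic-cl's lessp is the generic function behind <):
(defpackage :dice-user
  (:use :generic-cl))
(in-package :dice-user)

(defstruct dist alist)   ; alist of (outcome . probability)

(defmethod lessp ((a dist) (b real))
  ;; (< distribution number) => distribution over 1 (true) and 0 (false)
  (let ((p (loop for (outcome . prob) in (dist-alist a)
                 when (cl:< outcome b) sum prob)))
    (make-dist :alist (list (cons 1 p) (cons 0 (cl:- 1 p))))))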
Say I have macros foo and bar. If I write (foo (bar)) my understanding is that in most (all?) lisps foo is going to be given '(bar), not whatever bar would have expanded to had it been expanded first. Some lisps have something like local-expand where the implementation of foo can explicitly request the expansion of its argument before continuing, but why isn't that the default? It seems more natural to me. Is this an accident of history or is there a strong reason to do it the way most lisps do it?
I've been noticing in Rust that I want the macros to work this way. I'd like to be able to wrap a bunch of declarations inside a macro call so that the macro can then crawl the declarations and generate reflection information. But if I use a macro to generate the definitions that I want crawled, my macro wrapping the declarations sees the macro invocations that would generate the declarations rather than the actual declarations.
If I write (foo (bar)) my understanding is that in most (all?) lisps foo is going to be given '(bar), not whatever bar would have expanded to had it been expanded first.
That would restrict Lisp such that (bar) would need to be something that can be expanded -> probably something which is written in the Lisp language.
Lisp developers would like to see macros where the inner stuff can be a completely new language with different syntax rules. For example, something where FOO does not expand its subforms, but transpiles/compiles a fully or partially different language to Lisp. Something which does not have the usual prefix expression syntax:
Examples
(postfix (a b +) sin)
-> (sin (+ a b))
Here the + in the macro form is not the infix +.
or
(query-all (person name)
where (person = "foo") in database DB)
Lisp macros don't work on language parse trees, but arbitrary, possibly nested, s-expressions. Those don't need to be valid Lisp code outside of that macro -> don't need to follow the usual syntax / semantics.
Common Lisp has the function MACROEXPAND and MACROEXPAND-1, such that the outer macro can expand inner code if it wants to do it during its own macro expansion:
CL-USER 26 > (defmacro bar (a) `(* ,a ,a))
BAR
CL-USER 27 > (bar 10)
100
CL-USER 28 > (defmacro foo (a &environment e)
               (let ((f (macroexpand a e)))
                 (print (list a '-> f))
                 `(+ ,(second f) ,(third f))))
FOO
CL-USER 29 > (foo (bar 10))
((bar 10) -> (* 10 10))
20
In the above macro, if FOO saw only an expanded form, it could not print both the source and the expansion.
This works also with scoped macros. Here the BAR macro gets locally redefined and the MACROEXPAND generates different code inside FOO for the same form:
CL-USER 30 > (macrolet ((bar (a)
                          `(expt ,a ,a)))
               (foo (bar 10)))
((bar 10) -> (EXPT 10 10))
20
If foo is a macro then (foo (bar)) must pass the raw syntax (bar) to the foo macro expander. This is absolutely essential.
This is because foo can give any meaning whatsoever to bar.
Consider the defmacro macro itself:
(defmacro foo (bar) body)
Here, the argument (bar) is a parameter list ("macro lambda list") and not a form (Common Lisp jargon for to-be-evaluated expression). It says that the macro shall have a single parameter called bar. Therefore it is nonsensically wrong to try to expand (bar) before handing it to defmacro's expander.
Only if we know that an expression is going to be evaluated is it legitimate to expand it as a macro. But we don't know that about an expression which is the argument to a macro.
Other counterexamples are easy to come up with. (defstruct point (x 0) (y 0)): (x 0) isn't a call to operator x, but a slot x whose default value is 0. (dolist (x list) ...): x is a variable to be stepped over list.
That said, there are implementation choices regarding the timing of macro expansion.
A Lisp implementation can macro-expand an entire top-level form before evaluating or compiling any of it. Or it can expand incrementally, so that for instance when (+ x y) is being processed, x is macro-expanded and evaluated or compiled into some intermediate form already before y is even looked at.
A pure syntax tree interpreter for Lisp which always keeps the code in the original form and always expands (and re-expands) the code as it is evaluating has certain interactivity advantages. Any macro that you rewrite goes instantly "live" in all the existing code that you have input into the REPL, like existing function definitions. It is obviously quite inefficient in terms of execution speed, but any code that you call uses the latest definition of your macros without any hassle of telling the system to reload that code to have it expanded again. That also eliminates the risk that you're testing something that is still based on the old, buggy version of some macro that you fixed. If you're ever writing a Lisp, the range of timing choices for expansion is good to keep in mind, so that you consciously reject the choices you don't go with.
In turn, that said, there are some constraints on the timing of macro expansion. Conceivably, a Lisp interpreter or compiler, when processing an entire file, could go through all the top level forms and expand all of them at once before processing any of them. The implementor will quickly learn that this is bad, because some of the later forms depend on the side effects of the earlier forms. Such as, oh, macros being defined! If the first form defines a macro, which the second one uses, then we cannot expand the second form without evaluating the effect of the first.
It makes sense, in a Lisp, to split up physical top-level forms into logical ones. Suppose that someone writes (or uses a macro to generate) code like (progn (defmacro foo ...) (foo)). This entire progn cannot be macro-expanded up-front before evaluation; it won't work! There has to be a rule such as "whenever a top-level form is based on the progn operator, then the children of the progn operator are considered top-level forms by all the processing which treats top-level forms specially, and this rule is recursively applied." The top-level entry point into the macro-expanding code walker then has to contain special-case hacks to do this recognition of logical top-level forms, breaking them up and recursing into a lower-level expander which doesn't do those checks any more.
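A tiny concrete case (the macro name here is arbitrary):
;; If this whole PROGN were expanded before any of it was evaluated,
;; (GREET) could not be expanded: the macro GREET would not exist yet.
(progn
  (defmacro greet () ''hello)
  (greet))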
I've been noticing in Rust that I want the macros to work this way.
I'd like to be able to wrap a bunch of declarations inside a macro
call so that the macro can then crawl the declarations and generate
reflection information.
It does sound like local-expand is the right tool for that job.
However, an alternative approach would be something like this:
Suppose that wrapper is our outer macro, and that the intended
syntax is:
(wrapper decl1 decl2 ...)
where decl is a declaration that potentially uses some standard form declare.
We can let
(wrapper decl1 decl2 ...)
expand to
(let-syntax ([declare our-declare])
  decl1 decl2 ...
  (post-process-reflection-information))
where our-declare is a helper macro that expands to both the standard declaration and some form that stores the reflection information, and post-process-reflection-information is another macro that does any needed post-processing.
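For comparison, here is a rough Common Lisp analogue of the same idea (every name in it is hypothetical, and it records the reflection data at load time): the outer macro uses macrolet to locally rebind the declaration form.
(defvar *reflection-info* '()
  "Reflection entries collected by WRAPPER (illustration only).")

(defmacro wrapper (&body decls)
  ;; locally rebind DECL so each declaration both defines the structure
  ;; and records a (name slot...) entry
  `(macrolet ((decl (name &rest slots)
                `(progn
                   (defstruct ,name ,@slots)
                   (push '(,name ,@slots) *reflection-info*))))
     ,@decls))

;; Usage:
;; (wrapper
;;   (decl point x y)
;;   (decl size w h))
;; *reflection-info* => ((SIZE W H) (POINT X Y))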
I think you are trying to use macros for something they are not designed to solve. Macros are primarily a text/code substitution mechanism, and in the case of Lisp this looks a lot like a simplified term-rewriting system (see also How does term-rewriting based evaluation work?). There are different strategies possible for how to substitute a pattern of code, and in which order, but in C/C++ preprocessor macros, in LaTeX, and in Lisp, the process is typically done by computing the expansion until the form is no longer expandable, starting from the topmost terms. This order is quite natural and because it is distinct from normal evaluation rules, it can be used to implement things the normal evaluation rules cannot.
In your case, you are interested in getting access to all the declarations of some object/type, something which falls under the introspection/reflection category (as you said yourself). But implementing reflection/introspection with macros doesn't look totally doable, since macros work on abstract syntax trees and this might be a poor way to access the metadata you want.
Typically the compiler is going to parse/analyze the struct definitions and build the definitive, canonical representation of the struct, even if there are different ways to express that syntactically; it may even use prior information not available directly as source code to compute more interesting metadata (e.g. if you had inheritance, there could be a set of properties inherited from a type defined in another module; I don't think this applies to Rust).
I think Rust does not currently offer compile-time or runtime introspection facilities, which explains why you are going the macro route. In Common Lisp, macros are definitely not used for introspection; the actual values obtained after evaluation (at different times) are used to gain information about an object. For example, defclass expands into a set of instructions that register a class in the language, but in order to get all the slots of a class, you ask the language to give them to you, e.g:
(defclass foo () (x)) ;; define class foo with slot X
(defclass bar () (y)) ;; define class bar with slot Y
(defclass zot (foo bar) ()) ;; define class zot with foo and bar as superclasses
USER> (c2mop:class-slots (find-class 'zot))
(#<SB-MOP:STANDARD-EFFECTIVE-SLOT-DEFINITION X>
#<SB-MOP:STANDARD-EFFECTIVE-SLOT-DEFINITION Y>)
I don't know what the solution to your problem is, but in addition to the other answers, I think it is not specifically a fault of the macro system. If a macro system is, as usual, defined only as a term-rewriting system, it will always have difficulty performing some tasks at the semantic level. But Rust is still evolving, so there might be better ways to do things in the future.
I often find myself typing (ns user) and pressing C+c M+n repeatedly when I work on clojure source code. The problem is that I often use functions like source and doc and they are in clojure.repl and I don't want to :require them to my namespaces. What are experienced clojurians doing in this case?
Clarification: I know how clojure's namespacing works. What I want to achieve is to be able to call (source myfunc),(doc myfunc) etc. without the need to use fully qualified names in the REPL and without the need to require the functions from clojure.repl in each of my namespaces.
Thanks for clearing up what you are asking for.
Leiningen has a feature called :injections which you can combine with vinyasa to get this effect. If you put something like this in your Leiningen profile:
~/.lein/profiles.clj:
{:user {:plugins []
        :dependencies [[im.chit/vinyasa "0.1.8"]]
        :injections [(require 'vinyasa.inject)
                     (vinyasa.inject/inject
                      'clojure.core '>
                      '[[clojure.repl doc source]
                        [clojure.pprint pprint pp]])]}}
Because this is in your profiles.clj it only affects you. Other people who work on the project won't be affected.
Because injecting into clojure.core strikes me as a little iffy, I follow the vinyasa author's advice and inject into a namespace called . which is created by my profile for every project I work on. This namespace always exists, which makes these functions work even in newly created namespaces that don't yet refer clojure.core.
My ~/.lein/profiles.clj:
{:user
 {:plugins []
  :dependencies [[spyscope "0.1.4"]
                 [org.clojure/tools.namespace "0.2.4"]
                 [io.aviso/pretty "0.1.8"]
                 [im.chit/vinyasa "0.4.7"]]
  :injections
  [(require 'spyscope.core)
   (require '[vinyasa.inject :as inject])
   (require 'io.aviso.repl)
   (inject/in ;; the default injected namespace is `.`
    ;; note that :refer, :all and :exclude can be used
    [vinyasa.inject :refer [inject [in inject-in]]]
    [clojure.pprint :refer [pprint]]
    [clojure.java.shell :refer [sh]]
    [clojure.repl :refer [doc source]]
    [vinyasa.maven pull]
    [vinyasa.reflection .> .? .* .% .%> .& .>ns .>var])]}}
which works like this:
hello.core> (./doc first)
-------------------------
clojure.core/first
([coll])
Returns the first item in the collection. Calls seq on its
argument. If coll is nil, returns nil.
nil
hello.core> (in-ns 'new-namespace)
#namespace[new-namespace]
new-namespace> (./doc first)
nil
new-namespace> (clojure.core/refer-clojure)
nil
new-namespace> (./doc first)
-------------------------
clojure.core/first
([coll])
Returns the first item in the collection. Calls seq on its
argument. If coll is nil, returns nil.
nil
To achieve this, you may use the vinyasa library, in particular its inject functionality. Basically, you add the needed functions from the clojure.repl namespace to the clojure.core namespace. After that, you won't need to require them explicitly. See the following:
user> (require '[vinyasa.inject :refer [inject]])
nil
;; injecting `source` and `doc` symbols to clojure.core
user> (inject '[clojure.core [clojure.repl source doc]])
[]
;; switching to some other namespace
user> (require 'my-project.core)
nil
user> (in-ns 'my-project.core)
#namespace[my-project.core]
;; now those functions are accessible w/o qualifier
my-project.core> (doc vector)
-------------------------
clojure.core/vector
([] [a] [a b] [a b c] [a b c d] [a b c d e] [a b c d e f] [a b c d e f & args])
Creates a new vector containing the args.
nil
While learning Lisp, I've seen that if there are two parameters to a function, where one is a single element or a subset (needle), and the other is a list (haystack), the element or subset always comes first.
Examples:
(member 3 '(3 1 4 1 5))
(assoc 'jane '((jane doe)
               (john doe)))
(subsetp '(a e) '(a e i o u))
To me, it seems as if there was a rule in Lisp that functions should follow this guidance: Part first, entire thing second.
Is this finding actually based on a guideline in Lisp, or is it accidental?
Functions like member and assoc are at least from 1960.
I would simply expect that it followed mathematical notation, for example in set theory:
e ∈ m
Since Lisp uses prefix notation, the predicate/function/operator comes first, the element second and the set is third:
(∈ e m)
John McCarthy had a Ph.D. in Mathematics.
Generally it is also more useful in Common Lisp to have the set-like argument last:
(defun find-symbol (name package) ...)
The actual definition in Common Lisp is:
(defun find-symbol (name &optional (package *package*)) ...)
This allows us to use the current package as a useful default.
Let's see. The first McCarthy LISP from 1960 sometimes had the list as the first argument. See page 123 in this LISP manual. E.g.
;; 1960 maplist
(defun maplist (list function)
...)
Now this is perhaps because this function was one of the first higher order functions that were made. In fact it predated the first implementation as it was in the first Lisp paper. In the same manual on page 125 you'll find sassoc and it looks very much like assoc today:
(defun sassoc (needle haystack default-function)
...)
Both of these look the same in the next version 1.5 of the language. (See page 63 for maplist and 60 for sassoc)
From here to Common Lisp there are divergent paths that join again. A lot of new ideas came about, but there has to be a reason to break compatibility to actually do it. I can think of one reason, and that is support for multiple lists. In Common Lisp maplist is:
(defun maplist (function &rest lists+)
...)
A quick search in the CLHS for common argument names in "wrong" order gave me fill, map-into, and sort. There might be more.
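For reference, those exceptions take the sequence first:
(sort (list 3 1 2) #'<)                            ; => (1 2 3)
(fill (make-list 3) 7)                             ; => (7 7 7)
(map-into (make-list 3) #'+ '(1 2 3) '(10 20 30))  ; => (11 22 33)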
Peter Norvig's style guide says to follow conventions, but is not more detailed than that. When reading Scheme SRFIs, they often mention de facto implementations around and what Common Lisp has as a solution before suggesting something similar as a standard. I do the same when choosing how to implement things.
I've heard that Lisp's macro system is very powerful. However, I find it difficult to find some practical examples of what they can be used for; things that would be difficult to achieve without them.
Can anyone give some examples?
Source code transformations. All kinds. Examples:
New control flow statements: You need a WHILE statement? Your language doesn't have one? Why wait for the benevolent dictator to maybe add one next year. Write it yourself. In five minutes. (A minimal sketch follows this list.)
Shorter code: You need twenty class declarations that look almost identical - only a limited number of places differ. Write a macro form that takes the differences as parameters and generates the source code for you. Want to change it later? Change the macro in one place.
Replacements in the source tree: You want to add code into the source tree? A variable really should be a function call? Wrap a macro around the code that 'walks' the source and changes the places where it finds the variable.
Postfix syntax: You want to write your code in postfix form? Use a macro that rewrites the code to the normal form (prefix in Lisp).
Compile-time effects: You need to run some code in the compiler environment to inform the development environment about definitions? Macros can generate code that runs at compile time.
Code simplifications/optimizations at compile-time: You want to simplify some code at compile time? Use a macro that does the simplification - that way you can shift work from runtime to compile time, based on the source forms.
Code generation from descriptions/configurations: You need to write a complex mix of classes. For example your window has a class, subpanes have classes, there are space constraints between panes, you have a command loop, a menu and a whole bunch of other things. Write a macro that captures the description of your window and its components and creates the classes and the commands that drive the application - from the description.
Syntax improvements: Some language syntax looks not very convenient? Write a macro that makes it more convenient for you, the application writer.
Domain specific languages: You need a language that is nearer to the domain of your application? Create the necessary language forms with a bunch of macros.
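To make the first item concrete, here is a minimal sketch of a WHILE construct (not part of standard Common Lisp) built on DO:
(defmacro while (test &body body)
  `(do ()
       ((not ,test))
     ,@body))

;; usage
(let ((i 0))
  (while (< i 3)
    (print i)
    (incf i)))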
Meta-linguistic abstraction
The basic idea: everything that is on the linguistic level (new forms, new syntax, form transformations, simplification, IDE support, ...) can now be programmed by the developer piece by piece - no separate macro processing stage.
Pick any "code generation tool". Read their examples. That's what it can do.
Except you don't need to use a different programming language, put any macro-expansion code where the macro is used, run a separate command to build, or have extra text files sitting on your hard disk that are only of value to your compiler.
For example, I believe reading the Cog example should be enough to make any Lisp programmer cry.
Anything you'd normally want to have done in a pre-processor?
One macro I wrote is for defining state machines for driving game objects. It's easier to read the code using the macro:
(def-ai ray-ai
  (ground
   (let* ((o (object))
          (r (range o)))
     (loop for p in *players*
           if (line-of-sight-p o p r)
             do (progn
                  (setf (target o) p)
                  (transit seek)))))
  (seek
   (let* ((o (object))
          (target (target o))
          (r (range o))
          (losp (line-of-sight-p o target r)))
     (when losp
       (let ((dir (find-direction o target)))
         (setf (movement o) (object-speed o dir))))
     (unless losp
       (transit ground)))))
than it is to read the generated code:
(progn
  (defclass ray-ai (ai) nil (:default-initargs :current 'ground))
  (defmethod gen-act ((ai ray-ai) (state (eql 'ground)))
    (macrolet ((transit (state)
                 (list 'setf (list 'current 'ai) (list 'quote state))))
      (flet ((object ()
               (object ai)))
        (let* ((o (object)) (r (range o)))
          (loop for p in *players*
                if (line-of-sight-p o p r)
                  do (progn (setf (target o) p) (transit seek)))))))
  (defmethod gen-act ((ai ray-ai) (state (eql 'seek)))
    (macrolet ((transit (state)
                 (list 'setf (list 'current 'ai) (list 'quote state))))
      (flet ((object ()
               (object ai)))
        (let* ((o (object))
               (target (target o))
               (r (range o))
               (losp (line-of-sight-p o target r)))
          (when losp
            (let ((dir (find-direction o target)))
              (setf (movement o) (object-speed o dir))))
          (unless losp (transit ground)))))))
By encapsulating the whole state-machine generation in a macro, I can also ensure that I only refer to defined states and warn if that is not the case.
With macros you can define your own syntax, thus you extend Lisp and make it
suited for the programs you write.
Check out the very good online book Practical Common Lisp for practical examples.
7. Macros: Standard Control Constructs
8. Macros: Defining Your Own
Besides extending the language's syntax to allow you to express yourself more clearly, it also gives you control over evaluation. Try writing your own if in your language of choice so that you can actually write my_if something my_then print "success" my_else print "failure" and not have both print statements get evaluated. In any strict language without a sufficiently powerful macro system, this is impossible. No Common Lisp programmers would find the task too challenging, though. Ditto for for-loops, foreach loops, etc. You can't express these things in C because they require special evaluation semantics (people actually tried to introduce foreach into Objective-C, but it didn't work well), but they are almost trivial in Common Lisp because of its macros.
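In Common Lisp the exercise is indeed short; a minimal sketch of such a my-if (the name is just for illustration) might look like this:
(defmacro my-if (test then &optional else)
  ;; only the selected branch is evaluated
  `(cond (,test ,then)
         (t ,else)))

(my-if (> 2 1)
       (print "success")
       (print "failure"))   ; prints only "success"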
R, the standard statistics programming language, has macros (R manual, chapter 6). You can use this to implement the function lm(), which analyzes data based on a model that you specify as code.
Here's how it works: lm(Y ~ aX + b, data) will try to find a and b parameters that best fit your data. The cool part is, you can substitute any linear equation for aX + b and it will still work. It's a brilliant feature to make statistics computation easier, and it only works so elegantly because lm() can analyze the equation it's given, which is exactly what Lisp macros do.
Just a guess -- Domain Specific Languages.
Macros are essential in providing access to language features. For instance, in TXR Lisp, I have a single function called sys:capture-cont for capturing a delimited continuation. But this is awkward to use by itself. So there are macros wrapped around it, such as suspend, or obtain and yield which provide alternative models for resumable, suspended execution. They are implemented here.
Another example is the complex macro defstruct which provides syntax for defining a structure type. It compiles its arguments into lambda-s and other material which is passed to the function make-struct-type. If programs used make-struct-type directly for defining OOP structures, they would be ugly:
1> (macroexpand '(defstruct foo bar x y (z 9) (:init (self) (setf self.x 42))))
(sys:make-struct-type 'foo 'bar '()
  '(x y z) ()
  (lambda (#:g0101)
    (let ((#:g0102 (struct-type #:g0101)))
      (unless (static-slot-p #:g0102 'z)
        (slotset #:g0101 'z
                 9)))
    (let ((self #:g0101))
      (setf (qref self x)
            42)))
  ())
Yikes! There is a lot going on that has to be right. For instance, we don't just stick a 9 into slot z because (due to inheritance) we could actually be the base structure of a derived structure, and in the derived structure, z could be a static slot (shared by instances). We would be clobbering the value set up for z in the derived class.
In ANSI Common Lisp, a nice example of a macro is loop, which provides an entire sub-language for parallel iteration. A single loop invocation can express an entire complicated algorithm.
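For example, a single LOOP form can combine parallel stepping, a termination test and collection:
(loop for x in '(1 2 3 4)
      for i from 0
      while (< x 4)
      collect (list i x))
;; => ((0 1) (1 2) (2 3))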
Macros let us think independently about the syntax we would like in a language feature, and the underlying functions or special operators required to implement it. Whatever choices we make in these two, macros will bridge them for us. I don't have to worry that make-struct-type is ugly to use, so I can focus on the technical aspects; I know that the macro can look the same regardless of how I make various trade-offs. I made the design decision that all struct initialization is going to be done by some functions registered to the type. Okay, that means that my macro has to take all the initializations in the slot-defining syntax, and compile the anonymous functions, where the slot initialization is done by code generated in the bodies.
Macros are compilers for bits of syntax, for which functions and special operators are the target language.
Sometimes people (non-Lisp people, usually) criticize macros in this way: macros don't add any capabilities, only syntactic sugar.
Firstly, syntactic sugar is a capability.
Secondly, you also have to consider macros from a "total hacker perspective": combining macros with implementation-level work. If I'm adding features to a Lisp dialect, such as structures or continuations, I am actually extending the power. The involvement of macros in that enterprise is essential. Even though macros aren't the source of the new power (it doesn't emanate from the macros themselves), they help tame and harness it, giving it expression.
If you don't have sys:capture-cont, you can't just hack up its behavior with a suspend macro. But if you don't have macros, then you have to do something awfully inconvenient to provide access to a new feature that isn't a library function, namely hard-coding some new phrase structure rules into a parser.