How can I get the lambda list of a function (its parameter specification), or at least the number of arguments it takes?
For example:
(defun a (a b))
(get-arg-list #'a) ;-> '(a b)
Common Lisp provides the function FUNCTION-LAMBDA-EXPRESSION which may be able to recover the source expression, which then includes the lambda list.
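For example (a sketch: the exact result is implementation dependent, and an implementation is allowed to return NIL instead of the source):

CL-USER> (function-lambda-expression #'a)
(LAMBDA (A B) (BLOCK A NIL))  ; recovered source, or NIL if unavailable
NIL                           ; true if the function is a closure
A                             ; the function's name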
LispWorks has defined a function FUNCTION-LAMBDA-LIST which returns the arglist.
Many other implementations have some form of ARGLIST function in some internal package.
Many Common Lisp users use SLIME, a very clever editor extension for the GNU Emacs editor. It has a backend for Common Lisp called SWANK. The SWANK sources provide all kinds of interfaces to the various Common Lisp implementations, including getting the arglist of functions.
This is implementation specific, but this CLHS function might get you started - http://clhs.lisp.se/Body/f_descri.htm
The easiest way to do this is to use the SWANK library which is used by SLIME.
The way to use it is to load SWANK, which is most easily done through Quicklisp:
(ql:quickload "swank")
Then, you can get the argument list using the following function:
CL-USER> (swank-backend:arglist #'a)
(A B)
I keep reading that Lisp macros are one of the most powerful features of the language. But reading over the specifications and manuals, they are just functions whose arguments are unevaluated.
Given any macro (defmacro example (arg1 ... argN) (body-forms)) I could just write (defun example (arg1 ... argN) ... (body-forms)) with the last body-form turned into a list and then call it like (eval (example 'arg1 ... 'argN)) to emulate the same behavior of the macro. If this were the case, then macros would just be syntactic sugar, but I doubt that syntactic sugar would be called a powerful language feature. What am I missing? Are there cases where I cannot carry out this procedure to emulate a macro?
I can't speak to "powerful", because that is somewhat subjective, but macros are regular Lisp functions that work on Lisp data, so they are as expressive as other functions. This isn't the case with templates or generic functions in other languages, which rely more on static types and are more restricted (on purpose).
In some ways, yes, macros are simple syntactic facilities, but your emulation focuses on the dynamic semantics of macros, i.e. how you can run code that evaluates macros at runtime. However:
the code using eval is not equivalent to expanded code
the preprocessing/compile-time aspect of macros is not emulated
Lexical scope
Functions, like +, do not inherit the lexical scope of the code that calls them:
(let ((x 30))
  (+ 3 4))
Inside the definition of +, you cannot access x. Being able to do so is what "dynamic scope" is about (more precisely, see dynamic extent and indefinite scope variables). But nowadays relying on dynamic scope is quite the exception: most functions use lexical scope, and this is the case for eval too.
The eval function evaluates a form in the null lexical environment, and it never has access to the surrounding lexical bindings. As such, it behaves like any regular function.
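A minimal illustration (assuming x names no global variable; *y* is a hypothetical special variable, shown for contrast):

(let ((x 30))
  (eval 'x))      ; error: X is unbound - EVAL cannot see the lexical binding

(defvar *y* 10)   ; special variable: indefinite scope, dynamic extent
(let ((*y* 30))
  (eval '*y*))    ; => 30 - dynamic bindings are visible during EVAL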
So, in your example, calling eval on the transformed source code will not work, since arg1 to argN will probably be unbound (it depends on what your macro does).
In order to have an equivalent form, you have to inject bindings in the transformed code, or expand at a higher level:
(defun expand-square (var)
  (list '* var var))

;; instead of:
(defun foo (x) (eval (expand-square 'x))) ;; x unbound during eval

;; inject bindings
(defun foo (x) (eval `(let ((z ,x)) ,(expand-square 'z))))

;; or expand the top-level form
(eval `(defun foo (x) ,(expand-square 'x)))
Note that macros (in Common Lisp) also have access to the lexical environment through &environment parameters in their lambda-list. The use of this environment is implementation dependent, but can be used to access the declarations associated with a variable, for example.
Notice also how, in the last example, you evaluate the code when defining the function rather than when running it. This is the second key point about macros.
Expansion time
In order to emulate macros you could locally replace each call to a macro by a form that emulates it at runtime (using let to capture all the bindings you want visible inside the expanded code, which is tedious), but then you would miss the most useful aspect of macros: generating code ahead of time.
The last example above shows how you can quote defun and wrap it in eval, and basically you would need to do that for all functions if you wanted to emulate the preprocessing work done by macros.
The macro system is a way to integrate this preprocessing step in the language in a way that is simple to use.
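For comparison, here is a sketch of the same square example written as a macro; the expansion is produced when the caller is processed, not each time it runs:

(defmacro square (var)
  (list '* var var))

(macroexpand-1 '(square x))   ; => (* X X)

(defun foo (x)
  (square x))                 ; body becomes (* x x) at expansion time; no EVAL needed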
Conclusion
Macros themselves are a nice way to abstract things when functions can't. For example you can have a more human-friendly, stable syntax that hides implementation details. That's how you define pattern-matching abilities in Common Lisp that make it look like they are part of the language, without too much runtime penalty or verbosity.
They rely on simple term-rewriting functions that are integrated in the language, but you can emulate their behavior either at compile-time or runtime yourself if you want. They can be used to perform different kinds of abstraction that are usually missing or more cumbersome to do in other languages, but are also limited: they don't "understand" code by themselves, they don't give access to all the facilities of the compiler (type propagation, etc.). If you want more you can use more advanced libraries or compiler tools (see deftransform), but macros at least are portable.
Macros are not just functions whose arguments are unevaluated. Macros are functions between programming languages. In other words a macro is a function whose argument is a fragment of source code of a programming language which includes the macro, and whose value is a fragment of source code of a language which does not include the macro (or which includes it in a simpler way).
In very ancient, very rudimentary, Lisps, before people really understood what macros were, you could simulate macros with things called FEXPRs combined with EVAL. A FEXPR was simply a function which did not evaluate its arguments. This worked in such Lisps only because they were completely dynamically scoped, and the cost of it working was that compilation of such things was not possible at all. Those are two enormous costs.
In any modern Lisp, this won't work at all. You can write a toy version of FEXPRs as a macro (this may be buggy):
(defmacro deffex (fx args &body body)
  (assert (every (lambda (arg)
                   (and (symbolp arg)
                        (not (member arg lambda-list-keywords))))
                 args)
      (args) "not a simple lambda list")
  `(defmacro ,fx ,args
     `(let ,(mapcar (lambda (argname argval)
                      `(,argname ',argval))
                    ',args (list ,@args))
        ,@',body)))
So now we could try to write a trivial binding construct I'll call with using this thing:
(deffex with (var val form)
  (eval `(let ((,var ,val)) ,form)))
And this seems to work:
> (with a 1 a)
1
Of course, we're paying the cost that no code which uses this construct can ever be compiled so all our programs will be extremely slow, but perhaps that is a cost we're willing to accept (it's not, but never mind).
Except, of course, it doesn't work, at all:
> (with a 1
    (with b 2
      (+ a b)))
Error: The variable a is unbound.
Oh dear.
Why doesn't it work? It doesn't work because Common Lisp is lexically scoped, and eval is a function: it can't see the lexical bindings.
So not only does this kind of approach prevent compilation in a modern Lisp, it doesn't work at all.
People often, at this point, suggest some kind of kludge which would allow eval to see lexical bindings. The cost of such a solution is that all the lexical bindings need to exist in compiled code: no variable can ever be compiled away, not even its name. That essentially means no good compiler can ever be used, even for the small part of your programs that can be compiled at all in a language which, like CL, makes extensive use of macros. For instance, if you ever use defun, you're not going to be able to compile the code in its body. People do use defun occasionally, I think.
So this approach simply won't work: it worked by happenstance in very old Lisps but it can't work, even at the huge cost of preventing compilation, in any modern Lisp.
More to the point this approach obfuscates the understanding of what macros are: as I said at the start, macros are functions between programming languages, and understanding that is critical. When you are designing macros you are implementing a new programming language.
Generics seem to offer a nice facility for pulling out a common word and letting it act on things according to the types you pass it, with extensibility after-the-fact.
But what about common words that are already taken and aren't defined as generic? If I try to define REMOVE, for instance:
(defclass reticulator () (splines))
(defmethod remove ((item reticulator)
                   sequence &key from-end test test-not start end count key))
I get an error in SBCL:
COMMON-LISP:REMOVE already names an ordinary function or a macro.
Is there an idiom or approved way to "generic-ify" one of these built-in functions? Do people do this?
Just to see what would happen I tried overriding REMOVE with a generic:
(defgeneric remove (item sequence &key from-end test test-not start end count key))
WARNING: redefining COMMON-LISP:REMOVE in DEFGENERIC
I don't know if there's a "good" way of doing that, which would pipe to the old implementation with allowance of overloading for specific types that want to give the word new meaning.
Why are things how they are?
The first version of Common Lisp was designed starting in 1981/82, and the result was published as the book Common Lisp the Language in 1984. The language was largely based on Lisp Machine Lisp (aka Zetalisp). Zetalisp was much larger than the then-published Common Lisp and included an early object system called Flavors. Much of Zetalisp was implemented in an object-oriented fashion, which cost performance; on Lisp Machines the hit wasn't so large, because they had specialized processors with optimized instruction sets. So Common Lisp did not include any object system and was thus slightly optimized for performance on the typical processors of that time. An object system was to be added in a later version of Common Lisp, once there was enough experience with object-oriented extensions to Lisp - remember, we are talking about the early 80s.
This 1984 Common Lisp had a limited form of generic behavior. For example, the function REMOVE works on sequences - sequence being a new type which has vectors and lists as subtypes.
Later, when standardization of Common Lisp started in 1986, the designers were looking for an object system for Common Lisp - none of the proposed ones was good enough - so a new one was developed based on New Flavors (from Symbolics, a newer version of the above-mentioned Flavors) and CommonLoops (from Xerox PARC). Those already had generic functions, but with single dispatch. CLOS then added multiple dispatch.
It was decided not to replace the basic functions with CLOS generic functions. One reason for that is performance: CLOS generic functions need a relatively complex dispatch mechanism, and this dispatch is decided at runtime. There is no CLOS feature for static compile-time dispatch, and there is also no standardized feature to make parts of classes 'sealed' so that they cannot be changed. Thus a highly dynamic system like CLOS has runtime costs.
Some functions are defined as CLOS generic functions (like PRINT-OBJECT) and some implementations have large parts of Common Lisp with CLOS implementations (streams, conditions, ...) - but this is implementation specific and not required by the standard. There are also several libraries, which provide CLOS based functionality of built-in CL features: like I/O with extensible CLOS-based streams.
Also note that redefining existing Common Lisp functions is undefined behavior.
So Common Lisp chose to provide a powerful object system, but left it to the individual implementations how much they want to use CLOS in the base language - with the limitation that functions standardized as normal, non-generic functions usually should not be replaced by users with CLOS generic functions.
Some Lisp dialects/implementations tried to deal with these problems by defining a faster CLOS variant, which could then be the basis for much of the language. See for example Apple's Dylan language. For a newer approach, see the language Julia.
Your own improved Common Lisp
Packages (-> symbol namespaces) allow you to define your own improved CL: here we define a new package which has all CL symbols, with only cl:remove being shadowed by its own symbol. We then define a CLOS generic function named bettercl::remove and write two example methods.
CL-USER 165 > (defpackage "BETTERCL" (:use "CL") (:shadow cl:remove))
#<The BETTERCL package, 1/16 internal, 0/16 external>
CL-USER 166 > (in-package "BETTERCL")
#<The BETTERCL package, 1/16 internal, 0/16 external>
BETTERCL 167 > (defgeneric remove (item whatever))
#<STANDARD-GENERIC-FUNCTION REMOVE 4060000C64>
BETTERCL 168 > (defmethod remove (item (v vector)) (cl:remove item v))
#<STANDARD-METHOD REMOVE NIL (T VECTOR) 40200AB12B>
BETTERCL 169 > (remove 'a #(1 2 3 a b c))
#(1 2 3 B C)
BETTERCL 170 > (defmethod remove ((digit integer) (n integer))
                 "remove a digit from an integer, returns a new integer"
                 (let ((s (map 'string
                               (lambda (item)
                                 (character (princ-to-string item)))
                               (cl:remove digit
                                          (map 'vector
                                               #'digit-char-p
                                               (princ-to-string n))))))
                   (if (= (length s) 0) 0 (read-from-string s))))
#<STANDARD-METHOD REMOVE NIL (INTEGER INTEGER) 40200013C3>
BETTERCL 171 > (remove 8 111888111348111)
11111134111
Now you could also export the symbols from BETTERCL, such that you can use this package in your application packages, instead of the package CL.
This approach has been used before. For example CLIM (the Common Lisp Interface Manager) defines a package CLIM-LISP, which is used as the dialect to program in.
Common Lisp sometimes provides both a function and a related CLOS generic function
See the standard function DESCRIBE, which can be extended by writing methods for the standard CLOS generic function DESCRIBE-OBJECT.
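A sketch, reusing the reticulator class from the question above:

(defmethod describe-object ((object reticulator) stream)
  (format stream "~&~S reticulates splines.~%" object))

(describe (make-instance 'reticulator))   ; DESCRIBE calls the method above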
Improvement experiments for Common Lisp
Another way to make standard Common Lisp functions extensible is to replace them with versions which use an extensible CLOS-based protocol. Note that HOW to replace standard functions is implementation specific, and the effects may also be implementation specific. If a compiler has inlined a built-in function into other code, for example, then redefining it will have no effect on the already inlined code.
For an example of this approach see the paper (PDF) User-extensible sequences in Common Lisp by Christophe Rhodes.
When to use generic functions?
There are a few things to keep in mind:
define CLOS generic functions when there are multiple related methods, which ideally benefit from a common extension mechanism and which are likely to be extended
think about the performance hit
don't use a single CLOS generic function for functions which have a similar arglist and even the same name, but which work for very different domains
This means that you should not mix methods for very different domains in a single CLOS generic function like this:
; do some astrophysics calculations
(defmethod rotate-around ((s star) (u galaxy)) ...)
; do some computation with graphics objects
(defmethod rotate-around (shape (a axis)) ...)
For example writing :before, :around and :after methods might not lead to useful results.
One could instead have two different rotate-around generic functions, one in a package ASTRO-PHYSICS and the other in a package GRAPHICS-OBJECTS. Then the methods would not be on the same CLOS generic function, and extending each one would probably be easier.
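A sketch of that separation (the package and parameter names are illustrative):

(defpackage "ASTRO-PHYSICS" (:use "CL"))
(defpackage "GRAPHICS-OBJECTS" (:use "CL"))

;; Two distinct generic functions which happen to share a name:
(defgeneric astro-physics::rotate-around (body center))
(defgeneric graphics-objects::rotate-around (shape axis))

;; Methods and :BEFORE/:AFTER/:AROUND combinations added to one
;; generic function do not affect the other.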
Are there any practical differences between special forms and macros? In what do they differ?
The terms aren't quite synonymous, but they aren't exclusive either (this answer assumes Scheme):
A special form (also known as a syntax in the Scheme Reports) is an expression that's not evaluated according to the default rule for function application. (The default rule, just to be explicit, is to eval all of the subexpressions, and then apply the result of the first one to the list of the results of the others.)
The macro system is a language feature that allows definition of new special forms within the language itself. A macro is a special form defined using the macro system.
So you could say that "special form" is a term that pertains to interface or semantics, whereas "macro" is a term that pertains to implementation. "Special form" means "these expressions are evaluated with a special rule," while "macro" means "here's an implementation of a special rule for evaluating some expressions."
Now one important thing is that most Scheme special forms can be defined as macros from a really small core of primitives: lambda, if and macros. A minimal Scheme implementation that provides only these can still implement the rest as macros; recent Scheme Reports have made that distinction by referring to such special forms as "library syntax" that can be defined in terms of macros. In practice, however, practical Scheme systems often implement a richer set of forms as primitives.
Semantically speaking, the only thing that matters about an expression is what rule is used to evaluate it, not how that rule is implemented. So in that sense, it's not important whether a special form is implemented as a macro or a primitive. But on the other hand, the implementation details of a Scheme system often "leak," so you may find yourself caring about it...
Lisp has certain language primitives, which make up Lisp forms:
literal data: numbers, strings, structures, ...
function calls, like (sin 2.1) or like ((lambda (a b) (+ a b 2)) 3 4)
special operators used in special forms. These are the primitive built-in language elements; see Special Operators in Common Lisp. They need to be implemented in the interpreter and compiler. Common Lisp provides no way for the developer to introduce new special operators or to provide your own version of the existing ones. A code-parsing tool needs to understand these special operators; such tools are usually called 'code walkers' in the Lisp community. During the definition of the Common Lisp standard, it was made sure that their number is very small and that all other extensions are done via new functions and new macros.
macros: macros are functions which transform source code. The transformation happens recursively until no macro is left in the source code. Common Lisp has built-in macros and allows the user to write new ones.
So the most important practical difference between special forms and macros is this: special operators are built-in syntax and semantics. They can't be written by the developer. Macros can be written by the developer.
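For instance, a developer can add a new control construct as a macro; a minimal sketch:

(defmacro my-unless (test &body body)
  "Evaluate BODY forms only when TEST is false."
  `(if ,test nil (progn ,@body)))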
In contrast to special forms, macro forms can be macroexpanded:
CL-USER(1): (macroexpand '(with-slots (x y z)
                              foo
                            (format t "~&X = ~A" x)))
(LET ((#:G925 FOO))
  (DECLARE (IGNORABLE #:G925))
  (DECLARE (SB-PCL::%VARIABLE-REBINDING #:G925 FOO))
  #:G925
  (SYMBOL-MACROLET ((X (SLOT-VALUE #:G925 'X))
                    (Y (SLOT-VALUE #:G925 'Y))
                    (Z (SLOT-VALUE #:G925 'Z)))
    (FORMAT T "~&X = ~A" X)))
T
For me the most practical difference has been in the debugger: Macros don't show up in the debugger; instead, the (typically) obscure code from the macro's expansion shows up in the debugger. It is a real pain to debug such code and a good reason to ensure your macros are rock solid before you start relying upon them.
the super short answer for the lazy
You can write your own macros any time you want, though you can't add special forms without recompiling Clojure.
Why is the function/macro dichotomy present in Common Lisp?
What are the logical problems with allowing the same name to represent both a macro (taking precedence when found in function position in compile/eval) and a function (usable for example with mapcar)?
For example having second defined both as a macro and as a function would allow to use
(setf (second x) 42)
and
(mapcar #'second L)
without having to create any setf trickery.
Of course it's clear that macros can do more than functions, so the analogy cannot be complete (and I don't think, of course, that every macro should also be a function), but why forbid it by making both share a single namespace when it could be potentially useful?
I hope I'm not offending anyone, but I don't really find a "Why would you do that?" response pertinent... I'm looking for why this is a bad idea. Imposing an arbitrary limitation because no good use is known is IMO somewhat arrogant (it sort of assumes perfect foresight).
Or are there practical problems in allowing it?
Macros and Functions are two very different things:
macros take source (!!!) code and generate new source (!!!) code
functions are parameterized blocks of code.
Now we can look at this from several angles, for example:
a) how do we design a language where functions and macros are clearly identifiable and look different in our source code, so that we (the humans) can easily see what is what?
or
b) how do we blend macros and functions in a way that the result is most useful and has the most useful rules controlling its behavior? For the user, it should not make a difference whether a macro or a function is being used.
We would really need to convince ourselves that b) is the way to go, and that we would like a language where macro and function usage looks the same and works according to similar principles. Take ships and cars. They look different, their use cases are mostly different, and both transport people: should we now make sure that the traffic rules for them are mostly identical, should we make the rules different, or should we design the rules for their special usage?
For functions we have problems like: defining a function, scope of functions, life-time of functions, passing functions around, returning functions, calling functions, shadowing of functions, extension of functions, removing the definition a function, compilation and interpretation of functions, ...
If we wanted macros to appear mostly similar to functions, we would need to address most or all of the above issues for them as well.
In your example you mention a SETF form. SETF is a macro that analyses the enclosed form at macro expansion time and generates code for a setter. It has little to do with SECOND being a macro or not. Having SECOND being a macro would not help at all in this situation.
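To give an ordinary function SETF support, you define a setter separately; a sketch using a hypothetical my-second:

(defun my-second (list)
  (second list))

;; Teach SETF how to store into (MY-SECOND ...) places:
(defsetf my-second (list) (new-value)
  `(setf (second ,list) ,new-value))

(let ((l (list 1 2 3)))
  (setf (my-second l) 42)
  l)                          ; => (1 42 3)

No macro definition for my-second is involved; SETF performs the analysis.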
So, what is a problem example?
(defmacro foo (a b)
  (if (and (numberp b) (zerop b))
      a
      `(- ,a ,b)))

(defun bar (x list)
  (mapcar #'foo (list x x x x) '(1 2 3 4)))
Now what should that do? Intuitively it looks easy: map FOO over the lists. But it isn't. When Common Lisp was designed, I would guess it was not clear what that should do and how it should work. If FOO is a function, then it is clear: Common Lisp took the ideas behind Scheme's lexically scoped first-class functions and integrated them into the language.
But first-class macros? After the design of Common Lisp, a bunch of research went into this problem and investigated it. But at the time of Common Lisp's design, there was no widespread use of first-class macros and no experience with design approaches for them. Common Lisp standardized what was known at the time and what the language's users thought necessary for developing software (the object system CLOS is somewhat novel, but based on earlier experience with similar object systems). Common Lisp was not designed to be the theoretically most pleasing Lisp dialect - it was designed to be a powerful Lisp which allows the efficient implementation of software.
We could work around this and say, passing macros is not possible. The developer would have to provide a function under the same name, which we pass around.
But then (funcall #'foo 1 2) and (foo 1 2) would invoke different machinery? In the first case we call the function foo, and in the second case we use the macro foo to generate code for us? Really? Do we (as human programmers) want this? I think not - it looks like it makes programming much more complicated.
From a pragmatic point of view: Macros and the mechanism behind it are already complicated enough that most programmers have difficulties dealing with it in real code. They make debugging and code understanding much harder for a human. On the surface a macro makes code easier to read, but the price is the need to understand the code expansion process and result.
Finding a way to further integrate macros into the language design is not an easy task.
readscheme.org has some pointers to Macro-related research wrt. Scheme: Macros
What about Common Lisp
Common Lisp provides functions which can be first-class (stored, passed around, ...) and lexically scoped naming for them (DEFUN, FLET, LABELS, FUNCTION, LAMBDA).
Common Lisp provides global macros (DEFMACRO) and local macros (MACROLET).
Common Lisp provides global compiler macros (DEFINE-COMPILER-MACRO).
With compiler macros it is possible to have a function or macro for a symbol AND a compiler macro. The Lisp system can decide to prefer the compiler macro over the macro or function. It can also ignore them entirely. This mechanism is mostly used for the user to program specific optimizations. Thus it does not solve any macro related problems, but provides a pragmatic way to program global optimizations.
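A sketch of the mechanism (the names are illustrative, and an implementation is free to ignore compiler macros entirely):

(defun add2 (x) (+ x 2))

;; Optimization hint: fold (ADD2 <constant>) at compile time.
(define-compiler-macro add2 (&whole form x)
  (if (constantp x)
      (+ (eval x) 2)
      form))        ; returning the original form declines to expand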
I think that Common Lisp's two namespaces (functions and values), rather than three (macros, functions, and values), is a historical contingency.
Early Lisps (in the 1960s) represented functions and values in different ways: values as bindings on the runtime stack, and functions as properties attached to symbols in the symbol table. This difference in implementation led to the specification of two namespaces when Common Lisp was standardized in the 1980s. See Richard Gabriel's paper Technical Issues of Separation in Function Cells and Value Cells for an explanation of this decision.
Macros (and their ancestors, FEXPRs, functions which do not evaluate their arguments) were stored in many Lisp implementations in the symbol table, in the same way as functions. It would have been inconvenient for these implementations if a third namespace (for macros) had been specified, and would have caused backwards-compatibility problems for many programs.
See Kent Pitman's paper Special Forms in Lisp for more about the history of FEXPRs, macros and other special forms.
(Note: Kent Pitman's website is not working for me, so I've linked to the papers via archive.org.)
Because then the exact same name would represent two different objects, depending on the context. That makes the program unnecessarily difficult to understand.
My TXR Lisp dialect allows a symbol to be simultaneously a macro and function. Moreover, certain special operators are also backed by functions.
I put a bit of thought into the design, and haven't run into any problems. It works very well and is conceptually clean.
Common Lisp is the way it is for historic reasons.
Here is a brief rundown of the system:
When a global macro is defined for symbol X with defmacro, the symbol X does not become fboundp. Rather, what becomes fboundp is the compound function name (macro X).
The name (macro X) is then known to symbol-function, trace and in other situations. (symbol-function '(macro X)) retrieves the two-argument expander function which takes the form and an environment.
It's possible to write a macro using (defun (macro X) (form env) ...).
There are no compiler macros; regular macros do the job of compiler macros.
A regular macro can return the unexpanded form to indicate that it's declining to expand. If a lexical macrolet declines to expand, the opportunity goes to a more lexically outer macrolet, and so on up to the global defmacro. If the global defmacro declines to expand, the form is considered expanded, and thus is necessarily either a function call or special form.
If we have both a function and macro called X, we can call the function definition using (call (fun X) ...) or (call 'X ...), or else using the Lisp-1-style dwim evaluator (dwim X ...) that is almost always used through its [] syntactic sugar as [X ...].
For a sort of completeness, the functions mboundp, mmakunbound and symbol-macro are provided, which are macro analogs of fboundp, fmakunbound and symbol-function.
The special operators or, and, if and some others have function definitions also, so that code like [mapcar or '(nil 2 t) '(1 0 3)] -> (1 2 t) is possible.
Example: apply constant folding to sqrt:
1> (sqrt 4.0)
2.0
2> (defmacro sqrt (x :env e :form f)
     (if (constantp x e)
         (sqrt x)
       f))
** warning: (expr-2:1) defmacro: defining sqrt, which is also a built-in defun
sqrt
3> (sqrt 4.0)
2.0
4> (macroexpand '(sqrt 4.0))
2.0
5> (macroexpand '(sqrt x))
(sqrt x)
However, no, (set (second x) 42) is not implemented via a macro definition for second. That would not work very well. The main reason is that it would be too much of a burden. The programmer may want to have, for a given function, a macro definition which has nothing to do with implementing assignment semantics!
Moreover, if (second x) implements place semantics, what happens when it is not embedded in an assignment operation, such that the semantics is not required at all? Basically, to hit all the requirements would require concocting a scheme for writing macros whose complexity would equal or exceed that of existing logic for handling places.
TXR Lisp does, in fact, feature a special kind of macro called a "place macro". A form is only recognized as a place macro invocation when it is used as a place. However, place macros do not implement place semantics themselves; they just do a straightforward rewrite. Place macros must expand down to a form that is recognized as a place.
Example: specify that (foo x), when used as a place, behaves as (car x):
1> (define-place-macro foo (x) ^(car ,x))
foo
2> (macroexpand '(foo a)) ;; not a macro!
(foo a)
3> (macroexpand '(set (foo a) 42)) ;; just a place macro
(sys:rplaca a 42)
If foo expanded to something which is not a place, things would fail:
4> (define-place-macro foo (x) ^(bar ,x))
foo
5> (macroexpand '(foo a))
(foo a)
6> (macroexpand '(set (foo a) 42))
** (bar a) is not an assignable place
Scheme macros, at least the syntax-case variety, are said to allow arbitrary computation on the code being transformed. However (both in the general case and in the specific case I'm currently looking at), this requires the computation to be specified in terms of recursive functions. When I try various variants of this, I get e.g.
main.scm:32:71: compile: unbound identifier in module (in the transformer environment, which does not include the run-time definition) in: expand-vars
(The implementation is Racket, if it matters.)
The upshot seems to be that you can't define named functions until after macro processing.
I suppose I could resort to the Y combinator, but I figure it's worth asking first whether there's a better approach?
Yes, the fact that you're using Racket matters: Racket has something called "phase separation", which means that the syntax level cannot use runtime functions. For example, this:
#lang racket
(define (bleh) #'123)
(define-syntax (foo stx)
  (bleh))
(foo)
will not work, since bleh is bound at runtime and is not available for syntax. Instead, it should be
(define-for-syntax (bleh) #'123)
or
(begin-for-syntax (define (bleh) #'123))
or moved as an internal definition to the macro body, or moved to its own module and required using (require (for-syntax "bleh.rkt")).
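Putting it together, a minimal sketch of a working module:

#lang racket

(define-for-syntax (bleh) #'123)   ; available at expansion time

(define-syntax (foo stx)
  (bleh))

(foo)   ; expands to 123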