Access value defined in a macro in another macro in ClojureScript - macros

Assume the following code:
(defmacro sdef [sname]
  `(def ~sname 3))

(defmacro sinc [sname]
  (inc ... we want to access sname's value here...))
In ClojureScript we'd use it like this
(sdef sfoo)
(sinc sfoo) ; => 4
In our case it is mandatory that sinc is evaluated at compile time.
We managed to achieve this in Clojure using resolve, but that didn't work in ClojureScript. We are aware that, especially in ClojureScript, macro expansion is strictly separated from ClojureScript code execution.
However, is there a way to achieve the above?

Are Lisp macros just syntactic sugar? [duplicate]

This question already has answers here:
What makes Lisp macros so special?
I keep reading that Lisp macros are one of the most powerful features of the language. But reading over the specifications and manuals, they are just functions whose arguments are unevaluated.
Given any macro (defmacro example (arg1 ... argN) (body-forms)) I could just write (defun example (arg1 ... argN) ... (body-forms)) with the last body-form turned into a list and then call it like (eval (example 'arg1 ... 'argN)) to emulate the same behavior of the macro. If this were the case, then macros would just be syntactic sugar, but I doubt that syntactic sugar would be called a powerful language feature. What am I missing? Are there cases where I cannot carry out this procedure to emulate a macro?
I can't speak to "powerful" because that is somewhat subjective, but macros are regular Lisp functions that work on Lisp data, so they are as expressive as other functions. This isn't the case with templates or generic functions in other languages, which rely more on static types and are more restricted (on purpose).
In some ways, yes, macros are simple syntactic facilities, but your emulation focuses on the dynamic semantics of macros, i.e. how you can run code that evaluates macro calls at runtime. However:
the code using eval is not equivalent to expanded code
the preprocessing/compile-time aspect of macros is not emulated
Lexical scope
Functions, like +, do not inherit the lexical scope of their callers:
(let ((x 30))
  (+ 3 4))
Inside the definition of +, you cannot access x. Being able to do so is what "dynamic scope" is about (more precisely, see dynamic extent, indefinite scope variables). But nowadays it is quite the exception to rely on dynamic scope. Most functions use lexical scope, and this is the case for eval too.
The eval function evaluates a form in the null lexical environment, and it never has access to the surrounding lexical bindings. As such, it behaves like any regular function.
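A minimal sketch of this behavior, run at a fresh REPL (assuming X is not globally special):
(let ((x 30))
  (eval '(+ x 4)))   ;; error: X is unbound; EVAL cannot see the LET binding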
So, in your example, calling eval on the transformed source code will not work, since arg1 to argN will probably be unbound (it depends on what your macro does).
In order to have an equivalent form, you have to inject bindings in the transformed code, or expand at a higher level:
(defun expand-square (var)
  (list '* var var))

;; instead of:
(defun foo (x) (eval (expand-square 'x))) ;; x unbound during eval

;; inject bindings
(defun foo (x) (eval `(let ((z ,x)) ,(expand-square 'z))))

;; or expand the top-level form
(eval `(defun foo (x) ,(expand-square 'x)))
Note that macros (in Common Lisp) also have access to the lexical environment through &environment parameters in their lambda-list. The use of this environment is implementation dependent, but can be used to access the declarations associated with a variable, for example.
Notice also how, in the last example, you evaluate the code when defining the function, and not when running it. This is the second important thing about macros.
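As a small illustration of the &environment remark above (SHOW-EXPANSION and TWICE are names made up for this sketch):
;; MACROEXPAND-1 consults ENV, so the local MACROLET is visible
;; from SHOW-EXPANSION's own call site.
(defmacro show-expansion (form &environment env)
  (print (macroexpand-1 form env))
  form)

(macrolet ((twice (x) `(* 2 ,x)))
  (show-expansion (twice 21)))   ;; prints (* 2 21), evaluates to 42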
Expansion time
In order to emulate macros you could locally replace a call to a macro by a form that emulates it at runtime (using let to capture all the bindings you want to see inside the expanded code, which is tedious), but then you would miss the useful aspect of macros, which is generating code ahead of time.
The last example above shows how you can quote defun and wrap it in eval, and basically you would need to do that for all functions if you wanted to emulate the preprocessing work done by macros.
The macro system is a way to integrate this preprocessing step in the language in a way that is simple to use.
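A minimal sketch of that integration (SQUARE and FOO-SQUARED are illustrative names):
(defmacro square (x) `(* ,x ,x))

;; The expansion happens when FOO-SQUARED is compiled, not on each call:
;; the compiler only ever sees (* x x), and no EVAL is involved at run time.
(defun foo-squared (x)
  (square x))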
Conclusion
Macros themselves are a nice way to abstract things when functions can't. For example you can have a more human-friendly, stable syntax that hides implementation details. That's how you define pattern-matching abilities in Common Lisp that make it look like they are part of the language, without too much runtime penalty or verbosity.
They rely on simple term-rewriting functions that are integrated in the language, but you can emulate their behavior either at compile-time or runtime yourself if you want. They can be used to perform different kinds of abstraction that are usually missing or more cumbersome to do in other languages, but are also limited: they don't "understand" code by themselves, they don't give access to all the facilities of the compiler (type propagation, etc.). If you want more you can use more advanced libraries or compiler tools (see deftransform), but macros at least are portable.
Macros are not just functions whose arguments are unevaluated. Macros are functions between programming languages. In other words a macro is a function whose argument is a fragment of source code of a programming language which includes the macro, and whose value is a fragment of source code of a language which does not include the macro (or which includes it in a simpler way).
In very ancient, very rudimentary, Lisps, before people really understood what macros were, you could simulate macros with things called FEXPRs combined with EVAL. A FEXPR was simply a function which did not evaluate its arguments. This worked in such Lisps only because they were completely dynamically scoped, and the cost of it working was that compilation of such things was not possible at all. Those are two enormous costs.
In any modern Lisp, this won't work at all. You can write a toy version of FEXPRs as a macro (this may be buggy):
(defmacro deffex (fx args &body body)
  (assert (every (lambda (arg)
                   (and (symbolp arg)
                        (not (member arg lambda-list-keywords))))
                 args)
          (args) "not a simple lambda list")
  `(defmacro ,fx ,args
     `(let ,(mapcar (lambda (argname argval)
                      `(,argname ',argval))
                    ',args (list ,@args))
        ,@',body)))
So now we could try to write a trivial binding construct I'll call with using this thing:
(deffex with (var val form)
  (eval `(let ((,var ,val)) ,form)))
And this seems to work:
> (with a 1 a)
1
Of course, we're paying the cost that no code which uses this construct can ever be compiled so all our programs will be extremely slow, but perhaps that is a cost we're willing to accept (it's not, but never mind).
Except, of course, it doesn't work, at all:
> (with a 1
    (with b 2
      (+ a b)))
Error: The variable a is unbound.
Oh dear.
Why doesn't it work? It doesn't work because Common Lisp is lexically scoped, and eval is a function: it can't see the lexical bindings.
So not only does this kind of approach prevent compilation in a modern Lisp, it doesn't work at all.
People often, at this point, suggest some kind of kludge solution which would allow eval to be able to see lexical bindings. The cost of such a solution is that all the lexical bindings need to exist in compiled code: no variable can ever be compiled away, not even its name. That's essentially saying that no good compilers can ever be used, even for the small part of your programs you can compile at all in a language which makes extensive use of macros like CL. For instance, if you ever use defun you're not going to be able to compile the code in its body. People do use defun occasionally, I think.
So this approach simply won't work: it worked by happenstance in very old Lisps but it can't work, even at the huge cost of preventing compilation, in any modern Lisp.
More to the point this approach obfuscates the understanding of what macros are: as I said at the start, macros are functions between programming languages, and understanding that is critical. When you are designing macros you are implementing a new programming language.

`apply` or `funcall` for macros instead of functions

In Lisp, a function's arguments are evaluated before the function body is entered. Macro arguments remain unevaluated.
But sometimes one wants to inject code pieces stored in variables into a macro call. This means evaluating the argument for the macro first, and then applying the macro of choice to this evaluated result.
One has to resort to
(eval `(macro ,arg))
to achieve this - but eval does not behave correctly in different environments.
The best thing would be, if one could do:
(apply macro (list arg))
or
(funcall macro arg)
But since the macro is not a function this doesn't work.
Is it possible to achieve something like this, either by circumventing that problem or by making the macro available in the function namespace?
Or am I missing some other ways to solve such problems?
I came to this question while trying to answer How to produce HTML from a list, but also via Generate TYPECASE with macro in common lisp, Evaluate arguments passed to a macro that generates functions in lisp, and How to convert a list to code/lambda in scheme?. While answering them I always thought it would be good to have an apply- or funcall-like operator which can take macros.
It is not clear what you are trying to do, although it is almost certain that you are confused about something. In particular if you are calling eval inside macroexpansions then in almost all cases you are doing something both seriously wrong and seriously dangerous. I can't ever think of a case where I've wanted macros which expand to things including eval and I have written Lisp for a very very long time.
That being said, here is how you call the function associated with a macro, and why it is very seldom what you want to do.
Macros are simply functions whose domain and range is source code: they are compilers from a language to another language. It is perfectly possible to call the function associated with a macro, but what that function will return is source code, and what you will then need to do with that source code is evaluate it. If you want a function which deals with run-time data which is not source code, then you need that function, and you can't turn a macro into that function by some magic trick which seems to be what you want to do: that magic trick does not, and can not, exist.
So for instance if I have a macro
(defmacro with-x (&body forms)
  `(let ((x 1))
     ,@forms))
Then I can call its macro function on a bit of source code:
> (funcall (macro-function 'with-x)
           '(with-x (print "foo")) nil)
(let ((x 1)) (print "foo"))
But the result of this is another bit of source code: I need to compile or evaluate it, and nothing I can do will get around this.
Indeed, in (almost?) all cases this is just the same as macroexpand-1:
> (macroexpand-1 '(with-x (print "foo")))
(let ((x 1)) (print "foo"))
t
And you can probably write macroexpand-1 in terms of macro-function:
(defun macroexpand-1/equivalent (form &optional (env nil))
  (if (and (consp form)
           (symbolp (first form))
           (macro-function (first form)))
      (values (funcall (macro-function (first form)) form env)
              t)
      (values form nil)))
So, if the result of calling a macro is source code, what do you do with that source code to get a result which is not source code? Well, you must evaluate it. And then, well, since the evaluator expands macros for you anyway, you might as well just write something like
(defun evaluate-with-x (code)
  (funcall (compile nil `(lambda ()
                           (with-x ,@code)))))
So you didn't need to call the macro's function in any case. And this is not the magic trick which turns macros into functions dealing with data which is not source code: it is a terrible horror which is entirely made of exploding parts.
A concrete example: CL-WHO
It looks like this question might have its origins in this one and the underlying problem there is that that's not what CL-WHO does. In particular it is a confusion to think that something like CL-WHO is a tool for taking some kind of list and turning it into HTML. It's not: it's a tool for taking the source code of a language which is built on CL but includes a way of expressing HTML output mingled with CL code, and compiles it into CL code which will do the same thing. It happens to be the case that CL source code is expressed as lists & symbols, but CL-WHO isn't really about that: it's a compiler from, if you like, 'the CL-WHO language' to CL.
So, let's try the trick we tried above and see why it's a disaster:
(defun form->html/insane (form)
  (funcall
   (compile nil `(lambda ()
                   (with-html-output-to-string (,(make-symbol "O"))
                     ,@form)))))
And you might, if you did not look at this too closely, think that this function does in fact do the magic trick:
> (form->html/insane '(:p ((:a :href "foo") "the foo")))
"<p></p><a href='foo'>the foo</a>"
But it doesn't. What happens if we call form->html/insane on this perfectly innocuous list:
(:p (uiop/run-program:run-program "rm -rf $HOME" :output t))
Hint: don't call form->html/insane on this list if you don't have very good backups.
CL-WHO is an implementation of a programming language which is a strict superset of CL: if you try to turn it into a function to turn lists into HTML you end up with something involving the same nuclear weapon you tinker with every time you call eval, except that nuclear weapon is hidden inside a locked cupboard where you can't see it. But it doesn't care about that: if you set it off it will still reduce everything within a few miles to radioactive ash and rubble.
So if you want a tool which will turn lists – lists which aren't source code – into HTML then write that tool. CL-WHO might have the guts of such a tool in its implementation, but you can't use it as it is.
And this is the same problem you face whenever you are trying to abuse macros this way: the result of calling a macro's function is Lisp source code, and to evaluate that source code you need eval or an equivalent of eval. And eval is not only a terrible solution to almost any problem: it's also a nuclear weapon. There are, perhaps, problems for which nuclear weapons are good solutions, but they are few and far between.

Confused by Lisp Quoting

I have a question concerning evaluation of lists in lisp.
Why are (a) and (+ a 1) not evaluated here,
(defun test (a) (+ a 1))
just like (print 4) is not evaluated here
(if (< 1 2) (print 3) (print 4))
but (print (+ 2 3)) is evaluated here
(test (print (+ 2 3)))
Does it have something to do with them being standard library functions? Is it possible for me to define functions like that in my lisp program?
As you probably know, Lisp compound forms are generally processed from the outside in. You must look at the symbol in the first position of the outermost nesting to understand a form. That symbol completely determines the meaning of the form. The following expressions all contain (b c) with completely different meaning; therefore, we cannot understand them by analyzing the (b c) part first:
;; Common Lisp: define a class A derived from B and C
(defclass a (b c) ())
;; Common Lisp: define a function of two arguments
(defun a (b c) ())
;; add A to the result of calling function B on variable C:
(+ a (b c))
Traditionally, Lisp dialects have divided forms into operator forms and function call forms. An operator form has a completely arbitrary meaning, determined by the piece of code which compiles or interprets that operator. A function call form, by contrast, follows a fixed rule: the evaluation simply recurses over all of the function call's argument forms, and the resulting values are passed to the function.
From the early history, Lisp has allowed users to write their own operators. There existed two approaches to this: interpretive operators (historically known as fexprs) and compiling operators known as macros. Both hinge around the idea of a function which receives the unevaluated form as an argument, so that it can implement a custom strategy, thereby extending the evaluation model with new behaviors.
A fexpr type operator is simply handed the form at run-time, along with an environment object with which it can look up the values of variables and such. That operator then walks the form and implements the behavior.
A macro operator is handed the form at macro-expansion time (which usually happens when top-level forms are read, just before they are evaluated or compiled). Its job is not to interpret the form's behavior, but instead to translate it by generating code. I.e. a macro is a mini compiler. (Generated code can contain more macro calls; the macro expander will take care of that, ensuring that all macro calls are eventually expanded away.)
The fexpr approach fell out of favor, most likely because it is inefficient. It basically makes compilation impossible, whereas Lisp hackers valued compilation. (Lisp was already a compiled language from as early as circa 1960.) The fexpr approach is also hostile toward lexical environments; it requires the fexpr, which is a function, to be able to peer into the variable binding environment of the form in which it's invoked, which is a kind of encapsulation violation that is not allowed by lexical scopes.
Macro writing is slightly more difficult, and in some ways less flexible, than fexprs, but support for macro writing improved in Lisp through the 1960s and into the '70s to make it close to as easy as possible. Macros originally received the whole form and had to parse it themselves. The macro-defining system developed into something that provides macro functions with arguments that receive the broken-down syntax in easily digestible pieces, including some nested aspects of the syntax. The backquote syntax for writing code templates was also developed, making it much easier to express code generation.
So, to answer your question: how can you write forms like that yourself? Let's take if as an example:
;; Imitation of old-fashioned technique: receive the whole form,
;; extract parts from it and return the translation.
;; Common Lisp defmacro supports this via the &whole keyword
;; in macro lambda lists which lets us have access to the whole form.
;;
;; (Because we are using defmacro, we need to declare arguments "an co &optional al",
;; to make this a three argument macro with an optional third argument, but
;; we don't use those arguments. In ancient lisps, they would not appear:
;; a macro would be a one-argument function, and would have to check the number
;; of arguments itself, to flag bad syntax like (my-if 42) or (my-if).)
;;
(defmacro my-if (&whole if-form an co &optional al)
  (let ((antecedent (second if-form))   ;; extract pieces ourselves
        (consequent (third if-form))    ;; from whole (my-if ...) form
        (alternative (fourth if-form)))
    (list 'cond (list antecedent consequent) (list t alternative))))
;; "Modern" version. Use the parsed arguments, and also take advantage of
;; backquote syntax to write the COND with a syntax that looks like the code.
(defmacro my-if (antecedent consequent &optional alternative)
  `(cond (,antecedent ,consequent) (t ,alternative)))
This is a fitting example because originally Lisp only had cond. There was no if in McCarthy's Lisp. That "syntactic sugar" was invented later, probably as a macro expanding to cond, just like my-if above.
if and defun are macros. Macros expand a form into a longer piece of code. At expansion time, none of the macro's arguments are evaluated.
When you try to write a function but struggle because you need to implement a custom evaluation strategy, it's a strong signal that you should be writing a macro instead.
Disclaimer: Depending on what kind of lisp you are using, if and defun might technically be called "special forms" and not macros, but the concept of delayed evaluation still applies.
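In Common Lisp you can ask the system which category an operator falls into; a quick sketch (exact results can vary slightly by implementation):
(special-operator-p 'if)   ;; => true: IF is a special operator
(macro-function 'defun)    ;; => an expander function: DEFUN is a macro
(macro-function 'when)     ;; => also a macro, expanding roughly to an IF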
Lisp consists of a model of evaluation of forms. Different Lisp dialects have different rules for those.
Let's look at Common Lisp.
data evaluates to itself
a function form is evaluated by calling the function on the evaluated arguments
special forms are evaluated according to rules defined for each special operator. The Common Lisp standard lists all of them, defines informally what they do, and there is no way for the user to define new special operators.
macro forms are transformed, and the resulting expansion is then evaluated
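A minimal sketch of the four cases:
42                      ;; data: a self-evaluating object
(+ 1 2)                 ;; function form: 1 and 2 are evaluated, then + is called
(if (> 1 2) 'yes 'no)   ;; special form: only the chosen branch is evaluated
(when (> 2 1) 'yes)     ;; macro form: expands roughly to (if (> 2 1) (progn 'yes))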
How IF, DEFUN, etc. work, what they evaluate, when that happens, and what is not evaluated are all defined in the Common Lisp standard.

Is destructuring of macro parameters "really needed"?

I understand that destructuring in LISP macro parameters is a nice thing to have; I am wondering whether it is essential. As an example,
(defmacro m1 (a) (car a))
and
(defmacro m2 ((a1 a2)) a1)
seem to be (roughly) equivalent - except for checking for the right form of the parameter(s).
My guess is that destructuring makes the code easier to write/understand, but any code using it may be translated into one that does not. Am I right or is this a stupid beginner's mistake?
It is not essential. You can either let the macro call be destructured by the Lisp system or you write your own code for that inside the macro.
If you write your own destructuring code, you usually combine it with a &rest or &body parameter in the lambda list. One usual reason for doing that is that the syntax possibilities of the macro lambda list are not flexible enough for a certain purpose. An example of that is the Common Lisp LOOP macro.
It is good style to use the macro lambda list. It provides an interface with parameters and some structure information. This also allows the Lisp system to provide a simple form of syntactic error checking of macro calls, something one would otherwise have to write by hand.
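A minimal sketch of the hand-written alternative (M3 is an illustrative name, equivalent to M2 from the question):
(defmacro m3 (&rest args)
  ;; destructure the single (a1 a2) argument form ourselves
  (destructuring-bind ((a1 a2)) args
    (declare (ignore a2))
    a1))
Both (m2 (x y)) and (m3 (x y)) expand to X; M3 just does by hand the work that M2's macro lambda list does for free.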

Why the function/macro dichotomy?

Why is the function/macro dichotomy present in Common Lisp?
What are the logical problems in allowing the same name representing both a macro (taking precedence when found in function position in compile/eval) and a function (usable for example with mapcar)?
For example, having second defined both as a macro and as a function would allow one to use
(setf (second x) 42)
and
(mapcar #'second L)
without having to create any setf trickery.
Of course it's clear that macros can do more than functions, so the analogy cannot be complete (and I don't of course think that every macro should also be a function), but why forbid it by making both share a single namespace when it could be potentially useful?
I hope I'm not offending anyone, but I don't really find a "Why do that?" response pertinent... I'm looking for why this is a bad idea. Imposing an arbitrary limitation because no good use is known is IMO somewhat arrogant (it sort of assumes perfect foresight).
Or are there practical problems in allowing it?
Macros and Functions are two very different things:
macros are using source (!!!) code and are generating new source (!!!) code
functions are parameterized blocks of code.
Now we can look at this from several angles, for example:
a) how do we design a language where functions and macros are clearly identifiable and look different in our source code, so that we (the humans) can easily see what is what?
or
b) how do we blend macros and functions in a way that the result is most useful and has the most useful rules controlling its behavior? For the user it should not make a difference whether a macro or a function is being used.
We would really need to convince ourselves that b) is the way to go and that we would like to use a language where macro and function usage looks the same and works according to similar principles. Take ships and cars. They look different, their use cases are mostly different, and they both transport people. Should we now make sure that the traffic rules for them are mostly identical, should we make the rules different, or should we design the rules for their special usages?
For functions we have problems like: defining a function, scope of functions, life-time of functions, passing functions around, returning functions, calling functions, shadowing of functions, extension of functions, removing the definition of a function, compilation and interpretation of functions, ...
If we would make macros appear mostly similar to functions, we need to address most or all above issues for them.
In your example you mention a SETF form. SETF is a macro that analyses the enclosed form at macro expansion time and generates code for a setter. It has little to do with SECOND being a macro or not. Having SECOND be a macro would not help at all in this situation.
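A minimal sketch (the exact expansion is implementation dependent):
(macroexpand-1 '(setf (second x) 42))
;; => something like (SYSTEM::%RPLACA (CDR X) 42), or a LET* form calling a
;;    setter, depending on the implementation; SECOND stays a plain function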
So, what is a problem example?
(defmacro foo (a b)
  (if (and (numberp b) (zerop b))
      a
      `(- ,a ,b)))

(defun bar (x list)
  (mapcar #'foo (list x x x x) '(1 2 3 4)))
Now what should that do? Intuitively it looks easy: map FOO over the lists. But it isn't. When Common Lisp was designed, I would guess, it was not clear what that should do and how it should work. If FOO is a function, then it is clear: Common Lisp took the ideas behind Scheme's lexically scoped first-class functions and integrated them into the language.
But first-class macros? After the design of Common Lisp a bunch of research went into this problem and investigated it. But at the time of Common Lisp's design there was no widespread use of first-class macros and no experience with design approaches. Common Lisp standardized on what was known at the time and on what the language users thought necessary to develop software with (the object system CLOS is kind of novel, based on earlier experience with similar object systems). Common Lisp was not designed to be the theoretically most pleasing Lisp dialect - it was designed to be a powerful Lisp which allows the efficient implementation of software.
We could work around this and say, passing macros is not possible. The developer would have to provide a function under the same name, which we pass around.
But then (funcall #'foo 1 2) and (foo 1 2) would invoke different machineries? In the first case the function foo, and in the second case the macro foo used to generate code for us? Really? Do we (as human programmers) want this? I think not - it looks like it makes programming much more complicated.
From a pragmatic point of view: Macros and the mechanism behind it are already complicated enough that most programmers have difficulties dealing with it in real code. They make debugging and code understanding much harder for a human. On the surface a macro makes code easier to read, but the price is the need to understand the code expansion process and result.
Finding a way to further integrate macros into the language design is not an easy task.
readscheme.org has some pointers to Macro-related research wrt. Scheme: Macros
What about Common Lisp
Common Lisp provides functions which can be first-class (stored, passed around, ...) and lexically scoped naming for them (DEFUN, FLET, LABELS, FUNCTION, LAMBDA).
Common Lisp provides global macros (DEFMACRO) and local macros (MACROLET).
Common Lisp provides global compiler macros (DEFINE-COMPILER-MACRO).
With compiler macros it is possible to have a function or macro for a symbol AND a compiler macro. The Lisp system can decide to prefer the compiler macro over the macro or function. It can also ignore it entirely. This mechanism is mostly used to let the user program specific optimizations. Thus it does not solve any macro-related problems, but it provides a pragmatic way to program global optimizations.
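A minimal sketch of that mechanism (MY-SQUARE is an illustrative name):
(defun my-square (x) (* x x))

;; The compiler macro may fold constant calls at compile time; returning the
;; original FORM declines, leaving the ordinary function call in place.
(define-compiler-macro my-square (&whole form x)
  (if (numberp x)
      (* x x)
      form))
So (my-square 4) may compile down to 16, while #'my-square remains an ordinary function you can hand to mapcar.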
I think that Common Lisp's having two namespaces (functions and values), rather than three (macros, functions, and values), is a historical contingency.
Early Lisps (in the 1960s) represented functions and values in different ways: values as bindings on the runtime stack, and functions as properties attached to symbols in the symbol table. This difference in implementation led to the specification of two namespaces when Common Lisp was standardized in the 1980s. See Richard Gabriel's paper Technical Issues of Separation in Function Cells and Value Cells for an explanation of this decision.
Macros (and their ancestors, FEXPRs, functions which do not evaluate their arguments) were stored in many Lisp implementations in the symbol table, in the same way as functions. It would have been inconvenient for these implementations if a third namespace (for macros) had been specified, and would have caused backwards-compatibility problems for many programs.
See Kent Pitman's paper Special Forms in Lisp for more about the history of FEXPRs, macros and other special forms.
(Note: Kent Pitman's website is not working for me, so I've linked to the papers via archive.org.)
Because then the exact same name would represent two different objects, depending on the context. It makes the program unnecessarily difficult to understand.
My TXR Lisp dialect allows a symbol to be simultaneously a macro and function. Moreover, certain special operators are also backed by functions.
I put a bit of thought into the design, and haven't run into any problems. It works very well and is conceptually clean.
Common Lisp is the way it is for historic reasons.
Here is a brief rundown of the system:
When a global macro is defined for symbol X with defmacro, the symbol X does not become fboundp. Rather, what becomes fboundp is the compound function name (macro X).
The name (macro X) is then known to symbol-function, trace and in other situations. (symbol-function '(macro X)) retrieves the two-argument expander function which takes the form and an environment.
It's possible to write a macro using (defun (macro X) (form env) ...).
There are no compiler macros; regular macros do the job of compiler macros.
A regular macro can return the unexpanded form to indicate that it's declining to expand. If a lexical macrolet declines to expand, the opportunity goes to a more lexically outer macrolet, and so on up to the global defmacro. If the global defmacro declines to expand, the form is considered expanded, and thus is necessarily either a function call or special form.
If we have both a function and macro called X, we can call the function definition using (call (fun X) ...) or (call 'X ...), or else using the Lisp-1-style dwim evaluator (dwim X ...) that is almost always used through its [] syntactic sugar as [X ...].
For a sort of completeness, the functions mboundp, mmakunbound and symbol-macro are provided, which are macro analogs of fboundp, fmakunbound and symbol-function.
The special operators or, and, if and some others have function definitions also, so that code like [mapcar or '(nil 2 t) '(1 0 3)] -> (1 2 t) is possible.
Example: apply constant folding to sqrt:
1> (sqrt 4.0)
2.0
2> (defmacro sqrt (x :env e :form f)
     (if (constantp x e)
         (sqrt x)
         f))
** warning: (expr-2:1) defmacro: defining sqrt, which is also a built-in defun
sqrt
3> (sqrt 4.0)
2.0
4> (macroexpand '(sqrt 4.0))
2.0
5> (macroexpand '(sqrt x))
(sqrt x)
However, no, (set (second x) 42) is not implemented via a macro definition for second. That would not work very well. The main reason is that it would be too much of a burden. The programmer may want to have, for a given function, a macro definition which has nothing to do with implementing assignment semantics!
Moreover, if (second x) implements place semantics, what happens when it is not embedded in an assignment operation, such that the semantics is not required at all? Basically, to hit all the requirements would require concocting a scheme for writing macros whose complexity would equal or exceed that of existing logic for handling places.
TXR Lisp does, in fact, feature a special kind of macro called a "place macro". A form is only recognized as a place macro invocation when it is used as a place. However, place macros do not implement place semantics themselves; they just do a straightforward rewrite. Place macros must expand down to a form that is recognized as a place.
Example: specify that (foo x), when used as a place, behaves as (car x):
1> (define-place-macro foo (x) ^(car ,x))
foo
2> (macroexpand '(foo a)) ;; not a macro!
(foo a)
3> (macroexpand '(set (foo a) 42)) ;; just a place macro
(sys:rplaca a 42)
If foo expanded to something which is not a place, things would fail:
4> (define-place-macro foo (x) ^(bar ,x))
foo
5> (macroexpand '(foo a))
(foo a)
6> (macroexpand '(set (foo a) 42))
** (bar a) is not an assignable place