I want to write a macro that uses functions from the clj-time library. In one namespace I would like to call the macro like this:
(ns budget.account
  (:require [budget.time]))
(budget.time/next-date interval frequency)
The next-date macro would be defined in another file like this:
(ns budget.time
  (:require [clj-time.core :as date]))
(defmacro next-date [interval freq]
  `(~interval ~freq))
If the macro were called as (budget.time/next-date interval freq), and interval and freq were "weeks" and "2" respectively, then the macro expansion would look something like this: (clj-time.core/weeks 2)
Whenever I try this from the REPL it cannot resolve the namespace.
Is there a way to force the macro to resolve the interval argument in the clj-time namespace? What is the best way to do this?
Thanks!
Macros return a list which is then evaluated in the namespace it is called from, not the namespace it is defined in. This is different from functions, which evaluate in the namespace in which they were defined. This is because macros return the code to be run, instead of just running it.
If I go to another namespace, for instance hello.core, and expand a call to next-date I get:
hello.core> (macroexpand-1 '(next-date weeks 2))
(weeks 2)
Then after the expansion, weeks is resolved in hello.core, where it is of course not defined. To fix this we need the returned symbol to carry the namespace information with it.
Fortunately you can explicitly resolve a symbol in a namespace with ns-resolve. It takes a namespace and a symbol and tries to find the symbol in that namespace, returning nil if it's not found:
(ns-resolve 'clj-time.core (symbol "weeks"))
#'clj-time.core/weeks
Next, your macro will be taking a symbol and a number, so we can dispense with the explicit call to symbol:
(ns-resolve 'clj-time.core 'weeks)
#'clj-time.core/weeks
So now you just need a macro that resolves the function and then builds a list of the resolved function followed by the number:
(defmacro next-date [interval freq]
  (list (ns-resolve 'clj-time.core interval) freq))
All the above macro does is build a function call that is immediately evaluated, so you don't even need a macro for this:
(defn next-date [interval freq]
  ((ns-resolve 'clj-time.core interval) freq))
(next-date 'weeks 2)
#<Weeks P2W>
The non-macro version requires you to quote the interval, because it must not be evaluated before you can look it up. What the macro really buys you here is not having to include the quote, at the cost of requiring all the callers to require clj-time.
Of course you could also just require clj-time everywhere, but that's not really the point.
I'm learning blocks in Common Lisp and did this example to see how blocks and return-from work:
(block b1
  (print 1)
  (print 2)
  (print 3)
  (block b2
    (print 4)
    (print 5)
    (return-from b1)
    (print 6))
  (print 7))
It will print 1, 2, 3, 4, and 5, as expected. Changing the return-from to (return-from b2), it prints 1, 2, 3, 4, 5, and 7, as one would expect.
Then I tried to turn this into a function and parametrize the label used in the return-from:
(defun test-block (arg)
  (block b1
    (print 1)
    (print 2)
    (print 3)
    (block b2
      (print 4)
      (print 5)
      (return-from (eval arg))
      (print 6))
    (print 7)))
and using (test-block 'b1) to see if it works, but it doesn't. Is there a way to do this without conditionals?
Using a conditional like CASE to select a block to return from
The recommended way to do it is using case or similar. Common Lisp does not support computed returns from blocks. It also does not support computed gos.
Using a case conditional expression:
(defun test-block (arg)
  (block b1
    (print 1)
    (print 2)
    (print 3)
    (block b2
      (print 4)
      (print 5)
      (case arg
        (b1 (return-from b1))
        (b2 (return-from b2)))
      (print 6))
    (print 7)))
One can't compute lexical go tags, return blocks or local functions from names
CLTL2 says about the restriction for the go construct:
Compatibility note: The "computed go" feature of MacLisp is not supported. The syntax of a computed go is idiosyncratic, and the feature is not supported by Lisp Machine Lisp, NIL (New Implementation of Lisp), or Interlisp. The computed go has been infrequently used in MacLisp anyway and is easily simulated with no loss of efficiency by using a case statement each of whose clauses performs a (non-computed) go.
Since features like go and return-from are lexically scoped constructs, computing the targets is not supported. Common Lisp has no way to access lexical environments at runtime and query those. This is for example also not supported for local functions. One can't take a name and ask for a function object with that name in some lexical environment.
Dynamic alternative: CATCH and THROW
The typically less efficient, dynamically scoped alternative is catch and throw; there the tags are computed at runtime.
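As a sketch, here is the same example rewritten with catch and throw (test-block-catch is a made-up name); the tag thrown to is now computed from the argument at runtime:
(defun test-block-catch (arg)
  (catch 'b1
    (print 1)
    (print 2)
    (print 3)
    (catch 'b2
      (print 4)
      (print 5)
      (throw arg nil)   ; ARG evaluates at runtime to the tag B1 or B2
      (print 6))
    (print 7)))

(test-block-catch 'b1)  ; prints 1 2 3 4 5
(test-block-catch 'b2)  ; prints 1 2 3 4 5 7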
I think these sorts of things boil down to the different types of namespaces, bindings, and environments in Common Lisp.
One first point is that a slightly more experienced novice learning Lisp might try to modify your attempted function to say (eval (list 'return-from arg)) instead. This seems to make more sense but still does not work.
Namespaces
A common beginner mistake in a language like Scheme is having a variable called list, as this shadows the top level definition of list as a function and stops the programmer from being able to make lists inside the scope of this binding. The corresponding mistake in Common Lisp is trying to use a symbol as a function when it is only bound as a variable.
In Common Lisp there are namespaces which are mappings from names to things. Some namespaces are:
The functions. To get the corresponding thing, either call it: (foo a b c ...), or get the function for a static symbol with (function foo) (aka #'foo) or for a dynamic symbol with (fdefinition 'foo). Function names are either symbols or lists of setf and one symbol (e.g. (setf bar)). Symbols may alternatively be bound to macros in this namespace, in which case function and fdefinition signal errors.
The variables. This maps symbols to the values in the corresponding variable. This also maps symbols to constants. Get the value of a variable by writing it down, foo, or dynamically as (symbol-value 'foo). A symbol may also be bound as a symbol-macro, in which case special macro expansion rules apply.
Go tags. This maps symbols to labels to which one can go (like goto in other languages).
Blocks. This maps symbols to places you can return from.
Catch tags. This maps objects to the places which catch them. When you throw to an object, the implementation effectively looks up the corresponding catch in this namespace and unwinds the stack to it.
Classes (and structs, conditions). Every class has a name which is a symbol (so different packages may have a point class).
Packages. Each package is named by a string and possibly some nicknames. This string is normally the name of a symbol and therefore usually in uppercase.
Types. Every type has a name which is a symbol. Naturally a class definition also defines a type.
Declarations. Introduced with declare, declaim, proclaim.
There might be more; these are all the ones I can think of.
The catch-tag and declarations namespaces aren't quite like the others, as they don't really map symbols to things, but they do have bindings and environments in the ways described below. (Note that I have used "declarations" to refer to the things that have been declared, like the optimisation policy or which variables are special, rather than the namespace in which e.g. optimize, special, and indeed declaration live, which seems too small to include.)
Now let’s talk about the different ways that this mapping may happen.
The binding of a name to a thing in a namespace is the way in which they are associated, in particular, how it may come to be and how it may be inspected.
The environment of a binding is the place where the binding lives. It says how long the binding lives for and where it may be accessed from. Environments are searched for to find the thing associated with some name in some namespace.
Static and dynamic bindings
We say a binding is static if the name that is bound is fixed in the source code and a binding is dynamic if the name can be determined at run time. For example let, block and tags in a tagbody all introduce static bindings whereas catch and progv introduce dynamic bindings.
Note that my definition for dynamic binding is different from the one in the spec. The spec definition corresponds to my dynamic environment below.
Top level environment
This is the environment where toplevel definitions go: defvar, defun, and defclass, for example, operate at this level. Names are looked up here last, after all other applicable environments have been searched; if a function or variable binding cannot be found at a closer level, this level is consulted. References can sometimes be made to bindings at this level before they are defined, although they may signal warnings; that is, you may define a function bar which calls foo before you have defined foo. In other cases references are not allowed: for example, you can't intern or read a symbol foo::bar before the package FOO has been defined. Many namespaces only allow bindings in the top level environment. These are:
constants (within the variables namespace)
classes
packages
types
Although (excepting proclaim) all these bindings are static, they can effectively be made dynamic by calling eval, which evaluates forms at the top level.
Functions (and [compiler] macros) and special variables (and symbol macros) may also be defined top level. Declarations can be defined toplevel either statically with the macro declaim or dynamically with the function proclaim.
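As a small illustration of the preceding point about eval, here is a sketch in which the name being defined is only known at runtime (define-constant-fn is a made-up helper for illustration):
(defun define-constant-fn (name value)
  ;; NAME is computed at runtime; the DEFUN form is built as data
  ;; and handed to EVAL, which evaluates it in the top level environment.
  (eval `(defun ,name () ,value)))

(define-constant-fn 'the-answer 42)
(funcall 'the-answer)  ; => 42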
Dynamic environment
A dynamic environment exists for a region of time during the program's execution. In particular, a dynamic environment begins when control flow enters some (specific type of) form and ends when control flow leaves it, either by returning normally or by some nonlocal transfer of control like a return-from or go. To look up a dynamically bound name in a namespace, the currently active dynamic environments are searched (effectively; a real system wouldn't be implemented this way) from most recent to oldest for that name, and the first binding wins.
Special variables and catch tags are bound in dynamic environments. Catch tags are bound dynamically using catch, while special variables are bound statically using let and dynamically using progv. As we shall discuss later, let can make two different kinds of binding and it knows to treat a symbol as special if it has been defined with defvar or defparameter or if it has been declared as special.
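For instance, a minimal sketch of the two kinds of binding for a special variable (*depth* is a made-up variable):
(defvar *depth* 0)   ; DEFVAR makes *DEPTH* special

;; Static binding: the name *DEPTH* is written in the source.
(let ((*depth* 1))
  *depth*)                     ; => 1

;; Dynamic binding: the list of names to bind is computed at runtime.
(progv (list '*depth*) (list 2)
  (symbol-value '*depth*))     ; => 2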
Lexical environment
A lexical environment corresponds to a region of source code as it is written and a specific runtime instantiation of it. It (slightly loosely) begins at an opening parenthesis and ends at the corresponding closing parenthesis, and is instantiated when control flow hits the opening parenthesis. This description is a little complicated, so let's have an example with variables, which are bound in a lexical environment unless they are special (by convention the names of special variables are wrapped in * characters):
(defun foo ()
  (let ((x 10))
    (bar (lambda () x))))

(defun bar (f)
  (let ((x 20))
    (funcall f)))
Now what happens when we call (foo)? Well, if x were bound in a dynamic environment (in foo and in bar), then when the anonymous function is called in bar, the first dynamic environment with a binding for x would have it bound to 20.
But this call returns 10, because x is bound in a lexical environment: even though the anonymous function gets passed to bar, it remembers the lexical environment corresponding to the application of foo which created it, and in that lexical environment x is bound to 10. Let's now have another example to show what I mean by 'specific runtime instantiation' above.
(defun baz (islast)
  (let ((x (if islast 10 20)))
    (let ((lx (lambda () x)))
      (if islast
          lx
          (frob lx (baz t))))))

(defun frob (a b)
  (list (funcall a) (funcall b)))
Now running (baz nil) will give us (20 10) because the first function passed to frob remembers the lexical environment for the outer call to baz (where islast is nil) whilst the second remembers the environment for the inner call.
For variables which are not special, let creates static lexical bindings. Block names (introduced statically by block), go tags (scoped inside a tagbody), functions (bound by flet or labels), macros (macrolet), and symbol macros (symbol-macrolet) are all bound statically in lexical environments. Bindings from a lambda list are also lexical. Declarations can be created lexically using (declare ...) in one of the allowed places or by using (locally (declare ...) ...) anywhere.
We note that all lexical bindings are static. The eval trick described above does not work because eval happens in the toplevel environment, whereas references to lexical names are resolved in the lexical environment. This allows the compiler to know exactly where each reference points without the running code having to carry around a list of bindings or access global state (e.g. lexical variables can live in registers and on the stack). It also allows the compiler to work out which bindings can escape or be captured in closures and optimise accordingly. The one exception is that (symbol-)macro bindings can be dynamically inspected in a sense: any macro may take an &environment parameter, which can be passed to macroexpand (and other expansion-related functions) so that the macroexpander can search the compile-time lexical environment for macro definitions.
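As a sketch of that escape hatch, here is a made-up macro expand-here that expands its argument in the lexical environment of its call site:
(defmacro expand-here (form &environment env)
  ;; MACROEXPAND is given the lexical environment of the call site,
  ;; so it can see local (symbol-)macro definitions.
  `',(macroexpand form env))

(symbol-macrolet ((x (+ 1 2)))
  (expand-here x))   ; => (+ 1 2)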
Another thing to note is that without lambda-expressions, lexical and dynamic environments would behave the same way. But note that if there were only a top level environment then recursion would not work as bindings would not be restored as control flow leaves their scope.
Closure
What happens to a lexical binding captured by an anonymous function when that function escapes the scope it was created in? Well, there are two things that can happen:
Trying to access the binding results in an error
The anonymous function keeps the lexical environment alive for as long as the functions referencing it are alive and they can read and write it as they please.
The second case is called a closure and happens for functions and variables. The first case happens for control flow related bindings because you can’t return from a form that has already returned. Neither happens for macro bindings as they cannot be accessed at run time.
Nonlocal control flow
In a language like Java, control (that is, program execution) flows from one statement to the next, branching for if and switch statements and looping for others, with special statements like break and return for certain kinds of jumping. For function calls, control flows into the function until it eventually comes out again when the function returns. The one nonlocal way to transfer control is by using throw and try/catch: if you execute a throw, the stack is unwound piece by piece until a suitable catch is found.
In C there is no throw or try/catch, but there is goto. The structure of C programs is secretly flat, with the nesting just specifying that "blocks" end in the opposite order to the order they start. What I mean by this is that it is perfectly legal to have a while loop in the middle of a switch with cases inside the loop, and it is legal to goto the middle of a loop from outside that loop. There is a way to do nonlocal control transfer in C: you use setjmp to save the current control state somewhere (with the return value indicating whether you have successfully saved the state or just nonlocally returned there) and longjmp to return control flow to a previously saved state. No real cleanup or freeing of memory happens as the stack unwinds, and there needn't be any check that the function which called setjmp is still on the call stack, so the whole thing can be quite dangerous.
In Common Lisp there’s a range of ways to do nonlocal control transfer but the rules are more strict. Lisp doesn’t really have statements but rather everything is built out of a tree of expressions and so the first rule is that you can’t nonlocally transfer control into a deeper expression, you may only transfer out. Let’s look at how these different methods of control transfer work.
block and return-from
You’ve already seen how these work inside a single function but recall that I said block names are lexically scoped. So how does this interact with anonymous functions?
Well, suppose you want to search some big nested data structure for something. If you were writing this function in Java or C then you might implement a special search function to recurse through your data structure until it finds the right thing and then return it all the way up. If you were implementing it in Haskell then you would probably want to do it as some kind of fold and rely on lazy evaluation to not do too much work. In Common Lisp you might have a function which applies some other function, passed as a parameter, to each item in the data structure. And now you can call that with a searching function. How might you get the result out? Well, just return-from the outer block.
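Here is a hedged sketch of that pattern; walk-tree and find-first-even are made-up names for illustration:
(defun walk-tree (fn tree)
  ;; Apply FN to every leaf of a nested list structure.
  (if (consp tree)
      (progn (walk-tree fn (car tree))
             (walk-tree fn (cdr tree)))
      (funcall fn tree)))

(defun find-first-even (tree)
  (block found
    (walk-tree (lambda (leaf)
                 (when (and (integerp leaf) (evenp leaf))
                   ;; Block names are lexically scoped, so this works even
                   ;; though we are inside a closure passed to WALK-TREE.
                   (return-from found leaf)))
               tree)
    nil))

(find-first-even '(1 (3 (5 6)) 7))  ; => 6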
tagbody and go
A tagbody is like a progn, except that bare symbols in the body are not evaluated: they are tags, and any expression within the tagbody can use go to transfer control to one of them. This is partly like goto if you're still in the same function, but if your go expression happens inside some anonymous function then it's like a safe, lexically scoped longjmp.
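A minimal sketch of tagbody and go (count-to is a made-up example):
(defun count-to (n)
  (let ((i 0))
    (tagbody
     again
       (when (< i n)
         (print i)
         (incf i)
         (go again)))))   ; jumps back to the AGAIN tag

(count-to 3)  ; prints 0 1 2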
catch and throw
These are most similar to the Java model. The key difference between block and catch is that block uses lexical scoping and catch uses dynamic scoping. Therefore their relationship is like that between special and regular variables.
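A short sketch of that difference (helper and driver are made-up names): the throw in the callee finds the catch established by its caller at runtime, which a lexical block in driver could not offer to helper.
(defun helper ()
  ;; HELPER knows nothing about DRIVER lexically; the catch tag is
  ;; found by searching the dynamic environment at runtime.
  (throw 'done 42))

(defun driver ()
  (catch 'done
    (helper)
    (print "never reached")))

(driver)  ; => 42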
Finally
In Java one can execute code to tidy things up if the stack has to unwind through it as an exception is thrown. This is done with try/finally. The Common Lisp equivalent is called unwind-protect which ensures a form is executed however control flow may leave it.
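A minimal sketch (read-first-line is a made-up example; with-open-file is the idiomatic way to do this particular job, but it expands into something along these lines):
(defun read-first-line (path)
  (let ((stream (open path)))
    (unwind-protect
         (read-line stream)
      ;; The cleanup form runs however control leaves the protected form:
      ;; normal return, RETURN-FROM, GO, THROW or a signalled error.
      (close stream))))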
Errors
It’s perhaps worth looking a little at how errors work in Common Lisp. Which of these methods do they use?
Well, it turns out that signalling an error does not immediately unwind the stack; it starts by calling functions. First the implementation looks up all the available restarts (ways to deal with an error) and saves them somewhere. Next it looks up all applicable handlers (a list of handlers could, for example, be stored in a special variable, as handlers have dynamic scope) and tries each one in turn. A handler is just a function, so it might return (i.e. decline to handle the error) or it might not return. A handler might not return if it invokes a restart. But restarts are just normal functions, so why might these not return? Well, restarts are created in a dynamic environment below the one where the error was raised, so they can transfer control straight out of the handler and the code that signalled the error to some code that tries to do something and then carries on. Restarts can transfer control using go or return-from. It is worth noting that lexical scope is important here: a recursive function could define a restart on each successive call, and lexical scope for variables and tags/block names is what lets us transfer control to the right level of the call stack with the right state.
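Here is a hedged miniature of that machinery (parse-number-or-default is a made-up example):
(defun parse-number-or-default (string)
  (restart-case (parse-integer string)
    (use-value (v)
      ;; Invoking this restart unwinds out of the handler and out of
      ;; PARSE-INTEGER, and the RESTART-CASE form returns V instead.
      v)))

(handler-bind ((error (lambda (condition)
                        (declare (ignore condition))
                        ;; The handler runs before the stack unwinds and
                        ;; transfers control by invoking a restart.
                        (invoke-restart 'use-value 0))))
  (parse-number-or-default "oops"))   ; => 0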
defmacro is documented at http://clhs.lisp.se/Body/m_defmac.htm but the documentation is not entirely clear on exactly when things happen. By experimenting with Clisp, I have found the following (assuming all macros and functions are defined at top level):
Straight top-level code can only call macros and functions that have been defined earlier.
Code within a macro or function, or generated by a macro, can call any function it likes, including one defined later (as expected from the need to support mutual recursion).
Code within a macro can only call a macro defined earlier than the calling site of the first macro.
Code generated by a macro can call a macro defined later.
Is it the case that Clisp is just following the specification, or is there any variation between implementations in this regard?
Is the exact intended set of rules, and the rationale behind them, documented anywhere?
You are asking about macro expansion - but I'd like to clarify how functions are handled first.
Pay attention to when the calls and the defines actually happen. In your second point you say code within a function can call a function that is defined later. This isn't strictly true.
In languages like C++ you declare and define functions and then compile your app. Ignoring inlining, templates, lambdas and other magic..., when compiling a function, the declarations of all other functions used by that function need to be present - and at link time, the compiled definitions need to be present - all before the program starts running. Once the program starts running, all functions are already fully prepared and ready to be called.
Now in Lisp, things are different. Ignore compilation for now - let's just think about an interpreted environment. If you run:
;; time 1
(defun a () (b))
;; time 2
(defun b () 123)
;; time 3
(a)
At time 1 your program has no functions.
The first defun then creates a function (lambda () (b)), and associates it with the symbol a. This function contains a reference to the symbol b, but at this point in time it is not calling b. a will only call b when a itself gets called.
So, at time 2 your program has one function, associated with the symbol a, but it has not been executed yet.
Now the second defun creates a function (lambda () 123), and associates it with the symbol b.
At time 3 your program has two functions, associated with the symbols a and b, but neither has been called yet.
Now you call a. During its execution, it looks for the function associated with the symbol b, finds that such a function already exists at this point in time, and calls it. b executes and returns 123.
Let's add more code:
;; time 4
(defun b () 456)
;; time 5
(a)
After time 4, the new defun creates a function returning 456 and associates it with the symbol b. This replaces the reference b was holding to the function returning 123, which will then be garbage collected (or whatever your implementation does to take out the trash).
Calling a (or more correctly, the lambda referenced by the function attribute of the symbol a), will now result in a call to a function that returns 456.
If, instead, we had originally written:
;; time 1
(defun a () (b))
;; time 2
(a)
;; time 3
(defun b () 123)
... this would not have worked, because after time 2 when we call a, it can't find a function associated with the symbol b and so it will fail.
Now - compile, eval-when, optimisation and other magic can do all kinds of funky things different from what I've described above, but make sure you first have a grasp of these basics before worrying about that more advanced stuff.
Functions are only created at the time that defun is called. (The interpreter doesn't "look ahead in the file".)
One of the attributes of a symbol is a reference to a function. (The function itself doesn't actually have a name.)
Multiple symbols can reference the same function, e.g. (setf (symbol-function 'd) (symbol-function 'b)).
Defining a function a that calls function b (speaking colloquially), is OK as long as the symbol b has an associated function by the time a is called. (It is not required at the time of defunning a.)
A symbol can refer to different functions at different times. This affects any functions "calling" that symbol.
The rules for macros are different (their expansions are determined at macro-expansion time rather than at run time), but many of the principles remain the same (Lisp doesn't "look ahead in the file" to find them). Understand that Lisp programs are far more dynamic and "run-time" than most (lesser ;-) ) languages you may be used to. Understand what happens when during the execution of a Lisp program, and the rules governing macro expansion will start making sense.
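A small sketch of this ordering rule; twice and use-twice are made-up names for illustration:
(defmacro twice (form)
  `(progn ,form ,form))

;; The call to TWICE is expanded when USE-TWICE is compiled (or first
;; evaluated), so TWICE must already be defined at that point.  Once
;; USE-TWICE has been compiled, its expansion is fixed; redefining TWICE
;; later does not change the already-compiled code.
(defun use-twice ()
  (twice (print "hi")))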
When I was learning HTML it was very helpful for me to know that ol means ordered list, tr is table row, etc. Some of the Lisp primitives/forms are easy: funcall should be function call, defmacro - define macro. Some are in the middle - incf is... increment... f??? But because Common Lisp is so old, these primitives/special forms/etc. don't seem to ring a bell. Can you guys help me with figuring them out? And even more importantly: Where can I find an authoritative resource on learning the meaning/history behind each and every one of them? (I will accept an answer based on this second question)
The documentation doesn't help me either:
* (describe #'let)
#<CLOSURE (:SPECIAL LET) {10013DC6AB}>
[compiled closure]
Lambda-list: (&REST ARGS)
Derived type: (FUNCTION (&REST T) NIL)
Documentation:
T
Source file: SYS:SRC;COMPILER;INFO-FUNCTIONS.LISP
* (documentation 'let 'function)
"LET ({(var [value]) | var}*) declaration* form*
During evaluation of the FORMS, bind the VARS to the result of evaluating the
VALUE forms. The variables are bound in parallel after all of the VALUES forms
have been evaluated."
* (inspect 'let)
The object is a SYMBOL.
0. Name: "LET"
1. Package: #<PACKAGE "COMMON-LISP">
2. Value: "unbound"
3. Function: #<CLOSURE (:SPECIAL LET) {10013DC6AB}>
4. Plist: (SB-WALKER::WALKER-TEMPLATE SB-WALKER::WALK-LET)
What do the following lisp primitives/special forms/special operators/functions mean?
let, flet
progn
car
cdr
acc
setq, setf
incf
(write more in the comments so we can make a good list!)
let: LET variable-name be bound to a certain value
flet: LET Function-name be bound to a certain function
progn: execute a PROGram sequence and return the Nth value (the last value)
car: Contents of the Address Register (historic)
cdr: Contents of the Decrement Register (historic)
acc: ACCumulator
setq: SET Quote, a variant of the set function, where in setq the user doesn't quote the variable
setf: SET Function quoted, shorter name of the original name setfq. Here the function/place is not evaluated.
incf: INCrement Function quoted, similar to setf. Increments a place.
Other conventions:
Macros / special forms that change a place should have an f at the end: setf, psetf, incf, decf, ...
Macros that are DEFining something should have def or define in front: defun, defmethod, defclass, define-method-combination, ...
Functions that are destructive should have an n in front, for Non-consing: nreverse, ...
Predicates have a p or -p at the end: adjustable-array-p, alpha-char-p,...
special variables have * at the front and back: *standard-output*, ...
There are other naming conventions.
let: Well, that's a normal word, and is used like in maths ("let x = 3 in ...").
flet: I'd guess "function let", because that's what it does.
progn: This is probably related also to prog1 and prog2. I read it as "program whose nth form dictates the result value". The program has n forms, so it's the last one that forms the result value of the progn form.
car and cdr: "Contents address register" resp. "Contents decrement register". This is related to the IBM 704 which Lisp was originally implemented for.
setq: "set quote", originally an abbreviation for (set (quote *abc*) value).
setf: "set field", came up when lexical variables appeared. This is a good read on the set functions.
Where can I find an authoritative resource on learning the meaning/history behind each and every one of them?
The HyperSpec is a good place to start, and ultimately the ANSI standard. Though "Common Lisp The Language" could also shine some light on the history of some names.
(Oh, and you got defmacro wrong, that's "Definition for Mac, Read Only." ;) )
I have been trying to use Clojure tagged literals, and noticed that the reader does not evaluate the arguments, very much like a macro. This makes sense, but what is the appropriate solution for doing this? Explicit eval?
Example: given this function
(defn my-data
  ([[arg]]
   (prn (symbol? arg))
   :ok))
and this definition data_readers.clj
{myns/my-data my.ns/my-data}
The following behave differently:
> (let [x 1] (my.ns/my-data [x]))
false
:ok
So the x passed in is evaluated before being passed into my-data. On the other hand:
> (let [x 1] #myns/my-data [x])
true
:ok
So if I want to use the value of x inside my-data, the my-data function needs to do something about it, namely check that x is a symbol, and if so, use (eval x). That seems ugly. Is there a better approach?
Summary
There is no way to get at the value of the local x in your example, primarily because locals only get assigned values at runtime, whereas tagged literals are handled at read time. (There's also compile time in between; it is impossible to get at locals' values at compile time; therefore macros cannot get at locals' values either.)1
The better approach is to use a regular function at runtime, since after all you want to construct a value based on the runtime values of some parameters. Tagged literals really are literals and should be used as such.
Extended discussion
To illustrate the issue described above:
(binding [*data-readers* {'bar (fn [_] (java.util.Date.))}]
  (eval (read-string "(defn foo [] #bar x)")))
foo will always return the same value, because the reader has only one opportunity to return a value for #bar x which is then baked into foo's bytecode.
Notice also that instead of passing it to eval directly, we could store the data structure returned by the call to read-string and compile it at an arbitrary point in the future; the value returned by foo would remain the same. Clearly there's no way for such a literal value to depend on the future values of any locals; in fact, during the reader's operation, it is not even clear which symbols will come to name locals -- that's for the compiler to determine and the result of that determination may be non-obvious in cases where macros are involved.
Of course the reader is free to return a form which looks like a function call, the name of a local etc. To show one example:
(binding [*data-readers* {'bar (fn [sym] (list sym 1 2 3 4 5))}]
  (eval (read-string "#bar *")))
;= 120
;; substituting + for * in the string yields a value of 15
Here #bar f becomes equivalent to (f 1 2 3 4 5). Needless to say, this is an abuse of notation and doesn't really do what you asked for.
1 It's worth pointing out that eval has no access to locals (it always operates in the global scope), but the issue with locals not having values assigned before runtime is more fundamental.
The Noir macro defpage is giving me a little bit of trouble. I am trying to construct a call similar to this:
(defpage [:post "some/url"] [data]
  ;; some stuff...
  )
However, instead of using the keyword :post I would like to use a variable, like this:
(def my-method :post)
(defpage [my-method "some/url"] [data]
  ;; some stuff...
  )
The problem is that when the macro expands, it wants to resolve the variable my-method in the compojure.core namespace instead of my own, giving me the error:
No such var: compojure.core/MY-METHOD
How can I force my-method to resolve in the current context?
I guess this is a similar problem to: How can I apply clojure's doc function to a sequence of functions
A macro can do whatever it wants with its args, so passing a naked symbol in can result in unpredictable results.
A way to solve it, but it ain't pretty:
(eval (list 'defpage (vector my-method "some/url") '[data]
            ; some stuff
            ))
Notice that my-method is not a literal here, so it gets resolved and evaluated in our own namespace first, before going into eval.
It seems that noir is not meant to be used this way, because it takes the method argument and transforms it into a symbol in compojure.core (see https://github.com/ibdknox/noir/blob/master/src/noir/core.clj#L36). That means it doesn't expect a variable in this place, only literals. So I don't think you can do anything about that, except post an issue to noir...
If we look through the noir/core.clj file (source), find the parse-route function, and work out what it does with its method argument (it is called action there), we find that the method keyword is converted to a string, uppercased, and resolved in the compojure.core namespace. All of this is done at macro expansion time. So it is not possible to use a variable instead of a keyword without altering the noir code.
What about passing my-method along with the namespace it is in:
(defpage [myns/my-method "some/url"] [data]
  ;;
  )