I am trying to write a recursive anonymous function; so far what works for me is to use the Y combinator.
%{ Call lambda function with any number of arguments. %}
call_ = @feval;
%{ Internal subroutine for function IF_. %}
if__ = @(varargin) varargin{3 - varargin{1}}();
%{ If predicate PRED is true, evaluate thunk IF_CLAUSE. %}
%{ Otherwise, evaluate thunk ELSE_CLAUSE. %}
if_ = @(pred, if_clause, else_clause) if__(pred, if_clause, else_clause);
%{ Determine if X is a cell array. %}
is_cell_ = @(x) isa(x, 'cell');
%{ Call function FUNC with elements in argument list ARG_LS as arguments. %}
apply_ = @(func, arg_ls) if_(is_cell_(arg_ls), ...
    @() func(arg_ls{:}), ...
    @() func(arg_ls(:)));
%{ Compute a fixed point for the function H. %}
Y_ = @(h) call_(@(x) x(x), @(g) h(@(varargin) apply_(g(g), varargin)));
So, for example, to define factorial, we can use the Y combinator:
make_fact__ = @(f) @(n, accum) if_(n == 0, @() accum, @() f(n - 1, n * accum))
fact__ = Y_(make_fact__)
fact_ = @(n) fact__(n, 1)
fact_(5)
Note that the factorial defined above is tail recursive. Is there any way to optimize the tail call, so that the tail-recursive function won't blow the stack for large N?
Searching the internet gives me the equivalent question in Python (Does Python optimize tail recursion?), but I can't really make it work in Octave/MATLAB using only anonymous functions.
I have also found some articles about trampolines; again, I find it difficult to implement the try-catch needed for a trampoline using only anonymous functions.
Thanks!
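For what it's worth, a trampoline doesn't strictly need try-catch: the core is just a loop that keeps calling returned thunks until a non-function value comes back. A minimal sketch of the idea in Julia (not Octave, so it sidesteps the anonymous-function-only constraint):

# A trampolined computation returns either a final value or a
# zero-argument thunk representing the next step.
function trampoline(f, args...)
    r = f(args...)
    while r isa Function   # keep bouncing until we get a real value
        r = r()
    end
    return r
end

# Tail-recursive factorial in "bounce" style: the tail call is replaced
# by returning a thunk, so the stack never grows.
fact_step(n, acc) = n == 0 ? acc : (() -> fact_step(n - 1, n * acc))

trampoline(fact_step, 20, 1)   # 2432902008176640000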
I have the following macro; it should allow me to build a struct with only one or two arguments:
macro baseStruct(name, arg)
    if length(arg.args) == 2 && isa(arg.args[1], Symbol) || length(arg.args) == 1
        aakws = Pair{Symbol,Any}[]
        defaultValues = Array{Any}(nothing, 1)
        field_define(aakws, defaultValues, 1, arg, first=true)
        :(struct $name
              $(arg.args[1])
              function $name(; $(arg.args[1])=$(defaultValues[1]))
                  new(check($name, $(arg.args[1]); $aakws...))
              end
          end)
    elseif length(arg.args) == 2 && !isa(arg.args[1], Symbol)
        aakws1 = Pair{Symbol,Any}[]
        aakws2 = Pair{Symbol,Any}[]
        defaultValues = Array{Any}(nothing, 2)
        field_define(aakws1, defaultValues, 1, arg)
        field_define(aakws2, defaultValues, 2, arg)
        :(struct $name
              $(arg.args[1].args[1])
              $(arg.args[2].args[1])
              function $name(; $(arg.args[1].args[1])=$(defaultValues[1]), $(arg.args[2].args[1])=$(defaultValues[2]))
                  new(check($name, $(arg.args[1].args[1]); $aakws1...),
                      check($name, $(arg.args[2].args[1]); $aakws2...))
              end
          end)
    end
end
@baseStruct test(
    (arg1, (max=100.0, description="this arg1", min=5.5)),
    (arg2, (max=100, default=90))
)
The macro will expand to:
struct test
    arg1
    arg2
end
and the following instance:
test1=test(arg1=80.5)
should give:
test1(arg1=80.5,arg2=90)
The check function returns the argument; it allows me to check the argument in a specific way:
function check(str, arg; type=DataType, max=nothing, min=nothing, default=nothing, description="")
    # do something
    return arg
end
The field_define function takes the second parameter of the macro as an Expr, converts it to an array of Pair{Symbol,Any}, and assigns it to aakws.
I can extend this code to more arguments, but as you can see it will get long and complex. Does anybody have a tip or another approach for implementing this for more (or unlimited) arguments in a more efficient way?
I don't have the code for field_define or check, so I can't work with your example directly. Instead I'll work with a simpler macro that demonstrates how you can splat-interpolate a sequence of Expr into a quoted Expr:
# series of Expr like :(a::Int=1)
macro kwstruct(name, arg_type_defaults...)
    # :(a::Int)
    arg_types = [x.args[1] for x in arg_type_defaults]
    # :a
    args = [y.args[1] for y in arg_types]
    # build :kw Expr because an isolated :(a=1) is parsed as assignment
    arg_defaults = [Expr(:kw, x.args[1].args[1], x.args[2])
                    for x in arg_type_defaults]
    # splatting interpolation into a quoted Expr
    :(struct $name
          $(arg_types...)
          function $name(; $(arg_defaults...))
              new($(args...))
          end
      end)
end
This macro expands @kwstruct D a::Int=1 b::String="12" to:
struct D
    a::Int
    b::String
    function D(; a = 1, b = "12")
        new(a, b)
    end
end
And you can put as many arguments as you want, like @kwstruct(F, a::Int=1, b::String="12", c::Float64=1.2).
P.S. Splatting interpolation $(iterable...) only works in a quoted Expr, which looks like :(___) or quote ___ end. If you're constructing with an Expr() call, just use splatting:
vars = (:b, :c)
exq = :( f(a, $(vars...), d) ) # :(f(a, b, c, d))
ex = Expr(:call, :f, :a, vars..., :d) # :(f(a, b, c, d))
exq == ex # true
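As an aside on the :kw comment above, you can check in the REPL that a bare a = 1 parses as an assignment, while the same expression inside a call already comes out as a :kw Expr:

julia> :(a = 1).head        # on its own, it's an assignment
:(=)

julia> :(g(a = 1)).args[2]  # inside a call, the parser builds :kw
:($(Expr(:kw, :a, 1)))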
I am a noob at metaprogramming, so maybe I am not understanding this. I thought the purpose of the @nloops macro in Base.Cartesian was to make it possible to code an arbitrary number of nested for loops, in circumstances where the dimension is unknown a priori. In the documentation for the module, the following example is given:
@nloops 3 i A begin
    s += @nref 3 A i
end
which evaluates to
for i_3 = 1:size(A,3)
    for i_2 = 1:size(A,2)
        for i_1 = 1:size(A,1)
            s += A[i_1, i_2, i_3]
        end
    end
end
Here, the number 3 is known a priori. For my purposes, however, and for the purposes for which I thought @nloops was created, the number of nested levels is not known ahead of time. So I would not be able to hard-code the integer 3. Even in the documentation, it is stated:
The (basic) syntax of @nloops is as follows:
The first argument must be an integer (not a variable) specifying the number of loops.
...
If I assign an integer value - say the dimension of an array that is passed to a function - to some variable, the @nloops macro no longer works:
b = 3
@nloops b i A begin
    s += @nref b A i
end
This returns an error:
ERROR: LoadError: MethodError: no method matching _nloops(::Symbol, ::Symbol, ::Symbol, ::Expr)
Closest candidates are:
_nloops(::Int64, ::Symbol, ::Symbol, ::Expr...) at cartesian.jl:43
...
I don't know how to make @nloops evaluate the b variable as an integer rather than a symbol. I have looked at the documentation and tried various combinations of eval and other functions and macros, but it is either interpreted as a symbol or an Expr. What is the correct, Julian way to write this?
See the documentation section "Supplying the number of expressions":
julia> A = rand(4, 4, 3) # 3D array (Array{Float64, 3})
A generated function is kind of like a macro, in that the resulting expression is not returned but compiled and executed on invocation/call. It also sees the types of the arguments (and their type parameters, of course), i.e.:
inside the generated function, A is the type Array{T, N}, not the value of the array,
so T is Float64 and N is 3!
Here, inside the quoted expression, N is interpolated into the expression with the syntax $N, which evaluates to 3:
julia> @generated function mysum(A::Array{T,N}) where {T,N}
           quote
               s = zero(T)
               @nloops $N i A begin
                   s += @nref $N A i
               end
               s
           end
       end
mysum (generic function with 1 method)
julia> mysum(A)
23.2791638775186
You could construct the expression and then evaluate it, i.e.:
julia> s = 0; n = 3;
julia> _3loops = quote
           @nloops $n i A begin
               global s += @nref $n A i
           end
       end
quote
    @nloops 3 i A begin
        global s += @nref(3, A, i)
    end
end
julia> eval(_3loops)
julia> s
23.2791638775186
I have manually scrubbed the LineNumberNodes from the AST for readability (there is also MacroTools.prettify, which does this for you).
Running this example in the REPL on Julia 1.0 requires declaring s as global inside the loop.
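If you want to avoid the global entirely, another option (a sketch continuing the session above, with Base.Cartesian loaded) is to interpolate n into a whole function definition with @eval:

julia> n = 3;

julia> @eval function sumA(A)       # $n is spliced in before evaluation
           s = zero(eltype(A))
           @nloops $n i A begin
               s += @nref $n A i
           end
           s
       end;

julia> sumA(A)
23.2791638775186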
There is a host of related questions regarding other languages, but this one is about MATLAB.
(How) can I access the caller's scope from an anonymous function? I had considered using eval, but this works even worse than just using the variable. An example:
clearvars;
f1 = @() n
f2 = @() eval('n')
n = 1
f3 = @() n
f4 = @() eval('n')
n = 2
f3() runs, but outputs 1, while I would like 2. My preferred solution would be f2(), but all of f1(), f2(), f4() fail with almost the same error message:
(Error using eval)
Undefined function or variable 'n'.
Interestingly, f4() cannot even access the original scope, probably because eval hides the use of n so that n is not stored alongside f4 for memory reasons.
So what can I do to access n from f2?
This works:
clearvars;
f5 = @() evalin('caller', 'n')
n = 2
f5()
Read more about the evalin command in the MATLAB documentation.
I would like to inject code into a function. For concreteness, consider a simple simulator:
function simulation(A, x)
    for t in 1:1000
        z = randn(3)
        x = A*x + z
    end
end
Sometimes I would like to record the values of x every 10 time steps, sometimes the values of z every 20 time steps, and sometimes I don't want to record any values. I could, of course, pass some flags as arguments to the function and have some if-else statements. But I would rather keep the simulation code clean and only inject a piece of code like
if t%10 == 0
    append!(rec_z, z)
end
into particular places of the function whenever I need it. For that, I'd like to write a macro such that monitoring a particular value becomes
@monitor(:z, 10)
simulation(A, x)
Is that possible with Julia's Metaprogramming capabilities?
No, you cannot use metaprogramming to inject code into an already-written function. Metaprogramming can only do things that you could directly write yourself at precisely the location where the macro itself is written. That means that a statement like:
@monitor(:z, 10); simulation(A, x)
cannot even modify the simulation(A, x) function call. It can only expand out to some normal Julia code that runs before simulation is called. You could, perhaps, include the simulation function call as an argument to the macro, e.g., @monitor(:z, 10, simulation(A, x)), but now all the macro can do is change the function call itself. It still cannot "go back" and add new code to a function that was already written.
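A quick way to see this for yourself: a macro only ever receives the expressions written at its call site, already parsed but not yet evaluated. A tiny illustration (a hypothetical @peek macro, not part of any library):

julia> macro peek(e)
           println("macro saw: ", e)   # runs at expansion time; e is an Expr
           esc(e)                      # expand to the unmodified expression
       end;

julia> @peek 1 + 2
macro saw: 1 + 2
3

The macro can rewrite the Expr it was handed, but it has no handle on any function defined elsewhere.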
You could, however, carefully and meticulously craft a macro that takes the function definition body and modifies it to add your debug code, e.g.,
@monitor(:z, 10, function simulation(A, x)
    for t in 1:1000
        # ...
    end
end)
But now you must write code in the macro that traverses the code in the function body, and injects your debug statement at the correct place. This is not an easy task. And it's even harder to write in a robust manner that wouldn't break the moment you modified your actual simulation code.
Traversing code and inserting it is a much easier task for you to do yourself with an editor. A common idiom for debugging statements is to use a one-liner, like this:
const debug = false
function simulation(A, x)
    for t in 1:1000
        z = randn(3)
        x = A*x + z
        debug && t%10==0 && append!(rec_z, z)
    end
end
What's really cool here is that by marking debug as constant, Julia is able to completely optimize away the debugging code when it's false: it doesn't even appear in the generated code! It does mean, however, that you have to restart Julia (or reload the module it's in) to change the debug flag. Even when debug isn't marked as const, I cannot measure any overhead for this simple loop. And chances are, your loop will be more complicated than this one. So don't worry about performance here until you actually double-check that it's having an effect.
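You can check the folding yourself with the code-inspection macros; a minimal sketch (hypothetical step function, fresh session):

const debug = false

function step(x)
    debug && println("x = ", x)   # dead branch when debug == false
    return x + 1
end

# julia> @code_llvm step(1)
# shows only the increment; the branch and the println are gone.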
You might be interested in this, which I just whipped up. It doesn't QUITE do what you are doing, but it's close. Generally safe and consistent places to add code are the beginning and end of code blocks. These macros allow you to inject some code in those locations (and even pass code parameters!).
Should be useful for, say, toggleable input checking.
#cleaninject.jl
#cleanly injects some code into the AST of a function.
function code_to_inject()
    println("this code is injected")
end
function code_to_inject(a,b)
    println("injected code handles $a and $b")
end
macro inject_code_prepend(f)
    # make sure this macro precedes a function definition.
    isa(f, Expr) || error("checkable macro must precede a function definition")
    (f.head == :function) || error("checkable macro must precede a function definition")
    # be lazy and let the parser do the hard work.
    b2 = Meta.parse("code_to_inject()")
    # inject the generated code into the AST.
    pushfirst!(f.args[2].args, b2)
    # return the escaped function to the parser so that it generates the new function.
    return Expr(:escape, f)
end
macro inject_code_append(f)
    # make sure this macro precedes a function definition.
    isa(f, Expr) || error("checkable macro must precede a function definition")
    (f.head == :function) || error("checkable macro must precede a function definition")
    # be lazy and let the parser do the hard work.
    b2 = Meta.parse("code_to_inject()")
    # inject the generated code into the AST.
    push!(f.args[2].args, b2)
    # return the escaped function to the parser so that it generates the new function.
    return Expr(:escape, f)
end
macro inject_code_with_args(f)
    # make sure this macro precedes a function definition.
    isa(f, Expr) || error("checkable macro must precede a function definition")
    (f.head == :function) || error("checkable macro must precede a function definition")
    # be lazy and let the parser do the hard work.
    b2 = Meta.parse(string("code_to_inject(", join(f.args[1].args[2:end], ","), ")"))
    # inject the generated code into the AST.
    pushfirst!(f.args[2].args, b2)
    # return the escaped function to the parser so that it generates the new function.
    return Expr(:escape, f)
end
################################################################################
# RESULTS
#=
julia> @inject_code_prepend function p()
println("victim function")
end
p (generic function with 1 method)
julia> p()
this code is injected
victim function
julia> @inject_code_append function p()
println("victim function")
end
p (generic function with 1 method)
julia> p()
victim function
this code is injected
julia> @inject_code_with_args function p(a, b)
println("victim called with $a and $b")
end
p (generic function with 2 methods)
julia> p(1, 2)
injected code handles 1 and 2
victim called with 1 and 2
=#
So Mathematica is different from other dialects of Lisp because it blurs the line between functions and macros. In Mathematica, if a user wanted to write a mathematical function they would likely use pattern matching like f[x_]:= x*x instead of f=Function[{x},x*x], though both would return the same result when called with f[x]. My understanding is that the first approach is something equivalent to a Lisp macro and in my experience is favored because of the more concise syntax.
So I have two questions: is there a performance difference between executing functions versus the pattern-matching/macro approach? Though part of me wouldn't be surprised if functions were actually transformed into some version of macros to allow features like Listable to be implemented.
The reason I care about this question is because of the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where the error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.
The way I understand Mathematica is that it is one giant search-and-replace engine. All functions, variables, and other assignments are essentially stored as rules, and during evaluation Mathematica goes through this global rule base and applies them until the resulting expression stops changing.
It follows that the fewer times you have to go through the list of rules, the faster the evaluation. Looking at what happens using Trace (using gdelfino's functions g and h):
In[1]:= Trace@(#*#)&@x
Out[1]= {x x, x^2}
In[2]:= Trace@g@x
Out[2]= {g[x], x x, x^2}
In[3]:= Trace@h@x
Out[3]= {{h, Function[{x}, x x]}, Function[{x}, x x][x], x x, x^2}
it becomes clear why anonymous functions are fastest and why using Function introduces additional overhead over a simple SetDelayed. I recommend looking at the introduction of Leonid Shifrin's excellent book, where these concepts are explained in some detail.
I have on occasion constructed a Dispatch table of all the functions I need and manually applied it to my starting expression. This provides a significant speed increase over normal evaluation as none of Mathematica's inbuilt functions need to be matched against my expression.
My understanding is that the first approach is something equivalent to a lisp macro and in my experience is favored because of the more concise syntax.
Not really. Mathematica is a term rewriter, as are Lisp macros.
So I have two questions, is there a performance difference between executing functions versus the pattern matching/macro approach?
Yes. Note that you are never really "executing functions" in Mathematica. You are just applying rewrite rules to change one expression into another.
Consider mapping the Sqrt function over a packed array of floating point numbers. The fastest solution in Mathematica is to apply the Sqrt function directly to the packed array because it happens to implement exactly what we want and is optimized for this special case:
In[1] := xs = N@Range[100000];
In[2] := Sqrt[xs]; // AbsoluteTiming
Out[2] = {0.0060000, Null}
We might define a global rewrite rule that has terms of the form sqrt[x] rewritten to Sqrt[x] such that the square root will be calculated:
In[3] := Clear[sqrt];
sqrt[x_] := Sqrt[x];
Map[sqrt, xs]; // AbsoluteTiming
Out[3] = {0.4800007, Null}
Note that this is ~100× slower than the previous solution.
Alternatively, we might define a global rewrite rule that replaces the symbol sqrt with a lambda function that invokes Sqrt:
In[4] := Clear[sqrt];
sqrt = Function[{x}, Sqrt[x]];
Map[sqrt, xs]; // AbsoluteTiming
Out[4] = {0.0500000, Null}
Note that this is ~10× faster than the previous solution.
Why? Because the slow second solution is looking up the rewrite rule sqrt[x_] :> Sqrt[x] in the inner loop (for each element of the array) whereas the fast third solution looks up the value Function[...] of the symbol sqrt once and then applies that lambda function repeatedly. In contrast, the fastest first solution is a loop calling sqrt written in C. So searching the global rewrite rules is extremely expensive and term rewriting is expensive.
If so, why is Sqrt ever fast? You might expect a 2× slowdown instead of 10×, because we've replaced one lookup for Sqrt with two lookups for sqrt and Sqrt in the inner loop, but this is not so because Sqrt has the special status of being a built-in function that is matched in the core of the Mathematica term rewriter itself rather than via the general-purpose global rewrite table.
Other people have described much smaller performance differences between similar functions. I believe the performance differences in those cases are just minor differences in the exact implementation of Mathematica's internals. The biggest issue with Mathematica is the global rewrite table. In particular, this is where Mathematica diverges from traditional term-level interpreters.
You can learn a lot about Mathematica's performance by writing mini Mathematica implementations. In this case, the above solutions might be compiled to (for example) F#. The array may be created like this:
> let xs = [|1.0..100000.0|];;
...
The built-in sqrt function can be converted into a closure and given to the map function like this:
> Array.map sqrt xs;;
Real: 00:00:00.006, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
...
This takes 6ms just like Sqrt[xs] in Mathematica. But that is to be expected because this code has been JIT compiled down to machine code by .NET for fast evaluation.
Looking up rewrite rules in Mathematica's global rewrite table is similar to looking up the closure in a dictionary keyed on its function name. Such a dictionary can be constructed like this in F#:
> open System.Collections.Generic;;
> let fns = Dictionary<string, (obj -> obj)>(dict["sqrt", unbox >> sqrt >> box]);;
This is similar to the DownValues data structure in Mathematica, except that we aren't searching multiple resulting rules for the first to match on the function arguments.
The program then becomes:
> Array.map (fun x -> fns.["sqrt"] (box x)) xs;;
Real: 00:00:00.044, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
...
Note that we get a similar 10× performance degradation due to the hash table lookup in the inner loop.
An alternative would be to store the DownValues associated with a symbol in the symbol itself in order to avoid the hash table lookup.
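In Julia-flavored pseudocode of that idea (a hypothetical Sym type; the point is only that the rule lives on the symbol, so dispatch becomes a field access rather than a hash-table lookup):

# Each symbol carries its own (possibly absent) rewrite rule.
mutable struct Sym
    name::String
    rule::Union{Function, Nothing}
end

const sqrtsym = Sym("sqrt", x -> sqrt(x))

apply(s::Sym, x) = s.rule === nothing ? error("no rule for $(s.name)") : s.rule(x)

xs = collect(1.0:100000.0)
map(x -> apply(sqrtsym, x), xs)   # no per-element dictionary lookup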
We can even write a complete term rewriter in just a few lines of code. Terms may be expressed as values of the following type:
> type expr =
| Float of float
| Symbol of string
| Packed of float []
| Apply of expr * expr [];;
Note that Packed implements Mathematica's packed lists, i.e. unboxed arrays.
The following init function constructs a List with n elements using the function f, returning a Packed if every return value was a Float or a more general Apply(Symbol "List", ...) otherwise:
> let init n f =
    let rec packed ys i =
      if i=n then Packed ys else
      match f i with
      | Float y ->
          ys.[i] <- y
          packed ys (i+1)
      | y ->
          Apply(Symbol "List", Array.init n (fun j ->
            if j<i then Float ys.[j]
            elif j=i then y
            else f j))
    packed (Array.zeroCreate n) 0;;
val init : int -> (int -> expr) -> expr
The following rule function uses pattern matching to identify expressions that it can understand and replaces them with other expressions:
> let rec rule = function
| Apply(Symbol "Sqrt", [|Float x|]) ->
Float(sqrt x)
| Apply(Symbol "Map", [|f; Packed xs|]) ->
init xs.Length (fun i -> rule(Apply(f, [|Float xs.[i]|])))
| f -> f;;
val rule : expr -> expr
Note that the type of this function expr -> expr is characteristic of term rewriting: rewriting replaces expressions with other expressions rather than reducing them to values.
Our program can now be defined and executed by our custom term rewriter:
> rule (Apply(Symbol "Map", [|Symbol "Sqrt"; Packed xs|]));;
Real: 00:00:00.049, CPU: 00:00:00.046, GC gen0: 24, gen1: 0, gen2: 0
We've recovered the performance of Map[Sqrt, xs] in Mathematica!
We can even recover the performance of Sqrt[xs] by adding an appropriate rule:
| Apply(Symbol "Sqrt", [|Packed xs|]) ->
Packed(Array.map sqrt xs)
I wrote an article on term rewriting in F#.
Some measurements
Based on @gdelfino's answer and comments by @rcollyer I made this small program:
j = # # + # # &;
g[x_] := x x + x x ;
h = Function[{x}, x x + x x ];
anon = Table[Timing[Do[ # # + # # &[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
jj = Table[Timing[Do[ j[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
gg = Table[Timing[Do[ g[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
hh = Table[Timing[Do[ h[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
ListLinePlot[ {anon, jj, gg, hh},
PlotStyle -> {Black, Red, Green, Blue},
PlotRange -> All]
The results are, at least for me, very surprising:
Any explanations? Please feel free to edit this answer (comments are a mess for long text).
Edit
Tested with the identity function f[x] = x to isolate the parsing from the actual evaluation. Results (same colors):
Note: the results are very similar to this plot for constant functions (f[x]:=1).
Pattern matching seems faster:
In[1]:= g[x_] := x*x
In[2]:= h = Function[{x}, x*x];
In[3]:= Do[h[RandomInteger[100]], {1000000}] // Timing
Out[3]= {1.53927, Null}
In[4]:= Do[g[RandomInteger[100]], {1000000}] // Timing
Out[4]= {1.15919, Null}
Pattern matching is also more flexible as it allows you to overload a definition:
In[5]:= g[x_] := x * x
In[6]:= g[x_,y_] := x * y
For simple functions you can compile to get the best performance:
In[7]:= k = Compile[{x}, x*x];
In[8]:= Do[k[RandomInteger[100]], {100000}] // Timing
Out[8]= {0.083517, Null}
You can use the function recordSteps in the previous answer to see what Mathematica actually does with Functions. It treats them just like any other head. I.e., suppose you have the following:
f = Function[{x}, x + 2];
f[2]
It first transforms f[2] into
Function[{x}, x + 2][2]
At the next step, x+2 is transformed into 2+2. Essentially, "Function" evaluation behaves like an application of pattern matching rules, so it shouldn't be surprising that it's not faster.
You can think of everything in Mathematica as an expression, where evaluation is the process of rewriting parts of the expression in a predefined sequence. This applies to Function just as to any other head.