I would like to call a macro in a Julia package (@defNLExpr in JuMP) with an argument that is runtime-dependent. The argument is an expression that depends on the runtime parameter n. The only way I can think of doing this is something like the following:
macro macro1(x)
    y = length(x.args)
    return esc(:(k = $y - 1))
end

macro macro2(n)
    x = "0"
    for i = 1:n
        x = "$x+$i"
    end
    x = parse(x)
    return :(@macro1($x))
end
n = rand(1:3)
println(n)
if n == 1
    @macro2(1)
elseif n == 2
    @macro2(2)
elseif n == 3
    @macro2(3)
else
    error("expected n in 1:3")
end
println(k)
Here I have assumed that my runtime n will always be in the range 1-3. I use macro2 to build up all possible expressions for these different possible values of n, and call the external macro (which I have replaced by the simplified macro1 here) for each of them. The calls to macro1 are in if statements, so only the correct one (determined from the value of n at runtime) will actually be executed.
Although this seems to work, is there a more efficient way of achieving this?
It seems that you might be looking for eval? Be aware that it should be used with care, though, and that it is not very fast, since it has to invoke the compiler each time it is called.
If it's a limitation to you that it evaluates the expression in global scope, there are some ways to work around that.
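For example, a minimal sketch of how eval (via the @eval convenience macro) could replace the if chain above, reusing the simplified @macro1 from the question:
n = rand(1:3)
ex = Expr(:call, :+, 0, (1:n)...)   # build the expression 0 + 1 + ... + n at runtime
@eval @macro1($ex)                  # splice it into the macro call and evaluate in global scope
println(k)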
Background -- I was reading up on accessing shadowed functions, and started playing with builtin. I wrote a little function:
function klear(x)
    % go to parent environment...
    evalin('base', builtin('clear','x'));
end
This throws the error:
Error using clear
Too many output arguments.
I think this happens because evalin demands an output from whatever it's being fed, but clear is one of the functions which has no return value.
So two questions: am I interpreting this correctly, and if so, is there an alternative function that allows me to execute a function in the parent environment (that doesn't require an output)?
Note: I'm fully aware of the arguments against trying to access shadowed funcs (or rather, to avoid naming functions in a way that overload base funcs, etc). This is primarily a question to help me learn what can and can't be done in MATLAB.
Note 2
My original goal was to write an overload function that would require an input argument, to avoid the malware-ish behavior of clear, which defaults to deleting everything. In Q&D pseudocode,
function clear(x)
    if ~exist('x','var') return
    execute_in_base_env(builtin(clear(x)))
end
There's a couple issues with your clear override:
It will always clear in the base workspace regardless of where it's called from.
It doesn't support multiple inputs, which is a common use case for clear.
Instead, I'd have it check whether it was called from the base workspace, and special-case that when deciding whether it's about to clear everything. If some function calls plain clear to clear all its variables, that's bad practice, but it's still how that function's logic works, and you don't want to break it. Otherwise it could error, or worse, return incorrect results.
So, something like this:
function clear(varargin)
    stk = dbstack;
    if numel(stk) == 1 && (nargin == 0 || ismember('all', varargin))
        fprintf('clear: balking at clearing all vars in base workspace. Nothing cleared.\n');
        return;
    end
    % Check for quoting problems
    for i = 1:numel(varargin)
        if any(varargin{i} == '''')
            error('You have a quote in one of your args. That''s not valid.');
        end
    end
    % Construct a clear() call that works with evalin()
    arg_strs = strcat('''', varargin, '''');
    arg_strs = [{'''clear'''} arg_strs];
    expr = ['builtin(' strjoin(arg_strs, ', ') ')'];
    % Do it
    evalin('caller', expr);
end
I hope it goes without saying that this is an atrocious hack that I wouldn't recommend in practice. :)
What happens in your code:
evalin('base', builtin('clear','x'));
is that builtin is evaluated in the current context, and because it is used as an argument to evalin, it is expected to produce an output. It is exactly the same as:
ans = builtin('clear','x');
evalin('base',ans);
The error message you see occurs in the first of those two lines of code, not in the second. It is not because of evalin, which does support calling statements that don't produce an output argument.
evalin requires a string to evaluate. You need to build this string:
str = 'builtin(''clear'',''x'')';
evalin('base',str);
(In MATLAB, the quote character is escaped by doubling it.)
Your function would thus look like this:
function clear(var)
    try
        evalin('base',['builtin(''clear'',''',var,''')'])
    catch
        % ignore error
    end
end
(Inserting a string into another string this way is rather awkward, which is one of the many reasons I don't like eval and friends.)
It might be better to use evalin('caller',...) in this case, so that when you call the new clear from within a function, it deletes something in the function's workspace, not the base one. I think 'base' should only be used from within a GUI that is expected to control variables in the user's workspace, not from a function that could be called anywhere and is expected (by its name in this case) to do something local.
There are reasons why this might be genuinely useful, but in general you should try to avoid the use of clear just as much as the use of eval and friends. clear slows down program execution. It is much easier (both on the user and on the MATLAB JIT) to assign an empty array to a variable to remove its contents from memory (as suggested by rahnema1 in a comment). Your base workspace would not be cluttered with variables if you used functions more: write functions, not scripts!
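For example:
x = [];   % empties x and releases the memory it held, without reaching for clear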
I'm currently following the book Learn You Some Erlang for Great Good! by Fred Hébert, and one of the sections covers macros.
I understand using macros for variables (constant values, mainly), but I don't understand the use case for macros as functions. For example, Hébert writes:
Defining a "function" macro is similar. Here's a simple macro used to subtract one number from another:
-define(sub(X, Y), X-Y).
Why not just define this as a function elsewhere? Why use a macro? Is there some sort of performance advantage from the compiler or is this merely just a "this function is so simple, let's just define it in one line" type of thing?
I'm not trying to start a debate or preference argument, but after seeing some production Erlang code, I've started noticing lots of function-macro usage.
In this case, the one obvious advantage of the macro not being a function (-define(sub(X, Y), X-Y), which would be safer as -define(sub(X, Y), (X-Y))) is that it can be used in guards, since custom function calls are forbidden there.
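For example (a small sketch; positive_diff is just an illustrative name):
-define(sub(X, Y), (X - Y)).

positive_diff(A, B) when ?sub(A, B) > 0 -> true;   %% expands to (A - B) > 0, which is legal in a guard
positive_diff(_, _) -> false.                      %% calling a sub/2 function in that guard would not compile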
In many cases it would otherwise be safer to define the function as an inlined one.
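If inlining is what you are really after, the compiler can do it for an ordinary function when asked:
-compile({inline, [{sub, 2}]}).   %% ask the compiler to inline sub/2 at its call sites
sub(X, Y) -> X - Y.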
On the other hand, there are other interesting cases, such as assertions in tests or shortcuts where what you want is to keep some local context in the final place.
For example, let's say I want to make a generic call for a test where the objective is 'match a given pattern, or fail after a given number of milliseconds'.
I cannot make this generic with code since patterns are not data structures you are allowed to carry around. However, with macros:
-define(wait_for(PAT, Timeout),
    receive
        PAT -> ok
    after Timeout ->
        error(timeout)
    end).
This macro can then be used as:
my_test() ->
    Pid = start_whatever(),
    %% ...
    ?wait_for({'EXIT', Pid, Reason}, 5000),
    ?assertMatch(shutdown, Reason).
By doing this, I'm able to simplify the form of text in some tests without needing a bunch of nesting, and in a way that is not possible with functions.
Do note that the assertion itself as defined by eunit is using a function macro, and does something akin to
-define(assertMatch(PAT, TERM),
    %% funs to avoid leaking bindings into parent scope
    (fun() ->
        try
            PAT = TERM,
            true
        catch _:_ ->
            error({assertion_failed, ?LINE, ...})
        end
    end)()).
This similarly lets you carry patterns and bindings and do fancy forms that couldn't be possible otherwise.
In this last case, you'll notice I used the ?LINE macro. That's another advantage of macros: you preserve information and locality about the call site, such as its module name, line number, and so on. This is useful when such metadata is required, such as when you're reporting test failures.
If you're looking at old code, there might be macros used as a way of inlining small functions under the assumption that function calls are very expensive. I'm not sure if that was ever true, but it's not something you need to worry about today.
Macros can be used to define constants, like
-define(MAX_TIMEOUT, 30 * 1000).
%% ...
gen_server:call(my_server, {do_stuff, Data}, ?MAX_TIMEOUT),
%% ...
I mostly prefer to pass in environment variables for this job, but it's more work to read them on startup and stash them somewhere and write accessors.
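The environment-variable route would look roughly like this (my_app and max_timeout are just illustrative names):
Timeout = application:get_env(my_app, max_timeout, 30 * 1000),
gen_server:call(my_server, {do_stuff, Data}, Timeout),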
Finally, you can do some simple metaprogramming:
-define(MAKE_REQUEST_FUN(Method),
    Method(Request, HTTPOptions, Options) ->
        httpc:request(Method, Request, HTTPOptions, Options)).
?MAKE_REQUEST_FUN(get).
?MAKE_REQUEST_FUN(put).
%% Now we've defined a get/3 that can be called as
%% get(Request, [], []).
I'm trying to make an n-nested loop method in Julia
function fun(n::Int64)
    @nloops n i d->1:3 begin
        @nexprs n j->(print(i_j))
    end
end
But the @nloops definition is limited to
_nloops(::Int64, ::Symbol, ::Expr, ::Expr...)
and I get the error
no method matching _nloops(::Symbol, ::Symbol, ::Expr, ::Expr)
Is there any way to make this work? Any help greatly appreciated
EDIT:
What I ended up doing was using the combinations method
For my problem, I needed to get all k-combinations of indices to pull values from an array, so the loops would have had to look like
for i_1 in 1:100
    for i_2 in i_1:100
        ...
            for i_k in i_[k-1]:100
The number of loops needs to be a compile-time constant – a numeric literal, in fact: the code generated for the function body cannot depend on a function argument. Julia's generated functions won't help either since n is just a plain value and not part of the type of any argument. Your best bet for having the number of nested loops depend on a runtime value like n is to use recursion.
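A minimal sketch of what that recursion could look like (nested is a hypothetical name; the innermost "body" here just prints the collected indices):
function nested(n, prefix=Int[])
    if n == 0
        println(prefix)              # plays the role of the innermost loop body
    else
        for i in 1:3
            nested(n - 1, [prefix; i])
        end
    end
end

nested(3)   # behaves like three nested `for i in 1:3` loops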
In julia-0.4 and above, you can now do this:
function fun(n::Int)
    for I in CartesianRange(ntuple(d->1:3, n))
        @show I
    end
end
In most cases you don't need the Base.Cartesian macros anymore (although there are still some exceptions). It's worth noting that, just as described in StefanKarpinski's answer, this loop will not be "type stable" because n is not a compile-time constant; if performance matters, you can use the "function barrier" technique. See http://julialang.org/blog/2016/02/iteration for more on these topics.
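A sketch of the function-barrier technique applied to this loop (_kernel is just a hypothetical helper name):
function fun(n::Int)
    _kernel(CartesianRange(ntuple(d->1:3, n)))   # type-unstable here, but that part is cheap
end

function _kernel(R)      # R's concrete type encodes the dimensionality,
    for I in R           # so this loop compiles to specialized code
        @show I
    end
end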
I want to allow users of my package to define functions in a more mathematical manner, and I think a macro is the right direction. The problem is as follows. The code allows users to define functions which are then used in specialized solvers to solve PDEs. However, to make things easier for the solver, some of the inputs are "matrices" in ways that you wouldn't normally think they would be. For example, the solvers can take in functions f(x,t), but x[:,1] is what you'd think of as x and x[:,2] is what you'd think of as y (and sometimes it is 3D).
The bigger issue is that when the PDE is nonlinear, I place everything in a u vector, when in many cases (like reaction-diffusion equations) these things are named. So in this general case, I'd like to be able to write
@mathdefine f(RA,RABP,RAR,x,y,t) = RA*RABP + RA*x + RAR*t
and have it translate to
f(u,x,t) = u[:,1].*u[:,2] + u[:,1].*x[:,1] + u[:,3]*t
I am not up to snuff on my macro-foo, so I was hoping someone could get me started (or if macros are not the right way to approach this, explain why).
It's not too hard if the user has to specify what is being translated to what, but I'd like it to be as clean to use as possible, so the macro should somehow recognize where the spatial variables start: everything before x becomes part of u, while x, y, and t are left as the spatial and time variables.
The trick to macro "find/replace" is just to pass off processing to a recursive function that updates the expression args. Your signature will come in as a bunch of symbols, so you can loop through the call signature and add to two dicts, mapping variable name to column index. Then recursively replace the arg tree when you see any of the variables. This is untested:
function replace_vars!(expr::Expr, xd::Dict{Symbol,Int}, ud::Dict{Symbol,Int})
    for (i, arg) in enumerate(expr.args)
        if haskey(xd, arg)
            expr.args[i] = :(x[:, $(xd[arg])])
        elseif haskey(ud, arg)
            expr.args[i] = :(u[:, $(ud[arg])])
        elseif isa(arg, Expr)
            replace_vars!(arg, xd, ud)
        end
    end
end
macro mathdefine(expr)
    xd = Dict{Symbol,Int}()
    ud = Dict{Symbol,Int}()
    # todo: loop through the function signature (expr.args[1]) to fill xd/ud
    replace_vars!(expr, xd, ud)
    esc(expr)   # escape so the definition lands in the caller's scope
end
I left a little homework for you, but this should get you started.
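To see how the pieces fit together, here is a hand-built version of the dictionaries for the example from the question (deriving them from the signature is the homework); note that plain substitution yields * rather than the broadcasted .* in the stated target:
ud = Dict{Symbol,Int}(:RA => 1, :RABP => 2, :RAR => 3)
xd = Dict{Symbol,Int}(:x => 1, :y => 2)

ex = :(RA*RABP + RA*x + RAR*t)
replace_vars!(ex, xd, ud)
# ex is now :(u[:, 1] * u[:, 2] + u[:, 1] * x[:, 1] + u[:, 3] * t)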
I am writing a signal processing program in MATLAB. I know there are two types of floating-point variables, single and double. Considering memory usage, I want my code to work with only single-precision variables when the system's memory is small, while also being adaptable to work with double-precision variables when necessary, without significant modification (simple and light modification before running is OK, i.e., I don't need a runtime-check technique). I know this can be done with macros in C and templates in C++. I haven't found practical techniques that can do this in MATLAB. Do you have any experience with this?
I have a simple idea: define a global string containing "single" or "double", then pass this string to any memory allocation method called in my code to indicate what type I need. I think this can work; I just want to know which techniques you use and which are widely accepted.
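That idea would look roughly like this (FLOAT_TYPE and someData are just illustrative names; zeros, ones, nan, eye and cast all accept a class-name argument):
FLOAT_TYPE = 'single';              % or 'double'; set once near the top of the script
x = zeros(1000, 1, FLOAT_TYPE);     % allocated directly as the chosen type
y = cast(someData, FLOAT_TYPE);     % convert existing data to the chosen type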
I cannot see how a template would help here: the types used by C++ templates are still determined at compile time (std::vector<float> vec ...). Also note that MATLAB makes all variables double by default unless something else is stated. You basically want runtime checks for your code. One solution I can think of is a function with a persistent variable. The variable is set once per run; when you create variables, you would then have to route every variable you want stored as single through this function. This will slow down assignment, though, since you have to call a function to assign variables.
This example is somewhat like an implementation of the singleton pattern (but not exactly). The persistent variable type is set at the first use and cannot change later in the program (assuming that you do not do anything silly such as clearing the variable explicitly). I would recommend hardcoding single if performance is an issue, instead of having runtime checks, assignment functions, classes, or whatever else you can come up with.
function c = assignFloat(a, b)
    persistent type;
    if (isempty(type) && nargin == 2)
        type = b;
    elseif (isempty(type))
        type = 'single';
    % elseif (nargin == 2), error('Do not set twice!') % Optional code, imo unnecessary.
    end
    if (strcmp(type, 'single'))
        c = single(a);
        return;
    end
    c = double(a);
end
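Usage would then look something like this (the type is fixed on the first call):
assignFloat(0, 'single');         % set the type once, e.g. during initialization
x = assignFloat(zeros(10));       % x is single from here on
y = assignFloat(rand(3));         % and so is y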