For performance reasons, I need gradients and Hessians that run as fast as user-defined functions (the ForwardDiff library, for example, makes my code significantly slower). I then tried metaprogramming with the @generated macro, testing on a simple function:
using Calculus
hand_defined_derivative(x) = 2x - sin(x)
symbolic_primal = :( x^2 + cos(x) )
symbolic_derivative = differentiate(symbolic_primal,:x)
@generated functional_derivative(x) = symbolic_derivative
This gave me exactly what I wanted:
rand_x = rand(10000);
exact_values = hand_defined_derivative.(rand_x)
test_values = functional_derivative.(rand_x)
isequal(exact_values,test_values) # >> true
@btime hand_defined_derivative.(rand_x); # >> 73.358 μs (5 allocations: 78.27 KiB)
@btime functional_derivative.(rand_x); # >> 73.456 μs (5 allocations: 78.27 KiB)
I now need to generalize this to functions with more arguments. The obvious extrapolation is:
symbolic_primal = :( x^2 + cos(x) + y^2 )
symbolic_gradient = differentiate(symbolic_primal,[:x,:y])
The symbolic_gradient behaves as expected (just as in the 1-dimensional case), but the @generated macro does not handle multiple dimensions the way I expected:
@generated functional_gradient(x,y) = symbolic_gradient
functional_gradient(1.0,1.0)
>> 2-element Array{Any,1}:
:(2 * 1 * x ^ (2 - 1) + 1 * -(sin(x)))
:(2 * 1 * y ^ (2 - 1))
That is, it doesn't transform the symbols into generated functions. Is there an easy way to solve this?
P.S.: I know I could define the derivatives with respect to each argument as one-dimensional functions and bundle them together into a gradient (that's what I'm currently doing), but I'm sure there must be a better way.
First, I think you don't need @generated here: this is a "simple" case of code generation, where I'd argue using @eval is simpler and less surprising.
So the 1D case could be rewritten like this:
julia> using Calculus
julia> symbolic_primal = :( x^2 + cos(x) )
:(x ^ 2 + cos(x))
julia> symbolic_derivative = differentiate(symbolic_primal,:x)
:(2 * 1 * x ^ (2 - 1) + 1 * -(sin(x)))
julia> hand_defined_derivative(x) = 2x - sin(x)
hand_defined_derivative (generic function with 1 method)
# Let's check first what code we'll be evaluating
# (`quote` returns the unevaluated expression passed to it)
julia> quote
           functional_derivative(x) = $symbolic_derivative
       end
quote
    functional_derivative(x) = begin
            2 * 1 * x ^ (2 - 1) + 1 * -(sin(x))
        end
end
# Looks OK => let's evaluate it now
# (since @eval is a macro, its argument will be left unevaluated
#  => no `quote` here)
julia> @eval begin
           functional_derivative(x) = $symbolic_derivative
       end
functional_derivative (generic function with 1 method)
julia> rand_x = rand(10000);
julia> exact_values = hand_defined_derivative.(rand_x);
julia> test_values = functional_derivative.(rand_x);
julia> @assert isequal(exact_values,test_values)
# Don't forget to interpolate array arguments when using `BenchmarkTools`
julia> using BenchmarkTools
julia> @btime hand_defined_derivative.($rand_x);
104.259 μs (2 allocations: 78.20 KiB)
julia> @btime functional_derivative.($rand_x);
104.537 μs (2 allocations: 78.20 KiB)
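As an aside, this generate-then-evaluate pattern is not Julia-specific. A minimal Python sketch of the same idea (the derivative source string stands in for the symbolic result; all names here are illustrative):

```python
import math

# A "symbolic" derivative held as a source string
# (analogous to the Expr returned by differentiate).
symbolic_derivative_src = "2 * x - math.sin(x)"

# Evaluate the generated code once to obtain an ordinary function,
# the same move as Julia's @eval: generate, then define.
functional_derivative = eval("lambda x: " + symbolic_derivative_src)

def hand_defined_derivative(x):
    return 2 * x - math.sin(x)

assert functional_derivative(0.5) == hand_defined_derivative(0.5)
```

Because the string is evaluated once, at definition time, the resulting function carries no symbolic machinery at call time.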
Now the 2D case does not work because the output of differentiate is an array of expressions (one expression per component), which you need to transform into an expression which builds an array (or a Tuple, for performance) of components. This is symbolic_gradient_expr in the example below:
julia> symbolic_primal = :( x^2 + cos(x) + y^2 )
:(x ^ 2 + cos(x) + y ^ 2)
julia> hand_defined_gradient(x, y) = (2x - sin(x), 2y)
hand_defined_gradient (generic function with 1 method)
# This is a vector of expressions
julia> symbolic_gradient = differentiate(symbolic_primal,[:x,:y])
2-element Array{Any,1}:
:(2 * 1 * x ^ (2 - 1) + 1 * -(sin(x)))
:(2 * 1 * y ^ (2 - 1))
# Wrap expressions for all components of the gradient into a single expression
# generating a tuple of them:
julia> symbolic_gradient_expr = Expr(:tuple, symbolic_gradient...)
:((2 * 1 * x ^ (2 - 1) + 1 * -(sin(x)), 2 * 1 * y ^ (2 - 1)))
julia> @eval functional_gradient(x, y) = $symbolic_gradient_expr
functional_gradient (generic function with 1 method)
Like in the 1D case, this performs identically to the hand-written version:
julia> rand_x = rand(10000); rand_y = rand(10000);
julia> exact_values = hand_defined_gradient.(rand_x, rand_y);
julia> test_values = functional_gradient.(rand_x, rand_y);
julia> @assert isequal(exact_values,test_values)
julia> @btime hand_defined_gradient.($rand_x, $rand_y);
113.182 μs (2 allocations: 156.33 KiB)
julia> @btime functional_gradient.($rand_x, $rand_y);
112.283 μs (2 allocations: 156.33 KiB)
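The tuple-building trick translates the same way. A hedged Python sketch (the component strings stand in for the differentiated expressions; all names are illustrative):

```python
import math

# One source string per gradient component (mirrors the Array of Exprs),
# joined into a single tuple-building expression before compiling.
component_srcs = ["2 * x - math.sin(x)", "2 * y"]
gradient_src = "(" + ", ".join(component_srcs) + ")"

functional_gradient = eval("lambda x, y: " + gradient_src)

def hand_defined_gradient(x, y):
    return (2 * x - math.sin(x), 2 * y)

assert functional_gradient(1.0, 1.0) == hand_defined_gradient(1.0, 1.0)
```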
What is the difference between defining a function with syms and with @(t)?
For example:
syms f(t)
f(t) = 0.5*cos(280*pi * t) + 0.5 * sin(260*pi * t) + 0.5 * cos(300*pi * t);
and
f = @(t) 0.5*cos(280*pi * t) + 0.5 * sin(260*pi * t) + 0.5 * cos(300*pi * t);
Thanks
Symbolic function
syms f(t)
f(t) = 0.5*cos(280*pi * t) + 0.5 * sin(260*pi * t) + 0.5 * cos(300*pi * t);
defines a symbolic function. This means that there is no loss of precision when using the function. The results are always exact:
>> f(1)
ans =
1
Also, you can apply symbolic operations to the function. For example, you can compute its derivative function, among other operations:
>> g = diff(f)
g(t) =
130*pi*cos(260*pi*t) - 140*pi*sin(280*pi*t) - 150*pi*sin(300*pi*t)
This is a new symbolic function, which can be used normally:
>> g(1/260)
ans =
140*pi*sin(pi/13) - 130*pi + 150*pi*sin((2*pi)/13)
Note how, again, the result is exact. If you want to obtain its numerical value (which will necessarily be an approximation) you can convert to double floating-point representation:
>> double(g(1/260))
ans =
-84.154882885760415
or, if you need more decimals, you can use vpa:
>> vpa(g(1/260), 50)
ans =
-84.154882885760413712114778738680201384788201830179
Using symbolic functions incurs a cost in efficiency: the code will generally be slower than with standard functions. Besides, they are limited to mathematical operations, whereas standard functions can do many other things, such as dealing with text or files.
Standard, numerical function
f = @(t) 0.5*cos(280*pi * t) + 0.5 * sin(260*pi * t) + 0.5 * cos(300*pi * t);
defines a standard, numerical function. More specifically, it defines an anonymous function, and then defines f as a handle to that function, with the result that f can be used as the function name.
You can "only" use this function to pass inputs and get outputs as a result. Besides, with the default double floating-point data type you may encounter some numerical inaccuracies. For example, f(1) gives
>> f(1)
ans =
0.999999999999963
instead of the exact result 1.
In contrast with the symbolic case, Matlab doesn't know how to compute its derivative, so in this case you would have to do it numerically (using finite differences), which is an approximation (both because of using finite differences and because of working with floating-point values):
>> t0 = 1/260;
>> h = 1e-9;
>> (f(t0+h)-f(t0))/h
ans =
-84.154498258826038
Compared with symbolic functions, standard functions are faster (and don't require the Symbolic Toolbox), and as mentioned above, are not limited to mathematical operations.
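For illustration, the finite-difference computation above can be sketched in Python (the step size h = 1e-9 mirrors the example; the result is an approximation, not the exact derivative):

```python
import math

# Numerical version of f, mirroring the anonymous function above.
def f(t):
    return (0.5 * math.cos(280 * math.pi * t)
            + 0.5 * math.sin(260 * math.pi * t)
            + 0.5 * math.cos(300 * math.pi * t))

def numerical_derivative(f, t0, h=1e-9):
    # Forward difference: approximate both because h is finite
    # and because the arithmetic is floating point.
    return (f(t0 + h) - f(t0)) / h

approx = numerical_derivative(f, 1 / 260)  # close to -84.1549, not exact
```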
How to convert a symbolic expression to an Octave function from the Symbolic Package?
After installing the symbolic package on Octave with pkg install -forge symbolic, I can write this:
octave> pkg load symbolic;
octave> a = sym( "a" );
octave> int ( a^2 + csc(a) )
which will result in:
ans = (sym)

   3
  a    log(cos(a) - 1)   log(cos(a) + 1)
  -- + --------------- - ---------------
  3           2                 2
But how can I turn this symbolic integral result into a callable function, like the one below?
function x = f( x )
x = x^3/3 + log( cos(x) - 1 )/2 - log( cos(x) + 1 )/2
end
f(3)
# Which evaluates to: 11.6463 + 1.5708i
I want to take the symbolic result from int ( a^2 + csc(a) ) and call result(3) to evaluate it at 3, i.e., return the numeric value 11.6463 + 1.5708i from the symbolic expression for the integral of a^2 + csc(a). Basically, how do I use the symbolic expression as a numerically evaluable function? It is the same as this other question for Matlab.
References:
http://octave.sourceforge.net/symbolic/index.html
How do I declare a symbolic matrix in Octave?
Octave symbolic expression
Julia: how do I convert a symbolic expression to a function?
What is symbolic computation?
You can use pretty.
syms x;
x = x^3/3 + log( cos(x) - 1 )/2 - log( cos(x) + 1 )/2;
pretty(x)
which gives this:
                                      3
  log(cos(x) - 1)   log(cos(x) + 1)  x
  --------------- - --------------- + --
         2                 2          3
Update (Since the question is edited):
Make this function:
function x = f(y)
syms a;
f(a) = int ( a^2 + csc(a) );
x = double(f(y));
end
Now when you call it using f(3), it gives:
ans =
11.6463 + 1.5708i
It seems like you answered your own question by linking to the other question about Matlab.
Octave has an implementation of matlabFunction which is a wrapper for function_handle in the symbolic toolbox.
>> pkg load symbolic;
>> syms x;
>> y = x^3/3 + log( cos(x) - 1 )/2 - log( cos(x) + 1 )/2
y = (sym)

   3
  x    log(cos(x) - 1)   log(cos(x) + 1)
  -- + --------------- - ---------------
  3           2                 2
>> testfun = matlabFunction(y)
testfun =
@(x) x .^ 3 / 3 + log (cos (x) - 1) / 2 - log (cos (x) + 1) / 2
>> testfun(3)
ans = 11.6463 + 1.5708i
>> testfun([3:1:5]')
ans =
11.646 + 1.571i
22.115 + 1.571i
41.375 + 1.571i
>> testfun2 = matlabFunction(int ( x^2 + csc(x) ))
testfun2 =
@(x) x .^ 3 / 3 + log (cos (x) - 1) / 2 - log (cos (x) + 1) / 2
>> testfun2(3)
ans = 11.6463 + 1.5708i
>> testfun2([3:1:5]')
ans =
11.646 + 1.571i
22.115 + 1.571i
41.375 + 1.571i
I'm sure there are other ways you could implement this, but it may allow you to avoid hardcoding your equation in a function.
For example, I have

f(x) = 9 + 4(x+3),   if -4 <= x < -1   (subf1)
       7 - 9(x-0.4), if -1 <= x < 1    (subf2)

How can I create this function f(x) in Matlab?
I tried
f=0
syms x
f=f+ subf1 with heaviside+ subf2 with heaviside
But I cannot pass a value v to evaluate f(v), and I cannot plot f(x) only from -4 to 1.
So is there another way to write conditional function?
Sorry my description is a little hard to follow. If you don't understand what I am asking, please let me know and I will try to rephrase. Thank you!
Depends on what you want to do with it. If for some reason you need symbolic, here is one way to write your symbolic function:
syms x
f1 = (9 + 4 * (x + 3)) * heaviside(x + 4) * (1 - heaviside(x + 1));
f2 = (7 - 9 * (x - 0.4)) * heaviside(x + 1) * (1 - heaviside(x - 1));
f = symfun(f1 + f2, x);
Otherwise, you can write your function in a file as:
function out = f(x)
out = (9 + 4 * (x + 3))*(x>=-4)*(x<-1) + (7 - 9 * (x - 0.4))*(x>=-1)*(x<1);
Or you can define it as an anonymous function:
f = @(x) (9 + 4 * (x + 3))*(x>=-4)*(x<-1) + (7 - 9 * (x - 0.4))*(x>=-1)*(x<1);
Then, you can plot any of the functions using, for instance, fplot:
fplot(f, [-4, 1])
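For comparison, a direct Python sketch of the same piecewise definition (branching by interval rather than multiplying by boolean masks; the out-of-domain value of 0 is an assumption, matching what the mask versions return):

```python
def f(x):
    # Each branch applies only on its interval, mirroring the
    # (x>=-4)*(x<-1) style masks in the anonymous-function version.
    if -4 <= x < -1:
        return 9 + 4 * (x + 3)      # subf1
    if -1 <= x < 1:
        return 7 - 9 * (x - 0.4)    # subf2
    return 0.0  # outside the defined domain

print(f(-2))  # 13 (first branch)
```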
I am trying to compute log(N(x | mu, sigma)) in MATLAB where
x is the data vector (dimensions D x 1), mu (dimensions D x 1) is the mean, and sigma (dimensions D x D) is the covariance.
My present implementation is
function [loggaussian] = logmvnpdf(x,mu,Sigma)
[D,~] = size(x);
const = -0.5 * D * log(2*pi);
term1 = -0.5 * ((x - mu)' * (inv(Sigma) * (x - mu)));
term2 = - 0.5 * logdet(Sigma);
loggaussian = const + term1 + term2;
end
function y = logdet(A)
y = log(det(A));
end
For some cases I get an error
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND =
NaN
I know you will point out that my data is not consistent, but I need to implement the function so that I get the best approximation instead of a warning. How do I ensure that I always get a value?
I think the warning comes from using inv(Sigma). According to the documentation, you should avoid using inv where its use can be replaced by \ (mldivide). This will give you both better speed and accuracy.
For your code, instead of inv(Sigma) * (x - mu) use Sigma \ (x - mu).
The following approach should be (a little) less sensitive to ill-conditioning of the covariance matrix:
function logpdf = logmvnpdf (x, mu, K)
n = length (x);
R = chol (K);
const = 0.5 * n * log (2 * pi);
term1 = 0.5 * sum (((R') \ (x - mu)) .^ 2);
term2 = sum (log (diag (R)));
logpdf = - (const + term1 + term2);
end
If K is singular or near-singular, you can still have warnings (or errors) when calling chol.
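The Cholesky-based formula translates directly to other environments; a sketch in Python with NumPy (assuming NumPy is available; same math as the code above, not a drop-in replacement):

```python
import numpy as np

def logmvnpdf(x, mu, K):
    # K = R' * R with R upper triangular; solving the triangular
    # system replaces inv(K), and diag(R) gives the log-determinant.
    n = len(x)
    R = np.linalg.cholesky(K).T          # upper-triangular factor
    z = np.linalg.solve(R.T, x - mu)     # forward substitution
    const = 0.5 * n * np.log(2 * np.pi)
    term1 = 0.5 * np.sum(z ** 2)
    term2 = np.sum(np.log(np.diag(R)))   # = 0.5 * log(det(K))
    return -(const + term1 + term2)

# Standard 1-D normal at its mean: log(1/sqrt(2*pi)) ≈ -0.9189
print(logmvnpdf(np.array([0.0]), np.array([0.0]), np.array([[1.0]])))
```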
I want to calculate the Fourier series of some function func.
I built this method:
function y = CalcFourier(accurate, func, a, b, val_x)
f = @(x) eval(func);
% calculate coefficients
a0 = (2 / (b - a)) * calcArea(func, a , b);
an = (2 / (b - a)) * calcArea(strcat(func, '*cos(2*n*pi*x / (b - a))'), a , b);
bn = (2 / (b - a)) * calcArea(strcat(func, '*sin(2*n*pi*x / (b - a))'), a , b);
partial = 0;
an_f = @(n) an;
bn_f = @(n) bn;
for n = 1:accurate
partial = partial + an_f(n)* cos(2*n*pi*val_x / (b - a)) + bn_f(n) * sin(2*n*pi*val_x / (b - a));
end
y = (a0 / 2) + partial;
end
And this - to approximate the coefficient's:
function area = calcArea(func, a, b)
f = @(x) eval(func);
area = (a - b) * (f(a) - f(b)) / 2;
end
On line an = (2 / (b - a)) * calcArea(strcat(func, '*cos(2*n*pi*x / (b - a))'), a , b); I'm getting error:
??? Error using ==> eval
Undefined function or variable 'n'.
Error in ==> calcArea>@(x)eval(func) at 2
f = @(x) eval(func);
Error in ==> calcArea at 3
area = (a - b) * (f(a) - f(b)) / 2;
Error in ==> CalcFourier at 5
an = (2 / (b - a)) * calcArea(strcat(func,
'*cos(2*n*pi*x / (b - a))'), a , b);
>>
Is there any option to declare n as "some constant"? Thanks!
You try to use a variable called n on line 4 of your code. However, at that time n is not defined yet; that only happens in the for loop. (Tip: use dbstop if error at all times; that way you can spot the problem more easily.)
Though I don't fully grasp what you are doing, I believe you need something like n = 1 at the start of your CalcFourier function. Of course, you can also choose to pass n in as an input, or to move the corresponding line to a place where n is actually defined.
Furthermore you seem to use n in calcArea, but you don't try to pass it to the function at all.
All of this would be easier to find if you avoided the use of eval; perhaps you can try creating the function without it, and then Matlab will more easily guide you to the problems in your code.
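The underlying pitfall is language-independent: an expression string can only be evaluated once every name in it is bound. A minimal Python sketch of the same error and its fix (the expression string is illustrative):

```python
import math

expr_src = "math.cos(2 * n * math.pi * x)"

# Evaluating with n unbound fails, just like the MATLAB error above:
try:
    eval("lambda x: " + expr_src)(0.5)
except NameError:
    pass  # 'n' is not defined

# Making n an explicit parameter fixes it:
f = eval("lambda x, n: " + expr_src)
print(f(0.5, 1))  # cos(pi) = -1.0
```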
If the Symbolic Toolbox is available, it can be used to declare symbolic variables, which can be treated as "some variable" and substituted with a value later.
However, a few changes are needed to implement this: generally, converting anonymous functions to symbolic functions, and converting any function of n to a symbolic function. Finally, the answer produced needs to be converted from a symbolic value to something easier to handle, e.g. a double.
Quickly applying this to your code gives the following:
function y = test(accurate, func, a, b, val_x)
syms n x % declare symbolic variables
%f = symfun(eval(func),x);  % commented out, as it is not used
The two lines above show the declaration of symbolic variables and the syntax for creating a symbolic function
% calculate coefficients
a0 = symfun((2 / (b - a)) * calcArea(func, a , b),x);
an = symfun((2 / (b - a)) * calcArea(strcat(func, '*cos(2*n*pi*x / (b - a))'), ...
    a, b), [x n]);
bn = symfun((2 / (b - a)) * calcArea(strcat(func, '*sin(2*n*pi*x / (b - a))'), ...
    a, b), [x n]);
partial = 0;
The function definitions in your code are combined into the lines above. Note that they are functions of both x and n; the substitution of val_x is done later, here...
for n = 1:accurate
partial = partial + an(val_x,n) * cos(2*n*pi*val_x / (b - a)) + ...
    bn(val_x,n) * sin(2*n*pi*val_x / (b - a));
end
The for loop now replaces the symbolic n with concrete values, calling the symbolic functions with val_x and each value of n.
y = (a0 / 2) + partial;
y = double(y);
end
Finally, the solution is calculated and then converted to double.
Disclaimer: I have not checked whether this code produces the correct solution, but I hope it gives you enough information to understand what was changed and why, using the Symbolic Toolbox to address the issue.