I'm trying to have MATLAB integrate a function of two variables, like fun = @(x,y) x+y;
over one variable. I can define, say, fun2 = @(y)quad(@(x)fun(x,y),1,2); and it will give me fun2(1), say, with no problem. But it gives me errors when I try to evaluate fun2 at, say, a matrix. When I try to integrate fun2 using quad(), which is what I need to do, it gives the same error.
And I can't just use quad2d() because (a) I need something like fun2 in several different places and (b) the integrals I need to calculate are 4D.
Are there any other ways of doing this?
So you want to evaluate fun2 for every element of an array. You can keep your definition of fun2 and apply it element-wise with arrayfun.
You can see the info on arrayfun at: http://www.mathworks.com/help/matlab/ref/arrayfun.html.
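For instance, a minimal sketch reusing the quad-based fun2 from the question (the sample matrix is only for illustration):

fun  = @(x,y) x + y;
fun2 = @(y) quad(@(x) fun(x,y), 1, 2);   % integrates over x for a scalar y
vals = arrayfun(fun2, [1 2 3; 4 5 6])    % applies fun2 to each element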
Instead of quad, use integral with the 'ArrayValued' property set to true.
fun2 = @(y)integral(@(x)fun(x,y),1,2,'ArrayValued',true);
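With 'ArrayValued' set, the inner integrand may return an array, so fun2 accepts a vector of y values and can itself be handed to an outer integral. A small usage sketch (the outer limits 0 and 1 are just an example):

fun  = @(x,y) x + y;
fun2 = @(y) integral(@(x) fun(x,y), 1, 2, 'ArrayValued', true);
fun2([1 2 3])                               % works on a vector of y values
integral(fun2, 0, 1, 'ArrayValued', true)   % fun2 can now be integrated too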
For your four-dimensional problem, you can use integral(@(x)integral3(…)) or arrayfun. See my answer to this question for how to do this exactly.
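As a rough illustration of the nested idea in four dimensions (the integrand g and the 0..1 limits are placeholders, not taken from your problem):

g = @(x,y,z,w) x + y + z + w;                                    % example integrand
inner3 = @(x) integral3(@(y,z,w) g(x,y,z,w), 0, 1, 0, 1, 0, 1);  % integrate y, z, w for fixed x
I = integral(inner3, 0, 1, 'ArrayValued', true)                  % then integrate over x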
You should use integral instead of quad anyway, because the Matlab quad documentation says:
quad will be removed in a future release. Use integral instead.
In the documentation of integral, you also find options that specify the accuracy of the result ('AbsTol' and 'RelTol').
I am now doing an optimization problem. Currently, I have code that solves min f(x_1,x_2,...,x_n), which may give x_1,x_2,...,x_n different values after the optimization. However, suppose I want to require x_1=x_2=...=x_n and optimize again; what I expect is to find y such that f(y,y,...,y) is minimized. Setting x_1,x_2,...,x_n all equal in the initial input does not help: the optimization may still produce different values of x_1,x_2,...,x_n. Are there any good ways to solve this problem without rewriting the code? Are there existing functions/techniques that can help me do it? If it matters, you can treat the function as unknown (the code of the function is not accessible; all I know is that it takes n parameters as input and returns a value).
As pointed out by Erwin Kalvelagen, the most general approach is to add equality constraints, but if your goal is to simplify the problem, you can define a new function which accepts a single input value and forwards it to every input of your function f. Assuming you are using fmincon, the solution is:
x = fmincon(@(x) f(x, x, x, ..., x), x0, ...)
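If n is large, writing out the repeated argument list by hand gets tedious. Here is a minimal sketch of a general wrapper, assuming f takes n separate inputs as above; the helper names, n, and y0 are illustrative, and you would add bounds or constraints as your problem requires:

function yopt = minimize_equal_inputs(f, n, y0)
% Minimize f(y,y,...,y) over a single scalar y
g = @(y) eval_repeated(f, y, n);
yopt = fmincon(g, y0, [], []);
end

function v = eval_repeated(f, y, n)
args = repmat({y}, 1, n);   % cell array {y, y, ..., y}
v = f(args{:});             % expands to f(y, y, ..., y)
end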
Let's say I have a function 'x' and a function '2*sin(x)'.
How do I find their intersections, i.e. the roots, in MATLAB? I can easily plot the two functions and find them that way, but surely there must be a more exact way of doing this.
If you have two analytical (by which I mean symbolic) functions, you can define their difference and use fzero to find a zero, i.e. the root:
f = @(x) x;          % defines a function f(x)
g = @(x) 2*sin(x);   % defines a function g(x)
% solve f == g
xroot = fzero(@(x) f(x) - g(x), 0.5);   % starts the search from x == 0.5
For tricky functions you might have to set a good starting point, and it will only find one solution even if there are multiple ones.
The constructs seen above, @(x) something-with-x, are called anonymous functions, and they can be extended to multivariate cases as well, like @(x,y) 3*x.*y + c, assuming that c is a variable that has been assigned a value earlier.
When writing the comments, I thought that
syms x; solve(x==2*sin(x))
would return the expected result. At least in Matlab 2013b, solve fails to find an analytic solution for this problem and falls back to a numeric solver that returns only one solution, 0.
An alternative is
s = feval(symengine,'numeric::solve',2*sin(x)==x,x,'AllRealRoots')
which is taken from this answer to a similar question. Besides using AllRealRoots, you could use a numeric solver and manually set starting points that roughly match the values you have read from the graph. This way you get precise results:
[fzero(@(x)f(x)-g(x),-2), fzero(@(x)f(x)-g(x),0), fzero(@(x)f(x)-g(x),2)]
For a higher precision you could switch from fzero to vpasolve, but fzero is probably sufficient and faster.
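For reference, a hedged sketch of the vpasolve route (requires the Symbolic Math Toolbox; the search ranges below are read off the graph and are only examples):

syms x
vpasolve(x == 2*sin(x), x, [1 3])     % root near 1.895
vpasolve(x == 2*sin(x), x, [-3 -1])   % root near -1.895
vpasolve(x == 2*sin(x), x, 0)         % the trivial root at 0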
I need to take derivatives in Matlab of a lot of equations w.r.t. generic functions, which will provide me with generic derivatives, of the type:
diff(f(x,y),x)
or
D([1],f(x,y)).
What I need is to transform these derivatives into actual symbolic variables, in order to be able to use solve, etc. What I am doing now, but which is highly inefficient, is brute force string replacement. Here is a minimal working example of what I am doing:
syms x y
f(x,y) = sym('f(x,y)')
jacobian(f)
first_d = jacobian(f)
strrep(char(first_d),'D([1], f)(x, y)','fx')
In my real application, I have lots of derivatives to take from lots of equations, so looping such replacements is not the smartest thing to do. Can anybody shed some light into a more efficient solution?
Note: I'm using R2014b. Symbolic Math functionality has changed greatly in recent versions and continues to do so. Users on different versions may need to do slightly different things to achieve the results below, which relies on accessing undocumented functionality.
First, since this is about performance, it is sufficient to simply declare
syms f(x,y)
which also defines x and y as symbolic variables.
As I mention in my comments above, Matlab/MuPAD's symbolic math is all about manipulating strings. Doing this more directly and adding in your own knowledge of the problem can help speed things up. You want to avoid unnecessary conversions between strings and the sym/symfun types.
1. The first thing to do is investigate how a particular symbolic math function is handling input and output and what lower level private functions it is calling. In the case of your jacobian function example, type edit jacobian in your command window to view the code in the Editor. Much of what you see may be confusing, but you should see this line:
res = mupadmex('symobj::jacobian',Fsym.s,v.s);
This calls the low level 'symobj::jacobian' function and passes in string versions of the function and variables. To call this yourself, you can do (this also assumes you know your variables are x and y):
syms f(x,y)
first_d = mupadmex('symobj::jacobian',char(f),char([x,y]))
This returns [ diff(f(x, y), x), diff(f(x, y), y)]. The undocumented mupadmex function is a direct way of calling MuPAD functions from within Matlab; there are others, which are documented.
2. You'll notice that the first_d output above is of class symfun. We actually don't want the output to be converted back to a symbolic function. To avoid this, we can pass an additional argument to mupadmex:
syms f(x,y)
first_d = mupadmex('symobj::jacobian',char(f),char([x,y]),0)
which now returns the string matrix([[diff(f(x, y), x), diff(f(x, y), y)]]). (I only know this trick of adding the additional 0 argument from having browsed through a lot of Symbolic Math toolbox code.)
3. From this string, we can now find and replace various patterns for partial derivatives with simple variables. The strrep function that you're using is generally a good choice for this. It is much faster than regexprep. However, if you have a large number of different, but similar, patterns to replace, you might do a performance comparison between the two. That would probably be the subject of a separate question.
I'm not sure what your overall goal is or the full extent of your problem, but here is my final code for your example:
syms f(x,y)
first_d = mupadmex('symobj::jacobian',char(f),char([x,y]),0)
first_d = strrep(first_d(9:end-2),'diff(f(x, y), x)','fx');
first_d = sym(strrep(first_d,'diff(f(x, y), y)','fy'));
This returns the symbolic vector [ fx, fy]. If you want a symfun, you'll need to modify the last line slightly. In some simple testing, this basic example is about 10% faster than calling jacobian and converting the result back to a string. If you directly specify the inputs as strings instead of allocating a symbolic function, the result is about 30% faster than your original:
first_d = mupadmex('symobj::jacobian','f(x,y)','[x,y]',0)
first_d = strrep(first_d(9:end-2),'diff(f(x, y), x)','fx');
first_d = sym(strrep(first_d,'diff(f(x, y), y)','fy'));
Using subs, as in this answer, while convenient, is the slowest approach. Converting back and forth to and from strings is costly.
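For completeness, here is a sketch of that subs-based approach; the names fx and fy are chosen to match the example above:

syms f(x,y) fx fy
J = jacobian(f(x,y), [x y]);                                % [ diff(f(x,y), x), diff(f(x,y), y)]
first_d = subs(J, [diff(f(x,y),x), diff(f(x,y),y)], [fx fy])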
I am using the Opti Toolbox, a free optimization toolbox for Matlab. I am solving a Mixed Integer Nonlinear Program, a MINLP. Inside the Opti Toolbox, the MINLP solver used is SCIP.
I define my own objective as a separate function (fun argument in Opti), and this function needs to call other matlab functions which take double arguments.
The problem is that whenever Opti invokes my function to evaluate the objective, it first calls it with a vector of 'scipvar' objects and then calls it again with a vector of 'double' objects. My objective function does not work with the scipvar objects; it returns an error.
I tried (just for testing) setting the output of my function to something fixed when the type is 'scipvar' and to the real computation when the type is 'double', but this doesn't work: changing the fixed value actually changes the final optimal value.
I basically need to convert a scipvar object to double, is this possible? Or is there any other alternative?
Thank you.
Ok, so after enlightenment by J. Currie, an Opti toolbox developer, I understood the cause of the problem above.
The first call to the objective with a vector of scipvar variables is actually a parser sweeping the objective function to see if it can be properly mapped to something that can be handled by SCIP. I reimplemented the objective function to use only methods allowed by scip - obtained by typing methods(scipvar) in matlab:
abs dot log minus mrdivide norm power rdivide sqrt times
display exp log10 mpower mtimes plus prod scipvar sum uminus
Once the objective could be parsed by scip my problem worked fine.
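For illustration only, here is the flavor of an objective built solely from operations in the list above (sum, power, times, plus, abs, exp, rdivide); the actual expression is of course problem-specific:

objective = @(x) sum(x.^2) + abs(sum(x)) + exp(sum(x)./10);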
I'm trying to write a generic function for finding the cosine of a value inputted into the function. The formula for cosine that I'm using is:
cos(x) = sum_{n=1}^{inf} (-1)^n * x^(2n) / (2n)!
I've looked at the matlab documentation and this page implies that the "sum" function should be able to do it so I tried to test it by entering:
sum(x^n, n=1..3)
but it just gives me "Error: The expression to the left of the equals sign is not a valid target for an assignment".
Is summing an infinite series something that matlab is able to do by default or do I have to simulate it using a function and loops?
Well if you want to approximate it to a finite number of terms you can do it in Matlab without toolboxes or loops:
sumCos = @(x, n)(sum(((-1).^(0:n)).*(x.^(2*(0:n)))./(factorial(2*(0:n)))));
and then use it like this
sumCos(pi, 30)
The first parameter is the angle, the second is the number of terms you want to take the series to (i.e. it affects the precision). This is a numerical solution, which I think is really what you're after.
By the way, I took the liberty of correcting your initial sum: n must surely start from 0 if you are trying to approximate cos.
If you want to understand my formula (which surely you do), then you need to read up on some essential Matlab basics, namely the colon operator and the concept of using . to perform element-wise operations.
In MATLAB itself, no, you cannot solve an infinite sum. You would have to estimate it as you suggested. The page you were looking at is part of the Symbolic Math toolbox, which is an add-on to MATLAB. In particular, you were looking at MuPAD, which is rather similar to Mathematica. It is a symbolic math workspace, whereas MATLAB is more of a numeric math workspace. If you own the Symbolic Math toolbox, you can either use MuPAD as you tried to above, or you can use the symsum function from within MATLAB itself to carry out sums of series.
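If you do have the toolbox, a hedged sketch of the symsum route (this should recover cos symbolically):

syms n x
cosSeries = symsum((-1)^n * x^(2*n) / factorial(2*n), n, 0, Inf)   % expected to simplify to cos(x)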