Vector valued limits in MATLAB integral

Is it possible to use vector limits with any MATLAB integration function? I have to avoid for loops because of the speed of my program. Can you please give me a clue how to do something like this:
k=0:5
f=@(x)x^2
quad(f,k,k+1)
In case somebody needs it, I found the answer to my question: quad with vector limit

I will try to give you an answer, based on my experience with the quad function.
Starting from this:
k=0:5;
f=@(x) x.^2;
Notice the difference between your f definition (incorrect) and mine (correct): the element-wise operator .^ is needed so that f can accept vector inputs.
If you only mean to integrate f within the range (0,5) you can easily call
quad(f,k(1),k(end))
Without a function handle, you may reach the same result in a different way by making use of trapz:
x = 0:5;
y = x.^2;
trapz(x,y)
If, instead, you mean to perform a step-by-step integration in the small range [k(i),k(i+1)] you may type
arrayfun(@(ii) quad(f,k(ii),k(ii+1)),1:numel(k)-1)
For the sake of convenience, notice that
sum(arrayfun(@(ii) quad(f,k(ii),k(ii+1)),1:numel(k)-1)) == quad(f,k(1),k(end))
I hope this helps.
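As a side note, quad is no longer recommended in newer MATLAB releases in favour of integral; a minimal sketch of the same piecewise integration with integral (assuming R2012a or later):
arrayfun(@(a,b) integral(f,a,b), k(1:end-1), k(2:end))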

Related

Normalization of integrand for numerical integration in Matlab

First off, I'm not sure if this is the best place to post this, but since there isn't a dedicated Matlab community I'm posting this here.
To give a little background, I'm currently prototyping a plasma physics simulation which involves triple integration. The innermost integral can be done analytically, but for the outer two this is just impossible. I always thought it's best to work with values close to unity and thus normalized my innermost integral such that it is unitless and usually takes values close to unity. However, compared to an earlier version of the code where this innermost integral evaluated to values of the order of 1e-50, the numerical double integration, which uses the native Matlab function integral2 with a target relative tolerance of 1e-6, now requires around 1000 times more function evaluations to converge. As a consequence my simulation now takes roughly 12h instead of the previous 20 minutes.
Question
So my questions are:
Is it possible that the faster convergence in the older version is simply due to the additional evaluations vanishing as roundoff errors, and that the results thus aren't trustworthy even though they pass the 1e-6 relative tolerance? In the few tests I ran, the results seemed to be the same in both versions though.
What is the best practice concerning the normalization of the integrand for numerical integration?
Is there some way to improve the convergence of numerical integrals, especially if the integrand might have singularities?
I'm thankful for any help or insight, especially since I don't fully understand the inner workings of Matlab's integral2 function and what should be paid attention to when using it.
If I didn't know any better I would actually conclude that an integrand of the order of 1e-50 works way better than one of, say, the order of 1e+0, but that doesn't seem to make sense. Is there some numerical reason why this could actually be the case?
TL;DR: when multiplying the function to be integrated numerically by MATLAB's integral2 by a factor of 1e-50, and then multiplying the result in turn by a factor of 1e+50, the integral gives the same result but converges way faster, and I don't understand why.
edit:
I prepared a short script to illustrate the problem. Here the relative difference between the two results was of the order of 1e-4 and thus below the actual relative tolerance of integral2. In my original problem however the difference was even smaller.
fun = @(x,y,l) l./(sqrt(1-x.*cos(y)).^5).*((1-x).*sin(y));
x = linspace(0,1,101);
y = linspace(0,pi,101).';
figure
surf(x,y,fun(x,y,1));
l = linspace(0,1,101); l=l(2:end);
v1 = zeros(1,100); v2 = v1;
tval = tic;
for i=1:100
fun1 = @(x,y) fun(x,y,l(i));
v1(i) = integral2(fun1,0,1,0,pi,'RelTol',1e-6);
end
t1 = toc(tval)
tval = tic;
for i=1:100
fun1 = @(x,y) 1e-50*fun(x,y,l(i));
v2(i) = 1e+50*integral2(fun1,0,1,0,pi,'RelTol',1e-6);
end
t2 = toc(tval)
figure
hold all;
plot(l,v1);
plot(l,v2);
plot(l,abs((v2-v1)./v1));

Finding intersection between two functions using Matlab or Scilab

I'm doing this:
>> x=0:0.001:5;
>> y1=sin(x)+cos(1+x.^2)-1;
>> y2 = ((1/2).*x)-1;
>> plot(x,y1,x,y2);
>> find (y1==y2)
And getting this:
ans =
Empty matrix: 1-by-0
As an answer, and it is simply driving me crazy! I do not know why Matlab and Scilab do not give me the intersection points. I have been trying to make the intervals smaller, like x = 0:0.0001:5;, but it did not change anything. How can I make it return the intersection values?
Thank you.
You have to remember that Matlab is used to find numerical solutions to problems. You are providing a discrete set of input points x=0:0.001:5; and asking it to calculate the discrete output points y1[x] and y2[x]. This means that y1 and y2 are not continuous and don't necessarily intersect as their continuous counterparts do. I don't have Matlab so I did not run your code, but your discrete functions most likely do not intersect. That is to say, there is no pair of points a = y1[x_i] and b = y2[x_i] where a = b. Instead, what you most likely want to do is look for points where y2-y1 is on one side of zero at a particular input and on the other side of zero for the next input. This would mean that the functions' continuous counterparts crossed somewhere in between.
The case where the functions meet but don't cross is a little more tricky but the same kind of idea.
EDIT:
This sort of thing is easiest to wrap your head around with an image, so I created one to illustrate what I mean.
Here I used many fewer points than you are trying to use, but the idea is the same. You can see that the continuous versions of y1 and y2 cross in several places, but what you're asking MATLAB to do is find a point in y1 that is equal to a point in y2 for identical values of x. In this image you can see that many are close, but your computer stores floating point numbers to a very high precision, so the chances of them actually being equal are very small.
When you increase the number of sample points, the image starts to look more like its continuous counterpart.
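Following that idea, here is a minimal sketch of the sign-change search on the sampled data (the linear interpolation step at the end is my addition to refine the estimate):
x  = 0:0.001:5;
y1 = sin(x)+cos(1+x.^2)-1;
y2 = ((1/2).*x)-1;
d  = y1 - y2;                          % difference of the sampled curves
idx = find(d(1:end-1).*d(2:end) < 0);  % sign change between sample i and i+1
% linear interpolation inside each bracketing interval gives a better estimate
x_cross = x(idx) - d(idx).*(x(idx+1)-x(idx))./(d(idx+1)-d(idx));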
The two existing answers explain why you can't find an exact intersection so easily. But what you really need is an answer to the question: what should you do instead to obtain precise intersections?
In your specific case, you know the analytical functions which you want to figure out the intersection of. You can use fzero with an (optionally anonymous) function to find the zero of the function defined by the difference of your two original functions:
y1fun = @(x) sin(x)+cos(1+x.^2)-1;
y2fun = @(x) ((1/2).*x)-1;
diff_fun = @(x) y1fun(x)-y2fun(x);
x0 = 1; % starting point for fzero's zero search
x_cross = fzero(diff_fun,x0);
Now, this will give you one zero of the difference function, i.e. one intersection of your functions. It turns out that finding every zero of a function is a challenging task. Generally you have to call fzero multiple times with various starting points x0. If you have an idea of what your functions look like, this is not hopeless at all; see the sketch below.
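A minimal sketch of that multi-start idea (the grid of starting points is an assumption you should adapt to your functions; uniquetol needs R2015a or later):
starts = 0:0.5:5;                                 % hypothetical grid of starting points
roots  = arrayfun(@(x0) fzero(diff_fun,x0), starts);
roots  = uniquetol(roots, 1e-6);                  % collapse duplicates found from nearby starts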
So what happens if your functions are more messy? In the general case, you can use an interpolating function to play the part of y1fun and y2fun in the example above, for instance by using interp1:
% generate data
xdata = 0:0.001:5;
y1data = sin(xdata)+cos(1+xdata.^2)-1;
y2data = ((1/2).*xdata)-1;
y1fun = @(x) interp1(xdata,y1data,x);
y2fun = @(x) interp1(xdata,y2data,x);
x0 = 1; % starting point for fzero's zero search
x_cross = fzero(@(x)y1fun(x)-y2fun(x),x0);
which applies the same approach to the data from the original problem. Note that interp1 by default uses linear interpolation; depending on what your function looks like and how your data are scattered, you can choose other options. Also note the option for extrapolation (to be avoided).
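For example, a smoother interpolant can be selected like this (the 'pchip' choice is just an illustration):
y1fun = @(x) interp1(xdata,y1data,x,'pchip'); % shape-preserving cubic instead of the default linear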
So in both cases, you get one crossing for each call to fzero. By choosing the starting points carefully, you should be able to find all the zeros, as exactly as possible.
Maybe the two vectors do not have exactly equal values anywhere. You could instead search for a small difference:
abs(y1-y2)<tolerance
where tolerance=0.001 is a small number
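Used as a logical index this gives the approximate crossing locations; a minimal sketch (the tolerance value is arbitrary):
tolerance = 0.001;
idx = find(abs(y1-y2) < tolerance); % samples where the curves nearly touch
x(idx)                              % approximate intersection locations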

How to find the intersections of two functions in MATLAB?

Let's say I have a function 'x' and a function '2sin(x)'.
How do I output the intersects, i.e. the roots in MATLAB? I can easily plot the two functions and find them that way but surely there must exist an absolute way of doing this.
If you have two analytical (by which I mean symbolic) functions, you can define their difference and use fzero to find a zero, i.e. the root:
f = @(x) x; %defines a function f(x)
g = @(x) 2*sin(x); %defines a function g(x)
%solve f==g
xroot = fzero(@(x)f(x)-g(x),0.5); %starts search from x==0.5
For tricky functions you might have to set a good starting point, and it will only find one solution even if there are multiple ones.
The constructs seen above, @(x) something-with-x, are called anonymous functions, and they can be extended to multivariate cases as well, like @(x,y) 3*x.*y+c, assuming that c is a variable that has been assigned a value earlier.
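A small sketch of the variable-capture behaviour this implies (values are made up):
c = 2;
h = @(x,y) 3*x.*y + c;  % c is captured at definition time
h(1,2)                  % returns 8
c = 5;
h(1,2)                  % still returns 8; the captured c does not change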
When writing the comments, I thought that
syms x; solve(x==2*sin(x))
would return the expected result. At least in Matlab 2013b, solve fails to find an analytic solution for this problem, falling back to a numeric solver that only returns one solution, 0.
An alternative is
s = feval(symengine,'numeric::solve',2*sin(x)==x,x,'AllRealRoots')
which is taken from this answer to a similar question. Besides using AllRealRoots you could use a numeric solver, manually setting starting points which roughly match the values you have read from the graph. This way you get precise results:
[fzero(@(x)f(x)-g(x),-2),fzero(@(x)f(x)-g(x),0),fzero(@(x)f(x)-g(x),2)]
For a higher precision you could switch from fzero to vpasolve, but fzero is probably sufficient and faster.
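For completeness, a minimal vpasolve sketch for the same equation (the starting value 2 is read off the graph and is my assumption):
syms x
vpasolve(x == 2*sin(x), x, 2) % variable-precision numeric root near x = 2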

solve trig equation over boundary

Firstly, I'm sure a simple answer exists for this, maybe I'm just not wording it right in searching for an answer online.
I'm trying to solve an equation that looks like this:
a*x*cot(a*x) == b
Where a and b are constants. Using
solve(a*x*cot(a*x) == b, x)
I'm getting a result I know is wrong (with the values I'm using for the constants, I'm getting like -227, and it should be something around +160.) I plotted it up in Mathematica as two separate functions, and they do cross each other right around there, but since the cot part is periodic, they do so many times.
I want to constrain Matlab's search for the solution to a specific interval, such as 0 to 200; how do I do that?
I'm pretty new to Matlab (rather more experienced in Mathematica).
You can specify the bounds on x using fzero, with only two requirements:
The function must be in a "residual" form (i.e., r(x) = 0)
The residual values at the two bounds must have opposite sign (this guarantees that a root exists within the interval for continuous functions).
So we re-write the function in residual form:
r = @(x) a*x*cot(a*x) - b;
define the interval
% These are just random numbers; the actual bounds should come
% from the graph, which ensures r has different signs at xL and xR
xL = 150;
xR = 170;
and solve
x = fzero(r,[xL,xR]);
I see you were trying to use the Symbolic Toolbox for a solution, but since the equation is a non-linear combination of a polynomial and a trigonometric function, there is more than likely no closed-form solution. So I deferred to a non-linear, numeric root-finder.
I tried some values and it seems solve returns a numeric solution. This is the documented behaviour if no analytic solution is found.
In this case, you may directly call the numeric solver with a matching start value
vpasolve(a*x*cot(a*x) == b, x,160)
It's not exactly what you asked for, but using your reading from the plot as a start value should do it.

Doing a PCA using an optimization in Matlab

I'd like to find the principal components of a data matrix X in Matlab by solving the optimization problem min||X-XBB'||, where the norm is the Frobenius norm, and B is an orthonormal matrix. I'm wondering if anyone could tell me how to do that. Ideally, I'd like to be able to do this using the optimization toolbox. I know how to find the principal components using other methods. My goal is to understand how to set up and solve an optimization problem which has a matrix as the answer. I'd very much appreciate any suggestions or comments.
Thanks!
MJ
The thing about Optimization is that there are different methods to solve a problem, some of which can require extensive computation.
Your solution, given the constraints for B, is to use fmincon. Start by creating a file for the non-linear constraints:
function [c,ceq] = nonLinCon(x)
c = [];                                 % no inequality constraints
ceq = norm(x'*x - eye(size(x)),'fro');  % equals zero when B is orthonormal
then call the routine:
B = fmincon(@(B) norm(X - X*B*B','fro'),B0,[],[],[],[],[],[],@nonLinCon)
with B0 being a good guess on what the answer will be.
Also, you need to understand that this algorithm tries to find a local minimum, which may not be the solution you ultimately want. For instance:
X = randn(1,2)
fmincon(@(B) norm(X - X*B*B','fro'),rand(2),[],[],[],[],[],[],@nonLinCon)
ans =
0.4904 0.8719
0.8708 -0.4909
fmincon(@(B) norm(X - X*B*B','fro'),rand(2),[],[],[],[],[],[],@nonLinCon)
ans =
0.9864 -0.1646
0.1646 0.9864
So be careful when using these methods, and try to select a good starting point.
The Statistics toolbox has a built-in function 'princomp' that does PCA. If you want to learn (in general, without the optimization toolbox) how to create your own code to do PCA, this site is a good resource.
Since you've specifically mentioned wanting to use the Optimization Toolbox and to set this up as an optimization problem, there is a very well-trusted 3rd-party package known as CVX from Stanford University that can solve the optimization problem you are referring to at this site.
Do you have the optimization toolbox? The documentation is really good, just try one of their examples: http://www.mathworks.com/help/toolbox/optim/ug/brg0p3g-1.html.
But in general the optimization functions look like this:
[OptimizedMatrix, OptimizedObjectiveFunction] = optimize(@(MatrixToOptimize) MyObjectiveFunction(MatrixToOptimize), InitialConditionsMatrix, ...optional constraints and options... );
You must create MyObjectiveFunction() yourself, it must take the Matrix you want to optimize as an input and output a scalar value indicating the cost of the current input Matrix. Most of the optimizers will try to minimise this cost. Note that the cost must be a scalar.
fmincon() is a good place to start; once you are used to the toolbox, you should choose a more specific optimization algorithm for your problem if you can.
To optimize a matrix rather than a vector, reshape the matrix to a vector, pass this vector to your objective function, and then reshape it back to the matrix within your objective function.
For example, say you are trying to optimize the 3 x 3 matrix M. You have defined an objective function MyObjectiveFunction(InputVector). Pass M as a vector:
MyObjectiveFunction(M(:));
And within the MyObjectiveFunction you must reshape M (if necessary) to be a matrix again:
function cost = MyObjectiveFunction(InputVector)
InputMatrix = reshape(InputVector, [3 3]);
%Code that performs matrix operations on InputMatrix to produce a scalar cost
cost = %some scalar value
end
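To close the loop, a minimal sketch of the surrounding call (fminunc and the starting guess are my assumptions; swap in fmincon plus constraints as needed):
M0 = eye(3);                                 % hypothetical starting guess for the 3 x 3 matrix
vOpt = fminunc(@MyObjectiveFunction, M0(:)); % the optimizer works on the flattened vector
MOpt = reshape(vOpt, [3 3]);                 % reshape the result back into matrix form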