MATLAB: Plot integral using quad/quadl

I would like to know if anybody knows how I can plot an integral calculated using quad/quadl, or if this is possible.
I read that I can set the trace parameter to a non-zero value, which makes the function report information about each iteration, but I'm not sure whether and how I can use that information to plot the integral.
Thanks.

quad and quadl do not compute an integral function anyway, i.e., the integral as a function of its upper limit. And since tools like this work iteratively, refining their estimate until it satisfies a tolerance on the global value, they are not easily made to produce the plot you desire.
You can do what you desire by using a differential equation solver to generate the solution, ode45 for example.
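For example, here is a minimal sketch of that idea. The integrand exp(-x.^2) is just a placeholder for whatever function you are actually integrating: since dF/dt = f(t), solving that ODE with ode45 gives the running integral F(t), which you can then plot.

% Placeholder integrand -- substitute your own function handle.
f = @(x) exp(-x.^2);

tspan = [0 5];                      % range of upper limits to evaluate
F0 = 0;                             % F(0) = 0

% dF/dt = f(t), so the ODE solution F(t) is the integral of f from 0 to t.
[t, F] = ode45(@(t, y) f(t), tspan, F0);

plot(t, F)
xlabel('upper limit t')
ylabel('integral of f from 0 to t')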

Related

how to get the area of a region using matlab

I want to plot some equations and inequalities like x>=50, y>=0, 4x-5y>=8, x=40, x=60, y=25, y=45 in matlab and want to get the area of the region produced by intersecting them. Is it possible using matlab? If yes, can someone point me to some documentation? If not, is there some other software that can do this?
Integrals would work for your purposes, provided you know the points at which the curves intersect (something Matlab is also able to compute). Take a look at the documentation on the integral function.
q = integral(fun,xmin,xmax) approximates the integral of function fun
from xmin to xmax using global adaptive quadrature and default error
tolerances.
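For example, a minimal sketch of computing the area between two boundary curves with integral. The curve definitions, limits, and names below are placeholders chosen only for illustration; the actual ones follow from your constraints and their intersection points.

% Placeholder boundary curves, with ytop(x) >= ybot(x) on [a, b].
ytop = @(x) 45 + 0*x;               % e.g. the horizontal line y = 45 (vectorized)
ybot = @(x) (4*x - 8) / 5;          % e.g. the line 4x - 5y = 8 solved for y

a = 40;                             % left edge of the region
b = 50;                             % right edge (use the actual intersection points)

% Area of the region between the two curves on [a, b].
A = integral(@(x) ytop(x) - ybot(x), a, b);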
EDIT: As an additional resource, take a look at the code provided by user Grzegorz Konz on the Mathworks blog.
EDIT #2: I'm not familiar with any Matlab functions that'll take a vector of functions and return the points of intersection (if any) between all the curves. Users have produced functions that return the set of intersection points between two curves. You could run this function for each pair of equations in your list and use a function like polyarea to compute the area of the enclosed region if the curves are all straight lines.
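A sketch of that last step, assuming the intersection points have already been found and ordered around the boundary of the region (the coordinates below are made up for illustration):

% Vertices of the enclosed polygon, listed in order around its boundary.
% In practice these come from intersecting the boundary lines pairwise.
xv = [40 60 60 50 40];
yv = [25 25 45 45 33];

A = polyarea(xv, yv);               % area of the polygon with those vertices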

How to get an accurate result for an integral in matlab?

I don't know how to set the interval of an integral to get the most precise result.
For example, this is the original definition of the formula:
y = integral(@(x) log2(f1(x)./f2(x)), -Inf, Inf)
Note: f1(x)->0 and f2(x)->0 as x->-inf or x->inf, and they decay at different rates.
If I use [-Inf, Inf], Matlab gives me NaN.
If I narrow down the interval, Matlab gives a number. But if I increase the interval a little bit, I get another number. So I am wondering how to deal with this kind of integral calculation. How can I make it as precise as possible without getting NaN?
Thanks a lot.
I don't think your integral converges for the definitions you have given. For example, for N=1 the integrand simplifies to (1/2 - 2*x)/log(2), which is clearly nonconverging at infinity. For larger N the integrand goes to -inf for x->inf and to inf for x->-inf, and I don't think the integral converges either, though I do not have a full proof at the moment.
It is good practice to examine mathematical functions analytically before running numerical analysis. If this is not possible, then try first plotting the function itself over the relevant range to get an idea of its behavior. A good way to plot functions over many orders of magnitude is by using the logspace function for x values.
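For instance, a sketch of that kind of diagnostic plot. The question does not give f1 and f2, so the definitions below are placeholders; with them, the integrand grows without bound, which is exactly the sort of behaviour such a plot makes visible.

% Placeholder definitions -- substitute the actual f1 and f2 from the problem.
f1 = @(x) exp(-abs(x));
f2 = @(x) exp(-2*abs(x));

integrand = @(x) log2(f1(x) ./ f2(x));

% Examine the positive tail over many orders of magnitude.
x = logspace(-3, 6, 500);
semilogx(x, integrand(x))
xlabel('x'), ylabel('integrand value')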

Minimizing error of a formula in MATLAB (Least squares?)

I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors: one known and one unknown. This formula is supposed to represent the error, and I need to minimize that error. I think I'm supposed to use least squares, but I don't know much about it, and I'm wondering which function is best for doing that and what arguments would represent my equation. My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more. Am I required to take derivatives?
The problem you must be trying to solve is
min_beta u'u = min_beta sum_i u_i^2, with u = y - X*beta,
where u is the error, y is the vector of dependent variables you are trying to explain, X is a matrix of independent variables, and beta is the vector of coefficients you want to estimate.
Since sum_i u_i^2 is differentiable (and convex), you can find its minimum by computing its derivative with respect to beta and setting it equal to zero.
If you do that, you find that beta = inv(X'*X)*X'*y. This may be calculated using the Matlab function regress (http://www.mathworks.com/help/stats/regress.html) or by writing the formula directly in Matlab. However, you should be careful about how you evaluate the inverse of X'X; see Most efficient matrix inversion in MATLAB.
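For example, a minimal sketch of that calculation on made-up data. In practice the backslash operator is preferable to forming the inverse explicitly:

% Made-up data: y depends on two regressors plus an intercept and noise.
X = [ones(100, 1), randn(100, 2)];          % design matrix
beta_true = [1; 2; -3];
y = X * beta_true + 0.1 * randn(100, 1);

% Least-squares estimate; backslash solves the problem in a numerically
% stable way, so prefer it over inv(X'*X)*X'*y.
beta_hat = X \ y;

% Equivalent, with the Statistics Toolbox:
% beta_hat = regress(y, X);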

newton raphson method in matlab

I would like to solve one equation in Matlab with two unknown variables using the Newton-Raphson method.
The equation is
I(:,:,K) = IL(:,:,K)-Io(:,:,K)*(exp((V+I*Rs)/a(:,:,K))-1)-((V+I*Rs(:,:,K))/Rsh(:,:,K));
Can this be done in Matlab? If so, please guide me, since I have not managed to find anything related to this form of equation!
Thanks
No. In general, one equation in two unknowns has an infinite number of solutions. (Think of a contour plot. You are essentially looking for the level set, the locus of all points that yields zero for the dependent variable. It will be a curvilinear path in those variables.)
So you can't "solve" it. A good way to visualize the locus is the contour function; ezplot will do it even better.
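For instance, a sketch of that visualization for a single-diode equation of the same form, using made-up scalar parameters in place of the 3-D arrays in the question:

% Made-up scalar parameters (the question uses 3-D arrays of these).
IL = 5;  Io = 1e-9;  a = 1.5;  Rs = 0.01;  Rsh = 100;

% Residual F(V, I) = 0 defines the locus of solutions in the (V, I) plane.
F = @(V, I) IL - Io .* (exp((V + I .* Rs) ./ a) - 1) ...
    - (V + I .* Rs) ./ Rsh - I;

% Draw the zero level set with contour ...
[V, I] = meshgrid(linspace(0, 40, 400), linspace(0, 6, 400));
contour(V, I, F(V, I), [0 0])
xlabel('V'), ylabel('I')

% ... or directly with ezplot (fimplicit in newer releases):
% ezplot(F, [0 40 0 6])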

Fminunc returns indefinite Hessian matrix for a convex objective

When minimizing a convex objective function, does that mean the Hessian matrix at the minimizer should be PSD? If fminunc in Matlab returns a Hessian that is not PSD, what does that mean? Am I using a wrong objective?
I do this kind of work in environments other than Matlab, but the points below apply generally.
A non-PSD Hessian means you can't take its Cholesky factorization (i.e., a matrix square root), so you can't use it to get standard errors, for example.
To get a good Hessian, your objective function has to be really smooth, because you're taking a second derivative, which doubly amplifies any noise. If possible, it is best to use analytic derivatives rather than finite differences. That is, if you really need the Hessian.
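For instance, a minimal sketch of supplying an analytic gradient and Hessian to fminunc, using an illustrative convex quadratic rather than the asker's (unknown) objective. The file name and objective are placeholders.

% File quaddemo.m -- illustrative only; replace quadObj with your objective.
function quaddemo
    opts = optimoptions('fminunc', ...
        'Algorithm', 'trust-region', ...          % can use a user-supplied Hessian
        'SpecifyObjectiveGradient', true, ...
        'HessianFcn', 'objective');
    x0 = [0; 0];
    [xstar, fval] = fminunc(@quadObj, x0, opts);
    disp(xstar), disp(fval)
end

function [f, g, H] = quadObj(x)
    A = [3 1; 1 2];                 % symmetric positive definite
    b = [1; -1];
    f = 0.5 * x' * A * x - b' * x;  % objective value
    g = A * x - b;                  % analytic gradient
    H = A;                          % analytic Hessian (constant here)
end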