I am trying to integrate an analytic function (a composite of sqrt and trig functions) over a rectangular area. It has no singularities in the area and seems to be a perfect candidate for dblquad. My question is how to evaluate the accuracy of the numerical value that MATLAB gives me. Without knowing the exact value of the integral, how can we justify the number of significant digits? When you are required to give a value to a certain precision, you should be able to justify it. Is this possible given that the value is calculated using MATLAB?
Unless you set it otherwise, dblquad uses a default tolerance (1e-6 in recent releases) for the absolute quadrature error. The approximation of the integral will be within an error no larger than the specified tolerance.
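One practical check (my suggestion, not something dblquad guarantees): re-run with a tighter tolerance and see which digits stay fixed. The integrand below is a stand-in for the asker's actual function:

f = @(x,y) sqrt(x.^2 + 1) .* cos(y);   % illustrative integrand, vectorized in x
q1 = dblquad(f, 0, 1, 0, pi/2, 1e-6);  % default-level tolerance
q2 = dblquad(f, 0, 1, 0, pi/2, 1e-10); % much tighter tolerance
abs(q1 - q2)                           % digits that agree are the ones you can justify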
You could have a peek at the source code for dblquad; somewhere it will be using a certain number of steps. I guess you could make a new m-file with the important bits that get the integral working and play around with the number of steps until increasing it takes the computer a long time and no longer changes the result. Personally I use a custom Simpson's rule for numerical integration and just set N (the number of steps) to some large number.
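For what it's worth, a composite Simpson's rule really is just a few lines of MATLAB. In this sketch the integrand, interval, and N are placeholders; the idea is to keep doubling N until the answer stops changing:

f = @(x) sqrt(x.^2 + 1) .* cos(x);  % placeholder smooth integrand
a = 0; b = pi/2;                    % integration limits
N = 1000;                           % number of subintervals (must be even)
h = (b - a) / N;
x = linspace(a, b, N + 1);
w = ones(1, N + 1);                 % Simpson weights: 1,4,2,4,...,2,4,1
w(2:2:N) = 4;
w(3:2:N-1) = 2;
q = (h/3) * sum(w .* f(x))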
I am writing a MATLAB script that solves a system of differential equations via the Runge-Kutta method. Due to the iterative nature of this approach, errors accumulate very quickly, so I'm interested in carrying an extremely exaggerated number of decimal places, say, 100.
I have identified the digits function, which lets me set the variable-precision accuracy. However, it seems that I have to call the vpa function in every equation where I want this precision used. Is there a way to put a command in the script header so that the specified number of decimal places is used in all calculations? The MATLAB help is unusually unclear about this.
There is no way to tell MATLAB to use vpa everywhere. Typically you don't specify it in every equation; instead, cast all inputs and constants to vpa once, and every operation that touches them is then carried out at the requested precision.
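A minimal sketch of that pattern (Symbolic Math Toolbox assumed; a single Euler step stands in for your Runge-Kutta update):

digits(100);      % vpa results carry 100 significant digits from here on
h = vpa(1)/100;   % step size cast to vpa up front
y = vpa(1);       % initial condition cast to vpa
y = y + h*(-y)    % one illustrative update step, computed entirely in vpa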
What is the difference between AbsTol and RelTol in MATLAB when performing numerical quadrature?
I have a triple integral that is supposed to produce a number between 0 and 1, and I am wondering what the best tolerance for my application would be.
Any other ideas on decreasing the execution time of integral3?
Also, does anyone know whether integral3 or quadgk is faster?
When performing the integration, MATLAB (or most any other integration software) computes a low-order solution qLow and a high-order solution qHigh.
There are a number of different methods of computing the true error (i.e., how far either qLow or qHigh is from the actual solution qTrue), but MATLAB simply computes an absolute error as the difference between the high and low order integral solutions:
errAbs = abs(qLow - qHigh).
If the integral is truly a large value, that difference may be large in an absolute sense but not in a relative sense. For example, errAbs might be 1E3 while qTrue is 1E12; in that case the method could be said to have converged in a relative sense, since at least 8 digits of accuracy have been reached.
So MATLAB also considers the relative error:
errRel = abs(qLow - qHigh)/abs(qHigh).
You'll notice I'm treating qHigh as qTrue since it is our best estimate.
Over a given sub-region, if the error estimate falls below either the absolute limit or the relative limit times the current integral estimate, the integral is considered converged. If not, the region is divided, and the calculation repeated.
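To make the two knobs concrete, here is a sketch of how they are passed to integral3; the integrand and tolerance values are illustrative, not a recommendation:

f = @(x,y,z) exp(-(x.^2 + y.^2 + z.^2));   % made-up integrand
q = integral3(f, 0, 1, 0, 1, 0, 1, 'AbsTol', 1e-12, 'RelTol', 1e-9);

Since your integral lies between 0 and 1, AbsTol and RelTol act on roughly the same scale, so setting both to the accuracy you actually need is a sensible starting point.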
For the integral function, and for the integral2/integral3 functions with the 'iterated' method, the low/high solutions are a Gauss-Kronrod 7-15 pair (the same 7th-order/15th-order set used by quadgk).
For the integral2/integral3 functions with the 'tiled' method, the low/high solutions are a Gauss-Kronrod 3-7 pair (I've never used this option, so I'm not sure how it compares to the others).
Since all of these methods come down to a Gauss-Kronrod quadrature rule, I'd say sticking with integral3 and letting it do the adaptive refinement as needed is the best course.
When finding the roots of a quadratic equation, subtractive cancellation is an issue when b^2 is much greater than 4ac. So I first need to check whether the given equation has this issue. If it does, I need an alternative approach: calculate r = -(b + sign(b)*sqrt(delta)) and then take the roots as 2c/r and r/(2a). I am stuck at checking whether b^2 is much greater than 4ac.
Solutions are:
Use VPA (symbolic toolbox), which is probably the best solution to deal with precision errors on arbitrary calculations.
Use the built-in function roots, which deals with this issue very well.
The precision of a double value is known, so based on your required precision you can define a boundary for deciding when b^2 counts as "much greater" than 4ac; see the sketch below.
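Here is a minimal sketch of that last idea combined with your stable formula; the coefficients and the threshold for "much greater" are illustrative choices, not canonical values:

a = 1; b = 1e8; c = 1;                % example where b^2 >> 4ac
delta = b^2 - 4*a*c;
if abs(4*a*c) < 1e-8 * b^2            % illustrative cancellation check
    r = -(b + sign(b)*sqrt(delta));   % avoids subtracting nearly equal numbers
    x1 = r / (2*a);
    x2 = (2*c) / r;
else
    x1 = (-b + sqrt(delta)) / (2*a);  % textbook formula is safe here
    x2 = (-b - sqrt(delta)) / (2*a);
end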
I know that the ode45 solver has an adaptive step size controlled by MATLAB itself. The description below is from the MATLAB website:
Specifying tspan with more than two elements does not affect the internal time steps that the solver uses to traverse the interval from tspan(1) to tspan(end). All solvers in the ODE suite obtain output values by means of continuous extensions of the basic formulas. Although a solver does not necessarily step precisely to a time point specified in tspan, the solutions produced at the specified time points are of the same order of accuracy as the solutions computed at the internal time points.
However, if I specify a very small step in tspan = t0:very_small_step:tf, will this affect the program-controlled step size? Will it force the step size to be less than very_small_step, or will MATLAB interpolate to get the corresponding result at each specified time point?
From your quote,

Specifying tspan with more than two elements does not affect the internal time steps

the answer is no: a finer tspan will not force the solver to take smaller internal steps.
There is also the MaxStep property for configuring the maximum step size.
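A minimal sketch of that, with a toy ODE:

opts = odeset('MaxStep', 1e-3);             % solver never steps further than this
[t, y] = ode45(@(t, y) -y, [0 1], 1, opts);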
For the steps in between, the solvers use continuous extension formulas, as described in the MATLAB documentation.
Why are you asking, anyway? What problem are you encountering?
I am looking to do numerical integration in MATLAB. I know that there is a trapz function, but its precision is not good enough. Searching online, I found there is a quad function, but it seems to accept only a function expression as input. My data is all discrete and one-dimensional. Is there any way to use quad on my data? Thanks.
The short answer is no: the only built-in way to numerically integrate data with no analytical expression in MATLAB is the trapz function. If it's not accurate enough for you, try writing your own quad-style routine, as Li-aung said; it's very simple.
Another method you may try is to use the powerful Curve Fitting Tool, cftool, to fit the data and then use the integrate function, which can operate on cfit objects (it has a weird convention: the upper limit is the first argument!). I don't think you will get much more accurate answers than trapz, though; it depends on the fit.
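A hedged sketch of that route (Curve Fitting Toolbox assumed; the data and fit type here are made up):

xdata = (0:0.1:10)';
ydata = sin(xdata);
f = fit(xdata, ydata, 'smoothingspline');  % fit a smooth curve to the samples
q = integrate(f, 10, 0)                    % upper limit first, start point second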
Use the spline function in MATLAB to interpolate your data, then integrate the interpolant. This is a standard method for integrating data given in discrete form.
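A minimal sketch of this approach, with made-up samples:

xdata = 0:0.5:10;
ydata = exp(-xdata/3);                  % made-up discrete data
pp = spline(xdata, ydata);              % piecewise cubic interpolant
q = integral(@(x) ppval(pp, x), 0, 10)  % integrate the smooth interpolant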
You can use quadl() to integrate your data if you first create a function that interpolates them:
function f = int_fun(x,xdata,ydata)
% Interpolate the sampled data (xdata, ydata) at the quadrature points x
f = interp1(xdata,ydata,x);
And then feed it to the quadl() function:
integral = quadl(@int_fun,A,B,[],[],x,y) % syntax to pass extra arguments
                                         % to the function
Integration of a function of one variable is the computation of the area under the curve of the graph of the function. For this answer I'll leave aside the nasty functions and the corner cases and all the twists and turns that trip up writers of numerical integration routines, most of which are probably not relevant here.
Simpson's rule is an approach to the numerical integration of a function for which you have code to evaluate it at arbitrary points within its domain. That's not your situation here.
Let's suppose that your data represents a time series of values collected at regular intervals. Then you can plot your data as a histogram with bars of equal width. The integral you seek is the sum of the areas of the bars in the histogram between the limits you are interested in.
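In code, that bar-area sum is one line for regularly sampled data; the values here are made up:

dt = 1;           % sampling interval, e.g. one second
y = [2 3 5 4 1];  % sampled values
q = sum(y) * dt   % total area of the equal-width bars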
You should be able to apply this approach quite easily to data sets where the x-axis (i.e. the width of the bars in the histogram) does not represent time, to situations where the bars are not of equal width, to data that crosses the x-axis, and to most reasonable data sets.
The discretisation of your data sets a limit on the accuracy of the result you can get. If, for example, your time series is sampled at 1-second intervals, you can't integrate over an interval that is not a whole number of seconds with this approach. But then, you don't really have the data on which to compute a more accurate figure by any approach. Sure, you can use MATLAB (or anything else) to generate extra digits of precision, but they don't carry any meaning.