I am writing a MATLAB script that solves a system of differential equations via the Runge-Kutta method. Due to the iterative nature of this approach, errors accumulate quickly. Hence I'm interested in carrying an extremely exaggerated number of decimal places, say, 100.
I have identified the digits function, which allows me to set the variable-precision accuracy. However, it seems that I have to apply the vpa function in every equation where I want this precision used. Is there a way to put a command in the script header and have the specified number of decimal places used in all calculations? The MATLAB help is unusually unclear about this.
There is no way to tell MATLAB to use vpa everywhere. Typically you don't specify it in every equation; instead, cast all inputs and constants to vpa once, and every subsequent operation on them is carried out at the precision set by digits.
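For example, a minimal sketch (the ODE, step size, and initial condition here are made up for illustration): cast the inputs once, and a classical RK4 step stays at vpa precision throughout.

digits(100);                % all vpa results carry 100 significant digits
h = vpa('0.01');            % step size; the string form avoids double round-off
t = vpa('0');
y = vpa('1');               % initial condition
f = @(t, y) -y;             % made-up ODE dy/dt = -y
k1 = f(t, y);               % one classical RK4 step; everything stays vpa
k2 = f(t + h/2, y + h*k1/2);
k3 = f(t + h/2, y + h*k2/2);
k4 = f(t + h, y + h*k3);
y  = y + h*(k1 + 2*k2 + 2*k3 + k4)/6;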
I'm using MATLAB's fminsearch function to minimize a function:
c = cvpartition(200,'KFold',10);
minfn = @(z)kfoldLoss(fitcsvm(cdata,grp,'CVPartition',c,...
    'KernelFunction','rbf','BoxConstraint',exp(z(2)),...
    'KernelScale',exp(z(1))));
opts = optimset('TolX',5e-4,'TolFun',5e-4);
[searchmin fval] = fminsearch(minfn,randn(2,1),opts)
The minimization is over two parameters.
Now I would like to minimize a third parameter, but this parameter can only take positive integer values, i.e. 1,2,3,...
How can I tell fminsearch to only consider positive integers?
Second, if my third parameter gets initialized to 10 but its actual best value is 100, does fminsearch converge quickly in such cases?
You can't tell fminsearch to consider only integers. The algorithm it uses is not suitable for discrete optimization, which in general is much harder than continuous optimization.
If there are only relatively few plausible values for your integer parameter(s), you could just loop over them all, but that might be too expensive. Or you could cook up your own 1-dimensional discrete optimization function and have it call fminsearch for each value of the integer parameter it tries. (E.g., you could imitate some standard 1-dimensional continuous optimization algorithm, and just return once you've found a parameter value that's, say, better than both its neighbours.) You may well be able to adapt this function to the particular problem you're trying to solve.
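A minimal sketch of the brute-force loop (minfn3 and the range 1:100 are assumptions; opts is the optimset structure from the question):

fvalBest = Inf;
for n = 1:100                        % assumed plausible range for the integer parameter
    % minfn3 is hypothetical: the original objective with the integer fixed at n
    [z, fval] = fminsearch(@(z) minfn3(z, n), randn(2,1), opts);
    if fval < fvalBest
        fvalBest = fval; zBest = z; nBest = n;
    end
end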
As @Gareth McCaughan said, you can't tell fminsearch to restrict the search space to integers. If you want to search for solvers that can handle this type of problem, you want to search for "mixed integer programming." Mixed integer is for part continuous, part integer programming. And "programming" is jargon for optimization (horribly confusing name, but like the QWERTY keyboard, we're stuck with it).
Be aware though that integer programming is in general NP-hard! Larger problems may be entirely intractable.
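If you have the Global Optimization Toolbox, one option along these lines is ga, which accepts an IntCon argument constraining selected variables to integers. A hedged sketch for the three-parameter problem above (minfn3, the range, and the bounds are assumptions):

nvars  = 3;
IntCon = 3;                          % constrain the third variable to integer values
lb = [-5; -5; 1];                    % assumed bounds on the search space
ub = [ 5;  5; 100];
% minfn3 is hypothetical: the objective extended to three parameters
[zBest, fvalBest] = ga(@(z) minfn3(z), nvars, [], [], [], [], lb, ub, [], IntCon);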
In the case I handled, I was looking for a vector index that satisfies a condition. The vector index is a positive integer. My workaround for fminsearch was to interpolate the error function: suppose fminsearch proposes 5.1267 as the new index; I then evaluated the error function at indices 5 and 6 and returned an interpolation between the two values. This led to stable and satisfying results.
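A minimal sketch of that interpolation trick (errFun is hypothetical: the error as a function of an integer index):

function e = smoothErr(x)
    % linearly interpolate the integer-index error function errFun
    lo = floor(x);
    w  = x - lo;                     % fractional part proposed by fminsearch
    e  = (1 - w) * errFun(lo) + w * errFun(lo + 1);
end

idx = round(fminsearch(@smoothErr, 5));   % snap the continuous optimum back to an integer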
I have to solve a nonlinear system of 2 equations with 2 unknowns in MATLAB. I used to solve such systems with vpasolve, but someone told me that this method isn't very efficient, that I shouldn't abuse symbolic programming in MATLAB, and that I should use fsolve instead. Does this hold true every time? What are the differences between fsolve and vpasolve in terms of precision and performance?
Basically that's the question of when to use variable-precision arithmetic (vpa) vs floating-point arithmetic. Floating-point arithmetic uses a constant precision; the most common type is a 64-bit double, which is supported by your CPU and can therefore be executed fast. When you need higher precision than double offers, you could switch to a larger bit length, but this requires you to know in advance which precision you need. vpa lets you approach this the other way round: using digits you specify the precision of the result, and the Symbolic Math Toolbox does all intermediate steps with sufficient precision.
An example where fsolve produces a significant error:
f = @(x)log(log(log(exp(exp(exp(x+1))))))-exp(1)   % analytically f(x) = x + 1 - exp(1), so the root is x = exp(1) - 1
vpasolve(f(sym('x')))   % the symbolic path recovers the root accurately
fsolve(f,0)             % in doubles, exp(exp(exp(x+1))) overflows near the root, so fsolve's answer is off
What is the difference between AbsTol and RelTol in MATLAB when performing numerical quadrature?
I have a triple integral that is supposed to produce a number between 0 and 1, and I am wondering what the best tolerances for my application would be.
Any other ideas on decreasing the execution time of integral3?
Also, does anyone know whether integral3 or quadgk is faster?
When performing the integration, MATLAB (or most any other integration software) computes a low-order solution qLow and a high-order solution qHigh.
There are a number of different methods of computing the true error (i.e., how far either qLow or qHigh is from the actual solution qTrue), but MATLAB simply computes an absolute error as the difference between the high and low order integral solutions:
errAbs = abs(qLow - qHigh).
If the integral is truly a large value, that difference may be large in an absolute sense but not a relative sense. For example, errAbs might be 1E3, but qTrue is 1E12; in that case, the method could be said to converge relatively, since at least 8 digits of accuracy have been reached.
So MATLAB also considers the relative error:
errRel = abs(qLow - qHigh)/abs(qHigh).
You'll notice I'm treating qHigh as qTrue since it is our best estimate.
Over a given sub-region, if the error estimate falls below either the absolute limit or the relative limit times the current integral estimate, the integral is considered converged. If not, the region is divided, and the calculation repeated.
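A sketch of that test over one sub-region (AbsTol and RelTol stand in for the user-supplied tolerances; qLow/qHigh are assumed to come from the low/high order quadrature pair):

errAbs = abs(qLow - qHigh);
if errAbs <= AbsTol || errAbs <= RelTol * abs(qHigh)
    % accept qHigh as the integral over this sub-region
else
    % split the sub-region and integrate each piece separately
end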
For the integral function, and for the integral2/integral3 functions with the iterated method, the low-high solutions are a Gauss-Kronrod 7-15 pair (the same 7th-order/15th-order set used by quadgk).
For the integral2/integral3 functions with the tiled method, the low-high solutions are a Gauss-Kronrod 3-7 pair (I've never used this option, so I'm not sure how it compares to others).
Since all of these methods come down to a Gauss-Kronrod quadrature rule, I'd say sticking with integral3 and letting it do the adaptive refinement as needed is the best course.
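For reference, both tolerances can be passed straight to integral3; the integrand here is a made-up placeholder:

f = @(x, y, z) sqrt(x + y.^2 + z.^2);                      % hypothetical integrand
q = integral3(f, 0, 1, 0, 1, 0, 1, 'AbsTol', 1e-9, 'RelTol', 1e-6);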
I'm trying to solve a problem using MATLAB's genetic algorithm and fmincon functions, where the variables' values do not have single upper and lower bounds. Instead, each variable should be allowed either to take the value x = 0 or to lie within lb <= x <= ub. This is a turbine allocation problem, where each turbine can either be turned off (x = 0) or operate within the lower and upper cavitation limits (lb and ub). Of course I can trick the problem by creating a constraint that is violated for values between 0 and lb, but I'm finding that the problem has a hard time converging like this. Is there an easier way to do this that will trim down the search space?
If the number of variables is small enough (say, 10 or 15 or fewer), you can try every subset of variables that are set to be non-zero and see which subset gives you the optimal value. If you can't make assumptions about the structure of your optimization problem (e.g. you have penalties for non-zero variables but your main objective function is "exotic"), this is essentially the best you can do. If you are willing to settle for an approximate solution, you can add a so-called "L1" penalty to your objective function: a constant times the sum of the absolute values of the variables. This encourages some variables to become exactly zero, and if your main objective function is convex, the resulting objective function stays convex, because the absolute value function is convex. Convex functions are much easier to minimize: any local minimum is a global minimum, which you can reach using any number of optimization routines (including the ones implemented in MATLAB).
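A minimal sketch of the L1-penalty idea (objFun, lambda, and the bounds lb/ub are assumptions, not part of the original problem):

lambda    = 0.1;                          % penalty weight, tuned by experiment
penalized = @(x) objFun(x) + lambda * sum(abs(x));
x0 = (lb + ub) / 2;                       % assumed starting point inside the bounds
% lower bound 0 (rather than lb) so the penalty can actually drive variables to zero
xOpt = fmincon(penalized, x0, [], [], [], [], zeros(size(x0)), ub);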
I am trying to integrate an analytic function (a composite of sqrt and trig functions) over a rectangular area. It has no singularities in the area and seems a perfect candidate for dblquad. My question is how to evaluate the accuracy of the numerical value MATLAB gives me. Without knowing the exact value of the integral, how can we justify the significant digits? When you are required to give a value to a certain number of digits of precision, you should be able to justify it. Is it possible to achieve this when the value is calculated using MATLAB?
Unless you set it otherwise, dblquad uses a default tolerance threshold (1e-6 in the latest releases) for the absolute quadrature error. The approximation of the integral will be within an error no larger than the specified tolerance.
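You can tighten that tolerance by passing it as the sixth argument; the integrand below is a made-up stand-in for the sqrt/trig composite:

f = @(x, y) sqrt(x) .* cos(y);       % hypothetical integrand on the rectangle
q = dblquad(f, 0, 1, 0, pi, 1e-10);  % sixth argument overrides the default absolute tolerance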
You could have a peek at the source code for dblquad; somewhere it will be using a certain number of 'steps'. I guess you could make a new m-file with the important bits that get the integral working and play around with the number of steps until it takes the computer a long time and the result stops changing. Personally I use a custom Simpson's rule for numerical integration and just change N (the number of steps) to some large number.
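A minimal sketch of that approach in 1-D (the simpson function and the convergence check are illustrative, not from the answer): double N until successive estimates agree to the number of digits you need, which gives an empirical justification for the precision you quote.

function q = simpson(f, a, b, N)
    % composite Simpson's rule with N subintervals (N must be even)
    x = linspace(a, b, N + 1);
    y = f(x);
    h = (b - a) / N;
    q = h/3 * (y(1) + y(end) + 4*sum(y(2:2:end-1)) + 2*sum(y(3:2:end-2)));
end

% usage: keep doubling N until the estimate stabilizes
q1 = simpson(@(x) sqrt(x).*cos(x), 0, 1, 100);
q2 = simpson(@(x) sqrt(x).*cos(x), 0, 1, 200);
% abs(q1 - q2) is then a crude empirical error estimate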