MATLAB: Slow convergence of convex optimization algorithm

I want to speed up the convergence of a convex optimization problem in MATLAB.
My objective function is convex in its three parameters and I am using gradient ascent for the maximization.
Right now I am writing the iteration manually, terminating when the difference between the new and old parameter values is very small (around 0.0000001). I cannot terminate based on the number of iterations because that does not guarantee convergence to the optimum.
So, it takes a lot of time to converge - almost 2 days! Is there any way to speed this up?
Actually my objective function has only three parameters. I know that my first parameter's value should be greater than that of the second.
So, starting from the initial condition, the second parameter's value starts increasing rapidly. After it has reached a certain point, the first parameter's value starts increasing rapidly. While the first parameter's value is increasing, the second parameter's value starts decreasing slowly. Eventually, the first parameter's value becomes greater than that of the second.
Is there any way to speed up the process? 2 days is a very long time. Furthermore, calculating the gradient is also time consuming. It needs a lot of matrix computations.
I don't want to start from predefined parameter values, such as forcing the first parameter's value to be greater than the second's. Also, it's not necessary that the first parameter always ends up greater than the second; I just know which parameter's value should be greater. Any suggestions?
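To make the setup concrete, here is a minimal sketch of the kind of loop I mean; the objective gradient and the step size are placeholders, not my actual function:
% Minimal sketch of gradient ascent with a tolerance-based stop, as described
% above.  The gradient below belongs to a toy concave objective, not the real one.
grad  = @(p) -2 * (p - [3; 1; 2]);    % placeholder gradient (the real one needs heavy matrix work)
p     = zeros(3, 1);                  % the three parameters
alpha = 0.1;                          % fixed step size
tol   = 1e-7;                         % stop when the parameter change is this small
while true
    pNew = p + alpha * grad(p);       % gradient ascent step
    if norm(pNew - p) < tol           % terminate on a tiny parameter change
        break;
    end
    p = pNew;
end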

If the calculation of gradients is very slow and you still want a manual implementation, you could try the following; it will take more steps, but each step is so simple that it could be a lot quicker overall:
Define a stepsize
Try all the points where each of your three variables moves by -1, 0 or +1 times the stepsize (3^3 = 27 possibilities)
Pick the best one
If the best one is your previous point, multiply the stepsize by a factor of 0.5
Of course, the success of this process depends on the properties of your function. It should also be noted that a much simpler solution could be to relax the required difference to something like 0.0001
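A minimal sketch of that search for three parameters; the objective and starting point are placeholders, not the asker's function:
% Minimal sketch of the derivative-free search described above: evaluate all
% 27 combinations of moving each parameter by -1, 0 or +1 steps, keep the best
% point, and halve the step size whenever staying put wins.
f    = @(p) -sum((p - [3; 1; 2]).^2);               % toy concave objective to maximize
p    = zeros(3, 1);                                 % current best point
step = 1;                                           % initial step size
[d1, d2, d3] = ndgrid(-1:1, -1:1, -1:1);
moves = [d1(:), d2(:), d3(:)]';                     % 3-by-27 matrix of candidate moves
while step > 1e-7
    candidates = bsxfun(@plus, p, step * moves);    % all 27 candidate points
    vals = arrayfun(@(j) f(candidates(:, j)), 1:27);
    [~, best] = max(vals);
    if all(moves(:, best) == 0)                     % the current point is still the best
        step = step / 2;                            % so shrink the step size
    else
        p = candidates(:, best);                    % otherwise move to the best candidate
    end
end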

Related

Tuning gain table to match two curves

I have two data sets; let us call them "actual speed" and "desired speed". My main objective is to match the actual speed to the desired speed.
To do that in my case, I need to tune an FF table (1x10), an Integral gain table (10x8) and a Proportional gain table (10x8).
My approach so far has been as follows:
First, start the iteration with 0.1 as the initial value in the first cell (FF[0]) of the FF table
Then find the R-squared or correlation between the two data sets (i.e. actual speed and desired speed)
Increment the value of the first cell (FF[0]) by 0.25 and again compute the R-squared or correlation of the two data sets
Once the cell value (FF[0]) reaches 2 (the maximum gain value, already defined by the lab), evaluate the R-squared values and write back into FF[0] the gain value that gives the minimum error between the two curves
Then tune the Integral and Proportional tables in the same way for the same RPM range
Once they are tuned, go for the next higher RPM range and repeat steps 2-5 (RPM ranges: 800-1000; 1000-1200; ...; 3000-3200)
Now the problem is that this process takes far too long to complete. For example, it takes around 1 hour to tune one cell of FF, which is very slow.
If possible, please suggest any other approach that I could try for tuning the tables. I am using MATLAB R2010a and I can't switch to any other version of MATLAB, because my controller can only communicate with this version. I also can't use any app for tuning, since my GUI is already communicating with the controller and the two data sets are generated in real time.
In the given figure, let us take the (X1,Y1) curve as the desired speed and the (X2,Y2) curve as the actual speed.
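To make the procedure concrete, here is a minimal sketch of the sweep for a single FF cell; runAndGetActualSpeed, desiredSpeed and the data sizes are placeholders standing in for my real-time setup:
% Minimal sketch of the single-cell sweep described above.  The workload is
% faked so the snippet runs on its own; in reality the actual speed comes
% from the controller in real time.
desiredSpeed = (1:100)';                                           % placeholder desired-speed data
runAndGetActualSpeed = @(ff) ff(1) * desiredSpeed + randn(100, 1); % hypothetical stand-in for a test run
FF = zeros(1, 10);                                                 % FF table (1x10)
gains = 0.1:0.25:2;                                                % candidate values up to the lab maximum of 2
bestR2 = -Inf;
bestGain = gains(1);
for g = gains
    FF(1) = g;                                   % write the candidate gain
    actualSpeed = runAndGetActualSpeed(FF);      % run the test and collect the actual speed
    R = corrcoef(actualSpeed, desiredSpeed);     % correlation between the two curves
    if R(1, 2)^2 > bestR2                        % keep the best R-squared so far
        bestR2 = R(1, 2)^2;
        bestGain = g;
    end
end
FF(1) = bestGain;                                % keep the gain with the best fit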

Matlab 'Step' response size doesn't change risetime?

When using Matlab's 'step' command to find the step response of a system's transfer function, it's possible to change the step size from the default of 1 to something else (e.g. 1e-2), like so:
stepOpt = stepDataOptions('StepAmplitude', 1e-2);
step(TF_closed_loop, stepOpt);
In this case the TF is a physical system, e.g. a motor. However, although the resulting step size is indeed different, the time scale doesn't change at all. E.g. if it took 100 seconds to reach 1, it still takes 100 seconds to reach 1e-2... and this is not a reasonable result for a physical system, which would take less time to cover a shorter distance.
Is there another required setting in Matlab to make this accurate?
It's already accurate. By changing the step amplitude you are only multiplying the input by a constant factor newA/oldA. Since the system is linear, the response is the same as in the first case, just multiplied by that same constant, so of course it takes the same amount of time to reach a given percentage of the stationary value.
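A quick way to see this numerically; the transfer function below is just an assumed first-order example, not your motor model:
% For a linear model, changing the step amplitude only scales the response;
% the time axis (and hence the rise time) is unchanged.
TF_closed_loop = tf(1, [0.5 1]);                 % assumed example system
t  = linspace(0, 5, 501);                        % common time grid
y1 = step(TF_closed_loop, t);                    % default amplitude 1
stepOpt = stepDataOptions('StepAmplitude', 1e-2);
y2 = step(TF_closed_loop, t, stepOpt);           % amplitude 1e-2
max(abs(y2 - 1e-2 * y1))                         % ~0: same shape, merely scaled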

Time step computation in Matlab ODE solver

I tried to find out how MATLAB computes the step size (not the initial one) when solving ODEs with, for example, the ode45 solver. The source code is really complex, so does anyone know how it works?
You should be aware that the step size is dynamically adapted; there is no single "the" step size.
To get a general, simplified idea: the total error E is composed of the atomic errors of every time step. To first order this is a summation; more exactly, there is some kind of cumulative magnification of the atomic errors involved.
A sensible approach is that every time step of length h should contribute an atomic error of about E·h/T, where T is the length of the integration interval. The order-4 method has a local error of C·h^5, where C is, to zeroth order, a polynomial in the first 4 derivatives of the ODE function. Since the method computes both an order-4 and an order-5 step, call them y4 and y5, one can take y5 as the more precise one, so that approximately C·h^5 = |y4-y5|. This allows one to compute C and to adapt the step size a·h to get the desired atomic error, since one can solve C·(a·h)^5 = E/T·(a·h) to get
a = (E/T · h / |y4 - y5|)^(1/4)
This does not need to be terribly exact, so that one can just use the adapted step size for the next step if the atomic error is not largely out of range.
Another approach is to compare if the local error |y4-y5|/h falls inside a bracket around the desired local error E/T and increase/decrease the step size by a constant factor, with a repetition of the step if the step size needed to be reduced.
There is more to the advanced/actual implementations, taking into account relative and absolute error goals, detecting stiffness, i.e., where the local error formula breaks down, …
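A minimal sketch of such a controller follows; it is not the ode45 source, the example ODE is arbitrary, and an Euler/Heun pair of orders 1 and 2 stands in for the 4/5 pair, so the power 1/p below becomes the 1/4 above for ode45:
% Sketch of the proportional step-size control described above: estimate the
% local error from a low/high order pair and rescale h toward an error budget
% of E/T per unit step, using a = (E/T * h / err)^(1/p) with p the lower order.
f = @(t, y) -y;            % example ODE y' = -y
t = 0;  T = 5;  y = 1;     % integrate from t = 0 to T
h = 0.1;                   % current step size, adapted on the fly
E = 1e-3;                  % desired total error over the whole interval
p = 1;                     % order of the lower method (ode45 uses p = 4)
while t < T
    h  = min(h, T - t);
    k1 = f(t, y);
    yLow  = y + h * k1;                        % order-1 (Euler) step
    yHigh = y + h/2 * (k1 + f(t + h, yLow));   % order-2 (Heun) step
    err = abs(yHigh - yLow);                   % atomic error estimate
    a = (E/T * h / max(err, eps))^(1/p);       % rescaling factor
    if err <= E/T * h                          % atomic error within budget
        t = t + h;  y = yHigh;                 % accept the step
    end                                        % otherwise retry with a smaller h
    h = 0.9 * a * h;                           % 0.9 is a common safety factor
end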

Calculate script execution time in advance with Matlab

Good morning,
I have a question about the execution time of a script in Matlab. Is it possible to know in advance how long a script will take before running it (an estimated time, for example)? I know that with the tic and toc commands, among others, it is possible to measure the time at the end, but I don't know if it's possible to know it beforehand.
Thanks in advance,
It is not too hard to make an estimate of how long your calculation will take.
You already know how to record calculation times with tic and toc, so now you can do this:
Start with a small-scale test (for example, n = 1) and record the calculation time
Multiply n by a constant k (I usually choose 2 or 10 for easy calculations) and record the calculation time
Keep multiplying by k until you find a consistent relation: 'If I multiply my input size by k, my calculation time changes like so ...'
Now you can extrapolate your estimated calculation time by:
calculating how many times you need to multiply the input size of your biggest small-scale example by k to reach your real data size
applying the relation you found exactly that many times to the calculation time of your biggest small-scale example
Of course this combines well with some common sense: if you do certain things t times, they will take about t times as long. This can easily be used when you have to perform a certain calculation a million times. Just interrupt the loop after a minute or so; if it is still within the first ten calculations, you may want to give up!
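A minimal sketch of that procedure; myCalculation, the test sizes and the real problem size are placeholders, not anything from the question:
% Time a placeholder workload at a few growing input sizes, fit the scaling on
% a log-log scale, and extrapolate to the real problem size.
myCalculation = @(n) sort(rand(n, 1));   % stand-in for the actual script body
k  = 10;                                 % growth factor between test sizes
ns = 1e3 * k.^(0:3);                     % small-scale input sizes: 1e3 ... 1e6
t  = zeros(size(ns));
for i = 1:numel(ns)
    tic;
    myCalculation(ns(i));
    t(i) = toc;                          % record the calculation time
end
c = polyfit(log(ns), log(t), 1);         % assume t ~ const * n^p, so log t is linear in log n
nReal = 1e8;                             % the real data size (assumed)
tEst  = exp(polyval(c, log(nReal)));     % extrapolated runtime estimate
fprintf('Estimated runtime for n = %g: %.1f seconds\n', nReal, tEst);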

Getting the value of a filter at an arbitrary time

Context: I'm trying to improve the values returned by the iPhone CLLocationManager, although this is a more generally applicable problem. The key issue is that CLLocationManager returns data on the current velocity as and when it feels like it, rather than at a fixed sample rate.
I'd like to use a feedback equation to improve accuracy
v=(k*v)+(1-k)*currentVelocity
where currentVelocity is the speed returned by didUpdateToLocation:fromLocation: and v is the output velocity (and also used for the feedback element).
Because of the "as and when" nature of didUpdateToLocation:fromLocation: I could calculate the time interval since it was last called, and do something like
for (i=0;i<timeintervalsincelastcalled;i++) v=(k*v)+(1-k)*currentVelocity
which would work, but is wasteful of cycles, especially as I probably want timeintervalsincelastcalled to be measured in tenths of a second.
Is there a way to solve this without the loop? I.e. can I rework (integrate?) the formula so that I put an interval into the equation and get the same answer as I would have got by iterating?
If you write your original equation as
v = k*vCurrent + (1-k)*v
you can apply the answer from another SO question.
Instead of iterating, you could just choose the value of k based on the size of the interval. For example, if the interval length is an hour, you'd probably want k to be 0.
It would be easy to precompute k for a variety of interval sizes to give the same answer as the iteration would give. Just compute the change by iterating (you already have code for that), and then compute the value of k that would give you that result algebraically.
It's a common programmer jedi trick to have a table of lookup values in place of expensive calculations. (there, now my answer has something to do with code!)
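For completeness: iterating v = k*v + (1-k)*currentVelocity n times has the closed form v_n = k^n*v_0 + (1 - k^n)*currentVelocity, so the precomputed value for an interval of n sub-steps is simply kEff = k^n. A quick numerical check, with purely illustrative values:
% Verify that a single update with kEff = k^n matches n iterations of the filter.
k  = 0.9;            % per-sub-step smoothing factor
n  = 17;             % number of 0.1 s sub-steps since the last update
v0 = 3.0;            % previous filtered velocity
c  = 5.0;            % currentVelocity reported by the location update

vLoop = v0;
for i = 1:n
    vLoop = k * vLoop + (1 - k) * c;     % the original iterative update
end

kEff    = k^n;                           % effective smoothing factor for the whole interval
vClosed = kEff * v0 + (1 - kEff) * c;    % single-step equivalent
abs(vLoop - vClosed)                     % ~0 up to rounding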