Efficiently extracting a column of a matrix - matlab

I currently have a piece of code that I'm trying to optimise, and the bottleneck seems to be extracting a given column from a fairly large matrix.
In particular, my code spends 50% of its time doing Wi=W(:,minColIdx). I've also tried linear indexing, but there was no change.
I was wondering if anyone knows why this is the case, and if anyone has any tips that could help me optimise this part of my code.
Thanks!
EDIT: Here is my code: http://pastebin.com/TnTy6a8D
It's really poorly optimised right now; I was just playing around a bit with gpuArray on my new GPU. Lines 44 and 53, where I try to extract columns from W, are where the code bottlenecks.

Can the speed of the operation be improved?
Of course.
Is it worth it to optimize the indexing code?
Probably not.
MATLAB is REALLY good at basic matrix operations (if doing it in C++ were 10% faster I would be really surprised). You can forget about finding a better way to index a matrix; if you really want a noticeable performance increase, improving your hardware is probably your best bet.
That being said, it is of course always worth thinking about whether you really need to do the heavy calculation that you are attempting, or whether you can think of a smarter algorithm.
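If you want to see how little headroom there is, a minimal timing sketch along these lines can help (the matrix size and column index below are just placeholders; use something close to your real W):

% Rough timing sketch: compare colon indexing with equivalent linear indexing.
W = rand(5000, 5000);      % placeholder size
k = 1234;                  % placeholder column index
tColon  = timeit(@() W(:, k));
tLinear = timeit(@() W((k-1)*size(W,1) + (1:size(W,1)).'));
fprintf('colon: %.3g s, linear: %.3g s\n', tColon, tLinear)

A column is one contiguous block in memory (MATLAB stores matrices column-major), so W(:,k) is essentially a single copy and there is little left to optimise; if W is a gpuArray, gputimeit is the analogous timer.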

I'll take at face value your statement about the bottleneck being extracting the column from a matrix since you don't provide enough detail for me to speculate otherwise, though I find this somewhat surprising.
If you have access to the MATLAB Compiler, I suggest you try compiling your bottleneck function. Try:
help mcc
From within that help you will see that a typical use is:
Make a stand-alone C executable for myfun.m:
mcc -m myfun
You could also try writing a C function to get your column and compiling it with MEX:
http://www.mathworks.com/help/matlab/ref/mex.html

Related

LAPACK simple vs expert driver speed comparison

I want to use LAPACK to solve problems such as Ax=b, least squares, Cholesky decomposition, SVD decomposition, etc. The manual says two types of drivers exist, simple and expert, where the expert driver gives more output information but at the cost of more workspace.
I want to know about the speed difference between the two drivers.
Is it that both are essentially the same, except for the time spent copying/saving data to pointers in expert driver mode, which is not that significant?
It depends on the driver. For the linear solve routines ?GESV and ?GESVX, the difference is that a condition number estimate is also returned and, more importantly, the solution is fed to ?GERFS for iterative refinement to reduce the error.
Often a relatively(!) considerable slowdown is to be expected from the expert routines; you can test it yourself by using the same input. For the GESV/GESVX comparison we had a significant slowdown, which is now fixed in SciPy 1.0: the solution refinement is skipped while the condition number reporting is kept.
See https://github.com/scipy/scipy/issues/7847 for more information.

Matlab Confidence Interval for Degrees of Freedom

I would like to calculate a Confidence Interval along with my Degrees of Freedom (DOF) estimation in Matlab. I am trying to run the following line of code:
[R, DoF, ciDOF] = copulafit('t', U); % fit the copula
The code line without the "ciDOF" argument takes between 1 and 3 hours to run with my data. I tried to run the code with the "ciDOF" argument several times, but the calculation seems to take very long (I stopped it after 8 hours). No error message is generated.
Does anyone have experience with this argument and could kindly tell me how long I should expect the calculation to take (the size of my data is 167*19) and if I have specified the "ciDOF" argument correctly?
Many thanks for the help!
Carolin
If your data matrix U is of size 167 x 19, then what you are asking for is a copula fit that depends on 19 dimensions, making your copula a distribution in a 20-dimensional space with 19 dependent variables.
This is almost certainly why it is taking so long: whether it is your intention or not, you are asking MATLAB to solve the minimization problem of taking 19 marginal distributions and coming up with the 19-variate joint distribution (the copula) where each marginal distribution (represented by a 167 x 1 column vector) is uniform.
Most likely this is a limit of the MATLAB implementation, which iterates through many independent computations and then tries to combine them to fit the joint distribution's ideal conditions.
First and foremost -- and not to be insulting or insinuating -- you should definitely check that you really are trying to find a 19-variate copula. Also, just in case, make sure that your matrix U is oriented in the proper way, because if you have it transposed, you could be trying to ask for the solution to a 167-variate distribution.
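As a hedged sketch of those checks, plus one documented speed knob (the 'ApproximateML' fitting method, which is usually much faster for larger problems at the cost of a less accurate degrees-of-freedom estimate), something like this is worth trying before committing to another multi-hour run:

size(U)                          % should be nObservations x nDimensions, i.e. 167 x 19
assert(all(U(:) > 0 & U(:) < 1), 'U must contain values strictly between 0 and 1')
% Faster, approximate maximum-likelihood fit; the DoF estimate (and its confidence
% interval) can be less accurate than with the default exact ML fit.
[R, DoF, ciDOF] = copulafit('t', U, 'Method', 'ApproximateML');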
But, if this is what you are actually trying to do, there is not really an easy way to predict how long it will take or how long it should take. Even with multiple dimensions, if your marginals are simple or uniform already, that would greatly reduce the copula computation. But, really, there is no way to tell.
Although this may seem like a cop-out, you may actually have better luck switching from MATLAB to R, especially if you have a lot of multivariate data, and you will probably find a lot more functionality in R than in MATLAB. R is freely available and comes with a graphical user interface (GUI), in case you aren't comfortable with command-line programming.
There are many more sources, but here is one PDF lecture on computing copula-fits in R:
http://faculty.washington.edu/ezivot/econ589/copulasPowerpoint.pdf

Alternatives to FMINCON

Are there any faster and more efficient solvers than fmincon? I'm using fmincon for a specific problem and I run out of memory for a modestly sized vector variable. I don't have any supercomputers or cloud computing options at my disposal, either. I know that any alternative solver will still run out of memory, but I'm just trying to see where the problem is.
P.S. I don't want a solution that would change the way I'm approaching the actual problem. I know convex optimization is the way to go and I have already done enough work to get this far.
P.P.S. I saw the other question regarding open source alternatives. That's not what I'm looking for. I'm looking for more efficient ones, in case someone faced the same problem and shifted to a better solver.
Hmmm...
Without further information, I'd guess that fmincon runs out of memory because it needs the Hessian, which, given that your decision variable has on the order of 10^4 elements, is a 10^4-by-10^4 matrix.
It also takes a lot of time to determine the values of the Hessian, because fmincon normally uses finite differences for that if you don't specify derivatives explicitly.
There are a couple of things you can do to speed things up here.
If you know beforehand that there will be a lot of zeros in your Hessian, you can pass sparsity patterns of the Hessian matrix via HessPattern. This saves a lot of memory and computation time.
If it is fairly easy to come up with explicit formulae for the Hessian of your objective function, create a function that computes the Hessian and pass it on to fmincon via the HessFcn option in optimset.
The same holds for the gradients: the GradConstr option (for your non-linear constraint functions) and/or GradObj option (for your objective function) apply here.
There are probably a few options I forgot here that could also help you. Just go through all the options in the Optimization Toolbox's optimset and see whether they could help.
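As a minimal, hedged sketch of how these options fit together (a toy problem, written against the older optimset interface of that era; newer releases use optimoptions and slightly different option names):

% Toy problem: minimize x1^2 + x2^2 subject to x1*x2 >= 1, supplying analytic
% gradients and a Hessian so fmincon never has to finite-difference anything.
% (deal lets an anonymous function return multiple outputs.)
obj  = @(x) deal(x(1)^2 + x(2)^2, [2*x(1); 2*x(2)]);           % [f, gradf]
con  = @(x) deal(1 - x(1)*x(2), [], [-x(2); -x(1)], []);       % [c, ceq, gradc, gradceq]
hess = @(x, lambda) 2*eye(2) + lambda.ineqnonlin(1)*[0 -1; -1 0];  % Hessian of the Lagrangian

opts = optimset('Algorithm', 'interior-point', 'GradObj', 'on', ...
                'GradConstr', 'on', 'Hessian', 'user-supplied', 'HessFcn', hess);
x = fmincon(obj, [2; 2], [], [], [], [], [], [], con, opts)    % converges to [1; 1]

% If you cannot supply a Hessian but know its sparsity, the trust-region-reflective
% algorithm accepts a 'HessPattern' sparse matrix instead.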
If all this doesn't help, you'll really have to switch optimizers. Given that fmincon is the pride and joy of MATLAB's optimization toolbox, there really isn't anything much better readily available, and you'll have to search elsewhere.
TOMLAB is a very good commercial solution for MATLAB. If you don't mind going to C or C++... there's SNOPT (which is what TOMLAB/SNOPT is based on). And there are a bunch of things you could try in the GSL (although I haven't seen anything quite as advanced as SNOPT in there...).
I don't know what version of MATLAB you have, but I know for a fact that in R2009b (and possibly also later), fmincon has a few real weaknesses for certain types of problems. I know this very well, because I once lost a very prestigious competition (the GTOC) because of it. Our approach turned out to be exactly the same as that of the winners, except that they had access to SNOPT, which made their few-million-variable optimization problem converge in a couple of iterations, whereas fmincon could not be brought to converge at all, whatever we tried (and trust me, WE TRIED). To this day I still don't know exactly why this happens, but I verified it myself when I had access to SNOPT. Once, when I have an infinite amount of time, I'll find this out and report it to the MathWorks. But until then... I lost a bit of trust in fmincon :)

Data fitting to a specific equation - matlab

I have 3 sets of data: xdata, ydata and error_ydata.
I need to fit this data according to an equation like this:
y_fit = c1*sin((2*pi*x_data)/c2 - c3) + c4
where the c's are constants, and are the parameters to find.
I've tried several MATLAB functions like fittype or lsqcurvefit, but they require very close initial estimates for the 4 constants to work. The point was to find these constants regardless of the initial estimates you give.
Any idea?
Thank you in advance.
My best regards
Sorry, but the fact is, nonlinear estimation requires at least decent starting values. If you can't be bothered to supply them, then expect random crapola for results at least some of the time.
Do those tools require VERY close estimates? Hardly so, IMHO, but the definition of "very" is a highly subjective one. Perhaps you need to learn more about optimization and the tools that you will use. Once you do, you will start to know how to make them work better. A workman who lacks understanding of their tools should expect to get hurt on a frequent basis.
You might do some reading. Here is one place to start.
There ARE some tools out there that allow a reduction of the problem using a partitioned least squares approach; fminspleas is one. (You can also find pleas in the Optimization Tips and Tricks file.) But in order to use that tool, you will need to learn something about its estimation methodology, understanding how it splits the parameters into two classes. Again, understand your tools.
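To make that concrete, here is a minimal sketch of deriving decent starting values from the data itself before calling lsqcurvefit (it assumes roughly evenly spaced xdata covering at least one full oscillation):

model = @(c, x) c(1)*sin((2*pi*x)/c(2) - c(3)) + c(4);

c4_0 = mean(ydata);                        % offset
c1_0 = (max(ydata) - min(ydata))/2;        % amplitude
% crude period estimate from the dominant FFT bin
y0 = ydata(:) - c4_0;
n  = numel(y0);
dt = mean(diff(xdata(:)));
Y  = abs(fft(y0));
[~, idx] = max(Y(2:floor(n/2)));           % ignore the DC component
c2_0 = n*dt/idx;                           % period corresponding to that bin
c3_0 = 0;                                  % phase

c0   = [c1_0, c2_0, c3_0, c4_0];
cFit = lsqcurvefit(model, c0, xdata(:), ydata(:));

If the error_ydata uncertainties matter, a common trick is to switch to lsqnonlin and minimize (model(c, xdata) - ydata)./error_ydata instead.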

What's a genetic algorithm that would produce interesting/surprising results and not have a boring/obvious end point?

I find genetic algorithm simulations like this to be incredibly entrancing and I think it'd be fun to make my own. But the problem with most simulations like this is that they're usually just hill climbing to a predictable ideal result that could have been crafted with human guidance pretty easily. An interesting simulation would have countless different solutions that would be significantly different from each other and surprising to the human observing them.
So how would I go about trying to create something like that? Is it even reasonable to expect to achieve what I'm describing? Are there any "standard" simulations (in the sense that the game of life is sort of standardized) that I could draw inspiration from?
Depends on what you mean by interesting. That's a pretty subjective term. I once programmed a graph analyzer for fun. The program would first let you plot any f(x) of your choice and set the bounds. The second step was creating a tree holding the most common binary operators (+, -, *, /) in a randomly generated function of x. The program would create a pool of such random functions, test how well they fit the original curve in question, then crossbreed and mutate some of the functions in the pool.
The results were quite cool. A totally weird function would often be a pretty good approximation to the query function. Perhaps not the most useful program, but fun nonetheless.
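For anyone who wants to play with the same idea, here is a deliberately simplified sketch: to keep it short it evolves fixed-degree polynomial coefficients instead of operator trees, but the evaluate/select/crossbreed/mutate loop is the same flavour (all names and constants below are just illustrative choices).

target = @(x) sin(3*x).*exp(-0.2*x);       % placeholder query function
x = linspace(0, 5, 200);
y = target(x);

popSize = 60; deg = 6; nGen = 300;
pop = randn(popSize, deg+1);               % each row: coefficients of one candidate curve
sse = @(c) sum((polyval(c, x) - y).^2);    % fitness: smaller is better

for gen = 1:nGen
    f = arrayfun(@(i) sse(pop(i,:)), 1:popSize);
    [~, order] = sort(f, 'ascend');
    parents = pop(order(1:popSize/2), :);            % keep the better half
    i1 = randi(popSize/2, popSize/2, 1);             % crossbreed random parent pairs...
    i2 = randi(popSize/2, popSize/2, 1);
    children = (parents(i1,:) + parents(i2,:))/2 ...
             + 0.05*randn(popSize/2, deg+1);         % ...and mutate them slightly
    pop = [parents; children];
end

f = arrayfun(@(i) sse(pop(i,:)), 1:popSize);
[~, best] = min(f);
plot(x, y, 'k', x, polyval(pop(best,:), x), 'r--')
legend('target', 'evolved approximation')

Swapping the coefficient vectors for expression trees (with crossover exchanging subtrees) recovers the tree-based version described above.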
Well, for starters, that genetic algorithm is not doing hill-climbing; otherwise it would get stuck at the first local maximum/minimum.
Also, how can you say it doesn't produce surprising results? Look at this vehicle here, for example, produced around generation 7 in one of the runs I tried. It's a very old model of a bicycle. How can you say that's not a surprising result when it took humans millennia to come up with the same model?
To get interesting emergent behavior (that is unpredictable yet useful) it is probably necessary to give the genetic algorithm an interesting task to learn and not just a simple optimisation problem.
For instance, the Car Builder that you referred to (although quite nice in itself) just uses a fixed road as the fitness function. This makes it easy for the genetic algorithm to find an optimal solution; however, if the road were to change slightly, that optimal solution might not work anymore, because the fitness of a solution may have grown dependent on trivially small details of the landscape and not be robust to changes to it. In reality, cars did not evolve on one fixed test road either, but on many different roads and terrains. Using an ever-changing road as the (dynamic) fitness function, generated by random factors but within certain realistic boundaries for slopes etc., would be a more realistic and useful fitness function.
I think EvoLisa is a GA that produces interesting results. In one sense, the output is predictable, as you are trying to match a known image. On the other hand, the details of the output are pretty cool.