Estimate the execution time of a script in advance - matlab

Good morning,
I have a question about the execution time of a script in MATLAB. Is it possible to know in advance how long the execution of a script will take, before running it (an estimated time, for example)? I know that with the tic and toc commands, among others, it is possible to measure the time at the end, but I don't know whether it can be known beforehand.
Thanks in advance,

It is not too hard to make an estimate of how long your calculation will take.
You already know how to record calculation times with tic and toc, so now you can do this:
Start with a small-scale test (for example, n = 1) and record the calculation time.
Multiply n by a constant k (I usually choose 2 or 10 for easy calculation) and record the calculation time.
Keep multiplying by k until you find a consistent relation: 'If I multiply my input size by k, my calculation time changes like so ...'
Now you can extrapolate your estimated calculation time by (see the sketch below):
calculating how many times you need to multiply the input size of your biggest small-scale example to reach your real data size;
applying the consistent relation that you found exactly that many times to the calculation time of your biggest small-scale example.
Of course this combines well with some common sense: if you do certain things t times, they will take about t times as long. This is easy to apply when you have to perform a certain calculation a million times: just interrupt the loop after a minute or so, and if it is still in the first ten calculations, you may want to give up!
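A minimal sketch of the procedure, assuming your calculation can be wrapped in a function of the input size n (the f below is just a placeholder workload to replace with your own script):

% Placeholder workload: swap in a function that runs your actual
% script for a given input size n.
f = @(n) sum(sum(rand(n) * rand(n)));

k = 2;                        % growth factor between test runs
n0 = 256;                     % small starting size
t = zeros(1, 5);
for run = 1:5
    tic;
    f(n0 * k^(run - 1));      % run the workload at increasing sizes
    t(run) = toc;
end
ratio = t(end) / t(end - 1);  % how the time scales per factor-of-k step

% If the real problem is k^m times larger than the last test size,
% the estimated time is roughly t(end) * ratio^m.
m = 3;                        % example: real size = last test size * k^3
fprintf('estimated time: %.1f s\n', t(end) * ratio^m);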


use number of solutions rather than maximum time to end solve attempts

I am using the CP-SAT solver on a JSP (job-shop scheduling problem).
I am iterating, so the solver runs many times (basically simulating each day for a year). I do not need to find the optimal solution, just a reasonably good one, so I would like to be a bit smarter about ending the solver than simply allowing it to run for X seconds each time. For example, I would like to take the 5th solution each time, or even to stop once the current solution's makespan is only 5% (for example) shorter than the previous solution.
Is this possible? I am only aware of solver.parameters.max_time_in_seconds as a way of limiting the calculation time. Intermediate solutions are printed by SolutionPrinter, but I think this is output only and there is no way to break the solver during a run?
Wrong, you can stop the search in a callback; see this recipe:
https://github.com/google/or-tools/blob/stable/ortools/sat/docs/solver.md#stopping-search-early

How to calculate difference and aggregate energy as a counter - TimescaleDB

I've got time-series data in TimescaleDB from smart meters; the energy value is stored as a counter.
I have 2 questions:
1) How do I calculate the difference between consecutive rows of energy values, so I can see the increase minute by minute for each row?
2) I've got this in 1-minute intervals and I'd like to aggregate it into 30m, 60m, etc. What's the best way of achieving this?
Thanks in advance for any help you can give me.
There are a couple of challenges here. First, you have to make sure that the intervals between your counter indexes (meter readings) are constant (think communication outages, ...). If not, you'll have to deal with the resulting energy peaks.
Second, your index will probably look like a discrete sawtooth signal, restarting at zero once in a while.
Here's how we did it.
For 2), we use as many continuous aggregates on the indexes as we require resolutions (15 min, 60 min, ...), using locf where required.
For 1), we do the delta computation on the fly: we query the DB for the indexes and then loop through the array to compute the deltas. This way we can easily handle the sawtooth and the peaks.
I've just got an answer to my question, which is very similar to your Part 1, here.
In short, the answer I got was to use a before_insert trigger and calculate the difference values on insertion, storing them in a new column. This avoids having to re-calculate deltas on every query.
I extended the function suggested in the answer by also calculating delta_time with
NEW.delta_time = EXTRACT (EPOCH FROM NEW.time - previous_time);
This returns the number of seconds that have passed, allowing you to calculate meter power reliably.
For Part 2, consider a continuous aggregate with time buckets, as suggested above.

faster way to add many large matrices in matlab

Say I have many (around 1000) large matrices (about 1000 by 1000) and I want to add them together element-wise. The very naive way is to use a temp variable and accumulate in a loop. For example,
summ = 0;
for ii = 1:20
    for jj = 1:20
        summ = summ + rand(400);
    end
end
After searching the Internet for a while, I found the suggestion that it is better to do this with the help of sum(). For example,
sump = zeros(400, 400, 400);
count = 0;
for ii = 1:20
    for jj = 1:20
        count = count + 1;
        sump(:, :, count) = rand(400);
    end
end
sum(sump, 3);
However, after testing the two ways, the result is
Elapsed time is 0.780819 seconds.
Elapsed time is 1.085279 seconds.
which means the second method is even worse.
So I am just wondering whether there is any more effective way to do the addition. Assume that I am working on a computer with very large memory and a GTX 1080 (CUDA might be helpful, but I don't know whether it is worth it, since communication also takes time).
Thanks for your time! Any reply will be highly appreciated.
The fastest way is to not use any loops in MATLAB at all.
In many cases, the internal functions of MATLAB are well optimized to use SIMD or other acceleration techniques.
An example of using the built-in functionality to create matrices of the desired size is X = rand(sz1,...,szN).
In your specific case, sum(rand(400,400,400),3) should then give you the fastest result.
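If you want to verify this on your own hardware, here is a minimal benchmarking sketch (exact timings will vary with machine and MATLAB version; note that the 3-D array holds 400^3 doubles, about 0.5 GB, so it trades memory for speed):

% Time the loop-based accumulation.
tic;
summ = 0;
for ii = 1:400
    summ = summ + rand(400);
end
tLoop = toc;

% Time the vectorized version; timeit (R2013b+) runs it several
% times for a more stable estimate.
tVec = timeit(@() sum(rand(400, 400, 400), 3));

fprintf('loop: %.3f s, vectorized: %.3f s\n', tLoop, tVec);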

Tuning gain tables to match two curves

I have two data sets; let us name them "actual speed" and "desired speed". My main objective is to match the actual speed with the desired speed.
To do that, in my case I need to tune an FF (1x10), an integral (10x8) and a proportional (10x8) gain table.
My approach so far has been as follows (a sketch of the single-cell sweep from steps 1-4 appears after this list):
1) Start the iteration with 0.1 as the initial value in the first cell (FF[0]) of the FF table.
2) Find the R-squared or correlation between the two data sets (i.e. actual speed and desired speed).
3) Increment the value of the first cell (FF[0]) by 0.25 and compute the R-squared or correlation of the two data sets again.
4) Once the cell value (FF[0]) reaches 2 (the maximum gain, already defined by the lab), evaluate the R-squared values and write back into FF[0] the gain value that gives the minimum error between the two curves.
5) Tune the integral and proportional tables in the same way for the same RPM range.
6) Once they are tuned, move on to the next RPM range and repeat steps 2-5 (RPM ranges: 800-1000, 1000-1200, ..., 3000-3200).
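A minimal sketch of that single-cell sweep, assuming desiredSpeed is a vector already in the workspace; writeGain and measureActualSpeed are hypothetical placeholders for the controller communication and real-time capture:

candidates = 0.1:0.25:2;                 % gain values to try for FF(1)
err = zeros(size(candidates));
for k = 1:numel(candidates)
    writeGain('FF', 1, candidates(k));   % hypothetical: push the gain to the controller
    actual = measureActualSpeed();       % hypothetical: capture one real-time run
    r = corrcoef(actual, desiredSpeed);  % correlation between the two curves
    err(k) = 1 - r(1, 2)^2;              % use 1 - R^2 as the error measure
end
[~, best] = min(err);
writeGain('FF', 1, candidates(best));    % keep the value with minimum error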
Now the problem is that this process takes far too long to complete; for example, it takes around 1 hour to tune one cell of FF, which is very slow.
If possible, please suggest any other approach I could try for tuning the tables. I am using MATLAB R2010a and I can't move to any other version of MATLAB because my controller can only communicate with this one, and I can't use any app for tuning since my GUI is already communicating with the controller and the two data sets are generated in real time.
In the given figure, let us take the (X1,Y1) curve as the desired speed and the (X2,Y2) curve as the actual speed.

Making knnsearch fast when one argument remains constant

I have the following problem.
for i = 1:3000
    selectedIndices = (i-1)*100 + (1:100);   % i-th block of 100 rows, as described below
    [~, dist(:, i)] = knnsearch(C(selectedIndices, :), C);
end
Let me explain the code above. The matrix C is huge (300000 x 1984). C(selectedIndices,:) is a subset of 100 rows of C that depends on the value of i: for i=1 the first 100 points of C are selected, for i=2 C(101:200,:) is selected, and so on. As you can see, the second argument remains constant.
Is there any way to make this run faster? I have tried the following:
- [~,dist(i,1)]=knnsearch(C,C); % obviously goes out of memory
- Sending a bigger chunk of selectedIndices instead of just 100. This adds a little post-processing, which I am not worried about, but it doesn't help: it takes an equivalent amount of time. For example, if I send 100 points of C at a time, it takes 60 seconds; if I send 500, it takes 380 seconds including the post-processing.
- Using parfor so that different sets of selectedIndices are processed in parallel. It doesn't work, as two copies of the big matrix C may get created (I am not sure how parfor works), but I am sure the computer becomes very slow, which negates the advantage of parfor.
- Haven't tried yet: breaking both arguments into smaller chunks and sending them into parfor (a sketch of the chunked idea appears below). Do you think this would make any difference?
I am open to any suggestion; if you feel breaking the matrix up in some different way may speed up the computation, do suggest it. In the end, I only care about finding the closest point from a set of points (here each set has 100 points) for each point in C.
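For what it's worth, a minimal sketch of that chunked idea using pdist2 (also from the Statistics Toolbox) in place of knnsearch; the 'Smallest' option returns only the minimum distance per query point, so the caller never holds a large intermediate distance matrix. The chunk index i and the per-chunk reduction are assumptions here:

% One iteration of the loop: distance from every point in C to its
% nearest neighbour within the i-th 100-row chunk of C.
chunkSize = 100;
chunk = C((i - 1)*chunkSize + (1:chunkSize), :);
d = pdist2(chunk, C, 'euclidean', 'Smallest', 1).';  % 300000-by-1 distances
% Reduce or store d per chunk here; keeping all 3000 chunk results
% (300000 x 3000 doubles) would be far too large for memory.

Since the reference set is tiny (100 rows) while the query set is huge, most of the cost is the cross-distance computation itself, which pdist2 performs with vectorised internals; whether it beats knnsearch on your data is something to measure, not a guarantee.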