Fast approximation/implementation of power function in MATLAB - matlab

I was able to find a similar answer for C/C++ in the following question:
Fast implementation/approximation of pow() function in C/C++
My code's bottleneck, according to the profiler, is something similar to the following code:
a=rand(100,1); % a is an array of doubles
b=1.2; % the power can be a double, not only an integer
temp=power(a,b); % this line takes the most time
For my requirement, only the first 3-4 digits are significant, so any fast approximation of power would be very useful to me. Any suggestions are welcome.
More information:
This is how I calculated the power using exp and log; it gives roughly a 50% improvement. Now I need to find approximate versions of log and exp and check whether that is faster still.
a=rand(1000,1);
tic;
for i=1:100000
powA=power(a,1.1);
end
toc;
tic;
for i=1:100000
powB=exp(1.1*log(a));
end
toc;
Elapsed time is 4.098334 seconds.
Elapsed time is 1.994894 seconds.
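Another possible trick when the exponent is a fixed rational number (a sketch; whether it actually beats power on a given MATLAB version would need to be benchmarked): since 1.2 = 1 + 1/5, the power a.^1.2 can be rewritten as a .* a.^(1/5), and the fifth root can be computed with nthroot:

```matlab
% Sketch: rewrite a.^1.2 as a .* nthroot(a,5), valid for nonnegative a.
a = rand(1000,1);
powC = a .* nthroot(a, 5);   % a.^1 .* a.^(1/5) == a.^1.2
% Sanity check against the exact result (difference should be near eps):
maxErr = max(abs(powC - a.^1.2));
```

This only helps because the exponent is known in advance; for an arbitrary double exponent the exp/log formulation above remains the general approach.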

Related

Detecting the duration of simulation

I have a simulation which will run for several hours in Simulink. Is it possible to see the total duration of the simulation in Matlab or Simulink?
tic/toc does this: tic starts the timer, toc stops it. The time is given in seconds; adjust according to your needs.
tic
% code here
time = toc;
fprintf('It took %f s\n', time)
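If several measurements may be running at once (for example, timing the whole simulation while also timing individual stages), tic can return a timer handle that toc accepts, so the timers do not interfere. A small sketch:

```matlab
tTotal = tic;            % start an independent timer and keep its handle
% ... simulation code here ...
elapsed = toc(tTotal);   % seconds elapsed since tTotal was started
fprintf('It took %f s\n', elapsed)
```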

different tic toc results in one-time run or run in steps

I've run into something weird! I wrote a MATLAB script that compares the running times of two algorithms. I wrap the code of each algorithm in tic/toc calls so I can compare their time complexities. But here is the weird part:
If I run the script in one go, algorithm 1 takes longer than algorithm 2, but if I put breakpoints after each toc, algorithm 1 takes less time!
To understand better what I mean, consider the following:
tic
% some matlab codes implementing algorithm1
time1 = toc;
disp(['t1 = ',num2str(time1)]) % here is the place for first breakpoint
tic
% some matlab codes implementing algorithm2
time2 = toc;
disp(['t2 = ',num2str(time2)]) % here is the place for second breakpoint
Can anyone please explain why this happens? Thanks.
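As a side note (this does not explain the breakpoint effect itself), single tic/toc measurements are noisy; MATLAB's timeit function runs the measured code several times and reports a median, which usually gives more stable comparisons. A sketch, assuming each algorithm has been wrapped in a function (algorithm1 and algorithm2 are hypothetical names here):

```matlab
% Hypothetical wrappers: algorithm1() and algorithm2() are assumed to
% contain the code being compared.
t1 = timeit(@() algorithm1());
t2 = timeit(@() algorithm2());
fprintf('t1 = %g s, t2 = %g s\n', t1, t2)
```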

Which of the two sum calculations in Matlab / Octave is optimal on a row vector?

I'm fairly new to Matlab / Octave and machine learning, but so far I've learned that you want to avoid iterative loops for summation and vectorize as much as possible.
Given a row vector like: x = [ 1,2,3,4,5]
You can calculate the sum with these two methods:
sum(x)
x * ones(length(x),1)
While my gut tells me to use the built-in function, the second option feels more 'vectorized'.
Which of the two is more optimal, and why? Are there tradeoffs between the two in performance, memory use, etc.?
In general, it seems like sum is better: the time/memory overhead of allocating the "all ones" vector is not worth it.
However, when one needs to repeatedly sum over vectors of the same length, the allocation of the vector can be done only once, thus reducing the average overhead.
On my machine:
"Caching" the all ones vector:
N=100;T=500000; x=rand(T,N);
tic;o=ones(N,1);for ii=1:T, x(ii,:)*o;end;toc
Elapsed time is 0.480388 seconds.
While using sum:
tic;for ii=1:T, sum(x(ii,:));end;toc
Elapsed time is 0.488517 seconds.
So, it's slightly faster to use the "all ones" vector method in case of repeated sums.
If you take the allocation of the "all ones" vector out of the time calculation, you end up with:
N=100;T=500000; x=rand(T,N);o=ones(N,1);
tic;for ii=1:T, x(ii,:)*o;end;toc
Elapsed time is 0.477762 seconds.
But again, you'll have to allocate it at some point...
Ok, I did some more digging:
From a performance standpoint the built-in sum() is much better:
x = rand(1,100000000);
%slowwwww
t = cputime; x * ones(length(x),1); e= cputime - t; e
% Faster
t = cputime; sum(x); e= cputime - t; e
I'm guessing that using an extra vector of ones is also needless memory use. Since there is no performance gain over sum(), the non-native method is far less optimal.
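For completeness, in the repeated-sum benchmarks above the per-row loop itself can be avoided entirely: sum accepts a dimension argument, so all T row sums can be computed in a single call, which is normally faster than either looped variant:

```matlab
N = 100; T = 500000; x = rand(T, N);
tic; s = sum(x, 2); toc   % one call computes all T row sums at once
```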

How does TIC TOC in matlab work?

I am working with loops and using tic/toc to check the time. I get different times when I run the same loop; they are close to each other, e.g. 98.2 and 97.7. Secondly, when I halve the size of the loop, I expect the time to halve as well, but it doesn't. Can anyone explain how tic/toc actually works?
Thanks.
tic
for i=1:124
for j=1:10
for k=1:11
end
end
end
toc
Secondly, I tried to use tic/toc inside the loop as shown below. Will it return the total time? I get a number, but I can't verify whether it actually is the total.
for i=1:124
tic
for j=1:10
for k=1:11
end
end
toc
end
tic and toc just measure elapsed wall-clock time in seconds. MATLAB now has a JIT compiler, which means the actual computation time cannot be predicted reliably from a single run; note also that your loop bodies are empty, so the JIT may largely optimize them away, which would explain why halving the loop size does not halve the time.
Matlab is (at least in this context) not a real-time system, so you will essentially always get slightly different elapsed times for the same code.
Read this; it's nicely explained and hopefully helps: http://www.matlabtips.com/matlab-is-no-longer-slow-at-for-loops/
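Regarding the second snippet in the question: each toc there prints only the elapsed time of one outer iteration, not a running total. To get a total from per-iteration timings, one can accumulate them with the timer-handle form of tic/toc (a sketch):

```matlab
total = 0;
for i = 1:124
    t = tic;                  % per-iteration timer handle
    for j = 1:10
        for k = 1:11
        end
    end
    total = total + toc(t);   % add this iteration's elapsed time
end
fprintf('Total time: %f s\n', total)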

matlab if statements with CUDA

I have the following matlab code:
N = 1000;
randarray = gpuArray(rand(N,1));
tic
g=0;
for i=1:N
if randarray(i)>10
g=g+1;
end
end
toc
secondrandarray = rand(N,1);
g=0;
tic
for i=1:N
if secondrandarray(i)>10
g=g+1;
end
end
toc
Elapsed time is 0.221710 seconds.
Elapsed time is 0.000012 seconds.
1) Why is the if clause so slow on the GPU? It is slowing down all my attempts at optimisation
2) What can I do to get around this limitation?
Thanks
This is typically a bad thing to do, no matter whether you are doing it on the CPU or the GPU.
The following would be a good way to do the operation you are looking at.
N = 1000;
randarray = gpuArray(100 * rand(N,1));
tic
g = nnz(randarray > 10);
toc
I do not have PCT and cannot verify whether this actually works (the number of functions supported on the GPU is fairly limited).
However if you had Jacket, you would definitely be able to do the following.
N = 1000;
randarray = gdouble(100 * rand(N, 1));
tic
g = nnz(randarray > 10);
toc
Full disclosure: I am one of the engineers developing Jacket.
No expert on the Matlab gpuArray implementation, but I would suspect that each randarray(i) access in the first loop triggers a PCI-e transaction to retrieve a value from GPU memory, which will incur a very large latency penalty. You might be better served by calling gather to transfer the whole array in a single transaction instead and then loop over a local copy in host memory.
Using MATLAB R2011b and Parallel Computing Toolbox on a now rather old GPU (Tesla C1060), here's what I see:
>> g = 100*parallel.gpu.GPUArray.rand(1, 1000);
>> tic, sum(g>10); toc
Elapsed time is 0.000474 seconds.
Operating on scalar elements of a gpuArray one at a time is always going to be slow, so using the sum method is much quicker.
I cannot comment on a prior solution because I'm too new, so I'm extending the solution from Pavan here. The nnz function is not (yet) implemented for gpuArrays, at least in the MATLAB version I'm using (R2012a).
In general, it is much better to vectorize Matlab code. However, in some cases looped code can run fast in Matlab because of JIT compilation.
Check the results from
N = 1000;
randarray_cpu = rand(N,1);
randarray_gpu = gpuArray(randarray_cpu);
threshold = 0.5;
% CPU: looped
g=0;
tic
for i=1:N
if randarray_cpu(i)>threshold
g=g+1;
end
end
toc
% CPU: vectorized
tic
g = nnz(randarray_cpu>threshold);
toc
% GPU: looped
tic
g=0;
for i=1:N
if randarray_gpu(i)>threshold
g=g+1;
end
end
toc
% GPU: vectorized
tic
g_d = sum(randarray_gpu > threshold);
g = gather(g_d); % I'm assuming that you want this in the CPU at some point
toc
Which gives (on my Core i7 + GeForce 560Ti):
Elapsed time is 0.000014 seconds.
Elapsed time is 0.000580 seconds.
Elapsed time is 0.310218 seconds.
Elapsed time is 0.000558 seconds.
So what we see from this case is:
Loops in Matlab are not considered good practice, but in your particular case the loop runs fast because Matlab somehow "precompiles" it internally. I changed your threshold from 10 to 0.5, since rand will never give you a value greater than 1.
The looped GPU version performs horribly because at each loop iteration a kernel is launched (or data is read back from the GPU, however TMW implemented it...), which is slow. Many small memory transfers while computing basically nothing are the worst thing one can do on a GPU.
From the last (best) GPU result, the conclusion would be: unless the data is already on the GPU, it doesn't make sense to do this calculation there. Since the arithmetic intensity of the operation is essentially zero, the memory-transfer overhead never pays off. If this is part of a bigger GPU calculation, it's fine. If not... better stick to the CPU ;)