I have a huge matrix (5000x5000x100) and I'm trying to smooth the signal at each (i,j) index along the third dimension, but it takes HOURS. I must be doing something inefficiently.
new_mat = zeros(size(my_mat));
for i = 1:size(my_mat,1)
    for j = 1:size(my_mat,2)
        new_mat(i,j,:) = wdenoise(squeeze(my_mat(i,j,:)));
    end
end
I know vectorized indexing would probably help, but I'm not sure how to apply it here. Thanks for any help.
I don't have access to the newer Wavelet Toolbox needed for wdenoise, but since the function operates down the columns when you pass it a matrix, you should be able to remove the inner loop, which may speed things up a bit:
new_mat = zeros(size(my_mat));
for i = 1:size(my_mat,1)
    % squeeze turns each slice into a 5000x100 matrix; transpose so the
    % dim-3 signals become columns, since wdenoise works columnwise.
    new_mat(i,:,:) = wdenoise(squeeze(my_mat(i,:,:)).').';
end
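Going further, you may be able to drop the loop entirely by reshaping every signal into a column of one big 2-D matrix and calling wdenoise once. A sketch, untested since I don't have the toolbox:

% Move the signal dimension (dim 3) to the front, then flatten the two
% spatial dimensions so that every column is one 100-sample signal.
[r, c, n] = size(my_mat);
cols    = reshape(permute(my_mat, [3 1 2]), n, r*c);
den     = wdenoise(cols);               % denoise all columns in one call
new_mat = permute(reshape(den, n, r, c), [2 3 1]);

At 5000x5000x100 the flattened matrix is very large (about 20 GB in double precision), so you may need to process it in column chunks.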
I am trying to avoid for loops, and I have been reading through all the old posts about it, but I am not able to solve my problem. I am new to MATLAB, so apologies for my ignorance.
The thing is that I have a 300x2 cell array, and each cell holds a 128x128x256 matrix. Each one is an image with 128x128 pixels and 256 channels per pixel. The first column of the 300x2 cell array holds my parallel intensity values and the second one my perpendicular intensity values.
What I want to do is to take every pixel of every image (for each component) and sum the intensity values channel by channel.
The code I have is the following:
Image_par_channels = zeros(128,128,256);
Image_per_channels = zeros(128,128,256);
Image_tot_channels = zeros(128,128,256);
for a = 1:128
    for b = 1:128
        for j = 1:256
            for i = 1:numfiles
                Image_par_channels(a,b,j) = Image_par_channels(a,b,j) + Image_cell_par_per{i,1}(a,b,j);
                Image_per_channels(a,b,j) = Image_per_channels(a,b,j) + Image_cell_par_per{i,2}(a,b,j);
            end
            Image_tot_channels(a,b,j) = Image_par_channels(a,b,j) + 2*G*Image_per_channels(a,b,j);
        end
    end
end
I think I could speed it up by using (:,:,j) instead of indexing a and b explicitly, but that still leaves a for loop. I have been trying to use cellfun without any success, due to my lack of expertise. Could you please give me a hand?
I would really appreciate it.
Many thanks and have a nice day!
Y
I believe you could do something like
Image_par_channels = zeros(128,128,256);
Image_per_channels = zeros(128,128,256);
for i = 1:numfiles
    Image_par_channels = Image_par_channels + Image_cell_par_per{i,1};
    Image_per_channels = Image_per_channels + Image_cell_par_per{i,2};
end
Image_tot_channels = Image_par_channels + 2*G*Image_per_channels;
I haven't worked with MATLAB in a long time, but I seem to recall you can do something like this. G is a constant.
EDIT:
Removed the +=. Incremental assignment is not an operator available in MATLAB. You should also note that Image_tot_channels can be built directly in the loop if you don't need the other two variables later.
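For instance, a minimal sketch if only the total is needed:

Image_tot_channels = zeros(128,128,256);
for i = 1:numfiles
    % Accumulate the weighted total directly, skipping the two partial sums.
    Image_tot_channels = Image_tot_channels ...
        + Image_cell_par_per{i,1} + 2*G*Image_cell_par_per{i,2};
end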
I have a large matrix, and I need to extract a small matrix from a sliding window that runs all over the large matrix. The content of the extracted matrix does not change during the operations, so I'd like to extract the submatrix without creating a new copy; I want something that acts like a C pointer into a portion of the large matrix. How can I do this? Please help me, thank you very much :)
I did some benchmarking to test whether avoiding an explicit temporary matrix is faster, and it probably is not:
function move_mean(N)
    M = randi(100,N);
    window_size = [50 50];
    dir_time = timeit(@() direct(M,window_size))
    tmp_time = timeit(@() with_tmp(M,window_size))
end

function direct(M,window_size)
    m = zeros(size(M)./2);
    for r = 1:size(M,1)-window_size(1)
        for c = 1:size(M,2)-window_size(2)
            m(r,c) = mean(mean(M(r:r+window_size(1),c:c+window_size(2))));
        end
    end
end

function with_tmp(M,window_size)
    m = zeros(size(M)./2);
    for r = 1:size(M,1)-window_size(1)
        for c = 1:size(M,2)-window_size(2)
            tmp = M(r:r+window_size(1),c:c+window_size(2));
            m(r,c) = mean(mean(tmp));
        end
    end
end
For M of size 100x100:
dir_time =
    0.22739
tmp_time =
    0.22339
So it seems like using a temporary variable only makes your code more readable, not slower.
In this answer I describe what the 'best' solution is in general. For this answer I define 'best' as the most readable option without a significant performance hit (as partially shown by the existing answer).
Basically, there are two situations that you may be in.
1. You use your submatrix several times
In this situation the best solution in general is to create a temporary variable containing the submatrix.
A = M(rmin:rmax, cmin:cmax)
There may be ways around it (defining a function or anonymous function that indexes into the matrix for you, as sketched below), but in general that won't make you happy.
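A minimal sketch of that workaround, with illustrative names (subM is hypothetical, and rmin/cmin are assumed to be fixed offsets):

% Accessor that indexes into M on demand instead of copying a block out.
subM = @(r, c) M(rmin + r - 1, cmin + c - 1);
v = subM(2, 3);   % reads M(rmin+1, cmin+2) without building a submatrix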
2. You use your submatrix only 1 time
In this case the best solution is typically exactly what you referred to in the comments:
M(rmin:rmax, cmin:cmax)
A specific case of using the submatrix only 1 time, is when it is passed once to a function. Of course the contents of the submatrix may be used in that function several times, but that is irrelevant.
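For example, when the block is consumed by a single call, just index inline:

% The submatrix exists only for the duration of the call.
avg = mean(mean(M(rmin:rmax, cmin:cmax)));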
I am a newbie to MATLAB and I am currently trying to optimize the nested for loop below, which currently runs forever on my input.
for i = 1:size(mat,1)
    for j = 1:size(mat,2)
        mat(i,j) = some_mapping(mat(i,j)+1);
    end
end
However, I can't find a way to vectorize it. I have tried bsxfun and arrayfun, but they do not seem to help (they even run more slowly than the loop).
Maybe I was doing it in a wrong way. Any help is appreciated!
As suggested by Andras Deak, if some_mapping is simply a look-up-table operation, then
mat = some_mapping( mat+1 );
Notes:
- In order for the mapping to work, the values of mat must be integers in the range [0..numel(some_mapping)-1].
- The size of some_mapping does not affect the size of the result; it will be identical in size to mat.
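A toy illustration of the lookup (hypothetical values):

some_mapping = [0 1 4 9 16];   % entry k+1 holds the image of value k
mat = [0 2; 4 1];
mat = some_mapping(mat + 1)    % yields [0 4; 16 1], same size as mat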
I am trying to iterate through a set of samples that seems to show periodic changes. I need to continuously apply the fit function to get the Fourier series coefficients, with the regression running over the previous n samples (in my case, around 30). The problem is that my code is extremely slow! It takes about an hour for a set of 50,000 samples. Is there any way to optimize it? What am I doing wrong?
Here's my code:
function [coefnames,coef] = fourier_regression(vect_waves,n)
    j = 1;
    coef = zeros(length(vect_waves)-n,10);
    x = (1:n)';  % fit requires column vectors; x never changes, so build it once
    for i = n+1:length(vect_waves)
        take_fourier = vect_waves(i-n+1:i);  % previous n samples (assumes a column vector)
        f = fit(x,take_fourier,'fourier4');
        current_coef = coeffvalues(f);
        coef(j,1:length(current_coef)) = current_coef;
        j = j + 1;
    end
    coefnames = coeffnames(f);
end
When I call [coefnames,coef] = fourier_regression(VECTOR,30); this takes forever to compute. Is there any way to fix it? What's wrong with my code?
Note: I have an Intel i7-5500U CPU, 16 GB RAM, and I am using MATLAB 2015a.
As I am not familiar with your application, I am not sure whether it is possible to vectorize the code to improve performance. However, I have a couple of other tips.
One thing you should always consider is preallocation of arrays. In this case you already preallocate the array coef, which is the right approach since you know its size before starting the loop; apply the same to any other arrays filled inside a loop.
Another thing I suggest is profiling your code. The profiler reports which parts of your code consume the most time, helping you focus your effort on improving those parts.
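A minimal way to run the profiler on this function, using your call from above:

profile on
[coefnames, coef] = fourier_regression(VECTOR, 30);
profile viewer   % opens an interactive report of time spent per line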
I have two lists of timestamps, and I'm trying to create a map between them that uses imu_ts as the true time and finds the nearest vicon_ts value for each entry. The output is a 3xd matrix (one column per in-range imu sample) where the first row is the imu_ts index, the third row is the Unix time at that index, and the second row is the index of the closest vicon_ts value at or above the timestamp in the same column.
Here's my code so far and it works, but it's really slow. I'm not sure how to vectorize it.
function tmap = sync_times(imu_ts, vicon_ts)
    tstart = max(vicon_ts(1), imu_ts(1));
    tstop = min(vicon_ts(end), imu_ts(end));
    % Trim imu data to the overlapping time window
    tmap(1,:) = find(imu_ts >= tstart & imu_ts <= tstop);
    tmap(3,:) = imu_ts(tmap(1,:)); % Use imu_ts as ground truth
    % Find nearest indices in vicon data and map
    vic_t = 1;
    for i = 1:size(tmap,2)
        % Advance to the first vicon timestamp at or above this imu timestamp
        while(vicon_ts(vic_t) < tmap(3,i))
            vic_t = vic_t + 1;
        end
        tmap(2,i) = vic_t;
    end
The timestamps are already sorted in ascending order, so this is essentially an O(n) operation, but because it runs in an interpreted loop it is slow. Are there any vectorized ways to do the same thing?
Edit
It appears to be running faster than I expected or first measured, so this is no longer a critical issue. But I would still be interested to see good solutions to this problem.
Have a look at knnsearch in MATLAB. Use the cityblock distance, and also add the constraint that the matched vicon_ts value must not be below its imu_ts neighbour; if it is, take the next index. This is required because cityblock uses absolute distance. Another (and preferred) option is to write a custom distance function.
I believe that your current method is sound, and I would not try to vectorize it any further. Vectorization can actually be harmful when you are trying to optimize inner loops, especially when you know more about the context of your data (e.g. that it is sorted) than the MathWorks engineers can.
Things that I typically look for when I need to optimize a piece of code like this:
- All arrays are pre-allocated (this is the biggest driver of performance).
- Fast inner loops use simple code (MATLAB does pretty effective JIT compilation of basic commands, but must interpret others).
- Any special features of the data are exploited, e.g. algorithms suited to sorted input and early exit conditions in loops.
You're already doing all this. I recommend no change.
A good start might be to get rid of the while loop. Try something like:
for i = 1:size(tmap,2)
    % Distance from every vicon timestamp to this imu timestamp;
    % entries below the imu timestamp are excluded by setting them to Inf.
    C = vicon_ts - tmap(3,i);
    C(C < 0) = Inf;
    [~, tmap(2,i)] = min(C);
end
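For a fully vectorized variant, here is a sketch assuming both vectors are sorted ascending, tmap(3,:) lies within the range of vicon_ts, and discretize is available (R2015a or later):

% Bin each imu timestamp between consecutive vicon timestamps, so that
% vicon_ts(bin) <= t < vicon_ts(bin+1) for every t in tmap(3,:).
bin = discretize(tmap(3,:), vicon_ts);
% Exact matches keep their own index; everything else steps up by one.
hit = reshape(vicon_ts(bin), 1, []) == tmap(3,:);
tmap(2,:) = bin + ~hit;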