I have a .csv file with data on each line in the format (x,y,z,t,f), where f is the value of some function at location (x,y,z) at time t. So each new line in the .csv gives a new set of coordinates (x,y,z,t), with accompanying value f. The .csv is not sorted.
I want to use imagesc to create a video of this data in the xy-plane as time progresses. The way I've done this is by reformatting the matrix M (read from the file) into something more easily usable by imagesc. I'm using three nested loops, roughly like this:
M = csvread('file.csv');
uniqueX = unique(M(:,1));
uniqueY = unique(M(:,2));
uniqueT = unique(M(:,4));
M_reformatted = zeros(length(uniqueX), length(uniqueY), length(uniqueT));
for i = 1:length(uniqueX)
    for j = 1:length(uniqueY)
        for k = 1:length(uniqueT)
            M_reformatted(i,j,k) = M( ...
                M(:,1)==uniqueX(i) & ...
                M(:,2)==uniqueY(j) & ...
                M(:,4)==uniqueT(k), ...
                5 ...
            );
        end
    end
end
Once I have M_reformatted, I can loop over the time steps k and call imagesc on M_reformatted(:,:,k). But the nested loops above are very slow. Is it possible to vectorize them? If so, an outline of the approach would be very helpful.
Edit: as noted in the answers/comments below, I made a mistake: there are several possible z-values, which I haven't taken into account. With only a single z-value, the above would be fine.
This vectorized solution allows for negative values of x and y and is many times faster than the non-vectorized solution (close to 20x for the test case at the bottom).
The idea is to sort the x, y, and t values in lexicographical order using sortrows and then use reshape to build the time slices of M_reformatted.
The code:
idx = find(M(:,3)==0); %// find rows where z==0
M2 = M(idx,:); %// M2 has only the rows where z==0
M2(:,3) = []; %// delete z coordinate in M2
M2(:,[1 2 3]) = M2(:,[3 1 2]); %// change from (x,y,t,f) to (t,x,y,f)
M2 = sortrows(M2); %// sort rows by t, then x, then y
numT = numel(unique(M2(:,1))); %// number of unique t values
numX = numel(unique(M2(:,2))); %// number of unique x values
numY = numel(unique(M2(:,3))); %// number of unique y values
%// fill the time slice matrix with data
M_reformatted = reshape(M2(:,4), numY, numX, numT);
Note: with this ordering, the rows of each time slice correspond to the y values and the columns to the x values. If you want these flipped, use M_reformatted = permute(M_reformatted,[2 1 3]) at the end of the code.
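For completeness, here is a sketch of the playback loop from the question, built on the variables above:
%// Sketch: animate the xy-plane slices over time.
xs = unique(M2(:,2)); %// x values (columns of each slice)
ys = unique(M2(:,3)); %// y values (rows of each slice)
figure
for k = 1:numT
    imagesc(xs, ys, M_reformatted(:,:,k))
    axis xy %// put the y axis in increasing order
    drawnow
end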
The test case I used for M (to compare the result with other solutions) fills a (2N+1)x(2N+1)x(2N+1) space with T time slices:
N = 10;
T = 10;
[x,y,z] = meshgrid(-N:N,-N:N,-N:N);
numPoints = numel(x);
x=x(:); y=y(:); z=z(:);
s = repmat([x,y,z],T,1);
t = repmat(1:T,numPoints,1);
M = [s, t(:), rand(numPoints*T,1)];
M = M( randperm(size(M,1)), : );
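After running the solution above on this test M, here is a quick spot check (just a sketch) that each row lands in the right cell:
%// Spot check (sketch): a random row of M2 = (t,x,y,f) must appear at
%// M_reformatted(rank of its y, rank of its x, rank of its t).
xs = unique(M2(:,2)); ys = unique(M2(:,3)); ts = unique(M2(:,1));
r = M2(randi(size(M2,1)), :);
M_reformatted(ys==r(3), xs==r(2), ts==r(1)) == r(4) %// should display 1 (true)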
I don't think you need to vectorize. I think you need to change your algorithm.
You only need one loop to step through the lines of the CSV file. For every line, you have (x,y,z,t,f) so just store it in M_reformatted where it belongs. Something like this:
M_reformatted = zeros(max(M(:,1)), max(M(:,2)), max(M(:,4)));
for line = 1:size(M,1) % one pass over the rows of M
    z = M(line, 3);
    if z ~= 0, continue; end
    x = M(line, 1);
    y = M(line, 2);
    t = M(line, 4);
    f = M(line, 5);
    M_reformatted(x, y, t) = f;
end
Also note that pre-allocating M_reformatted is a very good idea, but your code may have been getting the size wrong (depending on the data). I think using max like I did will always do the right thing, provided the coordinates are positive integers.
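If the coordinates can be negative or non-integer, one workaround (just a sketch, not part of the approach above) is to index by each coordinate's rank among the unique values:
% Sketch: unique's third output gives the rank of each value,
% which is always a valid positive integer index.
[ux, ~, xi] = unique(M(:,1));
[uy, ~, yi] = unique(M(:,2));
[ut, ~, ti] = unique(M(:,4));
M_reformatted = zeros(numel(ux), numel(uy), numel(ut));
for line = find(M(:,3) == 0).' % only rows with z == 0
    M_reformatted(xi(line), yi(line), ti(line)) = M(line, 5);
end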
I have 3D data from which I need to calculate properties.
To reduce computation I want to discretize the space, calculate the properties per bin instead of per data point, and then reassign the property calculated for the bin back to each data point in it.
Furthermore, I only want to process the bins that actually contain points.
Since there is no 3D binning function in MATLAB, what I do is use histcounts over each dimension and then search for the unique bins that have been assigned to the data points.
a5pre=compositions(:,1);
a7pre=compositions(:,2);
a8pre=compositions(:,3);
%% BINNING
a5pre_edges=[0,linspace(0.005,0.995,19),1];
a5pre_val=(a5pre_edges(1:end-1) + a5pre_edges(2:end))/2;
a5pre_val(1)=0;
a5pre_val(end)=1;
a7pre_edges=[0,linspace(0.005,0.995,49),1];
a7pre_val=(a7pre_edges(1:end-1) + a7pre_edges(2:end))/2;
a7pre_val(1)=0;
a7pre_val(end)=1;
a8pre_edges=a7pre_edges;
a8pre_val=a7pre_val;
[~,~,bin1]=histcounts(a5pre,a5pre_edges);
[~,~,bin2]=histcounts(a7pre,a7pre_edges);
[~,~,bin3]=histcounts(a8pre,a8pre_edges);
bins=[bin1,bin2,bin3];
[A,~,C]=unique(bins,'rows','stable');
a5pre=a5pre_val(A(:,1));
a7pre=a7pre_val(A(:,2));
a8pre=a8pre_val(A(:,3));
It seems that the unique function is pretty time-consuming, so I was wondering if there is a faster way to do it, knowing that each row can only contain integers, or perhaps a totally different approach.
Best regards
function [comps,C]=compo_binner(x,y,z,e1,e2,e3,v1,v2,v3)
C = NaN(length(x),1);     % bin index assigned to each point
comps = NaN(length(x),3); % bin centres of the occupied bins
id = 1;
for i = 1:numel(x)
    % bin centre of point i in each dimension
    B_temp(1,1) = v1(sum(x(i)>e1));
    B_temp(1,2) = v2(sum(y(i)>e2));
    B_temp(1,3) = v3(sum(z(i)>e3));
    % has this bin been seen before?
    C_id = sum(ismember(comps,B_temp),2)==3;
    if sum(C_id)>0
        C(i) = find(C_id);
    else
        comps(id,:) = B_temp;
        id = id+1;
        C_id = sum(ismember(comps,B_temp),2)==3;
        C(i) = find(C_id>0);
    end
end
comps(any(isnan(comps), 2), :) = []; % drop unused rows
end
But it's way slower than the histcounts/unique version. I can't avoid the find function, and that's a function you surely want to avoid in a loop when speed matters...
If I understand correctly you want to compute a 3D histogram. If there's no built-in tool to compute one, it is simple to write one:
function [H, lindices] = histogram3d(data, n)
% histogram3d 3D histogram
%   H = histogram3d(data, n) computes a 3D histogram from (x,y,z) values
%   in the Nx3 array `data`. `n` is the number of bins between 0 and 1.
%   It is assumed all values in `data` are between 0 and 1.
assert(size(data,2) == 3, 'data must be Nx3');
H = zeros(n, n, n);
indices = floor(data * n) + 1; % per-point bin index in each dimension
indices(indices > n) = n;      % clamp values exactly equal to 1
lindices = sub2ind(size(H), indices(:,1), indices(:,2), indices(:,3));
for ii = 1:size(data,1)
    H(lindices(ii)) = H(lindices(ii)) + 1;
end
end
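As an aside, the counting loop at the end is equivalent to a single accumarray call (a sketch, using the lindices and n from inside the function):
% Sketch: accumulate the counts of the linear indices in one call.
H = reshape(accumarray(lindices, 1, [n^3, 1]), n, n, n);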
Now, given your compositions array, and binning each dimension into 20 bins, we get:
[H, indices] = histogram3d(compositions, 20);
idx = find(H);
[x,y,z] = ind2sub(size(H), idx);
reduced_compositions = ([x,y,z] - 0.5) / 20;
The bin centers for H are at ((1:20)-0.5)/20.
On my machine this runs in a fraction of a second for 5 million input points.
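That timing is easy to reproduce (a sketch with random stand-in data, since I don't have your compositions):
% Sketch: time histogram3d on 5 million random points in [0,1]^3.
compositions = rand(5e6, 3);
tic
[H, indices] = histogram3d(compositions, 20);
toc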
Now, for each composition(ii,:), you have a number indices(ii), which matches another number idx(jj), corresponding to reduced_compositions(jj,:). One easy way to make the assignment of results is as follows:
H(H > 0) = 1:numel(idx); % relabel each occupied bin with its index into idx
indices = H(indices);    % per-point index into reduced_compositions
Now for each composition(ii,:), your closest match in the reduced set is reduced_compositions(indices(ii),:).
Assume I have a 50x1 cell array (say Q) with column vectors of varying lengths (say 1568936x1, 88x1, 5040x1, etc.).
Losing values isn't an issue. I need the length of every vector inside the cell to be divisible by a given number (say 500), i.e. each vector truncated to n - mod(n,500) elements, so 1568936x1 becomes 1568500x1, 5040x1 becomes 5000x1, and so on, skipping over vectors such as 88x1.
Currently I have:
z=cell(length(Q),1)
for p=1:length(z)
    n=length(Q{p})
    for w=1:length(z)
        if n-mod(length(Q{w}),500)<500
            w=w+1;
        else
            o=length(Q{w}-mod(length(Q{w}),500));
            for k=1:length(z)
                z=Q{w}((1:o));
            end
        end
    end
end
but when I reach the 88x1 matrix it throws an "index exceeds matrix dimensions" error, although I thought I had covered that with the if condition, which should skip the matrix and move on to the next cell.
This should work fine:
Q = {
rand(58,1);
rand(168,1);
rand(33,1);
rand(199,1);
rand(100,1)
};
Q_len = numel(Q);
K = 50;
Z = cell(Q_len,1);
for i = 1:Q_len
    Qi = Q{i};
    Qi_len = numel(Qi);
    k = floor(Qi_len / K) * K; % largest multiple of K not exceeding Qi_len
    Z{i} = Qi(1:k);
end
Given the starting vectors (shortened here to keep the example small), the final output Z is:
>> cellfun(@numel,Z)
ans =
50
150
0
150
100
If you want a shorter, one-liner version, here is one:
Q = {
rand(58,1);
rand(168,1);
rand(33,1);
rand(199,1);
rand(100,1)
};
K = 50;
Z = cellfun(@(x)x(1:(floor(numel(x)/K)*K)),Q,'UniformOutput',false);
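As a quick check (sketch), the one-liner reproduces the truncated lengths listed above:
% Sketch: the truncated lengths match the loop version's output.
isequal(cellfun(@numel, Z), [50; 150; 0; 150; 100]) % returns true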
I have a MATLAB function which calculates a wavelet bispectrum. In it, a matrix is initialised and complex values are put into it after they are calculated in nested for loops. However, when I time the entire function, I see that a third of the time (~1000 s) is spent assigning these values to the matrix.
The matrix size is 1025 x 1025, so about a million values are assigned. If I generate random numbers and put them into a matrix in a similar way, it takes less than 1 second. So why is it taking so much time in my code?
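For reference, the comparison I mean looks roughly like this (a sketch):
% Sketch: element-wise fill of a matrix of the same size with random
% values; this finishes in well under a second.
nfreq = 1025;
A = nan(nfreq, nfreq);
tic
for j = 1:nfreq
    for k = 1:nfreq
        A(j, k) = rand;
    end
end
toc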
Anyone able to help me out?
I've included a sample of the code. It is putting the million or so values into the Bisp matrix that takes over 1000 s. WT is a matrix containing complex values, and freq is a vector whose entries correspond to the first dimension of WT.
nfreq = 1025;
Bisp = nan(nfreq, nfreq);
Norm = nan(nfreq, nfreq);
for j = 1 : nfreq
    for k = 1 : nfreq
        f3 = freq(j) + freq(k);
        if f3 <= freq(end)
            idx3 = find(freq <= f3);
            idx3 = idx3(end);
            x = (WT(j, :) .* WT(k, :)) .* conj(WT(idx3, :));
            bb = nansum(x);
            norm1 = nansum(abs(WT(j, :) .* WT(k, :)).^2);
            norm2 = nansum(abs(WT(idx3, :).^2));
            a = sqrt(norm1 * norm2);
            Norm(j, k) = a;
            Bisp(j, k) = bb; % this assignment takes about 1000 seconds in total
        end
    end
end
I have two nested for loops that are used to format data that I load. The loops have the following structure:
data = magic(20000);
data = data(:,1:3);
for i=0:10
    for j=0:10
        data_tmp = data((1:100)+100*j+100*10*i,:);
        vx(:, i+1,j+1) = data_tmp(:,1);
        vy(:, i+1,j+1) = data_tmp(:,2);
        vz(:, i+1,j+1) = data_tmp(:,3);
    end
end
I pre-allocate the arrays vx, vy and vz to their desired sizes. However, is there a way to vectorize the for loops to increase efficiency? I'm not convinced there is, because of the indexing in the inner loop, data((1:100)+100*j+100*10*i,:); is there a better way to do this?
It turns out that you have repeated indices in your loop: the block at i=k, j=10 coincides with the block at i=k+1, j=0 for k<10. For example, you read (1:100) + 100*10 + 100*10*0 and then read (1:100) + 100*0 + 100*10*1, which are identical.
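You can verify the overlap quickly (a sketch):
% Sketch: the block read at (i=0, j=10) equals the block read at (i=1, j=0).
idxA = (1:100) + 100*10 + 100*10*0; % i = 0, j = 10
idxB = (1:100) + 100*0 + 100*10*1;  % i = 1, j = 0
isequal(idxA, idxB)                 % returns true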
Reshape w/ Repeated index
If this was what you intended to do, then vectorization needs one more step (index generation).
The following is my suggestion (N = 100, M = 10, where N is the length of data_tmp and M is the maximum loop variable):
N = 100; M = 10;
index = bsxfun(@plus,bsxfun(@plus,(1:N)',reshape(N*(0:M),1,1,M+1)),M*N*(0:M)); % index generation
vx = data(index);                  % linear indexing into column 1
vy = data(index + size(data,1));   % shift by one column height for column 2
vz = data(index + size(data,1)*2); % and by two for column 3
This is not that desirable, but it will work. When I tested it on my laptop, it was twice as fast as your original code with pre-allocation. As I increase the size of data, the gap gets smaller and smaller.
Reshape w/o Repeated index
If not, i.e. you want to reshape each column (3rd dimension first, 2nd dimension last), then the following would work.
First, this is how I interpreted your code:
data = magic(20000);
data = data(:,1:3);
N = 100; M = 10;
for i=0:(M-1)
    for j=0:(M-1)
        data_tmp = data((1:N)+N*j+N*M*i,:);
        vx(:, i+1,j+1) = data_tmp(:,1);
        vy(:, i+1,j+1) = data_tmp(:,2);
        vz(:, i+1,j+1) = data_tmp(:,3);
    end
end
Note that the loops end at (M-1).
The following is my suggestion:
vx = permute(reshape(data(1:N*M*M,1), N, M, M),[1,3,2]);
vy = permute(reshape(data(1:N*M*M,2), N, M, M),[1,3,2]);
vz = permute(reshape(data(1:N*M*M,3), N, M, M),[1,3,2]);
On my laptop, this is 4 times faster than the original code. As I increase the size, the speedup approaches 2.
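As a quick equivalence check (a sketch; vx0, vy0, vz0 are hypothetical names for the outputs of the loop version above):
% Sketch: the reshape+permute results match the loop results.
isequal(vx, vx0) && isequal(vy, vy0) && isequal(vz, vz0) % returns true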
Just in case you want to stick with the loop, here is a much faster way to do this:
data = randi(100,20000,3);
[vx,vy,vz] = deal(zeros(100,11,11));
[J,I] = ndgrid(1:11,1:11);
c = 1;
for k = 0:100:11000
    vx(:,I(c),J(c)) = data((1:100)+k,1);
    vy(:,I(c),J(c)) = data((1:100)+k,2);
    vz(:,I(c),J(c)) = data((1:100)+k,3);
    c = c+1;
end
My guess is that the reshape approach from @Dohyun's answer is what you're looking for (it is about 10x faster than this, and about 10000x faster than your code), but for the next time you use loops, this may be useful.
And here is another option to do this without reshape, in a similar time to the reshape version:
[vx,vy,vz] = deal(zeros(100,10,11));
vx(:) = data(1:11000,1);
vy(:) = data(1:11000,2);
vz(:) = data(1:11000,3);
vx = permute(vx,[1 3 2]);
vy = permute(vy,[1 3 2]);
vz = permute(vz,[1 3 2]);
The idea is that you define the shape of [vx,vy,vz] while allocating them.
I have a set of three vectors (stored in a 3xN matrix) which are 'entangled' (e.g. some values in the second row should be in the third row and vice versa). This 'entanglement' can be seen in the figure in which alpha2 is plotted. To separate the vectors I use a difference-based approach: I calculate the difference of one value with respect to the three next values (e.g. comparing (1,i) with (:,i+1)), then take the minimum and store that. The method works to separate two of the three vectors, but not the last one.
I was wondering if you could share your ideas on how to solve this problem (if possible). I have added my code below.
Thanks in advance!
Problem in figures (images omitted):
clear all; close all; clc;
%%
alpha2 = [-23.32 -23.05 -22.24 -20.91 -19.06 -16.70 -13.83 -10.49 -6.70;
-0.46 -0.33 0.19 2.38 5.44 9.36 14.15 19.80 26.32;
-1.58 -1.13 0.06 0.70 1.61 2.78 4.23 5.99 8.09];
%%% Original
figure()
hold on
plot(alpha2(1,:))
plot(alpha2(2,:))
plot(alpha2(3,:))
%%% Store start values
store1(1,1) = alpha2(1,1);
store2(1,1) = alpha2(2,1);
store3(1,1) = alpha2(3,1);
for i=1:size(alpha2,2)-1
    for j=1:size(alpha2,1)
        Alpha1(j,i) = abs(store1(1,i)-alpha2(j,i+1));
        Alpha2(j,i) = abs(store2(1,i)-alpha2(j,i+1));
        Alpha3(j,i) = abs(store3(1,i)-alpha2(j,i+1));
        [~, I] = min(Alpha1(:,i));
        store1(1,i+1) = alpha2(I,i+1);
        [~, I] = min(Alpha2(:,i));
        store2(1,i+1) = alpha2(I,i+1);
        [~, I] = min(Alpha3(:,i));
        store3(1,i+1) = alpha2(I,i+1);
    end
end
%%% Plot to see if separation worked
figure()
hold on
plot(store1)
plot(store2)
plot(store3)
Solution using extrapolation via polyfit:
The idea is pretty simple: iterate over all positions i and use polyfit to fit a polynomial through the trailing values F(:,max(1,i-(d+1))) up to F(:,i), where d is the maximum extrapolation degree. Use those polynomials to extrapolate the function values at position i+1. Then compute the permutation of the real values F(:,i+1) that fits those extrapolations best. This should work quite well if there are only a few functions involved. There is certainly some room for improvement, but for your simple setting it should suffice.
function F = untangle(F, maxExtrapolationDegree)
%// UNTANGLE(F) untangles the functions F(i,:) via extrapolation.
if nargin<2
    maxExtrapolationDegree = 4;
end
extrapolate = @(f) polyval(polyfit(1:length(f),f,length(f)-1),length(f)+1);
extrapolateAll = @(F) cellfun(extrapolate, num2cell(F,2));
fitCriterion = @(X,Y) norm(X(:)-Y(:),1);
nFuncs = size(F,1);
nPoints = size(F,2);
swaps = perms(1:nFuncs);
errorOfFit = zeros(1,size(swaps,1));
for i = 1:nPoints-1
    nextValues = extrapolateAll(F(:,max(1,i-(maxExtrapolationDegree+1)):i));
    for j = 1:size(swaps,1)
        errorOfFit(j) = fitCriterion(nextValues, F(swaps(j,:),i+1));
    end
    [~,j_bestSwap] = min(errorOfFit);
    F(:,i+1) = F(swaps(j_bestSwap,:),i+1);
end
Initial solution (not that pretty - skip this part):
This is a similar solution that tries to minimize the sum of the derivatives, up to some degree, of the vector-valued function F = @(j) alpha2(:,j). It does so by stepping through the positions i and checking all possible permutations of the coordinates at position i to get a minimal seminorm of the function F(1:i).
(I'm actually wondering right now if there is any canonical mathematical way to define the seminorm so we get our expected results... I initially was going for the H^1 and H^2 seminorms, but they didn't quite work...)
function F = untangle(F)
nFuncs = size(F,1);
nPoints = size(F,2);
seminorm = @(x,i) sum(sum(abs(diff(x(:,1:i),1,2)))) + ...
           sum(sum(abs(diff(x(:,1:i),2,2)))) + ...
           sum(sum(abs(diff(x(:,1:i),3,2)))) + ...
           sum(sum(abs(diff(x(:,1:i),4,2))));
doSwap = @(x,swap,i) [x(:,1:i-1), x(swap,i:end)];
swaps = perms(1:nFuncs);
normOfSwap = zeros(1,size(swaps,1));
for i = 2:nPoints
    for j = 1:size(swaps,1)
        normOfSwap(j) = seminorm(doSwap(F,swaps(j,:),i),i);
    end
    [~,j_bestSwap] = min(normOfSwap);
    F = doSwap(F,swaps(j_bestSwap,:),i);
end
Usage:
The command alpha2 = untangle(alpha2); will untangle your functions:
It should even work for more complicated data, like these shuffled sine-waves:
nPoints = 100;
nFuncs = 5;
t = linspace(0, 2*pi, nPoints);
F = bsxfun(@(a,b) sin(a*b), (1:nFuncs).', t);
for i = 1:nPoints
    F(:,i) = F(randperm(nFuncs),i);
end
Remark: I guess that if you already know your functions will be quadratic or of some other special form, RANSAC would be a better idea for a larger number of functions. This could also be useful if the functions are not given with the same x-value spacing.