I was wondering if there is a way of speeding up (maybe via vectorization?) the conditional filling of huge sparse matrices (e.g. ~ 1e10 x 1e10). Here's the sample code where I have a nested loop, and I fill in a sparse matrix only if a certain condition is met:
% We are given the following cell arrays of the same size:
% all_arrays_1
% all_arrays_2
% all_mapping_arrays
N = 1e10;
% The number of nnz (non-zeros) is unknown until the loop finishes
huge_sparse_matrix = sparse([],[],[],N,N);
n_iterations = numel(all_arrays_1);
for iteration = 1:n_iterations
    array_1 = all_arrays_1{iteration};
    array_2 = all_arrays_2{iteration};
    mapping_array = all_mapping_arrays{iteration};
    n_elements_in_array_1 = numel(array_1);
    n_elements_in_array_2 = numel(array_2);
    for element_1 = 1:n_elements_in_array_1
        element_2 = mapping_array(element_1);
        % Sanity check:
        if element_2 <= n_elements_in_array_2
            item_1 = array_1(element_1);
            item_2 = array_2(element_2);
            huge_sparse_matrix(item_1,item_2) = 1;
        end
    end
end
I am struggling to vectorize the nested loop. As far as I understand, filling a sparse matrix element by element is very slow when the number of entries to fill is large (~100M). I need to work with a sparse matrix since it has dimensions in the 10,000M x 10,000M range, but this way of filling a sparse matrix in MATLAB is very slow.
Edits:
I have updated the names of the variables to reflect their nature better. There are no function calls.
Addendum:
This code builds the adjacency matrix for a huge graph. The variable all_mapping_arrays holds mapping arrays (~ adjacency relationships) between nodes of the graph in a local representation, which is why I need array_1 and array_2 to map the adjacency to a global representation.
I think it is the incremental update of the sparse matrix, rather than the loop-based conditional, that is slowing things down.
When you add a new entry to a sparse matrix via something like A(i,j) = 1 it typically requires that the whole matrix data structure is re-packed. This is an expensive operation. If you're interested, MATLAB uses a CCS data structure (compressed column storage) internally, which is described under the Data Structure section here. Note the statement:
This scheme is not efficient for manipulating matrices one element at a time
Generally, it's far better (faster) to accumulate the non-zero entries in the matrix as a set of triplets and then make a single call to sparse. For example (warning - brain compiled code!!):
% Inputs:
% N
% prev_array and next_array
% n_labels_prev and n_labels_next
% mapping
% allocate space for matrix entries as a set of "triplets"
ii = zeros(N,1);
jj = zeros(N,1);
xx = zeros(N,1);
nn = 0;
for next_label_ix = 1:n_labels_next
    prev_label = mapping(next_label_ix);
    if prev_label <= n_labels_prev
        prev_global_label = prev_array(prev_label);
        next_global_label = next_array(next_label_ix);
        % reallocate triplets on demand
        if (nn + 1 > length(ii))
            ii = [ii; zeros(N,1)];
            jj = [jj; zeros(N,1)];
            xx = [xx; zeros(N,1)];
        end
        % append a new triplet and increment counter
        ii(nn + 1) = next_global_label; % row index
        jj(nn + 1) = prev_global_label; % col index
        xx(nn + 1) = 1.0; % coefficient
        nn = nn + 1;
    end
end
% we may have over-allocated our triplets, so trim the arrays
% based on our final counter
ii = ii(1:nn);
jj = jj(1:nn);
xx = xx(1:nn);
% just make a single call to "sparse" to pack the triplet data
% as a sparse matrix object
sp_graph_adj_global = sparse(ii,jj,xx,N,N);
I'm allocating in chunks of N entries at a time. Assuming that you know a lot about the structure of your matrix, you might be able to use a better value here.
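For the question's actual nested loops, the same triplet idea can be pushed one step further: each outer iteration can be vectorized with logical indexing, so the inner loop disappears entirely. A brain-compiled sketch, assuming all_arrays_1, all_arrays_2, all_mapping_arrays, n_iterations and N are defined as in the question:
% Vectorized sketch: collect the row/column index lists per iteration,
% then make a single call to "sparse" at the very end.
ii = cell(n_iterations, 1);
jj = cell(n_iterations, 1);
for iteration = 1:n_iterations
    array_1 = all_arrays_1{iteration};
    array_2 = all_arrays_2{iteration};
    mapping_array = all_mapping_arrays{iteration};
    m = mapping_array(1:numel(array_1));        % one mapping per element of array_1
    valid = m(:) <= numel(array_2);             % vectorized "sanity check"
    ii{iteration} = reshape(array_1(valid), [], 1);      % row indices
    jj{iteration} = reshape(array_2(m(valid)), [], 1);   % column indices
end
ii = vertcat(ii{:});
jj = vertcat(jj{:});
% Note: sparse() sums duplicate (i,j) pairs; wrap the result in spones() if
% every stored entry must stay exactly 1.
huge_sparse_matrix = sparse(ii, jj, 1, N, N);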
Hope this helps.
Related
I have 3D data from which I need to calculate properties.
To reduce computation I want to discretize the space and calculate the properties per bin instead of per individual data point, and then reassign the property calculated for the bin back to the data points.
I further only want to calculate the bins which have points within them.
Since there is no 3D binning function in MATLAB, what I do is use histcounts over each dimension and then search for the unique bins that have been assigned to the data points.
a5pre=compositions(:,1);
a7pre=compositions(:,2);
a8pre=compositions(:,3);
%% BINNING
a5pre_edges=[0,linspace(0.005,0.995,19),1];
a5pre_val=(a5pre_edges(1:end-1) + a5pre_edges(2:end))/2;
a5pre_val(1)=0;
a5pre_val(end)=1;
a7pre_edges=[0,linspace(0.005,0.995,49),1];
a7pre_val=(a7pre_edges(1:end-1) + a7pre_edges(2:end))/2;
a7pre_val(1)=0;
a7pre_val(end)=1;
a8pre_edges=a7pre_edges;
a8pre_val=a7pre_val;
[~,~,bin1]=histcounts(a5pre,a5pre_edges);
[~,~,bin2]=histcounts(a7pre,a7pre_edges);
[~,~,bin3]=histcounts(a8pre,a8pre_edges);
bins=[bin1,bin2,bin3];
[A,~,C]=unique(bins,'rows','stable');
a5pre=a5pre_val(A(:,1));
a7pre=a7pre_val(A(:,2));
a8pre=a8pre_val(A(:,3));
It seems that the unique function is pretty time consuming, so I was wondering if there is a faster way to do it, knowing that the rows of bins can only contain integers, or a totally different approach.
Best regards
function [comps,C] = compo_binner(x,y,z,e1,e2,e3,v1,v2,v3)
    C = NaN(length(x),1);
    comps = NaN(length(x),3);
    id = 1;
    for i = 1:numel(x)
        B_temp(1,1) = v1(sum(x(i)>e1));
        B_temp(1,2) = v2(sum(y(i)>e2));
        B_temp(1,3) = v3(sum(z(i)>e3));
        C_id = sum(ismember(comps,B_temp),2)==3;
        if sum(C_id)>0
            C(i) = find(C_id);
        else
            comps(id,:) = B_temp;
            id = id+1;
            C_id = sum(ismember(comps,B_temp),2)==3;
            C(i) = find(C_id>0);
        end
    end
    comps(any(isnan(comps), 2), :) = [];
end
But it's way slower than the histcounts/unique version. I can't avoid the find function, and that's a function you really want to avoid in a loop when it's about speed...
If I understand correctly you want to compute a 3D histogram. If there's no built-in tool to compute one, it is simple to write one:
function [H, lindices] = histogram3d(data, n)
% histogram3d 3D histogram
% H = histogram3d(data, n) computes a 3D histogram from (x,y,z) values
% in the Nx3 array `data`. `n` is the number of bins between 0 and 1.
% It is assumed all values in `data` are between 0 and 1.
assert(size(data,2) == 3, 'data must be Nx3');
H = zeros(n, n, n);
indices = floor(data * n) + 1;
indices(indices > n) = n;
lindices = sub2ind(size(H), indices(:,1), indices(:,2), indices(:,3));
for ii = 1:size(data,1)
    H(lindices(ii)) = H(lindices(ii)) + 1;
end
end
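As an aside, the counting loop inside histogram3d can also be replaced by a single accumarray call; a minimal sketch using the same lindices and n as above:
% Loop-free counting (sketch): accumarray adds 1 for every occurrence of each
% linear bin index, producing the same n-by-n-by-n histogram.
H = reshape(accumarray(lindices, 1, [n^3, 1]), n, n, n);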
Now, given your compositions array, and binning each dimension into 20 bins, we get:
[H, indices] = histogram3d(compositions, 20);
idx = find(H);
[x,y,z] = ind2sub(size(H), idx);
reduced_compositions = ([x,y,z] - 0.5) / 20;
The bin centers for H are at ((1:20)-0.5)/20.
On my machine this runs in a fraction of a second for 5 million input points.
Now, for each composition(ii,:), you have a number indices(ii), which matches another number idx(jj), corresponding to reduced_compositions(jj,:). One easy way to make the assignment of results is as follows:
H(H > 0) = 1:numel(idx);
indices = H(indices);
Now for each composition(ii,:), your closest match in the reduced set is reduced_compositions(indices(ii),:).
I couldn't find any relevant topics so I'm posting this one:
How can I parallelize operations/calculations on a huge array? The problem is that I use arrays of size 10000000x10, which are basically small enough to operate on inline, but running them in parfor causes a not-enough-memory error.
The code goes:
function aggregatedRes = umbrellaFct(preparedInputsAsaCellArray)
% Description: function used to parallelize calculation
% preparedInputsAsaCellArray - cell array with size of 1x10, for example first
% cell {1,1} would be: {array,corr,df}
% array - an array 1e7 by 10, with data from different regions to be aggregated
% corr - correlation matrix
% df - degrees of freedom as an integer value
% create a function handle from child function
fcnHndl = @childFct;
% For each available cell - calculate and aggregate
output = cell(1, numel(preparedInputsAsaCellArray));
parfor j = 1:numel(preparedInputsAsaCellArray)
    output{j} = fcnHndl(preparedInputsAsaCellArray{j}{:});
end
% Extract results
for i = 1:numel(preparedInputsAsaCellArray)
    aggregatedRes(:,i) = output{i};
end
end
And the child function used in the umbrella function:
function aggregated = childFct(array, corr, df)
% Description:
% array - an array 1e7 by 10, with data from different regions to be aggregated
% corr - correlation matrix
% df - degrees of freedom as an integer value
% get num of cases for multivariate nums
cases = length(array(:,1));
% preallocate space
corrMatrix = double(zeros(cases, size(corr,1)));
u = corrMatrix;
sorted = corrMatrix;
s = zeros(length(array(:,1)), length(array(1,:)));
% calc multivariate nums
u = mvtrnd(corr, df, cases);
clear corr cases
% calc Student's t cumulative dist
u = tcdf(u, df);
clear df
% double sort
[~, sorted] = sort(u);
clear u
[~, corrMatrix] = sort(sorted);
clear sorted
for jj = 1:length(array(1,:))
    s(:,jj) = array(corrMatrix(:,jj),jj);
end
clear array corrMatrix jj
aggregated = sum(s,2);
end
I already tried with distributed memory but ultimately failed.
I will appreciate any help or hint!
Edit: The logic behind the functions is to calculate and aggregate data from different regions. In total there are ten arrays, all with size 1e7x10. My idea was to use parfor to simultaneously calculate and aggregate them - to save time. It works fine for smaller arrays (like 1e6x10) but ran out of memory for 1e7x10 (in case of more than 2 pools). I suspect the way I used and implemented parfor could be wrong and inefficient.
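For context, a rough back-of-the-envelope memory estimate (a sketch; the number of simultaneous copies per worker is an assumption) shows why more than a couple of workers can exhaust RAM:
% One 1e7-by-10 double array is 1e7 * 10 * 8 bytes = 800 MB. Inside childFct,
% several arrays of that size (array, u, sorted, corrMatrix, s) exist at once,
% and every parfor worker holds its own copies.
bytes_per_array   = 1e7 * 10 * 8;   % ~800 MB per array
copies_per_worker = 5;              % assumed: array, u, sorted, corrMatrix, s
workers           = 4;              % e.g. a 4-worker pool
total_gb = bytes_per_array * copies_per_worker * workers / 2^30   % ~15 GB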
The following MATLAB code fills elements of a matrix with size 2IJ x 2IJ.
for i=1:(I-2)
    for j=1:(J-2)
        ij1 = i*J+j+1; % row
        ij2 = i*J+j+1 + I*J; % col
        D1(ij1,ij1) = 2;
        D1(ij1,ij2) = -1;
    end
end
Is there any way I can parallelize it using MATLAB's parfor command? You can assume any element not defined is 0, so this matrix ends up being sparse (mostly 0s).
Before using parfor it is recommended to read the guidelines on deciding when to use parfor. Especially this:
Generally, if you want to make code run faster, first try to vectorize it.
Here vectorization can be used effectively to compute the indices of the nonzero elements. Those indices are then passed to the sparse function. For that you need to define one of i or j as a column vector and the other as a row vector; implicit expansion takes effect and the index matrices are computed.
I = 300;
J = 300;
i = (1:I-2).';
j = 1:J-2;
ij1 = i*J+j+1;
ij2 = i*J+j+1 + I*J;
D1 = sparse(ij1, ij1, 2, 2*I*J, 2*I*J) + sparse(ij1, ij2, -1, 2*I*J, 2*I*J);
However, for comparison, this can be a way of using parfor (not tested):
D1 = sparse(2*I*J, 2*I*J);
parfor i=1:(I-2)
    for j=1:(J-2)
        ij1 = i*J+j+1;
        ij2 = i*J+j+1 + I*J;
        D1 = D1 + sparse([ij1;ij1], [ij1;ij2], [2;-1], 2*I*J, 2*I*J);
    end
end
Here D1 is used as a reduction variable.
I have large sets of 3D data consisting of 1D signals acquired in 2D space.
The first step in processing this data is thresholding all signals to find the arrival of a high-amplitude pulse. This pulse is present in all signals and arrives at different times.
After thresholding, the 3D data set should be reordered so that every signal starts at the arrival of the pulse and what came before is thrown away (the end of the signals is of no importance; for now I concatenate zeros to the end of all signals so the data remains the same size).
Now, I have implemented this in the following manner:
First, I start by calculating the sample number of the first sample exceeding the threshold in all signals:
M = randn(1000,500,500); % example matrix of realistic size
threshold = 0.25*max(M(:,1,1)); % 25% of the maximum in the first signal as threshold
[~,index] = max(M>threshold); % indices of first sample exceeding threshold in all signals
Next, I want all signals to be shifted so that they all start with the pulse. For now, I have implemented it this way:
outM = zeros(size(M)); % preallocation for speed
for i = 1:size(M,2)
    for j = 1:size(M,3)
        outM(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
    end
end
This works fine, and I know for-loops are not that slow anymore, but this easily takes a few seconds for my datasets on my machine. A single iteration of the for-loop takes about 0.05-0.1 sec, which seems slow to me for just copying a vector containing 500-2000 double values.
Therefore, I have looked into the best way to tackle this, but for now I haven't found anything better.
I have tried several things: 3D masks, linear indexing, and parallel loops (parfor).
For 3D masks, I checked to see if any improvements are possible. Therefore I first construct a logical mask, and then compare the speed of the logical-mask indexing/copying to the double nested for loop.
%% set up for logical mask copying
AA = logical(ones(500,1)); % only copy the first 500 values after the threshold value
Mask = logical(zeros(size(M)));
Jepla = zeros(500,size(M,2),size(M,3));
for i = 1:size(M,2)
    for j = 1:size(M,3)
        Mask(index(1,i,j):index(1,i,j)+499,i,j) = AA;
    end
end
%% speed comparison
tic
Jepla = M(Mask);
toc
tic
for i = 1:size(M,2)
    for j = 1:size(M,3)
        outM(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
    end
end
toc
The for-loop is faster every time, even though there is more that's copied.
Next, linear indexing.
%% setup for linear index copying
%put all indices in 1 long column
LongIndex = reshape(index,numel(index),1);
% convert to linear indices and store in new variable
linearIndices = sub2ind(size(M),LongIndex,repmat(1:size(M,2),1,size(M,3))',repelem(1:size(M,3),size(M,2))');
% extend linear indices with those of all values to copy
k = zeros(numel(M),1);
count = 1;
for i = 1:numel(LongIndex)
    values = linearIndices(i):size(M,1)*i;
    k(count:count+length(values)-1) = values;
    count = count + length(values);
end
k = k(1:count-1);
% get linear indices of locations in new matrix
l = zeros(length(k),1);
count = 1;
for i = 1:numel(LongIndex)
    values = repelem(LongIndex(i)-1,size(M,1)-LongIndex(i)+1);
    l(count:count+length(values)-1) = values;
    count = count + length(values);
end
l = k-l;
% create new matrix
outM = zeros(size(M));
%% speed comparison
tic
outM(l) = M(k);
toc
tic
for i = 1:size(M,2)
    for j = 1:size(M,3)
        outM(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
    end
end
toc
Again, the alternative approach, linear indexing, is (a lot) slower.
After this failed, I learned about parallelisation, and thought this would for sure speed up my code.
By reading some of the documentation around parfor and trying it out a bit, I changed my code to the following:
gcp;
outM = zeros(size(M));
inM = mat2cell(M,size(M,1),ones(size(M,2),1),size(M,3));
tic
parfor i = 1:500
    for j = 1:500
        outM(:,i,j) = [inM{i}(index(1,i,j):end,1,j);zeros(index(1,i,j)-1,1)];
    end
end
toc
I changed it so that "outM" and "inM" would both be sliced variables, as I read this is best. Still this is very slow, a lot slower than the original for loop.
So now the question, should I give up on trying to improve the speed of this operation? Or is there another way in which to do this? I have searched a lot, and for now do not see how to speed this up.
Sorry for the long question, but I wanted to show what I tried.
Thank you in advance!
Not sure if it's an option in your situation, but it looks like cell arrays are actually faster here:
outM2 = cell(size(M,2),size(M,3));
tic;
for i = 1:size(M,2)
    for j = 1:size(M,3)
        outM2{i,j} = M(index(1,i,j):end,i,j);
    end
end
toc
And a second idea, which also came out faster: batch all data which have to be shifted by the same value:
tic;
for i = unique(index).'
    outM(1:size(M,1)+1-i,index==i) = M(i:end,index==i);
end
toc
It totally depends on your data if this approach is actually faster.
And yes, integer-valued and logical indexing can be mixed.
I'm having some issues with a particular piece of code (see code below). As it happens, I need to get the variable Thetas. However, before the code executes I don't know how long Thetas will be or the dimensions of its matrices, since it all depends on the variable "hidden_layers".
I decided to build a blank cell array and append the matrices as they are created for later usage, but I was wondering what the cheapest way to do this would be. I have used the same approach many times throughout my code and I'm having some issues with memory usage.
% Prompt the user for how many layers will the neural network have
hidden_layers = input('How many layers on the NN?: ');
% Group of variables required to get Thetas
X = csvread('myData.csv');
n = size(X, 2);
hidden_layer_size = 60;
num_labels = 39;
% The length of Thetas{} will be => hidden_layers + 1
Thetas = {};
% Function randomize(element1, element2) returns a random matrix with
% dimensions: element2 x (element1 + 1)
% There will always be Thetas{1}
Thetas{1} = randomize(n,hidden_layer_size);
% Now build the rest of the Thetas based on the number of hidden_layers
for i = 1:hidden_layers
    if (i == hidden_layers)
        Thetas{i+1} = randomize(hidden_layer_size, num_labels);
    else
        Thetas{i+1} = randomize(hidden_layer_size, hidden_layer_size);
    end
end
PS: I've also tried forgetting about the cell array and building a jx1 vector with all the Thetas appended, which I reshape later on for further computations. However, it doesn't seem to save any processing time according to the profiler, and it's tiring to be reshaping it all the time.
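For reference, a minimal sketch of a preallocated variant (assuming the same randomize helper and variables as above): since the final length is known to be hidden_layers + 1, the cell array can be created at full size up front instead of being grown inside the loop.
% Sketch: preallocate the cell array to its known final length, then fill it.
% Growing a cell array element by element only reallocates the array of cell
% pointers (not the matrices themselves), so the saving is modest but free.
Thetas = cell(1, hidden_layers + 1);
Thetas{1} = randomize(n, hidden_layer_size);
for i = 1:hidden_layers
    if i == hidden_layers
        Thetas{i+1} = randomize(hidden_layer_size, num_labels);
    else
        Thetas{i+1} = randomize(hidden_layer_size, hidden_layer_size);
    end
end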