N-Dimensional Histogram Counts - matlab

I am currently trying to code up a function that assigns probabilities to a collection of vectors using a histogram count. This is essentially a counting exercise, but requires some finesse to achieve efficiently. I will illustrate with an example:
Say that I have a matrix X = [x1, x2, ..., xM] with N rows and M columns. Here, X represents a collection of M N-dimensional vectors. In other words, each of the columns of X is an N-dimensional vector.
As an example, we can generate such an X for M = 10000 vectors and N = 5 dimensions using:
X = randint(5,10000)
This will produce a 5 x 10000 matrix of 0s and 1s, where each column represents a 5-dimensional vector of 1s and 0s.
I would like to assign a probability to each of these vectors through a basic histogram count. The steps are simple: first, find the unique columns of X; second, count the number of times each unique column occurs. The probability of a particular vector is then the number of times its column appears in X divided by the total number of columns in X.
Returning to the example above, I can do the first step using the unique function in MATLAB as follows:
UniqueXs = unique(X','rows')'
The code above will return UniqueXs, a matrix with N rows that contains only the unique columns of X. Note that the transposes are needed because unique(...,'rows') operates on rows rather than columns.
However, I am unable to find a good way to count the number of times each of the columns in UniqueXs appears in X. So I'm wondering if anyone has any suggestions?
Broadly speaking, I can think of two ways of achieving the counting step. The first would be to use the find function, though I suspect this may be slow since find is an elementwise operation. The second would be to call unique repeatedly: since it can also provide the index of one of the unique columns in X, we could remove that column from X, rerun unique on the result, and keep counting.
Ideally, I suspect that unique is already doing some of this counting internally, so the most efficient way would probably be to work without the built-in functions.

Here are two solutions: one assumes all values are either 0 or 1 (just like the example in your description), the other does not. Both should be very fast (especially the binary-valued one), even on large data.
1) only zeros and ones
%# random vectors of 0's and 1's
x = randi([0 1], [5 10000]); %# RANDINT is deprecated, use RANDI instead
%# convert each column to a binary string
str = num2str(x', repmat('%d',[1 size(x,1)]));
%# convert binary representation to decimal number
num = (str-'0') * (2.^(size(str,2)-1:-1:0))'; %# equivalent to: num = bin2dec(str);
%# count frequency of how many each number occurs
count = accumarray(num+1,1); %# num+1 since it starts at zero
%# assign probability based on count
prob = count(num+1)./sum(count);
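To sanity-check the binary encoding, you can map a code in num back to its vector with dec2bin (zero-padded to the vector length); this is just an illustrative check:
vec = dec2bin(num(1), size(x,1)) - '0'; %# recover the first column of x as a row vector
isequal(vec(:), x(:,1))                 %# should return true (logical 1)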
2) any non-negative integer
%# random vectors with values 0:MAX_NUM
x = randi([0 999], [5 10000]);
%# format vectors as strings (zero-filled to a constant length)
nDigits = floor(log10( max(x(:)) )) + 1; %# ceil(log10(.)) would fail for exact powers of 10
frmt = repmat(['%0' num2str(nDigits) 'd'], [1 size(x,1)]);
str = cellstr(num2str(x',frmt));
%# find unique strings, and convert them to group indices
[G,GN] = grp2idx(str); %# grp2idx is part of the Statistics Toolbox
%# count frequency of occurrence
count = accumarray(G,1);
%# assign probability based on count
prob = count(G)./sum(count);
Now we can see for example how many times each "unique vector" occurred:
>> table = sortrows([GN num2cell(count)])
table =
'000064850843749' [1] %# original vector is: [0 64 850 843 749]
'000130170550598' [1] %# and so on..
'000181606710020' [1]
'000220492735249' [1]
'000275871573376' [1]
'000525617682120' [1]
'000572482660558' [1]
'000601910301952' [1]
...
Note that in my example with random data, the vector space becomes very sparse (as you increase the maximum possible value), thus I wouldn't be surprised if all counts were equal to 1...
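As an aside, the grouping can also be done without the string round-trip by using the third output of unique directly on the columns; a minimal sketch that covers both the binary and the general integer case:
[u, ~, ic] = unique(x', 'rows'); %# ic maps each column of x to a row of u
count = accumarray(ic, 1);       %# occurrences of each unique vector
prob = count(ic) ./ size(x,2);   %# probability assigned to every column of x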

Related

Optimize nested for loop for calculating xcorr of matrix rows

I have 2 nested loops which do the following:
Get two rows of a matrix
Check if indices meet a condition or not
If they do: calculate xcorr between the two rows and put it into a new vector
Find the index of the maximum value of the sub-vector and replace the corresponding element of the LAG matrix with this value
I don't know how I can speed this code up by vectorizing or otherwise.
b = size(data,1);
F = size(data,2);
LAG = zeros(b,b);
for i = 1:b
    for j = 1:b
        if j > i
            x = data(i,:);
            y = data(j,:);
            d = xcorr(x,y);
            d = d(:,F:(2*F)-1);
            [M,I] = max(d);
            LAG(i,j) = I-1;
            d = xcorr(y,x);
            d = d(:,F:(2*F)-1);
            [M,I] = max(d);
            LAG(j,i) = I-1;
        end
    end
end
First, a note on floating point precision...
You mention in a comment that your data contains the integers 0, 1, and 2. You would therefore expect a cross-correlation to give integer results. However, since the calculation is being done in double-precision, there appears to be some floating-point error introduced. This error can cause the results to be ever so slightly larger or smaller than integer values.
Since your calculations involve looking for the location of the maxima, you could get slightly different results if repeated maximal integer values carry different precision errors. For example, say you expect the value 10 to be the maximum and to appear at indices 2 and 4 of a vector d. You might calculate d one way and get d(2) = 10 and d(4) = 10.00000000000001, with some added precision error; the maximum would therefore be located at index 4. If you use a different method to calculate d, you might get d(2) = 10 and d(4) = 9.99999999999999, with the error going in the opposite direction, causing the maximum to be located at index 2.
The solution? Round your cross-correlation data first:
d = round(xcorr(x, y));
This will eliminate the floating-point errors and give you the integer results you expect.
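If you want to check whether your data actually exhibits this noise before rounding it away, a quick diagnostic (purely illustrative):
d = xcorr(x, y);
max(abs(d - round(d))) % any small nonzero value here is floating-point noise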
Now, on to the actual solutions...
Solution 1: Non-loop option
You can pass a matrix to xcorr and it will perform the cross-correlation for every pairwise combination of columns. Using this, you can forego your loops altogether like so:
d = round(xcorr(data.'));
[~, I] = max(d(F:(2*F)-1,:), [], 1);
LAG = reshape(I-1, b, b).';
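To convince yourself that the reshape ordering matches the original loops, you can compare both on a small test case (a sketch with arbitrary test values; LAG is the result of the question's double loop):
data = randi([0 2], [4 7]); % small random test matrix
b = size(data,1);
F = size(data,2);
% ... run the original double loop here to fill LAG, then:
d = round(xcorr(data.'));
[~, I] = max(d(F:(2*F)-1,:), [], 1);
LAG2 = reshape(I-1, b, b).';
isequal(LAG, LAG2) % should return true (logical 1)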
Solution 2: Improved loop option
There are limits to how large data can be for the above solution, since it produces large intermediate and output variables that can exceed the maximum available array size. In such cases, for loops may be unavoidable, but you can improve upon the for-loop solution above. Specifically, you can compute the cross-correlation once for a pair (x, y), then just flip the result for the pair (y, x):
% Loop over rows:
for row = 1:b
    % Loop over upper matrix triangle:
    for col = (row+1):b
        % Cross-correlation for upper triangle:
        d = round(xcorr(data(row, :), data(col, :)));
        [~, I] = max(d(:, F:(2*F)-1));
        LAG(row, col) = I-1;
        % Cross-correlation for lower triangle:
        d = fliplr(d);
        [~, I] = max(d(:, F:(2*F)-1));
        LAG(col, row) = I-1;
    end
end
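The fliplr trick works because, for real signals, reversing the lag axis of xcorr(x, y) gives exactly xcorr(y, x); a two-line check:
x = randi([0 2], [1 5]); y = randi([0 2], [1 5]);
isequal(round(xcorr(y, x)), fliplr(round(xcorr(x, y)))) % returns true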

Partial sum of a vector

For a vector v (e.g. v=[1,2,3,4,5]) and two index vectors (e.g. a=[1,1,1,2,3] and b=[3,4,5,5,5], with a(i)<b(i) for all i), I would like to construct w with w(i)=sum(v(a(i):b(i))), which can be computed as:
w = zeros(length(a),1);
for i = 1:length(a)
    w(i) = sum(v(a(i):b(i)));
end
It is slow when length(a) is large. Can I compute w without the for loop?
Yes! The nth element of cumsum(v) is the sum of the first n elements in v, so just take that and subtract the sum of the elements that you don't want to include:
v=[1,2,3,4,5]
a=[1,1,1,2,3]
b=[3,4,5,5,5]
C=cumsum(v)
C(b)-C(a)+v(a)
%// or alternatively
C=cumsum([0 v])
C(b+1)-C(a)
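Both variants give the same result; for the example vectors above:
w = C(b+1) - C(a) %// returns [6 10 15 14 12]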
The following code works, but it is of course much less readable:
% assume v is a column vector
units = 1:length(v); units = units'; %units is a column vector
units_matrix = repmat(units, [1 length(a)]);
a_matrix = repmat(a, [length(v) 1]); % assuming a is a row vector
b_matrix = repmat(b, [length(v) 1]);
weights = (units_matrix>=a_matrix) & (units_matrix<=b_matrix);
v_matrix = repmat(v, [1 length(a)]);
w = sum(v_matrix.*weights);
Explanation:
v_matrix contains copies of v. The summation will be done along the columns, so we need to prepare the other needed information in vectorized form. units_matrix contains the indices into v along its columns (all columns are identical). a_matrix and b_matrix contain, in each of their columns, the indices that delimit each partial summation (all rows are identical). weights is a logical matrix where, for each column, the indices in units_matrix between the corresponding a and b are 1 (true) and the rest are 0. The element-wise multiplication thus keeps the "right" values, while all indices outside the range (again, per column) are multiplied by zero. w is then the result of the sum function, i.e. a row vector (every column of the "filtered" matrix is summed).

delete certain columns of matrix when number of zero elements exceeds threshold avoiding loop

I have a quite big (107 x n) matrix X. Within these n columns, each group of three columns belongs together, so the first three columns of X build a block, then columns 4, 5, 6, and so on.
Within each block, only the first 100 row elements of the first column are important, i.e. X(1:100,1:3:end). Whenever the number of zeros or NaNs in this first column is greater than or equal to 20, the whole block should be deleted.
Is there a way to do this without a loop?
Thanks for any advice!
Assuming the number of columns of the input to be a multiple of 3, there could be two approaches here.
Approach #1
%// parameters
rl = 100; %// row limit
cl = 20; %// count limit
X1 = X(1:rl,1:3:end) %// Important elements from input
match_mat = isnan(X1) | X1==0 %// binary array of matches
match_blk_id = find(sum(match_mat)>=cl) %// blocks that satisfy requirements
match_colstart = (match_blk_id-1).*3+1 %// start column indices that satisfy
all_col_ind = bsxfun(@plus,match_colstart,[0:2]') %// column indices to be removed
X(:,all_col_ind)=[] %// final output after removing the marked columns
Or if you prefer more compact code -
X1 = X(1:rl,1:3:end);
X(:,bsxfun(@plus,(find(sum(isnan(X1) | X1==0)>=cl)-1).*3+1,[0:2]'))=[];
Approach #2
X1 = X(1:rl,1:3:end)
match_mat = isnan(X1) | X1==0 %// binary array of matches
X(:,repmat(sum(match_mat)>=cl,[3 1]))=[] %// Find matching blocks, replicate to
%// next two columns and remove them from X
Note: If the number of columns of X is not a multiple of 3, pad it first with X = [X zeros(size(X,1), 3 - mod(size(X,2),3))].
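As a quick sanity check of Approach #2 on a toy matrix (hypothetical small limits rl and cl, purely for illustration):
rl = 4; cl = 2; %// smaller limits for the toy example
X = [1 2 3 0 5 6; 0 2 3 7 5 6; 0 2 3 8 5 6; 1 2 3 9 5 6]; %// 4 x 6, i.e. two blocks
X1 = X(1:rl,1:3:end);
X(:,repmat(sum(isnan(X1) | X1==0)>=cl,[3 1])) = []
%// first block removed (its first column holds two zeros), X is now 4 x 3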

Find blocks of same dates in a vector and average over the corresponding block of data in another vector

I'm new to Matlab and I'm looking for a way to determine blocks of identical dates in one vector and average over the corresponding blocks of data in another vector.
Given is a vector consisting of several blocks of dates in the format 'dd-mmm-yyyy'. The blocks with same dates can have variable length. An example would be
T= ['03-Jan-2013';
'03-Jan-2013';
'03-Jan-2013';
'04-Jan-2013';
'04-Jan-2013';
'05-Jan-2013']
Each date in T corresponds to a data entry in another vector H (for simplicity same dates from T have here the same corresponding number in H)
H= [1;
1;
1;
5;
5;
6]
The goal is now to determine the average of the elements of H which correspond to the same dates and return a modified date and data vector Tout and Hout which would look like this:
Tout=['03-Jan-2013';
'04-Jan-2013';
'05-Jan-2013']
and
Hout=[1;
5;
6]
where Hout represents the averaged values.
Both vectors are initially read from a text file and can have a length of about 100k, so looping is probably not the best thing to do.
I appreciate any help!
Use unique to get the unique dates together with a group index for each entry, and accumarray to average the values that share a date:
[Tout,~,n] = unique(T, 'rows');
Hout = accumarray(n, H, [], @mean);
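With the example T (as a char matrix) and H from above, this produces exactly the desired result:
Tout =
03-Jan-2013
04-Jan-2013
05-Jan-2013
Hout =
     1
     5
     6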

how to transfer the pairwise distance value in a distance matrix

I am pretty new to Matlab, and now I want to use it to do some clustering work.
Say I have values in 3 columns:
id1 id2 distvalue1
id1 id3 distvalue2
....
id2 id4 distvalue i
.....
There are 5000 ids in total, but some id pairs are missing their distance value.
In Python I could write loops to import these distance values into a matrix. How can I do that in Matlab, and also let Matlab know that id1, ..., idx are identifiers and that the third column holds the values?
Thanks!
Based on the comments, you know how to get the data into the form of an N x 3 matrix, called X, where X(:,1) is the first index, X(:,2) is the second index, and X(:,3) is the corresponding distance.
Let's assume that the indices (id1... idx) are arbitrary numeric labels.
So then we can do the following:
% First, build a list of all the unique indices
indx = unique([X(:,1); X(:,2)]);
Nindx = length(indx);
% Second, initialize an empty connection matrix, C
C = zeros(Nindx, Nindx); %or you could use NaN(Nindx, Nindx)
% Third, loop over the rows of X, and map them to points in the matrix C
for n = 1:size(X,1)
    row = find(X(n,1) == indx);
    col = find(X(n,2) == indx);
    C(row,col) = X(n,3);
end
This is not the most efficient method (that would be to remap the indices of X to the range [1... Nindx] in a vectorized manner), but it should be fine for 5000 ids.
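For reference, a minimal sketch of that vectorized remapping (same assumptions about X as above):
% remap both id columns to the range 1..Nindx in a single call
[indx, ~, remapped] = unique([X(:,1); X(:,2)]);
Nindx = length(indx);
rows = remapped(1:size(X,1));     % remapped first-index column
cols = remapped(size(X,1)+1:end); % remapped second-index column
C = zeros(Nindx, Nindx);
C(sub2ind([Nindx Nindx], rows, cols)) = X(:,3); % scatter all distances at once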
If you end up dealing with very large numbers of unique indices, for which only very few of the index-pairs have assigned distance values, then you may want to look at using sparse matrices -- try help sparse -- instead of pre-allocating a large zero matrix.
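Using the remapped rows and cols from the sketch above, the sparse version is a one-liner (again, just a sketch):
C = sparse(rows, cols, X(:,3), Nindx, Nindx); % stores only assigned entries; note that duplicate (row,col) pairs would be summed rather than overwritten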