Given a d x n matrix (d dimensions, n objects), I would like to compute the unit-length vector of each column (i.e. the resulting matrix should have unit length in every column).
How can I do it without looping over every column?
I'm assuming you're using the L2 norm. In that case,
normalizedVector = bsxfun(@rdivide, vector, sqrt(sum(vector.^2, 1)));
will have unit length along each column.
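On newer MATLAB releases you can drop bsxfun entirely; here is a minimal sketch, assuming R2017b or later for vecnorm (note that an all-zero column would produce NaN):
% divide each column by its L2 norm via implicit expansion
normalizedVector = vector ./ vecnorm(vector, 2, 1);
% sanity check: vecnorm(normalizedVector, 2, 1) should be all ones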
I have a dataframe with two columns where each row holds a SparseVector. I am trying to find a proper way to calculate the cosine similarity (or just the dot product) of the two vectors in each row.
However, I haven't been able to find any library or tutorial that does this for sparse vectors.
The only way I found is the following:
Create a k X n matrix, where the n items are described as k-dimensional vectors. To represent each item as a k-dimensional vector, you can use ALS, which represents each entity in a latent factor space. The dimension of this space (k) can be chosen by you. This k X n matrix can be represented as RDD[Vector].
Convert this k X n matrix to RowMatrix.
Use columnSimilarities() function to get a n X n matrix of similarities between n items.
I feel it is overkill to calculate all the cosine similarities for every pair when I need them only for specific pairs in my (quite big) dataframe.
In Spark 3 there is now a dot method on the SparseVector class, which takes another vector as its argument.
If you want to do this in earlier versions, you could create a user-defined function that follows this algorithm:
Take the intersection of your vectors' indices.
Get two subarrays of your vectors' values based on the indices from the intersection.
Do pairwise multiplication of the elements of those two subarrays.
Sum the values resulting from the pairwise multiplication.
Here's my implementation of it:
import org.apache.spark.ml.linalg.SparseVector
def dotProduct(vec: SparseVector, vecOther: SparseVector) = {
  val commonIndices = vec.indices intersect vecOther.indices
  commonIndices.map(x => vec(x) * vecOther(x)).reduce(_ + _)
}
I guess you know how to turn it into a Spark UDF from here and apply it to your dataframe's columns.
And if you normalize your sparse vectors with org.apache.spark.ml.feature.Normalizer before computing your dot product, you'll get cosine similarity in the end (by definition).
Great answer above, @Sergey-Zakharov, +1.
A few add-ons:
The reduce doesn't work on empty sequences; that's why the version below uses reduceOption.
Make sure you compute the L2 norm (setP(2.0)).
val normalizer = new Normalizer()
  .setInputCol("features")
  .setOutputCol("normFeatures")
  .setP(2.0)
val l2NormData = normalizer.transform(df_features)
and
val dotProduct = udf { (v1: SparseVector, v2: SparseVector) =>
  v1.indices.intersect(v2.indices).map(x => v1(x) * v2(x)).reduceOption(_ + _).getOrElse(0.0)
}
and then
val df = dfA.crossJoin(broadcast(dfB))
  .withColumn("dot", dotProduct(col("featuresA"), col("featuresB")))
If the number of vectors you want to calculate the dot product with is small, cache the RDD[Vector] table. Create a new table [cosine_vectors] that is a filter on the original table to only select the vectors you want the cosine similarities for. Broadcast join those two together and calculate.
I need to know if there is any efficient way of doing the following in MATLAB.
I have several big sparse matrices, the size of each one is roughly 9000000x9000000.
I need to access multiple elements of such a matrix and assign to each selected element a different value stored in another array. I'll give an example:
What I have:
SPARSE MATRIX of size 9000000x9000000
A matrix with the list of indices and values I want to access, like this:
[row1, col1, value1;
row2, col2, value2;
...
rowN, colN, valueN]
Here, N is the number of rows of this matrix.
What I need:
Assign each value to the corresponding index of the SPARSE MATRIX, that is:
SPARSE_MATRIX(row1, col1) = value1
SPARSE_MATRIX(row2, col2) = value2
...
SPARSE_MATRIX(rowN, colN) = valueN
Thanks in advance!
EDIT:
Thank you both for answering. I think I did not explain myself well; I'll try again.
I already have a large SPARSE MATRIX of about 9000000 rows x 9000000 columns, filled with zeros.
Then I have another array or matrix, call it M, with N rows (where N can take values from 0 to 9000000) and 3 columns. The first two columns are used to index an element of my SPARSE MATRIX, and the third column stores the value I want to transfer to the SPARSE MATRIX; that is, for a given row i of M:
SPARSE_MATRIX(M(i, 1), M(i, 2)) = M(i, 3)
The idea is to do that for all the rows, I have tried it with common indexing:
SPARSE_MATRIX(M(:, 1), M(:, 2)) = M(:, 3)
Now I would like to do this assignment for all the rows in M as fast as possible, because with a loop or ordinary indexing it takes ages (I am using a 7th-gen i7 processor with 16 GB of RAM). I also need to keep the zeros in the SPARSE_MATRIX.
EDIT 2: SOLVED! Thank you Metahominid, I was not thinking it through, but yes, the sparse function does solve my problem; I just think my brain circuits were short-circuited yesterday and I was unable to see through it, hahaha. Thank you both anyway!
Regards!
You can construct a sparse matrix like this.
A = sparse(i,j,v)
S = sparse(i,j,v) generates a sparse matrix S from the triplets i, j,
and v such that S(i(k),j(k)) = v(k). The max(i)-by-max(j) output
matrix has space allotted for length(v) nonzero elements. sparse adds
together elements in v that have duplicate subscripts in i and j.
So you can simply construct the row vector, column vector and value vector.
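For example, a minimal sketch assuming M is the N-by-3 [row, col, value] matrix from the question:
% build the sparse matrix in one call from the triplet columns of M;
% the trailing arguments fix the overall size at 9000000 x 9000000
S = sparse(M(:,1), M(:,2), M(:,3), 9000000, 9000000);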
I am answering in part because I cannot comment. Your question seems a little confusing to me. The sparse() function in MATLAB does just this.
You can enter your arrays of indices and values directly into the interface, or declare a sparse matrix of zeros and set each individually.
Given your data format, make three vectors: ROWS = [row1; ...; rowN], COLS = [col1; ...; colN], and DATA = [value1; ...; valueN]. I am assuming that your size is the overall size of the full matrix and not just the sparse portion.
Then
A = sparse(ROWS, COLS, DATA) will do just what you want. You can even specify the original matrix size:
A = sparse(ROWS, COLS, DATA, 9000000, 9000000)
I have matrix A and matrices [U,S,V], such that [U, S, V] = svd(A).
How could I amend my script in MATLAB to get the 10 columns of U that correspond to the 10 largest singular values of A (i.e. the largest values in S)?
If you recall the definition of the svd, it is essentially solving an eigenvalue problem such that:
Av = su
v is a right-eigenvector from the matrix V and u is a left-eigenvector from the matrix U. s is a singular value from the matrix S. You know that S is a diagonal matrix whose diagonal entries are the singular values, sorted in descending order. As such, if we took the first column of V and the first column of U, as well as the first singular value of S (top-left corner), and did the above computation, we should get the same output from both sides.
As an example:
rng(123);
A = randn(4,4);
[U,S,V] = svd(A);
format long;
o1 = A*V(:,1);
o2 = S(1,1)*U(:,1);
disp(o1);
disp(o2);
-0.267557887773137
1.758696945035771
0.934255531699997
-0.978346339659143
-0.267557887773136
1.758696945035771
0.934255531699996
-0.978346339659143
Similarly, if we looked at the second columns of U and V with the second singular value:
o1 = A*V(:,2);
o2 = S(2,2)*U(:,2);
disp(o1);
disp(o2);
0.353422275717823
-0.424888938462465
1.543570300948254
0.613563185406719
0.353422275717823
-0.424888938462465
1.543570300948252
0.613563185406719
As such, the columns of U are arranged from left to right in the same order as the singular values dictate, so you simply grab the first 10 columns of U. This can be done by:
out = U(:,1:10);
out would contain the basis vectors, or columns of U that correspond to the 10 highest singular values of A.
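As a quick sanity check (a sketch, not part of the original answer), you can verify the ordering this relies on:
s = diag(S);        % singular values as a vector
all(diff(s) <= 0)   % returns true: svd sorts them in descending order
out = U(:, 1:10);   % hence the first 10 columns match the 10 largest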
First you sort the singular values in descending order, saving the reindexing, then take the first 10:
[~, b] = sort(diag(S), 'descend');
Umax10 = U(:, b(1:10));
As mentioned by Rayryeng, svd outputs the singular values in decreasing order so:
Umax10=U(:,1:10);
is enough.
Just remember that this is not the case for eig: even though eig may seem to output ordered eigenvalues, the ordering is not guaranteed.
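For example, a minimal sketch of imposing the order yourself, assuming A is symmetric so its eigenvalues are real:
[V, D] = eig(A);
[~, idx] = sort(diag(D), 'descend');   % sort eigenvalues explicitly
V = V(:, idx);                         % reorder eigenvectors to match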
For a vector v (e.g. v=[1,2,3,4,5]) and two index vectors (e.g. a=[1,1,1,2,3] and b=[3,4,5,5,5], with all a(i)<b(i)), I would like to construct w with w(i)=sum(v(a(i):b(i))), i.e. the values computed by this loop:
w = zeros(length(a),1);
for i = 1:length(a)
w(i)=sum(v(a(i):b(i)));
end
It is slow when length(a) is large. Can I compute w without the for loop?
Yes! The nth element of cumsum(v) is the sum of the first n elements in v, so just take that and subtract the sum of the elements that you don't want to include:
v=[1,2,3,4,5]
a=[1,1,1,2,3]
b=[3,4,5,5,5]
C=cumsum(v)
C(b)-C(a)+v(a)
%// or alternatively
C=cumsum([0 v])
C(b+1)-C(a)
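As a quick check (not in the original answer), both formulas reproduce the loop from the question on the example data:
w_loop = arrayfun(@(i) sum(v(a(i):b(i))), 1:length(a));  % the original loop, vectorized over i
C = cumsum([0 v]);
isequal(w_loop, C(b+1) - C(a))   % true; both give [6 10 15 14 12]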
The following code works, but it is of course much less readable:
% assume v is a column vector
units = 1:length(v); units = units'; %units is a column vector
units_matrix = repmat(units, [1 length(a)]);
a_matrix = repmat(a, [length(v) 1]); % assuming a is a row vector
b_matrix = repmat(b, [length(v) 1]);
weights = (units_matrix>=a_matrix) & (units_matrix<=b_matrix);
v_matrix = repmat(v, [1 length(a)]);
w = sum(v_matrix.*weights);
Explanation:
v_matrix contains copies of v. The summation will be done along the columns, so we need to prepare the other needed information in vectorized form. units_matrix contains the indices into v along its columns; the columns are identical. a_matrix and b_matrix contain, in each of their columns, the indices that are relevant for each partial summation; all rows are identical. weights is a logical matrix where, for each column, the indices contained in units_matrix between the corresponding a and b are 1 (true), and the rest are 0. The element-wise multiplication thus filters the "right" values, and all the indices outside the range (again, for each column) are multiplied by zero. w is then the result of the sum function, i.e. a row vector (every column of the "filtered" matrix is summed).
I am currently trying to code up a function to assign probabilities to a collection of vectors using a histogram count. This is essentially a counting exercise, but requires some finesse to be able to achieve efficiently. I will illustrate with an example:
Say that I have a matrix X = [x1, x2, ..., xM] with N rows and M columns. Here, X represents a collection of M N-dimensional vectors. In other words, each of the columns of X is an N-dimensional vector.
As an example, we can generate such an X for M = 10000 vectors and N = 5 dimensions using:
X = randint(5,10000)
This will produce a 5 x 10000 matrix of 0s and 1s, where each column represents a 5-dimensional vector of 1s and 0s.
I would like to assign a probability to each of these vectors through a basic histogram count. The steps are simple: first, find the unique columns of X; second, count the number of times each unique column occurs. The probability of a particular vector is then the number of times its column appears in X divided by the total number of columns of X.
Returning to the example above, I can do the first step using the unique function in MATLAB as follows:
UniqueXs = unique(X','rows')'
The code above will return UniqueXs, a matrix with N rows that only contains the unique columns of X. Note that the transposes are needed because unique(...,'rows') operates on rows rather than columns.
However, I am unable to find a good way to count the number of times each of the columns in UniqueXs appears in X, so I'm wondering if anyone has any suggestions.
Broadly speaking, I can think of two ways of achieving the counting step. The first would be to use the find function, though I think this may be slow since find works element-wise. The second would be to call unique recursively, as it can also provide the index of one of the unique columns in X; this would allow us to remove that column from X, redo unique on the result, and keep counting.
Ideally, I think that unique might already be doing some counting internally, so the most efficient way would probably be to work without repeated calls to the built-in functions.
Here are two solutions: one assumes all values are either 0 or 1 (just like the example in your description), the other does not. Both should be very fast (more so the one with binary values), even on large data.
1) only zeros and ones
%# random vectors of 0's and 1's
x = randi([0 1], [5 10000]);   %# RANDINT is deprecated, use RANDI instead
%# convert each column to a binary string
str = num2str(x', repmat('%d', [1 size(x,1)]));
%# convert binary representation to decimal number
num = (str-'0') * (2.^(size(str,2)-1:-1:0))';   %# num = bin2dec(str);
%# count how many times each number occurs
count = accumarray(num+1, 1);   %# num+1 since the values start at zero
%# assign probability based on count
prob = count(num+1) ./ sum(count);
2) any positive integer
%# random vectors with values 0:MAX_NUM
x = randi([0 999], [5 10000]);
%# format vectors as strings (zero-filled to a constant length)
nDigits = ceil(log10( max(x(:)) ));
frmt = repmat(['%0' num2str(nDigits) 'd'], [1 size(x,1)]);
str = cellstr(num2str(x', frmt));
%# find unique strings, and convert them to group indices
[G,GN] = grp2idx(str);
%# count frequency of occurrence
count = accumarray(G,1);
%# assign probability based on count
prob = count(G)./sum(count);
Now we can see for example how many times each "unique vector" occurred:
>> table = sortrows([GN num2cell(count)])
table =
'000064850843749' [1] # original vector is: [0 64 850 843 749]
'000130170550598' [1] # and so on..
'000181606710020' [1]
'000220492735249' [1]
'000275871573376' [1]
'000525617682120' [1]
'000572482660558' [1]
'000601910301952' [1]
...
Note that in my example with random data, the vector space becomes very sparse (as you increase the maximum possible value), thus I wouldn't be surprised if all counts were equal to 1...
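As a side note, a minimal sketch of yet another route, using the index output of unique together with accumarray, which counts the columns directly without converting them to strings:
[uniqueRows, ~, ic] = unique(x', 'rows');   %# ic maps each column of x to a unique row
count = accumarray(ic, 1);                  %# how many times each unique column occurs
prob = count(ic) ./ size(x, 2);             %# probability assigned back to every column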