Summing scipy sparse matrix rows and filling the diagonal - scipy

I have a large sparse matrix and I want to take the sum of the elements of each row i and fill the diagonal of a matrix with those sums. How do I do this?
My numpy approach:
A = np.sum(A, axis=1)
D = np.diag(A)
How do I approach this with scipy?
Edit: I am using scipy.sparse and initializing A as a csr_matrix. I have read the docs, but perhaps I misread or am missing something. With sparse.diags(D), I just got a column array, i.e. it didn't give me a diagonal matrix when I converted the sparse result to an np.array.

Make a sparse matrix:
In [149]: from scipy import sparse
In [150]: M = sparse.csr_matrix(np.arange(6).reshape(2,3))
In [151]: M
Out[151]:
<2x3 sparse matrix of type '<class 'numpy.int64'>'
with 5 stored elements in Compressed Sparse Row format>
In [152]: M.A
Out[152]:
array([[0, 1, 2],
       [3, 4, 5]])
Sum on an axis:
In [153]: M.sum(axis=0)
Out[153]: matrix([[3, 5, 7]])
Note this is a dense np.matrix. Since sum increases density, sparse returns this rather than a sparse matrix.
Convert that matrix to 1d ndarray:
In [154]: M.sum(axis=0).A1
Out[154]: array([3, 5, 7])
Use diags to make a sparse matrix:
In [155]: M1=sparse.diags(M.sum(axis=0).A1)
In [156]: M1
Out[156]:
<3x3 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements (1 diagonals) in DIAgonal format>
In [157]: M1.A
Out[157]:
array([[3., 0., 0.],
       [0., 5., 0.],
       [0., 0., 7.]])
Read the sparse.diags docs to see its required arguments. sparse.dia_matrix can also be used (but again, read its docs).
Math on a DIA-format matrix is likely to produce a CSR-format matrix.
Other diagonal creation commands:
M1=sparse.diags(M.sum(axis=0),[0], shape=(3,3))
M1=sparse.dia_matrix((M.sum(axis=0), 0),shape=(3,3))
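Putting the pieces together for the question as asked (row sums on the diagonal), a minimal sketch along the same lines:
import numpy as np
from scipy import sparse

A = sparse.csr_matrix(np.arange(6).reshape(2, 3))
row_sums = A.sum(axis=1).A1        # dense np.matrix -> 1-d ndarray of row sums
D = sparse.diags(row_sums)         # 2x2 sparse matrix with the row sums on its diagonal
print(D.A)                         # [[ 3.  0.]
                                   #  [ 0. 12.]]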

Related

How to force SVC to treat a user-provided kernel as sparse

SVC appears to treat kernels that can take sparse matrices differently from those that don't. However, if a user-provided kernel is written to take sparse matrices, and a sparse matrix is provided during fit, it still converts the sparse matrix to dense and treats the kernel as dense because the kernel is not one of the sparse kernels pre-defined in scikit-learn.
Is there a way to force SVC to recognize the kernel as sparse and not convert the sparse matrix to dense before passing it to the kernel?
Edit 1: minimal working example
As an example, if upon creation, SVC is passed the string "linear" for the kernel, then the linear kernel is used, the sparse matrices are passed directly to the linear kernel, and the support vectors are stored as sparse matrices if a sparse matrix is provided when fitting. However, if instead the linear_kernel function itself is passed to SVC, then the sparse matrices are converted to ndarray before passing to the kernel, and the support vectors are stored as ndarray.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import linear_kernel
from sklearn.svm import SVC
def make_random_sparsemat(m, n=1024, p=.94):
    """Make mxn sparse matrix with 1-p probability of 1."""
    return csr_matrix(np.random.uniform(size=(m, n)) > p, dtype=np.float64)
X = make_random_sparsemat(100)
Y = np.asarray(np.random.uniform(size=(100)) > .5, dtype=np.float64)
model1 = SVC(kernel="linear")
model1.fit(X, Y)
print("Built-in kernel:")
print("Kernel treated as sparse: {}".format(model1._sparse))
print("Type of dual coefficients: {}".format(type(model1.dual_coef_)))
print("Type of support vectors: {}".format(type(model1.support_vectors_)))
model2 = SVC(kernel=linear_kernel)
model2.fit(X, Y)
print("User-provided kernel:")
print("Kernel treated as sparse: {}".format(model2._sparse))
print("Type of dual coefficients: {}".format(type(model2.dual_coef_)))
print("Type of support vectors: {}".format(type(model2.support_vectors_)))
Output:
Built-in kernel:
Kernel treated as sparse: True
Type of dual coefficients: <class 'scipy.sparse.csr.csr_matrix'>
Type of support vectors: <class 'scipy.sparse.csr.csr_matrix'>
User-provided kernel:
Kernel treated as sparse: False
Type of dual coefficients: <type 'numpy.ndarray'>
Type of support vectors: <type 'numpy.ndarray'>
I'm fishing around in the dark, working mainly from scikit-learn code that I find on github.
A lot of the SVC linear code appears to be in a C library. There is talk about its internal representation being sparse.
Your linear_kernel function just does:
X, Y = check_pairwise_arrays(X, Y)
return safe_sparse_dot(X, Y.T, dense_output=True)
If I make your X and Y
In [119]: X
Out[119]:
<100x1024 sparse matrix of type '<class 'numpy.float64'>'
with 6108 stored elements in Compressed Sparse Row format>
In [120]: Y = np.asarray(np.random.uniform(size=(100)) > .5, dtype=np.float64)
and recreate safe_sparse_dot
In [122]: safe_sparse_dot(Y,X,dense_output=True)
Out[122]: array([ 3., 5., 3., ..., 4., 2., 4.])
So applying that to Y and X (in the only order that makes sense), I get a dense array. Changing the dense_output parameter doesn't change things. Basically Y*X, a dense times a sparse, returns a dense result.
If I make Y sparse, then I can get a sparse product:
In [125]: Ym=sparse.csr_matrix(Y)
In [126]: Ym*X
Out[126]:
<1x1024 sparse matrix of type '<class 'numpy.float64'>'
with 1000 stored elements in Compressed Sparse Row format>
In [127]: safe_sparse_dot(Ym,X,dense_output=False)
Out[127]:
<1x1024 sparse matrix of type '<class 'numpy.float64'>'
with 1000 stored elements in Compressed Sparse Row format>
In [128]: safe_sparse_dot(Ym,X,dense_output=True)
Out[128]: array([[ 3., 5., 3., ..., 4., 2., 4.]])
I don't know the workings of SVC and fit, but just from working with sparse matrices, I know that you have to be careful when mixing sparse and dense matrices. It is easy to get a dense result, whether you want it or not.
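For completeness, here is a small self-contained sketch of the same observation (my own illustration, not from the original post): sklearn's linear_kernel returns a dense ndarray even for sparse inputs, while a plain sparse-times-sparse product keeps a sparse result:
import numpy as np
from scipy import sparse
from sklearn.metrics.pairwise import linear_kernel

X = sparse.random(100, 1024, density=0.06, format='csr', dtype=np.float64)

G_dense = linear_kernel(X, X)   # uses safe_sparse_dot(..., dense_output=True) -> ndarray
G_sparse = X @ X.T              # sparse times sparse stays sparse

print(type(G_dense), G_dense.shape)    # <class 'numpy.ndarray'> (100, 100)
print(type(G_sparse), G_sparse.shape)  # a scipy sparse matrix, (100, 100)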

Eigenvalues from Matlab

I'm trying to work out the eigenvalues/eigenvectors of large datasets in order to compute
the PCA. I can calculate the eigenvalues and eigenvectors for 2x2, 3x3, etc.
The problem is that my dataset is 451x128; computing the covariance matrix from it
gives me a 128x128 matrix, which therefore looks like the following:
A = [ [1, 2, 3, 2, 3, 1, ..........]     % 128 values
      [5, 4, 1, 3, 2, 1, 2, 1, 2, ...]   % 128 values
      .......                            % 128 rows
    ]
Computing the eigenvalues and eigenvectors of a 128x128 matrix seems really difficult and
would take a lot of computing power. However, if I treat each of the blocks in A as a 2-dimensional (3xN) array, I can compute the covariance matrix of each block, which gives me a 3x3 matrix.
My question is this: would this be a good or reasonable approach for getting the eigenvalues and eigenvectors? Something like this:
A is a 2-dimensional array containing 128x451 values;
for each of the blocks, compute the eigenvalues and eigenvectors of its covariance matrix,
like so:
Eig1 = eig(cov(A[0]))
Eig2 = eig(cov(A[1]))
This would then give me 128 sets of eigenvalues (one for each of the blocks inside the 128x128 matrix).
If this is not correct, how does MATLAB handle such large-dimensional data?
Have you tried svd()?
Do the singular value decomposition:
[U,S,V] = svd(X)
U and V are orthogonal matrices and S is a diagonal matrix of singular values (whose squares are the eigenvalues of X*X'). Sort U and V in descending order based on S.
As kkuilla mentions, you can use the SVD of the original matrix, as the SVD of a matrix is related to the Eigenvalues and Eigenvectors of the covariance matrix as I demonstrate in the following example:
A = [1 2 3; 6 5 4]; % A rectangular matrix
X = A*A'; % The covariance matrix of A
[V, D] = eig(X); % Get the eigenvectors and eigenvalues of the covariance matrix
[U,S,W] = svd(A); % Get the singular values of the original matrix
V is a matrix containing the eigenvectors, and D contains the eigenvalues. Now, the relationship:
S*S' ~ D
U ~ V
As to your own assumption, I may be misreading it, but I think it is false. I can't see why the Eigenvalues of the blocks would relate to the Eigenvalues of the matrix as a whole; they wouldn't correspond to the same Eigenvectors, as the dimensionality of the Eigenvectors wouldn't match. I think your covariances would be different too, but then I'm not completely clear on how you are creating these blocks.
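As a cross-check (my own addition, not part of the original answer), the same relationship can be verified in NumPy: the squared singular values of A equal the eigenvalues of A*A':
import numpy as np

A = np.array([[1., 2., 3.],
              [6., 5., 4.]])        # the same rectangular matrix as above
X = A @ A.T

evals = np.linalg.eigvalsh(X)       # eigenvalues of A*A', ascending order
U, s, Vt = np.linalg.svd(A)         # singular values of A, descending order

print(evals[::-1])                  # eigenvalues, descending
print(s**2)                         # squared singular values: the same numbers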
As to how Matlab does it, it does use some tricks. Perhaps the link below might be informative (though it might be a little old). I believe they use (or used) LAPACK and a QZ factorisation to obtain intermediate values.
https://au.mathworks.com/company/newsletters/articles/matlab-incorporates-lapack.html
Use the eig command:
[Eigenvectors, Eigenvalues] = eig(Matrix)

Eigenvectors in SVD

I'm going to compute the eigenvalues and eigenvectors from my matrix data for classification.
The rows represent the different classes and the columns represent the features.
So, for example if I have
X=
[2 3 4]
[3 2 4]
[4 5 6]
[8 9 0]
I have to use SVD instead of PCA because the matrix is not square.
What I have done is:
Compute the mean for each row. So I have
Mean=
M1
M2
M3
M4
Subtract the Mean from my matrix X:
Subtract =
[2-M1 3-M1 4-M1]
[3-M2 2-M2 4-M2]
[4-M3 5-M3 6-M3]
[8-M4 9-M4 0-M4]
Covariance Matrix = (Subtract*Subtract^T)/(4-1)
[U,S,V] = svd(X)
Are all my steps right? Is it correct to compute the mean for each row (i.e. per class)?
If I want to project my data into eigenspace (for dimensionality reduction), which matrix holds the eigenvectors, U or V?
You can do PCA whether your matrix is square or not. In fact, your matrix is rarely square, because it has the form n*p where n is the number of observations and p is the number of features. Thus you can use MATLAB's princomp function
[W, pc] = princomp(data);
where W is a weight matrix and pc holds the principal component scores. You can see your data projected into the principal component space by
plot(pc(:,1), pc(:,2), '.');
which shows your data along the first and second principal component directions.
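To make the U-versus-V part of the question concrete, here is a small NumPy sketch of PCA via SVD (my own illustration; it assumes rows are observations, which is also what princomp assumes). With the centred data Xc = U*S*V', the columns of V are the principal directions (the eigenvectors of the covariance matrix), and the projection is Xc*V (equivalently U*S):
import numpy as np

X = np.array([[2., 3., 4.],
              [3., 2., 4.],
              [4., 5., 6.],
              [8., 9., 0.]])       # rows = observations, columns = features

Xc = X - X.mean(axis=0)            # centre each column (feature), not each row
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Columns of Vt.T are the eigenvectors of the covariance matrix Xc.T @ Xc / (n-1);
# the corresponding eigenvalues are s**2 / (n-1).
scores = Xc @ Vt.T                 # data projected onto the principal directions
print(scores[:, :2])               # keep the first two components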

Matlab ordfilt2 or alternatives for weighted local max

I would like to compute the weighted maxima of a vector in Matlab. By weighted maxima I mean the following:
given a vector of 2*N+1 weights W = {w[-N], w[-N+1], ..., w[0], ..., w[N]} and an input sequence A, the weighted maxima is a vector M where m[i] = max(w[-N]*a[i-N], w[-N+1]*a[i-N+1], ..., w[N]*a[i+N]).
So for example given a vector A= [1, 4, 12, 2, 4] and weights W=[0.5, 1, 0.5], the weighted maxima would be M=[2, 6, 12, 6, 4].
This can be done using ordfilt2, but ordfilt2 treats its weights as additive rather than multiplicative.
I am actually working on 4-D matrices, but any 1-D solution would work, as the 4-D weight matrix is separable.
My current solution is to generate shifted copies of the input array A, weight them according to the shift, and take the elementwise maximum over all the arrays. The shift is performed using circshift and is the bottleneck in the process. Generating the shifted matrices "manually" through indexing turned out to be even slower.
Can you suggest a more efficient solution?
EDIT: For a positive A, M=exp(ordfilt2(log(A), length(W), ones(size(W)), log(W))) does the job, but still takes longer than the circshift solution above. I am still looking for more efficient solutions.
>> B = padarray(A, [0 floor(numel(W)/2)], 0); % Pad A with zeros
>> B = bsxfun(@times, B(bsxfun(@plus, 1:numel(B)-numel(W)+1, (0:numel(W)-1)')), W(:)); % Apply the weights to each sliding window
>> M = max(B) % Compute the local maxima
M =
2 6 12 6 4
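The same windowed, multiplicative maximum can be sketched in NumPy as well (my own illustration, assuming an odd-length weight vector and NumPy >= 1.20 for sliding_window_view):
import numpy as np

def weighted_local_max(a, w):
    # Zero-pad, build all length-len(w) sliding windows, scale each window
    # elementwise by the weights, and take the maximum of each window.
    # Assumes w has odd length 2*N+1.
    n = len(w) // 2
    b = np.pad(a, n)                                             # zeros at both ends
    windows = np.lib.stride_tricks.sliding_window_view(b, len(w))
    return (windows * w).max(axis=1)

A = np.array([1., 4., 12., 2., 4.])
W = np.array([0.5, 1., 0.5])
print(weighted_local_max(A, W))    # [ 2.  6. 12.  6.  4.]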

Finding the values at certain indices in a matrix

I have two vectors, M and N, of the same length. Their values represent indices of another matrix A, so that corresponding entries of M and N form index pairs into A.
For example, I have the vectors
M=[1 2 3 4] and N=[5 6 7 8]
I would like to pick out the values of A at those index pairs and store them in another vector I, like this:
I = [A(1,5) A(2,6) A(3,7) A(4,8)]
How could this be done?
You can convert them to linear indices using sub2ind and then use those linear indices to index A:
ind = sub2ind(size(A), M(:), N(:));
I = A(ind);
Note I've used M(:), as this guarantees that M will be treated as a column vector.
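For reference, the equivalent gather in NumPy (my own sketch, with the 1-based MATLAB indices shifted to 0-based) is just fancy indexing with the two index vectors:
import numpy as np

A = np.arange(80).reshape(8, 10)   # any matrix large enough for these indices
M = np.array([1, 2, 3, 4]) - 1     # 1-based MATLAB indices ...
N = np.array([5, 6, 7, 8]) - 1     # ... converted to 0-based NumPy indices
I = A[M, N]                        # picks A[0,4], A[1,5], A[2,6], A[3,7]
print(I)                           # [ 4 15 26 37]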