scipy.sparse.csr_matrix matrix-matrix multiplication implementation - scipy

I'm wondering how scipy implements matrix-matrix multiplication in the CSR sparse format. Does anyone have pseudo-code for it? A Python implementation would be even better. To be more clear: suppose we have matrices A and B in CSR format, so that they both have .data, .indices and .indptr attributes, and C = A*B. How do we compute those three attributes of C? Sorry for my bad English; I hope I'm clear.
Thanks,
yang
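For reference, scipy performs C = A*B for CSR operands in a compiled C++ kernel (csr_matmat in sparsetools), which is essentially the classic Gustavson row-by-row algorithm. The real kernel works in two passes over preallocated arrays; below is a one-pass pure-Python sketch of the same idea, not scipy's actual source. The names Ap/Aj/Ax correspond to A's .indptr/.indices/.data, and likewise Bp/Bj/Bx for B:

```python
def csr_matmat(n_row, n_col, Ap, Aj, Ax, Bp, Bj, Bx):
    """Sketch of Gustavson's algorithm for C = A*B in CSR format.

    Returns (Cp, Cj, Cx), i.e. C's .indptr/.indices/.data. As in
    scipy's kernel, column indices within a row come out unsorted.
    """
    nxt = [-1] * n_col    # linked list threading the occupied columns
    sums = [0] * n_col    # dense accumulator for the current row of C
    Cp, Cj, Cx = [0], [], []
    for i in range(n_row):
        head, length = -2, 0
        # accumulate row i of C as sum over j of A[i,j] * B[j,:]
        for jj in range(Ap[i], Ap[i + 1]):
            j, v = Aj[jj], Ax[jj]
            for kk in range(Bp[j], Bp[j + 1]):
                k = Bj[kk]
                sums[k] += v * Bx[kk]
                if nxt[k] == -1:          # first touch of column k this row
                    nxt[k] = head
                    head = k
                    length += 1
        # harvest the linked list and reset the workspace for the next row
        for _ in range(length):
            Cj.append(head)
            Cx.append(sums[head])
            temp = head
            head = nxt[head]
            nxt[temp] = -1
            sums[temp] = 0
        Cp.append(len(Cj))
    return Cp, Cj, Cx
```

For example, multiplying A = [[1,0,2],[0,3,0]] by B = [[0,1],[2,0],[0,4]] (both given in CSR form) yields Cp = [0,1,2], Cj = [1,0], Cx = [9,6], matching the dense product [[0,9],[6,0]].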

Related

BigBird, or Sparse self-attention: How to implement a sparse matrix?

This question is related to the new paper Big Bird: Transformers for Longer Sequences, mainly the implementation of the sparse attention specified in the supplemental material, part D. Currently, I am trying to implement it in PyTorch.
They suggest a new way to speed up the computation by blocking the original query and key matrices (see below).
When you do the matrix multiplication in step (b), you end up with something like the result shown in the second image.
So I was wondering: how would you go from that representation (the image above) to a sparse matrix (using PyTorch, see below)? In the paper they just say "simply reshape the result", and I do not know an easy way to do so, especially when I have multiple blocks in different positions (see step (c) in the first image).
RESOLUTION:
Huggingface has an implementation of BigBird in PyTorch.
I ended up following the guidelines in the paper. For unpacking the result I use torch.sparse_coo_tensor.
EDIT: Sparse tensors are still memory-hungry! The more efficient solution is described here
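This is not the Huggingface code, but a sketch of what "reshape the result" can mean in practice: scatter each block of the blocked product back to its global coordinates, producing exactly the index/value lists that torch.sparse_coo_tensor expects. The dict-of-blocks layout here is an assumption for illustration:

```python
def blocks_to_coo(blocks, block_size):
    """Unpack blocked matmul results into COO (rows, cols, vals) lists.

    blocks: dict mapping (block_row, block_col) -> block_size x block_size
    nested list of values. The returned lists can be fed directly to
    torch.sparse_coo_tensor([rows, cols], vals, size).
    """
    rows, cols, vals = [], [], []
    for (br, bc), blk in sorted(blocks.items()):
        for r in range(block_size):
            for c in range(block_size):
                v = blk[r][c]
                if v != 0:
                    # translate local block coordinates to global ones
                    rows.append(br * block_size + r)
                    cols.append(bc * block_size + c)
                    vals.append(v)
    return rows, cols, vals
```

The key point is the coordinate translation: a block at block position (br, bc) contributes entries at global row br*block_size + r and column bc*block_size + c, whatever positions the blocks occupy.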

Multiplication of large sparse Matrices without null values in scala

I have two very sparse distributed matrices of dimension 1,000,000,000 x 1,000,000,000 and I want to compute their product efficiently.
I tried to create a BlockMatrix from a CoordinateMatrix, but it uses a lot of memory (even though the number of non-zero entries is only around ~500,000,000) and the computation time is enormous.
So is there another way to create a sparse matrix and compute the multiplication efficiently in a distributed way in Spark? Or do I have to compute it manually?
You must obviously use a storage format for sparse matrices that exploits their sparsity.
Now, without knowing anything about how you handle matrices and which libraries you use, there's no helping you except to point you at the linear algebra libraries of your choice and their sparse storage formats; the "good old" Fortran-based libraries that underlie a lot of modern math libs support them, so chances are all you really need is a little googling for yourlibraryname + "sparse matrix".
second thoughts:
Sparse matrices really don't lend themselves to distribution very well; think about the coordination work the distribution requires compared to the actual multiplications/additions.
Also, ~5e8 non-zero elements in a 1e18-element matrix genuinely are a lot of memory, and since you don't specify how much you consider "a lot" to be, it's very possible there's nothing wrong. Assuming you're using the default double precision, that's 5e8 * 8 B = 4 GB of pure numbers, not counting the coordinates needed for sparse storage. So if you're seeing ~10 GB of memory used, I wouldn't be surprised at all.
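The back-of-the-envelope arithmetic above can be made concrete. In coordinate (COO) storage each non-zero carries a row index and a column index alongside its value, so the footprint is roughly three arrays, not one (a rough estimate; it ignores any per-object or container overhead):

```python
nnz = 500_000_000          # ~5e8 non-zero entries
value_bytes = 8            # double precision value
index_bytes = 8            # 64-bit row/column index

# values alone: the "4 GB of pure numbers" from the answer above
values_only = nnz * value_bytes
# values plus a row index and a column index per entry
coo_total = nnz * (value_bytes + 2 * index_bytes)

print(values_only / 1e9)   # 4.0 (GB)
print(coo_total / 1e9)     # 12.0 (GB)
```

With 32-bit indices the coordinate overhead halves, which is one reason index width matters at this scale.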
There is no built-in method in Spark to multiply sparse matrices, so I resolved it by reducing the sparsity of the matrices as much as possible before performing the multiplication with BlockMatrix (which does not support sparse matrices).
Last edit: Even with the sparsity optimization I had a lot of problems with large datasets, so in the end I implemented it myself. It now runs very fast. I hope sparse matrix multiplication will eventually be implemented in Spark, as I think there are a lot of applications that could make use of it.
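For anyone implementing it manually: the standard coordinate-format approach maps cleanly onto Spark's join/reduceByKey pattern. Key A's triples by their column index, key B's triples by their row index, join on that shared index, emit partial products keyed by the output coordinate, and sum. A single-machine sketch of that dataflow in plain Python (a Spark version would replace the dicts with RDD operations):

```python
from collections import defaultdict

def coo_multiply(A, B):
    """Multiply sparse matrices given as lists of (row, col, value) triples.

    Mirrors the distributed join/reduceByKey dataflow: join A on its
    column index with B on its row index, then sum partial products
    per output coordinate (i, k).
    """
    # "join" stage: group B's triples by their row index
    b_by_row = defaultdict(list)
    for j, k, b in B:
        b_by_row[j].append((k, b))
    # "map" + "reduce" stage: emit a*b for each match, sum by (i, k)
    C = defaultdict(float)
    for i, j, a in A:
        for k, b in b_by_row[j]:
            C[(i, k)] += a * b
    return [(i, k, v) for (i, k), v in sorted(C.items()) if v != 0]
```

In a distributed setting the join on the shared index j is exactly where the shuffle cost lives, which is why the coordination overhead mentioned above can dominate the arithmetic.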

Matlab Multiplication

If matrix A is in X and matrix B is in Y, doing the multiplication is just Z = X*Y. Correct, assuming the inner dimensions of the two arrays agree.
How can I compute it with a for loop?
The answer by ja72 is wrong; see my comments under it to see why. In general, for these simple linear algebra operations it's impossible for your code to beat the vectorized version, not even if you write it in C/MEX (unless your matrix has a particular sparsity structure that you can exploit). The reason is that under the hood MATLAB hands the actual job of matrix multiplication to the LAPACK library, written in Fortran, which in turn calls BLAS routines optimized for the particular machine architecture.
Yes, acai is correct, and I remember wondering the same thing when I started using MATLAB. To add some detail to what acai said: LAPACK is the Linear Algebra PACKage and is something a lot of other languages use to solve these types of problems; Python connects to it through SciPy, Java through jlapack, etc. BLAS is the Basic Linear Algebra Subprograms, which handle the basic operation of matrix multiplication you are asking about. acai is also right that you can essentially never beat the performance MATLAB gives for matrix multiplication; this is their bread and butter and they have spent decades optimizing these operations.
Yes, matrix multiplication is A*B and element-by-element multiplication is A.*B. If A is (NxM) and B is (MxK), then the code for C = A*B is
update:
C = zeros(N,K);
for i=1:N
    for j=1:K
        C(i,j) = A(i,:)*B(:,j);
    end
end
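The same inner loops written out in plain Python, for readers following along outside MATLAB. This is the naive O(N*M*K) triple loop, purely didactic; as the answers above explain, in practice you would let a vectorized library dispatch to an optimized BLAS instead:

```python
def matmul(A, B):
    """Naive matrix product: C[i][j] is the dot of row i of A with col j of B."""
    N, M = len(A), len(A[0])
    K = len(B[0])
    assert len(B) == M, "inner dimensions must agree"
    C = [[0] * K for _ in range(N)]   # preallocate, like zeros(N,K)
    for i in range(N):
        for j in range(K):
            for m in range(M):
                C[i][j] += A[i][m] * B[m][j]
    return C
```

For example, matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]].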

Is sparse matrix-vector multiplication available in Simulink/xPC?

I am trying to make my control algorithm more efficient since my matrices are sparse. Currently, I am doing conventional matrix-vector multiplications in Simulink/xPC for a real-time application. I cannot find a way to convert the matrix to a sparse one and perform that type of multiplication in a way that is compatible with xPC. Does anyone have an idea on how to do this?
It appears, at least as of earlier this year, that it is impossible to use sparse matrices in Simulink: see this Q&A on MathWorks' site. As the answerer is a Simulink software engineer, it seems authoritative. :)

MATLAB calculates INV wrong (for singular matrices)

MATLAB sometimes calculates INV wrong:
See this example
[ 8617412867597445*2^(-25), 5859840749966268*2^(-28)]
[ 5859840749966268*2^(-28), 7969383419954132*2^(-32)]
If you put this in MATLAB it has no inverse, but in a calculator it has one.
What is going on?
Please read What every scientist should know about floating point arithmetic
Next, don't compute the inverse anyway. An inverse matrix is almost never necessary, except in textbooks, where it is convenient to write. Sadly, many authors do not appreciate this fact, because they learned from textbooks by other people who also failed to understand that computing an inverse matrix is a bad idea in general.
Since this matrix is numerically singular in double precision arithmetic, the inverse of that matrix is meaningless.
Use of the MATLAB backslash operator will in general be better and faster than computing the inverse. Or use pinv, which is more robust to such problems.
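To see why the matrix above is numerically singular, compute its 2x2 determinant in the same double-precision arithmetic MATLAB uses. Python floats are the same IEEE doubles, so this pure-Python check reproduces the effect:

```python
# entries of the symmetric 2x2 matrix from the question, as doubles
a = 8617412867597445 / 2**25
b = 5859840749966268 / 2**28
d = 7969383419954132 / 2**32

det = a * d - b * b
# det is the difference of two nearly equal products of magnitude ~5e14,
# so relative to those products it sits at or below double-precision
# roundoff: the matrix is numerically singular in doubles, even though
# its exact rational determinant need not be exactly zero (which is
# what an exact-arithmetic calculator sees).
print(abs(det) / (a * d))
```

This is exactly the "catastrophic cancellation" the floating point article above discusses, and why backslash or pinv is preferable to inv here.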
Hi, I wanted to comment on Woodchips' answer, but since I'm a new user I can't seem to do that. That is one very interesting article and I must read it in more detail when I have the time.
With regard to matrix inversion, you can always use the cond command to calculate the condition number of the matrix; for a well-conditioned matrix the value should be close to unity. As Woodchips suggested, pinv also comes in handy if you need the pseudo-inverse of a non-square matrix.