How large a matrix can fit into the Eigen library? [closed] - matlab

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I am currently working on large-scale data, such as a 300000 x 300000 matrix. It is really hard to process in MATLAB due to an "Out of memory" error, so I decided to use Eigen. Is there any restriction on matrix size in Eigen?

Dense matrices in Eigen are stored in a contiguous block of memory, which cannot exceed 2 GB in a 32-bit application. So if you're running a 32-bit application, allocations may start to fail at around half that size, i.e. around 10,000 x 10,000 doubles (about 800 MB), because a contiguous block that large may not be available in a fragmented 32-bit address space. See for example this duplicate question.
The same issue will occur with any other library, since you're bounded by your RAM, not by the library.
However, if your big matrix consists mostly of zeros, you can resort to Eigen's SparseMatrix.
If your matrix isn't sparse, then you may store it on disk, but you'll get terrible performance when manipulating it.
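The memory-saving idea behind sparse storage can be sketched in a few lines of pure Python (illustrative only, not the Eigen API): keep just the (row, column, value) triplets of the nonzero entries, which is enough for operations like matrix-vector products.

```python
# Illustrative pure-Python sketch (not the Eigen API): store only the
# nonzero entries of a huge, mostly-zero matrix as (row, col, value)
# triplets -- the same idea behind Eigen's SparseMatrix.

def sparse_matvec(triplets, x, nrows):
    """Multiply a sparse matrix in triplet form by a dense vector x."""
    y = [0.0] * nrows
    for i, j, v in triplets:
        y[i] += v * x[j]
    return y

# A 5x5 matrix with only three nonzeros: A[0][1]=2, A[2][2]=5, A[4][0]=-1
triplets = [(0, 1, 2.0), (2, 2, 5.0), (4, 0, -1.0)]
x = [1.0, 1.0, 1.0, 1.0, 1.0]
print(sparse_matvec(triplets, x, 5))  # [2.0, 0.0, 5.0, 0.0, -1.0]
```

A 300000 x 300000 matrix with, say, ten nonzeros per row needs only a few million triplets instead of 9 x 10^10 dense entries.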

Related

How does error get back propagated through pooling layers? [closed]

This question does not appear to be about programming within the scope defined in the help center.
Closed last year.
I asked a question earlier that might have been too specific, so I'll ask again in more general terms. How does error get propagated backwards through a pooling layer when there are no weights to train? In the TensorFlow video at 6:36 (https://www.youtube.com/watch?v=Y_hzMnRXjhI) there's a GlobalAveragePooling1D after an Embedding layer. How does the error go backwards?
A layer doesn't need to have weights in order to back-propagate.
You can compute the gradient of a global average pool w.r.t. its inputs: it's simply the output gradient divided by the number of elements pooled.
It is a bit trickier when it comes to max pooling: in that case, you propagate gradients through the pooled indices. That is, during back-prop, the gradient is "routed" to the input element that contributed the maximum; no gradient is propagated to the other elements.
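The two rules above can be sketched in pure Python (illustrative, not TensorFlow code); here `grad_out` stands for the gradient arriving from the layer above:

```python
# Pure-Python sketch of how gradients pass through pooling layers.

def avg_pool_backward(grad_out, n):
    """Average pooling over n inputs: each input gets grad_out / n."""
    return [grad_out / n] * n

def max_pool_backward(grad_out, inputs):
    """Max pooling: the whole gradient is routed to the argmax element."""
    k = inputs.index(max(inputs))
    grads = [0.0] * len(inputs)
    grads[k] = grad_out
    return grads

print(avg_pool_backward(1.0, 4))                 # [0.25, 0.25, 0.25, 0.25]
print(max_pool_backward(1.0, [3.0, 7.0, 1.0]))   # [0.0, 1.0, 0.0]
```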

The maximum size of a matrix in MATLAB [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I want to solve the linear system Ax = b, where A is a 40000x400000 matrix and x and b are 400000x1 vectors. When I use A\b in MATLAB, I get an "out of memory" error because the matrix is too large. How can I solve my problem without using the command A\b in MATLAB?
Thank you for your help.
The problem is partly MATLAB and partly your machine.
First, think very carefully about whether you actually need to solve this problem with a 40000x400000 matrix. Maybe you can simplify it, maybe you can segment the problem, maybe the system is decoupled; just check very carefully.
For context, storing a 40000x400000 matrix of 8-byte floats takes around 120 GB. That's likely too much; there's a good chance you don't even have enough free disk space for it.
If this matrix has many zeros, at least far more zeros than non-zeros, then you can exploit MATLAB's sparse matrix functions. They avoid storing the whole matrix and, essentially, operate only on the non-zero entries.
If you are lazy but have a very good machine (say 1 TB of SSD), you might consider increasing the size of the swap space on Linux (or the paging file on Windows). That basically means allowing the computer to use disk space as if it were RAM. While memory wouldn't run out, the operations you need to perform would take insanely long, so consider starting with a smaller matrix to gauge the execution time, which should grow roughly with the cube of the matrix dimension.
So, case in point: try to review the problem you've got at hand.
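As a sketch of the matrix-free route, here is a minimal conjugate-gradient solver in pure Python: it solves Ax = b using only matrix-vector products, so A never has to be formed densely (a sparse or implicit matvec plugs right in). It assumes A is symmetric positive definite; for a general system you would use a least-squares variant instead.

```python
# Illustrative pure-Python conjugate gradient: solves A x = b using only
# matrix-vector products, so A is never stored as a dense array.
# Assumes A is symmetric positive definite.

def cg(matvec, b, iters=100, tol=1e-10):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x (x = 0 initially)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD example: A = [[4, 1], [1, 3]], b = [1, 2]
A = [[4.0, 1.0], [1.0, 3.0]]
matvec = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
x = cg(matvec, [1.0, 2.0])
print(x)  # approximately [0.0909, 0.6364], i.e. [1/11, 7/11]
```

MATLAB ships the same idea as pcg (and lsqr for rectangular systems), which take a function handle instead of an explicit matrix.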

What is the absolute fastest package/software to multiply sparse * dense matrices [closed]

We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 4 years ago.
I've tried MATLAB, but unfortunately this operation is not threaded there. I've also tried Eigen, and although it is threaded and scales quite well, its single-thread performance is a little worse than MATLAB's.
How can I multiply a general large sparse matrix by a dense matrix in the fastest way possible on the CPU (not the GPU)?
Use both. For a single-threaded environment, run MATLAB's routines; for multi-threaded, go with Eigen. And keep tabs on new developments, because in a highly competitive field like this, any advice you get here will be out of date in a month.

LDA and Dimensionality Reduction [closed]

This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I have a dataset consisting of about 300 objects with 84 features each. The objects are already separated into two classes. With PCA I'm able to reduce the dimensionality to about 24. I'm using 3 principal components covering about 96% of the variance of the original data. The problem I have is that PCA doesn't care about the ability to separate the classes from each other. Is there a way to combine PCA for reducing the feature space with LDA for finding a discriminant function for those two classes?
Or is there a way to use LDA to find the features that best separate the two classes in three-dimensional space?
I'm somewhat confused because I found this paper but don't really understand it: http://faculty.ist.psu.edu/jessieli/Publications/ecmlpkdd11_qgu.pdf
Thanks in advance.
You should have a look at this article on principal component regression (PCR, what you want if the variable to be explained is scalar) and partial least squares regression (PLSR) with MATLAB's Statistics Toolbox. In PCR, essentially, you choose the principal components that best explain the dependent variable; they may not be the ones with the largest variance.
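The PCR selection idea can be sketched in pure Python: rank principal-component score vectors by how strongly each correlates with the response, rather than by variance. The score vectors below are made up purely for illustration.

```python
# Illustrative sketch of the PCR idea: rank (hypothetical, precomputed)
# principal-component scores by how well each explains the response y,
# not by how much variance each captures.

def corr(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

y = [0.0, 1.0, 1.0, 0.0]                 # class labels / response
scores = {                               # made-up PC score vectors
    "PC1": [5.0, 5.1, 4.9, 5.0],         # high variance, useless for y
    "PC2": [-1.0, 1.0, 1.0, -1.0],       # low variance, separates classes
}
ranked = sorted(scores, key=lambda k: abs(corr(scores[k], y)), reverse=True)
print(ranked)  # ['PC2', 'PC1']
```

Here the low-variance component wins because it actually tracks the class label, which is exactly why variance-ranked PCA can discard the discriminative directions.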

Generate eigenimages of an image in MATLAB [closed]

Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
I am new to image processing and am trying to learn a few concepts by practically implementing certain functions. I heard about creating eigenimages of an image, so I tried to implement the same, to find out what they are and what properties they capture.
I obtained the eigenvectors using the eig function in MATLAB. How can I display these eigenimages using the vectors? Please forgive me if the question is wrong or rudimentary. Your help is much appreciated.
Assuming you have several images of size r x c, then, after taking the steps described on Wikipedia, you should now have eigenvectors ev1, ev2, ... of length r*c.
If this is the case, it should be fairly easy to turn these into images again:
myImage1 = reshape(ev1, r, c);
Check whether r and c are in the right order and whether you need a transpose, but this is basically it.
For showing them you may want to look into surf or image.
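As a sketch of what reshape(ev1, r, c) does, here is the column-major reshape in pure Python (MATLAB fills columns first); the toy vector is made up for illustration.

```python
# Pure-Python sketch of MATLAB's reshape(ev1, r, c): turn a length r*c
# eigenvector back into an r-by-c image, filling column by column
# (column-major order, as MATLAB does).

def reshape_colmajor(v, r, c):
    """Reshape a flat vector into r rows x c cols, column-major."""
    return [[v[j * r + i] for j in range(c)] for i in range(r)]

ev = [1, 2, 3, 4, 5, 6]            # a length-6 "eigenvector"
print(reshape_colmajor(ev, 2, 3))  # [[1, 3, 5], [2, 4, 6]]
```

For display you would typically also rescale the values into [0, 1] first, since eigenvector entries can be negative.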