I have a huge sparse matrix (1,000 x 1,000,000) that I cannot load in MATLAB (not enough RAM).
I want to visualize this matrix to get an idea of its sparsity and of the spread of its values.
Because of the memory constraints, I want to proceed as follows:
1- Divide the matrix into 4 matrices
2- Load each matrix in MATLAB and visualize it so that the colors give an idea of the values (and particularly of the zeros)
3- "Stick" the 4 images I will get in order to have a global idea for the original matrix
(i) Is it possible to load "part of a matrix" in matlab?
(ii) For the visualization tool, I read about spy (and daspect). However, this function only shows the locations of the non-zero values, regardless of their magnitude. Is there a way to add a color code?
(iii) How can I "stick" plots in order to make one?
If your matrix is sparse, then the current method of storing it (as a full matrix in a text file) is very inefficient, and certainly makes loading it into MATLAB very hard. However, I suspect that as long as it is sparse enough, it can still be loaded into MATLAB as a sparse matrix.
The traditional way of doing this would be to load it all in at once, then convert to sparse representation. In your case, however, it would make sense to read in the text file, one line at a time, and convert to a MATLAB sparse matrix on-the-fly.
You can find out if this is possible by estimating the sparsity of your matrix, and using this to see if the whole thing could be loaded into MATLAB's memory as a sparse matrix.
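For a rough feasibility check, here is a back-of-the-envelope sketch (the 1% density is a made-up figure; ~16 bytes per non-zero reflects 64-bit MATLAB's compressed-column storage):
num_rows = 1e3;  num_cols = 1e6;
density  = 0.01;                                 % assumed: 1% non-zero entries
nnz_est  = density * num_rows * num_cols;
% ~16 bytes per non-zero (8 B value + 8 B row index) plus 8 B per column pointer
bytes    = 16 * nnz_est + 8 * num_cols;
fprintf('Estimated sparse size: %.0f MB\n', bytes / 2^20);   % about 160 MB here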
Try something like: (untested code!)
% initialise sparse matrix (all zeros to start with)
sparse_matrix = sparse(num_rows, num_cols);
row_num = 1;
fid = fopen(filename);
% read each line of the text file in turn
while ~feof(fid)
    % fgetl reads one line; sscanf parses it into a column vector
    this_line = sscanf(fgetl(fid), '%f');
    % add row to sparse matrix (note transpose: sscanf returns a column)
    sparse_matrix(row_num, :) = this_line';
    row_num = row_num + 1;
end
fclose(fid);
% visualise using spy
spy(sparse_matrix)
Visualisation
With regards to visualisation: visualising a sparse matrix like this via a tool like imagesc is possible, but I believe it may internally create the full matrix – maybe someone can confirm if this is true or not. If it does, then it's going to cause you memory problems.
All spy is really doing is plotting in 2D the locations of the non-zero elements. You can fairly easily write your own spy function, which can have different coloured or sized points depending on the values at each location. See this answer for some examples.
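For instance, a minimal colour-coded spy can be built from find and scatter (a sketch, assuming the matrix fits in memory as sparse_matrix):
[r, c, v] = find(sparse_matrix);   % row/col indices and values of the non-zeros
scatter(c, -r, 4, v, 'filled');    % negate rows so row 1 appears at the top
colorbar;                          % the colour scale encodes the magnitudes
axis tight;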
Saving sparse matrices
As I say above, the format your matrix is saved in is pretty inefficient – for a matrix with only 10% non-zero entries, around 95% of your text file will be zeros and spaces. I don't know where this data has come from, but if you have any control over its creation (e.g. it comes from another program you have written) it would make much more sense to save only the non-zero elements in the format row_idx, col_idx, value.
You can then use spconvert to import the sparse matrix directly.
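For example, if the text file stores one row_idx, col_idx, value triplet per line, something like this should work (a sketch; nonzeros.txt is a placeholder name):
data = load('nonzeros.txt');   % N-by-3 array of [row, col, value] triplets
S = spconvert(data);           % build the sparse matrix directly from the triplets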
One of the simplest methods (if you can actually store the full sparse matrix in RAM) is to use gnuplot to visualize the sparsity pattern.
I was able to spy matrices of size 10-20GB using gnuplot without problems. But make sure you use png or jpeg formats to output the image. Note that you don't need the values of the non-zero entries, only their integer coordinates (row, col). Then plot them with: plot "row_col.dat" using 1:2 with points.
This puts the row on the x axis and the column on the y axis and plots the non-zero entries. It is very easy to do, and it is the most scalable solution I know. Gnuplot works at decent speed even for very large datasets (>10GB of [row, col] pairs), whereas Matlab just hangs (with due respect).
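If your data starts out in MATLAB, a small sketch like this would dump the coordinates in the format gnuplot expects (writing the row_col.dat used above):
[r, c] = find(S);                 % coordinates of the non-zero entries
fid = fopen('row_col.dat', 'w');
fprintf(fid, '%d %d\n', [r c]');  % one "row col" pair per line
fclose(fid);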
I use imagesc() to visualise arrays. It scales the values in the array to values between 0 and 1, then plots the array like a greyscale bitmap image (of course you can change the colormap to make detail easier to see).
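To tie this back to the original block-by-block plan, a hedged sketch: visualise the matrix in column blocks and "stick" the images together with subplot, so only one block is expanded to full at a time:
num_blocks = 4;
block_cols = size(sparse_matrix, 2) / num_blocks;   % assumes an exact division
for b = 1:num_blocks
    idx = (b-1)*block_cols + 1 : b*block_cols;
    subplot(1, num_blocks, b);                      % "stick" the images side by side
    imagesc(full(sparse_matrix(:, idx)));           % expand just this block
    axis off;
end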
Related
I have a sparse 5018x5018 matrix in MATLAB, which has about 100k values set to 1 (i.e., about 99.6% empty).
I'm trying to flip roughly 5% of those zeros to ones (i.e., about 1.25m entries). I have the x and y indices in the matrix I want to flip.
Here is what I have done:
sizeMat = size(network);
idxToReplace = sub2ind(sizeMat, x_idx, y_idx);  % convert subscripts to linear indices
network(idxToReplace) = 1;                      % this assignment is the slow part
This is incredibly slow, in particular the last line. Is there any way to make this operation run noticeably faster, preferably without using mex files?
This should be faster:
idxToReplace = sparse(x_idx, y_idx, ones(size(x_idx)), size(network,1), size(network,2)); % create a sparse matrix with ones at the locations to flip
network = network + idxToReplace; % add the two matrices
I think your solution is very slow because inserting 1.25e6 new non-zeros via indexed assignment forces the sparse matrix to restructure its internal storage. In my solution, you only create a second sparse matrix and just sum the two.
I am filling a sparse matrix P (230k x 290k) with values coming from a text file which I read line by line; here is the (simplified) code:
while ...
    C = textscan(text_line, '%d', 'delimiter', ',', 'EmptyValue', 0);
    line_number = line_number + 1;
    P(line_number,:) = C{1};
end
the problem I have is that while at the beginning the
P(line_number,:) = C{1};
statement is fast, after a few thousand lines it becomes extremely slow, I guess because MATLAB needs to find the memory space to allocate every time. Is there a way to pre-allocate memory with sparse matrices? I don't think so, but maybe I am missing something. Any other advice which can speed up the operation (e.g. does having a lot of free RAM make a difference?)
There's a sixth input argument to sparse that tells the number of nonzero elements in the matrix. That's used by Matlab to preallocate:
S = sparse(i,j,s,m,n,nzmax) uses vectors i, j, and s to generate an m-by-n sparse matrix such that S(i(k),j(k)) = s(k), with space allocated for nzmax nonzeros.
So you could initialize with
P = sparse([],[],[],230e3,290e3,nzmax);
You can make a guess about the number of nonzeros (perhaps by checking the file size?) and use that as nzmax. If it turns out you need more nonzero elements in the end, MATLAB will reallocate on the fly (slowly).
By far the fastest way to generate a sparse matrix within MATLAB is to load all the values in at once, then generate the sparse matrix in one call to sparse. You have to load the data and arrange it into vectors defining the row and column indices and the value for each filled cell. You can then call sparse using the S = sparse(i,j,s,m,n) syntax.
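Applied to the loop above, a hedged sketch (filename and the comma-separated format are taken from the question; growing the triplet vectors inside the loop is itself not free, so preallocating them too would be even better):
rows = []; cols = []; vals = [];
fid = fopen(filename);
line_number = 0;
while ~feof(fid)
    C = sscanf(fgetl(fid), '%f,');    % parse one comma-separated line
    line_number = line_number + 1;
    nz = find(C);                     % keep only the non-zero entries
    rows = [rows; repmat(line_number, numel(nz), 1)];
    cols = [cols; nz];
    vals = [vals; C(nz)];
end
fclose(fid);
P = sparse(rows, cols, vals, 230e3, 290e3);   % one call builds the whole matrix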
I need to convolve a matrix with many other matrices with few calls to convn.
for example: I have size(MyMat) = [fm, fm, 1, bSize] and size(masks) = [s, s, maskNum]
I want res(:,:,k,:) to be the result of convolving masks(:,:,k) with MyMat
res(:,:,k,:)=convn(MyMat,masks(:,:,k));
since the convolution takes up over 80% of the running time for my script and is called hundreds of thousands of times, I don't want to use a loop.
I'm looking for the fastest way to do this. Basically, you could say I have bSize matrices, and I want to apply each of the maskNum convolution masks in masks to all of them with as few calls to convolution as possible.
The matrices are all small and non-sparse, so FFT-based convolution will probably slow it down (as a commenter here verified :) )
(The reason I have a 1 in the size of MyMat is because I actually have more elements in that dimension, but I compute the convolution for each element in that dimension in a loop)
The main goal is simply to eliminate the need for the following loop, or make it parallel with very little overhead, if possible:
for i = 1:size(convMask, 3)
    res(:,:,:,i) = convn(MyArray, convMask(:,:,i));
end
parallelizing for the GPU would be great if there's a way to do this with less overhead than the usual parfor
Thank you!
I assume that you are preallocating the array res correctly? Without a simple demo of what you're doing and an idea of the sizes of fm, s, etc., one can only make guesses to help you. If your matrices are large enough, you might look into FFT-based convolution methods (there are some for convn on the MATLAB File Exchange). If the data is sparse (> 50% zeros), you could try converting the convolution to matrix multiplication and using sparse data types. You could also try gpuArray/convn if you have a decent GPU.
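As a concrete sketch of the preallocation point (sizes taken from the question; a full convolution of an fm x fm slice with an s x s mask yields (fm+s-1) x (fm+s-1)):
res = zeros(fm + s - 1, fm + s - 1, maskNum, bSize);   % allocate once, up front
for k = 1:maskNum
    res(:,:,k,:) = convn(MyMat, masks(:,:,k));         % fill a preallocated slice
end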
I want to compute a general PCA matrix for a dataset, and I will use it to reduce the dimensionality of SIFT descriptors. I have already found some algorithms to compute it, but I couldn't find a way to compute it using MATLAB.
Can someone help me?
[coeff, score] = princomp(X)
is the right thing to do, but knowing how to use it is a little tricky.
My understanding is that you did something like:
sift_image = sift_fun(img)
which gives you a binary image: sift_feature?
(Even if not binary, this still works.)
Inputs, formulating X:
To use princomp/pca, formulate X so that each column is a numel(sift_image) x 1 vector (i.e. sift_image(:))
Do this for all your images and line them up as columns in X. So X will be numel(sift_image) x num_images.
If your images aren't the same size (e.g. pixel dimensions different, more or less of a scene in the images), then you'll need to bring them into some common space, which is a whole different problem.
Unless your stuff is binary, you'll probably want to de-mean/normalize X, both in the column direction (i.e. normalizing each individual image) and row direction (de-meaning the whole dataset).
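For instance, a hedged sketch of that de-meaning/normalisation (one reasonable choice among several; bsxfun keeps it compatible with older releases):
% de-mean the whole dataset: subtract each pixel's mean across images (rows)
X = bsxfun(@minus, X, mean(X, 2));
% normalise each individual image (columns) by its standard deviation
X = bsxfun(@rdivide, X, std(X, 0, 1) + eps);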
Outputs
score is the set of eigenvectors: it will be num_pixels x num_images.
To get, say, the first eigenvector back into an image shape, do:
first_component = reshape(score(:,1),size(im));
And so on for the rest of the components. There are as many components as input images.
Each row of coeff is the set of num_images (equal to num_components) weights that can be applied to generate each input image. i.e.
input_image_1 = reshape(score * coeff(1,:)', size(original_im));
where input_image_1 is the correct, original shape (up to the mean that princomp subtracts internally),
coeff(1,:)' is a vector (num_components x 1, equal to num_images x 1 here), and
score is num_pixels x num_images.
(Disclaimer: I may have the columns/rows mixed up, but the descriptions are correct.)
Does that help?
If you have access to the Statistics Toolbox, you can use the command princomp, or in recent versions the command pca.
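A minimal sketch with the newer pca, following the X formulation above (k is a hypothetical number of components; pca centers X internally, so the mean is added back for reconstruction):
[coeff, score] = pca(X);     % X is num_pixels x num_images, as formulated above
k = 10;                      % hypothetical number of components to keep
X_approx = bsxfun(@plus, score(:,1:k) * coeff(:,1:k)', mean(X, 1));  % rank-k reconstruction
img1 = reshape(X_approx(:,1), size(sift_image));    % first image, back in image shape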
I have a time series of temperature profiles that I want to interpolate. How do I do this if my data is irregularly spaced?
Here are the specifics of the matrix:
The temperature is 30x365
The time is 1x365
Depth is 30x1
Both time and depth are irregularly spaced. How can I interpolate them onto a regular grid?
I have looked at interp2 and TriScatteredInterp in Matlab, however the problem are the following:
interp2 works only if data is in a regular grid.
TriScatteredInterp works only if the vectors are column vectors. Although time and depth are both column vectors, temperature is not.
Thanks.
The function interp2 does not require a regularly spaced measurement grid at all; it only requires a monotonic one. That is, the sampling positions stored in the vectors depths and times must be increasing (or decreasing), and that's all.
Assuming this is indeed the situation* and that you want to interpolate at regular positions** stored in the vectors rdepths and rtimes, you can do:
[JT, JD] = meshgrid(times, depths); %% The irregular measurement grid
[RT, RD] = meshgrid(rtimes, rdepths); %% The regular interpolation grid
TemperaturesOnRegularGrid = interp2(JT, JD, TemperaturesOnIrregularGrid, RT, RD);
* : If not, you can sort on rows and columns to come back to a monotonic grid.
**: In fact interp2 has no restriction on the output grid (it can be irregular or even non-monotonic).
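A sketch of that sorting step (*), using the names above (temperature laid out as depth x time):
[times,  it] = sort(times);    % make the time axis monotonic
[depths, id] = sort(depths);   % make the depth axis monotonic
TemperaturesOnIrregularGrid = TemperaturesOnIrregularGrid(id, it);   % reorder to match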
I would fit your data to a spline or polynomial and then re-sample at regular intervals. I would highly recommend the polyfitn function. Actually, anything by John D'Errico is incredible. Aside from that, I have used this function in the past when I had data on an irregularly spaced 3D problem, and it worked reasonably well. If your data set has good support, which I suspect it does, this will be a piece of cake. Enjoy! Hope this helps!
Try the GridFit tool on MATLAB Central by John D'Errico. To use it, pass in your two independent data vectors (time & depth), the dependent data matrix (temperature), along with the regularly spaced X & Y data points to use. By default the tool also does smoothing for overlapping (or nearly overlapping) data points. If this is not desired, you can override this (and other options) through a wide range of configuration options. Example code:
%Establish regularly spaced points
num_points = 20;
time_pts = linspace(min(time),max(time),num_points);
depth_pts = linspace(min(depth),max(depth),num_points);
%Run interpolation (with smoothing)
Pest = gridfit(depth, time, temp, depth_pts, time_pts);