Restricted Boltzmann Machine - preprocessing data - matlab

I am programming in MATLAB and want to use RBMs with real-valued input, like greyscale images, so I tried to follow what Hinton said in this article.
The images have integer values in [0, 255] and are stored in a matrix D which is [numImages x numPixel]. So I started preprocessing the data:
scaled the entire dataset so that all the values are in [0, 1] with
D = D / 255;
centered every pixel to have zero mean across all images, so I subtracted its mean value from every column of the matrix with
imgMean = mean(D); % row vector
D = D - repmat(imgMean, size(D, 1), 1);
divided the entire dataset by its global standard deviation, so that the data as a whole has unit variance, with
D = D / std(D(:));
But when I try to plot the images, the result is clearly very dark, since many values become negative and are clipped to zero.
Is this ok or did I make any mistake with the preprocessing?
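For reference, here is the same preprocessing collected into one self-contained snippet (a sketch of exactly the steps above, with the number of images taken from size(D, 1)):
% D is [numImages x numPixel] with integer values in [0, 255]
D = double(D) / 255;                      % scale to [0, 1]
imgMean = mean(D);                        % per-pixel mean (row vector)
D = D - repmat(imgMean, size(D, 1), 1);   % zero mean for every pixel
D = D / std(D(:));                        % divide by the global standard deviation
% After this, values are no longer confined to [0, 1], so displaying them
% directly will clip the negatives and look dark; imshow(X, []) on a single
% reshaped image rescales the display range instead of clipping.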

Related

Image bit-plane decomposition

I was wondering how I could extract bit planes of an image for image compression in MATLAB?
Getting individual bit planes is very easy in MATLAB. Use the bitget function.
bitget takes in an array / matrix of an integral type (uint8, uint16, etc.) and it returns an array / matrix of the same size that gives you the bit at a specified position.
For example, supposing that your image was A of size M x N and you wanted the least significant bit, you would do this:
B = bitget(A, 1);
B would be an M x N matrix where each location gives you the least significant bit of the corresponding pixel in the image. You would change the second parameter to select the bit position you want, from 1 (the least significant bit) up to K (the most significant bit), where K is the number of bits the integer type supports.
If you wanted all bit planes in a single 3D matrix, that can easily be done in the following way assuming an 8-bit unsigned integer grayscale image stored in A:
B = zeros(size(A, 1), size(A, 2), 8, 'uint8');
for idx = 1 : 8
B(:,:,idx) = bitget(A, idx);
end
This will produce a 3D matrix B of 8 slices where the first slice (B(:,:,1)) denotes the LSB at each pixel location up to the last slice (B(:,:,8)) which denotes the MSB at each pixel location.
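If you later need to go back from the bit planes to the image (for example after discarding some planes for compression), a minimal reconstruction sketch under the same assumptions (8-bit image A, planes in B) would be:
% Recombine the bit planes by weighting each one with its power of two
A_rec = zeros(size(A), 'uint8');
for idx = 1 : 8
A_rec = A_rec + B(:,:,idx) * 2^(idx-1);
end
% A_rec is now identical to the original image A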
Read more about bitget on MathWorks' official documentation on the function: http://www.mathworks.com/help/matlab/ref/bitget.html

plot two matrices both of (4*36 double) size in MATLAB

I would like to plot two matrices, both of size 4*36 double. The first contains rho and the second contains depths for the 36 locations.
Well, I looked into surf, but it reads two arrays and one matrix rather than two matrices, and yes, I would like to plot them as a column graph.
here is an example
rho= magic(36);
rho(5:1:end,:)=[];
D= magic(36);
D(5:1:end,:)=[];
D=sort(D);
So right now the matrix rho contains the densities for the 36 locations at four different depths. The matrix D contains the four different depths at which the readings in rho were taken. The first element in the first matrix corresponds to the first element in the second matrix, and so on.
In the end, what I would like to have is the 36 columns with the different readings from rho plotted against the appropriate depth in D.
I hope this makes it somewhat clearer.
Simple example of plotting four sets of X and Y data:
X = repmat(1:36, [4 1]);
Y(1,:) = rand(1,36);
Y(2,:) = 0.2 * (1:36);
Y(3,:) = 5 * sin(linspace(-pi,pi,36));
Y(4,:) = 0.1 * (1:36).^2;
figure
plot(X', Y')
This results in
Note - in order to get four series to plot like this, the data has to be in COLUMNS. The original data was in a 4x36 matrix, so it was in ROWS. I used the transpose operator (apostrophe - X' rather than just X) to get the data organized in columns.
Maybe this helps...
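If a simple line plot is acceptable, the same column-pairing idea can be applied directly to the rho and D matrices from the question (a sketch, assuming each column holds the four readings of one location):
% plot(X, Y) with two same-sized matrices pairs them column by column,
% so each of the 36 locations becomes its own line of four points
figure
plot(D, rho)
xlabel('Depth (D)')
ylabel('Density (rho)')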

Using SVD to compress an image in MATLAB

I am brand new to MATLAB but am trying to do some image compression code for grayscale images.
Questions
How can I use SVD to trim off the smallest singular values and reconstruct a compressed image?
Work/Attempts so far
My code so far is:
B=imread('images1.jpeg');
B=rgb2gray(B);
doubleB=double(B);
%read the image and store it as matrix B, convert the image to a grayscale
%photo and convert the matrix to a class 'double' for values 0-255
[U,S,V]=svd(doubleB);
This allows me to successfully decompose the image matrix with the singular values stored in variable S.
How do I truncate S (which is 167x301, class double)? Let's say of the 167 singular values I want to keep only the largest 100 (or any N, really); how do I do that and reconstruct the compressed image?
Updated code/thoughts
Instead of putting a bunch of code in the comments section, this is the current draft I have. I have been able to successfully create the compressed image by manually changing N, but I would like to do 2 additional things:
1- Show a panel of images for various compressions (i.e., run a loop for N = 5, 10, 25, etc.)
2- Somehow calculate the difference (error) between each image and the original and graph it.
I am horrible with understanding loops and output, but this is what I have tried:
B=imread('images1.jpeg');
B=rgb2gray(B);
doubleB=im2double(B);
%read the image and store it as matrix B, convert the image to a grayscale
%photo and convert the image to a class 'double'
[U,S,V]=svd(doubleB);
C=S;
for N=[5,10,25,50,100]
C(N+1:end,:)=0;
C(:,N+1:end)=0;
D=U*C*V';
%Use singular value decomposition on the image doubleB, create a new matrix
%C (for Compression diagonal) and zero out all entries above N, (which in
%this case is 100). Then construct a new image, D, by using the new
%diagonal matrix C.
imshow(D);
error=C-D;
end
Obviously there are some errors, because I don't get multiple pictures and don't know how to "graph" the error matrix.
Although this question is old, it has helped me a lot to understand SVD. I have modified the code you have written in your question to make it work.
I believe you might have solved the problem, however just for the future reference for anyone visiting this page, I am including the complete code here with the output images and graph.
Below is the code:
close all
clear all
clc
%reading and converting the image
inImage=imread('fruits.jpg');
inImage=rgb2gray(inImage);
inImageD=double(inImage);
% decomposing the image using singular value decomposition
[U,S,V]=svd(inImageD);
% Using different number of singular values (diagonal of S) to compress and
% reconstruct the image
dispEr = [];
numSVals = [];
for N=5:25:300
% store the singular values in a temporary var
C = S;
% discard the diagonal values not required for compression
C(N+1:end,:)=0;
C(:,N+1:end)=0;
% Construct an Image using the selected singular values
D=U*C*V';
% display and compute error
figure;
buffer = sprintf('Image output using %d singular values', N);
imshow(uint8(D));
title(buffer);
error=sum(sum((inImageD-D).^2));
% store vals for display
dispEr = [dispEr; error];
numSVals = [numSVals; N];
end
% display the error graph
figure;
plot(numSVals, dispEr);
title('Error in compression');
grid on
xlabel('Number of Singular Values used');
ylabel('Error between compressed and original image');
Applying this to the following image:
Gives the following result with only the first 5 singular values,
with the first 30 singular values,
and with the first 55 singular values.
The change in error with increasing number of singular values can be seen in the graph below.
Here you can notice from the graph that using approximately the first 200 singular values yields approximately zero error.
Just to start, I assume you're aware that the SVD is really not the best tool to decorrelate the pixels in a single image. But it is good practice.
OK, so we know that B = U*S*V'. And we know S is diagonal, and sorted by magnitude. So by using only the top few values of S, you'll get an approximation of your image. Let's say C=U*S2*V', where S2 is your modified S. The sizes of U and V haven't changed, so the easiest thing to do for now is to zero the elements of S that you don't want to use, and run the reconstruction. (Easiest way to do this: S2=S; S2(N+1:end, :) = 0; S2(:, N+1:end) = 0;).
Now for the compression part. U is full, and so is V, so no matter what happens to S2, your data volume doesn't change. But look at what happens to U*S2. (Plot the image). If you kept N singular values in S2, then only the first N rows of S2 are nonzero. Compression! Except you still have to deal with V. You can't use the same trick after you've already done (U*S2), since more of U*S2 is nonzero than S2 was by itself. How can we use S2 on both sides? Well, it's diagonal, so use D=sqrt(S2), and now C=U*D*D*V'. So now U*D has only N nonzero columns, and D*V' has only N nonzero rows. Transmit only those quantities, and you can reconstruct C, which is approximately like B.
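A minimal sketch of that split, assuming [U,S,V] = svd(B) for the image matrix B and a chosen N (the names L and R are just for illustration):
S2 = S; S2(N+1:end, :) = 0; S2(:, N+1:end) = 0;
D = sqrt(S2);                  % the diagonal square root described above
L = U * D;                     % only its first N columns are nonzero
R = D * V';                    % only its first N rows are nonzero
% Store/transmit only L(:, 1:N) and R(1:N, :), then reconstruct:
C = L(:, 1:N) * R(1:N, :);     % equals U*S2*V', an approximation of B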
For example, here's a 512 x 512 B&W image of Lena:
We compute the SVD of Lena. Choosing the singular values above 1% of the maximum singular value, we are left with just 53 singular values. Reconstructing Lena with these singular values and the corresponding (left and right) singular vectors, we obtain a low-rank approximation of Lena:
Instead of storing 512 * 512 = 262144 values (each taking 8 bits), we can store 2 x (512 x 53) + 53 = 54325 values, which is approximately 20% of the original size. This is one example of how SVD can be used to do lossy image compression.
Here's the MATLAB code:
% open Lena image and convert from uint8 to double
Lena = double(imread('LenaBW.bmp'));
% perform SVD on Lena
[U,S,V] = svd(Lena);
% extract singular values
singvals = diag(S);
% find out where to truncate the U, S, V matrices
indices = find(singvals >= 0.01 * singvals(1));
% reduce SVD matrices
U_red = U(:,indices);
S_red = S(indices,indices);
V_red = V(:,indices);
% construct low-rank approximation of Lena
Lena_red = U_red * S_red * V_red';
% print results to command window
r = num2str(length(indices));
m = num2str(length(singvals));
disp(['Low-rank approximation used ',r,' of ',m,' singular values']);
% save reduced Lena
imwrite(uint8(Lena_red),'Reduced Lena.bmp');
Taking the first n largest eigenvalues and their corresponding eigenvectors may solve your problem. For PCA, multiplying the original data by the eigenvectors associated with the largest eigenvalues reconstructs your image from an n x d projection, where d is the number of eigenvectors kept.
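A rough sketch of that PCA idea (assumptions: the grayscale image is in a matrix X and d is the number of leading eigenvectors to keep; this is not the original poster's code):
d = 50;                                    % example number of components kept
X = double(X);                             % treat rows as observations
mu = mean(X, 1);
Xc = X - repmat(mu, size(X, 1), 1);        % center each column
[Vec, Val] = eig(cov(Xc));                 % eigen-decomposition of the covariance
[~, order] = sort(diag(Val), 'descend');   % sort eigenvalues, largest first
Vd = Vec(:, order(1:d));                   % keep the d leading eigenvectors
X_approx = Xc * Vd * Vd' + repmat(mu, size(X, 1), 1);   % rank-d reconstruction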

MATLAB windowed FFT for a 2D matrix data (image)

Data: Say I have a 2000 rows by 500 column matrix (image)
What I need: Compute the FFT of 64-row by 10-column chunks of the above data. In other words, I want to compute the FFT of a 64 x 10 window that is run across the entire data matrix. The FFT result is used to compute a scalar value (say, the peak amplitude frequency), which is used to create a new "FFT value" image.
Now, I need the final FFT image to be the same size as the original data (2000 X 500).
What is the fastest way to accomplish this in MATLAB? I am currently using for loops, which are relatively slow. Also, I use interpolation to size the final image up to the original data size.
As @EitanT pointed out, you can use blockproc for batch block processing of an image J. However you should define your function handle as
fun = @(block_struct) fft2(block_struct.data);
B = blockproc(J, [64 10], fun);
For a [2000 x 500] matrix this will give you a [2000 x 500] output of complex Fourier values, evaluated at sub-sampled pixel locations with a local support (size of the input to FFT) of [64 x 10]. Now, to replace those values with a single, e.g. with the peak log-magnitude, you can further specify
fun = @(block_struct) max(max(log(abs(fft2(block_struct.data)))));
B = blockproc(J, [64 10], fun);
The output then is a [2000/64 x 500/10] output of block-patch values, which you can resize by nearest-neighbor interpolation (or something else for smoother versions) to the desired [2000 x 500] original size
C = imresize(B, [2000 500], 'nearest');
I can include a real image example if it will further help.
Update: To get overlapping blocks you can use the 'BorderSize' option of blockproc by setting the overlap [V H] such that the final windows of size [M + 2*V, N + 2*H] will still be [64, 10] in size. Example:
fun = @(block_struct) log(abs(fft2(block_struct.data)));
V = 16; H = 3; % overlap values
overlap = [V H];
M = 32; N = 4; % non-overlapping values
B1 = blockproc(J, [M N], fun, 'BorderSize', overlap); % final windows are 64 x 10
However, this will work with keeping the full Fourier response, not the single-value version with max(max()) above.
See also this post for filtering using blockproc: Dealing with “Really Big” Images: Block Processing.
If you want to apply the same function (in your case, the 2-D Fourier transform) on individual distinct blocks in a larger matrix, you can do that with the blkproc function, which is replaced in newer MATLAB releases by blockproc.
However, I infer that you wish to apply fft2 to overlapping blocks in a "sliding window" fashion. For this purpose you can use colfilt with the 'sliding' option. Note that the function we're applying to each block is the FFT:
block_size = [64, 10];
temp_size = 5 * block_size;
col_func = @(x) cellfun(@(y) max(max(abs(fft2(reshape(y, block_size))))), num2cell(x, 1));
B = colfilt(A, block_size, temp_size, 'sliding', col_func);
How does this work? colfilt processes the matrix A by rearranging each "sliding" block into a separate column of a new temporary matrix, and then applying the col_func to this new matrix. col_func in turn restores each column into the original block and applies fft2 on it, returning the largest amplitude value for each column.
Important things to note:
Since this temporary matrix includes all possible "sliding" blocks, memory can be a limitation. Therefore, in order to use less memory in the calculations, colfilt breaks the original matrix A up into sub-matrices of size temp_size and performs the calculations on each separately. The resulting matrix B is, of course, still the same.
Each element in the resulting matrix B is computed from its corresponding block neighborhood. The larger your image is, the more sliding blocks there are to process, so the computation time grows quickly; expect to wait quite a bit until MATLAB finishes processing all the sliding windows on your 2000-by-500 matrix.

matlab image processing 3d

I have 100 B&W images of something. The problem is I want to convert each image to 0/1 format as an M-by-N matrix, then stack the images one on top of another, and save them in M-by-N-by-100 form.
How do I do this, and where should I start?
_jaysean
Your question is vague and hard to understand, but my guess is that you want to take 100 M-by-N grayscale intensity images, threshold them to create logical matrices (i.e. containing zeroes and ones), then put them together into one M-by-N-by-100 matrix. You can do the thresholding by simply picking a threshold value yourself, like 0.5, and applying it to an image A as follows:
B = A > 0.5;
The matrix B will now be an M-by-N logical matrix with ones where A is greater than 0.5 and zeroes where A is less than or equal to 0.5.
If you have the Image Processing Toolbox, you could instead use the function GRAYTHRESH to pick a threshold and the function IM2BW to apply it:
B = im2bw(A,graythresh(A));
Once you do this, you can easily put the images into an M-by-N-by-100 logical matrix. Here's an example of how you could do this in a loop, assuming the variables M and N are defined:
allImages = false(M,N,100); %# Initialize the matrix to store all the images
for k = 1:100
%# Here, you would load your image into variable A
allImages(:,:,k) = im2bw(A,graythresh(A)); %# Threshold A and add it to
%# the matrix allImages
end
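If the images live on disk as sequentially numbered files, the load step inside the loop could look like this (the file naming here is purely hypothetical):
fname = sprintf('image%03d.png', k);   % hypothetical names: image001.png, image002.png, ...
A = imread(fname);                     % M-by-N grayscale image assumed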