Image bit-plane decomposition - MATLAB

I was wondering how I could extract bit planes of an image for image compression in MATLAB?

Getting individual bit planes is very easy in MATLAB. Use the bitget function.
bitget takes in an array / matrix of an integral type (uint8, uint16, etc.) and it returns an array / matrix of the same size that gives you the bit at a specified position.
For example, supposing that your image was A of size M x N and you wanted the least significant bit, you would do this:
B = bitget(A, 1);
B would be an M x N matrix where each location gives you the least significant bit of the corresponding pixel in the image. To get a different bit plane, change the second parameter to the bit position you want: from 1, the least significant bit, up to K, the most significant bit that the type supports (8 for uint8, 16 for uint16, and so on).
If you wanted all bit planes in a single 3D matrix, that can easily be done in the following way assuming an 8-bit unsigned integer grayscale image stored in A:
B = zeros(size(A, 1), size(A, 2), 8, 'uint8');
for idx = 1 : 8
    B(:,:,idx) = bitget(A, idx);
end
This will produce a 3D matrix B of 8 slices where the first slice (B(:,:,1)) denotes the LSB at each pixel location up to the last slice (B(:,:,8)) which denotes the MSB at each pixel location.
Read more about bitget on MathWorks' official documentation on the function: http://www.mathworks.com/help/matlab/ref/bitget.html
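If you later need to put the planes back together (for instance after processing or compressing them separately), here is a minimal sketch of the reconstruction, assuming B was built with the loop above:
% Reassemble the original image from its bit planes.
A_rec = zeros(size(A), 'uint8');
for idx = 1 : 8
    % Shift each plane back to its bit position and OR it into the result.
    A_rec = bitor(A_rec, bitshift(B(:,:,idx), idx - 1));
end
% isequal(A_rec, A) should return true.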

Related

Histogram equalization for non-images in MATLAB

I have a vector of values and I want to transform them so that their histogram is closer to a uniform distribution, using MATLAB. I am aware of histeq in MATLAB, which takes an image as input and assumes the intensities are in the 0-255 range. I am looking for a more general version of histeq.
You are looking to do a full-scale contrast stretch, correct? If so, this function will work. You can change K to be the largest value in your vector if you are not using 8-bit integers.
function result = myfscs(image)
% Full-scale contrast stretch: linearly map [min, max] of the input to [0, K].
K = 255;
A = min(image(:));
B = max(image(:));
P = K / (B - A);
L = A * K / (B - A);
J = (P .* image - L);
result = uint8(J); % doesn't have to be a uint8 returned
end
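If you need actual histogram equalization (an approximately uniform output distribution) rather than a linear stretch, one general option is to map each value through the empirical CDF of the data. A minimal sketch, with equalize_vector as an illustrative name:
function v_eq = equalize_vector(v)
% Map values to [0, 1] through the empirical CDF of v, so the output
% histogram is approximately uniform (ties map to the same output value).
[~, ~, idx] = unique(v(:));         % idx(i) is the rank bin of element i
counts = accumarray(idx, 1);        % occurrences of each unique value
cdf = cumsum(counts) / numel(v);    % empirical CDF at each unique value
v_eq = reshape(cdf(idx), size(v));  % look up the CDF value for every element
end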

Restricted Boltzmann Machine - preprocessing data

I am programming in MATLAB and want to use RBMs with real-valued input, like greyscale images, so I tried to follow what Hinton said in this article.
The images have integer values in [0, 255] and are stored in a matrix D which is [numImages x numPixel]. So I started preprocessing the data:
1. Scaled the entire dataset so that all the values are in [0, 1] with
D = D / 255;
2. Brought every pixel to zero mean across all images, subtracting from every column of the matrix its mean value with
imgMean = mean(D); % row vector
D = D - repmat(imgMean, size(D, 1), 1);
3. Divided the entire dataset by its standard deviation, so that every pixel has unit variance, with
D = D / std(D(:));
But when I try to plot the images, the result is clearly very dark, since many values become negative and are clipped to zero.
Is this ok or did I make any mistake with the preprocessing?
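For what it's worth, a dark plot is expected once the data are zero-mean: imshow clips negative values to black. A minimal sketch for viewing one standardized image by rescaling it for display only (imgHeight and imgWidth are assumed dimensions, not defined in the question):
% Reshape one row of D back into an image and let imshow rescale it.
img = reshape(D(1, :), imgHeight, imgWidth);  % assumed image dimensions
imshow(img, []);  % [] maps the min..max of img to the full display range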

Using SVD to compress an image in MATLAB

I am brand new to MATLAB but am trying to write some image compression code for grayscale images.
Questions
How can I use SVD to trim off the low-valued singular values and reconstruct a compressed image?
Work/Attempts so far
My code so far is:
B=imread('images1.jpeg');
B=rgb2gray(B);
doubleB=double(B);
%read the image and store it as matrix B, convert the image to a grayscale
%photo and convert the matrix to a class 'double' for values 0-255
[U,S,V]=svd(doubleB);
This allows me to successfully decompose the image matrix, with the singular values stored in the variable S.
How do I truncate S (which is 167x301, class double)? Let's say of the 167 singular values I want to keep only the top 100 (or any n, really); how do I do that and reconstruct the compressed image?
Updated code/thoughts
Instead of putting a bunch of code in the comments section, this is the current draft I have. I have been able to successfully create the compressed image by manually changing N, but I would like to do 2 additional things:
1- Show a panel of images for various compressions (i.e., run a loop for N = 5, 10, 25, etc.)
2- Somehow calculate the difference (error) between each image and the original and graph it.
I am horrible with understanding loops and output, but this is what I have tried:
B=imread('images1.jpeg');
B=rgb2gray(B);
doubleB=im2double(B);
%read the image and store it as matrix B, convert the image to a grayscale
%photo and convert the image to a class 'double'
[U,S,V]=svd(doubleB);
C=S;
for N=[5,10,25,50,100]
C(N+1:end,:)=0;
C(:,N+1:end)=0;
D=U*C*V';
%Use singular value decomposition on the image doubleB, create a new matrix
%C (for Compression diagonal) and zero out all entries above N, (which in
%this case is 100). Then construct a new image, D, by using the new
%diagonal matrix C.
imshow(D);
error=C-D;
end
Obviously there are some errors, because I don't get multiple pictures and I don't know how to "graph" the error matrix.
Although this question is old, it has helped me a lot to understand SVD. I have modified the code you have written in your question to make it work.
I believe you might have solved the problem already; however, for future reference for anyone visiting this page, I am including the complete code here with the output images and graph.
Below is the code:
close all
clear all
clc
%reading and converting the image
inImage=imread('fruits.jpg');
inImage=rgb2gray(inImage);
inImageD=double(inImage);
% decomposing the image using singular value decomposition
[U,S,V]=svd(inImageD);
% Using different number of singular values (diagonal of S) to compress and
% reconstruct the image
dispEr = [];
numSVals = [];
for N=5:25:300
    % store the singular values in a temporary var
    C = S;
    % discard the diagonal values not required for compression
    C(N+1:end,:)=0;
    C(:,N+1:end)=0;
    % construct an image using the selected singular values
    D=U*C*V';
    % display and compute error
    figure;
    buffer = sprintf('Image output using %d singular values', N);
    imshow(uint8(D));
    title(buffer);
    error=sum(sum((inImageD-D).^2));
    % store vals for display
    dispEr = [dispEr; error];
    numSVals = [numSVals; N];
end
% display the error graph
figure;
plot(numSVals, dispEr);
title('Error in compression');
grid on
xlabel('Number of Singular Values used');
ylabel('Error between compressed and original image');
Applying this to the following image:
Gives the following result with only the first 5 singular values,
with the first 30 singular values,
and the first 55 singular values.
The change in error with increasing number of singular values can be seen in the graph below.
Here you can see that using approximately the first 200 singular values yields an error close to zero.
Just to start, I assume you're aware that the SVD is really not the best tool to decorrelate the pixels in a single image. But it is good practice.
OK, so we know that B = U*S*V'. And we know S is diagonal, and sorted by magnitude. So by using only the top few values of S, you'll get an approximation of your image. Let's say C=U*S2*V', where S2 is your modified S. The sizes of U and V haven't changed, so the easiest thing to do for now is to zero the elements of S that you don't want to use, and run the reconstruction. (Easiest way to do this: S2=S; S2(N+1:end, :) = 0; S2(:, N+1:end) = 0;).
Now for the compression part. U is full, and so is V, so no matter what happens to S2, your data volume doesn't change. But look at what happens to U*S2. (Plot the image). If you kept N singular values in S2, then only the first N rows (and columns) of S2 are nonzero. Compression! Except you still have to deal with V. You can't use the same trick after you've already done (U*S2), since more of U*S2 is nonzero than S2 was by itself. How can we use S2 on both sides? Well, it's diagonal, so use D=sqrt(S2), and now C=U*D*D*V'. Now U*D has only N nonzero columns, and D*V' has only N nonzero rows. Transmit only those quantities, and you can reconstruct C, which is approximately like B.
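A minimal sketch of that idea in code, assuming U, S, V come from svd(doubleB) and N singular values are kept (the variable names below are illustrative):
% Zero out all but the first N singular values.
S2 = S;
S2(N+1:end, :) = 0;
S2(:, N+1:end) = 0;
% Split the diagonal S2 so it can be folded into both factors.
D = sqrt(S2);
left  = U * D;             % only the first N columns are nonzero
right = D * V';            % only the first N rows are nonzero
% These thin pieces are all that needs to be stored or transmitted.
left_stored  = left(:, 1:N);
right_stored = right(1:N, :);
% Reconstruction: C is approximately equal to the original doubleB.
C = left_stored * right_stored;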
For example, here's a 512 x 512 B&W image of Lena:
We compute the SVD of Lena. Choosing the singular values above 1% of the maximum singular value, we are left with just 53 singular values. Reconstructing Lena with these singular values and the corresponding (left and right) singular vectors, we obtain a low-rank approximation of Lena:
Instead of storing 512 * 512 = 262144 values (each taking 8 bits), we can store 2 x (512 x 53) + 53 = 54325 values, which is approximately 20% of the original size. This is one example of how SVD can be used to do lossy image compression.
Here's the MATLAB code:
% open Lena image and convert from uint8 to double
Lena = double(imread('LenaBW.bmp'));
% perform SVD on Lena
[U,S,V] = svd(Lena);
% extract singular values
singvals = diag(S);
% find out where to truncate the U, S, V matrices
indices = find(singvals >= 0.01 * singvals(1));
% reduce SVD matrices
U_red = U(:,indices);
S_red = S(indices,indices);
V_red = V(:,indices);
% construct low-rank approximation of Lena
Lena_red = U_red * S_red * V_red';
% print results to command window
r = num2str(length(indices));
m = num2str(length(singvals));
disp(['Low-rank approximation used ',r,' of ',m,' singular values']);
% save reduced Lena
imwrite(uint8(Lena_red),'Reduced Lena.bmp');
Taking the n largest eigenvalues and their corresponding eigenvectors may also solve your problem. For PCA, multiplying the original data by the leading eigenvectors (those with the largest eigenvalues) projects your image onto an n x d matrix, where d is the number of eigenvectors kept.
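As a rough sketch of that PCA idea (this is an assumed procedure with illustrative variable names, treating the image columns as variables):
X = double(rgb2gray(imread('images1.jpeg')));   % same image as in the question
mu = mean(X, 1);                                % column means
Xc = X - repmat(mu, size(X, 1), 1);             % center each column
[Evec, Eval] = eig(cov(Xc));                    % eigenvectors of the column covariance
[~, order] = sort(diag(Eval), 'descend');       % sort by decreasing eigenvalue
Evec = Evec(:, order);
d = 50;                                         % number of eigenvectors to keep (illustrative)
P = Xc * Evec(:, 1:d);                          % projected data, size(X,1)-by-d
X_approx = P * Evec(:, 1:d)' + repmat(mu, size(X, 1), 1);  % low-rank reconstruction
imshow(uint8(X_approx));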

MATLAB windowed FFT for a 2D matrix data (image)

Data: Say I have a 2000 rows by 500 column matrix (image)
What I need: Compute the FFT of 64-row by 10-column chunks of the above data. In other words, I want to compute the FFT of a 64 x 10 window that is run across the entire data matrix. The FFT result is used to compute a scalar value (say, the peak amplitude frequency), which is used to create a new "FFT value" image.
Now, I need the final FFT image to be the same size as the original data (2000 X 500).
What is the fastest way to accomplish this in MATLAB? I am currently using for loops which is relatively slow. Also I use interpolation to size up the final image to the original data size.
As @EitanT pointed out, you can use blockproc for batch block processing of an image J. However, you should define your function handle as
fun = @(block_struct) fft2(block_struct.data);
B = blockproc(J, [64 10], fun);
For a [2000 x 500] matrix this will give you a [2000 x 500] output of complex Fourier values, evaluated at sub-sampled pixel locations with a local support (the size of the input to the FFT) of [64 x 10]. Now, to replace those values with a single value, e.g. the peak log-magnitude, you can further specify
fun = @(block_struct) max(max(log(abs(fft2(block_struct.data)))));
B = blockproc(J, [64 10], fun);
The output is then a [2000/64 x 500/10] matrix of block-patch values, which you can resize by nearest-neighbor interpolation (or something else for smoother versions) to the desired original size of [2000 x 500]:
C = imresize(B, [2000 500], 'nearest');
I can include a real image example if it will further help.
Update: To get overlapping blocks you can use the 'BorderSize' option of blockproc by setting the overlap [V H] such that the final windows of size [M + 2*V, N + 2*H] are still [64, 10] in size. Example:
fun = @(block_struct) log(abs(fft2(block_struct.data)));
V = 16; H = 3; % overlap values
overlap = [V H];
M = 32; N = 4; % non-overlapping values
B1 = blockproc(J, [M N], fun, 'BorderSize', overlap); % final windows are 64 x 10
However, this keeps the full Fourier response per block, not the single-value version with max(max()) shown above.
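If you do want the single-value (peak) version together with the overlap, one possibility is to reduce each padded block to a scalar inside the function and disable border trimming, since a 1 x 1 output has no border to trim. A sketch:
% One peak log-magnitude per overlapping block; each block passed to fun
% has size [M + 2*V, N + 2*H] = [64, 10].
fun = @(block_struct) max(max(log(abs(fft2(block_struct.data)))));
V = 16; H = 3;   % border added on each side
M = 32; N = 4;   % core (non-overlapping) block size
B1 = blockproc(J, [M N], fun, 'BorderSize', [V H], 'TrimBorder', false);
C1 = imresize(B1, size(J), 'nearest');   % back to the original image size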
See also this post for filtering using blockproc: Dealing with “Really Big” Images: Block Processing.
If you want to apply the same function (in your case, the 2-D Fourier transform) on individual distinct blocks in a larger matrix, you can do that with the blkproc function, which is replaced in newer MATLAB releases by blockproc.
However, I infer that you wish to apply fft2 on overlapping blocks in a "sliding window" fashion. For this purpose you can use colfilt with the 'sliding' option. Note that the function we're applying to each block is the 2-D FFT:
block_size = [64, 10];
temp_size = 5 * block_size;
col_func = @(x) cellfun(@(y) max(max(abs(fft2(reshape(y, block_size))))), num2cell(x, 1));
B = colfilt(A, block_size, temp_size, 'sliding', col_func);
How does this work? colfilt processes the matrix A by rearranging each "sliding" block into a separate column of a new temporary matrix, and then applying the col_func to this new matrix. col_func in turn restores each column into the original block and applies fft2 on it, returning the largest amplitude value for each column.
Important things to note:
Since this temporary matrix includes all possible "sliding" blocks, memory could be a limitation. Therefore, in order to use less memory in the calculations, colfilt breaks up the original matrix A into sub-matrices of size temp_size and performs the calculations separately on each. The resulting matrix B is still the same, of course.
Each element in the resulting matrix B is computed from the corresponding block neighborhood. The larger your image is, the more blocks there are to process, so the computation time grows accordingly. I believe you'll have to wait quite a bit until MATLAB finishes processing all the sliding windows on your 2000-by-500 matrix.

Histogram Based Image Classification with Weka

I am doing a project on histogram based image retrieval, and I need to compare learning algorithms for a set of images. So, in MATLAB, I converted an image (256x256 pixels) into HSV, quantized it to 8(H),3(S),3(V) and created a weighted sum, which is a 256x256 matrix.
I want to use this matrix (of all images in the dataset) to create an ARFF file, and I am stuck at this point. Can anyone help me out with how it has to be done?
If I understood what you did, you took the image as input (256x256 RGB matrix) and converted it to a 256x256 matrix where each position is a weighted sum of HSV values.
However, if you want to extract a color histogram (which, in this case, is the appropriate input for Weka), you should have as output a vector where each entry is the count of how many pixels have a given H, S, or L value.
Since you have 8 different values for H (0 to 7), 3 for S (0 to 2) and 3 for L (0 to 2), your vector V should have 8+3+3=14 entries. In order to compute V, use the following algorithm:
Input: quantized HSL image I
Output: histogram V (14 entries, initialized to zero)
for each pixel p in I:
    V[p.H] = V[p.H] + 1           // increment the count for the H component (indices 0-7)
    V[8 + p.S] = V[8 + p.S] + 1   // increment the count for the S component (indices 8-10)
    V[11 + p.L] = V[11 + p.L] + 1 // increment the count for the L component (indices 11-13)
return V
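A MATLAB sketch of the same computation, assuming the quantized channels are stored in the matrices H (values 0-7), S and L (values 0-2); these variable names are assumptions, not from the question:
% Build the 14-entry histogram (MATLAB indexing is 1-based).
V = zeros(1, 14);
V(1:8)   = histc(H(:), 0:7);   % counts for the 8 hue bins
V(9:11)  = histc(S(:), 0:2);   % counts for the 3 saturation bins
V(12:14) = histc(L(:), 0:2);   % counts for the 3 lightness bins
% V is now one feature vector (one row/instance for the ARFF file).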