I have a matrix of m*n images, declared like the following:
images = zeros( m, n, height, width );
That is, I have m*n images whose width and height are given. Then, in a for loop, I fill these images like this:
for i=1:m
for j=1:n
images(i,j,:,:) = imread('imagePath');
end
end
Then, let's say I want to use the image (1,1):
image1 = images(1,1,:,:);
I expect this image1 to have size = (h,w). However, when I say:
size(image1)
I get the result:
(1,1,h,w)
Questions:
1. Why don't I get the following result?
(h,w)
2. How can I restructure my code to get the expected result?
You can use the squeeze function to do just that :)
image1 = squeeze(image1);
size(image1)
should give
(h,w)
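For example, a minimal self-contained sketch with made-up sizes (m = 2, n = 3, h = 4, w = 5 are just illustrative):
m = 2; n = 3; height = 4; width = 5;   % illustrative sizes
images = zeros(m, n, height, width);
image1 = images(1,1,:,:);              % size is [1 1 4 5]
image1 = squeeze(image1);              % drops the singleton dimensions
size(image1)                           % ans = 4 5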
It has to do with how MATLAB does indexing. When you say
image1 = images(1,1,:,:);
you're telling MATLAB you want a 4-dimensional array whose first and second dimensions have size 1.
Whereas, if you had said:
junk = images(:,:,1,1);
size(junk)
> [m,n]
MATLAB treats a matrix of size [m,n] the same as if it were of size [m,n,1] or [m,n,1,1]. It cannot do that with leading singleton dimensions, hence the need for squeeze, as #Junuxx points out. An alternative approach is to do things as follows:
images = zeros( height, width, m, n );
for i=1:m
for j=1:n
images(:,:,i,j) = imread('imagePath'); % index with the loop variables i and j
end
end
image1 = images(:,:,1,1);
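With this ordering the singleton dimensions are trailing, so no squeeze is needed. A quick check, again with made-up sizes:
height = 4; width = 5; m = 2; n = 3;   % illustrative sizes
images = zeros(height, width, m, n);
image1 = images(:,:,1,1);
size(image1)                           % ans = 4 5, no squeeze required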
Related
for i = [I, J, K];
imshow(i);
end
I, J, K are 16-bit images.
The script keeps trying to display the images (but doesn't) and seems to get stuck in an infinite loop.
Is there something I'm missing?
You can store multiple images in a matrix if they have the same size.
However, you should store the images in a cell array if they have different sizes. This method is less messy because you don't need to worry about how to extract them later.
Define a cell array with size equal to the number of images:
numImages = 3;
Images = cell(1,numImages);
Store an image I into a cell:
Images{1,1} = I;
Now go over the images and show them:
for ii = 1:3
imshow(Images{1,ii});
end
Example:
I = imread('cameraman.tif');
J = imread('peppers.png');
K = imread('snowflakes.png');
Images = cell(1,3);
Images{1,1} = I;
Images{1,2} = J;
Images{1,3} = K;
for ii=1:numel(Images)
figure;imshow(Images{1,ii});
end
To better understand what exactly you are missing and what is happening here: using square brackets performs concatenation. So the lines
i = [I, J, K] % separated with commas or spaces for horzcat
i = [I; J; K] % separated with semi-colons for vertcat
do the same as horzcat and vertcat, respectively:
i = horzcat(I, J, K);
i = vertcat(I, J, K);
Let's say I, J, K are 64x64 gray-valued images. A (horizontal) concatenation will create a 64x192 matrix. The for loop will go through this matrix column-wise, which means it will extract a 64x1 vector 192 times (or many more times for larger images, which might feel like "infinite"). Displaying a single column vector with imshow() won't show anything useful.
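A small self-contained illustration of that behaviour (the images here are just dummy 64x64 arrays):
I = zeros(64); J = zeros(64); K = zeros(64);   % three dummy 64x64 images
A = [I, J, K];                                 % horzcat -> 64x192
count = 0;
for i = A                                      % i is one 64x1 column per pass
    count = count + 1;
end
disp(count)                                    % 192 iterations, not 3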
As already pointed out, using cell arrays is a more flexible way to store images. When using plain arrays you have to handle each dimension yourself (and this only works if your images are equal in size):
sizeImage = size(I); % assume all img are same size (as I)
numImages = 3; % needed for allocating array
% init array for imgs and fill images into array: e.g. 64x64x3
imageArray = zeros([sizeImage numImages]);
imageArray(:,:,1) = I; % :,: selects all elements of a dimension
imageArray(:,:,2) = J;
imageArray(:,:,3) = K;
for n = 1:numImages % iterate over image index
figure; imshow(imageArray(:,:,n)); % n = 1, 2 ... , numImages
end % is used for position in imageArray
Using the colon : when accessing arrays/cells selects all elements of a dimension. E.g. imageArray(:,:,n) will select all elements of first and second dimension, corresponding to a 64x64 image. For RGB images an array with 3 images will be 64x64x3x3 and you'll have to use imageArray(:,:,:,n) to select all three color channels.
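A small sketch of the RGB case with made-up data:
rgbStack = zeros(64, 64, 3, 3);        % height x width x channels x images
rgbStack(:,:,:,1) = rand(64, 64, 3);   % dummy RGB "images"
rgbStack(:,:,:,2) = rand(64, 64, 3);
rgbStack(:,:,:,3) = rand(64, 64, 3);
figure; imshow(rgbStack(:,:,:,2));     % all rows, columns and channels of image 2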
Note that using for i = img_array won't work, as this will give vectors again.
E.g. for img_array of size 64x64x5 (five 64x64 gray-valued images), this will iterate column-wise over all but the first dimension (and assign each remaining column to i): img_array(:,1,1), img_array(:,2,1), img_array(:,3,1), ..., img_array(:,1,2), img_array(:,2,2), ..., img_array(:,64,5), and will again produce 64*5 = 320 vectors for i.
As was already pointed out, if you have variable image sizes, using cell arrays is the way to go. You may want to consult: Difference between cell and matrix in matlab?
Currently, I am using the code below to segment an image into a grid of cellSizeX pixels times cellSizeY pixels:
grid = zeros(cellSizeX, cellSizeY, ColorCount, cellTotalX, cellTotalY);
for i = 1:cellSizeX:(HorRowCount)
for j = 1:cellSizeY:(VertColumnCount)
try
grid(:,:,:,icount, jcount) = img(i:i+cellSizeX-1, j:j+cellSizeY-1, :);
catch
end
jcount = jcount + 1;
end
icount = icount + 1;
jcount = 1;
end
While this code runs fine and gives satisfactory results, there are two things that nag me:
1. Via some testing with tic and toc, comparing index orderings such as grid(:,:,:,icount,jcount) and grid(icount,jcount,:,:,:), I see that grid(:,:,:,icount,jcount) is fastest. But can anything be improved here?
2. The code only works if the requested cellSizeX and cellSizeY divide the dimensions of img evenly. For example, requesting cellSizeX and cellSizeY of 9 x 9 on an image of size 40 x 40 results in MATLAB complaining about exceeding the matrix dimensions. Any suggestions regarding this? I do not want to simply fill in a blank area for those cells, since the cells will be used further with VLFeat SIFT.
How about converting the image into a cell array, with each cell of size cellSizeX x cellSizeY x ColorCount, and then stacking all these cells into a single array grid?
ca = mat2cell( img, cellSizeY * ones(1, cellTotalY), ...
cellSizeX * ones(1, cellTotalX), ...
ColorCount );
grid = reshape( cat( 4, ca{:} ),...
cellSizeX, cellSizeY, ColorCount, cellTotalX, cellTotalY);
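For illustration, a self-contained call with made-up sizes (a dummy 40x40 RGB image and 8x8 cells; mat2cell requires the block sizes to sum exactly to the image dimensions):
img        = uint8(255 * rand(40, 40, 3));   % dummy 40x40 RGB image
cellSizeX  = 8;  cellSizeY = 8;
ColorCount = size(img, 3);
cellTotalX = size(img, 2) / cellSizeX;       % 5 cells across
cellTotalY = size(img, 1) / cellSizeY;       % 5 cells down
ca = mat2cell(img, cellSizeY * ones(1, cellTotalY), ...
                   cellSizeX * ones(1, cellTotalX), ...
                   ColorCount);
size(ca)            % 5x5 cell array, each cell an 8x8x3 block
block = ca{2, 3};   % block in row 2, column 3 of the grid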
It is customary in the image processing community to pad an image with non-zero values derived from the values of the image at its boundary. Look at the function padarray for more information. You may pad your input image so that its padded size is a multiple of cellSizeX and cellSizeY (the padding does not have to be identical along both axes).
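A possible sketch of padding up to the next multiple of the cell size, assuming the Image Processing Toolbox's padarray is available (variable names are illustrative):
% pad on the bottom/right so each dimension becomes a multiple of the cell size
[h, w, ~] = size(img);
padY = mod(cellSizeY - mod(h, cellSizeY), cellSizeY);   % rows to add
padX = mod(cellSizeX - mod(w, cellSizeX), cellSizeX);   % columns to add
imgPadded = padarray(img, [padY padX], 'replicate', 'post');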
How can I do this in MATLAB?
Zero-pad the face image with a five-pixel-thick rim around the borders of the image.
Show the resulting image.
It must be coded manually in a script.
Save this function as create_padded_image.m:
function padded_image = create_padded_image(image, padding)
if nargin < 2
% if no padding passed - define it.
padding = 5;
end
if nargin < 1
% let's create an image if none is given
image = rand(5, 4)
end
% what are the image dimensions?
image_size = size(image);
% allocate zero array of new padded image
padded_image = zeros(2*padding + image_size(1), 2*padding + image_size(2));
% write image into the center of padded image
padded_image(padding+1:padding+image_size(1), padding+1:padding+image_size(2)) = image;
end
Then call it like this:
% read in image - assuming that your image is a grayscale image
image = imread(filename);
padded_image = create_padded_image(image)
This sounds like homework, so I will just give you a hint:
In MATLAB it is very easy to put the content of one matrix into another at precisely the correct place. Check out the help for matrix indexing and you should be able to solve it.
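For instance, a tiny generic illustration of that kind of indexed assignment (values made up):
A = zeros(5);        % 5x5 matrix of zeros
B = magic(3);        % 3x3 block to place
A(2:4, 2:4) = B;     % write B into the centre of A, leaving a one-element rim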
I realize you want to code this yourself, but for reference, you can use the PADARRAY function. Example:
I = imread('coins.png');
II = padarray(I,[5 5],0,'both');
imshow(II)
Note that this also works for multidimensional matrices (RGB images, for example).
I'm currently working with MATLAB to do some image processing. I've been set a task to basically recreate the convolution function for applying filters. I managed to get the code working okay and everything seemed to be fine.
The next part was for me to do the following:
Write your own m-function for unsharp masking of a given image to produce a new output image.
Your function should apply the following steps:
Apply smoothing to produce a blurred version of the original image,
Subtract the blurred image from the original image to produce an edge image,
Add the edge image to the original image to produce a sharpened image.
Again, I've got code mocked up to do this, but I run into a few problems. When carrying out the convolution, my image is cropped down by one pixel; this means that when I go to carry out the subtraction for the unsharpening, the images are not the same size and the subtraction cannot take place.
To overcome this I want to create a blank matrix in the convolution function that is the same size as the input image; the new image will then go on top of this matrix, so in effect the new image has a one-pixel border around it to bring it back to the original size. When I try to implement this, all I get as output is the blank matrix I just created. Why is this happening, and would you be able to help me fix it?
My code is as follows.
Convolution
function [ imgout ] = convolution( img, filter )
%CONVOLUTION Convolve img with filter using nested loops.
% The valid output region is smaller than img by the filter size minus one.
[height, width] = size(img); % height, width: number of im rows, etc.
[filter_height, filter_width] = size(filter);
for height_bound = 1:height - filter_height + 1; % Loop over output elements
for width_bound = 1:width - filter_width + 1;
imgout = zeros(height_bound, width_bound); % Makes an empty matrix the correct size of the image.
sum = 0;
for fh = 1:filter_height % Loop over mask elements
for fw = 1:filter_width
sum = sum + img(height_bound - fh + filter_height, width_bound - fw + filter_width) * filter(fh, fw);
end
end
imgout(height_bound, width_bound) = sum; % Store the result
end
end
imshow(imgout)
end
Unsharpen
function sharpen_image = img_sharpen(img)
blur_image = medfilt2(img);
convolution(img, filter);
edge_image = img - blur_image;
sharpen_image = img + edge_image;
end
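For reference, here is a minimal sketch of the three steps using the built-in conv2 with the 'same' option instead of a hand-written convolution; 'same' keeps the output the same size as the input, which sidesteps the cropping problem (this assumes a grayscale image and a simple 3x3 averaging filter):
function sharpened = unsharp_sketch(img)
% Minimal unsharp-masking sketch for a grayscale image.
img       = double(img);
smooth    = ones(3) / 9;                 % 3x3 averaging (blurring) filter
blurred   = conv2(img, smooth, 'same');  % 'same' keeps the original size
edges     = img - blurred;               % edge (detail) image
sharpened = img + edges;                 % add the detail back in
end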
Yes. Concatenation, e.g.:
A = [1 2 3; 4 5 6]; % Matrix
B = [7; 8]; % Column vector
C = [A B]; % Concatenate
I need to rasterize an image in MATLAB.
I have a b/w image and want to chunk it up into 8x8 blocks and compute a mean value for every block. Then I want to replace each block with a new block made up of ones and zeros, with the number of ones depending on the mean value of the original block.
Thanks in advance!
This will get you started. It produces the downsampled image, where each value is between zero and the square of the block size. You are on your own expanding that integer into a sub-matrix.
bs = 8
a = imread('trees.tif');
[r,c] = size(a);
d = imresize(a,[round(r/bs), round(c/bs)]);
figure(1)
imshow(a)
figure(2)
imshow(d)
mv = max(d(:))
d = round(double(d)/double(mv)*bs*bs);
figure(3)
imagesc(d)
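One possible way to expand each downsampled value into a bs-by-bs block of ones and zeros, continuing from the code above (the fill pattern inside each block is arbitrary here; column-wise filling is used only for simplicity):
out = zeros(size(d,1)*bs, size(d,2)*bs);   % reassembled binary image
for i = 1:size(d,1)
    for j = 1:size(d,2)
        block = zeros(bs);
        block(1:d(i,j)) = 1;               % d(i,j) ones, rest zeros
        out((i-1)*bs+1:i*bs, (j-1)*bs+1:j*bs) = block;
    end
end
figure(4)
imshow(out)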