Looking at the Image Pyramids tutorial I see the following note:
Notice that it is important that the input image can be divided by a factor of two (in both dimensions). Otherwise, an error will be shown.
I was wondering: how can an Image Pyramid be built for an arbitrary image size while keeping the reproduction exact (up to round-off errors)?
Taking an image of size 101 x 101, the first downsample step using 1:2:101 yields an image of size 51 x 51. After another iteration a 26 x 26 image is yielded. So how can we handle both odd and even sizes?
I'd be happy to see MATLAB code dealing with the upsample / downsample procedure for any size.
I have seen some techniques where the dimensions of the original image are resized to be even before the image is subsampled. This odd-and-even issue is unfortunately unavoidable, as you have already seen, so you'll have to do some work yourself before passing the image to an image decomposition routine. That way there is no ambiguity when constructing your image pyramid, and when you're done you can crop out the portions of the original image that you don't want. Another technique would be to eliminate the last row and column as needed so that both dimensions are even.
As such, all you'd have to do is extend your image's dimensions to ensure that each dimension is even. In other words, you'd do something like this:
im = imread('...'); %// Place image here
rows = size(im,1);
cols = size(im,2);
imResize = imresize(im, [rows + mod(rows,2), cols + mod(cols,2)], 'bilinear');
This reads in an image and gets its dimensions (rows and columns), then resizes the image so that both dimensions are even. The parity check is done with mod: mod(x,2) is 1 when x is odd and 0 when it is even, so adding it to each dimension rounds an odd size up by one. For the 101 x 101 image from the question, for example, the target size becomes 102 x 102.
Also, you can simply crop out the last row or column if they're odd too by doing:
im = imread('...'); %// Place image here
rows = size(im,1);
cols = size(im,2);
imResize = im(1:rows-mod(rows,2), 1:cols-mod(cols,2), :);
The mod here is used in the same way, but to crop out what you don't need: if a dimension is odd, subtracting 1 eliminates the last row or last column as needed.
As CST-Link has already stated, if the image had odd dimensions before downsampling, upsampling cannot reconstruct the original dimensions exactly: the information needed to recover those odd rows and columns has already been lost.
For downsampling, the better approach would be to delete the last row and column of the image, so that a 101×101 image becomes 100×100 and downsamples to 50×50. Though it may be a matter of taste, I think it is better to ignore real information than to introduce bogus information.
For upsampling, I'm afraid what you ask is impossible. Let's say you have a 200×200 pixel image: can you tell from its size whether it was downsampled from a 400×400 image or from a 401×401 image? Both would give the same result.
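For what it's worth, here is a minimal sketch of one pyramid level along those lines, assuming a grayscale image im. The odd row/column is dropped before downsampling, so the round trip reproduces the cropped even-sized image rather than the original:
imEven = im(1:end-mod(end,2), 1:end-mod(end,2)); % force even dimensions (101x101 -> 100x100)
down = imEven(1:2:end, 1:2:end);                 % downsample by two: 100x100 -> 50x50
up = imresize(down, 2, 'nearest');               % upsample back; yields 100x100, not 101x101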
I need to combine several images (of different textures) together. I have tried the following code:
% Read 4d data
I1 = importdata('Img1.tif');
I2 = importdata('Img2.tif');
% Extract a slice of the data
extractImg1 = I1(:,:,1);
extractImg2 = I2(:,:,1);
% compute image size
[ny1, nx1] = size(extractImg1);
[ny2, nx2] = size(extractImg2);
P1 = extractImg1(round(ny1/2)-120:round(ny1/2)+120, round(nx1/2)-120:round(nx1/2)+120);
figure, imshow(P1); title('Img1');
P2 = extractImg2(round(ny2/2)-120:round(ny2/2)+120, round(nx2/2)-120:round(nx2/2)+120);
figure, imshow(P2); title('Img2');
Please, what should I do next?
Secondly, the combined image will be needed for laser printing. The images do not have exactly the same pixel dimensions; because of that, I was told it would not make sense to combine them, as this might slightly reduce accuracy.
Nonetheless, I still have a feeling that combining the images wouldn't be wrong, considering that they all have the same resolution.
I need advice as to whether I should go ahead with the combination. Many thanks in advance.
You have extracted two equal-sized regions from the two images. If you want to put those side-by-side in the same image, use cat, or equivalently, use the square brackets []:
next_to_each_other = [P1,P2];
on_top_of_each_other = [P1;P2];
But note that you can put these things together even if they don't have the same sizes. For example, if I1 is NxM pixels and I2 is NxK (with N the vertical size, as is customary in MATLAB), then you can still do [I1,I2] because the vertical sizes match.
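For instance, with hypothetical random data:
I1 = rand(100, 200); % N x M, here 100 x 200
I2 = rand(100, 150); % N x K, here 100 x 150
out = [I1, I2];      % valid because the vertical sizes match; out is 100 x 350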
If neither the vertical nor the horizontal sizes match, you can pad one with zeros (or whatever value is appropriate) using padarray before putting them together:
ny1 = size(I1,1);
ny2 = size(I2,1);
if ny1<ny2
    I1 = padarray(I1,[ny2-ny1,0,0],0,'post'); % The 0 is the value to pad
elseif ny2<ny1
    I2 = padarray(I2,[ny1-ny2,0,0],0,'post'); % The 0 is the value to pad
end
out = [I1,I2];
padarray also allows replicating the data in the matrix instead of padding with zeros; read the documentation to find what is appropriate. Note that padarray requires the Image Processing Toolbox. If you don't have it, you can replicate its functionality by creating an array of zeros of the appropriate size using the zeros function and concatenating it to the image, using something like [I1; zeros(ny2-ny1, size(I1,2), size(I1,3))].
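Spelled out, a minimal sketch of that fallback (assuming I1 is the shorter image, i.e. ny1 < ny2, and that both images share the same width and number of channels) could be:
ny1 = size(I1,1);
ny2 = size(I2,1);
pad = zeros(ny2-ny1, size(I1,2), size(I1,3), class(I1)); % zero block matching the image class
out = [[I1; pad], I2]; % stack the padding below I1, then concatenate horizontally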
I am working on some images. I am given an abc.tif image (a color image). I read it as follows:
Mat test_image=imread("abc.tif",IMREAD_UNCHANGED);
I perform some operations on it and convert it into a binary image (using a threshold) containing only the two values 0 and 255, stored in an image img created as follows:
Mat img(584, 565, CV_8UC1); // (so now img contains only 0 and 255)
I save this image using imwrite("myimage.jpg",img);
I want to compare the myimage.jpg image with another binary image, manual.gif, pixel by pixel to check whether one image is a duplicate of the other. The problem, as you can see, is that OpenCV does not support the .gif format, so I need to convert it to .jpg; because of that conversion the image changes, and the two images may be judged different even though they are the same. What should I do now?
Actually I am working on retinal blood vessel segmentation and these images are found in the DRIVE database.
I am given these images. Original image:
I perform some operations on it, extract the blood vessels, and then create a binary image that is stored in the Mat variable img as discussed earlier. Now I have got another image (a .gif image) which I cannot load, as shown below:
Now I want to compare my img image (binary) with the given .gif image (above) which I cannot load.
Use ImageMagick to convert your .gif to .png in batch mode. You could also convert it on the fly using a system("convert img.gif img.png") call.
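If you have MATLAB at hand anyway, a sketch of the same conversion without ImageMagick (assuming a single-frame GIF named manual.gif) might be:
[g, map] = imread('manual.gif'); % GIFs are typically indexed images
if ~isempty(map)
    g = ind2rgb(g, map);         % expand indexed data to RGB
end
imwrite(g, 'manual.png');        % PNG is lossless, unlike JPEG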
I'm not sure pixel comparison will give you a good result, though: an offset shift of the same image will result in a bad match.
EDIT: As an idea, calculating the centers of gravity and shifting/rotating both images to the same origin may help here.
Consider using moments, Freeman chain codes, or other more robust shape comparison methods.
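As a rough sketch of the center-of-gravity idea, assuming bw1 and bw2 are logical images of the same size:
[r1, c1] = find(bw1);
[r2, c2] = find(bw2);
shift = round([mean(r1) - mean(r2), mean(c1) - mean(c2)]);
bw2aligned = circshift(bw2, shift);               % align centroids (note: circshift wraps around)
matchRatio = nnz(bw1 == bw2aligned) / numel(bw1); % fraction of matching pixels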
First off, you will want the images to be in the same format as each other. As @Adi mentioned in the comments, JPEG is lossy, which is correct, so it shouldn't be used until (possibly) after any processing is done. See: MATLAB - image conversion
You will also want the images to be the same size. You can compare them using the size function and then pad them with extra pixels to make the dimensions the same. The padding can always be removed later; just watch how the padding is added so that it does not affect your operations.
You will also need to look into rotations; consider putting the images into the frequency domain and rotating one image to align the spectra.
Below is a simple pixel comparison code. Pixel comparison is not particularly accurate: even the slightest misalignment will cause false negatives or false positives.
%read image
test_image1 = imread('C:\Users\Public\Pictures\Sample Pictures\Desert.jpg');
test_image2 = imread('C:\Users\Public\Pictures\Sample Pictures\Hydrangeas.jpg');
%convert to gray scale
gray_img1 = rgb2gray(test_image1);
gray_img2 = rgb2gray(test_image2);
% threshold the images: values greater than 125 become 1 (true) and all
% other values become 0 (false)
binary_image1 = gray_img1 > 125;
binary_image2 = gray_img2 > 125;
%get the size of the binary image to allow pixel by pixel checking
[row, col] = size(binary_image1);
% initialize the counters for similar and different pixels to zero
similar = 0;
different = 0;
%two loops to scan through all rows and columns of the image.
for kk = 1 : row
    for yy = 1 : col
        %using an if statement with the isequal function to compare
        %corresponding pixel values and count them depending on the
        %logical output of isequal
        if isequal(binary_image1(kk,yy), binary_image2(kk,yy))
            similar = similar + 1;
        else
            different = different + 1;
        end
    end
end
% calculate the percentage difference between the images and print it
total_pixels = row*col;
difference_percentage = (different / total_pixels) * 100;
fprintf('%f%% difference between the compared images \n%d pixels being different to %d total pixels\n', difference_percentage, different, total_pixels )
% simple subtraction of the two images
diff_image = binary_image1 - binary_image2;
%generate figure to show the original gray and corresponding binary images
%as well as the subtraction
figure
subplot(2,3,1)
imshow(gray_img1);
title('gray img1');
subplot(2,3,2)
imshow(gray_img2);
title('gray img2');
subplot(2,3,4)
imshow(binary_image1);
title('binary image1');
subplot(2,3,5)
imshow(binary_image2);
title('binary image2');
subplot(2,3,[3,6])
imshow(diff_image);
title('diff image');
That's code my professor gave us, but I don't understand it.
A = imread('cameraman.tif'); i = 1:4:256; T = A(i,i); imshow(A); figure; imshow(T);
Why does the image just become smaller? Why aren't details omitted?
Details are being omitted. I'm assuming from the code that the image is 256x256.
The indexing variable i is being defined with a step of 4, meaning it goes something like this:
i = [1 5 9 13 ... 253];
Then, it is used to index both the row and columns of the matrix A to create a new matrix T.
That's why the new image is smaller; T only contains data points from A that are indexed by i.
As an exercise I recommend varying the step to see how the resulting image changes. Change the step to 1 and you will see that both images are the same size. Change the step to 8 and you will see that the second image is now even smaller than before.
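For instance (assuming the standard cameraman.tif demo image that ships with MATLAB):
A = imread('cameraman.tif');
figure; imshow(A(1:1:end, 1:1:end)); title('step 1: identical to A');
figure; imshow(A(1:4:end, 1:4:end)); title('step 4: 64 x 64');
figure; imshow(A(1:8:end, 1:8:end)); title('step 8: 32 x 32');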
I have an image that was read in using the imread function. My goal is to collect pairs of pixels in an image in MATLAB. Specifically, I have read a paper, and I am trying to recreate the following scenario:
First, the original image is grouped into pairs of pixel values. A pair consists of two neighboring pixel values or two with a small difference value. The pairing could be done horizontally by pairing the pixels on the same row and consecutive columns, or vertically, or by a key-based specific pattern. The pairing could be through all pixels of the image or just a portion of it.
I am looking to recreate the horizontal pairing scenario. I'm not quite sure how I would do this in MATLAB.
Assuming your image is grayscale, we can easily generate a 2D grid of co-ordinates using ndgrid. We can use these to create one grid, then shift the horizontal co-ordinates to the right to make another grid and then use sub2ind to convert the 2D grid into linear indices. We can finally use these linear indices to create our pixel pairings that you have described in your comments (you should really add that to your post BTW). What's important is that you need to skip over every other column in a row to ensure unique pixel pairings.
I'm also going to assume that your image is grayscale. If we go to colour, this will be slightly more complicated, and I'll leave that to you as a learning exercise. Therefore, assuming your image was read in through imread and is stored in im, do something like this:
[rows,cols] = size(im);
[X,Y] = ndgrid(1:rows,1:2:cols);       %// Grid over every row and every other column
ind = sub2ind(size(im), X, Y);         %// Linear indices of the left pixel of each pair
ind_shift = sub2ind(size(im), X, Y+1); %// Linear indices of the right neighbours
pixels1 = im(ind);
pixels2 = im(ind_shift);
pixels = [pixels1(:) pixels2(:)];      %// Each row is one (left, right) pixel pair
pixels will be a 2D array where each row gives you the pixel intensities of a particular pairing in the image. Bear in mind that I processed each row independently: as soon as we are done with one row, we simply move on to the next row and continue the procedure.

This also assumes that your image has an even number of columns. Should it not, you have a decision to make: either pad the image with one extra column at the end (which can be anything you want, e.g. all zeroes, or a replica of the last column placed beside it), or remove the last column from the image before processing. Therefore, an appropriate pre-processing step may look something like this:
if mod(cols,2) ~= 0
    im = im(:,1:end-1);
end
The above code simply removes the last column in the image if the number of columns is odd. Once you run through this code, you can run the first bit of code that I had above.
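If you would rather keep all of the data, a minimal sketch of the replication option mentioned above would be:
if mod(cols,2) ~= 0
    im = [im, im(:,end)]; %// Replicate the last column instead of dropping one
end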
Good luck!
I have a (probably naïve) question; I just wanted to clarify this part. When I compute dsift on one image I generally get a 128 x n matrix. The thing is, that n value is not always the same across different images: say image 1 gets a 128 x 10 matrix while image 2 gets a 128 x 18 matrix. I am not quite sure why this is happening.
I think each column of 128 dimensions represents a single image feature, or a single patch detected in the image. So in the case of 128 x 18, we have extracted 18 patches and described each of them with 128 values. If this is correct, why can't we have a fixed number of patches per image, say 20, so that every time our matrices would be 128 x 20?
Cheers!
This is because the number of reliable features detected per image changes. Just because you detect 10 features in one image does not mean you will detect the same number in another. What matters is how well a feature from one image matches a feature from the other.
What you can do (if you like) is extract, say, the 10 most reliable features that match best between the two images, if you want something constant. Choose a number that is less than or equal to the minimum number of patches detected across the two images. For example, suppose you detect 50 features in one image and 35 in the other; matching the features together might result in, say, 20 best-matched points. You can choose the best 10, or 15, or even all 20 points and proceed from there.
I'm going to show you some example code to illustrate my point above, but bear in mind that I will be using vl_sift and not vl_dsift. The reason is that I want to show you visual results with minimal pre- and post-processing. Should you choose to use vl_dsift, you'll need to do a bit of work before and after computing the features if you want to visualize the same results. If you want to see the code to do that, you can check out the vl_dsift help page here: http://www.vlfeat.org/matlab/vl_dsift.html. Either way, the idea of choosing the most reliable features applies to both vl_sift and vl_dsift.
For example, supposing that Ia and Ib are uint8 grayscale images of the same object or scene. You can first detect features via SIFT, then match the keypoints.
[fa, da] = vl_sift(im2single(Ia));
[fb, db] = vl_sift(im2single(Ib));
[matches, scores] = vl_ubcmatch(da, db);
matches is a 2 x N matrix where each column denotes one match: the first row contains the index of a feature in the first image, and the second row contains the index of the best-matching feature in the second image.
Once you do this, sort the scores in ascending order. Lower scores mean better matches as the default matching method between two features is the Euclidean / L2 norm. As such:
numBestPoints = 10;
[~,indices] = sort(scores);
%// Get the numBestPoints best matched features
bestMatches = matches(:,indices(1:numBestPoints));
This should then return the 10 best matches between the two images. FWIW, your understanding of how the features are represented in VLFeat is spot on. They are stored in da and db: each column represents a descriptor of a particular patch in the image, and it is a histogram of 128 entries, so there are 128 rows per feature.
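As a quick check of that layout (using da from the call above):
disp(size(da));                   %// prints [128 Na], where Na is the number of features in Ia
firstPatchHist = double(da(:,1)); %// the 128-bin descriptor of the first patch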
Now, as an added bonus, if you want to display how each feature from one image matches to another image, you can do the following:
%// Spawn a new figure and show the two images side by side
figure;
imagesc(cat(2, Ia, Ib));
%// Extract the (x,y) co-ordinates of each best matched feature
xa = fa(1,bestMatches(1,:));
%// CAUTION - Note that we offset the x co-ordinates of the
%// second image by the width of the first image, as the second
%// image is now beside the first image.
xb = fb(1,bestMatches(2,:)) + size(Ia,2);
ya = fa(2,bestMatches(1,:));
yb = fb(2,bestMatches(2,:));
%// Draw lines between each feature
hold on;
h = line([xa; xb], [ya; yb]);
set(h,'linewidth', 1, 'color', 'b');
%// Use VL_FEAT method to show the actual features
%// themselves on top of the lines
vl_plotframe(fa(:,bestMatches(1,:)));
fb2 = fb; %// Make a copy so we don't mutate the original
fb2(1,:) = fb2(1,:) + size(Ia,2); %// Remember to offset like we did before
vl_plotframe(fb2(:,bestMatches(2,:)));
axis image off; %// Take out the axes for better display