I have coloured images of size 20 by 20. The objective is:
based on a query image, I need to check which recalled image is the closest match to the query. For example, suppose 10 images have been recalled; out of those 10, I need to find the closest match to the query.
I was thinking of using the correlation between the images: the higher the correlation coefficient, the stronger the correlation between the pixel values. (Please correct me where I am wrong.)
R = corr2(A,B1) will compute the correlation coefficient between A and B1, where A is the query and B1 is the first recalled image of the same size. I used the above command for coloured images, but I got the result R = NaN. How do I solve this problem for both coloured and grayscale images? Thank you.
This is the query image.
The other image (recalled/retrieved), B1.
UPDATE: Code for testing the correlation of an image with itself, from a database called patches.mat (patches is the database; it consists of 59500 20x20 patches, reshaped into 400-dimensional column vectors, taken from a set of motorcycle images; I am using the 50th example as the query image):
img_query = imagesc(reshape(patches(:,50),20,20));
colormap gray;axis image;
R = corr2(img_query,img_query)
Answer = NaN
That is because one of the images is entirely black or contains a single colour, meaning all the values of the matrix are identical; the standard deviation is then zero, and corr2 divides by zero, giving NaN. Check the following examples:
I = imread('pout.tif');
J = I*0; % create a black image
R = corr2(I,J)
R =
    NaN
I = imread('pout.tif');
J = 255*ones(size(I)); % create a white image
R = corr2(I,J)
R =
    NaN
Update
It should work in your case; as you can see in the following example, it works perfectly:
I1 = abs(255*rand(10,10));
I2 = abs(255*rand(10,10));
corr2(I1,I2)
ans =
0.0713
Even with the images you have shared, it is working for me. To find the problem, you have to either share part of your code or post the images as they are, not re-saved copies (with size 420x560x3).
Note: corr2 cannot handle images with more than one layer (channel): it only accepts 2-D matrices, so an RGB image must be converted to grayscale or compared channel by channel.
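For RGB inputs, a minimal sketch of both workarounds (the file names here are placeholders for your query and recalled images):
A = imread('query.png');     % hypothetical file name
B1 = imread('recalled1.png'); % hypothetical file name
R_gray = corr2(rgb2gray(A), rgb2gray(B1))                     % compare luminance only
R_rgb  = mean(arrayfun(@(k) corr2(A(:,:,k), B1(:,:,k)), 1:3)) % average over the R, G, B channels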
Your code shows that you are using the handle of the image instead of the image itself: imagesc returns a graphics handle, not the pixel data.
Test this:
I = reshape(patches(:,50),20,20);
corr2(I,I)
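Extending that fix to your original matching task, here is a rough sketch, assuming the 10 recalled patches are columns of a 400x10 matrix called recalled (that name is hypothetical):
query = reshape(patches(:,50), 20, 20);
R = zeros(1, size(recalled,2));
for k = 1:size(recalled,2)
    R(k) = corr2(query, reshape(recalled(:,k), 20, 20));
end
[bestR, bestIdx] = max(R)   % the closest match is the one with the highest correlation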
I would also try the Universal Quality Index by Wang, which gives 1.0 if both images are equal and less otherwise, with a spatial indication of where they differ; think of one image as the reference and the other as a degraded version of it. See http://www.cns.nyu.edu/~zwang/files/research/quality_index/demo.html
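For reference, a minimal global version of that index (Wang's Q), assuming I1 and I2 are same-size images; the demo linked above computes it over a sliding window, which is what gives the spatial indication:
x = double(I1(:)); y = double(I2(:));
mx = mean(x); my = mean(y);
sxy = mean((x-mx).*(y-my));                   % covariance
sx2 = mean((x-mx).^2); sy2 = mean((y-my).^2); % variances
Q = (4*sxy*mx*my) / ((sx2+sy2)*(mx^2+my^2))   % 1.0 iff the images are equal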
Related
I need to combine several images (of different textures) together. I have tried the following code:
% Read 4d data
I1 = importdata('Img1.tif');
I2 = importdata('Img2.tif');
% Extract a slice of the data
extractImg1 = I1(:,:,1);
extractImg2 = I2(:,:,1);
% compute image size
[ny1, nx1] = size(extractImg1);
[ny2, nx2] = size(extractImg2);
P1 = extractImg1(round(ny1/2)-120:round(ny1/2)+120, round(nx1/2)-120:round(nx1/2)+120);
figure, imshow(P1); title('Img1');
P2 = extractImg2(round(ny2/2)-120:round(ny2/2)+120, round(nx2/2)-120:round(nx2/2)+120);
figure, imshow(P2); title('Img2');
Please, what should I do next?
Secondly, the combined image will be needed for laser printing. The images do not have exactly the same pixel dimensions, so I was told it would not make sense to combine them, as this might slightly reduce accuracy.
Nonetheless, I still feel that combining the images wouldn't be wrong, considering that they all have the same resolution.
I need advice as to whether I should go ahead with the combination. Many thanks in advance.
You have extracted two equal-sized regions from the two images. If you want to put those side-by-side in the same image, use cat, or equivalently, use the square brackets []:
next_to_each_other = [P1,P2];
on_top_of_each_other = [P1;P2];
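The same two layouts written with cat (dimension 2 is horizontal, dimension 1 is vertical):
next_to_each_other = cat(2, P1, P2);
on_top_of_each_other = cat(1, P1, P2);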
But note that you can put these things together even if they don't have the same sizes. For example, if I1 is NxM pixels, and I2 is NxK (with N the vertical size as customary in MATLAB) then you can still do [I1,I2] because the vertical size matches.
If neither the vertical nor the horizontal sizes match, you can pad one with zeros (or whatever value is appropriate) using padarray before putting them together:
ny1 = size(I1,1);
ny2 = size(I2,1);
if ny1<ny2
    I1 = padarray(I1,[ny2-ny1,0,0],0,'post'); % The 0 is the value to pad
elseif ny2<ny1
    I2 = padarray(I2,[ny1-ny2,0,0],0,'post'); % The 0 is the value to pad
end
out = [I1,I2];
padarray also allows replicating the data in the matrix instead of padding with zeros; read the documentation to find what is appropriate. padarray requires the Image Processing Toolbox. If you don't have it, you can replicate its functionality by creating an array of zeros of the appropriate size using the zeros function and concatenating it to the image, using something like [I1; zeros(ny2-ny1, size(I1,2), size(I1,3))].
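For instance, a minimal sketch of that fallback which also preserves the image class (passing class(I1) to zeros keeps uint8 inputs uint8), assuming I1 is the shorter image:
pad = zeros(ny2-ny1, size(I1,2), size(I1,3), class(I1)); % match the height of I2
out = [[I1; pad], I2];                                   % pad vertically, then concatenate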
In MATLAB, how do I fuse more than two images? For example, I want to do what imfuse does but for more than 2 images. Using two images, this is the code I have:
A = imread('file1.jpg');
B = imread('file2.jpg');
C = imfuse(A,B,'blend','Scaling','joint');
C will be fused version of A and B. I have about 50 images to fuse. How do I achieve this?
You could write a for loop and keep a single image that stores the accumulated result, repeatedly fusing it with each new image you read in. Say your images are named file1.jpg to file50.jpg; you could do something like this:
A = imread('file1.jpg');
for idx = 2 : 50
    B = imread(['file' num2str(idx) '.jpg']); %// Read in the next image
    A = imfuse(A, B, 'blend', 'Scaling', 'joint'); %// Fuse and store into A
end
What the above code does is repeatedly read in the next image and fuse it with the image stored in A: at each iteration, it takes what is currently in A, fuses it with the new image, and stores the result back in A. That way, as we keep reading images in, each new image is fused on top of everything fused so far. After the for loop finishes, A holds a single image in which all 50 images are fused together.
imfuse with the 'blend' method performs alpha blending on two images. In the absence of an alpha channel on the images, this is nothing more than the arithmetic mean of each pair of corresponding pixels. Therefore, one way of interpreting the fusion of N images is to simply average N corresponding pixels, one from each image, to get the output image.
Assuming that:
All images are of size imgSize (e.g., [480,640])
All images have the same pixel value range (e.g., 0-255 for uint8 or 0-1 for double)
the following should give you something reasonable:
numImages = 50;
A = zeros(imgSize,'double');
for idx = 1:numImages
    % Borrowing rayryeng's nice filename construction
    B = imread(['file' num2str(idx) '.jpg']);
    A = A + double(B);
end
A = A/numImages;
The result will be in the array A with type double after the loop and may need to be cast back to the proper type for your image.
Piggy-backing on rayryeng's solution:
What you want to do is set the alpha at every step in proportion to how much the new image should contribute to the already-stored result. For example:
Adding 1 image to 1 existing image, you would want an alpha of 0.5 so that they are equal.
Now adding one image to the 2 existing images, it should contribute 33% to the result, and therefore an alpha of 0.33; the 4th image should contribute 25% (alpha = 0.25), and so on.
The alpha follows a 1/x trend: at the 20th image, 1/20 = 0.05, so an alpha of 0.05 is needed.
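A minimal sketch of that schedule, reusing the file1.jpg ... file50.jpg naming from above; each new image enters with weight 1/idx, which is exactly the alpha described (imfuse's 'blend' method does not expose an alpha parameter, so the blend is written out arithmetically):
A = double(imread('file1.jpg'));
for idx = 2:50
    B = double(imread(['file' num2str(idx) '.jpg']));
    A = (1 - 1/idx)*A + (1/idx)*B;   % alpha = 1/idx for the new image
end
A = uint8(A);                        % cast back, assuming uint8 originals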
The code included below calculates the Euclidean distance between two images in HSV color space; if the result is under a threshold (here set to 0.5), the two images are considered similar and grouped into one cluster.
This is done for a group of images (video frames, actually).
It worked well on one group of sample images, but when I change the sample it starts to behave oddly, e.g. the result is low for two different images and high (like 1.2) for two similar images.
For example, the result for these two very similar images is relatively high (first pic and second pic), when it should actually be under 0.5.
What is wrong?
In the code below, f is divided by 100 to bring the values into a range comparable to 0.5.
Im1 = imread('1.jpeg');
Im2 = imread('2.jpeg');
hsv = rgb2hsv(Im1);
hn1 = hsv(:,:,1);
hn1=hist(hn1,16);
hn1=norm(hn1);
hsv = rgb2hsv(Im2);
hn2 = hsv(:,:,1);
hn2=hist(hn2,16);
hn2=norm(hn2);
f = norm(hn1-hn2,1)
f=f/100
These two lines:
hn1=hist(hn1,16);
hn1=norm(hn1);
convert your 2D image into a scalar. I suspect that is not what you're interested in doing.
EDIT:
Probably a better approach would be:
hn1 = hist(hn1(:),16) / numel(hn1(:));
but, you haven't really given us much on the math, so this is just a guess.
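Putting that together, one guess at the corrected comparison, with both hue histograms normalized so images of different sizes remain comparable:
hsv1 = rgb2hsv(imread('1.jpeg'));
hsv2 = rgb2hsv(imread('2.jpeg'));
h1 = hsv1(:,:,1); h2 = hsv2(:,:,1);
hn1 = hist(h1(:),16) / numel(h1);   % normalized hue histogram of image 1
hn2 = hist(h2(:),16) / numel(h2);   % normalized hue histogram of image 2
f = norm(hn1-hn2,1)                 % L1 distance between the two distributions, in [0,2]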
I am trying to write a MATLAB function that upsamples a picture (a matrix of grey values). It is nothing overwhelmingly complicated, but I still manage to get it wrong.
My objective is to resize the image by a factor of 2, and for a start I just want to see the upscaled picture. I want to fill the gaps with zeros, so every 2nd row/column is filled with zeros.
When I am done, I wonder why I see nothing but a grey ocean of pixels. I would have expected to recognize at least some of the content of my picture.
Here is my function, does anyone see my mistake?
function [upsampled] = do_my_upsampling(image)
[X Y] = size(image);
upsampled = zeros(X*2, Y*2);
upsampled(1:2:end, 1:2:end) = image(1:1:end, 1:1:end);
end
Your code works fine for me (with image = rand(100)). However, it's not a very MATLAB-like way to achieve the result.
If you just want to spread out your pixels, why don't you do direct indexing?
[nRows,nCols] = size(image);
upsampled = zeros(2*nRows,2*nCols);
upsampled(1:2:end,1:2:end) = image;
Try imshow(image,[]),
or, as your image is a double, convert it to uint8 first and then show it, i.e.
imshow(uint8(image))
Is it possible to compare the color of two images using MATLAB if the two images are of different sizes? The problem I am facing is that I want to detect the presence of a colored patch in an image.
You could just compare normalized histograms (i.e., treating them like color probability distributions). If the large and small images are semantically identical, their normalized histograms will be similar.
If they are semantically different, their histograms will likely differ.
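A minimal sketch of that comparison, assuming uint8 RGB images and the Image Processing Toolbox's imhist (the file names are placeholders):
A = imread('large.jpg');
B = imread('small.jpg');
d = 0;
for k = 1:3
    hA = imhist(A(:,:,k)) / numel(A(:,:,k));   % normalized channel histogram, sums to 1
    hB = imhist(B(:,:,k)) / numel(B(:,:,k));
    d = d + norm(hA - hB, 1);                  % L1 distance per channel, 0 = identical
end
d = d/3                                        % average over the three channels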
Do you have the image processing toolbox? If so, you might approach the problem by taking the images, splitting them into their component color channels, resizing the individual channels, and re-assembling them into resized color images. I wrote a program to do that a while ago, and I recall the code looking something like this:
function imout = ResizeRGB(imin,height,width)
imout = zeros(height,width,3);
iminR = imin(:,:,1);
iminG = imin(:,:,2);
iminB = imin(:,:,3);
imoutR = imresize(iminR, [height width]);
imoutG = imresize(iminG, [height width]);
imoutB = imresize(iminB, [height width]);
imout(:,:,1) = imoutR;
imout(:,:,2) = imoutG;
imout(:,:,3) = imoutB;
(Since I don't have the IPT handy at the moment, that program should be considered pseudocode, even though it's more or less correct MATLAB syntax. I can't run it without the IPT, so I can't tell whether it's buggy.)
Once you resize the images so that they have common dimensions, the problem becomes identical to the problem of comparing colors for two images of equal dimensions.
On the other hand, if you have a picture of the patch and a picture that may contain the patch, you might consider using a binary mask to threshold the results of a cross-correlation (xcorr2 in the IPT). For more information on that approach, there's a tutorial on the MathWorks website: http://www.mathworks.com/products/demos/image/cross_correlation/imreg.html
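A rough sketch of that template-matching idea, using normxcorr2 (the normalized variant of xcorr2, also in the IPT) so the peak is directly comparable to a threshold; here patch is the small template, scene is the larger image, both assumed RGB, and the 0.8 threshold is a hypothetical starting point to tune:
c = normxcorr2(rgb2gray(patch), rgb2gray(scene)); % normalized cross-correlation map
present = max(c(:)) > 0.8                         % true if the patch appears somewhere in the scene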
This would be a bit crude, but you could crop your images to the minimum common size if this will be sufficient for your application:
A = imread("image1.jpg");
B = imread("image2.jpg");
rows = min(size(A,1), size(B,1));
cols = min(size(A,2), size(B,2));
croppedA = A(1:rows, 1:cols, :);
croppedB = B(1:rows, 1:cols, :);
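The cropped images now have equal dimensions, so any of the equal-size comparison methods above apply; for example (assuming RGB inputs and the IPT):
d = corr2(rgb2gray(croppedA), rgb2gray(croppedB))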