Comparing images pixel by pixel - MATLAB

I am working on some images. I am given an abc.tif image (color image). I read it as follows:
Mat test_image=imread("abc.tif",IMREAD_UNCHANGED);
I perform some operations on it and convert it into a binary image (using a threshold) containing only the two values 0 and 255, which is stored in an image img created as follows:
Mat img(584,565,CV_8UC1); // (so now img contains only 0 and 255)
I save this image using imwrite("myimage.jpg",img);
I want to compare the myimage.jpg image with another binary image, manual.gif, pixel by pixel to check whether one image is a duplicate of the other. But, as you can notice, the problem is that OpenCV does not support the .gif format, so I need to convert it to .jpg; because of that the image changes, and the two images may be judged different even though they are the same. What should I do now?
Actually I am working on retinal blood vessel segmentation and these images are found in the DRIVE database.
I am given these images. Original image:
I perform some operations on it, extract blood vessels from it, and then create a binary image and store it in some Mat variable img as discussed earlier. Now I have another image (a .gif image) which I cannot load, as shown below:
Now I want to compare my img image (binary) with the given .gif image (above) which I cannot load.

Use ImageMagick to convert your .gif to .png in batch mode. You could also convert it on the fly using a system("convert img.gif img.png") call.
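A minimal MATLAB sketch of that workflow (assuming ImageMagick's convert is on the PATH, and assuming you save your own result as PNG rather than the lossy JPG; the file names are placeholders):
system('convert manual.gif manual.png');  % lossless conversion to PNG
A = imread('manual.png');                 % the given reference image
B = imread('myimage.png');                % your binary result, saved as PNG
identical = isequal(size(A), size(B)) && isequal(A, B)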
I'm not sure if pixel comparison will give you a good result. An offset shift of the same image will result in a bad match.
EDIT: As an idea, maybe calculating the centers of gravity and shifting/rotating both images to share the same origin may help here.
Consider using moments, Freeman chain codes, or other more robust shape comparison methods.
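A minimal sketch of the center-of-gravity idea in MATLAB, assuming A and B are binary images of the same size:
[r1, c1] = find(A > 0);                   % foreground coordinates of each image
[r2, c2] = find(B > 0);
shift = round([mean(r1) - mean(r2), mean(c1) - mean(c2)]);
B = circshift(B, shift);                  % crude alignment; pixels wrap at the borders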

First off, you will want to use the images in the same format as each other. @Adi mentioned in the comments that JPG is lossy, which is correct, so it shouldn't be used until, possibly, after any work is done. MATLAB - image conversion
You will also want the images to be of the same size. You can compare them using the size function and then pad the smaller one to make the dimensions the same; see the sketch below. The padding can always be removed later; just watch how the padding is added so as not to affect your operations.
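A hedged sketch of the padding step (padarray is from the Image Processing Toolbox; bw1 and bw2 are placeholder names for your two binary images):
[r1, c1] = size(bw1);
[r2, c2] = size(bw2);
bw1 = padarray(bw1, [max(r2-r1,0) max(c2-c1,0)], 0, 'post'); % zero-pad bottom/right
bw2 = padarray(bw2, [max(r1-r2,0) max(c1-c2,0)], 0, 'post'); % so both reach the larger size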
You will also need to look into rotations; consider putting the images into the frequency domain and rotating one to align the spectra.
Below is a simple pixel comparison code. Pixel comparison is not particularly accurate: even the slightest misalignment will cause false negatives or false positives.
%read image
test_image1 = imread('C:\Users\Public\Pictures\Sample Pictures\Desert.jpg');
test_image2 = imread('C:\Users\Public\Pictures\Sample Pictures\Hydrangeas.jpg');
%convert to gray scale
gray_img1 = rgb2gray(test_image1);
gray_img2 = rgb2gray(test_image2);
% threshold the image: values greater than 125 become 1 (true) and values
% of 125 or below become 0 (false)
binary_image1 = gray_img1 > 125;
binary_image2 = gray_img2 > 125;
%get the binary image size to allow pixel by pixel checking
[row, col] = size(binary_image1);
% initialize the counters for similar and different pixels to zero
similar = 0;
different = 0;
%two loops to scan through all rows and columns of the image
for kk = 1 : row
    for yy = 1 : col
        %use isequal to compare corresponding pixel values and count
        %them depending on the logical output of isequal
        if isequal(binary_image1(kk,yy), binary_image2(kk,yy))
            similar = similar + 1;
        else
            different = different + 1;
        end
    end
end
% calculate the percentage difference between the images and print it
total_pixels = row*col;
difference_percentage = (different / total_pixels) * 100;
fprintf('%f%% difference between the compared images \n%d pixels being different to %d total pixels\n', difference_percentage, different, total_pixels )
% simple comparison image: xor highlights pixels that differ in either image
diff_image = xor(binary_image1, binary_image2);
%generate figure to show the original gray and corresponding binary images
%as well as the difference image
figure
subplot(2,3,1)
imshow(gray_img1);
title('gray img1');
subplot(2,3,2)
imshow(gray_img2);
title('gray img2');
subplot(2,3,4)
imshow(binary_image1);
title('binary image1');
subplot(2,3,5)
imshow(binary_image2);
title('binary image2');
subplot(2,3,[3,6])
imshow(diff_image);
title('diff image');
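For reference, the double loop above can be replaced by a vectorized count that gives the same totals:
different = nnz(binary_image1 ~= binary_image2);
similar = numel(binary_image1) - different;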

Related

Edge linking in MATLAB

I am trying to link the edges in images like the image below:
I've tried using dilation/erosion operations but the result isn't good. Is there any other way to link the edges?
Here is the original image:
The result you have might very well be good enough to apply the Hough transform to, which will identify your eight most important lines across the image.
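A minimal MATLAB sketch of that step (hedged: edges is a placeholder for your binary edge image, and 8 is the number of lines suggested above):
[H, theta, rho] = hough(edges);               % accumulate votes for lines
peaks = houghpeaks(H, 8);                     % the eight strongest peaks
lines = houghlines(edges, theta, rho, peaks); % line segments for each peak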
I don't know if all your images are similar, but in the example you show it is easy to separate the gray lines from the green background. For example, the following code (using DIPimage, but easy to implement with other tools too) will distinguish the relatively bright gray from anything that is dark or colorful:
img = readim('https://i.stack.imgur.com/vmBiF.jpg');
img = colorspace(img,'hsv');
img = (0.5-img{2})*img{3}; % img{2} is the saturation channel, img{3} is the value (intensity) channel
img = clip(img); % set negative values to 0
Next, a Laplace of Gaussian filter (which is a line detector), some thresholding just above zero, and a selection of only the larger objects results in the detected lines:
img = -laplace(img,5); % LoG with sigma=5
img = img > 0.05; % 0.05 is just above 0
img = areaopening(img,1000); % remove objects smaller than 1000 pixels
Needless to say, this is a lot cheaper computationally than running a U-Net.
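For readers without DIPimage, here is a rough MATLAB-toolbox approximation of the same pipeline (hedged: the 0.05 threshold was tuned for DIPimage's laplace output and will likely need adjusting):
rgb = im2double(imread('https://i.stack.imgur.com/vmBiF.jpg'));
hsv = rgb2hsv(rgb);
resp = max((0.5 - hsv(:,:,2)) .* hsv(:,:,3), 0); % bright, unsaturated pixels; clip negatives to 0
resp = -imfilter(resp, fspecial('log', 31, 5));  % Laplace of Gaussian, sigma = 5
bw = bwareaopen(resp > 0.05, 1000);              % keep only objects of 1000+ pixels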

Saving a Kinect depth frame (uint16) using MATLAB, but why is it too dark?

Recently I have been working with the Kinect using MATLAB. I capture the depth frame, which is in uint16 format, but when I display or save it using MATLAB commands like imshow and imwrite respectively, it shows a very dark image. When I set the display range or convert it to uint8 it becomes brighter, but I want to save it in a brighter form without converting to uint8, e.g., by scaling the range between 0 and 4500.
vid = videoinput('kinect',1);
vid2 = videoinput('kinect',2);
vid.FramesPerTrigger = 1;
vid2.FramesPerTrigger = 1;
% Set the trigger repeat for both devices to 200, in order to acquire 201 frames from both the color sensor and the depth sensor.
vid.TriggerRepeat = 200;
vid2.TriggerRepeat = 200;
% Configure the camera for manual triggering for both sensors.
triggerconfig([vid vid2],'manual');
% Start both video objects.
start([vid vid2]);
trigger([vid vid2])
[imgDepth, ts_depth, metaData_Depth] = getdata(vid2);
f=imgDepth;
figure,imshow(f);
figure,imshow(f,[0 4500]);
imwrite(f,'C:\Users\sufi\Desktop\matlab_kinect\Data_image\output\depth\fo.tiff');
stop([vid vid2]);
When I set the display range:
Without setting the display range:
The values in a 16-bit image range from 0 to 65535.
If we take a look at the histogram of your image:
We see that the max value is 7995. But that's just a few outliers. Most information is somewhere between 700 and 4300.
So all our values fall within 5-10% of the value range. That makes the image look very dark.
In order to make it look better for humans we have to normalize it. (Some image viewers do this automatically.)
So in order to get a nicer image into your PowerPoint presentation you have two options:
a) display it in an image viewer that can display it nicely and take a screenshot
b) normalize the image in MATLAB and save it to a file.
You can further improve the image by removing those outliers before normalization.
One simple way is to scale the image based on the following formula:
Pixel_value = Pixel_value / 4500 * 65535
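A minimal MATLAB sketch of that scaling (assuming imgDepth is the uint16 depth frame from getdata above, and 4500 is the assumed upper bound of useful depth values; the output file name is hypothetical):
f = imgDepth;
f(f > 4500) = 4500;                       % clip the few outliers first
f16 = uint16(double(f) / 4500 * 65535);   % stretch to the full uint16 range
imwrite(f16, 'fo_normalized.tiff');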
If you want to see the exact image that you get from uint8, I guess the following steps will work for you.
Probably, while casting the image to uint8, MATLAB first clips the values above some threshold, let's say 4095 = 2^12 - 1 (I'm not sure about the value), and then right-shifts them (4 shifts in our case) to bring them into the range 0-255.
So I guess multiplying the uint8 value by 256 and casting it to uint16 will give you the same image:
Pixel_uint16_value = uint16(Pixel_uint8_value) * 256;   % or bitshift(uint16(Pixel_uint8_value), 8)
% (cast to uint16 before multiplying/shifting, otherwise uint8 arithmetic saturates at 255)

How do I denoise a simple grayscale image

Here is the original image with better visibility: we can see a lot of noise around the main skeleton, the circle thing, which I want to remove without affecting the main skeleton. I'm not sure if it is called noise.
I'm doing this to deblur an image, and this image is the motion-blur kernel, which describes the camera motion while the camera captured the image.
P.S.: this image is the kernel for one case, and what I need here is a general method. Thank you for your help.
There is a paper in CVPR 2014 named "Separable Kernel for Image Deblurring" which talks about this. I want to extract the main skeleton of the image to make the kernel more robust; sorry for the explanation here, as my English is not good.
And here is the true grayscale image:
I want it to be like this:
How can I do it using MATLAB?
Here are some other kernel images:
As @rayryeng explained well, median filtering is the best option to clean noise in an image, which I learned when I studied image restoration. However, in your case, what you need does not seem to be noise cleaning; more likely you want to eliminate the sparks in the image.
I simply applied a single threshold to your noisy image to eliminate the sparks.
Try this:
desIm = imread('http://i.stack.imgur.com/jyYUx.png'); % Your expected (desired) image
nIm = imread('http://i.stack.imgur.com/pXO0p.png'); % Your original image
nImgray = rgb2gray(nIm);
T = graythresh(nImgray)*255; % Threshold value
S = size(nImgray);
R = zeros(S) + 5; % Your expected image is bluish, so I try to match it
G = zeros(S) + 3; % Your expected image is bluish, so I try to match it
B = zeros(S) + 20; % Your expected image is bluish, so I try to match it
logInd = nImgray > T; % Logical index of pixels excluding the spark component
R(logInd) = nImgray(logInd); % Get original pixels without sparks
G(logInd) = nImgray(logInd); % Get original pixels without sparks
B(logInd) = nImgray(logInd); % Get original pixels without sparks
rgbImage = cat(3, R, G, B); % Concatenate Red, Green, Blue channels
figure,
subplot(1, 3, 1)
imshow(nIm); title('Original Image');
subplot(1, 3, 2)
imshow(desIm); title('Desired Image');
subplot(1, 3, 3)
imshow(uint8(rgbImage)); title('Restoration Result');
What I got is:
The only thing I can see that is different between the two images is that there is some quantization noise / error around the perimeter of the object. This resembles salt and pepper noise and the best way to remove that noise is to use median filtering. The median filter basically analyzes local overlapping pixel neighbourhoods in your image, sorts the intensities and chooses the median value as the output for each pixel neighbourhood. Salt and pepper noise corrupts image pixels by randomly selecting pixels and setting their intensities to either black (pepper) or white (salt). By employing the median filter, sorting the intensities puts these noisy pixels at the lower and higher ends and by choosing the median, you would get the best intensity that could have possibly been there.
To do median filtering in MATLAB, use the medfilt2 function. This is assuming you have the Image Processing Toolbox installed. If you don't, then what I am proposing won't work. Assuming that you do have it, you would call it in the following way:
out = medfilt2(im, [M N]);
im would be the image loaded in imread and M and N are the rows and columns of the size of the pixel neighbourhood you want to analyze. By choosing a 7 x 7 pixel neighbourhood (i.e. M = N = 7), and reading your image directly from StackOverflow, this is the result I get:
Compare this image with your original one:
If you also look at your desired output, this more or less mimics what you want.
Also, the code I used was the following... only three lines!
im = rgb2gray(imread('http://i.stack.imgur.com/pXO0p.png'));
out = medfilt2(im, [7 7]);
imshow(out);
The first line I had to convert your image into grayscale because the original image was in fact RGB. I had to use rgb2gray to do that. The second line performs median filtering on your image with a 7 x 7 neighbourhood and the final line shows the image in a separate window with imshow.
Want to implement median filtering yourself?
If you want to get an idea of how to actually write a median filtering algorithm yourself, check out my recent post here. A question poser asked to implement the filtering mechanism without using medfilt2, and I provided an answer.
Matlab Median Filter Code
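For intuition, here is a naive (unoptimized) median-filter sketch along those lines, assuming odd window sizes M and N:
function out = naive_medfilt(im, M, N)
    % pad the borders by replicating edge pixels so every window fits
    pad = padarray(double(im), [(M-1)/2, (N-1)/2], 'replicate');
    out = zeros(size(im));
    for r = 1 : size(im,1)
        for c = 1 : size(im,2)
            block = pad(r : r+M-1, c : c+N-1); % local M x N neighbourhood
            out(r,c) = median(block(:));       % its median becomes the output pixel
        end
    end
    out = uint8(out);
end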
Hope this helps.
Good luck!

Fusing more than 2 images in MATLAB

In MATLAB, how do I fuse more than two images? For example, I want to do what imfuse does but for more than 2 images. Using two images, this is the code I have:
A = imread('file1.jpg');
B = imread('file2.jpg');
C = imfuse(A,B,'blend','Scaling','joint');
C will be the fused version of A and B. I have about 50 images to fuse. How do I achieve this?
You could write a for loop: keep a single image that stores all of the fused results so far, and repeatedly fuse this image with each new image you read in. As such, let's say your images were named from file1.jpg to file50.jpg. You could do something like this:
A = imread('file1.jpg');
for idx = 2 : 50
    B = imread(['file' num2str(idx) '.jpg']); %// Read in the next image
    A = imfuse(A, B, 'blend', 'Scaling', 'joint'); %// Fuse and store into A
end
What the above code does is repeatedly read in the next image and fuse it with the image stored in A. At each iteration, it takes what is currently in A, fuses it with a new image, then stores the result back in A. That way, as we keep reading in images, we keep fusing new images on top of those fused before. After this for loop finishes, you will have a single image in which all 50 images are fused together.
imfuse with the 'blend' method performs alpha blending on two images. In the absence of an alpha channel on the images, this is nothing more than the arithmetic mean of each pair of corresponding pixels. Therefore, one way of interpreting the fusion of N images is to simply average N corresponding pixels, one from each image, to get the output image.
Assuming that:
All images are of size imgSize (e.g., [480,640] for grayscale, [480,640,3] for RGB)
All images have the same pixel value range (e.g., 0-255 for uint8 or 0-1 for double)
the following should give you something reasonable:
numImages = 50;
A = zeros(imgSize,'double');
for idx = 1:numImages
    % Borrowing rayryeng's nice filename construction
    B = imread(['file' num2str(idx) '.jpg']);
    A = A + double(B);
end
A = A/numImages;
The result will be in the array A, of type double, after the loop, and may need to be cast back to the proper type for your image (e.g., uint8(A) for 8-bit images).
Piggy-backing on rayryeng's solution:
What you want to do is adjust the alpha at every step in proportion to how much the new image should contribute to the already stored images. For example:
Adding 1 image to 1 existing image, you would want an alpha of 0.5 so that they contribute equally.
Now adding one image to the 2 existing images, it should contribute 33% to the result and therefore needs an alpha of 0.33. The 4th image should contribute 25% (alpha = 0.25), and so on.
The alpha follows a 1/x trend: at the 20th image, 1/20 = 0.05, so an alpha of 0.05 would be necessary.
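A minimal sketch of this running average (hedged: imfuse's 'blend' method does not expose an alpha parameter, so the blend is written out explicitly):
A = double(imread('file1.jpg'));
for idx = 2 : 50
    B = double(imread(['file' num2str(idx) '.jpg']));
    alpha = 1 / idx;                   % the new image contributes 1/idx
    A = (1 - alpha) * A + alpha * B;   % running average of all images so far
end
A = uint8(A);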

Detection of homogeneous areas in terms of connectivity in an image

I am looking for some measurements to make in order to distinguish between these two binary images (text and noise).
The Hough transform and the frequency domain don't tell me much (either on the skeleton or on the original shape), as can be seen below!
In the spatial domain, I have tried to measure whether a given pixel belongs to a line or curve or to a random shape, and then to measure the percentage of all pixels belonging and not belonging to a normal shape (lines and curves) to distinguish between these images, but I did not succeed with the implementation.
What do you think?
I use MATLAB for testing.
Thanks in advance.
Looking at the skeleton images, one can notice that the noise image has lots of branches in it compared to the text image, and this looks like one of the features that could be exploited. The experiment coded below seeks to verify the same, using the OP's images -
Experiment Code
%%// Experiment to research which features might help us
%%// differentiate between noise and text images
%%// Read in the given images
img1 = imread('noise.png');
img2 = imread('text.png');
%%// Since the given images had the features as black and rest as white,
%%// we must invert them
img1 = ~im2bw(img1);
img2 = ~im2bw(img2);
%%// Remove the smaller blobs from both of the images which basically
%%// denote the actual noise in them
img1 = rmnoise(img1,60);
img2 = rmnoise(img2,60);
%// Get the skeleton images
img1 = bwmorph(img1,'skel',Inf);
img2 = bwmorph(img2,'skel',Inf);
%%// Find branch points for each blob in both images
[L1, num1] = bwlabel(img1);
[L2, num2] = bwlabel(img2);
for k = 1:num1
    img1_bpts_count(k) = nnz(bwmorph(L1==k,'branchpoints'));
end
for k = 1:num2
    img2_bpts_count(k) = nnz(bwmorph(L2==k,'branchpoints'));
end
%%// Get the standard deviation of branch points count
img1_branchpts_std = std(img1_bpts_count)
img2_branchpts_std = std(img2_bpts_count)
Note: The above code uses a function rmnoise, shown below, that is built based on the problem discussed at this link:
function NewImg = rmnoise(Img,threshold)
    [L,num] = bwlabel( Img );
    counts = sum(bsxfun(@eq,L(:),1:num));
    B1 = bsxfun(@eq,L,permute(find(counts>threshold),[1 3 2]));
    NewImg = sum(B1,3)>0;
return;
Output
img1_branchpts_std =
73.6230
img2_branchpts_std =
12.8417
One can see the big difference between the standard deviations of the two input images, suggesting this feature could be used.
Runs on some other samples
To make our theory a bit more concrete, let's take a pure text image and gradually add noise, and see whether the standard deviation of branch points, which we name check_value, suggests anything about them.
(I) Pure text image
check_value = 1.7461
(II) Some added noise image
check_value = 30.1453
(III) Some more added noise image
check_value = 54.6446
Conclusion: As can be seen, this parameter provides quite a good indicator for deciding on the nature of the images.
Finalized Code
A script could be written to test whether another input image is a text or a noise one, like this -
%%// Parameters
%%// 1. Decide this based on the typical image size and count of pixels
%%// in the biggest noise blob
rmnoise_threshold = 60;
%%// 2. Decide this based on the typical image size and how convoluted the
%%// noisy images are
branchpts_count_threshold = 50;
%%// Actual processing
%%// We are assuming input images as binary images with features as true
%%// and false in rest of the region
img1 = im2bw(imread(FILE));
img1 = rmnoise(img1,rmnoise_threshold);
img1 = bwmorph(img1,'skel',Inf);
[L1, num1] = bwlabel(img1);
for k = 1:num1
    img1_bpts_count(k) = nnz(bwmorph(L1==k,'branchpoints'));
end
if std(img1_bpts_count) > branchpts_count_threshold
    disp('This is a noise image');
else
    disp('This is a text image');
end
And now, what do you suggest if we try to use the original shape instead of the skeleton (to avoid the loss of information)?
I am trying to measure, for a given pixel, the elongation of the strokes (rather than straight branches) that pass through that pixel, by counting the number of transitions from white to black in clockwise order.
I am thinking of using a circle of some radius centered on the pixel under consideration, storing the pixels lying on the edge of the circle in an ordered (clockwise) list, and then computing the number of transitions (black to white) from this list.
By increasing the radius of the circle we could trace the shape of elongated strokes and determine their orientation.
Here is a schema illustrating this.
Pixels that have a number of transitions equal to 0 or greater than 2 (the red ones) would be classified as noise, and those with 1 or 2 transitions as normal.
What do you think of this approach?
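A rough MATLAB sketch of that measurement, assuming img is the binary image, (r0, c0) the pixel under consideration, and r the circle radius (64 samples is an arbitrary choice):
theta = (0:63) * (2*pi/64);                    % 64 points around the circle, in order
rr = round(r0 + r * sin(theta));
cc = round(c0 + r * cos(theta));
valid = rr >= 1 & rr <= size(img,1) & cc >= 1 & cc <= size(img,2);
vals = double(img(sub2ind(size(img), rr(valid), cc(valid))));
transitions = nnz(diff([vals vals(1)]) == 1);  % black-to-white transitions, wrapping around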