I have an image and I apply thresholding to it to obtain a binary mask. I draw the histogram before and after the thresholding process. The histograms look like below.
The second figure, which is after thresholding, doesn't show any peaks. Does that mean my thresholding is wrong? Can anyone please explain these histograms?
Update
Image after thresholding
To summarize Sardar's comment, the horizontal range of your plot is tight. Simply loosen the range a bit so you can see the result better. Doing xlim([-0.5 1.5]); will certainly do that, and we can see that in the last figure of your update. As for how you interpret the histogram... well, for black and white images, examining the histogram is never meaningful because there are only two possible intensities to examine - 0 and 1. Histograms usually give a glimpse of the contrast of the image. If the histogram is spread out, this usually indicates that the image has high contrast. However, if the histogram is within a small range, this usually means the image has poor contrast.
Remember that a histogram simply counts the occurrence of instances in a data set. In this case, we are counting how many times we see 0 and 1 in the image. Referring to your last plot, this means that approximately 9000 pixels have intensity 0 and approximately 4000 pixels have intensity 1. This gives absolutely no indication as to the contrast or the spread of the intensities in your image, because there are only two possible intensities that are seen in the image. As such, to answer your question in such a long-winded way: you can't really interpret anything.
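For reference, here is a minimal sketch of producing such a histogram; the image name and the use of Otsu's method are just placeholder assumptions:
im = imread('cameraman.tif');   % any grayscale image
bw = im2bw(im, graythresh(im)); % Otsu threshold (imbinarize in newer MATLAB)
imhist(bw);                     % only the 0 and 1 bins are populated
xlim([-0.5 1.5]);               % loosen the horizontal range to see both bars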
The only thing I can possibly suggest is that it tells you the ratio of object pixels to background pixels, and could indicate a measure of quality. Usually when we determine what are object and what are background pixels, we would expect more background pixels than object pixels, to be able to discern the objects from the background. Therefore, the more black pixels you have, the better it may be. That being said, I can't really say more unless you actually show us what your image looks like after you threshold it.
Related
I tried to implement the integral image in MATLAB by the following:
im = imread('image.jpg');
ii_im = cumsum(cumsum(double(im)')');
im is the original image and ii_im is the integral image.
The problem here is that the values in ii_im go beyond the 0 to 255 range.
When using imshow(ii_im), I always get a very bright image which I am not sure is the correct result. Am I correct here?
You're implementing the integral image calculation correctly, but I don't understand why you would want to visualize it - especially since the sums will go beyond any normal integer range. This is expected, as you are performing a summation of intensities bounded by larger and larger rectangular neighbourhoods as you move towards the bottom right of the image. It is inevitable that you will get large numbers towards the bottom right. Also, you will obviously get a white image when trying to show this image, because most of the values go beyond 255, which is visualized as white.
If I can add something, one small optimization I have is to get rid of the transposing and use cumsum to specify the dimension you want to work on. Specifically, you can do this:
ii_im = cumsum(cumsum(double(im), 1), 2);
It doesn't matter which dimension you specify first (2 then 1, or 1 then 2); as long as you specify both dimensions to operate on, the resulting summation of all pixels within each bounded area will be the same.
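If you want to convince yourself, here's a quick check (X is just an arbitrary example matrix):
X = magic(4);
isequal(cumsum(cumsum(X,1),2), cumsum(cumsum(X,2),1))   % returns logical 1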
Back to your question for display, if you really, really, really really... I mean really want to, you can normalize the contrast by doing:
imshow(ii_im, []);
However, what you should expect is a gradient image which starts to be dark from the top, then becomes brighter when you get to the bottom right of the image. Remember, each point in the integral image calculates the total summation of pixel intensities bounded by the top left corner of the image to this point, thus forming a rectangle of intensities you need to sum over. Therefore, as we move further down and to the right of the integral image, the total summation should increase.
With the cameraman.tif image, this is the original image, as well as its integral image visualized using the above command:
Either way, there is absolutely no reason why you would want to visualize it. You would use this directly with whatever application requires it (adaptive thresholding, Viola-Jones detector, etc.)
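For instance, here's a hedged sketch of that direct use: the sum of intensities inside any rectangle comes from just four lookups. The corner coordinates below are made-up example values, and the sketch assumes r1 > 1 and c1 > 1 (pad ii_im with a leading row and column of zeros to handle rectangles touching the border):
r1 = 50; c1 = 60; r2 = 100; c2 = 120;   % example rectangle corners (inclusive)
S = ii_im(r2,c2) - ii_im(r1-1,c2) - ii_im(r2,c1-1) + ii_im(r1-1,c1-1);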
Another option could be applying a log operation for each value in the integral image. Something like:
imshow(log(1 + ii_im), []);
However, this will make most of the pixels have the same contrast and this is probably not useful. This is what I get with cameraman.tif:
The moral of this story is that you need some sort of contrast normalization so that you can fit all of the values in your integral image within the confines of the data type that is used to display the image on the screen using imshow.
I'm trying to segment the sky and water part in this image.
Link of the Picture
I've tried many methods like k-means, thresholding, multi-level thresholding, etc. But unfortunately nothing worked well.
Here is an example of my code (MATLAB):
img=imread('1.jpg');
im_gray=rgb2gray(img);
b=imadjust(im_gray);
imshow(b);
bw_remove_small=imopen(b,strel('square',5));
imshow(bw_remove_small); %after 1st iteration
m3=medfilt2(bw_remove_small,[18,16]);
imshow(m3);
m3=medfilt2(bw_remove_small,[20,20]);
imshow(m3);
I1=m3;
I2=I1; % m3 is already grayscale, so rgb2gray is not needed (it would error here)
I=double(I2);
figure
subplot(1,3,1)
imshow(I1)
subplot(1,3,2)
imshow(I2)
g=kmeans(I(:),4);
J = reshape(g,size(I));
subplot(1,3,3)
imshow(J,[]);
Can anyone help me, please?
The picture's two regions are different in hue, texture, and gray level brightness.
The horizon is the strongest line in the image from our point of view and can be seen by the distinct change in brightness. A single brightness threshold will not work because the image brightness is not flat, so use a model of the brightness distribution to flatten out the sky or the water. This implies knowledge of the objective, but there are two things that can give you an approximate answer: texture and/or hue.
The hue with a threshold of 120 (derived from the hue histogram) will give you the two regions, but they will not be divided cleanly and will have overlapping sections. Still, using these two sections, a model of the brightness can be found.
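As a rough sketch of that hue split (assuming the 120 threshold is on a 0-255 hue scale; MATLAB's rgb2hsv returns hue in [0,1], so it is rescaled here, and which side is sky vs. water is left to inspection):
img = imread('1.jpg');
hsv = rgb2hsv(im2double(img));
hue = hsv(:,:,1) * 255;   % rescale hue from [0,1] to the 0-255 scale
mask = hue >= 120;        % rough split into the two (overlapping) regions
imshow(mask);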
The same goes for texture. Taking a small FFT of the image, subtracting out the DC component, then averaging or just summing up the non-DC parts will result in a histogram with two peaks that may not be as distinct as the hue's, but is enough to find a threshold and two areas that will allow a model of the brightness to be found.
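Here is one possible sketch of that texture measure (the block size, the use of blockproc, and the variable im_gray from the question's code are all assumptions on my part):
B = 16;                       % block size for the small FFTs
nonDC = @(blk) sum(sum(abs(fft2(blk.data)).^2)) - abs(sum(blk.data(:)))^2;
tex = blockproc(double(im_gray), [B B], nonDC);  % non-DC energy per block
% a histogram of tex should show the two peaks to threshold between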
The key fact is that if the sky is modeled properly as a gray surface, then you can subtract it out of the image and use a simple threshold to pull it out.
Edge detection is too noisy in this image to easily see the horizon line, but if you can pull out the image lines without losing shape, then looking for a straight, long contour may take less code/work.
Hope this helps some! I used this approach to find mountains in the distance when there was not a big difference between the sky and the mountains. Plus, I just tried it on your pic and almost got a good answer without a good model of the sky.
Herewith I have attached two consecutive frames captured by a CMOS camera with an IR filter. The object (a checkerboard) was stationary at the time of capturing the images, but the difference between the two images is nearly 31000 pixels. This could affect my result. Can you tell me what kind of noise this is? How can I remove it? Please suggest any algorithms or functions that could remove this noise.
Thank you. Sorry for my poor English.
Image1 : [1]: http://i45.tinypic.com/2wptqxl.jpg
Image2: [2]: http://i45.tinypic.com/v8knjn.jpg
That noise appears to result from the camera sensor (Bayer to RGB conversion). There's still a checkerboard pattern left over.
Lossy JPEG compression also contributes a lot to the process. You should first get access to the raw images.
From those particular images, I'd first try to use edge detection filters (Sobel horizontal and vertical) to make a mask that selects between some median/local histogram equalization for the flat areas and some checkerboard-reducing filter for the edges. The point is that probably no single filter can do well both for the JPEG ringing artifacts and for the jagged edges. Then the real question is: what other kinds of images should be processed?
From the comments: if corner points are to be made exact, then the solution is more likely to search for features (corner points with subpixel resolution), make a mapping from one image's set of points to the other image's set of corners, and search for the best affine transformation matrix that converts one set to the other. With this matrix one can then resample the other image.
One can fortunately estimate motion vectors with subpixel resolution without brute-force searching all possible subpixel locations: when calculating a matched filter, one gets local maxima at potential candidates for exact matches. But this is not all there is. One can calculate a more precise approximation of the peak location by studying the matched filter outputs at the nearby pixels. For an exact match the output should be symmetric; otherwise the 'energies' of the matched filter are biased towards the second-best location. (A 2nd-degree polynomial fit + finding its maximum can work.)
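A minimal 1-D sketch of that polynomial refinement (assuming c holds the matched-filter outputs along one axis and the integer peak is not at either end):
[~, p] = max(c);                    % integer peak location
num = c(p-1) - c(p+1);
den = c(p-1) - 2*c(p) + c(p+1);
offset = 0.5 * num / den;           % subpixel offset in (-0.5, 0.5)
p_sub = p + offset;                 % refined subpixel peak position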
Looking closely at these images, I must agree with @Aki Suihkonen.
In my view, the main noise comes from the JPEG compression, which causes sharp edges to "ring". I'd try a "de-speckle" type of filter on the images and see if this makes a difference. Some info that can help you implement this can be found in this link.
In a more quick-and-dirty fashion, you can apply one of the many standard tools; for example, given that the images are a and b:
(i) Just smooth the images with a Gaussian filter; this can reduce noise differences between the images by an order of magnitude. For example:
h=fspecial('gaussian',15,2);
a=conv2(a,h,'same');
b=conv2(b,h,'same');
(ii) Reduce Noise By Adaptive Filtering
a = wiener2(a,[5 5]);
b = wiener2(b,[5 5]);
(iii) Adjust Intensity Values Using Histogram Equalization
a = histeq(a);
b = histeq(b);
(iv) Adjust Intensity Values to a Specified Range
a = imadjust(a,[0 0.2],[0.5 1]);
b = imadjust(b,[0 0.2],[0.5 1]);
If your images are supposed to be black and white but you have captured them in gray scale, there could be differences due to noise.
You can convert the images to black and white by defining a threshold: any pixel with a value less than that threshold should be assigned 0, and anything larger than that threshold should be assigned 1, or whatever your gray scale maximum is (maybe 255).
Assume your image is I, that your gray scale levels run from 0 to 255, and that you choose a threshold of 100. To make the image black and white:
bw = I >= 100;   % logical mask of pixels at or above the threshold
I(~bw) = 0;      % below the threshold -> black
I(bw) = 255;     % at or above the threshold -> white
Now you have a black and white image. Do the same thing for the other image, and you should get a very small difference if the camera and the subject have not moved.
I've scanned an old photo with paper texture pattern and I would like to remove the texture as much as possible without lowering the image quality. Is there a way, probably using Image Processing toolbox in MATLAB?
I've tried to apply an FFT transformation (using a Photoshop plugin), but I couldn't find any clear white spots to paint over. Probably the pattern is not regular enough for this method?
You can see the sample below. If you need the full image I can upload it somewhere.
Unfortunately, you're pretty much stuck in the spatial domain, as the pattern isn't really repetitive enough for Fourier analysis to be of use.
As @Jonas and @michid have pointed out, filtering will help you with a problem like this. With filtering, you face a trade-off between the amount of detail you want to keep and the amount of noise (or unwanted image components) you want to remove. For example, the median filter used by @Jonas removes the paper texture completely (even the round scratch near the bottom edge of the image), but it also removes all texture within the eyes, hair, face and background (although we don't really care about the background so much; it's the foreground that matters). You'll also see a slight decrease in image contrast, which is usually undesirable. This gives the image an artificial look.
Here's how I would handle this problem (a rough code sketch follows the steps):
Detect the paper texture pattern:
Apply a Gaussian blur to the image (use a large kernel to make sure that all the paper texture information is destroyed)
Calculate the image difference between the blurred and original images
EDIT 2: Apply a Gaussian blur to the difference image (use a small 3x3 kernel)
Threshold the above pattern using an empirically-determined threshold. This yields a binary image that can be used as a mask.
Use median filtering (as mentioned by @Jonas) to replace only the parts of the image that correspond to the paper pattern.
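A rough MATLAB sketch of these steps (every parameter value here is an empirical assumption to be tuned, and the image URL is just the sample from the median-filter answer below):
img  = im2double(rgb2gray(imread('http://i.stack.imgur.com/JzJMS.jpg')));
h    = fspecial('gaussian', 31, 8);       % large kernel to destroy the texture
blur = imfilter(img, h, 'replicate');
d    = img - blur;                        % mostly paper texture + faint edges
d    = imfilter(d, fspecial('gaussian', 3, 0.5), 'replicate'); % small 3x3 blur
mask = abs(d) > 0.05;                     % empirically-determined threshold
med  = medfilt2(img, [15 15]);            % median-filtered version (as @Jonas)
out  = img;
out(mask) = med(mask);                    % replace only the masked pixels
imshow(out);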
Paper texture pattern (before thresholding):
You want as little actual image information as possible to be present in the above image. You'll see that you can very faintly make out the edge of the face (this isn't good, but it's the best I have time for). You also want this paper texture image to be as even as possible (so that thresholding gives equal results across the image). Again, the right-hand side of the image above is slightly darker, meaning that thresholding it well will be difficult.
Final image:
The result isn't perfect, but it has completely removed the highly-visible paper texture pattern while preserving more high-frequency content than the simpler filtering approaches.
EDIT
The filled-in areas are typically plain-coloured and thus stand out a bit if you look at the image very closely. You could try adding some low-strength zero-mean Gaussian noise to the filled-in areas to make them look more realistic. You'd have to pick the noise variance to match the background; determining it empirically may be good enough.
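A hedged sketch of that noise step (out and mask as in the earlier sketch; sigma is a guessed value to be matched to the background):
sigma = 0.02;                                  % assumed noise std; tune to taste
noisy = out;
noisy(mask) = noisy(mask) + sigma * randn(nnz(mask), 1);
imshow(noisy);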
Here's the processed image with the noise added:
Note that the parts where the paper pattern was removed are more difficult to see because the added Gaussian noise is masking them. I used the same Gaussian distribution for the entire image but if you want to be more sophisticated you can use different distributions for the face, background, etc.
A median filter can help you a bit:
img = imread('http://i.stack.imgur.com/JzJMS.jpg');
%# convert rgb to grayscale
img = rgb2gray(img);
%# apply median filter
fimg = medfilt2(img,[15 15]);
%# show
imshow(fimg,[])
Note that you may want to pad the image first to avoid edge effects.
EDIT: A smaller filter kernel than [15 15] will preserve image texture better, but will leave more visible traces of the filtering.
Well, I have tried a different approach, using anisotropic diffusion with the 2nd diffusion coefficient, which operates on wider areas.
Here is the output I got:
From what I can see in the picture, the noise has a relatively high frequency compared to the image itself, so applying a low-pass filter should work. Have a look at the power spectrum abs(fft2(...)) to determine the cutoff frequency.
If this is the formula for the MSE of two RGB images A and B of the same size (256x200), then how do I obtain a line plot for every pixel, with the x-axis representing pixels and the y-axis representing the MSE values?
MSE = reshape(mean(mean((double(A) - double(B)).^2,2),1),[1,3])
There are only two images, A and B. The plot should illustrate the change between each pixel of A and B, which is what I mean by MSE.
If you want to display the changes "between each pixel" then what you're showing is not mean squared errors any more -- there's no averaging going on. (Unless you intend to average across the three colour planes, but I don't recommend that: changes in R,G,B are not equally salient to the human visual system. If you really must do this, you might want to weight them, say, 2:4:1 for something a bit more representative, but this is still ad hoc and not likely to give a very accurate idea of what differences will look biggest.)
Of course it's perfectly reasonable to want to see the per-pixel errors, but I wouldn't recommend using a line plot to display them; it's likely to be confusing rather than informative. Rather, display them as an image:
errs = (double(A)-double(B)).^2;
image(errs / max(errs(:)));
axis image;
which you can then compare by eye with A and B to see what image regions/features/... correspond to worse errors. The brightness and colour of each pixel indicate the amount of error and how it's distributed across the R, G, and B planes.
On the other hand, perhaps what you actually need is mean squared error over individual rows, or columns, of the image. In that case, after creating errs as above, use mean to compute the row or column means; that will give you a 256-by-1-by-3 image or a 1-by-200-by-3 image; now I would suggest plotting R,G,B curves separately unless you (probably foolishly in my opinion, as mentioned above) insist on averaging the planes.
row_errs = mean(errs,2); % this is now of size [n,1,3]
Now row_errs(:,:,1) is a vector of MS-across-rows red errors, row_errs(:,:,2) is a vector of MS-across-rows green errors, etc. You can feed these to plot.
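For instance, a sketch of plotting the three planes on one figure:
figure; hold on;
plot(row_errs(:,:,1), 'r');   % red-plane mean squared errors per row
plot(row_errs(:,:,2), 'g');   % green plane
plot(row_errs(:,:,3), 'b');   % blue plane
xlabel('row index'); ylabel('mean squared error');
legend('R','G','B');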