How to measure these black regions in Matlab - matlab

The image above has been processed to remove its background and increase contrast with im2bw. I want to now identify and measure the two elongated black regions at the top and bottom centre of the image. This is the result:
If I use imfill(I,'holes'), one of them does not get identified.
I would also like to identify the boundaries, so that I can measure the area of these regions and find their respective "weighted centroid".
What I want to achieve is something that allows me to measure an angle between the orientation of the elongated black regions in different frames, as pictured in the sketch below (the red line indicates the position of the top black region in a previous frame).

In this answer, I'll be using DIPimage 3, an image analysis toolbox for MATLAB (disclosure: I'm an author). However, the filters applied are quite simple, it should be no problem implementing this using other toolboxes instead.
The original image is very noisy. Simply thresholding that image leads to a noisy binary image that is very difficult to work with. I'm suggesting you filter the original image to highlight the structures of interest first, before thresholding and measuring.
Because we're interested in detecting lines, we'll use the Laplace of Gaussian filter. It is important to tune the sigma parameter to match the width of the lines to be detected. After applying the Laplace filter, dark lines will appear bright, and bright lines will appear dark. The bright dot in the middle of the image will also be enhanced, but appear dark.
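To see why dark lines come out bright under this filter, here is a minimal 1-D sketch in plain Python (not DIPimage) of the discrete second derivative; the Gaussian smoothing part of the Laplace of Gaussian is omitted for brevity, and the signal values are hypothetical:

```python
def laplacian_1d(signal):
    """Discrete second derivative: at a dark (low) line on a bright
    background the curvature is positive, so the line shows up bright."""
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

# a dark line of width 1 on a bright background
sig = [100] * 5 + [20] + [100] * 5
resp = laplacian_1d(sig)
# resp is strongly positive at the line centre and negative just beside it
```

In the 2-D case the same sign flip happens along the line's cross-section, which is why thresholding the Laplace output picks out the dark elongated regions.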
img = readim('https://i.stack.imgur.com/0LzF3m.png');
img = img{1}; % all three channels of PNG file are identical, take one
out = laplace(img,10);
This image is straightforward to threshold.
out = out > 0.25;
Finally, we'll measure the orientation of these two lines as the angle under which the projection is largest.
msr = measure(out,[],'feret');
angle = msr.Feret(:,4)
Output (angle in radians; 0 is to the right, pi/2 is down):
angle =
-1.7575
-1.7714
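The "angle under which the projection is largest" idea can also be sketched without DIPimage. The following plain-Python brute-force stand-in (not DIPimage's Feret implementation, and its angle convention is [0, pi) rather than DIPimage's) scans orientations and keeps the one with the longest projected extent:

```python
import math

def max_projection_angle(coords, n_angles=360):
    """Estimate line orientation as the direction along which the
    projection of the pixel coordinates is longest (max Feret diameter)."""
    best_angle, best_extent = 0.0, -1.0
    for k in range(n_angles):
        theta = math.pi * k / n_angles  # orientations are unique modulo pi
        c, s = math.cos(theta), math.sin(theta)
        proj = [x * c + y * s for (x, y) in coords]
        extent = max(proj) - min(proj)
        if extent > best_extent:
            best_extent, best_angle = extent, theta
    return best_angle

# hypothetical example: pixels along a line at 60 degrees
pts = [(t * math.cos(math.radians(60)), t * math.sin(math.radians(60)))
      for t in range(100)]
angle = max_projection_angle(pts)
```

Comparing this angle between frames gives the rotation asked about in the question.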

Related

Detecting strongest points on text

I need to find text areas on a natural image.
I = rgb2gray(imread('image-name.jpg'));
points = detectHarrisFeatures(I);
imshow(I); hold on;
plot(points);
The version of the code above retrieves all of the detected strongest points.
When I change the line that starts with "plot" like this:
[m,n] = size(points.Location);
plot(points.selectStrongest(int64((m*2)/3)));
This gives points with less noise, but in various situations I need to reduce the noisy points further. The output figure was:
Input image is on the left side and Output image is on the right side
As you can see, there are still noisy points outside the rectangle (red lines) area. (The rectangle lines were added by me in Photoshop; the output is the same without the red lines.)
The main question is: I need a rectangle around the perspective-distorted text region, like this (red rectangle on the image):
Desired output with rectangle
By finding this rectangle, I can apply an affine transformation to the image to correct the perspective issue and make it ready for the OCR process.
The interest point density in noisy regions looks low compared to the point-density in other regions. By density, I mean the number of interest-points per unit area. Assuming this observation holds in general, it is possible to filter out the noisy regions.
I don't have MATLAB, so the code is in OpenCV.
As I mentioned in a comment, I initially thought a median filter would work, but when I tried it, it didn't. So I tried adaptive thresholding, because in my implementation it performs a kind of density calculation and rejects less dense regions. Please see the comments in the code for further clarification.
/* load image as gray scale */
Mat im = imread("yEEy9.jpg", 0);
/* find interest points: using FAST here */
vector<KeyPoint> keypoints;
FAST(im, keypoints, 15);
/* mark interest points pixels with value 255 in a blank image */
Mat goodfeatures = Mat::zeros(im.rows, im.cols, CV_8U);
for (KeyPoint p: keypoints)
{
goodfeatures.at<unsigned char>(p.pt) = 255;
}
/* density filtering using adaptive thresholding:
compute a threshold for each pixel as the mean value of blocksize*blocksize neighborhood
of (x,y) minus c */
int blocksize = 15, c = 7;
Mat bw;
adaptiveThreshold(goodfeatures, bw, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, blocksize, c);
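The same density idea can be written without OpenCV. The following plain-Python sketch is a simplified stand-in for the adaptive-threshold trick (the radius and neighbour-count parameters are hypothetical): a keypoint survives only if enough other keypoints fall within its neighbourhood.

```python
def filter_sparse_points(points, radius=2.0, min_neighbors=3):
    """Keep only points with at least `min_neighbors` other points within
    `radius` -- i.e. reject keypoints lying in low-density (noisy) regions."""
    kept = []
    for i, (x, y) in enumerate(points):
        n = sum(1 for j, (u, v) in enumerate(points)
                if j != i and (u - x) ** 2 + (v - y) ** 2 <= radius ** 2)
        if n >= min_neighbors:
            kept.append((x, y))
    return kept

cluster = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]  # dense text-like cluster
noise = [(50, 50)]                                   # isolated noisy point
kept = filter_sparse_points(cluster + noise)
```

This is O(n^2) per frame; the adaptive-threshold version in the answer achieves the same effect with image-space averaging, which scales better for many keypoints.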
We can't detect a bounding rectangle from the printed text lines, as the lines may not cover the entire page area, and line detection itself may be unreliable before perspective correction.
So I suggest an easier approach to the problem:
Detect all four page-edge lines, which will give a good estimate of the page's rotation in the table's plane (i.e. camera roll). Correct the image for rotation first.
I suspect much correction for camera yaw and tilt is not required, as one will not be shooting a page from a high angle, say 45 degrees, and at 5 to 10 degrees of yaw/tilt the characters will still be recognizable. Moreover, the difference in width between the top and bottom edges, and between the left and right edges, can be used to estimate a correction factor for tilt and yaw, easing the detection algorithm's threshold.

Matlab : ROI substraction

I'm learning about the statistical features of an image. A quote that I'm reading is:
For the first method which is statistical features of texture, after
the image is loaded, it is converted to gray scale image. Then the
background is subtracted from the original image. This is done by
subtract the any blue intensity pixels for the image. Finally, the ROI
is obtained by finding the pixels which are not zero value.
The implementation:
% PREPROCESSING segments the Region of Interest (ROI) for
% statistical features extraction.
% Convert RGB image to grayscale image
g=rgb2gray(I);
% Obtain blue layer from original image
b=I(:,:,3);
% Subtract blue background from grayscale image
r=g-b;
% Find the ROI by finding non-zero pixels.
x=find(r~=0);
f=g(x);
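A minimal sketch of what that code does, in plain Python with hypothetical toy values. Note that MATLAB's uint8 subtraction saturates at 0, so any pixel where the blue value exceeds the grayscale value drops out of the ROI:

```python
def roi_from_blue_subtraction(gray, blue):
    """Mimic r = g - b with uint8 saturation (negative differences clamp
    to 0), then collect the grayscale values at the non-zero pixels,
    as in the MATLAB snippet above."""
    roi = []
    for grow, brow in zip(gray, blue):
        for g, b in zip(grow, brow):
            r = max(g - b, 0)  # uint8 subtraction saturates at 0 in MATLAB
            if r != 0:
                roi.append(g)
    return roi

# toy 1x2 image: first pixel is ROI-like (low blue), second is blue background
roi = roi_from_blue_subtraction([[100, 50]], [[30, 200]])
```

So the blue background pixels, where blue dominates the gray value, clamp to zero and are excluded from the ROI.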
My interpretation:
Is the purpose of subtracting the blue channel here related to the fact that the background is blue and the ROI is not? Like:
But what about real-world imaging, for example an object surrounded by more than one colour? What is the best way to extract the ROI in that case?
For example (assuming the bird has only two colours, green and black, and ignoring its geometric shape):
What would I do in that case? Also, the picture will be converted to grayscale, right? Even though a part of the ROI (the bird) is itself black.
I mean, in the bird case, how can I extract only the green and black parts and remove the remaining colours (which are considered background)?
Background removal in an image is a large and potentially complicated subject in the general case, but what I understand is that you want to take advantage of colour information you already have about your background (correct me if I'm wrong).
If you know the colour to remove, you can for instance:
switch from RGB to Lab color space (Wiki link).
after converting your image, compute the Euclidean distance from the background colour (say orange) to every pixel in your image
define a threshold below which a pixel is considered background
In other words, if coordinates of a pixel in Lab are close to orange coordinates in Lab, this pixel is background. The advantage of using Lab is that Euclidean distance between points relates to human perception of colours.
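A minimal sketch of the distance test, assuming the pixels have already been converted to Lab (MATLAB's rgb2lab, or OpenCV's colour conversion, would do that step); the colour values and threshold below are hypothetical:

```python
import math

def background_mask(lab_pixels, bg_lab, thresh):
    """Mark a pixel as background when its Euclidean distance (in Lab)
    to the background colour is below the threshold."""
    return [math.dist(p, bg_lab) < thresh for p in lab_pixels]

# hypothetical orange-ish background colour in Lab coordinates
bg = (60.0, 40.0, 60.0)
pixels = [(61.0, 41.0, 59.0),   # close to the background colour
          (30.0, -20.0, 10.0)]  # clearly different
mask = background_mask(pixels, bg, thresh=10.0)
```

The threshold would need tuning per image; in Lab, a distance of a few units is roughly a just-noticeable colour difference.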
I think this should work, please give it a shot or let me know if I misunderstood the question.

How to remove camera noises in CMOS camera

Herewith I have attached two consecutive frames captured by a CMOS camera with an IR filter. The checkerboard object was stationary when the images were captured, but the difference between the two images is nearly 31000 pixels. This could affect my results. Can you tell me what kind of noise this is, and how I can remove it? Please suggest any algorithms or functions that could remove this noise.
Thank you. Sorry for my poor English.
Image1 : [1]: http://i45.tinypic.com/2wptqxl.jpg
Image2: [2]: http://i45.tinypic.com/v8knjn.jpg
That noise appears to come from the camera sensor (Bayer-to-RGB conversion); a checkerboard pattern is still visible.
Lossy JPEG compression also contributes a lot to the problem. You should first get access to the raw images.
From those particular images, I'd first try edge-detection filters (horizontal and vertical Sobel) to build a mask that selects between a median filter / local histogram equalization for the flat areas and a checkerboard-reducing filter for the edges. The point is that probably no single filter can do well both on the JPEG ringing artifacts and on the jagged edges. Then the real question is: what other kinds of images need to be processed?
From the comments: if the corner points are to be made exact, then the solution is more likely to search for features (corner points with subpixel resolution), build a mapping from one set of points to the other image's set of corners, and search for the best affine transformation matrix that converts the two sets into each other. With this matrix one can then resample the other image.
Fortunately, one can estimate motion vectors with subpixel resolution without brute-force searching all possible subpixel locations: when calculating a matched filter, one gets local maxima at the candidates for exact matches. But this is not all there is. One can calculate a more precise approximation of the peak location by studying the matched-filter outputs at the nearby pixels. For an exact match the output should be symmetric; otherwise the 'energies' of the matched filter are biased towards the second-best location. (A 2nd-degree polynomial fit + finding its maximum can work.)
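That 2nd-degree polynomial fit can be sketched as the classic three-point parabola interpolation around the integer peak of the matched-filter output (plain Python; the sample values below are hypothetical):

```python
def subpixel_peak(y_m1, y_0, y_p1):
    """Fit a parabola through three matched-filter outputs around the
    integer peak (at offsets -1, 0, +1) and return the sub-pixel offset
    of the parabola's maximum, in [-0.5, 0.5]."""
    denom = y_m1 - 2.0 * y_0 + y_p1
    if denom == 0:
        return 0.0  # degenerate (flat) neighbourhood: keep the integer peak
    return 0.5 * (y_m1 - y_p1) / denom

# samples of -(x - 0.3)**2 at x = -1, 0, 1; true peak is at x = 0.3
offset = subpixel_peak(-1.69, -0.09, -0.49)
```

For an exact (symmetric) match the offset is zero; an asymmetric neighbourhood shifts the estimate toward the second-best location, as described above.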
Looking closely at these images, I must agree with @Aki Suihkonen.
In my view, the main noise comes from the jpeg compression, that causes sharp edges to "ring". I'd try a "de-speckle" type of filter on the images, and see if this makes a difference. Some info that can help you implement this can be found in this link.
In a more quick-and-dirty fashion, you can apply one of the many standard tools; for example, given the images a and b:
(i) just smooth the image with a Gaussian filter, this can reduce noise differences between the images by an order of magnitude. For example:
h=fspecial('gaussian',15,2);
a=conv2(a,h,'same');
b=conv2(b,h,'same');
(ii) Reduce Noise By Adaptive Filtering
a = wiener2(a,[5 5]);
b = wiener2(b,[5 5]);
(iii) Adjust Intensity Values Using Histogram Equalization
a = histeq(a);
b = histeq(b);
(iv) Adjust Intensity Values to a Specified Range
a = imadjust(a,[0 0.2],[0.5 1]);
b = imadjust(b,[0 0.2],[0.5 1]);
If your images are supposed to be black and white but you have captured them in grayscale, there could be differences due to noise.
You can convert the images to black and white by defining a threshold: any pixel with a value less than the threshold is assigned 0, and anything larger is assigned 1, or whatever your grayscale maximum is (maybe 255).
Assume your image is I. To make it black and white, assuming your grayscale range is 0 to 255 and you choose a threshold of 100:
I(I < 100) = 0;
I(I >= 100) = 255;
Now you have a black-and-white image. Do the same for the other image, and you should get a very small difference if the camera and the subject have not moved.

Edge removal (in matlab)

I have a image which looks like this:
The (blue) background has the value zero, and the (red) ring has a "large" value (compared to the rest of the image). I want to plot only the orange part of the sample. However, due to the finite resolution of the image, the edges still appear, as shown here:
As you can see, the white regions in particular (yes, there are a few) are hard to see due to all the noise from the edges.
Is there a good algorithm (preferably in MATLAB) which can help me clean up these images?
Find the binary mask for the ring
Dilate the mask a bit using imdilate and strel
Use the inverted mask to 'and out' the ring and the region around it
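Those three steps can be sketched as follows, in plain Python on nested lists (a hand-rolled stand-in for imdilate with strel('square', 3); the toy image and mask are hypothetical):

```python
def dilate(mask):
    """One step of binary dilation with a 3x3 structuring element:
    a pixel is set if any pixel in its 3x3 neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    return [[int(any(mask[y][x]
                     for y in range(max(i - 1, 0), min(i + 2, h))
                     for x in range(max(j - 1, 0), min(j + 2, w))))
             for j in range(w)] for i in range(h)]

def remove_ring(image, ring_mask, n_dilations=1):
    """Dilate the ring mask a bit, then 'and out' the ring and the
    region around it by zeroing the masked pixels."""
    m = ring_mask
    for _ in range(n_dilations):
        m = dilate(m)
    return [[0 if m[i][j] else image[i][j]
             for j in range(len(image[0]))] for i in range(len(image))]

# toy 3x3 example: one ring pixel in the centre of a flat image
img = [[5, 5, 5], [5, 9, 5], [5, 5, 5]]
ring = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
cleaned = remove_ring(img, ring, n_dilations=1)
```

The number of dilation steps controls how wide a band around the ring gets suppressed, which is the knob for cleaning up the blurred edge pixels.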

Efficient segment boundary marking after segmentation of an image

One can mark the boundary of a binary image with the bwboundaries function of MATLAB.
What should be done for obtaining boundaries of all segments as a binary image?
I have segmented an image and want to know if there is a way to mark boundaries between each neighbouring segment without applying morphological operations on each segment.
I have added images to illustrate what I want to do. Actually, I want to obtain a binary image that keeps the pink boundary-marker pixels between all segments. Then I can overlay them on the original image with the help of Steve Eddins' imoverlay function.
Random colored labeling of segmentation result:
Roughly-marked pink boundaries between segments:
You can find the region boundaries using a range filter, which finds the intensity range within each pixel's neighborhood. This takes advantage of the fact that the label matrix only has non-zero range at the region boundaries.
im = imread('http://i.stack.imgur.com/qPiA3.png');
boundaries = rangefilt(im,ones(3)) > 0;
imoverlay(label2rgb(im),boundaries,[0 0 0]);
These edges are also two pixels wide. Actually, I think the edges have to be two pixels wide; otherwise the regions will "lose" pixels to the border non-uniformly.
Since erosion and dilation work on non-binary images as well, you can write
img = imread('http://i.stack.imgur.com/qPiA3.png');
ei = imerode(img,ones(3));
di = imdilate(img,ones(3));
boundaries = ei~=img | di~=img;
This results in a bw image that has a boundary at the edge of each colored region (thus, the boundary line will be two pixels wide).
Note that this will not return an ordered list of pixels as bwboundaries, but rather a logical mask like bwperim, which is what imoverlay needs as input.
As a roundabout way, I thought of making use of the edge function of MATLAB.
First, I need to apply something like a label2gray operation. In the code below, labels is the segmentation output (the first image provided in the question).
grayLabels = mat2gray(255* double(labels) ./ double(max(labels(:)))); %label2gray
bw_boundaries = edge(grayLabels,0.001);