findContours (OpenCV) vs. regionprops (MATLAB)

My final goal is to calculate the moments of all connected regions in an image.
The problem: OpenCV and Matlab store regions differently, so the moments computed from a filled region and from its contour differ as well.
So if I want to reproduce the result of this region extraction in Matlab:
test = imread('test.bmp');
l = bwlabel(test, 8);
the following OpenCV code is necessary:
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Mat test = imread("..\\test.bmp", CV_LOAD_IMAGE_UNCHANGED);
findContours(test, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
for (int i = 0; i < (int)contours.size(); i++)
{
    // skip holes: only top-level (outer) contours delimit whole regions
    if (hierarchy[i][3] > -1)
        continue;
    Mat imProcessing = Mat::zeros(test.size(), test.type());
    drawContours(imProcessing, contours, i, Scalar(255), CV_FILLED, 8, hierarchy, 1);
    // now imProcessing contains the connected region, not just the contour!
}
Is there a more efficient way?
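One note: if OpenCV 3.0+ is available, cv::connectedComponents is the direct counterpart of bwlabel and skips the contour machinery entirely. A minimal sketch, assuming the same binary 8-bit input as above:
#include <opencv2/opencv.hpp>

int main()
{
    // same binary input image as above
    cv::Mat test = cv::imread("..\\test.bmp", cv::IMREAD_GRAYSCALE);

    // label connected regions with 8-connectivity, like bwlabel(test, 8)
    cv::Mat labels;
    int nLabels = cv::connectedComponents(test, labels, 8, CV_32S);

    // compute the moments of each region (label 0 is the background)
    for (int label = 1; label < nLabels; label++)
    {
        cv::Mat mask = (labels == label);        // 255 where the region is
        cv::Moments m = cv::moments(mask, true); // binaryImage = true
        // m.m00, m.m10, m.m01, ... are the region's moments
    }
    return 0;
}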

Related

Relation between Harris detector results in MATLAB and OpenCV

I am working on corner feature detection using the Harris detector. I wrote a program in MATLAB that detects Harris features in an image using the following code:
corners = detectHarrisFeatures(img, 'MinQuality', 0.0001);
S = corners.selectStrongest(100);
Then I ported the whole program from MATLAB to OpenCV and used the following code to detect Harris corner points:
int thresh = 70;
for (int j = 0; j < dst_norm.rows && cont < 100; j++)
{
    for (int i = 0; i < dst_norm.cols && cont < 100; i++)
    {
        if ((int)dst_norm.at<float>(j, i) > thresh)
        {
            S.at<int>(cont, 0) = i;
            S.at<int>(cont, 1) = j;
            I.at<int>(cont, 0) = i;
            I.at<int>(cont, 1) = j;
            cont = cont + 1;
        }
    }
}
The extracted regions differed between the two programs, and I discovered that the corner points Harris detects in MATLAB are not the same as the ones it detects in OpenCV.
How can I make both programs detect the same corner points?
Is dst_norm an array of Harris corner metric values? In that case you are choosing the first 100 pixels whose corner metric is above the threshold, which is incorrect.
In your MATLAB code, detectHarrisFeatures finds points which are local maxima of the corner metric. Then the selectStrongest method picks the 100 points with the highest metric. So, first you have to find the local maxima. Then you have to sort them and take the top 100.
Even then, the results will not be exactly the same, because detectHarrisFeatures locates the corners with sub-pixel accuracy, using interpolation.
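As a sketch of that procedure in OpenCV (assuming dst_norm holds the normalized Harris response as in the question; the 3x3 dilation used for the local-maximum test is my choice, not something from the original code):
#include <algorithm>
#include <vector>
#include <opencv2/imgproc/imgproc.hpp>

struct Corner { cv::Point pt; float response; };

// returns the n strongest local maxima of the Harris response above thresh
std::vector<cv::Point> topHarrisCorners(const cv::Mat& dst_norm, float thresh, int n)
{
    // non-maximum suppression: a pixel is a local maximum iff it equals
    // the maximum of its 3x3 neighborhood (computed via dilation)
    cv::Mat dilated;
    cv::dilate(dst_norm, dilated, cv::Mat());

    std::vector<Corner> corners;
    for (int j = 0; j < dst_norm.rows; j++)
        for (int i = 0; i < dst_norm.cols; i++)
        {
            float v = dst_norm.at<float>(j, i);
            if (v > thresh && v == dilated.at<float>(j, i))
                corners.push_back({cv::Point(i, j), v});
        }

    // sort by response, strongest first, and keep the top n
    std::sort(corners.begin(), corners.end(),
              [](const Corner& a, const Corner& b) { return a.response > b.response; });

    std::vector<cv::Point> result;
    for (int k = 0; k < (int)corners.size() && k < n; k++)
        result.push_back(corners[k].pt);
    return result;
}
With the question's values this would be called as topHarrisCorners(dst_norm, 70.0f, 100).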

Find a nearly circular band of bright pixels in this image

This is the problem I have: I have an image as shown below. I want to detect the circular region which I have marked with a red line for display here (that particular bright ring).
This is what I do for now (MATLAB):
binaryImage = imdilate(binaryImage,strel('disk',5));
binaryImage = imfill(binaryImage, 'holes'); % Fill holes.
binaryImage = bwareaopen(binaryImage, 20000); % Remove small blobs.
binaryImage = imerode(binaryImage,strel('disk',300));
out = binaryImage;
img_display = immultiply(binaryImage,rgb2gray(J1));
figure, imshow(img_display);
The output is cut off on one part of the object (for a different input image, not the one displayed above). I want the output to be symmetric (it is not always a perfect circle when it is rotated).
I want to strictly avoid im2bw since as soon as I binarize, I lose a lot of information about the shape.
This is what I was thinking of:
I can detect the outer most circular (almost circular) contour of the image (shown in yellow). From this, I can find out the centroid and maybe find a circle which has a radius of 50% (to locate the region shown in red). But this won't be exactly symmetric since the object is slightly tilted. How can I tackle this issue?
I have attached another image here, where the object is slightly tilted.
I'd try messing around with the 'log' (Laplacian of Gaussian) filter. The region you want essentially has low values of the 2nd-order derivative (i.e. where the slope is decreasing), and you can detect these regions by applying a LoG filter and keeping the negative values. Here's a very basic outline of what you can do; tweak it to your needs.
img = im2double(rgb2gray(imread('wheel.png')));
img = imresize(img, 0.25, 'bicubic');
filt_img = imfilter(img, fspecial('log',31,5)); % Laplacian of Gaussian filter
bin_img = filt_img < 0; % negative 2nd-order derivative
subplot(2,2,1);
imshow(filt_img,[]);
% Get region properties and keep ring-like blobs: EulerNumber == 0 means
% exactly one hole, low Eccentricity means roughly circular, and the
% Area cut drops small blobs
rp = regionprops(bin_img,'EulerNumber','Eccentricity','Area','PixelIdxList','PixelList');
rp = rp([rp.EulerNumber] == 0 & [rp.Eccentricity] < 0.5 & [rp.Area] > 2000);
bin_img(:) = false;
bin_img(vertcat(rp.PixelIdxList)) = true;
subplot(2,2,2);
imshow(bin_img,[]);
bin_img(:) = false;
bin_img(rp(1).PixelIdxList) = true;
bin_img = imfill(bin_img,'holes');
img_new = img;
img_new(~bin_img) = 0;
subplot(2,2,3);
imshow(img_new,[]);
bin_img(:) = false;
bin_img(rp(2).PixelIdxList) = true;
bin_img = imfill(bin_img,'holes');
img_new = img;
img_new(~bin_img) = 0;
subplot(2,2,4);
imshow(img_new,[]);
Output:

Segmenting Lungs and nodules in CT images

I am new to image processing in MATLAB. I am trying to segment the lung and nodules from a CT image. I have done initial image enhancement.
I have searched a lot for this but haven't found any relevant materials.
I am trying to segment the lung from the given image, and then detect nodules on the lung.
The code I tried in MATLAB:
d1 = dicomread('000000.dcm');
d1ca = imadjust(d1);          % contrast adjustment
d1nF = wiener2(d1ca);         % adaptive noise removal
d1Level = graythresh(d1nF);   % Otsu's threshold
d1sBW = im2bw(d1nF, d1Level);
sed = strel('diamond', 1);
BWfinal = imerode(d1sBW, sed);
BWfinal = imerode(BWfinal, sed);
BWoutline = bwperim(BWfinal);
Segout = d1nF;
Segout(BWoutline) = 0;        % draw the outline on the filtered image
edgePrewitt = edge(d1nF, 'prewitt');
Result of the above code:
I want results like these:
http://oi41.tinypic.com/35me7pj.jpg
http://oi42.tinypic.com/2jbtk6p.jpg
http://oi44.tinypic.com/w0kthe.jpg
http://oi40.tinypic.com/nmfaio.jpg
http://oi41.tinypic.com/2nvdrie.jpg
http://oi43.tinypic.com/2nvdnhk.jpg
I know it may be easy for experts. Please help me out.
Thank you!
The following is not a Matlab answer! However, OpenCV and Matlab share many features in common, and I'm sure you will be able to translate this C++ code to Matlab with no problems.
For more information about the methods being called, check the OpenCV documentation.
#include <iostream>
#include <vector>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
int main(int argc, char* argv[])
{
    // Load input image (colored, i.e. 3-channel)
    cv::Mat input = cv::imread(argv[1]);
    if (input.empty())
    {
        std::cout << "!!! failed imread()" << std::endl;
        return -1;
    }

    // Convert input image to grayscale (1-channel)
    cv::Mat grayscale;
    cv::cvtColor(input, grayscale, cv::COLOR_BGR2GRAY);
What grayscale looks like:
    // Erode & dilate to remove noise and improve the result of the next operation (threshold)
    int erosion_type = cv::MORPH_RECT; // MORPH_RECT, MORPH_CROSS, MORPH_ELLIPSE
    int erosion_size = 3;
    cv::Mat element = cv::getStructuringElement(erosion_type,
        cv::Size(2 * erosion_size + 1, 2 * erosion_size + 1),
        cv::Point(erosion_size, erosion_size));
    cv::erode(grayscale, grayscale, element);
    cv::dilate(grayscale, grayscale, element);
What grayscale looks like after morphological operations:
    // Threshold to segment the area of the lungs
    cv::Mat thres;
    cv::threshold(grayscale, thres, 80, 150, cv::THRESH_BINARY);
What thres looks like:
    // Find the contours of the lungs in the thresholded image
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(thres, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Fill the areas of the lungs with BLUE for better visualization
    cv::Mat lungs = input.clone();
    for (size_t i = 0; i < contours.size(); i++)
    {
        std::vector<cv::Point> cnt = contours[i];
        double area = cv::contourArea(cv::Mat(cnt));
        if (area > 15000 && area < 35000)
        {
            std::cout << "* Area: " << area << std::endl;
            cv::drawContours(lungs, contours, (int)i, cv::Scalar(255, 0, 0),
                CV_FILLED, 8, std::vector<cv::Vec4i>(), 0, cv::Point());
        }
    }
What lungs looks like:
    // Using the image with blue lungs as a mask, we create a new image containing only the lungs
    cv::Mat blue_mask = cv::Mat::zeros(input.size(), CV_8UC1);
    cv::inRange(lungs, cv::Scalar(255, 0, 0), cv::Scalar(255, 0, 0), blue_mask);
    cv::Mat output;
    input.copyTo(output, blue_mask);

    return 0;
}
What output looks like:
At this point you have the lungs isolated in the image and can proceed to execute other filter operations to isolate the nodules.
Good luck.
Try this:
% dp6BK.png is your original image, downloaded directly
I = im2double(imread('dp6BK.png'));
I = I(:,:,1);
imshow(I)
cropped = I(50:430, 8:500);               % crop region of interest
thresholded = cropped < 0.35;             % threshold to isolate lungs
clearThresh = imclearborder(thresholded); % remove border artifacts in image
Liver = bwareaopen(clearThresh, 100);     % remove objects smaller than 100 pixels
Liver1 = imfill(Liver, 'holes');          % fill in the vessels inside the lungs
figure, imshow(Liver1 .* cropped)
What you will get:

How are Kinect depth images created? Can simple RGB images be converted to such depth images?

My primary motive is to detect a hand in simple RGB images (images from my webcam).
I found sample code, find_hand_point:
function [result, depth] = find_hand_point(depth_frame)
% function [result, depth] = find_hand_point(depth_frame)
%
% returns the coordinate of a pixel that we expect to belong to the hand.
% very simple implementation, we assume that the hand is the closest object
% to the sensor.
max_value = max(depth_frame(:));
current2 = depth_frame;
current2(depth_frame == 0) = max_value;
blurred = imfilter(current2, ones(5, 5)/25, 'symmetric', 'same');
minimum = min(blurred(:));
[is, js] = find(blurred == minimum);
result = [is(1), js(1)];
depth = minimum;
The result variable is the coordinate of the nearest thing to the camera (the hand).
A depth image from a Kinect device was passed to this function, and the result was:
http://img839.imageshack.us/img839/5562/testcs.jpg
The green rectangle shows the closest thing to the camera (the hand).
PROBLEM:
The images that my laptop camera captures are not Depth images but are simple RGB images.
Is there a way to convert my RGB images to such depth images?
Is there a simple alternative technique to detect the hand?
Kinect uses extra sensors to retrieve the depth data. There is not enough information in a single webcam image to reconstruct a 3D picture, but it is possible to make rough estimates based on a series of images. This is the principle behind XTR-3D and similar solutions.
A much simpler approach can be found at http://www.andol.info/hci/830.htm
There the author converts the RGB image to HSV and simply keeps specific ranges of the H, S and V values, which he assumes to be hand-like colors.
In Matlab:
function [hand] = get_hand(rgb_image)
% MATLAB's rgb2hsv returns H, S and V scaled to [0,1], so the
% 0-255 style thresholds are divided by 255 here
hsv_image = rgb2hsv(rgb_image);
hand = ( (hsv_image(:,:,1) >= 0)      & (hsv_image(:,:,1) < 20/255) ) & ...
       ( (hsv_image(:,:,2) >= 30/255) & (hsv_image(:,:,2) < 150/255) ) & ...
       ( (hsv_image(:,:,3) >= 80/255) & (hsv_image(:,:,3) < 1) );
end
The hand = ... line will give you a logical matrix with 1s at the pixels where (on a 0-255 scale)
0 <= H < 20 AND 30 <= S < 150 AND 80 <= V < 255
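For completeness, a rough OpenCV equivalent would use cv::inRange. Note that OpenCV scales hue to [0, 180] and S/V to [0, 255], so treat these bounds (taken from the ranges above) as a starting point to tune, not an exact equivalent; the helper name is my own:
#include <opencv2/imgproc/imgproc.hpp>

// hypothetical helper: returns a binary mask of hand-like (skin) pixels
cv::Mat getHandMask(const cv::Mat& bgr_image)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr_image, hsv, cv::COLOR_BGR2HSV);
    // OpenCV scales H to [0,180] and S/V to [0,255]; these bounds
    // mirror the ranges quoted above and will likely need tuning
    cv::inRange(hsv, cv::Scalar(0, 30, 80), cv::Scalar(20, 150, 255), mask);
    return mask;
}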
A better technique I found for detecting a hand via skin color :)
http://www.edaboard.com/thread200673.html

Boundary detection of a paper sheet in OpenCV

I am new to OpenCV. I already detect the edges of the paper sheet, but my result image is blurred after drawing lines on the edges. How can I draw lines on the edges of the paper sheet so that the image quality remains unaffected?
What am I missing?
My code is below.
Many thanks.
- (void)forOpenCV
{
    if (imageView.image != nil)
    {
        cv::Mat greyMat = [self cvMatFromUIImage:imageView.image];
        vector<vector<cv::Point> > squares;
        cv::Mat img = [self debugSquares:squares :greyMat];
        imageView.image = [self UIImageFromCVMat:img];
    }
}

- (cv::Mat)debugSquares:(std::vector<std::vector<cv::Point> >)squares :(cv::Mat &)image
{
    NSLog(@"%lu", squares.size());

    // blur will enhance edge detection
    Mat blurred(image);
    medianBlur(image, blurred, 9);

    Mat gray0(image.size(), CV_8U), gray;
    vector<vector<cv::Point> > contours;

    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        int ch[] = {c, 0};
        mixChannels(&image, 1, &gray0, 1, ch, 1);

        // try several threshold levels
        const int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++)
        {
            // Use Canny instead of zero threshold level!
            // Canny helps to catch squares with gradient shading
            if (l == 0)
            {
                Canny(gray0, gray, 10, 20, 3);
                // Dilate helps to remove potential holes between edge segments
                dilate(gray, gray, Mat(), cv::Point(-1, -1));
            }
            else
            {
                gray = gray0 >= (l + 1) * 255 / threshold_level;
            }

            // Find contours and store them in a list
            findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

            // Test contours
            vector<cv::Point> approx;
            for (size_t i = 0; i < contours.size(); i++)
            {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true) * 0.02, true);

                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                if (approx.size() == 4 &&
                    fabs(contourArea(Mat(approx))) > 1000 &&
                    isContourConvex(Mat(approx)))
                {
                    double maxCosine = 0;
                    for (int j = 2; j < 5; j++)
                    {
                        double cosine = fabs(angle(approx[j % 4], approx[j - 2], approx[j - 1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }
                    if (maxCosine < 0.3)
                        squares.push_back(approx);
                }
            }
        }
    }

    NSLog(@"%lu", squares.size());

    for (size_t i = 0; i < squares.size(); i++)
    {
        cv::Rect rectangle = boundingRect(Mat(squares[i]));
        if (i == squares.size() - 1) // detecting the rectangle here
        {
            const cv::Point* p = &squares[i][0];
            int n = (int)squares[i].size();
            NSLog(@"%d", n);
            line(image, cv::Point(507, 418), cv::Point(507 + 1776, 418 + 1372), Scalar(255, 0, 0), 2, 8);
            polylines(image, &p, &n, 1, true, Scalar(255, 255, 0), 5, CV_AA);
            fx1 = rectangle.x;
            fy1 = rectangle.y;
            fx2 = rectangle.x + rectangle.width;
            fy2 = rectangle.y + rectangle.height;
            line(image, cv::Point(fx1, fy1), cv::Point(fx2, fy2), Scalar(0, 0, 255), 2, 8);
        }
    }

    return image;
}
Instead of
Mat blurred(image);
you need to do
Mat blurred = image.clone();
Because the first line does not copy the image; it just creates a second header pointing to the same data.
When you blur the image, you are also changing the original.
What you need to do instead is create a real copy of the actual data and operate on this copy.
The OpenCV reference states:
by using a copy constructor or assignment operator, where on the right side it can
be a matrix or expression, see below. Again, as noted in the introduction, matrix assignment is O(1) operation because it only copies the header and increases the reference counter.
Mat::clone() method can be used to get a full (a.k.a. deep) copy of the matrix when you need it.
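A minimal demonstration of the difference (my own sketch, not from the original answer):
#include <iostream>
#include <opencv2/core/core.hpp>

int main()
{
    cv::Mat original = cv::Mat::zeros(2, 2, CV_8UC1);
    cv::Mat header_copy(original);        // shares the same pixel data
    cv::Mat deep_copy = original.clone(); // owns its own pixel data

    header_copy.at<uchar>(0, 0) = 255;
    deep_copy.at<uchar>(1, 1) = 255;

    std::cout << (int)original.at<uchar>(0, 0) << std::endl; // 255: original changed!
    std::cout << (int)original.at<uchar>(1, 1) << std::endl; // 0: original untouched
    return 0;
}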
The first problem is easily solved by doing the entire processing on a copy of the original image. That way, after you get all the points of the square, you can draw the lines on the original image and it will not be blurred.
The second problem, cropping, can be solved by defining a ROI (region of interest) in the original image and then copying it to a new Mat. I've demonstrated that in this answer:
// Setup a Region Of Interest
cv::Rect roi;
roi.x = 50;
roi.y = 10;
roi.width = 400;
roi.height = 450;

// Crop the original image to the area defined by roi
cv::Mat crop = original_image(roi);
cv::imwrite("cropped.png", crop);