Using HoughCircles to detect and measure pupil and iris - iPhone

I'm trying to use OpenCV, more specifically its HoughCircles, to detect and measure the pupil and iris. I've been playing with some of the variables in the function, because it either returns 0 circles or an excessive number. Below are the code and test image I'm using.
Code for measuring iris:
eye1 = [self increaseIn:eye1 Contrast:2 andBrightness:0];
cv::cvtColor(eye1, eye1, CV_RGBA2RGB);
cv::bilateralFilter(eye1, eye2, 75, 100, 100);
cv::vector<cv::Vec3f> circles;
cv::cvtColor(eye2, eye1, CV_RGBA2GRAY);
cv::morphologyEx(eye1, eye1, 4, cv::getStructuringElement(cv::MORPH_RECT,cv::Size(3, 3)));
cv::threshold(eye1, eye1, 0, 255, cv::THRESH_OTSU);
eye1 = [self circleCutOut:eye1 Size:50];
cv::GaussianBlur(eye1, eye1, cv::Size(7, 7), 0);
cv::HoughCircles(eye1, circles, CV_HOUGH_GRADIENT, 2, eye1.rows/4);
Code for measuring pupil:
eye1 = [self increaseBlackPupil:eye1];
cv::Mat eye2 = cv::Mat::zeros(eye1.rows, eye1.cols, CV_8UC3);
eye1 = [self increaseIn:eye1 Contrast:2 andBrightness:0];
cv::cvtColor(eye1, eye1, CV_RGBA2RGB);
cv::bilateralFilter(eye1, eye2, 75, 100, 100);
cv::threshold(eye2, eye1, 25, 255, CV_THRESH_BINARY);
cv::SimpleBlobDetector::Params params;
params.minDistBetweenBlobs = 75.0f;
params.filterByInertia = false;
params.filterByConvexity = false;
params.filterByCircularity = false;
params.filterByArea = true;
params.minArea = 50;
params.maxArea = 500;
cv::Ptr<cv::FeatureDetector> blob_detector = new cv::SimpleBlobDetector(params);
blob_detector->create("SimpleBlob");
cv::vector<cv::KeyPoint> keypoints;
blob_detector->detect(eye1, keypoints);
I know the image is rough; I've also been trying to find a way to clean it up and make the edges clearer.
So, to put my question plainly: what can I do, either by adjusting the parameters of HoughCircles or by changing the images, to get the iris and pupil detected?
Thanks

Ok, without experimenting too much, what I understand is that you've only applied a bilateral filter to the image before using the Hough circle detector.
In my opinion, you need to include a thresholding step in the process.
I took the sample image you provided in the post and ran it through the following steps:
1) conversion to greyscale
2) morphological gradient
3) thresholding
4) Hough circle detection
After the thresholding step, I got the following image (for the left eye only):
Here's the code:
greyscale:
cv::cvtColor(im_rgb, im_rgb, CV_RGB2GRAY);
morphology:
cv::morphologyEx(im_rgb, im_rgb, cv::MORPH_GRADIENT, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(size, size)));
thresholding:
cv::threshold(im_rgb, im_rgb, low, high, cv::THRESH_OTSU);
Hough circle detection:
cv::vector<cv::Vec3f> circles;
cv::HoughCircles(im_rgb, circles, CV_HOUGH_GRADIENT, 2, im_rgb.rows/4);
Now when I print:
NSLog(@"Found %ld circles", circles.size());
I get:
"Found 1 cirlces"
Hope this helps.

Related

How to separate human body from background in an image

I have been trying to separate the human body in an image from the background, but all the methods I have seen don't seem to work very well for me.
I have collected the following images:
The image of the background
The image of the background with the person in it.
Now I want to cut out the person from the background.
I tried subtracting the image of the background from the image with the person using res = cv2.subtract(background, foreground) (I am new to image processing).
Background subtraction methods in OpenCV like cv2.BackgroundSubtractorMOG2() only work with videos or image sequences, and the contour detection methods I have seen are only for solid shapes.
And grabCut doesn't quite work well for me because I would like to automate the process.
Given the images I have (Image of the background and image of the background with the person in it), is there a method of cutting the person out from the background?
I wouldn't recommend a neural net for this problem. That's a lot of work for something like this where you have a known background. I'll walk through the steps I took to do the background segmentation on this image.
First I shifted into the LAB color space to get some light-resistant channels to work with. I did a simple subtraction of foreground and background and combined the a and b channels.
You can see that there is still significant color change in the background, even with a less light-sensitive color channel. This is likely due to the auto white balance on the camera: you can see that some of the background colors change when you step into view.
The next step I took was thresholding off of this image. The optimal threshold values may not always be the same; you'll have to adjust to a range that works well for your set of photos.
I used OpenCV's findContours function to get the segmentation points of each blob, and I filtered the available contours by size. I set a size threshold of 15000. For reference, the person in the image had a pixel area of 27551.
Then it's just a matter of cropping out the contour.
This technique works with any good thresholding strategy. If you can improve the consistency of your pictures by turning off auto settings and ensuring good contrast of the person against the wall, then you can use simpler thresholding strategies and get good results.
Just for fun:
Edit:
I forgot to add in the code I used:
import cv2
import numpy as np

# rescale values
def rescale(img, orig, new):
    img = np.divide(img, orig);
    img = np.multiply(img, new);
    img = img.astype(np.uint8);
    return img;

# get abs(diff) of all hue values
def diff(bg, fg):
    # do both sides
    lh = bg - fg;
    rh = fg - bg;
    # pick minimum # this works because of uint wrapping
    low = np.minimum(lh, rh);
    return low;

# load image
bg = cv2.imread("back.jpg");
fg = cv2.imread("person.jpg");
fg_original = fg.copy();
# blur
bg = cv2.blur(bg,(5,5));
fg = cv2.blur(fg,(5,5));
# convert to lab
bg_lab = cv2.cvtColor(bg, cv2.COLOR_BGR2LAB);
fg_lab = cv2.cvtColor(fg, cv2.COLOR_BGR2LAB);
bl, ba, bb = cv2.split(bg_lab);
fl, fa, fb = cv2.split(fg_lab);
# subtract
d_b = diff(bb, fb);
d_a = diff(ba, fa);
# rescale for contrast
d_b = rescale(d_b, np.max(d_b), 255);
d_a = rescale(d_a, np.max(d_a), 255);
# combine
combined = np.maximum(d_b, d_a);
# threshold
# check your threshold range, this will work for
# this image, but may not work for others
# in general: having a strong contrast with the wall makes this easier
thresh = cv2.inRange(combined, 70, 255);
# opening and closing
kernel = np.ones((3,3), np.uint8);
# closing
thresh = cv2.dilate(thresh, kernel, iterations = 2);
thresh = cv2.erode(thresh, kernel, iterations = 2);
# opening
thresh = cv2.erode(thresh, kernel, iterations = 2);
thresh = cv2.dilate(thresh, kernel, iterations = 3);
# contours (note: OpenCV 3.x signature; OpenCV 4.x returns only (contours, hierarchy))
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
# filter contours by size
big_cntrs = [];
marked = fg_original.copy();
for contour in contours:
    area = cv2.contourArea(contour);
    if area > 15000:
        print(area);
        big_cntrs.append(contour);
cv2.drawContours(marked, big_cntrs, -1, (0, 255, 0), 3);
# create a mask of the contoured image
mask = np.zeros_like(fb);
mask = cv2.drawContours(mask, big_cntrs, -1, 255, -1);
# erode mask slightly (boundary pixels on wall get color shifted)
mask = cv2.erode(mask, kernel, iterations = 1);
# crop out
out = np.zeros_like(fg_original) # Extract out the object and place into output image
out[mask == 255] = fg_original[mask == 255];
# show
cv2.imshow("combined", combined);
cv2.imshow("thresh", thresh);
cv2.imshow("marked", marked);
# cv2.imshow("masked", mask);
cv2.imshow("out", out);
cv2.waitKey(0);
Since it is very easy to find datasets containing many human bodies, I suggest you implement neural-network segmentation techniques to extract the human body cleanly. Please check this link for a similar example.

Circle detection via Hough Transform

I am writing MATLAB code that takes in a photo and detects a circular object. After applying some filters, I got the image below.
To detect the circular object (it is not a perfect circle), I tried applying the Hough Transform with different values of radius and threshold, but it couldn't detect it properly. Why does this happen? Is it because of the shape of the object or the background of the image?
Also, is it possible to detect the same object in the following image using the Hough Transform?
The edge of the circular object is visible to the human eye, but I am not sure the background can be completely eliminated from the image via the Hough Transform.
You can use imfindcircles in the Image Processing Toolbox. Using morphology to fill in the circle and cranking up sensitivity may help:
im = imread('pattern.jpg');
im2 = rgb2gray(im(100:end-100, 100:end-100, :)); % crop the border, convert to greyscale
im3 = im2bw(im2, 0.1);                           % binarize with a low threshold
im4 = imclose(im3, strel('disk', 4, 4));         % close small gaps in the outline
im5 = imfill(im4, 'holes');                      % fill in the circle
imshow(im5);
[centers, radii] = imfindcircles(im5, [180, 200], 'Sensitivity', .99);
viscircles(centers, radii);

HoughCircles gives wrong number of circles and position - iOS

I'm using OpenCV to help me detect a coin in an image taken from an iPhone camera. I'm using the HoughCircles method to find it, but the results are less than hoped for.
cv::Mat greyMat;
cv::Mat filteredMat;
cv::vector<cv::Vec3f> circles;
cv::cvtColor(mainImageCV, greyMat, CV_BGR2GRAY);
cv::threshold(greyMat, filteredMat, 100, 255, CV_THRESH_BINARY);
for ( int i = 1; i < 31; i = i + 2 )
{
    // cv::blur( filteredMat, greyMat, cv::Size( i, i ), cv::Point(-1,-1) );
    cv::GaussianBlur(filteredMat, greyMat, cv::Size(i,i), 0);
    // cv::medianBlur(filteredMat, greyMat, i);
    // cv::bilateralFilter(filteredMat, greyMat, i, i*2, i/2);
}
cv::HoughCircles(greyMat, circles, CV_HOUGH_GRADIENT, 1, 50);
NSLog(@"Circles: %ld", circles.size());
for(size_t i = 0; i < circles.size(); i++)
{
    cv::Point center((cvRound(circles[i][0]), cvRound(circles[i][1])));
    int radius = cvRound(circles[i][2]);
    cv::circle(greyMat, center, 3, cv::Scalar(0,255,0));
    cv::circle(greyMat, center, radius, cv::Scalar(0,0,255));
}
[self removeOverViews];
[self.imageView setImage: [self UIImageFromCVMat:greyMat]];
This segment of code reports that I have 15 circles, and they are all positioned along the right side of the image, which has me confused.
I'm new to OpenCV, and there are barely any examples for iOS, which has left me desperate.
Any help would be greatly appreciated, thanks in advance!
Your algorithm doesn't make much sense. It seems that you are applying cv::GaussianBlur iteratively, but when you then run HoughCircles, it only sees the grey image produced by the last pass of GaussianBlur with a 29x29 kernel, which is going to blur the crap out of the image. It might make better sense to do something like this to see the best results:
This will show you all images iteratively, which I believe is what you wanted to do in the first place.
// Sketch: show the detection results at each blur kernel size.
for ( int i = 1; i < 31; i = i + 2 )
{
    cv::GaussianBlur(filteredMat, greyMat, cv::Size(i,i), 0);
    cv::HoughCircles(greyMat, circles, CV_HOUGH_GRADIENT, 1, 50);
    for(size_t j = 0; j < circles.size(); j++)
    {
        cv::Point center(cvRound(circles[j][0]), cvRound(circles[j][1]));
        int radius = cvRound(circles[j][2]);
        // note: greyMat is single-channel, so these draw as grey levels, not color
        cv::circle(greyMat, center, 3, cv::Scalar(0,255,0));
        cv::circle(greyMat, center, radius, cv::Scalar(0,0,255));
    }
    cv::imshow("Circles i " + std::to_string(i), greyMat);
}
You still need some edges for the HoughCircles implementation to work. It uses a Canny edge detector internally, and if you blur your image that much, there won't be any edges left for it to find.
Also, I would suggest you work with bilateralFilter, which blurs but attempts to keep some edges.
This might help as well for defining the correct parameters: HoughCircles Parameters to recognise balls
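To illustrate, here is a minimal sketch of that suggestion, with the iterative blur replaced by a single bilateral filter; the filter and HoughCircles values below are assumptions to tune, not known-good parameters:
cv::Mat gray, smoothed;
cv::cvtColor(mainImageCV, gray, CV_BGR2GRAY);
// d = 9 with moderate sigmas smooths noise while keeping the coin edges
cv::bilateralFilter(gray, smoothed, 9, 75, 75);
cv::vector<cv::Vec3f> circles;
cv::HoughCircles(smoothed, circles, CV_HOUGH_GRADIENT, 1,
                 smoothed.rows/8, // minDist between centers
                 100, 30,         // Canny high threshold, accumulator threshold
                 10, 100);        // assumed min and max radius in pixels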
All the above code does is run the same process over and over, so your circle detection ends up detecting the circles that were drawn in earlier passes. Not the best. It also applies the Gaussian blur over and over, which in my opinion is not the best way either. I can see running the Gaussian blur in a for loop to make the image more readable, but not HoughCircles. You also need to include all the parameters in HoughCircles; it doubled my recognition rate when I used them all.
cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                 1,    // dp: inverse ratio of accumulator resolution to image resolution
                 30,   // minDist: minimum distance between detected centers
                 50,   // param1: upper threshold for the internal Canny edge detector
                 20,   // param2: accumulator threshold (lower = more, possibly false, circles)
                 10,   // minRadius
                 25);  // maxRadius
This is the same format that is documented on the OpenCV website (the C++ interface).
Here is a link to my iPhone simulator pic: Costco aspirin on my desktop. The app counts the circles in the image and displays the total in a label.
Here is my code; it has a lot of comments included to show what I have tried... and sifted through. Hope this helps.
OpenCV install in xcode
I know this is an old question, so I'm just putting this here in case someone else makes the same mistake (as I did...):
This line:
cv::Point center((cvRound(circles[i][0]), cvRound(circles[i][1])));
has the brackets messed up: the double "((" at the beginning causes the point to be initialized with only one parameter instead of two. It should be:
cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
Hope that helps.

Color replacement in image for iPhone application

Basically I want to implement a color-replacement feature for my paint application.
Below are the original and the expected output.
Original:
After changing the wall color selected by the user, along with some threshold for replacement:
I have tried two approaches but could not get them working as expected.
Approach 1:
Queue-based Flood Fill algorithm for color replacement
but the output I got (below) was terribly slow to produce, and the wall shadow was not preserved.
Approach 2:
So I looked for another option and found the post below on SO:
How to change a particular color in an image?
but I could not understand the logic, and I am not sure about my code implementation from step 3 onwards.
Below is my code for each step, with my understanding of it.
1) Convert the image from RGB to HSV using cvCvtColor (we only want to change the hue).
IplImage *mainImage=[self CreateIplImageFromUIImage:[UIImage imageNamed:@"original.jpg"]];
IplImage *hsvImage = cvCreateImage(cvGetSize(mainImage), IPL_DEPTH_8U, 3);
IplImage *threshImage = cvCreateImage(cvGetSize(mainImage), IPL_DEPTH_8U, 3);
cvCvtColor(mainImage,hsvImage,CV_RGB2HSV);
2) Isolate a color with cvThreshold specifying a certain tolerance (you want a range of colors, not one flat color).
cvThreshold(hsvImage, threshImage, 0, 100, CV_THRESH_BINARY);
3) Discard areas of color below a minimum size using a blob detection library like cvBlobsLib. This will get rid of dots of the similar color in the scene.
Do I need to specify the original image or the thresholded image here?
CBlobResult blobs = CBlobResult(threshImage, NULL, 0);
blobs.Filter( blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 10);
4) Mask the color with cvInRangeS and use the resulting mask to apply the new hue.
I am not sure how this function helps with color replacement, and I do not understand the arguments to be provided.
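For what it's worth, here is a hedged sketch of what cvInRangeS does (the bounds below are placeholder HSV values, and hChannel stands for a hypothetical single-channel hue plane obtained from cvSplit): it writes 255 into the mask wherever every channel of a pixel falls within [lower, upper], and 0 elsewhere, so the mask can then restrict where the new hue is written.
IplImage *mask = cvCreateImage(cvGetSize(hsvImage), IPL_DEPTH_8U, 1);
// placeholder bounds: H in [40,80], S in [50,255], V in [50,255]
cvInRangeS(hsvImage, cvScalar(40, 50, 50, 0), cvScalar(80, 255, 255, 0), mask);
cvSet(hChannel, cvScalar(45, 0, 0, 0), mask); // write the new hue only under the mask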
5) cvMerge the new image with the new hue with an image composed by the saturation and brightness channels that you saved in step one.
I understand that cvMerge will merge the three channels H, S, and V, but how can I use the output of the three steps above?
So basically I am stuck with the OpenCV implementation. If possible, please guide me on the OpenCV implementation, or suggest any other solution to try out.
Finally, I was able to achieve some of the desired output using the JavaCV code below, and the same has been ported to OpenCV too.
This solution has two problems:
1) There is no edge detection; I think I can achieve that using contours.
2) The replaced color has a flat hue and saturation, which should instead be set based on each source pixel's hue/saturation difference, but I am not sure how to achieve that. Maybe use cvAddS instead of cvSet (see the sketch after the getScaler method below).
IplImage image = cvLoadImage("sample.png");
CvSize cvSize = cvGetSize(image);
IplImage hsvImage = cvCreateImage(cvSize, image.depth(), image.nChannels());
cvCvtColor(image, hsvImage, CV_BGR2HSV); // convert to HSV before splitting the channels
IplImage hChannel = cvCreateImage(cvSize, image.depth(), 1);
IplImage sChannel = cvCreateImage(cvSize, image.depth(), 1);
IplImage vChannel = cvCreateImage(cvSize, image.depth(), 1);
cvSplit(hsvImage, hChannel, sChannel, vChannel, null);
IplImage cvInRange = cvCreateImage(cvSize, image.depth(), 1);
CvScalar source = new CvScalar(72/2, 0.07*255, 66, 0); // source color to replace
CvScalar from = getScaler(source, false);
CvScalar to = getScaler(source, true);
cvInRangeS(hsvImage, from, to, cvInRange);
IplImage dest = cvCreateImage(cvSize, image.depth(), image.nChannels());
IplImage temp = cvCreateImage(cvSize, IPL_DEPTH_8U, 2);
cvMerge(hChannel, sChannel, null, null, temp);
cvSet(temp, new CvScalar(45, 255, 0, 0), cvInRange); // destination hue and sat
cvSplit(temp, hChannel, sChannel, null, null);
cvMerge(hChannel, sChannel, vChannel, null, dest);
cvCvtColor(dest, dest, CV_HSV2BGR);
cvSaveImage("output.png", dest);
Method for calculating the threshold range:
CvScalar getScaler(CvScalar seed, boolean plus){
    if(plus){
        return CV_RGB(seed.red()+(seed.red()*thresold), seed.green()+(seed.green()*thresold), seed.blue()+(seed.blue()*thresold));
    }else{
        return CV_RGB(seed.red()-(seed.red()*thresold), seed.green()-(seed.green()*thresold), seed.blue()-(seed.blue()*thresold));
    }
}
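Regarding the second problem above, here is a hedged sketch of the cvAddS idea (hueShift is an assumed offset, not a value from the original post): instead of flattening the hue to a single value with cvSet, shift each masked pixel's hue by a constant, which preserves per-pixel variation.
// Hypothetical alternative to the cvSet call above: add a constant hue offset
// only where cvInRange matched, keeping the per-pixel shading variation.
// Note: OpenCV hue runs 0..180, and cvAddS saturates rather than wraps.
int hueShift = 20; // assumed offset; tune until the wall reaches the target color
cvAddS(hChannel, cvScalar(hueShift, 0, 0, 0), hChannel, cvInRange);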
I know this answer will be useful to someone someday.
Try this out in your viewDidLoad() override method for iOS.
image in the code snippet below should come from your UIImageView.
The seed is also fixed; you can make it dynamic based on a user tap event.
cv::Mat mask = cv::Mat::zeros(image.rows + 2, image.cols + 2, CV_8U);
imageView.image = [self UIImageFromCVMat:image];
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
try {
    if(seed.x > 0 && seed.y > 0){
        cv::floodFill(image, mask, seed, cv::Scalar(50, 155, 20), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
        cv::floodFill(image, mask, seed2, cv::Scalar(50, 155, 20), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
        cv::floodFill(image, mask, seed3, cv::Scalar(50, 155, 0), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
    }
} catch (const cv::Exception &ex) {
    // ignore flood fill failures (e.g. seed outside the image)
}
cv::cvtColor(image, image, cv::COLOR_RGB2BGR);
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
self.imageView.image = [self UIImageFromCVMat:image];

What is the best type of marker to detect with OpenCV and how can I find the 2D location near real-time on the iPhone?

I am writing an iPhone app to use OpenCV to detect the 2D location in the iPhone camera of some sort of predefined marker (only one). What is the best type of marker? Circle? Square? Color? What is the fastest way to detect that marker? In addition, the detection algorithm needs to run near real-time.
I have tried OpenCV's circle detection, but I got 1 fps (640x480 image):
Mat gray;
vector<Vec3f> circles;

CGPoint findLargestCircle(IplImage *image) {
    Mat img(image);
    cvtColor(img, gray, CV_BGR2GRAY);
    // smooth it, otherwise a lot of false circles may be detected
    GaussianBlur( gray, gray, cv::Size(9, 9), 2, 2 );
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                 2, gray.rows/4, 200, 100 );
    double radius = -1;
    long ind = -1; // initialized so the "no circle found" check below works
    for( size_t i = 0; i < circles.size(); i++ ) {
        if(circles[i][2] > radius) {
            radius = circles[i][2];
            ind = i;
        }
    }
    if(ind == -1) {
        return CGPointMake(0, 0);
    }else {
        return CGPointMake(circles[ind][0], circles[ind][1]);
    }
}
Any advice or code would be helpful.
Thanks in advance.
Maybe you can try a specific colored marker and take color filtering into consideration. Alternatively, an object with a specific oriented texture is a good choice too.
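To illustrate the color-filtering suggestion, here is a hedged sketch (findColorMarker is a hypothetical helper, and the HSV bounds assume a bright green marker); HSV thresholding plus image moments is far cheaper per frame than HoughCircles:
// Find a colored marker by HSV thresholding and take the centroid of the mask.
// The hue range below assumes a bright green marker -- adjust for your marker.
CGPoint findColorMarker(const cv::Mat &bgr) {
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, CV_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(45, 100, 100), cv::Scalar(75, 255, 255), mask);
    cv::Moments m = cv::moments(mask, true); // treat mask as a binary image
    if (m.m00 <= 0) {
        return CGPointMake(0, 0); // marker not found
    }
    return CGPointMake(m.m10 / m.m00, m.m01 / m.m00); // centroid of the marker
}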