HoughCircles gives wrong number of circles and position - iOS - iPhone

I'm using OpenCV to help me detect a coin in an image taken from an iPhone camera. I'm using the HoughCircles method to find them, but the results are less than hopeful.
cv::Mat greyMat;
cv::Mat filteredMat;
cv::vector<cv::Vec3f> circles;

cv::cvtColor(mainImageCV, greyMat, CV_BGR2GRAY);
cv::threshold(greyMat, filteredMat, 100, 255, CV_THRESH_BINARY);

for ( int i = 1; i < 31; i = i + 2 )
{
    // cv::blur( filteredMat, greyMat, cv::Size( i, i ), cv::Point(-1,-1) );
    cv::GaussianBlur(filteredMat, greyMat, cv::Size(i,i), 0);
    // cv::medianBlur(filteredMat, greyMat, i);
    // cv::bilateralFilter(filteredMat, greyMat, i, i*2, i/2);
}

cv::HoughCircles(greyMat, circles, CV_HOUGH_GRADIENT, 1, 50);
NSLog(@"Circles: %ld", circles.size());

for (size_t i = 0; i < circles.size(); i++)
{
    cv::Point center((cvRound(circles[i][0]), cvRound(circles[i][1])));
    int radius = cvRound(circles[i][2]);
    cv::circle(greyMat, center, 3, cv::Scalar(0,255,0));
    cv::circle(greyMat, center, radius, cv::Scalar(0,0,255));
}

[self removeOverViews];
[self.imageView setImage:[self UIImageFromCVMat:greyMat]];
This code reports that I have 15 circles, and they are all positioned along the right side of the image, which has me confused.
I'm new to OpenCV and there are barely any examples for iOS, which has left me desperate.
Any help would be greatly appreciated, thanks in advance!

Your algorithm doesn't make much sense. It seems that you meant to apply cv::GaussianBlur iteratively, but each pass reads from the same source image, so by the time you run HoughCircles it only sees the grey image filtered by the final 29x29 kernel, which is going to blur the crap out of the image. It might make better sense to do something like this to see the best results:
This will show you all images iteratively, which I believe is what you wanted to do in the first place.
// One pass per kernel size: blur, detect, draw, show.
for ( int i = 1; i < 31; i = i + 2 )
{
    cv::GaussianBlur(filteredMat, greyMat, cv::Size(i,i), 0);
    cv::HoughCircles(greyMat, circles, CV_HOUGH_GRADIENT, 1, 50);
    for (size_t j = 0; j < circles.size(); j++)
    {
        cv::Point center(cvRound(circles[j][0]), cvRound(circles[j][1]));
        int radius = cvRound(circles[j][2]);
        cv::circle(greyMat, center, 3, cv::Scalar(0,255,0));
        cv::circle(greyMat, center, radius, cv::Scalar(0,0,255));
    }
    cv::imshow(cv::format("Circles i=%d", i), greyMat);
}
You still need some edges for the HoughCircles implementation to work. It uses a Canny edge detector internally, and if you blur the image that heavily there won't be many edges left for it to find.
I would also suggest you work with cv::bilateralFilter, which blurs but attempts to preserve edges.
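For example, a minimal sketch of that substitution (the 9/75/75 values are just common starting points, not tuned for this image; note that bilateralFilter cannot run in place, so the destination must be a separate Mat):
cv::Mat smoothed;
cv::bilateralFilter(greyMat, smoothed, 9, 75, 75); // d, sigmaColor, sigmaSpace
cv::HoughCircles(smoothed, circles, CV_HOUGH_GRADIENT, 1, 50);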
This might help as well for defining the correct parameters: HoughCircles Parameters to recognise balls

All the above code does is run the same process over and over, so your detector finds the drawn circles again and again. Not the best. It also runs the Gaussian blur over and over, which in my opinion is not the best way either. I can see putting the Gaussian blur in a for loop to make the image more readable, but not HoughCircles. You should also pass all of the parameters to HoughCircles; doing so doubled my recognition rate.
cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, 30, 50, 20, 10, 25);
This is the same C++ format that is documented on the OpenCV website.
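For reference, here is what each positional argument in that call means (the annotations are mine, based on the documented C++ signature; the values are the ones from the call above, not universal defaults):
cv::HoughCircles(gray,              // 8-bit, single-channel input image
                 circles,           // output vector of (x, y, radius)
                 CV_HOUGH_GRADIENT, // the only method available
                 1,                 // dp: inverse ratio of accumulator resolution
                 30,                // minDist between detected centers
                 50,                // param1: upper Canny edge threshold
                 20,                // param2: accumulator threshold (lower = more circles)
                 10,                // minRadius in pixels
                 25);               // maxRadius in pixels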
Here is a link to my iPhone simulator pic: Costco aspirin on my desktop. The app counts the circles in the image and displays the total in a label.
Here is my code; it has a lot of comments included to show what I have tried...and sifted through. Hope this helps.
OpenCV install in Xcode

I know this is an old question, so I'm just putting this here in case someone else makes the same mistake (as did I...):
This line:
cv::Point center((cvRound(circles[i][0]), cvRound(circles[i][1])));
has the brackets messed up: the extra "((" at the beginning turns the two arguments into a comma-operator expression, so the point is initialized with only one value instead of two. It should be:
cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
Hope that helps.

Related

SpriteKit : calculate distance between two texture masks

I have two irregular shapes in SpriteKit, and I want to calculate the vertical distance between the base of a space ship and the (irregular) terrain right below it.
Is there a way to do it?
Thanks!
Place an SKPhysicsBody shaped like a line at the center of your ship, with a width of 1 and the height of your scene. Then, in the didBeginContact method, grab the two contact points. You now know two points, so just use the distance formula (in this case it is just y2 - y1) and you have your answer.
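A minimal sketch of that idea, assuming an SKScene subclass that already has a `ship` property and is its physics world's contact delegate (the category constants and names here are illustrative, not from the question):

static const uint32_t kProbeCategory   = 0x1 << 0;
static const uint32_t kTerrainCategory = 0x1 << 1;

- (void)attachProbeToShip:(SKSpriteNode *)ship
{
    // A 1-point-wide, scene-tall "line" hanging on the ship.
    // (The scene must also be set as self.physicsWorld.contactDelegate.)
    SKNode *probe = [SKNode node];
    probe.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:CGSizeMake(1, self.size.height)];
    probe.physicsBody.affectedByGravity = NO;
    probe.physicsBody.categoryBitMask = kProbeCategory;
    probe.physicsBody.contactTestBitMask = kTerrainCategory;
    probe.physicsBody.collisionBitMask = 0; // report contacts, never bounce
    [ship addChild:probe];
}

- (void)didBeginContact:(SKPhysicsContact *)contact
{
    // contactPoint is where the probe line meets the terrain; the vertical
    // distance is then just y2 - y1, as described above.
    CGFloat distance = CGRectGetMinY(self.ship.frame) - contact.contactPoint.y;
    NSLog(@"Distance to terrain: %f", distance);
}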
I found a different way to solve my problem, but I think KnightOfDragon's is conceptually better (although I did not manage to make it work).
The terrain's texture is essentially a bitmap with opaque and transparent pixels. So I decided to parse these pixels, storing the highest opaque pixel for each column, building a "radar altitude map". Then I just have to calculate the difference between the bottom of the ship and the altitude of the column right beneath its center:
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(terrain.texture.CGImage));
const UInt32 *pixels = (const UInt32 *)CFDataGetBytePtr(imageData);
NSMutableArray *radar = [NSMutableArray new];
for (long col = 0; col < terrain.size.width; col++)
    [radar addObject:@(0)];
for (long ind = 0; ind < (terrain.size.height * terrain.size.width); ind++)
{
    if (pixels[ind] & 0xff000000) // non-transparent pixel
    {
        long line = ind / terrain.size.width;
        long col = ind - (line * terrain.size.width);
        if ([radar[col] integerValue] < terrain.size.height - line)
            radar[col] = @(terrain.size.height - line);
    }
}
CFRelease(imageData); // obtained via a Copy function, so we must release it
This solution could be optimized, of course. It's just the basic idea.
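The lookup step would then be something along these lines (a sketch; `ship` and `terrain` are the nodes from above, and it assumes the texture's pixels map one-to-one onto points of the terrain node's width):
long col = (long)(ship.position.x - CGRectGetMinX(terrain.frame));
CGFloat surfaceY = CGRectGetMinY(terrain.frame) + [radar[col] integerValue];
CGFloat distance = CGRectGetMinY(ship.frame) - surfaceY;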
I've added an image to show the original texture, its representation as opaque/transparent pixels, and a test by putting little white nodes to check where the "surface" was.

Cocos2d / Box2d CCRibbon Collision Detection

I'm developing a game on iOS with cocos2d+box2d as the game engine. I'm trying to add a CCRibbon whose points get populated from touches (which I know how to do), and to get that CCRibbon's shape hooked up to box2d, so that when an object collides with it (due to gravity) it bounces off as it would from a normal body. Would anyone happen to know how to do this, or give me alternatives?
Many thanks,
Alexandre Cassagne
Take each pair of adjacent points and create a thin static rectangular box2d polygon from them, offsetting by a small width to give the line a shape.
for (int i = 0; i < ccribbon.points.length - 1; i++)
{
    int j = i + 1;
    int width = 2;
    Array ar = [];
    ar[0] = new b2Vec2(ccribbon.points[i].x, ccribbon.points[i].y);
    ar[1] = new b2Vec2(ccribbon.points[i].x + width, ccribbon.points[i].y + width);
    ar[2] = new b2Vec2(ccribbon.points[j].x, ccribbon.points[j].y);
    ar[3] = new b2Vec2(ccribbon.points[j].x + width, ccribbon.points[j].y + width);
    // create new static object
    b2Polygon b2p = new b2Polygon();
    b2p.setAsArray(ar);
    // do rest to add it to world etc.
}
Of course, don't copy that code exactly; it's just from what I remember, and I'm also sure it's a combination of C# and ActionScript 3. It's kind of not-so-pseudo code with lots of blanks you'll need to fill in, which is why the comments are there :P.
That's basically how I would do it, though. My experience is only with box2d for Flash.
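For what it's worth, in Box2D's actual C++ API (as used with cocos2d on iOS) the same idea might look roughly like this; `world`, `points`, and `numPoints` are assumed to exist, and the fixed `width` offset is the same rough hack as above:

const float width = 2.0f;
for (int i = 0; i < numPoints - 1; i++)
{
    // One thin quad per ribbon segment, listed counter-clockwise
    // (recent Box2D versions compute the convex hull in Set() anyway).
    b2Vec2 quad[4];
    quad[0].Set(points[i].x,           points[i].y);
    quad[1].Set(points[i+1].x,         points[i+1].y);
    quad[2].Set(points[i+1].x + width, points[i+1].y + width);
    quad[3].Set(points[i].x + width,   points[i].y + width);

    b2PolygonShape shape;
    shape.Set(quad, 4);

    b2BodyDef bodyDef; // bodies are static by default
    b2Body *segment = world->CreateBody(&bodyDef);
    segment->CreateFixture(&shape, 0.0f); // zero density keeps it static
}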
Have you read this? http://www.raywenderlich.com/606/how-to-use-box2d-for-just-collision-detection-with-cocos2d-iphone

What is the best type of marker to detect with OpenCV and how can I find the 2D location near real-time on the iPhone?

I am writing an iPhone app that uses OpenCV to detect the 2D location, in the iPhone camera frame, of some predefined marker (only one). What is the best type of marker? Circle? Square? Color? And what is the fastest way to detect it? In addition, the detection algorithm needs to run in near real-time.
I have tried OpenCV's circle detection, but I got 1 fps (640x480 image):
Mat gray;
vector<Vec3f> circles;

CGPoint findLargestCircle(IplImage *image) {
    Mat img(image);
    cvtColor(img, gray, CV_BGR2GRAY);
    // smooth it, otherwise a lot of false circles may be detected
    GaussianBlur( gray, gray, cv::Size(9, 9), 2, 2 );
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows/4, 200, 100 );
    double radius = -1;
    size_t ind = 0; // was uninitialized, and size_t can never equal -1
    for( size_t i = 0; i < circles.size(); i++ ) {
        if(circles[i][2] > radius) {
            radius = circles[i][2];
            ind = i;
        }
    }
    if(radius < 0) { // no circles found
        return CGPointMake(0, 0);
    } else {
        return CGPointMake(circles[ind][0], circles[ind][1]);
    }
}
Any advice or code would be helpful.
Thanks in advance.
Maybe you can try some specifically colored marker, and then take color filtering into consideration. Alternatively, an object with a specific oriented texture is a good choice too.
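A minimal sketch of the color-filtering idea in OpenCV C++ (the HSV thresholds below are placeholders for a saturated red marker and would need tuning for your lighting); a threshold plus a centroid is far cheaper than HoughCircles, which should help the frame rate:
cv::Mat hsv, mask;
cv::cvtColor(img, hsv, CV_BGR2HSV);
// keep only pixels near the marker's hue
cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);
// the centroid of the surviving pixels gives the marker's 2D position
cv::Moments m = cv::moments(mask, true);
if (m.m00 > 0) {
    CGPoint center = CGPointMake(m.m10 / m.m00, m.m01 / m.m00);
    // use/return center here
}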

Is CGContextAddArc really that slow (compared to a circle drawn with a few lines)?

Folks,
While coding up a few dials and sliders (e.g. a big volume knob one can rotate), I found the standard CGContextAddArc(), used like:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextSetLineWidth(ctx, radius * (KE-KR)+8);
    CGContextSetStrokeColorWithColor(ctx, self.foregroundColor.CGColor);
    .... some more colour/width/etc settings
    ...
    CGContextAddArc(ctx, dx, dy, radius, 0, 2*M_PI, 0);
to be unbelievably slow.
On an iPad, with a handful of filled/stroked circles, I get fewer than about 10 clean [self setNeedsDisplay] updates/second during a drag. A very quick hack with a hand-drawn circle (shown below) was several orders of magnitude faster. The same applies in the Simulator.
Why is this? It seems to be the case for both a normal fill and various gradient fills. What am I doing wrong?
Dw.
// Stupid replacement for CGContextAddArc() which seems to be very slow.
//
void CGContextAddCircle(CGContextRef ctx, float ox, float oy, float radius)
{
    double len = 2 * M_PI * radius;
    double step = 1.8 / len; // over the top :)

    // translating/scaling would be more efficient, etc..
    //
    float x = ox + radius;
    float y = oy;

    // stupid hack - should just do a quadrant and mirror twice.
    //
    CGContextMoveToPoint(ctx, x, y);
    for (double a = step; a < 2.0 * M_PI - step; a += step) {
        x = ox + radius * cos(a);
        y = oy + radius * sin(a);
        CGContextAddLineToPoint(ctx, x, y);
    }
    CGContextClosePath(ctx);
}
The vector drawing operations of Quartz 2D can be slow, which is why it is a good idea to redraw only when needed.
In your case, I would suggest drawing your volume button once, then transforming the UIView or CALayer into which you've drawn the button using a rotational transform. By simply moving, rotating, or scaling a view, you do not trigger an expensive redraw. The content is already cached as a texture, and the GPU can quickly manipulate and composite this rasterized content on top of your other views.
You'll find that avoiding redrawing in this manner will yield much improved performance.
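For instance, a sketch of that pattern (`dialView` here stands for whatever view's drawRect: renders the knob once; the name is illustrative):

- (void)setDialAngle:(CGFloat)radians
{
    // No drawRect: is triggered; the GPU just re-composites the view's
    // cached texture at the new angle.
    self.dialView.transform = CGAffineTransformMakeRotation(radians);
}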
Issue partly (mostly) resolved.
Extensive benchmarking does show that AddArc is indeed slow compared to drawing a complete circle with a vector/straight-line path, for circles in the 100-200 pixel radius range. For partial circles the effect is much less pronounced; I wonder if this is tied to the number of Béziers.
BUT:
The code did not compile as one would read it: M_PI was not the 3.14... value actually expected, but had been redefined to (3.14... * ((EVP_ARM7_ADJUST[(PLTF)])) by an included fixed-point DSP library (scaling it by about x100).
Hence the end-arc double passed to CGContextAddArc was a factor of roughly 256 too large.
And it was the latter which made the issue so noticeable (evidently the underlying implementation just keeps going round and round and round...).
So issue now understood (and will keep an optimized/benchmarked version).
Thanks for the help!

How to move incrementally in a 3D world using glRotatef() and glTranslatef()

I have some 3D models that I render with OpenGL in a 3D space, and I'm experiencing some headaches in moving the 'character' (that is, the camera) with rotations and translations inside this world.
I receive the input (i.e. the coordinates to move to / the degrees to turn) from some external event (imagine user input, or data from a GPS+compass device), and each event is either a rotation OR a translation.
I've written this method to manage these events:
- (void)moveThePlayerPositionTranslatingLat:(double)translatedLat Long:(double)translatedLong andRotating:(double)degrees
{
    [super startDrawingFrame];
    if (degrees != 0)
    {
        glRotatef(degrees, 0, 0, 1);
    }
    if (translatedLat != 0)
    {
        glTranslatef(translatedLat, -translatedLong, 0);
    }
    [self redrawView];
}
Then in redrawView I actually draw the scene and my models. It is something like:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
NSInteger nModels = [models count];
for (NSInteger i = 0; i < nModels; i++)
{
    MD2Object *mdobj = [models objectAtIndex:i];
    glPushMatrix();
    double *deltas = calloc(2, sizeof(double));
    deltas[0] = currentCoords[0] - mdobj.modelPosition[0];
    deltas[1] = currentCoords[1] - mdobj.modelPosition[1];
    glTranslatef(deltas[0], -deltas[1], 0);
    free(deltas);
    [mdobj setupForRenderGL];
    [mdobj renderGL];
    [mdobj cleanupAfterRenderGL];
    glPopMatrix();
}
[super drawView];
The problem appears when translation and rotation events are called one after the other: for example, when I rotate incrementally for some iterations (still around the origin), then translate, and finally rotate again, the last rotation does not occur around the current (translated) position but around the old origin. I'm well aware that this happens when the order of transformations is inverted, but I believed that after drawing, the new center of the world was given by the translated system.
What am I missing? How can I fix this? (any reference to OpenGL will be appreciated too)
I would recommend not doing cumulative transformations in the event handler, but instead internally storing the current values for your transformation and then transforming only once, though I don't know if this is the behaviour that you want.
Pseudocode:
someEvent(lat, long, deg)
{
    currentLat += lat;
    currentLong += long;
    currentDeg += deg;
}

redraw()
{
    glClear();
    glRotatef(currentDeg, 0, 0, 1);
    glTranslatef(currentLat, -currentLong, 0);
    ... // draw stuff
}
It sounds like a couple of things are happening here:
The first is that you need to be aware that rotations occur about the origin. So when you translate and then rotate, you are not rotating about what you think is the origin, but about the new origin, which is T⁻¹(0) (the origin transformed by the inverse of your translation).
Second, you're making things quite a bit harder than you really need to. What you might want to consider instead is to use gluLookAt. You essentially give it a position within your scene, a point in your scene to look at, and an 'up' vector, and it will set up the scene properly. To use it well, keep track of where your camera is located; call that vector p. Also keep a vector n (for normal; it indicates the direction you're looking) and a vector u (your up vector). It will make more advanced features easier if n and u are orthonormal vectors (i.e. they are orthogonal to each other and each have unit length). If you do this, you can compute r = n x u (your 'right' vector), which will be a unit vector orthogonal to the other two. You then 'look at' p+n and provide u as the up vector.
Ideally, your n, u and r have some canonical form, for instance:
n = <0, 0, 1>
u = <0, 1, 0>
r = <1, 0, 0>
You then incrementally accumulate your rotations and apply them to the canonical form of your orientation vectors. You can use either Euler rotations or quaternion rotations to accumulate them (I've come to really appreciate the quaternion approach for a variety of reasons).
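As a sketch of how those pieces fit together (desktop GL/GLU shown for clarity; OpenGL ES on the iPhone has no gluLookAt, so there you would build the equivalent matrix yourself, but the math is identical):
// p = camera position, n = view direction, u = up vector (n, u orthonormal)
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(p.x,       p.y,       p.z,        // eye: where the camera is
          p.x + n.x, p.y + n.y, p.z + n.z,  // center: look at p + n
          u.x,       u.y,       u.z);       // up vector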