Face Detection issue using CIDetector - iPhone

I'm working on an app in which I have to detect the left eye, right eye, and mouth positions.
I have an image view on self.view, and the image view contains a face image; now I want to get the coordinates of both eyes and the mouth. I have seen two or three code samples for this, but they are all roughly the same: in all of them I have to invert my view to match the coordinates, which I don't want to do because my view has some other controls. One more thing: they all use
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image.png"]];
but my image view has a frame, so I can't initialize it with an image. When I do, the face feature's eye and mouth coordinates come out wrong.
I started my code from this sample code, but in it too the view's Y coordinate gets inverted.
Can anyone help me detect the eye and mouth positions on a UIImageView's image without inverting self.view?
Please let me know if my question is not clear enough.

The trick here is to transform the points and bounds returned by CIDetector into your coordinate space instead of flipping your own view. A CIImage has its origin at the bottom left, which you need to transform to the top left:
// CVPixelBufferGetHeight returns size_t
CGFloat height = CVPixelBufferGetHeight(pixelBuffer);

// Flip vertically: mirror in y, then shift back down by the image height.
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -height);

/* Do your face detection */

CGRect faceRect = CGRectApplyAffineTransform(feature.bounds, transform);
CGPoint mouthPoint = CGPointApplyAffineTransform(feature.mouthPosition, transform);
// Same for the eyes, etc.
For your second question about UIImageView: after you have initialized the image view with a frame, you just have to set
imageView.image = yourImage;
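Putting it together, here is a minimal sketch for a still image rather than a pixel buffer (faceImage and imageView are illustrative names, not from the original code):

UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 320)];
imageView.image = faceImage;

CIImage *ciImage = [CIImage imageWithCGImage:faceImage.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

// Flip CIDetector's bottom-left origin into UIKit's top-left origin.
CGFloat height = ciImage.extent.size.height;
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -height);

for (CIFaceFeature *feature in [detector featuresInImage:ciImage]) {
    CGRect faceRect = CGRectApplyAffineTransform(feature.bounds, transform);
    if (feature.hasMouthPosition) {
        // Now in the image's top-left coordinate space; scale to the
        // image view's size if the image and view sizes differ.
        CGPoint mouthPoint = CGPointApplyAffineTransform(feature.mouthPosition, transform);
    }
}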

Worked it out! I edited the class to have a faceContainer that holds all of the face objects (the mouth and eyes); then that container is rotated, and that's all. Obviously this is very crude, but it does work. Here is a link, http://www.jonathanlking.com/download/AppDelegate.m. Then replace the app delegate from the sample code with it.
-- OLD POST --
Have a look at this Apple documentation, and slide 42 onwards from this Apple talk. You should probably also watch the talk, as it has a demo of what you are trying to achieve; it's called "Using Core Image on iOS & Mac OS X" and is here.

Related

How to flip a CGImageRef over y-axis

I'm trying to animate a sprite using only CoreAnimation. It's working so far except I can't figure out how to flip the sprite sheet.
Right now, I have a walking animation, but I want the sprite to face the direction it's walking (because it looks kind of silly walking backwards).
I guess I could add the reversed images to the sprite sheet, but I would rather avoid that because the sheet could get really big once I decide to add more.
Right now, my sprite is extending CALayer and I've set its contents to the CGImageRef which is the sprite sheet:
self.contents = (id) image;
To flip it I tried:
UIImage *tmp = [UIImage imageWithCGImage:image scale:1.0 orientation:UIImageOrientationUpMirrored];
CGImageRef flippedImage = tmp.CGImage;
self.contents = (id) flippedImage;
...and that's not working.
I found other solutions which involve animating, but I don't want to animate the flip. I just want it to happen instantly.
Is there a simple way to do this?
If there's a way to flip the whole CALayer, I'd like to know that too. =]
Thanks!
Rather than flipping your image, you could try applying a transform to your layer object.
Something like:
self.transform = CATransform3DMakeScale(-1, 1, 1);
This may be better for performance too; if the sprite needs to walk in the opposite direction, you can just set your transform back to CATransform3DIdentity rather than allocating a new image and rotating it.
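One caveat, since you want the flip to happen instantly: changing a layer property is implicitly animated by Core Animation, so you may want to disable implicit actions around the change. A minimal sketch (spriteLayer and walkingLeft are illustrative names):

[CATransaction begin];
[CATransaction setDisableActions:YES]; // suppress the implicit animation
spriteLayer.transform = walkingLeft ? CATransform3DMakeScale(-1, 1, 1)
                                    : CATransform3DIdentity;
[CATransaction commit];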

Marking face and eyes on image

I am working on an app that detects a face and marks the eyes and mouth on an image. I have detected the face, eyes, and mouth using CIDetector, but the positions it returns for the eyes and face are relative to the original image, not to the image view on which I have to mark them. For example, for a 720 * 720 image, the face and eye positions it returns are relative to that 720 * 720 size. But the problem is, I have to show the eyes and face annotated on an image view of size 320 * 320. Please advise me how I can map the face position returned by CIDetector to the face position on the image view.
You can solve this by considering the ratio of the image view's size to the image's size.
The following is really simple and should solve your problem.
// 'returnedPoint' is the position of the eye returned by CIDetector
CGFloat ratio = 320.0 / 720.0;
// i.e. CGFloat ratio = yourImageView.frame.size.width / yourImage.size.width;
CGPoint pointOnImageView = CGPointMake(ratio * returnedPoint.x, ratio * returnedPoint.y);
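If the view and image aren't both square, the same idea works with a separate ratio per axis; a sketch with illustrative names:

CGFloat xRatio = yourImageView.frame.size.width / yourImage.size.width;
CGFloat yRatio = yourImageView.frame.size.height / yourImage.size.height;
CGPoint pointOnImageView = CGPointMake(xRatio * returnedPoint.x,
                                       yRatio * returnedPoint.y);

Note that this assumes the image fills the image view exactly; content modes such as aspect fit would add an offset as well.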

Arc's position on circle

I've got a circle a few pixels wide (the circle is UIBezierPath based). I have to put an arc (which is basically a UIView subclass with custom drawing) on the circle so the arc covers the circle. I know how to calculate the arc's rotation and position, but something is not right. I know the reason: it's because the center property refers to the center of the UIView; if it were the center of the arc, everything would be great. But it's not.
I also know how to solve that: I have to calculate a smaller radius to place the arcs on. But how? It seems easy, but because the arc lives in a rectangular UIView it gets a bit harder. I'll show you some images so you can see the problem.
The easiest way to do this is to change the anchor point of each arc view's layer. You can read about the anchor point here if you don't already know about it.
You will need to add the QuartzCore framework to your build target and add #import <QuartzCore/QuartzCore.h>.
CGRect circleBounds = circleView.bounds;

// Move each arc view's anchor point to the edge that touches the circle,
// then place that anchor on the corresponding edge of the circle's bounds.
topArcView.layer.anchorPoint = CGPointMake(.5, 0);
topArcView.layer.position = CGPointMake(CGRectGetMidX(circleBounds), 0);
bottomArcView.layer.anchorPoint = CGPointMake(.5, 1);
bottomArcView.layer.position = CGPointMake(CGRectGetMidX(circleBounds), CGRectGetMaxY(circleBounds));
leftArcView.layer.anchorPoint = CGPointMake(0, .5);
leftArcView.layer.position = CGPointMake(circleBounds.origin.x, CGRectGetMidY(circleBounds));
rightArcView.layer.anchorPoint = CGPointMake(1, .5);
rightArcView.layer.position = CGPointMake(CGRectGetMaxX(circleBounds), CGRectGetMidY(circleBounds));
Perhaps you could make the UIView subclass the same size and center point as the circle's rect. Then draw the arc in the correct location.
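If you go that route, a minimal sketch of the drawing code might look like this (the start/end angles and stroke width are illustrative):

- (void)drawRect:(CGRect)rect
{
    CGFloat lineWidth = 4;
    CGPoint center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    // Inset the radius so the stroke stays inside the view's bounds.
    CGFloat radius = MIN(self.bounds.size.width, self.bounds.size.height) / 2 - lineWidth / 2;
    UIBezierPath *arc = [UIBezierPath bezierPathWithArcCenter:center
                                                       radius:radius
                                                   startAngle:0
                                                     endAngle:M_PI_2
                                                    clockwise:YES];
    arc.lineWidth = lineWidth;
    [[UIColor redColor] setStroke];
    [arc stroke];
}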

What does "invalid frame" mean exactley after a transform is performed?

I have a simple app that allows a user to drag and scale an image on the screen using the pan and pinch gesture recognizers. It's common knowledge from the Apple documentation that after you perform a transform, as I am doing in my scale method, the frame becomes "invalid". Since I continually use the frame of my image view to calculate the scale factor and translation distance even after I have performed a transform, and everything continues to work fine, I'm wondering: what about the frame has become invalid?
I ask this because now I am trying to crop the image that has been moved around and scaled, and I'm using its frame value to determine the crop area, and I'm getting funny results. I'm wondering if the invalid frame might be the problem, or if it could be something else. Here is my code for reference:
CGRect croppedRect = CGRectMake(fabsf(self.oldImageView.frame.origin.x), fabsf(self.oldImageView.frame.origin.y), 316, 316);
CGImageRef croppedImageRef = CGImageCreateWithImageInRect(self.oldImageView.image.CGImage, croppedRect);
UIImage *croppedImage = [UIImage imageWithCGImage: croppedImageRef];
CGImageRelease(croppedImageRef);
UIImageView *newImageView = [[UIImageView alloc] initWithImage: croppedImage];
Invalid frame = funny results.
I think "invalid" means it is not guaranteed to have the right value, but it might. You've been getting away with using the frame by luck.
If you have a transformed view, and you need the frame, you can use this code:
CGRect frame = [[view superview] convertRect:view.bounds fromView:view];
As long as it's not rotated. If it's rotated, the "frame" can't be represented as a CGRect, which can only represent rectangles aligned with the axes. You can use convertPoint to convert each of the corners of the bounds, if that information is of any use to your program.
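For the rotated case, a sketch of converting the four corners (view stands for whatever view you're measuring):

CGRect b = view.bounds;
UIView *superview = [view superview];
// The corners of the transformed view, in the superview's coordinate space.
CGPoint topLeft     = [superview convertPoint:CGPointMake(CGRectGetMinX(b), CGRectGetMinY(b)) fromView:view];
CGPoint topRight    = [superview convertPoint:CGPointMake(CGRectGetMaxX(b), CGRectGetMinY(b)) fromView:view];
CGPoint bottomLeft  = [superview convertPoint:CGPointMake(CGRectGetMinX(b), CGRectGetMaxY(b)) fromView:view];
CGPoint bottomRight = [superview convertPoint:CGPointMake(CGRectGetMaxX(b), CGRectGetMaxY(b)) fromView:view];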
Or, to set the frame if it's not rotated:
view.bounds = [view convertRect:newFrame fromView:[view superview]];
You shouldn't be using the frame anyway in that case. Frame is defined in the superview's coordinate system. You specify the argument to CGImageCreateWithImageInRect in the image's coordinate system, which I suspect is the same as the image view's coordinate system, even when transformed. You should use bounds in place of frame in that code, or (0.0, 0.0) as the origin. I can't quite tell what you're trying to achieve.
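Applied to the code above, the suggestion is roughly this (a sketch, assuming the crop rect should be anchored at the image's own origin rather than at the view's position in its superview):

// Crop in the image's coordinate space, not the superview's.
CGRect croppedRect = CGRectMake(self.oldImageView.bounds.origin.x,
                                self.oldImageView.bounds.origin.y,
                                316, 316);
CGImageRef croppedImageRef = CGImageCreateWithImageInRect(self.oldImageView.image.CGImage, croppedRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedImageRef];
CGImageRelease(croppedImageRef);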

How to apply an image view frame with inclined coordinates

Hi all. Up to now I know how to make a rectangle with CGRectMake, and I use that rect as an image view's frame, like
UIImageView *someImage = [[UIImageView alloc] initWithFrame:someRect];
Now I can add an image with the frame someRect. My problem is when the coordinates are like
(rectangleFirstX, rectangleFirstY) = (10, 10)
(rectangleLastX, rectangleLastY) = (17, 7)
How can I give a frame to the UIImageView? This is like an inclined rectangle. Can anyone suggest how to apply a frame through the iOS library for this kind of coordinates? Thanks in advance.
Your example isn't very clear, because a rectangle with opposite corners at (10, 10) and (17, 7) can be in any one of a myriad of different orientations, including one perfectly aligned along the x and y axes.
What you can certainly do is create a UIImageView of the desired size and location and then rotate it by using one of many techniques, including animation methods.
[UIView animateWithDuration:0.1 animations:^
{
    your_UIImageView_here.transform = CGAffineTransformMakeRotation((M_PI / 180.0) * degrees);
}];
You can hide the UIImageView until the rotation is done and then show it.
If your question is about how to use the coordinates you provided to arrive at an angle, I'd suggest that more data is needed, because it is impossible to pick one of the billions of possible rectangles with corners at those two points without more information. Once you have more data, it is pretty basic trigonometry to figure out the angle to feed into the rotation.
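For example, if you knew the two points were the endpoints of one edge of the rectangle, the angle would follow directly (a sketch; p1, p2, and someImage are illustrative names):

CGPoint p1 = CGPointMake(10, 10);
CGPoint p2 = CGPointMake(17, 7);
// Angle of the edge from p1 to p2, in radians.
CGFloat angle = atan2(p2.y - p1.y, p2.x - p1.x);
someImage.transform = CGAffineTransformMakeRotation(angle);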