iOS face detection transformation - iPhone

I have followed a tutorial to detect a face within an image, and it works. It draws a red rectangle around the face by creating a UIView *faceView. Now I am trying to obtain the coordinates of the detected face, but the results returned are slightly off on the y-axis. How can I fix this? Where am I going wrong?
This is what i have attempted :
CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                              imageView.bounds.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                              faceFeature.bounds.size.width,
                              faceFeature.bounds.size.height);
This is the source code for the detection :
- (void)markFaces:(UIImageView *)facePicture
{
    // draw a CI image with the previously loaded face detection picture
    CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];
    // create a face detector - since speed is not an issue we'll use a high accuracy detector
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    // create an array containing all the detected faces from the detector
    NSArray* features = [detector featuresInImage:image];
    // we'll iterate through every detected face. CIFaceFeature provides us
    // with the width for the entire face, and the coordinates of each eye
    // and the mouth if detected. Also provided are BOOLs for the eyes and
    // mouth so we can check whether they exist.
    for(CIFaceFeature* faceFeature in features)
    {
        // get the width of the face
        CGFloat faceWidth = faceFeature.bounds.size.width;
        // create a UIView using the bounds of the face
        UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        // add a border around the newly created UIView
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                                      imageView.bounds.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                                      faceFeature.bounds.size.width,
                                      faceFeature.bounds.size.height);
        NSLog(@"My view frame: %@", NSStringFromCGRect(newBounds));
        [self.view addSubview:faceView];
        if(faceFeature.hasLeftEyePosition)
        {
        }
        if(faceFeature.hasRightEyePosition)
        {
        }
        if(faceFeature.hasMouthPosition)
        {
        }
    }
}
- (void)faceDetector
{
    // Load the picture for face detection
    UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"jolie.jpg"]];
    // Draw the face detection image
    [self.view addSubview:image];
    // flip image on y-axis to match coordinate system used by Core Image
    [image setTransform:CGAffineTransformMakeScale(1, -1)];
    // flip the entire window to make everything right side up
    [self.view setTransform:CGAffineTransformMakeScale(1, -1)];
    // Execute the markFaces method in the background
    [self performSelectorInBackground:@selector(markFaces:) withObject:image];
}

The Core Image coordinate system and the UIKit coordinate system are quite different. CIFaceFeature reports its bounds in Core Image coordinates, so you need to convert them into UIKit coordinates:
// The Core Image coordinate system origin is at the bottom-left corner; UIKit's is at the top-left corner.
// So we need to translate feature positions before drawing them to screen.
// In order to do so we make an affine transform.
// **Note**
// It's better to convert Core Image coordinates to UIKit coordinates and
// not the other way around, because doing so could affect other drawings —
// i.e. in the original sample project you see the image at the bottom. Isn't that weird?
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -_pickerImageView.bounds.size.height);
for(CIFaceFeature* faceFeature in features)
{
    // Translate Core Image coordinates to UIKit coordinates
    const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
    // create a UIView using the bounds of the face
    UIView* faceView = [[UIView alloc] initWithFrame:faceRect];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    // get the width of the face
    CGFloat faceWidth = faceFeature.bounds.size.width;
    // add the new view to create a box around the face
    [_pickerImageView addSubview:faceView];
    if(faceFeature.hasLeftEyePosition)
    {
        // Get the left eye position: translate Core Image coordinates to UIKit coordinates
        const CGPoint leftEyePos = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
        // Note1:
        // If you want to add this to the faceView instead of the imageView, we need to translate its
        // coordinates a bit more: {-x, -y}, in other words {-faceFeature.bounds.origin.x, -faceFeature.bounds.origin.y}.
        // You could do the same for the other eye and the mouth too.
        // Create a UIView to represent the left eye; its size depends on the width of the face.
        UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f /*- faceFeature.bounds.origin.x*/, // See Note1
                                                                       leftEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f /*- faceFeature.bounds.origin.y*/, // See Note1
                                                                       faceWidth*EYE_SIZE_RATE,
                                                                       faceWidth*EYE_SIZE_RATE)];
        leftEyeView.backgroundColor = [[UIColor magentaColor] colorWithAlphaComponent:0.3];
        leftEyeView.layer.cornerRadius = faceWidth*EYE_SIZE_RATE*0.5;
        //[faceView addSubview:leftEyeView]; // See Note1
        [_pickerImageView addSubview:leftEyeView];
    }
}
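Applied to the question's own code, the fix is the same two-line transform, just built from imageView instead of _pickerImageView (a sketch; it assumes imageView is the UIImageView the picture is displayed in):

// Flip Core Image coordinates into UIKit coordinates for the question's imageView.
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -imageView.bounds.size.height);
CGRect newBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform);
faceView.frame = newBounds; // instead of initializing faceView with the raw bounds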


How can I draw a curved shadow?

Like so:
I know that this won't work with NSShadow; drawing it in drawRect: would work just fine.
You can create this and many other kinds of shadows using Core Animation layers and the shadowPath property. The shadow that you are describing can be made with an elliptical shadow path.
The code to produce this shadow is below. You can tweak the size of the ellipse to have a rounder shape of the shadow. You can also tweak the position, opacity, color and blur radius using the shadow properties on the layer.
self.wantsLayer = YES;
NSView *viewWithRoundShadow = [[NSView alloc] initWithFrame:NSMakeRect(30, 30, 200, 100)];
[self addSubview:viewWithRoundShadow];
CALayer *backingLayer = viewWithRoundShadow.layer;
backingLayer.backgroundColor = [NSColor orangeColor].CGColor;
// Configure the shadow
backingLayer.shadowColor = [NSColor blackColor].CGColor;
backingLayer.shadowOffset = CGSizeMake(0, -1.);
backingLayer.shadowRadius = 5.0;
backingLayer.shadowOpacity = 0.75;
CGRect shadowRect = backingLayer.bounds;
CGFloat shadowRectHeight = 25.;
shadowRect.size.height = shadowRectHeight;
// make the ellipse narrow
shadowRect = CGRectInset(shadowRect, 5, 0);
// the layer keeps its own reference to the path, so release ours to avoid a leak
CGPathRef shadowPath = CGPathCreateWithEllipseInRect(shadowRect, NULL);
backingLayer.shadowPath = shadowPath;
CGPathRelease(shadowPath);
Just to show some examples of other shadows that can be created using the same technique: a path like this will produce a shadow like this.
It's far from perfect, but I think it does draw the sort of shadow you are looking for. Bear in mind that I have left a plain linear gradient in place, from total black to a clear color. Being so dark, this will not give you a super-realistic shadow unless you tweak the values a bit. You may want to play with the gradient by adding more locations with different alpha values to get whatever stepping you like. Some experimentation is probably required, but the values are all there to play with.
As you suggested, it's a drawRect:(CGRect)rect thing. Just create a custom view and override that one method:
- (void)drawRect:(CGRect)rect {
    // Get the context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Set up the gradient locations. We just want 0 and 1, the start and end of the gradient.
    CGFloat locations[2] = { 0.0, 1.0 };
    // Set up the two colors for the locations: plain black, and plain black with alpha 0.0 ;-)
    CGFloat colors[8] = { 0.0f, 0.0f, 0.0f, 1.0f,   // Start color
                          0.0f, 0.0f, 0.0f, 0.0f }; // End color
    // Build the gradient (and release the color space we created for it)
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace,
                                                                 colors,
                                                                 locations,
                                                                 2);
    CGColorSpaceRelease(colorSpace);
    // Load a transformation matrix that will squash the gradient in the current context
    CGContextScaleCTM(context, 1.0f, 0.1f);
    // Draw the gradient
    CGContextDrawRadialGradient(context,                                     // The context
                                gradient,                                    // The gradient
                                CGPointMake(self.bounds.size.width/2, 0.0f), // Starting point
                                0.0f,                                        // Starting radius
                                CGPointMake(self.bounds.size.width/2, 0.0f), // Ending point
                                self.bounds.size.width/2,                    // Ending radius
                                kCGGradientDrawsBeforeStartLocation);        // Options
    // Release the gradient and pray that everything was well written
    CGGradientRelease(gradient);
}
This is how it looks on my screen...
I simply placed an image just over the shadow, but you can easily merge the shadow with an image if you subclass UIImageView and override its drawRect: method.
As you can see, what I did was simply set up a circular gradient, then load a scaling matrix to squash it before drawing it to the context.
If you plan to do anything else in that method, remember that the matrix is in place and everything you do will be deformed by it. You may want to save the CTM with CGContextSaveGState() before loading the matrix, and then restore the original state with CGContextRestoreGState().
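A minimal sketch of that save/restore pattern:

// Isolate the squashing matrix so anything drawn later is unaffected.
CGContextSaveGState(context);           // remember the current CTM
CGContextScaleCTM(context, 1.0f, 0.1f); // squash vertically
// ... draw the radial gradient here ...
CGContextRestoreGState(context);        // back to the unscaled state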
Hope this was what you were looking for.
Cheers.
I could explain how to do this in code, or explain how to use a tool which generates this code for you. I chose the latter; a sketch of the kind of code such a tool produces follows the steps.
Using PaintCode (free demo available, 1-hour limit per session):
1. Draw an oval.
2. Draw a rectangle which intersects with the bottom of the oval.
3. Cmd-click both the rectangle and the oval in the "Objects" list in the top left corner.
4. Press the Intersect button in the toolbar.
5. Select the Bezier from the Objects list.
6. Set its Stroke to "No Stroke".
7. Click the Gradient button (located on the left, below the Selection Inspector).
8. Press the "+" button.
9. Change the gradient color to light grey.
10. From the Selection Inspector, change the Fill Style to "Gradient".
11. Select Gradient: Linear.
12. Adjust the gradient until you are satisfied.
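PaintCode then emits ordinary Core Graphics drawing code for the shape. As a rough idea of what that looks like, here is a hand-written equivalent (illustrative only; the rect, colors and locations are made-up values, not PaintCode's actual output):

// Fill an oval shadow shape with a light-grey-to-clear linear gradient.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
NSArray *gradientColors = [NSArray arrayWithObjects:
                           (id)[UIColor lightGrayColor].CGColor,
                           (id)[UIColor clearColor].CGColor, nil];
CGFloat gradientLocations[2] = { 0.0, 1.0 };
CGGradientRef gradient = CGGradientCreateWithColors(space, (CFArrayRef)gradientColors, gradientLocations);
UIBezierPath *shadowShape = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(10, 80, 180, 30)];
CGContextSaveGState(ctx);
[shadowShape addClip]; // clip to the shape so the gradient stays inside it
CGContextDrawLinearGradient(ctx, gradient,
                            CGPointMake(100, 80), CGPointMake(100, 110), 0);
CGContextRestoreGState(ctx);
CGGradientRelease(gradient);
CGColorSpaceRelease(space);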
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIImage *natureImage = [UIImage imageNamed:@"nature.jpg"];
    CALayer *layer = [CALayer layer];
    layer.bounds = CGRectMake(0, 0, 200, 200);
    layer.position = CGPointMake(380, 200);
    layer.contents = (id)natureImage.CGImage;
    layer.shadowOffset = CGSizeMake(0, 2);
    layer.shadowOpacity = 0.70;
    layer.shadowPath = (layer.shadowPath) ? nil : [self bezierPathWithCurvedShadowForRect:layer.bounds].CGPath;
    [self.view.layer addSublayer:layer];
}
- (UIBezierPath*)bezierPathWithCurvedShadowForRect:(CGRect)rect {
    UIBezierPath *path = [UIBezierPath bezierPath];
    // how far the shadow's corners drop below the rect, and how much the middle
    // curves back up (example values; tune to taste)
    CGFloat offset = 10.0;
    CGFloat curve = 5.0;
    CGPoint topLeft = rect.origin;
    CGPoint bottomLeft = CGPointMake(0.0, CGRectGetHeight(rect) + offset);
    CGPoint bottomMiddle = CGPointMake(CGRectGetWidth(rect)/2, CGRectGetHeight(rect) - curve);
    CGPoint bottomRight = CGPointMake(CGRectGetWidth(rect), CGRectGetHeight(rect) + offset);
    CGPoint topRight = CGPointMake(CGRectGetWidth(rect), 0.0);
    [path moveToPoint:topLeft];
    [path addLineToPoint:bottomLeft];
    [path addQuadCurveToPoint:bottomRight controlPoint:bottomMiddle];
    [path addLineToPoint:topRight];
    [path addLineToPoint:topLeft];
    [path closePath];
    return path;
}
Hope this will help you.

hitTest overlapping CALayers

I have a UIView that contains a drawing that I've made using CALayers added as sublayers. It is a red square with a blue triangle centered inside. I am able to determine which shape has been touched using the following code:
CGPoint location = [gesture locationInView:self.view];
CALayer* layerThatWasTapped = [self.view.layer hitTest:location];
NSLog(@"Master Tap Location: %@", NSStringFromCGPoint(location));
NSLog(@"Tapped Layer Name: %@", layerThatWasTapped.name);
NSLog(@"Tapped Layer Parent: %@", layerThatWasTapped.superlayer.name);
int counter = layerThatWasTapped.superlayer.sublayers.count;
NSArray * subs = layerThatWasTapped.superlayer.sublayers;
// Loop through all sublayers of the picture
for (int i = 0; i < counter; i++) {
    CALayer *layer = [subs objectAtIndex:i];
    CAShapeLayer* loopLayer = (CAShapeLayer*)layerThatWasTapped.modelLayer;
    CGPathRef loopPath = loopLayer.path;
    CGPoint loopLoc = [gesture locationInView:cPage];
    loopLoc = [self.view.layer convertPoint:loopLoc toLayer:layer];
    NSLog(@"loopLoc Tap Location: %@", NSStringFromCGPoint(loopLoc));
    // determine if the hit is on a layer
    if (CGPathContainsPoint(loopPath, NULL, loopLoc, YES)) {
        NSLog(@"Layer %i Name: %@ Hit", i, layer.name);
    } else {
        NSLog(@"Layer %i Name: %@ No Hit", i, layer.name);
    }
}
My problem lies with areas where the bounds of the triangle overlap the square. This results in the triangle registering the hit even when the hit is outside of the triangle's path. This is a simplified example (I may have many overlapping shapes stacked in the view).
Is there a way to loop through all of the sublayers and hittest each one to see if it lies under the tapped point?
OR
Is there a way to have the bounds of my layers match their paths so the hit occurs only on a visible area?
Since you're using CAShapeLayer, this is pretty easy. Make a subclass of CAShapeLayer and override its containsPoint: method, like this:
@implementation MyShapeLayer

- (BOOL)containsPoint:(CGPoint)p
{
    return CGPathContainsPoint(self.path, NULL, p, false);
}

@end
Make sure that wherever you were allocating a CAShapeLayer, you change it to allocate a MyShapeLayer instead:
CAShapeLayer *triangle = [MyShapeLayer layer]; // this way
CAShapeLayer *triangle = [[MyShapeLayer alloc] init]; // or this way
Finally, keep in mind that when calling -[CALayer hitTest:], you need to pass in a point in the superlayer's coordinate space:
CGPoint location = [gesture locationInView:self.view];
CALayer *myLayer = self.view.layer;
location = [myLayer.superlayer convertPoint:location fromLayer:myLayer];
CALayer* layerThatWasTapped = [myLayer hitTest:location];
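If you'd still rather loop over the sublayers yourself (your first option), you can also call containsPoint: on each layer directly instead of testing paths by hand; a sketch, reusing the location variable from above:

// Ask each sublayer whether it contains the (converted) point.
// With the MyShapeLayer override above, this respects the actual path.
for (CALayer *sublayer in self.view.layer.sublayers) {
    CGPoint p = [self.view.layer convertPoint:location toLayer:sublayer];
    if ([sublayer containsPoint:p]) {
        NSLog(@"Hit layer: %@", sublayer.name);
    }
}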

Add a magnifier in cocos2d games

I want to add a magnifier to a cocos2d game. Here is what I found online:
http://coffeeshopped.com/2010/03/a-simpler-magnifying-glass-loupe-view-for-the-iphone
I've changed the code a bit (since I don't want the loupe to follow the touch):
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:magnifier_rect])) {
        // make the circle-shape outline with a nice border.
        self.layer.borderColor = [[UIColor lightGrayColor] CGColor];
        self.layer.borderWidth = 3;
        self.layer.cornerRadius = 250;
        self.layer.masksToBounds = YES;
        touchPoint = CGPointMake(CGRectGetMidX(magnifier_rect), CGRectGetMidY(magnifier_rect));
    }
    return self;
}
Then I add it in one of my scene's init methods:
loop = [[MagnifierView alloc] init];
[loop setNeedsDisplay];
loop.viewToMagnify = [CCDirector sharedDirector].openGLView;
[[CCDirector sharedDirector].openGLView.superview addSubview:loop];
But the result is that the area inside the loupe is black.
Also, this loupe magnifies everything at the same scale. How can I change it to magnify more near the center and less near the edge, just like a real magnifier?
Thank you!
Here I assume that you want to magnify the center of the screen.
You can change the size attributes dynamically according to your app's needs.
CGSize size = [[CCDirector sharedDirector] winSize];
id lens = [CCLens3D actionWithPosition:ccp(size.width/2,size.height/2) radius:240 grid:ccg(15,10) duration:0.0f];
[self runAction:lens];
Cocos2d draws using OpenGL, not CoreAnimation/Quartz. The CALayer you are drawing is empty, so you see nothing. You will either have to use OpenGL graphics code to perform the loupe effect or sample the pixels and alter them appropriately to achieve the magnification effect, as was done in the Christmann article referenced from the article you linked to. That code also relies on CoreAnimation/Quartz, so you will need to work out another way to get your hands on the image data you wish to magnify.
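If you go the pixel-sampling route, the usual way to get at the rendered pixels under OpenGL is glReadPixels. A minimal sketch (assuming cocos2d 1.x with OpenGL ES, and hypothetical cx/cy as the loupe center in pixels):

// Read back the on-screen pixels under the loupe so they can be redrawn magnified.
int w = 100, h = 100;                           // region to magnify
GLubyte *pixels = (GLubyte *)malloc(w * h * 4); // RGBA, one byte per channel
glReadPixels(cx - w/2, cy - h/2, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ... build a CCTexture2D (or CGImage) from pixels and draw it scaled up ...
free(pixels);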

Scaling custom draw code for different iOS resolutions

I am struggling to get my custom drawing code to render at the proper scale for all iOS devices, i.e., older iPhones, those with retina displays and the iPad.
I have a subclass of UIView that has a custom class that displays a vector graphic. It has a scale property that I can set. I do the scaling in initWithCoder when the UIView loads and I first instantiate the vector graphic. This UIView is shown when the user taps a button on the home screen.
At first I tried this:
screenScaleFactor = 1.0;
if ([UIScreen instancesRespondToSelector:@selector(scale)]) {
    screenScaleFactor = [[UIScreen mainScreen] scale];
}
// and then I multiply stuff by screenScale
... which worked for going between normal iPhones and retina iPhones, but chokes on the iPad. As I said, you can get to the UIView at issue by tapping a button on the home screen. When run on the iPad, if you display the UIView at 1X it works, but at 2X I get a vector graphic that is twice as big as it should be.
So I tried this instead:
UPDATE: This block is the one that's right. (with the corrected spelling, of course!)
screenScaleFactor = 1.0;
if ([self respondsToSelector:@selector(contentScaleFactor)]) { // EDIT: corrected misspelling.
    screenScaleFactor = (float)self.contentScaleFactor;
}
// again multiplying stuff by screenScale
Which works at both 1X and 2X on the iPad and on the older iPhones, but on a retina display, the vector graphic is half the size it should be.
In the first case, I query the UIScreen for its scale property, and in the second, I ask the parent view of the vector graphic for its contentScaleFactor. Neither of these gets me where I want in all cases.
Any suggestions?
UPDATE:
Here's the method in my subclassed UIView (it's called a GaugeView):
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    t0 = CGAffineTransformInvert(t0);
    CGContextConcatCTM(context, t0);
    [needle updateBox];
    [needle draw:context];
    CGContextRestoreGState(context); // balance the save above
}
needle is of class VectorSprite, which is a subclass of Sprite, which in turn subclasses NSObject. These classes are from a programming book I'm working through. needle has the scale property that I set.
updateBox comes from Sprite and looks like this:
- (void)updateBox {
    CGFloat w = width*scale;
    CGFloat h = height*scale;
    CGFloat w2 = w*0.5;
    CGFloat h2 = h*0.5;
    CGPoint origin = box.origin;
    CGSize bsize = box.size;
    CGFloat left = -kScreenHeight*0.5;
    CGFloat right = -left;
    CGFloat top = kScreenWidth*0.5;
    CGFloat bottom = -top;
    offScreen = NO;
    if (wrap) {
        if ((x+w2) < left) x = right + w2;
        else if ((x-w2) > right) x = left - w2;
        else if ((y+h2) < bottom) y = top + h2;
        else if ((y-h2) > top) y = bottom - h2;
    }
    else {
        offScreen =
            ((x+w2) < left) ||
            ((x-w2) > right) ||
            ((y+h2) < bottom) ||
            ((y-h2) > top);
    }
    origin.x = x-w2*scale;
    origin.y = y-h2*scale;
    bsize.width = w;
    bsize.height = h;
    box.origin = origin;
    box.size = bsize;
}
Sprite also has the draw and drawBody methods which are:
- (void)draw:(CGContextRef)context {
    CGContextSaveGState(context);
    // Position the sprite
    CGAffineTransform t = CGAffineTransformIdentity;
    t = CGAffineTransformTranslate(t, x, y);
    t = CGAffineTransformRotate(t, rotation);
    t = CGAffineTransformScale(t, scale, scale);
    CGContextConcatCTM(context, t);
    // draw sprite body
    [self drawBody:context];
    CGContextRestoreGState(context);
}

- (void)drawBody:(CGContextRef)context {
    // Draw your sprite here, centered on (x,y).
    // As an example, we draw a filled circle.
    if (alpha < 0.05) return;
    CGContextBeginPath(context);
    CGContextSetRGBFillColor(context, r, g, b, alpha);
    CGContextAddEllipseInRect(context, CGRectMake(-width/2, -height/2, width, height));
    CGContextClosePath(context);
    CGContextDrawPath(context, kCGPathFill);
}
How, exactly, are you rendering the graphic?
This should be handled automatically in drawRect: (the context you get should already be at 2x). This should also be handled automatically with UIGraphicsBeginImageContextWithOptions(size, NO, 0); if available (if you need to fall back to UIGraphicsBeginImageContext(), assume a scale of 1). You shouldn't need to worry about it unless you're drawing the bitmap yourself somehow.
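A sketch of that fallback (the NULL check is the standard test for a weakly linked function on older iOS versions):

CGSize size = CGSizeMake(100, 100); // whatever you need
if (UIGraphicsBeginImageContextWithOptions != NULL) {
    UIGraphicsBeginImageContextWithOptions(size, NO, 0); // scale 0 = device scale
} else {
    UIGraphicsBeginImageContext(size); // older OS: always 1x
}
// ... draw, call UIGraphicsGetImageFromCurrentImageContext(), then UIGraphicsEndImageContext()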
You could try something like self.contentScaleFactor = [[UIScreen mainScreen] scale], with appropriate checks first (this might mean that if you display it on an iPad at 2x, you'll get high-res graphics).
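A minimal sketch of that suggestion, guarded for older systems:

// Opt the view into the screen's native scale when the APIs exist.
if ([UIScreen instancesRespondToSelector:@selector(scale)] &&
    [self respondsToSelector:@selector(setContentScaleFactor:)]) {
    self.contentScaleFactor = [[UIScreen mainScreen] scale];
}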
Fundamentally, there's not much difference between an iPad in 2x mode and a "retina display", except that the iPad can switch between 1x and 2x.
Finally, there's a typo: @selector(contentsScaleFactor) has an extra s.

iPhone SDK: repeat subviews

I have one UIView, which I'm using as my main view, and I want to repeat the subview across the screen. How exactly can I do this?
You can look at Core Animation. There is a layer called CAReplicatorLayer that might help you. Alternatively, you can use generic CALayers and set all their contents to the same image. You would just need to figure out the width of your parent view and how big you want each tile to be, then create a CALayer for each tile, shifting the position of each new layer according to your grid dimensions. Something like this:
UIImage *imageToReplicate = [UIImage imageNamed:@"tile"];
for (int i = 0; i < 10; ++i)
{
    for (int j = 0; j < 10; ++j)
    {
        CGFloat xPos = 0.0; // Calculate your x position
        CGFloat yPos = 0.0; // Calculate your y position
        CALayer *layer = [CALayer layer];
        [layer setBounds:CGRectMake(0.0f, 0.0f, TILE_WIDTH, TILE_HEIGHT)];
        [layer setPosition:CGPointMake(xPos, yPos)];
        [layer setContents:(id)[imageToReplicate CGImage]];
        [[[self view] layer] addSublayer:layer];
    }
}
You'll have to figure out the calculation for each iteration of your layer positions. Remember that by default the anchor point of a layer is its center; you can either account for that by subtracting half of the tile size, or change the anchor point to a corner instead. For more information, take a look at the layer geometry section of the Core Animation documentation.
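If you try the CAReplicatorLayer route mentioned at the start, a minimal sketch looks like this (using the same hypothetical TILE_WIDTH/TILE_HEIGHT constants; a nested replicator handles the second dimension):

// One tile, replicated into a row; the outer replicator repeats the row vertically.
CALayer *tile = [CALayer layer];
tile.bounds = CGRectMake(0.0f, 0.0f, TILE_WIDTH, TILE_HEIGHT);
tile.position = CGPointMake(TILE_WIDTH / 2.0f, TILE_HEIGHT / 2.0f);
tile.contents = (id)[[UIImage imageNamed:@"tile"] CGImage];

CAReplicatorLayer *row = [CAReplicatorLayer layer];
row.instanceCount = 10; // tiles per row
row.instanceTransform = CATransform3DMakeTranslation(TILE_WIDTH, 0, 0);
[row addSublayer:tile];

CAReplicatorLayer *grid = [CAReplicatorLayer layer];
grid.instanceCount = 10; // rows
grid.instanceTransform = CATransform3DMakeTranslation(0, TILE_HEIGHT, 0);
[grid addSublayer:row];
[[[self view] layer] addSublayer:grid];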