Coordinate system for CIFaceFeature - iPhone

I'm using the CIFeature Class Reference for Face Detection and I'm more than a little confused by the Core Graphics coordinates and the regular UIKit coordinates. This is my code:
UIImage *mainImage = [UIImage imageNamed:@"facedetectionpic.jpg"];
CIImage *image = [[CIImage alloc] initWithImage:mainImage];
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
NSArray *features = [detector featuresInImage:image];

CGRect faceRect;
for (CIFaceFeature *feature in features)
{
    faceRect = [feature bounds];
}
It's pretty standard. Now according to the official documentation it says:
bounds: The rectangle that holds the discovered feature. (read-only)
Discussion: The rectangle is in the coordinate system of the image.
When I directly output faceRect, I get {{136, 427}, {46, 46}}. When I apply a CGAffineTransform to flip it the right way, I get negative coordinates, which doesn't seem right. The image I am working with is displayed in a UIImageView.
So which coordinate system are these coordinates in? The image? The image view? Core Graphics coordinates? Regular UIKit coordinates?

I finally figured it out. As the documentation points out, the rectangle returned by CIFaceFeature is in the coordinate system of the image, i.e. it uses the coordinates of the original image. If you have the Autoresize option checked, your image gets scaled down to fit the UIImageView, so you need to convert the original image coordinates to the view coordinates.
This nifty piece of code that I adapted from here does that for you:
- (CGPoint)convertPointFromImage:(CGPoint)imagePoint {
    CGPoint viewPoint = imagePoint;
    CGSize imageSize = self.setBody.image.size;
    CGSize viewSize = self.setBody.bounds.size;

    CGFloat ratioX = viewSize.width / imageSize.width;
    CGFloat ratioY = viewSize.height / imageSize.height;

    UIViewContentMode contentMode = self.setBody.contentMode;
    if (contentMode == UIViewContentModeScaleAspectFit ||
        contentMode == UIViewContentModeScaleAspectFill)
    {
        CGFloat scale;
        if (contentMode == UIViewContentModeScaleAspectFit) {
            scale = MIN(ratioX, ratioY);
        } else /* UIViewContentModeScaleAspectFill */ {
            scale = MAX(ratioX, ratioY);
        }
        // scale the point, then offset it by the letterbox/pillarbox margins
        viewPoint.x *= scale;
        viewPoint.y *= scale;
        viewPoint.x += (viewSize.width - imageSize.width * scale) / 2.0f;
        viewPoint.y += (viewSize.height - imageSize.height * scale) / 2.0f;
    }
    return viewPoint;
}
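One more wrinkle worth noting: Core Image reports the bounds with a bottom-left origin, while UIKit uses a top-left origin, so the rect usually has to be flipped against the image height before it is converted to view coordinates. A minimal sketch, assuming mainImage is the UIImage from the code above, faceRect is the feature's bounds, and self.setBody in the conversion method stands for whichever UIImageView is showing the image:

// Flip the CIFaceFeature rect from Core Image coordinates (origin bottom-left)
// into UIKit image coordinates (origin top-left).
CGRect uikitRect = faceRect;
uikitRect.origin.y = mainImage.size.height - faceRect.origin.y - faceRect.size.height;

// Then map two opposite corners through the conversion method above.
CGPoint topLeft = [self convertPointFromImage:uikitRect.origin];
CGPoint bottomRight = [self convertPointFromImage:CGPointMake(CGRectGetMaxX(uikitRect),
                                                              CGRectGetMaxY(uikitRect))];
CGRect viewRect = CGRectMake(topLeft.x, topLeft.y,
                             bottomRight.x - topLeft.x,
                             bottomRight.y - topLeft.y);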

Related

iOS: Resizing images for thumbs - blurry although high interpolation

I have written a category on UIImage to resize images like the iPhone Photos app does.
Although the interpolation is set to high, the image does not really look like the one in the Photos app. Instead it looks unsharp or blurry.
Here is what I did:
- (UIImage *)resizeImageProportionallyIntoNewSize:(CGSize)newSize
{
    CGFloat scaleWidth = 1.0f;
    CGFloat scaleHeight = 1.0f;

    NSLog(@"Origin Size = %@", NSStringFromCGSize(self.size));

    if (CGSizeEqualToSize(self.size, newSize) == NO) {
        // calculate "the longer side"
        if (self.size.width > self.size.height) {
            scaleWidth = self.size.width / self.size.height;
        } else {
            scaleHeight = self.size.height / self.size.width;
        }
    }

    // now draw the new image in a context with scaling proportionally
    UIImage *sourceImage = self;
    UIImage *newImage = nil;

    // now we create a context in newSize and draw the image out of the bounds of the context to get
    // a proportionally scaled image by cutting off the image overlay
    if ([[UIScreen mainScreen] scale] == 2.0f) {
        UIGraphicsBeginImageContextWithOptions(newSize, YES, 2.0f);
    } else {
        UIGraphicsBeginImageContext(newSize);
    }

    // Set the quality level to use when rescaling
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    // Center the image so that a little is cut off on each edge
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.size.width = (int) newSize.width * scaleWidth;
    thumbnailRect.size.height = (int) newSize.height * scaleHeight;
    thumbnailRect.origin.x = (int) (newSize.width - thumbnailRect.size.width) * 0.5;
    thumbnailRect.origin.y = (int) (newSize.height - thumbnailRect.size.height) * 0.5;

    [sourceImage drawInRect:thumbnailRect];

    newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    if (newImage == nil) NSLog(@"could not scale image");

    return newImage;
}
BTW: there is no difference between using high interpolation and not using it. Maybe I am doing something wrong.
Thanks in advance for your help!
The best way to make a thumbnail is to ask the system to do it for you, using the Image I/O Framework. The only trick is that since you're using a CGImage you must take account of the screen resolution:
CGImageSourceRef src = CGImageSourceCreateWith... // whatever
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat w = // maximum size, multiplied by the scale
NSDictionary* d =
    [NSDictionary dictionaryWithObjectsAndKeys:
        (id)kCFBooleanTrue, kCGImageSourceShouldAllowFloat,
        (id)kCFBooleanTrue, kCGImageSourceCreateThumbnailWithTransform,
        (id)kCFBooleanTrue, kCGImageSourceCreateThumbnailFromImageAlways,
        [NSNumber numberWithInt:(int)w], kCGImageSourceThumbnailMaxPixelSize,
        nil];
CGImageRef imref =
    CGImageSourceCreateThumbnailAtIndex(src, 0, (__bridge CFDictionaryRef)d);
UIImage* im =
    [UIImage imageWithCGImage:imref scale:scale orientation:UIImageOrientationUp];
CFRelease(imref);
CFRelease(src);
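For completeness, here is one way the elided pieces above could be filled in if the source image lives in the app bundle; the file name and the 100-point target size are made-up values for illustration, not part of the original answer:

// Hypothetical setup for the elided lines above.
NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"photo" withExtension:@"jpg"]; // placeholder name
CGImageSourceRef src = CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat w = 100.0f * scale; // e.g. a 100-point thumbnail, expressed in pixels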
If it is unsharp or blurry, you could also try turning off anti-aliasing:
CGContextSetShouldAntialias(context, NO);
Hope this helps.

Add a UIImage on top of CATiledLayer

Is it possible to draw a UIImage on top of a CATiledLayer? The main idea is to mark a position on the view. I have used the PhotoScroller example from the Apple library and I am trying to add a UIImage on top of the tileRect. Any help will be appreciated.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    /**** Trying to add UIImage on top of CGRect rect. But not working. ****/
    CGRect pointRect = CGRectMake(100, 100, 32, 32);
    UIImage *image = [UIImage imageNamed:@"map-pointer32.png"];
    [image drawInRect:pointRect];

    // get the scale from the context by getting the current transform matrix, then asking for
    // its "a" component, which is one of the two scale components. We could also ask for "d".
    // This assumes (safely) that the view is being scaled equally in both dimensions.
    CGFloat initialScale = CGContextGetCTM(context).a;
    NSString *value = [NSString stringWithFormat:@"%.3f", initialScale];
    CGFloat scale = [value floatValue];

    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    CGSize tileSize = tiledLayer.tileSize;

    // Even at scales lower than 100%, we are drawing into a rect in the coordinate system of the full
    // image. One tile at 50% covers the width (in original image coordinates) of two tiles at 100%.
    // So at 50% we need to stretch our tiles to double the width and height; at 25% we need to stretch
    // them to quadruple the width and height; and so on.
    // (Note that this means that we are drawing very blurry images as the scale gets low. At 12.5%,
    // our lowest scale, we are stretching about 6 small tiles to fill the entire original image area.
    // But this is okay, because the big blurry image we're drawing here will be scaled way down before
    // it is displayed.)
    tileSize.width /= scale;
    tileSize.height /= scale;

    // calculate the rows and columns of tiles that intersect the rect we have been asked to draw
    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            UIImage *tile = [self tileForScale:scale row:row col:col];
            CGRect tileRect = CGRectMake(tileSize.width * col, tileSize.height * row,
                                         tileSize.width, tileSize.height);
            // if the tile would stick outside of our bounds, we need to truncate it so as to avoid
            // stretching out the partial tiles at the right and bottom edges
            tileRect = CGRectIntersection(self.bounds, tileRect);
            [tile drawInRect:tileRect];
            if (self.annotates) {
                // [[UIColor whiteColor] set];
                CGContextSetLineWidth(context, 6.0 / scale);
                CGContextStrokeRect(context, tileRect);
            }
        }
    }
}
Probably the best solution would be to add the image in a separate view, without any CATiledLayer.
You can add an empty container view, then add both the CATiledLayer-backed view and the UIImageView to that container, as sketched below.
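A minimal sketch of that arrangement, assuming a TilingView class backed by a CATiledLayer as in the PhotoScroller sample (the initializer is simplified, and the variable names and frame values are illustrative, not from the original answer):

// The tiled content sits at the bottom of the container,
// the pointer image sits on top and is never re-drawn per tile.
UIView *container = [[UIView alloc] initWithFrame:self.view.bounds];

TilingView *tilingView = [[TilingView alloc] initWithFrame:container.bounds]; // CATiledLayer-backed view
[container addSubview:tilingView];

UIImageView *pointerView =
    [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"map-pointer32.png"]];
pointerView.frame = CGRectMake(100, 100, 32, 32);
[container addSubview:pointerView]; // added last, so it stays above the tiles

[self.view addSubview:container];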

Restrict boundary to 320 x 480 while scaling UIImage

I am working on an interior decoration application: we can add a sofa, table, chair, or table lamp on the camera screen and then scale the UIImage with touch to zoom the picture. But while zooming we don't want the UIImage size to grow beyond 320 x 480; in other words, we want to restrict it to the iPhone screen boundary.
Any suggestions? I have implemented and tried a few things but couldn't get an exact solution. I used to check based on the center, but this approach is not working; I want something like edge detection, maybe that would be the exact solution.
Thanks in advance, looking forward to your answers.
Here is some code that I am using to resize my UIImageView:
- (BOOL)isValidSizeForView:(UIView *)myView forSize:(CGSize)size
{
    BOOL Decision = NO;
    CGRect rect = myView.frame;
    CGRect BoundRect = CGRectMake(0, 0, 320, 480);

    float MinX = (BoundRect.origin.x + (myView.frame.size.width / 2));
    float MaxX = ((BoundRect.origin.x + BoundRect.size.width) - (myView.frame.size.width / 2));
    float MinY = (BoundRect.origin.y + (myView.frame.size.height / 2));
    float MaxY = ((BoundRect.origin.y + BoundRect.size.height) - (myView.frame.size.height / 2));

    if (rect.origin.x > MinX && rect.origin.x < MaxX && rect.origin.y > MinY && rect.origin.y < MaxY)
    {
        Decision = YES;
    }
    else
    {
        printf(":( no sorry \n");
    }
    return Decision;
}
You could get the maximum zoomed size for your image by doing something like this:
- (CGSize)zoomedSizeForImage:(UIImage *)image constrainedTo:(CGSize)maxSize {
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;
    if (width > maxSize.width) {
        // scale the height before overwriting width, so the aspect ratio is preserved
        height = height * maxSize.width / width;
        width = maxSize.width;
    }
    if (height > maxSize.height) {
        width = width * maxSize.height / height;
        height = maxSize.height;
    }
    return CGSizeMake(width, height);
}
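For example, a pinch gesture handler could use it to cap the zoom; handlePinch: and the 320 x 480 limit below are assumptions for the sketch, not part of the original answer:

- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    UIImageView *imageView = (UIImageView *)pinch.view;
    CGSize maxSize = CGSizeMake(320.0f, 480.0f); // iPhone point bounds from the question
    CGSize limit = [self zoomedSizeForImage:imageView.image constrainedTo:maxSize];

    // Largest scale that keeps the view inside the limit, given its current size
    CGFloat maxScale = MIN(limit.width / imageView.bounds.size.width,
                           limit.height / imageView.bounds.size.height);
    CGFloat scale = MIN(pinch.scale, maxScale);

    imageView.bounds = CGRectMake(0, 0,
                                  imageView.bounds.size.width * scale,
                                  imageView.bounds.size.height * scale);
    pinch.scale = 1.0f; // reset so the next callback reports an incremental scale
}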

Cropping image captured by AVCaptureSession

I'm writing an iPhone App which uses AVFoundation to take a photo and crop it.
The App is similar to a QR code reader: it uses an AVCaptureVideoPreviewLayer with an overlay.
The overlay has a square. I want to crop the image so the cropped image is exactly what the user has placed inside the square.
The preview layer has gravity AVLayerVideoGravityResizeAspectFill.
It looks like what the camera actually captures is not exactly what the user sees in the preview layer. This means that I need to move from the preview coordinate system to the captured image coordinate system so I can crop the image. For this I think that I need the following parameters:
1. The ratio between the view size and the captured image size.
2. Information about which part of the captured image matches what is displayed in the preview layer.
Does anybody know how I can obtain this info, or is there a different approach to cropping the image?
(P.S. Capturing a screenshot of the preview is not an option, as I understand it might result in the App being rejected.)
Thank you in advance
Hope this meets your requirements
- (UIImage *)cropImage:(UIImage *)image to:(CGRect)cropRect andScaleTo:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef subImage = CGImageCreateWithImageInRect([image CGImage], cropRect);

    NSLog(@"---------");
    NSLog(@"*cropRect.origin.y=%f", cropRect.origin.y);
    NSLog(@"*cropRect.origin.x=%f", cropRect.origin.x);
    NSLog(@"*cropRect.size.width=%f", cropRect.size.width);
    NSLog(@"*cropRect.size.height=%f", cropRect.size.height);
    NSLog(@"---------");
    NSLog(@"*size.width=%f", size.width);
    NSLog(@"*size.height=%f", size.height);

    CGRect myRect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -size.height);
    CGContextDrawImage(context, myRect, subImage);

    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(subImage);

    return croppedImage;
}
You can use this API from AVFoundation: AVMakeRectWithAspectRatioInsideRect.
It will return the crop region for an image within a bounding region; the Apple documentation is here:
https://developer.apple.com/library/ios/Documentation/AVFoundation/Reference/AVFoundation_Functions/Reference/reference.html
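A small sketch of how it could apply here, under the assumption that the preview layer uses AVLayerVideoGravityResizeAspectFill (capturedImage and previewBounds are placeholder names): the visible part of the captured image is the largest centered rect with the preview's aspect ratio that fits inside the image.

#import <AVFoundation/AVFoundation.h>

// capturedImage: the full UIImage from the still image output (placeholder)
// previewBounds: the bounds of the AVCaptureVideoPreviewLayer (placeholder)
CGRect imageRect = CGRectMake(0, 0, capturedImage.size.width, capturedImage.size.height);
CGRect visibleRect = AVMakeRectWithAspectRatioInsideRect(previewBounds.size, imageRect);

CGImageRef visibleCGImage = CGImageCreateWithImageInRect(capturedImage.CGImage, visibleRect);
UIImage *visibleImage = [UIImage imageWithCGImage:visibleCGImage];
CGImageRelease(visibleCGImage);

Note that this ignores the photo's imageOrientation; images straight from the camera are usually rotated, so the rect may need to be transposed before cropping.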
I think it is just as simple as this:
- (CGRect)computeCropRect:(CGImageRef)cgImageRef
{
    static CGFloat cgWidth = 0;
    static CGFloat cgHeight = 0;
    static CGFloat viewWidth = 320;

    if (cgWidth == 0)
        cgWidth = CGImageGetWidth(cgImageRef);
    if (cgHeight == 0)
        cgHeight = CGImageGetHeight(cgImageRef);

    CGRect cropRect;
    // Only based on width
    cropRect.origin.x = cropRect.origin.y = kMargin * cgWidth / viewWidth;
    cropRect.size.width = cropRect.size.height = kSquareSize * cgWidth / viewWidth;
    return cropRect;
}
where kMargin and kSquareSize (20 points and 280 points in my case) are the margin and the scanning area respectively.
Then perform cropping
CGRect cropRect = [self computeCropRect:cgCapturedImageRef];
CGImageRef croppedImageRef = CGImageCreateWithImageInRect(cgCapturedImageRef, cropRect);

Drawing in CATiledLayer with CoreGraphics CGContextDrawImage

I would like to use a CATiledLayer in iPhone OS 3.1.3, and to do so all drawing in -(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context has to be done with Core Graphics only.
Now I run into the problem of the flipped coordinate system on the iPhone, and there are some suggestions for how to fix it using transforms:
Image is drawn upside down
CATiledLayer or CALayer not working
My problem is that I cannot get it to work. I started with the PhotoScroller sample code and replaced the drawing method with Core Graphics calls only. It looks like this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
    CGContextSaveGState(context);
    CGRect rect = CGContextGetClipBoundingBox(context);
    CGFloat scale = CGContextGetCTM(context).a;

    CGContextConcatCTM(context, CGAffineTransformMakeTranslation(0.f, rect.size.height));
    CGContextConcatCTM(context, CGAffineTransformMakeScale(1.f, -1.f));

    CATiledLayer *tiledLayer = (CATiledLayer *)layer;
    CGSize tileSize = tiledLayer.tileSize;
    tileSize.width /= scale;
    tileSize.height /= scale;

    // calculate the rows and columns of tiles that intersect the rect we have been asked to draw
    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            // if (row == 0 ) continue;
            UIImage *tile = [self tileForScale:scale row:row col:col];
            CGImageRef tileRef = [tile CGImage];
            CGRect tileRect = CGRectMake(tileSize.width * col, tileSize.height * row,
                                         tileSize.width, tileSize.height);
            // if the tile would stick outside of our bounds, we need to truncate it so as to avoid
            // stretching out the partial tiles at the right and bottom edges
            tileRect = CGRectIntersection(self.bounds, tileRect);
            NSLog(@"row:%d, col:%d, x:%.0f y:%.0f, height:%.0f, width:%.0f", row, col,
                  tileRect.origin.x, tileRect.origin.y, tileRect.size.height, tileRect.size.width);
            //[tile drawInRect:tileRect];
            CGContextDrawImage(context, tileRect, tileRef);

            // just to draw the row and column index in the tile and mark the origin of the tile with a red line
            if (self.annotates) {
                CGContextSetStrokeColorWithColor(context, [[UIColor whiteColor] CGColor]);
                CGContextSetLineWidth(context, 6.0 / scale);
                CGContextStrokeRect(context, tileRect);
                CGContextSetStrokeColorWithColor(context, [[UIColor redColor] CGColor]);
                CGContextMoveToPoint(context, tileRect.origin.x, tileRect.origin.y);
                CGContextAddLineToPoint(context, tileRect.origin.x + 100.f, tileRect.origin.y + 100.f);
                CGContextStrokePath(context);
                CGContextSetStrokeColorWithColor(context, [[UIColor redColor] CGColor]);
                CGContextSetFillColorWithColor(context, [[UIColor whiteColor] CGColor]);
                CGContextSelectFont(context, "Courier", 128, kCGEncodingMacRoman);
                CGContextSetTextDrawingMode(context, kCGTextFill);
                CGContextSetShouldAntialias(context, true);
                char text[30];
                int length = sprintf(text, "row:%d col:%d", row, col);
                CGContextSaveGState(context);
                CGContextShowTextAtPoint(context, tileRect.origin.x + 110.f, tileRect.origin.y + 100.f, text, length);
                CGContextRestoreGState(context);
            }
        }
    }
    CGContextRestoreGState(context);
}
As you can see, I am using a scale transform to invert the coordinate system and a translation transform to shift the origin to the lower left corner. The images draw correctly, but only the first row of tiles is being drawn. I think there is a problem with the translation operation or the way the coordinates of the tiles are computed.
This is how it looks:
I am a bit confused by all these transformations.
Bonus question:
How would one handle retina display pictures in Core Graphics?
EDIT:
To get it working on the retina display I just took the original method from the sample code to provide the images:
- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col
{
    // we use "imageWithContentsOfFile:" instead of "imageNamed:" here because we don't want UIImage to cache our tiles
    NSString *tileName = [NSString stringWithFormat:@"%@_%d_%d_%d", imageName, (int)(scale * 1000), col, row];
    NSString *path = [[NSBundle mainBundle] pathForResource:tileName ofType:@"png"];
    UIImage *image = [UIImage imageWithContentsOfFile:path];
    return image;
}
In principle the scale of the display is ignored, since Core Graphics works in pixels, not points; when asked to draw more pixels, more CATiledLayer tiles (or sublayers) are used to fill the screen.
Thanks a lot,
Thomas
Thomas, I started by following the WWDC 2010 ScrollView talk, and there is little or no documentation on working within drawLayer:inContext: for iOS 3.x. I had the same issues as you when I moved the demo code from drawRect: across to drawLayer:inContext:.
Some investigation showed me that within drawLayer:inContext: the size and offset of the rect returned from CGContextGetClipBoundingBox(context) is exactly what you want to draw in, whereas drawRect: gives you the whole bounds.
Knowing this, you can remove the row and column iteration, as well as the CGRect intersection for the edge tiles, and just draw into the rect once you've translated the context.
Here's what I've ended up with:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGRect rect = CGContextGetClipBoundingBox(ctx);
    CGFloat scale = CGContextGetCTM(ctx).a;

    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    CGSize tileSize = tiledLayer.tileSize;
    tileSize.width /= scale;
    tileSize.height /= scale;

    int col = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int row = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);

    CGImageRef image = [self imageForScale:scale row:row col:col];
    if (NULL != image) {
        CGContextTranslateCTM(ctx, 0.0, rect.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        rect = CGContextGetClipBoundingBox(ctx);
        CGContextDrawImage(ctx, rect, image);
        CGImageRelease(image);
    }
}
Notice that rect is redefined after the TranslateCTM and ScaleCTM.
And for reference, here is my imageForScale:row:col: function:
- (CGImageRef)imageForScale:(CGFloat)scale row:(int)row col:(int)col {
    CGImageRef image = NULL;
    CGDataProviderRef provider = NULL;
    NSString *filename = [NSString stringWithFormat:@"img_name_here%0.0f_%d_%d", ceilf(scale * 100), col, row];
    NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"png"];
    if (path != nil) {
        NSURL *imageURL = [NSURL fileURLWithPath:path];
        provider = CGDataProviderCreateWithURL((CFURLRef)imageURL);
        image = CGImageCreateWithPNGDataProvider(provider, NULL, FALSE, kCGRenderingIntentDefault);
        CFRelease(provider);
    }
    return image;
}
There's still a bit of work to be done on these two functions in order to support high resolution graphics properly, but it does look nice on everything but an iPhone 4.
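One possible direction for the remaining retina work, purely as a sketch and not something from the original answer: if the tiled layer's contentsScale is set to the screen scale, the CTM scale read in drawLayer:inContext: already includes that factor, so it may need to be divided back out before picking a pre-rendered tile set.

// Hypothetical adjustment inside imageForScale:row:col: for retina screens.
// Assumes the CATiledLayer's contentsScale equals the screen scale.
CGFloat screenScale = [UIScreen mainScreen].scale;
CGFloat zoomScale = scale / screenScale; // zoom level without the pixel-density factor
NSString *filename = [NSString stringWithFormat:@"img_name_here%0.0f_%d_%d",
                      ceilf(zoomScale * 100), col, row];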