Add a UIImage on top of CATiledLayer - iPhone

Is it possible to draw a UIImage on top of a CATiledLayer? The idea is to mark a position on the view. I have used the PhotoScroller example from the Apple library and I am trying to draw a UIImage on top of the tileRect. Any help will be appreciated.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    /**** Trying to add a UIImage on top of CGRect rect. But not working. ****/
    CGRect pointRect = CGRectMake(100, 100, 32, 32);
    UIImage *image = [UIImage imageNamed:@"map-pointer32.png"];
    [image drawInRect:pointRect];

    // Get the scale from the context by getting the current transform matrix, then asking for
    // its "a" component, which is one of the two scale components. We could also ask for "d".
    // This assumes (safely) that the view is being scaled equally in both dimensions.
    CGFloat initialScale = CGContextGetCTM(context).a;
    NSString *value = [NSString stringWithFormat:@"%.3f", initialScale];
    CGFloat scale = [value floatValue];

    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    CGSize tileSize = tiledLayer.tileSize;

    // Even at scales lower than 100%, we are drawing into a rect in the coordinate system of the full
    // image. One tile at 50% covers the width (in original image coordinates) of two tiles at 100%.
    // So at 50% we need to stretch our tiles to double the width and height; at 25% we need to stretch
    // them to quadruple the width and height; and so on.
    // (Note that this means that we are drawing very blurry images as the scale gets low. At 12.5%,
    // our lowest scale, we are stretching about 6 small tiles to fill the entire original image area.
    // But this is okay, because the big blurry image we're drawing here will be scaled way down before
    // it is displayed.)
    tileSize.width /= scale;
    tileSize.height /= scale;

    // Calculate the rows and columns of tiles that intersect the rect we have been asked to draw.
    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            UIImage *tile = [self tileForScale:scale row:row col:col];
            CGRect tileRect = CGRectMake(tileSize.width * col, tileSize.height * row,
                                         tileSize.width, tileSize.height);
            // If the tile would stick outside of our bounds, we need to truncate it so as to avoid
            // stretching out the partial tiles at the right and bottom edges.
            tileRect = CGRectIntersection(self.bounds, tileRect);
            [tile drawInRect:tileRect];
            if (self.annotates) {
                // [[UIColor whiteColor] set];
                CGContextSetLineWidth(context, 6.0 / scale);
                CGContextStrokeRect(context, tileRect);
            }
        }
    }
}

Probably the best solution is to put the image in a separate view, without any CATiledLayer involved.
You can add an empty container view, then add both the CATiledLayer-backed view and the UIImageView as subviews of that container.
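A minimal sketch of that arrangement (the names containerView and tiledView are illustrative; the tiled view is assumed to be the PhotoScroller-style view you already have):
UIView *containerView = [[UIView alloc] initWithFrame:self.view.bounds];
[containerView addSubview:tiledView]; // the CATiledLayer-backed view

UIImageView *markerView = [[UIImageView alloc]
    initWithImage:[UIImage imageNamed:@"map-pointer32.png"]];
markerView.frame = CGRectMake(100, 100, 32, 32); // position in container coordinates
[containerView addSubview:markerView]; // drawn on top, untouched by tile redraws

[self.view addSubview:containerView];
Because the marker lives in its own view, the tiled layer can redraw tiles at any zoom level without ever clearing the image.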

Related

Autoresize image while preserving centre

I have a UIImageView in my Nib file that stretches the width of the screen. What I'd like to do is have the middle third of the image remain the same height and width when autoresizing is done (with device rotation) and have only the 1st and 3rd thirds of the image stretched.
Any ideas how to do this?
There is no straightforward way to do this as far as I know. Here is one way:
Instead of one image view, create three UIImageViews of equal width and height (each one third of the original image). The upper and lower image views stick to the top and bottom edges respectively; the middle one gets flexible top and bottom margins. Set the contentMode of the middle image view to UIViewContentModeScaleAspectFit (or UIViewContentModeCenter, depending on how you want to handle rotation) and of the other two to UIViewContentModeScaleToFill. You can set all of these properties from IB.
Now you need to set the image of each one from code. In viewDidLoad, slice the UIImage into three parts using the solutions from this post, or with the following snippet:
-(NSMutableArray *)getImagesFromImage:(UIImage *)image withRow:(NSInteger)rows withColumn:(NSInteger)columns
{
    // Note: "rows" is the number of slices across and "columns" the number of
    // slices down, so three full-width strips are rows:1 columns:3.
    NSMutableArray *images = [NSMutableArray array];
    CGSize imageSize = image.size;
    CGFloat xPos = 0.0, yPos = 0.0;
    CGFloat width = imageSize.width / rows;
    CGFloat height = imageSize.height / columns;
    for (int y = 0; y < columns; y++) {
        xPos = 0.0;
        for (int x = 0; x < rows; x++) {
            CGRect rect = CGRectMake(xPos, yPos, width, height);
            CGImageRef cImage = CGImageCreateWithImageInRect([image CGImage], rect);
            UIImage *dImage = [[UIImage alloc] initWithCGImage:cImage];
            CGImageRelease(cImage); // initWithCGImage: retains it; avoid leaking
            [images addObject:dImage];
            [dImage release]; // MRC: the array retains it
            xPos += width;
        }
        yPos += height;
    }
    return images;
}
You may need to tweak a few things, but hopefully you get the idea.
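For example, a hypothetical viewDidLoad that feeds the three slices into the image views (the outlet names and the image name are assumptions):
- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *full = [UIImage imageNamed:@"banner.png"]; // illustrative image
    // rows:1 columns:3 produces three full-width horizontal strips, top to bottom
    NSMutableArray *slices = [self getImagesFromImage:full withRow:1 withColumn:3];
    self.topImageView.image = [slices objectAtIndex:0];
    self.middleImageView.image = [slices objectAtIndex:1];
    self.bottomImageView.image = [slices objectAtIndex:2];
}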
If you have the option, you can pre-split the image into three parts using Photoshop/GIMP and place them in the bundle. In that case you don't need to split the image in code, and everything can be done from IB.
Hope this helps :)

Coordinate system for CIFaceFeature

I'm using CIFaceFeature (per the CIFeature class reference) for face detection, and I'm more than a little confused by Core Graphics coordinates versus regular UIKit coordinates. This is my code:
UIImage *mainImage = [UIImage imageNamed:@"facedetectionpic.jpg"];
CIImage *image = [[CIImage alloc] initWithImage:mainImage];
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
NSArray *features = [detector featuresInImage:image];
CGRect faceRect;
for (CIFaceFeature *feature in features)
{
    faceRect = [feature bounds];
}
It's pretty standard. Now, according to the official documentation:
bounds: The rectangle that holds the discovered feature. (read-only)
Discussion: The rectangle is in the coordinate system of the image.
When I output faceRect directly, I get {{136, 427}, {46, 46}}. When I apply a CGAffineTransform to flip it the right way up, I get negative coordinates, which doesn't seem right. The image I am working with is in an image view.
So which coordinate system are these coordinates in? The image's? The image view's? Core Graphics coordinates? Regular UIKit coordinates?
I finally figured it out. As the documentation points out, the rectangle drawn by the CIFaceFeature is in the coordinate system of the image. This means that the rectangle has the coordinates of the original image. If you have the Autoresize option checked, that means your image gets scaled down to fit in UIImageView. So you need to convert the old image coordinates to the new image coordinates.
This nifty piece of code that I adapted from here does that for you:
- (CGPoint)convertPointFromImage:(CGPoint)imagePoint {
    CGPoint viewPoint = imagePoint;
    CGSize imageSize = self.setBody.image.size;
    CGSize viewSize = self.setBody.bounds.size;
    CGFloat ratioX = viewSize.width / imageSize.width;
    CGFloat ratioY = viewSize.height / imageSize.height;
    UIViewContentMode contentMode = self.setBody.contentMode;
    if (contentMode == UIViewContentModeScaleAspectFit ||
        contentMode == UIViewContentModeScaleAspectFill)
    {
        CGFloat scale;
        if (contentMode == UIViewContentModeScaleAspectFit) {
            scale = MIN(ratioX, ratioY);
        }
        else /* UIViewContentModeScaleAspectFill */ {
            scale = MAX(ratioX, ratioY);
        }
        viewPoint.x *= scale;
        viewPoint.y *= scale;
        // Offset for the centering of the scaled image within the view.
        viewPoint.x += (viewSize.width - imageSize.width * scale) / 2.0f;
        viewPoint.y += (viewSize.height - imageSize.height * scale) / 2.0f;
    }
    return viewPoint;
}
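As a hedged usage sketch (mainImage and faceRect come from the question's code above; the flip assumes Core Image's bottom-left origin):
CGSize imageSize = mainImage.size;
CGRect flipped = faceRect;
// Core Image measures from the bottom-left corner; UIKit from the top-left.
flipped.origin.y = imageSize.height - faceRect.origin.y - faceRect.size.height;
// Map both corners through the image-to-view conversion above.
CGPoint topLeft = [self convertPointFromImage:flipped.origin];
CGPoint bottomRight = [self convertPointFromImage:
                          CGPointMake(CGRectGetMaxX(flipped), CGRectGetMaxY(flipped))];
CGRect viewRect = CGRectMake(topLeft.x, topLeft.y,
                             bottomRight.x - topLeft.x,
                             bottomRight.y - topLeft.y);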

Draw circle on rectangle

I have a UIView subclass, and in its drawing method I want to draw rectangles first and sometimes a circle.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    if ([WhatToDraw isEqual:@"Fields"]) {
        [self DrawField:context];
    }
    if ([WhatToDraw isEqual:@"Ball"]) {
        [self DrawBall:context x:20 y:20];
    }
}
-(void)DrawBall:(CGContextRef)context x:(float)x y:(float)y
{
    UIGraphicsPushContext(context);
    CGRect rect = CGRectMake(x, y, 25, 25);
    CGContextClearRect(context, rect);
    CGContextFillEllipseInRect(context, rect);
}
-(void)DrawField:(CGContextRef)context
{
    columns = 6;
    float offset = 5;
    float boardWidth = self.frame.size.width;
    float allOffset = (columns + 2) * offset;
    float currentX = 10;
    float currentWidth = (boardWidth - allOffset) / columns;
    float currentHeight = currentWidth;
    self.fieldsArray = [[NSMutableArray alloc] init];
    // create a new dynamic button board
    for (int columnIndex = 0; columnIndex < columns; columnIndex++) {
        float currentY = offset;
        for (int rowIndex = 0; rowIndex < columns; rowIndex++) {
            UIGraphicsPushContext(context);
            // create a new field
            CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);
            CGContextBeginPath(context);
            CGRect rect = CGRectMake(currentX, currentY, currentWidth, currentHeight);
            CGContextAddRect(context, rect);
            CGContextFillPath(context);
            currentY = currentY + offset + currentHeight;
        }
        currentX = currentX + offset + currentWidth;
    }
}
I also have a method that changes what to draw:
-(void)Draw:(NSString *)Thing
{
    self.WhatToDraw = Thing;
    [self setNeedsDisplay];
}
Drawing the rectangles (Fields) works fine, but when I tap the button to draw the circle, all the rectangles disappear and only the circle is drawn.
How can I draw the circle on top of the existing rectangles?
The Problem
Your problem is that when a UIView redraws a region, as marked by setNeedsDisplay or setNeedsDisplayInRect:, it completely clears that region before executing your drawing code. This means that unless you draw both the rectangles and the circle in a single drawing operation within drawRect:, you will never see the two drawn together in the area you redraw, whether that is the entire view bounds (setNeedsDisplay) or a specific area (setNeedsDisplayInRect:).
The Solutions
There's no reason why you can't draw both the rectangles and circle each time within drawRect and optimise the performance of the drawing by only redrawing the regions necessary with setNeedsDisplayInRect.
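For instance, a sketch of that single pass, reusing the names from the question (the field is always repainted; the ball is layered on top when requested):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self DrawField:context]; // always repaint the rectangles
    if ([WhatToDraw isEqual:@"Ball"]) {
        [self DrawBall:context x:20 y:20]; // then the circle on top
    }
}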
Alternatively you could break the content up using CALayers, with the rectangles in one layer and the circle in another. This would also let you leverage Core Animation, which provides a simple and effective way to manipulate onscreen layers with implicit animations such as moving, resizing, and changing colour.
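A minimal sketch of that split (CAShapeLayer is one convenient choice; fieldView is assumed to be the view that draws the rectangles, and QuartzCore must be imported):
CAShapeLayer *ballLayer = [CAShapeLayer layer];
ballLayer.frame = CGRectMake(20, 20, 25, 25);
ballLayer.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, 25, 25)].CGPath;
ballLayer.fillColor = [UIColor blackColor].CGColor;
[fieldView.layer addSublayer:ballLayer]; // sits above the view's own drawing

// Later, an implicit animation moves the ball without redrawing the field:
ballLayer.position = CGPointMake(100, 150);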
My guess: the CGContextClearRect call in your DrawBall method is responsible for the rectangles disappearing. From the Apple documentation: if the provided context is a window or bitmap context, Quartz effectively clears the rectangle.
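Based on that, a sketch of DrawBall without the clear (the fill color is an assumption, since the original never set one):
-(void)DrawBall:(CGContextRef)context x:(float)x y:(float)y
{
    CGRect rect = CGRectMake(x, y, 25, 25);
    // Skip CGContextClearRect: it punches a transparent hole in the backing store.
    CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1.0); // assumed ball color
    CGContextFillEllipseInRect(context, rect);
}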

Using CATiledLayer to display a large image that has been programmatically split into tiles

I'm running into an issue using CATiledLayer... I have a large image, a map, that is 4726 x 2701. I need to break this image into tiles and be able to zoom in to see a lot of detail, but also to zoom all the way out and see the entire map. With my current implementation it works perfectly when the zoom scale of the scroll view is 1.0 (the maximum zoom scale), but if you zoom out the tiles are placed incorrectly.
Here the zoom scale is set to 1.0. The map looks perfect.
But when the zoom scale is all the way out (0.2, I believe) the map is all messed up. I should be seeing the entire map, not just two tiles.
Here is how I'm getting the tiles from the large image:
- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col {
    float tileSize = 256.0f;
    CGRect subRect = CGRectMake(col * tileSize, row * tileSize, tileSize, tileSize);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([mapImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    return tileImage;
}
I'm displaying the tiles exactly like Apple does in the PhotoScroller application.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat scale = CGContextGetCTM(context).a;
    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    CGSize tileSize = tiledLayer.tileSize;
    tileSize.width /= scale;
    tileSize.height /= scale;
    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);
    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            UIImage *tile = [self tileForScale:scale row:row col:col];
            CGRect tileRect = CGRectMake(tileSize.width * col, tileSize.height * row,
                                         tileSize.width, tileSize.height);
            tileRect = CGRectIntersection(self.bounds, tileRect);
            [tile drawInRect:tileRect];
        }
    }
}
Also, here is the code where I'm setting the levels of detail, which returns 4 for my image:
- (id)initWithImageName:(NSString *)name andImage:(UIImage *)image size:(CGSize)size {
    self = [super initWithFrame:CGRectMake(0, 0, size.width, size.height)];
    if (self) {
        _imageName = [name retain];
        mapImage = [image retain];
        CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
        tiledLayer.levelsOfDetail = [self zoomLevelsForSize:mapImage.size];
    }
    return self;
}
- (NSUInteger)zoomLevelsForSize:(CGSize)imageSize {
    int zLevels = 1;
    while (YES) {
        imageSize.width /= 2.0f;
        imageSize.height /= 2.0f;
        if (imageSize.height < 256.0 || imageSize.width < 256.0) break;
        ++zLevels;
    }
    return zLevels;
}
I'm assuming it has something to do with the tile size and the zoom scale, but I really have no clue how to solve the problem. I can't seem to find any solutions to my issue on Google. Any help would be great! :)
You have a leak in - (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col.
CGImageRef tiledImage = CGImageCreateWithImageInRect([mapImage CGImage], subRect);
Don't forget to release the CGImageRef: CGImageRelease(tiledImage);
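That is, a sketch of the method with the release added (UIImage retains the CGImage, so releasing our reference is safe):
- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col {
    float tileSize = 256.0f;
    CGRect subRect = CGRectMake(col * tileSize, row * tileSize, tileSize, tileSize);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([mapImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // balance the Create call
    return tileImage;
}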
I think there's a bug in the sample code where the scale is set incorrectly. I have the following:
CGFloat scale = CGContextGetCTM(context).a;
scale = 1.0f / roundf(1.0f / scale);
if (scale == INFINITY) {
    scale = 1.0f;
}
The rounding snaps the CTM's scale factor to the nearest 1/n, so tile lookups match a discrete level of detail rather than an arbitrary intermediate zoom scale.

Drawing in CATiledLayer with CoreGraphics CGContextDrawImage

I would like to use a CATiledLayer in iPhone OS 3.1.3, and to do so all drawing in -(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context has to be done with Core Graphics only.
Now I run into the problem of the flipped coordinate system on the iPhone, and there are some suggestions for fixing it using transforms:
Image is drawn upside down
CATiledLayer or CALayer not working
My problem is that I cannot get it to work. I started with the PhotoScroller sample code and replaced the drawing method with Core Graphics calls only. It looks like this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
    CGContextSaveGState(context);
    CGRect rect = CGContextGetClipBoundingBox(context);
    CGFloat scale = CGContextGetCTM(context).a;
    CGContextConcatCTM(context, CGAffineTransformMakeTranslation(0.f, rect.size.height));
    CGContextConcatCTM(context, CGAffineTransformMakeScale(1.f, -1.f));
    CATiledLayer *tiledLayer = (CATiledLayer *)layer;
    CGSize tileSize = tiledLayer.tileSize;
    tileSize.width /= scale;
    tileSize.height /= scale;
    // calculate the rows and columns of tiles that intersect the rect we have been asked to draw
    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);
    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            // if (row == 0) continue;
            UIImage *tile = [self tileForScale:scale row:row col:col];
            CGImageRef tileRef = [tile CGImage];
            CGRect tileRect = CGRectMake(tileSize.width * col, tileSize.height * row,
                                         tileSize.width, tileSize.height);
            // if the tile would stick outside of our bounds, we need to truncate it so as to avoid
            // stretching out the partial tiles at the right and bottom edges
            tileRect = CGRectIntersection(self.bounds, tileRect);
            NSLog(@"row:%d, col:%d, x:%.0f y:%.0f, height:%.0f, width:%.0f", row, col,
                  tileRect.origin.x, tileRect.origin.y, tileRect.size.height, tileRect.size.width);
            //[tile drawInRect:tileRect];
            CGContextDrawImage(context, tileRect, tileRef);
            // draw the row and column index in the tile and mark the origin of the tile with a red line
            if (self.annotates) {
                CGContextSetStrokeColorWithColor(context, [[UIColor whiteColor] CGColor]);
                CGContextSetLineWidth(context, 6.0 / scale);
                CGContextStrokeRect(context, tileRect);
                CGContextSetStrokeColorWithColor(context, [[UIColor redColor] CGColor]);
                CGContextMoveToPoint(context, tileRect.origin.x, tileRect.origin.y);
                CGContextAddLineToPoint(context, tileRect.origin.x + 100.f, tileRect.origin.y + 100.f);
                CGContextStrokePath(context);
                CGContextSetStrokeColorWithColor(context, [[UIColor redColor] CGColor]);
                CGContextSetFillColorWithColor(context, [[UIColor whiteColor] CGColor]);
                CGContextSelectFont(context, "Courier", 128, kCGEncodingMacRoman);
                CGContextSetTextDrawingMode(context, kCGTextFill);
                CGContextSetShouldAntialias(context, true);
                char text[30];
                int length = sprintf(text, "row:%d col:%d", row, col);
                CGContextSaveGState(context);
                CGContextShowTextAtPoint(context, tileRect.origin.x + 110.f, tileRect.origin.y + 100.f, text, length);
                CGContextRestoreGState(context);
            }
        }
    }
    CGContextRestoreGState(context);
}
As you can see, I am using a scale transform to invert the coordinate system and a translation transform to shift the origin to the lower left corner. The images draw correctly, but only the first row of tiles is drawn. I think there is a problem with the translation operation or the way the coordinates of the tiles are computed.
This is how it looks:
I am a bit confused by all these transformations.
Bonus question:
How would one handle Retina display images in Core Graphics?
EDIT:
To get it working on the retina display I just took the original method from the sample code to provide the images:
- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col
{
    // we use "imageWithContentsOfFile:" instead of "imageNamed:" here because we don't want UIImage to cache our tiles
    NSString *tileName = [NSString stringWithFormat:@"%@_%d_%d_%d", imageName, (int)(scale * 1000), col, row];
    NSString *path = [[NSBundle mainBundle] pathForResource:tileName ofType:@"png"];
    UIImage *image = [UIImage imageWithContentsOfFile:path];
    return image;
}
In principle the scale of the display is ignored, since Core Graphics works in pixels, not points; when asked to draw more pixels, more tiles (or sublayers) are used to fill the screen.
Thanks a lot,
Thomas
Thomas, I started by following the WWDC 2010 ScrollView talk, and there is little or no documentation on working within drawLayer:inContext: for iOS 3.x. I had the same issues as you when I moved the demo code from drawRect: across to drawLayer:inContext:.
Some investigation showed me that within drawLayer:inContext:, the size and offset of the rect returned from CGContextGetClipBoundingBox(context) is exactly the region you are asked to draw, whereas drawRect: gives you the whole bounds.
Knowing this you can remove the row and column iteration, as well as the CGRect intersection for the edge tiles and just draw to the rect once you've translated the context.
Here's what I've ended up with:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGRect rect = CGContextGetClipBoundingBox(ctx);
    CGFloat scale = CGContextGetCTM(ctx).a;
    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    CGSize tileSize = tiledLayer.tileSize;
    tileSize.width /= scale;
    tileSize.height /= scale;
    int col = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int row = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);
    CGImageRef image = [self imageForScale:scale row:row col:col];
    if (NULL != image) {
        CGContextTranslateCTM(ctx, 0.0, rect.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        rect = CGContextGetClipBoundingBox(ctx);
        CGContextDrawImage(ctx, rect, image);
        CGImageRelease(image);
    }
}
Notice that rect is redefined after the TranslateCTM and ScaleCTM.
And for reference, here is my imageForScale:row:col: method:
- (CGImageRef)imageForScale:(CGFloat)scale row:(int)row col:(int)col {
    CGImageRef image = NULL;
    CGDataProviderRef provider = NULL;
    NSString *filename = [NSString stringWithFormat:@"img_name_here%0.0f_%d_%d", ceilf(scale * 100), col, row];
    NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"png"];
    if (path != nil) {
        NSURL *imageURL = [NSURL fileURLWithPath:path];
        provider = CGDataProviderCreateWithURL((CFURLRef)imageURL);
        image = CGImageCreateWithPNGDataProvider(provider, NULL, FALSE, kCGRenderingIntentDefault);
        CFRelease(provider);
    }
    return image;
}
There's still a bit of work to be done on these two methods to support high resolution graphics properly, but it does look nice on everything but an iPhone 4.
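One known follow-up for Retina screens (this detail comes from later revisions of Apple's PhotoScroller sample, not from the answer above): UIKit resets a view's contentScaleFactor to 2.0 on high resolution displays, which makes a CATiledLayer-backed view request wrongly sized tiles, so the sample pins it to 1.0:
// In the CATiledLayer-backed view; keeps tile geometry in 1x units.
- (void)setContentScaleFactor:(CGFloat)contentScaleFactor
{
    [super setContentScaleFactor:1.f];
}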