What I am doing: I downloaded code for a calendar, and now I want to show images on its tiles (one per date).
What I am trying is shown in the code below:
- (void)drawTextInContext:(CGContextRef)ctx
{
    CGContextSaveGState(ctx);
    CGFloat width = self.bounds.size.width;
    CGFloat height = self.bounds.size.height;
    CGFloat numberFontSize = floorf(0.3f * width);
    CGContextSetFillColorWithColor(ctx, kDarkCharcoalColor);
    CGContextSetTextDrawingMode(ctx, kCGTextClip);
    for (NSInteger i = 0; i < [self.text length]; i++) {
        NSString *letter = [self.text substringWithRange:NSMakeRange(i, 1)];
        CGSize letterSize = [letter sizeWithFont:[UIFont boldSystemFontOfSize:numberFontSize]];
        CGContextSaveGState(ctx); // I will need to undo this clip after the letter's gradient has been drawn
        [letter drawAtPoint:CGPointMake(4.0f + (letterSize.width * i), 0.0f) withFont:[UIFont boldSystemFontOfSize:numberFontSize]];
        if ([self.date isToday]) {
            CGContextSetFillColorWithColor(ctx, kWhiteColor);
            CGContextFillRect(ctx, self.bounds);
        } else {
            // CGContextDrawLinearGradient(ctx, TextFillGradient, CGPointMake(0,0), CGPointMake(0, height/3), kCGGradientDrawsAfterEndLocation);
            CGDataProviderRef dataProvider = CGDataProviderCreateWithFilename("left-arrow.png");
            CGImageRef image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, NO, kCGRenderingIntentDefault);
            //UIImage *image = [UIImage imageNamed:@"left-arrow.png"];
            //CGImageRef imageRef = image.CGImage;
            CGContextDrawImage(ctx, CGRectMake(8.0f + (letterSize.width * i), 0.0f, 5, 5), image);
            //im.image = [UIImage imageNamed:@"left-arrow.png"];
        }
        CGContextRestoreGState(ctx); // get rid of the clip for the current letter
    }
    CGContextRestoreGState(ctx);
}
In the else branch I want to show images on the tile, which is why I am converting image objects to CGImageRef.
Please help me. I am not sure whether this should be done this way or some other way; please suggest your approach.
Thanks a lot.
The file path of the image seems to be the problem. You can retrieve the correct path with the NSBundle methods. Also, you're leaking a lot of memory because you don't release your images and data providers. To make a long story short, try this:
[[UIImage imageNamed:@"left-arrow.png"] drawInRect:...]
or even simpler:
[[UIImage imageNamed:@"left-arrow.png"] drawAtPoint:...]
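For example, the whole else branch could shrink to something like this (a minimal sketch, reusing letterSize and i plus the 5x5 rect from the question's code; imageNamed: handles the bundle lookup and caching for you):
} else {
    // UIImage resolves the file inside the app bundle and caches it,
    // so there is no data provider or CGImage to release here.
    UIImage *arrow = [UIImage imageNamed:@"left-arrow.png"];
    [arrow drawInRect:CGRectMake(8.0f + (letterSize.width * i), 0.0f, 5.0f, 5.0f)];
}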
If I override drawRect: in order to display an image and place a dynamically generated overlay on it (see code), the image is drawn very blurry whenever I scale it up.
The image is composed of two pieces: one drawn from a PNG (whose original size is 2x the wanted one, so it should not give problems when scaled, but it does), and the other generated dynamically according to the rect size, so it should also adapt to the current rect size, but it doesn't.
Any help?
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextDrawImage(ctx, CGRectMake(0, 0, rect.size.width, rect.size.height), [UIImage imageNamed:@"actionBg.png"].CGImage);
    // generate the overlay
    if ([self isActive] == NO && self.fullDelay != 0) { // TODO: remove fullDelay check!
        UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
        CGContextRef overlayCtx = UIGraphicsGetCurrentContext();
        int segmentSize = (rect.size.height / [self fullDelay]);
        for (int i = 0; i < [self fullDelay]; i++) {
            float alpha = 0.9 - (([self fullDelay] * 0.1) - (i * 0.1));
            [[UIColor colorWithRed:120.0/255.0 green:14.0/255.0 blue:14.0/255.0 alpha:alpha] setFill];
            if (currentDelay > i) {
                CGRect r = CGRectMake(0, i * segmentSize, rect.size.width, segmentSize);
                CGContextFillRect(overlayCtx, r);
            }
            [[UIColor colorWithRed:1 green:1 blue:1 alpha:0.3] setFill];
            CGRect line = CGRectMake(0, (i * segmentSize) + segmentSize - 1, rect.size.width, 1);
            CGContextFillRect(overlayCtx, line);
        }
        UIImage *overlay = UIGraphicsGetImageFromCurrentImageContext();
        UIImage *overlayMasked = [TDUtilities maskImage:overlay withMask:[UIImage imageNamed:@"actionMask.png"]];
        // prevent the drawings from being flipped
        CGContextTranslateCTM(overlayCtx, 0, rect.size.height);
        CGContextScaleCTM(overlayCtx, 1.0, -1.0);
        CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
        CGContextDrawImage(ctx, rect, overlayMasked.CGImage);
        CGContextSetBlendMode(ctx, kCGBlendModeNormal);
        UIGraphicsEndImageContext();
    }
}
The problem is that you are drawing overlayMasked as a CGImage with CGContextDrawImage, which knows nothing about scale. Either you have to double the size manually when you're in a double-scale situation, or you should draw via UIImage, which knows about scale.
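For instance, both CGContextDrawImage calls above could become UIImage draws (a sketch, reusing rect and overlayMasked from the question's code):
// UIImage honors its scale property, so a @2x PNG is not blurred on Retina:
[[UIImage imageNamed:@"actionBg.png"] drawInRect:rect];
// ... and for the overlay, instead of CGContextDrawImage plus blend-mode calls:
[overlayMasked drawInRect:rect blendMode:kCGBlendModeMultiply alpha:1.0];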
I'm programming in Objective-C. I have an image of a line (1 x 30 pixels).
How can I get a UIImage (50 x 30) from this line?
Create a CGBitmapContext with a size of 50 x 30; then you can just draw that image into the context by using CGContextDrawImage.
After that, use CGBitmapContextCreateImage and [UIImage imageWithCGImage:] to create the UIImage.
CGContextRef CreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    bitmapBytesPerRow = (pixelsWide * 4); // RGBA
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(NULL,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    NSCAssert(context != NULL, @"cannot create bitmap context");
    CGColorSpaceRelease(colorSpace);
    return context;
}
CGContextRef context = CreateBitmapContext(50, 30);
UIImage *yourLineImage = ...;
CGImageRef cgImg = [yourLineImage CGImage];
for (int i = 0; i < 50; i++) {
    CGRect rect;
    rect.origin.x = i;
    rect.origin.y = 0;
    rect.size.width = 1;
    rect.size.height = 30;
    CGContextDrawImage(context, rect, cgImg);
}
CGImageRef output = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:output];
CGImageRelease(output);    // both the CGImage and the context were Create'd,
CGContextRelease(context); // so release them to avoid leaks
If your line has a simple color, try this lazy method:
UIImageView *line = [[UIImageView alloc] initWithFrame:CGRectMake(10, 10, 50, 30)];
[line setImage:[UIImage imageNamed:@"your gray line"]];
[self.view addSubview:line];
You can use +[UIColor colorWithPatternImage] in iOS:
NSString *path =
    [[NSBundle mainBundle] pathForResource:@"<# the pattern file #>" ofType:@"png"];
UIColor *patternColor = [UIColor colorWithPatternImage:
    [UIImage imageWithContentsOfFile:path]];
/* use patternColor anywhere as a regular UIColor instance */
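For example, the pattern can then be used anywhere a plain color would go (a sketch):
self.view.backgroundColor = patternColor; // tiles the pattern across the view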
It works better with seamless patterns. For OS X you can use the +[NSColor colorWithPatternImage] method.
If you just want to draw the image, you might want to try UIImage's drawInRect: method.
You'd typically call this from your custom UIView's drawRect:.
There are different approaches to drawing in Cocoa (and Cocoa Touch), so here's Apple's Drawing and Printing Guide for iOS.
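A minimal sketch of that approach (assuming a UIView subclass and a hypothetical image named "tile.png" in the bundle):
- (void)drawRect:(CGRect)rect
{
    UIImage *image = [UIImage imageNamed:@"tile.png"]; // hypothetical asset name
    [image drawInRect:self.bounds]; // UIKit handles flipping and screen scale
}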
I am working with OpenCV for detecting faces, and I want the face to be cropped once it is detected. So far I have detected the face and drawn a rect/ellipse around it on the iPhone.
Please help me crop the face in a circular/elliptical pattern.
- (UIImage *)opencvFaceDetect:(UIImage *)originalImage
{
    cvSetErrMode(CV_ErrModeParent);
    IplImage *image = [self CreateIplImageFromUIImage:originalImage];
    // Scaling down
    /*
     Creates IPL image (header and data) ---------------- cvCreateImage
     CVAPI(IplImage*) cvCreateImage( CvSize size, int depth, int channels );
     */
    IplImage *small_image = cvCreateImage(cvSize(image->width/2, image->height/2),
                                          IPL_DEPTH_8U, 3);
    /* Smooths and downsamples the image (Gaussian pyramid) -------- cvPyrDown */
    cvPyrDown(image, small_image, CV_GAUSSIAN_5x5);
    int scale = 2;
    // Load XML
    NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"];
    CvHaarClassifierCascade *cascade = (CvHaarClassifierCascade *)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL);
    // Check whether the cascade has loaded successfully. Else report an error and quit
    if (!cascade)
    {
        NSLog(@"ERROR: Could not load classifier cascade\n");
        //return;
    }
    // Allocate the memory storage
    CvMemStorage *storage = cvCreateMemStorage(0);
    // Clear the memory storage which was used before
    cvClearMemStorage(storage);
    CGColorSpaceRef colorSpace;
    CGContextRef contextRef;
    CGRect face_rect;
    // Find whether the cascade is loaded, to find the faces. If yes, then:
    if (cascade)
    {
        CvSeq *faces = cvHaarDetectObjects(small_image, cascade, storage, 1.1f, 3, 0, cvSize(20, 20));
        cvReleaseImage(&small_image);
        // Create canvas to show the results
        CGImageRef imageRef = originalImage.CGImage;
        colorSpace = CGColorSpaceCreateDeviceRGB();
        contextRef = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, originalImage.size.width * 4,
                                           colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
        //VIKAS
        CGContextDrawImage(contextRef, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), imageRef);
        CGContextSetLineWidth(contextRef, 4);
        CGContextSetRGBStrokeColor(contextRef, 1.0, 1.0, 1.0, 0.5);
        // Draw results on the image: draw all components of the face as small rectangles
        // Loop over the number of faces found.
        for (int i = 0; i < faces->total; i++)
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            // Calc the rect of faces
            // Create a new rectangle for drawing the face
            CvRect cvrect = *(CvRect *)cvGetSeqElem(faces, i);
            // CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef,
            //     CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale));
            face_rect = CGContextConvertRectToDeviceSpace(contextRef,
                CGRectMake(cvrect.x * scale, cvrect.y, cvrect.width * scale, cvrect.height * scale * 1.25));
            facedetectapp = (FaceDetectAppDelegate *)[[UIApplication sharedApplication] delegate];
            facedetectapp.grabcropcoordrect = face_rect;
            NSLog(@" FACE off %f %f %f %f", facedetectapp.grabcropcoordrect.origin.x, facedetectapp.grabcropcoordrect.origin.y, facedetectapp.grabcropcoordrect.size.width, facedetectapp.grabcropcoordrect.size.height);
            CGContextStrokeRect(contextRef, face_rect);
            //CGContextFillEllipseInRect(contextRef, face_rect);
            CGContextStrokeEllipseInRect(contextRef, face_rect);
            [pool release];
        }
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], face_rect);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    cvReleaseMemStorage(&storage);
    cvReleaseHaarClassifierCascade(&cascade);
    return returnImage;
}
Thanks
Vikas
There are a pile of blend modes to choose from, a few of which are useful for "masking". I believe this should do approximately what you want:
CGContextSaveGState(contextRef);
CGContextSetBlendMode(contextRef,kCGBlendModeDestinationIn);
CGContextFillEllipseInRect(contextRef,face_rect);
CGContextRestoreGState(contextRef);
"approximately" because it'll mask the entire context contents every time, thus doing the wrong thing for more than one face. To handle this case, use CGContextAddEllipseInRect() in the loop and CGContextFillPath() at the end.
You might also want to look at CGContextBeginTransparencyLayerWithRect().
Following is the answer I gave in "How to crop UIImage on oval shape or circle shape?" to make the image a circle. It works for me.
Download the support archive from http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
#import "UIImage+RoundedCorner.h"
#import "UIImage+Resize.h"
The following lines resize the image and round it with a radius:
UIImage *mask = [UIImage imageNamed:@"mask.jpg"];
mask = [mask resizedImage:CGSizeMake(47, 47) interpolationQuality:kCGInterpolationHigh];
mask = [mask roundedCornerImage:23.5 borderSize:1];
Hope it helps someone.
I am using different images and I want to include a change-color option. But I can't. Can anybody help me?
If you want to do image tinting, see UIImage+Tint.m in kballard/MGImageUtilities. If you want wholesale color replacement (e.g. treat an image as a silhouette and change the entire color to one flat color), see UIImage+Tint.m in mattgemmell/MGImageUtilities.
This is the latest and simplest way to do it (note that imageWithRenderingMode: requires iOS 7 or later):
theImageView.image = [theImageView.image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
[theImageView setTintColor:[UIColor redColor]];
One way to accomplish this is to desaturate your image, and add a tint on top of that image with the color you desire.
Desaturate
-(UIImage *) getImageWithUnsaturatedPixelsOfImage:(UIImage *)image {
    const int RED = 1, GREEN = 2, BLUE = 3;
    CGRect imageRect = CGRectMake(0, 0, image.size.width * 2, image.size.height * 2);
    int width = imageRect.size.width, height = imageRect.size.height;
    uint32_t *pixels = (uint32_t *)malloc(width * height * sizeof(uint32_t));
    memset(pixels, 0, width * height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [image CGImage]);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *)&pixels[y * width + x];
            // standard luminance weights for converting RGB to gray
            uint32_t gray = (0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE]);
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    UIImage *resultUIImage = [UIImage imageWithCGImage:newImage scale:2 orientation:0];
    CGImageRelease(newImage);
    return resultUIImage;
}
Overlay With Color
-(UIImage *) getImageWithTintedColor:(UIImage *)image withTint:(UIColor *)color withIntensity:(float)alpha {
    CGSize size = image.size;
    UIGraphicsBeginImageContextWithOptions(size, FALSE, 2);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [image drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:1.0];
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextSetBlendMode(context, kCGBlendModeOverlay);
    CGContextSetAlpha(context, alpha);
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(CGPointZero.x, CGPointZero.y, image.size.width, image.size.height));
    UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tintedImage;
}
How-To
//For a UIImageView
yourImageView.image = [self getImageWithUnsaturatedPixelsOfImage:yourImageView.image];
yourImageView.image = [self getImageWithTintedColor:yourImageView.image withTint:[UIColor redColor] withIntensity:0.7];
//For a UIImage
yourImage = [self getImageWithUnsaturatedPixelsOfImage:yourImage];
yourImage = [self getImageWithTintedColor:yourImage withTint:[UIColor redColor] withIntensity:0.7];
You can change the color of the tint to whatever you desire.
I would like to use a CATiledLayer in iPhone OS 3.1.3, and to do so all drawing in -(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context has to be done with Core Graphics only.
Now I run into the problems of the flipped coordinate system on the iPhone, and there are some suggestions on how to fix it using transforms:
Image is drawn upside down
CATiledLayer or CALayer not working
My problem is that I cannot get it to work. I started with the PhotoScroller sample code and replaced the drawing method with Core Graphics calls only. It looks like this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
    CGContextSaveGState(context);
    CGRect rect = CGContextGetClipBoundingBox(context);
    CGFloat scale = CGContextGetCTM(context).a;
    CGContextConcatCTM(context, CGAffineTransformMakeTranslation(0.f, rect.size.height));
    CGContextConcatCTM(context, CGAffineTransformMakeScale(1.f, -1.f));
    CATiledLayer *tiledLayer = (CATiledLayer *)layer;
    CGSize tileSize = tiledLayer.tileSize;
    tileSize.width /= scale;
    tileSize.height /= scale;
    // calculate the rows and columns of tiles that intersect the rect we have been asked to draw
    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);
    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            // if (row == 0) continue;
            UIImage *tile = [self tileForScale:scale row:row col:col];
            CGImageRef tileRef = [tile CGImage];
            CGRect tileRect = CGRectMake(tileSize.width * col, tileSize.height * row,
                                         tileSize.width, tileSize.height);
            // if the tile would stick outside of our bounds, we need to truncate it so as to avoid
            // stretching out the partial tiles at the right and bottom edges
            tileRect = CGRectIntersection(self.bounds, tileRect);
            NSLog(@"row:%d, col:%d, x:%.0f y:%.0f, height:%.0f, width:%.0f", row, col, tileRect.origin.x, tileRect.origin.y, tileRect.size.height, tileRect.size.width);
            //[tile drawInRect:tileRect];
            CGContextDrawImage(context, tileRect, tileRef);
            // just to draw the row and column index in the tile and mark the origin of the tile with a red line
            if (self.annotates) {
                CGContextSetStrokeColorWithColor(context, [[UIColor whiteColor] CGColor]);
                CGContextSetLineWidth(context, 6.0 / scale);
                CGContextStrokeRect(context, tileRect);
                CGContextSetStrokeColorWithColor(context, [[UIColor redColor] CGColor]);
                CGContextMoveToPoint(context, tileRect.origin.x, tileRect.origin.y);
                CGContextAddLineToPoint(context, tileRect.origin.x + 100.f, tileRect.origin.y + 100.f);
                CGContextStrokePath(context);
                CGContextSetStrokeColorWithColor(context, [[UIColor redColor] CGColor]);
                CGContextSetFillColorWithColor(context, [[UIColor whiteColor] CGColor]);
                CGContextSelectFont(context, "Courier", 128, kCGEncodingMacRoman);
                CGContextSetTextDrawingMode(context, kCGTextFill);
                CGContextSetShouldAntialias(context, true);
                char text[30];
                int length = sprintf(text, "row:%d col:%d", row, col);
                CGContextSaveGState(context);
                CGContextShowTextAtPoint(context, tileRect.origin.x + 110.f, tileRect.origin.y + 100.f, text, length);
                CGContextRestoreGState(context);
            }
        }
    }
    CGContextRestoreGState(context);
}
As you can see, I am using a scale transform to invert the coordinate system and a translation transform to shift the origin to the lower-left corner. The images draw correctly, but only the first row of tiles is being drawn. I think there is a problem with the translation operation or the way the tile coordinates are computed.
I am a bit confused by all these transformations.
Bonus question:
How would one handle the retina display pictures in core graphics?
EDIT: To get it working on the Retina display, I just took the original method from the sample code to provide the images:
- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col
{
    // we use "imageWithContentsOfFile:" instead of "imageNamed:" here because we don't want UIImage to cache our tiles
    NSString *tileName = [NSString stringWithFormat:@"%@_%d_%d_%d", imageName, (int)(scale * 1000), col, row];
    NSString *path = [[NSBundle mainBundle] pathForResource:tileName ofType:@"png"];
    UIImage *image = [UIImage imageWithContentsOfFile:path];
    return image;
}
In principle the scale of the display is ignored, since Core Graphics works in pixels rather than points; when asked to draw more pixels, more CATiledLayer tiles (or sublayers) are used to fill the screen.
Thanks a lot,
Thomas
Thomas, I started by following the WWDC 2010 ScrollView talk, and there is little or no documentation on working within drawLayer:inContext: for iOS 3.x. I had the same issues as you when I moved the demo code from drawRect: across to drawLayer:inContext:.
Some investigation showed me that within drawLayer:inContext:, the size and offset of the rect returned from CGContextGetClipBoundingBox(context) is exactly the tile you are meant to draw, whereas drawRect: gives you the whole bounds.
Knowing this, you can remove the row and column iteration, as well as the CGRect intersection for the edge tiles, and just draw into the rect once you've translated the context.
Here's what I've ended up with:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGRect rect = CGContextGetClipBoundingBox(ctx);
    CGFloat scale = CGContextGetCTM(ctx).a;
    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    CGSize tileSize = tiledLayer.tileSize;
    tileSize.width /= scale;
    tileSize.height /= scale;
    int col = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int row = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);
    CGImageRef image = [self imageForScale:scale row:row col:col];
    if (NULL != image) {
        CGContextTranslateCTM(ctx, 0.0, rect.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        rect = CGContextGetClipBoundingBox(ctx);
        CGContextDrawImage(ctx, rect, image);
        CGImageRelease(image);
    }
}
Notice that rect is redefined after the TranslateCTM and ScaleCTM, because the clip bounding box is reported in the current (now flipped) user space.
And for reference, here is my imageForScale:row:col: method:
- (CGImageRef)imageForScale:(CGFloat)scale row:(int)row col:(int)col {
    CGImageRef image = NULL;
    CGDataProviderRef provider = NULL;
    NSString *filename = [NSString stringWithFormat:@"img_name_here%0.0f_%d_%d", ceilf(scale * 100), col, row];
    NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"png"];
    if (path != nil) {
        NSURL *imageURL = [NSURL fileURLWithPath:path];
        provider = CGDataProviderCreateWithURL((CFURLRef)imageURL);
        image = CGImageCreateWithPNGDataProvider(provider, NULL, FALSE, kCGRenderingIntentDefault);
        CFRelease(provider);
    }
    return image;
}
There's still a bit of work to be done on these two functions in order to support high resolution graphics properly, but it does look nice on everything but an iPhone 4.