I've noticed examples of two ways to put images into core graphics. One is much simpler than the other. So what's the advantage of the second, more sophisticated, approach? Is it faster?
example 1
UIImage *myImage = [UIImage imageNamed:@"picture.png"];
CGRect imageRect = CGRectMake(70, 330, 40, 40);
[myImage drawInRect:imageRect];
// no release needed: imageNamed: returns an autoreleased image that you don't own
example 2
// Load image from application bundle
NSString* imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"picture.png"];
CGDataProviderRef provider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
// Draw image
CGContextTranslateCTM(context, 70, 370);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0, 0, 40, 40), image);
CGImageRelease(image);
Directly comparing the two, the advantage of the second method is that it doesn't put the image into the +[UIImage imageNamed:] cache. It's not possible for you to remove images from that cache, so you may be better off not using imageNamed:.
However, you can get the best of both worlds.
First off, use -[NSBundle URLForResource:withExtension:] to get the URL to the image. This simplifies that step over getting the resource directory path or URL and appending the filename to it yourself.
Second, use +[UIImage imageWithContentsOfFile:] to create the image, passing the path of the URL you obtained from the NSBundle (UIImage has no URL-based loader, so pass [imageURL path]).
From there, continue on as in your first example. The total is only four lines (the first two replacing the imageNamed: line of your first example).
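A minimal sketch of those four lines, assuming the drawing happens inside drawRect:, where UIKit has already set up the current context:
// Locate the image in the bundle, bypassing the imageNamed: cache
NSURL *imageURL = [[NSBundle mainBundle] URLForResource:@"picture" withExtension:@"png"];
UIImage *myImage = [UIImage imageWithContentsOfFile:[imageURL path]];
// Draw it exactly as in the first example; the image is autoreleased, so no release is needed
CGRect imageRect = CGRectMake(70, 330, 40, 40);
[myImage drawInRect:imageRect];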
In example 1 you are just going to show the image in the view as it is, which is fine for static images.
But in example 2 you are using Core Graphics to draw the image, so you can do more with it than show a static picture: you can rotate it, scale it, translate it, reduce its alpha value, and more.
So it depends on what you want. If you just want to show the image, the first option is enough. If you are a beginner writing utility apps, I don't think you will ever need the second option; but if you want to develop games or animate the image, you will want the second option.
Cheers
I don't know of any advantages to the second way. It has the disadvantage of involving more steps. If you use the first way and have a UIImage but need a CGImageRef, you can get it through the CGImage property. I find myself using the first way most often. When you have a UIImage, you can put it in your own UIImageView and add that to a view hierarchy, or assign it directly to UIKit objects such as a UITableViewCell's imageView. If you start from a CGImageRef (the second way), you have to call imageWithCGImage: first...
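For instance, a small sketch of moving between the two representations (the UIImageView at the end is just an arbitrary place to put the result):
UIImage *photo = [UIImage imageNamed:@"picture.png"];
// UIImage -> CGImageRef, when a Core Graphics call needs one
CGImageRef cgPhoto = photo.CGImage;
// CGImageRef -> UIImage, when UIKit wants an image object back
UIImage *rewrapped = [UIImage imageWithCGImage:cgPhoto];
// A UIImage drops straight into UIKit, e.g. into a UIImageView
UIImageView *imageView = [[[UIImageView alloc] initWithImage:rewrapped] autorelease];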
Related
I was very surprised not to find an answer to this one on Stack Overflow.
I have a vector path in PDF format, like the icons Safari or the Mac App Store typically use.
Now, I'd like to specify the fill color and a custom shadow in code, rather than creating and exporting images. I haven't figured out how to do this.
The shadow works, however the fill color does not.
Can anyone tell me how to do this?
Current Code
[NSGraphicsContext saveGraphicsState];
{
// Has no effect
[[NSColor colorWithCalibratedRed:0.92f green:0.97f blue:0.98f alpha:1.00f] setFill];
// Neither has this
[[NSColor colorWithCalibratedRed:0.92f green:0.97f blue:0.98f alpha:1.00f] set];
NSShadow *shadow = [NSShadow new];
[shadow setShadowOffset:NSMakeSize(0, -1)];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:3.0];
[shadow set];
[image drawInRect:imgRect
fromRect:NSZeroRect
operation:NSCompositeXOR
fraction:1.0
respectFlipped:YES
hints:nil];
}
[NSGraphicsContext restoreGraphicsState];
In terms of calling drawInRect:..., the image "is what it is". Setting the fill and stroke affects only primitive drawing operations. A good way to think about this is to realize that all images, vector or raster, have to behave the same way; it would be weird for the current fill color set on the context to affect the drawing of a raster-based image, right? Same idea -- the image is the image. The vector image might also contain multiple paths, each with a different fill. It wouldn't make sense for those to be overridden by the fill color set on the context either.
The shadow works regardless because it's effectively a compositing operation; drawing a given image with a given shadow setting produces the same shadow whether the image is raster-based or its vector equivalent.
In short, if you want to change the contents of the image, you're going to have to write the code to extract the vectors from the image and then draw them as primitives.
Alternately, if all you want is to fill the image's filled areas with a single color, you could use the vector image to set a mask on the context, then set the color on the context and fill. That might produce the desired effect.
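A rough sketch of that idea, assuming image and imgRect are the same variables as in the question; the vector image is rasterized once so Quartz can use its alpha channel as the clipping mask:
[NSGraphicsContext saveGraphicsState];
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
// Rasterize the vector image at the destination size
NSRect maskRect = imgRect;
CGImageRef maskImage = [image CGImageForProposedRect:&maskRect
                                             context:[NSGraphicsContext currentContext]
                                               hints:nil];
// Clip to the image's shape, then fill the clipped area with the desired color
CGContextClipToMask(ctx, NSRectToCGRect(maskRect), maskImage);
CGContextSetRGBFillColor(ctx, 0.92, 0.97, 0.98, 1.0);
CGContextFillRect(ctx, NSRectToCGRect(maskRect));
[NSGraphicsContext restoreGraphicsState];
Note that filling through a mask colors every filled area with one flat color, so any per-path colors inside the PDF are discarded, which matches the caveat above.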
The size of a UIImage in my app is (320, 460).
I created another UIImage object using
- (id)initWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation
I assigned orientation to UIImageOrientationLeft.
Then I printed the new UIImage object's size; the result was (460, 320).
It has already been rotated to the left.
I needed to store the UIImage in my document directory.
NSData *imageData = UIImagePNGRepresentation(rotateImageView);
NSString * path = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
[imageData writeToFile:[path stringByAppendingPathComponent:@"test.png"] atomically:NO];
But when I load the UIImage object back from "test.png",
its size is (320, 460) again; it has rotated back to its original orientation.
I want it to be stored as (460, 320).
Did I make a mistake somewhere?
Thanks!
I've run into this problem as well. When you pass image orientations around within Apple's code, you don't actually rotate any pixel data. Rather, an enum value is basically stored alongside the image. Many of Apple's image renderers are smart enough to read this enum value and use it to display the image properly. So the code snippets you show just change this enum value. Renderers that respect this value will display what you want, while many other renderers will ignore it.
There are a couple solutions available.
First, if you're displaying an image through iOS, you can use the transform property of UIImageView along with CGAffineTransformMakeRotation to get the desired orientation.
Second, you could actually rotate the raw pixel data, which can be accomplished like this:
How to rotate image file?
I would recommend the first solution, since it's easier to code and more efficient; a quick sketch follows. However, if you will be sharing these images outside of iOS, the second approach will give more reliable results.
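A minimal sketch of the first approach, assuming originalImage is the UIImage from the question and self.view is wherever it should appear:
// Display the image rotated a quarter turn without touching its pixel data
UIImageView *imageView = [[[UIImageView alloc] initWithImage:originalImage] autorelease];
imageView.transform = CGAffineTransformMakeRotation(M_PI_2); // flip the sign if it rotates the wrong way
[self.view addSubview:imageView];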
My app stores images as NSData objects. However, when these are loaded on an iPhone 4, they are displayed at double the size because the default scale factor is 1. I have two questions I would appreciate help with:
Is there any way to set the scale of the UIImage without using initWithCGImage:scale:orientation:?
If the answer to 1 is no, what is the most efficient way to load the NSData into a UIImage using the method above? At present it seems I will have to create a UIImage from the NSData and then create another UIImage using the method noted in 1 above.
Thank you.
UIImage is immutable, so I guess there is no way to do so without hacking.
UIImage is just a wrapper around a CGImage, so I think using initWithCGImage: as you describe won't have any noticeable performance impact. If you are really worried about it, you can load it into a CGImageRef first.
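A sketch of the two-step load, assuming imageData is the NSData holding the PNG bytes you saved:
// Decode the stored bytes once...
UIImage *raw = [UIImage imageWithData:imageData];
// ...then re-wrap the same CGImage with the screen's scale factor, so a
// 640x960-pixel bitmap is treated as a 320x480-point image on a Retina display
UIImage *scaled = [UIImage imageWithCGImage:raw.CGImage
                                      scale:[UIScreen mainScreen].scale
                                orientation:raw.imageOrientation];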
In the iPhone sample code "PhotoScroller" from WWDC 2010, they show how to do a pretty good mimic of the Photos app with scrolling, zooming, and paging of images. They also tile the images to show how to display high-resolution images while maintaining good performance.
Tiling is implemented in the sample code by grabbing pre-scaled and pre-cut images for different resolutions and placing them in the grid that makes up the entire image.
My question is: is there a way to tile images without having to manually go through all your photos and create "tiles"? How is it the Photos app is able to display large images on the fly?
Edit
Here is the code from Deepa's answer below:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // CGImageCreateWithImageInRect returns an owned reference
    return tileImage;
}
Here goes the piece of code for tiled image generation:
In the PhotoScroller source code, replace tileForScale:row:col: with the following,
where inImage is the image that you want to create tiles from:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // CGImageCreateWithImageInRect returns an owned reference
    return tileImage;
}
Regards,
Deepa
I've found this which may be of help: http://www.mikelin.ca/blog/2010/06/iphone-splitting-image-into-tiles-for-faster-loading-with-imagemagick/
You just run it in the Terminal as a shell script on your Mac.
Sorry Jonah, but I think that you cannot do what you want to.
I have been implementing a comic app using the same example as a reference and had the same doubt. Eventually I realized that even if you could load the image and cut it into tiles the first time you use it, you shouldn't. There are two reasons for that:
You do the tiling to save time and stay responsive, but loading and tiling a large image on the device takes a lot of time itself.
The previous reason is particularly painful the first time the user runs the app.
If these two reasons make no sense to you and you still want to do it, I would use Quartz to create the tiles. The CGImage function CGImageCreateWithImageInRect would be my starting point.
Deepa's answer above will load the entire image into memory as a UIImage (the inImage parameter of the function), defeating the purpose of tiling.
Many image formats support region-based decoding. Instead of loading the whole image into memory, decompressing the whole thing, and discarding all but the region of interest (ROI), you can load and decode only the ROI, on-demand. For the most part, this eliminates the need to pre-generate and save image tiles. I've never worked with ImageMagick but I'd be amazed if it couldn't do it. (I have done it using the Java Advanced Imaging (JAI) API, which isn't going to help you on the iPhone...)
I've played with the PhotoScroller example, and the way it works with pre-generated tiles is only there to demonstrate the idea behind CATiledLayer and to make a working, self-contained project. It's straightforward to replace the image tile loading strategy - just rewrite the TilingView tileForScale:row:col: method to return a UIImage tile from some other source, be it Quartz or ImageMagick or whatever.
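For example, if per-tile files exist on disk (generated by the ImageMagick script linked above or any other tool), the method can simply load them; the file-naming scheme and the imageName property here are hypothetical, so adjust them to however your tiles are produced:
- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col
{
    // Hypothetical naming scheme, e.g. "Beach_200_1_2.png" for a 200% tile at row 1, column 2
    NSString *tileName = [NSString stringWithFormat:@"%@_%d_%d_%d",
                          self.imageName, (int)(scale * 100), row, col];
    NSString *path = [[NSBundle mainBundle] pathForResource:tileName ofType:@"png"];
    // imageWithContentsOfFile: bypasses the imageNamed: cache, so tiles can be
    // released as soon as CATiledLayer is finished drawing them
    return [UIImage imageWithContentsOfFile:path];
}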
I want to be able to create a greyscale image with no alpha from a png in the app bundle.
This works, and I get an image created:
// Create graphics context the size of the overlapping rectangle
UIGraphicsBeginImageContext(rectangleOfOverlap.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// More stuff
CGContextDrawImage(ctx, drawRect2, [UIImage imageNamed:@"Image 01.png"].CGImage);
// Create the new UIImage from the context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
However, the resulting image is 32 bits per pixel and has an alpha channel, so when I use CGImageCreateWithMask it doesn't work. I've tried creating a bitmap context like this:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(nil, rectangleOfOverlap.size.width, rectangleOfOverlap.size.height, 8, rectangleOfOverlap.size.width, colorSpace, kCGImageAlphaNone);
UIGraphicsGetImageFromCurrentImageContext returns nil and the resulting image is not created. Am I doing something dumb here?
Any help would be greatly appreciated.
Regards
Dave
OK, think I have an answer (or several of them)...
If the context creation fails, the console gives a reasonable error message telling you why it failed. In this case the context is created, so there is no error.
Secondly, the list of supported parameter combinations for CGBitmapContextCreate is here:
http://developer.apple.com/mac/library/qa/qa2001/qa1037.html
Thirdly, UIGraphicsGetImageFromCurrentImageContext() only works with a generic context created using UIGraphicsBeginImageContext(); instead I needed to use:
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *newImage = [UIImage imageWithCGImage:cgImage]; // release cgImage with CGImageRelease() afterwards
and fourthly, I cannot get at the underlying pixel data with CGBitmapContextGetData(ctx) if I create the context with NULL as the first parameter, even though the docs imply that from 10.3 onwards the memory is handled for you. To get around this I created a method called:
- (CGContextRef) newGreyScaleBitmapContextOfSize:(CGSize) size;
The method creates the context by malloc'ing the memory and returns a context ref. I am not sure if that is comprehensive, but as I've spent days on this I thought I'd let you know what I have discovered so far.
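Roughly, that method looks like this (a sketch of what I described; the caller owns the returned context and is responsible for freeing the bitmap data):
- (CGContextRef)newGreyScaleBitmapContextOfSize:(CGSize)size
{
    size_t width = (size_t)size.width;
    size_t height = (size_t)size.height;
    size_t bytesPerRow = width; // 8 bits per pixel, one grey channel, no alpha

    // Allocate the pixel buffer ourselves so CGBitmapContextGetData() can return it later
    void *bitmapData = malloc(bytesPerRow * height);
    if (bitmapData == NULL) return NULL;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(bitmapData, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);

    if (ctx == NULL) {
        free(bitmapData);
        return NULL;
    }
    return ctx; // "new" prefix: the caller releases the context and frees the data
}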
Hope this helps,
Dave