My question is related to this link
I would like to know how to save images larger than the device resolution using CGBitmapContextCreate.
Any sample code for guidance will be much appreciated.
thanks!
Don't use CGBitmapContextCreate; use UIGraphicsBeginImageContextWithOptions instead, which is much easier. Use it like this:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), YES, 1.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
//do your drawing
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//your resultant UIImage is now stored in image variable
The three parameters to UIGraphicsBeginImageContextWithOptions are:
The size of the image - this can be anything you want
Whether the image is opaque (pass NO if you need transparency)
The scale of the image. Passing 0.0 uses the device's main screen scale, so on an iPhone 3GS it will be 1.0 and on an iPhone 4 with a Retina display it will be 2.0. You can pass in any scale you want, though: if you pass in 5.0, each point in your image will actually be 5x5 pixels in the bitmap, just like 1 point on a Retina display is really 2x2 pixels on screen (see the sketch below).
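As a minimal sketch of the scale parameter: drawing into a 320x480 point context at scale 2.0 produces a 640x960 pixel bitmap.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 480), YES, 2.0f);
// ... draw in points as usual ...
UIImage *retinaSizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// retinaSizedImage.size is 320x480 (points); its CGImage is 640x960 pixels.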
Edit: it turns out that the question of whether UIGraphicsBeginImageContext() is thread-safe seems to be a bit controversial. If you do need to do this concurrently on a background thread, there is an alternative (rather more complex) approach using CGBitmapContextCreate() here: UIGraphicsBeginImageContext vs CGBitmapContextCreate
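Roughly, that route looks like the following (a sketch based on typical CGBitmapContextCreate usage, not the linked answer verbatim; width and height here are example pixel dimensions):
size_t width = 1024, height = 1536;   // desired pixel dimensions (example values)
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,        // let CG allocate the buffer
                                             width, height,
                                             8,            // bits per component
                                             4 * width,    // bytes per row (RGBA)
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// ... do your drawing with Core Graphics calls (note the flipped coordinate system) ...

CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);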
I want to scale an image in my iPhone app, but not the entire image. I just want to scale specific parts, like the bottom part or the middle part. How do I do it?
Thanks
It sounds like you want to do a form of 9-slice scaling or 3-slice scaling. Let's say you have the following image:
and you want to make it look like this:
(the diagonal end pieces do not stretch at all, the top and bottom pieces stretch horizontally, and the left and right pieces stretch vertically)
To do this, use -stretchableImageWithLeftCapWidth:topCapHeight: in iOS 4.x and earlier, or -resizableImageWithCapInsets: starting with iOS 5.
UIImage *myImage = [UIImage imageNamed:@"FancyButton"];
UIImage *myResizableImage = [myImage resizableImageWithCapInsets:UIEdgeInsetsMake(21.0, 13.0, 21.0, 13.0)];
[anImageView setImage:myResizableImage];
To help visualize the scaling, here is an image showing the above cap insets:
I'm not aware of any way to adjust the scale of just part of a UIImage. I'd approach it slightly differently: create separate images from your primary image using CGImageCreateWithImageInRect, and then scale the separate images at the different rates you require (a rough sketch follows the links below).
See:
Cropping a UIImage
CGImage Reference
Quartz 2D Programming Guide
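A rough sketch of that approach, assuming sourceImage is your primary UIImage and bottomRect is the region (in pixels) you want to scale on its own:
// Crop the region of interest out of the primary image
CGImageRef croppedRef = CGImageCreateWithImageInRect(sourceImage.CGImage, bottomRect);
UIImage *croppedPiece = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);

// Redraw the cropped piece at whatever size you want it scaled to
CGSize scaledSize = CGSizeMake(bottomRect.size.width * 1.5f, bottomRect.size.height);
UIGraphicsBeginImageContext(scaledSize);
[croppedPiece drawInRect:CGRectMake(0, 0, scaledSize.width, scaledSize.height)];
UIImage *scaledPiece = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Composite scaledPiece back together with the unscaled pieces as needed.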
I'm trying to render a bitmap to save to the user's photos album that has to be higher resolution than the 320x480 iPhone screen (but still within iOS memory limitations).
However, using this code to create contexts:
UIGraphicsBeginImageContext(CGSizeMake(finalImgWidth, finalImgHeight));
CGContextRef ctx = UIGraphicsGetCurrentContext();
or the CG analog:
CGContextRef ctx = CGBitmapContextCreate(rawData, finalImgWidth, finalImgHeight, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
returns a nil context if width or height is greater than 320x480.
Is there any other way to create a high-res image?
Note: I already know how to draw into contexts normally and take screenshots. I need a solution that scales to typical photo resolution.
Have you tried -[UIImage initWithCGImage:scale:orientation:]?
First, initialize a UIImage object with the image to get its geometry.
Then calculate the scale factor.
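A minimal sketch of that idea (the image name and target width are illustrative):
UIImage *original = [UIImage imageNamed:@"Photo"];   // hypothetical source image
CGFloat targetPointWidth = 1024.0f;                  // how wide the image should report itself, in points
CGFloat scale = CGImageGetWidth(original.CGImage) / targetPointWidth;
UIImage *rescaled = [UIImage imageWithCGImage:original.CGImage
                                        scale:scale
                                  orientation:original.imageOrientation];
// rescaled shares the same pixel data but reports a different point size.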
Well, I think this will help you. The following Apple sample code shows how to display a large image on screen.
Scrolling Apple Sample Code
I'm building some sort of censoring app. I've gotten to the point where I can completely pixelate an image taken with my iPhone.
But I want to achieve in the end an image like this: http://images-mediawiki-sites.thefullwiki.org/11/4/8/8/8328511755287292.jpg
So my thought was to fully pixelate my image and then add a mask on top of it to achieve the desired effect. In terms of layers it goes: originalImage + maskedPixelatedVersionOfImage. I was thinking of animating the mask when the image is touched, scaling the mask up to the desired size: the longer you hold your finger on the image, the bigger the mask becomes.
After some searching, I guess this can be done using CALayers and CAAnimation. But how do I then composite those layers into an image that I can save to the photo album on the iPhone?
Am I taking the right approach here?
EDIT:
Okay, I guess Ole's solution is the correct one, though I'm still not getting what I want: the code I use is:
CALayer *maskLayer = [CALayer layer];
CALayer *mosaicLayer = [CALayer layer];
// Mask image ends with 0.15 opacity on both sides. Set the background color of the layer
// to the same value so the layer can extend the mask image.
mosaicLayer.contents = (id)[img CGImage];
mosaicLayer.frame = CGRectMake(0,0, img.size.width, img.size.height);
UIImage *maskImg = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"mask" ofType:@"png"]];
maskLayer.contents = (id)[maskImg CGImage];
maskLayer.frame = CGRectMake(100,150, maskImg.size.width, maskImg.size.height);
mosaicLayer.mask = maskLayer;
[imageView.layer addSublayer:mosaicLayer];
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saver = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
So on my imageView I called setImage: with the original (unedited) version of the photo. On top of that I add a sublayer, mosaicLayer, which has a mask property: maskLayer. I thought that by rendering the root layer of the imageView, everything would turn out OK. Is that not correct?
Also, I figured out something else: my mask is stretched and rotated, which I'm guessing has something to do with imageOrientation? I noticed this by accidentally saving mosaicLayer to my library, which also explains the problem I had where the mask seemed to mask the wrong part of my image.
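If it is an orientation issue, one common workaround (just a sketch, assuming img comes straight from the camera with a non-up imageOrientation) is to redraw the image once so its CGImage matches what you see on screen, before using it as layer contents:
UIImage *normalized = img;
if (img.imageOrientation != UIImageOrientationUp) {
    UIGraphicsBeginImageContextWithOptions(img.size, NO, img.scale);
    [img drawInRect:CGRectMake(0, 0, img.size.width, img.size.height)];
    normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
mosaicLayer.contents = (id)[normalized CGImage];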
To render a layer tree, put all layers in a common container layer and call:
UIGraphicsBeginImageContext(containerLayer.bounds.size);
[containerLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If you're willing to drop support for pre-iPhone 3GS devices (the original iPhone and the iPhone 3G), I'd suggest using OpenGL ES 2.0 shaders for this. While it may be easy to overlay a CALayer containing a pixelated version of the image, I think you'll find the performance to be lacking.
In my tests, performing a simple CPU-based calculation on every pixel of a 480 x 320 image led to a framerate of about 4 FPS on an iPhone 4. You might be able to sample only a fraction of these pixels to achieve the desired effect, but it still will be a slow operation to redraw a pixelated image to match the live video.
Instead, if you use an OpenGL ES 2.0 fragment shader to process the incoming live video image, you should be able to take in the raw camera image, apply this filter selectively over the desired area, and either display or save the resulting camera image. This processing will take place almost entirely on the GPU, which I've found to do simple operations like this at 60 FPS on the iPhone 4.
While getting a fragment shader to work quite right can require a little setup, you might be able to use this sample application I wrote for processing camera input and doing color tracking as a decent starting point. You might also look at the touch gesture I use there, where I take the initial touch-down point as the location to center an effect around and then use the subsequent drag distance to control the strength or radius of the effect.
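For reference, the pixelation itself is only a few lines of GLSL. Here is a minimal fragment shader sketch, stored as an Objective-C string constant; the uniform and varying names are assumptions, not taken from the sample application mentioned above:
static NSString *const kPixelateFragmentShader =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D inputImageTexture;\n"
    @"uniform highp float pixelSize; // block size in normalized texture coordinates\n"
    @"void main()\n"
    @"{\n"
    @"    highp vec2 blockCoord = floor(textureCoordinate / pixelSize) * pixelSize;\n"
    @"    gl_FragColor = texture2D(inputImageTexture, blockCoord);\n"
    @"}";
Limiting the effect to a region (for example, by distance from a touch point passed in as another uniform) is then just a mix() between the pixelated sample and the unfiltered texel.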
Before the Retina display came to iOS, I developed some controls that are drawn using stretchable images and lots of Core Graphics code.
Now I've tested them on a Retina display device, and the graphics are misplaced and distorted. Everything else that's loaded with the @2x suffix via UIImage imageNamed: works fine.
I assume there must be some special considerations when using images in Core Graphics. For example, I obtain the CGImage from a UIImage very often.
Does anyone know?
UIImage is a facade on top of CGImage (and IOSurface as a private API). Since CGImage has no concept of scale, it will reflect the actual size of the image's buffer: for @2x images, the CGImage dimensions will be twice the UIImage's size.
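A quick way to see (and work around) the mismatch, as a sketch assuming a hypothetical Pattern.png/Pattern@2x.png pair in the bundle:
UIImage *image = [UIImage imageNamed:@"Pattern"];
NSLog(@"UIImage: %@ points at scale %.1f", NSStringFromCGSize(image.size), image.scale);
NSLog(@"CGImage: %zu x %zu pixels",
      CGImageGetWidth(image.CGImage), CGImageGetHeight(image.CGImage));

// When drawing the CGImage yourself, multiply your point geometry by image.scale
// (or just draw the UIImage with -drawInRect:, which accounts for scale for you).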
I have a 320x480 PNG that I would like to texture map/manipulate on the iPhone, but these dimensions are obviously not powers of 2. I already tested my texture map manipulation algorithm on a 512x512 PNG that is a black background with a 320x480 image superimposed on it, anchored at the origin (lower left corner, (0,0)), where the 320x480 area is properly oriented/centered/scaled on the iPhone screen.
What I would like to do now is progress to the point where I can take 320x480 source images and apply them to a blank/black 512x512 background texture generated in code, so that the two combine into one texture and I can reuse the vertices and texture coordinates from my 512x512 test. This will eventually be used for camera-captured and camera-roll-sourced images, etc.
Any thoughts? (must be for OpenGL ES 1.1 without use of GL util toolkit, etc.).
Thanks,
Ari
One method I've found to work is to simply draw both images into the current context and then extract the resulting combined image. Is there another way, more geared towards OpenGL, that may be more efficient?
// Assumes backgroundImage and foregroundImage are existing CGImageRefs and
// contextSize is the CGSize of the combined texture (e.g. 512x512).
UIGraphicsBeginImageContext(contextSize);
CGContextRef currentContext = UIGraphicsGetCurrentContext();

// One rectangle for the background and one for the foreground image
CGRect backgroundRect = CGRectMake(0, 0, contextSize.width, contextSize.height);
CGRect foregroundRect = CGRectMake(0, 0, 320, 480);

// Draw both images into the same context
CGContextDrawImage(currentContext, backgroundRect, backgroundImage);
CGContextDrawImage(currentContext, foregroundRect, foregroundImage);

// Extract the combined image and keep its CGImage as the texture source
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
spriteImage = finalImage.CGImage;   // retain it if you need it beyond this scope
UIGraphicsEndImageContext();
At this point you can proceed to use spriteImage as the image source for the texture; it will be a combination of a blank 512x512 PNG and a 320x480 PNG, for example.
I'll replace the 512x512 blank PNG with an image generated in code, but this does work.
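For the code-generated background, one option (just a sketch reusing the names from the snippet above) is to skip the blank PNG entirely and fill the background rectangle before drawing the foreground image:
CGContextSetFillColorWithColor(currentContext, [UIColor blackColor].CGColor);
CGContextFillRect(currentContext, CGRectMake(0, 0, 512, 512));
CGContextDrawImage(currentContext, foregroundRect, foregroundImage);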