I need to create a UIImage from a byte array.
Here is how I create the byte array:
image = CGImageCreateWithImageInRect(aux.CGImage, imageRect);
context = CGBitmapContextCreate(data[i][j], TILE_WIDTH, TILE_HEIGHT,
                                bitsPerComponent, bitmapBytesPerRow, colorSpace,
                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
                                // (also tried kCGImageAlphaNoneSkipFirst and kCGImageAlphaNone)
CGContextDrawImage(context, CGRectMake(0, 0, TILE_WIDTH, TILE_HEIGHT), image);
data[i][j] = CGBitmapContextGetData(context);
The data variable is an unsigned char buffer (one per tile, indexed as data[i][j]).
And this is how I try to get the UIImage:
NSData *imgData = [NSData dataWithBytes:data[i][j] length:TILE_WIDTH*TILE_HEIGHT*numberOfComponents];
UIImage *img = [UIImage imageWithData: imgData];
The img (UIImage) remains nil.
OK, now for the background: I am trying to create a pixelate application :). The images from the iPhone 4 camera are too big, so I split the image into smaller images. That way, when the pixelated (touched) area needs to be updated for the pixelate effect to show, I only update a small UIImage. I needed to do it like this because in previous tests it seemed like updating a full-size UIImage was killing the CPU. Still, the smaller images are now around 80x100 px, and the update is not as smooth as it could be. Sometimes, if you move the finger too fast, it misses some spots :D. I think that using this method to create a UIImage from the byte array is faster than this one:
CGImageRef cgImage = CGImageCreate(TILE_WIDTH, TILE_HEIGHT, bitsPerComponent,
                                   bitsPerPixel, bitmapBytesPerRow, colorSpace,
                                   kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                   dataProvider, NULL, false, kCGRenderingIntentDefault);
UIImage *imageToBeUpdated = [UIImage imageWithCGImage:cgImage];
Am I correct?
[UIImage imageWithData:data] parses data that is in a known image file format (e.g. JPEG, PNG, or GIF; the full list is in the documentation). You're passing it raw pixel data, which is not supported.
Try this instead of CGBitmapContextGetData to get the image out of the context:
CGImageRef imgRef = CGBitmapContextCreateImage(context);
UIImage *img = [UIImage imageWithCGImage:imgRef];
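If you do want to go straight from the raw tile buffer to a UIImage (the CGImageCreate route mentioned in the question), a minimal sketch might look like this; it assumes the buffer really is premultiplied RGBA-8888, as the bitmap-info flags in the question suggest:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data[i][j],
                                                          TILE_WIDTH * TILE_HEIGHT * 4, NULL);
CGImageRef cgImage = CGImageCreate(TILE_WIDTH, TILE_HEIGHT,
                                   8,               // bits per component
                                   32,              // bits per pixel
                                   TILE_WIDTH * 4,  // bytes per row
                                   colorSpace,
                                   kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                   provider, NULL, false, kCGRenderingIntentDefault);
UIImage *img = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
// Note: CGDataProviderCreateWithData does not copy the bytes, so the tile buffer
// must stay valid for as long as the image may still be drawn.

Whether this is faster than drawing into a bitmap context depends on how the tiles are used; it avoids one copy, but the buffer lifetime becomes your responsibility.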
I want to call CGBitmapContextCreate with texture->data to create a CGContextRef, and then create a CGImageRef from it with CGBitmapContextCreateImage(context).
However, the image created is not as expected :(
The one created from CGBitmapContextCreateImage:
The actual one (slightly different since I took it with another camera):
Code (texture.bytesPerPixel = 2):
CGContextRef context = CGBitmapContextCreate(texture.data, 512, 512, 5,
                                             512 * texture.bytesPerPixel,
                                             CGColorSpaceCreateDeviceRGB(),
                                             kCGImageAlphaNoneSkipFirst);
CGImageRef cg_img = CGBitmapContextCreateImage(context);
UIImage* ui_img = [UIImage imageWithCGImage:cg_img];
UIImageWriteToSavedPhotosAlbum(ui_img, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
Complete Code:
http://ihome.ust.hk/~tm_lksac/OpenGLSprite.m
The application usually calls - (void)drawSelfIfNeeded:(BOOL)needed to update the texture. But I want to take a "screenshot" of the texture and save it as a UIImage for further image processing.
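One common workaround (sketched here under assumptions, not something taken from this thread) is to avoid interpreting the 16-bit texture bytes at all: render the texture into a framebuffer and read the result back as 32-bit RGBA with glReadPixels, then wrap that buffer in a CGImage. The width/height values and the assumption that the right framebuffer is currently bound are illustrative:

// Sketch: assumes the texture's content has been rendered into the currently
// bound framebuffer, and that width/height describe that framebuffer.
static void ReleaseSnapshotData(void *info, const void *data, size_t size) {
    free((void *)data);
}

GLint width = 512, height = 512;                 // illustrative
size_t dataLength = (size_t)width * height * 4;
GLubyte *pixels = (GLubyte *)malloc(dataLength);

// Read back 32-bit RGBA; this sidesteps guessing the 16-bit pixel layout.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// The release callback frees the buffer once the image no longer needs it.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, dataLength, ReleaseSnapshotData);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                   kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                   provider, NULL, false, kCGRenderingIntentDefault);

// glReadPixels returns rows bottom-up, so the UIImage may need flipping.
UIImage *snapshot = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);

If the 16-bit route is kept instead, a plausible culprit for the wrong colors is a layout mismatch: if the texture is, say, GL_UNSIGNED_SHORT_5_5_5_1, its alpha bit sits at the opposite end of each pixel from the skipped bit that a 5-bits-per-component, kCGImageAlphaNoneSkipFirst context expects.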
I have a few UIImage objects which I want to compose into a single UIImage and then save to disk. I'm not displaying this on screen, so it doesn't make sense to do it in -drawRect:.
Is there a way of creating a context similar to the one in -drawRect: and then just drawing the UIImage objects into it using something like CGContextDrawImage(context, imgRect, img.CGImage)?
I believe you want to use a CGContextRef to draw all the images in at the desired place and then get the resulting image. The code will look something like this:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, desired_width, desired_height, 8, 0,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// This loop is only meant to illustrate what you have to do:
for (UIImage *currentImage in yourImages) {
    CGImageRef cgImage = currentImage.CGImage;
    // Place each image at whatever origin it should have within the merged canvas.
    CGContextDrawImage(context,
                       CGRectMake(originX, originY,
                                  CGImageGetWidth(cgImage), CGImageGetHeight(cgImage)),
                       cgImage);
}
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
UIImage *mergedImage = [[UIImage alloc] initWithCGImage:mergeResult];
CGContextRelease(context);
CGImageRelease(mergeResult);
CGContextRefs can be created whenever you wish, and this lets you do all kinds of image manipulation.
Use CGBitmapContextCreate to create the context and CGBitmapContextCreateImage to get the final result.
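If you'd rather stay in UIKit, an offscreen image context does the same job with less bookkeeping. A small sketch; the images collection, the origins, and the output path are placeholders:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(desired_width, desired_height), NO, 0.0);
for (UIImage *img in images) {
    // Draw each image wherever it belongs in the composition.
    [img drawAtPoint:CGPointMake(originX, originY)];
}
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Save to disk as PNG.
[UIImagePNGRepresentation(merged) writeToFile:outputPath atomically:YES];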
How do I make sense of the image data for a grayscale image given the following scenario: I capture video data from the "sample buffer" and extract an 80x20 section and then turn that into a grayscale UIImage. But when I examine the raw pixel bytes I am unable to make sense of them in a way that would allow me to go on and "binarize" them (my real goal).
When I simply save the UIImage to the photo album using UIImageWriteToSavedPhotosAlbum to verify just what kind of image data I have, I indeed get a plain, white 80x20 image (it's actually light-grayish). I captured a plain white image to simplify things, expecting to see only values between, say, 200 or so and 255, and yet there are sections of the image data full of zeros, which would suggest rows of black pixels. Any help is appreciated. The relevant code and the image data (16 pixels at a time) are below.
Here is how I create the 80x20 grayscale image from a portion of the CMSampleBufferRef video data:
UIImage *imageFromImage(UIImage *image, CGRect rect)
{
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    CGImageRef grayScaleImg = grayscaleCGImageFromCGImage(newImageRef);
    CGImageRelease(newImageRef);
    UIImage *newImage = [UIImage imageWithCGImage:grayScaleImg scale:1.0 orientation:UIImageOrientationLeft];
    return newImage;
}
CGImageRef grayscaleCGImageFromCGImage(CGImageRef inputImage)
{
    size_t width = CGImageGetWidth(inputImage);
    size_t height = CGImageGetHeight(inputImage);

    // Create a grayscale context and render the input image into it
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
                                                 4*width, colorspace, kCGBitmapByteOrderDefault);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), inputImage);

    // Get an image representation of the grayscale context which the input
    // was rendered into.
    CGImageRef outputImage = CGBitmapContextCreateImage(context);

    // Cleanup
    CGContextRelease(context);
    CGColorSpaceRelease(colorspace);

    return (CGImageRef)[(id)outputImage autorelease];
}
And then, when I use the following code to dump the pixel data to the console:
CGImageRef inputImage = [imgIn CGImage];
CGDataProviderRef dataProvider = CGImageGetDataProvider(inputImage);
CFDataRef imageData = CGDataProviderCopyData(dataProvider);
const UInt8 *rawData = CFDataGetBytePtr(imageData);
size_t width = CGImageGetWidth(inputImage);
size_t height = CGImageGetHeight(inputImage);
size_t numPixels = height * width;
for (int i = 0; i < numPixels; i++)
{
    if ((i % 16) == 0)
        NSLog(@" -%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-\n\n", rawData[i],
              rawData[i+1], rawData[i+2], rawData[i+3], rawData[i+4], rawData[i+5],
              rawData[i+6], rawData[i+7], rawData[i+8], rawData[i+9], rawData[i+10],
              rawData[i+11], rawData[i+12], rawData[i+13], rawData[i+14], rawData[i+15]);
}
I consistently get output like the following:
-216-217-214-215-217-215-216-213-214-214-214-215-215-217-216-216-
-219-219-216-219-220-217-212-214-215-214-217-220-219-217-214-219-
-216-216-218-217-218-221-217-213-214-212-214-212-212-214-214-213-
-213-213-212-213-212-214-216-214-212-210-211-210-213-210-213-208-
-212-208-208-210-206-207-206-207-210-205-206-208-209-210-210-207-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
(this pattern repeats for the remaining bytes, 80 bytes of pixel data in the 200's, depending on lighting, followed by 240 bytes of zeros -- there's a total of 1600 bytes since the image is 80x20)
This:
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
4*width, colorspace, kCGBitmapByteOrderDefault);
Should be:
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
width, colorspace, kCGBitmapByteOrderDefault);
In other words, for an 8 bit gray image, the number of bytes per row is the same as the width.
You've probably forgotten the image stride: you're assuming your images are stored as width*height, but several systems store them as stride*height, where stride > width. The zeros are padding that you should skip.
By the way, what do you mean by "binarize"? I guess you mean quantize to fewer gray levels?
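For illustration, a hedged sketch of walking the pixels row by row using the image's actual bytes-per-row, so the padding bytes at the end of each row are skipped; the threshold value here is just an example, and the variable names come from the dump code above:

size_t bytesPerRow = CGImageGetBytesPerRow(inputImage);   // the stride, not necessarily == width
for (size_t y = 0; y < height; y++) {
    const UInt8 *row = rawData + y * bytesPerRow;
    for (size_t x = 0; x < width; x++) {
        UInt8 gray = row[x];
        // Example binarization with an arbitrary threshold of 128:
        UInt8 binary = (gray >= 128) ? 255 : 0;
        // ... store `binary` into whatever output buffer you need ...
    }
}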
I'm trying to write an iPhone app that takes PNG tilesets and displays segments of them on-screen, and I'm trying to get it to refresh the whole screen at 20fps. Currently I'm managing about 3 or 4fps on the simulator, and 0.5 - 2fps on the device (an iPhone 3G), depending on how much stuff is on the screen.
I'm using Core Graphics at the moment and currently trying to find ways to avoid biting the bullet and refactoring in OpenGL. I've done a Shark time profile analysis on the code and about 70-80% of everything that's going on is boiling down to a function called copyImageBlockSetPNG, which is being called from within CGContextDrawImage, which itself is calling all sorts of other functions with PNG in the name. Inflate is also in there, accounting for 37% of it.
Question is, I already loaded the image into memory from a UIImage, so why does the code still care that it was a PNG? Does it not decompress into a native uncompressed format on load? Can I convert it myself? The analysis implies that it's decompressing the image every time I draw a section from it, which ends up being 30 or more times a frame.
Solution
-(CGImageRef)inflate:(CGImageRef)compressedImage
{
    size_t width = CGImageGetWidth(compressedImage);
    size_t height = CGImageGetHeight(compressedImage);

    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (width * 4);
    bitmapByteCount = (bitmapBytesPerRow * height);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(NULL,
                                    width,
                                    height,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), compressedImage);
    CGImageRef result = CGBitmapContextCreateImage(context);
    CFRelease(context);
    return result;
}
It's based on zneak's code (so he gets the big tick) but I've changed some of the parameters to CGBitmapContextCreate to stop it crashing when I feed it my PNG images.
To answer your last questions, your empirical case seems to prove they're not uncompressed once loaded.
To convert them into uncompressed data, you can draw them (once) in a CGBitmapContext and get a CGImage out of it. It should be well enough uncompressed.
Off my head, this should do it:
CGImageRef Inflate(CGImageRef compressedImage)
{
    size_t width = CGImageGetWidth(compressedImage);
    size_t height = CGImageGetHeight(compressedImage);
    CGContextRef context = CGBitmapContextCreate(
        NULL,
        width,
        height,
        CGImageGetBitsPerComponent(compressedImage),
        CGImageGetBytesPerRow(compressedImage),
        CGImageGetColorSpace(compressedImage),
        CGImageGetBitmapInfo(compressedImage)
    );
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), compressedImage);
    CGImageRef result = CGBitmapContextCreateImage(context);
    CFRelease(context);
    return result;
}
Don't forget to release the CGImage you get once you're done with it.
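A short usage sketch (the tileImage name is a placeholder for your loaded PNG):

CGImageRef decompressed = Inflate(tileImage.CGImage);
// ... draw sub-rects of `decompressed` with CGContextDrawImage each frame ...
CGImageRelease(decompressed);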
This question totally saved my day! Thanks!! I was having this problem, although I wasn't sure where it was coming from (see: Speed up UIImage creation from SpriteSheet).
I would like to add that there is another way to load the image already decompressed, without having to draw it into a context.
NSDictionary *dict = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                 forKey:(id)kCGImageSourceShouldCache];
NSData *imageData = [NSData dataWithContentsOfFile:@"path/to/image.png"];
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)(imageData), NULL);
CGImageRef atlasCGI = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
CFRelease(source);
I believe it is a little bit faster this way. Hope it helps!
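To complete the picture, the resulting CGImageRef would presumably be wrapped and released along these lines (a hedged addition, not part of the original snippet):

UIImage *atlasImage = [UIImage imageWithCGImage:atlasCGI];
CGImageRelease(atlasCGI);   // the UIImage retains the CGImage it needs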
Alright, what I am trying to do is this:
Given an image that has a circle in it which is "blank", I want to take an existing image from the user's library and mask it so that only a certain part of that image shows through the "blank" area of the first image.
I have tried a few masking code snippets, but they all seem to work the other way around... any tips on how to tackle this?
Unfortunately you can't use CoreAnimation to do this (which would make it rather easy).
Looking at Apple's CoreAnimation documentation:
iOS Note: As a performance consideration, iOS does not support the mask property.
Therefore the next best way to do this is to use Quartz 2D (as answered here):
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;

colorSpace = CGColorSpaceCreateDeviceRGB();

// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height,
                                               8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

// free the rgb colorspace
CGColorSpaceRelease(colorSpace);

if (mainViewContentContext == NULL)
    return NULL;

CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);

// Create a CGImageRef from the bitmap context's content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);

// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];

// the UIImage retains the CGImage it needs, so we can
// release our reference
CGImageRelease(mainViewContentBitmapContext);

// return the image
return theImage;
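If the goal is the reverse of the snippet above (show the library photo only inside the blank circle of the frame image), a hedged sketch using an ellipse clip rather than a mask image might look like this; frameImage, photo, and circleRect are placeholders for the asker's assets:

UIGraphicsBeginImageContextWithOptions(frameImage.size, NO, frameImage.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Draw the image that contains the "blank" circle first.
[frameImage drawAtPoint:CGPointZero];

// Clip to the circular area and draw the user's photo into it.
CGContextSaveGState(ctx);
CGContextAddEllipseInRect(ctx, circleRect);
CGContextClip(ctx);
[photo drawInRect:circleRect];
CGContextRestoreGState(ctx);

UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();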