Creating UIImage from raw RGBA data - iPhone

I have been trying to convert an array of RGBA data (as bytes) into a UIImage. My code looks as follows:
/* height and width are integers denoting the dimensions of the image */
unsigned char *rawData = malloc(width * height * 4);
for (int i = 0; i < width * height; ++i)
{
    rawData[4*i]   = <red_val>;
    rawData[4*i+1] = <green_val>;
    rawData[4*i+2] = <blue_val>;
    rawData[4*i+3] = 255;
}
/* I have the correct values displayed
   - ensuring rawData is well populated */
NSLog(@"(%i,%i,%i,%f)", rawData[0], rawData[1], rawData[2], rawData[3]/255.0f);
NSLog(@"(%i,%i,%i,%f)", rawData[4], rawData[5], rawData[6], rawData[7]/255.0f);
NSLog(@"(%i,%i,%i,%f)", rawData[8], rawData[9], rawData[10], rawData[11]/255.0f);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                          rawData,
                                                          width*height*4,
                                                          NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider, NULL, NO, renderingIntent);
/* I get the correct dimensions displayed here */
NSLog(@"width=%zu, height=%zu", CGImageGetWidth(imageRef),
      CGImageGetHeight(imageRef));
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
/* This is where the problem lies.
   The width and height displayed are completely different dimensions,
   viz. the width is always zero and the height is a very huge number */
NSLog(@"resultImg width:%i, height:%i",
      newImage.size.width, newImage.size.height);
return newImage;
The output image that I receive has width 0 and height 1080950784 (my initial width and height were both 240). I have been trying to get this sorted out and have checked many related forums on how to go about it, but with little success.

It turns out the problem is a pretty silly mistake that both of us overlooked. UIImage dimensions are stored as floats, not integers. :D
Try
NSLog(@"resultImg width:%f, height:%f",
      newImage.size.width, newImage.size.height);
instead. The image size has been transferred correctly all along; only the format specifiers were wrong.

The problem is solved now. I get the image that I want displayed, but I was still unable to figure out why the width and height appeared different. Essentially there was nothing wrong with the program per se; the only problem was the logged width and height.

I just independently verified this problem; I think it should be reported as a bug to Apple. + (UIImage *)imageWithCGImage: doesn't properly transfer the width and height of the source CGImageRef to the UIImage.
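Putting the pieces together, here's a minimal sketch of the whole routine with both issues addressed (the helper and callback names are mine; note the explicit kCGImageAlphaLast, which tells Core Graphics that the fourth byte of each pixel is alpha, something the question's kCGBitmapByteOrderDefault leaves unspecified):

static void releaseRawData(void *info, const void *data, size_t size)
{
    free((void *)data); // the data provider owns the buffer now
}

UIImage *imageFromRGBABytes(unsigned char *rawData, size_t width, size_t height)
{
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rawData,
                                                              width * height * 4,
                                                              releaseRawData);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(width, height,
                                        8,          // bits per component
                                        32,         // bits per pixel
                                        4 * width,  // bytes per row
                                        colorSpaceRef,
                                        kCGImageAlphaLast | kCGBitmapByteOrder32Big,
                                        provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    NSLog(@"result width:%f, height:%f", newImage.size.width, newImage.size.height);
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return newImage;
}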

Related

Converting a C array to a UIImage (for iOS)

I currently have some image data in a C array containing RGBA values:
float array[length][4]
I am trying to get this into a UIImage, which it looks like can be initialized from files, NSData, and URLs. Since the other two methods are slow, I am most interested in the NSData approach.
I can get all of these values into an NSArray like so:
for (i = 0; i < image.size.width * image.size.height; i++) {
    replace = [UIColor colorWithRed:array[i][0] green:array[i][1] blue:array[i][2] alpha:array[i][3]];
    [output replaceObjectAtIndex:i withObject:replace];
}
So I have an NSArray full of UIColor objects. I have tried many methods, but how do I convert this to a UIImage?
I thought it would be straightforward. A function sorta like imageWithData:data R:0 B:1 G:2 A:3 width:width height:height would be nice, but there is no such function as far as I can tell.
imageWithData: is meant for image data in a standard image file format, e.g. a PNG or JPEG file that you have in memory. It's not suitable for creating images from raw data.
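For instance, imageWithData: is what you'd use when you already have an encoded image file in memory (the path here is just for illustration):

NSData *pngData = [NSData dataWithContentsOfFile:@"/path/to/picture.png"];
UIImage *decoded = [UIImage imageWithData:pngData]; // decodes the PNG file format, not raw pixels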
For that, you would typically create a bitmap graphics context, passing your array, pixel format, size, etc. to the CGBitmapContextCreate function. When you've created a bitmap context, you can create an image from it using CGBitmapContextCreateImage, which gives you a CGImageRef that you can pass to the UIImage method imageWithCGImage:.
Here's a basic example that creates a tiny 2×1 pixel image with one red pixel and one green pixel. It just uses hard-coded pixel values that are meant to show the order of the color components; normally, you would of course get this data from somewhere else:
size_t width = 2;
size_t height = 1;
size_t bytesPerPixel = 4;
// 4 bytes per pixel (R, G, B, A) = 8 bytes for a 2x1 pixel image:
unsigned char rawData[8] = {255, 0,   0, 255,  // red
                            0,   255, 0, 255}; // green
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
size_t bytesPerRow = bytesPerPixel * width;
size_t bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
// This is your image:
UIImage *image = [UIImage imageWithCGImage:cgImage];
// Don't forget to clean up:
CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
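Note that the question's data is float RGBA in the 0.0-1.0 range, while the bitmap context wants bytes, so you would convert first. A minimal sketch reusing the question's array and length (and strictly speaking, kCGImageAlphaPremultipliedLast expects R, G, and B to be premultiplied by alpha):

unsigned char *rawData = malloc(length * 4);
for (int i = 0; i < length; i++) {
    for (int c = 0; c < 4; c++) {
        // scale 0.0-1.0 floats to 0-255 bytes, rounding to nearest
        rawData[4 * i + c] = (unsigned char)(array[i][c] * 255.0f + 0.5f);
    }
}
// pass rawData to CGBitmapContextCreate as above; free(rawData)
// once the CGImage has been created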

Create image from RGB data?

I'm having real trouble with this. I have some raw RGB data, values from 0 to 255, and want to display it as an image on the iPhone, but can't find out how to do it. Can anyone help? I think I might need to use CGImageCreate but just don't get it. I've tried looking at the class reference and am feeling quite stuck.
All I want is a 10x10 grayscale image generated from some calculations, and if there is an easy way to create a PNG or something, that would be great.
A terribly primitive example, similar to Mats' suggestion, but this version uses an external pixel buffer (pixelData):
const size_t Width = 10;
const size_t Height = 10;
const size_t Area = Width * Height;
const size_t ComponentsPerPixel = 4; // rgba
uint8_t pixelData[Area * ComponentsPerPixel];
// fill the pixels with a lovely opaque blue gradient:
for (size_t i = 0; i < Area; ++i) {
    const size_t offset = i * ComponentsPerPixel;
    pixelData[offset]   = i;
    pixelData[offset+1] = i;
    pixelData[offset+2] = i + i; // enhance blue
    pixelData[offset+3] = UINT8_MAX; // opaque
}
// create the bitmap context:
const size_t BitsPerComponent = 8;
const size_t BytesPerRow = ((BitsPerComponent * Width) / 8) * ComponentsPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(&pixelData[0], Width, Height,
                                         BitsPerComponent, BytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
// create the image:
CGImageRef toCGImage = CGBitmapContextCreateImage(gtx);
UIImage *uiimage = [[UIImage alloc] initWithCGImage:toCGImage];
NSData *png = UIImagePNGRepresentation(uiimage);
// remember to clean up your resources! :)
CGImageRelease(toCGImage);
CGContextRelease(gtx);
CGColorSpaceRelease(colorSpace);
[uiimage release]; // (omit under ARC)
Use CGBitmapContextCreate() to create a memory-based bitmap for yourself. Then call CGBitmapContextGetData() to get a pointer to its data for your drawing code. Then CGBitmapContextCreateImage() to create a CGImageRef.
I hope this is sufficient to get you started.
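To make that concrete for the question's 10x10 grayscale case, a sketch (variable names are mine; grayscale means one byte per pixel, so there is no RGBA interleaving at all):

const size_t width = 10;
const size_t height = 10;
uint8_t gray[10 * 10]; // one byte per pixel
for (size_t i = 0; i < width * height; i++) {
    gray[i] = (uint8_t)((i * 255) / (width * height - 1)); // stand-in for your calculated values
}
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(gray, width, height,
                                         8,      // bits per component
                                         width,  // bytes per row: 1 byte per pixel
                                         graySpace, kCGImageAlphaNone);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *image = [UIImage imageWithCGImage:cgImage];
NSData *png = UIImagePNGRepresentation(image); // "an easy way to create a png"
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(graySpace);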
On Mac OS you could do it with an NSBitmapImageRep. For iOS it seems to be a bit more complicated. I found this blog post though:
http://paulsolt.com/2010/09/ios-converting-uiimage-to-rgba8-bitmaps-and-back/

split UIImage by colors and create 2 images

I have looked through replacing colors in an image but cannot get it to work the way I need, because I am trying to do it with every color but one, as well as transparency.
What I am looking for is a way to take in an image and split out a color (say, all the pure black) from that image, then take that split-out portion and make a new image with a transparent background plus the split-out portion.
(Here is just an example of the idea: say I want to take a screenshot of this page, make every color but pure black transparent, and save that new image to the library or put it into a UIImageView.)
I have looked into CGImageCreateWithMaskingColors but can't seem to do what I need with the transparent portion, and I don't really understand the colorMasking input other than that you can provide it with a {Rmin,Rmax,Gmin,Gmax,Bmin,Bmax} color mask; but when I do, it colors everything. Any ideas or input would be great.
Sounds like you're going to have to get access to the underlying bytes and write code to process them directly. You can use CGImageGetDataProvider() to get access to the data of an image, but there's no guarantee that the format will be something you know how to handle. Alternately you can create a new CGContextRef using a specific format you know how to handle, then draw the original image into your new context, then process the underlying data. Here's a quick attempt at doing what you want (uncompiled):
- (UIImage *)imageWithBlackPixels:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    // create a premultiplied ARGB context with 32bpp
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bpc = 8; // bits per component
    size_t bpp = bpc * 4 / 8; // bytes per pixel
    size_t bytesPerRow = bpp * width;
    uint8_t *data = malloc(bytesPerRow * height);
    // big-endian byte order, so each pixel's bytes are laid out as A, R, G, B
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big;
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bpc, bytesPerRow, colorspace, bitmapInfo);
    CGColorSpaceRelease(colorspace);
    if (ctx == NULL) {
        // couldn't create the context - double-check the parameters?
        free(data);
        return nil;
    }
    // draw the image into the context
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    // replace all non-black pixels with transparent,
    // preserving existing transparency on black pixels
    for (size_t y = 0; y < height; y++) {
        size_t rowStart = bytesPerRow * y;
        for (size_t x = 0; x < width; x++) {
            size_t pixelOffset = rowStart + x*bpp;
            // check the R, G, and B components of the pixel (byte 0 is alpha)
            if (data[pixelOffset+1] != 0 || data[pixelOffset+2] != 0 || data[pixelOffset+3] != 0) {
                // this pixel contains non-black. zero it out
                memset(&data[pixelOffset], 0, 4);
            }
        }
    }
    // create our new image and release the context data
    CGImageRef newCGImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    free(data);
    UIImage *newImage = [UIImage imageWithCGImage:newCGImage scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(newCGImage);
    return newImage;
}
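Note that CGImageCreateWithMaskingColors makes transparent the colors that fall inside the given ranges, and a single {Rmin,Rmax,Gmin,Gmax,Bmin,Bmax} box can't express "everything except black"; that's why processing the bytes directly works better here. Hypothetical usage of the method above (sourceImage and imageView are placeholders, and the method is assumed to live on the calling class):

UIImage *blackOnly = [self imageWithBlackPixels:sourceImage];
if (blackOnly) {
    imageView.image = blackOnly;                                // show it in a UIImageView...
    UIImageWriteToSavedPhotosAlbum(blackOnly, nil, NULL, NULL); // ...or save it to the library
}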

How is the image data interpreted for a grayscale image on an iPhone?

How do I make sense of the image data for a grayscale image given the following scenario: I capture video data from the "sample buffer", extract an 80x20 section, and then turn that into a grayscale UIImage. But when I examine the raw pixel bytes, I am unable to make sense of them in a way that would let me go on and "binarize" them (my real goal).
When I simply save the UIImage to the photo album using UIImageWriteToSavedPhotosAlbum to verify what kind of image data I have, I indeed get a plain white 80x20 image (it's actually light grayish). I captured a plain white image to simplify things, expecting to see only values between, say, 200 and 255, and yet there are sections of the image data full of zeros that clearly suggest rows of black pixels. Any help is appreciated. The relevant code and the image data (16 pixels at a time) are below.
Here is how I create the 80x20 grayscale image from a portion of the CMSampleBufferRef video data:
UIImage *imageFromImage(UIImage *image, CGRect rect)
{
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    CGImageRef grayScaleImg = grayscaleCGImageFromCGImage(newImageRef);
    CGImageRelease(newImageRef);
    UIImage *newImage = [UIImage imageWithCGImage:grayScaleImg scale:1.0 orientation:UIImageOrientationLeft];
    return newImage;
}
CGImageRef grayscaleCGImageFromCGImage(CGImageRef inputImage)
{
    size_t width = CGImageGetWidth(inputImage);
    size_t height = CGImageGetHeight(inputImage);
    // Create a grayscale context and render the input image into it
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
                                                 4*width, colorspace, kCGBitmapByteOrderDefault);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), inputImage);
    // Get an image representation of the grayscale context which the input
    // was rendered into.
    CGImageRef outputImage = CGBitmapContextCreateImage(context);
    // Cleanup
    CGContextRelease(context);
    CGColorSpaceRelease(colorspace);
    return (CGImageRef)[(id)outputImage autorelease];
}
And then I use the following code to dump the pixel data to the console:
CGImageRef inputImage = [imgIn CGImage];
CGDataProviderRef dataProvider = CGImageGetDataProvider(inputImage);
CFDataRef imageData = CGDataProviderCopyData(dataProvider);
const UInt8 *rawData = CFDataGetBytePtr(imageData);
size_t width = CGImageGetWidth(inputImage);
size_t height = CGImageGetHeight(inputImage);
size_t numPixels = height * width;
for (int i = 0; i < numPixels; i++)
{
    if ((i % 16) == 0)
        NSLog(@" -%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-\n\n", rawData[i],
              rawData[i+1], rawData[i+2], rawData[i+3], rawData[i+4], rawData[i+5],
              rawData[i+6], rawData[i+7], rawData[i+8], rawData[i+9], rawData[i+10],
              rawData[i+11], rawData[i+12], rawData[i+13], rawData[i+14], rawData[i+15]);
}
I consistently get output like the following:
-216-217-214-215-217-215-216-213-214-214-214-215-215-217-216-216-
-219-219-216-219-220-217-212-214-215-214-217-220-219-217-214-219-
-216-216-218-217-218-221-217-213-214-212-214-212-212-214-214-213-
-213-213-212-213-212-214-216-214-212-210-211-210-213-210-213-208-
-212-208-208-210-206-207-206-207-210-205-206-208-209-210-210-207-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
... (13 more identical all-zero rows) ...
(this pattern repeats for the remaining bytes, 80 bytes of pixel data in the 200's, depending on lighting, followed by 240 bytes of zeros -- there's a total of 1600 bytes since the image is 80x20)
This:
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
4*width, colorspace, kCGBitmapByteOrderDefault);
Should be:
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
width, colorspace, kCGBitmapByteOrderDefault);
In other words, for an 8 bit gray image, the number of bytes per row is the same as the width.
You've probably forgotten image stride: you're assuming that your images are stored as width*height, but several systems store them as stride*height, where stride > width. The zeros are padding that you should skip.
By the way, what do you mean by "binarize"? I guess you mean quantize to fewer gray levels?
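A stride-aware version of the dump loop, as a sketch reusing the question's variables plus the image's real bytes-per-row:

size_t bytesPerRow = CGImageGetBytesPerRow(inputImage);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        UInt8 gray = rawData[y * bytesPerRow + x]; // 1 byte per pixel; the padding is skipped
        UInt8 bit = (gray > 127) ? 1 : 0;          // e.g. binarize against a threshold
        // ... collect bit ...
    }
}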

Reading and editing pixels of image on iPhone

Curious about how to read and edit a picture's pixels on the iPhone. Am I better off using an array of points with colours?
I want to do things like: if a CGPoint intersects with a "brown" spot on the picture, set the colour of all brown pixels within a radius to white. More questions to come, but this is a start.
Cheers
The picture data is available to you as precisely that: a two-dimensional array of pixels, each pixel represented by a 32-bit integer. For each of the color components (red, green, blue, and alpha) there is an 8-bit value. The ordering of these 8-bit-wide values within the 32-bit integer varies with the format of the picture data. The Apple documentation about all this is really good. While there is some attractive Apple machinery using CGDataProviderCopyData to give you a pointer into the actual data storage of a UIImage, in practice this can be a headache, because the format of that internal storage can vary widely from one image to the next. In practice, most people doing image processing seem to use this approach:
CGImageRef image = [yourImage CGImage]; // yourImage is the UIImage you're inspecting
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
// rawData now contains the image data in the RGBA8888 format.
// For any pixel at coordinate (x, y), the components are:
int pixelIndex = (bytesPerRow * y) + x * bytesPerPixel;
unsigned char red   = rawData[pixelIndex];
unsigned char green = rawData[pixelIndex + 1];
unsigned char blue  = rawData[pixelIndex + 2];
unsigned char alpha = rawData[pixelIndex + 3];
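To push edited pixels back into a UIImage, a sketch continuing from the code above (it assumes rawData, width, height, bitsPerComponent, and bytesPerRow are still in scope after you've modified the bytes in place):

// rebuild an image from the modified bytes
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef editCtx = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, cs,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef editedRef = CGBitmapContextCreateImage(editCtx);
UIImage *editedImage = [UIImage imageWithCGImage:editedRef];
CGImageRelease(editedRef);
CGContextRelease(editCtx);
CGColorSpaceRelease(cs);
free(rawData); // the bytes were copied into the new image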