Converting a C array to a UIImage (for iOS)

I currently have some image data in a C array, which contains RGBA data.
float array[length][4]
I am trying to get this into a UIImage, but it looks like UIImages can only be initialized from files, NSData, and URLs. Since the other two methods are slow, I am most interested in the NSData approach.
I can get all of these values into an NSArray like so:
for (i = 0; i < image.size.width * image.size.height; i++) {
    replace = [UIColor colorWithRed:array[i][0] green:array[i][1] blue:array[i][2] alpha:array[i][3]];
    [output replaceObjectAtIndex:i withObject:replace];
}
So I have an NSArray full of UIColor objects. I have tried many methods, but how do I convert this to a UIImage?
I would have thought this would be straightforward. Something like imageWithData:data R:0 B:1 G:2 A:3 width:width height:height would be nice, but as far as I can tell no such method exists.

imageWithData: is meant for image data in a standard image file format, e.g. a PNG or JPEG file that you have in memory. It's not suitable for creating images from raw data.
For that, you would typically create a bitmap graphics context, passing your array, pixel format, size, etc. to the CGBitmapContextCreate function. When you've created a bitmap context, you can create an image from it using CGBitmapContextCreateImage, which gives you a CGImageRef that you can pass to the UIImage method imageWithCGImage:.
Here's a basic example that creates a tiny two-pixel image (2 pixels wide, 1 pixel high) with one red pixel and one green pixel. It just uses hard-coded pixel values that are meant to show the order of the color components; normally you would, of course, get this data from somewhere else:
size_t width = 2;
size_t height = 1;
size_t bytesPerPixel = 4;
// 4 bytes per pixel (R, G, B, A) = 8 bytes for a 2x1 pixel image:
unsigned char rawData[8] = {255, 0, 0, 255,   // red
                            0, 255, 0, 255};  // green
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
size_t bytesPerRow = bytesPerPixel * width;
size_t bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
//This is your image:
UIImage *image = [UIImage imageWithCGImage:cgImage];
//Don't forget to clean up:
CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
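If your source data is the float RGBA array from the question, the float components (0.0 to 1.0) have to be converted to 8-bit bytes first. Here is a minimal, untested sketch of that conversion; it assumes the floats are already premultiplied by alpha (as kCGImageAlphaPremultipliedLast expects), and the helper name is made up for illustration:
// Hypothetical helper: converts a float [width*height][4] RGBA array
// (components in the range 0.0-1.0) into a UIImage.
UIImage *imageFromFloatRGBA(float (*array)[4], size_t width, size_t height)
{
    size_t pixelCount = width * height;
    unsigned char *rawData = malloc(pixelCount * 4);
    for (size_t i = 0; i < pixelCount; i++) {
        rawData[4*i + 0] = (unsigned char)(array[i][0] * 255.0f); // red
        rawData[4*i + 1] = (unsigned char)(array[i][1] * 255.0f); // green
        rawData[4*i + 2] = (unsigned char)(array[i][2] * 255.0f); // blue
        rawData[4*i + 3] = (unsigned char)(array[i][3] * 255.0f); // alpha
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, 4 * width,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(rawData);
    return image;
}
This also skips the intermediate NSArray of UIColor objects entirely, which would be very slow for a whole image.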


Getting pixel data from UIImage on iPhone, why is the red one always 255?

I have been looking into this topic and am using this method, with varying luck:
-(RGBPixel *)bitmap {
    CGImageRef cgImage = [image CGImage];
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    bitmapData = CGDataProviderCopyData(provider);
    [self setPixelByteData:malloc(CFDataGetLength(bitmapData))];
    CFDataGetBytes(bitmapData, CFRangeMake(0, CFDataGetLength(bitmapData)), pixelByteData);
    pixelData = (RGBPixel *)pixelByteData;
    colorLabel.text = [[NSString alloc] initWithFormat:@"Pixel data: red (%i), green (%i), blue (%i).", pixelData[100].red, pixelData[100].green, pixelData[100].blue];
    return pixelData;
}
The only thing that doesn't work as I want is the red pixel data: it always says 255, while the green and blue behave as expected. What am I doing wrong?
As @Nick hinted, are you certain that you know the format of the image (bitsPerComponent, bitsPerPixel, bytesPerRow, color space)? The data could be in int or float format. It could be arranged RGB, RGBA, or ARGB.
Unless you created the image yourself from raw data you may not be certain. You can use functions like CGImageGetBitmapInfo(), CGImageGetAlphaInfo(), CGImageGetColorSpace() to find out.
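For example, a quick sketch for dumping that information to the Console, using the inspection functions mentioned above (cgImage here is assumed to come from [image CGImage]):
CGImageRef cgImage = [image CGImage];
NSLog(@"bits/component: %zu, bits/pixel: %zu, bytes/row: %zu",
      CGImageGetBitsPerComponent(cgImage),
      CGImageGetBitsPerPixel(cgImage),
      CGImageGetBytesPerRow(cgImage));
NSLog(@"alpha info: %u, bitmap info: %u",
      (unsigned)CGImageGetAlphaInfo(cgImage),
      (unsigned)CGImageGetBitmapInfo(cgImage));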

split UIImage by colors and create 2 images

I have looked through replacing colors in an image but cannot get it to work how I need, because I am trying to do it with every color but one, as well as with transparency.
What I am looking for is a way to take an image and split out one color (say, all the pure black) from it, then make a new image containing just that split-out portion on a transparent background.
(Here is an example of the idea: say I want to take a screenshot of this page, make every color other than pure black transparent, and save that new image to the library or put it into a UIImageView.)
I have looked into CGImageCreateWithMaskingColors but can't seem to do what I need with the transparent portion, and I don't really understand the colorMasking input other than that you can provide it with a {Rmin,Rmax,Gmin,Gmax,Bmin,Bmax} color mask; when I do, it colors everything. Any ideas or input would be great.
Sounds like you're going to have to get access to the underlying bytes and write code to process them directly. You can use CGImageGetDataProvider() to get access to the data of an image, but there's no guarantee that the format will be something you know how to handle. Alternately you can create a new CGContextRef using a specific format you know how to handle, then draw the original image into your new context, then process the underlying data. Here's a quick attempt at doing what you want (uncompiled):
- (UIImage *)imageWithBlackPixels:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    // create a premultiplied ARGB context with 32bpp
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bpc = 8; // bits per component
    size_t bpp = bpc * 4 / 8; // bytes per pixel
    size_t bytesPerRow = bpp * width;
    unsigned char *data = malloc(bytesPerRow * height);
    // default (big-endian) byte order, so each pixel is laid out in memory as A, R, G, B
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst;
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bpc, bytesPerRow, colorspace, bitmapInfo);
    CGColorSpaceRelease(colorspace);
    if (ctx == NULL) {
        // couldn't create the context - double-check the parameters?
        free(data);
        return nil;
    }
    // draw the image into the context
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    // replace all non-black pixels with transparent,
    // preserving the existing transparency on black pixels
    for (size_t y = 0; y < height; y++) {
        size_t rowStart = bytesPerRow * y;
        for (size_t x = 0; x < width; x++) {
            size_t pixelOffset = rowStart + x * bpp;
            // check the R, G, B components of the pixel (bytes 1-3; byte 0 is alpha)
            if (data[pixelOffset+1] != 0 || data[pixelOffset+2] != 0 || data[pixelOffset+3] != 0) {
                // this pixel contains non-black; zero it out
                memset(&data[pixelOffset], 0, 4);
            }
        }
    }
    // create our new image and release the context data
    CGImageRef newCGImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    free(data);
    UIImage *newImage = [UIImage imageWithCGImage:newCGImage scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(newCGImage);
    return newImage;
}
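If that works out, usage would be along these lines (the image variable and the save call are just placeholders for whatever you do with the result):
UIImage *blackOnly = [self imageWithBlackPixels:originalImage];
UIImageWriteToSavedPhotosAlbum(blackOnly, nil, nil, NULL);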

Creating UIImage from raw RGBA data

I have been trying to convert an array of RGBA data (integer bytes) into a UIImage. My code looks as follows:
/*height and width are integers denoting the dimensions of the image*/
unsigned char *rawData = malloc(width*height*4);
for (int i = 0; i < width*height; ++i)
{
    rawData[4*i] = <red_val>;
    rawData[4*i+1] = <green_val>;
    rawData[4*i+2] = <blue_val>;
    rawData[4*i+3] = 255;
}
/* I have the correct values displayed
   - ensuring the rawData is well populated */
NSLog(@"(%i,%i,%i,%f)", rawData[0], rawData[1], rawData[2], rawData[3]/255.0f);
NSLog(@"(%i,%i,%i,%f)", rawData[4], rawData[5], rawData[6], rawData[7]/255.0f);
NSLog(@"(%i,%i,%i,%f)", rawData[8], rawData[9], rawData[10], rawData[11]/255.0f);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
rawData,
width*height*4,
NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4*width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width,
height,
8,
32,
4*width,colorSpaceRef,
bitmapInfo,
provider,NULL,NO,renderingIntent);
/* I get the correct dimensions displayed here */
NSLog(@"width=%i, height: %i", CGImageGetWidth(imageRef),
      CGImageGetHeight(imageRef));
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
/*This is where the problem lies.
The width, height displayed are of completely different dimensions
viz. the width is always zero and the height is a very huge number */
NSLog(#"resultImg width:%i, height:%i",
newImage.size.width,newImage.size.height);
return newImage;
The output image that I receive is an image of width 0 and height 1080950784 (my initial height and width were 240 and 240). I have been trying to get this sorted out and have checked many related forums, e.g. (link text), on how to go about it, but with little success.
It turns out the problem is a pretty silly mistake that both of us overlooked: UIImage dimensions are stored as floating-point values (CGFloat), not integers. :D
Try
NSLog(#"resultImg width:%f, height:%f",
newImage.size.width,newImage.size.height);
Instead. The image size has been transferred correctly.
The problem is solved now. I get the image that I want displayed, but was unable to figure out why the width and height appeared different. Essentially nothing is wrong with the program per se; the only problem was the reported width and height.
I just independently verified this problem; I think it should be reported as a bug to Apple. +(UIImage *)imageWithCGImage: doesn't properly transfer the width and height of the source CGImageRef to the UIImage.
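As a side note, the code above never frees rawData or releases the Core Graphics objects. A minimal cleanup sketch, assuming the buffer should live exactly as long as the data provider (the callback name is made up):
static void releaseRawData(void *info, const void *data, size_t size)
{
    free((void *)data);
}
// ...
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rawData, width*height*4, releaseRawData);
// ... and after creating the UIImage:
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
CGImageRelease(imageRef);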

How is the image data interpreted for a grayscale image on an iPhone?

How do I make sense of the image data for a grayscale image given the following scenario: I capture video data from the "sample buffer" and extract an 80x20 section and then turn that into a grayscale UIImage. But when I examine the raw pixel bytes I am unable to make sense of them in a way that would allow me to go on and "binarize" them (my real goal).
When I simply save the UIImage to the photo album using UIImageWriteToSavedPhotosAlbum to verify just what kind of image data I have, I indeed get a plain, white 80x20 image (it's actually light-grayish). I captured a plain white image to simplify things, expecting to see only values between, say, 200 or so and 255, and yet there are sections of the image data full of zeroes, that clearly suggest rows of black pixels. Any help is appreciated. The relevant code and the image data (16 pixels at a time) are below.
Here is how I create the 80x20 grayscale image from a portion of the CMSampleBufferRef video data:
UIImage *imageFromImage(UIImage *image, CGRect rect)
{
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    CGImageRef grayScaleImg = grayscaleCGImageFromCGImage(newImageRef);
    CGImageRelease(newImageRef);
    UIImage *newImage = [UIImage imageWithCGImage:grayScaleImg scale:1.0 orientation:UIImageOrientationLeft];
    return newImage;
}
CGImageRef grayscaleCGImageFromCGImage(CGImageRef inputImage)
{
    size_t width = CGImageGetWidth(inputImage);
    size_t height = CGImageGetHeight(inputImage);
    // Create a grayscale context and render the input image into it
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
                                                 4*width, colorspace, kCGBitmapByteOrderDefault);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), inputImage);
    // Get an image representation of the grayscale context which the input
    // was rendered into.
    CGImageRef outputImage = CGBitmapContextCreateImage(context);
    // Cleanup
    CGContextRelease(context);
    CGColorSpaceRelease(colorspace);
    return (CGImageRef)[(id)outputImage autorelease];
}
and then, when I use the following code to dump the pixel data to the Console:
CGImageRef inputImage = [imgIn CGImage];
CGDataProviderRef dataProvider = CGImageGetDataProvider(inputImage);
CFDataRef imageData = CGDataProviderCopyData(dataProvider);
const UInt8 *rawData = CFDataGetBytePtr(imageData);
size_t width = CGImageGetWidth(inputImage);
size_t height = CGImageGetHeight(inputImage);
size_t numPixels = height * width;
for (int i = 0; i < numPixels; i++)
{
    if ((i % 16) == 0)
        NSLog(@" -%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-\n\n", rawData[i],
              rawData[i+1], rawData[i+2], rawData[i+3], rawData[i+4], rawData[i+5],
              rawData[i+6], rawData[i+7], rawData[i+8], rawData[i+9], rawData[i+10],
              rawData[i+11], rawData[i+12], rawData[i+13], rawData[i+14], rawData[i+15]);
}
I consistently get output like following:
-216-217-214-215-217-215-216-213-214-214-214-215-215-217-216-216-
-219-219-216-219-220-217-212-214-215-214-217-220-219-217-214-219-
-216-216-218-217-218-221-217-213-214-212-214-212-212-214-214-213-
-213-213-212-213-212-214-216-214-212-210-211-210-213-210-213-208-
-212-208-208-210-206-207-206-207-210-205-206-208-209-210-210-207-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
(this pattern repeats for the remaining bytes, 80 bytes of pixel data in the 200's, depending on lighting, followed by 240 bytes of zeros -- there's a total of 1600 bytes since the image is 80x20)
This:
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
4*width, colorspace, kCGBitmapByteOrderDefault);
Should be:
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
width, colorspace, kCGBitmapByteOrderDefault);
In other words, for an 8 bit gray image, the number of bytes per row is the same as the width.
You've probably forgotten image stride - you're assuming that your images are stored as width*height but several systems store them as stride*height where stride > width. The zeros are padding that you should skip.
By the way, what do you mean by "binarize"? I guess you mean quantize to fewer gray levels?
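Building on both answers, here is a rough, untested sketch of iterating the grayscale pixels while respecting the image's actual bytes-per-row (which may be larger than the width because of padding); it reuses imgIn from the dump code above and assumes an 8-bit, one-component-per-pixel grayscale CGImage:
CGImageRef inputImage = [imgIn CGImage];
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(inputImage));
const UInt8 *rawData = CFDataGetBytePtr(imageData);
size_t width = CGImageGetWidth(inputImage);
size_t height = CGImageGetHeight(inputImage);
size_t bytesPerRow = CGImageGetBytesPerRow(inputImage); // the stride; may be > width
for (size_t y = 0; y < height; y++)
{
    const UInt8 *row = rawData + y * bytesPerRow;
    for (size_t x = 0; x < width; x++)
    {
        UInt8 gray = row[x]; // one byte per pixel for 8-bit grayscale
        // ... do something with gray, e.g. compare against a threshold ...
    }
}
CFRelease(imageData);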

How to get the RGB values for a pixel on an image on the iphone

I am writing an iPhone application and need to essentially implement something equivalent to the 'eyedropper' tool in photoshop, where you can touch a point on the image and capture the RGB values for the pixel in question to determine and match its color. Getting the UIImage is the easy part, but is there a way to convert the UIImage data into a bitmap representation in which I could extract this information for a given pixel? A working code sample would be most appreciated, and note that I am not concerned with the alpha value.
A little more detail...
I posted earlier this evening with a consolidation and small addition to what had been said on this page - that can be found at the bottom of this post. I am editing the post at this point, however, to post what I propose is (at least for my requirements, which include modifying pixel data) a better method, as it provides writable data (whereas, as I understand it, the method provided by previous posts and at the bottom of this post provides a read-only reference to data).
Method 1: Writable Pixel Information
I defined constants
#define RGBA 4
#define RGBA_8_BIT 8
In my UIImage subclass I declared instance variables:
size_t bytesPerRow;
size_t byteCount;
size_t pixelCount;
CGContextRef context;
CGColorSpaceRef colorSpace;
UInt8 *pixelByteData;
// A pointer to an array of RGBA bytes in memory
RGBAPixel *pixelData;
The pixel struct (with alpha in this version)
typedef struct RGBAPixel {
byte red;
byte green;
byte blue;
byte alpha;
} RGBAPixel;
Bitmap function (returns premultiplied RGBA; divide RGB by A to get the unmultiplied RGB):
- (RGBAPixel *)bitmap {
    NSLog(@"Returning bitmap representation of UIImage.");
    // 8 bits each of red, green, blue, and alpha.
    [self setBytesPerRow:self.size.width * RGBA];
    [self setByteCount:bytesPerRow * self.size.height];
    [self setPixelCount:self.size.width * self.size.height];
    // Create RGB color space
    [self setColorSpace:CGColorSpaceCreateDeviceRGB()];
    if (!colorSpace)
    {
        NSLog(@"Error allocating color space.");
        return nil;
    }
    [self setPixelData:malloc(byteCount)];
    if (!pixelData)
    {
        NSLog(@"Error allocating bitmap memory. Releasing color space.");
        CGColorSpaceRelease(colorSpace);
        return nil;
    }
    // Create the bitmap context.
    // Pre-multiplied RGBA, 8 bits per component.
    // The source image format will be converted to the format specified here by CGBitmapContextCreate.
    [self setContext:CGBitmapContextCreate(
        (void *)pixelData,
        self.size.width,
        self.size.height,
        RGBA_8_BIT,
        bytesPerRow,
        colorSpace,
        kCGImageAlphaPremultipliedLast
    )];
    // Make sure we have our context
    if (!context) {
        free(pixelData);
        NSLog(@"Context not created!");
        return nil;
    }
    // Draw the image into the bitmap context.
    // The memory allocated for the context for rendering will then contain the raw image pixelData in the specified color space.
    CGRect rect = { { 0, 0 }, { self.size.width, self.size.height } };
    CGContextDrawImage(context, rect, self.CGImage);
    // Now we can get a pointer to the image pixelData associated with the bitmap context.
    pixelData = (RGBAPixel *)CGBitmapContextGetData(context);
    return pixelData;
}
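With the writable bitmap in place, usage could look roughly like this (assuming the UIImage subclass above; the image variable is a placeholder):
RGBAPixel *pixels = [myImage bitmap];
// Modify the first pixel; remember the component values are premultiplied by alpha.
pixels[0].red   = 255;
pixels[0].green = 0;
pixels[0].blue  = 0;
pixels[0].alpha = 255;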
Read-Only Data (Previous information) - method 2:
Step 1. I declared a type for byte:
typedef unsigned char byte;
Step 2. I declared a struct to correspond to a pixel:
typedef struct RGBPixel {
    byte red;
    byte green;
    byte blue;
} RGBPixel;
Step 3. I subclassed UIImageView and declared (with corresponding synthesized properties):
// Reference to Quartz CGImage for receiver (self)
CFDataRef bitmapData;
// Buffer holding raw pixel data copied from Quartz CGImage held in receiver (self)
UInt8* pixelByteData;
// A pointer to the first pixel element in an array
RGBPixel* pixelData;
Step 4. Subclass code I put in a method named bitmap (to return the bitmap pixel data):
// Get the bitmap data from the receiver's CGImage (see UIImage docs)
[self setBitmapData:CGDataProviderCopyData(CGImageGetDataProvider([self CGImage]))];
// Create a buffer to store the bitmap data (uninitialized memory, as long as the data)
[self setPixelByteData:malloc(CFDataGetLength(bitmapData))];
// Copy the image data into the allocated buffer
CFDataGetBytes(bitmapData, CFRangeMake(0, CFDataGetLength(bitmapData)), pixelByteData);
// Cast a pointer to the first element of pixelByteData
// Essentially what we're doing is making a second pointer that divides the byteData's units differently - instead of dividing each unit as 1 byte we will divide each unit as 3 bytes (1 pixel).
pixelData = (RGBPixel *)pixelByteData;
// Now you can access pixels by index: pixelData[index]
NSLog(@"Pixel data one red (%i), green (%i), blue (%i).", pixelData[0].red, pixelData[0].green, pixelData[0].blue);
// You can determine the desired index as row * width + column.
return pixelData;
Step 5. I made an accessor method:
- (RGBPixel *)pixelDataForRow:(int)row column:(int)column {
    // Return a pointer to the pixel data at (row, column).
    // Note: the index is row * width + column, not row * column.
    return &pixelData[row * (int)self.image.size.width + column];
}
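A quick usage sketch for the accessor (the view and coordinates here are placeholders):
RGBPixel *p = [imageView pixelDataForRow:10 column:20];
NSLog(@"red (%i), green (%i), blue (%i)", p->red, p->green, p->blue);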
Here is my solution for sampling the color of a UIImage.
This approach renders the requested pixel into a 1px large RGBA buffer and returns the resulting color values as a UIColor object. This is much faster than most other approaches I've seen and uses only very little memory.
This should work pretty well for something like a color picker, where you typically only need the value of one specific pixel at any given time.
UIImage+Picker.h
#import <UIKit/UIKit.h>
@interface UIImage (Picker)
- (UIColor *)colorAtPosition:(CGPoint)position;
@end
UIImage+Picker.m
#import "UIImage+Picker.h"
@implementation UIImage (Picker)
- (UIColor *)colorAtPosition:(CGPoint)position {
    CGRect sourceRect = CGRectMake(position.x, position.y, 1.f, 1.f);
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, sourceRect);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *buffer = malloc(4);
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    CGContextRef context = CGBitmapContextCreate(buffer, 1, 1, 8, 4, colorSpace, bitmapInfo);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0.f, 0.f, 1.f, 1.f), imageRef);
    CGImageRelease(imageRef);
    CGContextRelease(context);
    CGFloat r = buffer[0] / 255.f;
    CGFloat g = buffer[1] / 255.f;
    CGFloat b = buffer[2] / 255.f;
    CGFloat a = buffer[3] / 255.f;
    free(buffer);
    return [UIColor colorWithRed:r green:g blue:b alpha:a];
}
@end
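Usage is then a one-liner; note that the position is in pixel coordinates of the image (the image variable here is a placeholder):
UIColor *color = [myImage colorAtPosition:CGPointMake(10.0, 10.0)];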
You can't access the bitmap data of a UIImage directly.
You need to get the CGImage representation of the UIImage. Then get the CGImage's data provider, from that a CFData representation of the bitmap. Make sure to release the CFData when done.
CGImageRef cgImage = [image CGImage];
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef bitmapData = CGDataProviderCopyData(provider);
You will probably want to look at the bitmap info of the CGImage to get pixel order, image dimensions, etc.
Lajos's answer worked for me. To get the pixel data as an array of bytes, I did this:
const UInt8 *data = CFDataGetBytePtr(bitmapData);
More info: CFDataRef documentation.
Also, remember to include CoreGraphics.framework
Thanks everyone! Putting a few of these answers together I get:
- (UIColor *)colorFromImage:(UIImage *)image sampledAtPoint:(CGPoint)p {
    CGImageRef cgImage = [image CGImage];
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    const UInt8 *data = CFDataGetBytePtr(bitmapData);
    size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    int col = p.x * (width - 1);
    int row = p.y * (height - 1);
    const UInt8 *pixel = data + row * bytesPerRow + col * 4;
    UIColor *returnColor = [UIColor colorWithRed:pixel[0]/255. green:pixel[1]/255. blue:pixel[2]/255. alpha:1.0];
    CFRelease(bitmapData);
    return returnColor;
}
This just takes a point range 0.0-1.0 for both x and y. Example:
UIColor* sampledColor = [self colorFromImage:image
sampledAtPoint:CGPointMake(p.x/imageView.frame.size.width,
p.y/imageView.frame.size.height)];
This works great for me. I am making a couple of assumptions, like the bits per pixel and the RGBA color space, but this should work for most cases.
Another note - it is working on both Simulator and device for me - I have had problems with that in the past because of the PNG optimization that happened when it went on the device.
To do something similar in my application, I created a small off-screen bitmap context (CGContextRef) and then rendered the UIImage into it. This gave me a fast way to extract a number of pixels at once. It means you can set up the target bitmap in a format you find easy to parse and let Core Graphics do the hard work of converting between color models or bitmap formats.
I don't know how to index into the image data correctly based on a given X,Y coordinate. Does anyone know?
pixelPosition = (x * BytesPerPixel) + (y * imagewidth * BytesPerPixel);
// pitch (row padding) isn't an issue with this device as far as I know,
// so it can be left out of the math.
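If the rows can be padded (a non-zero pitch), a more general sketch of the lookup, assuming 4 bytes per pixel and that cgImage and data come from the CGDataProviderCopyData approach shown earlier:
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
size_t pixelPosition = (y * bytesPerRow) + (x * BytesPerPixel);
const UInt8 *pixel = data + pixelPosition;
// pixel[0..3] are the four components of the pixel at (x, y)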
Use ANImageBitmapRep which gives pixel-level access (read/write).