CGBitmapContextCreateImage error - iphone

I am getting errors like this in my console:
CGBitmapContextCreate: invalid data bytes/row: should be at least 1920 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst.
CGBitmapContextCreateImage: invalid context 0x0
I use the code below:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}

This is what fixed my problem:
As "Wildaker" suggested, it is "the code that's calling it", more or less.
In the Apple example the output pixel format is set explicitly; perhaps you skipped this part to simplify? Without this setting, the camera delivers frames in its default (non-BGRA) pixel format, so the buffer's bytes per row and component layout don't match what CGBitmapContextCreate expects:
output.videoSettings = [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey: (id)kCVPixelBufferPixelFormatTypeKey];

Also,
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
should be changed to:
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
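For completeness, here is roughly how the whole video data output might be configured so that imageFromSampleBuffer: receives 32BGRA frames. This is only a sketch; the session and queue variables (session, videoQueue) are assumed to exist elsewhere in your setup code and are not from the original post:
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
// Ask for 32-bit BGRA frames so the bitmap-context code above works
videoOut.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[videoOut setAlwaysDiscardsLateVideoFrames:YES];
// "videoQueue" is an assumed dispatch queue for the delegate callbacks
[videoOut setSampleBufferDelegate:self queue:videoQueue];
if ([session canAddOutput:videoOut]) {
[session addOutput:videoOut]; // "session" is your AVCaptureSession
}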

Related

Converting a C array to a UIImage (for iOS)

I currently have some image data in a C array, which contains RGBA data.
float array[length][4]
I am trying to get this into a UIImage, which, it looks like, can be initialized from files, NSData, or URLs. Since the other two methods are slow, I am most interested in the NSData approach.
I can get all of these values into an NSArray like so:
for (int i = 0; i < image.size.width * image.size.height; i++) {
UIColor *replace = [UIColor colorWithRed:array[i][0] green:array[i][1] blue:array[i][2] alpha:array[i][3]];
[output replaceObjectAtIndex:i withObject:replace];
}
So I have an NSArray full of UIColor objects. I have tried many methods, but how do I convert this to a UIImage?
I think it would be straightforward. A function sorta like imageWithData:data R:0 B:1 G:2 A:3 length:length width:width would be nice, but there is no such function as far as I can tell.
imageWithData: is meant for image data in a standard image file format, e.g. a PNG or JPEG file that you have in memory. It's not suitable for creating images from raw data.
For that, you would typically create a bitmap graphics context, passing your array, pixel format, size, etc. to the CGBitmapContextCreate function. When you've created a bitmap context, you can create an image from it using CGBitmapContextCreateImage, which gives you a CGImageRef that you can pass to the UIImage method imageWithCGImage:.
Here's a basic example that creates a tiny 1×2 pixel image with one red pixel and one green pixel. It just uses hard-coded pixel values that are meant to show the order of the color components; normally, you would of course get this data from somewhere else:
size_t width = 2;
size_t height = 1;
size_t bytesPerPixel = 4;
//4 bytes per pixel (R, G, B, A) = 8 bytes for a 1x2 pixel image:
unsigned char rawData[8] = {255, 0, 0, 255, //red
0, 255, 0, 255}; //green
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
size_t bytesPerRow = bytesPerPixel * width;
size_t bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
//This is your image:
UIImage *image = [UIImage imageWithCGImage:cgImage];
//Don't forget to clean up:
CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
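Applied to the float array from your question, the same approach could look roughly like the sketch below. It assumes the floats are in the 0.0-1.0 range, stored row by row with length == width * height, and that alpha is either 1.0 or already premultiplied; the function name is made up for illustration. Letting CGBitmapContextCreate allocate the pixel buffer (by passing NULL) avoids having to manage the raw memory yourself:
UIImage *imageFromFloatRGBA(float (*array)[4], size_t width, size_t height)
{
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Let Core Graphics allocate the pixel buffer, then fill it in place
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
unsigned char *rawData = CGBitmapContextGetData(context);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
for (size_t y = 0; y < height; y++) {
for (size_t x = 0; x < width; x++) {
size_t i = y * width + x; // index into the float array
unsigned char *px = rawData + y * bytesPerRow + x * 4;
px[0] = (unsigned char)(array[i][0] * 255.0f); // R
px[1] = (unsigned char)(array[i][1] * 255.0f); // G
px[2] = (unsigned char)(array[i][2] * 255.0f); // B
px[3] = (unsigned char)(array[i][3] * 255.0f); // A
}
}
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];
// Clean up everything that was created here
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
return image;
}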

How to correctly orient image generated from AVCaptureVideoDataOutputSampleBufferDelegate

I'm using AVCaptureVideoDataOutputSampleBufferDelegate and I receive a CMSampleBufferRef which I convert to a UIImage - but the resulting image isn't correctly oriented.
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *img = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
If I hold the iPhone in portrait mode the resulting image is rotated 90 degrees (anti-clockwise).
If I hold the iPhone in landscape left orientation (home button is on the left) the resulting image is the right way up.
If I hold the iPhone in landscape right orientation (home button is on the right) the resulting image is upside-down.
I'm using the back camera of the device - but I will also be using the front camera, so the resulting image should always have the correct orientation.
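No accepted fix is quoted here, but one common approach (a sketch only, not the original poster's solution) is to pass an explicit orientation when wrapping the CGImage, based on the device orientation at capture time. The mapping below matches the rotations described above for the back camera; the front camera usually needs a mirrored variant:
// Choose a UIImageOrientation from the device orientation and pass it
// when wrapping the CGImage produced by the code above.
UIImageOrientation imageOrientation;
switch ([[UIDevice currentDevice] orientation]) {
case UIDeviceOrientationLandscapeRight: // home button on the left
imageOrientation = UIImageOrientationUp;
break;
case UIDeviceOrientationLandscapeLeft: // home button on the right
imageOrientation = UIImageOrientationDown;
break;
case UIDeviceOrientationPortraitUpsideDown:
imageOrientation = UIImageOrientationLeft;
break;
case UIDeviceOrientationPortrait:
default:
imageOrientation = UIImageOrientationRight;
break;
}
UIImage *img = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:imageOrientation];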

AVFoundation - Get grayscale image from Y plane (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)

I'm using AVFoundation to take a video and I'm recording in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format. I want to make a grayscale image directly from the Y plane of the YpCbCr format.
I've tried to create a CGContextRef by calling CGBitmapContextCreate, but the problem is that I don't know what color space and pixel format to choose.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
/* Get informations about the Y plane */
uint8_t *YPlaneAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
/* the problematic part of code */
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef newContext = CGBitmapContextCreate(YPlaneAddress,
width, height, 8, bytesPerRow, colorSpace, kCVPixelFormatType_1Monochrome);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
UIImage *grayscaleImage = [[UIImage alloc] initWithCGImage:newImage];
// process the grayscale image ...
}
When I run the code above, I get these errors:
<Error>: CGBitmapContextCreateImage: invalid context 0x0
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 16 bits/pixel; 1-component color space; kCGImageAlphaPremultipliedLast; 192 bytes/row.
PS: Sorry for my English.
If I'm not mistaken, you shouldn't go via a CGContext. Instead, you should create a data provider and then create the image directly.
Another mistake in your code is the use of the kCVPixelFormatType_1Monochrome constant. It's a constant used in video processing (the AV libraries), not in Core Graphics (the CG libraries). Just use kCGImageAlphaNone. The fact that a single (gray) component per pixel is needed (instead of three as for RGB) is derived from the color space.
It could look like this:
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, YPlaneAddress,
height * bytesPerRow, NULL);
CGImageRef newImage = CGImageCreate(width, height, 8, 8, bytesPerRow,
colorSpace, kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
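To get back to a UIImage and clean up, something along these lines should work (a sketch that simply mirrors the rest of the code in your question):
// Wrap the CGImage and release what was created above
UIImage *grayscaleImage = [[UIImage alloc] initWithCGImage:newImage];
CGImageRelease(newImage);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// ... use grayscaleImage, then release/autorelease it under MRC ...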

How is the image data interpreted for a grayscale image on an iPhone?

How do I make sense of the image data for a grayscale image given the following scenario: I capture video data from the "sample buffer" and extract an 80x20 section and then turn that into a grayscale UIImage. But when I examine the raw pixel bytes I am unable to make sense of them in a way that would allow me to go on and "binarize" them (my real goal).
When I simply save the UIImage to the photo album using UIImageWriteToSavedPhotosAlbum to verify just what kind of image data I have, I indeed get a plain, white 80x20 image (it's actually light-grayish). I captured a plain white image to simplify things, expecting to see only values between, say, 200 or so and 255, and yet there are sections of the image data full of zeroes, that clearly suggest rows of black pixels. Any help is appreciated. The relevant code and the image data (16 pixels at a time) are below.
Here is how I create the 80x20 grayscale image from a portion of the CMSampleBufferRef video data:
UIImage *imageFromImage(UIImage *image, CGRect rect)
{
CGImageRef sourceImageRef = [image CGImage];
CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
CGImageRef grayScaleImg = grayscaleCGImageFromCGImage(newImageRef);
CGImageRelease(newImageRef);
UIImage *newImage = [UIImage imageWithCGImage:grayScaleImg scale:1.0 orientation:UIImageOrientationLeft];
return newImage;
}
CGImageRef grayscaleCGImageFromCGImage(CGImageRef inputImage)
{
size_t width = CGImageGetWidth(inputImage);
size_t height = CGImageGetHeight(inputImage);
// Create a gray scale context and render the input image into that
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
4*width, colorspace, kCGBitmapByteOrderDefault);
CGContextDrawImage(context, CGRectMake(0,0, width,height), inputImage);
// Get an image representation of the grayscale context which the input
// was rendered into.
CGImageRef outputImage = CGBitmapContextCreateImage(context);
// Cleanup
CGContextRelease(context);
CGColorSpaceRelease(colorspace);
return (CGImageRef)[(id)outputImage autorelease];
}
and then, when I use the following code to dump the pixel data to the Console:
CGImageRef inputImage = [imgIn CGImage];
CGDataProviderRef dataProvider = CGImageGetDataProvider(inputImage);
CFDataRef imageData = CGDataProviderCopyData(dataProvider);
const UInt8 *rawData = CFDataGetBytePtr(imageData);
size_t width = CGImageGetWidth(inputImage);
size_t height = CGImageGetHeight(inputImage);
size_t numPixels = height * width;
for (int i = 0; i < numPixels ; i++)
{
if ((i % 16) == 0)
NSLog(@" -%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-%i-\n\n", rawData[i],
rawData[i+1], rawData[i+2], rawData[i+3], rawData[i+4], rawData[i+5],
rawData[i+6], rawData[i+7], rawData[i+8], rawData[i+9], rawData[i+10],
rawData[i+11], rawData[i+12], rawData[i+13], rawData[i+14], rawData[i+15]);
}
I consistently get output like following:
-216-217-214-215-217-215-216-213-214-214-214-215-215-217-216-216-
-219-219-216-219-220-217-212-214-215-214-217-220-219-217-214-219-
-216-216-218-217-218-221-217-213-214-212-214-212-212-214-214-213-
-213-213-212-213-212-214-216-214-212-210-211-210-213-210-213-208-
-212-208-208-210-206-207-206-207-210-205-206-208-209-210-210-207-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
(this pattern repeats for the remaining bytes, 80 bytes of pixel data in the 200's, depending on lighting, followed by 240 bytes of zeros -- there's a total of 1600 bytes since the image is 80x20)
This:
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
4*width, colorspace, kCGBitmapByteOrderDefault);
Should be:
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
width, colorspace, kCGBitmapByteOrderDefault);
In other words, for an 8 bit gray image, the number of bytes per row is the same as the width.
You've probably forgotten the image stride - you're assuming that your images are stored as width*height, but several systems store them as stride*height where stride > width. The zeros are padding that you should skip.
By the way, what do you mean by "binarize"? I guess you mean quantize to fewer grey levels?
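If you want the dumping/binarizing code to be robust against such padding, a stride-aware loop is a small change. A sketch, reusing the variables from the dumping code above and assuming the 8-bit grayscale layout discussed here:
// Index pixels via the image's actual bytes-per-row so any row padding is skipped
size_t bytesPerRow = CGImageGetBytesPerRow(inputImage);
for (size_t y = 0; y < height; y++) {
for (size_t x = 0; x < width; x++) {
UInt8 gray = rawData[y * bytesPerRow + x];
UInt8 binary = (gray > 127) ? 1 : 0; // example threshold for "binarizing"
// ... use gray / binary ...
}
}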

How to grab YUV formatted video from the camera, display it and process it

I am writing an iPhone (iOS 4) program that captures live video from the camera and processes it in real time.
I prefer to capture in kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format for easier processing (I need to process the Y channel). How do I display data in this format? I suppose I need to somehow convert it to a UIImage and then put it in some UIImageView?
Currently I have code that displays kCVPixelFormatType_32BGRA data, but naturally it does not work with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
This is the code I use now for the transformation; any help/sample on how to do the same for kCVPixelFormatType_420YpCbCr8BiPlanarFullRange would be appreciated. (Criticism of my current method is also welcome.)
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
Answering my own question.
This solved the problem I had (which was to grab YUV output, display it and process it), although it's not exactly the answer to the question:
To grab YUV output from the camera:
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
[videoOut setAlwaysDiscardsLateVideoFrames:YES];
[videoOut setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
To display it as is, use AVCaptureVideoPreviewLayer; it does not require much code. (See, for example, the FindMyiCon sample in the WWDC samples pack.)
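A minimal sketch of that preview-layer setup (the capture session and the hosting view are assumed to exist elsewhere in your code):
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];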
To process the YUV Y channel (bi-planar in this case, so it's all in a single chunk; you can also use memcpy instead of looping, as shown in the sketch after the code below):
- (void)processPixelBuffer: (CVImageBufferRef)pixelBuffer {
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
int bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
int bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
// allocate space for ychannel, reallocating as needed.
if (bufferWidth != y_channel.width || bufferHeight != y_channel.height)
{
if (y_channel.data) free(y_channel.data);
y_channel.width = bufferWidth;
y_channel.height = bufferHeight;
y_channel.data = malloc(y_channel.width * y_channel.height);
}
uint8_t *yc = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
int total = bufferWidth * bufferHeight;
for(int k=0;k<total;k++)
{
y_channel.data[k] = yc[k]; // copy y channel
}
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
}
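And here is the memcpy variant mentioned above, as a sketch using the same variables as the loop; copying row by row also skips any padding at the end of each row when the plane's bytes-per-row is larger than the width:
// Row-by-row copy of the Y plane; skips padding when bytesPerRow > bufferWidth
size_t yBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (int row = 0; row < bufferHeight; row++) {
memcpy(y_channel.data + row * bufferWidth, yc + row * yBytesPerRow, bufferWidth);
}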