iPhone SDK: accessing indexed color PNG images

I'm interested in loading indexed-color PNG images in my iPhone application. Once they're loaded, I want to access the images on a per-pixel basis. In particular, I want to get the index of the color (rather than the color itself) of individual pixels.
Unfortunately, there does not seem to be a way to access pixels via the UIImage class, let alone the color index of a pixel. I'm also taking a look at the Quartz2D-related APIs, but so far things look bleak.
I'd greatly appreciate any suggestions.
I'm hoping I won't have to port the necessary code from libpng.
Thanks in advance!
UPDATE: I'm able to load the PNG using Quartz2D, but for some reason it automatically converts my indexed-color 8-bit PNG to 32-bit ARGB. Any thoughts on how I might prevent this?
UPDATE 2: The reason this is important is memory limitations. I'm trying to keep the raster from blowing up from eight bits per pixel to thirty-two to avoid the overhead. If anyone has the magic answer for me, 100 points are yours!

By loading the image as a CGImage rather than a UIImage, using CGImageCreateWithPNGDataProvider(), you might be able to get an indexed color space. See:
http://developer.apple.com/iphone/library/documentation/GraphicsImaging/Reference/CGColorSpace/Reference/reference.html
which lists CGColorSpaceCreateIndexed(), CGColorSpaceGetColorTable() and more. Use CGColorSpaceGetModel(CGImageGetColorSpace(img)) to see if the color space you end up with is an indexed one, then use CGImageGetDataProvider() to get a CGDataProviderRef, which you can use with CGDataProviderCopyData() to get to the actual bitmap data...
Edit: a bounty always gets things going. I tested it and it just works. (Sorry for the sloppy error handling; this is a proof of concept, of course.)
NSString *path = [[[NSBundle mainBundle] resourcePath]
                  stringByAppendingPathComponent:@"test.png"];
printf("path: %s\n", [path UTF8String]);
NSData *file = [[NSFileManager defaultManager] contentsAtPath:path];
if ( !file ) printf("file failed\n");
CGDataProviderRef src = CGDataProviderCreateWithCFData((CFDataRef)file);
if ( !src ) printf("data provider failed\n");
CGImageRef img = CGImageCreateWithPNGDataProvider(src, NULL, NO, kCGRenderingIntentDefault);
if ( !img ) printf("image failed\n");
printf("Color space model: %d, indexed=%d\n",
       CGColorSpaceGetModel(CGImageGetColorSpace(img)),
       kCGColorSpaceModelIndexed);
output:
path: /Users/..../638...8C12/test.app/test.png
Color space model: 5, indexed=5
qed?
ps. my test image is from libgd, through php, using
$img = imagecreatefrompng("whateverimage.png");
imagetruecolortopalette($img,false,256);
header("Content-Type: image/png");
imagepng($img);
which results in my case (b/w image) in
$ file test.png
test.png: PNG image, 2000 x 300, 1-bit colormap, non-interlaced
Edit 2: this is how you access the bitmap data. ASCII art ftw!
CGDataProviderRef data = CGImageGetDataProvider(img);
NSData *nsdata = (NSData *)CGDataProviderCopyData(data);
char *rawbuf = malloc([nsdata length]);
if ( !rawbuf ) printf("rawbuf failed\n");
[nsdata getBytes:rawbuf];

int w = CGImageGetWidth(img);
int h = CGImageGetHeight(img);
int bpl = CGImageGetBytesPerRow(img);
printf("width: %d (%d bpl), height: %d, pixels: %d, bytes: %d\n",
       w, bpl, h, bpl*h, (int)[nsdata length]);
if ( [nsdata length] != bpl*h )
{
    printf("%d pixels is not %d bytes, i may be crashing now...\n", bpl*h, (int)[nsdata length]);
}
for ( int y = 0; y < h; y++ )
{
    for ( int x = 0; x < w; x++ )
    {
        char c = rawbuf[y*bpl + x];
        while ( !isalnum(c) ) c += 31; // whoa! nudge each byte until it lands on a printable character
        printf("%c", c);
    }
    printf("\n");
}
free(rawbuf);
[nsdata release]; // CGDataProviderCopyData returns an owned copy
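Once you have the palette indices, you can also recover the actual colors by pulling the color table out of the indexed color space, as mentioned above. A minimal sketch, assuming an 8-bit indexed image (one byte per pixel) with an RGB base color space, reusing the variables from the snippet above:
CGColorSpaceRef space = CGImageGetColorSpace(img);
if ( CGColorSpaceGetModel(space) == kCGColorSpaceModelIndexed )
{
    size_t entries = CGColorSpaceGetColorTableCount(space);
    uint8_t table[256 * 3];            // assumes an RGB base color space: 3 bytes per entry
    CGColorSpaceGetColorTable(space, table);
    printf("palette has %zu entries\n", entries);

    uint8_t index = rawbuf[0];         // palette index of the first pixel
    printf("pixel 0 -> index %u = RGB(%u, %u, %u)\n", index,
           table[index*3], table[index*3 + 1], table[index*3 + 2]);
}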


Repeated Scene items in iOS YUV video capturing output

I capture video and handle the resulting YUV frames. The frames appear normally on my phone's screen, but my peer receives them distorted, as in the image shown in the original post: every item is repeated and shifted by some amount horizontally and vertically.
My captured video is 352x288, with YPixelCount = 101376 and UVPixelCount = YPixelCount / 4.
Any clue to solving this, or a starting point for understanding how to handle YUV video frames on iOS? This is the relevant capture setup:
NSNumber* recorderValue = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
[videoRecorderSession setSessionPreset:AVCaptureSessionPreset352x288];
And this is the captureOutput function
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (CMSampleBufferIsValid(sampleBuffer) && CMSampleBufferDataIsReady(sampleBuffer) && ([self isQueueStopped] == FALSE))
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        UInt8 *baseAddress[3] = {NULL, NULL, NULL};
        uint8_t *yPlaneAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        UInt32 yPixelCount = CVPixelBufferGetWidthOfPlane(imageBuffer, 0) * CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
        uint8_t *uvPlaneAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        UInt32 uvPixelCount = CVPixelBufferGetWidthOfPlane(imageBuffer, 1) * CVPixelBufferGetHeightOfPlane(imageBuffer, 1);
        UInt32 p, q, r;
        p = q = r = 0;
        memcpy(uPointer, uvPlaneAddress, uvPixelCount);
        memcpy(vPointer, uvPlaneAddress + uvPixelCount, uvPixelCount);
        memcpy(yPointer, yPlaneAddress, yPixelCount);
        baseAddress[0] = (UInt8 *)yPointer;
        baseAddress[1] = (UInt8 *)uPointer;
        baseAddress[2] = (UInt8 *)vPointer;
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }
}
Is there anything wrong with the above code ?
Your code doesn't look too bad. I can see two mistakes and one potential problem:
The uvPixelCount is incorrect. The YUV 420 format means that there is color information for each 2 by 2 pixel block. So the correct count is:
uvPixelCount = (width / 2) * (height / 2);
You write something about yPixelCount / 4, but I cannot see that in your code.
The UV information is interleaved, i.e. the second plane contains alternating U and V values. Put differently: there's a U value at every even byte address and a V value at every odd byte address. If you really need to separate the U and V information, memcpy won't do.
There can be some extra bytes after each pixel row. You should use CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0) to get the number of bytes between two rows. As a consequence, a single memcpy won't do; instead, you need to copy each pixel row separately to get rid of the extra bytes between the rows (see the sketch below).
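A minimal sketch of points 2 and 3 combined, copying the planes row by row and de-interleaving the CbCr plane; it assumes yPointer, uPointer and vPointer are pre-allocated buffers of the right sizes, and reuses imageBuffer, yPlaneAddress and uvPlaneAddress from your captureOutput: method:
size_t yWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t yHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
size_t yStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);

// copy the Y plane row by row, dropping any padding bytes at the end of each row
for (size_t row = 0; row < yHeight; row++) {
    memcpy(yPointer + row * yWidth, yPlaneAddress + row * yStride, yWidth);
}

size_t uvWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 1);   // number of Cb/Cr pairs per row
size_t uvHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 1);
size_t uvStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

// the second plane is interleaved Cb/Cr: split it into separate U and V buffers
for (size_t row = 0; row < uvHeight; row++) {
    uint8_t *src = uvPlaneAddress + row * uvStride;
    for (size_t col = 0; col < uvWidth; col++) {
        uPointer[row * uvWidth + col] = src[2 * col];      // even bytes: U (Cb)
        vPointer[row * uvWidth + col] = src[2 * col + 1];  // odd bytes: V (Cr)
    }
}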
All these things only explain part of the resulting image. The remaining distortion is probably due to differences between your code and what the receiving peer expects; you didn't write anything about that. Does the peer really need separated U and V values? Does it use 4:2:0 chroma subsampling as well? Does it use video range instead of full range as well?
If you provide more information, I can give you more hints.

Pixels of UIImage / char interpretation

The following code translates a jpg into a string of chars.
CGImageRef imageRef = example.CGImage;
NSData *data = (NSData *) CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
char *pixels = (char *)[data bytes];
a small piece of the output is:
"GJYˇKO[ˇFJSˇILQˇKNSˇKKSˇOOYˇMMYˇNNVˇOOWˇOOWˇNNVˇLLTˇKKSˇMMUˇOOWˇJMT"
I guess the 3 symbols together make up the information of one pixel, right?
And if this is true, how can I interpret these symbols (e.g., as colors)?
This will all depend on the color space of your image. If there is an alpha channel, 4 chars will be one pixel; without it, 3 chars. If it is RGBA, then G is your R, J is your G, Y is your B, and ˇ is your A; if it is ARGB, then G is your A, and so on. Also, when you convert your pixels to a char *, remember to keep track of the length, because any RGBA component of 0 will cause a null termination.
You can't print out a bitmap like that. As @Joe says, the chars are the individual color components, but any zero will terminate the string, and there are many other problems with trying to print the bytes from an NSData like that.
Assuming the RGBA color space, the way I'd approach it is like this:
typedef struct {
    unsigned char red;
    unsigned char green;
    unsigned char blue;
    unsigned char alpha;
} color;

const color *bitmap = (const color *)[data bytes];
for (int i = 0; i < [data length] / sizeof(color); i++) {
    NSLog(@"%d, %d, %d, %d", bitmap[i].red, bitmap[i].green, bitmap[i].blue, bitmap[i].alpha);
}
If your image is not in the RGBA color space then you will need to adjust the color struct to match it.
Also, this code was not compiled, but typed from my mind into this post. No promise is made that it won't reformat your hard drive. Please think before copying and pasting.

Split NSData objects into other NSData objects of a given size

I have an NSData object of approximately 1000 kB in size. Now I want to transfer this via Bluetooth. It would be better if I had, say, 10 objects of 100 kB each. It comes to mind that I should use the -subdataWithRange: method of NSData.
I haven't really worked with NSRange. I know how it works, but I can't figure out how to read from a given location with a length of 'to end of file'... I've no idea how to do that.
Some code on how to split this into multiple 100 kB NSData objects would really help me out here. (It probably involves the -length method to work out how many objects should be made?)
Thank you in advance.
The following piece of code does the fragmentation without copying the data:
NSData* myBlob;
NSUInteger length = [myBlob length];
NSUInteger chunkSize = 100 * 1024;
NSUInteger offset = 0;
do {
    NSUInteger thisChunkSize = length - offset > chunkSize ? chunkSize : length - offset;
    NSData* chunk = [NSData dataWithBytesNoCopy:(char *)[myBlob bytes] + offset
                                         length:thisChunkSize
                                   freeWhenDone:NO];
    offset += thisChunkSize;
    // do something with chunk
} while (offset < length);
Sidenote: I should add that the chunk objects cannot safely be used after myBlob has been released (or otherwise modified). The chunk fragments point into memory owned by myBlob, so don't retain them unless you also retain myBlob.
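If you would rather have chunks that own their own copy of the bytes (so they stay valid after myBlob goes away), a similar loop using -subdataWithRange:, the method mentioned in the question, is a possible alternative; it copies each chunk. A minimal sketch (the chunks array name is mine):
NSMutableArray *chunks = [NSMutableArray array];
NSUInteger length = [myBlob length];
NSUInteger chunkSize = 100 * 1024;
NSUInteger offset = 0;
while (offset < length) {
    NSUInteger thisChunkSize = MIN(chunkSize, length - offset);
    // subdataWithRange: copies the bytes, so each chunk is independent of myBlob
    [chunks addObject:[myBlob subdataWithRange:NSMakeRange(offset, thisChunkSize)]];
    offset += thisChunkSize;
}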

EXC_BAD_ACCESS when calling avcodec_encode_video

I have an Objective-C class (although I don't believe this is anything Obj-C specific) that I am using to write a video out to disk from a series of CGImages. (The code I am using at the top to get the pixel data comes right from Apple: http://developer.apple.com/mac/library/qa/qa2007/qa1509.html). I successfully create the codec and context - everything is going fine until it gets to avcodec_encode_video, when I get EXC_BAD_ACCESS. I think this should be a simple fix, but I just can't figure out where I am going wrong.
I took out some error checking for succinctness. 'c' is an AVCodecContext*, which is created successfully.
-(void)addFrame:(CGImageRef)img
{
    CFDataRef bitmapData = CGDataProviderCopyData(CGImageGetDataProvider(img));
    long dataLength = CFDataGetLength(bitmapData);
    uint8_t* picture_buff = (uint8_t*)malloc(dataLength);
    CFDataGetBytes(bitmapData, CFRangeMake(0, dataLength), picture_buff);

    AVFrame *picture = avcodec_alloc_frame();
    avpicture_fill((AVPicture*)picture, picture_buff, c->pix_fmt, c->width, c->height);

    int outbuf_size = avpicture_get_size(c->pix_fmt, c->width, c->height);
    uint8_t *outbuf = (uint8_t*)av_malloc(outbuf_size);

    out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture); // ERROR occurs here
    printf("encoding frame %3d (size=%5d)\n", i, out_size);
    fwrite(outbuf, 1, out_size, f);

    CFRelease(bitmapData);
    free(picture_buff);
    free(outbuf);
    av_free(picture);
    i++;
}
I have stepped through it dozens of times. Here are some numbers...
dataLength = 408960
picture_buff = 0x5c85000
picture->data[0] = 0x5c85000 -- which I take to mean that avpicture_fill worked...
outbuf_size = 408960
and then I get EXC_BAD_ACCESS at avcodec_encode_video. Not sure if it's relevant, but most of this code comes from api-example.c. I am using Xcode, compiling for armv6/armv7 on Snow Leopard.
Thanks so much in advance for help!
I don't have enough information here to point to the exact error, but I think the problem is that the input picture contains less data than avcodec_encode_video() expects:
avpicture_fill() only sets some pointers and numeric values in the AVFrame structure. It does not copy anything, and does not check whether the buffer is large enough (and it cannot, since the buffer size is not passed to it). It does something like this (copied from ffmpeg source):
size = picture->linesize[0] * height;
picture->data[0] = ptr;
picture->data[1] = picture->data[0] + size;
picture->data[2] = picture->data[1] + size2;
picture->data[3] = picture->data[1] + size2 + size2;
Note that the width and height are taken from the variable "c" (the AVCodecContext, I assume), so they may be larger than the actual size of the input frame.
It is also possible that the width/height is good, but the pixel format of the input frame is different from what is passed to avpicture_fill(). (note that the pixel format also comes from the AVCodecContext, which may differ from the input). For example, if c->pix_fmt is RGBA and the input buffer is in YUV420 format (or, more likely for iPhone, a biplanar YCbCr), then the size of the input buffer is width*height*1.5, but avpicture_fill() expects the size of width*height*4.
So checking the input/output geometry and pixel formats should lead you to the cause of the error. If that does not help, I suggest you try compiling for i386 first. It is tricky to compile FFmpeg for the iPhone properly.
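As a quick way to test that theory, you could compare the size of the copied bitmap with what the codec context implies before encoding. A hedged sketch, reusing dataLength, picture_buff and c from the addFrame: method above:
// hypothetical sanity check inside addFrame:, before avcodec_encode_video()
int expected = avpicture_get_size(c->pix_fmt, c->width, c->height);
if (dataLength < expected) {
    printf("input buffer too small: have %ld bytes, codec expects %d "
           "(check c->width, c->height and c->pix_fmt)\n", dataLength, expected);
    return; // encoding would read past the end of picture_buff
}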
Does the codec you are encoding with support the RGB color space? You may need to use libswscale to convert to I420 before encoding. What codec are you using? Can you post the code where you initialize your codec context?
The function RGBtoYUV420P may help you.
http://www.mail-archive.com/libav-user@mplayerhq.hu/msg03956.html
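If you do need the conversion, a minimal sketch of an RGBA-to-YUV420P conversion with libswscale might look like this; it assumes the CGImage data is RGBA and that the codec context wants PIX_FMT_YUV420P, uses the old FFmpeg API matching the question, and omits error handling and cleanup:
#include <libswscale/swscale.h>

// wrap the existing RGBA bytes in a frame (assumption: the bitmap really is RGBA)
AVFrame *rgbFrame = avcodec_alloc_frame();
avpicture_fill((AVPicture *)rgbFrame, picture_buff, PIX_FMT_RGBA, c->width, c->height);

// allocate a destination frame in the format the codec expects
AVFrame *yuvFrame = avcodec_alloc_frame();
uint8_t *yuvBuf = av_malloc(avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height));
avpicture_fill((AVPicture *)yuvFrame, yuvBuf, PIX_FMT_YUV420P, c->width, c->height);

// convert RGBA -> YUV420P
struct SwsContext *sws = sws_getContext(c->width, c->height, PIX_FMT_RGBA,
                                        c->width, c->height, PIX_FMT_YUV420P,
                                        SWS_BICUBIC, NULL, NULL, NULL);
sws_scale(sws, (const uint8_t * const *)rgbFrame->data, rgbFrame->linesize,
          0, c->height, yuvFrame->data, yuvFrame->linesize);
sws_freeContext(sws);

// then hand yuvFrame (not the RGBA frame) to avcodec_encode_video()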

iPhone SDK: NSString NSNumber IEEE-754

Can someone help me? I have an NSString with @"12.34" and I want to convert it into an NSString with the same float number, but in single-precision 32-bit IEEE-754 binary floating-point format: like @"\x41\x45\x70\xa4" (with hex characters) or @"AEp¤"...
I'm sure it's something easy, but after many hours of reading the docs I haven't found a solution...
Thank you !
As Yuji mentioned, it's not a good idea to encode an arbitrary byte sequence into an NSString (although it can contain null bytes), as encoding transformations can (and probably will) destroy your byte sequence. If you want access to the raw bytes of a float, you may want to consider storing them as an NSData object (though I suggest you think through your reasons for wanting this first). To do this:
NSString *string = @"10.23";
float myFloat = [string floatValue];
NSData *myData = [[NSData alloc] initWithBytes:&myFloat length:sizeof(myFloat)];
If you want to get the raw bytes of a float, you could cast it, like so:
NSString *str = @"12.34";
float flt = [str floatValue];
unsigned char *bytes = (unsigned char *)&flt;
printf("Bytes: %x %x %x %x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
However the order in which these bytes are stored in the array depends on the machine. (See http://en.wikipedia.org/wiki/Endianness). For example, on my Intel iMac it prints: "Bytes: a4 70 45 41".
To make a new NSString from an array of bytes, you can use -initWithBytes:length:encoding:.
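If what you actually need is the four IEEE-754 bytes in big-endian order (matching the \x41\x45\x70\xa4 example from the question), here is a small sketch combining the ideas above; the variable names are mine:
float value = [@"12.34" floatValue];

uint32_t bits;
memcpy(&bits, &value, sizeof(bits));                  // reinterpret the float's bit pattern
uint32_t bigEndianBits = CFSwapInt32HostToBig(bits);  // force big-endian byte order

NSData *ieee754 = [NSData dataWithBytes:&bigEndianBits length:sizeof(bigEndianBits)];
NSLog(@"%@", ieee754); // prints <414570a4>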