I need to get a CMSampleBuffer for the OpenGL frame. I'm using this:
int s = 1;
UIScreen *screen = [UIScreen mainScreen];
if ([screen respondsToSelector:@selector(scale)]) {
    s = (int)[screen scale];
}
const int w = viewController.view.frame.size.width / 2;
const int h = viewController.view.frame.size.height / 2;
const NSInteger my_data_length = 4 * w * h * s * s;
// allocate array and read pixels into it.
GLubyte *buffer = malloc(my_data_length);
glReadPixels(0, 0, w * s, h * s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
GLubyte *buffer2 = malloc(my_data_length);
for (int y = 0; y < h * s; y++) {
    memcpy(buffer2 + (h * s - 1 - y) * 4 * w * s, buffer + (4 * y * w * s), 4 * w * s);
}
free(buffer);

CMBlockBufferRef *cm_block_buffer_ref;
CMBlockBufferAccessDataBytes(cm_block_buffer_ref, 0, my_data_length, buffer2, *buffer2);
CMSampleBufferRef *cm_buffer;
CMSampleBufferCreate(kCFAllocatorDefault, cm_block_buffer_ref, true, NULL, NULL, NULL, 1, 1, NULL, 0, NULL, cm_buffer);
I get EXC_BAD_ACCESS on the call to CMSampleBufferCreate.
Any help is appreciated, thank you.
The solution was to use the AVAssetWriterInputPixelBufferAdaptor class.
int s = 1;
UIScreen *screen = [UIScreen mainScreen];
if ([screen respondsToSelector:@selector(scale)]) {
    s = (int)[screen scale];
}
const int w = viewController.view.frame.size.width / 2;
const int h = viewController.view.frame.size.height / 2;
const NSInteger my_data_length = 4 * w * h * s * s;
// allocate array and read pixels into it.
GLubyte *buffer = malloc(my_data_length);
glReadPixels(0, 0, w * s, h * s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
GLubyte *buffer2 = malloc(my_data_length);
for (int y = 0; y < h * s; y++) {
    memcpy(buffer2 + (h * s - 1 - y) * 4 * w * s, buffer + (4 * y * w * s), 4 * w * s);
}
free(buffer);

CVPixelBufferRef pixel_buffer = NULL;
// w*s / h*s here rather than the hard-coded w*2 / h*2 of the original post,
// so the dimensions also hold on non-Retina screens (s == 1).
// note: the bytes from glReadPixels are RGBA; labeling them 32BGRA swaps
// red and blue unless that is compensated for elsewhere.
CVPixelBufferCreateWithBytes(NULL, w * s, h * s, kCVPixelFormatType_32BGRA, buffer2, 4 * w * s, NULL, 0, NULL, &pixel_buffer);
[av_adaptor appendPixelBuffer:pixel_buffer withPresentationTime:CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:start_time], 30)];
CVPixelBufferRelease(pixel_buffer); // appendPixelBuffer: retains what it needs
// note: buffer2 is never freed here; CVPixelBufferCreateWithBytes does not copy it.
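For completeness, a minimal sketch of how the adaptor and start time used above might be set up. Only av_adaptor and start_time come from the snippet; everything else (outputURL, the writer settings) is an assumption:

// Hedged sketch: plausible AVAssetWriter/adaptor setup for the code above.
NSError *error = nil;
AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL // assumed file URL
                                                  fileType:AVFileTypeQuickTimeMovie
                                                     error:&error];
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          AVVideoCodecH264, AVVideoCodecKey,
                          [NSNumber numberWithInt:w * s], AVVideoWidthKey,
                          [NSNumber numberWithInt:h * s], AVVideoHeightKey,
                          nil];
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *av_adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                     sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
NSDate *start_time = [NSDate date]; // retain or store in an ivar, to match the timestamp math above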
Why is the third parameter to CMSampleBufferCreate() true in your code? According to the documentation:
Parameters

allocator
    The allocator to use to allocate memory for the CMSampleBuffer object. Pass kCFAllocatorDefault to use the current default allocator.

dataBuffer
    This can be NULL, a CMBlockBuffer with no backing memory, a CMBlockBuffer with backing memory but no data yet, or a CMBlockBuffer that already contains the media data. Only in that last case (or if NULL and numSamples is 0) should dataReady be true.

dataReady
    Indicates whether dataBuffer already contains the media data.
The cm_block_buffer_ref you are passing in as the buffer contains no data (and you should initialize it to NULL for safety; the compiler does not do that for a local variable), so you should pass false here.
There may be other things wrong with this, but that's the first item that leaps out at me.
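For reference, here is a sketch of what a working creation sequence could look like, using the reading code from the question. The format description is left NULL as in the question, though most real consumers will require one:

// Hedged sketch: create a CMBlockBuffer that actually owns buffer2, then a
// sample buffer with dataReady = true (since the block buffer holds the data).
CMBlockBufferRef block_buffer = NULL; // note: a ref, not a pointer to one
OSStatus status = CMBlockBufferCreateWithMemoryBlock(
    kCFAllocatorDefault,
    buffer2,            // existing backing memory
    my_data_length,
    kCFAllocatorMalloc, // the block buffer will free() buffer2 when done
    NULL, 0, my_data_length, 0,
    &block_buffer);
CMSampleBufferRef sample_buffer = NULL;
if (status == kCMBlockBufferNoErr) {
    status = CMSampleBufferCreate(kCFAllocatorDefault, block_buffer,
                                  true,  // dataReady: the data is present
                                  NULL, NULL,
                                  NULL,  // a real format description is needed for most uses
                                  1, 0, NULL, 0, NULL,
                                  &sample_buffer);
}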
Why the double malloc and why not swap in place via a temp buffer?
Something like this:
GLubyte *raw = (GLubyte *)wyMalloc(size);
LOGD("raw address %p", raw);
glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, raw);
const size_t end = h / 2;
const size_t W = 4 * w;
GLubyte row[4 * w];
for (int i = 0; i < end; i++) { // note: < rather than <=, or the middle rows get swapped back on even heights
    void *top = raw + (h - i - 1) * W;
    void *bottom = raw + i * W;
    memcpy(row, top, W);
    memcpy(top, bottom, W);
    memcpy(bottom, row, W);
}
So, I am trying to find the location of any pixels on the screen that are a specific colour.
The following code works, but is VERY slow, because I have to iterate over every single pixel co-ordinate, and there are a lot.
Is there any way to improve the following code to make it more efficient?
// Detect the position of all red points in the sprite
UInt8 data[4];
CCRenderTexture *renderTexture = [[CCRenderTexture alloc] initWithWidth:mySprite.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
                                                                 height:mySprite.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
                                                            pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[renderTexture begin];
[mySprite draw];
for (int x = 0; x < 960; x++)
{
    for (int y = 0; y < 640; y++)
    {
        ccColor4B *buffer = malloc(sizeof(ccColor4B));
        glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        ccColor4B color = buffer[0];
        if (color.r == 133 && color.g == 215 && color.b == 44)
        {
            NSLog(@"Found the red point at x: %d y: %d", x, y);
        }
    }
}
[renderTexture end];
[renderTexture release];
You can (and should) read more than just one pixel at a time. The way to make OpenGL fast is to pack all your work into as few operations as possible. That works both ways (reading from and writing to the GPU).
Try reading the whole texture in one call and finding your red pixels in the resulting array, as below.
Also note that it is generally a good idea to traverse a bitmap row by row, which means reversing the order of the for-loops (y [rows] on the outside, x on the inside).
// Detect the position of all red points in the sprite
ccColor4B *buffer = new ccColor4B[960 * 640];
CCRenderTexture *renderTexture = [[CCRenderTexture alloc] initWithWidth:mySprite.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
                                                                 height:mySprite.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
                                                            pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[renderTexture begin];
[mySprite draw];
glReadPixels(0, 0, 960, 640, GL_RGBA, GL_UNSIGNED_BYTE, buffer); // read the whole 960x640 area in one call
[renderTexture end];
[renderTexture release];

int i = 0;
for (int y = 0; y < 640; y++)
{
    for (int x = 0; x < 960; x++)
    {
        ccColor4B color = buffer[i]; // the index is equal to y * 960 + x
        ++i;
        if (color.r == 133 && color.g == 215 && color.b == 44)
        {
            NSLog(@"Found the red point at x: %d y: %d", x, y);
        }
    }
}
delete[] buffer;
Don't malloc your buffer every time, just reuse the same buffer; malloc is slow! Please take a look at Apple's Memory Usage Documentation.
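A sketch of that reuse, with the 960x640 size taken from the question (a static here is just the simplest illustration; an ivar works too):

// Hedged sketch: allocate the read-back buffer once and reuse it every frame.
static ccColor4B *sharedBuffer = NULL;
if (sharedBuffer == NULL) {
    sharedBuffer = (ccColor4B *)malloc(960 * 640 * sizeof(ccColor4B));
}
glReadPixels(0, 0, 960, 640, GL_RGBA, GL_UNSIGNED_BYTE, sharedBuffer);
// ... scan sharedBuffer as above; free() it once, when no longer needed ...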
I don't know of any algorithms that can do this any faster, but this might help.
I have the source code for a video decoder application written in C, which I'm now porting to the iPhone.
My problem is as follows:
I have RGBA pixel data for a frame in a buffer that I need to display on the screen. My buffer is of type unsigned char. (I cannot change it to any other data type as the source code is too huge and not written by me.)
Most of the links I found on the net say how to "draw and display pixels" on the screen or how to "display pixels present in an array", but none of them say how to "display pixel data present in a buffer".
I'm planning to use Quartz 2D. All I need to do is display the buffer contents on the screen. No modifications! Although my problem sounds very simple, I couldn't find any API, link, or document that does exactly this.
Kindly help!
Thanks in advance.
You can use a CGBitmapContext to create a CGImage from raw pixel data. I've quickly written a basic example:
- (CGImageRef)drawBufferWidth:(size_t)width height:(size_t)height pixels:(void *)pixels
{
    unsigned char (*buf)[width][4] = pixels;
    static CGColorSpaceRef csp = NULL;
    if (!csp) {
        csp = CGColorSpaceCreateDeviceRGB();
    }
    CGContextRef ctx = CGBitmapContextCreate(
        buf,
        width,
        height,
        8,          // 8 bits per component
        width * 4,  // bytes per row: 4 bytes per pixel
        csp,
        kCGImageAlphaPremultipliedLast
    );
    CGImageRef img = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return img;
}
You can call this method like this (I've used a view controller):
- (void)viewDidLoad
{
    [super viewDidLoad];
    const size_t width = 320;
    const size_t height = 460;
    unsigned char (*buf)[width][4] = malloc(sizeof(*buf) * height);
    // fill up `buf` here
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            buf[y][x][0] = x * 255 / width;
            buf[y][x][1] = y * 255 / height;
            buf[y][x][2] = 0;
            buf[y][x][3] = 255;
        }
    }
    CGImageRef img = [self drawBufferWidth:320 height:460 pixels:buf];
    self.imageView.image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    // note: buf is deliberately not freed here; the CGImage may still reference it.
}
Hi, I'm trying out a drawing app and have a problem when it comes to saving the image that is drawn. I'm still very early in learning this, but I have added code from:
How to get UIImage from EAGLView? to save the image that was drawn.
I created a new app, then displayed a viewController of my own. In IB I added a view which is the PaintingView, with an imageView behind it.
The only modification I have made to the PaintingView so far is setting its background to clear so that I can display an image behind it. The drawing works great; my only problem is saving.
- (void)saveImageFromGLView:(UIView *)glView {
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    {
        // This IS being activated with code 0
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
    }
    int s = 1;
    UIScreen *screen = [UIScreen mainScreen];
    if ([screen respondsToSelector:@selector(scale)])
        s = (int)[screen scale];
    const int w = self.frame.size.width;
    const int h = self.frame.size.height;
    const NSInteger myDataLength = w * h * 4 * s * s;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < h*s; y++)
    {
        memcpy(buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s);
    }
    free(buffer); // work with the flipped buffer, so get rid of the original one.
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w * s;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp];
    UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, nil);
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);
}
Adding the above code to the sample app works fine; the problem is doing it in my new app. The only difference I can tell is that I have not included PaintingWindow; would that be the problem?
It's as if the saveImage method isn't seeing the drawing data.
The save method should be called within the scope of the OpenGL context. To fix this, you can move the method into the same rendering .m file and call it from outside.
You also need to take the OpenGL clear color into account.
I found that changing the CGBitmapInfo to:
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
results in a transparent background.
Ah, you have to do this at the beginning.
[EAGLContext setCurrentContext:drawContext];
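Putting that together, the start of the save method might look like this. drawContext and viewFramebuffer are the names used by Apple's GLPaint-style PaintingView; treat them as assumptions:

- (void)saveImageFromGLView:(UIView *)glView {
    // Make the painting view's context current and bind its framebuffer
    // before glCheckFramebufferStatusOES/glReadPixels run, or they will see
    // whatever context happened to be current (often none at all).
    [EAGLContext setCurrentContext:drawContext];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    // ... the rest of the method as above ...
}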
I am looking for a way to get a histogram of an image on the iPhone. The OpenCV library is way too big to be included in my app (OpenCV is about 70MB compiled), but I can use OpenGL. However, I have no idea on how to do either of these.
I have found how to get the pixels of the image, but cannot form a histogram. This seems like it should be simple, but I don't know how to store the uint8_t values in an array.
Here is the relevant question/answer for finding pixels:
Getting RGB pixel data from CGImage
The uint8_t* is just a pointer to a C array containing the bytes of the given color, i.e. {r, g, b, a} or whatever the color byte layout is for your image buffer.
So, referencing the link you provided, and the definition of histogram:
// Say we're in the inner loop and we have a given pixel in rgba format
const uint8_t *pixel = &bytes[row * bpr + col * bytes_per_pixel];
// Now bump histogram_counts, a uint32_t[4][256] with planes r, g, b, a
// (or you could keep just a single plane for brightness)
// If your data isn't rgba, loop to bytes_per_pixel instead of 4
for (int i = 0; i < 4; i++) {
    // Increment the count of pixels having this value in channel i
    histogram_counts[i][pixel[i]]++;
}
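Assembled into a full pass, the histogram computation might look like this; width, height, bytes, bpr, and bytes_per_pixel are assumed to come from the pixel-extraction code in the linked answer:

// Hedged sketch: one 256-bin histogram per channel over a raw RGBA buffer.
uint32_t histogram_counts[4][256];
memset(histogram_counts, 0, sizeof(histogram_counts));
for (size_t row = 0; row < height; row++) {
    for (size_t col = 0; col < width; col++) {
        const uint8_t *pixel = &bytes[row * bpr + col * bytes_per_pixel];
        for (int i = 0; i < 4; i++) {
            histogram_counts[i][pixel[i]]++; // bin the r/g/b/a byte values
        }
    }
}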
You can get the RGB colors of your image from its CGImageRef. Look at the method below, which I used for this.
// These helper macros are assumed from the common image-processing setup this
// snippet is based on (they were not shown in the original post); they unpack
// and pack a little-endian RGBA pixel stored in a UInt32:
#define Mask8(x) ((x) & 0xFF)
#define R(x) (Mask8(x))
#define G(x) (Mask8((x) >> 8))
#define B(x) (Mask8((x) >> 16))
#define A(x) (Mask8((x) >> 24))
#define RGBAMake(r, g, b, a) (Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24)

- (UIImage *)processUsingPixels:(UIImage *)inputImage {
    // 1. Get the raw pixels of the image
    UInt32 *inputPixels;
    CGImageRef inputCGImage = [inputImage CGImage];
    NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
    NSUInteger inputHeight = CGImageGetHeight(inputCGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bitsPerComponent = 8;
    NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;
    inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));
    CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight,
                                                 bitsPerComponent, inputBytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    // 2. Draw the image into the context so inputPixels actually holds its
    // pixels (this step was missing from the original snippet)
    CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);
    // 3. Convert the image to Black & White
    for (NSUInteger j = 0; j < inputHeight; j++) {
        for (NSUInteger i = 0; i < inputWidth; i++) {
            UInt32 *currentPixel = inputPixels + (j * inputWidth) + i;
            UInt32 color = *currentPixel;
            // Average of RGB = greyscale
            UInt32 averageColor = (R(color) + G(color) + B(color)) / 3.0;
            *currentPixel = RGBAMake(averageColor, averageColor, averageColor, A(color));
        }
    }
    // 4. Create a new UIImage
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *processedImage = [UIImage imageWithCGImage:newCGImage];
    // 5. Cleanup!
    CGImageRelease(newCGImage);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    free(inputPixels);
    return processedImage;
}
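Since the question asks for a histogram rather than a black-and-white conversion, the same pixel loop can fill bins instead. A sketch, where histogram is a hypothetical zero-initialized uint32_t[256] supplied by the caller:

// Hedged sketch: replace the greyscale write inside the inner loop.
UInt32 color = *currentPixel;
UInt32 brightness = (R(color) + G(color) + B(color)) / 3;
histogram[brightness]++; // brightness histogram instead of rewriting the pixel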
My app is running out of memory. To resolve this, I free up two very large arrays used in a function that writes a framebuffer to an image. The method looks like this:
-(UIImage *) glToUIImage {
    NSInteger myDataLength = 768 * 1024 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 1024; y++)
    {
        for (int x = 0; x < 768 * 4; x++)
        {
            buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
        }
    }
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 768;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(768, 1024, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    //free(buffer);
    //free(buffer2);
    return myImage;
}
Note the two calls to free(buffer) and free(buffer2) at the end there? Those work fine on the iPad simulator, removing the memory problem and letting me generate images with impunity. However, they kill the iPad instantly. Like, the first time it executes. If I remove the free() calls it runs fine, it just runs out of memory after a minute or two. So why is the free() call crashing the device?
Note - it's not the call to free() itself that crashes the device; the crash comes later. But free() seems to be the root cause...
EDIT - Someone asked where exactly it crashes. This flow goes on to return the image to another object, which writes it to a file. When calling the UIImageJPEGRepresentation method, it generates an EXC_BAD_ACCESS message. I assume this is because the UIImage I'm passing it to write to the file is corrupt, null or something else. But this only happens when I free those two buffers.
I'd understand if the memory was somehow related to the UIImage, but it really shouldn't be, especially as it works on the simulator. I wondered if it is down to how the iPad handles free() calls...
From reading the docs, I believe CGDataProviderCreateWithData will only reference the memory pointed to by buffer2, not copy it. You should keep it allocated until the image is released.
Try this:
static void _glToUIImageRelease (void *info, const void *data, size_t size) {
    free((void *)data); // cast away const; the provider owns this malloc'd block
}

-(UIImage *) glToUIImage {
    NSInteger myDataLength = 768 * 1024 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 1024; y++) {
        for (int x = 0; x < 768 * 4; x++) {
            buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
        }
    }
    free(buffer); // the unflipped copy is no longer needed
    // Hand buffer2 to the provider together with the release callback, so it
    // is freed only once the image is done with it. (The original answer was
    // cut off at this point; the remainder mirrors the question's code.)
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, _glToUIImageRelease);
    // ... create the CGImage and UIImage exactly as in the question,
    // but do NOT call free(buffer2) yourself.
}
First things first: you really should check whether malloc failed and returned NULL. If that doesn't solve your problem, use a debugger and step through your program to see exactly where it fails (or at least get a stack trace). In my experience, odd failures like crashes in unsuspected areas are almost always buffer overflows corrupting arbitrary data some time beforehand.
Are the buffers undersized? Look at the loops:
for (int y = 0; y < 1024; y++)
{
    for (int x = 0; x < 768 * 4; x++)
    {
        buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
    }
}
With y == 0 and x == (768*4)-1, the buffer2 index comes to 1023*768*4 + 768*4 - 1 = 3145727, i.e. exactly myDataLength - 1, the last allocated byte. So the indexing lands just inside the allocation rather than outside it, and these loops are not the overrun; the provider release issue described above is the more likely culprit.
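This kind of suspicion is also cheap to test directly. A sketch of a temporary debug check to drop into glToUIImage (the constants mirror the loops above):

#include <assert.h>

// Hedged sketch: verify the extreme loop indices against the allocation.
// Worst case for buffer2 is y = 0, x = 768*4 - 1; for buffer it is y = 1023.
NSInteger myDataLength = 768 * 1024 * 4;
assert((1023 - 0) * 768 * 4 + (768 * 4 - 1) < myDataLength); // last buffer2 index
assert(1023 * 4 * 768 + (768 * 4 - 1) < myDataLength);       // last buffer index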