Get colour of pixel from CCRenderTexture - iphone

So, I am trying to find the location of any pixels on the screen that are a specific colour.
The following code works, but is VERY slow, because I have to iterate over every single pixel co-ordinate, and there are a lot.
Is there any way to improve the following code to make it more efficient?
// Detect the position of all red points in the sprite
UInt8 data[4];
CCRenderTexture* renderTexture = [[CCRenderTexture alloc] initWithWidth: mySprite.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
height: mySprite.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[renderTexture begin];
[mySprite draw];
for (int x = 0; x < 960; x++)
{
for (int y = 0; y < 640; y++)
{
ccColor4B *buffer = malloc(sizeof(ccColor4B));
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
ccColor4B color = buffer[0];
if (color.r == 133 && color.g == 215 && color.b == 44)
{
NSLog(#"Found the red point at x: %d y: %d", x, y);
}
}
}
[renderTexture end];
[renderTexture release];

You can (and should) read more than just one pixel at a time. The way to make OpenGL fast is to pack your work into as few operations as possible; that goes both ways (reading from and writing to the GPU).
Try reading the whole texture in one call and finding your red pixels in the resulting array, as below.
Also note that, generally speaking, it is a good idea to traverse a bitmap row by row, which means reversing the order of the for-loops (y [rows] on the outside, x on the inside).
// Detect the position of all red points in the sprite
ccColor4B *buffer = new ccColor4B[ 960 * 640 ];
CCRenderTexture* renderTexture = [[CCRenderTexture alloc] initWithWidth: mySprite.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
height: mySprite.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[renderTexture begin];
[mySprite draw];
glReadPixels(0, 0, 960, 640, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
[renderTexture end];
[renderTexture release];
int i = 0;
for (int y = 0; y < 640; y++)
{
for (int x = 0; x < 960; x++)
{
ccColor4B color = buffer[i]; // the index is equal to y * 960 + x
++i;
if (color.r == 133 && color.g == 215 && color.b == 44)
{
NSLog(#"Found the red point at x: %d y: %d", x, y);
}
}
}
delete[] buffer;

Don't malloc your buffer every time; reuse the same buffer, since malloc is slow. Please take a look at Apple's Memory Usage Documentation.
I don't know of any algorithm that can do this faster, but this might help.
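For instance, a minimal sketch of reusing one buffer across calls, assuming the fixed 960x640 read-back size used above (the static pointer is just illustrative):
static ccColor4B *readbackBuffer = NULL;
if (readbackBuffer == NULL) {
    // Allocate once; sized for the full 960x640 RGBA read-back.
    readbackBuffer = (ccColor4B *)malloc(sizeof(ccColor4B) * 960 * 640);
}
// ... render into the CCRenderTexture as before, then:
glReadPixels(0, 0, 960, 640, GL_RGBA, GL_UNSIGNED_BYTE, readbackBuffer);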

Related

drawn area in UIImage not recognized correctly

I am having a strange problem in my project. The user paints or draws with swipes over an image used as an overlay, and I need to crop the area of the image that lies below the painted region. My code works well only when the UIImageView below the painted region is 320 pixels wide, i.e. the width of the iPhone screen. But if I change the width of the image view, I do not get the desired result.
I am using the following code to construct a CGRect around the painted part.
-(CGRect)detectRectForFaceInImage:(UIImage *)image{
int l,r,t,b;
l = r = t = b = 0;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8* data = CFDataGetBytePtr(pixelData);
BOOL pixelFound = NO;
for (int i = leftX ; i < rightX; i++) {
for (int j = topY; j < bottomY + 20; j++) {
int pixelInfo = ((image.size.width * j) + i ) * 4;
UInt8 alpha = data[pixelInfo + 2];
if (alpha) {
NSLog(#"Left %d", alpha);
l = i;
pixelFound = YES;
break;
}
}
if(pixelFound) break;
}
pixelFound = NO;
for (int i = rightX ; i >= l; i--) {
for (int j = topY; j < bottomY ; j++) {
int pixelInfo = ((image.size.width * j) + i ) * 4;
UInt8 alpha = data[pixelInfo + 2];
if (alpha) {
NSLog(#"Right %d", alpha);
r = i;
pixelFound = YES;
break;
}
}
if(pixelFound) break;
}
pixelFound = NO;
for (int i = topY ; i < bottomY ; i++) {
for (int j = l; j < r; j++) {
int pixelInfo = ((image.size.width * i) + j ) * 4;
UInt8 alpha = data[pixelInfo + 2];
if (alpha) {
NSLog(#"Top %d", alpha);
t = i;
pixelFound = YES;
break;
}
}
if(pixelFound) break;
}
pixelFound = NO;
for (int i = bottomY ; i >= t; i--) {
for (int j = l; j < r; j++) {
int pixelInfo = ((image.size.width * i) + j ) * 4;
UInt8 alpha = data[pixelInfo + 2];
if (alpha) {
NSLog(#"Bottom %d", alpha);
b = i;
pixelFound = YES;
break;
}
}
if(pixelFound) break;
}
CFRelease(pixelData);
return CGRectMake(l, t, r - l, b-t);
}
In the above code, leftX, rightX, topY and bottomY are the extreme float values (taken from CGPoints) recorded while the user swipes a finger on the screen while painting; together they describe a rectangle that bounds the painted area (to minimise the loops).
leftX - minimum on the X-axis
rightX - maximum on the X-axis
topY - minimum on the Y-axis
bottomY - maximum on the Y-axis
Here l, r, t, b are the calculated values for the actual rectangle.
As mentioned earlier, this code works well when the image view being painted on is 320 pixels wide and spans the full screen width. But if the image view's width is smaller, say 300, and it is centred on the screen, the code gives false results.
Note: I am scaling the image according to the image view's width.
Below are the NSLog output:
When the image view's width is 320 pixels (these are the values of the colour component at the matched, i.e. non-transparent, pixel):
2013-05-17 17:58:17.170 FunFace[12103:907] Left 41
2013-05-17 17:58:17.172 FunFace[12103:907] Right 1
2013-05-17 17:58:17.173 FunFace[12103:907] Top 73
2013-05-17 17:58:17.174 FunFace[12103:907] Bottom 12
When the image view's width is 300 pixels:
2013-05-17 17:55:26.066 FunFace[12086:907] Left 42
2013-05-17 17:55:26.067 FunFace[12086:907] Right 255
2013-05-17 17:55:26.069 FunFace[12086:907] Top 42
2013-05-17 17:55:26.071 FunFace[12086:907] Bottom 255
How can I solve this problem? I need the image view centred, with padding on both sides.
EDIT: OK, it looks like my problem is due to the image orientation of JPEG images (from the camera). PNG images work fine and are not affected by changes in the image view's width.
But JPEGs still do not work, even when I handle the orientation.
First, I wonder if you're accessing something other than 32-bit RGBA? The index into data[] is computed in pixelInfo and then offset by +2 bytes rather than +3, which would land you on the blue byte. If your intent is to read the RGBA alpha, that alone would affect the rest of your results.
Moving on, assuming you were still getting flawed results despite reading the correct alpha component, your "fixed" code seems to give Left/Right/Top/Bottom NSLog outputs with alpha values well below the full-on 255, some close to 0. In that case, without further code, I'd suggest the problem lies in the code you use to scale the image down from your 320x240 source to 300x225 (or any other scaled dimensions). I could imagine the image having alpha values of 255 at the edges if your "scale" code is actually performing a crop rather than a scale.
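For reference, a minimal sketch of indexing the alpha byte under the assumption of 32-bit RGBA data; it also uses the backing CGImage's actual row stride rather than image.size.width (which is in points), since the two can diverge once the image is scaled:
size_t bytesPerRow = CGImageGetBytesPerRow(image.CGImage);
int pixelInfo = (int)(bytesPerRow * j) + i * 4; // j = row, i = column
UInt8 alpha = data[pixelInfo + 3];              // +3 is the alpha byte in RGBA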

App does not work on real device

Here is the situation.
Device: iPod touch 3rd generation, iOS 4.1
OS X: Lion
This is a very simple app: a view controller with a UIImageView and a UIButton on it. When the button is clicked, it does some computing, generates an image and loads it into the UIImageView. In short, the click event does some image-processing work.
This app works well on the simulator: it displays the picture in the UIImageView correctly when you click the button, taking about 1~2 seconds, which is not too long.
I connect my iPod touch to my MBP and run the app from Xcode. I set some breakpoints in one for loop in my code (in the click event), and something strange happens.
This for loop is the main loop that does most of the computing. It stops at the breakpoint at first, with i equal to 0. When I continue, the app seems to stall; after waiting a while, i becomes 4 or 8 or 9, not the expected value of 1.
I wonder whether putting the computing work on another thread, rather than the UI thread, would help. There is nothing strange in the click event's code, yet I can't get the correct image, only a black one. Has anyone come across this before, or can you offer a suggestion?
UPDATE
Here is what the button click event doing.
int width=320;
int height=480;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef contextRef = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
//CGColorSpaceRelease( colorSpace );
CGContextClearRect( contextRef, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( contextRef, 0, height - height );
for(int i=0;i<height;i++)
{
for (int j=0; j<width; j++) {
*((char *)(imageData+i*width*4+j*4))=0;
*((char *)(imageData+i*width*4+j*4+1))=0;
*((char *)(imageData+i*width*4+j*4+2))=0;
*((char *)(imageData+i*width*4+j*4+3))=255;
}
}
int xmin=-2;
int xmax=2;
int ymin=-2;
int ymax=2;
int fre[320 * 480]={0};
CGPoint p=CGPointZero;
float x=arc4random()*1.0/ARC4RANDOM_MAX;
float y=arc4random()*1.0/ARC4RANDOM_MAX;
p.x=x*(xmax-xmin)+xmin;
p.y=y*(ymax-ymin)+ymin;
int MIN_ITERATE=10;
int ite_from_start=0;
DataPoint point;
point.p=p;
point.red=0.0;
point.green=0.0;
point.blue=0.0;
IFSFunctions *ifsfunction=[[IFSFunctions alloc] init];
for (int i=0; i<1000000; i++) {
if(p.x<=160 && p.x >=-160 && p.y <= 240 && p.y >= -240)
{
//fre[((int)p.y+240)*320+(int)p.x+160]++;
point=[ifsfunction caculate:point];
int data_x=(int)(320*(point.p.x-xmin)/(xmax-xmin));
int data_y=(int)(480*(point.p.y-ymin)/(ymax-ymin));
if(data_x >=0 && data_x<320 && data_y >=0 && data_y < 480 && ite_from_start < 20000)
{
ite_from_start++;
if(ite_from_start > MIN_ITERATE)
{
*((char *)(imageData+data_y*width*4+data_x*4))=(int)point.red;
*((char *)(imageData+data_y*width*4+data_x*4+1))=(int)point.green;
*((char *)(imageData+data_y*width*4+data_x*4+2))=(int)point.blue;
}
fre[data_y*width+data_x]++;
}
else
{
ite_from_start=0;
point=[ifsfunction caculate:point];
}
}
}
int max_int=0;
for (int i=0; i<320*480; i++) {
if (fre[i]>max_int) {
max_int=fre[i];
}
}
//NSLog([NSString stringWithFormat:#"The max interation %f",logf(max_int+1)]);
for (int i=0; i<height; i++) {
for(int j=0;j<width;j++){
float intensity=logf(fre[i*width+j]+1.0)/logf(max_int/300+1);
//NSLog([NSString stringWithFormat:#"The %f",intensity]);
float gamma=powf(intensity, 0.25);
*((char *)(imageData+i*width*4+j*4))=(int)(gamma*(*((char *)(imageData+i*width*4+j*4))));
*((char *)(imageData+i*width*4+j*4+1))=(int)(gamma*(*((char *)(imageData+i*width*4+j*4+1))));
*((char *)(imageData+i*width*4+j*4+2))=(int)(gamma*(*((char *)(imageData+i*width*4+j*4+2))));
}
}
CGDataProviderRef dataProvider=CGDataProviderCreateWithData(NULL, imageData, height * width * 4, NULL);
CGImageRef imageRef=CGImageCreate(width, height, 8, 32, 4*width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(dataProvider);
CGContextDrawImage(contextRef, CGRectMake( 0, 0, width, height ), imageRef);
imgView.image=[UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(contextRef);
free(imageData);
Best Regards,
If it takes 1~2 seconds on the simulator, it will take a lot longer on a device; the simulator is essentially the iOS APIs running on the Mac. I'd create an NSOperation and execute it on a separate thread. And make sure your for loop looks like this:
for(int i = 0; i < YOURNUMBERHERE; i++)
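As a minimal sketch of the threading suggestion, assuming the heavy computation is wrapped in a hypothetical -renderFractalImage method that builds and returns the UIImage; only the image view update goes back to the main queue:
NSOperationQueue *workQueue = [[NSOperationQueue alloc] init];
[workQueue addOperationWithBlock:^{
    // Hypothetical helper containing the computation from the click event.
    UIImage *result = [self renderFractalImage];
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        imgView.image = result;   // UIKit work stays on the main thread
    }];
}];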

Identify a percentage of transparent pixels in an area of UIImageView

I'm trying to set up a collision-type hit test for a defined area of pixels within a UIImageView. I only wish to cycle through the pixels in that defined area.
Here's what I have so far:
- (BOOL)cgHitTestForArea:(CGRect)area {
BOOL hit = FALSE;
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
float areaFloat = ((area.size.width * 4) * area.size.height);
unsigned char *bitmapData = malloc(areaFloat);
CGContextRef context = CGBitmapContextCreate(bitmapData,
area.size.width,
area.size.height,
8,
4*area.size.width,
colorspace,
kCGImageAlphaPremultipliedLast);
CGContextTranslateCTM(context, -area.origin.x, -area.origin.y);
[self.layer renderInContext:context];
//Seek through all pixels.
float transparentPixels = 0;
for (int i = 0; i < (int)areaFloat ; i += 4) {
//Count each transparent pixel.
if (((bitmapData[i + 3] * 1.0) / 255.0) == 0) {
transparentPixels += 1;
}
}
free(bitmapData);
//Calculate the percentage of transparent pixels.
float hitTolerance = [[self.layer valueForKey:@"hitTolerance"]floatValue];
NSLog(@"Apixels: %f hitPercent: %f",transparentPixels,(transparentPixels/areaFloat));
if ((transparentPixels/(areaFloat/4)) < hitTolerance) {
hit = TRUE;
}
CGColorSpaceRelease(colorspace);
CGContextRelease(context);
return hit;
}
Is someone able to offer any reason why it isn't working?
I would suggest using ANImageBitmapRep. It allows easy pixel-level manipulation of images without the hassle of contexts, linking against other libraries, or raw memory allocation. To create an ANImageBitmapRep with the contents of a view, you could do something like this:
BMPoint sizePt = BMPointMake((int)self.frame.size.width,
(int)self.frame.size.height);
ANImageBitmapRep * irep = [[ANImageBitmapRep alloc] initWithSize:sizePt];
CGContextRef ctx = [irep context];
[self.layer renderInContext:ctx];
[irep setNeedsUpdate:YES];
Then, you can crop out your desired rectangle. Note that coordinates are relative to the bottom left corner of the view:
// assuming aFrame is our frame
CGRect cFrame = CGRectMake(aFrame.origin.x,
self.frame.size.height - (aFrame.origin.y + aFrame.size.height),
aFrame.size.width, aFrame.size.height);
[irep cropFrame:cFrame];
Finally, you can find the percentage of alpha in the image using the following:
double totalAlpha = 0;
double totalPixels = 0;
for (int x = 0; x < [irep bitmapSize].x; x++) {
for (int y = 0; y < [irep bitmapSize].y; y++) {
totalAlpha += [irep getPixelAtPoint:BMPointMake(x, y)].alpha;
totalPixels += 1;
}
}
double alphaPct = totalAlpha / totalPixels;
You can then use the alphaPct variable as a percentage from 0 to 1. Note that, to prevent leaks, you must release the ANImageBitmapRep object using release: [irep release].
Hope that helps. Image data is a fun and interesting field when it comes to iOS development.

CMSampleBuffer from OpenGL for video output with AVAssetWriter

I need to get a CMSampleBuffer for the OpenGL frame. I'm using this:
int s = 1;
UIScreen * screen = [UIScreen mainScreen];
if ([screen respondsToSelector:@selector(scale)]){
s = (int)[screen scale];
}
const int w = viewController.view.frame.size.width/2;
const int h = viewController.view.frame.size.height/2;
const NSInteger my_data_length = 4*w*h*s*s;
// allocate array and read pixels into it.
GLubyte * buffer = malloc(my_data_length);
glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
GLubyte * buffer2 = malloc(my_data_length);
for(int y = 0; y < h*s; y++){
memcpy(buffer2 + (h*s - 1 - y)*4*w*s, buffer + (4*y*w*s), 4*w*s);
}
free(buffer);
CMBlockBufferRef * cm_block_buffer_ref;
CMBlockBufferAccessDataBytes(cm_block_buffer_ref,0,my_data_length,buffer2,*buffer2);
CMSampleBufferRef * cm_buffer;
CMSampleBufferCreate (kCFAllocatorDefault,cm_block_buffer_ref,true,NULL,NULL,NULL,1,1,NULL,0,NULL,cm_buffer);
I get EXC_BAD_ACCESS on the call to CMSampleBufferCreate.
Any help is appreciated, thank you.
The solution was to use the AVAssetWriterInputPixelBufferAdaptor class.
int s = 1;
UIScreen * screen = [UIScreen mainScreen];
if ([screen respondsToSelector:@selector(scale)]){
s = (int)[screen scale];
}
const int w = viewController.view.frame.size.width/2;
const int h = viewController.view.frame.size.height/2;
const NSInteger my_data_length = 4*w*h*s*s;
// allocate array and read pixels into it.
GLubyte * buffer = malloc(my_data_length);
glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
GLubyte * buffer2 = malloc(my_data_length);
for(int y = 0; y < h*s; y++){
memcpy(buffer2 + (h*s - 1 - y)*4*w*s, buffer + (4*y*w*s), 4*w*s);
}
free(buffer);
CVPixelBufferRef pixel_buffer = NULL;
CVPixelBufferCreateWithBytes (NULL,w*2,h*2,kCVPixelFormatType_32BGRA,buffer2,4*w*s,NULL,0,NULL,&pixel_buffer);
[av_adaptor appendPixelBuffer: pixel_buffer withPresentationTime:CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate: start_time],30)];
Why is the third parameter to CMSampleBufferCreate() true in your code? According to the documentation:
Parameters
allocator
The allocator to use to allocate memory for the CMSampleBuffer object. Pass kCFAllocatorDefault to use the current default allocator.
dataBuffer
This can be NULL, a CMBlockBuffer with no backing memory, a CMBlockBuffer with backing memory but no data yet, or a CMBlockBuffer that already contains the media data. Only in that last case (or if NULL and numSamples is 0) should dataReady be true.
dataReady
Indicates whether dataBuffer already contains the media data.
Your cm_block_buffer_ref that is being passed in as the buffer contains no data (you should also initialize the pointer to NULL for safety; I don't believe the compiler does that by default), so you should use false here.
There may be other things wrong with this, but that's the first item that leaps out at me.
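For what it's worth, here is a minimal sketch of one way to wrap the already-filled buffer2 in a block buffer so that dataReady can legitimately be true; the flags and the missing format description (which the AVAssetWriter path would still need) are assumptions, not taken from the question:
CMBlockBufferRef blockBuffer = NULL;
OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                     buffer2,          // backing memory already holding the pixels
                                                     my_data_length,   // block length
                                                     kCFAllocatorNull, // don't let Core Media free buffer2
                                                     NULL, 0, my_data_length, 0,
                                                     &blockBuffer);
CMSampleBufferRef sampleBuffer = NULL;
if (status == kCMBlockBufferNoErr) {
    CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true,
                         NULL, NULL, NULL, 1, 0, NULL, 0, NULL, &sampleBuffer);
}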
Why the double malloc and why not swap in place via a temp buffer?
Something like this:
GLubyte * raw = (GLubyte *) wyMalloc(size);
LOGD("raw address %p", raw);
glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, raw);
const size_t end = h/2;
const size_t W = 4*w;
GLubyte row[4*w];
for (int i=0; i < end; i++) {
void * top = raw + (h - i - 1)*W;
void * bottom = raw + i*W;
memcpy(row, top, W);
memcpy(top, bottom, W);
memcpy(bottom, row, W);
}

Get BoundingBox of a transparent Image?

I load a transparent .png into a UIImage.
How do I calculate the real bounding box, e.g. if the actual image content is smaller than the .png's dimensions?
Thanks for helping
Assuming "bounding box of an image" is simply a rectangle in the image, specified in pixel coordinates.
You want the rectangle of image which contains all pixels with an alpha greater than threshold (it is equivalent to say that all pixels that are not in this rectangle have an alpha lower than threshold). After that you can transform this rectangle in screen coordinate (or whatever you want).
The basic algorithm is to start with a rectangle containing the whole image, then shrink the rectangle horizontally, then vertically (or vertically then horizontally).
I don't know Objective-C, so I put the code in pure C (some functions are just to make the code more clearer):
typedef struct Rectangle
{
unsigned int x1, y1, x2, y2;
} Rectangle;
typedef struct Image
{
unsigned int height,width;
unsigned int* data;
} Image;
unsigned char getPixelAlpha(Image* img, unsigned int x, unsigned int y)
{
unsigned int pixel = 0; // default = fully transparent
if(x >= img->width || y >= img->height)
return pixel; // Consider everything not in the image fully transparent
pixel = img->data[x + y * img->width];
return (unsigned char)((pixel & 0xFF000000) >> 24);
}
void shrinkHorizontally(Image* img, unsigned char threshold, Rectangle* rect)
{
int x, y;
// Shrink from left
for(x = 0; x < (int)img->width; x++)
{
// Find the maximum alpha of the vertical line at x
unsigned char lineAlphaMax = 0;
for(y = 0; y < (int)img->height; y++)
{
unsigned char alpha = getPixelAlpha(img,x,y);
if(alpha > lineAlphaMax)
lineAlphaMax = alpha;
}
// If at least one pixel of the line is more opaque than 'threshold'
// then we found the left limit of the rectangle
if(lineAlphaMax >= threshold)
{
rect->x1 = x;
break;
}
}
// Shrink from right
for(x = img->width - 1; x >= 0; x--)
{
// Find the maximum alpha of the vertical line at x
unsigned char lineAlphaMax = 0;
for(y = 0; y < (int)img->height; y++)
{
unsigned char alpha = getPixelAlpha(img,x,y);
if(alpha > lineAlphaMax)
lineAlphaMax = alpha;
}
// If at least one pixel of the line is more opaque than 'threshold'
// then we found the right limit of the rectangle
if(lineAlphaMax >= threshold)
{
rect->x2 = x;
break;
}
}
}
// Almost the same than shrinkHorizontally.
void shrinkVertically(Image* img, unsigned char threshold, Rectangle* rect)
{
int x, y;
// Shrink from up
for(y = 0; y < (int)img->height; y++)
{
// Find the maximum alpha of the horizontal line at y
unsigned char lineAlphaMax = 0;
for(x = 0; x < (int)img->width; x++)
{
unsigned char alpha = getPixelAlpha(img,x,y);
if(alpha > lineAlphaMax)
lineAlphaMax = alpha;
}
// If at least one pixel of the line is more opaque than 'threshold'
// then we found the top limit of the rectangle
if(lineAlphaMax >= threshold)
{
rect->y1 = y;
break;
}
}
// Shrink from bottom
for(y = img->height- 1; y >= 0; y--)
{
// Find the maximum alpha of the horizontal line at y
unsigned char lineAlphaMax = 0;
for(x = 0; x < (int)img->width; x++)
{
unsigned char alpha = getPixelAlpha(img,x,y);
if(alpha > lineAlphaMax)
lineAlphaMax = alpha;
}
// If at least one pixel of the line is more opaque than 'threshold'
// then we found the bottom limit of the rectangle
if(lineAlphaMax >= threshold)
{
rect->y2 = y;
break;
}
}
}
// Find the 'real' bounding box
Rectangle findRealBoundingBox(Image* img, unsigned char threshold)
{
Rectangle r = { 0, 0, img->width, img->height };
shrinkHorizontally(img,threshold,&r);
shrinkVertically(img,threshold,&r);
return r;
}
Now that you have the bounding box coordinates in pixels within your image, you should be able to transform it into device coordinates.
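As a rough sketch of that last step, assuming an Image value named img already filled from the UIImage's pixels, a 1:1 mapping between image pixels and the view (no content-mode scaling), and an arbitrary example threshold:
Rectangle box = findRealBoundingBox(&img, 10);    // 10 is just an example threshold
CGFloat scale = [[UIScreen mainScreen] scale];    // pixels per point
CGRect viewBox = CGRectMake(box.x1 / scale,
                            box.y1 / scale,
                            (box.x2 - box.x1 + 1) / scale,
                            (box.y2 - box.y1 + 1) / scale);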
CGRect myImageViewRect = [myImageView frame];
CGSize myImageSize = [[myImageView image]size];
if(myImageSize.width < myImageViewRect.size.width){
NSLog(#"it's width smaller!");
}
if(myImageSize.height < myImageViewRect.size.height){
NSLog(#"it's height smaller!");
}
If you want the image to resize to the size of the image view you can call
[myImageView sizeToFit];