App does not work on real device - iPhone

Here is the situation.
Device: iPod Touch (3rd generation), iOS 4.1
OS X: Lion
This is a very simple app: a view controller with a UIImageView and a UIButton on it. When you tap the button, it does some computing, generates an image, and loads it into the UIImageView. In other words, all the image-processing work happens in the button's click event.
This app works well on the simulator: it correctly displays the picture in the UIImageView when you click the button. It takes about 1-2 seconds, which is not long.
I connected my iPod touch to my MBP and ran the app from Xcode. I set some breakpoints in one for loop in my code (in the click event), and something strange happened.
This for loop is the main loop that does most of the computing. At first it stops at the breakpoint with i equal to 0. When I continue, the app seems to stall; after waiting a while, i becomes 4 or 8 or 9, not the expected 1.
I suspect that moving the computing work to another thread, off the UI thread, might help. Does it? There is nothing strange in the click event's code, yet I can't get the correct image, only a black one. Has anyone run into this before? Any suggestions are welcome.
UPDATE
Here is what the button click event does.
int width = 320;
int height = 480;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(height * width * 4);
CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
//CGColorSpaceRelease(colorSpace);
CGContextClearRect(contextRef, CGRectMake(0, 0, width, height));
CGContextTranslateCTM(contextRef, 0, height - height);

// Fill the bitmap with opaque black.
for (int i = 0; i < height; i++)
{
    for (int j = 0; j < width; j++) {
        *((char *)(imageData + i * width * 4 + j * 4)) = 0;
        *((char *)(imageData + i * width * 4 + j * 4 + 1)) = 0;
        *((char *)(imageData + i * width * 4 + j * 4 + 2)) = 0;
        *((char *)(imageData + i * width * 4 + j * 4 + 3)) = 255;
    }
}

int xmin = -2;
int xmax = 2;
int ymin = -2;
int ymax = 2;
int fre[320 * 480] = {0};
CGPoint p = CGPointZero;
float x = arc4random() * 1.0 / ARC4RANDOM_MAX;
float y = arc4random() * 1.0 / ARC4RANDOM_MAX;
p.x = x * (xmax - xmin) + xmin;
p.y = y * (ymax - ymin) + ymin;
int MIN_ITERATE = 10;
int ite_from_start = 0;
DataPoint point;
point.p = p;
point.red = 0.0;
point.green = 0.0;
point.blue = 0.0;
IFSFunctions *ifsfunction = [[IFSFunctions alloc] init];

// Main IFS iteration: plot points and count per-pixel visit frequency.
for (int i = 0; i < 1000000; i++) {
    if (p.x <= 160 && p.x >= -160 && p.y <= 240 && p.y >= -240)
    {
        //fre[((int)p.y + 240) * 320 + (int)p.x + 160]++;
        point = [ifsfunction caculate:point];
        int data_x = (int)(320 * (point.p.x - xmin) / (xmax - xmin));
        int data_y = (int)(480 * (point.p.y - ymin) / (ymax - ymin));
        if (data_x >= 0 && data_x < 320 && data_y >= 0 && data_y < 480 && ite_from_start < 20000)
        {
            ite_from_start++;
            if (ite_from_start > MIN_ITERATE)
            {
                *((char *)(imageData + data_y * width * 4 + data_x * 4)) = (int)point.red;
                *((char *)(imageData + data_y * width * 4 + data_x * 4 + 1)) = (int)point.green;
                *((char *)(imageData + data_y * width * 4 + data_x * 4 + 2)) = (int)point.blue;
            }
            fre[data_y * width + data_x]++;
        }
        else
        {
            ite_from_start = 0;
            point = [ifsfunction caculate:point];
        }
    }
}

// Find the maximum visit count.
int max_int = 0;
for (int i = 0; i < 320 * 480; i++) {
    if (fre[i] > max_int) {
        max_int = fre[i];
    }
}
//NSLog(@"The max iteration %f", logf(max_int + 1));

// Tone-map each pixel by its visit frequency (log scale plus gamma).
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        float intensity = logf(fre[i * width + j] + 1.0) / logf(max_int / 300 + 1);
        //NSLog(@"The %f", intensity);
        float gamma = powf(intensity, 0.25);
        *((char *)(imageData + i * width * 4 + j * 4)) = (int)(gamma * (*((char *)(imageData + i * width * 4 + j * 4))));
        *((char *)(imageData + i * width * 4 + j * 4 + 1)) = (int)(gamma * (*((char *)(imageData + i * width * 4 + j * 4 + 1))));
        *((char *)(imageData + i * width * 4 + j * 4 + 2)) = (int)(gamma * (*((char *)(imageData + i * width * 4 + j * 4 + 2))));
    }
}

CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, imageData, height * width * 4, NULL);
CGImageRef imageRef = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(dataProvider);
CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), imageRef);
imgView.image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(contextRef);
free(imageData);
Best Regards,

If it takes 1-2 seconds on the simulator, it will take a lot longer on a device. The simulator is essentially the iOS APIs running natively on your Mac, so it has desktop-class hardware behind it. I'd create an NSOperation and execute it on a separate thread. Also make sure your for loop looks like this:
for(int i = 0; i < YOURNUMBERHERE; i++)
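For illustration, here is a minimal sketch of that approach using iOS 4-era APIs. The buttonClicked:, renderInBackground, renderImage, and showImage: names are assumptions standing in for the question's code (imgView is the UIImageView from the question); this is a sketch, not the original poster's method.
- (IBAction)buttonClicked:(id)sender {
    // Kick the long-running work onto a background thread.
    [self performSelectorInBackground:@selector(renderInBackground) withObject:nil];
}

- (void)renderInBackground {
    // Pre-ARC: every secondary thread needs its own autorelease pool.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    UIImage *result = [self renderImage]; // hypothetical method wrapping the code above
    // UIKit is not thread-safe; update the image view on the main thread.
    [self performSelectorOnMainThread:@selector(showImage:) withObject:result waitUntilDone:NO];
    [pool release];
}

- (void)showImage:(UIImage *)image {
    imgView.image = image;
}
This keeps the button responsive while the million-iteration loop runs, and only the final assignment to the image view touches UIKit on the main thread.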


Get colour of pixel from CCRenderTexture

So, I am trying to find the location of any pixels on the screen that are a specific colour.
The following code works, but is VERY slow, because I have to iterate over every single pixel co-ordinate, and there are a lot.
Is there any way to improve the following code to make it more efficient?
// Detect the position of all red points in the sprite
UInt8 data[4];
CCRenderTexture* renderTexture = [[CCRenderTexture alloc] initWithWidth:mySprite.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
                                                                 height:mySprite.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
                                                            pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[renderTexture begin];
[mySprite draw];
for (int x = 0; x < 960; x++)
{
    for (int y = 0; y < 640; y++)
    {
        ccColor4B *buffer = malloc(sizeof(ccColor4B));
        glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        ccColor4B color = buffer[0];
        if (color.r == 133 && color.g == 215 && color.b == 44)
        {
            NSLog(@"Found the red point at x: %d y: %d", x, y);
        }
    }
}
[renderTexture end];
[renderTexture release];
You can (and should) read more than just one pixel at a time. The way to make OpenGL fast is to pack all your work into as few operations as possible. That goes both ways (reading from and writing to the GPU).
Try reading the whole texture in one call and finding your red pixels in the resulting array, as below.
Also note that, generally speaking, it is a good idea to traverse a bitmap row by row, which means reversing the order of the for loops (y [rows] on the outside, x on the inside).
// Detect the position of all red points in the sprite
ccColor4B *buffer = new ccColor4B[960 * 640];
CCRenderTexture* renderTexture = [[CCRenderTexture alloc] initWithWidth:mySprite.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
                                                                 height:mySprite.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
                                                            pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[renderTexture begin];
[mySprite draw];
// Read the whole texture back in a single call.
glReadPixels(0, 0, 960, 640, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
[renderTexture end];
[renderTexture release];
int i = 0;
for (int y = 0; y < 640; y++)
{
    for (int x = 0; x < 960; x++)
    {
        ccColor4B color = buffer[i]; // the index is equal to y * 960 + x
        ++i;
        if (color.r == 133 && color.g == 215 && color.b == 44)
        {
            NSLog(@"Found the red point at x: %d y: %d", x, y);
        }
    }
}
delete[] buffer;
Don't malloc your buffer every time, just reuse the same buffer; malloc is slow! Please take a look at Apple's Memory Usage Documentation.
I don't know of any algorithms that can do this any faster, but this might help.
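Taking that advice, a minimal sketch of buffer reuse (assuming the 960x640 read-back size used above; whether a static, an ivar, or a one-time malloc at init is best depends on your code):
// Allocate the pixel read-back buffer once and reuse it on every call,
// instead of calling malloc inside the loop or on every frame.
static ccColor4B *sharedBuffer = NULL;
if (sharedBuffer == NULL) {
    sharedBuffer = (ccColor4B *)malloc(sizeof(ccColor4B) * 960 * 640);
}
glReadPixels(0, 0, 960, 640, GL_RGBA, GL_UNSIGNED_BYTE, sharedBuffer);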

iPhone App - Display pixel data present in buffer on screen

I have the source code for a video decoder application written in C, which I'm now porting to iPhone.
My problem is as follows:
I have RGBA pixel data for a frame in a buffer that I need to display on the screen. My buffer is of type unsigned char. (I cannot change it to any other data type, as the source code is too huge and not written by me.)
Most of the links I found on the net say how to "draw and display pixels" on the screen or how to "display pixels present in an array", but none of them say how to "display pixel data present in a buffer".
I'm planning to use Quartz 2D. All I need to do is display the buffer contents on the screen, with no modifications. Although my problem sounds very simple, I couldn't find any API or document that does exactly this.
Kindly help!
Thanks in advance.
You can use the CGContext data structure to create a CGImage from raw pixel data. I've quickly written a basic example:
- (CGImageRef)drawBufferWidth:(size_t)width height:(size_t)height pixels:(void *)pixels
{
    unsigned char (*buf)[width][4] = pixels;

    static CGColorSpaceRef csp = NULL;
    if (!csp) {
        csp = CGColorSpaceCreateDeviceRGB();
    }

    CGContextRef ctx = CGBitmapContextCreate(
        buf,
        width,
        height,
        8,          // 8 bits per pixel component
        width * 4,  // bytes per row: 4 bytes per pixel
        csp,
        kCGImageAlphaPremultipliedLast
    );

    CGImageRef img = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);

    return img;
}
You can call this method like this (I've used a view controller):
- (void)viewDidLoad
{
    [super viewDidLoad];

    const size_t width = 320;
    const size_t height = 460;
    unsigned char (*buf)[width][4] = malloc(sizeof(*buf) * height);

    // fill up `buf` here
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            buf[y][x][0] = x * 255 / width;
            buf[y][x][1] = y * 255 / height;
            buf[y][x][2] = 0;
            buf[y][x][3] = 255;
        }
    }

    CGImageRef img = [self drawBufferWidth:320 height:460 pixels:buf];
    self.imageView.image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
}

Improve performance of drawRect:

I am drawing cells from a grid with an NSTimer every 0.1 seconds.
The grid is about 96x64 => 6144 cells / images.
If I draw images instead of (e.g.) green rectangles, it is 4 times slower!
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(context);
    CGContextSetRGBFillColor(context, 0, 0, 0, 1);
    CGContextFillRect(context, CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height));

    int cellSize = self.bounds.size.width / WIDTH;
    double xOffset = 0;
    for (int i = 0; i < WIDTH; i++)
    {
        for (int j = 0; j < HEIGHT; j++)
        {
            NSNumber *currentCell = [self.state.board objectAtIndex:(i * HEIGHT) + j];
            if (currentCell.intValue == 1)
            {
                [image1 drawAtPoint:CGPointMake(xOffset + (cellSize * i), cellSize * j)];
            }
            else if (currentCell.intValue == 0)
            {
                [image2 drawAtPoint:CGPointMake(xOffset + (cellSize * i), cellSize * j)];
            }
        }
    }
    UIGraphicsPopContext();
}
Any idea how to make this faster if I want to draw a PNG or JPG in each rectangle?
The images are already scaled to an appropriate size.
a) Don't redraw the images/rects that are outside the view's bounds.
b) Don't redraw the images/rects that are outside the dirtyRect.
c) Don't redraw the images/rects that haven't changed since the previous update.
d) Use a layer to prerender the images, so you don't need to render them at drawing time.
This scenario is exactly what Instruments is there for. Use it. Anyone here making a suggestion is guessing about what the bottleneck is.
That said, I'm going to guess at what the bottleneck is. You are drawing 6144 images using the CPU. (Confirm this with the Time Profiler: find your drawRect: method and check where the most time is spent. If it's drawInRect:, that's your problem.)
If that's the case, how do we reduce its usage? An easy win would be to redraw only the images we need to draw. CALayers make this easy. Remove your drawRect: method, add a sublayer to your view's layer for each image, and set the images as your layers' contents properties. Instead of invalidating the view when an image needs to change, just switch the relevant layer's contents property to the new image, as sketched below.
Another nice thing about CALayers is that they cache layer content on the GPU, meaning that the redraws that do happen will require less CPU time and won't block the rest of your app as much when they do happen.
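As a rough sketch of that layer-per-cell approach: setupLayers, updateCellAtIndex:alive:, and the cellLayers ivar are hypothetical names, while image1, image2, WIDTH, and HEIGHT come from the question (requires linking QuartzCore).
// One-time setup: a CALayer per cell, each displaying its current image.
// cellLayers is an NSMutableArray ivar, indexed the same way as the board
// array in the question ((i * HEIGHT) + j).
- (void)setupLayers {
    int cellSize = self.bounds.size.width / WIDTH;
    cellLayers = [[NSMutableArray alloc] initWithCapacity:WIDTH * HEIGHT];
    for (int i = 0; i < WIDTH; i++) {
        for (int j = 0; j < HEIGHT; j++) {
            CALayer *cell = [CALayer layer];
            cell.frame = CGRectMake(cellSize * i, cellSize * j, cellSize, cellSize);
            cell.contents = (id)image2.CGImage;
            [self.layer addSublayer:cell];
            [cellLayers addObject:cell];
        }
    }
}

// Per tick: only touch the layers whose state actually changed.
- (void)updateCellAtIndex:(NSUInteger)index alive:(BOOL)alive {
    CALayer *cell = [cellLayers objectAtIndex:index];
    cell.contents = alive ? (id)image1.CGImage : (id)image2.CGImage;
}
Since only changed cells are touched, an unchanged board costs nothing per tick, and Core Animation keeps the cell images cached on the GPU.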
If the overhead of that many layers is unacceptable (again, Instruments is your friend), check out CAReplicatorLayer. It's less flexible than having many CALayers, but allows a single image to be replicated many times with minimal overhead.
I tried to improve your code from a performance perspective. However, check my comment about bottlenecks, too.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    //UIGraphicsPushContext(context); // not needed, UIView does it anyway

    // use [UIView backgroundColor] instead of this
    //CGContextSetRGBFillColor(context, 0, 0, 0, 1);
    //CGContextFillRect(context, CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height));

    int cellSize = self.bounds.size.width / WIDTH;
    double xOffset = 0;
    CGRect cellFrame = CGRectMake(xOffset, 0, cellSize, cellSize);
    NSUInteger cellIndex = 0;
    for (int i = 0; i < WIDTH; i++) {
        cellFrame.origin.y = 0;
        for (int j = 0; j < HEIGHT; j++, cellIndex++) {
            if (CGRectIntersectsRect(rect, cellFrame)) {
                NSNumber *currentCell = [self.state.board objectAtIndex:cellIndex];
                if (currentCell.intValue == 1) {
                    [image1 drawInRect:cellFrame];
                }
                else if (currentCell.intValue == 0) {
                    [image2 drawInRect:cellFrame];
                }
            }
            cellFrame.origin.y += cellSize;
        }
        cellFrame.origin.x += cellSize;
    }

    //UIGraphicsPopContext(); // not needed, UIView does it anyway
}
Use CGRectIntersectsRect to check whether the rect of an image intersects the dirtyRect, so you only draw it when you actually need to.

Identify a percentage of transparent pixels in an area of UIImageView

I'm trying to set up a collision-type hit test for a defined set of pixels within a UIImageView. I only wish to cycle through the pixels in a defined area.
Here's what I have so far:
- (BOOL)cgHitTestForArea:(CGRect)area {
    BOOL hit = FALSE;
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    float areaFloat = ((area.size.width * 4) * area.size.height);
    unsigned char *bitmapData = malloc(areaFloat);
    CGContextRef context = CGBitmapContextCreate(bitmapData,
                                                 area.size.width,
                                                 area.size.height,
                                                 8,
                                                 4 * area.size.width,
                                                 colorspace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, -area.origin.x, -area.origin.y);
    [self.layer renderInContext:context];

    //Seek through all pixels.
    float transparentPixels = 0;
    for (int i = 0; i < (int)areaFloat; i += 4) {
        //Count each transparent pixel.
        if (((bitmapData[i + 3] * 1.0) / 255.0) == 0) {
            transparentPixels += 1;
        }
    }
    free(bitmapData);

    //Calculate the percentage of transparent pixels.
    float hitTolerance = [[self.layer valueForKey:@"hitTolerance"] floatValue];
    NSLog(@"Apixels: %f hitPercent: %f", transparentPixels, (transparentPixels / areaFloat));
    if ((transparentPixels / (areaFloat / 4)) < hitTolerance) {
        hit = TRUE;
    }
    CGColorSpaceRelease(colorspace);
    CGContextRelease(context);
    return hit;
}
Is someone able to offer any reason why it isn't working?
I would suggest using ANImageBitmapRep. It allows easy pixel-level manipulation of images without the hassle of contexts, linking against other libraries, or raw memory allocation. To create an ANImageBitmapRep with the contents of a view, you could do something like this:
BMPoint sizePt = BMPointMake((int)self.frame.size.width,
                             (int)self.frame.size.height);
ANImageBitmapRep *irep = [[ANImageBitmapRep alloc] initWithSize:sizePt];
CGContextRef ctx = [irep context];
[self.layer renderInContext:ctx];
[irep setNeedsUpdate:YES];
Then, you can crop out your desired rectangle. Note that coordinates are relative to the bottom left corner of the view:
// assuming aFrame is our frame
CGRect cFrame = CGRectMake(aFrame.origin.x,
                           self.frame.size.height - (aFrame.origin.y + aFrame.size.height),
                           aFrame.size.width, aFrame.size.height);
[irep cropFrame:cFrame];
Finally, you can find the percentage of alpha in the image using the following:
double totalAlpha = 0;
double totalPixels = 0;
for (int x = 0; x < [irep bitmapSize].x; x++) {
    for (int y = 0; y < [irep bitmapSize].y; y++) {
        totalAlpha += [irep getPixelAtPoint:BMPointMake(x, y)].alpha;
        totalPixels += 1;
    }
}
double alphaPct = totalAlpha / totalPixels;
You can then use the alphaPct variable as a percentage from 0 to 1. Note that, to prevent leaks, you must release the ANImageBitmapRep object using release: [irep release].
Hope that helps. Image data is a fun and interesting field when it comes to iOS development.

free() call works on simulator, makes iPad angry. iPad smash

My app is running out of memory. To resolve this, I free up two very large arrays used in a function that writes a framebuffer to an image. The method looks like this:
-(UIImage *) glToUIImage {
    NSInteger myDataLength = 768 * 1024 * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < 1024; y++)
    {
        for(int x = 0; x < 768 * 4; x++)
        {
            buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
        }
    }

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 768;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(768, 1024, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    //free(buffer);
    //free(buffer2);
    return myImage;
}
Note the two calls to free(buffer) and free(buffer2) at the end there? Those work fine on the iPad simulator, removing the memory problem and allowing me to generate images with impunity. However, they kill the iPad instantly. Like, the first time it executes. If I remove the free() calls, it runs fine, just runs out of memory after a minute or two. So why is the free() call crashing the device?
Note - it's not the call to free() itself that crashes the device; the crash comes later. But free() seems to be the root cause.
EDIT - Someone asked where exactly it crashes. This flow goes on to return the image to another object, which writes it to a file. When calling the UIImageJPEGRepresentation method, it generates an EXC_BAD_ACCESS message. I assume this is because the UIImage I'm passing it to write to the file is corrupt, null, or something else. But this only happens when I free those two buffers.
I'd understand if the memory was somehow related to the UIImage, but it really shouldn't be, especially as it works on the simulator. I wondered if it is down to how the iPad handles free() calls...
From reading the docs, I believe CGDataProviderCreateWithData will only reference the memory pointed to by buffer2, not copy it. You should keep it allocated until the image is released.
Try this:
static void _glToUIImageRelease (void *info, const void *data, size_t size) {
    free((void *)data);
}

-(UIImage *) glToUIImage {
    NSInteger myDataLength = 768 * 1024 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < 1024; y++)
    {
        for(int x = 0; x < 768 * 4; x++)
        {
            buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
        }
    }
    free(buffer); // the temporary copy can go now; buffer2 must outlive the image
    // hand the data provider a release callback, so buffer2 is freed
    // only when the image data is actually released
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, _glToUIImageRelease);
    // ... the rest is unchanged from the original method, minus the free() calls ...
}
First things first, you really should check whether malloc failed and returned NULL. If that doesn't solve your problem, use a debugger and step through your program to see exactly where it fails (or at least get a stack trace). In my experience, odd failures like crashing in unsuspected areas are almost always buffer overflows corrupting arbitrary data some time earlier.
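For the malloc check, a minimal guard at the top of glToUIImage might look like this (returning nil on failure is just one reasonable policy, not something from the original post):
// Bail out early instead of writing through a NULL pointer.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
if (buffer == NULL) {
    NSLog(@"glToUIImage: malloc of %ld bytes failed", (long)myDataLength);
    return nil;
}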
Are the buffers undersized? Look at the loops.
for(int y = 0; y < 1024; y++)
{
    for(int x = 0; x < 768 * 4; x++)
    {
        buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
    }
}
With y == 0 and x == (768 * 4) - 1, the index into buffer2 reaches (1023 * 768 * 4) + (768 * 4) - 1 = 768 * 1024 * 4 - 1, which is exactly the last allocated byte. So these loops sit right at the edge of the buffer without overrunning it; double-check that nothing else writes past it before that point.
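If you want to convince yourself at runtime, a debug-only version of the flip loop with assertions (a sketch, using the sizes and variable names from the question's method) will trap the first out-of-range index:
#include <assert.h>

// Debug guard: both indices must stay below myDataLength (768 * 1024 * 4 bytes).
for (int y = 0; y < 1024; y++) {
    for (int x = 0; x < 768 * 4; x++) {
        size_t dst = (size_t)(1023 - y) * 768 * 4 + x;
        size_t src = (size_t)y * 4 * 768 + x;
        assert(dst < (size_t)myDataLength && src < (size_t)myDataLength);
        buffer2[dst] = buffer[src];
    }
}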