iPhone: Difference Handling Images Between Device and Simulator

I have an image with a transparent border, and I am trying to directly manipulate the image pixels, following the Apple guide found here. Everything works perfectly well when run on the device. However, when I run my code on the simulator, I find that the transparent border of the image slowly turns black with each call to this function.

The strange thing is that even if I don't modify the image data, the transparent border still begins to turn black with each call to this function. For example, I see the same problem even if my image manipulation code calls CGBitmapContextGetData but doesn't use the returned data pointer. To make the problem go away on the simulator, I have to comment out the call to CGBitmapContextGetData (and the freeing of the data pointer, of course).

Example code that still modifies the image on the simulator:
+ (UIImage *) updateImage:(UIImage *)inputImage
{
    UIImage *updatedImage;

    /* Update colors in image appropriately */
    CGImageRef image = [inputImage CGImage];
    CGContextRef cgctx = [ColorHandler CreateARGBBitmapContext:image];
    if (cgctx == NULL)
    {
        // error creating context
        NSLog(@"Error creating context.\n");
        return nil;
    }

    size_t w = CGImageGetWidth(image);
    size_t h = CGImageGetHeight(image);
    CGRect rect = {{0,0},{w,h}};

    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, image);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    void *data = CGBitmapContextGetData(cgctx);

    CGImageRef ref = CGBitmapContextCreateImage(cgctx);
    updatedImage = [UIImage imageWithCGImage:ref];

    // When finished, release the context
    CGContextRelease(cgctx);
    CGImageRelease(ref);

    // Free image data memory for the context
    if (data)
    {
        free(data);
    }

    return updatedImage;
}
I read the comments and answers here regarding how images are managed differently between the device and simulator, but it hasn't helped me figure out my problem.
The only difference between my CreateARGBBitmapContext and the example one is that I call CGColorSpaceCreateDeviceRGB instead of CGColorSpaceCreateWithName because I am targeting iOS. The image is edited exactly as designed when run on the iOS device.
I am currently doing all image manipulation in the main thread for debugging this issue.
Specs: OS X Mountain Lion, Xcode 4.5.2, iOS 6 device, iOS 6 simulator

I was able to solve the issue by allowing Quartz to allocate and manage the memory for the bitmap (Apple doc). To do this, I updated the call to CGBitmapContextCreate in CreateARGBBitmapContext to pass NULL, and I removed all references to bitmapData.
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate(NULL,
                                pixelsWide,
                                pixelsHigh,
                                8,                  // bits per component
                                bitmapBytesPerRow,
                                colorSpace,
                                kCGImageAlphaPremultipliedFirst);
Then, in the updateImage method, I removed the freeing of data. Now it seems to work on both device and simulator without any issues.
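For reference, here is a minimal sketch of what CreateARGBBitmapContext can look like once Quartz manages the buffer. It follows the structure of Apple's sample (variable names included), so treat it as an outline rather than the exact code from my project:

// Sketch of CreateARGBBitmapContext with Quartz-managed memory.
// Follows Apple's sample; adjust for your own class and error handling.
+ (CGContextRef) CreateARGBBitmapContext:(CGImageRef)inImage
{
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    size_t bitmapBytesPerRow = pixelsWide * 4;   // 4 bytes per ARGB pixel

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        return NULL;
    }

    // Pass NULL so Quartz allocates and manages the bitmap memory itself.
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 pixelsWide,
                                                 pixelsHigh,
                                                 8,                  // bits per component
                                                 bitmapBytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    return context;
}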

Related

How to change thin image to fat image in iPhone SDK?

Hi, is there any way to change an image from thin to fat (and vice versa) in the iPhone SDK?
In this application I want to give the user the ability to change their image from regular size to a fatter version by moving a slider.
I think this can be done by getting at the pixels of the image. I have tried the code below to get the pixels, but it just removes the colors from the image.
UIImage *image = [UIImage imageNamed:@"foo.png"];
CGImageRef imageRef = image.CGImage;
NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
char *pixels = (char *)[data bytes];

// this is where you manipulate the individual pixels
// assumes a 4 byte pixel consisting of rgb and alpha
// for PNGs without transparency use i += 3 and remove the alpha index
for (int i = 0; i < [data length]; i += 4)
{
    int r = i;
    int g = i + 1;
    int b = i + 2;
    int a = i + 3;
    pixels[r] = pixels[r]; // no-op; e.g. set to 0 here to remove red
    pixels[g] = pixels[g];
    pixels[b] = pixels[b];
    pixels[a] = pixels[a];
}

// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
CGImageRef newImageRef = CGImageCreate(width,
                                       height,
                                       bitsPerComponent,
                                       bitsPerPixel,
                                       bytesPerRow,
                                       colorspace,
                                       bitmapInfo,
                                       provider,
                                       NULL,
                                       false,
                                       kCGRenderingIntentDefault);

// the modified image
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
imgView.image = newImage;
I have also tried stretching the image with this code.
UIImage *stretchImage = [image stretchableImageWithLeftCapWidth:10 topCapHeight:10];
Can anybody help me? I couldn't find any framework or SDK that provides this kind of functionality, and I have googled for a long time.
As it happens, I'm working on something almost exactly like what you describe for our company right now. We're tackling faces, which is an easier problem to do realistically.
Here is a before image (of yours truly)
And here it is with a "fat face" transformation:
What I did is to import the image into an OpenGL texture, and then apply that texture to a mesh grid. I apply a set of changes to the mesh grid, squeezing certain points closer together and stretching others further apart. Getting a realistic fat face took a lot of fine-tuning, but the results are quite good, I think.
Once I have the mesh calculated, OpenGL does the "heavy lifting" of transforming the image, and very fast. After everything is set up, actually drawing the transformed image is a single call.
Here is the same image, showing the grid lines:
Fat face with grid lines http://imageshack.com/a/img404/8049/afterwithlines.jpg
It took me a couple of weeks of full-time work to get the basic mesh warping working (for a client project that fell through) and then another week or so part time to get the fat face layout fine-tuned. Getting the whole thing into a salable product is still a work in progress.
The algorithm I've come up with is fast enough to apply the fat transformation (and various others) to video input from an iOS camera at the full 30 FPS.
In order to do this sort of image processing you need to understand trig, algebra, pointer math, and transformation matrices, and have a solid understanding of how to write highly optimized code.
I'm using Apple's new Core Image face detection code to find people's facial features in an image, and I use the coordinates of the eyes and mouth as the starting point for my transformations.
Your project is much more ambitious. You would have to do some serious image processing to find all the features, trace their outlines, and then figure out how to transform the image to get a convincing fat effect. You'd need high resolution source images that were posed, lit, and shot under carefully controlled conditions in order to get good results.
It might be too much for the computing power of current iOS devices, and the kind of image recognition you'd need to do to figure out the body parts and how to transform them would be mind-bendingly difficult. You might want to take a look at the open source OpenCV project as a starting point for the image recognition part of the problem.
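To make the mesh idea above a bit more concrete, here is a small, hypothetical sketch of the kind of vertex displacement involved: a texture-mapped grid of vertices is bulged outward around a center point. The real fat-face layout involves far more tuning than this, so treat it purely as an illustration.

#include <math.h>
#include <OpenGLES/ES1/gl.h>

// Hypothetical sketch: radially "bulge" a (gridW+1) x (gridH+1) mesh of x,y vertex
// pairs around a center point before handing the vertex array to OpenGL.
static void BulgeMesh(GLfloat *verts, int gridW, int gridH,
                      GLfloat cx, GLfloat cy, GLfloat radius, GLfloat strength)
{
    for (int y = 0; y <= gridH; y++)
    {
        for (int x = 0; x <= gridW; x++)
        {
            GLfloat *v = &verts[2 * (y * (gridW + 1) + x)];
            GLfloat dx = v[0] - cx;
            GLfloat dy = v[1] - cy;
            GLfloat d  = sqrtf(dx * dx + dy * dy);
            if (d > 0.0f && d < radius)
            {
                // Push vertices away from the center, with a falloff that
                // reaches zero at the edge of the affected radius.
                GLfloat falloff = 1.0f - (d / radius);
                GLfloat scale   = 1.0f + strength * falloff * falloff;
                v[0] = cx + dx * scale;
                v[1] = cy + dy * scale;
            }
        }
    }
}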

UIImage returned from UIGraphicsGetImageFromCurrentImageContext leaks

The screenshot of Leak Profiling in Instruments Tool: http://i.stack.imgur.com/rthhI.png
I found my UIImage objects leaking using the Instruments tool.
Per Apple's documentation, the object returned from UIGraphicsGetImageFromCurrentImageContext should be autoreleased, and I can also see the "Autorelease" event when profiling (see the first 2 lines of history in my attached screenshot). However, the autorelease seems to have no effect. Why?
EDIT:
Code attached. I use the code below to "mix" two UIImages, and later on I use an NSMutableDictionary to cache the UIImages I've "mixed". I'm quite sure that I call removeAllObjects on that dictionary to clear the cache, so those UIImages should be cleaned up.
+ (UIImage *) mixUIImage:(UIImage *)i1 :(UIImage *)i2 :(CGPoint)i1Offset :(CGPoint)i2Offset
{
    CGFloat width, height;
    if (i1) {
        width = i1.size.width;
        height = i1.size.height;
    } else if (i2) {
        width = i2.size.width;
        height = i2.size.height;
    } else {
        width = 1;
        height = 1;
    }

    // create a new bitmap image context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, i1.scale);

    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();

    // push context to make it current
    // (need to do this manually because we are not drawing in a UIView)
    UIGraphicsPushContext(context);

    // drawing code comes here - look at CGContext reference
    // for available operations
    //
    // this example draws the input images into the context
    [i2 drawInRect:CGRectMake(i2Offset.x, i2Offset.y, width, height)];
    [i1 drawInRect:CGRectMake(i1Offset.x, i1Offset.y, width, height)];

    // pop context
    UIGraphicsPopContext();

    // get a UIImage from the image context - enjoy!!!
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();

    // clean up drawing environment
    UIGraphicsEndImageContext();

    return outputImage;
}
I was getting a strange UIImage memory leak using a retained UIImage returned from UIGraphicsGetImageFromCurrentImageContext(). I was calling it on a background thread (in response to a timer event). The problem turned out to be, as mentioned deep in Apple's documentation, that "you should only call this function from the main thread of your application". Beware.
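If the composition work has to stay on a background thread, one workaround (a sketch, not necessarily how this was actually solved; the class and image names are placeholders) is to hop onto the main queue just for the UIGraphics calls:

// Sketch (MRC, pre-ARC): run the UIGraphics-based composition on the main thread.
// `ImageMixer`, `imageA` and `imageB` are placeholder names for your own code.
// Do NOT call this from the main thread itself, or dispatch_sync will deadlock.
__block UIImage *mixed = nil;
dispatch_sync(dispatch_get_main_queue(), ^{
    // Retain so the image survives the main thread's autorelease pool drain.
    mixed = [[ImageMixer mixUIImage:imageA :imageB :CGPointZero :CGPointZero] retain];
});
// ... use `mixed` on the background thread ...
[mixed release];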

OpenGL ES (iPhone) alpha blending looks weird

I'm writing a game for the iPhone in OpenGL ES, and I'm experiencing a problem with alpha blending:
I'm using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) to achieve alpha blending, and I'm trying to compose a scene with several "layers" so I can move them separately instead of having a static image. I created a preview in Photoshop and then tried to achieve the same result on the iPhone, but a black halo shows up when I blend a texture with semi-transparent regions.
I attached an image. In the left is the screenshot from the iphone, and in the right is what it looks like when I make the composition in photoshop. The image is composed by a gradient and a sand image with feathered edges.
Is this the expected behaviour? Is there any way I can avoid the dark borders?
Thanks.
EDIT: I'm uploading the portion of the png containing the sand. The complete png is 512x512 and has other images too.
I'm loading the image using the following code:
NSString *path = [NSString stringWithUTF8String:filePath];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
    NSLog(@"ERROR LOADING TEXTURE IMAGE");

GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(height * width * 4);
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextClearRect(context, CGRectMake(0, 0, width, height));
CGContextTranslateCTM(context, 0, height - height);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

CGContextRelease(context);
free(imageData);
[image release];
[texData release];
I need to answer my own question:
I couldn't make it work using the ImageIO framework, so I added the libpng sources to my project and loaded the image with that instead. It works perfectly now, but I had to solve the following problem first:
The image was loaded and showed fine in the simulator but was not loading at all on the real device. I found on the web that what's going on is that the pixel ordering in PNG image-format files is converted from RGBA to BGRA, and the color values are pre-multiplied by the alpha channel value as well, by a compression utility 'pngcrush' (for device-specific efficiency reasons, when programming with the UIKit interface).
The utility also renames a header of the file, making the new PNG file unusable by libpng. These changes are done automatically when PNG files are deployed onto the iPhone. While this is fine for the UIKit, libpng (and other non-Apple libraries) generally can't then read the files.
The simple solutions are:
1. Rename your PNG files with a different extension, or
2. For your iPhone device build, add the user-defined build setting IPHONE_OPTIMIZE_OPTIONS with the value -skip-PNGs.
I did the second, and it works perfectly on both the simulator and the device now.
Your screenshot and photoshop mockup suggest that the image's color channels are being premultiplied against the alpha channel.
I have no idea what your original source images look like, but to me it looks like it is blending correctly. With the blend mode you have, you're going to get muggy blends between the layers.
The Photoshop version looks like you've got proper transparency for each layer, but not blending. I suppose you could experiment with glAlphaFunc if you didn't want to explicitly set the pixel alphas exactly.
--- Code relating to comment below (removing alpha pre-multiplication) ---
int pixelcount = width * height;
unsigned char *off = pixeldata;
for (int pi = 0; pi < pixelcount; ++pi)
{
    unsigned char alpha = off[3];
    if (alpha != 255 && alpha != 0)
    {
        off[0] = ((int)off[0]) * 255 / alpha;
        off[1] = ((int)off[1]) * 255 / alpha;
        off[2] = ((int)off[2]) * 255 / alpha;
    }
    off += 4;
}
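As a rough illustration of where that pass would sit in the loading code from the question (the buffer there is called imageData; RemoveAlphaPremultiplication is just the loop above wrapped in a hypothetical helper):

// Sketch: un-premultiply the CGBitmapContext output before uploading it.
// RemoveAlphaPremultiplication is a hypothetical wrapper around the loop above.
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
RemoveAlphaPremultiplication((unsigned char *)imageData, width, height);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);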
I am aware this post is ancient; however, I had the identical problem, and after attempting some of the solutions and agonising for days, I discovered that you can solve the pre-multiplied RGBA PNG issue by using the following blending parameters:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
The GL_ONE parameter replaced the GL_SRC_ALPHA parameter in my case.
I can now use my RGBA PNGs without the gray alpha edges effect which made my glyph text look nasty.
Edit:
Okay, one more thing: for fading etc. (setting the alpha channel in code), you will need to pre-multiply manually when the blending is set up as above, like so:
glColor4f(r*a, g*a, b*a, a);
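For completeness, a minimal setup sketch in the same OpenGL ES 1.x style as the code above, assuming the texture data itself is premultiplied (as iOS-processed PNGs are); the fade value is just an example:

// Sketch: blending for premultiplied-alpha textures (OpenGL ES 1.x).
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  // source RGB is already multiplied by alpha

// When fading a sprite by a factor a, premultiply the vertex color manually:
GLfloat a = 0.5f;                             // example fade value
glColor4f(1.0f * a, 1.0f * a, 1.0f * a, a);

// ... bind the texture and draw as usual ...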

Need example of how to create/manipulate image pixel data with iPhone SDK

Looking for a simple example or link to a tutorial.
Say I have a bunch of values stored in an array. I would like to create an image and update the image data from my array. Assume the array values are intensity data and will be updating a grayscale image. Assume the array values are between 0 and 255 -- or that I will convert it to that range.
This is not for purposes of animation. Rather the image would be updated based on user interaction. This is something I know how to do well in Java, but am very new to iPhone programming. I've googled some information about CGImage and UIImage -- but am confused as to where to start.
Any help would be appreciated.
I have sample code from one of my apps that takes data stored as an array of unsigned char and turns it into a UIImage:
// unsigned char *bitmap; // This is the bitmap data you already have.
// int width, height;     // bitmap length should equal width * height

// Create a bitmap context with the image data
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(bitmap, width, height, 8, width, colorspace, kCGImageAlphaNone);
CGImageRef cgImage = nil;
if (context != nil) {
    cgImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
}
CGColorSpaceRelease(colorspace);

// Release the cgImage when done
CGImageRelease(cgImage);
If your colorspace is RGB and you need to account for the alpha value, pass kCGImageAlphaPremultipliedLast as the last parameter to CGBitmapContextCreate.
Don't use kCGImageAlphaLast, this will not work because bitmap contexts do not support alpha that isn't premultiplied.
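To actually get a UIImage out of this (which is what the question ultimately needs), the CGImage would be wrapped before it is released; a small sketch continuing the snippet above:

// Sketch: wrap the CGImage in a UIImage before releasing it.
UIImage *grayImage = nil;
if (cgImage != nil) {
    grayImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);   // the UIImage keeps its own reference
}
// grayImage can now be shown, e.g. imageView.image = grayImage;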
The books I referenced in this SO answer both contain sample code and demonstrations of image manipulations and updates via user interaction.

Fastest way to draw a screen buffer on the iPhone

I have a "software renderer" that I am porting from PC to the iPhone. what is the fastest way to manually update the screen with a buffer of pixels on the iphone? for instance in windows the fastest function I have found is SetDIBitsToDevice.
I don't know much about the iphone, or the libraries, and there seem to be so many layers and different types of UI elements, so I might need a lot of explanation...
for now I'm just going to constantly update a texture in opengl and render that to the screen, I very much doubt that this is going to be the best way to do it.
UPDATE:
I have tried the OpenGL screen-sized texture method:
I got 17 fps...
I used a 512x512 texture (because it needs to be a power of two).
Just the call to
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, baseWindowGUI->GetBuffer());
seemed pretty much responsible for ALL the slowdown.
Commenting it out, while leaving in all my software rendering GUI code and the rendering of the now non-updating texture, resulted in 60 fps, 30% renderer utilization, and no notable spikes from the CPU.
Note that GetBuffer() simply returns a pointer to the software back buffer of the GUI system; there is no re-jigging or resizing of the buffer in any way. It is properly sized and formatted for the texture, so I am fairly certain the slowdown has nothing to do with the software renderer, which is the good news. It looks like if I can find a way to update the screen at 60 fps, software rendering should work for the time being.
I tried the texture update call with 512x320 rather than 512x512, and this was oddly even slower... running at 10 fps. It also says the render utilization is only about 5%, and all the time is being wasted in a call to Untwiddle32bpp inside OpenGL ES.
I can change my software renderer to natively render to any pixel format, if it would result in a more direct blit.
FYI, tested on a 2.2.1 iPod touch 2G (so, like an iPhone 3G on steroids).
UPDATE 2:
I have just finished writing the Core Animation/Core Graphics method. It looks good, but I am a little worried about how it updates the screen each frame, basically ditching the old CGImage and creating a brand new one... check it out in 'someRandomFunction' below:
Is this the quickest way to update the image? Any help would be greatly appreciated.
//
//  catestAppDelegate.m
//  catest
//
//  Created by User on 3/14/10.
//  Copyright __MyCompanyName__ 2010. All rights reserved.
//

#import "catestAppDelegate.h"
#import "catestViewController.h"
#import "QuartzCore/QuartzCore.h"

const void* GetBytePointer(void* info)
{
    // this is currently only called once
    return info; // info is a pointer to the buffer
}

void ReleaseBytePointer(void* info, const void* pointer)
{
    // don't care, just using the one static buffer at the moment
}

size_t GetBytesAtPosition(void* info, void* buffer, off_t position, size_t count)
{
    // I don't think this ever gets called
    memcpy(buffer, ((char*)info) + position, count);
    return count;
}

CGDataProviderDirectCallbacks providerCallbacks =
    { 0, GetBytePointer, ReleaseBytePointer, GetBytesAtPosition, 0 };

static CGImageRef cgIm;
static CGDataProviderRef dataProvider;
unsigned char* imageData;
const size_t imageDataSize = 320 * 480 * 4;
NSTimer *animationTimer;
NSTimeInterval animationInterval = 1.0f / 60.0f;

@implementation catestAppDelegate

@synthesize window;
@synthesize viewController;

- (void)applicationDidFinishLaunching:(UIApplication *)application {
    [window makeKeyAndVisible];

    const size_t byteRowSize = 320 * 4;
    imageData = malloc(imageDataSize);
    for (int i = 0; i < imageDataSize / 4; i++)
        ((unsigned int*)imageData)[i] = 0xFFFF00FF; // just set it to some random init color, currently yellow

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    dataProvider = CGDataProviderCreateDirect(imageData, imageDataSize,
                                              &providerCallbacks); // currently global

    cgIm = CGImageCreate(320, 480,
                         8, 32, 320 * 4, colorSpace,
                         kCGImageAlphaNone | kCGBitmapByteOrder32Little,
                         dataProvider, 0, false, kCGRenderingIntentDefault); // also global, probably doesn't need to be

    self.window.layer.contents = cgIm; // set the UIWindow's CALayer's contents to the image, yay works!

    // CGImageRelease(cgIm); // we should do this at some stage...
    // CGDataProviderRelease(dataProvider);

    animationTimer = [NSTimer scheduledTimerWithTimeInterval:animationInterval target:self selector:@selector(someRandomFunction) userInfo:nil repeats:YES];
    // set up a timer in the attempt to update the image
}

float col = 0;

- (void)someRandomFunction
{
    // update the original buffer
    for (int i = 0; i < imageDataSize; i++)
        imageData[i] = (unsigned char)(int)col;
    col += 256.0f / 60.0f;

    // and currently the only way I know how to apply that buffer update to the screen is to
    // create a new image and bind it to the layer...???
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    cgIm = CGImageCreate(320, 480,
                         8, 32, 320 * 4, colorSpace,
                         kCGImageAlphaNone | kCGBitmapByteOrder32Little,
                         dataProvider, 0, false, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    self.window.layer.contents = cgIm;
    // and that currently works, updating the screen, but I don't know how well it runs...
}

- (void)dealloc {
    [viewController release];
    [window release];
    [super dealloc];
}

@end
The fastest App Store approved way to do CPU-only 2D graphics is to create a CGImage backed by a buffer using CGDataProviderCreateDirect and assign that to a CALayer's contents property.
For best results use the kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little or kCGImageAlphaNone | kCGBitmapByteOrder32Little bitmap types and double buffer so that the display is never in an inconsistent state.
edit: this should be faster than drawing to an OpenGL texture in theory, but as always, profile to be sure.
edit2: CADisplayLink is a useful class no matter which compositing method you use.
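As a quick illustration of the CADisplayLink suggestion (a sketch; the method and selector names here are placeholders):

// Sketch: drive per-frame buffer updates with CADisplayLink instead of an NSTimer.
- (void)startDisplayLink
{
    CADisplayLink *displayLink =
        [CADisplayLink displayLinkWithTarget:self selector:@selector(updateFrame:)];
    displayLink.frameInterval = 1;   // fire on every vsync (60 Hz on these devices)
    [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)updateFrame:(CADisplayLink *)link
{
    // Write the new frame into the back buffer, swap buffers, then point the
    // layer's contents at the freshly completed front buffer, so the display
    // never shows a half-written frame.
}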
The fastest way is to use IOFrameBuffer/IOSurface, which are private frameworks.
So OpenGL seems to be the only possible way for AppStore apps.
Just to post my comment to @rpetrich's answer in the form of an answer, I will say in my tests I found OpenGL to be the fastest way. I've implemented a simple object (UIView subclass) called EEPixelViewer that does this generically enough that it should work for most people, I think.
It uses OpenGL to push pixels in a wide variety of formats (24bpp RGB, 32-bit RGBA, and several YpCbCr formats) to the screen as efficiently as possible. The solution achieves 60fps for most pixel formats on almost every single iOS device, including older ones. Usage is super simple and requires no OpenGL knowledge:
pixelViewer.pixelFormat = kCVPixelFormatType_32RGBA;
pixelViewer.sourceImageSize = CGSizeMake(1024, 768);
EEPixelViewerPlane plane;
plane.width = 1024;
plane.height = 768;
plane.data = pixelBuffer;
plane.rowBytes = plane.width * 4;
[pixelViewer displayPixelBufferPlanes: &plane count: 1 withCompletion:nil];
Repeat the displayPixelBufferPlanes call for each frame (which loads the pixel buffer to the GPU using glTexImage2D), and that's pretty much all there is to it. The code is smart in that it tries to use the GPU for any kind of simple processing required such as permuting the color channels, converting YpCbCr to RGB, etc.
There is also quite a bit of logic for honoring scaling using the UIView's contentMode property, so UIViewContentModeScaleToFit/Fill, etc. all work as expected.
Perhaps you could abstract the methods used in the software renderer to a GPU shader... might get better performance. You'd need to send the encoded "video" data as a texture.
A faster method than both CGDataProvider and glTexSubImage is to use CVOpenGLESTextureCache. The CVOpenGLESTextureCache allows you to directly modify an OpenGL texture in graphics memory without re-uploading.
I used it for a fast animation view you can see here:
https://github.com/justinmeiners/image-sequence-streaming
It is a little tricky to use and I came across it after asking my own question about this topic: How to directly update pixels - with CGImage and direct CGDataProvider
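For reference, here is a rough sketch of the CVOpenGLESTextureCache approach (iOS 5 and later). The EAGL context, the 480x320 size, and the ARC-style syntax are assumptions, and error checking is omitted:

#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// Sketch: map a CVPixelBuffer straight into an OpenGL ES texture so the CPU can
// write pixels into memory the GPU samples from, without a per-frame glTexImage2D upload.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                             eaglContext,          // your EAGLContext (assumed to exist)
                             NULL, &textureCache);

NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, 480, 320,
                    kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                             pixelBuffer, NULL,
                                             GL_TEXTURE_2D, GL_RGBA, 480, 320,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

// Each frame: lock, write pixels, unlock, then draw a quad sampling the texture.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *pixels = CVPixelBufferGetBaseAddress(pixelBuffer);
// ... copy the software renderer's buffer into `pixels` here ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// ... draw a full-screen quad with this texture bound ...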