iPhone SDK - Optimize a for loop

I'm developing an image processing application and I'm looking for advice on tuning my code.
I need to split the image into blocks (80x80) and, for each block, calculate the average color.
My first method contains the main loops, from which the second method is called:
- (NSArray*)getRGBAsFromImage:(UIImage *)image {
    int width = image.size.width;
    int height = image.size.height;
    int blocPerRow = 80;
    int blocPerCol = 80;
    int pixelPerRowBloc = width / blocPerRow;
    int pixelPerColBloc = height / blocPerCol;
    int xx, yy;
    // Row loop
    for (int i = 0; i < blocPerRow; i++) {
        xx = (i * pixelPerRowBloc) + 1;
        // Column loop
        for (int j = 0; j < blocPerCol; j++) {
            yy = (j * pixelPerColBloc) + 1;
            [self getRGBAsFromImageBloc:image
                                    atX:xx
                                   andY:yy
                        withPixelPerRow:pixelPerRowBloc
                         AndPixelPerCol:pixelPerColBloc];
        }
    }
    // return my NSArray - not done yet!
}
My second method walks over one pixel block and returns a ColorStruct:
- (ColorStruct*)getRGBAsFromImageBloc:(UIImage*)image
                                  atX:(int)xx
                                 andY:(int)yy
                      withPixelPerRow:(int)pixelPerRow
                       AndPixelPerCol:(int)pixelPerCol {
    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    int red = 0;
    int green = 0;
    int blue = 0;
    int alpha = 0;
    int currentAlpha;
    // bloc loop
    for (int i = 0; i < (pixelPerRow * pixelPerCol); ++i) {
        currentAlpha = rawData[byteIndex + 3];
        red   += rawData[byteIndex]     * currentAlpha;
        green += rawData[byteIndex + 1] * currentAlpha;
        blue  += rawData[byteIndex + 2] * currentAlpha;
        alpha += currentAlpha;
        byteIndex += 4;
        if ((i + 1) % pixelPerRow == 0) { // end of the bloc's row: jump to its next row
            byteIndex += (width - pixelPerRow) * 4;
        }
    }
    red /= alpha;
    green /= alpha;
    blue /= alpha;
    ColorStruct *bColorStruct = newColorStruct(red, blue, green);
    free(rawData);
    return bColorStruct;
}
ColorStruct:
typedef struct {
    int red;
    int blue;
    int green;
} ColorStruct;
with constructor:
ColorStruct *newColorStruct(int red, int blue, int green) {
    ColorStruct *ret = malloc(sizeof(ColorStruct));
    ret->red = red;
    ret->blue = blue;
    ret->green = green;
    return ret;
}
As you can see, I have three levels of loops: the row loop, the column loop, and the bloc loop.
I have tested my code and it takes about 5 to 6 seconds for a 320x480 picture.
Any help is welcome.
Thanks,
Bahaaldine

Seems like a perfect problem to hand over to Grand Central Dispatch?
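For instance (an untested sketch, and only worthwhile once the per-block work no longer re-decodes the whole image; blockAverageAtX:y: is a hypothetical, thread-safe helper that only reads from a shared pixel buffer):
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
// One dispatch_apply iteration per row of blocks; iterations may run concurrently.
dispatch_apply(blocPerRow, queue, ^(size_t i) {
    int xx = (int)(i * pixelPerRowBloc) + 1;
    for (int j = 0; j < blocPerCol; j++) {
        int yy = (j * pixelPerColBloc) + 1;
        [self blockAverageAtX:xx y:yy]; // read-only work, safe to run in parallel
    }
});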

I think the main problem in this code is that there are too many image reads. The entire image is loaded into memory for every(!) block (and the malloc is expensive too). You should decode the image data once, cache it, and then have getRGBAsFromImageBloc() read from that memory. For a 320x480 picture you have 4 x 6 = 24 blocks, so caching alone can speed up your app many-fold.
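A minimal sketch of that restructuring (reusing the names from the question; the exact shape of the per-block call is up to you):
// Decode the image into an RGBA8888 buffer exactly once.
CGImageRef imageRef = [image CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
unsigned char *rawData = malloc(height * width * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, width * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// ... run the row/column/bloc loops against rawData, passing the buffer
// (not the UIImage) down to the averaging code ...
free(rawData); // free once, after all blocks have been processed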

At the end of the day, taking an image and performing three multiplies and five additions on each pixel sequentially is always going to be relatively slow.
Luckily, what you're doing can be thought of as a special case of interpolating an image from one size to another: the average pixel of an image is the same as that image resized to 1x1 (assuming the resizing uses some form of linear interpolation, which is usually the standard way to do it). There are a few options for doing that in the iPhone's graphics libraries that are highly optimized, or at least more optimized than you're likely to get without enormous effort. First, I'd try using the Quartz methods to resize an image:
CGImageRef sourceImage = yourImage;
int numBytesPerPixel = 4;
u_char *scaledImageData = (u_char *)malloc(numBytesPerPixel); // one 1x1 pixel
CGColorSpaceRef colorspace = CGImageGetColorSpace(sourceImage); // a "Get" function: we don't own this, so no release
CGContextRef context = CGBitmapContextCreate(scaledImageData, 1, 1, 8,
                                             numBytesPerPixel, colorspace,
                                             kCGImageAlphaNoneSkipFirst);
CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), sourceImage);
CGContextRelease(context); // release the context once the pixel has been written
int a = scaledImageData[0];
int r = scaledImageData[1];
int g = scaledImageData[2];
int b = scaledImageData[3];
free(scaledImageData);
(This just scales the original image down to one pixel and doesn't show cropping out the sub-regions; unfortunately I don't have time for that code right now. If you try to implement it and get stuck, add a comment and I can show you how you would do that.)
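For what it's worth, here is roughly how the cropping could look (an untested sketch; xx, yy, blockWidth and blockHeight are assumed to describe one sub-region):
// Crop one sub-region out of the source, then scale it down into a single pixel.
CGImageRef block = CGImageCreateWithImageInRect(sourceImage,
                                                CGRectMake(xx, yy, blockWidth, blockHeight));
u_char pixel[4] = {0};
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4,
                                         colorspace, kCGImageAlphaNoneSkipFirst);
CGColorSpaceRelease(colorspace);
CGContextDrawImage(ctx, CGRectMake(0, 0, 1, 1), block);
CGContextRelease(ctx);
CGImageRelease(block);
// pixel[1..3] now hold the interpolated (averaged) R, G, B of the sub-region.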
If that doesn't work, you could always try using OpenGL ES to do this (create a texture out of the part of your image you need to scale, render it to a 1x1 buffer, and read the result back from the buffer). This is a lot more complicated but might have some advantages in that it gives you access to the GPU, which might be a lot faster for large images.
Hope that makes sense and helps...
P.S. - Definitely follow y0prst's suggestion and only read the image in once - that is an easy fix that is going to buy you a ton of performance.
P.P.S - I haven't tested the code so usual caveats apply.

You're inspecting every single pixel - something that, it would seem, is going to take roughly the same amount of time no matter how you loop through it (provided you inspect each pixel only once).
I would suggest sparse sampling within the bloc instead - say, every nth pixel - which would reduce the loop time (and the accuracy), or allow for an adjustable granularity.
Now, if there is an existing algorithm for computing the average of a group of pixels, that would be something to consider as an alternative.
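As an illustration of the every-nth-pixel idea (a sketch reusing the variable names from the question's inner loop; stride is an assumed tuning knob, and the final division by alpha stays the same):
int stride = 4; // sample every 4th pixel in each direction; 1 = full scan
for (int y = 0; y < pixelPerCol; y += stride) {
    int rowBase = byteIndex + y * (int)width * 4;
    for (int x = 0; x < pixelPerRow; x += stride) {
        int idx = rowBase + x * 4;
        int a = rawData[idx + 3];
        red   += rawData[idx]     * a;
        green += rawData[idx + 1] * a;
        blue  += rawData[idx + 2] * a;
        alpha += a;
    }
}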

You can speed things up by not calling a method in the middle of your loop. Just include the code inline.
ADDED: Also, you might try drawing the image only once, not repeatedly in the loop, if you have enough memory.
After you do that, you can try hoisting some of the multiplies out of the inner loop as well for a little additional performance (although the compiler may optimize some of this for you), as in the sketch below.
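A rough sketch of the inlined, hoisted version (again reusing the question's names; untested):
for (int y = 0; y < pixelPerCol; y++) {
    // hoisted: one row-base computation per row instead of a wrap check per pixel
    const unsigned char *row = rawData + (((yy + y) * width) + xx) * 4;
    for (int x = 0; x < pixelPerRow; x++, row += 4) {
        int a = row[3];
        red += row[0] * a; green += row[1] * a; blue += row[2] * a;
        alpha += a;
    }
}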

Related

pass matlab image to open3d three::Image in a mex script

I am trying to load an image in a mex script and cast it to the corresponding format that the Open3D library uses, i.e. three::Image. I am using the following code:
uint8_t *rgb_image = (uint8_t *)mxGetPr(prhs[3]);
int *dims = (int *)mxGetDimensions(prhs[3]);
int height = dims[0];
int width = dims[1];
int channels = dims[2];
int imsize = height * width;
Image image;
image.PrepareImage(height, width, 3, sizeof(uint8_t)); // parameters: height, width, num_of_channels, bytes_per_channel
memcpy(image.data_.data(), rgb_image, image.data_.size());
The above works well when I give it a grayscale image and specify num_of_channels as 1, but not for 3-channel images.
I then tried to create a function where I manually loop through the raw data and assign it to the output image:
auto image_ptr = std::make_shared<Image>();
image_ptr->PrepareImage(height, width, channels, sizeof(uint8_t));
for (int i = 0; i < height * width; i++) {
    uint8_t *p = (uint8_t *)(image_ptr->data_.data() + i * channels * sizeof(uint8_t));
    *p++ = *rgb_image++;
}
But now it seems that the color channels are wrongly assigned.
Any idea how to address this issue? The point is that it seems like something easy, but since my knowledge of C++ and pointers is quite limited, I cannot figure it out directly.
I found this solution here (Reading image in matlab in a format acceptable to mex) as well, but I am not sure how exactly I can use it. To be honest, I am quite confused.
OK, the solution was quite straightforward, as I thought in the first place. It was just a matter of handling the pointers correctly:
std::shared_ptr<Image> CreateRGBImageFromMat(uint8_t *mat_image, int width, int height, int channels)
{
    auto open3d_image = std::make_shared<Image>();
    open3d_image->PrepareImage(height, width, channels, sizeof(uint8_t));
    for (int i = 0; i < height * width; i++) {
        uint8_t *p = (uint8_t *)(open3d_image->data_.data() + i * channels * sizeof(uint8_t));
        *p++ = *(mat_image + i);
        *p++ = *(mat_image + i + height * width);
        *p++ = *(mat_image + i + height * width * 2);
    }
    return open3d_image;
}
This is because three::Image expects the data interleaved in row x col x channel order, while from MATLAB the image comes in planar blocks: all of channel 1, then channel 2, then channel 3 (after you transpose the image, since MATLAB is column-major). My question now, though, is whether I can do the same with memcpy() or std::copy(), copying the planar data into contiguous interleaved form so that I bypass the for loop.

iOS how to calculate number of pixels/area enclosed by a curve?

I've got an arbitrarily shaped curve enclosing some area. I would like to approximate the number of pixels that the curve encloses on an iPhone/iPad screen. How can I do so?
A curve is defined as successive x/y coordinates of points.
A curve is closed.
A curve is drawn by the user's touches (in the touchesMoved method), and I have no knowledge of what it looks like in advance.
I was thinking of somehow filling the closed curve with color, then counting the pixels of that color in a screenshot of the screen. This means I need to know how to programmatically fill a closed curve with color.
Is there some other way that I'm not thinking of?
Thank you!
Let's do this by creating a Quartz path enclosing your curve. Then we'll create a bitmap context and fill the path in that context. Then we can examine the bitmap and count the pixels that were filled. We'll wrap this all in a convenient function:
static double areaOfCurveWithPoints(const CGPoint *points, size_t count) {
First we need to create the path:
CGPathRef path = createClosedPathWithPoints(points, count);
Then we need to get the bounding box of the path. CGPoint coordinates don't have to be integers, but a bitmap has to have integer dimensions, so we'll get an integral bounding box at least as big as the path's bounding box:
CGRect frame = integralFrameForPath(path);
We also need to decide how wide (in bytes) to make the bitmap:
size_t bytesPerRow = bytesPerRowForWidth(frame.size.width);
Now we can create the bitmap:
CGContextRef gc = createBitmapContextWithFrame(frame, bytesPerRow);
The bitmap is filled with black when it's created. We'll fill the path with white:
CGContextSetFillColorWithColor(gc, [UIColor whiteColor].CGColor);
CGContextAddPath(gc, path);
CGContextFillPath(gc);
Now we're done with the path so we can release it:
CGPathRelease(path);
Next we'll compute the area that was filled:
double area = areaFilledInBitmapContext(gc);
Now we're done with the bitmap context, so we can release it:
CGContextRelease(gc);
Finally, we can return the area we computed:
return area;
}
Well, that was easy! But we have to write all those helper functions. Let's start at the top. Creating the path is trivial:
static CGPathRef createClosedPathWithPoints(const CGPoint *points, size_t count) {
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, points, count);
    CGPathCloseSubpath(path);
    return path;
}
Getting the integral bounding box of the path is also trivial:
static CGRect integralFrameForPath(CGPathRef path) {
    CGRect frame = CGPathGetBoundingBox(path);
    return CGRectIntegral(frame);
}
To choose the bytes per row of the bitmap, we could just use the width of the path's bounding box. But I think Quartz likes bitmaps whose rows are multiples of a nice power of two. I haven't done any testing on this, so you might want to experiment. For now, we'll round the width up to the next multiple of 64:
static size_t bytesPerRowForWidth(CGFloat width) {
    static const size_t kFactor = 64;
    // Round up to a multiple of kFactor, which must be a power of 2.
    return ((size_t)width + (kFactor - 1)) & ~(kFactor - 1);
}
We create the bitmap context with the computed sizes. We also need to translate the origin of the coordinate system. Why? Because the origin of the path's bounding box might not be at (0, 0).
static CGContextRef createBitmapContextWithFrame(CGRect frame, size_t bytesPerRow) {
    CGColorSpaceRef grayscale = CGColorSpaceCreateDeviceGray();
    CGContextRef gc = CGBitmapContextCreate(NULL, frame.size.width, frame.size.height, 8, bytesPerRow, grayscale, kCGImageAlphaNone);
    CGColorSpaceRelease(grayscale);
    CGContextTranslateCTM(gc, -frame.origin.x, -frame.origin.y);
    return gc;
}
Finally, we need to write the helper that actually counts the filled pixels. We have to decide how we want to count pixels. Each pixel is represented by one unsigned 8-bit integer. A black pixel is 0. A white pixel is 255. The numbers in between are shades of gray. Quartz anti-aliases the edge of the path when it fills it using gray pixels. So we have to decide how to count those gray pixels.
One way is to define a threshold, like 128. Any pixel at or above the threshold counts as filled; the rest count as unfilled.
Another way is to count the gray pixels as partially filled, and add up that partial filling. So two exactly half-filled pixels get combined and count as a single, entirely-filled pixel. Let's do it that way:
static double areaFilledInBitmapContext(CGContextRef gc) {
    size_t width = CGBitmapContextGetWidth(gc);
    size_t height = CGBitmapContextGetHeight(gc);
    size_t stride = CGBitmapContextGetBytesPerRow(gc);
    uint8_t *pixels = CGBitmapContextGetData(gc);
    uint64_t coverage = 0;
    for (size_t y = 0; y < height; ++y) {
        for (size_t x = 0; x < width; ++x) {
            coverage += pixels[y * stride + x];
        }
    }
    return (double)coverage / UINT8_MAX;
}
You can find all of the code bundled up in this gist.
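A call site would look something like this (the points array is, of course, illustrative):
CGPoint points[] = { {10, 10}, {100, 20}, {80, 120}, {15, 90} };
size_t count = sizeof(points) / sizeof(points[0]);
double area = areaOfCurveWithPoints(points, count);
NSLog(@"approximate enclosed area: %.1f pixels", area);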
I would grab the drawing as a CGImage ...
CGImageRef image = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
Then, as recommended above, use a flood fill approach to count the pixels (Google "flood fill").

How does one compare one image to another to see if they are similar by a certain percentage, on the iPhone?

I basically want to take two images taken from the camera on the iPhone or iPad 2 and compare them to each other to see if they are pretty much the same. Obviously due to light etc. the images will never be EXACTLY the same, so I would like to check for around 90% similarity.
All the other questions like this that I saw on here were either not for iOS or were for locating objects in images. I just want to see if two images are similar.
Thank you.
As a quick, simple algorithm, I'd suggest iterating through about 1% of the pixels in each image and either comparing them directly against each other or keeping a running average and then comparing the two average color values at the end.
You can look at this answer for an idea of how to determine the color of a pixel at a given position in an image. You may want to optimize it somewhat to better suit your use-case (repeatedly querying the same image), but it should provide a good starting point.
Then you can use an algorithm roughly like:
float numDifferences = 0.0f;
float totalCompares = width * height / 100.0f;
for (int yCoord = 0; yCoord < height; yCoord += 10) {
    for (int xCoord = 0; xCoord < width; xCoord += 10) {
        int img1RGB[] = [image1 getRGBForX:xCoord andY:yCoord];
        int img2RGB[] = [image2 getRGBForX:xCoord andY:yCoord];
        if (abs(img1RGB[0] - img2RGB[0]) > 25 || abs(img1RGB[1] - img2RGB[1]) > 25 || abs(img1RGB[2] - img2RGB[2]) > 25) {
            // one or more pixel components differs by 10% or more
            numDifferences++;
        }
    }
}
if (numDifferences / totalCompares <= 0.1f) {
    // at least 90% of the sampled pixels match within the threshold
}
else {
    // fewer than 90% of the sampled pixels match
}
Based on aroth's idea, this is my full implementation. It checks whether some random pixels are the same. For what I needed, it works flawlessly.
- (bool)isTheImage:(UIImage *)image1 apparentlyEqualToImage:(UIImage *)image2 accordingToRandomPixelsPer1:(float)pixelsPer1
{
    if (!CGSizeEqualToSize(image1.size, image2.size))
    {
        return false;
    }
    int pixelsWidth = CGImageGetWidth(image1.CGImage);
    int pixelsHeight = CGImageGetHeight(image1.CGImage);
    int pixelsToCompare = pixelsWidth * pixelsHeight * pixelsPer1;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // create once, release below (avoids leaking it)
    uint32_t pixel1;
    CGContextRef context1 = CGBitmapContextCreate(&pixel1, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
    uint32_t pixel2;
    CGContextRef context2 = CGBitmapContextCreate(&pixel2, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(colorSpace);
    bool isEqual = true;
    for (int i = 0; i < pixelsToCompare; i++)
    {
        int pixelX = arc4random() % pixelsWidth;
        int pixelY = arc4random() % pixelsHeight;
        CGContextDrawImage(context1, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image1.CGImage);
        CGContextDrawImage(context2, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image2.CGImage);
        if (pixel1 != pixel2)
        {
            isEqual = false;
            break;
        }
    }
    CGContextRelease(context1);
    CGContextRelease(context2);
    return isEqual;
}
Usage:
[self isTheImage:image1 apparentlyEqualToImage:image2
accordingToRandomPixelsPer1:0.001]; // Use a value between 0.0001 and 0.005
According to my performance tests, 0.005 (0.5% of the pixels) is the maximum value you should use. If you need more precision, just compare the whole images using this. 0.001 seems to be a safe and well-performing value. For large images (between 0.5 and 2 megapixels), I'm using 0.0001 (0.01%), and it works great and incredibly fast; it never makes a mistake.
But of course the error rate will depend on the type of images you are using. I'm using UIWebView screenshots and 0.0001 performs well, but you can probably use much less if you are comparing real photographs (in fact, even comparing just one random pixel might do). If you are dealing with very similar computer-designed images, you definitely need more precision.
Note: I'm always comparing ARGB images without taking the alpha channel into account. Maybe you'll need to adapt it if that's not exactly your case.

How to save an image as Tiff or PNG with an alpha channel or alpha mask in iPhone SDK?

I have an image with something in it on a white background. I want to save that image in a format that supports an alpha channel, or with an alpha mask, so that the white pixels become transparent. Any light out there?
I don't know of any libraries where this is super easy, but there's a lot of relevant sample code in the GLImageProcessing example here. (I haven't run the following.)
UIImage *some_image = [UIImage imageNamed:@"somethin'.tiff"];
CGImageRef cg_image = some_image.CGImage;
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(cg_image));
size_t bpp = CGImageGetBitsPerPixel(cg_image);
uint32_t *stuff = (uint32_t *)CFDataGetBytePtr(data);
int w = CGImageGetWidth(cg_image);
int h = CGImageGetHeight(cg_image);
int N = w * h;
for (int i = 0; i < N; i++) {
    // do your stuff, test for white, set the alpha mask
    stuff[i] = stuff[i] & ((uint32_t)0xFFFFFFFF | alpha_mask);
}
You could instead use this function
UIKIT_EXTERN NSData *UIImagePNGRepresentation(UIImage *image);
and write the data to disk. I hope this helps. Post the solution if you find it...
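For example (a minimal sketch; maskedImage stands for whatever image you end up building, and the file name is illustrative):
NSData *pngData = UIImagePNGRepresentation(maskedImage); // PNG keeps the alpha channel
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"masked.png"];
[pngData writeToFile:path atomically:YES];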

CGPathRef intersection

Is there a way to find out whether two CGPathRefs intersect? In my case, all of my CGPaths are closed.
For example, say I have two paths: one path is a rectangle rotated by some angle, and the other is a curved path. The origins of the two paths change frequently. At some point they may intersect, and I want to know when they do. Please let me know if you have any solution.
Thanks in advance
Make one path the clipping path, draw the other path, then search for pixels that survived the clipping process:
// initialise and erase context
CGContextAddPath(context, path1);
CGContextClip(context);
// set fill colour to intersection colour
CGContextAddPath(context, path2);
CGContextFillPath(context);
// search for pixels that match intersection colour
This works because clipping = intersecting.
Don't forget that intersection depends on the definition of interiority, of which there are several. This code uses the winding-number fill rule; you might want the even-odd rule or something else again. If interiority doesn't keep you up at night, then this code should be fine.
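If you do want the even-odd rule instead, the change is local to the clip and fill calls (a sketch using the same context and path names as above):
CGContextAddPath(context, path1);
CGContextEOClip(context);     // even-odd clip instead of CGContextClip
CGContextAddPath(context, path2);
CGContextEOFillPath(context); // even-odd fill instead of CGContextFillPath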
My previous answer involved drawing transparent curves to an RGBA context. This solution is superior to the old one because it:
- is simpler
- uses a quarter of the memory, as an 8-bit greyscale context suffices
- obviates the need for hairy, difficult-to-debug transparency code
Who could ask for more?
I guess you could ask for a complete implementation, ready to cut'n'paste, but that would spoil the fun and obfuscate an otherwise simple answer.
OLDER, HARDER TO UNDERSTAND AND LESS EFFICIENT ANSWER
Draw both CGPathRefs separately at 50% transparency into a zeroed, CGBitmapContextCreate-ed RGBA memory buffer and check for any pixel values > 128. This works on any platform that supports CoreGraphics (i.e. iOS and OSX).
In pseudocode
// zero memory
CGContextRef context;
context = CGBitmapContextCreate(memory, wide, high, 8, wide*4, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaPremultipliedLast);
CGContextSetRGBFillColor(context, 1, 1, 1, 0.5); // now everything you draw will be at 50%
// draw your path 1 to context
// draw your path 2 to context
// for each pixel in memory buffer
if(*p > 128) return true; // curves intersect
else p+= 4; // keep looking
Let the resolution of the rasterised versions be your precision and choose the precision to suit your performance needs.
1) There isn't any CGPath API to do this. But you can do the math to figure it out. Take a look at this Wikipedia article on Bezier curves to see how the curves in CGPath are implemented.
2) This is going to be slow on the iPhone, I would expect, but you could fill both paths into a buffer in different colors (say, red and blue, each with alpha = 0.5) and then iterate through the buffer to find any pixels that occur at intersections. This will be extremely slow.
For iOS, the alpha blend seems to be ignored.
Instead, you can do a color blend, which will achieve the same effect, but doesn't need alpha:
CGContextSetBlendMode(context, kCGBlendModeColorDodge);
CGFloat semiTransparent[] = { .5,.5,.5,1};
Pixels in the output image will be:
RGB = 0,0,0 = (0.0f) ... no path
RGB = 64,64,64 = (0.25f) ... one path, no intersection
RGB = 128,128,128 = (0.5f) ... two paths, intersection found
Complete code for drawing:
-(void) drawFirst:(CGPathRef) first second:(CGPathRef) second into:(CGContextRef)context
{
    /** setup the context for DODGE (everything gets lighter if it overlaps) */
    CGContextSetBlendMode(context, kCGBlendModeColorDodge);
    CGFloat semiTransparent[] = { .5, .5, .5, 1 };
    CGContextSetStrokeColor(context, semiTransparent);
    CGContextSetFillColor(context, semiTransparent);
    CGContextAddPath(context, first);
    CGContextDrawPath(context, kCGPathFillStroke); // fill and stroke in one call; filling alone consumes the path
    CGContextAddPath(context, second);
    CGContextDrawPath(context, kCGPathFillStroke);
}
Complete code for checking output:
[self drawFirst:YOUR_FIRST_PATH second:YOUR_SECOND_PATH into:context];
// Now we can get a pointer to the image data associated with the bitmap
// context.
BOOL result = FALSE;
unsigned char* data = CGBitmapContextGetData (context);
if (data != NULL) {
for( int i=0; i<width; i++ )
for( int k=0; k<width; k++ )
{
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((width*round(k))+round(i));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
if( red > 254 )
{
result = TRUE;
break;
}
}
And, finally, here's slightly modified code from another SO answer ... complete code for creating an RGB bitmap context on iOS 4 and iOS 5 that will support the above functions:
- (CGContextRef) createARGBBitmapContextWithFrame:(CGRect) frame
{
    /** NB: this requires iOS 4 or above - it uses the auto-allocating behaviour of Apple's method, to reduce a potential memory leak in the original StackOverflow version */
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapBytesPerRow;
    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = frame.size.width;
    size_t pixelsHigh = frame.size.height;
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate (NULL,
                                     pixelsWide,
                                     pixelsHigh,
                                     8, // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedFirst
                                     //kCGImageAlphaFirst
                                     );
    if (context == NULL)
    {
        fprintf (stderr, "Context not created!");
    }
    // Make sure to release the colorspace before returning
    CGColorSpaceRelease( colorSpace );
    return context;
}