What could be the cause of this error with CGBitmapContextCreate()? (iPhone)

My application gives these errors on the console:
<Error>: CGBitmapContextCreateImage: invalid context 0xdf12000
Program received signal: “EXC_BAD_ACCESS”.
from the following code:
- (UIImage *)pureBlackAndWhiteImage:(UIImage *)image {
    //CGImageRef imageR = [image CGImage];
    CGColorSpaceRef colorSpac = CGColorSpaceCreateDeviceGray();
    //CGRect rect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
    CGContextRef contex = CGBitmapContextCreate(NULL, image.size.width,
                                                image.size.height, 8, 1*image.size.width,
                                                colorSpac, kCGImageAlphaNone);
    contex = malloc(image.size.height * image.size.width * 4);
    CGColorSpaceRelease(colorSpac);
    // Draw the image into the grayscale context
    //CGContextDrawImage(contex, rect, NULL);
    CGImageRef grayscale = CGBitmapContextCreateImage(contex);
    CGContextRelease(contex);
    NSUInteger width = CGImageGetWidth(grayscale);
    NSUInteger height = CGImageGetHeight(grayscale);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CFDataRef datare = CGDataProviderCopyData(CGImageGetDataProvider(grayscale));
    unsigned char *dataBitmap = (unsigned char *)CFDataGetBytePtr(datare);
    dataBitmap = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 1;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(dataBitmap, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaNoneSkipLast);
    //unsigned char *dataBitmap = [self bitmapFromImage:image];
    for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
        // if ((dataBitmap[i + 1] + dataBitmap[i + 2] + dataBitmap[i + 3]) < (255 * 3 / 2)) {
        if (dataBitmap[i + 1] > 128 && dataBitmap[i + 2] > 128 && dataBitmap[i + 3] > 128)
        {
            dataBitmap[i + 1] = 0;
            dataBitmap[i + 2] = 0;
            dataBitmap[i + 3] = 0;
        } else {
            dataBitmap[i + 1] = 255;
            dataBitmap[i + 2] = 255;
            dataBitmap[i + 3] = 255;
        }
    }
    //CFDataRef newData=CFDataCreate(NULL,dataBitmap,length);
    CGColorSpaceRef colorSpa = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapcontext = CGBitmapContextCreate(dataBitmap,
                                                       image.size.width,
                                                       image.size.height,
                                                       8,
                                                       1*image.size.width,
                                                       colorSpa,
                                                       kCGImageAlphaNone);
    CFRelease(colorSpace);
    CFRelease(colorSpa);
    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapcontext);
    CFRelease(cgImage);
    CFRelease(bitmapcontext);
    CGContextRelease(context);
    free(dataBitmap);
    //CFRelease(imageR);
    UIImage *newImage = [UIImage imageWithCGImage:cgImage];
    return newImage;
}
What could be causing these errors?

You're trying to create the context with only 4 bytes per row, when the width of the image is far more than that. If you have 1 byte per pixel, then instead of 4 you should use 1 * image.size.width, as you do the second and third times you create a bitmap context.
Besides that, I don't think passing imageR as the first argument is a good idea. If you're deploying for iOS 4 or later, you can pass NULL instead and let Core Graphics manage the buffer.
Otherwise, I think you have to allocate the memory for the bitmap data yourself and pass that buffer in.
Note also that the line contex = malloc(...) immediately overwrites the context you just created with a raw, uninitialized buffer, which is exactly why the later CGBitmapContextCreateImage call reports an invalid context.
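For reference, here is a minimal sketch of how the routine could be restructured, assuming iOS 4 or later so CGBitmapContextCreate can allocate its own buffer. The threshold direction is a guess; flip the comparison if you want the opposite black/white mapping.

- (UIImage *)pureBlackAndWhiteImage:(UIImage *)image {
    size_t width = (size_t)image.size.width;
    size_t height = (size_t)image.size.height;

    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    // 1 byte per pixel, so bytesPerRow is 1 * width; pass NULL for the
    // data pointer and let Core Graphics manage the buffer (iOS 4+).
    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                 8, width, graySpace,
                                                 kCGImageAlphaNone);
    CGColorSpaceRelease(graySpace);
    if (context == NULL) return nil;

    // Actually draw the source image; the code in the question skips this.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height),
                       [image CGImage]);

    // Threshold in place: anything above mid-gray becomes white, else black.
    unsigned char *data = CGBitmapContextGetData(context);
    for (size_t i = 0; i < width * height; i++) {
        data[i] = (data[i] > 128) ? 255 : 0;
    }

    CGImageRef bwImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *result = [UIImage imageWithCGImage:bwImage];
    CGImageRelease(bwImage);
    return result;
}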

Related

Inappropriate Images display due to overlapping in CGContext

I can't find the solution for this:
I have two image views, each with a different image: image_1 (a person's jeans) and image_2 (a person's shirt). When I change the RGB values of each and every pixel of image_1 or image_2 individually, I get a perfect result. But whenever one of the two frames slightly overlaps the other after both have been processed, the problem occurs. Please help. This is how I am processing the image:
- (UIImage *)ColorChangeProcessing:(int)redvalue greenValue:(int)greenvalue blueValue:(int)bluevalue imageUsed:(UIImage *)image
{
    CGContextRef ctx;
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    int byteIndex = (bytesPerRow * 0) + 0 * bytesPerPixel, RED = redvalue, GREEN = greenvalue, BLUE = bluevalue;
    for (int ii = 0; ii < width * height; ++ii)
    {
        if (rawData[byteIndex] != '\0' || rawData[byteIndex + 1] != '\0' || rawData[byteIndex + 2] != '\0') {
            if ((rawData[byteIndex] + RED) > 255)
                rawData[byteIndex] = (char)255;
            else if ((rawData[byteIndex] + RED) > 0)
                rawData[byteIndex] = (char)((rawData[byteIndex] * 1.0) + RED);
            else
                rawData[byteIndex] = (char)0;

            if ((rawData[byteIndex + 1] + GREEN) > 255)
                rawData[byteIndex + 1] = (char)255;
            else if ((rawData[byteIndex + 1] + GREEN) > 0)
                rawData[byteIndex + 1] = (char)((rawData[byteIndex + 1] * 1.0) + GREEN);
            else
                rawData[byteIndex + 1] = (char)0;

            if ((rawData[byteIndex + 2] + BLUE) > 255)
                rawData[byteIndex + 2] = (char)255;
            else if ((rawData[byteIndex + 2] + BLUE) > 0)
                rawData[byteIndex + 2] = (char)((rawData[byteIndex + 2] * 1.0) + BLUE);
            else
                rawData[byteIndex + 2] = (char)0;
        }
        byteIndex += 4;
    }
    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth(imageRef),
                                CGImageGetHeight(imageRef),
                                8,
                                CGImageGetBytesPerRow(imageRef),
                                CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedLast);
    CGImageRef NewimageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:NewimageRef];
    CGContextRelease(ctx);
    free(rawData);
    CGImageRelease(NewimageRef);
    return rawImage;
}
Now, on any button action, you can set the R, G, B and image values and get the processed image. Then try to place the processed images' frames so that part of one image is covered by the other; for example, with the jeans image, place the small portion near the belt over the shirt image.
Finally I came up with the solution: I was failing to check the alpha value, so the transparent part of the image was what caused the problems. Thanks all.
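In other words, a hedged sketch of the fix the poster describes (assuming the R,G,B,A byte layout produced by kCGImageAlphaPremultipliedLast above): test the alpha byte and leave fully transparent pixels alone.

for (int ii = 0; ii < width * height; ++ii)
{
    // Offset 3 is the alpha byte; 0 means fully transparent.
    if (rawData[byteIndex + 3] != 0)
    {
        // ... adjust rawData[byteIndex] through rawData[byteIndex + 2]
        //     exactly as in the loop above ...
    }
    byteIndex += 4;
}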

iOS grayscale/black & white

I am having an issue with some code that changes a UIImage to grayscale. It works correctly on iPhone/iPod, but on iPad whatever has already been drawn gets stretched and skewed in the process.
It also sometimes crashes, only on iPad, on the line
imageRef = CGBitmapContextCreateImage (ctx);
Here is the code:
CGContextRef ctx;
CGImageRef imageRef = [self.drawImage.image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
int byteIndex = 0;
int grayScale;
for (int ii = 0; ii < width * height; ++ii)
{
    grayScale = (rawData[byteIndex] + rawData[byteIndex + 1] + rawData[byteIndex + 2]) / 3;
    rawData[byteIndex] = (char)grayScale;
    rawData[byteIndex + 1] = (char)grayScale;
    rawData[byteIndex + 2] = (char)grayScale;
    //rawData[byteIndex + 3] = 255;
    byteIndex += 4;
}
ctx = CGBitmapContextCreate(rawData,
                            CGImageGetWidth(imageRef),
                            CGImageGetHeight(imageRef),
                            8,
                            CGImageGetBytesPerRow(imageRef),
                            CGImageGetColorSpace(imageRef),
                            kCGImageAlphaPremultipliedLast);
imageRef = CGBitmapContextCreateImage(ctx);
UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
self.drawImage.image = rawImage;
free(rawData);
Consider leveraging the CIColorMonochrome filter available in iOS 6.
void ToMonochrome()
{
    var mono = new CIColorMonochrome();
    mono.Color = CIColor.FromRgb(1, 1, 1);
    mono.Intensity = 1.0f;
    var uiImage = new UIImage("Images/photo.jpg");
    mono.Image = CIImage.FromCGImage(uiImage.CGImage);
    DisplayFilterOutput(mono, ImageView);
}

static void DisplayFilterOutput(CIFilter filter, UIImageView imageView)
{
    CIImage output = filter.OutputImage;
    if (output == null)
        return;
    var context = CIContext.FromOptions(null);
    var renderedImage = context.CreateCGImage(output, output.Extent);
    var finalImage = new UIImage(renderedImage);
    imageView.Image = finalImage;
}
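The snippet above is C# (MonoTouch/Xamarin). A rough Objective-C equivalent, assuming iOS 6 and the CIColorMonochrome filter, might look like this (the method name is illustrative):

#import <CoreImage/CoreImage.h>

// Sketch: run CIColorMonochrome over a UIImage and return the result.
- (UIImage *)monochromeImage:(UIImage *)source {
    CIImage *input = [CIImage imageWithCGImage:source.CGImage];
    CIFilter *mono = [CIFilter filterWithName:@"CIColorMonochrome"];
    [mono setValue:input forKey:kCIInputImageKey];
    [mono setValue:[CIColor colorWithRed:1 green:1 blue:1]
            forKey:kCIInputColorKey];
    [mono setValue:@1.0f forKey:kCIInputIntensityKey];

    CIImage *output = mono.outputImage;
    if (output == nil) return nil;

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef rendered = [context createCGImage:output fromRect:output.extent];
    UIImage *result = [UIImage imageWithCGImage:rendered];
    CGImageRelease(rendered);
    return result;
}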
Figured it out with some help elsewhere: I got rid of the extra context I was using and changed the bitmap format.
CGContextRef ctx;
CGImageRef imageRef = [self.drawImage.image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
ctx = CGBitmapContextCreate(rawData,
                            width,
                            height,
                            bitsPerComponent,
                            bytesPerRow,
                            colorSpace,
                            kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);
int byteIndex = 0;
int grayScale;
for (int ii = 0; ii < width * height; ++ii)
{
    grayScale = (rawData[byteIndex] + rawData[byteIndex + 1] + rawData[byteIndex + 2]) / 3;
    rawData[byteIndex] = (char)grayScale;
    rawData[byteIndex + 1] = (char)grayScale;
    rawData[byteIndex + 2] = (char)grayScale;
    //rawData[byteIndex + 3] = 255;
    byteIndex += 4;
}
imageRef = CGBitmapContextCreateImage(ctx);
UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
self.drawImage.image = rawImage;
free(rawData);

"<Error>: CGContextDrawImage: invalid context 0x0"

I have this code, which I found on here, which finds the color of a pixel in an image:
+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red   = (rawData[byteIndex] * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }
    free(rawData);
    return result;
}
But at the line CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); it logs <Error>: CGContextDrawImage: invalid context 0x0. It doesn't crash the app, but obviously I'd prefer not to have an error there.
Any suggestions?
This is usually due to CGBitmapContextCreate failing because of an unsupported combination of flags, bitsPerComponent, and so on. Try removing the kCGBitmapByteOrder32Big flag; there's an Apple doc that lists all the possible context formats (look for "Supported Pixel Formats").
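A small defensive sketch, not from the original answer, reusing the variables from the question's code: check the context before drawing, so a rejected parameter combination fails loudly at the creation site instead of surfacing later as "invalid context 0x0".

CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
if (context == NULL) {
    // The flags / bitsPerComponent / colorSpace combination was rejected.
    NSLog(@"CGBitmapContextCreate failed (%lu x %lu)",
          (unsigned long)width, (unsigned long)height);
    free(rawData);
    return nil;
}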
Duh, I figured it out. So silly.
When I first enter the view, the UIImageView is blank, so the method is called against an empty UIImageView. It makes sense that it would complain about an "invalid context": the UIImageView is empty, so how can it get the width and height of an image that isn't there?
If I comment out the method, pick an image, and then put the method back in, it works. Makes sense.
I just put in an if/else statement so the method is only called when the image view isn't empty.
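A minimal sketch of that guard (imageView, x, and y are hypothetical names; MyClass stands in for wherever getRGBAsFromImage:atX:andY:count: is defined):

// Only sample pixels when the image view actually has an image;
// otherwise width and height are 0 and the bitmap context creation fails.
if (self.imageView.image != nil) {
    NSArray *colors = [MyClass getRGBAsFromImage:self.imageView.image
                                             atX:x andY:y count:1];
    // ... use colors ...
}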

How to crop the image in iPhone

I want to do the same thing as asked in this question.
In my app I want to crop an image the way Facebook does image cropping. Can anyone point me to a good tutorial or sample code? The link I have provided describes my requirement completely.
You may create a new image with any properties. Here is my function, which does that; you just need to use your own parameters for the new image. In my case the image is not cropped, I am just applying an effect that moves pixels from their original places to new ones. But if you initialize the new image with a different height and width, you can copy any range of pixels from the old image to the new one:
- (UIImage *)Color:(UIImage *)img
{
    int R;
    float m_width = img.size.width;
    float m_height = img.size.height;
    if (m_width > m_height) R = m_height * 0.9;
    else R = m_width * 0.9;
    // Later we will need these parameters as both float and int; you could
    // instead just cast with (int) and (float) where needed and skip these.
    int m_wint = (int)m_width;
    int m_hint = (int)m_height;
    CGRect imageRect;
    // Checking the image orientation: we will work with the image
    // pixel-by-pixel, so we need the top side at the top.
    if (img.imageOrientation == UIImageOrientationUp
        || img.imageOrientation == UIImageOrientationDown)
    {
        imageRect = CGRectMake(0, 0, m_wint, m_hint);
    }
    else
    {
        imageRect = CGRectMake(0, 0, m_hint, m_wint);
    }
    uint32_t *rgbImage = (uint32_t *)malloc(m_wint * m_hint * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbImage, m_wint, m_hint, 8,
                                                 m_wint * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextSetShouldAntialias(context, NO);
    CGContextTranslateCTM(context, 0, m_hint);
    CGContextScaleCTM(context, 1.0, -1.0);
    switch (img.imageOrientation) {
        case UIImageOrientationRight:
        {
            CGContextRotateCTM(context, M_PI / 2);
            CGContextTranslateCTM(context, 0, -m_wint);
        } break;
        case UIImageOrientationLeft:
        {
            CGContextRotateCTM(context, -M_PI / 2);
            CGContextTranslateCTM(context, -m_hint, 0);
        } break;
        case UIImageOrientationUp:
        {
            CGContextTranslateCTM(context, m_wint, m_hint);
            CGContextRotateCTM(context, M_PI);
        }
        default:
            break;
    }
    CGContextDrawImage(context, imageRect, img.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Here is the new image. You can change m_wint and m_hint as you want.
    uint8_t *result = (uint8_t *)calloc(m_wint * m_hint * sizeof(uint32_t), 1);
    for (int y = 0; y < m_hint; y++) // new m_hint here
    {
        float fy = y;
        double yy = (m_height * (asinf(m_height / (2 * R)) - asin(((m_height / 2) - fy) / R))) /
                    (2 * asin(m_height / (2 * R))); // (xx, yy): coordinates of a pixel of the OLD image
        for (int x = 0; x < m_wint; x++) // new m_wint here
        {
            float fx = x;
            double xx = (m_width * (asin(m_width / (2 * R)) - asin(((m_width / 2) - fx) / R))) /
                        (2 * asin(m_width / (2 * R)));
            uint32_t rgbPixel = rgbImage[(int)yy * m_wint + (int)xx];
            int intRedSource = (rgbPixel >> 24) & 255;
            int intGreenSource = (rgbPixel >> 16) & 255;
            int intBlueSource = (rgbPixel >> 8) & 255;
            result[(y * (int)m_wint + x) * 4] = 0;
            result[(y * (int)m_wint + x) * 4 + 1] = intBlueSource;
            result[(y * (int)m_wint + x) * 4 + 2] = intGreenSource;
            result[(y * (int)m_wint + x) * 4 + 3] = intRedSource;
        }
    }
    free(rgbImage);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(result, m_wint, m_hint, 8,
                                    m_wint * sizeof(uint32_t), colorSpace,
                                    kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast); // new m_wint and m_hint as well
    CGImageRef image1 = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *resultUIImage = [UIImage imageWithCGImage:image1];
    CGImageRelease(image1);
    @try {
        free(result);
    }
    @catch (NSException *e) {
        NSLog(@"proc. Exception: %@", e);
    }
    return resultUIImage;
}
Alternatively, a simple rectangular crop can be done with CGImageCreateWithImageInRect:

CGRect rectImage = CGRectMake(p1.x, p1.y, p2.x - p1.x, p4.y - p1.y);
// Create a bitmap image from the original image data,
// using the rectangle to specify the desired crop area
CGImageRef imageRef = CGImageCreateWithImageInRect([imageForCropping CGImage], rectImage);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
imageView1 = [[UIImageView alloc] initWithFrame:CGRectMake(p1.x, p1.y, p2.x - p1.x, p4.y - p1.y)];
imageView1.image = croppedImage;
[self.view addSubview:imageView1];
CGImageRelease(imageRef);

iPhone Image Processing--matrix convolution

I am implementing a matrix convolution blur on the iPhone. The following code converts the UIImage supplied as an argument of the blur function into a CGImageRef, and then stores the RGBA values in a standard C char array.
CGImageRef imageRef = imgRef.CGImage;
int width = imgRef.size.width;
int height = imgRef.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *pixels = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
Then the pixel values stored in the pixels array are convolved and stored in another array.
unsigned char *results = malloc((height) * (width) * 4);
Finally, these augmented pixel values are converted back into a CGImageRef, then to a UIImage, which is returned at the end of the function with the following code.
context = CGBitmapContextCreate(results, width, height,
                                bitsPerComponent, bytesPerRow, colorSpace,
                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef finalImage = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
CGImageRelease(finalImage);
NSLog(@"edges found");
free(results);
free(pixels);
CGColorSpaceRelease(colorSpace);
return newImage;
This works perfectly once. But when the image is put through the filter again, very odd pixel values are returned, values that correspond to input pixel values that don't exist. Is there any reason why this should work the first time but not afterward? Below is the entire function.
- (UIImage *)blur:(UIImage *)imgRef {
    CGImageRef imageRef = imgRef.CGImage;
    int width = imgRef.size.width;
    int height = imgRef.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *pixels = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    height = imgRef.size.height;
    width = imgRef.size.width;
    float matrix[] = {1, 1, 1, 1, 1, 1, 1, 1, 1};
    float divisor = 9;
    float shift = 0;
    unsigned char *results = malloc(height * width * 4);
    for (int y = 1; y < height; y++) {
        for (int x = 1; x < width; x++) {
            float red = 0;
            float green = 0;
            float blue = 0;
            int multiplier = 1;
            if (y > 0 && x > 0) {
                int index = (y - 1) * width + x;
                red = matrix[0] * multiplier * (float)pixels[4 * (index - 1)] +
                      matrix[1] * multiplier * (float)pixels[4 * (index)] +
                      matrix[2] * multiplier * (float)pixels[4 * (index + 1)];
                green = matrix[0] * multiplier * (float)pixels[4 * (index - 1) + 1] +
                        matrix[1] * multiplier * (float)pixels[4 * (index) + 1] +
                        matrix[2] * multiplier * (float)pixels[4 * (index + 1) + 1];
                blue = matrix[0] * multiplier * (float)pixels[4 * (index - 1) + 2] +
                       matrix[1] * multiplier * (float)pixels[4 * (index) + 2] +
                       matrix[2] * multiplier * (float)pixels[4 * (index + 1) + 2];
                index = (y) * width + x;
                red = red + matrix[3] * multiplier * (float)pixels[4 * (index - 1)] +
                      matrix[4] * multiplier * (float)pixels[4 * (index)] +
                      matrix[5] * multiplier * (float)pixels[4 * (index + 1)];
                green = green + matrix[3] * multiplier * (float)pixels[4 * (index - 1) + 1] +
                        matrix[4] * multiplier * (float)pixels[4 * (index) + 1] +
                        matrix[5] * multiplier * (float)pixels[4 * (index + 1) + 1];
                blue = blue + matrix[3] * multiplier * (float)pixels[4 * (index - 1) + 2] +
                       matrix[4] * multiplier * (float)pixels[4 * (index) + 2] +
                       matrix[5] * multiplier * (float)pixels[4 * (index + 1) + 2];
                index = (y + 1) * width + x;
                red = red + matrix[6] * multiplier * (float)pixels[4 * (index - 1)] +
                      matrix[7] * multiplier * (float)pixels[4 * (index)] +
                      matrix[8] * multiplier * (float)pixels[4 * (index + 1)];
                green = green + matrix[6] * multiplier * (float)pixels[4 * (index - 1) + 1] +
                        matrix[7] * multiplier * (float)pixels[4 * (index) + 1] +
                        matrix[8] * multiplier * (float)pixels[4 * (index + 1) + 1];
                blue = blue + matrix[6] * multiplier * (float)pixels[4 * (index - 1) + 2] +
                       matrix[7] * multiplier * (float)pixels[4 * (index) + 2] +
                       matrix[8] * multiplier * (float)pixels[4 * (index + 1) + 2];
                red = red / divisor + shift;
                green = green / divisor + shift;
                blue = blue / divisor + shift;
                if (red < 0) red = 0;
                if (green < 0) green = 0;
                if (blue < 0) blue = 0;
                if (red > 255) red = 255;
                if (green > 255) green = 255;
                if (blue > 255) blue = 255;
                int realPos = 4 * (y * imgRef.size.width + x);
                results[realPos] = red;
                results[realPos + 1] = green;
                results[realPos + 2] = blue;
                results[realPos + 3] = 1;
            } else {
                int realPos = 4 * ((y) * (imgRef.size.width) + (x));
                results[realPos] = 0;
                results[realPos + 1] = 0;
                results[realPos + 2] = 0;
                results[realPos + 3] = 1;
            }
        }
    }
    context = CGBitmapContextCreate(results, width, height,
                                    bitsPerComponent, bytesPerRow, colorSpace,
                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGImageRef finalImage = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
    CGImageRelease(finalImage);
    free(results);
    free(pixels);
    CGColorSpaceRelease(colorSpace);
    return newImage;
}
THANKS!!!
The problem was that I was assuming the alpha value; it needed to be calculated like the RGB values.
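In other words, a hedged sketch of the fix: with kCGImageAlphaPremultipliedLast, writing a constant 1 into the alpha byte marks every pixel as almost fully transparent, and premultiplication then folds that into the RGB values on the next pass. Carrying the source alpha through (or convolving it like the color channels) avoids this:

int realPos = 4 * (y * width + x);
results[realPos]     = red;
results[realPos + 1] = green;
results[realPos + 2] = blue;
// Copy the source alpha through instead of hardcoding 1; for a true blur
// you could also convolve the alpha channel like the other channels.
results[realPos + 3] = pixels[realPos + 3];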