CoreGraphics pixel manipulation turns bottom of image blue - iPhone

I'm manipulating pixels to turn the image greyscale, and all appears well except that at the bottom of the image I have blue colored pixels. The effect is more pronounced the smaller the image is, and it disappears entirely past a certain size. Can anyone see what I'm doing wrong?
CGImageRef imageRef = image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CFDataRef dataref = CopyImagePixels(imageRef);
unsigned char *rawData = (unsigned char *)CFDataGetBytePtr(dataref);
int byteIndex = 0;
for (int ii = 0; ii < width * height; ++ii)
{
    int red   = (int)rawData[byteIndex];
    int green = (int)rawData[byteIndex + 1];
    int blue  = (int)rawData[byteIndex + 2];
    // standard luminance weights
    int grey = (int)(red * 0.30 + green * 0.59 + blue * 0.11);
    rawData[byteIndex]     = clamp(grey, 0, 255);
    rawData[byteIndex + 1] = clamp(grey, 0, 255);
    rawData[byteIndex + 2] = clamp(grey, 0, 255);
    rawData[byteIndex + 3] = 255;
    byteIndex += 4;
}
CGContextRef ctx = CGBitmapContextCreate(rawData,
                                         CGImageGetWidth(imageRef),
                                         CGImageGetHeight(imageRef),
                                         CGImageGetBitsPerComponent(imageRef),
                                         CGImageGetBytesPerRow(imageRef),
                                         CGImageGetColorSpace(imageRef),
                                         kCGImageAlphaPremultipliedLast);
imageRef = CGBitmapContextCreateImage(ctx);
CFRelease(dataref);
UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(ctx);
Example of problem: http://iforce.co.nz/i/3rei1wba.utm.jpg

There's a reason that no one has answered: the code posted in your question seems absolutely fine!
I've made a test project here: https://github.com/oneblacksock/stack_overflow_answer_6188863 and when I run it with your code in, it works perfectly!
The only bits that are different from your problem are the CopyImagePixels and clamp functions - perhaps your problem is in those?
Download my test project and see what I've done - try it with an image you know is broken and let me know how you get on!
Sam

The problem is that I was assuming CGImageGetWidth(imageRef) * 4 == CGImageGetBytesPerRow(imageRef), which isn't always the case: rows can be padded out for alignment, so a flat width * height loop drifts out of step with the real rows, and the padding is proportionally larger on small images. This was pointed out to me on the Apple developer forums and is correct. I've changed the code to work from the length of the dataref, and now it works as expected.
NSUInteger length = CFDataGetLength(dataref);
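For the record, here is a minimal sketch of the corrected loop that walks the buffer row by row using the image's real stride. It is untested, assumes 32-bit RGBA data, and takes a mutable copy of the pixel data so the buffer can legally be written to (the CopyImagePixels helper above is assumed to wrap CGDataProviderCopyData):
CGImageRef imageRef = image.CGImage;
size_t width  = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);   // may be > width * 4

CFDataRef src = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
CFMutableDataRef dataref = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, src);
CFRelease(src);
UInt8 *rawData = CFDataGetMutableBytePtr(dataref);

for (size_t y = 0; y < height; y++) {
    UInt8 *row = rawData + y * bytesPerRow;             // honour any row padding
    for (size_t x = 0; x < width; x++) {
        UInt8 *px = row + x * 4;                        // assumes 32-bit RGBA
        int grey = (int)(px[0] * 0.30 + px[1] * 0.59 + px[2] * 0.11);
        px[0] = px[1] = px[2] = (UInt8)grey;            // alpha (px[3]) left as-is
    }
}
// Recreate the image exactly as in the original code, passing the same
// bytesPerRow value to CGBitmapContextCreate.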

Related

Malformed OpenGL texture after calling CGBitmapContextCreateImage

I'm trying to use the contents of the UIView as an OpenGL texture. Here's how I obtain it:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
char *rawImage = malloc(4 * s.width * s.height);
CGContextRef bitmapContext = CGBitmapContextCreate(rawImage,
                                                   s.width,
                                                   s.height,
                                                   8,
                                                   4 * s.width,
                                                   colorSpace,
                                                   kCGImageAlphaNoneSkipFirst);
[v.layer renderInContext:bitmapContext];
// converting ARGB to BGRA
for (int i = 0; i < s.width * s.height; i++) {
    int p = i * 4;
    char a = rawImage[p];
    char r = rawImage[p + 1];
    char g = rawImage[p + 2];
    char b = rawImage[p + 3];
    rawImage[p]     = b;
    rawImage[p + 1] = g;
    rawImage[p + 2] = r;
    rawImage[p + 3] = a;
}
CFRelease(colorSpace);
CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
return [GLKTextureLoader textureWithCGImage:image options:nil error:nil];
This is the UIView I start with (note the small triangles at the top left corner):
This is what I get on an OpenGL surface after taking a snapshot:
It's clear that the coordinates are mangled, but I can't tell how, or what I'm doing wrong. Is it the row byte alignment that goes wrong?
UPDATE: if I don't do the color-component swizzling (omitting the ARGB-to-BGRA loop), here's the resulting picture:
Looks like an alignment issue. I suggest putting
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before
return [GLKTextureLoader textureWithCGImage:image options:nil error:nil];
Also, you might be messing up the alignment when doing that component swizzling, which BTW is completely unnecessary, as modern OpenGL directly supports the BGRA format.
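For example, a sketch of the swizzle-free route (untested): render the layer in little-endian "alpha first" order so memory already holds BGRA, then upload the raw bytes yourself instead of going through GLKTextureLoader. GL_BGRA_EXT comes from the GL_APPLE_texture_format_BGRA8888 extension, which iOS devices expose, and textureName is assumed to be a GLuint you created with glGenTextures:
CGContextRef bitmapContext =
    CGBitmapContextCreate(rawImage, s.width, s.height, 8, 4 * s.width, colorSpace,
                          kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
[v.layer renderInContext:bitmapContext];

glBindTexture(GL_TEXTURE_2D, textureName);      // textureName: your own texture object
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);          // rows here are byte-tight anyway
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,         // internal format stays GL_RGBA
             (GLsizei)s.width, (GLsizei)s.height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, rawImage);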
Solved: the picture above is what you get when you apply a texture with the same aspect ratio as the surface but swap the surface's s and t coordinates. The color-component swizzling is still needed, though.

Totally confused about where to change the code in OpenCV

The following links give the result shown in the image below:
https://github.com/BloodAxe/opencv-ios-template-project/downloads
http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
I changed the code from COLOR_RGB2GRAY to COLOR_BGR2BGRA, which gives me an error that says "OpenCV Error: Unsupported format or combination of formats () in cvCanny"
(or)
from CGColorSpaceCreateDeviceGray to CGColorSpaceCreateDeviceRGB.
I am totally confused about where to change the code.
I need the output as "white color with black lines" instead of "black color with gray lines".
Please guide me.
Thanks a lot in advance.
In OpenCVClientViewController.mm, include this method (copied from https://stackoverflow.com/a/6672628/); the image will then be converted as shown below:
- (void)invertColors
{
    NSLog(@"invertColors called");
    // Get width and height as integers, since we'll be using them as
    // array subscripts, etc., and this'll save a whole lot of casting
    CGSize size = self.imageView.image.size;
    int width = size.width;
    int height = size.height;
    // Create a suitable RGB+alpha bitmap context in BGRA colour space
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width * height * 4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4,
                                                 colourSpace,
                                                 kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);
    // Draw the current image into the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self.imageView.image CGImage]);
    // Run through every pixel, a scan line at a time...
    for (int y = 0; y < height; y++)
    {
        // Get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];
        // Step through the pixels one by one...
        for (int x = 0; x < width; x++)
        {
            // Get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get unpremultiplied RGB. We
            // multiply by 255 to keep precision while still using integers.
            int r, g, b;
            if (linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;
            // Perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;
            // Multiply by alpha again, divide by 255 to undo the
            // scaling before, store the new values and advance
            // the pointer we're reading pixel data from
            linePointer[0] = r * linePointer[3] / 255;
            linePointer[1] = g * linePointer[3] / 255;
            linePointer[2] = b * linePointer[3] / 255;
            linePointer += 4;
        }
    }
    // Get a CGImage from the context, wrap that in a UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];
    // Clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);
    // ...and return
    self.imageView.image = returnImage;
}
// Called when the user changes either of the threshold sliders
- (IBAction)sliderChanged:(id)sender
{
    self.highLabel.text = [NSString stringWithFormat:@"%.0f", self.highSlider.value];
    self.lowLabel.text = [NSString stringWithFormat:@"%.0f", self.lowSlider.value];
    [self processFrame];
}

Histogram of Image in iPhone

I am looking for a way to get a histogram of an image on the iPhone. The OpenCV library is way too big to be included in my app (OpenCV is about 70MB compiled), but I can use OpenGL. However, I have no idea on how to do either of these.
I have found how to get the pixels of the image, but cannot form a histogram. This seems like it should be simple, but I don't know how to accumulate the uint8_t pixel values into an array of counts.
Here is the relevant question/answer for finding pixels:
Getting RGB pixel data from CGImage
The uint8_t* is just a pointer to a C array containing the bytes of the given color, i.e. {r, g, b, a} or whatever the color byte layout is for your image buffer.
So, referencing the link you provided, and the definition of a histogram:
uint32_t histogram_counts[4][256] = {{0}}; // one 256-bin plane per channel (r, g, b, a)
// Say we're in the inner loop and we have a given pixel in RGBA format
const uint8_t *pixel = &bytes[row * bpr + col * bytes_per_pixel];
// Accumulate into histogram_counts; you could also build just one plane
// for brightness. If your data isn't RGBA, loop to bytes_per_pixel instead of 4.
for (int i = 0; i < 4; i++) {
    // Increment the count of pixels that have this value in channel i
    histogram_counts[i][pixel[i]]++;
}
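Putting that together, here's a minimal self-contained sketch (the HistogramForImage name is mine, not a framework API) that renders a UIImage into a known RGBA format and accumulates all four histograms:
#import <UIKit/UIKit.h>

// Draw the image into an RGBA8888 buffer with a known layout, then bin
// every channel value into four 256-bin histograms.
static void HistogramForImage(UIImage *image, uint32_t counts[4][256])
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;
    unsigned char *buffer = calloc(height * bytesPerRow, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Drawing into our own context guarantees the RGBA byte order indexed below
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    memset(counts, 0, 4 * 256 * sizeof(uint32_t));
    for (size_t row = 0; row < height; row++) {
        const uint8_t *pixel = buffer + row * bytesPerRow;
        for (size_t col = 0; col < width; col++, pixel += 4) {
            for (int i = 0; i < 4; i++) {
                counts[i][pixel[i]]++; // one 256-bin plane per channel
            }
        }
    }
    CGContextRelease(ctx);
    free(buffer);
}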
You can read the RGB values of your image through its CGImageRef. Look at the method below, which I used for this.
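Note that R(), G(), B(), A() and RGBAMake() aren't defined in the listing; a minimal set of definitions, consistent with reading host-order UInt32 values on a little-endian iOS device from the kCGBitmapByteOrder32Big RGBA context the method creates, would be:
// Assumed helper macros (not part of any framework): extract/compose the
// four 8-bit channels of a pixel read as a host-order UInt32 from an
// RGBA8888 buffer created with kCGBitmapByteOrder32Big.
#define Mask8(x) ((x) & 0xFF)
#define R(x) (Mask8(x))
#define G(x) (Mask8((x) >> 8))
#define B(x) (Mask8((x) >> 16))
#define A(x) (Mask8((x) >> 24))
#define RGBAMake(r, g, b, a) \
    (Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24)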
- (UIImage *)processUsingPixels:(UIImage *)inputImage {
    // 1. Get the raw pixels of the image
    UInt32 *inputPixels;
    CGImageRef inputCGImage = [inputImage CGImage];
    NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
    NSUInteger inputHeight = CGImageGetHeight(inputCGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bitsPerComponent = 8;
    NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;
    inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));
    CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight,
                                                 bitsPerComponent, inputBytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    // 2. Draw the image into the context so inputPixels actually holds its data
    CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);
    // 3. Convert the image to black & white
    for (NSUInteger j = 0; j < inputHeight; j++) {
        for (NSUInteger i = 0; i < inputWidth; i++) {
            UInt32 *currentPixel = inputPixels + (j * inputWidth) + i;
            UInt32 color = *currentPixel;
            // Average of RGB = greyscale
            UInt32 averageColor = (R(color) + G(color) + B(color)) / 3.0;
            *currentPixel = RGBAMake(averageColor, averageColor, averageColor, A(color));
        }
    }
    // 4. Create a new UIImage
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *processedImage = [UIImage imageWithCGImage:newCGImage];
    // 5. Cleanup!
    CGImageRelease(newCGImage);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    free(inputPixels);
    return processedImage;
}
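Typical usage would then be, for example (assuming an imageView outlet like the ones used elsewhere in this thread):
self.imageView.image = [self processUsingPixels:self.imageView.image];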

Why am I getting these weird results when editing the image?

I'm trying to do something very simple:
1) Draw a UIImage into a CG bitmap context
2) Get a pointer to the data of the image
3) iterate over all pixels and just set all R G B components to 0 and alpha to 255. The result should appear pure black.
This is the original image I am using. 200 x 40 pixels, PNG-24 ARGB premultiplied alpha (All alpha values == 255):
This is the result (screenshot from Simulator), when I do not modify the pixels. Looks good:
This is the result when I do the modifications! It looks as if the modification was incomplete, but the for-loops went over every single pixel. The counter proves it: the console reports modifiedPixels = 8000, which is exactly 200 x 40 pixels. It always looks exactly the same.
Note: The PNG image I use has no alpha < 255. So no transparent pixels.
This is how I create the context. Nothing special...
int bitmapBytesPerRow = (width * 4);
int bitmapByteCount = (bitmapBytesPerRow * height);
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapData = malloc(bitmapByteCount);
bitmapContext = CGBitmapContextCreate(bitmapData,
                                      width,
                                      height,
                                      8, // bits per component
                                      bitmapBytesPerRow,
                                      colorSpace,
                                      kCGImageAlphaPremultipliedFirst);
Next, I draw the image into that bitmapContext, and obtain the data like this:
unsigned char *data = CGBitmapContextGetData(bitmapContext);
This is the code which iterates over the pixels to modify them:
size_t bytesPerRow = CGImageGetBytesPerRow(img);
NSInteger modifiedPixels = 0;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        long int offset = bytesPerRow * y + 4 * x;
        // ARGB
        unsigned char alpha = data[offset];
        unsigned char red = data[offset + 1];
        unsigned char green = data[offset + 2];
        unsigned char blue = data[offset + 3];
        data[offset] = 255;
        data[offset + 1] = 0;
        data[offset + 2] = 0;
        data[offset + 3] = 0;
        modifiedPixels++;
    }
}
When done, I obtain a new UIImage from the bitmap context and display it in a UIImageView, to see the result:
CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
UIImage *img = [[UIImage alloc] initWithCGImage:imageRef];
Question:
What am I doing wrong?
Is this happening because I modify the data while iterating over it? Must I duplicate it?
Use CGBitmapContextGetBytesPerRow(bitmapContext) to get bytesPerRow instead of getting it from the image (an image may have only 3 bytes per pixel if it carries no alpha information, and its row stride can differ from the context's).
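In other words, query the layout from the context you actually drew into, for example:
// Ask the bitmap context itself for its layout, not the source image:
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmapContext);
unsigned char *data = CGBitmapContextGetData(bitmapContext);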
You might be getting the wrong height or width... and by the way, 240 x 40 = 9600, not 8000, so it's certain that you're not iterating over each and every pixel.

iPhone App Green Screen Replacement

I'm looking to use the iPhone camera to take a photo and then replace the green screen in that photo with another photo.
What's the best way to dive into this? I couldn't find many resources online.
Thanks in advance!
Conceptually, all that you need to do is loop through the pixel data of the photo taken by the phone, and for each pixel that is not within a certain range of green, copy the pixel into the same location on your background image.
Here is an example I modified from keremic's answer to another stackoverflow question.
NOTE: This is untested and just intended to give you an idea of a technique that will work
// Get data into a C array
CGImageRef image = [sourceImage CGImage]; // sourceImage: your UIImage (name assumed)
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
unsigned char *data = malloc(height * width * bytesPerPixel);
// You will need to copy your background image into resulting_image_data,
// which I am not showing here
unsigned char *resulting_image_data = malloc(height * width * bytesPerPixel);
CGContextRef context = CGBitmapContextCreate(data, width, height, bitsPerComponent,
                                             bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
// Loop through each pixel
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width * bytesPerPixel; col += 4) {
        unsigned char red   = data[row * bytesPerRow + col];
        unsigned char green = data[row * bytesPerRow + col + 1];
        unsigned char blue  = data[row * bytesPerRow + col + 2];
        unsigned char alpha = data[row * bytesPerRow + col + 3];
        // If the pixel is NOT within a shade of green...
        if (!(green > 250 && red < 10 && blue < 10)) {
            // ...copy it over to the background image
            resulting_image_data[row * bytesPerRow + col]     = red;
            resulting_image_data[row * bytesPerRow + col + 1] = green;
            resulting_image_data[row * bytesPerRow + col + 2] = blue;
            resulting_image_data[row * bytesPerRow + col + 3] = alpha;
        }
    }
}
// Convert resulting_image_data into a UIImage
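The elided final step might look something like this (again untested, same caveats as the snippet above; it reuses width, height, bitsPerComponent and bytesPerRow from the snippet):
// A sketch of the elided step: wrap resulting_image_data in a bitmap context
// and pull a UIImage out of it.
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef outContext = CGBitmapContextCreate(resulting_image_data, width, height,
                                                bitsPerComponent, bytesPerRow, rgb,
                                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef outCGImage = CGBitmapContextCreateImage(outContext);
UIImage *composited = [UIImage imageWithCGImage:outCGImage];
CGImageRelease(outCGImage);
CGContextRelease(outContext);
CGColorSpaceRelease(rgb);
free(data);
free(resulting_image_data);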
Have a look at compiling OpenCV for iPhone - not an easy task, but it gives you access to a whole library of really great image processing tools.
I'm using openCV for an app I'm developing at the moment (not all that dissimilar to yours) - for what you're trying to do, openCV would be a great solution, although it requires a bit of learning etc. Once you've got OpenCV working, the actual task of removing green shouldn't be too hard.
Edit: This link will be a helpful resource if you do decide to use OpenCV: Compiling OpenCV for iOS