Getting pixel data from UIImageView -- works on simulator, not device

Based on the responses to a previous question, I've created a category on UIImageView for extracting pixel data. This works fine in the simulator, but not when deployed to the device. I should say not always -- the odd thing is that it does fetch the correct pixel colour if point.x == point.y; otherwise, it gives me pixel data for a pixel on the other side of that line, as if mirrored. (So a tap on a pixel in the lower-right corner of the image gives me the pixel data for a corresponding pixel in the upper-left, but tapping on a pixel in the lower-left corner returns the correct pixel colour). The touch coordinates (CGPoint) are correct.
What am I doing wrong?
Here's my code:
@interface UIImageView (PixelColor)
- (UIColor*)getRGBPixelColorAtPoint:(CGPoint)point;
@end

@implementation UIImageView (PixelColor)

- (UIColor*)getRGBPixelColorAtPoint:(CGPoint)point
{
    UIColor* color = nil;
    CGImageRef cgImage = [self.image CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    NSUInteger x = (NSUInteger)floor(point.x);
    NSUInteger y = height - (NSUInteger)floor(point.y);

    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8* data = CFDataGetBytePtr(bitmapData);
        size_t offset = ((width * y) + x) * 4;
        UInt8 red = data[offset];
        UInt8 blue = data[offset+1];
        UInt8 green = data[offset+2];
        UInt8 alpha = data[offset+3];
        CFRelease(bitmapData);
        color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
    }
    return color;
}
@end

I think R B G is wrong. You have:
UInt8 red = data[offset];
UInt8 blue = data[offset+1];
UInt8 green = data[offset+2];
But don't you really mean R G B? :
UInt8 red = data[offset];
UInt8 green = data[offset+1];
UInt8 blue = data[offset+2];
But even with that fixed there's still a problem: it turns out Apple byte-swaps the R and B values on the device, but not on the simulator.
I had a similar simulator/device issue with a PNG's pixel buffer returned by CFDataGetBytePtr.
This resolved the issue for me:
#if TARGET_IPHONE_SIMULATOR
    UInt8 red   = data[offset];
    UInt8 green = data[offset + 1];
    UInt8 blue  = data[offset + 2];
#else
    // on device: notice that red and blue are swapped
    UInt8 blue  = data[offset];
    UInt8 green = data[offset + 1];
    UInt8 red   = data[offset + 2];
#endif
Not sure if this will fix your issue, but your misbehaving code looks close to what mine looked like before I fixed it.
One last thing: I believe the simulator will let you access your pixel buffer data[] even after CFRelease(bitmapData) is called. On the device this is not the case in my experience. Your code shouldn't be affected, but in case this helps someone else I thought I'd mention it.
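As an aside, a more robust option than a compile-time check is to ask the image itself how its bytes are laid out at runtime. A minimal sketch, assuming the same cgImage as in the question (combining the two values to pick the red/blue offsets is left out here):
// Hedged alternative: inspect the actual byte order and alpha placement
// instead of hard-coding a simulator/device split.
CGBitmapInfo info = CGImageGetBitmapInfo(cgImage);
BOOL isLittleEndian = (info & kCGBitmapByteOrderMask) == kCGBitmapByteOrder32Little;
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(cgImage);
// e.g. little-endian + premultiplied-first means the bytes run B, G, R, A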

You could try the following alternative approach:
1) Create a CGBitmapContext
2) Draw the image into the context
3) Call CGBitmapContextGetData on the context to get the underlying data
4) Work out your offset into the raw data (based on how you created the bitmap context)
5) Extract the value
This approach works for me on both the simulator and the device.
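A minimal sketch of that approach, assuming a UIImage* named image and a UIKit-space CGPoint named point (both names are placeholders):
CGImageRef cgImage = image.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *buffer = calloc(width * height * 4, 1);
// Request a known layout (big-endian RGBA) so the offsets below are
// deterministic no matter how the source image was encoded.
CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, width * 4,
    colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
// Bitmap-context memory runs top-to-bottom like UIKit coordinates,
// so no vertical flip is needed.
size_t offset = (((size_t)point.y * width) + (size_t)point.x) * 4;
UInt8 red   = buffer[offset];
UInt8 green = buffer[offset + 1];
UInt8 blue  = buffer[offset + 2];
UInt8 alpha = buffer[offset + 3];
CGContextRelease(context);
free(buffer);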

It looks like in the code posted in the original question, instead of:
NSUInteger x = (NSUInteger)floor(point.x);
NSUInteger y = height - (NSUInteger)floor(point.y);
it should be:
NSUInteger x = (NSUInteger)floor(point.x);
NSUInteger y = (NSUInteger)floor(point.y);
The copied pixel buffer's rows run top-to-bottom, matching UIKit's coordinate system, so subtracting from the height flips the lookup vertically.

Related

Malformed OpenGL texture after calling CGBitmapContextCreateImage

I'm trying to use the contents of the UIView as an OpenGL texture. Here's how I obtain it:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
char *rawImage = malloc(4 * s.width * s.height);
CGContextRef bitmapContext = CGBitmapContextCreate(rawImage,
                                                   s.width,
                                                   s.height,
                                                   8,
                                                   4 * s.width,
                                                   colorSpace,
                                                   kCGImageAlphaNoneSkipFirst);
[v.layer renderInContext:bitmapContext];

// converting ARGB to BGRA
for (int i = 0; i < s.width * s.height; i++) {
    int p = i * 4;
    char a = rawImage[p];
    char r = rawImage[p + 1];
    char g = rawImage[p + 2];
    char b = rawImage[p + 3];
    rawImage[p] = b;
    rawImage[p + 1] = g;
    rawImage[p + 2] = r;
    rawImage[p + 3] = a;
}

CFRelease(colorSpace);
CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
return [GLKTextureLoader textureWithCGImage:image options:nil error:nil];
This is the UIView I start with (note the small triangles at the top left corner):
This is what I get on an OpenGL surface after taking a snapshot:
It's clear that the coordinates are mangled, but I can't tell in which way or what I'm doing wrong. Is it the row byte alignment that goes wrong?
UPDATE: if I don't swizzle the color components (omitting the ARGB-to-BGRA loop), here's the resulting picture:
Looks like an alignment issue. I suggest putting
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before
return [GLKTextureLoader textureWithCGImage:image options:nil error:nil];
Also, you might mess up the alignment when doing that component swizzling, which, by the way, is completely unnecessary, as modern OpenGL supports the BGRA format directly.
Solved: the picture above is what you get when you apply a texture with the same aspect ratio as the surface but swap the surface's s and t coordinates. The color-component swizzling is still needed, though.
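For completeness, the BGRA remark applies when you upload the raw bytes yourself rather than going through GLKTextureLoader. A hedged sketch (not the answerer's code; it relies on the GL_APPLE_texture_format_BGRA8888 extension that iOS exposes as GL_BGRA_EXT):
// Ask Core Graphics for BGRA in memory up front, so no CPU swizzle is needed.
CGContextRef bitmapContext = CGBitmapContextCreate(rawImage, s.width, s.height, 8,
    4 * s.width, colorSpace,
    kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
[v.layer renderInContext:bitmapContext];
// Upload the bytes untouched.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, s.width, s.height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, rawImage);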

Totally confused about where to change the code in OpenCV

The following links give the result shown in the image below:
https://github.com/BloodAxe/opencv-ios-template-project/downloads
http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
I changed COLOR_RGB2GRAY to COLOR_BGR2BGRA, which gives me an error saying "OpenCV Error: Unsupported format or combination of formats () in cvCanny"
(or)
CGColorSpaceCreateDeviceGray to CGColorSpaceCreateDeviceRGB
I am totally confused about where to change the code. I need the output to be white with black lines instead of black with gray lines.
Please guide me.
Thanks a lot in advance.
In OpenCVClientViewController.mm, include this method (copied from https://stackoverflow.com/a/6672628/); the image will then be converted as shown below:
-(void)inverColors
{
    NSLog(@"inverColors called");

    // get width and height as integers, since we'll be using them as
    // array subscripts, etc, and this'll save a whole lot of casting
    CGSize size = self.imageView.image.size;
    int width = size.width;
    int height = size.height;

    // Create a suitable RGB+alpha bitmap context in BGRA colour space
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // draw the current image to the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self.imageView.image CGImage]);

    // run through every pixel, a scan line at a time...
    for(int y = 0; y < height; y++)
    {
        // get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];

        // step through the pixels one by one...
        for(int x = 0; x < width; x++)
        {
            // get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get uninflected RGB. We
            // multiply by 255 to keep precision while still using
            // integers
            int r, g, b;
            if(linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;

            // perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;

            // multiply by alpha again, divide by 255 to undo the
            // scaling before, store the new values and advance
            // the pointer we're reading pixel data from
            linePointer[0] = r * linePointer[3] / 255;
            linePointer[1] = g * linePointer[3] / 255;
            linePointer[2] = b * linePointer[3] / 255;
            linePointer += 4;
        }
    }

    // get a CG image from the context, wrap that into a UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);

    // and return
    self.imageView.image = returnImage;
}

// Called when the user changes either of the threshold sliders
- (IBAction)sliderChanged:(id)sender
{
    self.highLabel.text = [NSString stringWithFormat:@"%.0f", self.highSlider.value];
    self.lowLabel.text = [NSString stringWithFormat:@"%.0f", self.lowSlider.value];
    [self processFrame];
}
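A hedged guess at the call site for the asker's goal (white background, black lines): run the existing Canny processing first, then invert the result:
// Hypothetical usage: invert the Canny output that processFrame
// has already written into self.imageView.
[self processFrame];
[self inverColors];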

CGBitmapContextCreate and an interlaced(?) image

I'm converting some image drawing code from Cairo to Quartz. I'm slowly making progress and learning Quartz along the way, but I've run into a problem with the image format.
In the Cairo version it works like this:
unsigned short *d = (unsigned short*)imageSurface->get_data();
int stride = imageSurface->get_stride() >> 1;
int height = imageHeight;
int width = imageWidth;

do {
    d = *p++; // p = raw image data
    width--;
    if( width == 0 ) {
        height--;
        width = imageWidth;
        d += stride;
    }
} while( height );
This produces the image as expected on the Cairo::ImageSurface. I've converted this over to Quartz and it's making progress, but I'm not sure where I'm going wrong:
NSInteger pixelLen = (width * height) * 8;
unsigned char *d = (unsigned char*)malloc(pixelLen);
unsigned char *rawPixels = d;
int height = imageHeight;
int width = imageWidth;

do {
    d = *p++; // p = raw image data
    width--;
    if( width == 0 ) {
        height--;
        width = imageWidth;
    }
} while( height );
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawPixels, imageWidth, imageHeight, 8, tileSize * sizeof(int), colorSpace, kCGBitmapByteOrderDefault);
CGImageRef image = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *resultUIImage = [UIImage imageWithCGImage:image];
CGImageRelease(image);
Now this is obviously heading in the right direction, as it produces something that looks a bit like the desired image, but it creates 4 copies of the image in a row, each with different pixels filled in. I'm assuming this is an interlaced image (I don't know a great deal about image formats) and that I need to combine them somehow to create a complete image, but I don't know how to do that with Quartz.
I think the stride has something to do with the problem, but from what I understand it is the byte distance from one row of pixels to another, which would not be relevant in the context of Quartz?
It sounds like stride corresponds to rowBytes or bytesPerRow. This value is important because it is not necessarily equal to width * bytesPerPixel; rows might be padded to optimized offsets.
It's not completely obvious what the Cairo code is doing, and it doesn't look quite correct either. Either way, without the stride part your loop makes no sense, because it just makes an exact copy of the bytes. The loop in the Cairo code is copying a row of bytes, then jumping over the next row of data.
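To make the stride point concrete, here is a minimal sketch of a row-by-row copy that honors the destination's bytesPerRow (src, width, and height are placeholders; the source is assumed to be tightly packed 32-bit pixels):
size_t bytesPerPixel = 4;                                  // assumption: 32-bit pixels
size_t srcStride = width * bytesPerPixel;                  // tightly packed source rows
size_t dstStride = CGBitmapContextGetBytesPerRow(context); // may include padding
unsigned char *dst = CGBitmapContextGetData(context);
for (size_t row = 0; row < height; row++) {
    // copy one row, then skip over any padding bytes in the destination
    memcpy(dst + row * dstStride, src + row * srcStride, srcStride);
}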

Why am I getting these weird results when editing the image?

I'm trying to do something very simple:
1) Draw a UIImage into a CG bitmap context
2) Get a pointer to the data of the image
3) Iterate over all pixels and set all R, G, B components to 0 and alpha to 255. The result should appear pure black.
This is the original image I am using. 200 x 40 pixels, PNG-24 ARGB premultiplied alpha (All alpha values == 255):
This is the result (screenshot from the Simulator) when I do not modify the pixels. Looks good:
This is the result when I do the modifications! It looks as if the modification was incomplete, but the for-loops went over EVERY single pixel. The counter proves it: the console reports modifiedPixels = 8000, which is exactly 200 x 40 pixels. It always looks exactly the same.
Note: the PNG image I use has no alpha < 255, so there are no transparent pixels.
This is how I create the context. Nothing special...
int bitmapBytesPerRow = (width * 4);
int bitmapByteCount = (bitmapBytesPerRow * imageHeight);
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapData = malloc(bitmapByteCount);
bitmapContext = CGBitmapContextCreate(bitmapData,
                                      width,
                                      height,
                                      8, // bits per component
                                      bitmapBytesPerRow,
                                      colorSpace,
                                      kCGImageAlphaPremultipliedFirst);
Next, I draw the image into that bitmapContext, and obtain the data like this:
void *data = CGBitmapContextGetData(bitmapContext);
This is the code which iterates over the pixels to modify them:
size_t bytesPerRow = CGImageGetBytesPerRow(img);
NSInteger modifiedPixels = 0;

for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        long int offset = bytesPerRow * y + 4 * x;

        // ARGB
        unsigned char alpha = data[offset];
        unsigned char red   = data[offset+1];
        unsigned char green = data[offset+2];
        unsigned char blue  = data[offset+3];

        data[offset]   = 255;
        data[offset+1] = 0;
        data[offset+2] = 0;
        data[offset+3] = 0;
        modifiedPixels++;
    }
}
When done, I obtain a new UIImage from the bitmap context and display it in a UIImageView, to see the result:
CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
UIImage *img = [[UIImage alloc] initWithCGImage:imageRef];
Question:
What am I doing wrong?
Is this happening because I modify the data while iterating over it? Must I duplicate it?
Use CGBitmapContextGetBytesPerRow(bitmapContext) to get bytesPerRow instead of getting it from the image (an image without alpha information may have only 3 bytes per pixel, and its rows may be padded differently from your context's).
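In code, that fix is a one-liner against the loop above:
// take the stride from the context you are actually writing into
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmapContext);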
You might also be getting the wrong height or width; double-check that width * height really matches the number of pixels you expect to have modified.

Changing the color of a pixel

I am able to read a particular pixel at a given CGPoint, but I have been looking to change the color of a pixel, and it would be highly appreciated if anyone could help me out with a code snippet.
My Code is:
unsigned char* data = CGBitmapContextGetData(cgctx);
if (data != NULL) {
    offset = 4*((w*round(point.y))+round(point.x));
    alpha = data[offset];
    red   = data[offset+1];
    green = data[offset+2];
    blue  = data[offset+3];
    color = [UIColor colorWithRed:(red/255.0f)
                            green:(green/255.0f)
                             blue:(blue/255.0f)
                            alpha:(alpha/255.0f)];
}
It's not clear what you are trying to do. If you want to change the pixel's color in the original context, you would use something like this (the bits-per-component value and color space below are assumptions matching a typical 8-bit RGBA setup; adjust them to however cgctx was created):
// Set the color of the pixel to 50% grey + 50% alpha
// (ARGB component order, matching the read code above)
data[offset+0] = 128;
data[offset+1] = 128;
data[offset+2] = 128;
data[offset+3] = 128;
// Wrap the modified buffer in a bitmap context
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
// Create an image from the bitmap context and draw it back into your original context
CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
CGContextDrawImage(cgctx, CGRectMake(0, 0, width, height), image);
CGImageRelease(image);
CGContextRelease(bitmapContext);
You should make all of your changes to the data buffer at once, and then write the modified bitmap back to the original context.