I am having a strange problem in my project. A user paints or draws by swiping over an image used as an overlay, and I need to crop the area of the image that lies below the painted region. My code works well only when the UIImageView below the paint region is 320 pixels wide, i.e. the width of the iPhone screen. But if I change the width of the image view, I do not get the desired result.
I am using the following code to construct a CGRect around the painted part.
-(CGRect)detectRectForFaceInImage:(UIImage *)image{
    int l,r,t,b;
    l = r = t = b = 0;
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8* data = CFDataGetBytePtr(pixelData);
    BOOL pixelFound = NO;

    // Scan from the left for the first non-transparent column
    for (int i = leftX ; i < rightX; i++) {
        for (int j = topY; j < bottomY + 20; j++) {
            int pixelInfo = ((image.size.width * j) + i ) * 4;
            UInt8 alpha = data[pixelInfo + 2];
            if (alpha) {
                NSLog(@"Left %d", alpha);
                l = i;
                pixelFound = YES;
                break;
            }
        }
        if(pixelFound) break;
    }

    pixelFound = NO;
    // Scan from the right
    for (int i = rightX ; i >= l; i--) {
        for (int j = topY; j < bottomY ; j++) {
            int pixelInfo = ((image.size.width * j) + i ) * 4;
            UInt8 alpha = data[pixelInfo + 2];
            if (alpha) {
                NSLog(@"Right %d", alpha);
                r = i;
                pixelFound = YES;
                break;
            }
        }
        if(pixelFound) break;
    }

    pixelFound = NO;
    // Scan from the top
    for (int i = topY ; i < bottomY ; i++) {
        for (int j = l; j < r; j++) {
            int pixelInfo = ((image.size.width * i) + j ) * 4;
            UInt8 alpha = data[pixelInfo + 2];
            if (alpha) {
                NSLog(@"Top %d", alpha);
                t = i;
                pixelFound = YES;
                break;
            }
        }
        if(pixelFound) break;
    }

    pixelFound = NO;
    // Scan from the bottom
    for (int i = bottomY ; i >= t; i--) {
        for (int j = l; j < r; j++) {
            int pixelInfo = ((image.size.width * i) + j ) * 4;
            UInt8 alpha = data[pixelInfo + 2];
            if (alpha) {
                NSLog(@"Bottom %d", alpha);
                b = i;
                pixelFound = YES;
                break;
            }
        }
        if(pixelFound) break;
    }

    CFRelease(pixelData);
    return CGRectMake(l, t, r - l, b - t);
}
In the above code, leftX, rightX, topY and bottomY are the extreme float values (taken from CGPoints) recorded while the user swipes a finger over the screen to paint; together they describe a rectangle that bounds the painted area (to minimise the loops).
leftX - minimum on the X-axis
rightX - maximum on the X-axis
topY - minimum on the Y-axis
bottomY - maximum on the Y-axis
Here l, r, t, b are the calculated values of the actual rectangle.
As mentioned earlier, this code works well when the image view being painted on is 320 pixels wide and spans the full screen width. But if the image view's width is smaller, say 300, and it is centred on the screen, the code gives false results.
Note: I am scaling the image to the image view's width.
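For reference, this is roughly how the swipe points get mapped from view coordinates into image pixel coordinates (just a sketch with placeholder names, not my exact code; it assumes the image is scaled to fill the image view's bounds):

// Sketch: map a point from the image view's coordinate space to a pixel
// coordinate in the underlying image (placeholder names, assumed scaling).
- (CGPoint)imagePointForViewPoint:(CGPoint)viewPoint inImageView:(UIImageView *)imageView
{
    UIImage *image = imageView.image;
    CGFloat scaleX = image.size.width  / imageView.bounds.size.width;
    CGFloat scaleY = image.size.height / imageView.bounds.size.height;
    return CGPointMake(viewPoint.x * scaleX, viewPoint.y * scaleY);
}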
Below is the NSLog output:
When the image view's width is 320 pixels (these are the values of the colour component at the matched, i.e. non-transparent, pixel):
2013-05-17 17:58:17.170 FunFace[12103:907] Left 41
2013-05-17 17:58:17.172 FunFace[12103:907] Right 1
2013-05-17 17:58:17.173 FunFace[12103:907] Top 73
2013-05-17 17:58:17.174 FunFace[12103:907] Bottom 12
When the image view's width is 300 pixels:
2013-05-17 17:55:26.066 FunFace[12086:907] Left 42
2013-05-17 17:55:26.067 FunFace[12086:907] Right 255
2013-05-17 17:55:26.069 FunFace[12086:907] Top 42
2013-05-17 17:55:26.071 FunFace[12086:907] Bottom 255
How can I solve this problem? I need the image view centred, with padding on both sides.
EDIT: OK, it looks like my problem is due to the image orientation of JPEG images (from the camera). PNG images work fine and are not affected by a change in the image view's width.
But JPEGs still do not work, even though I am handling the orientation.
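For what it's worth, the orientation handling I mean is the usual trick of redrawing the JPEG into a graphics context so the pixel data matches what is displayed; a minimal sketch (the method name is mine):

// Sketch: redraw a UIImage so its pixel data matches its displayed
// orientation (UIKit applies imageOrientation when drawing).
- (UIImage *)normalizedImage:(UIImage *)image
{
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}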
First, I wonder if you're accessing something other than 32-bit RGBA. The index into data[] is stored in pixelInfo and then offset by +2 bytes rather than +3, which lands you on the blue byte. If your intent is to read the alpha of RGBA data, that alone would affect the rest of your results.
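If the data really is RGBA, reading the alpha byte would look like this (a sketch using the same variable names as your loop):

// RGBA layout in memory: +0 = red, +1 = green, +2 = blue, +3 = alpha
int pixelInfo = ((image.size.width * j) + i) * 4;
UInt8 alpha = data[pixelInfo + 3];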
Moving on, and assuming you were still getting flawed results despite reading the correct alpha component, your "fixed" code should give Left/Right/Top/Bottom NSLog outputs with alpha values well below the full-on 255, something close to 0. In that case, without seeing more code, I'd suggest your problem lies in the code you use to scale the image down from your 320x240 source to 300x225 (or whatever other scaled dimensions). I could imagine your image having alpha values of 255 at the edge if your "scale" code is performing a crop rather than a scale.
The code in the following links gives the result shown in the image below:
https://github.com/BloodAxe/opencv-ios-template-project/downloads
http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
I changed COLOR_RGB2GRAY to COLOR_BGR2BGRA, and it gives me an error saying "OpenCV Error: Unsupported format or combination of formats () in cvCanny"
(or)
I changed CGColorSpaceCreateDeviceGray to CGColorSpaceCreateDeviceRGB.
I am totally confused about where to change the code.
I need the output to be "white with black lines" instead of "black with gray lines".
Please guide me.
Thanks a lot in advance
In OpenCVClientViewController.mm, include this method (copied from https://stackoverflow.com/a/6672628/); the image will then be converted as shown below:
-(void)inverColors
{
    NSLog(@"inverColors called");

    // get width and height as integers, since we'll be using them as
    // array subscripts, etc, and this'll save a whole lot of casting
    CGSize size = self.imageView.image.size;
    int width = size.width;
    int height = size.height;

    // Create a suitable RGB+alpha bitmap context
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // draw the current image to the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self.imageView.image CGImage]);

    // run through every pixel, a scan line at a time...
    for(int y = 0; y < height; y++)
    {
        // get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];

        // step through the pixels one by one...
        for(int x = 0; x < width; x++)
        {
            // get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get uninflected RGB. We
            // multiply by 255 to keep precision while still using
            // integers
            int r, g, b;
            if(linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;

            // perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;

            // multiply by alpha again, divide by 255 to undo the
            // scaling before, store the new values and advance
            // the pointer we're reading pixel data from
            linePointer[0] = r * linePointer[3] / 255;
            linePointer[1] = g * linePointer[3] / 255;
            linePointer[2] = b * linePointer[3] / 255;
            linePointer += 4;
        }
    }

    // get a CG image from the context, wrap that into a UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);

    // and return
    self.imageView.image = returnImage;
}
// Called when the user changes either of the threshold sliders
- (IBAction)sliderChanged:(id)sender
{
    self.highLabel.text = [NSString stringWithFormat:@"%.0f", self.highSlider.value];
    self.lowLabel.text = [NSString stringWithFormat:@"%.0f", self.lowSlider.value];

    [self processFrame];
}
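One way to wire the inversion in (a sketch; processAndInvert is a name I made up, and it assumes processFrame finishes by assigning the processed frame to self.imageView.image) is to invert right after processing:

// Sketch: run the normal Canny processing, then invert the result so the
// output is black lines on a white background.
- (void)processAndInvert
{
    [self processFrame];
    [self inverColors];
}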
I'm trying to set up a collision-type hit test for a defined set of pixels within a UIImageView. I only wish to cycle through pixels in a defined area.
Here's what I have so far:
- (BOOL)cgHitTestForArea:(CGRect)area {
    BOOL hit = FALSE;
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    float areaFloat = ((area.size.width * 4) * area.size.height);
    unsigned char *bitmapData = malloc(areaFloat);
    CGContextRef context = CGBitmapContextCreate(bitmapData,
                                                 area.size.width,
                                                 area.size.height,
                                                 8,
                                                 4*area.size.width,
                                                 colorspace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, -area.origin.x, -area.origin.y);
    [self.layer renderInContext:context];

    //Seek through all pixels.
    float transparentPixels = 0;
    for (int i = 0; i < (int)areaFloat ; i += 4) {
        //Count each transparent pixel.
        if (((bitmapData[i + 3] * 1.0) / 255.0) == 0) {
            transparentPixels += 1;
        }
    }
    free(bitmapData);

    //Calculate the percentage of transparent pixels.
    float hitTolerance = [[self.layer valueForKey:@"hitTolerance"] floatValue];
    NSLog(@"Apixels: %f hitPercent: %f", transparentPixels, (transparentPixels/areaFloat));
    if ((transparentPixels/(areaFloat/4)) < hitTolerance) {
        hit = TRUE;
    }
    CGColorSpaceRelease(colorspace);
    CGContextRelease(context);
    return hit;
}
Is someone able to offer any reason why it isn't working?
I would suggest using ANImageBitmapRep. It allows for easy pixel-level manipulation of images without the hassle of contexts, linking against other libraries, or raw memory allocation. To create an ANImageBitmapRep with the contents of a view, you could do something like this:
BMPoint sizePt = BMPointMake((int)self.frame.size.width,
                             (int)self.frame.size.height);
ANImageBitmapRep * irep = [[ANImageBitmapRep alloc] initWithSize:sizePt];
CGContextRef ctx = [irep context];
[self.layer renderInContext:ctx];
[irep setNeedsUpdate:YES];
Then, you can crop out your desired rectangle. Note that coordinates are relative to the bottom left corner of the view:
// assuming aFrame is our frame
CGRect cFrame = CGRectMake(aFrame.origin.x,
                           self.frame.size.height - (aFrame.origin.y + aFrame.size.height),
                           aFrame.size.width, aFrame.size.height);
[irep cropFrame:cFrame];
Finally, you can find the percentage of alpha in the image using the following:
double totalAlpha = 0;
double totalPixels = 0;
for (int x = 0; x < [irep bitmapSize].x; x++) {
    for (int y = 0; y < [irep bitmapSize].y; y++) {
        totalAlpha += [irep getPixelAtPoint:BMPointMake(x, y)].alpha;
        totalPixels += 1;
    }
}
double alphaPct = totalAlpha / totalPixels;
You can then use the alphaPct variable as a percentage from 0 to 1. Note that, to prevent leaks, you must release the ANImageBitmapRep object using release: [irep release].
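For a hit test like the one in your question, you could then compare that value against a tolerance (a sketch; hitTolerance here is whatever threshold you choose):

// Sketch: treat the cropped region as "hit" when enough of it is opaque.
double hitTolerance = 0.5; // placeholder threshold, 0..1
BOOL hit = (alphaPct >= hitTolerance);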
Hope that I helped. Image data is a fun and interesting field when it comes to iOS development.
I'm trying to do something very simple:
1) Draw a UIImage into a CG bitmap context
2) Get a pointer to the data of the image
3) Iterate over all pixels and set all R, G, B components to 0 and alpha to 255. The result should appear pure black.
This is the original image I am using. 200 x 40 pixels, PNG-24 ARGB premultiplied alpha (All alpha values == 255):
This is the result (screenshot from Simulator), when I do not modify the pixels. Looks good:
This is the result when I do the modifications. It looks as if the modification was incomplete, but the for-loops went over EVERY single pixel. The counter proves it: the console reports modifiedPixels = 8000, which is exactly 200 x 40 pixels. It always looks exactly the same.
Note: The PNG image I use has no alpha < 255. So no transparent pixels.
This is how I create the context. Nothing special...
int bitmapBytesPerRow = (width * 4);
int bitmapByteCount = (bitmapBytesPerRow * imageHeight);
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapData = malloc(bitmapByteCount);
bitmapContext = CGBitmapContextCreate(bitmapData,
                                      width,
                                      height,
                                      8, // bits per component
                                      bitmapBytesPerRow,
                                      colorSpace,
                                      kCGImageAlphaPremultipliedFirst);
Next, I draw the image into that bitmapContext, and obtain the data like this:
unsigned char *data = CGBitmapContextGetData(bitmapContext);
This is the code which iterates over the pixels to modify them:
size_t bytesPerRow = CGImageGetBytesPerRow(img);
NSInteger modifiedPixels = 0;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        long int offset = bytesPerRow * y + 4 * x;

        // ARGB
        unsigned char alpha = data[offset];
        unsigned char red = data[offset+1];
        unsigned char green = data[offset+2];
        unsigned char blue = data[offset+3];

        data[offset] = 255;
        data[offset+1] = 0;
        data[offset+2] = 0;
        data[offset+3] = 0;
        modifiedPixels++;
    }
}
When done, I obtain a new UIImage from the bitmap context and display it in a UIImageView, to see the result:
CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
UIImage *img = [[UIImage alloc] initWithCGImage:imageRef];
Question:
What am I doing wrong?
Is this happening because I modify the data while iterating over it? Must I duplicate it?
Use CGBitmapContextGetBytesPerRow(bitmapContext) to get bytesPerRow, instead of getting it from the image (the image may have only 3 bytes per pixel if it has no alpha information).
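In other words (matching the variable name from your question):

// Ask the context you actually drew into for its row stride; it may be
// padded, so it will not necessarily equal width * 4.
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmapContext);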
You might be getting the wrong height or width. And by the way, 240 x 40 = 9600, not 8000, so you are definitely not iterating over each and every pixel.
I load a transparent .png into a UIImage.
How do I calculate the real bounding box, e.g. if the real image is smaller than the .png dimensions?
Thanks for helping.
Assuming "bounding box of an image" is simply a rectangle in the image, specified in pixel coordinates.
You want the rectangle of image which contains all pixels with an alpha greater than threshold (it is equivalent to say that all pixels that are not in this rectangle have an alpha lower than threshold). After that you can transform this rectangle in screen coordinate (or whatever you want).
The basic algorithm is to start with a rectangle containing the whole image, then shrink the rectangle horizontally, then vertically (or vertically then horizontally).
I don't know Objective-C, so I put the code in pure C (some functions are just there to make the code clearer):
typedef struct Rectangle
{
    unsigned int x1, y1, x2, y2;
} Rectangle;

typedef struct Image
{
    unsigned int height, width;
    unsigned int* data;
} Image;

unsigned char getPixelAlpha(Image* img, unsigned int x, unsigned int y)
{
    unsigned int pixel = 0; // default = fully transparent
    if(x >= img->width || y >= img->height)
        return pixel; // Consider everything not in the image fully transparent
    pixel = img->data[x + y * img->width];
    return (unsigned char)((pixel & 0xFF000000) >> 24);
}
void shrinkHorizontally(Image* img, unsigned char threshold, Rectangle* rect)
{
    int x, y;

    // Shrink from the left
    for(x = 0; x < (int)img->width; x++)
    {
        // Find the maximum alpha of the vertical line at x
        unsigned char lineAlphaMax = 0;
        for(y = 0; y < (int)img->height; y++)
        {
            unsigned char alpha = getPixelAlpha(img,x,y);
            if(alpha > lineAlphaMax)
                lineAlphaMax = alpha;
        }

        // If at least one pixel of the line is more opaque than 'threshold'
        // then we found the left limit of the rectangle
        if(lineAlphaMax >= threshold)
        {
            rect->x1 = x;
            break;
        }
    }

    // Shrink from the right
    for(x = img->width - 1; x >= 0; x--)
    {
        // Find the maximum alpha of the vertical line at x
        unsigned char lineAlphaMax = 0;
        for(y = 0; y < (int)img->height; y++)
        {
            unsigned char alpha = getPixelAlpha(img,x,y);
            if(alpha > lineAlphaMax)
                lineAlphaMax = alpha;
        }

        // If at least one pixel of the line is more opaque than 'threshold'
        // then we found the right limit of the rectangle
        if(lineAlphaMax >= threshold)
        {
            rect->x2 = x;
            break;
        }
    }
}
// Almost the same as shrinkHorizontally.
void shrinkVertically(Image* img, unsigned char threshold, Rectangle* rect)
{
    int x, y;

    // Shrink from the top
    for(y = 0; y < (int)img->height; y++)
    {
        // Find the maximum alpha of the horizontal line at y
        unsigned char lineAlphaMax = 0;
        for(x = 0; x < (int)img->width; x++)
        {
            unsigned char alpha = getPixelAlpha(img,x,y);
            if(alpha > lineAlphaMax)
                lineAlphaMax = alpha;
        }

        // If at least one pixel of the line is more opaque than 'threshold'
        // then we found the top limit of the rectangle
        if(lineAlphaMax >= threshold)
        {
            rect->y1 = y;
            break;
        }
    }

    // Shrink from the bottom
    for(y = img->height - 1; y >= 0; y--)
    {
        // Find the maximum alpha of the horizontal line at y
        unsigned char lineAlphaMax = 0;
        for(x = 0; x < (int)img->width; x++)
        {
            unsigned char alpha = getPixelAlpha(img,x,y);
            if(alpha > lineAlphaMax)
                lineAlphaMax = alpha;
        }

        // If at least one pixel of the line is more opaque than 'threshold'
        // then we found the bottom limit of the rectangle
        if(lineAlphaMax >= threshold)
        {
            rect->y2 = y;
            break;
        }
    }
}
// Find the 'real' bounding box
Rectangle findRealBoundingBox(Image* img, unsigned char threshold)
{
    Rectangle r = { 0, 0, img->width, img->height };
    shrinkHorizontally(img, threshold, &r);
    shrinkVertically(img, threshold, &r);
    return r;
}
Now that you have the coordinates of the bounding box in pixels within your image, you should be able to transform it into device coordinates.
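As a usage sketch (assuming you have filled an Image struct with 32-bit pixels whose alpha lives in the high byte, as getPixelAlpha above expects; the dimensions and buffer here are placeholders):

// Usage sketch: wrap existing pixel data and compute the opaque bounding box.
// A threshold of 1 treats any non-zero alpha as opaque.
unsigned int width = 200, height = 40;                                 // placeholder dimensions
unsigned int* pixels = calloc(width * height, sizeof(unsigned int));   // stand-in for your real buffer
Image img = { height, width, pixels };
Rectangle box = findRealBoundingBox(&img, 1);
printf("bounding box: (%u,%u)-(%u,%u)\n", box.x1, box.y1, box.x2, box.y2);
free(pixels);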
CGRect myImageViewRect = [myImageView frame];
CGSize myImageSize = [[myImageView image] size];

if (myImageSize.width < myImageViewRect.size.width) {
    NSLog(@"its width is smaller!");
}
if (myImageSize.height < myImageViewRect.size.height) {
    NSLog(@"its height is smaller!");
}
If you want the image view to resize to fit its image, you can call
[myImageView sizeToFit];
Based on the responses to a previous question, I've created a category on UIImageView for extracting pixel data. This works fine in the simulator, but not when deployed to the device. I should say not always -- the odd thing is that it does fetch the correct pixel colour if point.x == point.y; otherwise, it gives me pixel data for a pixel on the other side of that line, as if mirrored. (So a tap on a pixel in the lower-right corner of the image gives me the pixel data for a corresponding pixel in the upper-left, but tapping on a pixel in the lower-left corner returns the correct pixel colour). The touch coordinates (CGPoint) are correct.
What am I doing wrong?
Here's my code:
@interface UIImageView (PixelColor)
- (UIColor*)getRGBPixelColorAtPoint:(CGPoint)point;
@end

@implementation UIImageView (PixelColor)
- (UIColor*)getRGBPixelColorAtPoint:(CGPoint)point
{
    UIColor* color = nil;
    CGImageRef cgImage = [self.image CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    NSUInteger x = (NSUInteger)floor(point.x);
    NSUInteger y = height - (NSUInteger)floor(point.y);
    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8* data = CFDataGetBytePtr(bitmapData);
        size_t offset = ((width * y) + x) * 4;
        UInt8 red = data[offset];
        UInt8 blue = data[offset+1];
        UInt8 green = data[offset+2];
        UInt8 alpha = data[offset+3];
        CFRelease(bitmapData);
        color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
    }
    return color;
}
@end
I think R B G is wrong. You have:
UInt8 red = data[offset];
UInt8 blue = data[offset+1];
UInt8 green = data[offset+2];
But don't you really mean R G B? :
UInt8 red = data[offset];
UInt8 green = data[offset+1];
UInt8 blue = data[offset+2];
But even with that fixed there's still a problem: it turns out Apple byte-swaps (great article) the R and B values on the device, but not in the simulator.
I had a similar simulator/device issue with a PNG's pixel buffer returned by CFDataGetBytePtr.
This resolved the issue for me:
#if TARGET_IPHONE_SIMULATOR
UInt8 red = data[offset];
UInt8 green = data[offset + 1];
UInt8 blue = data[offset + 2];
#else
//on device
UInt8 blue = data[offset]; //notice red and blue are swapped
UInt8 green = data[offset + 1];
UInt8 red = data[offset + 2];
#endif
Not sure if this will fix your issue, but your misbehaving code looks close to what mine looked like before I fixed it.
One last thing: I believe the simulator will let you access your pixel buffer data[] even after CFRelease(bitmapData) is called. On the device this is not the case in my experience. Your code shouldn't be affected, but in case this helps someone else I thought I'd mention it.
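If you do need the bytes after releasing the CFData, copy them out first (a sketch using the same names as your method):

// Sketch: copy the pixel bytes before CFRelease so the buffer stays valid
// on the device as well as in the simulator.
CFIndex length = CFDataGetLength(bitmapData);
UInt8 *pixelCopy = malloc(length);
CFDataGetBytes(bitmapData, CFRangeMake(0, length), pixelCopy);
CFRelease(bitmapData);
// ...read from pixelCopy instead of data, then free(pixelCopy).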
You could try the following alternative approach:
create a CGBitmapContext
draw the image into the context
call CGBitmapContextGetData on the context to get the underlying data
work out your offset into the raw data (based on how you created the bitmap context)
extract the value
This approach works for me on the simulator and device.
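A sketch of that approach (the method name, and the choice of an RGBA byte order, are mine rather than taken from your code):

// Sketch: render the CGImage into a context whose byte layout we control,
// then read the pixel at (x, y) straight out of the context's buffer.
// Note: CGContext coordinates start at the bottom-left, so flip y first if
// your point comes from UIKit.
- (UIColor *)colorAtX:(NSUInteger)x y:(NSUInteger)y inImage:(UIImage *)image
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    if (x >= width || y >= height) return nil;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    // With this configuration the bytes are R, G, B, A in memory on both
    // the simulator and the device.
    const UInt8 *pixels = CGBitmapContextGetData(context);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
    size_t offset = y * bytesPerRow + x * 4;
    UIColor *color = [UIColor colorWithRed:pixels[offset]     / 255.0f
                                     green:pixels[offset + 1] / 255.0f
                                      blue:pixels[offset + 2] / 255.0f
                                     alpha:pixels[offset + 3] / 255.0f];
    CGContextRelease(context);
    return color;
}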
It looks like, in the code posted in the original question, instead of:
NSUInteger x = (NSUInteger)floor(point.x);
NSUInteger y = height - (NSUInteger)floor(point.y);
It should be:
NSUInteger x = (NSUInteger)floor(point.x);
NSUInteger y = (NSUInteger)floor(point.y);