Identify a percentage of transparent pixels in an area of UIImageView - iPhone

I'm trying to set up a collision-type hit test for a defined area of pixels within a UIImageView. I only wish to cycle through the pixels in that defined area.
Here's what I have so far:
- (BOOL)cgHitTestForArea:(CGRect)area {
    BOOL hit = FALSE;
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    float areaFloat = ((area.size.width * 4) * area.size.height);
    unsigned char *bitmapData = malloc(areaFloat);
    CGContextRef context = CGBitmapContextCreate(bitmapData,
                                                 area.size.width,
                                                 area.size.height,
                                                 8,
                                                 4 * area.size.width,
                                                 colorspace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, -area.origin.x, -area.origin.y);
    [self.layer renderInContext:context];

    //Seek through all pixels.
    float transparentPixels = 0;
    for (int i = 0; i < (int)areaFloat; i += 4) {
        //Count each transparent pixel.
        if (((bitmapData[i + 3] * 1.0) / 255.0) == 0) {
            transparentPixels += 1;
        }
    }
    free(bitmapData);

    //Calculate the percentage of transparent pixels.
    float hitTolerance = [[self.layer valueForKey:@"hitTolerance"] floatValue];
    NSLog(@"Apixels: %f hitPercent: %f", transparentPixels, (transparentPixels / areaFloat));
    if ((transparentPixels / (areaFloat / 4)) < hitTolerance) {
        hit = TRUE;
    }
    CGColorSpaceRelease(colorspace);
    CGContextRelease(context);
    return hit;
}
Is someone able to offer any reason why it isn't working?

I would suggest using ANImageBitmapRep. It allows for easy pixel-level manipulation of images without the hassle of contexts, linking against other libraries, or raw memory allocation. To create an ANImageBitmapRep with the contents of a view, you could do something like this:
BMPoint sizePt = BMPointMake((int)self.frame.size.width,
                             (int)self.frame.size.height);
ANImageBitmapRep *irep = [[ANImageBitmapRep alloc] initWithSize:sizePt];
CGContextRef ctx = [irep context];
[self.layer renderInContext:ctx];
[irep setNeedsUpdate:YES];
Then, you can crop out your desired rectangle. Note that coordinates are relative to the bottom left corner of the view:
// assuming aFrame is our frame
CGRect cFrame = CGRectMake(aFrame.origin.x,
                           self.frame.size.height - (aFrame.origin.y + aFrame.size.height),
                           aFrame.size.width, aFrame.size.height);
[irep cropFrame:cFrame];
Finally, you can find the percentage of alpha in the image using the following:
double totalAlpha = 0;
double totalPixels = 0;
for (int x = 0; x < [irep bitmapSize].x; x++) {
    for (int y = 0; y < [irep bitmapSize].y; y++) {
        totalAlpha += [irep getPixelAtPoint:BMPointMake(x, y)].alpha;
        totalPixels += 1;
    }
}
double alphaPct = totalAlpha / totalPixels;
You can then use the alphaPct variable as a percentage from 0 to 1. Note that, to prevent leaks, you must release the ANImageBitmapRep object when you are done with it: [irep release].
Hope that I helped. Image data is a fun and interesting field when it comes to iOS development.
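For completeness, the three snippets can be combined into a single hit-test helper along these lines (a minimal sketch built only from the calls shown above; the tolerance parameter and the opaque-means-hit convention are illustrative):
// Minimal sketch, assuming the ANImageBitmapRep API used above.
// `tolerance` (0..1) is an illustrative parameter, not part of the library.
- (BOOL)hitTestArea:(CGRect)aFrame tolerance:(double)tolerance
{
    BMPoint sizePt = BMPointMake((int)self.frame.size.width,
                                 (int)self.frame.size.height);
    ANImageBitmapRep *irep = [[ANImageBitmapRep alloc] initWithSize:sizePt];
    [self.layer renderInContext:[irep context]];
    [irep setNeedsUpdate:YES];

    // Flip to bottom-left-relative coordinates before cropping.
    CGRect cFrame = CGRectMake(aFrame.origin.x,
                               self.frame.size.height - (aFrame.origin.y + aFrame.size.height),
                               aFrame.size.width, aFrame.size.height);
    [irep cropFrame:cFrame];

    double totalAlpha = 0;
    double totalPixels = 0;
    for (int x = 0; x < [irep bitmapSize].x; x++) {
        for (int y = 0; y < [irep bitmapSize].y; y++) {
            totalAlpha += [irep getPixelAtPoint:BMPointMake(x, y)].alpha;
            totalPixels += 1;
        }
    }
    double alphaPct = totalAlpha / totalPixels;
    [irep release];
    return alphaPct >= tolerance; // mostly opaque => treat as a hit
}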

Related

Image Circular Wrap in iOS

I have a problem - I want to create a circular wrap function which will wrap an image as depicted below:
This is available in OS X; however, it is not available on iOS.
My logic so far has been:
Split the image up into x sections and for each section:
Rotate alpha degrees
Scale the image in the x axis to create a diamond shaped 'warped' effect of the image
Rotate back 90 - atan((h / 2) / (w / 2))
Translate the offset
My problem is that this seems inaccurate and I have been unable to mathematically figure out how to do this correctly - any help would be massively appreciated.
Link to OSX docs for CICircularWrap:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CICircularWrap
Since CICircularWrap is not supported on iOS (EDIT: it is now - check the answer below), one has to code their own effect for now. Probably the simplest way is to compute the transformation from polar to Cartesian coordinates and then interpolate from the source image. I've come up with this simple (and frankly quite slow - it can be much optimised) algorithm:
#import <QuartzCore/QuartzCore.h>
CGContextRef CreateARGBBitmapContext(size_t pixelsWide, size_t pixelsHigh)
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void           *bitmapData;
    int             bitmapByteCount;
    int             bitmapBytesPerRow;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (int)(pixelsWide * 4);
    bitmapByteCount   = (int)(bitmapBytesPerRow * pixelsHigh);

    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8 bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    // Make sure to release the colorspace before returning.
    CGColorSpaceRelease(colorSpace);
    return context;
}
CGImageRef circularWrap(CGImageRef inImage, CGFloat bottomRadius, CGFloat topRadius, CGFloat startAngle, BOOL clockWise, BOOL interpolate)
{
    if (topRadius < 0 || bottomRadius < 0) return NULL;

    // Create the bitmap contexts.
    int w = (int)CGImageGetWidth(inImage);
    int h = (int)CGImageGetHeight(inImage);
    // Result image side size (always a square image).
    int resultSide = 2 * MAX(topRadius, bottomRadius);
    CGContextRef cgctx1 = CreateARGBBitmapContext(w, h);
    CGContextRef cgctx2 = CreateARGBBitmapContext(resultSide, resultSide);
    if (cgctx1 == NULL || cgctx2 == NULL)
    {
        return NULL;
    }

    // Get image width, height. We'll use the entire image.
    CGRect rect = {{0, 0}, {w, h}};

    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx1, rect, inImage);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    int *data1 = CGBitmapContextGetData(cgctx1);
    int *data2 = CGBitmapContextGetData(cgctx2);

    int resultImageSize = resultSide * resultSide;
    double temp;
    for (int *p = data2, pos = 0; pos < resultImageSize; p++, pos++)
    {
        *p = 0;
        int x = pos % resultSide - resultSide / 2;
        int y = -pos / resultSide + resultSide / 2;
        CGFloat phi = modf(((atan2(x, y) + startAngle) / 2.0 / M_PI + 0.5), &temp);
        if (!clockWise) phi = 1 - phi;
        phi *= w;
        CGFloat r = ((sqrtf(x * x + y * y)) - topRadius) * h / (bottomRadius - topRadius);
        if (phi >= 0 && phi < w && r >= 0 && r < h)
        {
            if (!interpolate || phi >= w - 1 || r >= h - 1)
            {
                // Pick the closest pixel.
                *p = data1[(int)r * w + (int)phi];
            }
            else
            {
                double dphi = modf(phi, &temp);
                double dr = modf(r, &temp);
                int8_t *c00 = (int8_t *)(data1 + (int)r * w + (int)phi);
                int8_t *c01 = (int8_t *)(data1 + (int)r * w + (int)phi + 1);
                int8_t *c10 = (int8_t *)(data1 + (int)r * w + w + (int)phi);
                int8_t *c11 = (int8_t *)(data1 + (int)r * w + w + (int)phi + 1);
                // Interpolate the components separately.
                for (int component = 0; component < 4; component++)
                {
                    double avg = ((*c00 & 0xFF) * (1 - dphi) + (*c01 & 0xFF) * dphi) * (1 - dr)
                               + ((*c10 & 0xFF) * (1 - dphi) + (*c11 & 0xFF) * dphi) * dr;
                    *p += (((int)(avg)) << (component * 8));
                    c00++; c10++; c01++; c11++;
                }
            }
        }
    }

    CGImageRef result = CGBitmapContextCreateImage(cgctx2);
    // When finished, release the contexts.
    CGContextRelease(cgctx1);
    CGContextRelease(cgctx2);
    // Free the image data memory for the contexts.
    if (data1) free(data1);
    if (data2) free(data2);
    return result;
}
Use the circularWrap function with these parameters:
CGImageRef inImage - the source image
CGFloat bottomRadius - the bottom side of the source image will be transformed into a circle with this radius
CGFloat topRadius - the same for the top side of the source image; this can be larger or smaller than the bottom radius (which results in wrapping around the top/bottom of the image)
CGFloat startAngle - the angle at which the left and right sides of the source image will be transformed
BOOL clockWise - the direction of rendering
BOOL interpolate - a simple anti-aliasing algorithm; only the inside of the image is interpolated
Some samples (top left is the source image), generated with the following code:
image1 = [UIImage imageWithCGImage:circularWrap(sourceImage.CGImage,0,300,0,YES,NO)];
image2 = [UIImage imageWithCGImage:circularWrap(sourceImage.CGImage,100,300,M_PI_2,NO,YES)];
image3 = [UIImage imageWithCGImage:circularWrap(sourceImage.CGImage,300,200,M_PI_4,NO,YES)];
image4 = [UIImage imageWithCGImage:circularWrap(sourceImage.CGImage,250,300,0,NO,NO)];
enjoy! :)
Apple have added CICircularWrap to iOS 9
https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CICircularWrap
Wraps an image around a transparent circle.
Localized Display Name: Circular Wrap Distortion
Availability: Available in OS X v10.5 and later and in iOS 9 and later.
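On iOS 9 and later you can therefore apply the filter directly. A minimal sketch (the center, radius and angle values are illustrative; note that distortion filters can report a very large output extent, so crop to the region you need):
CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];
CIFilter *wrap = [CIFilter filterWithName:@"CICircularWrap"];
[wrap setValue:input forKey:kCIInputImageKey];
[wrap setValue:[CIVector vectorWithX:150 Y:150] forKey:kCIInputCenterKey];
[wrap setValue:@150 forKey:kCIInputRadiusKey];
[wrap setValue:@0 forKey:kCIInputAngleKey];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgResult = [ciContext createCGImage:wrap.outputImage
                                      fromRect:CGRectMake(0, 0, 300, 300)]; // crop; the extent can be huge
UIImage *result = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);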

iPhone App - Display pixel data present in buffer on screen

I have the source code for a video decoder application written in C, which I'm now porting to the iPhone.
My problem is as follows:
I have RGBA pixel data for a frame in a buffer that I need to display on the screen. My buffer is of type unsigned char. (I cannot change it to any other data type, as the source code is too huge and was not written by me.)
Most of the links I found on the net talk about how to "draw and display pixels" on the screen or how to "display pixels present in an array", but none of them explains how to display pixel data present in a buffer.
I'm planning to use Quartz 2D. All I need to do is display the buffer contents on the screen. No modifications! Although my problem sounds very simple, I couldn't find any API, link or document that does exactly this.
Kindly help!
Thanks in advance.
You can use a CGBitmapContext to create a CGImage from raw pixel data. I've quickly written a basic example:
- (CGImageRef)drawBufferWidth:(size_t)width height:(size_t)height pixels:(void *)pixels
{
    unsigned char (*buf)[width][4] = pixels;

    static CGColorSpaceRef csp = NULL;
    if (!csp) {
        csp = CGColorSpaceCreateDeviceRGB();
    }

    CGContextRef ctx = CGBitmapContextCreate(
        buf,
        width,
        height,
        8,         // 8 bits per component
        width * 4, // 4 bytes per pixel => width * 4 bytes per row
        csp,
        kCGImageAlphaPremultipliedLast
    );

    CGImageRef img = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return img;
}
You can call this method like this (I've used a view controller):
- (void)viewDidLoad
{
    [super viewDidLoad];

    const size_t width = 320;
    const size_t height = 460;
    unsigned char (*buf)[width][4] = malloc(sizeof(*buf) * height);

    // fill up `buf` here
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            buf[y][x][0] = x * 255 / width;
            buf[y][x][1] = y * 255 / height;
            buf[y][x][2] = 0;
            buf[y][x][3] = 255;
        }
    }

    CGImageRef img = [self drawBufferWidth:width height:height pixels:buf];
    self.imageView.image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    // Note: `buf` is not freed here because the CGImage may still reference
    // it (CGBitmapContextCreateImage can use copy-on-write semantics).
}
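As an alternative, if you'd rather not draw through an intermediate bitmap context, the buffer can be wrapped in a CGImage directly via a data provider. A minimal sketch, assuming the same RGBA, 4-bytes-per-pixel layout as above (the function name is illustrative):
// Sketch: wrap an existing RGBA buffer in a CGImage without copying it
// through a bitmap context. `buf` must outlive the returned image.
CGImageRef CreateImageFromBuffer(const void *buf, size_t width, size_t height)
{
    CGColorSpaceRef csp = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithData(
        NULL, buf, width * height * 4, NULL); // NULL callback: caller keeps ownership of buf
    CGImageRef img = CGImageCreate(width, height,
                                   8,         // bits per component
                                   32,        // bits per pixel
                                   width * 4, // bytes per row
                                   csp,
                                   kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault,
                                   provider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(csp);
    return img; // caller is responsible for CGImageRelease
}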

Improve performance of drawRect:

I am drawing cells from a grid with an NSTimer every 0.1 seconds.
The size is about 96x64 => 6144 cells / images.
If I draw images instead of (e.g.) green rectangles, it is four times slower!
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(context);
    CGContextSetRGBFillColor(context, 0, 0, 0, 1);
    CGContextFillRect(context, CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height));

    int cellSize = self.bounds.size.width / WIDTH;
    double xOffset = 0;
    for (int i = 0; i < WIDTH; i++)
    {
        for (int j = 0; j < HEIGHT; j++)
        {
            NSNumber *currentCell = [self.state.board objectAtIndex:(i * HEIGHT) + j];
            if (currentCell.intValue == 1)
            {
                [image1 drawAtPoint:CGPointMake(xOffset + (cellSize * i), cellSize * j)];
            }
            else if (currentCell.intValue == 0)
            {
                [image2 drawAtPoint:CGPointMake(xOffset + (cellSize * i), cellSize * j)];
            }
        }
    }
    UIGraphicsPopContext();
}
Any idea how to make this faster if I want to draw a PNG or JPG in each rectangle?
The images are already scaled to an appropriate size.
a) Don't redraw the images/rects that are outside the view's bounds.
b) Don't redraw the images/rects that are outside the dirtyRect.
c) Don't redraw the images/rects that haven't changed since the previous update.
d) Use a layer to prerender the images, so you don't need to render them at drawing time.
This scenario is exactly what Instruments is there for. Use it. Anyone here making a suggestion is guessing about what the bottleneck is.
That said, I'm going to guess at what the bottleneck is: you are drawing 6144 images using the CPU. (Confirm this with the Time Profiler: find your drawRect: method and check where the most time is spent. If it's drawInRect:, then that's your problem.)
If that's the case, how do we reduce its usage? An easy win would be to only redraw the images we need to draw. CALayers make this easy: remove your drawRect: method, add a sublayer to your view's layer for each image, and set the images as your layers' contents properties. Instead of invalidating the view when an image needs to change, just switch the relevant layer's contents property to the new image.
Another nice thing about CALayers is that they cache layer content on the GPU, meaning that the redraws that do happen will require less CPU time and won't block the rest of your app as much when they do happen.
If the overhead of that many layers is unacceptable (again, Instruments is your friend), check out CAReplicatorLayer. It's less flexible than having many CALayers, but allows a single image to be replicated many times with minimal overhead.
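As an illustration of the layer-based approach, a minimal sketch might look like this (WIDTH, HEIGHT, image1 and image2 follow the question's setup; the cellLayers property is hypothetical):
// Build one sublayer per cell, once.
- (void)buildCellLayers
{
    CGFloat cellSize = self.bounds.size.width / WIDTH;
    self.cellLayers = [NSMutableArray arrayWithCapacity:WIDTH * HEIGHT];
    for (int i = 0; i < WIDTH; i++) {
        for (int j = 0; j < HEIGHT; j++) {
            CALayer *cell = [CALayer layer];
            cell.frame = CGRectMake(cellSize * i, cellSize * j, cellSize, cellSize);
            [self.layer addSublayer:cell];
            [self.cellLayers addObject:cell];
        }
    }
}

// Called whenever a single cell's state changes; only that layer repaints.
- (void)updateCellAtIndex:(NSUInteger)index alive:(BOOL)alive
{
    CALayer *cell = [self.cellLayers objectAtIndex:index];
    cell.contents = (id)[(alive ? image1 : image2) CGImage];
}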
I tried to improve your code from a performance perspective. However, check my comment about bottlenecks, too.
- (void)drawRect:(CGRect)rect {
    //CGContextRef context = UIGraphicsGetCurrentContext(); //only needed for the background fill below
    //UIGraphicsPushContext(context); //not needed, UIView does it anyway

    //use [UIView backgroundColor] instead of this:
    //CGContextSetRGBFillColor(context, 0, 0, 0, 1);
    //CGContextFillRect(context, CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height));

    int cellSize = self.bounds.size.width / WIDTH;
    double xOffset = 0;
    CGRect cellFrame = CGRectMake(xOffset, 0, cellSize, cellSize);
    NSUInteger cellIndex = 0;
    for (int i = 0; i < WIDTH; i++) {
        cellFrame.origin.y = 0;
        for (int j = 0; j < HEIGHT; j++, cellIndex++) {
            if (CGRectIntersectsRect(rect, cellFrame)) {
                NSNumber *currentCell = [self.state.board objectAtIndex:cellIndex];
                if (currentCell.intValue == 1) {
                    [image1 drawInRect:cellFrame];
                }
                else if (currentCell.intValue == 0) {
                    [image2 drawInRect:cellFrame];
                }
            }
            cellFrame.origin.y += cellSize;
        }
        cellFrame.origin.x += cellSize;
    }
    //UIGraphicsPopContext(); //not needed, UIView does it anyway
}
Use CGRectIntersectsRect to check whether the rect of your image is inside the dirtyRect, i.e. whether you need to draw it at all.

Totally confused about where to change the code in OpenCV

The following links give the result shown in the image below:
https://github.com/BloodAxe/opencv-ios-template-project/downloads
http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
I changed the code from
COLOR_RGB2GRAY to COLOR_BGR2BGRA, and it gives me an error that says "OpenCV Error: Unsupported format or combination of formats () in cvCanny"
(or)
CGColorSpaceCreateDeviceGray to CGColorSpaceCreateDeviceRGB
I am totally confused about where to change the code.
I need the output as "white color with black lines" instead of "black color with gray lines".
Please guide me.
Thanks a lot in advance
In OpenCVClientViewController.mm include this method (copied from https://stackoverflow.com/a/6672628/), and the image will be converted as shown below:
- (void)inverColors
{
    NSLog(@"inverColors called ");

    // get width and height as integers, since we'll be using them as
    // array subscripts, etc, and this'll save a whole lot of casting
    CGSize size = self.imageView.image.size;
    int width = size.width;
    int height = size.height;

    // Create a suitable RGB+alpha bitmap context in BGRA colour space
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width * height * 4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // draw the current image to the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self.imageView.image CGImage]);

    // run through every pixel, a scan line at a time...
    for (int y = 0; y < height; y++)
    {
        // get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];

        // step through the pixels one by one...
        for (int x = 0; x < width; x++)
        {
            // get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get uninflected RGB. We
            // multiply by 255 to keep precision while still using
            // integers
            int r, g, b;
            if (linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;

            // perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;

            // multiply by alpha again, divide by 255 to undo the
            // scaling before, store the new values and advance
            // the pointer we're reading pixel data from
            linePointer[0] = r * linePointer[3] / 255;
            linePointer[1] = g * linePointer[3] / 255;
            linePointer[2] = b * linePointer[3] / 255;
            linePointer += 4;
        }
    }

    // get a CG image from the context, wrap that into a
    // UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);

    // and return
    self.imageView.image = returnImage;
}
// Called when the user changes either of the threshold sliders
- (IBAction)sliderChanged:(id)sender
{
    self.highLabel.text = [NSString stringWithFormat:@"%.0f", self.highSlider.value];
    self.lowLabel.text = [NSString stringWithFormat:@"%.0f", self.lowSlider.value];
    [self processFrame];
}
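As an aside, since OpenCVClientViewController.mm already processes frames with OpenCV, you may be able to invert the Canny output on the cv::Mat itself before it is converted back to a UIImage, instead of post-processing the bitmap (a hedged sketch; the variable names are placeholders):
cv::Mat edges;
cv::Canny(grayFrame, edges, lowThreshold, highThreshold); // white lines on black background
cv::bitwise_not(edges, edges);                            // black lines on white background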

Pixel-Position of Cursor in UITextView

Is there a way of getting the position (CGPoint) of the cursor (the blinking bar) in a UITextView, preferably relative to its content? I don’t mean the location as an NSRange. I need something like:
- (CGPoint)cursorPosition;
It should be a non-private API way.
Requires iOS 5
CGPoint cursorPosition = [textview caretRectForPosition:textview.selectedTextRange.start].origin;
Remember to check that selectedTextRange is not nil before calling this method. You should also use selectedTextRange.empty to check that it is the cursor position and not the beginning of a text range. So:
if (textview.selectedTextRange.empty) {
    // get cursor position and do stuff ...
}
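Putting the nil check, the empty check, and the caret lookup together (a minimal sketch):
UITextRange *selection = textview.selectedTextRange;
if (selection && selection.empty) {
    CGPoint cursorPosition = [textview caretRectForPosition:selection.start].origin;
    // use cursorPosition here ...
}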
Swift 4 version:
if let cursorPosition = textView.selectedTextRange?.start {
    // cursorPosition is a UITextPosition object describing position in the text (text-wise description)
    let caretPositionRectangle: CGRect = textView.caretRect(for: cursorPosition)
    // now use either the whole rectangle, or its origin (caretPositionRectangle.origin)
}
textView.selectedTextRange?.start returns a text position of the cursor, and we then simply use textView.caretRect(for:) to get its pixel position in textView.
It's painful, but you can use the UIStringDrawing additions to NSString to do it. Here's the general algorithm I used:
CGPoint origin = textView.frame.origin;
NSString *head = [textView.text substringToIndex:textView.selectedRange.location];
CGSize initialSize = [head sizeWithFont:textView.font constrainedToSize:textView.contentSize];
NSUInteger startOfLine = [head length];
while (startOfLine > 0) {
    /*
     * 1. Adjust startOfLine to the beginning of the first word before startOfLine.
     * 2. Check if drawing the substring of head up to startOfLine causes a reduction
     *    in height compared to initialSize.
     * 3. If so, then you've identified the start of the line containing the cursor;
     *    otherwise keep going.
     */
}
NSString *tail = [head substringFromIndex:startOfLine];
CGSize lineSize = [tail sizeWithFont:textView.font forWidth:textView.contentSize.width lineBreakMode:UILineBreakModeWordWrap];
CGPoint cursor = origin;
cursor.x += lineSize.width;
cursor.y += initialSize.height - lineSize.height;
return cursor;
I used [NSCharacterSet whitespaceAndNewlineCharacterSet] to find word boundaries.
This can also be done (presumably more efficiently) using CTFramesetter in Core Text, but that is not available in iPhone OS 3.1.3, so if you're targeting the iPhone you will need to stick to UIStringDrawing.
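For later OS versions, a rough sketch of the Core Text route might look like the following (hedged: it only measures the height of the text up to the cursor, the x offset would still need per-line handling, and the attributed-string setup is assumed):
#import <CoreText/CoreText.h>

// Sketch: measure the rendered height of the text up to `cursorIndex`.
CGFloat heightUpToCursor(NSAttributedString *text, CGFloat width, NSUInteger cursorIndex)
{
    CTFramesetterRef framesetter =
        CTFramesetterCreateWithAttributedString((CFAttributedStringRef)text);
    CGSize size = CTFramesetterSuggestFrameSizeWithConstraints(framesetter,
        CFRangeMake(0, cursorIndex), NULL,
        CGSizeMake(width, CGFLOAT_MAX), NULL);
    CFRelease(framesetter);
    return size.height;
}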
Yes — as in there's a method to get the cursor position. Just use
CGRect caretRect = [textView rectContainingCaretSelection];
return caretRect.origin;
No — as in this method is private. There's no public API for this.
I am trying to mark selected text, i.e. I receive an NSRange and want to draw a yellow rectangle behind that text. Is there another way?
I can suggest a trick:
NSRange selectedRange = myTextView.selectedRange;
[myTextView select:self];
UIMenuController *sharedMenu = [UIMenuController sharedMenuController];
CGRect menuFrame = [sharedMenu menuFrame];
[sharedMenu setMenuVisible:NO];
myTextView.selectedRange = selectedRange;
Using this code, you can get the position of the cut/copy/paste menu and place your yellow rectangle there.
I did not find a way to get the menu position without forcing it to appear via a simulated select operation.
Regards
Assayag
Take a screenshot of the UITextView, then search the pixel data for colors that match the color of the cursor.
- (CGPoint)positionOfCursorForTextView:(UITextView *)textView {
    CGFloat Width = textView.bounds.size.width;
    CGFloat Height = textView.bounds.size.height;

    //get CGImage from textView; the autoreleased UIImage owns textImageRef,
    //so it must not be CGImageReleased here
    UIGraphicsBeginImageContext(textView.bounds.size);
    [textView.layer renderInContext:UIGraphicsGetCurrentContext()];
    CGImageRef textImageRef = UIGraphicsGetImageFromCurrentImageContext().CGImage;
    UIGraphicsEndImageContext();

    //get raw pixel data
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    uint8_t *textBuffer = (uint8_t *)malloc(Width * Height * 4);
    NSUInteger bytesPerRow = 4 * Width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(textBuffer, Width, Height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, Width, Height), textImageRef);
    CGContextRelease(context);

    //search
    for (int y = 0; y < Height; y++)
    {
        for (int x = 0; x < Width * 4; x += 4)
        {
            int red   = textBuffer[y * 4 * (NSInteger)Width + x];
            int green = textBuffer[y * 4 * (NSInteger)Width + x + 1];
            int blue  = textBuffer[y * 4 * (NSInteger)Width + x + 2];
            int alpha = textBuffer[y * 4 * (NSInteger)Width + x + 3];
            if (COLOR IS CLOSE TO COLOR OF CURSOR)
            {
                free(textBuffer);
                return CGPointMake(x / 4, y);
            }
        }
    }
    free(textBuffer);
    return CGPointZero;
}