How would I draw to a mutable in-memory bitmap? - iphone

I'm trying to write a simple painting app for iOS as a first non-trivial project. Basically, on every touch event, I need to open a graphics context on the bitmap, draw something over top of where the user left off, and close it.
UIImage is immutable, so it's not exactly suitable for my purposes; I'd have to build a new bitmap and draw the old one into the new one. I can't imagine that performing well. Is there any sort of mutable bitmap class in UIKit, or am I going to have to go down to CGImageRef?

If you're willing to venture away from Cocoa, I would strongly recommend using OpenGL for this purpose. Apple provides a great sample app (GLPaint) that demonstrates this. Tackling the learning curve of OpenGL will certainly pay off in terms of appearance, performance, and sheer power & flexibility.
However, if you're not up for that then another approach is to create a new CALayer subclass overriding drawInContext:, and store each drawing stroke (path and line properties) there. You can then add each 'strokeLayer' to the drawing view's layer hierarchy, and force a redraw each frame. CGLayers can also be used to increase performance (which is likely to become a big issue - when a user paints a long stroke you will see frame rates drop off very rapidly). In fact you will likely end up using a CGLayer to draw to in any case. Here is a bit of code for a drawRect: method which might help illustrate this approach:
- (void)drawRect:(CGRect)rect {
    // Set up the layer and its context to use as a drawing buffer.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGLayerRef drawingBuffer = CGLayerCreateWithContext(context, self.bounds.size, NULL);
    CGContextRef bufferContext = CGLayerGetContext(drawingBuffer);
    // Draw all sublayers into the drawing buffer, and display the buffer.
    [self.layer renderInContext:bufferContext];
    CGContextDrawLayerAtPoint(context, CGPointZero, drawingBuffer);
    CGLayerRelease(drawingBuffer);
}
As far as mutability goes, the most obvious thing to do would be to draw the background colour over the painting strokes. This way an eraser stroke would be exactly the same as a painting stroke, just a different colour.
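To illustrate (a rough sketch only: ctx, strokePath, lineWidth and isEraser stand in for whatever your stroke layer actually stores; none of them appear in the code above):
// Inside the stroke layer's drawInContext:, an eraser stroke is just a
// normal stroke whose colour matches the drawing view's background.
CGColorRef strokeColor = isEraser
    ? [UIColor whiteColor].CGColor   // hypothetical background colour
    : [UIColor blackColor].CGColor;  // hypothetical paint colour
CGContextSetStrokeColorWithColor(ctx, strokeColor);
CGContextSetLineWidth(ctx, lineWidth);
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextAddPath(ctx, strokePath);
CGContextStrokePath(ctx);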
You mentioned using a bitmap image, and this is really beginning to hint at OpenGL render-to-texture, where a series of point sprites (forming a line) can be drawn onto a mutable texture at very high framerates. I don't want to put a damper on things, but you will inevitably hit a performance bottleneck using Core Graphics / Quartz to do your drawing in this fashion.
I hope this helps.

You don't need to recreate the offscreen context every time a new stroke is made. You can accumulate the strokes somewhere (an NSMutableArray), and when a certain limit is reached, flatten them by first drawing the background into the offscreen context and then the accumulated strokes on top of it. The resulting offscreen drawing becomes the new background, so you can empty the strokes array and start over. That way you take a hybrid approach between keeping all the strokes in memory and redrawing them every time, and constantly recreating the offscreen bitmap.
There's an entire chapter (7) in this book http://www.deitel.com/Books/iPhone/iPhoneforProgrammersAnAppDrivenApproach/tabid/3526/Default.aspx devoted to creating a simple painting app, and there you can find a link to code examples. The approach taken there is storing the strokes in memory, but here are modified versions of the MainView.h and .m files that take the approach I've described, !!! BUT PLEASE PAY ATTENTION TO THE COPYRIGHT NOTES AT THE BOTTOM OF BOTH FILES !!!:
// MainView.m
// View for the frontside of the Painter app.
#import "MainView.h"

const NSUInteger kThreshold = 2;

@implementation MainView

@synthesize color; // generate getters and setters for color
@synthesize lineWidth; // generate getters and setters for lineWidth

CGContextRef CreateBitmapContext(NSUInteger w, NSUInteger h);
void *globalBitmapData = NULL;
// method is called when the view is created in a nib file
- (id)initWithCoder:(NSCoder *)decoder
{
    // if the superclass initializes properly
    if (self = [super initWithCoder:decoder])
    {
        // initialize squiggles and finishedSquiggles
        squiggles = [[NSMutableDictionary alloc] init];
        finishedSquiggles = [[NSMutableArray alloc] init];
        // the starting color is black
        color = [[UIColor alloc] initWithRed:0 green:0 blue:0 alpha:1];
        lineWidth = 5; // default line width
        flattenedImage_ = NULL;
    } // end if
    return self; // return this object
} // end method initWithCoder:
// clears all the drawings
- (void)resetView
{
    [squiggles removeAllObjects]; // clear the dictionary of squiggles
    [finishedSquiggles removeAllObjects]; // clear the array of squiggles
    [self setNeedsDisplay]; // refresh the display
} // end method resetView

// draw the view
- (void)drawRect:(CGRect)rect
{
    // get the current graphics context
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (flattenedImage_)
    {
        CGContextDrawImage(context, CGRectMake(0, 0, CGRectGetWidth(self.bounds), CGRectGetHeight(self.bounds)), flattenedImage_);
    }
    // draw all the finished squiggles
    for (Squiggle *squiggle in finishedSquiggles)
        [self drawSquiggle:squiggle inContext:context];
    // draw all the squiggles currently in progress
    for (NSString *key in squiggles)
    {
        Squiggle *squiggle = [squiggles valueForKey:key]; // get squiggle
        [self drawSquiggle:squiggle inContext:context]; // draw squiggle
    } // end for
} // end method drawRect:
// draws the given squiggle into the given context
- (void)drawSquiggle:(Squiggle *)squiggle inContext:(CGContextRef)context
{
    // set the drawing color to the squiggle's color
    UIColor *squiggleColor = squiggle.strokeColor; // get squiggle's color
    CGColorRef colorRef = [squiggleColor CGColor]; // get the CGColor
    CGContextSetStrokeColorWithColor(context, colorRef);
    // set the line width to the squiggle's line width
    CGContextSetLineWidth(context, squiggle.lineWidth);
    NSMutableArray *points = [squiggle points]; // get points from squiggle
    // retrieve the NSValue object and store the value in firstPoint
    CGPoint firstPoint; // declare a CGPoint
    [[points objectAtIndex:0] getValue:&firstPoint];
    // move to the point
    CGContextMoveToPoint(context, firstPoint.x, firstPoint.y);
    // draw a line from each point to the next in order
    for (int i = 1; i < [points count]; i++)
    {
        NSValue *value = [points objectAtIndex:i]; // get the next value
        CGPoint point; // declare a new point
        [value getValue:&point]; // store the value in point
        // draw a line to the new point
        CGContextAddLineToPoint(context, point.x, point.y);
    } // end for
    CGContextStrokePath(context);
} // end method drawSquiggle:inContext:
// called whenever the user places a finger on the screen
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSArray *array = [touches allObjects]; // get all the new touches
    // loop through each new touch
    for (UITouch *touch in array)
    {
        // create and configure a new squiggle
        Squiggle *squiggle = [[Squiggle alloc] init];
        [squiggle setStrokeColor:color]; // set squiggle's stroke color
        [squiggle setLineWidth:lineWidth]; // set squiggle's line width
        // add the location of the first touch to the squiggle
        [squiggle addPoint:[touch locationInView:self]];
        // the key for each touch is the value of the pointer
        NSValue *touchValue = [NSValue valueWithPointer:touch];
        NSString *key = [NSString stringWithFormat:@"%@", touchValue];
        // add the new touch to the dictionary under a unique key
        [squiggles setValue:squiggle forKey:key];
        [squiggle release]; // we are done with squiggle so release it
    } // end for
} // end method touchesBegan:withEvent:
// called whenever the user drags a finger on the screen
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSArray *array = [touches allObjects]; // get all the moved touches
    // loop through all the touches
    for (UITouch *touch in array)
    {
        // get the unique key for this touch
        NSValue *touchValue = [NSValue valueWithPointer:touch];
        // fetch the squiggle this touch should be added to using the key
        Squiggle *squiggle = [squiggles valueForKey:
            [NSString stringWithFormat:@"%@", touchValue]];
        // get the current and previous touch locations
        CGPoint current = [touch locationInView:self];
        CGPoint previous = [touch previousLocationInView:self];
        [squiggle addPoint:current]; // add the new point to the squiggle
        // Create two points: one with the smaller x and y values and one
        // with the larger. This is used to determine exactly where on the
        // screen needs to be redrawn.
        CGPoint lower, higher;
        lower.x = (previous.x > current.x ? current.x : previous.x);
        lower.y = (previous.y > current.y ? current.y : previous.y);
        higher.x = (previous.x < current.x ? current.x : previous.x);
        higher.y = (previous.y < current.y ? current.y : previous.y);
        // redraw the screen in the required region
        [self setNeedsDisplayInRect:CGRectMake(lower.x - lineWidth,
            lower.y - lineWidth, higher.x - lower.x + lineWidth * 2,
            higher.y - lower.y + lineWidth * 2)];
    } // end for
} // end method touchesMoved:withEvent:
// called when the user lifts a finger from the screen
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // loop through the touches
    for (UITouch *touch in touches)
    {
        // get the unique key for the touch
        NSValue *touchValue = [NSValue valueWithPointer:touch];
        NSString *key = [NSString stringWithFormat:@"%@", touchValue];
        // retrieve the squiggle for this touch using the key
        Squiggle *squiggle = [squiggles valueForKey:key];
        // remove the squiggle from the dictionary and place it in an array
        // of finished squiggles
        [finishedSquiggles addObject:squiggle]; // add to finishedSquiggles
        [squiggles removeObjectForKey:key]; // remove from squiggles
        if ([finishedSquiggles count] > kThreshold)
        {
            CGContextRef context = CreateBitmapContext(CGRectGetWidth(self.bounds), CGRectGetHeight(self.bounds));
            if (flattenedImage_)
            {
                CGContextDrawImage(context, CGRectMake(0, 0, CGRectGetWidth(self.bounds), CGRectGetHeight(self.bounds)), flattenedImage_);
            }
            for (Squiggle *squiggle in finishedSquiggles)
                [self drawSquiggle:squiggle inContext:context];
            CGImageRef imgRef = CGBitmapContextCreateImage(context);
            CGContextRelease(context);
            if (flattenedImage_ != NULL)
                CFRelease(flattenedImage_);
            flattenedImage_ = imgRef;
            [finishedSquiggles removeAllObjects];
        }
    } // end for
} // end method touchesEnded:withEvent:
// called when a motion event, such as a shake, ends
- (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event
{
    // if a shake event ended
    if (event.subtype == UIEventSubtypeMotionShake)
    {
        // create an alert prompting the user about clearing the painting
        NSString *message = @"Are you sure you want to clear the painting?";
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:
            @"Clear painting" message:message delegate:self
            cancelButtonTitle:@"Cancel" otherButtonTitles:@"Clear", nil];
        [alert show]; // show the alert
        [alert release]; // release the alert UIAlertView
    } // end if
    // call the superclass's motionEnded:withEvent: method
    [super motionEnded:motion withEvent:event];
} // end method motionEnded:withEvent:

// clear the painting if the user touched the "Clear" button
- (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex
{
    // if the user touched the Clear button
    if (buttonIndex == 1)
        [self resetView]; // clear the screen
} // end method alertView:clickedButtonAtIndex:

// determines if this view can become the first responder
- (BOOL)canBecomeFirstResponder
{
    return YES; // this view can be the first responder
} // end method canBecomeFirstResponder

// free MainView's memory
- (void)dealloc
{
    [squiggles release]; // release the squiggles NSMutableDictionary
    [finishedSquiggles release]; // release finishedSquiggles
    [color release]; // release the color UIColor
    [super dealloc];
} // end method dealloc

@end
CGContextRef CreateBitmapContext(NSUInteger w, NSUInteger h)
{
    CGContextRef context = NULL;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    bitmapBytesPerRow = (w * 4);
    bitmapByteCount = (bitmapBytesPerRow * h);
    if (globalBitmapData == NULL)
        globalBitmapData = malloc(bitmapByteCount);
    if (globalBitmapData == NULL)
    {
        return NULL;
    }
    // clear the whole buffer (the original memset used sizeof(globalBitmapData),
    // which is only the size of the pointer, not the bitmap)
    memset(globalBitmapData, 0, bitmapByteCount);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(globalBitmapData, w, h, 8, bitmapBytesPerRow,
        colorspace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorspace);
    return context;
}
/**************************************************************************
* (C) Copyright 2010 by Deitel & Associates, Inc. All Rights Reserved. *
* *
* DISCLAIMER: The authors and publisher of this book have used their *
* best efforts in preparing the book. These efforts include the *
* development, research, and testing of the theories and programs *
* to determine their effectiveness. The authors and publisher make *
* no warranty of any kind, expressed or implied, with regard to these *
* programs or to the documentation contained in these books. The authors *
* and publisher shall not be liable in any event for incidental or *
* consequential damages in connection with, or arising out of, the *
* furnishing, performance, or use of these programs. *
* *
* As a user of the book, Deitel & Associates, Inc. grants you the *
* nonexclusive right to copy, distribute, display the code, and create *
* derivative apps based on the code for noncommercial purposes only--so *
* long as you attribute the code to Deitel & Associates, Inc. and *
* reference www.deitel.com/books/iPhoneFP/. If you have any questions, *
* or specifically would like to use our code for commercial purposes, *
* contact deitel@deitel.com. *
*************************************************************************/
// MainView.h
// View for the frontside of the Painter app.
// Implementation in MainView.m
#import <UIKit/UIKit.h>
#import "Squiggle.h"
@interface MainView : UIView
{
    NSMutableDictionary *squiggles; // squiggles in progress
    NSMutableArray *finishedSquiggles; // finished squiggles
    UIColor *color; // the current drawing color
    float lineWidth; // the current drawing line width
    CGImageRef flattenedImage_;
} // end instance variable declaration
// declare color and lineWidth as properties
@property (nonatomic, retain) UIColor *color;
@property float lineWidth;
// draw the given Squiggle into the given graphics context
- (void)drawSquiggle:(Squiggle *)squiggle inContext:(CGContextRef)context;
- (void)resetView; // clear all squiggles from the view
@end // end interface MainView
/**************************************************************************
* (C) Copyright 2010 by Deitel & Associates, Inc. All Rights Reserved. *
* *
* DISCLAIMER: The authors and publisher of this book have used their *
* best efforts in preparing the book. These efforts include the *
* development, research, and testing of the theories and programs *
* to determine their effectiveness. The authors and publisher make *
* no warranty of any kind, expressed or implied, with regard to these *
* programs or to the documentation contained in these books. The authors *
* and publisher shall not be liable in any event for incidental or *
* consequential damages in connection with, or arising out of, the *
* furnishing, performance, or use of these programs. *
* *
* As a user of the book, Deitel & Associates, Inc. grants you the *
* nonexclusive right to copy, distribute, display the code, and create *
* derivative apps based on the code for noncommercial purposes only--so *
* long as you attribute the code to Deitel & Associates, Inc. and *
* reference www.deitel.com/books/iPhoneFP/. If you have any questions, *
* or specifically would like to use our code for commercial purposes, *
* contact deitel@deitel.com. *
*************************************************************************/
So you would basically replace the original versions of those files in the project to get the desired behavior.

Related

How to check if UILabel text was touched?

I want to check if my UILabel was touched. But I need even more than that: was the text touched? Right now I only get true/false if the UILabel's frame was touched, using this:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    if (CGRectContainsPoint([self.currentLetter frame], [touch locationInView:self.view]))
    {
        NSLog(@"HIT!");
    }
}
Is there any way to check this? As soon as I touch somewhere outside the letter in the UILabel I want false to get returned.
I want to know when the actual black rendered "text pixels" have been touched.
Thanks!
tl;dr: You can hit test the path of the text. Gist is available here.
The approach I would go with is to check if the tap point is inside the path of the text or not. Let me give you an overview of the steps before going into detail.
Subclass UILabel
Use Core Text to get the CGPath of the text
Override pointInside:withEvent: to be able to determine if a point should be considered inside or not.
Use any "normal" touch handling like for example a tap gesture recognizer to know when a hit was made.
The big advantage of this approach is that it follows the font precisely and that you can modify the path to grow the "hittable" area, as seen below. Both the black and the orange parts are tappable, but only the black parts will be drawn in the label.
Subclass UILabel
I created a subclass of UILabel called TextHitTestingLabel and added a private property for the text path.
@interface TextHitTestingLabel (/*Private stuff*/)
@property (assign) CGPathRef textPath;
@end
Since iOS labels can have either a text or an attributedText, I overrode both of these setters and made them call a method that updates the text path.
- (void)setText:(NSString *)text {
    [super setText:text];
    [self textChanged];
}

- (void)setAttributedText:(NSAttributedString *)attributedText {
    [super setAttributedText:attributedText];
    [self textChanged];
}
Also, a label can be created from a NIB/Storyboard, in which case the text will be set right away. In that case I check for the initial text in awakeFromNib.
- (void)awakeFromNib {
    [self textChanged];
}
Use Core Text to get the path of the text
Core Text is a low-level framework that gives you full control over the text rendering. You have to add CoreText.framework to your project and import it in your file:
#import <CoreText/CoreText.h>
The first thing I do inside textChanged is to get the text. Depending on whether it's iOS 6 or earlier, I also have to check the attributed text. A label will only have one of these.
// Get the text
NSAttributedString *attributedString = nil;
if ([self respondsToSelector:@selector(attributedText)]) { // Available in iOS 6
    attributedString = self.attributedText;
}
if (!attributedString) { // Either earlier than iOS 6 or the `text` property was set instead of `attributedText`
    attributedString = [[NSAttributedString alloc] initWithString:self.text
                                                       attributes:@{NSFontAttributeName: self.font}];
}
Next I create a new mutable path for all the letter glyphs.
// Create a mutable path for the paths of all the letters.
CGMutablePathRef letters = CGPathCreateMutable();
Core Text "magic"
Core Text works with lines of text, glyphs, and glyph runs. For example, take the text "Hello" formatted as "Hel lo" (spaces added for clarity, with the first part in a different style). That is going to be one line of text with two glyph runs: one bold and one regular. The first glyph run contains 3 glyphs and the second run contains 2 glyphs.
I enumerate all the glyph runs and their glyphs and get the path with CTFontCreatePathForGlyph(). Each individual glyph path is then added to the mutable path.
// Create a line from the attributed string and get glyph runs from that line
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)attributedString);
CFArrayRef runArray = CTLineGetGlyphRuns(line);
// A line with more than one font, style, size etc. will have multiple glyph runs.
// "Hello" formatted as " *Hel* lo " (spaces added for clarity) is two glyph
// runs: one italics and one regular. The first run contains 3 glyphs and the
// second run contains 2 glyphs.
// Note that " He *ll* o " is 3 runs even though "He" and "o" have the same font.
for (CFIndex runIndex = 0; runIndex < CFArrayGetCount(runArray); runIndex++)
{
    // Get the font for this glyph run.
    CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runArray, runIndex);
    CTFontRef runFont = CFDictionaryGetValue(CTRunGetAttributes(run), kCTFontAttributeName);
    // This glyph run contains one or more glyphs (letters etc.)
    for (CFIndex runGlyphIndex = 0; runGlyphIndex < CTRunGetGlyphCount(run); runGlyphIndex++)
    {
        // Read the glyph itself and its position from the glyph run.
        CFRange glyphRange = CFRangeMake(runGlyphIndex, 1);
        CGGlyph glyph;
        CGPoint position;
        CTRunGetGlyphs(run, glyphRange, &glyph);
        CTRunGetPositions(run, glyphRange, &position);
        // Create a CGPath for the outline of the glyph
        CGPathRef letter = CTFontCreatePathForGlyph(runFont, glyph, NULL);
        // Translate it to its position.
        CGAffineTransform t = CGAffineTransformMakeTranslation(position.x, position.y);
        // Add the glyph's path to the combined path for all the letters.
        CGPathAddPath(letters, &t, letter);
        CGPathRelease(letter);
    }
}
CFRelease(line);
The Core Text coordinate system is upside down compared to the regular UIView coordinate system, so I then flip the path to match what we see on screen.
// Transform the path so it is not upside down
CGAffineTransform t = CGAffineTransformMakeScale(1, -1); // flip
CGSize pathSize = CGPathGetBoundingBox(letters).size;
t = CGAffineTransformTranslate(t, 0, -pathSize.height); // move down
// Create the final path by applying the transform
CGPathRef finalPath = CGPathCreateMutableCopyByTransformingPath(letters, &t);
// Clean up the path that is no longer needed
CGPathRelease(letters);
self.textPath = finalPath;
And now I have a complete CGPath for the text of the label.
Override pointInside:withEvent:
To customize which points the label considers inside itself, I override pointInside:withEvent: and have it check whether the point is inside the text path. Other parts of UIKit are going to call this method for hit testing.
// Override -pointInside:withEvent: to determine that ourselves.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Check if the point is inside the text path.
    return CGPathContainsPoint(self.textPath, NULL, point, NO);
}
Normal touch handling
Now everything is set up to work with normal touch handling. I added a tap recognizer to my label in a NIB and connected it to a method in my view controller.
- (IBAction)labelWasTouched:(UITapGestureRecognizer *)sender {
    NSLog(@"LABEL!");
}
That is all it takes. If you scrolled all the way down here and don't want to take the different pieces of code and paste them together I have the entire .m file in a Gist that you can download and use.
A note: most fonts are very, very thin compared to the precision of a touch (44 px), and your users will most likely be frustrated when their touches are considered "misses". That being said: happy coding!
Update:
To be slightly nicer to the user, you can stroke the text path that you use for hit testing. This gives a larger tappable area but still gives the feeling that you are tapping the text.
CGPathRef endPath = CGPathCreateMutableCopyByTransformingPath(letters, &t);
CGMutablePathRef finalPath = CGPathCreateMutableCopy(endPath);
CGPathRef strokedPath = CGPathCreateCopyByStrokingPath(endPath, NULL, 7, kCGLineCapRound, kCGLineJoinRound, 0);
CGPathAddPath(finalPath, NULL, strokedPath);
// Clean up all the unused paths
CGPathRelease(strokedPath);
CGPathRelease(letters);
CGPathRelease(endPath);
self.textPath = finalPath;
Now the orange area in the image below is going to be tappable as well. This still feels like you are touching the text but is less annoying to the users of your app.
If you want you can take this even further to make it even easier to hit the text but at some point it is going to feel like the entire label is tappable.
The problem, as I understand it, is to detect when a tap (touch) happens on one of the glyphs that comprise the text in a UILabel. If a touch lands outside the path of any of the glyphs then it isn't counted.
Here's my solution. It assumes a UILabel* ivar named _label, and a UITapGestureRecognizer associated with the view containing the label.
- (IBAction) onTouch: (UITapGestureRecognizer*) tgr
{
    CGPoint p = [tgr locationInView: _label];
    // in case the background of the label isn't transparent...
    UIColor* labelBackgroundColor = _label.backgroundColor;
    _label.backgroundColor = [UIColor clearColor];
    // get a UIImage of the label
    UIGraphicsBeginImageContext( _label.bounds.size );
    CGContextRef c = UIGraphicsGetCurrentContext();
    [_label.layer renderInContext: c];
    UIImage* i = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // restore the label's background...
    _label.backgroundColor = labelBackgroundColor;
    // draw the pixel we're interested in into a 1x1 bitmap
    unsigned char pixel = 0x00;
    c = CGBitmapContextCreate(&pixel,
                              1, 1, 8, 1, NULL,
                              kCGImageAlphaOnly);
    UIGraphicsPushContext(c);
    [i drawAtPoint: CGPointMake(-p.x, -p.y)];
    UIGraphicsPopContext();
    CGContextRelease(c);
    if ( pixel != 0 )
    {
        NSLog( @"touched text" );
    }
}
You can use a UIGestureRecognizer:
http://developer.apple.com/library/ios/#documentation/EventHandling/Conceptual/EventHandlingiPhoneOS/GestureRecognizer_basics/GestureRecognizer_basics.html
Specifically, I guess you'd like to use UITapGestureRecognizer. If you want to recognize when the text frame is touched, the easiest way would be to make your frame fit the text with [yourLabel sizeToFit].
Anyway, to do this I would use a UIButton; it's the easiest option.
If you need to detect only when the actual text, and not the entire UILabel frame, is tapped, then it becomes much more difficult. One approach is detecting the darkness of the pixel the user tapped, but this involves some ugly code. Anyway, depending on the expected interaction within your application, it can work out. Check this SO question:
iOS -- detect the color of a pixel?
I would take into consideration that not all the rendered pixels will be 100% black, so I would play with a threshold to achieve better results.
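For example, a sketch of that thresholding (assuming red, green and blue hold the tapped pixel's 0-255 components, obtained as in the answers below; the 0.4 cutoff is an arbitrary starting point):
// Perceptual luminance; anything darker than the threshold counts as "text".
CGFloat luminance = (0.299 * red + 0.587 * green + 0.114 * blue) / 255.0;
BOOL hitText = (luminance < 0.4); // tune this threshold for your font and colours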
I think he wants to know whether a letter within the label is touched, not other parts of the label. Since you are willing to use a transparent image to achieve this, I would suggest the following: say you have the letter "A" on a transparent background, and the color of the letter is uniform, let's say red. You could grab the CGImage of the UIImage, get its provider, render it as a bitmap, and sample whether the color of the point being touched is red. For other colors, you could simply sample that color using an image editor, grab its RGB value, and check against that.
You could use a UIButton instead of a label:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    UIButton *tmpButton = [[UIButton alloc] initWithFrame:CGRectMake(50, 50, 100, 20)];
    [tmpButton setTitle:@"KABOYA" forState:UIControlStateNormal];
    [tmpButton setTitleColor:[UIColor blackColor] forState:UIControlStateNormal];
    [tmpButton addTarget:self
                  action:@selector(buttonPressed:)
        forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:tmpButton];
}
When the button is pressed, do something here:
- (void)buttonPressed:(UIButton *)sender {
    NSLog(@"Pressed!");
}
I hope it helped ;)
Assuming the UILabel instance you want to track has userInteractionEnabled set to YES:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    UIView *touchView = touch.view;
    if ([touchView isKindOfClass:[UILabel class]]) {
        NSLog(@"Touch event occurred in label %@", touchView);
    }
}
First of all, create and attach a tap gesture recognizer and allow user interaction:
UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapGesture:)];
[self.label addGestureRecognizer:tapRecognizer];
self.label.userInteractionEnabled = YES;
Now implement -tapGesture:
- (void)tapGesture:(UITapGestureRecognizer *)recognizer
{
    // Determine the point touched
    CGPoint point = [recognizer locationInView:self.label];
    // Render the UILabel in a new context
    UIGraphicsBeginImageContext(self.label.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.label.layer renderInContext:context];
    // Get the RGBA of the tapped pixel
    int bpr = CGBitmapContextGetBytesPerRow(context);
    unsigned char *data = CGBitmapContextGetData(context);
    if (data != NULL)
    {
        int offset = bpr * round(point.y) + 4 * round(point.x);
        int red = data[offset + 0];
        int green = data[offset + 1];
        int blue = data[offset + 2];
        int alpha = data[offset + 3];
        NSLog(@"%d %d %d %d", alpha, red, green, blue);
        if (alpha == 0)
        {
            // the tap landed outside the text
        }
        else
        {
            // the tap landed right on the text
        }
    }
    UIGraphicsEndImageContext();
}
This will work on a UILabel with a transparent background; if that is not what you want, you can compare alpha, red, green, and blue with self.label.backgroundColor...
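For an opaque background, a rough sketch of that comparison (assuming backgroundColor is in an RGB-compatible colour space so getRed:green:blue:alpha: succeeds, and reusing the red/green/blue values sampled above):
CGFloat br, bg, bb, ba;
[self.label.backgroundColor getRed:&br green:&bg blue:&bb alpha:&ba];
// Allow a small tolerance for anti-aliased glyph edges.
BOOL tappedBackground = fabs(red / 255.0 - br) < 0.05
    && fabs(green / 255.0 - bg) < 0.05
    && fabs(blue / 255.0 - bb) < 0.05;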
Create the label in viewDidLoad or through IB, and add a tap gesture with a selector using the code below. Then, when you tap on the label, the log in singletap: will be printed.
- (void)viewDidLoad
{
    [super viewDidLoad];
    UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(30, 0, 150, 35)];
    label.userInteractionEnabled = YES;
    label.backgroundColor = [UIColor greenColor];
    label.text = @"label";
    label.textAlignment = NSTextAlignmentCenter;
    UITapGestureRecognizer *single = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singletap:)];
    [label addGestureRecognizer:single];
    single.numberOfTapsRequired = 1;
    [self.view addSubview:label];
}

- (void)singletap:(id)sender
{
    NSLog(@"single tap");
    // do your stuff here
}
If you found it helpful, please mark it positive.
Happy coding!

Drawing CGImage into Bitmap context, with UIBezierPath drawn into the context beforehand

This is my first question, so please bear with me!
I'm trying to write a simple drawing app. I was using Core Graphics before, and the only problem was that it was too slow; when I drew with my finger it lagged a lot!
So now I'm trying to use UIBezierPath to draw, which I understood to be a lot faster, and it is!
When I was using Core Graphics, to keep the drawing speed up I was drawing to a custom bitmap context I created, which was constantly being updated as I drew.
So I drew into my custom bitmap context, then a CGImageRef was set to what was drawn in that context using -
cacheImage = CGBitmapContextCreateImage(imageContext);
and that was then drawn back into the bitmap context using -
CGContextDrawImage(imageContext, self.bounds, cacheImage);
I also did this so that when I changed the colour of the line being drawn, the rest of the drawing stayed as it was previously drawn, if that makes sense.
Now the problem I've come across is this.
I'm trying to draw the UIBezierPath into my image context using -
imageContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(imageContext);
[path stroke];
if (imageContext != nil) {
    cacheImage = CGBitmapContextCreateImage(imageContext); // invalid context here so added this check to solve it
}
CGContextScaleCTM(imageContext, 1, -1); // flip, since the image would otherwise draw upside down
CGContextDrawImage(imageContext, self.bounds, cacheImage); // draws the current context
[path removeAllPoints];
CGImageRelease(cacheImage); // releases the image to solve memory errors
with path being my UIBezierPath. All the path setup is done in touchesBegan: and touchesMoved:, then [self setNeedsDisplay] is called to invoke drawRect:.
What's happening is that when I draw, it's either not drawing the CGImageRef into the context properly, or it is, but when it captures the cache image it's picking up a white background from somewhere instead of just the path. So it pastes over the entire image with the last path drawn plus a white background fill, and you can't see the paths that were drawn earlier to build the image up, even though the view's background colour is clearColor.
I really hope I'm making sense; I've just spent too many hours on this and it's drained me completely. Here's the drawing method I'm using -
This creates the image context -
- (CGContextRef)myCreateBitmapContext:(int)pixelsWide :(int)pixelsHigh {
    imageContext = NULL;
    CGColorSpaceRef colorSpace; // the color space for the context
    void *bitmapData; // pointer to the bitmap's backing memory
    int bitmapByteCount; // total number of bytes in the bitmap
    int bitmapBytesPerRow; // number of bytes per row
    bitmapBytesPerRow = (pixelsWide * 4); // how many bytes per row the context needs
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh); // how many bytes there are in total
    colorSpace = CGColorSpaceCreateDeviceRGB(); // create the color space
    bitmapData = malloc(bitmapByteCount); // allocate the backing memory
    if (bitmapData == NULL)
    {
        //NSLog(@"Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    imageContext = CGBitmapContextCreate(bitmapData,
                                         pixelsWide,
                                         pixelsHigh,
                                         8, // bits per component
                                         bitmapBytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
    if (imageContext == NULL)
    {
        free(bitmapData);
        CGColorSpaceRelease(colorSpace);
        NSLog(@"context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace); // release the color space
    CGContextSetRGBFillColor(imageContext, 1.0, 1.0, 1.0, 0.0); // fill the bitmap with fully transparent white (alpha 0)
    CGContextFillRect(imageContext, self.bounds);
    CGContextSetShouldAntialias(imageContext, YES);
    return imageContext;
}
And here's my drawing code -
- (void)drawRect:(CGRect)rect
{
    DataClass *data = [DataClass sharedInstance];
    [data.lineColor setStroke];
    [path setLineWidth:data.lineWidth];
    imageContext = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(imageContext);
    [path stroke];
    if (imageContext != nil) {
        cacheImage = CGBitmapContextCreateImage(imageContext);
    }
    CGContextScaleCTM(imageContext, 1, -1); // this one
    CGContextDrawImage(imageContext, self.bounds, cacheImage); // draws the current context
    [path removeAllPoints];
    CGImageRelease(cacheImage); // releases the image to solve memory errors
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    DataClass *data = [DataClass sharedInstance];
    CGContextSetStrokeColorWithColor(imageContext, [data.lineColor CGColor]);
    ctr = 0;
    UITouch *touch2 = [touches anyObject];
    pts[0] = [touch2 locationInView:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint p = [touch locationInView:self];
    ctr++;
    pts[ctr] = p;
    if (ctr == 4)
    {
        // move the endpoint to the middle of the line joining the second control point of
        // the first Bezier segment and the first control point of the second Bezier segment
        pts[3] = CGPointMake((pts[2].x + pts[4].x) / 2.0, (pts[2].y + pts[4].y) / 2.0);
        [path moveToPoint:pts[0]];
        // add a cubic Bezier from pts[0] to pts[3], with control points pts[1] and pts[2]
        [path addCurveToPoint:pts[3] controlPoint1:pts[1] controlPoint2:pts[2]];
        //[data.lineColor setStroke];
        [self setNeedsDisplay];
        // replace points and get ready to handle the next segment
        pts[0] = pts[3];
        pts[1] = pts[4];
        ctr = 1;
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [path removeAllPoints];
    [self setNeedsDisplay];
    ctr = 0;
}
'path' is my UIBezierPath
'cacheImage' is a CGImageRef
'imageContext' is a CGContextRef
Any help is much appreciated! And if you can think of a better way to do it, please let me know! I do however need the cache image to have a transparent background, so it's just the paths visible, as I'm going to apply something later on when I get this working!
EDIT: Also, I'm removing the points every time to keep the drawing speed up, just so you know!
Thanks in advance :)
Well, this is a big question. One potential lead would be to verify that you draw only what you need (not the whole image every time), and to separate the invariant bitmap from the regions/rects that actively mutate, possibly across multiple layers.
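To make the first half concrete, here's a minimal sketch of dirty-rect drawing (Stroke, strokes, drawStroke:inContext: and segmentBounds are hypothetical names, not from the question's code):
- (void)drawRect:(CGRect)rect {
    // 'rect' is the union of the regions you invalidated; replay only the
    // strokes whose bounding boxes intersect it, instead of everything.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (Stroke *stroke in strokes) {
        if (CGRectIntersectsRect(stroke.bounds, rect))
            [self drawStroke:stroke inContext:ctx];
    }
}
// In the touch handler, invalidate just the new segment, padded by the line width:
[self setNeedsDisplayInRect:CGRectInset(segmentBounds, -lineWidth, -lineWidth)];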

How to implement undo in a drawing app

Below is my painting code snippet. I can undo in a vector drawing by storing points, removing the most recent one from the mutable array, and then redrawing. However, it does not function properly in a raster drawing.
If I use UIGraphicsGetCurrentContext() as the context reference, undo works well. But the context from CGBitmapContextCreate() does not when I issue an undo action.
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        objArray = [[NSMutableArray alloc] init];
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        canvas = CGBitmapContextCreate(NULL, drawImage.frame.size.width, drawImage.frame.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
    }
    return self;
}
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef imgRef = CGBitmapContextCreateImage(canvas);
    CGRect r = self.bounds;
    CGContextDrawImage(context, CGRectMake(0, 0, r.size.width, r.size.height), imgRef);
    if (ok) {
        for (int i = 0; i < [objArray count]; i++) {
            CGPoint point = [[[objArray objectAtIndex:i] objectAtIndex:0] CGPointValue];
            CGContextMoveToPoint(canvas, point.x, point.y);
            for (int j = 0; j < [[objArray objectAtIndex:i] count]; j++) {
                point = [[[objArray objectAtIndex:i] objectAtIndex:j] CGPointValue];
                CGContextAddLineToPoint(canvas, point.x, point.y);
                CGContextStrokePath(canvas);
                CGContextMoveToPoint(canvas, point.x, point.y);
            }
        }
    }
    CGImageRelease(imgRef);
}
- (void)undo:(id)sender {
    NSLog(@"click");
    if ([objArray count] > 0)
        [objArray removeLastObject];
    ok = YES;
    [self setNeedsDisplay];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    NSMutableArray *points = [NSMutableArray array];
    UITouch *touch = nil;
    if (touchPoint) {
        touch = [touches member:touchPoint];
    }
    end = [touch locationInView:self];
    [points addObject:[NSValue valueWithCGPoint:start]];
    [points addObject:[NSValue valueWithCGPoint:end]];
    [objArray addObject:points];
    CGContextMoveToPoint(canvas, start.x, start.y);
    CGContextAddLineToPoint(canvas, end.x, end.y);
    CGContextSetLineCap(canvas, kCGLineCapRound);
    CGContextSetLineWidth(canvas, 40.0);
    CGContextStrokePath(canvas);
    start = end;
    [self setNeedsDisplay];
}
With raster drawing you're changing the pixels in the canvas each time; there are no objects like there are in a vector drawing.
As a result the only "state" you have is the canvas itself. In order to allow for undo you actually need to save a copy of the canvas before each change. Right before you make the change you'll copy the old bitmap context and then make the change. If the user chooses to undo then you'll just copy the saved context over the normal one. If you want to allow for multiple undos you'll have to save multiple copies.
Obviously this can become memory intensive. Technically you don't have to save the whole canvas, just the part that has changed, along with a record of the changed section's position. If the changes are small you'll save quite a bit of memory, but some changes can affect the whole canvas, saving nothing.
You could potentially save even more memory with algorithms that store changed pixels, but the processing overhead isn't likely worth it.
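Against the asker's canvas context, a minimal sketch of that snapshot scheme (undoStack is an assumed NSMutableArray ivar; written for manual reference counting like the rest of the question):
// Call before each change (e.g. at the start of touchesBegan:).
- (void)saveStateForUndo {
    CGImageRef snapshot = CGBitmapContextCreateImage(canvas);
    [undoStack addObject:(id)snapshot]; // the array retains the CGImage
    CGImageRelease(snapshot);
}

- (void)undo:(id)sender {
    if ([undoStack count] == 0) return;
    CGImageRef saved = (CGImageRef)[undoStack lastObject];
    // Overwrite the canvas pixels with the saved snapshot.
    CGContextClearRect(canvas, self.bounds);
    CGContextDrawImage(canvas, self.bounds, saved);
    [undoStack removeLastObject];
    [self setNeedsDisplay];
}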
Assuming you're storing the image in an Image object, create a stack:
Stack undoStack = ...
Stack redoStack = ...
The high memory solution
As the user makes changes to the image, you store the next image (with the changes), and the next, and the next, and so on. When the user wants to undo, you restore an image by popping from the undoStack and pushing onto the redoStack:
void undo() {
    redoStack.push(undoStack.pop());
}
To redo, use the same process, but backwards.
The low memory solution
The concept is the same as above, but now instead of storing the whole image, you XOR the modified image with the previous one (or with the original one) and store only the pixels that have changed and the coordinates at which those changes occur. You might even consider quad-tree packing of this new XORed image to save memory if the changes are not great.
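In C terms, the XOR trick looks something like this (savedData, currentData, bytesPerRow and height are assumed to describe two same-sized RGBA bitmap buffers):
// Build a delta buffer that is zero wherever nothing changed.
size_t byteCount = bytesPerRow * height;
uint8_t *delta = malloc(byteCount);
for (size_t i = 0; i < byteCount; i++)
    delta[i] = savedData[i] ^ currentData[i];
// XOR is its own inverse: currentData[i] ^ delta[i] recovers savedData[i],
// so the (mostly zero, highly packable) delta is all you store per undo step.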

My vector sprite renders in different locations in simulator and device

I'm implementing a subclass of UIView that displays a gauge dial with a sprite for the indicator. It has an angle property that I can vary to make the needle point at different angles. It works, but the same values for the position of the needle make it show up in different locations on the phone and in the simulator. It's an iPhone 4, so I'm sure the double-resolution thing is behind this, but I don't know what to do about it. I tried setting the UIView's layer's contentScaleFactor but that fails. I thought UIView got the resolution thing for free. Any suggestions?
I should note that the NSLog statements report 150 for both .frame.size dimensions, in both the simulator and the device.
Here's the .m file
UPDATE: In the simulator, I found out how to set the hardware to iPhone 4, and it looks just like the device now; both are scaling and positioning the sprite at half size.
UPDATE 2: I made a workaround. I set the .scale of my sprite equal to the UIView's contentScaleFactor, and then use it to divide the UIView in half if it's a lo-res screen and use the full width if it's hi-res. I still don't see why this is necessary, as I should be working in points now, not pixels. It must have something to do with the custom drawing code in the Sprite or VectorSprite classes.
I'd still appreciate some feedback if anyone has some...
#import "GaugeView.h"
@implementation GaugeView

@synthesize needle;

#define kVectorArtCount 4
static CGFloat kVectorArt[] = {
    3, -4,
    2, 55,
    -2, 55,
    -3, -4
};

- (id)initWithCoder:(NSCoder *)coder {
    if (self = [super initWithCoder:coder]) {
        needle = [VectorSprite withPoints:kVectorArt count:kVectorArtCount];
        needle.scale = (float)self.contentScaleFactor; // returns 1 for lo-res, 2 for hi-res
        NSLog(@" needle.scale = %1.1f", needle.scale);
        needle.x = self.frame.size.width / ((float)(-self.contentScaleFactor) + 3.0); // divisor = 1 for hi-res, 2 for lo-res
        NSLog(@" needle.x = %1.1f", needle.x);
        needle.y = self.frame.size.height / ((float)(-self.contentScaleFactor) + 3.0);
        NSLog(@" needle.y = %1.1f", needle.y);
        needle.r = 0.0;
        needle.g = 0.0;
        needle.b = 0.0;
        needle.alpha = 1.0;
    }
    self.backgroundColor = [UIColor clearColor];
    return self;
}
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        // Initialization code
    }
    return self;
}

// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    t0 = CGAffineTransformInvert(t0);
    CGContextConcatCTM(context, t0);
    [needle updateBox];
    [needle draw:context];
}

- (void)dealloc {
    [needle release];
    [super dealloc];
}

@end
I believe the answer is that iOS takes care of the resolution scaling automatically in drawRect methods, but in custom drawing code, you have to do it yourself.
In my example, I used the UIView's contentScaleFactor to scale my sprite. In the future, in my custom draw method (not shown), I'll query [[UIScreen mainScreen] scale] and scale accordingly there.
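For anyone hitting the same thing, a hedged sketch of handling the scale manually in a custom bitmap context (size the backing store in pixels, then scale the CTM so the rest of the code keeps working in points):
CGFloat scale = [[UIScreen mainScreen] scale]; // 1.0 on lo-res, 2.0 on Retina
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
    self.bounds.size.width * scale,  // backing width in pixels
    self.bounds.size.height * scale, // backing height in pixels
    8, 0, space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);
// Map point coordinates onto the scaled pixel grid.
CGContextScaleCTM(ctx, scale, scale);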

Drawing Pixels - Objective-C/Cocoa

I am trying to draw individual pixels in Xcode to be output to the iPhone. I do not know any OpenGL or Quartz coding, but I do know a bit about Core Graphics. I was thinking about drawing small rectangles with a width and height of one, but I do not know how to implement this in code or how to get it to show in the view. Any help is greatly appreciated.
For a custom UIView subclass that allows plotting dots of a fixed size and color:
// Make a UIView subclass
@interface PlotView : UIView
@property (nonatomic) CGContextRef context;
@property (nonatomic) CGLayerRef drawingLayer; // this is the drawing surface
- (void)plotPoint:(CGPoint)point; // public method for plotting
- (void)clear; // erases the drawing surface
@end
// implementation
#define kDrawingColor ([UIColor yellowColor].CGColor)
#define kLineWeight (1.5)

@implementation PlotView
@synthesize context = _context, drawingLayer = _drawingLayer;
- (id)initPlotViewWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // this is total boilerplate, it rarely needs to change
        self.backgroundColor = [UIColor clearColor];
        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
        CGFloat width = frame.size.width;
        CGFloat height = frame.size.height;
        size_t bitsPerComponent = 8;
        size_t bytesPerRow = (4 * width);
        self.context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorspace);
        CGSize size = frame.size;
        self.drawingLayer = CGLayerCreateWithContext(self.context, size, NULL);
    }
    return self;
}
// override drawRect to put the drawing surface onto the screen
// you don't actually call this directly, the system will call it
- (void)drawRect:(CGRect)rect {
    // this creates a new blank image, then gets the surface you've drawn on, and stamps it down
    // at some point, the hardware will render this onto the screen
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGImageRef image = CGBitmapContextCreateImage(self.context);
    CGRect bounds = [self bounds];
    CGContextDrawImage(currentContext, bounds, image);
    CGImageRelease(image);
    CGContextDrawLayerInRect(currentContext, bounds, self.drawingLayer);
}
// simulate plotting dots by drawing a very short line with rounded ends
// if you need to draw some other kind of shape, study this part, along with the docs
- (void)plotPoint:(CGPoint)point {
    CGContextRef layerContext = CGLayerGetContext(self.drawingLayer); // get ready to draw on your drawing surface
    // prepare to draw
    CGContextSetLineWidth(layerContext, kLineWeight);
    CGContextSetLineCap(layerContext, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(layerContext, kDrawingColor);
    // draw onto the surface by building a path, then stroking it
    CGContextBeginPath(layerContext); // start
    CGFloat x = point.x;
    CGFloat y = point.y;
    CGContextMoveToPoint(layerContext, x, y);
    CGContextAddLineToPoint(layerContext, x, y);
    CGContextStrokePath(layerContext); // finish
    [self setNeedsDisplay]; // this tells the system to call drawRect at a time of its choosing
}
- (void)clear {
    CGContextClearRect(CGLayerGetContext(self.drawingLayer), [self bounds]);
    [self setNeedsDisplay];
}

// teardown
- (void)dealloc {
    CGContextRelease(_context);
    CGLayerRelease(_drawingLayer);
    [super dealloc];
}

@end
If you want to be able to draw pixels that are cumulatively added to previously drawn pixels, then you will need to create your own bitmap graphics context, backed by your own bitmap memory. You can then set individual pixels in the bitmap memory, or draw short lines or small rectangles in your graphics context. To display your drawing context, first convert it to a CGImageRef. Then you can either draw this image into a subclassed UIView in the view's drawRect, or assign the image to the contents of the UIView's CALayer.
Look up: CGBitmapContextCreate and CGBitmapContextCreateImage in Apple's documentation.
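A minimal sketch of that pipeline, from bitmap context to a pixel to an on-screen layer (width, height, x, y and myView are placeholders; error checking omitted):
// RGBA bitmap, 4 bytes per pixel; Quartz allocates and owns the buffer.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height,
    8, width * 4, space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);
// Poke one opaque red pixel directly into the buffer. Row 0 of the buffer
// is the top of the image, unlike Quartz drawing coordinates.
uint8_t *pixels = CGBitmapContextGetData(bitmap);
size_t offset = 4 * (y * width + x);
pixels[offset] = 255;     // R
pixels[offset + 3] = 255; // A
// Convert to an image and hand it to a view's layer for display.
CGImageRef image = CGBitmapContextCreateImage(bitmap);
myView.layer.contents = (id)image;
CGImageRelease(image);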
ADDED:
I wrote up a longer explanation of why you might need to do this when drawing pixels in an iOS app, plus some source code snippets, on my blog: http://www.musingpaw.com/2012/04/drawing-in-ios-apps.html
All drawing needs to go into the - (void)drawRect:(CGRect)rect method. [self setNeedsDisplay] flags the view for a redraw. The problem is you're redrawing nothing.