iOS: CGContextRef drawRect does not agree with input - iphone

Background: I would like to draw blocks wherever the user touches. If a block is already there, I want to erase it. I manage the blocks by using an NSMutableArray to keep track of the points where blocks should go. Every time the user touches, the code determines whether the touched spot already contains a block and updates the array accordingly.
Problem: I get very weird feedback from this. First of all, everything in the array works as I want. The problem comes when the user wants to erase a block. While the array is maintained correctly, the drawing seems to ignore the change in the array. It never removes anything but the last dot, and even that toggles on and off when the user taps elsewhere.
Here is the code:
- (void)drawRect:(CGRect)rect
{
    NSLog(@"drawRect current array %@", pointArray);
    for (NSValue *pointValue in pointArray) {
        CGPoint point = [pointValue CGPointValue];
        [self drawSquareAt:point];
    }
}

- (void)drawSquareAt:(CGPoint)point {
    float x = point.x * scale;
    float y = point.y * scale;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(context, x, y);
    CGContextAddLineToPoint(context, x + scale, y);
    CGContextAddLineToPoint(context, x + scale, y + scale);
    CGContextAddLineToPoint(context, x, y + scale);
    CGContextAddLineToPoint(context, x, y);
    CGContextSetFillColorWithColor(context, [UIColor darkGrayColor].CGColor);
    CGContextFillPath(context);
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *aTouch = [touches anyObject];
    CGPoint point = [aTouch locationInView:self];
    point = CGPointMake((int)(point.x / scale), (int)(point.y / scale));
    NSLog(@"Touched at %@", [NSArray arrayWithObject:[NSValue valueWithCGPoint:point]]);
    NSValue *pointValue = [NSValue valueWithCGPoint:point];
    NSUInteger i = [pointArray indexOfObject:pointValue]; // NSNotFound if not present
    NSLog(@"Index at %lu", (unsigned long)i);
    if (i != NSNotFound) {
        [pointArray removeObjectAtIndex:i];
        NSLog(@"remove");
    } else {
        [pointArray addObject:pointValue];
        NSLog(@"add");
    }
    NSLog(@"Current array: %@", pointArray);
    [self setNeedsDisplay];
}
scale is defined as 16.
pointArray is a member variable of the view.
To test: you can drop this into any UIView and add that view to a view controller to see the effect.
Question : How do I get the drawing to agree with the array?
Update + explanation: I am aware of the cost of this approach, but it is only meant to let me capture a quick figure. It will not be used in the real application, so please do not get hung up on how expensive it is. I only created this capability to get an NSString value (@"1,3,5,1,2,6,2,5,5,...") describing a figure I draw. The real use will be more efficient, with no redrawing. Please stick to the question asked. Thank you.

I don't see anywhere where you are actually clearing what you drew previously. Unless you explicitly clear (such as by filling with UIRectFill() - which, as an aside, is a more convenient way to draw rectangles than filling an explicit path), Quartz is going to just draw over your old content, which will cause unexpected behavior on attempts at erasure.
So... what happens if you put this at the beginning of -drawRect:?
[[UIColor whiteColor] setFill]; // Or whatever your background color is
UIRectFill([self bounds]);
(This is of course horrendously inefficient, but per your comment, I am disregarding that fact.)
(As a separate aside, you probably should wrap your drawing code in a CGContextSaveGState()/CGContextRestoreGState() pair to avoid tainting the graphics context of any calling code.)
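For example, a minimal sketch of that save/restore wrapping applied to the drawSquareAt: method above (also using a plain rect fill instead of an explicit path, as suggested):
- (void)drawSquareAt:(CGPoint)point {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);    // snapshot fill color, line width, CTM, ...

    float x = point.x * scale;
    float y = point.y * scale;
    CGContextSetFillColorWithColor(context, [UIColor darkGrayColor].CGColor);
    CGContextFillRect(context, CGRectMake(x, y, scale, scale)); // simpler than building a path

    CGContextRestoreGState(context); // leave the context the way we found it
}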
EDIT: I always forget about this property since I usually want to draw more complex backgrounds anyway, but you can likely achieve similar results by setting clearsContextBeforeDrawing to YES on the UIView.

This approach seems a little weird to me, because every time the touchesEnded method is called you need to redraw (an expensive operation) and also keep track of the squares. I suggest you subclass UIView and implement its drawRect: method so the square view knows how to draw itself, and implement touchesEnded in your view controller: if the touch hit an existing square view, remove it from the view controller's view; otherwise create a square view and add it as a subview of the view controller's view, as sketched below.
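A rough sketch of that subview-based approach (SquareView and squareSize are hypothetical names used only to illustrate the idea):
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self.view];
    UIView *hit = [self.view hitTest:point withEvent:nil];
    if ([hit isKindOfClass:[SquareView class]]) {
        [hit removeFromSuperview];               // tapped an existing square: erase it
    } else {
        CGRect frame = CGRectMake((int)(point.x / squareSize) * squareSize,
                                  (int)(point.y / squareSize) * squareSize,
                                  squareSize, squareSize);
        SquareView *square = [[SquareView alloc] initWithFrame:frame];
        [self.view addSubview:square];           // empty spot: add a new square
        [square release];                        // omit under ARC
    }
}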


Why is CGContextDrawImage in drawRect initially slow?

Consider this simple UIView subclass in an ARC-enabled iOS app that draws on the view when the screen is touched:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint location = [[touches anyObject] locationInView:self];
    int padding = brushSize / 2;
    CGRect brushRect = CGRectMake(
        location.x - padding, location.y - padding,
        brushSize, brushSize
    );
    [self setNeedsDisplayInRect:brushRect];
}

- (void)drawRect:(CGRect)rect {
    NSDate *start = [NSDate date];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, rect, brushImage);
    NSLog(@"%f", [start timeIntervalSinceNow]);
}
In this example, brushImage is a square bitmap, and brushSize is an int describing the width and height of the bitmap.
I consistently get log output like this:
-0.131658
-0.133998
-0.143314
-0.007132
-0.006968
-0.007444
-0.006733
-0.008574
-0.007163
-0.006560
-0.006958
The first calls to drawRect take much longer to complete than subsequent calls, and drawRect continues to be consistently fast thereafter. It's only on the first calls after the view is initialized that drawRect is slow.
I see similar results in both the simulator and all physical devices I've tried. The initial calls always take much longer to complete.
Why is drawRect / CGContextDrawImage so slow on the initial calls? And more importantly, how can I fix this?
Check the rect. I bet the first three are full-screen updates. I ran into this issue before: iOS will forcibly perform the first few updates as full screen no matter what you do (probably for its GL buffering system). If you leave it alone for about 5 seconds and then draw something again, you will get the same behavior. I never figured out what the actual cause was, and I doubt I ever will :(.
See the question I originally asked: UIGraphicsGetCurrentContext() short lifetime
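To see which rect you are actually being asked to draw, you can log it at the top of -drawRect: (a trivial sketch of the same method as above, minus the timing):
- (void)drawRect:(CGRect)rect {
    // If the first few rects equal self.bounds, those are forced full-screen updates.
    NSLog(@"drawRect rect %@, bounds %@",
          NSStringFromCGRect(rect), NSStringFromCGRect(self.bounds));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, rect, brushImage);
}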

how to draw new lines without old ones disappearing?

I want to create a simple drawing tool. The purpose is to draw a line that follows the accelerometer of the iPhone & iPad, so if the user tilts the device, a line will be drawn in the direction the device was moved.
I am able to register acceleration and draw lines. My problem is that as soon as I draw a new line, the old one disappears. One possible solution would be to save the points already drawn and then redraw everything, but I would think there are better solutions?
All help is appreciated!
My drawRect is at the moment like this:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 20.0);
    CGContextSetStrokeColorWithColor(context, [UIColor yellowColor].CGColor);
    CGContextMoveToPoint(context, fromPoint.x, fromPoint.y);
    CGContextAddLineToPoint(context, toPoint.x, toPoint.y);
    CGContextStrokePath(context);
}
A different method is responsible for refreshing. This method is also called from the UIViewController at certain intervals. Right now it shows a "trail" (or whatever I should call it) in the direction the device was moved, which is not exactly what I am looking for:
- (void)drawNewLine:(CGPoint)to {
    // calculate trail behind current point
    float pointDifferenceX = ((toPoint.x - to.x) * 9);
    float pointDifferenceY = ((toPoint.y - to.y) * 9);
    fromPoint = CGPointMake(toPoint.x + pointDifferenceX, toPoint.y + pointDifferenceY);
    toPoint = to;
    [self setNeedsDisplay];
}
I can think of two options:
Either save all points and redraw the lines whenever the screen needs to be refreshed (as you mentioned)
Draw the lines into an off-screen pixelmap and refresh the screen from there
In either case, respect the Hollywood principle: don't call, you will be called. That means don't just draw to the screen; wait until drawRect: of your UIView is called. (You can trigger this by calling setNeedsDisplay.) A sketch of the first option follows.
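A minimal sketch of the first option, assuming the view keeps an NSMutableArray ivar called linePoints (an assumed name, initialized elsewhere) holding boxed CGPoints, and simply appending each new point instead of the trail calculation:
- (void)drawNewLine:(CGPoint)to {
    [linePoints addObject:[NSValue valueWithCGPoint:to]]; // remember every point
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    if ([linePoints count] < 2) return;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 20.0);
    CGContextSetStrokeColorWithColor(context, [UIColor yellowColor].CGColor);

    CGPoint first = [[linePoints objectAtIndex:0] CGPointValue];
    CGContextMoveToPoint(context, first.x, first.y);
    for (NSUInteger i = 1; i < [linePoints count]; i++) {
        CGPoint p = [[linePoints objectAtIndex:i] CGPointValue];
        CGContextAddLineToPoint(context, p.x, p.y);   // redraw the whole path every time
    }
    CGContextStrokePath(context);
}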

UI design - Best way to deal with many UIButtons

I'm running into problems when dealing with a large amount of UIButtons in my interface. I was wondering if anyone had first hand experience with this and how they did it?
When dealing with 30-80 buttons (most of them simple, a couple complex), do you just use UIButton, or do you do something different, like drawRect:, responding to touch events and working out the coordinates of each touch yourself?
The best example is a calendar, similar to Apple's Calendar app. Would you just draw most of the days using drawRect: and then, when a day is tapped, replace it with an image, or just use UIButtons? It's not so much the memory footprint or creating the buttons; it's that strange things sometimes happen with them (see my previous question about it), and I'm having performance issues animating them.
Thanks for any help.
If "strange things are happening" with your buttons, you need to get to the bottom of why. Switching architectures just to avoid a problem that you don't understand (and might crop up again) doesn't sound like a good idea.
-drawRect: works by drawing to a bitmap-backed context. This happens when -displayIfNeeded is called after -setNeedsDisplay (or doing something else that implicitly sets the needsDisplay flag, like resizing a view with contentMode = UIContentModeRedraw). The bitmap-backed context is then composited to screen.
Buttons work by putting the different components (background image, foreground image, text) in different layers. The text is drawn when it changes and composited to the screen; the images are just composited directly to the screen.
The "best" way to do things is usually a combination of the two. For example, you might draw text and a background image in -drawRect: so the different layers didn't need to be composited at render time (you get an additional speedup if your view is "opaque"). You probably want to avoid full-screen animations via drawRect: (and it won't integrate so well with CoreAnimation), since drawing tends to be more expensive than compositing.
But first, I'd find out what's going wrong with UIButton. There's little point worrying about how you could make things faster until you actually find out what the slow bits are. Write code so that it is easy to maintain. UIButton is not that expensive and -drawRect: is not that bad (presumably it's even better if you use -setNeedsDisplayInRect: for a smallish rect, but then you need to calculate the rect...), but if you want a button, use UIButton.
Instead of using 30-80 UIButtons, I would prefer to use images (if possible, a single image, or as few as possible) and compare the touch location.
And if I must create buttons, then obviously I would not create 30-80 variables for them. I would set and read each view's tag to determine which one was tapped. A small sketch of that follows.
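A small sketch of the tag-based idea, with one shared action for all buttons (kNumberOfTiles, frameForTileAtIndex: and tileTapped: are made-up names used only for illustration):
- (void)buildTiles {
    for (int i = 0; i < kNumberOfTiles; i++) {           // kNumberOfTiles: assumed constant
        UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
        button.frame = [self frameForTileAtIndex:i];      // assumed layout helper
        button.tag = i;                                   // remember which tile this is
        [button addTarget:self action:@selector(tileTapped:)
         forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:button];
    }
}

- (void)tileTapped:(UIButton *)sender {
    NSLog(@"Tapped tile %d", (int)sender.tag);            // the tag tells the buttons apart
}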
If this is all stuff you are animating then you could create a bunch of CALayers with their contents set to a CGImage. You would have to compare the touch location to identify the layer. CALayers have a useful style property that is an NSDictionary you can store meta-data in.
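Roughly, that CALayer approach could look like this (a sketch; kTileSize is an assumed constant, QuartzCore must be imported, and tile.png is borrowed from the answer below):
- (void)addTileLayerAtRow:(int)row col:(int)col {
    CALayer *tile = [CALayer layer];
    tile.frame = CGRectMake(col * kTileSize, row * kTileSize, kTileSize, kTileSize);
    tile.contents = (id)[UIImage imageNamed:@"tile.png"].CGImage;  // cheap to composite
    tile.style = [NSDictionary dictionaryWithObjectsAndKeys:        // stash meta-data
                  [NSNumber numberWithInt:row], @"row",
                  [NSNumber numberWithInt:col], @"col", nil];
    [self.view.layer addSublayer:tile];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.view];
    for (CALayer *tile in self.view.layer.sublayers) {
        if (CGRectContainsPoint(tile.frame, p)) {
            NSLog(@"hit tile %@ / %@", [tile.style objectForKey:@"row"],
                                       [tile.style objectForKey:@"col"]);
        }
    }
}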
I just use the UIButtons unless there happens to be a specific performance issue that crops up. If they have similar functionality, however, such as a keyboard, I map them all to one IBAction and differentiate the behavior based on the sender.
What specific performance and animation issues are you running into?
I recently ran across this problem myself when developing a game for the iPhone. I was using UIButtons to hold game tiles, then stylized them with transparent images, background colors and text.
It all worked well for a small number of tiles. Once we got to about 50, however, the performance dropped significantly. After scouring Google I discovered that others had experienced the same problem. It seems the iPhone struggles with lots of transparent buttons onscreen at once. Not sure if it's a bug in the UIButton code or just a limitation of the graphics hardware on the device, but either way, it's beyond your control as a programmer.
My solution was to draw the board by hand using Core Graphics. It seemed daunting at first, but in reality it was pretty easy. I just placed one big UIImageView on my ViewController in Interface Builder, made it an IBOutlet so I could alter it from Objective-C, then constructed the image with Core Graphics.
Since a UIImageView doesn't handle taps, I used the touchesBegan method of my UIViewController, and then mapped the x/y coordinates of the touch to the precise tile on my game board.
The board now renders in less than a tenth of a second. Bingo!
If you need sample code, just let me know.
UPDATE: Here's a simplified version of the code I'm using. Should be enough for you to get the gist.
// CoreGraphicsTestViewController.h
// CoreGraphicsTest

#import <UIKit/UIKit.h>

@interface CoreGraphicsTestViewController : UIViewController {
    UIImageView *testImageView;
}

@property (retain, nonatomic) IBOutlet UIImageView *testImageView;

- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed;

@end
... and the .m file ...
// CoreGraphicsTestViewController.m
// CoreGraphicsTest

#import "CoreGraphicsTestViewController.h"
#import <QuartzCore/QuartzCore.h>
#import <CoreGraphics/CoreGraphics.h>

@implementation CoreGraphicsTestViewController

@synthesize testImageView;

int iTileSize;
int iBoardSize;
int iRow;   // file scope so the touch handlers below can use them too
int iCol;

- (void)viewDidLoad {
    iTileSize = 75;
    iBoardSize = 3;
    [testImageView setBounds:CGRectMake(0, 0, iBoardSize * iTileSize, iBoardSize * iTileSize)];
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (iRow = 0; iRow < iBoardSize; iRow++) {
        for (iCol = 0; iCol < iBoardSize; iCol++) {
            [self drawTile:context row:iRow col:iCol isPressed:NO];
        }
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
    [super viewDidLoad];
}

- (void)dealloc {
    [testImageView release];
    [super dealloc];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:testImageView];
    if ((location.x >= 0) && (location.y >= 0) &&
        (location.x <= testImageView.bounds.size.width) &&
        (location.y <= testImageView.bounds.size.height)) {
        UIImage *theIMG = testImageView.image;
        CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [theIMG drawInRect:rect];
        iRow = location.y / iTileSize;
        iCol = location.x / iTileSize;
        [self drawTile:context row:iRow col:iCol isPressed:YES];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        [testImageView setImage:image];
        UIGraphicsEndImageContext();
    }
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UIImage *theIMG = testImageView.image;
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [theIMG drawInRect:rect];
    [self drawTile:context row:iRow col:iCol isPressed:NO];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
}
- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed {
    CGRect rrect = CGRectMake((colNum * iTileSize), (rowNum * iTileSize), iTileSize, iTileSize);
    CGContextClearRect(ctx, rrect);
    if (tilePressed) {
        CGContextSetFillColorWithColor(ctx, [[UIColor redColor] CGColor]);
    } else {
        CGContextSetFillColorWithColor(ctx, [[UIColor greenColor] CGColor]);
    }
    CGContextFillRect(ctx, rrect);   // fill the tile background behind the (transparent) image
    UIImage *theImage = [UIImage imageNamed:@"tile.png"];
    [theImage drawInRect:rrect];
}

@end

iPhone Programming setNeedsDisplay not working

I implemented my own drawRect: method and I'm trying to redraw the shape from a controller class, but I can't figure out how to correctly use setNeedsDisplay to redraw the UIView. Please help!!
I know the code is ugly, but here is the custom method:
- (void)drawRect:(CGRect)rect {
    // Drawing code
    NSArray *kyle = [self pointsForPolygonInRect:rect numberOfSides:[pshape numberOfSides]];
    CGContextRef c = UIGraphicsGetCurrentContext();
    int counter = [kyle count];
    NSLog(@"counter: %d", counter);
    int i = 0;
    BOOL first = YES;
    NSValue *kylevalue;
    CGPoint thePoint;
    for (i = 0; i < counter; i++) {
        kylevalue = [kyle objectAtIndex:i];
        thePoint = [kylevalue CGPointValue];
        if (first) { // start
            CGContextMoveToPoint(c, thePoint.x, thePoint.y + 5.0);
            first = NO;
        } else { // do the rest
            CGContextAddLineToPoint(c, thePoint.x, thePoint.y + 5.0);
        }
    }
    CGContextClosePath(c); // solid color
    CGContextDrawPath(c, kCGPathFillStroke);
}
I've had similar problems with redrawing not working as I expected either. It seems as though setNeedsDisplay does not force children to redraw; for those you need to call their setNeedsDisplay method.
For my needs I wrote a category to redraw the entire screen by calling setNeedsDisplay on every single view. This can of course easily be modified to start from a specific view as opposed to all windows.
@implementation UIApplication (Extensions)

+ (void)redrawScreen {
    NSMutableSet *todo = [NSMutableSet set];
    NSMutableSet *done = [NSMutableSet set];
    [todo addObjectsFromArray:[[UIApplication sharedApplication] windows]];
    while ([todo count] > 0) {
        UIView *view = [todo anyObject];
        [view setNeedsDisplay];
        [done addObject:view];
        [todo removeObject:view];
        if (view.subviews) {
            NSMutableSet *subviews = [NSMutableSet setWithArray:view.subviews];
            [subviews minusSet:done];
            [todo unionSet:subviews];
        }
    }
}

@end
Hope this is of help to someone
I'm not sure I understand your question. Calling -setNeedsDisplay on a view causes it to be redrawn via its -drawRect: method.
just a few thoughts
1. have you verified that your method is getting called?
2. have you verified that your array of points is in fact populated with more than one point?
3. are the points actually in the viewable area of the frame?
4. I don't see you setting the stroke line width, color or fill colors.
Sometimes the reason can be very simple: File's Owner has no connection to the UIView object, i.e. its outlet is not set up properly.
Use IB and the Ctrl-drag method :)
If your plot has the drawRect: method, look at your view controller, at the code where it connects to your plot. Is that hooked up properly? Is the little circle by the property hollow, or filled in? That little circle must be filled in if the property is connected properly. If it is hollow, bring up the storyboard.
Find the plot object in the storyboard. Control-drag from the plot object in the storyboard over to the empty circle on the view controller. Now you've hooked up the outlet. Try running it again, and see if your drawRect: is called now.

Line is erased when drawing shapes

I am trying to make an application for drawing shapes on screen by touching it.
I can draw a line from one point to another, but it is erased on each new draw.
Here is my code:
CGPoint location;
CGContextRef context;
CGPoint drawAtPoint;
CGPoint lastPoint;

- (void)awakeFromNib {
    //[self addSubview:noteView];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    location = [touch locationInView:touch.view];
    [self setNeedsDisplayInRect:CGRectMake(0, 0, 320, 480)];
}

- (void)drawRect:(CGRect)rect {
    context = UIGraphicsGetCurrentContext();
    [[UIColor blueColor] set];
    CGContextSetLineWidth(context, 10);
    drawAtPoint.x = location.x;
    drawAtPoint.y = location.y;
    CGContextAddEllipseInRect(context, CGRectMake(drawAtPoint.x, drawAtPoint.y, 2, 2));
    CGContextAddLineToPoint(context, lastPoint.x, lastPoint.y);
    CGContextStrokePath(context);
    lastPoint.x = location.x;
    lastPoint.y = location.y;
}
Appreciate your help-
Nir.
As you have discovered, -drawRect is where you display the contents of a view. You will only 'see' on screen what you draw here.
This is much more low-level than something like Flash where you might add a movieclip containing a line to the stage and some time later add another movieclip containing a line to the stage and now you see - two lines!
You will need to do some work and probably set up something like this:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    location = [touch locationInView:touch.view];
    [self addNewLineFrom:lastPoint to:location];
    lastPoint = location;
    [self setNeedsDisplayInRect:CGRectMake(0, 0, 320, 480)];
}

- (void)drawRect:(CGRect)rect {
    context = UIGraphicsGetCurrentContext();
    for (Line *eachLine in lineArray)
        [eachLine drawInContext:context];
}
I think you can see how to flesh this out to what you need.
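For example, a minimal Line class for that sketch might look like this (names, properties, and the addNewLineFrom:to: helper are assumptions, not part of the original answer; lineArray is assumed to be an NSMutableArray ivar of the view):
@interface Line : NSObject {
    CGPoint from;
    CGPoint to;
}
@property (nonatomic, assign) CGPoint from;
@property (nonatomic, assign) CGPoint to;
- (void)drawInContext:(CGContextRef)context;
@end

@implementation Line
@synthesize from, to;
- (void)drawInContext:(CGContextRef)context {
    CGContextSetLineWidth(context, 10);
    CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
    CGContextMoveToPoint(context, from.x, from.y);
    CGContextAddLineToPoint(context, to.x, to.y);
    CGContextStrokePath(context);
}
@end

- (void)addNewLineFrom:(CGPoint)from to:(CGPoint)to {
    Line *line = [[Line alloc] init];
    line.from = from;
    line.to = to;
    [lineArray addObject:line];   // remember the segment so drawRect: can redraw it
    [line release];               // omit under ARC
}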
Another way to approach this is to use CALayers. With this approach you don't draw inside -drawRect: at all; you add and remove layers, draw what you like inside them, and the view handles compositing them together and drawing to the screen as needed. Probably more what you are looking for.
Every time that drawRect is called, you start with a blank slate. If you don't keep track of everything you have drawn before in order to draw it again, then you end up only drawing the latest swipe of your finger, and not any of the old ones. You will have to keep track of all of your finger swipes in order to redraw them every time drawRect is called.
Instead of redrawing every line, you can draw into an image and then just display the image in your drawRect: method. The image will accumulate the lines for you. Of course, this method makes undo more difficult to implement.
From the iPhone Application Programming Guide:
Use the UIGraphicsBeginImageContext function to create a new image-based graphics context. After creating this context, you can draw your image contents into it and then use the UIGraphicsGetImageFromCurrentImageContext function to generate an image based on what you drew. (If desired, you can even continue drawing and generate additional images.) When you are done creating images, use the UIGraphicsEndImageContext function to close the graphics context. If you prefer using Core Graphics, you can use the CGBitmapContextCreate function to create a bitmap graphics context and draw your image contents into it. When you finish drawing, use the CGBitmapContextCreateImage function to create a CGImageRef from the bitmap context. You can draw the Core Graphics image directly or use it to initialize a UIImage object.
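A minimal sketch of that image-accumulation idea applied to the touch-drawing view above (canvasImage is an assumed UIImage ivar; lastPoint is the existing ivar from the question):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint location = [touch locationInView:self];

    UIGraphicsBeginImageContext(self.bounds.size);
    [canvasImage drawInRect:self.bounds];              // start from everything drawn so far
    CGContextRef context = UIGraphicsGetCurrentContext();
    [[UIColor blueColor] set];
    CGContextSetLineWidth(context, 10);
    CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(context, location.x, location.y);
    CGContextStrokePath(context);
    canvasImage = UIGraphicsGetImageFromCurrentImageContext(); // under MRC, retain this and release the old image
    UIGraphicsEndImageContext();

    lastPoint = location;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    [canvasImage drawInRect:self.bounds];   // just blit the accumulated image
}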