iPhone programming: setNeedsDisplay not working

I've implemented my own drawRect: method and I'm trying to redraw the shape from a controller class, but I can't figure out how to correctly use setNeedsDisplay to redraw the UIView. Please help!
I know the code is ugly, but here is the custom method:
- (void)drawRect:(CGRect)rect {
    // Drawing code
    NSArray *kyle = [self pointsForPolygonInRect:rect numberOfSides:[pshape numberOfSides]];
    CGContextRef c = UIGraphicsGetCurrentContext();
    int counter = [kyle count];
    NSLog(@"counter: %d", counter);
    int i = 0;
    BOOL first = YES;
    NSValue *kylevalue;
    CGPoint thePoint;
    for (i = 0; i < counter; i++) {
        kylevalue = [kyle objectAtIndex:i];
        thePoint = [kylevalue CGPointValue];
        if (first) { // start
            CGContextMoveToPoint(c, thePoint.x, thePoint.y + 5.0);
            first = NO;
        } else { // do the rest
            CGContextAddLineToPoint(c, thePoint.x, thePoint.y + 5.0);
        }
    }
    CGContextClosePath(c); // solid color
    CGContextDrawPath(c, kCGPathFillStroke);
}

I've had similar problems with redrawing not working as I expected. It seems as though setNeedsDisplay does not force children to redraw; for those you need to call their setNeedsDisplay methods.
For my needs I wrote a category to redraw the entire screen by calling setNeedsDisplay on every single view. This can of course easily be modified to start from a specific view as opposed to all windows.
@implementation UIApplication (Extensions)

+ (void)redrawScreen {
    NSMutableSet *todo = [NSMutableSet set];
    NSMutableSet *done = [NSMutableSet set];
    [todo addObjectsFromArray:[[UIApplication sharedApplication] windows]];
    while ([todo count] > 0) {
        UIView *view = [todo anyObject];
        [view setNeedsDisplay];
        [done addObject:view];
        [todo removeObject:view];
        if (view.subviews) {
            NSMutableSet *subviews = [NSMutableSet setWithArray:view.subviews];
            [subviews minusSet:done];
            [todo unionSet:subviews];
        }
    }
}

@end
Hope this is of help to someone

I'm not sure I understand your question. Calling -setNeedsDisplay on a view causes it to be redrawn via its -drawRect: method.

Just a few thoughts:
1. Have you verified that your method is getting called?
2. Have you verified that your array of points is in fact populated with more than one point?
3. Are the points actually in the viewable area of the frame?
4. I don't see you setting the stroke line width, stroke color, or fill color anywhere (see the sketch below).
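On point 4, a minimal sketch of what that setup might look like before the path is drawn (the colors and line width here are arbitrary placeholders, not from the question):
CGContextRef c = UIGraphicsGetCurrentContext();
// kCGPathFillStroke uses both the fill and stroke state, so set them explicitly
CGContextSetLineWidth(c, 2.0);
CGContextSetStrokeColorWithColor(c, [UIColor blackColor].CGColor);
CGContextSetFillColorWithColor(c, [UIColor redColor].CGColor);
// ... build the polygon path as in the question, then:
CGContextDrawPath(c, kCGPathFillStroke);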

Sometimes the reason can be very simple: the File's Owner has no connection to the UIView object, i.e. its outlet is not set up properly.
Use IB and the Ctrl-drag method :)

If your plot has the drawRect: method, look at your view controller, at the code where it connects to your plot. Is that hooked up properly? Is the little circle by the property hollow or filled in? That little circle must be filled in if the property is connected properly. If it is hollow, bring up the storyboard.
Find the plot object in the storyboard. Control-drag from the plot object in the storyboard over to the empty circle on the view controller. Now you've hooked up the outlet. Try running it again, and see if your drawRect: is called now.

Related

iOS: CGContextRef drawRect does not agree with input

Background: I would like to draw blocks when the user touches up somewhere. If a block is already there, I want to erase it. I manage the blocks by using an NSMutableArray to keep track of the points where the blocks should go. Every time the user touches, it determines whether the touched place already contains a block or not and manages the array accordingly.
Problem: I get very weird feedback from this. First of all, everything in the array works as I wanted. The problem comes when the user wants to erase a block. While the array is maintained correctly, the drawing seems to ignore the change in the array. It will not remove anything but the last dot, and even that toggles on and off when the user clicks elsewhere.
Here is the code :
- (void)drawRect:(CGRect)rect
{
    NSLog(@"drawrect current array %@", pointArray);
    for (NSValue *pointValue in pointArray) {
        CGPoint point = [pointValue CGPointValue];
        [self drawSquareAt:point];
    }
}

- (void)drawSquareAt:(CGPoint)point {
    float x = point.x * scale;
    float y = point.y * scale;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(context, x, y);
    CGContextAddLineToPoint(context, x + scale, y);
    CGContextAddLineToPoint(context, x + scale, y + scale);
    CGContextAddLineToPoint(context, x, y + scale);
    CGContextAddLineToPoint(context, x, y);
    CGContextSetFillColorWithColor(context, [UIColor darkGrayColor].CGColor);
    CGContextFillPath(context);
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *aTouch = [touches anyObject];
    CGPoint point = [aTouch locationInView:self];
    point = CGPointMake((int)(point.x / scale), (int)(point.y / scale));
    NSLog(@"Touched at %@", [NSArray arrayWithObject:[NSValue valueWithCGPoint:point]]);
    NSValue *pointValue = [NSValue valueWithCGPoint:point];
    int i = [pointArray indexOfObject:pointValue];
    NSLog(@"Index at %i", i);
    if (i < [pointArray count]) {
        [pointArray removeObjectAtIndex:i];
        NSLog(@"remove");
    } else {
        [pointArray addObject:pointValue];
        NSLog(@"add");
    }
    NSLog(@"Current array : %@", pointArray);
    [self setNeedsDisplay];
}
scale is defined as 16.
pointArray is a member variable of the view.
To test: you can drop this into any UIView and add that to the view controller to see the effect.
Question: How do I get the drawing to agree with the array?
Update + explanation: I am aware of the cost of this approach, but it is only created for me to get a quick figure. It will not be used in the real application, so please do not get hung up on how expensive it is. I only created this capability to get the figure I draw as an NSString value (@"1,3,5,1,2,6,2,5,5,..."). This will become more efficient when I am actually using it with no redrawing. Please stick to the question asked. Thank you.
I don't see anywhere where you are actually clearing what you drew previously. Unless you explicitly clear (such as by filling with UIRectFill() - which, as an aside, is a more convenient way to draw rectangles than filling an explicit path), Quartz is going to just draw over your old content, which will cause unexpected behavior on attempts at erasure.
So... what happens if you put at the beginning of -drawRect::
[[UIColor whiteColor] setFill]; // Or whatever your background color is
UIRectFill([self bounds]);
(This is of course horrendously inefficient, but per your comment, I am disregarding that fact.)
(As a separate aside, you probably should wrap your drawing code in a CGContextSaveGState()/CGContextRestoreGState() pair to avoid tainting the graphics context of any calling code.)
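Putting those suggestions together, the question's -drawRect: might look roughly like this (a sketch assuming a white background and the existing pointArray ivar and drawSquareAt: method):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // Erase last frame's content before drawing the current state of pointArray
    [[UIColor whiteColor] setFill];   // or whatever the background color is
    UIRectFill([self bounds]);

    for (NSValue *pointValue in pointArray) {
        [self drawSquareAt:[pointValue CGPointValue]];
    }

    CGContextRestoreGState(context);
}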
EDIT: I always forget about this property since I usually want to draw more complex backgrounds anyway, but you can likely achieve similar results by setting clearsContextBeforeDrawing to YES on the UIView.
This approach seems a little weird to me because every time the touchesEnded method is called you need to redraw (which is an expensive operation) and also need to keep track of the squares. I suggest you subclass UIView and implement the drawRect: method so the view knows how to draw itself, and implement the touchesEnded method in your view controller: there you can check whether you have touched a square view and remove it from the view controller's view, or otherwise create a square view and add it as a subview of the view controller's view.
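A rough sketch of that alternative, assuming a hypothetical SquareView class and the question's 16-point grid:
// A tiny view that knows how to draw itself as a dark gray square.
@interface SquareView : UIView
@end

@implementation SquareView
- (void)drawRect:(CGRect)rect {
    [[UIColor darkGrayColor] setFill];
    UIRectFill(self.bounds);
}
@end

// In the view controller: toggle a SquareView in the touched 16-point grid cell.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.view];
    CGRect cell = CGRectMake((int)(p.x / 16) * 16, (int)(p.y / 16) * 16, 16, 16);
    for (UIView *subview in self.view.subviews) {
        if ([subview isKindOfClass:[SquareView class]] && CGRectEqualToRect(subview.frame, cell)) {
            [subview removeFromSuperview];   // already a square here: erase it
            return;
        }
    }
    SquareView *square = [[[SquareView alloc] initWithFrame:cell] autorelease]; // MRC style, as in the question's era
    [self.view addSubview:square];
}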

core graphics, how to draw lines at runtime?

The task is to draw paths at runtime on custom maps, which I'm using in a scroll view, and then to keep drawing paths at runtime whenever the location coordinates (lat, long) update. I have made a class 'graphics', a subclass of UIView, in which I do the drawing in the drawRect: method. When I add the graphics view as a subview of the scroll view, over the image, the line draws, but I need to keep drawing the line as the path grows: I need to draw the lines at runtime and keep updating the (x, y) points passed to CGContextStrokeLineSegments. The code:
ViewController:
- (void)loadView {
    [[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];
    CGRect fullScreenRect = [[UIScreen mainScreen] applicationFrame];
    scrollView = [[UIScrollView alloc] initWithFrame:fullScreenRect];
    graph = [[graphics alloc] initWithFrame:fullScreenRect];
    scrollView.contentSize = CGSizeMake(320, 480);
    UIImageView *tempImageView2 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"fortuneCenter.png"]];
    self.view = scrollView;
    [scrollView addSubview:tempImageView2];
    scrollView.userInteractionEnabled = YES;
    scrollView.bounces = NO;
    [scrollView addSubview:graph];
}
Graphics.m:
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.backgroundColor = [UIColor clearColor];
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGPoint point[2] = { CGPointMake(160, 100), CGPointMake(160, 300) };
    CGContextSetRGBStrokeColor(context, 255, 0, 255, 1);
    CGContextStrokeLineSegments(context, point, 2);
}
So how can I draw the lines at runtime? I'm just simulating right now, so I'm not using real-time data (coordinates), just dummy (x, y) coordinates. Let's say I have a button: whenever I press it, it updates the coordinates so the path extends.
The easiest way would be to add an instance variable representing the points to the UIView subclass.
Then, every time the path changes, update the ivar appropriately and call -setNeedsDisplay or -setNeedsDisplayInRect: on the custom UIView (or even on its superview). The runtime will then redraw the new path.
You just need to make CGPoint point[] dynamically resizable, from the looks of it.
You can use malloc, a std::vector, or even NSMutableData to store the points you add. Then you pass that array to CGContextStrokeLineSegments.
If two points are all you will need, move CGPoint point[2] to an ivar so you can store the positions, then (as Rich noted) invalidate rects appropriately when these values (or the array) change.
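A minimal sketch of that idea using NSMutableData as the backing store (the addPathPoint: method and ivar name are made up for illustration; CGContextAddLines is used here to connect the stored points into one continuous path, whereas CGContextStrokeLineSegments, as in the question, treats them as independent pairs):
// In graphics.h (sketch): a growable point store plus a method to extend the path.
@interface graphics : UIView {
    NSMutableData *pointData;   // holds a C array of CGPoint
}
- (void)addPathPoint:(CGPoint)point;
@end

// In graphics.m (sketch):
- (void)addPathPoint:(CGPoint)point {
    if (pointData == nil) {
        pointData = [[NSMutableData alloc] init];
    }
    [pointData appendBytes:&point length:sizeof(CGPoint)];
    [self setNeedsDisplay];   // trigger drawRect: with the updated points
}

- (void)drawRect:(CGRect)rect {
    NSUInteger count = [pointData length] / sizeof(CGPoint);
    if (count < 2) {
        return;
    }
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 1.0, 1.0);
    CGContextAddLines(context, (const CGPoint *)[pointData bytes], count);
    CGContextStrokePath(context);
}
A button handler in the view controller could then extend the path with something like [graph addPathPoint:CGPointMake(180.0, 320.0)];.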
This subject comes up every now and then, so I created a longer blog post on the general concepts involved with one potential solution, creating and using your own graphics context, here: http://www.musingpaw.com/2012/04/drawing-in-ios-apps.html

Draw a Path - iPhone

I am animating an object along a path on the iPhone. The code works great using CAKeyframeAnimation. For debugging purposes, I would like to draw the line on the screen. I can't seem to do that using the current CGContextRef. Everywhere I look it says you need to subclass UIView and override the drawRect: method to draw on the screen. Is there any way around this? If not, how do I pass data to the drawRect: method so it knows what to draw?
EDIT:
Ok. I ended up subclassing UIView and implementing the drawing in the drawRect: method. I can get it to draw to the screen by creating another method in the subclassed view (called drawPath) that sets an instance variable and then calls setNeedsDisplay. That in turn fires the drawRect: method, which uses the instance variable to draw to the screen. Is this the best practice? What happens if I want to draw 20+ paths? I shouldn't have to create properties for all of these.
In the drawRect: method of your UIView, put some code like this:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0); // opaque red; an alpha of 0.0 would make the line invisible
CGContextSetLineWidth(context, 3.f);
CGContextBeginPath(context);
CGContextMoveToPoint(context, x1, y1);
CGContextAddLineToPoint(context, x2, y2);
CGContextStrokePath(context);
If you want to use Core Graphics drawing use CALayer. If you do not want to subclass it, delegate drawing to the view.
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self)
    {
        // UIView's layer property is read-only, so add the custom layer as a sublayer
        CALayer *myLayer = [CALayer layer];
        myLayer.frame = self.bounds;
        myLayer.delegate = self;
        [self.layer addSublayer:myLayer];
    }
    return self;
}

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    // draw your path here
}
Do not forget to call setNeedsDisplay on that layer when you need to redraw it.

Why animating custom CALayer properties causes other properties to be nil during animation?

I have a custom CALayer (say CircleLayer), containing custom properties (radius and tint). The layer renders itself in its drawInContext: method.
- (void)drawInContext:(CGContextRef)ctx {
    NSLog(@"Drawing layer, tint is %@, radius is %@", self.tint, self.radius);
    CGPoint centerPoint = CGPointMake(CGRectGetWidth(self.bounds) / 2, CGRectGetHeight(self.bounds) / 2);
    CGContextMoveToPoint(ctx, centerPoint.x, centerPoint.y);
    CGContextAddArc(ctx, centerPoint.x, centerPoint.y, [self.radius doubleValue], radians(0), radians(360), 0);
    CGContextClosePath(ctx);
    /* Filling it */
    CGContextSetFillColorWithColor(ctx, self.tint.CGColor);
    CGContextFillPath(ctx);
}
I want the radius to be animatable so I've implemented
+ (BOOL)needsDisplayForKey:(NSString *)key {
    if ([key isEqualToString:@"radius"]) {
        return YES;
    }
    return [super needsDisplayForKey:key];
}
And the animation is performed like this:
CABasicAnimation *theAnimation = [CABasicAnimation animationWithKeyPath:@"radius"];
theAnimation.duration = 2.0;
theAnimation.fromValue = [NSNumber numberWithDouble:100.0];
theAnimation.toValue = [NSNumber numberWithDouble:50.0];
theAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
[circleLayer addAnimation:theAnimation forKey:@"animateRadius"];
circleLayer.radius = [NSNumber numberWithDouble:50.0];
drawInContext: gets called as expected during the animation to redraw the circle, however the tint is set to nil as soon as the animation starts and gets back to its original value when the animation ends.
I've concluded that if I want to animate a custom property and want other properties to keep their values during the animation, I have to animate them too, which I don't find convenient at all.
The purpose is not to grow/shrink a circle, I know I can use transformation for this. It is only to illustrate with a simple example the problem of animating a single custom property without having to animate all the other ones.
I've made a simple project illustrating the issue, which you can find here:
Sample project illustrating the issue
There is probably something I didn't get on how CoreAnimation works, I've performed intensive searching but I'm stuck with no clue. Anyone knows?
If I understood your question correctly, it goes like this. When you add an animation to a CALayer, it creates a so-called presentation copy of that layer using initWithLayer:. The presentation layer contains the actual animated state for each animation frame, while the original layer has the final state. The problem with animating your own properties is that CALayer does not copy them all in initWithLayer:. If that's your case, you should override initWithLayer: and set up all the properties you need for animation, that is, both tint and radius.
+ (BOOL)needsDisplayForKey:(NSString *)key {
    if ([key isEqualToString:@"radius"] || [key isEqualToString:@"tint"]) {
        return YES;
    }
    return [super needsDisplayForKey:key];
}
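The initWithLayer: override described above might look roughly like this (a sketch assuming the subclass is called CircleLayer, as in the question):
- (id)initWithLayer:(id)layer {
    self = [super initWithLayer:layer];
    if (self && [layer isKindOfClass:[CircleLayer class]]) {
        CircleLayer *other = (CircleLayer *)layer;
        // Copy the custom properties so the presentation copy can draw with them
        self.tint = other.tint;
        self.radius = other.radius;
    }
    return self;
}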
The animation may require all properties of the context to respond to a refresh.

UI design - Best way to deal with many UIButtons

I'm running into problems when dealing with a large amount of UIButtons in my interface. I was wondering if anyone had first hand experience with this and how they did it?
When dealing with 30-80 buttons (most of them simple, a couple complex), do you just use UIButton, or do you do something different, like drawRect:, responding to touch events and getting the coordinates of the touch?
The best example is a calendar, similar to Apple's Calendar app. Would you just draw most of the days using drawRect: and then, when a day is tapped, replace it with an image, or would you just use UIButtons? It's not so much the memory footprint or the cost of creating the buttons; it's that strange things sometimes happen with them (see my previous question about it) and I'm having performance issues animating them.
Thanks for any help.
If "strange things are happening" with your buttons, you need to get to the bottom of why. Switching architectures just to avoid a problem that you don't understand (and might crop up again) doesn't sound like a good idea.
-drawRect: works by drawing to a bitmap-backed context. This happens when -displayIfNeeded is called after -setNeedsDisplay (or doing something else that implicitly sets the needsDisplay flag, like resizing a view with contentMode = UIContentModeRedraw). The bitmap-backed context is then composited to screen.
Buttons work by putting the different components (background image, foreground image, text) in different layers. The text is drawn when it changes and composited to the screen; the images are just composited directly to the screen.
The "best" way to do things is usually a combination of the two. For example, you might draw text and a background image in -drawRect: so the different layers didn't need to be composited at render time (you get an additional speedup if your view is "opaque"). You probably want to avoid full-screen animations via drawRect: (and it won't integrate so well with CoreAnimation), since drawing tends to be more expensive than compositing.
But first, I'd find out what's going wrong with UIButton. There's little point worrying about how you could make things faster until you actually find out what the slow bits are. Write code so that it is easy to maintain. UIButton is not that expensive and -drawRect: is not that bad (presumably it's even better if you use -setNeedsDisplayInRect: for a smallish rect, but then you need to calculate the rect...), but if you want a button, use UIButton.
Instead of using 30-80 UIButtons I would prefer using images (if possible a single image, or as few as possible) and comparing the touch location.
And if I must create buttons, then obviously I will not create 30-80 variables for them. I will set and read the view tag to determine which one was tapped.
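A minimal sketch of that tag approach (the method names, count, and grid geometry here are placeholders, not from the answer):
// In the view controller: create the buttons in a loop and give each one a tag.
- (void)buildButtons {
    for (NSInteger i = 0; i < 40; i++) {
        UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        button.frame = CGRectMake(10 + (i % 8) * 38, 10 + (i / 8) * 38, 34, 34);
        button.tag = i;
        [button addTarget:self action:@selector(buttonTapped:)
         forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:button];
    }
}

// One action method for all of them; the tag identifies which button was tapped.
- (void)buttonTapped:(UIButton *)sender {
    NSLog(@"Tapped button %d", (int)sender.tag);
}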
If this is all stuff you are animating then you could create a bunch of CALayers with their contents set to a CGImage. You would have to compare the touch location to identify the layer. CALayers have a useful style property that is an NSDictionary you can store meta-data in.
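A rough sketch of that layer-based approach (the image name, metadata key, and tile size are hypothetical):
- (void)addTileAtX:(CGFloat)x y:(CGFloat)y {
    // One layer per tile, its contents set to a CGImage.
    CALayer *tileLayer = [CALayer layer];
    tileLayer.frame = CGRectMake(x, y, 34, 34);
    tileLayer.contents = (id)[UIImage imageNamed:@"tile.png"].CGImage;
    // The style dictionary is free-form; use it to stash metadata about the tile.
    tileLayer.style = [NSDictionary dictionaryWithObject:[NSValue valueWithCGPoint:CGPointMake(x, y)]
                                                  forKey:@"origin"];
    [self.view.layer addSublayer:tileLayer];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Identify the touched tile by comparing the touch location to each layer's frame.
    CGPoint p = [[touches anyObject] locationInView:self.view];
    for (CALayer *layer in self.view.layer.sublayers) {
        if (CGRectContainsPoint(layer.frame, p)) {
            NSLog(@"Touched tile with metadata %@", [layer.style objectForKey:@"origin"]);
        }
    }
}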
I just use the UIButtons unless there happens to be a specific performance issue that crops up. If they have similar functionality, however, such as a keyboard, I map them all to one IBAction and differentiate the behavior based on the sender.
What specific performance and animation issues are you running into?
I recently ran across this problem myself when developing a game for the iPhone. I was using UIButtons to hold game tiles, then stylized them with transparent images, background colors and text.
It all worked well for a small number of tiles. Once we got to about 50, however, the performance dropped significantly. After scouring Google I discovered that others had experienced the same problem. It seems the iPhone struggles with lots of transparent buttons onscreen at once. Not sure if it's a bug in the UIButton code or just a limitation of the graphics hardware on the device, but either way, it's beyond your control as a programmer.
My solution was to draw the board by hand using Core Graphics. It seemed daunting at first, but in reality it was pretty easy. I just placed one big UIImageView on my ViewController in Interface Builder, made it an IBOutlet so I could alter it from Objective-C, then constructed the image with Core Graphics.
Since a UIImageView doesn't handle taps, I used the touchesBegan method of my UIViewController, and then triangulated the x/y coordinates of the touch to the precise tile on my game board.
The board now renders in less than a tenth of a second. Bingo!
If you need sample code, just let me know.
UPDATE: Here's a simplified version of the code I'm using. Should be enough for you to get the gist.
// CoreGraphicsTestViewController.h
// CoreGraphicsTest

#import <UIKit/UIKit.h>

@interface CoreGraphicsTestViewController : UIViewController {
    UIImageView *testImageView;
}

@property (retain, nonatomic) IBOutlet UIImageView *testImageView;

- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed;

@end
... and the .m file ...
// CoreGraphicsTestViewController.m
// CoreGraphicsTest

#import "CoreGraphicsTestViewController.h"
#import <QuartzCore/QuartzCore.h>
#import <CoreGraphics/CoreGraphics.h>

@implementation CoreGraphicsTestViewController

@synthesize testImageView;

int iTileSize;
int iBoardSize;
int iRow;   // shared with the touch handlers below
int iCol;

- (void)viewDidLoad {
    iTileSize = 75;
    iBoardSize = 3;
    [testImageView setBounds:CGRectMake(0, 0, iBoardSize * iTileSize, iBoardSize * iTileSize)];
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (iRow = 0; iRow < iBoardSize; iRow++) {
        for (iCol = 0; iCol < iBoardSize; iCol++) {
            [self drawTile:context row:iRow col:iCol isPressed:NO];
        }
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
    [super viewDidLoad];
}

- (void)dealloc {
    [testImageView release];
    [super dealloc];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:testImageView];
    if ((location.x >= 0) && (location.y >= 0) && (location.x <= testImageView.bounds.size.width) && (location.y <= testImageView.bounds.size.height)) {
        UIImage *theIMG = testImageView.image;
        CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [theIMG drawInRect:rect];
        iRow = location.y / iTileSize;
        iCol = location.x / iTileSize;
        [self drawTile:context row:iRow col:iCol isPressed:YES];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        [testImageView setImage:image];
        UIGraphicsEndImageContext();
    }
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UIImage *theIMG = testImageView.image;
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [theIMG drawInRect:rect];
    [self drawTile:context row:iRow col:iCol isPressed:NO];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
}
- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed {
    CGRect rrect = CGRectMake((colNum * iTileSize), (rowNum * iTileSize), iTileSize, iTileSize);
    CGContextClearRect(ctx, rrect);
    if (tilePressed) {
        CGContextSetFillColorWithColor(ctx, [[UIColor redColor] CGColor]);
    } else {
        CGContextSetFillColorWithColor(ctx, [[UIColor greenColor] CGColor]);
    }
    CGContextFillRect(ctx, rrect);   // fill the tile background with the chosen color
    UIImage *theImage = [UIImage imageNamed:@"tile.png"];
    [theImage drawInRect:rrect];     // then draw the (transparent) tile image on top
}