This is a problem that I've been leaving and coming back to for a while now. I've never really nailed it.
What I've been trying to do is use CADisplayLink to dynamically draw pie-chart-style progress. My code works fine when I have 1-4 UIViews updating simultaneously. When I add any more than that, the drawing of the pies becomes very jerky.
I want to explain what I have been trying in the hope that somebody could point out the inefficiencies and suggest a better drawing method.
I create 16 UIViews and add a CAShapeLayer sublayer to each one. This is where I want to draw my pie slices.
I precalculate 360 CGPaths representing 0 to 360 degrees of a circle and store them in an array to try to improve performance.
In a master view I start a display link, loop through all my other views, calculate how much of a full pie each should show, then find the right path and assign it to my shape layer.
-(void)makepieslices
{
    // 361 entries: one path for each whole degree from 0 to 360.
    pies = [[NSMutableArray alloc] initWithCapacity:361];
    float progress = 0;
    for (int i = 0; i <= 360; i++)
    {
        progress = (i * M_PI) / 180;
        CGMutablePathRef thePath = CGPathCreateMutable();
        CGPathMoveToPoint(thePath, NULL, 0.f, 0.f);
        CGPathAddLineToPoint(thePath, NULL, 28, 0.f);
        CGPathAddArc(thePath, NULL, 0.f, 0.f, 28, 0.f, progress, NO);
        CGPathCloseSubpath(thePath);
        [pies addObject:(__bridge_transfer id)thePath];
    }
}
- (void)updatePath:(CADisplayLink *)dLink {
    for (int idx = 0; idx < [spinnydelegates count]; idx++) {
        id<SyncSpinUpdateDelegate> delegate = [spinnydelegates objectAtIndex:idx];
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            [delegate updatePath:dLink];
        });
    }
}
- (void)updatePath:(CADisplayLink *)dLink {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        currentarc = [engineref getsyncpercentForPad:cid pad:pid];
        int progress = roundf(currentarc * 360);
        dispatch_async(dispatch_get_main_queue(), ^{
            shapeLayer_.path = (__bridge CGPathRef)[pies objectAtIndex:progress];
        });
    });
}
This technique just straight out isn't working for me when trying to simultaneously update more than 4 or 5 pies at the same time. 16 screen updates at the same time really shouldn't be that big a deal for the iPad, so this leads me to think I'm doing something very fundamentally wrong.
I'd really appreciate it if somebody could tell me why this technique results in jittery screen updates, and also suggest a different technique that I could go and investigate that will allow me to perform 16 simultaneous shape layer updates smoothly.
EDIT Just to give you an idea of how bad performance is: when I have all 16 pies drawing, the CPU goes up to 20%.
EDIT 2
This is based on studevs' advice below, but I don't see anything being drawn. segmentLayer is a CGLayerRef property of my pie view.
-(void)makepies
{
    self.layerobjects = [NSMutableArray arrayWithCapacity:360];
    CGFloat progress = 0;
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (int i = 0; i < 360; i++)
    {
        progress = (i * M_PI) / 180.0f;
        CGLayerRef segmentlayer = CGLayerCreateWithContext(context, CGSizeMake(30, 30), NULL);
        CGContextRef layerContext = CGLayerGetContext(segmentlayer);
        CGMutablePathRef thePath = CGPathCreateMutable();
        CGPathMoveToPoint(thePath, NULL, 0.f, 0.f);
        CGPathAddLineToPoint(thePath, NULL, 28, 0.f);
        CGPathAddArc(thePath, NULL, 0.f, 0.f, 28, 0.f, progress, NO);
        CGPathCloseSubpath(thePath);
        [layerobjects addObject:(__bridge id)segmentlayer];
        CGLayerRelease(segmentlayer);
    }
}
-(void)updatePath
{
    int progress;
    currentarc = [engineref getsyncpercent];
    progress = roundf(currentarc * 360);
    //shapeLayer_.path = _pies[progress];
    self.pieView.segmentLayer = (__bridge CGLayerRef)[layerobjects objectAtIndex:progress];
    [self.pieView setNeedsDisplay];
}
-(void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawLayerInRect(context, self.bounds, segmentLayer);
}
I think one of the first things you should look to do is buffer your segments (currently represented by CGPath objects) offscreen using CGLayer objects. From the docs:
Layers are suited for the following:

- High-quality offscreen rendering of drawing that you plan to reuse. For example, you might be building a scene and plan to reuse the same background. Draw the background scene to a layer and then draw the layer whenever you need it. One added benefit is that you don't need to know color space or device-dependent information to draw to a layer.
- Repeated drawing. For example, you might want to create a pattern that consists of the same item drawn over and over. Draw the item to a layer and then repeatedly draw the layer, as shown in Figure 12-1. Any Quartz object that you draw repeatedly—including CGPath, CGShading, and CGPDFPage objects—benefits from improved performance if you draw it to a CGLayer. Note that a layer is not just for onscreen drawing; you can use it for graphics contexts that aren't screen-oriented, such as a PDF graphics context.
Create a UIView subclass that draws the pie. Give it an instance variable for that pie's current progress, and override drawRect: to draw the layer representing that progress. The view first needs to get a reference to the required CGLayer object, so implement a delegate with the method:
- (CGLayerRef)pieView:(PieView *)pieView segmentLayerForProgress:(NSInteger)progress context:(CGContextRef)context;
It will then become the delegate's job to return an existing CGLayerRef, or if it doesn't exist yet, create it. Since the CGLayer can only be created from within drawRect:, this delegate method should be called from PieView's drawRect: method. PieView should look something like this:
PieView.h
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
@class PieView;

@protocol PieViewDelegate <NSObject>
@required
- (CGLayerRef)pieView:(PieView *)pieView segmentLayerForProgress:(NSInteger)progress context:(CGContextRef)context;
@end

@interface PieView : UIView

@property(nonatomic, weak) id <PieViewDelegate> delegate;
@property(nonatomic) NSInteger progress;

@end
PieView.m
#import "PieView.h"
@implementation PieView

@synthesize delegate, progress;

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGLayerRef segmentLayer = [delegate pieView:self segmentLayerForProgress:self.progress context:context];
    CGContextDrawLayerInRect(context, self.bounds, segmentLayer);
}

@end
Your PieView's delegate (most likely your view controller) then implements:
NSString *const SegmentCacheKey = @"SegmentForProgress:";
- (CGLayerRef)pieView:(PieView *)pieView segmentLayerForProgress:(NSInteger)progress context:(CGContextRef)context
{
    // First, try to retrieve the layer from the cache
    NSString *cacheKey = [SegmentCacheKey stringByAppendingFormat:@"%d", progress];
    CGLayerRef segmentLayer = (__bridge CGLayerRef)[segmentsCache objectForKey:cacheKey];

    if (!segmentLayer) { // If the layer hasn't been created yet
        CGFloat progressAngle = (progress * M_PI) / 180.0f;

        // Create the layer
        segmentLayer = CGLayerCreateWithContext(context, layerSize, NULL);
        CGContextRef layerContext = CGLayerGetContext(segmentLayer);

        // Draw the segment
        CGContextSetFillColorWithColor(layerContext, [[UIColor blueColor] CGColor]);
        CGContextMoveToPoint(layerContext, layerSize.width / 2.0f, layerSize.height / 2.0f);
        CGContextAddArc(layerContext, layerSize.width / 2.0f, layerSize.height / 2.0f, layerSize.width / 2.0f, 0.0f, progressAngle, NO);
        CGContextClosePath(layerContext);
        CGContextFillPath(layerContext);

        // Cache the layer
        [segmentsCache setObject:(__bridge_transfer id)segmentLayer forKey:cacheKey];
    }

    return segmentLayer;
}
So for each pie, create a new PieView and set its delegate. When you need to update a pie, update the PieView's progress property and call setNeedsDisplay.
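For reference, the wiring in the delegate (most likely your view controller) might look roughly like this. This is a sketch only; the names segmentsCache, layerSize and pieViews, and the 4x4 grid layout, are assumptions rather than part of the answer's code.

- (void)viewDidLoad
{
    [super viewDidLoad];

    segmentsCache = [[NSCache alloc] init];       // assumed NSCache ivar used above
    layerSize = CGSizeMake(56.0f, 56.0f);         // assumed CGSize ivar used above
    pieViews = [NSMutableArray arrayWithCapacity:16];

    for (int i = 0; i < 16; i++) {
        PieView *pieView = [[PieView alloc] initWithFrame:CGRectMake(10 + (i % 4) * 70,
                                                                     10 + (i / 4) * 70,
                                                                     56, 56)];
        pieView.delegate = self;
        [self.view addSubview:pieView];
        [pieViews addObject:pieView];
    }
}

// Later, when one pie's progress changes:
- (void)updatePie:(NSInteger)index toProgress:(NSInteger)progress
{
    PieView *pieView = [pieViews objectAtIndex:index];
    pieView.progress = progress;
    [pieView setNeedsDisplay];
}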
I'm using an NSCache here since there are a lot of graphics being stored, and it could take up a lot of memory. You could also limit the number of segments being drawn - 100 is probably plenty. Also, I agree with other comments/answers that you might try updating the views less often, as this will consume less CPU and battery power (60fps is probably not necessary).
I did some crude testing of this method on an iPad (1st gen) and managed to get well over 50 pies updating at 30fps.
dubbeat: ...CADisplayLink...
Justin: do you need to draw at the display's refresh rate?
dubbeat: The progress of the pie drawing is supposed to represent an MP3's playback progress, so I guess the display's refresh rate at a minimum.
That's much faster than is necessary, unless you're trying to display some really, really, really exotic visualizer, which is very unlikely if your spinner's radius is 28pt. Also, there's no reason to draw faster than the display's frequency.
One side effect is that your spinner's superviews may also be updating at this high frequency. If you can make the spinner view opaque, you can reduce overdrawing of superviews (and subviews, if you have them).
60fps is a good number for a really fast desktop game. For an ornament/progress bar, it's far more than necessary.
Try this:

- don't use CADisplayLink; use the standard view system
- use an NSTimer on the main run loop, beginning with a frequency of 8 Hz*
- adjust the timer to taste

then let us know if that is adequately fast.

*the timer callback calls [spinner setNeedsDisplay]
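A minimal sketch of that setup, assuming the view being refreshed is held in an ivar called spinner and the timer in refreshTimer (both names are placeholders):

- (void)startRefreshTimer
{
    // 8 Hz to start; raise or lower the interval to taste.
    refreshTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 8.0)
                                                    target:self
                                                  selector:@selector(refreshSpinner:)
                                                  userInfo:nil
                                                   repeats:YES];
}

- (void)refreshSpinner:(NSTimer *)timer
{
    // The spinner reads its current progress inside drawRect:, so the timer
    // only needs to mark the view dirty.
    [spinner setNeedsDisplay];
}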
Well, you could achieve some performance improvement by pre-assembling the background view, capturing an image of it, and then just using that image in an image view for the background. You could go further by capturing an image of the "relatively static" parts of your chart, updating that static view only when necessary.
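A sketch of that snapshot step, assuming the static content lives in a view called backgroundView (a placeholder name):

// Render the static view once into a UIImage.
UIGraphicsBeginImageContextWithOptions(backgroundView.bounds.size, backgroundView.opaque, 0.0f);
[backgroundView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Display the snapshot instead of the live view hierarchy.
UIImageView *backgroundImageView = [[UIImageView alloc] initWithImage:snapshot];
backgroundImageView.frame = backgroundView.frame;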
Store your 360 circle segments as textures and use OpenGL to animate the sequences.
Related
So I've got a basic drawing app in progress that allows me to draw lines. I draw to an offscreen bitmap, then present the image in drawRect:. It works, but it's way too slow, updating about half a second after you've drawn with your finger. I took the code and adapted it from this tutorial, http://www.youtube.com/watch?v=UfWeMIL-Nu8&feature=relmfu ; as you can see in the comments, people are also saying it's too slow, but the author hasn't responded.
So how can I speed it up, or is there a better way to do it? Any pointers will be appreciated.
Here's the code in my DrawView.m.
-(id)initWithCoder:(NSCoder *)aDecoder {
    if ((self = [super initWithCoder:aDecoder])) {
        [self setUpBuffer];
    }
    return self;
}

-(void)setUpBuffer {
    CGContextRelease(offscreenBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    offscreenBuffer = CGBitmapContextCreate(NULL, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextTranslateCTM(offscreenBuffer, 0, self.bounds.size.height);
    CGContextScaleCTM(offscreenBuffer, 1.0, -1.0);
}

-(void)drawToBuffer:(CGPoint)coordA :(CGPoint)coordB :(UIColor *)penColor :(int)thickness {
    CGContextBeginPath(offscreenBuffer);
    CGContextMoveToPoint(offscreenBuffer, coordA.x, coordA.y);
    CGContextAddLineToPoint(offscreenBuffer, coordB.x, coordB.y);
    CGContextSetLineWidth(offscreenBuffer, thickness);
    CGContextSetLineCap(offscreenBuffer, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(offscreenBuffer, [penColor CGColor]);
    CGContextStrokePath(offscreenBuffer);
}

- (void)drawRect:(CGRect)rect {
    CGImageRef cgImage = CGBitmapContextCreateImage(offscreenBuffer);
    UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    [image drawInRect:self.bounds];
}
It works perfectly on the simulator but not on the device; I imagine that's something to do with processor speed.
I'm using ARC.
I tried to fix your code, but you only seem to have posted half of it, so I couldn't get it working (copying and pasting the code results in lots of errors, let alone performance tuning it).
However there are some tips you can use to VASTLY improve performance.
The first, and probably most noticeable, is -setNeedsDisplayInRect: rather than -setNeedsDisplay. This means it only redraws the little rect that changed. For an iPad 3, with 1024*768*4 pixels, that is a lot of work. Reducing that down to about 20*20 or less for each frame will massively improve performance.
CGRect rect;
rect.origin.x = MIN(coordA.x, coordB.x) - (thickness * 0.5);
rect.size.width = (MAX(coordA.x, coordB.x) + (thickness * 0.5)) - rect.origin.x;
rect.origin.y = MIN(coordA.y, coordB.y) - (thickness * 0.5);
rect.size.height = (MAX(coordA.y, coordB.y) + (thickness * 0.5)) - rect.origin.y;

[self setNeedsDisplayInRect:rect];
Another big improvement you could make is to only draw the CGPath for the current touch (which you do). However, you then draw that saved/cached image in drawRect:, so, again, it is redrawn each frame. A better approach is to make the draw view transparent and then use a UIImageView behind it. UIImageView is the best way to display images on iOS.
- DrawView (1 finger)
    - drawRect:
- BackgroundView (the image of the old touches)
    - self.image
The draw view itself would then only ever draw the current touch, and only the part that changes each time. When the user lifts their finger you can cache that to a UIImage, draw it over the current background/cache UIImageView's image, and set the imageView.image to the new image.
That final bit, combining the images, involves drawing two full-screen images into an offscreen CGContext and so will cause lag if done on the main thread. Instead it should be done on a background thread, with the result then pushed back to the main thread.
* touch starts *
- DrawView : draw current touch
* touch ends *
- 'background thread' : combine backgroundView.image and DrawView.drawRect
* thread finished *
send resulting UIImage to main queue and set backgroundView.image to it;
Clear DrawView's current path that is now in the cache;
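In code, that combine step might look roughly like this. This is a sketch under a few assumptions: the in-progress stroke is a UIBezierPath ivar called currentPath, the cached image lives in a UIImageView called backgroundView, and drawView is the transparent drawing view; all three names are placeholders.

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UIImage *oldImage = backgroundView.image;
    UIBezierPath *finishedPath = [currentPath copy];
    CGSize size = backgroundView.bounds.size;

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Combine the existing background with the just-finished stroke offscreen.
        UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
        [oldImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
        [[UIColor blackColor] setStroke];
        [finishedPath stroke];
        UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            backgroundView.image = combined;
            // The stroke now lives in the cached image, so clear the live path.
            [currentPath removeAllPoints];
            [drawView setNeedsDisplay];
        });
    });
}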
All of this combined can make a very smooth 60fps drawing app. However, views are not updated as quickly as we'd like, so the drawing looks jagged when you move your finger quickly. This can be improved by using UIBezierPaths instead of CGPaths.
CGPoint currentPoint = [touch locationInView:self];
CGPoint lastPoint = [touch previousLocationInView:self];
CGPoint mid = midPoint(currentPoint, lastPoint); // midPoint averages the two points
[path addQuadCurveToPoint:mid controlPoint:lastPoint];
The reason it is slow is because every frame you are creating a bitmap and trying to draw that.
You asked for better ways of doing it? Have you looked at the Apple sample code for a drawing app on iOS? If you don't like that, then you can always use cocos2d, which provides a CCRenderTexture class (and sample code).
Currently, you are using a method which you already know is not efficient.
With this approach, I suppose you should consider using a background thread for all the hard work of image rendering and the main thread for UI updates only, i.e.:
__block UIImage *__imageBuffer = nil;

- (UIImage *)drawSomeImage
{
    UIGraphicsBeginImageContext(self.bounds.size);
    // draw image with Core Graphics
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

- (void)updateUI
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // prepare image on background thread
        __imageBuffer = [self drawSomeImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            // calling drawRect: with the prepared image
            [self setNeedsDisplay];
        });
    });
}

- (void)drawRect:(CGRect)rect
{
    // draw image buffer into current context
    [__imageBuffer drawInRect:self.bounds];
}
I am omitting some details to make the optimization clearer. Even better, switch to UIImageView. That way you can get rid of the expensive -drawRect: method entirely and just update the image property of the UIImageView when the image is ready.
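A sketch of that UIImageView variant, assuming the view controller exposes the image view as an imageView property (an assumed name, not from the original answer):

- (void)updateUI
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *image = [self drawSomeImage];   // render offscreen as above
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = image;        // no drawRect: override needed
        });
    });
}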
Well, I think you need to change your logic. You may get some very good ideas with the help of this link:
http://devmag.org.za/2011/04/05/bzier-curves-a-tutorial/
And if you don't have time to work through that, you can go directly to this code: https://github.com/levinunnink/Smooth-Line-View :) I hope this helps you a lot.
Use CGLayer for caching your paths and read the docs; it's best for optimization.
I did something exactly like this. Check out the Pixelate app on the App Store. In order to draw, I used tiles in my code. After all, when you touch the screen and draw something you need to re-draw the entire image, which is a very heavy operation. If you like the way Pixelate is moving, here's how I did it:
1) Split my image into n x m tiles. That way I could change those values and obtain bigger/smaller tiles. In the worst case scenario (the user taps at the intersection of 4 tiles) you have to re-draw those 4 tiles, not the entire image.
2) Make a 3-dimensional matrix in which I stored the pixel information of each tile. So matrix[0][0][0] was the red value (each pixel has an RGB or RGBA value, depending on whether you are using PNGs or JPGs) of the first pixel of the first tile.
3) Get the location the user pressed and calculate the tiles that need to be modified (see the sketch after this list).
4) Modify the values in the matrix and update the tiles that need to update.
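A sketch of step 3, mapping a touch point and brush radius to the range of affected tiles. The variable names (touchPoint, brushRadius, tileWidth, tileHeight, numCols, numRows) are assumptions for illustration, not from the original description.

int firstCol = MAX(0, (int)floorf((touchPoint.x - brushRadius) / tileWidth));
int lastCol  = MIN(numCols - 1, (int)floorf((touchPoint.x + brushRadius) / tileWidth));
int firstRow = MAX(0, (int)floorf((touchPoint.y - brushRadius) / tileHeight));
int lastRow  = MIN(numRows - 1, (int)floorf((touchPoint.y + brushRadius) / tileHeight));

for (int row = firstRow; row <= lastRow; row++) {
    for (int col = firstCol; col <= lastCol; col++) {
        // Update the pixel matrix for this tile, then redraw just its rect.
        [self setNeedsDisplayInRect:CGRectMake(col * tileWidth, row * tileHeight,
                                               tileWidth, tileHeight)];
    }
}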
NOTE: This most certainly isn't the best option. It's just an alternative. I mentioned it because I think it is close to what you have right now. And it worked for me on an iPhone 3GS. If you are targeting >= iPhone 4 , you should be more than ok.
Regards,
George
The method you've suggested is way too inefficient, because creating the image every time you move your finger is inappropriate.
If it's just paths that you need to draw, then keep a CGMutablePathRef as a member variable, and in drawRect: just move to the specified point using the CGPath functions.
More importantly, while refreshing the view, call setNeedsDisplayInRect: passing only the area that you need to draw. Hope it works for you.
Do you know how to create an animation like the Blue Marble drop User-Location in MKMapView?
Although I am not sure on the specifics of how Apple accomplished this effect, this feels to me like a great opportunity to use CoreAnimation and custom animatable properties. This post provides some nice background on the subject. I assume by the "Blue Marble drop" animation you're referring to the following sequence:
- Large light blue circle zooms into frame
- Large light blue circle oscillates between two relatively large radii as location is calculated
- Large light blue circle zooms into small darker blue circle on the user's location
Although this may be simplifying the process slightly, I think it's a good place to start and more complex/detailed functionality can be added with relative ease (i.e. the small dark circle pulsing as larger circle converges on it.)
The first thing we need is a custom CALayer subclass with a custom property for our outer large light blue circles radius:
#import <QuartzCore/QuartzCore.h>
@interface CustomLayer : CALayer

@property (nonatomic, assign) CGFloat circleRadius;

@end
and the implementation:
#import "CustomLayer.h"
@implementation CustomLayer

@dynamic circleRadius; // Linked post tells us to let CA implement our accessors for us.
                       // Whether this is necessary or not is unclear to me and one
                       // commenter on the linked post claims success only when using
                       // @synthesize for the animatable property.

+ (BOOL)needsDisplayForKey:(NSString *)key {
    // Let our layer know it has to redraw when circleRadius is changed
    if ([key isEqualToString:@"circleRadius"]) {
        return YES;
    } else {
        return [super needsDisplayForKey:key];
    }
}
- (void)drawInContext:(CGContextRef)ctx {
    // This call is probably unnecessary as super's implementation does nothing
    [super drawInContext:ctx];

    CGRect rect = CGContextGetClipBoundingBox(ctx);

    // Fill the circle with a light blue
    CGContextSetRGBFillColor(ctx, 0, 0, 1, 0.1);
    // Stroke a dark blue border
    CGContextSetRGBStrokeColor(ctx, 0, 0, 1, 0.5);

    // Construct a CGMutablePath to draw the light blue circle
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddArc(path, NULL, rect.size.width / 2,
                             rect.size.height / 2,
                             self.circleRadius, 0, 2 * M_PI, NO);

    // Fill the circle
    CGContextAddPath(ctx, path);
    CGContextFillPath(ctx);

    // Stroke the circle's border
    CGContextAddPath(ctx, path);
    CGContextStrokePath(ctx);

    // Release the path
    CGPathRelease(path);

    // Set a dark blue color for the small inner circle
    CGContextSetRGBFillColor(ctx, 0, 0, 1, 1.0f);

    // Draw the center dot
    CGContextBeginPath(ctx);
    CGContextAddArc(ctx, rect.size.width / 2,
                         rect.size.height / 2,
                         5, 0, 2 * M_PI, NO);
    CGContextFillPath(ctx);
    CGContextStrokePath(ctx);
}

@end
With this infrastructure in place, we can now animate the radius of the outer circle with ease, because Core Animation will take care of the value interpolation as well as the redraw calls. All we have to do is add an animation to the layer. As a simple proof of concept, I chose a simple CAKeyframeAnimation to go through the 3-stage animation:
// In some controller class...
- (void)addLayerAndAnimate {
    CustomLayer *customLayer = [[CustomLayer alloc] init];
    // Make the layer big enough for the initial radius
    // EDIT: You may want to shrink the layer when it reaches its final size
    [customLayer setFrame:CGRectMake(0, 0, 205, 205)];
    [self.view.layer addSublayer:customLayer];

    CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"circleRadius"];
    // Zoom in, oscillate a couple times, zoom in further
    animation.values = [NSArray arrayWithObjects:[NSNumber numberWithFloat:100],
                                                 [NSNumber numberWithFloat:45],
                                                 [NSNumber numberWithFloat:50],
                                                 [NSNumber numberWithFloat:45],
                                                 [NSNumber numberWithFloat:50],
                                                 [NSNumber numberWithFloat:45],
                                                 [NSNumber numberWithFloat:20],
                                                 nil];
    // We want the radius to be 20 in the end
    customLayer.circleRadius = 20;
    // Rather arbitrary values. I thought the cubic pacing with a 2.5 second duration
    // looked decent enough, but you'd probably want to play with them to get a more
    // accurate imitation of the Maps app. You could also define a keyTimes array for
    // more discrete control of the times per step.
    animation.duration = 2.5;
    animation.calculationMode = kCAAnimationCubicPaced;
    [customLayer addAnimation:animation forKey:nil];
}
The above is a rather "hacky" proof of concept as I am not sure of the specific way in which you intend to use this effect. For example, if you wanted to oscillate the circle until data was ready, the above wouldn't make a lot of sense because it will always oscillate twice.
Some closing notes:
- Again, I am not sure of your intent for this effect. If, for example, you're adding it to an MKMapView, the above may require some tweaking to integrate with MapKit.
- The linked post suggests the above method requires the version of Core Animation in iOS 3.0+ and OS X 10.6+.
- Speaking of the linked post (as I did often), much credit and thanks to Ole Begemann, who wrote it and did a wonderful job explaining custom properties in Core Animation.

EDIT: Also, for performance reasons, you're probably going to want to make sure the layer is only as big as it needs to be. That is, after you're done animating from the larger size, you may want to scale the layer down so you're only using/drawing as much room as necessary. A nice way to do this would be to find a way to animate the bounds (as opposed to circleRadius) and drive this animation off the size interpolation, but I've had some trouble implementing that (perhaps someone could add some insight on that subject).
Hope this helps,
Sam
Add this to your map object:
myMap.showsUserLocation = TRUE;
Background: I have a custom scroll view (subclassed) that has UIImageViews on it that are draggable. Based on the drags, I need to draw some lines dynamically in a subview of the UIScrollView. (Note I need them in a subview, as at a later point I need to change the opacity of the view.)
So before I spend ages developing the code (I'm a newbie, so it will take me a while), I looked into what I need to do and found some possible ways. I'm just wondering what the right way to do this is:
1. Create a subclass of UIView and use the drawRect: method to draw the line I need (but I'm unsure how to make it dynamically read in the values)
2. On the subview, use CALayers and draw on there
3. Create a draw-line method using CGContext functions
4. Something else?
Cheers for the help
Conceptually, all your propositions are similar. All of them would lead to the following steps (some of them done invisibly by UIKit):

1. Set up a bitmap context in memory.
2. Use Core Graphics to draw the line into the bitmap.
3. Copy this bitmap to a GPU buffer (a texture).
4. Compose the layer (view) hierarchy using the GPU.

The expensive part of the above steps are the first three points. They lead to repeated memory allocation, memory copying, and CPU/GPU communication. On the other hand, what you really want to do is lightweight: draw a line, probably animating start/end points, width, color, alpha, ...
There's an easy way to do this, completely avoiding the described overhead: use a CALayer for your line, but instead of redrawing the contents on the CPU, just fill it completely with the line's color (by setting its backgroundColor property to the line's color). Then modify the layer's position, bounds, and transform to make the CALayer cover the exact area of your line.
Of course, this approach can only draw straight lines. But it can also be modified to produce more complex visual effects by setting the contents property to an image. You could, for example, have fuzzy edges or a glow effect on the line using this technique.
Though this technique has its limitations, I have used it quite often in different apps on the iPhone as well as on the Mac. It has always had dramatically superior performance to Core Graphics based drawing.
Edit: Code to calculate layer properties:
void setLayerToLineFromAToB(CALayer *layer, CGPoint a, CGPoint b, CGFloat lineWidth)
{
    CGPoint center = { 0.5 * (a.x + b.x), 0.5 * (a.y + b.y) };
    CGFloat length = sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
    CGFloat angle = atan2(a.y - b.y, a.x - b.x);

    layer.position = center;
    layer.bounds = (CGRect) { {0, 0}, { length + lineWidth, lineWidth } };
    layer.transform = CATransform3DMakeRotation(angle, 0, 0, 1);
}
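As a usage sketch (lineLayer, scrollView, viewA and viewB are illustrative names): create the layer once, then just reposition it whenever an endpoint moves; no drawRect: is ever involved.

CALayer *lineLayer = [CALayer layer];
lineLayer.backgroundColor = [[UIColor redColor] CGColor];
[scrollView.layer addSublayer:lineLayer];

// Whenever either endpoint moves:
setLayerToLineFromAToB(lineLayer, viewA.center, viewB.center, 2.0f);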
2nd Edit: Here's a simple test project which shows the dramatic difference in performance between Core Graphics and Core Animation based rendering.
3rd Edit: The results are quite impressive: Rendering 30 draggable views, each connected to each other (resulting in 435 lines) renders smoothly at 60Hz on an iPad 2 using Core Animation. When using the classic approach, the framerate drops to 5 Hz and memory warnings eventually appear.
First, for drawing on iOS you need a context, and when drawing on the screen you cannot get the context outside of drawRect: (UIView) or drawLayer:inContext: (CALayer). This means option 3 is out (if you meant to do it outside a drawRect: method).
You could go for a CALayer, but I'd go for a UIView here. As far as I have understood your setup, you have this:
UIScrollView
| | |
ViewA ViewB LineView
So LineView is a sibling of ViewA and ViewB; it would need to be big enough to cover both ViewA and ViewB, and it is arranged to be in front of both (and has setOpaque:NO set).
The implementation of LineView would be pretty straightforward: give it two properties, point1 and point2, of type CGPoint. Optionally, implement the setPoint1:/setPoint2: methods yourself so that they always call [self setNeedsDisplay]; that way the view redraws itself whenever a point is changed.
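A sketch of those setters, assuming the properties are backed by ivars of the same name (e.g. via @synthesize, as in the drawRect: code below):

@synthesize point1, point2;

- (void)setPoint1:(CGPoint)aPoint
{
    point1 = aPoint;
    [self setNeedsDisplay];
}

- (void)setPoint2:(CGPoint)aPoint
{
    point2 = aPoint;
    [self setNeedsDisplay];
}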
In LineView's drawRect:, all you need to do is draw the line, either with Core Graphics or with UIBezierPath. Which one to use is more or less a matter of taste. If you'd like to use Core Graphics, you do it like this:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Set up color, line width, etc. first.
    CGContextMoveToPoint(context, point1.x, point1.y);
    CGContextAddLineToPoint(context, point2.x, point2.y);
    CGContextStrokePath(context);
}
Using UIBezierPath, it'd look quite similar:
- (void)drawRect:(CGRect)rect
{
    UIBezierPath *path = [UIBezierPath bezierPath];
    // Set up color, line width, etc. first.
    [path moveToPoint:point1];
    [path addLineToPoint:point2];
    [path stroke];
}
The magic is now getting the correct coordinates for point1 and point2. I assume you have a controller that can see all the views. UIView has two nice utility methods, convertPoint:toView: and convertPoint:fromView: that you'll need here. Here's dummy code for the controller that would cause the LineView to draw a line between the centers of ViewA and ViewB:
- (void)connectTheViews
{
    CGPoint p1, p2;
    CGRect frame;

    frame = [viewA frame];
    p1 = CGPointMake(CGRectGetMidX(frame), CGRectGetMidY(frame));
    frame = [viewB frame];
    p2 = CGPointMake(CGRectGetMidX(frame), CGRectGetMidY(frame));

    // Convert them to the coordinate system of the scroll view
    p1 = [scrollView convertPoint:p1 fromView:viewA];
    p2 = [scrollView convertPoint:p2 fromView:viewB];

    // And now into the coordinate system of the target view.
    p1 = [scrollView convertPoint:p1 toView:lineView];
    p2 = [scrollView convertPoint:p2 toView:lineView];

    // Set the points.
    [lineView setPoint1:p1];
    [lineView setPoint2:p2];
    [lineView setNeedsDisplay]; // If the property setters don't do it already
}
Since I don't know how you've implemented the dragging, I can't tell you how to trigger calling this method on the controller. If it's done entirely encapsulated in your views and the controller is not involved, I'd go for an NSNotification that you post every time a view is dragged to a new coordinate. The controller would listen for the notification and call the aforementioned method to update the LineView.
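A sketch of that wiring; the notification name and handler are made up for illustration.

// In the draggable view, after it moves:
[[NSNotificationCenter defaultCenter] postNotificationName:@"ViewDidDragNotification"
                                                    object:self];

// In the controller, e.g. in viewDidLoad:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(viewDidDrag:)
                                             name:@"ViewDidDragNotification"
                                           object:nil];

// The handler just re-runs the geometry code above:
- (void)viewDidDrag:(NSNotification *)notification
{
    [self connectTheViews];
}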
One last note: you might want to call setUserInteractionEnabled:NO on your LineView in its initWithFrame: method so that a touch on the line will go through to the view under the line.
Happy coding !
I've got an app which shows the user a sort of line graph. I'm using a UIScrollView which contains the view with the graph. The view uses Core Graphics to draw the graph in its drawRect: method.
The problem arises when the graph gets too long. Scrolling through the graph seems to stutter and eventually the app would run out of memory and exit. Looking around at other apps I see the guys who created the WeightBot app were able to manage long ongoing graphs without any problems so apparently I'm doing it the wrong way.
I was wondering how this sort of long line graphs are created without bumping into memory issues?
EDIT: adding some code
Basically all I do is init the view, which builds the graph in its drawRect: method, and add the view as a subview to the scroll view.
This is how the view's drawRect is implemented:
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();

    CGContextSetFillColorWithColor(c, self.backgroundColor.CGColor);
    CGContextFillRect(c, rect);

    //... do some initialization

    for (NSUInteger i = 0; i < xValuesCount; i++)
    {
        NSUInteger x = (i * step) * stepX;
        NSUInteger index = i * step;

        CGPoint startPoint = CGPointMake(x + offsetX, offsetY);
        CGPoint endPoint = CGPointMake(x + offsetX, self.frame.size.height - offsetY);

        CGContextMoveToPoint(c, startPoint.x, startPoint.y);
        CGContextAddLineToPoint(c, endPoint.x, endPoint.y);
        CGContextClosePath(c);

        CGContextSetStrokeColorWithColor(c, self.gridXColor.CGColor);
        CGContextStrokePath(c);
    }
}
A large view (with a draw method) takes lots of memory, even if its superview is small. Your oversized subview will require a huge backbuffer.
Instead, simply subclass UIScrollView directly. The scroll view is only as big as its visible part. The offset is automatically taken care of when drawing. Your draw method will be called all the time, but that should be okay.
The rect argument of drawRect: indicates which section of your view you're being asked to draw. You should add some logic to work out which parts of your graph are in that rect and only draw those, instead of redrawing the whole thing on every call.
Figure out what portion of your data set is visible, and only draw what you need to.
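Applied to the question's grid-line loop, that logic might look roughly like this. This is a sketch reusing the question's own variables (step, stepX, offsetX, offsetY, xValuesCount, gridXColor); the index math assumes the x positions are laid out exactly as in the posted loop.

- (void)drawRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(c, self.backgroundColor.CGColor);
    CGContextFillRect(c, rect);

    // Convert the dirty rect back into data indices and clamp to the valid range.
    NSInteger firstIndex = (NSInteger)floorf((CGRectGetMinX(rect) - offsetX) / (step * stepX));
    NSInteger lastIndex  = (NSInteger)ceilf((CGRectGetMaxX(rect) - offsetX) / (step * stepX));
    if (firstIndex < 0) firstIndex = 0;
    if (lastIndex > (NSInteger)xValuesCount - 1) lastIndex = (NSInteger)xValuesCount - 1;

    // Only the grid lines inside the requested rect get drawn.
    for (NSInteger i = firstIndex; i <= lastIndex; i++) {
        NSUInteger x = (i * step) * stepX;
        CGContextMoveToPoint(c, x + offsetX, offsetY);
        CGContextAddLineToPoint(c, x + offsetX, self.frame.size.height - offsetY);
    }
    CGContextSetStrokeColorWithColor(c, self.gridXColor.CGColor);
    CGContextStrokePath(c);
}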
I need to draw a few hundred lines and circles on my view and they keep moving through a timer function, where I call [myView setNeedsDisplay] to update the view.
I subclass UIView (myView) and implement the drawRect: method to do the following ...
-(void) drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat red[4] = { 1, 0, 0, 1 };

    CGContextSetLineWidth(context, 1);
    CGContextSetShouldAntialias(context, NO);
    CGContextSetLineCap(context, kCGLineCapSquare);
    CGContextSetStrokeColor(context, red);

    // rects is an array of CGRect of size ~200
    for (int i = 0; i < N; i++) {
        CGContextAddEllipseInRect(context, rects[i]);
    }

    // points is an array of CGPoint of size ~100
    CGContextAddLines(context, points, N);

    CGContextStrokePath(context);
}
But this is dog slow. Is there something I am missing here?
It is taking almost 1 sec to do one complete drawing
Animating objects by redrawing them constantly is a bad way to go. Quartz drawing is one of the slowest things you can do, UI-wise, because of the way that the display system works.
Instead, what you will want to do is create individual layers or views for each element that will be animated. These layers or views will only be drawn once, and then cached. When the layers move around, they won't be redrawn, just composited. Done this way, even the slowest iOS devices (the original iPhone, iPhone 3G, and first generation iPod touch) can animate up to 100 layers at 60 frames per second.
Think of it like animating a cartoon. Rather than have the animators redraw by hand every part of every frame, they use cells to reuse elements between frames that stay the same or just move without changing form. This significantly reduces the effort of producing a cartoon.
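A minimal sketch of this idea applied to the question's circles: each circle is drawn once into its own CAShapeLayer, and the timer then only moves the layers around. The names circleLayers and N follow the question loosely, and treating the rect origin as the layer position is a simplification for illustration; lines could be handled the same way with a second set of layers.

- (void)setUpCircleLayers
{
    circleLayers = [NSMutableArray arrayWithCapacity:N];   // assumed NSMutableArray ivar
    for (int i = 0; i < N; i++) {
        CAShapeLayer *circle = [CAShapeLayer layer];
        // Draw the ellipse once; from now on it is only composited, never redrawn.
        circle.path = [UIBezierPath bezierPathWithOvalInRect:
                          CGRectMake(0, 0, rects[i].size.width, rects[i].size.height)].CGPath;
        circle.strokeColor = [[UIColor redColor] CGColor];
        circle.fillColor = nil;
        circle.lineWidth = 1.0f;
        circle.position = rects[i].origin;
        [self.layer addSublayer:circle];
        [circleLayers addObject:circle];
    }
}

// In the timer callback, just move the layers; drawRect: is never called.
- (void)tick
{
    for (int i = 0; i < N; i++) {
        CAShapeLayer *circle = [circleLayers objectAtIndex:i];
        circle.position = rects[i].origin;   // rects[] updated by your model
    }
}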