I'm running into problems when dealing with a large number of UIButtons in my interface. I was wondering if anyone has first-hand experience with this and how they handled it?
When dealing with 30-80 buttons (most of them simple, a couple complex), do you just use UIButton, or do you do something different, like drawing with drawRect:, responding to touch events, and mapping the touch coordinates yourself?
The best example is a calendar, similar to Apple's Calendar app. Would you draw most of the days using drawRect: and then, when a day is tapped, replace it with an image, or would you just use UIButtons? It's not so much the memory footprint or the cost of creating the buttons; it's that strange things sometimes happen with them (see my previous question about it) and I'm having performance issues animating them.
Thanks for any help.
If "strange things are happening" with your buttons, you need to get to the bottom of why. Switching architectures just to avoid a problem that you don't understand (and might crop up again) doesn't sound like a good idea.
-drawRect: works by drawing to a bitmap-backed context. This happens when -displayIfNeeded is called after -setNeedsDisplay (or doing something else that implicitly sets the needsDisplay flag, like resizing a view with contentMode = UIContentModeRedraw). The bitmap-backed context is then composited to screen.
Buttons work by putting the different components (background image, foreground image, text) in different layers. The text is drawn when it changes and composited to the screen; the images are just composited directly to the screen.
The "best" way to do things is usually a combination of the two. For example, you might draw text and a background image in -drawRect: so the different layers didn't need to be composited at render time (you get an additional speedup if your view is "opaque"). You probably want to avoid full-screen animations via drawRect: (and it won't integrate so well with CoreAnimation), since drawing tends to be more expensive than compositing.
But first, I'd find out what's going wrong with UIButton. There's little point worrying about how you could make things faster until you actually find out what the slow bits are. Write code so that it is easy to maintain. UIButton is not that expensive and -drawRect: is not that bad (presumably it's even better if you use -setNeedsDisplayInRect: for a smallish rect, but then you need to calculate the rect...), but if you want a button, use UIButton.
Instead of using 30-80 UIButtons, I would prefer to use images (if possible a single image, or as few as possible) and compare the touch location.
And if I must create buttons, then I obviously won't create 30-80 variables for them. I'll set each button's tag and read the tag to determine which one was tapped.
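Something like this sketch (the grid geometry and selector name are assumptions):

// Create the buttons in a loop and tag them; no per-button ivars needed.
- (void)makeButtons {
    for (int i = 0; i < 40; i++) {
        UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        button.frame = CGRectMake((i % 5) * 64.0f, (i / 5) * 64.0f, 60.0f, 60.0f);
        button.tag = i + 1; // 0 is every view's default tag, so start at 1
        [button addTarget:self action:@selector(buttonTapped:)
         forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:button];
    }
}

- (void)buttonTapped:(UIButton *)sender {
    NSLog(@"tapped button %d", (int)sender.tag); // the tag says which one
}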
If this is all stuff you are animating, then you could create a bunch of CALayers with their contents set to a CGImage. You would have to compare the touch location to identify the tapped layer. CALayers have a useful style property, an NSDictionary you can store metadata in.
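A rough sketch of that approach (the image name, grid, and metadata key are assumptions; QuartzCore must be imported):

// Build tappable layers that all share one CGImage.
- (void)makeTileLayers {
    UIImage *tileImage = [UIImage imageNamed:@"tile.png"]; // hypothetical image
    for (int i = 0; i < 40; i++) {
        CALayer *layer = [CALayer layer];
        layer.frame = CGRectMake((i % 5) * 64.0f, (i / 5) * 64.0f, 60.0f, 60.0f);
        layer.contents = (id)tileImage.CGImage;
        layer.style = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:i]
                                                  forKey:@"tileIndex"]; // per-layer metadata
        [self.view.layer addSublayer:layer];
    }
}

// Identify the tapped layer by comparing the touch location with layer frames.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.view];
    for (CALayer *layer in self.view.layer.sublayers) {
        if (CGRectContainsPoint(layer.frame, p)) {
            NSLog(@"tapped tile %@", [layer.style objectForKey:@"tileIndex"]);
        }
    }
}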
I just use the UIButtons unless there happens to be a specific performance issue that crops up. If they have similar functionality, however, such as a keyboard, I map them all to one IBAction and differentiate the behavior based on the sender.
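For example, for a keyboard-style grid the shared action might look like this (a sketch; display is a hypothetical label, and each key's title is assumed to be the character it types):

// Every key button in the nib is wired to this one IBAction.
- (IBAction)keyTapped:(UIButton *)sender {
    // Differentiate purely on the sender.
    self.display.text = [self.display.text stringByAppendingString:[sender currentTitle]];
}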
What specific performance and animation issues are you running into?
I recently ran across this problem myself when developing a game for the iPhone. I was using UIButtons to hold game tiles, then stylized them with transparent images, background colors and text.
It all worked well for a small number of tiles. Once we got to about 50, however, the performance dropped significantly. After scouring Google I discovered that others had experienced the same problem. It seems the iPhone struggles with lots of transparent buttons onscreen at once. Not sure if it's a bug in the UIButton code or just a limitation of the graphics hardware on the device, but either way, it's beyond your control as a programmer.
My solution was to draw the board by hand using Core Graphics. It seemed daunting at first, but in reality it was pretty easy. I just placed one big UIImageView on my ViewController in Interface Builder, made it an IBOutlet so I could alter it from Objective-C, then constructed the image with Core Graphics.
Since a UIImageView doesn't handle taps, I used the touchesBegan method of my UIViewController, and then mapped the x/y coordinates of the touch to the precise tile on my game board.
The board now renders in less than a tenth of a second. Bingo!
If you need sample code, just let me know.
UPDATE: Here's a simplified version of the code I'm using. Should be enough for you to get the gist.
// CoreGraphicsTestViewController.h
// CoreGraphicsTest

#import <UIKit/UIKit.h>

@interface CoreGraphicsTestViewController : UIViewController {
    UIImageView *testImageView;
}

@property (retain, nonatomic) IBOutlet UIImageView *testImageView;

- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed;

@end
... and the .m file ...
// CoreGraphicsTestViewController.m
// CoreGraphicsTest

#import "CoreGraphicsTestViewController.h"
#import <QuartzCore/QuartzCore.h>
#import <CoreGraphics/CoreGraphics.h>

@implementation CoreGraphicsTestViewController

@synthesize testImageView;

int iTileSize;
int iBoardSize;
int iRow; // file scope, shared with the touch handlers below
int iCol;

- (void)viewDidLoad {
    iTileSize = 75;
    iBoardSize = 3;
    [testImageView setBounds:CGRectMake(0, 0, iBoardSize * iTileSize, iBoardSize * iTileSize)];
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (iRow = 0; iRow < iBoardSize; iRow++) {
        for (iCol = 0; iCol < iBoardSize; iCol++) {
            [self drawTile:context row:iRow col:iCol isPressed:NO];
        }
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
    [super viewDidLoad];
}
- (void)dealloc {
    [testImageView release];
    [super dealloc];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:testImageView];
    if ((location.x >= 0) && (location.y >= 0) && (location.x <= testImageView.bounds.size.width) && (location.y <= testImageView.bounds.size.height)) {
        UIImage *theIMG = testImageView.image;
        CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [theIMG drawInRect:rect];
        iRow = location.y / iTileSize;
        iCol = location.x / iTileSize;
        [self drawTile:context row:iRow col:iCol isPressed:YES];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        [testImageView setImage:image];
        UIGraphicsEndImageContext();
    }
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UIImage *theIMG = testImageView.image;
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [theIMG drawInRect:rect];
    [self drawTile:context row:iRow col:iCol isPressed:NO];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
}
- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed {
    CGRect rrect = CGRectMake((colNum * iTileSize), (rowNum * iTileSize), iTileSize, iTileSize);
    CGContextClearRect(ctx, rrect);
    if (tilePressed) {
        CGContextSetFillColorWithColor(ctx, [[UIColor redColor] CGColor]);
    } else {
        CGContextSetFillColorWithColor(ctx, [[UIColor greenColor] CGColor]);
    }
    CGContextFillRect(ctx, rrect); // paint the tile color beneath the (transparent) image
    UIImage *theImage = [UIImage imageNamed:@"tile.png"];
    [theImage drawInRect:rrect];
}
Background: I would like to draw blocks when the user touches up somewhere. If a block is already there, I want to erase it. I manage the blocks by using an NSMutableArray to keep track of the points where the blocks should go. Every time the user touches, the code determines whether the touched place already contains a block and manages the array accordingly.
Problem: I'm getting very weird feedback from this. First of all, everything in the array works as I want. The problem comes when the user wants to erase a block. While the array is maintained correctly, the drawing seems to ignore the change in the array: it will not remove anything but the last dot, and even that toggles on and off as the user taps elsewhere.
Here is the code :
- (void)drawRect:(CGRect)rect
{
    NSLog(@"drawRect current array %@", pointArray);
    for (NSValue *pointValue in pointArray) {
        CGPoint point = [pointValue CGPointValue];
        [self drawSquareAt:point];
    }
}
- (void)drawSquareAt:(CGPoint)point {
    float x = point.x * scale;
    float y = point.y * scale;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(context, x, y);
    CGContextAddLineToPoint(context, x + scale, y);
    CGContextAddLineToPoint(context, x + scale, y + scale);
    CGContextAddLineToPoint(context, x, y + scale);
    CGContextAddLineToPoint(context, x, y);
    CGContextSetFillColorWithColor(context, [UIColor darkGrayColor].CGColor);
    CGContextFillPath(context);
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *aTouch = [touches anyObject];
    CGPoint point = [aTouch locationInView:self];
    point = CGPointMake((int)(point.x / scale), (int)(point.y / scale));
    NSLog(@"Touched at %@", [NSArray arrayWithObject:[NSValue valueWithCGPoint:point]]);
    NSValue *pointValue = [NSValue valueWithCGPoint:point];
    NSUInteger i = [pointArray indexOfObject:pointValue]; // NSNotFound if absent
    NSLog(@"Index at %u", (unsigned)i);
    if (i != NSNotFound) {
        [pointArray removeObjectAtIndex:i];
        NSLog(@"remove");
    } else {
        [pointArray addObject:pointValue];
        NSLog(@"add");
    }
    NSLog(@"Current array : %@", pointArray);
    [self setNeedsDisplay];
}
scale is defined as 16.
pointArray is a member variable of the view.
To test: you can drop this into any UIView and add that view to the view controller to see the effect.
Question : How do I get the drawing to agree with the array?
Update + Explanation: I am aware of the cost of this approach, but it is only created for me to get a quick figure. It will not be used in the real application, so please do not get hung up on how expensive it is. I only created this capability to get a value in NSString (@"1,3,5,1,2,6,2,5,5,...") of a figure I draw. This will become more efficient when I am actually using it, with no redrawing. Please stick to the question asked. Thank you.
I don't see anywhere where you are actually clearing what you drew previously. Unless you explicitly clear (such as by filling with UIRectFill() - which, as an aside, is a more convenient way to draw rectangles than filling an explicit path), Quartz is going to just draw over your old content, which will cause unexpected behavior on attempts at erasure.
So... what happens if you put the following at the beginning of -drawRect:?
[[UIColor whiteColor] setFill]; // Or whatever your background color is
UIRectFill([self bounds]);
(This is of course horrendously inefficient, but per your comment, I am disregarding that fact.)
(As a separate aside, you probably should wrap your drawing code in a CGContextSaveGState()/CGContextRestoreGState() pair to avoid tainting the graphics context of any calling code.)
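That is, something along these lines:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);    // snapshot the context state
    // ... all fills, strokes, clips, and transforms go here ...
    CGContextRestoreGState(context); // put the context back as we found it
}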
EDIT: I always forget about this property since I usually want to draw more complex backgrounds anyway, but you can likely achieve similar results by setting clearsContextBeforeDrawing to YES on the UIView.
This approach seems a little weird to me because every time the touchesEnded method is called you redraw everything (an expensive operation) and also need to keep track of the squares. I suggest you subclass UIView and implement the drawRect: method so the view knows how to draw itself, and implement the touchesEnded method in your view controller. There you can check whether you touched a squareView: if so, remove it from the view controller's view; otherwise create a squareView and add it as a subview of the view controller's view.
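A sketch of that alternative, assuming a hypothetical SquareView (a UIView subclass that draws one gray square):

// In the view controller: toggle a SquareView at the touched grid cell.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.view];
    UIView *hit = [self.view hitTest:p withEvent:nil];
    if ([hit isKindOfClass:[SquareView class]]) {
        [hit removeFromSuperview]; // erase the existing square
    } else {
        CGRect cell = CGRectMake(floorf(p.x / 16) * 16, floorf(p.y / 16) * 16, 16, 16);
        SquareView *square = [[[SquareView alloc] initWithFrame:cell] autorelease];
        [self.view addSubview:square]; // only the new square gets drawn
    }
}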
The task is to draw paths at runtime on custom maps, which I'm using in a scroll view, and to redraw the path whenever the location coordinates (lat, long) update. The problem I'm trying to solve: I have made a class graphics, a subclass of UIView, in which I do the drawing in the drawRect: method. When I add graphics as a subview of the scroll view, over the image, the line draws; but I need to keep extending the line as though it were a path, i.e., keep updating the (x, y) points passed to CGContextStrokeLineSegments. The code:
ViewController:
- (void)loadView {
    [[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];
    CGRect fullScreenRect = [[UIScreen mainScreen] applicationFrame];
    scrollView = [[UIScrollView alloc] initWithFrame:fullScreenRect];
    graph = [[graphics alloc] initWithFrame:fullScreenRect];
    scrollView.contentSize = CGSizeMake(320, 480);
    UIImageView *tempImageView2 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"fortuneCenter.png"]];
    self.view = scrollView;
    [scrollView addSubview:tempImageView2];
    scrollView.userInteractionEnabled = YES;
    scrollView.bounces = NO;
    [scrollView addSubview:graph];
}
Graphics.m:
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.backgroundColor = [UIColor clearColor];
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGPoint point[2] = { CGPointMake(160, 100), CGPointMake(160, 300) };
    CGContextSetRGBStrokeColor(context, 1.0f, 0.0f, 1.0f, 1.0f); // components are 0..1, not 0..255
    CGContextStrokeLineSegments(context, point, 2);
}
So how can I draw the lines at runtime? I'm just simulating right now, so I'm not using real-time data (coordinates), just dummy (x, y) coordinates. Let's say I have a button; whenever I press it, it updates the coordinates so the path extends.
The easiest way would be to add an instance variable representing the points to the UIView subclass.
Then, every time the path changes, update the ivar appropriately and call -setNeedsDisplay or -setNeedsDisplayInRect: on the custom UIView (or even on its superview). The runtime will then redraw the new path.
You just need to make CGPoint point[] dynamically resizable, from the looks of it.
You can use malloc, a std::vector, or even NSMutableData to store the points you add. Then you pass that array to CGContextStrokeLineSegments.
If 2 points is all you will need, move CGPoint point[2] to an ivar so you may store the positions, then (as Rich noted) invalidate rects appropriately when these values (or the array) are changed.
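For instance, with NSMutableData (a sketch; pointData is assumed to be an ivar of the view):

// Append a new segment and redraw. Note that CGContextStrokeLineSegments
// treats the array as independent pairs (p0-p1, p2-p3, ...), so both
// endpoints of each segment are stored.
- (void)addSegmentFrom:(CGPoint)a to:(CGPoint)b {
    if (!pointData) pointData = [[NSMutableData alloc] init];
    [pointData appendBytes:&a length:sizeof(CGPoint)];
    [pointData appendBytes:&b length:sizeof(CGPoint)];
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0f, 0.0f, 1.0f, 1.0f);
    CGContextStrokeLineSegments(context,
                                (const CGPoint *)[pointData bytes],
                                [pointData length] / sizeof(CGPoint));
}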
This subject comes up every now and then, so I created a longer blog post on the general concepts involved with one potential solution, creating and using your own graphics context, here: http://www.musingpaw.com/2012/04/drawing-in-ios-apps.html
This is a problem that I've been leaving and coming back to for a while now. I've never really nailed it.
What I've been trying to do is use CADisplayLink to dynamically draw pie-chart-style progress. My code works fine when I have 1-4 UIViews updating simultaneously. When I add any more than that, the drawing of the pies becomes very jerky.
I want to explain what I have been trying in the hope that somebody could point out the inefficiencies and suggest a better drawing method.
I create 16 UIViews and add a CAShapeLayer sublayer to each one. This is where I want to draw my pie slices.
I precalculate 360 CGPaths, representing 0 to 360 degrees of a circle, and store them in an array to try to improve performance.
In a master view I start a display link, loop through all my other views, calculate how much of a full pie each one should show, then find the right path and assign it to its shape layer.
- (void)makepieslices
{
    // _pies is a plain C array ivar: CGPathRef _pies[361];
    float progress = 0;
    for (int i = 0; i <= 360; i++)
    {
        progress = (i * M_PI) / 180;
        CGMutablePathRef thePath = CGPathCreateMutable();
        CGPathMoveToPoint(thePath, NULL, 0.f, 0.f);
        CGPathAddLineToPoint(thePath, NULL, 28, 0.f);
        CGPathAddArc(thePath, NULL, 0.f, 0.f, 28, 0.f, progress, NO);
        CGPathCloseSubpath(thePath);
        _pies[i] = thePath;
    }
}
- (void)updatePath:(CADisplayLink *)dLink {
    for (int idx = 0; idx < [spinnydelegates count]; idx++) {
        id<SyncSpinUpdateDelegate> delegate = [spinnydelegates objectAtIndex:idx];
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            [delegate updatePath:dLink];
        });
    }
}
- (void)updatePath:(CADisplayLink *)dLink {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        currentarc = [engineref getsyncpercentForPad:cid pad:pid];
        int progress = roundf(currentarc * 360);
        dispatch_async(dispatch_get_main_queue(), ^{
            shapeLayer_.path = _pies[progress];
        });
    });
}
This technique just straight out isn't working for me when trying to update more than 4 or 5 pies at the same time. Sixteen simultaneous screen updates sounds like it really should not be that big a deal for the iPad. So this leads me to think I'm doing something very, very fundamentally wrong.
I'd really appreciate if somebody could tell me why this technique results in jittery screen updates and also if they could suggest a different technique that I could go an investigate that will allow me to perform 16 simultaneous shapelayer updates smoothly.
EDIT Just to give you an idea of how bad performance is: when I have all 16 pies drawing, the CPU goes up to 20%.
EDIT
This is based on studevs' advice, but I don't see anything being drawn. segmentLayer is a CGLayerRef property of my pie view.
- (void)makepies
{
    self.layerobjects = [NSMutableArray arrayWithCapacity:360];
    CGFloat progress = 0;
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (int i = 0; i < 360; i++)
    {
        progress = (i * M_PI) / 180.0f;
        CGLayerRef segmentlayer = CGLayerCreateWithContext(context, CGSizeMake(30, 30), NULL);
        CGContextRef layerContext = CGLayerGetContext(segmentlayer);
        CGMutablePathRef thePath = CGPathCreateMutable();
        CGPathMoveToPoint(thePath, NULL, 0.f, 0.f);
        CGPathAddLineToPoint(thePath, NULL, 28, 0.f);
        CGPathAddArc(thePath, NULL, 0.f, 0.f, 28, 0.f, progress, NO);
        CGPathCloseSubpath(thePath);
        [layerobjects addObject:(id)segmentlayer];
        CGLayerRelease(segmentlayer);
    }
}

- (void)updatePath
{
    currentarc = [engineref getsyncpercent];
    int progress = roundf(currentarc * 360);
    //shapeLayer_.path = _pies[progress];
    self.pieView.segmentLayer = (CGLayerRef)[layerobjects objectAtIndex:progress];
    [self.pieView setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawLayerInRect(context, self.bounds, segmentLayer);
}
I think one of the first things you should look to do is buffer your segments (currently represented by CGPath objects) offscreen using CGLayer objects. From the docs:
Layers are suited for the following:
High-quality offscreen rendering of drawing that you plan to reuse. For example, you might be building a scene and plan to reuse the same background. Draw the background scene to a layer and then draw the layer whenever you need it. One added benefit is that you don't need to know color space or device-dependent information to draw to a layer.
Repeated drawing. For example, you might want to create a pattern that consists of the same item drawn over and over. Draw the item to a layer and then repeatedly draw the layer, as shown in Figure 12-1. Any Quartz object that you draw repeatedly (including CGPath, CGShading, and CGPDFPage objects) benefits from improved performance if you draw it to a CGLayer. Note that a layer is not just for onscreen drawing; you can use it for graphics contexts that aren't screen-oriented, such as a PDF graphics context.
Create a UIView subclass that draws the pie. Give it an instance variable for that pie's current progress, and override drawRect: to draw the layer representing that progress. The view needs to first get a reference the required CGLayer object, so implement a delegate with the method:
- (CGLayerRef)pieView:(PieView *)pieView segmentLayerForProgress:(NSInteger)progress context:(CGContextRef)context;
It will then become the delegate's job to return an existing CGLayerRef, or if it doesn't exist yet, create it. Since the CGLayer can only be created from within drawRect:, this delegate method should be called from PieView's drawRect: method. PieView should look something like this:
PieView.h
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@class PieView;

@protocol PieViewDelegate <NSObject>
@required
- (CGLayerRef)pieView:(PieView *)pieView segmentLayerForProgress:(NSInteger)progress context:(CGContextRef)context;
@end

@interface PieView : UIView
@property(nonatomic, weak) id <PieViewDelegate> delegate;
@property(nonatomic) NSInteger progress;
@end
PieView.m
#import "PieView.h"
#implementation PieView
#synthesize delegate, progress;
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGLayerRef segmentLayer = [delegate pieView:self segmentLayerForProgress:self.progress context:context];
CGContextDrawLayerInRect(context, self.bounds, segmentLayer);
}
#end
Your PieView's delegate (most likely your view controller) then implements:
NSString *const SegmentCacheKey = @"SegmentForProgress:";

// segmentsCache (an NSCache) and layerSize (a CGSize) are ivars of the delegate.
- (CGLayerRef)pieView:(PieView *)pieView segmentLayerForProgress:(NSInteger)progress context:(CGContextRef)context
{
    // First, try to retrieve the layer from the cache
    NSString *cacheKey = [SegmentCacheKey stringByAppendingFormat:@"%d", (int)progress];
    CGLayerRef segmentLayer = (__bridge CGLayerRef)[segmentsCache objectForKey:cacheKey]; // __bridge: the cache keeps ownership
    if (!segmentLayer) { // If the layer hasn't been created yet
        CGFloat progressAngle = (progress * M_PI) / 180.0f;
        // Create the layer
        segmentLayer = CGLayerCreateWithContext(context, layerSize, NULL);
        CGContextRef layerContext = CGLayerGetContext(segmentLayer);
        // Draw the segment
        CGContextSetFillColorWithColor(layerContext, [[UIColor blueColor] CGColor]);
        CGContextMoveToPoint(layerContext, layerSize.width / 2.0f, layerSize.height / 2.0f);
        CGContextAddArc(layerContext, layerSize.width / 2.0f, layerSize.height / 2.0f, layerSize.width / 2.0f, 0.0f, progressAngle, NO);
        CGContextClosePath(layerContext);
        CGContextFillPath(layerContext);
        // Cache the layer; __bridge_transfer hands the +1 from CGLayerCreateWithContext to ARC
        [segmentsCache setObject:(__bridge_transfer id)segmentLayer forKey:cacheKey];
    }
    return segmentLayer;
}
So for each pie, create a new PieView and set its delegate. When you need to update a pie, update the PieView's progress property and call setNeedsDisplay.
I'm using an NSCache here since there are a lot of graphics being stored, and it could take up a lot of memory. You could also limit the number of segments being drawn - 100 is probably plenty. Also, I agree with other comments/answers that you might try updating the views less often, as this will consume less CPU and battery power (60fps is probably not necessary).
I did some crude testing of this method on an iPad (1st gen) and managed to get well over 50 pies updating at 30fps.
dubbeat: ...CADisplayLink...
Justin: do you need to draw at the display's refresh rate?
dubbeat: The progress of the pie drawing is supposed to represent the progress of an mp3's playback, so I guess the display's refresh rate at a minimum.
That's much faster than is necessary, unless you're trying to display some really, really, really exotic visualizer, which is very unlikely if your spinner's radius is 28pt. Also, there's no reason to draw faster than the display's frequency.
One side effect is that your spinner's superviews may also be updating at this high frequency. If you can make the spinner view opaque, then you can reduce overdrawing of superviews (and subviews if you have them).
60fps is a good number for a really fast desktop game. For an ornament/progress bar, it's far more than necessary.
Try this:
not using CADisplayLink, but the standard view system
use an NSTimer on the main run loop, begin with a frequency of 8 Hz*
adjust timer to taste
then let us know if that is adequately fast.
*the timer callback calls [spinner setNeedsDisplay]
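In other words, something like this sketch (updateTimer and spinners, a hypothetical array holding the 16 pie views, are assumptions):

// e.g. in viewDidLoad: 8 Hz on the main run loop instead of a 60 Hz display link.
updateTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 8.0)
                                               target:self
                                             selector:@selector(refreshSpinners:)
                                             userInfo:nil
                                              repeats:YES];

- (void)refreshSpinners:(NSTimer *)timer {
    for (UIView *spinner in spinners) {
        [spinner setNeedsDisplay]; // each view redraws its current progress
    }
}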
Well, you could achieve some performance improvement by pre-assembling the background view, capturing the image of it, and then just using the image in an image view for the background. You could go further by capturing a view of the "relatively static" parts of your chart, updating that static view only when necessary.
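Roughly like this sketch, where backgroundView stands for whatever container holds the static parts (renderInContext: needs QuartzCore):

// Flatten the static parts into a single image once...
UIGraphicsBeginImageContextWithOptions(backgroundView.bounds.size, YES, 0);
[backgroundView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// ...then display the image instead of the live view hierarchy.
UIImageView *staticBackground = [[[UIImageView alloc] initWithImage:snapshot] autorelease];
[self.view insertSubview:staticBackground atIndex:0];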
Store your 360 circle segments as textures and use OpenGL to animate the sequences.
I'm implementing a subclass of UIView that displays a gauge dial with a sprite for the indicator. It has an angle property that I can vary to make the needle point to different angles. It works, but the same values for the position of the needle make it show up in different locations on the phone and in the simulator. It's an iPhone 4, so I'm sure the double-resolution thing is behind this, but I don't know what to do about it. I tried setting the UIView's layer's contentScaleFactor but that fails. I thought UIView got the resolution thing for free. Any suggestions?
I should note that the NSLog statements report 150 for both .frame.size dimensions, in both the simulator and on the device.
Here's the .m file
UPDATE: In the simulator, I found how to set the hardware to iPhone 4, and it looks just like the device now, both are scaling and positioning the sprite at half size.
UPDATE 2: I made a workaround. I set the .scale of my sprite equal to the UIView's contentScaleFactor and then use it to divide the UIView in half if it's a lo-res screen, or use the full width if it's hi-res. I still don't see why this is necessary, as I should be working in points now, not pixels. It must have something to do with the custom drawing code in the Sprite or VectorSprite classes.
I'd still appreciate some feedback if anyone has some...
#import "GaugeView.h"
#implementation GaugeView
#synthesize needle;
#define kVectorArtCount 4
static CGFloat kVectorArt[] = {
3,-4,
2,55,
-2,55,
-3,-4
};
- (id)initWithCoder:(NSCoder *)coder {
if (self = [super initWithCoder:coder]) {
needle = [VectorSprite withPoints:kVectorArt count:kVectorArtCount];
needle.scale = (float)self.contentScaleFactor; // returns 1 for lo-res, 2 for hi-res
NSLog(#" needle.scale = %1.1f", needle.scale);
needle.x = self.frame.size.width / ((float)(-self.contentScaleFactor) + 3.0); // divisor = 1 for hi-res, 2 for lo-res
NSLog(#" needle.x = %1.1f", needle.x);
needle.y = self.frame.size.height / ((float)(-self.contentScaleFactor) + 3.0);
NSLog(#" needle.y = %1.1f", needle.y);
needle.r = 0.0;
needle.g = 0.0;
needle.b = 0.0;
needle.alpha = 1.0; }
}
self.backgroundColor = [UIColor clearColor];
return self;
}
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        // Initialization code
    }
    return self;
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    t0 = CGAffineTransformInvert(t0);
    CGContextConcatCTM(context, t0);
    [needle updateBox];
    [needle draw:context];
    CGContextRestoreGState(context); // balance the save above
}
- (void)dealloc {
    [needle release];
    [super dealloc];
}

@end
I believe the answer is that iOS takes care of the resolution scaling automatically in drawRect methods, but in custom drawing code, you have to do it yourself.
In my example, I used the UIView's contentScaleFactor to scale my sprite. In the future, in my custom draw method (not shown), I'll query [[UIScreen mainScreen] scale] and scale accordingly there.
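As a sketch, assuming (as in the code above) the custom draw code has inverted the CTM and is therefore working in pixels:

// Hypothetical fix inside the custom draw code: re-apply the screen scale
// so hand-rolled coordinates are in points again rather than pixels.
CGFloat screenScale = [[UIScreen mainScreen] scale]; // 1.0 on lo-res, 2.0 on Retina
CGContextScaleCTM(context, screenScale, screenScale);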
This community has been tremendous help for me in many respects.
First-time question (for me), and it's an easy one. I'm working through the iPhone SDK learning curve at a good rate... but every once in a while I come across a problem that, despite its simplicity, is easier to ask about and work on something else than to spend another hour reading.
I have a 2D game where a vehicle is moving around on the surface rotating to face the direction of travel. I've determined that Core Animation is my best approach.
The vehicle is an image. It's interactive to user input (touch).
Am I on the right track?
UIView (to act as Responder) containing a CALayer tree that includes the image (from a file).
The current file is a GIF. It made it easy to make the frame transparent, leaving only the vehicle image.
From the UIView subclass, how do I load the gif image into a layer?
Sounds simple, so I thought...
Cheers.
You're on the right track with Core Animation. CAKeyframeAnimation has a path property which you'll use extensively. The following sample code (untested) uses straight-line paths, but it's also possible to use curved paths:
UIImage *carImage = [UIImage imageNamed:@"car.png"];
carView = [[UIImageView alloc] initWithImage:carImage];
[mapView addSubview:carView];

CAKeyframeAnimation *carAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
carAnimation.duration = 5.0;
// keep the car at a constant velocity
carAnimation.calculationMode = kCAAnimationPaced;
// Rotate car relative to path
carAnimation.rotationMode = kCAAnimationRotateAuto;
// Keep the final animation
carAnimation.fillMode = kCAFillModeForwards;
carAnimation.removedOnCompletion = NO;

CGMutablePathRef carPath = CGPathCreateMutable();
CGPathMoveToPoint(carPath, NULL, 0.0, 0.0);
CGPathAddLineToPoint(carPath, NULL, 100.0, 100.0);
CGPathAddLineToPoint(carPath, NULL, 100.0, 200.0);
CGPathAddLineToPoint(carPath, NULL, 200.0, 100.0);
carAnimation.path = carPath;
CGPathRelease(carPath);

[carView.layer addAnimation:carAnimation forKey:@"carAnimation"];
Is there a reason you can't simply subclass UIImageView to handle your touch methods? It would seem to me that instantiating an image view with your image and having the overridden UIResponder methods handle where the vehicle is moving and whatever else you need would be a lot easier than manually managing your CALayer tree.
You can do this with something like the following:
UIImage *vehicleImage = [UIImage imageNamed:@"vehicle.gif"];
VehicleImageView *vehicleView = [[[VehicleImageView alloc] initWithImage:vehicleImage] autorelease];
Then have VehicleImageView subclass UIImageView:
@interface VehicleImageView : UIImageView
// Your stuff
@end
@implementation VehicleImageView

// Your stuff

// UIResponder methods
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Custom implementation
}

// Other touch methods...

@end
You do seem to be on the right track, though, in that you do need a UIResponder somewhere in your vehicle's view/view hierarchy for touch methods.
More info:
UIImageView (specifically initWithImage:)
UIImage (specifically imageNamed:)
UIResponder (specifically touchesEnded:withEvent:)