This community has been a tremendous help for me in many respects.
First time question (for me), and it's an easy one. I'm working through the iPhone SDK learning curve at a good rate, but every once in a while I come across a problem that, despite its simplicity, is easier to ask about while I work on something else than to spend another hour reading.
I have a 2D game where a vehicle is moving around on the surface rotating to face the direction of travel. I've determined that Core Animation is my best approach.
The vehicle is an image, and it responds to user input (touch).
Am I on the right track?
UIView (to act as Responder) containing a CALayer tree that includes the image (from a file).
The current file is a GIF; that made it easy to make the background transparent, leaving only the vehicle image.
From the UIView subclass, how do I load the GIF image into a layer?
Sounds simple, so I thought...
Cheers.
You're on the right track with Core Animation. CAKeyframeAnimation has a path property which you'll use extensively. The following sample code (untested) uses straight-line paths, but it's also possible to use curved paths:
UIImage *carImage = [UIImage imageNamed:@"car.png"];
carView = [[UIImageView alloc] initWithImage:carImage];
[mapView addSubview:carView];
CAKeyframeAnimation *carAnimation = [CAKeyframeAnimation
animationWithKeyPath:@"position"];
carAnimation.duration = 5.0;
// keep the car at a constant velocity
carAnimation.calculationMode = kCAAnimationPaced;
// Rotate car relative to path
carAnimation.rotationMode = kCAAnimationRotateAuto;
// Keep the final animation
carAnimation.fillMode = kCAFillModeForwards;
carAnimation.removedOnCompletion = NO;
CGMutablePathRef carPath = CGPathCreateMutable();
CGPathMoveToPoint(carPath, NULL, 0.0, 0.0);
CGPathAddLineToPoint(carPath, NULL, 100.0, 100.0);
CGPathAddLineToPoint(carPath, NULL, 100.0, 200.0);
CGPathAddLineToPoint(carPath, NULL, 200.0, 100.0);
carAnimation.path = carPath;
CGPathRelease(carPath);
[carView.layer addAnimation:carAnimation forKey:@"carAnimation"];
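If you want the curved paths the answer mentions, a minimal, untested sketch would swap the straight segments for Bézier segments (the coordinates here are arbitrary):
CGMutablePathRef curvedPath = CGPathCreateMutable();
CGPathMoveToPoint(curvedPath, NULL, 0.0, 0.0);
// Quadratic curve: one control point, then the end point
CGPathAddQuadCurveToPoint(curvedPath, NULL, 50.0, 150.0, 100.0, 100.0);
// Cubic curve: two control points, then the end point
CGPathAddCurveToPoint(curvedPath, NULL, 120.0, 180.0, 180.0, 120.0, 200.0, 100.0);
carAnimation.path = curvedPath;
CGPathRelease(curvedPath);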
Is there a reason you can't simply subclass UIImageView to handle your touch methods? It would seem to me that instantiating an image view with your image and having the overridden UIResponder methods handle where the vehicle is moving and whatever else you need would be a lot easier than manually managing your CALayer tree.
You can do this with something like the following:
UIImage *vehicleImage = [UIImage imageNamed:@"vehicle.gif"];
VehicleImageView *vehicleView = [[[VehicleImageView alloc]
initWithImage:vehicleImage] autorelease];
Then have VehicleImageView subclass UIImageView:
@interface VehicleImageView : UIImageView
// Your stuff
@end
@implementation VehicleImageView
// Your stuff
// UIResponder methods
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
// Custom implementation
}
// Other touch methods...
@end
You do seem to be on the right track, though, in that you do need a UIResponder somewhere in your vehicle's view/view hierarchy for touch methods.
More info:
UIImageView (specifically initWithImage:)
UIImage (specifically imageNamed:)
UIResponder (specifically touchesEnded:withEvent:)
The task is to draw paths at runtime on custom maps that I'm using in a UIScrollView, and to keep drawing paths whenever the location coordinates (lat, long) update. The problem I'm trying to solve: I've made a class 'graphics', a subclass of UIView, in which I do the drawing in its drawRect: method. When I add the graphics view as a subview of the scroll view, over the image, the line draws, but I need to keep extending the line as new points arrive, i.e. keep updating the (x, y) points passed to CGContextStrokeLineSegments. The code:
ViewController:
- (void)loadView {
[[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];
CGRect fullScreenRect=[[UIScreen mainScreen] applicationFrame];
scrollView=[[UIScrollView alloc] initWithFrame:fullScreenRect];
graph = [[graphics alloc] initWithFrame:fullScreenRect];
scrollView.contentSize=CGSizeMake(320,480);
UIImageView *tempImageView2 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"fortuneCenter.png"]];
self.view=scrollView;
[scrollView addSubview:tempImageView2];
scrollView.userInteractionEnabled = YES;
scrollView.bounces = NO;
[scrollView addSubview:graph];
}
Graphics.m:
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
self.backgroundColor = [UIColor clearColor];
}
return self;
}
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGPoint point [2] = { CGPointMake(160, 100), CGPointMake(160,300)};
CGContextSetRGBStrokeColor(context, 255, 0, 255, 1);
CGContextStrokeLineSegments(context, point, 2);
}
So how can I draw the lines at runtime? I'm just simulating right now, so I'm not using real-time data (coordinates); I just want to simulate with dummy (x, y) coordinates. Let's say I have a button: whenever I press it, it updates the coordinates so the path extends.
The easiest way would be to add an instance variable representing the points to the UIView subclass.
Then, every time the path changes, update the ivar appropriately and call -setNeedsDisplay or -setNeedsDisplayInRect: on the custom UIView (or even on its superview). The runtime will then redraw the new path.
You just need to make CGPoint point[] dynamically resizable, from the looks of it.
You can use malloc, a std::vector, or even NSMutableData to store the points you add. Then you pass that array to CGContextStrokeLineSegments.
If 2 points is all you will need, move CGPoint point[2] to an ivar so you may store the positions, then (as Rich noted) invalidate rects appropriately when these values (or the array) are changed.
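To make that concrete, here's a rough, untested sketch of the ivar-plus-redraw approach, assuming the points are stored as boxed CGPoints on the question's graphics view and a hypothetical addPathPoint: method is called from the button handler:
// In the graphics (UIView subclass) interface - assumed property:
@property (nonatomic, strong) NSMutableArray *pathPoints; // NSValue-boxed CGPoints

// Called from the button handler (or whenever a new coordinate arrives):
- (void)addPathPoint:(CGPoint)p {
    if (!self.pathPoints) self.pathPoints = [NSMutableArray array];
    [self.pathPoints addObject:[NSValue valueWithCGPoint:p]];
    [self setNeedsDisplay]; // triggers drawRect: with the updated point list
}

// drawRect: strokes the stored points as one connected path:
- (void)drawRect:(CGRect)rect {
    if (self.pathPoints.count < 2) return;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 1.0, 1.0); // magenta
    CGPoint first = [self.pathPoints[0] CGPointValue];
    CGContextMoveToPoint(context, first.x, first.y);
    for (NSUInteger i = 1; i < self.pathPoints.count; i++) {
        CGPoint p = [self.pathPoints[i] CGPointValue];
        CGContextAddLineToPoint(context, p.x, p.y);
    }
    CGContextStrokePath(context);
}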
This subject comes up every now and then, so I created a longer blog post on the general concepts involved with one potential solution, creating and using your own graphics context, here: http://www.musingpaw.com/2012/04/drawing-in-ios-apps.html
I have a UIView object that rotates using CALayer's transform:
// Create uiview object.
UIImageView *block = [[UIImageView alloc] initWithFrame....]
// Apply rotation.
CATransform3D basicTrans = CATransform3DIdentity;
basicTrans.m34 = 1.0/-distance;
block.layer.transform = CATransform3DRotate(basicTrans, rangle, 1.0f, 0.0f, 0.0f);
After rotating, the edges of the object are not antialiased. I need to antialias them.
Help me, please. How can it be done?
One way to do this is by placing the image inside another view that's 5 pixels bigger. The bigger view should have a transparent rasterized border that will smooth the edges of the UIImageView:
view.layer.borderWidth = 3;
view.layer.borderColor = [UIColor clearColor].CGColor;
view.layer.shouldRasterize = YES;
view.layer.rasterizationScale = [[UIScreen mainScreen] scale];
Then, place your UIImageView inside this parent view and center it (With 2.5 pixels around each edge).
Finally, rotate the parent view instead of the image view.
It works very well - you can also encapsulate the whole thing in class that creates the hierarchy.
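A rough sketch of that wrapper, assuming block is the image view from the question, parentView is wherever it currently lives, and basicTrans/rangle come from the question's snippet (untested):
// Container is 5 points bigger (2.5 on each side) with a clear, rasterized border.
UIView *container = [[UIView alloc] initWithFrame:CGRectInset(block.frame, -2.5, -2.5)];
container.backgroundColor = [UIColor clearColor];
container.layer.borderWidth = 3;
container.layer.borderColor = [UIColor clearColor].CGColor;
container.layer.shouldRasterize = YES;
container.layer.rasterizationScale = [[UIScreen mainScreen] scale];

block.center = CGPointMake(CGRectGetMidX(container.bounds), CGRectGetMidY(container.bounds));
[container addSubview:block];
[parentView addSubview:container];

// Rotate the container instead of the image view:
container.layer.transform = CATransform3DRotate(basicTrans, rangle, 1.0f, 0.0f, 0.0f);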
Simply add this key-value pair to your Info.plist: UIViewEdgeAntialiasing set to YES.
Check the allowsEdgeAntialiasing property of CALayer.
block.layer.allowsEdgeAntialiasing = YES; // iOS7 and above.
I had a similar issue when rotating around the z-axis. Setting shouldRasterize = YES prevented the jagged edges however it came at a performance cost. In my case I was re-using the views (and its layers) and keeping the shouldRasterize = YES was slowing things down.
The solution was to turn off rasterization right after I no longer needed it. However, since the animation runs on another thread, there was no way of knowing when the animation was complete... until I found out about an extremely useful CATransaction method. This is actual code that I used, and it should illustrate its use:
// Create a key frame animation
CAKeyframeAnimation *wiggle = [CAKeyframeAnimation animationWithKeyPath:@"transform"];
NSInteger frequency = 5; // Higher value for faster vibration
NSInteger amplitude = 25; // Higher value for lower amplitude
// Create the values it will pass through
NSMutableArray *valuesArray = [[NSMutableArray alloc] init];
NSInteger direction = 1;
[valuesArray addObject:@0.0];
for (NSInteger i = frequency; i > 0; i--, direction *= -1) {
[valuesArray addObject:@((direction * M_PI_4 * (CGFloat)i / (CGFloat)amplitude))];
}
[valuesArray addObject:@0.0];
[wiggle setValues:valuesArray];
// Set the duration
[wiggle setAdditive:YES];
[wiggle setValueFunction:[CAValueFunction functionWithName:kCAValueFunctionRotateZ]];
[wiggle setDuration:0.6];
// Turn on rasterization to prevent jagged edges (anti-aliasing issues)
viewToRotate.layer.shouldRasterize = YES;
// ************ Important step **************
// Very useful method. The block is called after ALL animations have completed.
[CATransaction setCompletionBlock:^{
viewToRotate.layer.shouldRasterize = NO;
}];
// Animate the layer
[viewToRotate.layer addAnimation:wiggle forKey:@"wiggleAnimation"];
Worked like a charm for me.
I haven't tried this with implicit animations (i.e. animations triggered by a value change in an animatable property of a layer not associated with a view), but I'd expect it to work as long as the CATransaction method is called before the property change, to guarantee the block is handed to CATransaction before the animation starts.
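In the implicit-animation case, a sketch of what I mean (my assumption, not from the answer; someLayer and newPosition are placeholders) would wrap the property change in an explicit transaction so the completion block is registered first:
[CATransaction begin];
[CATransaction setCompletionBlock:^{
    someLayer.shouldRasterize = NO; // switch rasterization back off once the implicit animation ends
}];
someLayer.shouldRasterize = YES;
someLayer.position = newPosition; // changing an animatable property on a non-view layer animates implicitly
[CATransaction commit];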
This is a problem that I've been leaving and coming back to for a while now. I've never really nailed it.
What I've been trying to do is use CADisplayLink to dynamically draw pie-chart-style progress. My code works fine when I have 1-4 UIViews updating simultaneously. When I add any more than that, the drawing of the pies becomes very jerky.
I want to explain what I have been trying in the hope that somebody could point out the inefficiencies and suggest a better drawing method.
I create 16 UIViews and add a CAShapeLayer sublayer to each one. This is where I want to draw my pie slices.
I precalculate 360 CGPaths representing 0 to 360 degrees of a circle and store them in an array to try to improve performance.
In a master view I start a display link, loop through all my other views, calculate how much of a full pie each should show, then find the right path and assign it to its shape layer.
-(void)makepieslices
{
pies=[[NSMutableArray alloc]initWithCapacity:360];
float progress=0;
for(int i=0;i<=360;i++)
{
progress= (i* M_PI)/180;
CGMutablePathRef thePath = CGPathCreateMutable();
CGPathMoveToPoint(thePath, NULL, 0.f, 0.f);
CGPathAddLineToPoint(thePath, NULL, 28, 0.f);
CGPathAddArc(thePath, NULL, 0.f,0.f, 28, 0.f, progress, NO);
CGPathCloseSubpath(thePath);
_pies[i]=thePath;
}
}
- (void)updatePath:(CADisplayLink *)dLink {
for (int idx=0; idx<[spinnydelegates count]; idx++) {
id<SyncSpinUpdateDelegate> delegate = [spinnydelegates objectAtIndex:idx];
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[delegate updatePath:dLink];
});
}
}
- (void)updatePath:(CADisplayLink *)dLink {
dispatch_async(dispatch_get_global_queue(0, 0), ^{
currentarc=[engineref getsyncpercentForPad:cid pad:pid];
int progress;
progress = roundf(currentarc*360);
dispatch_async(dispatch_get_main_queue(), ^{
shapeLayer_.path = _pies[progress];
});
});
}
This technique just straight out isn't working for me when trying to simultaneously update more than 4 or 5 pies. Sixteen screen updates at the same time doesn't sound like it should be that big a deal for the iPad, which leads me to think I'm doing something very fundamentally wrong.
I'd really appreciate it if somebody could tell me why this technique results in jittery screen updates, and also suggest a different technique I could investigate that would let me perform 16 simultaneous shape-layer updates smoothly.
EDIT Just to give you an idea of how bad performance is: when I have all 16 pies drawing, the CPU goes up to 20%.
EDIT
This is based on studev's advice, but I don't see anything being drawn. segmentLayer is a CGLayerRef property of my PieView.
-(void)makepies
{
self.layerobjects=[NSMutableArray arrayWithCapacity:360];
CGFloat progress=0;
CGContextRef context=UIGraphicsGetCurrentContext();
for(int i =0;i<360;i++)
{
progress= (i*M_PI)/180.0f;
CGLayerRef segmentlayer=CGLayerCreateWithContext(context, CGSizeMake(30, 30), NULL);
CGContextRef layerContext=CGLayerGetContext(segmentlayer);
CGMutablePathRef thePath = CGPathCreateMutable();
CGPathMoveToPoint(thePath, NULL, 0.f, 0.f);
CGPathAddLineToPoint(thePath, NULL, 28, 0.f);
CGPathAddArc(thePath, NULL, 0.f,0.f, 28, 0.f, progress, NO);
CGPathCloseSubpath(thePath);
[layerobjects addObject:(id)segmentlayer];
CGLayerRelease(segmentlayer);
}
}
-(void)updatePath
{
int progress;
currentarc=[engineref getsyncpercent];
progress = roundf(currentarc*360);
//shapeLayer_.path = _pies[progress];
self.pieView.segmentLayer=(CGLayerRef)[layerobjects objectAtIndex:progress];
[self.pieView setNeedsDisplay];
}
-(void)drawRect:(CGRect)rect
{
CGContextRef context=UIGraphicsGetCurrentContext();
CGContextDrawLayerInRect(context, self.bounds, segmentLayer);
}
I think one of the first things you should look to do is buffer your segments (currently represented by CGPath objects) offscreen using CGLayer objects. From the docs:
Layers are suited for the following:
High-quality offscreen rendering of drawing that you plan to reuse.
For example, you might be building a scene and plan to reuse the same
background. Draw the background scene to a layer and then draw the
layer whenever you need it. One added benefit is that you don’t need
to know color space or device-dependent information to draw to a
layer.
Repeated drawing. For example, you might want to create a
pattern that consists of the same item drawn over and over. Draw the
item to a layer and then repeatedly draw the layer, as shown in Figure
12-1. Any Quartz object that you draw repeatedly—including CGPath,
CGShading, and CGPDFPage objects—benefits from improved performance if
you draw it to a CGLayer. Note that a layer is not just for onscreen
drawing; you can use it for graphics contexts that aren’t
screen-oriented, such as a PDF graphics context.
Create a UIView subclass that draws the pie. Give it an instance variable for that pie's current progress, and override drawRect: to draw the layer representing that progress. The view first needs to get a reference to the required CGLayer object, so implement a delegate with the method:
- (CGLayerRef)pieView:(PieView *)pieView segmentLayerForProgress:(NSInteger)progress context:(CGContextRef)context;
It will then become the delegate's job to return an existing CGLayerRef, or if it doesn't exist yet, create it. Since the CGLayer can only be created from within drawRect:, this delegate method should be called from PieView's drawRect: method. PieView should look something like this:
PieView.h
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
@class PieView;
@protocol PieViewDelegate <NSObject>
@required
- (CGLayerRef)pieView:(PieView *)pieView segmentLayerForProgress:(NSInteger)progress context:(CGContextRef)context;
@end
@interface PieView : UIView
@property(nonatomic, weak) id <PieViewDelegate> delegate;
@property(nonatomic) NSInteger progress;
@end
PieView.m
#import "PieView.h"
@implementation PieView
@synthesize delegate, progress;
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGLayerRef segmentLayer = [delegate pieView:self segmentLayerForProgress:self.progress context:context];
CGContextDrawLayerInRect(context, self.bounds, segmentLayer);
}
@end
Your PieView's delegate (most likely your view controller) then implements:
NSString *const SegmentCacheKey = @"SegmentForProgress:";
- (CGLayerRef)pieView:(PieView *)pieView segmentLayerForProgress:(NSInteger)progress context:(CGContextRef)context
{
// First, try to retrieve the layer from the cache
NSString *cacheKey = [SegmentCacheKey stringByAppendingFormat:@"%d", progress];
CGLayerRef segmentLayer = (__bridge CGLayerRef)[segmentsCache objectForKey:cacheKey]; // __bridge: the cache keeps ownership
if (!segmentLayer) { // If the layer hasn't been created yet
CGFloat progressAngle = (progress * M_PI) / 180.0f;
// Create the layer
segmentLayer = CGLayerCreateWithContext(context, layerSize, NULL);
CGContextRef layerContext = CGLayerGetContext(segmentLayer);
// Draw the segment
CGContextSetFillColorWithColor(layerContext, [[UIColor blueColor] CGColor]);
CGContextMoveToPoint(layerContext, layerSize.width / 2.0f, layerSize.height / 2.0f);
CGContextAddArc(layerContext, layerSize.width / 2.0f, layerSize.height / 2.0f, layerSize.width / 2.0f, 0.0f, progressAngle, NO);
CGContextClosePath(layerContext);
CGContextFillPath(layerContext);
// Cache the layer
[segmentsCache setObject:(__bridge_transfer id)segmentLayer forKey:cacheKey];
}
return segmentLayer;
}
So for each pie, create a new PieView and set its delegate. When you need to update a pie, update the PieView's progress property and call setNeedsDisplay.
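For example (untested sketch; pieViews and segmentsCache are assumed ivars of the controller, and index/currentarc stand in for whatever identifies the pie and its progress):
// Setup, e.g. in viewDidLoad:
segmentsCache = [[NSCache alloc] init];
pieViews = [NSMutableArray array];
for (NSInteger i = 0; i < 16; i++) {
    PieView *pie = [[PieView alloc] initWithFrame:CGRectMake((i % 4) * 80.0, (i / 4) * 80.0, 60.0, 60.0)];
    pie.delegate = self;
    [self.view addSubview:pie];
    [pieViews addObject:pie];
}

// Updating one pie (e.g. from your timer or display link callback):
PieView *pie = pieViews[index];
pie.progress = (NSInteger)roundf(currentarc * 360.0f);
[pie setNeedsDisplay];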
I'm using an NSCache here since there are a lot of graphics being stored, and it could take up a lot of memory. You could also limit the number of segments being drawn - 100 is probably plenty. Also, I agree with other comments/answers that you might try updating the views less often, as this will consume less CPU and battery power (60fps is probably not necessary).
I did some crude testing of this method on an iPad (1st gen) and managed to get well over 50 pies updating at 30fps.
dubbeat: ...CADisplayLink...
Justin: do you need to draw at the display's refresh rate?
dubbeat: The progress of the pie drawing is supposed to represent the playback progress of an mp3, so I guess the display's refresh rate at a minimum.
That's much faster than is necessary, unless you're trying to display some really, really, really exotic visualizer, which is very unlikely if your spinner's radius is 28pt. Also, there's no reason to draw faster than the display's frequency.
One side effect is that your spinner's superviews may also be updating at this high frequency. If you can make the spinner view opaque, you can reduce overdrawing of superviews (and subviews, if you have them).
60fps is a good number for a really fast desktop game. For an ornament/progress bar, it's far more than necessary.
Try this:
not using CADisplayLink, but the standard view system
use an NSTimer on the main run loop, begin with a frequency of 8 Hz* (a minimal sketch follows after this list)
adjust timer to taste
then let us know if that is adequately fast.
*the timer callback calls [spinner setNeedsDisplay]
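Here is that sketch of the timer setup (my sketch, untested; redrawTimer and spinnerViews are assumed properties):
// Start an 8 Hz timer on the main run loop:
self.redrawTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 8.0)
                                                    target:self
                                                  selector:@selector(redrawSpinners:)
                                                  userInfo:nil
                                                   repeats:YES];

// The callback just invalidates each spinner so drawRect: runs at the timer's rate:
- (void)redrawSpinners:(NSTimer *)timer {
    for (UIView *spinner in self.spinnerViews) {
        [spinner setNeedsDisplay];
    }
}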
Well, you could achieve some performance improvement by pre-assembling the background view, capturing the image of it, and then just using the image in an image view for the background. You could go further by capturing a view of the "relatively static" parts of your chart, updating that static view only when necessary.
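If you try that, a common way (a sketch, not from the answer; backgroundView is an assumed name) to capture the static background into an image is to render its layer into an image context:
UIGraphicsBeginImageContextWithOptions(backgroundView.bounds.size, NO, 0.0); // 0.0 = screen scale
[backgroundView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Display the snapshot instead of the live view hierarchy:
UIImageView *backgroundImageView = [[UIImageView alloc] initWithImage:snapshot];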
Store your 360 circle segments as textures and use OpenGL to animate the sequences.
I have a custom CALayer (say CircleLayer), containing custom properties (radius and tint). The layer renders itself in its drawInContext: method.
- (void)drawInContext:(CGContextRef)ctx {
NSLog(@"Drawing layer, tint is %@, radius is %@", self.tint, self.radius);
CGPoint centerPoint = CGPointMake(CGRectGetWidth(self.bounds)/2, CGRectGetHeight(self.bounds)/2);
CGContextMoveToPoint(ctx, centerPoint.x, centerPoint.y);
CGContextAddArc(ctx, centerPoint.x, centerPoint.y, [self.radius doubleValue], radians(0), radians(360), 0);
CGContextClosePath(ctx);
/* Filling it */
CGContextSetFillColorWithColor(ctx, self.tint.CGColor);
CGContextFillPath(ctx);
}
I want the radius to be animatable so I've implemented
+ (BOOL)needsDisplayForKey:(NSString *)key {
if ([key isEqualToString:@"radius"]) {
return YES;
}
return [super needsDisplayForKey:key];
}
And the animation is performed like this:
CABasicAnimation *theAnimation=[CABasicAnimation animationWithKeyPath:@"radius"];
theAnimation.duration=2.0;
theAnimation.fromValue=[NSNumber numberWithDouble:100.0];
theAnimation.toValue=[NSNumber numberWithDouble:50.0];
theAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
[circleLayer addAnimation:theAnimation forKey:@"animateRadius"];
circleLayer.radius = [NSNumber numberWithDouble:50.0];
drawInContext: gets called as expected during the animation to redraw the circle, however the tint is set to nil as soon as the animation starts and gets back to its original value when the animation ends.
I've concluded that if I want to animate a custom property and want other properties to keep their values during the animation, I have to animate them too, which I don't find convenient at all.
The purpose is not to grow/shrink a circle, I know I can use transformation for this. It is only to illustrate with a simple example the problem of animating a single custom property without having to animate all the other ones.
I've made a simple project illustrating the issue, which you can find here:
Sample project illustrating the issue
There is probably something I didn't get on how CoreAnimation works, I've performed intensive searching but I'm stuck with no clue. Anyone knows?
If I understood your question correctly, it goes like this: when you add an animation to a CALayer, it creates a so-called presentation copy of that layer using initWithLayer:. The presentation layer contains the actual animated state for each animation frame, while the original layer has the final state. The problem with animating your own properties is that CALayer does not copy them in initWithLayer:. If that's the case, you should override initWithLayer: and set up all the properties you need for the animation, that is, both tint and radius.
+ (BOOL)needsDisplayForKey:(NSString *)key {
if ([key isEqualToString:@"radius"] || [key isEqualToString:@"tint"]) {
return YES;
}
return [super needsDisplayForKey:key];
}
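And a sketch of the suggested initWithLayer: override for the question's CircleLayer (untested; assumes radius and tint are ordinary properties on the layer):
- (id)initWithLayer:(id)layer {
    self = [super initWithLayer:layer];
    if (self && [layer isKindOfClass:[CircleLayer class]]) {
        // Copy the custom properties onto the presentation copy so
        // drawInContext: still sees a valid tint and radius mid-animation.
        CircleLayer *original = (CircleLayer *)layer;
        self.radius = original.radius;
        self.tint = original.tint;
    }
    return self;
}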
The animation may require all the properties your drawing code uses to be present on the layer copy that is refreshed each frame.
I'm running into problems when dealing with a large number of UIButtons in my interface. I was wondering if anyone has first-hand experience with this and how they handled it?
When dealing with 30-80 buttons, most of them simple and a couple complex, do you just use UIButton, or do you do something different, like drawRect: and responding to touch events to get the coordinates of the touch?
The best example is a calendar, similar to Apple's Calendar app. Would you just draw most of the days using drawRect: and then, when one is tapped, replace it with an image, or would you just use UIButtons? It's not so much the memory footprint or creating the buttons; it's that strange things sometimes happen with them (see my previous question about it) and I'm having performance issues animating them.
Thanks for any help.
If "strange things are happening" with your buttons, you need to get to the bottom of why. Switching architectures just to avoid a problem that you don't understand (and might crop up again) doesn't sound like a good idea.
-drawRect: works by drawing to a bitmap-backed context. This happens when -displayIfNeeded is called after -setNeedsDisplay (or doing something else that implicitly sets the needsDisplay flag, like resizing a view with contentMode = UIContentModeRedraw). The bitmap-backed context is then composited to screen.
Buttons work by putting the different components (background image, foreground image, text) in different layers. The text is drawn when it changes and composited to the screen; the images are just composited directly to the screen.
The "best" way to do things is usually a combination of the two. For example, you might draw text and a background image in -drawRect: so the different layers didn't need to be composited at render time (you get an additional speedup if your view is "opaque"). You probably want to avoid full-screen animations via drawRect: (and it won't integrate so well with CoreAnimation), since drawing tends to be more expensive than compositing.
But first, I'd find out what's going wrong with UIButton. There's little point worrying about how you could make things faster until you actually find out what the slow bits are. Write code so that it is easy to maintain. UIButton is not that expensive and -drawRect: is not that bad (presumably it's even better if you use -setNeedsDisplayInRect: for a smallish rect, but then you need to calculate the rect...), but if you want a button, use UIButton.
Instead of using 30-80 UIButtons, I would prefer using images (if possible a single image, or as few as possible) and comparing the touch location.
And if I must create buttons, then obviously I won't create 30-80 variables for them. I'd set and read each view's tag to determine which one was tapped.
If this is all stuff you are animating, then you could create a bunch of CALayers with their contents set to a CGImage. You would have to compare the touch location to identify the layer. CALayers have a useful style property, an NSDictionary you can store metadata in.
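A rough sketch of that layer approach (tileFrame, tileImage, i, and the tileIndex key are all illustrative assumptions, not from the answer):
// Build one tappable tile layer:
CALayer *tileLayer = [CALayer layer];
tileLayer.frame = tileFrame;
tileLayer.contents = (id)tileImage.CGImage;
tileLayer.style = @{ @"tileIndex" : @(i) };   // arbitrary metadata about the tile
[self.view.layer addSublayer:tileLayer];

// In touchesBegan:withEvent:, compare the touch location against each layer:
CGPoint p = [[touches anyObject] locationInView:self.view];
for (CALayer *candidate in self.view.layer.sublayers) {
    if (CGRectContainsPoint(candidate.frame, p)) {
        NSNumber *tileIndex = [candidate.style objectForKey:@"tileIndex"];
        // ...handle a tap on tile tileIndex...
        break;
    }
}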
I just use the UIButtons unless there happens to be a specific performance issue that crops up. If they have similar functionality, however, such as a keyboard, I map them all to one IBAction and differentiate the behavior based on the sender.
What specific performance and animation issues are you running into?
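As a sketch of that single-action pattern (the tag assignments and dayCount are my assumption, keyboard- or calendar-style):
// When creating the buttons, give each one a tag identifying it:
for (NSInteger i = 0; i < dayCount; i++) {
    UIButton *day = [UIButton buttonWithType:UIButtonTypeCustom];
    day.tag = i; // set day.frame as needed
    [day addTarget:self action:@selector(dayTapped:) forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:day];
}

// One action for all of them; branch on the sender:
- (IBAction)dayTapped:(UIButton *)sender {
    NSInteger tappedDay = sender.tag;
    // ...handle the tap for day tappedDay...
}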
I recently ran across this problem myself when developing a game for the iPhone. I was using UIButtons to hold game tiles, then stylized them with transparent images, background colors and text.
It all worked well for a small number of tiles. Once we got to about 50, however, the performance dropped significantly. After scouring Google I discovered that others had experienced the same problem. It seems the iPhone struggles with lots of transparent buttons onscreen at once. Not sure if it's a bug in the UIButton code or just a limitation of the graphics hardware on the device, but either way, it's beyond your control as a programmer.
My solution was to draw the board by hand using Core Graphics. It seemed daunting at first, but in reality it was pretty easy. I just placed one big UIImageView on my ViewController in Interface Builder, made it an IBOutlet so I could alter it from Objective-C, then constructed the image with Core Graphics.
Since a UIImageView doesn't handle taps, I used the touchesBegan method of my UIViewController, and then triangulated the x/y coordinates of the touch to the precise tile on my game board.
The board now renders in less than a tenth of a second. Bingo!
If you need sample code, just let me know.
UPDATE: Here's a simplified version of the code I'm using. Should be enough for you to get the gist.
// CoreGraphicsTestViewController.h
// CoreGraphicsTest
#import <UIKit/UIKit.h>
@interface CoreGraphicsTestViewController : UIViewController {
UIImageView *testImageView;
}
@property (retain, nonatomic) IBOutlet UIImageView *testImageView;
-(void) drawTile: (CGContextRef) ctx row: (int) rowNum col: (int) colNum isPressed: (BOOL) tilePressed;
@end
... and the .m file ...
// CoreGraphicsTestViewController.m
// CoreGraphicsTest
#import "CoreGraphicsTestViewController.h"
#import <QuartzCore/QuartzCore.h>
#import <CoreGraphics/CoreGraphics.h>
@implementation CoreGraphicsTestViewController
@synthesize testImageView;
int iTileSize;
int iBoardSize;
int iRow;   // shared between viewDidLoad and the touch handlers
int iCol;
- (void)viewDidLoad {
iTileSize = 75;
iBoardSize = 3;
[testImageView setBounds: CGRectMake(0, 0, iBoardSize * iTileSize, iBoardSize * iTileSize)];
CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
for (iRow = 0; iRow < iBoardSize; iRow++) {
for (iCol = 0; iCol < iBoardSize; iCol++) {
[self drawTile: context row: iRow col: iCol isPressed: NO];
}
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
[testImageView setImage: image];
UIGraphicsEndImageContext();
[super viewDidLoad];
}
- (void)dealloc {
[testImageView release];
[super dealloc];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInView: testImageView];
if ((location.x >= 0) && (location.y >= 0) && (location.x <= testImageView.bounds.size.width) && (location.y <= testImageView.bounds.size.height)) {
UIImage *theIMG = testImageView.image;
CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[theIMG drawInRect: rect];
iRow = location.y / iTileSize;
iCol = location.x / iTileSize;
[self drawTile: context row: iRow col: iCol isPressed: YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
[testImageView setImage: image];
UIGraphicsEndImageContext();
}
}
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
UIImage *theIMG = testImageView.image;
CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[theIMG drawInRect: rect];
[self drawTile: context row: iRow col: iCol isPressed: NO];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
[testImageView setImage: image];
UIGraphicsEndImageContext();
}
-(void) drawTile: (CGContextRef) ctx row: (int) rowNum col: (int) colNum isPressed: (BOOL) tilePressed {
CGRect rrect = CGRectMake((colNum * iTileSize), (rowNum * iTileSize), iTileSize, iTileSize);
CGContextClearRect(ctx, rrect);
if (tilePressed) {
CGContextSetFillColorWithColor(ctx, [[UIColor redColor] CGColor]);
} else {
CGContextSetFillColorWithColor(ctx, [[UIColor greenColor] CGColor]);
}
UIImage *theImage = [UIImage imageNamed:@"tile.png"];
[theImage drawInRect: rrect];
}