I am trying to draw individual pixels in Xcode to be output to the iPhone. I don't know any OpenGL or Quartz coding, but I do know a bit about Core Graphics. I was thinking about drawing small rectangles with a width and height of one, but I don't know how to implement this in code or how to get it to show in the view. Any help is greatly appreciated.
For a custom UIView subclass that allows plotting dots of a fixed size and color:
// Make a UIView subclass
@interface PlotView : UIView

@property (nonatomic) CGContextRef context;
@property (nonatomic) CGLayerRef drawingLayer; // this is the drawing surface

- (void) plotPoint:(CGPoint) point; // public method for plotting
- (void) clear; // erases drawing surface

@end
// implementation
#define kDrawingColor ([UIColor yellowColor].CGColor)
#define kLineWeight (1.5)

@implementation PlotView
@synthesize context = _context, drawingLayer = _drawingLayer;
- (id) initPlotViewWithFrame:(CGRect) frame {
    self = [super initWithFrame:frame];
    if (self) {
        // this is total boilerplate, it rarely needs to change
        self.backgroundColor = [UIColor clearColor];
        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
        CGFloat width = frame.size.width;
        CGFloat height = frame.size.height;
        size_t bitsPerComponent = 8;
        size_t bytesPerRow = (4 * width);
        self.context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorspace);

        CGSize size = frame.size;
        self.drawingLayer = CGLayerCreateWithContext(self.context, size, NULL);
    }
    return self;
}
// override drawRect to put the drawing surface onto the screen
// you don't actually call this directly, the system will call it
- (void) drawRect:(CGRect) rect {
    // this creates a new blank image, then gets the surface you've drawn on, and stamps it down
    // at some point, the hardware will render this onto the screen
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGImageRef image = CGBitmapContextCreateImage(self.context);
    CGRect bounds = [self bounds];
    CGContextDrawImage(currentContext, bounds, image);
    CGImageRelease(image);
    CGContextDrawLayerInRect(currentContext, bounds, self.drawingLayer);
}
// simulate plotting dots by drawing a very short line with rounded ends
// if you need to draw some other kind of shape, study this part, along with the docs
- (void) plotPoint:(CGPoint) point {
    CGContextRef layerContext = CGLayerGetContext(self.drawingLayer); // get ready to draw on your drawing surface

    // prepare to draw
    CGContextSetLineWidth(layerContext, kLineWeight);
    CGContextSetLineCap(layerContext, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(layerContext, kDrawingColor);

    // draw onto the surface by building a path, then stroking it
    CGContextBeginPath(layerContext); // start
    CGFloat x = point.x;
    CGFloat y = point.y;
    CGContextMoveToPoint(layerContext, x, y);
    CGContextAddLineToPoint(layerContext, x, y);
    CGContextStrokePath(layerContext); // finish

    [self setNeedsDisplay]; // this tells the system to call drawRect at a time of its choosing
}
- (void) clear {
    CGContextClearRect(CGLayerGetContext(self.drawingLayer), [self bounds]);
    [self setNeedsDisplay];
}
// teardown
- (void) dealloc {
    CGContextRelease(_context);
    CGLayerRelease(_drawingLayer);
    [super dealloc];
}

@end
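A minimal usage sketch, assuming pre-ARC manual memory management like the rest of this answer (the view controller context is illustrative):

// somewhere in a view controller:
PlotView *plot = [[PlotView alloc] initPlotViewWithFrame:self.view.bounds];
[self.view addSubview:plot];
[plot release];

[plot plotPoint:CGPointMake(10.0f, 20.0f)]; // draws one yellow dot at (10, 20)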
If you want to be able to draw pixels that are cumulatively added to previously drawn pixels, then you will need to create your own bitmap graphics context, backed by your own bitmap memory. You can then set individual pixels in the bitmap memory, or draw short lines or small rectangles in your graphics context. To display your drawing context, first convert it to a CGImageRef. Then you can either draw this image to a subclassed UIView in the view's drawRect, or assign the image to the contents of the UIView's CALayer.
Look up: CGBitmapContextCreate and CGBitmapContextCreateImage in Apple's documentation.
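A rough sketch of that approach, assuming a view called myView and an RGBA pixel layout (illustrative, not production code):

// create a 32-bit RGBA bitmap context backed by memory you own
size_t width = 320, height = 480;
size_t bytesPerRow = width * 4;
unsigned char *pixels = calloc(height * bytesPerRow, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// set one pixel directly in the backing memory (premultiplied RGBA, row 0 = top)
size_t x = 10, y = 20;
unsigned char *pixel = pixels + (y * bytesPerRow) + (x * 4);
pixel[0] = 255; pixel[1] = 255; pixel[2] = 0; pixel[3] = 255; // opaque yellow

// convert the context to an image and hand it to the view's layer for display
CGImageRef image = CGBitmapContextCreateImage(context);
myView.layer.contents = (id)image; // requires <QuartzCore/QuartzCore.h>
CGImageRelease(image);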
ADDED:
I wrote up a longer explanation of why you might need to do this when drawing pixels in an iOS app, plus some source code snippets, on my blog: http://www.musingpaw.com/2012/04/drawing-in-ios-apps.html
All drawing needs to go into the - (void)drawRect:(CGRect)rect method. [self setNeedsDisplay] only flags the view for a redraw; the problem is that you're redrawing nothing.
Related
I'm working on an app where I need to divide an image into two parts using a red line:
left part for labels
right part for prices
Question 1.
How can I draw a red line on an image?
Question 2.
How can I divide the image into two parts using the red line? (The red line's position is not fixed; the user can move it wherever they want.)
Question 3.
How can I get the line's current position, and how can I use that position to divide the image?
Thanks in advance
I would approach this in somewhat the same manner as koray was suggesting:
1) I am assuming that the image/view above is going to be managed by a view controller, which I will call ImageSeperatorViewController from here on.
Inside ImageSeperatorViewController, insert koray's code into the -(void) viewDidLoad{} method. Make sure you change the imageToSplit variable to be a UIImageView instead of a plain UIView.
2) Next, I assume that you know how to detect user gestures. You will detect these gestures and determine whether the user has selected the divider view (i.e. bar in koray's code). Once you have determined that the user has selected bar, just update its origin's X position with the touch position, as sketched below:
CGRect barFrame = bar.frame;
barFrame.origin.x = /* X location of the user's touch */;
bar.frame = barFrame;
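One way to wire this up, as a sketch, is with a pan gesture recognizer (handlePan: is a name I've made up; bar is assumed to be an ivar):

// in viewDidLoad, after creating bar:
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
    initWithTarget:self action:@selector(handlePan:)];
[bar addGestureRecognizer:pan];
[pan release];

// elsewhere in the view controller:
- (void)handlePan:(UIPanGestureRecognizer *)gesture {
    CGPoint location = [gesture locationInView:self.view];
    CGRect barFrame = bar.frame;
    barFrame.origin.x = location.x - (barFrame.size.width / 2.0f);
    bar.frame = barFrame;
}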
3) For cropping, I would not use github.com/bilalmughal/NLImageCropper, it will not do what you need to do.
Try this on for size:
Header:
@interface UIImage (ImageDivider)
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor*)color;
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage;
- (NSArray*)imagesBySlicingAt:(CGFloat)position;
@end
Implementation:
@implementation UIImage (ImageDivider)

- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage
{
    //pattern image
    UIColor *patternColor = [UIColor colorWithPatternImage:patternImage];
    CGFloat width = patternImage.size.width;

    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];

    //set the fill color from the pattern image color
    CGContextSetFillColorWithColor(context, patternColor.CGColor);

    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);

    //the joy of image color patterns being based on a 0,0 origin! must set the phase
    CGContextSetPatternPhase(context, CGSizeMake(dividerRect.origin.x, 0));

    //fill the divider rect with the repeating pattern from the image
    CGContextFillRect(context, dividerRect);

    //get your new image and voilà!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor *)color
{
    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];

    //set the fill color for your divider
    CGContextSetFillColorWithColor(context, color.CGColor);

    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);

    //fill the divider's rect with the provided color
    CGContextFillRect(context, dividerRect);

    //get your new image and voilà!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (NSArray*)imagesBySlicingAt:(CGFloat)position
{
    NSMutableArray *slices = [NSMutableArray array];

    //first image
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(position, self.size.height));

        //draw the existing image into the context
        [self drawAtPoint:CGPointZero];

        //get your new image and voilà!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }

    //second
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(self.size.width - position, self.size.height));

        //draw the existing image into the context, offset so the right half lands at the origin
        [self drawAtPoint:CGPointMake(-position, 0)];

        //get your new image and voilà!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }

    return slices;
}

@end
The concept is simple - you want an image with the divider drawn over it. You could just overlay a view, or override drawRect:, or any number of other solutions. I'd rather give you this category. It just uses some quick Core Graphics calls to generate an image with your desired divider, be it pattern image or color, at the specified position. If you want support for horizontal dividers as well, it is rather trivial to modify this accordingly. Bonus: you can use a tiled image as your divider!
Now to answer your primary question. Using the category is rather self explanatory - just call one of the two methods on your source background to generate one with the divider, and then apply that image rather than the original source image.
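For example (a sketch; sourceImage, imageView, and dividerX are assumed to be yours):

UIImage *withDivider = [sourceImage imageWithDividerAt:dividerX
                                                 width:4.0f
                                                 color:[UIColor redColor]];
imageView.image = withDivider;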
Now, the second question is simple - when the divider has been moved, regenerate the image based on the new divider position. This is a relatively inefficient way of doing it, but it ought to be lightweight enough for your purposes, and it only costs anything while the divider is actually being moved. Premature optimization is just as much a sin.
Third question is also simple - call imagesBySlicingAt: - it will return an array of two images, as generated by slicing through the image at the provided position. Use them as you wish.
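Something like this, under the same assumptions as above:

NSArray *slices = [sourceImage imagesBySlicingAt:dividerX];
UIImage *labelsPart = [slices objectAtIndex:0]; // left of the divider
UIImage *pricesPart = [slices objectAtIndex:1]; // right of the divider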
This code has been tested and is functional. I strongly suggest that you fiddle around with it, not for any purpose of utility, but to better understand the mechanisms used, so that next time you can be on the answering side of things.
For cropping, you can try this:
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
CGImageRef tmpImgRef = image.CGImage;
CGImageRef topImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, 0, image.size.width, image.size.height / 2.0));
UIImage *topImage = [UIImage imageWithCGImage:topImgRef];
CGImageRelease(topImgRef);
CGImageRef bottomImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, image.size.height / 2.0, image.size.width, image.size.height / 2.0));
UIImage *bottomImage = [UIImage imageWithCGImage:bottomImgRef];
CGImageRelease(bottomImgRef);
hope this can help you, :)
If you want to draw a line, you could just use a UIView with a red background, making the height the size of your image and the width around 5 pixels.
UIView *imageToSplit; //the image im trying to split using a red bar
CGRect i = imageToSplit.frame;
int x = i.origin.x + i.size.width/2;
int y = i.origin.y;
int width = 5;
int height = i.size.height;
UIView *bar = [[[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)] autorelease];
bar.backgroundColor = [UIColor redColor];
[self.view addSubview:bar];
This is my first question, so please bear with me!
I'm trying to write a simple drawing app, basically. I was using Core Graphics before, and the only problem was that it was too slow; when I drew with my finger it lagged, a hell of a lot!
So now I'm trying to use UIBezierPaths to draw, which I understood to be a lot faster - and it is!
When I was using Core Graphics, to keep the drawing speed up I was drawing to a custom bitmap context I created, which was constantly being updated as I drew.
So, I drew to my custom Bitmap context, then a CGImageRef was set to what was drawn in that context using -
cacheImage = CGBitmapContextCreateImage(imageContext);
and that was then drawn back into the bitmap context using -
CGContextDrawImage(imageContext, self.bounds, cacheImage);
I also did this so when I changed the colour of the line being drawn, the rest of the drawing stayed as it was previously drawn, if that makes sense.
Now the problem I've come across is this.
I'm trying to draw the UIBezierPath to my image context using -
imageContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(imageContext);
[path stroke];
if(imageContext != nil){
cacheImage = CGBitmapContextCreateImage(imageContext); //invalid context here so added to solve
}
CGContextScaleCTM(imageContext, 1, -1); // using this as UIBezier
CGContextDrawImage(imageContext, self.bounds, cacheImage); // draws the current context;
[path removeAllPoints];
CGImageRelease(cacheImage); // releases the image to solve memory errors.
with path being my UIBezierPath. All the path setup is done in touchesBegan and touchesMoved, then [self setNeedsDisplay]; is called to trigger drawRect.
What's happening is when I draw, it's either not drawing the CGImageRef to the context properly, or it is, but when it captures the cache image it's capturing a white background from somewhere instead of just the path. So it pastes over the entire image with the last path drawn, together with a white background fill, and you can't see the previous paths that built the image up, even though the view's background color is clearColor.
I really hope I'm making sense; I've just spent too many hours on this and it's drained me completely. Here's the drawing method I'm using -
This is to create the image context -
- (CGContextRef)myCreateBitmapContext:(int)pixelsWide :(int)pixelsHigh {
    imageContext = NULL;
    CGColorSpaceRef colorSpace;   // the color space for the context
    void *bitmapData;             // the backing bitmap memory
    int bitmapByteCount;          // total number of bytes in the bitmap
    int bitmapBytesPerRow;        // number of bytes per row

    bitmapBytesPerRow = (pixelsWide * 4);               // how many bytes per row the context needs
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh); // how many bytes there are in total

    colorSpace = CGColorSpaceCreateDeviceRGB(); // setting the colorspace
    bitmapData = malloc(bitmapByteCount);       // allocating the backing memory
    if (bitmapData == NULL)
    {
        //NSLog(@"Memory not allocated!");
        return NULL;
    }
    imageContext = CGBitmapContextCreate(bitmapData,
                                         pixelsWide,
                                         pixelsHigh,
                                         8, // bits per component
                                         bitmapBytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
    if (imageContext == NULL)
    {
        free(bitmapData);
        NSLog(@"context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace); // releasing the colorspace

    CGContextSetRGBFillColor(imageContext, 1.0, 1.0, 1.0, 0.0); // fill the bitmap with a fully transparent color (alpha is 0)
    CGContextFillRect(imageContext, self.bounds);
    CGContextSetShouldAntialias(imageContext, YES);
    return imageContext;
}
And here's my drawing -
-(void)drawRect:(CGRect)rect
{
    DataClass *data = [DataClass sharedInstance];
    [data.lineColor setStroke];
    [path setLineWidth:data.lineWidth];
    imageContext = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(imageContext);
    [path stroke];
    if (imageContext != nil) {
        cacheImage = CGBitmapContextCreateImage(imageContext);
    }
    CGContextScaleCTM(imageContext, 1, -1); // this one
    CGContextDrawImage(imageContext, self.bounds, cacheImage); // draws the current context
    [path removeAllPoints];
    CGImageRelease(cacheImage); // releases the image to solve memory errors
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    DataClass *data = [DataClass sharedInstance];
    CGContextSetStrokeColorWithColor(imageContext, [data.lineColor CGColor]);
    ctr = 0;
    UITouch *touch2 = [touches anyObject];
    pts[0] = [touch2 locationInView:self];
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint p = [touch locationInView:self];
    ctr++;
    pts[ctr] = p;
    if (ctr == 4)
    {
        // move the endpoint to the middle of the line joining the second control point
        // of the first Bezier segment and the first control point of the second segment
        pts[3] = CGPointMake((pts[2].x + pts[4].x)/2.0, (pts[2].y + pts[4].y)/2.0);
        [path moveToPoint:pts[0]];
        // add a cubic Bezier from pts[0] to pts[3], with control points pts[1] and pts[2]
        [path addCurveToPoint:pts[3] controlPoint1:pts[1] controlPoint2:pts[2]];
        //[data.lineColor setStroke];
        [self setNeedsDisplay];
        // replace points and get ready to handle the next segment
        pts[0] = pts[3];
        pts[1] = pts[4];
        ctr = 1;
    }
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [path removeAllPoints];
    [self setNeedsDisplay];
    ctr = 0;
}
'path' is my UIBezierPath
'cacheImage' is a CGImageRef
'imageContext' is a CGContextRef
Any help is much appreciated! And if you can think of a better way to do it, please let me know! I do, however, need the cache image to have a transparent background, so just the paths are visible, as I'm going to apply something later on when I get this working!
EDIT: Also, I'm removing the points every time to keep the drawing speed up, just so you know!
Thanks in advance :)
Well, this is a big question. One potential lead would be to verify that you draw exactly what you need (not the whole image, all the time), and to separate the invariant bitmap from the regions/rects that actively mutate, across multiple layers.
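For instance, instead of invalidating the whole view after every segment, you could invalidate only the bounding box of the new curve (a sketch against the touchesMoved: code above; the inset pads for the line width and round caps):

// after [path addCurveToPoint:pts[3] controlPoint1:pts[1] controlPoint2:pts[2]]:
CGRect dirty = CGRectMake(pts[0].x, pts[0].y, 1.0f, 1.0f);
dirty = CGRectUnion(dirty, CGRectMake(pts[1].x, pts[1].y, 1.0f, 1.0f));
dirty = CGRectUnion(dirty, CGRectMake(pts[2].x, pts[2].y, 1.0f, 1.0f));
dirty = CGRectUnion(dirty, CGRectMake(pts[3].x, pts[3].y, 1.0f, 1.0f));
CGFloat pad = [DataClass sharedInstance].lineWidth;
[self setNeedsDisplayInRect:CGRectInset(dirty, -pad, -pad)]; // redraw only this area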
Hi, I am making a sample app in which I want to create a square, for which I used the following code:
- (void)viewDidLoad {
    [super viewDidLoad];
    [self drawRect:CGRectMake(0, 0, 300, 200)];
    [[self view] setNeedsDisplay];
}
- (void) drawRect:(CGRect)rect
{
    NSLog(@"drawRect");
    CGFloat centerx = rect.size.width/2;
    CGFloat centery = rect.size.height/2;
    CGFloat half = 100/2;
    CGRect theRect = CGRectMake(-half, -half, 100, 100);

    // Grab the drawing context
    CGContextRef context = UIGraphicsGetCurrentContext();

    // like Processing pushMatrix
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, centerx, centery);

    // Uncomment to see the rotated square
    //CGContextRotateCTM(context, rotation);

    // Set red stroke and green fill
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
    CGContextSetRGBFillColor(context, 0.0, 1.0, 0.0, 1.0);

    // Draw the rect with a green fill and red stroke
    CGContextFillRect(context, theRect);
    CGContextStrokeRect(context, theRect);

    // like Processing popMatrix
    CGContextRestoreGState(context);

    [[self view] setNeedsDisplay];
}
But nothing is drawn on screen, and I don't know where the issue is. When I debug it, the CGContextRef context is always 0x0; I don't know why. Am I missing something in my code?
It looks like you're trying to draw in a subclass of UIViewController. You need to subclass UIView to override the drawRect: method, which is then called automatically with a valid graphics context in place. You almost never call this method yourself.
To quote the Apple docs:
"To draw to the screen in an iOS application, you set up a UIView object and implement its drawRect: method to perform drawing. The view’s drawRect: method is called when the view is visible onscreen and its contents need updating. Before calling your custom drawRect: method, the view object automatically configures its drawing environment so that your code can start drawing immediately. As part of this configuration, the UIView object creates a graphics context (a CGContextRef opaque type) for the current drawing environment. You obtain this graphics context in your drawRect: method by calling the UIKit function UIGraphicsGetCurrentContext."
So essentially, your code is on track - you just need to get it into the right place. It needs to be in the view object.
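A minimal sketch of that structure (SquareView is a made-up name):

// SquareView.h
@interface SquareView : UIView
@end

// SquareView.m - the system calls drawRect: with a valid context already set up
@implementation SquareView
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext(); // no longer 0x0 here
    CGContextSetRGBFillColor(context, 0.0, 1.0, 0.0, 1.0);
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
    CGRect theRect = CGRectMake(CGRectGetMidX(rect) - 50, CGRectGetMidY(rect) - 50, 100, 100);
    CGContextFillRect(context, theRect);
    CGContextStrokeRect(context, theRect);
}
@end

// in the view controller, add the view instead of calling drawRect: yourself:
// [self.view addSubview:[[[SquareView alloc] initWithFrame:CGRectMake(0, 0, 300, 200)] autorelease]];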
I'm running into problems when dealing with a large amount of UIButtons in my interface. I was wondering if anyone had first hand experience with this and how they did it?
When dealing with 30-80 buttons - most of them simple, a couple complex - do you just use UIButton, or do you do something different, like drawRect combined with responding to touch events and getting the coordinates of the touch?
Best example is a calendar, similar to Apple's Calendar app. Would you just draw most of the days using drawRect and then, when a day is tapped, replace it with an image, or just use UIButtons? It's not so much the memory footprint or creating the buttons; it's just that strange things sometimes happen with them (see my previous question about it), and I'm having performance issues animating them.
Thanks for any help.
If "strange things are happening" with your buttons, you need to get to the bottom of why. Switching architectures just to avoid a problem that you don't understand (and might crop up again) doesn't sound like a good idea.
-drawRect: works by drawing to a bitmap-backed context. This happens when -displayIfNeeded is called after -setNeedsDisplay (or doing something else that implicitly sets the needsDisplay flag, like resizing a view with contentMode = UIContentModeRedraw). The bitmap-backed context is then composited to screen.
Buttons work by putting the different components (background image, foreground image, text) in different layers. The text is drawn when it changes and composited to the screen; the images are just composited directly to the screen.
The "best" way to do things is usually a combination of the two. For example, you might draw text and a background image in -drawRect: so the different layers didn't need to be composited at render time (you get an additional speedup if your view is "opaque"). You probably want to avoid full-screen animations via drawRect: (and it won't integrate so well with CoreAnimation), since drawing tends to be more expensive than compositing.
But first, I'd find out what's going wrong with UIButton. There's little point worrying about how you could make things faster until you actually find out what the slow bits are. Write code so that it is easy to maintain. UIButton is not that expensive and -drawRect: is not that bad (presumably it's even better if you use -setNeedsDisplayInRect: for a smallish rect, but then you need to calculate the rect...), but if you want a button, use UIButton.
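For example (a sketch; changedRect is whatever area you know actually changed):

self.opaque = YES; // an opaque view skips blending when composited
[self setNeedsDisplayInRect:changedRect]; // redraw only the dirty area, not the whole view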
Instead of using 30-80 UIButtons, I would prefer using images (if possible a single image, or as small a number as possible) and comparing the touch location.
And if I must create buttons, then obviously I will not create 30-80 variables for them. I will set and get the view tag to determine which one was tapped, as sketched below.
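A sketch of the tag approach (frameForDay: and dayTapped: are made-up names):

for (NSInteger day = 1; day <= 31; day++) {
    UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
    button.frame = [self frameForDay:day]; // your own layout helper
    button.tag = day;                      // no per-button ivars needed
    [button addTarget:self action:@selector(dayTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button];
}

- (void)dayTapped:(UIButton *)sender {
    NSInteger day = sender.tag; // identifies which button fired
    // ...
}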
If this is all stuff you are animating, then you could create a bunch of CALayers with their contents set to a CGImage. You would have to compare the touch location to identify the tapped layer. CALayers have a useful style property that is an NSDictionary you can store metadata in.
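Roughly like this (a sketch; tile.png and the tileID key are illustrative):

#import <QuartzCore/QuartzCore.h>

CALayer *tile = [CALayer layer];
tile.frame = CGRectMake(0, 0, 75, 75);
tile.contents = (id)[UIImage imageNamed:@"tile.png"].CGImage;
tile.style = [NSDictionary dictionaryWithObject:@"day-1" forKey:@"tileID"]; // your metadata
[self.view.layer addSublayer:tile];

// in touchesBegan:, compare the touch location against each layer's frame:
CGPoint p = [[touches anyObject] locationInView:self.view];
for (CALayer *layer in self.view.layer.sublayers) {
    if (CGRectContainsPoint(layer.frame, p)) {
        NSLog(@"tapped tile %@", [layer.style objectForKey:@"tileID"]);
    }
}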
I just use the UIButtons unless there happens to be a specific performance issue that crops up. If they have similar functionality, however, such as a keyboard, I map them all to one IBAction and differentiate the behavior based on the sender.
What specific performance and animation issues are you running into?
I recently ran across this problem myself when developing a game for the iPhone. I was using UIButtons to hold game tiles, then stylized them with transparent images, background colors and text.
It all worked well for a small number of tiles. Once we got to about 50, however, the performance dropped significantly. After scouring Google I discovered that others had experienced the same problem. It seems the iPhone struggles with lots of transparent buttons onscreen at once. Not sure if it's a bug in the UIButton code or just a limitation of the graphics hardware on the device, but either way, it's beyond your control as a programmer.
My solution was to draw the board by hand using Core Graphics. It seemed daunting at first, but in reality it was pretty easy. I just placed one big UIImageView on my ViewController in Interface Builder, made it an IBOutlet so I could alter it from Objective-C, then constructed the image with Core Graphics.
Since a UIImageView doesn't handle taps, I used the touchesBegan method of my UIViewController, and then triangulated the x/y coordinates of the touch to the precise tile on my game board.
The board now renders in less than a tenth of a second. Bingo!
If you need sample code, just let me know.
UPDATE: Here's a simplified version of the code I'm using. Should be enough for you to get the gist.
// CoreGraphicsTestViewController.h
// CoreGraphicsTest

#import <UIKit/UIKit.h>

@interface CoreGraphicsTestViewController : UIViewController {
    UIImageView *testImageView;
}

@property (retain, nonatomic) IBOutlet UIImageView *testImageView;

- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed;

@end
... and the .m file ...
// CoreGraphicsTestViewController.m
// CoreGraphicsTest

#import "CoreGraphicsTestViewController.h"
#import <QuartzCore/QuartzCore.h>
#import <CoreGraphics/CoreGraphics.h>

@implementation CoreGraphicsTestViewController

@synthesize testImageView;

int iTileSize;
int iBoardSize;
int iRow; // shared with the touch handlers below
int iCol;
- (void)viewDidLoad {
    iTileSize = 75;
    iBoardSize = 3;
    [testImageView setBounds:CGRectMake(0, 0, iBoardSize * iTileSize, iBoardSize * iTileSize)];
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (iRow = 0; iRow < iBoardSize; iRow++) {
        for (iCol = 0; iCol < iBoardSize; iCol++) {
            [self drawTile:context row:iRow col:iCol isPressed:NO];
        }
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
    [super viewDidLoad];
}
- (void)dealloc {
    [testImageView release];
    [super dealloc];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:testImageView];
    if ((location.x >= 0) && (location.y >= 0) &&
        (location.x <= testImageView.bounds.size.width) &&
        (location.y <= testImageView.bounds.size.height)) {
        UIImage *theIMG = testImageView.image;
        CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [theIMG drawInRect:rect];
        iRow = location.y / iTileSize;
        iCol = location.x / iTileSize;
        [self drawTile:context row:iRow col:iCol isPressed:YES];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        [testImageView setImage:image];
        UIGraphicsEndImageContext();
    }
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UIImage *theIMG = testImageView.image;
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [theIMG drawInRect:rect];
    [self drawTile:context row:iRow col:iCol isPressed:NO];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
}
- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed {
    CGRect rrect = CGRectMake((colNum * iTileSize), (rowNum * iTileSize), iTileSize, iTileSize);
    CGContextClearRect(ctx, rrect);
    if (tilePressed) {
        CGContextSetFillColorWithColor(ctx, [[UIColor redColor] CGColor]);
    } else {
        CGContextSetFillColorWithColor(ctx, [[UIColor greenColor] CGColor]);
    }
    CGContextFillRect(ctx, rrect); // fill with the pressed/unpressed background color
    UIImage *theImage = [UIImage imageNamed:@"tile.png"];
    [theImage drawInRect:rrect];
}

@end
I'm implementing a subclass of UIView that displays a gauge dial with a sprite for the indicator. It has an angle property that I can vary to make the needle point to different angles. It works, but the same values for the position of the needle make it show up in different locations on the phone and in the simulator. It's an iPhone 4, so I'm sure the double-resolution thing is behind this, but I don't know what to do about it. I tried setting the UIView's layer's contentScaleFactor, but that fails. I thought UIView got the resolution thing for free. Any suggestions?
I should note that the NSLog statements report 150 for both .frame.size. dimensions, in both the simulator and the device.
Here's the .m file
UPDATE: In the simulator, I found out how to set the hardware to iPhone 4, and it looks just like the device now; both are scaling and positioning the sprite at half size.
UPDATE 2: I made a workaround. I set the .scale of my sprite equal to the UIView's contentScaleFactor, and then use it to divide the UIView's dimensions in half if it's a lo-res screen, or use the full width if it's hi-res. I still don't see why this is necessary, as I should be working in points now, not pixels. It must have something to do with the custom drawing code in the Sprite or VectorSprite classes.
I'd still appreciate some feedback if anyone has some...
#import "GaugeView.h"
#implementation GaugeView
#synthesize needle;
#define kVectorArtCount 4
static CGFloat kVectorArt[] = {
3,-4,
2,55,
-2,55,
-3,-4
};
- (id)initWithCoder:(NSCoder *)coder {
    if (self = [super initWithCoder:coder]) {
        needle = [VectorSprite withPoints:kVectorArt count:kVectorArtCount];
        needle.scale = (float)self.contentScaleFactor; // returns 1 for lo-res, 2 for hi-res
        NSLog(@" needle.scale = %1.1f", needle.scale);
        needle.x = self.frame.size.width / ((float)(-self.contentScaleFactor) + 3.0); // divisor = 1 for hi-res, 2 for lo-res
        NSLog(@" needle.x = %1.1f", needle.x);
        needle.y = self.frame.size.height / ((float)(-self.contentScaleFactor) + 3.0);
        NSLog(@" needle.y = %1.1f", needle.y);
        needle.r = 0.0;
        needle.g = 0.0;
        needle.b = 0.0;
        needle.alpha = 1.0;
        self.backgroundColor = [UIColor clearColor];
    }
    return self;
}
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        // Initialization code
    }
    return self;
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    t0 = CGAffineTransformInvert(t0);
    CGContextConcatCTM(context, t0);
    [needle updateBox];
    [needle draw:context];
    CGContextRestoreGState(context); // balance the save above
}
- (void)dealloc {
    [needle release];
    [super dealloc];
}

@end
I believe the answer is that iOS takes care of the resolution scaling automatically in drawRect methods, but in custom drawing code, you have to do it yourself.
In my example, I used the UIView's contentScaleFactor to scale my sprite. In the future, in my custom draw method (not shown), I'll query [[UIScreen mainScreen] scale] and scale accordingly there.
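A sketch of that, assuming you render the sprite into an offscreen image (size is whatever your sprite needs):

CGFloat screenScale = [[UIScreen mainScreen] scale]; // 1.0 on lo-res, 2.0 on Retina
UIGraphicsBeginImageContextWithOptions(size, NO, screenScale); // backing store in pixels, drawing in points
// ... custom drawing here, all in points ...
UIImage *spriteImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();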