I'm still fairly new to Core Graphics programming, so please bear with me. I'm trying to write an application that lets the user rub stuff off an image with a finger. I have the basic functionality nailed down, but the result is sluggish, since the whole screen is redrawn every time a touch is rendered. I did some research and found that I can refresh only a portion of the screen using UIView's setNeedsDisplayInRect: method.
This does call drawRect: as expected. However, everything I draw in drawRect: after calling setNeedsDisplayInRect: is ignored; instead, the area in the rect parameter is simply filled with white. No matter what I draw inside, all I end up with is a white rectangle.
In essence, this is what I do:
1) when the user touches the screen, that touch is rendered into a mask
2) when drawRect: is called, the image is masked with that mask
There must be something simple I'm overlooking, surely?
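For context, step 2 boils down to something like this inside drawRect: (a sketch with illustrative names; m_maskContext stands for the grayscale bitmap context the touches are rendered into):

CGImageRef maskBitmap = CGBitmapContextCreateImage(m_maskContext);
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskBitmap),
                                    CGImageGetHeight(maskBitmap),
                                    8, 8,
                                    CGImageGetBytesPerRow(maskBitmap),
                                    CGImageGetDataProvider(maskBitmap),
                                    NULL, NO);
CGImageRef masked = CGImageCreateWithMask(self.image, mask);
CGContextDrawImage(context, self.bounds, masked);
CGImageRelease(masked);
CGImageRelease(mask);
CGImageRelease(maskBitmap);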
I found a solution; however, it still escapes me how exactly this works. Here's the code:
This method flips the given rectangle in the same manner in which the coordinate transformation in the context flips the context's coordinate system:
- (CGRect) flippedRect:(CGRect)rect
{
    CGRect flippedRect = rect;
    flippedRect.origin.y = self.bounds.size.height - rect.origin.y - rect.size.height;
    return CGRectIntersection( self.bounds, flippedRect );
}
This calculates the rectangle to be updated from the touch location. Note that the rectangle gets flipped:
- (CGRect) updateRectFromTouch:(UITouch *)touch
{
    CGPoint location = [touch locationInView:self];
    int d = RubbingSize;
    CGRect touchRect = [self flippedRect:CGRectMake( location.x - d, location.y - d, 2*d, 2*d )];
    // Clip against bounds, not frame: the touch location is in the view's own coordinate space
    return CGRectIntersection( self.bounds, touchRect );
}
In renderTouch:, the 'flipped' update rectangles are accumulated into a single dirty rectangle:
- (void) renderTouch:(UITouch *)touch
{
    //
    // Code to render into the mask here
    //
    if ( m_updateRect.size.width == 0 )
    {
        m_updateRect = [self updateRectFromTouch:touch];
    }
    else
    {
        m_updateRect = CGRectUnion( m_updateRect, [self updateRectFromTouch:touch] );
    }
}
The whole view is refreshed at about 20 Hz during the fingerpainting process. The following method is called every 1/20th of a second and submits the accumulated rectangle for rendering:
- (void) refreshScreen
{
    if ( m_updateRect.size.width > 0 )
    {
        [self setNeedsDisplayInRect:[self flippedRect:m_updateRect]];
    }
}
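The scheduling code isn't shown, but it is along these lines (a sketch):

[NSTimer scheduledTimerWithTimeInterval:0.05 // ~20 Hz
                                 target:self
                               selector:@selector(refreshScreen)
                               userInfo:nil
                                repeats:YES];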
Here's a helper method to compare two rectangles (CGRectEqualToRect from Core Graphics does the same thing):
BOOL rectIsEqualTo(CGRect a, CGRect b)
{
    return a.origin.x == b.origin.x && a.origin.y == b.origin.y
        && a.size.width == b.size.width && a.size.height == b.size.height;
}
In the drawRect: method, the update rectangle is used to draw only the portion that needs updating.
- (void)drawRect:(CGRect)rect
{
    // Compare against bounds: the rect passed in is in the view's own coordinate space
    BOOL drawFullScreen = rectIsEqualTo( rect, self.bounds );
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Turn the coordinate system around
    CGContextTranslateCTM( context, 0.0, self.bounds.size.height );
    CGContextScaleCTM( context, 1.0, -1.0 );
    if ( drawFullScreen )
    {
        // draw the full thing
        CGContextDrawImage( context, self.bounds, self.image );
    }
    else
    {
        CGImageRef partialImage = CGImageCreateWithImageInRect( self.image, [self flippedRect:m_updateRect] );
        CGContextDrawImage( context, m_updateRect, partialImage );
        CGImageRelease( partialImage );
    }
    ...
    // Reset the update box
    m_updateRect = CGRectZero;
}
If someone can explain to me why the flipping works, I'd appreciate it.
The flipping is needed because UIKit uses a "flipped" coordinate system (origin in the top-left corner, y increasing downward) compared to Core Graphics (origin in the bottom-left, y increasing upward). setNeedsDisplayInRect: expects a rect in UIKit coordinates, while m_updateRect is accumulated in the flipped Core Graphics space that drawRect: sets up with its CTM, so the rect has to be flipped back before it is submitted. See Apple's documentation on coordinate systems for the details.
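A quick way to convince yourself that the two flips agree (a sketch, not part of the original code):

// The CTM set up in drawRect: maps a point (x, y) to (x, H - y):
CGFloat H = self.bounds.size.height;
CGAffineTransform flip = CGAffineTransformScale(
    CGAffineTransformMakeTranslation(0, H), 1, -1);
CGPoint p = CGPointApplyAffineTransform(CGPointMake(10, 20), flip); // p = (10, H - 20)
// A rect with origin.y = y and height h therefore lands at origin.y = H - y - h,
// which is exactly what flippedRect: computes.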
This is my first question, so please bear with me!
I'm trying to write a simple drawing app. I was using Core Graphics before, and the only problem was that it was too slow: when I drew with my finger it lagged, a hell of a lot!
So now I'm trying to draw with UIBezierPath, which I understood to be a lot faster, and it is!
When I was using Core Graphics, to keep the drawing speed up I drew into a custom bitmap context I created, which was constantly updated as I drew.
So, I drew into my custom bitmap context, then captured a CGImageRef of what was drawn in that context using:
cacheImage = CGBitmapContextCreateImage(imageContext);
and that was then drawn back into the bitmap context using:
CGContextDrawImage(imageContext, self.bounds, cacheImage);
I also did this so that when I changed the colour of the line being drawn, the rest of the drawing stayed as it was previously drawn, if that makes sense.
Now, the problem I've come across is this.
I'm trying to draw the UIBezierPath into my image context using:
imageContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(imageContext);
[path stroke];
if (imageContext != nil) {
    cacheImage = CGBitmapContextCreateImage(imageContext); // invalid context here so added to solve
}
CGContextScaleCTM(imageContext, 1, -1); // using this as UIBezier
CGContextDrawImage(imageContext, self.bounds, cacheImage); // draws the current context
[path removeAllPoints];
CGImageRelease(cacheImage); // releases the image to solve memory errors
with path being my UIBezierPath. All the path setup is done in touchesBegan: and touchesMoved:, which then call [self setNeedsDisplay]; to trigger drawRect:.
What's happening when I draw is that either the CGImageRef isn't drawn into the context properly, or it is, but when the cache image is captured it picks up a white background from somewhere instead of just the path. So the entire image gets pasted over with the last path drawn on top of a white fill, and you can't see the earlier paths that built the image up, even though the view's background colour is clearColor.
I really hope I'm making sense; I've just spent too many hours on this and it's drained me completely. Here's the drawing code I'm using.
This creates the image context:
-(CGContextRef) myCreateBitmapContext:(int)pixelsWide :(int)pixelsHigh {
    imageContext = NULL;
    CGColorSpaceRef colorSpace;  // the color space for the context
    void * bitmapData;           // the bitmap backing memory
    int bitmapByteCount;         // total byte count
    int bitmapBytesPerRow;       // number of bytes per row
    bitmapBytesPerRow = (pixelsWide * 4);               // 4 bytes per RGBA pixel
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh); // how many bytes there are in total
    colorSpace = CGColorSpaceCreateDeviceRGB();         // setting the colorspace
    bitmapData = malloc( bitmapByteCount );             // allocating the backing buffer
    if (bitmapData == NULL)
    {
        //NSLog(@"Memory not allocated!");
        return NULL;
    }
    imageContext = CGBitmapContextCreate (bitmapData,
                                          pixelsWide,
                                          pixelsHigh,
                                          8, // bits per component
                                          bitmapBytesPerRow,
                                          colorSpace, kCGImageAlphaPremultipliedLast);
    if (imageContext == NULL)
    {
        free (bitmapData);
        NSLog(@"context not created!");
        return NULL;
    }
    CGColorSpaceRelease( colorSpace ); // releasing the colorspace
    CGContextSetRGBFillColor(imageContext, 1.0, 1.0, 1.0, 0.0); // note: alpha is 0.0, so this fills with *transparent* white, not opaque white
    CGContextFillRect(imageContext, self.bounds);
    CGContextSetShouldAntialias(imageContext, YES);
    return imageContext;
}
And here's my drawing code:
-(void)drawRect:(CGRect)rect
{
    DataClass *data = [DataClass sharedInstance];
    [data.lineColor setStroke];
    [path setLineWidth:data.lineWidth];
    imageContext = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(imageContext);
    [path stroke];
    if (imageContext != nil) {
        cacheImage = CGBitmapContextCreateImage(imageContext);
    }
    CGContextScaleCTM(imageContext, 1, -1); // this one
    CGContextDrawImage(imageContext, self.bounds, cacheImage); // draws the current context
    [path removeAllPoints];
    CGImageRelease(cacheImage); // releases the image to solve memory errors
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    DataClass *data = [DataClass sharedInstance];
    CGContextSetStrokeColorWithColor(imageContext, [data.lineColor CGColor]);
    ctr = 0;
    UITouch *touch2 = [touches anyObject];
    pts[0] = [touch2 locationInView:self];
}

-(void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint p = [touch locationInView:self];
    ctr++;
    pts[ctr] = p;
    if (ctr == 4)
    {
        // move the endpoint to the middle of the line joining the second control point
        // of the first Bezier segment and the first control point of the second segment
        pts[3] = CGPointMake((pts[2].x + pts[4].x)/2.0, (pts[2].y + pts[4].y)/2.0);
        [path moveToPoint:pts[0]];
        // add a cubic Bezier from pts[0] to pts[3], with control points pts[1] and pts[2]
        [path addCurveToPoint:pts[3] controlPoint1:pts[1] controlPoint2:pts[2]];
        //[data.lineColor setStroke];
        [self setNeedsDisplay];
        // replace points and get ready to handle the next segment
        pts[0] = pts[3];
        pts[1] = pts[4];
        ctr = 1;
    }
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [path removeAllPoints];
    [self setNeedsDisplay];
    ctr = 0;
}
'path' is my UIBezierPath, 'cacheImage' is a CGImageRef, and 'imageContext' is a CGContextRef.
Any help is much appreciated! And if you can think of a better way to do it, please let me know! I do, however, need the cache image to have a transparent background, so only the paths are visible, as I'm going to apply something behind them later once this is working.
EDIT: I'm also removing the path's points every time to keep the drawing speed up, just so you know!
Thanks in advance :)
Well, this is a big question. One lead: verify that you draw only what you actually need each frame (not the whole image every time), and keep the invariant bitmap (the finished strokes) separate from the regions that actively change (the stroke in progress), on different images or layers.
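One concrete direction (a sketch, not your code; cachedImage here is a hypothetical UIImage ivar, unlike the CGImageRef above): stroke only the live UIBezierPath in drawRect:, and flatten finished strokes into the cached image when the touch ends. Creating the image context with opaque == NO also gives you the transparent background you need.

- (void)drawRect:(CGRect)rect {
    [cachedImage drawInRect:self.bounds]; // finished strokes (no-op while still nil)
    [[DataClass sharedInstance].lineColor setStroke];
    [path stroke];                        // only the in-progress path
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // flatten the finished stroke into the cache, on a transparent background
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    [cachedImage drawInRect:self.bounds];
    [[DataClass sharedInstance].lineColor setStroke];
    [path stroke];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [cachedImage release];
    cachedImage = [snapshot retain]; // manual retain/release, matching the question's era
    [path removeAllPoints];
    [self setNeedsDisplay];
}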
Can you let me know the best way to draw a line or rectangle on a scene layer using Cocos2d (iOS 4, iPhone)?
So far I have tried Texture2D, but it is more like a paint brush and is not so good. I also tried drawing a line in the draw method, but the previous line disappears when another line is drawn.
Basically, I want to draw multiple horizontal, vertical, and oblique beams. Please suggest; any code would help a lot.
The code to draw using texture is below:
CGPoint start = edge.start;
CGPoint end = edge.end;

// begin drawing to the render texture
[target begin];

// for extra points, we'll draw this smoothly from the last position and vary the sprite's
// scale/rotation/offset
float distance = ccpDistance(start, end);
if (distance > 1)
{
    int d = (int)distance;
    for (int i = 0; i < d; i++)
    {
        float difx = end.x - start.x;
        float dify = end.y - start.y;
        float delta = (float)i / distance;
        [brush setPosition:ccp(start.x + (difx * delta), start.y + (dify * delta))];
        [brush setScale:0.3];
        // Call visit to draw the brush, don't call draw..
        [brush visit];
    }
}

// finish drawing and return context back to the screen
[target end];
The rendering is not good, especially with oblique lines, as the scaling affects the quality.
Cheers
You could create a separate layer and implement its draw method like this:

-(void) draw
{
    CGSize s = [[Director sharedDirector] winSize];
    drawCircle( ccp(s.width/2, s.height/2), circleSize, 0, 50, NO);
}

It's for a circle, but the principle is the same. This is from a project I made a while back, and it worked then; I don't know if anything has changed since.
You need to add a draw method to your layer:

-(void) draw {
    // ...
}

Inside it you can use OpenGL-style functions and cocos2d's wrapper methods for OpenGL. Hint: other methods can be called from inside the draw method. But keep in mind that a method containing OpenGL instructions that is not called (directly or indirectly) from the draw method above simply won't work, even when called from an update method or another method scheduled via scheduleUpdate.
So you will end up with something like this:
-(void) draw {
    glEnable(GL_LINE_SMOOTH);
    glColor4ub(255, 0, 100, 255);
    glLineWidth(4);
    CGPoint verts[] = { ccp(0,200), ccp(300,200) };
    ccDrawLine(verts[0], verts[1]);
    [self drawSomething];
    [self drawSomeOtherStuffFrom:ccp(a,b) to:ccp(c,d)];
    [someObject doSomeDrawingAsWell];
}
For more information, check out the cocos2d-iphone programming guide:
http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide:draw_update?s[]=schedule#draw
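For the rectangles and oblique beams the question asks about, the same approach extends naturally. A sketch using the cocos2d 1.x drawing primitives (coordinates are illustrative):

-(void) draw {
    glColor4ub(255, 255, 255, 255);
    glLineWidth(2);
    // an oblique beam drawn as a closed four-point outline
    CGPoint beam[] = { ccp(40,40), ccp(260,120), ccp(255,135), ccp(35,55) };
    ccDrawPoly(beam, 4, YES); // YES closes the polygon back to the first point
}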
I am struggling to get my custom drawing code to render at the proper scale for all iOS devices, i.e., older iPhones, those with retina displays and the iPad.
I have a subclass of UIView that has a custom class that displays a vector graphic. It has a scale property that I can set. I do the scaling in initWithCoder when the UIView loads and I first instantiate the vector graphic. This UIView is shown when the user taps a button on the home screen.
At first I tried this:
screenScaleFactor = 1.0;
if ([UIScreen instancesRespondToSelector:@selector(scale)]) {
    screenScaleFactor = [[UIScreen mainScreen] scale];
}
// and then I multiply stuff by screenScaleFactor
... which worked for going between normal iPhones and retina iPhones, but it chokes on the iPad. As I said, you get to the UIView at issue by tapping a button on the home screen. When run on the iPad, displaying the UIView at 1X works, but at 2X I get a vector graphic that is twice as big as it should be.
So I tried this instead:
UPDATE: This block is the one that's right. (with the corrected spelling, of course!)
screenScaleFactor = 1.0;
if ([self respondsToSelector:@selector(contentScaleFactor)]) { // EDIT: corrected misspelling
    screenScaleFactor = (float)self.contentScaleFactor;
}
// again multiplying stuff by screenScaleFactor
This works at both 1X and 2X on the iPad and on the older iPhones, but on a retina display the vector graphic is half the size it should be.
In the first case I query the UIScreen for its scale property, and in the second case I ask the parent view of the vector graphic for its contentScaleFactor. Neither of these gets me where I want in all cases.
Any suggestions?
UPDATE:
Here's the method in my subclassed UIView (it's called a GaugeView):
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    t0 = CGAffineTransformInvert(t0);
    CGContextConcatCTM(context, t0);
    [needle updateBox];
    [needle draw: context];
    CGContextRestoreGState(context); // balance the save above
}
needle is of class VectorSprite, which is a subclass of Sprite, which is in turn subclassed from NSObject. These are from a programming book I'm working through. needle has the scale property that I set.
updateBox comes from Sprite and looks like this:
- (void) updateBox {
    CGFloat w = width*scale;
    CGFloat h = height*scale;
    CGFloat w2 = w*0.5;
    CGFloat h2 = h*0.5;
    CGPoint origin = box.origin;
    CGSize bsize = box.size;

    CGFloat left = -kScreenHeight*0.5;
    CGFloat right = -left;
    CGFloat top = kScreenWidth*0.5;
    CGFloat bottom = -top;

    offScreen = NO;
    if (wrap) {
        if ((x+w2) < left) x = right + w2;
        else if ((x-w2) > right) x = left - w2;
        else if ((y+h2) < bottom) y = top + h2;
        else if ((y-h2) > top) y = bottom - h2;
    }
    else {
        offScreen =
            ((x+w2) < left) ||
            ((x-w2) > right) ||
            ((y+h2) < bottom) ||
            ((y-h2) > top);
    }
    origin.x = x-w2*scale;
    origin.y = y-h2*scale;
    bsize.width = w;
    bsize.height = h;
    box.origin = origin;
    box.size = bsize;
}
Sprite also has the draw: and drawBody: methods, which are:

- (void) draw:(CGContextRef)context {
    CGContextSaveGState(context);
    // Position the sprite
    CGAffineTransform t = CGAffineTransformIdentity;
    t = CGAffineTransformTranslate(t, x, y);
    t = CGAffineTransformRotate(t, rotation);
    t = CGAffineTransformScale(t, scale, scale);
    CGContextConcatCTM(context, t);
    // draw sprite body
    [self drawBody:context];
    CGContextRestoreGState(context);
}

- (void) drawBody:(CGContextRef)context {
    // Draw your sprite here, centered on (x,y).
    // As an example, we draw a filled circle.
    if (alpha < 0.05) return;
    CGContextBeginPath(context);
    CGContextSetRGBFillColor(context, r, g, b, alpha);
    CGContextAddEllipseInRect(context, CGRectMake(-width/2, -height/2, width, height));
    CGContextClosePath(context);
    CGContextDrawPath(context, kCGPathFill);
}
How, exactly, are you rendering the graphic?
This should be handled automatically in drawRect: (the context you get is already set up at 2x). It is also handled automatically by UIGraphicsBeginImageContextWithOptions(size, NO, 0) where available; if you need to fall back to UIGraphicsBeginImageContext(), assume a scale of 1. You shouldn't need to worry about it unless you're drawing the bitmap yourself somehow.
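That fallback usually looks something like this (a sketch; comparing the weak-linked symbol against NULL was the standard idiom when also targeting pre-4.0):

if (UIGraphicsBeginImageContextWithOptions != NULL) {
    // a scale of 0 means "use the device's screen scale" (iOS 4+)
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
} else {
    // pre-4.0 fallback: always scale 1
    UIGraphicsBeginImageContext(size);
}
// ... draw ...
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();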
You could try something like self.contentScaleFactor = [[UIScreen mainScreen] scale], with appropriate checks first (this might mean that if you display it on an iPad at 2x, you'll get high-res graphics).
Fundamentally, there's not much difference between an iPad in 2x mode and a "retina display", except that the iPad can switch between 1x and 2x.
Finally, there's a typo: @selector(contentsScaleFactor) has an extra s.
I'm implementing a subclass of UIView that displays a gauge dial with a sprite for the indicator. It has an angle property that I can vary to make the needle point to different angles. It works, but the same values for the position of the needle make it show up in different locations on the phone and in the simulator. It's an iPhone 4, so I'm sure the double-resolution thing is behind this, but I don't know what to do about it. I tried setting the UIView's layer's contentScaleFactor, but that fails. I thought UIView got the resolution thing for free. Any suggestions?
I should note that the NSLog statements report 150 for both .frame.size dimensions, in both the simulator and on the device.
Here's the .m file
UPDATE: In the simulator I found how to set the hardware to iPhone 4, and it now looks just like the device: both scale and position the sprite at half size.
UPDATE 2: I made a workaround. I set the .scale of my sprite equal to the UIView's contentScaleFactor, and then use it to divide the UIView's dimensions in half for a lo-res screen, or use the full width for hi-res. I still don't see why this is necessary, as I should be working in points now, not pixels. It must have something to do with the custom drawing code in the Sprite or VectorSprite classes.
I'd still appreciate some feedback if anyone has some...
#import "GaugeView.h"
#implementation GaugeView
#synthesize needle;
#define kVectorArtCount 4
static CGFloat kVectorArt[] = {
3,-4,
2,55,
-2,55,
-3,-4
};
- (id)initWithCoder:(NSCoder *)coder {
if (self = [super initWithCoder:coder]) {
needle = [VectorSprite withPoints:kVectorArt count:kVectorArtCount];
needle.scale = (float)self.contentScaleFactor; // returns 1 for lo-res, 2 for hi-res
NSLog(#" needle.scale = %1.1f", needle.scale);
needle.x = self.frame.size.width / ((float)(-self.contentScaleFactor) + 3.0); // divisor = 1 for hi-res, 2 for lo-res
NSLog(#" needle.x = %1.1f", needle.x);
needle.y = self.frame.size.height / ((float)(-self.contentScaleFactor) + 3.0);
NSLog(#" needle.y = %1.1f", needle.y);
needle.r = 0.0;
needle.g = 0.0;
needle.b = 0.0;
needle.alpha = 1.0; }
}
self.backgroundColor = [UIColor clearColor];
return self;
}
- (id)initWithFrame:(CGRect)frame {
if ((self = [super initWithFrame:frame])) {
// Initialization code
}
return self;
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
// Drawing code
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGAffineTransform t0 = CGContextGetCTM(context);
t0 = CGAffineTransformInvert(t0);
CGContextConcatCTM(context, t0);
[needle updateBox];
[needle draw: context];
}
- (void)dealloc {
[needle release];
[super dealloc];
}
#end
I believe the answer is that iOS takes care of the resolution scaling automatically in drawRect: methods, but in custom drawing code you have to do it yourself.
In my example, I used the UIView's contentScaleFactor to scale my sprite. In the future, in my custom draw method (not shown), I'll query [[UIScreen mainScreen] scale] and scale accordingly there.
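That query would look roughly like this (a sketch; the respondsToSelector: guard only matters if you still target pre-iOS-4, and baseScale is an illustrative name):

CGFloat screenScale = 1.0;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    screenScale = [[UIScreen mainScreen] scale]; // 1.0 on older devices, 2.0 on retina
}
needle.scale = baseScale * screenScale;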
I am trying to draw individual pixels in Xcode to be output on the iPhone. I don't know any OpenGL or Quartz coding, but I do know a bit about Core Graphics. I was thinking of drawing small rectangles with a width and height of one, but I don't know how to implement this in code or how to get it to show up in the view. Any help is greatly appreciated.
For a custom UIView subclass that allows plotting dots of a fixed size and color:
// Make a UIView subclass
@interface PlotView : UIView

@property (nonatomic) CGContextRef context;
@property (nonatomic) CGLayerRef drawingLayer; // this is the drawing surface

- (void) plotPoint:(CGPoint) point; // public method for plotting
- (void) clear;                     // erases drawing surface

@end

// implementation
#define kDrawingColor ([UIColor yellowColor].CGColor)
#define kLineWeight (1.5)

@implementation PlotView

@synthesize context = _context, drawingLayer = _drawingLayer;
- (id) initPlotViewWithFrame:(CGRect) frame; {
    self = [super initWithFrame:frame];
    if (self) {
        // this is total boilerplate, it rarely needs to change
        self.backgroundColor = [UIColor clearColor];
        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
        CGFloat width = frame.size.width;
        CGFloat height = frame.size.height;
        size_t bitsPerComponent = 8;
        size_t bytesPerRow = (4 * width);
        self.context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, bytesPerRow, colorspace, kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorspace);

        CGSize size = frame.size;
        self.drawingLayer = CGLayerCreateWithContext(self.context, size, NULL);
    }
    return self;
}

// override drawRect to put drawing surface onto screen
// you don't actually call this directly, the system will call it
- (void) drawRect:(CGRect) rect; {
    // this creates a new blank image, then gets the surface you've drawn on, and stamps it down
    // at some point, the hardware will render this onto the screen
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGImageRef image = CGBitmapContextCreateImage(self.context);
    CGRect bounds = [self bounds];
    CGContextDrawImage(currentContext, bounds, image);
    CGImageRelease(image);
    CGContextDrawLayerInRect(currentContext, bounds, self.drawingLayer);
}

// simulate plotting dots by drawing a very short line with rounded ends
// if you need to draw some other kind of shape, study this part, along with the docs
- (void) plotPoint:(CGPoint) point; {
    CGContextRef layerContext = CGLayerGetContext(self.drawingLayer); // get ready to draw on your drawing surface
    // prepare to draw
    CGContextSetLineWidth(layerContext, kLineWeight);
    CGContextSetLineCap(layerContext, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(layerContext, kDrawingColor);
    // draw onto surface by building a path, then stroking it
    CGContextBeginPath(layerContext); // start
    CGFloat x = point.x;
    CGFloat y = point.y;
    CGContextMoveToPoint(layerContext, x, y);
    CGContextAddLineToPoint(layerContext, x, y);
    CGContextStrokePath(layerContext); // finish
    [self setNeedsDisplay]; // this tells the system to call drawRect at a time of its choosing
}

- (void) clear; {
    CGContextClearRect(CGLayerGetContext(self.drawingLayer), [self bounds]);
    [self setNeedsDisplay];
}

// teardown
- (void) dealloc; {
    CGContextRelease(_context);
    CGLayerRelease(_drawingLayer);
    [super dealloc];
}

@end
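Typical use from a view controller might look like this (a sketch; the frame and point values are illustrative):

PlotView *plot = [[PlotView alloc] initPlotViewWithFrame:self.view.bounds];
[self.view addSubview:plot];
[plot plotPoint:CGPointMake(100.0, 100.0)];
[plot release]; // manual retain/release, matching the dealloc above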
If you want to be able to draw pixels that are cumulatively added to previously drawn pixels, you will need to create your own bitmap graphics context, backed by your own bitmap memory. You can then set individual pixels in the bitmap memory, or draw short lines or small rectangles in your graphics context. To display your drawing, first convert the context to a CGImageRef. Then you can either draw this image in a subclassed UIView's drawRect:, or assign the image to the contents of the UIView's CALayer.
Look up: CGBitmapContextCreate and CGBitmapContextCreateImage in Apple's documentation.
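If you go the raw-memory route, poking a single pixel looks roughly like this (a sketch; it assumes an 8-bit-per-component kCGImageAlphaPremultipliedFirst context with no byte-order flags, so the bytes are A, R, G, B):

uint8_t *data = (uint8_t *)CGBitmapContextGetData(context);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
uint8_t *pixel = data + (size_t)y * bytesPerRow + (size_t)x * 4;
pixel[0] = 255; // alpha
pixel[1] = 255; // red (premultiplied by alpha)
pixel[2] = 255; // green
pixel[3] = 0;   // blue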
ADDED:
I wrote up a longer explanation of why you might need to do this when drawing pixels in an iOS app, plus some source code snippets, on my blog: http://www.musingpaw.com/2012/04/drawing-in-ios-apps.html
All drawing needs to go into the - (void)drawRect:(CGRect)rect method. [self setNeedsDisplay] flags the view for a redraw. The problem is that you're redrawing nothing.
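A minimal version of that redraw, using the 1x1-rectangle idea from the question (a sketch; the points array is a hypothetical store of boxed CGPoints):

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
    for (NSValue *value in self.points) {
        CGPoint p = [value CGPointValue];
        CGContextFillRect(ctx, CGRectMake(p.x, p.y, 1.0, 1.0));
    }
}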