I'm drawing lines in my touchesMoved: method, and normally it works fine. But when I zoom into the image and draw, the previously drawn lines are displaced and keep getting blurrier, ultimately vanishing. I've tried both using a UIPinchGestureRecognizer and simply increasing the frame of myImageView (for multi-touch events only), but the problem occurs either way. Here's the code for drawing:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    NSArray *allTouches = [touches allObjects];
    int count = [allTouches count];
    if (count == 1) { // single-touch case: draw a line segment
        UITouch *touch = [touches anyObject];
        CGPoint currentPoint = [touch locationInView:myImageView];
        UIGraphicsBeginImageContext(myImageView.frame.size);
        [drawImage.image drawInRect:CGRectMake(0, 0, myImageView.frame.size.width, myImageView.frame.size.height)];
        CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
        CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 2.0);
        CGContextBeginPath(UIGraphicsGetCurrentContext());
        CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
        CGContextStrokePath(UIGraphicsGetCurrentContext());
        drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        lastPoint = currentPoint;
    }
    else { // multi-touch case
        // handle pinch/zoom
    }
}
Here is the image drawn over without zooming:
And this is the image showing the problem after zooming in, with the red arrow pointing at a segment that was already drawn before zooming in (as shown in the previous image). The line is both blurred and displaced:
Notice also that the most recently drawn part of the line is unaffected; the phenomenon only occurs for lines drawn earlier. I believe the image's size attributes are being lost when I zoom in/out, which probably causes the blur and shift, but I'm not sure about that!
EDIT: I've uploaded a short video to show what's happening. It's sort of entertaining...
EDIT 2: Here's a sample single-view app focusing on the problem.
I downloaded your project and found that the problem is one of autoresizing. The following steps will fix it:
Step 1. Comment out line 70:
drawImage.frame = CGRectMake(0, 0, labOrderImgView.frame.size.width, labOrderImgView.frame.size.height);
in your touchesMoved method.
Step 2. Add one line of code after drawImage is allocated (line 90) in the viewDidLoad method:
drawImage.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
With these two changes, the bug is fixed.
You are always just drawing into an image context the size of your image view - this of course gets blurry, as you never adapt to the higher resolution when zoomed in. It would be more sensible to create a UIBezierPath once and just add a line to it (with addLineToPoint:) in the touchesMoved method, then draw it in a custom drawRect: method via [bezierPath stroke] (sketched below). You could also add a CAShapeLayer as a sublayer of the image view's layer and set its path property to the CGPath property of the bezierPath you created earlier.
See Drawing bezier curves with my finger in iOS? for an example.
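Here is a minimal sketch of the bezier-path approach in a hypothetical UIView subclass (bezierPath is an assumed property, not from the question's code):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    self.bezierPath = [UIBezierPath bezierPath];
    [self.bezierPath moveToPoint:p];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    [self.bezierPath addLineToPoint:p];
    [self setNeedsDisplay]; // the path is re-stroked as vectors, so it stays sharp at any zoom
}

- (void)drawRect:(CGRect)rect {
    [[UIColor blackColor] setStroke];
    self.bezierPath.lineWidth = 2.0;
    self.bezierPath.lineCapStyle = kCGLineCapRound;
    [self.bezierPath stroke];
}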
I implemented this kind of behavior in the following way:
Remember all the coordinates of your path (the MODEL).
Draw your path into a temporary subview of the imageView.
When the user starts to pinch/zoom your image, do nothing; the path will be scaled by iOS along with the view.
The moment the user finishes pinch-zooming, redraw your path correctly using your model (see the sketch below).
If you save the path as an image, you get badly scaled results.
Also - do not draw the path straight into the image; draw into a transparent view and composite them together at the end of editing.
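A rough sketch of that flow, with hypothetical names (pathPoints is the stored model, zoomScale the final zoom factor, and overlayLayer a CAShapeLayer backing the transparent overlay view):

- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    if (pinch.state != UIGestureRecognizerStateEnded) {
        return; // while the pinch is in progress, let iOS scale the layer
    }
    // Pinch finished: rebuild the path from the model at the final scale.
    UIBezierPath *path = [UIBezierPath bezierPath];
    for (NSUInteger i = 0; i < [pathPoints count]; i++) {
        CGPoint p = [[pathPoints objectAtIndex:i] CGPointValue];
        CGPoint scaled = CGPointMake(p.x * zoomScale, p.y * zoomScale);
        if (i == 0) {
            [path moveToPoint:scaled];
        } else {
            [path addLineToPoint:scaled];
        }
    }
    overlayLayer.path = path.CGPath; // the shape layer re-renders crisply
}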
This is a difficult problem to explain... but I'll do my best.
First, some background on the problem. Basically, I am creating a paint-like app for iOS and wanted to add functionality that allows the user to select part of the image (multi-touch shows an opaque rectangle) and delete/copy-paste/rotate that part. I have the delete and copy-paste working perfectly, but the rotation is another story. To rotate the part of the image, I first copy that part and set it as the background of the selected rectangle layer; then the user rotates it by an arbitrary angle using a slider. The problem is that sometimes the image ends up being displayed from another location of the rectangle (meaning the copied image hangs off the wrong corner of the rectangle). I thought this could be a problem with my rectangle.frame.origin, but that value seems correct through various tests. The behavior also seems to change depending on the direction the drag goes in...
Here are screenshots of the problem:
In each of the above cases, the mismatched part of the image should be inside the grey rectangle. I am at a loss as to what the problem is.
bg = [[UIImageView alloc] initWithImage:[self crop:rectangle.frame:drawImage.image]];
[rectangle addSubview:bg];
drawImage is the user's drawing, and rectangle is the selected grey area.
crop: is a method that returns the part of a given image inside a given rect.
I am also having trouble with pasting an arbitrarily rotated image. Any ideas on how to do that?
Edit: adding more code.
- (void)drawRect:(int)x1 :(int)y1 :(int)x2 :(int)y2 {
    [rectangle removeFromSuperview];
    rectangle = [[UIView alloc] initWithFrame:CGRectMake(x1, y1, x2 - x1, y2 - y1)];
    rectangle.backgroundColor = [UIColor colorWithRed:0.9 green:0.9 blue:0.9 alpha:0.6];
    selectionImage = drawImage.image;
    drawImage.image = selectionImage;
    [drawImage addSubview:rectangle];
    rectangleVisible = true;
    rectangle.transform = transformation;
}
Could it have anything to do with how I draw my rectangle (above)? I call this method from part of a touchesMoved method (below), which may cause the problem (touch 1 being in the wrong location may make the width negative?). If so, is there an easy way to remedy this?
if ([[event allTouches] count] == 2 && !drawImage.hidden) {
    NSSet *allTouches = [event allTouches];
    UITouch *touch1 = [[allTouches allObjects] objectAtIndex:0];
    UITouch *touch2 = [[allTouches allObjects] objectAtIndex:1];
    [self drawRect:[touch1 locationInView:drawImage].x :[touch1 locationInView:drawImage].y
           :[touch2 locationInView:drawImage].x :[touch2 locationInView:drawImage].y];
}
I'm not sure if this is your problem, but it looks like you are just assuming that touch1 represents the upper-left touch. I would start by standardizing the rectangle.
// Standardizing the rectangle before making it the frame.
CGRect frame = CGRectStandardize(CGRectMake(x1, y1, x2-x1, y2-y1));
rectangle = [[UIView alloc] initWithFrame:frame];
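For example, if the second touch is above and to the left of the first, the raw rect has a negative width and height; CGRectStandardize converts it into the equivalent rect with a positive size:

CGRect raw = CGRectMake(200, 300, -150, -100); // touch2 above-left of touch1
CGRect std = CGRectStandardize(raw);           // {{50, 200}, {150, 100}}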
Background: I would like to draw blocks where the user touches up. If a block is already there, I want to erase it. I manage the blocks using an NSMutableArray to keep track of the points where the blocks should go. Every time the user touches, the code determines whether the touched place already contains a block and manages the array accordingly.
Problem: I get very weird feedback from this. First of all, everything in the array works as I want. The problem comes when the user tries to erase a block: while the array is maintained correctly, the drawing seems to ignore the change in the array. It will not remove anything but the last dot, and even that toggles on and off when the user taps elsewhere.
Here is the code:
- (void)drawRect:(CGRect)rect
{
    NSLog(@"drawrect current array %@", pointArray);
    for (NSValue *pointValue in pointArray) {
        CGPoint point = [pointValue CGPointValue];
        [self drawSquareAt:point];
    }
}

- (void)drawSquareAt:(CGPoint)point {
    float x = point.x * scale;
    float y = point.y * scale;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(context, x, y);
    CGContextAddLineToPoint(context, x + scale, y);
    CGContextAddLineToPoint(context, x + scale, y + scale);
    CGContextAddLineToPoint(context, x, y + scale);
    CGContextAddLineToPoint(context, x, y);
    CGContextSetFillColorWithColor(context, [UIColor darkGrayColor].CGColor);
    CGContextFillPath(context);
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *aTouch = [touches anyObject];
    CGPoint point = [aTouch locationInView:self];
    point = CGPointMake((int)(point.x / scale), (int)(point.y / scale));
    NSLog(@"Touched at %@", [NSArray arrayWithObject:[NSValue valueWithCGPoint:point]]);
    NSValue *pointValue = [NSValue valueWithCGPoint:point];
    int i = [pointArray indexOfObject:pointValue];
    NSLog(@"Index at %i", i);
    if (i < [pointArray count]) {
        [pointArray removeObjectAtIndex:i];
        NSLog(@"remove");
    } else {
        [pointArray addObject:pointValue];
        NSLog(@"add");
    }
    NSLog(@"Current array : %@", pointArray);
    [self setNeedsDisplay];
}
scale is defined as 16.
pointArray is a member variable of the view.
To test: you can drop this into any UIView and add that view to the view controller to see the effect.
Question: How do I get the drawing to agree with the array?
Update + explanation: I am aware of the cost of this approach, but it is only created for me to get a quick figure. It will not be used in the real application, so please do not get hung up on how expensive it is. I only created this capability to get a figure I draw as an NSString value (@"1,3,5,1,2,6,2,5,5,..."). This will become more efficient when I am actually using it, with no redrawing. Please stick to the question asked. Thank you.
I don't see anywhere where you are actually clearing what you drew previously. Unless you explicitly clear (such as by filling with UIRectFill() - which, as an aside, is a more convenient way to draw rectangles than filling an explicit path), Quartz is going to just draw over your old content, which will cause unexpected behavior on attempts at erasure.
So... what happens if you put this at the beginning of -drawRect:?
[[UIColor whiteColor] setFill]; // Or whatever your background color is
UIRectFill([self bounds]);
(This is of course horrendously inefficient, but per your comment, I am disregarding that fact.)
(As a separate aside, you probably should wrap your drawing code in a CGContextSaveGState()/CGContextRestoreGState() pair to avoid tainting the graphics context of any calling code.)
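For instance (a sketch of the pattern applied to the question's drawRect:, not the asker's exact code):

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);    // snapshot fill color, line width, CTM, etc.
    [[UIColor whiteColor] setFill];
    UIRectFill([self bounds]);       // clear the previous frame's content
    for (NSValue *pointValue in pointArray) {
        [self drawSquareAt:[pointValue CGPointValue]];
    }
    CGContextRestoreGState(context); // leave the context as we found it
}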
EDIT: I always forget about this property since I usually want to draw more complex backgrounds anyway, but you can likely achieve similar results by setting clearsContextBeforeDrawing to YES on the UIView.
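For example (set once, e.g. in the view subclass's initializer):

self.clearsContextBeforeDrawing = YES; // buffer is cleared before each drawRect: call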
This approach seems a little weird to me, because every time touchesEnded is called you need to redraw (an expensive operation) and also keep track of the squares. I suggest you subclass UIView and implement drawRect: so the view knows how to draw itself, and implement touchesEnded in your view controller. There you can check whether you touched a squareView: if so, remove it from the view controller's view; otherwise, create a squareView and add it as a subview of the view controller's view.
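A rough sketch of that subview-based approach in the view controller; squareViewAtPoint: is an assumed hit-testing helper, and the grid scale of 16 matches the question:

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self.view];
    UIView *existing = [self squareViewAtPoint:point]; // hypothetical helper
    if (existing) {
        [existing removeFromSuperview]; // erasing is just removing the subview
    } else {
        CGFloat scale = 16.0;
        CGRect frame = CGRectMake(floor(point.x / scale) * scale,
                                  floor(point.y / scale) * scale,
                                  scale, scale);
        UIView *square = [[UIView alloc] initWithFrame:frame];
        square.backgroundColor = [UIColor darkGrayColor];
        [self.view addSubview:square]; // no drawRect: override or redraw needed
    }
}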
I want to create a simple tool for drawing. The purpose is to draw a line that follows the accelerometer of the iPhone & iPad, so if the user tilts the device, a line will be drawn in the direction the device was moved.
I am able to register acceleration and to draw lines. My problem is that as soon as I draw a new line, the old one disappears. One possible solution would be to save the points already drawn and then redraw everything, but I would think there are better solutions?
All help is appreciated!
My drawRect: currently looks like this:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 20.0);
    CGContextSetStrokeColorWithColor(context, [UIColor yellowColor].CGColor);
    CGContextMoveToPoint(context, fromPoint.x, fromPoint.y);
    CGContextAddLineToPoint(context, toPoint.x, toPoint.y);
    CGContextStrokePath(context);
}
A different method is responsible for refreshing. This method is also called from the UIViewController at regular intervals. Right now it shows a "trail" (or whatever I should call it) in the direction the device was moved - not exactly what I am looking for:
- (void)drawNewLine:(CGPoint)to {
    // calculate the trail behind the current point
    float pointDifferenceX = (toPoint.x - to.x) * 9;
    float pointDifferenceY = (toPoint.y - to.y) * 9;
    fromPoint = CGPointMake(toPoint.x + pointDifferenceX, toPoint.y + pointDifferenceY);
    toPoint = to;
    [self setNeedsDisplay];
}
I can think of two options:
Either save all the points and redraw the lines whenever the screen needs to be refreshed, as you mentioned (a sketch follows below), or
draw the lines into an off-screen bitmap and refresh the screen from there.
In either case, respect the Hollywood Principle: don't call, you will be called. That means don't just draw to the screen whenever you like; wait until drawRect: of your UIView is called. (You can trigger this by calling setNeedsDisplay.)
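A sketch of the first option, assuming the view keeps its points in a hypothetical NSMutableArray ivar called points:

- (void)drawNewLine:(CGPoint)to {
    [points addObject:[NSValue valueWithCGPoint:to]];
    [self setNeedsDisplay]; // don't draw here; drawRect: will be called
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 20.0);
    CGContextSetStrokeColorWithColor(context, [UIColor yellowColor].CGColor);
    // Re-stroke every stored segment, so old lines survive each refresh.
    for (NSUInteger i = 1; i < [points count]; i++) {
        CGPoint from = [[points objectAtIndex:i - 1] CGPointValue];
        CGPoint to = [[points objectAtIndex:i] CGPointValue];
        CGContextMoveToPoint(context, from.x, from.y);
        CGContextAddLineToPoint(context, to.x, to.y);
    }
    CGContextStrokePath(context);
}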
I have subclassed MKAnnotationView to create an annotation that basically draws a circle around a point on a map view by overriding drawRect:. The circle draws fine in the following situations (in the simulator):
On initial load of the map view
On swipe, but only when the swipe motion is stopped before the touch ends (so that the map doesn't "coast" after the touch ends)
On pinch zoom
The circle will disappear when any of the following actions occur:
Swipe where map "coasts" after touch ends
Double-tap zoom
The circle will reappear if any of the actions in the "working" group are taken after it has disappeared.
What might cause this? I'm not a draw/display/layout expert (frankly, I'm not an Objective-C or iPhone expert either).
Here is some slightly simplified code that seems most relevant from my MKAnnotationView subclass:
- (void)drawRect:(CGRect)rect {
    // Drawing code
    [self drawCircleAtPoint:CGPointMake(0, 0)
                 withRadius:self.radiusInPixels
                   andColor:self.circleAnnotation.color];
}

- (void)drawCircleAtPoint:(CGPoint)p withRadius:(int)r andColor:(UIColor *)color {
    CGContextRef contextRef = UIGraphicsGetCurrentContext();
    float alpha = 0.75;
    // Note: RGB components are in the 0-1 range, not 0-255
    CGContextSetRGBFillColor(contextRef, 1.0, 0, 0, alpha);
    CGContextSetRGBStrokeColor(contextRef, 1.0, 0, 0, alpha);
    // Draw a circle (border only)
    CGContextStrokeEllipseInRect(contextRef, CGRectMake(0, 0, 2 * r, 2 * r));
}
Did you add this method?
- (void)setAnnotation:(id <MKAnnotation>)annotation
{
    [super setAnnotation:annotation];
    [self setNeedsDisplay];
}
This is taken from Apple's sample code app called WeatherMap, which was removed from the Apple Developer Center but can be found on GitHub:
https://github.com/acekiller/iOS-Samples/blob/master/WeatherMap/Classes/WeatherAnnotationView.m
I am trying to make an application for drawing shapes on the screen by touching it.
I can draw a line from one point to another, but it is erased on each new draw.
Here is my code:
CGPoint location;
CGContextRef context;
CGPoint drawAtPoint;
CGPoint lastPoint;

- (void)awakeFromNib {
    //[self addSubview:noteView];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    location = [touch locationInView:touch.view];
    [self setNeedsDisplayInRect:CGRectMake(0, 0, 320, 480)];
}

- (void)drawRect:(CGRect)rect {
    context = UIGraphicsGetCurrentContext();
    [[UIColor blueColor] set];
    CGContextSetLineWidth(context, 10);
    drawAtPoint.x = location.x;
    drawAtPoint.y = location.y;
    CGContextAddEllipseInRect(context, CGRectMake(drawAtPoint.x, drawAtPoint.y, 2, 2));
    CGContextAddLineToPoint(context, lastPoint.x, lastPoint.y);
    CGContextStrokePath(context);
    lastPoint.x = location.x;
    lastPoint.y = location.y;
}
Appreciate your help-
Nir.
As you have discovered, drawRect: is where you display the contents of a view. You will only 'see' on screen what you draw there.
This is much more low-level than something like Flash, where you might add a movie clip containing a line to the stage, some time later add another movie clip containing a line, and then see - two lines!
You will need to do some work, and probably set up something like:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    location = [touch locationInView:touch.view];
    [self addNewLineFrom:lastPoint to:location];
    lastPoint = location;
    [self setNeedsDisplayInRect:CGRectMake(0, 0, 320, 480)];
}

- (void)drawRect:(CGRect)rect {
    context = UIGraphicsGetCurrentContext();
    for (Line *eachLine in lineArray)
        [eachLine drawInContext:context];
}
I think you can see how to flesh this out to what you need.
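For example, Line could be a tiny model object (a hypothetical class, sketched here) that stores its endpoints and knows how to stroke itself:

@interface Line : NSObject
@property (nonatomic, assign) CGPoint from;
@property (nonatomic, assign) CGPoint to;
- (void)drawInContext:(CGContextRef)context;
@end

@implementation Line
- (void)drawInContext:(CGContextRef)context {
    [[UIColor blueColor] set];
    CGContextSetLineWidth(context, 10);
    CGContextMoveToPoint(context, self.from.x, self.from.y);
    CGContextAddLineToPoint(context, self.to.x, self.to.y);
    CGContextStrokePath(context);
}
@end

-addNewLineFrom:to: would then just create a Line, set its endpoints, and append it to lineArray.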
Another way to approach this is to use CALayers. With this approach you don't draw inside drawRect: at all - you add and remove layers, draw what you like inside them, and the view handles compositing them together and drawing to the screen as needed. Probably more like what you are looking for.
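A minimal sketch of the layer-based idea using CAShapeLayer (illustrative names; requires linking QuartzCore):

#import <QuartzCore/QuartzCore.h>

- (void)addLineLayerFrom:(CGPoint)from to:(CGPoint)to {
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:from];
    [path addLineToPoint:to];

    CAShapeLayer *lineLayer = [CAShapeLayer layer];
    lineLayer.path = path.CGPath;
    lineLayer.strokeColor = [UIColor blueColor].CGColor;
    lineLayer.fillColor = [UIColor clearColor].CGColor;
    lineLayer.lineWidth = 10.0;
    [self.layer addSublayer:lineLayer]; // the view composites all sublayers for you
}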
Every time drawRect: is called, you start with a blank slate. If you don't keep track of everything you have drawn before in order to draw it again, you end up drawing only the latest swipe of your finger and none of the old ones. You will have to keep track of all of your finger swipes and redraw them every time drawRect: is called.
Instead of redrawing every line, you can draw into an image and then just display the image in your drawRect: method. The image will accumulate the lines for you. Of course, this method makes undo more difficult to implement.
From the iPhone Application Programming Guide:
Use the UIGraphicsBeginImageContext function to create a new image-based graphics context. After creating this context, you can draw your image contents into it and then use the UIGraphicsGetImageFromCurrentImageContext function to generate an image based on what you drew. (If desired, you can even continue drawing and generate additional images.) When you are done creating images, use the UIGraphicsEndImageContext function to close the graphics context. If you prefer using Core Graphics, you can use the CGBitmapContextCreate function to create a bitmap graphics context and draw your image contents into it. When you finish drawing, use the CGBitmapContextCreateImage function to create a CGImageRef from the bitmap context. You can draw the Core Graphics image directly or use it to initialize a UIImage object.
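Applied to this question, the accumulate-into-an-image approach might look like the following sketch, where accumulatedImage is an assumed UIImage ivar of the view:

- (void)appendLineFrom:(CGPoint)from to:(CGPoint)to {
    UIGraphicsBeginImageContext(self.bounds.size);
    [accumulatedImage drawInRect:self.bounds]; // redraw everything drawn so far
    CGContextRef context = UIGraphicsGetCurrentContext();
    [[UIColor blueColor] set];
    CGContextSetLineWidth(context, 10);
    CGContextMoveToPoint(context, from.x, from.y);
    CGContextAddLineToPoint(context, to.x, to.y);
    CGContextStrokePath(context);
    accumulatedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self setNeedsDisplay]; // drawRect: now just draws accumulatedImage
}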