I'm drawing an oblong 'egg' shape (on iOS), and want to use it as a boundary for particles. My thought is to use the curve paths to make a UIView and then use hitTest:withEvent: or pointInside:withEvent: to enforce boundary collision.
The problem is, of course, that UIView is always rectangular. How would you go about checking to see if a point is inside an irregular shape like this?
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat w = rect.size.width;
    CGFloat h = rect.size.height;
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, w/2, h/5);
    CGContextAddCurveToPoint(context, w*0.1, h/4.3, w*0.1, h*0.82, w/2, h*0.8);
    CGContextAddCurveToPoint(context, w*0.9, h*0.82, w*0.9, h/4.3, w/2, h*0.2);
    CGContextStrokePath(context);
}
I'm using openFrameworks, for what that's worth. This code is just Obj-C but I'm open to any C++/obj-C++ solutions out there.
If you make a CGPathRef you can use CGPathContainsPoint, and you can use that same CGPathRef to render into the context. You could also call CGContextPathContainsPoint on the context containing the path, but depending on when you need to test you might not have a context. Another alternative is the -containsPoint: method on UIBezierPath.
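For instance, a UIView override using CGPathContainsPoint might look like this (a sketch; eggPath is an assumed CGPathRef ivar built from the same curves as the drawing code):

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // eggPath: assumed CGPathRef ivar holding the egg outline
    return CGPathContainsPoint(eggPath, NULL, point, false);
}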
If you want to code this from scratch, http://www.softsurfer.com/Archive/algorithm_0103/algorithm_0103.htm goes through a couple of different algorithms that will work for an arbitrary polygon.
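If you do roll it yourself, the crossing-number test from that page amounts to counting how many polygon edges a horizontal ray from the test point crosses; a sketch in plain C (pts and n are assumed inputs, e.g. a polygon approximating the egg curve):

#import <CoreGraphics/CoreGraphics.h>

// Even-odd (crossing-number) test: returns true if p lies inside pts[0..n-1].
static bool pointInPolygon(CGPoint p, const CGPoint *pts, int n) {
    bool inside = false;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        // Toggle whenever the edge pts[j] -> pts[i] crosses the horizontal ray from p.
        if (((pts[i].y > p.y) != (pts[j].y > p.y)) &&
            (p.x < (pts[j].x - pts[i].x) * (p.y - pts[i].y) / (pts[j].y - pts[i].y) + pts[i].x))
            inside = !inside;
    }
    return inside;
}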
I have two square images in my UIView. Once I drag my finger from one image to the other, I want to draw a straight line between them.
I have handled the touchesMoved: method to check when my touch location reaches the frame of either image, so the logic of when to start drawing, and between which two points, is taken care of.
I just can't figure out how to do the drawing in (void)drawRect:(CGRect)rect. For one thing, I added an NSLog in my drawRect: and wrote code to draw a line between two points, and even that's not happening.
I checked this question too, but I want to continue to drag and draw lines between multiple points.
- (void)drawRect:(CGRect)rect
{
    NSLog(@"Draw Rect Entered");
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextSetStrokeColorWithColor(context, [[UIColor blackColor] CGColor]);
    CGContextSetLineWidth(context, 1.0);
    CGContextMoveToPoint(context, 0, 0);
    CGContextAddLineToPoint(context, 20, 20);
    CGContextStrokePath(context);
    CGContextRestoreGState(context); // balance the earlier CGContextSaveGState
}
In order for the view to update, you need to call -setNeedsDisplay or -setNeedsDisplayInRect: on the view that needs to be redrawn. You should do this in the touch handler once you have determined you need to draw the line. That will trigger the view's -drawRect: method; you don't call -drawRect: directly. For more information, read the UIView docs. I don't know whether you want your line to exist permanently once created; if there's a possibility it might be deleted, I'd add an if statement to -drawRect:, controlled by a boolean, that determines whether to draw the line. Then you can easily switch it off if necessary.
Based on your comments below, I'd create an ivar array in which to store the points for the currently traced line and then loop over them in -drawRect:, using the Core Graphics functions to draw the line a segment at a time (see the sketch after this answer). That way the new segment and all previous segments will be drawn with each redraw. Since CGPoint isn't an Obj-C object, you'll either have to wrap the points (NSValue with UIKit's CGPoint additions works) or use Obj-C++ and std::vector. You could also use a fixed-size C array (you mention you have precisely 10 points); in that case you'd have to preset the unused coordinates to some arbitrary value defined as invalid (e.g. -500,-500) and add logic to skip line segments when those values are encountered.
Minor point: I wouldn't hardcode the line coordinates. Instead, derive them from the images' frame property. That way your code will be more readable and won't break if you change or resize the images.
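A minimal sketch of that loop, assuming the touch handler appends NSValue-wrapped CGPoints to an assumed linePoints NSMutableArray ivar and then calls -setNeedsDisplay:

- (void)drawRect:(CGRect)rect {
    if (self.linePoints.count < 2) return; // nothing to connect yet
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextSetLineWidth(context, 1.0);
    // Start at the first stored point, then add one segment per subsequent point.
    CGPoint first = [[self.linePoints objectAtIndex:0] CGPointValue];
    CGContextMoveToPoint(context, first.x, first.y);
    for (NSUInteger i = 1; i < self.linePoints.count; i++) {
        CGPoint p = [[self.linePoints objectAtIndex:i] CGPointValue];
        CGContextAddLineToPoint(context, p.x, p.y);
    }
    CGContextStrokePath(context);
}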
There are two issues I have run into that could cause this problem:
1) Make sure you are calling setNeedsDisplay on the main thread
2) If you are working with multiple views make sure you are calling setNeedsDisplay against the correct one.
It is easy to work with one view and forget to call setNeedsDisplay against the parent view.
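For the first point, the usual pattern when the update originates off the main thread is:

dispatch_async(dispatch_get_main_queue(), ^{
    [self setNeedsDisplay]; // UIKit redraws must be scheduled from the main thread
});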
I have read what I believe to be the relevant parts of the Quartz 2D Programming Guide, but cannot find an answer to the following (they don't seem to talk a lot about iOS in the document):
My application displays a drawing in a UIView. Every now and then I have to update the drawing in some way, e.g. change the fill colour of one of the shapes (I keep CGPathRefs to the important shapes to be able to redraw them with a different fill colour later). As described in the Section "Drawing With a CGLayer" on page 169 of the aforementioned document, I was thinking of drawing the entire drawing into a CGContext that I would obtain from a CGLayer, like so:
CGContextRef offscreenContext = CGLayerGetContext(offscreenLayer);
Then I could do my updating off-screen into the CGContext and draw the CGLayer into my UIView in the UIView's drawRect: method, like so:
CGContextDrawLayerAtPoint(viewContext, CGPointZero, offscreenLayer);
The problem I am having is: where do I get my CGLayer from? My understanding is that I have to make it using CGLayerCreateWithContext, supplying a CGContext as a parameter from which it inherits most of its properties. Obviously, the right context would be the context of the UIView, which I am getting with
CGContextRef viewContext = UIGraphicsGetCurrentContext();
but if I am not mistaken, I can only get that within the drawRect: method and it is not valid to assume that the context I am given there will be the same one next time the method is called, i.e. I can only use that CGContext locally within the method.
So, how can I get a CGContext that I can use to initialise my CGLayer to create an offscreen CGContext to draw into and then draw the entire layer back into my UIView's CGContext?
PS: While you're at it; if anything above does not make sense or is not sane, please let me know. I am just starting to get my head around Quartz 2D.
First of all, if you are doing this in an iOS environment, I think you are right. The documentation clearly says that the way to obtain the view's CGContextRef is
CGContextRef ctx = UIGraphicsGetCurrentContext();
Then you use that context for creating the CGLayer with
CGLayerRef layer = CGLayerCreateWithContext(ctx, (CGSize){100, 100}, NULL); // give the layer a real size; (CGSize){0,0} would create an empty, unusable layer
And if you want to draw on that layer, you have to draw with the context you get from the layer (it is somewhat different from the context you passed in earlier to create the CGLayer). I'm guessing CGLayerCreateWithContext saves some, but not all, of the information from the context passed in. One example is the colour-space information, which you have to re-specify when you fill something using the context from the CGLayer.
You can get the CGLayer context reference from the CGLayerGetContext() function and use that to draw.
CGContextRef layerCtx = CGLayerGetContext(layer);
CGContextBeginPath(layerCtx);
CGContextMoveToPoint(layerCtx, -10, 10);
CGContextAddLineToPoint(layerCtx, 100, 10);
CGContextAddLineToPoint(layerCtx, 100, 100);
CGContextClosePath(layerCtx);
CGContextStrokePath(layerCtx); // don't forget to actually stroke (or fill) the path
One point I found out: when you draw something outside the layer's bounds, it is automatically clipped (which makes sense, since there is no need to draw what cannot be seen), but if you later move the layer using a matrix transformation, the clipped parts are missing.
One last thing: if you save the reference to a layer in a variable and later on you want to draw it, you can use the CGContextDrawLayerAtPoint() function, like
CGContextDrawLayerAtPoint(ctx, (CGPoint) {newPointX, newPointY}, layer);
It will sort of "stamp" or "draw" the layer at that newPointX and newPointY coordinate.
I hope that answers your question; if not, please let me know.
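To tie this back to the original question: one way around only having a context inside drawRect: is to create the layer lazily on the first draw and cache it in an ivar. A sketch (offscreenLayer is an assumed CGLayerRef ivar):

- (void)drawRect:(CGRect)rect {
    CGContextRef viewContext = UIGraphicsGetCurrentContext();
    if (offscreenLayer == NULL) {
        // First draw: create the layer from this context and render into it once.
        offscreenLayer = CGLayerCreateWithContext(viewContext, self.bounds.size, NULL);
        CGContextRef offscreenContext = CGLayerGetContext(offscreenLayer);
        // ... draw the shapes into offscreenContext here ...
    }
    // Every subsequent redraw just stamps the cached layer into the view.
    CGContextDrawLayerAtPoint(viewContext, CGPointZero, offscreenLayer);
}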
I can combine rect1 with rect2 using CGRectUnion() and get a combined rect3 fine.
Is it possible to subtract rect1 from rect3 (which contains rect1) and get the remaining part as a rect?
As Brad Larson said, you can't do this in Quartz, because the CGRect functions work with nothing but rects and their component parts (points, sizes, and single numbers).
If you were programming the Mac, I would suggest using another API named HIShape. It's the modern successor to QuickDraw Regions, and as such, it is capable of non-rectangular shapes. Unfortunately, though HIShape is still available on 64-bit Mac OS X, it is not available on iOS.
If you really need something like this, you will have to write it yourself, including your own HIShape-like not-necessarily-rectangular shape class.
Try CGRectIntersection, if I understood you correctly.
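For reference, that returns the region common to both rects:

CGRect overlap = CGRectIntersection(rect3, rect1); // the area shared by both rects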
Well, it depends on how rect3 contains rect1.
I mean, it may happen that the resulting area is no longer a rect.
For example, if rect1 is entirely inside rect3, the remaining area is not a rect, so you couldn't represent it with a CGRect.
You would only obtain a rect if rect3 and rect1 completely share one side (see the sketch below). So I need to know what kind of object you want to obtain from that subtraction:
should it be a new image with two differently coloured areas? Or should the resulting area be split into several CGRects (upper rect, left, bottom, right...)?
What are you going to do with the resulting "object"?
luca
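For the shared-side case luca describes, CGRectDivide can produce the remainder directly. A sketch, assuming rect1 spans rect3's full height and sits flush against its left edge:

CGRect slice, remainder;
// Slice rect1's width off rect3's left edge; remainder is rect3 minus rect1.
CGRectDivide(rect3, &slice, &remainder, rect1.size.width, CGRectMinXEdge);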
I've been creating iPhone apps for a while now, using basic transformations (rotations, scale, etc) but now I'd like to do something a little more complex.
Maths really isn't my strongest point... but I was wondering how I might go about adding 'perspective' to a UIView (see the image below). I quickly mocked the screenshot up using skew options in Photoshop.
I have had a look around Stack Overflow for solutions to this, and found How do I apply a perspective transform to a UIView?, which works excellently, but it's not really what I'm after, because the height of the leftmost edge ends up larger than that of the rightmost edge.
Does anyone know how I might go about doing this with CATransform3D, but without the differing edge heights?
(Mockup: http://img594.imageshack.us/img594/4354/perspectivel.png)
If you just want to skew, you don't need 3D transform. An affine transform will suffice.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform.b = -0.1; // vertical shear: each point's y picks up -0.1 * x
    transform.a = 0.9;  // horizontal scale: condense x to 90%
    CGContextConcatCTM(ctx, transform);
    // do drawing on the context
}
This is a modified copy & paste from a project which has a similar transform, but you may need to tune the parameters a and b. These values give a 1-in-9 rise from left to right (0.1/0.9), while condensing the drawing to 90% of its width (0.9).
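If you want the whole view skewed rather than just your own drawing, the same matrix can be set as the view's transform instead; a sketch with the same illustrative values (myView is an assumed view reference):

// a = 0.9 condenses x to 90%; b = -0.1 shears y by -0.1 * x; c, d, tx, ty keep their identity values.
myView.transform = CGAffineTransformMake(0.9, -0.1, 0.0, 1.0, 0.0, 0.0);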
This topic has been scratched once or twice, but I am still puzzled. And Google was not friendly either.
Since Quartz allows for arbitrary coordinate systems using affine transforms, I want to be able to draw things such as floorplans using real-life coordinate, e.g. feet.
So basically, for the sake of an example, I want to scale the view so that when I draw a 10x10 rectangle (think a 10-inch box for example), I get a 60x60 pixels rectangle.
It works, except the rectangle I get is quite fuzzy. Another question here got an answer that explains why; however, I'm not sure I fully understood that reason, and moreover, I don't know how to fix it. Here is my code:
I set my coordinate system in my awakeFromNib custom view method:
- (void)awakeFromNib {
    CGAffineTransform scale = CGAffineTransformMakeScale(6.0, 6.0);
    self.transform = scale;
}
And here is my draw routine:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect r = CGRectMake(10., 10., 10., 10.);
    CGFloat lineWidth = 1.0;
    CGContextStrokeRectWithWidth(context, r, lineWidth);
}
The square I get is scaled just fine, but totally fuzzy. Playing with lineWidth doesn't help: when lineWidth is set smaller, it gets lighter, but not crisper.
So is there a way to set up a view to have a scaled coordinate system, so that I can use my domain coordinates? Or should I go back and implementing scaling in my drawing routines?
Note that this issue doesn't occur for translation or rotation.
Thanks
the [stroked] rectangle I get is quite fuzzy.
Usually, this is because you plotted the rectangle on whole-number co-ordinates and your line width is 1.
In PostScript (and thus in its descendants: AppKit, PDF, and Quartz), drawing units default to points, 1 point being exactly 1/72 inch. The Mac and iPhone currently* treat every such point as 1 pixel, regardless of the actual resolution of the screen(s), so, in a practical sense, points (by default, on the Mac and iPhone) are equal to pixels.
In PostScript and its descendants, integral co-ordinates run between points. 0, 0, for example, is the lower-left corner of the lower-left point. 1, 0 is the lower-right corner of that same point (and the lower-left corner of the next point to the right).
A stroke is centered on the path you're stroking. Thus, half will be inside the path, half outside.
In the (conceptually) 72-dpi world of the Mac, these two facts combine to produce a problem. If 1 pt is equal to 1 pixel, and you apply a 1-pt stroke between two pixels, then half of the stroke will hit each of those pixels.
Quartz, at least, will render this by painting the current color into both pixels at one-half of the color's alpha. It determines this by how much of the pixel is covered by the conceptual stroke; if you used a 1.5-pt line width, half of that is 0.75 pt, which is three-quarters of each 1-pt pixel, so the color will be rendered at 0.75 alpha. This, of course, goes to the natural conclusion: If you use a 2-pt line width, each pixel is completely covered, so the alpha will be 1. That's why you can see this effect with a 1-pt stroke and not a 2-pt stroke.
There are several workarounds:
Half-point translation: Exactly what it says on the box: you translate up and right by half a point, compensating for the aforementioned 1-pt-cut-in-half division (see the sketch after this list).
This works in simple cases, but flakes out when you involve any other co-ordinate transformations except whole-point translations. That is to say, you can translate by 30, 20 and it'll still work, but if you translate by 33+1/3, 25.252525…, or if you scale or rotate at all, your half-point translation will be useless.
Inner stroke: Clip first, then double the line width (because you're only going to draw half of it), then stroke.
This can require gstate juggling if you have a lot of other drawing to do, since you don't want that clipping path affecting your other drawing.
Outer stroke: Essentially the same as an inner stroke, except that you reverse the path before clipping.
Can be better (less gstate juggling) than an inner stroke if you're sure that the paths you want to stroke won't overlap. On the other hand, if you also want to fill the path, the gstate juggling returns.
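A minimal sketch of the half-point translation, assuming a 1-pt stroke on whole-number co-ordinates:

CGContextSaveGState(context);
// Shift the path off the pixel boundaries and onto pixel centers.
CGContextTranslateCTM(context, 0.5, 0.5);
CGContextSetLineWidth(context, 1.0);
CGContextStrokeRect(context, CGRectMake(10, 10, 100, 100));
CGContextRestoreGState(context);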
*This won't last forever. Apple's been dropping hints for some time that they're going to change at least the Mac's drawing resolution at some point. The API foundation for such a change is pretty much all there now; it's all a matter of Apple throwing the switch.
Well, as often happens, explaining the issue led me to a solution.
The problem is that the view's transform property is applied after the view has been drawn into a bitmap buffer. The scaling transform has to be applied before drawing, i.e. in the drawRect: method. So scratch the awakeFromNib I gave, and here is a corrected drawRect::
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Apply the scale to the context itself, before any drawing happens.
    CGAffineTransform scale = CGAffineTransformMakeScale(6.0, 6.0);
    CGContextConcatCTM(context, scale);
    CGRect r = CGRectMake(10., 10., 10., 10.);
    // Line width is in user-space units, so 0.1 renders about 0.6 px wide after the 6x scale.
    CGFloat lineWidth = 0.1;
    CGContextStrokeRectWithWidth(context, r, lineWidth);
}