I'm trying to resize a drawn quad curve by dragging one of its three control points so the curve fits. What is the best approach to do this? Note that I'm drawing into an image view, not using drawRect:.
I know that I should detect whether the touch is on one of the control points, which is pretty easy, but I don't know what to do afterwards in my touchesMoved and touchesEnded methods.
Several things:
I would not use an image view for this. This is the kind of problem that drawRect: is for.
Don't use touchesMoved. Use a UIPanGestureRecognizer on the control points.
Make the control points subviews so you can attach gesture recognizers to them.
To work well, the control points typically need to have a pretty large hit area (larger than they are visually). You can do this pretty easily by making the control point views larger than what they draw (so if they're drawn as a 13 point circle, you put that in the middle of a 23 point view).
For an example of code that does all this, see CurvyTextView.m. It doesn't address the last point (its control point views are too small to use comfortably on a real device). Ignore all the text-drawing code; you just care about updateControlPoints, addControlPoint:color:, initWithFrame:, pan:, and drawPath.
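Something along these lines, as a rough sketch and not taken from CurvyTextView.m (the 23/13-point sizes, the method names, and the redraw-in-drawRect: assumption are all illustrative), inside a custom UIView subclass that draws the curve:

// A 23-pt view with a 13-pt dot gives a touch target larger than the visible circle.
- (UIView *)addControlPointAt:(CGPoint)center
{
    UIView *controlPoint = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 23, 23)];
    controlPoint.center = center;
    controlPoint.backgroundColor = [UIColor clearColor];

    UIView *dot = [[UIView alloc] initWithFrame:CGRectMake(5, 5, 13, 13)];
    dot.layer.cornerRadius = 6.5;
    dot.backgroundColor = [UIColor redColor];
    dot.userInteractionEnabled = NO;
    [controlPoint addSubview:dot];

    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
        initWithTarget:self action:@selector(pan:)];
    [controlPoint addGestureRecognizer:pan];

    [self addSubview:controlPoint];
    return controlPoint;
}

- (void)pan:(UIPanGestureRecognizer *)gesture
{
    CGPoint translation = [gesture translationInView:self];
    UIView *controlPoint = gesture.view;
    controlPoint.center = CGPointMake(controlPoint.center.x + translation.x,
                                      controlPoint.center.y + translation.y);
    [gesture setTranslation:CGPointZero inView:self];
    [self setNeedsDisplay]; // drawRect: rebuilds the quad curve from the control point centers
}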
Related
I wish to emulate this effect in Xcode with Swift. After some research I managed to find some articles about drawing smooth curves using a set of points, but I am still unclear about how I could dynamically modify the curves when the user touches/holds the screen.
Question:
I know how to make a smooth Bezier curve, but how can I add gesture recognizers so that dragging the curve changes its shape?
I only need someone to point me in the right direction. Is there a guide or article in particular that could be useful?
Create a transparent ControlPointView for every control point of the curve, around 50x50 pt, so that users can easily tap and drag them.
Add a small image in the middle of every ControlPointView, so that users can see where the control point is located.
Add a UIPanGestureRecognizer to every ControlPointView and handle it in the view controller.
Use the centers of the control points to rebuild the UIBezierPath every time a gesture recognizer's state changes (see the sketch below).
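A rough sketch of the last two steps, written in Objective-C to match the rest of this page (the Swift version is a direct translation). It assumes the curve is displayed by a CAShapeLayer called curveLayer and that startView, controlView, and endView are hypothetical ControlPointView properties; adapt the names to your setup:

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    // Move the dragged ControlPointView with the finger.
    UIView *controlPointView = gesture.view;
    CGPoint translation = [gesture translationInView:self.view];
    controlPointView.center = CGPointMake(controlPointView.center.x + translation.x,
                                          controlPointView.center.y + translation.y);
    [gesture setTranslation:CGPointZero inView:self.view];

    // Rebuild the path from the control point centers on every state change.
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:self.startView.center];
    [path addQuadCurveToPoint:self.endView.center
                 controlPoint:self.controlView.center];
    self.curveLayer.path = path.CGPath;
}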
I have this image:
What I want to do is add a UITapGestureRecognizer to this image (or split the image into the parts it consists of and add a UITapGestureRecognizer to each part) in order to trigger different actions according to the leaf tapped. If I split the image into separate images, one per leaf, the UIImageViews will probably overlap and a tap on one could be recognized as a tap on another. Keeping just one image means knowing which points of the screen belong to which leaf.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer call back to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could also implement this in your callback, or you could implement it in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
EDIT: I didn't realize that you might also be looking for assistance on figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested, using Quartz drawing and CGPathContainsPoint(). If you have detailed graphics that you need rendered as a PNG, you could certainly construct a simple path that would be (virtually) overlaid to allow hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (but then you would basically be reimplementing CGPathContainsPoint() without a path), or to employ various tricks that look at the color of the pixels at your touch point. Googling will turn up some useful results if you go the pixel route, but honestly, for a shape as simple as what you've drawn, just recreate it with some UIBezierPath code and hit-test against that.
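For example, a sketch of the single-image route, assuming you recreate each leaf's outline as a UIBezierPath in the image view's coordinate space (leafPaths and leafTappedAtIndex: are made-up names):

- (void)handleTap:(UITapGestureRecognizer *)recognizer
{
    CGPoint point = [recognizer locationInView:self.imageView];
    // leafPaths is an assumed NSArray of UIBezierPath objects,
    // one per leaf, built in the image view's coordinate space.
    [self.leafPaths enumerateObjectsUsingBlock:^(UIBezierPath *path, NSUInteger idx, BOOL *stop) {
        if ([path containsPoint:point]) {
            [self leafTappedAtIndex:idx]; // hypothetical per-leaf handler
            *stop = YES;
        }
    }];
}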
Not sure if this will be helpful, but if you get stuck on figuring out which leaf was clicked, you could use an old image-map trick we used in CD-ROM projects for pixel-accurate click tracking on images.
You have your full-size image. Make a 25% (or smaller) scaled copy of it. Fill each of the leaf regions you want to track with a different color; fill anything you want to ignore with black. When the full-size image is clicked, take the x/y coordinates and scale them by the same percentage as the scaled image. Then read the pixel color of the scaled image at those scaled x/y coordinates. The pixel color tells you which leaf was clicked.
Sounds clunky but it works really well and is fast.
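If you go that way, reading the pixel is the only fiddly part. A sketch, assuming hitMapImage is the scaled color-map image and point has already been scaled down by the same factor:

// Fills rgba with the (premultiplied) RGBA components, 0-255, of the hit-map pixel at 'point'.
static void HitMapColorAtPoint(UIImage *hitMapImage, CGPoint point, unsigned char rgba[4])
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // Shift the image so the pixel of interest lands in the 1x1 context.
    CGContextTranslateCTM(context, -point.x, point.y - hitMapImage.size.height);
    CGContextDrawImage(context,
                       CGRectMake(0, 0, hitMapImage.size.width, hitMapImage.size.height),
                       hitMapImage.CGImage);
    CGContextRelease(context);
}

Comparing the returned RGB values against the fill colors you chose for each leaf tells you which region was hit.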
(all that said, I don't think alpha areas of images trigger the gesture recognizer - so breaking the image up would be less complicated/code intensive.)
If you can break the shape apart into its constituent elements, then you can put each into its own layer and use the method discussed in this stackoverflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents
There are three layers added to a UIView: one draws a rectangle, one draws a circle, and one draws a polygon. The layers are not opaque, and all three fill the whole view. When I touch the polygon, I want to get back the layer that drew it. I have implemented this, but I don't know whether there is a better solution. My approach is:
1. Draw the content in -drawLayer:inContext: and store the CGPath that you used.
2. In the UIView's -touchesEnded:withEvent: method, use CGPathContainsPoint() to detect whether the touch point is contained by the CGPath.
Maybe this is a clumsy way to solve it. Can anyone tell me how to do it better?
If you need an accurate hit test for paths, I'm afraid you have to iterate the layer hierarchy yourself and check whether the point is inside each layer's path using CGPathContainsPoint(), as you suggested.
While iterating, you can optimize by skipping layers whose frame doesn't contain the point.
For less fine-grained control, you can get the touched layer by using CALayer's
- (CALayer *)hitTest:(CGPoint)thePoint
method.
If you have a layer hierarchy with a nesting level < 1000 (which is almost always true) I would not worry too much.
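Putting both suggestions together, a sketch of the iteration; storedPathForLayer: is a made-up lookup for whatever CGPath you kept when drawing (with CAShapeLayer you could simply read layer.path instead):

// Called from -touchesEnded:withEvent: with the touch point in the view's coordinates.
- (CALayer *)layerContainingPoint:(CGPoint)point
{
    for (CALayer *layer in [self.layer.sublayers reverseObjectEnumerator]) {
        // Cheap rejection first: skip layers whose frame doesn't contain the point.
        if (!CGRectContainsPoint(layer.frame, point)) {
            continue;
        }
        // The CGPath you stored when drawing this layer (hypothetical accessor).
        CGPathRef path = [self storedPathForLayer:layer];
        CGPoint pointInLayer = [layer convertPoint:point fromLayer:self.layer];
        if (path && CGPathContainsPoint(path, NULL, pointInLayer, false)) {
            return layer;
        }
    }
    return nil;
}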
I want to implement dialog borders that scale to whatever size I need the dialog to be. Perhaps there is a better, more conventional name for this sort of thing; if there is and someone would edit the title, that'd be great.
Anyhow, I'd like to do this so I can have dialogs of any size without the visual artifacts that come with scaling border art to small, large, or wackily disproportionate dimensions. I have a few ideas on how this is done, but am not sure which is better for iPhone. I have a few questions.
1) Should I make a containing view object that overloads its drawRect: method and draws the images at their appropriate positions and scales when the method is called, or should I make a containing view object that simply contains 8 UIImageViews? I suspect the latter approach won't work if I need to actively scale the resulting dialog class, as in an animation.
1b) If overloading drawRect: is the way to go, does someone have some sample code or a link to an example that demonstrates drawing an image directly from drawRect:?
2) Is it generally better to create
a) a single 3x3 image where each segment sits in its own cell of the grid? If so, is it simple to draw a portion of this image onto my target view in drawRect: (assuming drawRect: is the right approach)?
b) The pieces separately in 8 different files?
UPDATE:
To clarify, the idea is to take any custom border art and stretch the 2nd, 4th, 6th, and 8th cells (in a 3x3-cell grid) to form a border of any size from just those assets. Stretching a plain image would distort the corners, so I'd like to stretch only those even-numbered cells as needed and tack on the corners so there is no distortion. I'd seen this done before, so I thought it might be a standard thing with a standard name other than what I called it.
Anyhow, I was advised that adding 8 UIImageViews to a container would not be as efficient as drawing the UIImages on the fly in drawRect:, so I took that approach, using CGContextDrawImage() after applying the necessary transformations to the context to translate and invert the Y axis. Because this function draws from the bottom-left corner of an image onto a top-left-origined UIView, the image appears upside down without the Y-axis inversion. I noticed the suggestion to use UIImage methods like drawAtPoint:, which work similarly but without the inversion, since UIImage draws in the same orientation as UIViews. I will continue my implementation with the former and see how it goes, but one other question:
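For reference, here is roughly what that flip looks like for one piece; cornerImage is an assumed UIImage property, and this rect happens to place it in the view's bottom-left corner:

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSaveGState(context);
    // Flip the context so CGContextDrawImage doesn't draw the image upside down
    // in the top-left-origined UIView coordinate system.
    CGContextTranslateCTM(context, 0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    // destRect is in the flipped coordinates, so (0, 0) is the view's bottom-left corner.
    CGRect destRect = CGRectMake(0, 0,
                                 self.cornerImage.size.width, self.cornerImage.size.height);
    CGContextDrawImage(context, destRect, self.cornerImage.CGImage);
    CGContextRestoreGState(context);
}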
Would someone happen to know which of these approaches is more efficient, faster, etc.?
I'm not sure I follow, but here's my best shot at an answer...
Using drawRect: or adding individual UIImageViews to a parent view is entirely up to you. UIImageView gives you a bit of encapsulated functionality for free, but otherwise they are the same as far as appearances go.
If you do want to go the drawRect route, you just need to use UIImage's drawAtPoint: method. Do the math for where you want it to be, and draw it. You can calculate your points based on the parent view's dimensions.
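As a rough illustration of that under the 8-separate-images layout (topLeft, top, topRight, and the rest are assumed UIImage properties on the border view):

- (void)drawRect:(CGRect)rect
{
    CGSize size = self.bounds.size;
    CGFloat cornerW = self.topLeft.size.width;
    CGFloat cornerH = self.topLeft.size.height;

    // Corners keep their natural size.
    [self.topLeft drawAtPoint:CGPointZero];
    [self.topRight drawAtPoint:CGPointMake(size.width - cornerW, 0)];
    [self.bottomLeft drawAtPoint:CGPointMake(0, size.height - cornerH)];
    [self.bottomRight drawAtPoint:CGPointMake(size.width - cornerW, size.height - cornerH)];

    // Edges stretch to fill the space between the corners.
    [self.top drawInRect:CGRectMake(cornerW, 0, size.width - 2 * cornerW, cornerH)];
    [self.bottom drawInRect:CGRectMake(cornerW, size.height - cornerH,
                                       size.width - 2 * cornerW, cornerH)];
    [self.left drawInRect:CGRectMake(0, cornerH, cornerW, size.height - 2 * cornerH)];
    [self.right drawInRect:CGRectMake(size.width - cornerW, cornerH,
                                      cornerW, size.height - 2 * cornerH)];
}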
As far as scaling goes, it's impossible to resize these images without scaling them, so I'd plan ahead and make your originals as large as, or larger than, you ever expect to display them.
Hope that helps a little?
Cheers
If you want a border on a dialog box, assuming the box is a UIView (or subclass), then set the layer's border properties and let the system draw the border for you.
#import <QuartzCore/QuartzCore.h>
// ...
view.layer.borderWidth = 2;
view.layer.borderColor = [UIColor whiteColor].CGColor;
view.layer.cornerRadius = 0; // 0=square corners, >0 for rounded
Currently, I have a UIView subclass that "stamps" a single 2px by 2px CGLayerRef across the screen, up to 160 x 240 times.
I currently animate this by moving the UIView (actually a UIImageView) "up" the screen 2 pixels and then drawing the next "row".
Would using multiple CALayer layers speed up performance of rendering this animation?
Are there tutorials, sample applications or code snippets for use of CALayer with the iPhone SDK?
The reason I ask is that most of the code snippets I find that demonstrate simple examples of CALayer employ method calls that do not work with the iPhone SDK. I appreciate any advice or pointers.
Okay, well, if you want something that has some good examples of Core Animation code that draws things like that and works on the phone, I recommend the GeekGameBoard code that Jens Alfke published (it is an improved version of some Apple demo code).
Based on what you are describing, I think you are doing something way more complicated than it needs to be. My impression is that you basically want a static view that you animate by shifting its position so that it is partially off screen. If you just need to draw some static content in your drawRect:, going through layers is not going to be faster than just calling CGContextFillRect() with your color. After that you could just use implicit animations and the animator proxy on UIView to move the view. I suspect you could even get rid of the custom drawRect: implementation with a patterned UIColor, but I honestly have not benchmarked the difference between the two.
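For example, the move itself can be left entirely to UIKit; a sketch, where patternView is whatever view draws the pattern and the duration is a placeholder:

// Animate the view up the screen two pixels instead of redrawing each row.
[UIView beginAnimations:@"scroll" context:NULL];
[UIView setAnimationDuration:0.25]; // placeholder duration
patternView.center = CGPointMake(patternView.center.x, patternView.center.y - 2.0);
[UIView commitAnimations];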
What CALayer methods are you seeing that don't work on iPhone? Aside from animation features tied to Core Image, I have not noticed much that is missing. The big things you are likely to notice are that all views are layer backed (so you do not need to do anything special to use layers; you can just grab a UIView's layer through the layer accessor method), and that the coordinate system has a top-left origin.
In any event, generally having more things is slower than having fewer things. If you are just repeating the same pattern over and over again you are likely to find the best performance is implementing a custom UIView/CALayer/UIColor that knows how to draw what you want, rather than placing visually identical layers or views next to each other.
Having said that, generally layers are lighter weight than views, so if you have a lot of separate elements that you need to keep logically separated you will find that moving to layers can be a win over using views.
You might want to look at -[UIColor initWithPatternImage:] depending on exactly what you are trying to do. If you are using this two pixel pattern as a background color you could just make a UIColor that draws it and set the background.
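Something like this minimal sketch, assuming the tile lives in a file named pattern.png:

UIImage *tile = [UIImage imageNamed:@"pattern.png"]; // your 2x2 (or larger) pattern asset
view.backgroundColor = [UIColor colorWithPatternImage:tile]; // UIKit tiles it across the view for you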
What CALayer methods are you seeing that don't work on iPhone?
As one example, I tried implementing the grid demo here, without much luck. It looks like CAConstraintLayoutManager and CAConstraint are not available in QuartzCore.h.
In another attempt, I tried a very simple, small 20x20 CALayer object as a sublayer of my UIView's layer property, but that didn't show up.
Right now, I have a custom UIView of which I override the drawRect method. In drawRect I grab a context and render two types of CGLayerRefs:
At "off" cells I draw the background color across the entire 320x480 canvas.
At "on" cells, I either draw a single CGLayerRef across a grid of 320x480 pixels (initialization) or across a 320x2 row (animation).
During animation, I make a UIImageView clipping view of 320x478 pixels and draw a single row. This "pushes" my bitmap up the screen two pixels at a time.
Basically, I'd like to test whether or not using CALayer will accomplish two things:
Make my rendering faster, if CALayer has less overhead than what I'm doing now
Make my animation smoother, by letting me transition a layer up the screen smoothly
Unfortunately, I can't seem to get a basic CALayer working at the moment, and haven't found a good chunk of sample code to look at and play with.
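For what it's worth, here is a minimal sublayer that should show up; a common reason a bare CALayer renders nothing is that it has neither contents nor a backgroundColor set (myView and the green color are placeholders):

#import <QuartzCore/QuartzCore.h>
// ...
CALayer *box = [CALayer layer];
box.frame = CGRectMake(0, 0, 20, 20);
box.backgroundColor = [UIColor greenColor].CGColor; // without contents or a color, nothing is drawn
[myView.layer addSublayer:box];

// Standalone (non-view-backing) layers animate property changes implicitly,
// so this moves the layer up two pixels with a smooth transition.
box.position = CGPointMake(box.position.x, box.position.y - 2.0);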