I need to show a view containing a polygon that I can animate via its vertices. The polygon should be touchable, firing an event when touched, and once that event fires I need to move its vertices with some animation procedure.
I need three such polygons to form a 3D cube.
The darkened area is the view (actually an image) on which I have the cube.
There are two steps in the process: drawing and event handling.
Drawing can be done with Quartz 2D by implementing drawRect: in a view: calculate the cube's coordinates on screen, then create and fill the path. This works fine for solidly filled shapes. The alternative is an OpenGL view in which you specify triangles.
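The "calculating the coordinates of the cube on screen" step can be sketched in pure math. This is a minimal pinhole-style perspective projection, with no framework involved; the `Vec3`/`project` names and the focal-length convention are illustrative assumptions, not from any API.

```swift
import Foundation

// Illustrative 3D point type (not a framework type).
struct Vec3 { var x, y, z: Double }

/// Project a 3D point to 2D screen coordinates with a simple pinhole
/// camera looking down +z; `focal` controls the field of view and
/// `center` is the screen center. Flips y because screen y grows down.
func project(_ p: Vec3, focal: Double, center: (x: Double, y: Double)) -> (x: Double, y: Double) {
    let scale = focal / (focal + p.z)   // perspective divide
    return (center.x + p.x * scale, center.y - p.y * scale)
}

// A unit cube pushed back along +z; the front face (z = 3) will
// project larger than the back face (z = 5).
let cube: [Vec3] = [
    Vec3(x: -1, y: -1, z: 3), Vec3(x: 1, y: -1, z: 3),
    Vec3(x: 1, y: 1, z: 3),   Vec3(x: -1, y: 1, z: 3),
    Vec3(x: -1, y: -1, z: 5), Vec3(x: 1, y: -1, z: 5),
    Vec3(x: 1, y: 1, z: 5),   Vec3(x: -1, y: 1, z: 5),
]
let screen = cube.map { project($0, focal: 4, center: (x: 160, y: 240)) }
```

Once you have the eight screen points, each cube face is just a four-point path you can fill in drawRect:.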
At the event-handling end, you can implement touchesBegan:withEvent: and friends to get the location of the interaction, and possibly hitTest:withEvent: to allow other views below to handle subsequent events. The next thing to decide is how accurate you want to be. You can define a box that roughly matches the cube and test that for touches; most people will want to touch it somewhere in the middle anyway. For accurate testing, you need the screen coordinates, and you test each triangle in each polygon to see whether it contains the location. A quick search turns up good explanations of the maths needed for that. In the OpenGL case, you'll have to manually repeat the calculations performed by OpenGL to find out where on screen your polygons have ended up.
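The triangle containment test mentioned above can be done with cross-product signs: the point is inside if it lies on the same side of all three edges. A minimal sketch (pure math, framework-independent; the `Pt` type and function names are made up for illustration):

```swift
import Foundation

// Illustrative 2D point type.
struct Pt { var x, y: Double }

/// Cross product of (a - o) and (b - o); its sign tells which side of
/// the directed edge a->b the point o lies on.
func cross(_ o: Pt, _ a: Pt, _ b: Pt) -> Double {
    (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x)
}

/// True if p is inside (or on an edge of) triangle abc, regardless of
/// the triangle's winding order.
func triangleContains(_ p: Pt, _ a: Pt, _ b: Pt, _ c: Pt) -> Bool {
    let d1 = cross(p, a, b), d2 = cross(p, b, c), d3 = cross(p, c, a)
    let hasNeg = d1 < 0 || d2 < 0 || d3 < 0
    let hasPos = d1 > 0 || d2 > 0 || d3 > 0
    return !(hasNeg && hasPos)   // mixed signs mean "outside"
}

/// Hit test for a convex polygon: split it into a triangle fan and
/// test each piece. (Concave polygons need a proper triangulation.)
func polygonContains(_ p: Pt, _ verts: [Pt]) -> Bool {
    guard verts.count >= 3 else { return false }
    for i in 1..<(verts.count - 1) {
        if triangleContains(p, verts[0], verts[i], verts[i + 1]) { return true }
    }
    return false
}
```

For the cube, you would run `polygonContains` on each of the three visible faces using their projected screen coordinates.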
I wish to emulate this effect in Xcode with Swift. After some research I found some articles about drawing smooth curves through a set of points, but I am still unclear about how to dynamically modify curves when the user touches or holds the screen.
Question:
I know how to make a smooth Bézier curve, but how can I add gesture recognizers so that dragging the curve changes its shape?
I only need someone to point me in the right direction. Is there a guide or article in particular that could be useful?
Create a transparent ControlPointView for every control point of the curve, sized about 50×50 pt so that users can easily tap and drag it.
Add a small image in the middle of every ControlPointView, so that users can see where the control point is located.
Add UIPanGestureRecognizer on every ControlPointView and handle it in view controller.
Use centers of control points to rebuild UIBezierPath every time gesture recognizer's state is changed.
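The last step, rebuilding the path from the control points' centers, boils down to re-evaluating the Bézier polynomial every time the gesture changes. A framework-free sketch (the `Point` type and function names are illustrative; in UIKit you would hand the same four centers to UIBezierPath instead):

```swift
import Foundation

// Illustrative point type standing in for CGPoint.
struct Point { var x, y: Double }

/// Evaluate a cubic Bézier at parameter t from its four control points
/// (the centers of the four ControlPointViews).
func cubicBezier(_ p0: Point, _ p1: Point, _ p2: Point, _ p3: Point, t: Double) -> Point {
    let u = 1 - t
    let b0 = u * u * u, b1 = 3 * u * u * t, b2 = 3 * u * t * t, b3 = t * t * t
    return Point(
        x: b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
        y: b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y
    )
}

/// Sample the curve into a polyline; call this from the pan handler
/// each time the recognizer's state changes, then redraw.
func samplePath(_ p0: Point, _ p1: Point, _ p2: Point, _ p3: Point, steps: Int) -> [Point] {
    (0...steps).map { cubicBezier(p0, p1, p2, p3, t: Double($0) / Double(steps)) }
}
```

Because the curve is recomputed from scratch on every gesture update, there is no extra state to keep in sync: move a control point view, rebuild, redraw.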
Pretend I have 3 nodes in total. One of the nodes is a large SCNSphere; I put the camera inside this sphere and make the sphere double-sided with a textured image. I then put two smaller spheres next to each other at the center of this sphere. I also set allowsCameraControl. I want to be able to zoom into these two smaller spheres without zooming into the larger sphere and messing up the detail on it.
You can't put limits on the camera that's automatically created with allowsCameraControl. You'll have to do your own camera management, using your own gesture recognizers.
Another solution would be to rethink your approach to the background image. Instead of using a sky sphere for the background (which is what it sounds like you're doing), use a skybox, or cube map. You can supply a cube map through the scene's background property. The SCNMaterial documentation explains the options for supplying a cube map.
Hmm, I wonder what would happen if you use the large sphere's textured image/material as the scene's background, instead of putting it on an enclosing sphere?
I like the idea of using an image as the background, but there are two problems. One: I looked on the web for ways to make an image the background, and none of them worked. Two: I want the background to have depth, so to go with that idea I'd need a way to zoom into the background and have the image pan in the opposite direction to my drag.
I'm trying to resize a drawn quad curve by dragging one of its three control points so the curve fits. What is the best approach to do this? Note that I'm drawing into an image view, not using drawRect:.
I know that I should detect whether the touch is on one of the control points, which is pretty easy, but I don't know what to do afterwards in my touchesMoved and touchesEnded methods.
Several things:
I would not use an image view for this. This is the kind of problem that drawRect: is for.
Don't use touchesMoved. Use a UIPanGestureRecognizer on the control points.
Make the control points subviews so you can attach gesture recognizers to them.
To work well, the control points typically need to have a pretty large hit area (larger than they are visually). You can do this pretty easily by making the control point views larger than what they draw (so if they're drawn as a 13 point circle, you put that in the middle of a 23 point view).
For an example of code that does all this see CurvyTextView.m. It doesn't do the last point (the control point views are too small to use well on a real device). Ignore all the text drawing code. You just care about updateControlPoints, addControlPoint:color:, initWithFrame:, pan:, and drawPath.
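The core of a pan: handler like the one in CurvyTextView is just "add the recognizer's translation to the stored center, clamp, redraw". A framework-free sketch of that logic under stated assumptions (the `P`/`Rect` types and `moved` function are made up for illustration; in UIKit the translation comes from UIPanGestureRecognizer's translation(in:)):

```swift
import Foundation

// Illustrative stand-ins for CGPoint and CGRect.
struct P { var x, y: Double }
struct Rect { var minX, minY, maxX, maxY: Double }

/// Apply a pan translation to a control point's center, clamped to
/// `bounds` so the curve can't be dragged off-screen. Call this on
/// every .changed state of the recognizer, then rebuild the path.
func moved(_ center: P, by translation: P, within bounds: Rect) -> P {
    P(x: min(max(center.x + translation.x, bounds.minX), bounds.maxX),
      y: min(max(center.y + translation.y, bounds.minY), bounds.maxY))
}
```

Remember to reset the recognizer's translation to zero after applying it, otherwise the deltas accumulate and the point accelerates away from the finger.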
There are three layers added to a UIView: one draws a rectangle, one draws a circle, and one draws a polygon. The layers' opaque property is NO, and all three layers fill the whole view. When I touch the polygon, I want to get the layer that draws the polygon. I have implemented this, but I don't know whether there is a better solution. My approach is:
1. Draw the content in -drawLayer:inContext:, and store the CGPath you used.
2. In the UIView's -touchesEnded:withEvent: method, use CGPathContainsPoint() to detect whether the touch point is contained by the stored CGPath.
Maybe this is a naive way to solve it. Can anyone tell me how to do it better?
If you need an accurate hit test for paths, I'm afraid you have to iterate the layer hierarchy yourself, checking whether the point is inside each path using CGPathContainsPoint(), as you suggested.
While iterating you could optimize it by skipping layers where the point is outside their frame.
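That iteration with the frame pre-check can be sketched without UIKit. Here `Shape` is a hypothetical stand-in for a layer, with a closure standing in for the CGPathContainsPoint() call; only the structure of the loop matches the suggestion above:

```swift
import Foundation

// Illustrative stand-in for a layer's frame.
struct Frame {
    var x, y, w, h: Double
    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= x && px <= x + w && py >= y && py <= y + h
    }
}

// Hypothetical stand-in for a CALayer with a stored path.
struct Shape {
    var name: String
    var frame: Frame
    var contains: (Double, Double) -> Bool   // stands in for CGPathContainsPoint
}

/// Return the first (topmost-first ordering assumed) shape hit:
/// cheap frame check first, precise path check only when that passes.
func hitTest(_ shapes: [Shape], x: Double, y: Double) -> Shape? {
    shapes.first { $0.frame.contains(x, y) && $0.contains(x, y) }
}
```

With layers that all fill the view, the frame check never rejects anything, so in that specific setup you pay the path test for every layer anyway; it only helps once the layers have tighter frames.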
For less fine-grained control, you can get the touched layer using CALayer's
- (CALayer *)hitTest:(CGPoint)thePoint
method.
If you have a layer hierarchy with a nesting level < 1000 (which is almost always true) I would not worry too much.
I have a line graph I've drawn in Quartz, and a UIView 'bubble' that I'd like to ideally pop up when the user touches the single plot line, and moves their finger along it. The bubble displays some extra graph information.
I'd like to 'attach' the UIView to the CGPath plot, but I'm having trouble conceptually figuring out the best way to do this. I know you can animate a view along a CGPath, but this doesn't seem to work for me, because the user needs to 'scrub' along the graph themselves with their finger rather than any automatic animation.
Does anyone have any suggestions of a good approach?
Maybe you don't need to animate it. Touch events fly by pretty quickly -- maybe if you just move the view to the appropriate location relative to the touch without animation the move will be smooth enough. If that's not good enough, you'll have to animate along the graph segment from the current location of the view (see CALayer presentationLayer) to the desired location. The key is that you'll be animating every time you receive a touch event (and any previous animations would be cancelled).
Like Neil said, your best bet is to just move with the touch events; it will look smooth if all you're doing is moving a view. Use [aTouch locationInView:view] to get the touch's position, then find the closest point on the path (for a graph, you can take the touch's x value and look up the path's y at that x).
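The "look up the y value on the path for the x" step, assuming the plot line is stored as samples sorted by x, is a linear interpolation between the two neighboring samples. A minimal sketch (the `GPoint` type and function name are illustrative):

```swift
import Foundation

// Illustrative graph-sample type.
struct GPoint { var x, y: Double }

/// Return the y value on the plot line under the finger's x, by linear
/// interpolation between the two surrounding samples. Returns nil when
/// the finger is outside the graph's x range.
func yOnGraph(at x: Double, samples: [GPoint]) -> Double? {
    guard let first = samples.first, let last = samples.last,
          x >= first.x, x <= last.x else { return nil }
    for i in 1..<samples.count {
        let a = samples[i - 1], b = samples[i]
        if x <= b.x {
            let t = (b.x == a.x) ? 0 : (x - a.x) / (b.x - a.x)
            return a.y + t * (b.y - a.y)
        }
    }
    return last.y
}
```

On each touchesMoved:, set the bubble view's center to (touchX, yOnGraph(at: touchX, ...)) and the bubble will appear to scrub along the line.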