Difference between CATransform3DTranslate and CATransform3DScale - iPhone

Pretty naive question. I don't really understand the term 'translate' here.

While both functions modify a CATransform3D, the modifications they perform are different. CATransform3DTranslate moves the coordinate space by shifting it along the x, y, and z axes; if you apply it to an object's transform (e.g. a CALayer's), the object changes position on the screen. CATransform3DScale resizes the space, making transformed objects bigger or smaller; if you apply it to an object's transform, the object changes size.
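For illustration, a minimal sketch of both calls applied to a layer (the layer variable here is hypothetical):

CALayer *layer = [CALayer layer];

// Translate: shift the layer 50pt right and 20pt down; its size is unchanged.
layer.transform = CATransform3DTranslate(CATransform3DIdentity, 50.0, 20.0, 0.0);

// Scale: render the layer at twice its size in x and y; its position stays put.
layer.transform = CATransform3DScale(CATransform3DIdentity, 2.0, 2.0, 1.0);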

Related

Applying perspective that is always relative to the screen

I want to ask a question about the perspective that is achieved through CATransform3D.
I know that if you have a view that is 320x480 and then apply this:
CATransform3D perspective = CATransform3DIdentity;
CGFloat zDistance = 1000;
perspective.m34 = 1.0 / -zDistance;
view.layer.sublayerTransform = perspective;
you create a perspective that makes it look like the observer is looking straight at the center of the screen, and therefore the same transformation looks different depending on where the subview being transformed is located on the screen. [screenshots omitted: the same tilt shown once with the view in the middle of the screen and once with it in the lower left corner]
Now, my problem is that making the perspective relative to the screen only works if the view I'm transforming is a subview of another view that is 320x480px big. But what if the view I want to transform is a subview of a view that is only 100x100px? Is there a way to make the perspective relative to the whole screen if the superview isn't the size of the screen?
Thanks in advance
According to Apple:
"The anchorPoint property is a CGPoint that specifies a location within the bounds of a layer that corresponds with the position coordinate. The anchor point specifies how the bounds are positioned relative to the position property, as well as serving as the point that transforms are applied around."
Your perspective should not be relative to the center of the screen, or even to the center of your layer, by default. Is that where you have your anchor point? Aside from that, though, what you seem to be asking is how to make your perspective appear to be relative to a different point. The trick is that your perspective is created by multiplying by your perspective matrix. Setting m34 to a small number does nothing magical: you are simply multiplying by this projection matrix:
YourProjection = { 1, 0,  0,             0,
                   0, 1,  0,             0,
                   0, 0,  1,             0,
                   0, 0, -1.0/zDistance, 1 };
Remember that you can combine successive transforms by multiplying them together. Just translate your layer to wherever you want, apply your projection matrix, then translate it back; presto, perspective from a different origin.
float x = /* your x coordinate relative to the screen center */;
float y = /* likewise for y */;

Translation = {  1,  0, 0, 0,
                 0,  1, 0, 0,
                 0,  0, 1, 0,
                 x,  y, 0, 1 };

ReverseTranslation = {  1,  0, 0, 0,
                        0,  1, 0, 0,
                        0,  0, 1, 0,
                       -x, -y, 0, 1 };

// now just multiply them all together
Final = ReverseTranslation * YourProjection * Translation;
You will need to do the matrix math yourself; hopefully you already have a generic 4x4 column-major matrix class that can do multiplication for you, and if not I suggest you write one. Also, if you are interested, it is worth reading further both on how the matrix you are currently using works and on other takes on projection matrices.
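If you would rather not hand-roll a matrix class, the same composition can be sketched with Core Animation's own helpers. This is only a sketch, assuming x and y hold the layer's offset from the screen center and zDistance is as in the question; the sign of the translations may need flipping depending on your conventions:

CATransform3D projection = CATransform3DIdentity;
projection.m34 = -1.0 / zDistance;

// Translate so the desired perspective origin lines up, project,
// then translate back.
CATransform3D toOrigin   = CATransform3DMakeTranslation(x, y, 0.0);
CATransform3D fromOrigin = CATransform3DMakeTranslation(-x, -y, 0.0);

// CATransform3DConcat(a, b) applies a first, then b.
CATransform3D final =
    CATransform3DConcat(CATransform3DConcat(toOrigin, projection), fromOrigin);
view.layer.sublayerTransform = final;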

iOS Get Subview's Rotation

I'm using CGAffineTransformMakeRotation to rotate a subview using its transform property. Later, I need to find out how far the subview has been rotated. I realize that I could simply use an int to keep track of this, but is there a simple way to get the current rotation of the subview?
CGAffineTransformMakeRotation is explicitly defined to return a matrix with cos(angle) in the transform's a value, sin(angle) in b, etc. (and, given the way the transform works, that's the only thing it really could do).
Hence you can work out the current rotation by doing some simple inverse trigonometry. atan2 is probably the thing to use, because it'll automatically figure out the quadrants appropriately.
So, e.g.
- (float)currentAngleOfView:(UIView *)view
{
    CGAffineTransform transform = view.transform;
    // a = cos(angle) and b = sin(angle), so atan2 recovers the angle in radians.
    return atan2f(transform.b, transform.a);
}
Because the arctangent effectively divides the b and a fields in an appropriate manner, that method will continue to work even if you apply scaling (as long as it's the same on each axis) or translation.
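As a quick check, a sketch (the view here is hypothetical) showing that a uniform scale leaves the recovered angle untouched:

UIView *view = [[UIView alloc] init];
view.transform = CGAffineTransformScale(CGAffineTransformMakeRotation(M_PI_4), 2.0, 2.0);
NSLog(@"%f", atan2f(view.transform.b, view.transform.a)); // ≈ 0.7854, i.e. π/4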
If you want to apply more complex or arbitrary transformations then things get a lot more complicated. You'll want to look at how to calculate normal matrices. From memory, I think you'd want the adjugate, which is about as much fun to work out as it sounds.

How do I position the `contents` PNG in a CALayer?

Do I have to move the layer's frame or apply a translation transform to the layer? Or can I move the contents inside the layer? If the contents can't be moved inside the layer, how are they positioned initially?
A CALayer has a frame (or, equivalently, a bounds and an origin), which is used logically to determine what to draw. When drawInContext: or equivalent is called, it's the frame that determines how the contents are produced.
However, like OS X, iOS adopts a compositing window manager, which means that views know how to draw their output to a buffer and the buffers are combined to create the view, with the window manager figuring out what to do about caching and video memory management in between.
If you adjust the transform property of the view or of the layer class, then you adjust how the compositing happens. However, the results of drawInContext: should explicitly still be the same so the window manager knows it can just use the cached image.
So, for example, if you set a frame of size 128x128 and then a transform that scales the CALayer up to double, you'll occupy a 256x256 area of the screen but the image used for compositing will be only 128x128 in size, making each source pixel into four target pixels. If you set a frame of size 256x256 and the identity transform, you'll cover the same amount of screen space but with each source pixel being 1:1 related to a target pixel.
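A sketch of those two approaches, using the sizes from the paragraph above (the layer itself is hypothetical):

CALayer *layer = [CALayer layer];

// Option 1: 128x128 backing store, doubled at composite time.
// No redraw; each source pixel becomes four target pixels.
layer.frame = CGRectMake(0.0, 0.0, 128.0, 128.0);
layer.transform = CATransform3DMakeScale(2.0, 2.0, 1.0);

// Option 2: 256x256 backing store with the identity transform.
// Same screen coverage, but drawn afresh with pixels mapped 1:1.
// layer.transform = CATransform3DIdentity;
// layer.frame = CGRectMake(0.0, 0.0, 256.0, 256.0);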
A side effect is that changing the frame causes a redraw from first principles, while changing the transform doesn't. So the latter is usually faster, and is also the thing to do if you decide to use something like CATiledLayer (as used in Safari, Maps, etc.) that draws in a separate thread and may take a while to come up with results.
As a rule of thumb, you use the frame to set the initial position and update the frame for normal work. You play with the transform for transitions and other special effects. However, both the frame and transform properties of a CALayer are animatable in the Core Animation sense, so that's really still at your discretion.
Most people don't work at the level of a CALayer, preferring to work with UIViews, in which case the comments are mostly the same, with the caveat that you can adjust either the [2d] transform on the view or the [3d] transform on the view's layer and have the compositor figure it all out, but should change the frame to prompt a redraw.

OpenGL ES: Rotating 3d model around itself

I'm playing with OpenGL ES on iPhone and I'm trying to rotate a model by panning with a finger. I discovered the open source app Molecules, which lets you do that, and I'm looking at its code, but when it comes to rotating a model of my own I'm only able to rotate it around a point distant in space (as if it were a satellite in orbit and I were the fixed planet).
Any suggestion on what can be wrong?
I can post the code later, on demand (it's many lines).
For the most part, refer to Molecules; you can find it here: MOLECULES
If my memory serves me correctly, you need to translate the model to the origin, rotate, and then translate back to the starting position to get the effect you are after.
The glTranslatef() function does this. Say the object is at (1, 0, 0): you should then translate by (-1, 0, 0) to go to the origin. That is, translate by the vector going from the center of the object to the origin.
The draw code probably looks roughly like this:
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -10.0f);  // applied to vertices second: move 10 units along -z
glRotatef(...);                    // applied to vertices first: rotate about the local origin
drawMolecule();
Now it's important to realize that these transformations are applied in reverse order. If, in drawMolecule, we specify a vertex, then this vertex will first be rotated about the axis given to glRotate (which by definition passes through the local origin of the molecule), and then be translated 10 units in the −z direction.
This makes sense, because glTranslate essentially means: "translate everything that comes after this". This includes the glRotate call itself, so the result of the rotation also gets translated. Had the calls been reversed, then the result of the translation would have been rotated, which results in a rotation about an axis that does not pass through the origin anymore.
Bottom line: to rotate an object about its local origin, put the glRotate call last.
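To make the contrast concrete, a sketch (angle here is a hypothetical variable driven by the pan gesture):

// Wrong order for this effect: the translation is rotated too,
// so the molecule orbits rather than spinning in place.
glLoadIdentity();
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -10.0f);
drawMolecule();

// Right order: vertices are rotated about the molecule's local
// origin first, then the result is positioned in front of the camera.
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -10.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
drawMolecule();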

In my application I want to perform scaling and translation together?

I want to perform scaling and translation of an image together. How is that possible?
Just make the new opposite corners of your image the two detected multitouch points. You have to keep either the rotation or the aspect ratio fixed, of course. (In theory, you could mess with the aspect ratio rather than the rotation, but you probably want the rotation to change, not the aspect ratio).
That is, override touchesBegan:withEvent: and touchesMoved:withEvent: to save the initial points (in the former) and calculate rotation, translation and zoom (in the latter), then construct a CGAffineTransform to apply to the image view, as sketched below.
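A minimal sketch of assembling the combined transform, assuming scale, angle and offset have already been derived from the two touch points (all three names, and imageView, are hypothetical):

CGAffineTransform t = CGAffineTransformIdentity;
t = CGAffineTransformTranslate(t, offset.x, offset.y); // shift between the touch midpoints
t = CGAffineTransformRotate(t, angle);                 // rotation of the line joining the touches
t = CGAffineTransformScale(t, scale, scale);           // uniform zoom, keeping the aspect ratio fixed
imageView.transform = t;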