How to restore saved transform when flipped - iphone

I persist a transform and a frame of an object. When the app reloads, I simply need to restore the frame and the transform on a new view. You would think you could simply set the frame and transform to the saved values, but you can't; doing so produces undesired results.
http://iphonedevelopment.blogspot.com/2008/10/demystifying-cgaffinetransform.html
It says: "When you apply successive transformations, the order matters. Rotating and then translating will give you a different result than translating and then rotating. This can bite you if you're not careful."
I only have transform problems when I save a transform that has been FLIPPED & ROTATED. My saved values are identical to the values that are applied to the restored view.
Actual Results:
When doing :
myNewView.frame = savedFrame;
myNewView.transform = savedTransform;
My view will shift down and to the right (out of place).
Can someone help me restore my transform in the proper order of operations?
Again, I have confirmed that my saved and loaded transform data is perfect, so I know it's an order-of-operations problem; I'm just not smart enough to figure out what to do next :P

To undo the saved transformation, you need the inverse transformation. See the heading Stepping Back in the page you referenced. Try:
myNewView.transform = CGAffineTransformInvert(savedTransform);
An arbitrary matrix is not always invertible, but you are usually safe with an affine transformation; the degenerate cases (for example, a zero scale factor) are worth watching out for.
In general, matrix inversion may lead to round-off errors, so if you need to do this repeatedly, you may need to use a different approach.
EDIT:
Having re-read the question, I suspect that the origin of the transformation may have changed. When a rotation is involved, rotating about a different origin will appear as an unexpected translation. From the docs, the origin is either the view's center or the layer's anchorPoint. I suggest checking these properties when you save the transform and seeing whether they differ when you restore.

Related

How to make UNDO action for image editing(Resize, Rotation, 3D Transform)?

How to make the UNDO action in image processing? If I am change the image position(Resize, Rotation, 3D Transform) then click UNDO button, I want the previous state of the image. How to do? Please give sample source code or give some idea about this. Thanks in advance.
Make an NSDictionary of all the effects your editor provides / can apply.
Store the NSDictionary in an NSMutableArray with [lastEffectArray addObject:effectDictionary];
If the user presses UNDO, load and apply the effectDictionary's effects from lastEffectArray via key/value pairs. Here you can do two things:
Remove the last effectDictionary object from lastEffectArray, or add it to another array for a REDO action.
Your effectDictionary should store each and every effect that is available or applied in your editor. You can write some easy-to-use functions to apply / remove effects!
Lastly, don't forget to release all the effect arrays and dictionaries.
Note: this isn't a complete solution, but it can be implemented with your own logic.
The common way of implementing undoable operations is storing the state after every operation. So, in this case, you could
Store every version of the image. This is extremely inefficient in terms of memory.
Store every transform step; when a transform is to be undone, apply the inverse of that transform to the image. This is the preferred approach.

iOS Get Subview's Rotation

I'm using CGAffineTransformMakeRotation to rotate a subview using its transform property. Later, I need to find out how far the subview has been rotated. I realize that I could simply use an int to keep track of this, but is there a simple way to get the current rotation of the subview?
CGAffineTransformMakeRotation is explicitly defined to return a matrix with cos(angle) in the transform's a value, sin(angle) in b, etc (and, given the way the transform works, that's the only thing it really could do).
Hence you can work out the current rotation by doing some simple inverse trigonometry. atan2 is probably the thing to use, because it'll automatically figure out the quadrants appropriately.
So, e.g.
- (float)currentAngleOfView:(UIView *)view
{
    // For a rotation (optionally uniformly scaled), a = s*cos(angle)
    // and b = s*sin(angle), so atan2 recovers the angle.
    CGAffineTransform transform = view.transform;
    return atan2f(transform.b, transform.a);
}
Because the arctangent effectively divides the b and a fields, that method will continue to work even if you apply scaling (as long as it's the same on each axis) or translation.
If you want to apply more complex or arbitrary transformations, things get a lot more complicated. You'll want to look at how to calculate normal matrices. From memory, I think you'd want the adjugate, which is about as much fun to work out as it sounds.

iphone - apply CGAffineTransformRotate on point in hittest

I've got an image that the user is allowed to rotate and scale.
Every time the user taps the image, I try to figure out whether the tapped point is transparent or not.
If it's transparent I return nil from my view's hitTest; if it's not transparent I return the view. Problems start when the user rotates the image: in my hitTest method I need to transform the point according to the view's current rotation, otherwise the point will indicate an irrelevant location on the view (and the image).
How do I do that?
Thank you very much.
This CGAffineTransform Reference might help:
CGPointApplyAffineTransform
CGRectApplyAffineTransform
and
CGSizeApplyAffineTransform
But before you start thinking that you need to perform the mapping by hand, I would suggest giving it a try as if the current transform were CGAffineTransformIdentity, and coding your coordinate detection accordingly. You might be surprised by the results...
My own experience says that when you get your points from -[UITouch locationInView:], the inverse of that view's transform is applied to the point before it is handed back to you.
So you probably don't need any of the CG...ApplyAffineTransform functions unless you generate the points yourself, outside the event system.

Not able to calibrate camera view to 3D Model

I am developing an app which uses LK (Lucas-Kanade) for tracking and POSIT for pose estimation. I can successfully get the rotation matrix and projection matrix and track perfectly, but I am not able to translate the 3D object properly: the object does not fit into the place where it should.
Will some one help me regarding this?
Check these links; they may give you some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
Now, you must also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space. And from your description, it seems that the problem is bad fov angles (field of view).
You can try to measure them, or feed the half or double value to your algorithm.
There are two conventions for fov: half-angle (from the image center to the top or left edge) and full-angle (from bottom to top, or from left to right). Maybe you just mixed them up, using the full angle instead of the half, or vice versa.
Maybe you can show us how you build a transformation matrix from R and T components?
Remember that the cv::solvePnP function returns the inverse transformation (e.g. camera in world) - it finds the object pose in 3D space with the camera at (0; 0; 0). For almost all cases you need to invert it to get the correct result: {R^T; -R^T * T}.

OpenGL ES: Rotating 3d model around itself

I'm playing with OpenGL ES on iPhone and I'm trying to rotate a model by panning with a finger. I discovered the open source app Molecules that lets you do that, and I'm looking at its code, but when it comes to rotating a model of mine, I can only rotate it around a distant point in space (as if it were a satellite in orbit and I were the fixed planet).
Any suggestion on what can be wrong?
I can post the code later, maybe on demand (it's many lines).
For the most part it follows Molecules; you can find it here: MOLECULES
If my memory serves me correctly, you need to translate the model to the origin, rotate, and then translate back to the starting position to get the effect you are after.
There is a glTranslate() function. Say the object is at (1, 0, 0); you should then translate by (-1, 0, 0) to get to the origin. That is, translate by the vector going from the center of the object to the origin.
The draw code probably looks roughly like this:
glLoadIdentity();
glTranslate(0, 0, -10);
glRotate(...);
drawMolecule();
Now it's important to realize that these transformations are applied in reverse order. If, in drawMolecule, we specify a vertex, then this vertex will first be rotated about the axis given to glRotate (which by definition passes through the local origin of the molecule), and then be translated 10 units in the −z direction.
This makes sense, because glTranslate essentially means: "translate everything that comes after this". This includes the glRotate call itself, so the result of the rotation also gets translated. Had the calls been reversed, then the result of the translation would have been rotated, which results in a rotation about an axis that does not pass through the origin anymore.
Bottom line: to rotate an object about its local origin, put the glRotate call last.