Suppose I had a small UIView as a child/subview of a larger UIView, and that child could be moved around via some CGAffineTransforms. How might the parent know what the true 'center' of that view is within its own coordinate system? I have tried using the convertPoint routines with whatever is returned by child.center, but it isn't working... is 'center' completely bogus in this context, or am I just using the wrong method?
EDIT:
After doing a bit of testing I noticed the following:
UIViews don't expose an anchorPoint property, but they do have a center property. The center property is always calculated properly after applying transforms, except for a translation transform, for which you have to do the following (the y component mirrors the x):
CGPoint realCenter = CGPointMake(myView.center.x + myView.frame.origin.x,
                                 myView.center.y + myView.frame.origin.y);
As for CALayers, they do have an anchorPoint property, but they lack a center property. So what you want to do is calculate the center manually from the layer's position property, its anchorPoint property, and the translation you applied to the layer.
I can't provide exact code, since I'm not sure which method you are using, but to wrap it up: either way you have to roll your own center calculation.
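As a rough sketch of what that calculation might look like for a layer, assuming only a translation (tx, ty) has been applied (layerCenter is a hypothetical helper, not an existing API):
#import <QuartzCore/QuartzCore.h>
// position is where the anchorPoint lands in the superlayer, so offset
// from the anchor to the geometric center of the bounds, then add the
// translation you applied to the layer.
CGPoint layerCenter(CALayer *layer, CGFloat tx, CGFloat ty)
{
    CGFloat dx = (0.5 - layer.anchorPoint.x) * layer.bounds.size.width;
    CGFloat dy = (0.5 - layer.anchorPoint.y) * layer.bounds.size.height;
    return CGPointMake(layer.position.x + dx + tx,
                       layer.position.y + dy + ty);
}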
Please look at the pictures below carefully (courtesy of Stanford iPhone Development course slides):
Before applying any rotation:
After applying a 45° rotation:
Conclusion:
Notice how the old center was (300, 225) and the new center is, well, not new! It's the same. If you are doing everything correctly, your center should stay the same. If there is another point within the view that you'd like to calculate, you'd have to do that yourself.
Please also notice how the frame changed from (200, 100, 200, 250) to (140, 65, 320, 320). This is just how UIKit does its magic.
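You can verify this with a quick snippet (the frame and angle match the slides; the logged frame will be close to the slides' rounded numbers):
UIView *v = [[UIView alloc] initWithFrame:CGRectMake(200, 100, 200, 250)];
NSLog(@"center before: %@", NSStringFromCGPoint(v.center)); // (300, 225)
v.transform = CGAffineTransformMakeRotation(45.0 * M_PI / 180.0);
NSLog(@"center after:  %@", NSStringFromCGPoint(v.center)); // still (300, 225)
NSLog(@"frame after:   %@", NSStringFromCGRect(v.frame));   // roughly (141, 66, 318, 318)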
Related
I am trying to draw a simple straight line. For this, I fill a UIImageView with some color, with a given width (say 2 pixels) and some length. The user is provided with two UISliders: one stretches the line and the other rotates it. I use myImageView.frame to change the height and a CGAffineTransform to rotate. It works fine until I change the rotation angle using the slider; once I rotate the image view, the stretch slider no longer works properly.
I have searched and found that frames won't work after rotating with a CGAffineTransform.
Supposedly bounds should work instead, but that didn't work for me either.
Seeking help. Any brief code will help me.
Thanks in advance.
Recompute the transformed bounds every time you apply a transform. This should work:
CGRect transformedBounds = CGRectApplyAffineTransform(myView.bounds, myView.transform);
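As a sketch of how the two sliders might drive this (the outlet and action names are assumptions), change the length through bounds and the angle through transform, and leave frame alone once a transform is set:
// Hypothetical IBActions wired to the two sliders.
- (IBAction)lengthSliderChanged:(UISlider *)sender
{
    // Resize via bounds, not frame: frame is undefined once a transform is applied.
    CGRect b = myImageView.bounds;
    b.size.height = sender.value;
    myImageView.bounds = b;
}

- (IBAction)rotationSliderChanged:(UISlider *)sender
{
    // CGAffineTransformMakeRotation expects radians.
    myImageView.transform = CGAffineTransformMakeRotation(sender.value * M_PI / 180.0);
}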
I need to change the size of my images according to their distance from the center of the screen. If an image is close to the middle, it should be given a scale of 1, and the further it is from the center, the nearer its scale gets to zero, by some function.
Since the user is panning the screen, I need a way to change the images' (UIViews') scale. This is not a classic animation where I can define a sequence exactly, mostly because of timing issues (due to system performance, I don't know how long the animation will last), so I need to simply change the scale in one step (no timed animations).
This way, every time the function gets called during panning, all the images should update easily.
Is there a way to do that?
You can apply a CGAffineTransform directly to your UIImageView, i.e.:
CGAffineTransform trans = CGAffineTransformMakeScale(1.0,1.0);
imageView.transform = trans;
Of course you can change the values and/or use other CGAffineTransforms; this should get you on your way, though.
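For the distance-based scaling, here's a minimal sketch you could call from your pan handler (the linear falloff, the scale floor, and maxDistance are assumptions; swap in whatever function you want):
// Scale each image view by its distance from the screen center:
// 1.0 at the center, falling toward 0 at maxDistance.
- (void)updateScalesForImageViews:(NSArray *)imageViews
{
    CGPoint screenCenter = CGPointMake(CGRectGetMidX(self.view.bounds),
                                       CGRectGetMidY(self.view.bounds));
    CGFloat maxDistance = CGRectGetWidth(self.view.bounds) / 2.0;
    for (UIImageView *iv in imageViews) {
        // Compare centers in the same coordinate space.
        CGPoint c = [iv.superview convertPoint:iv.center toView:self.view];
        CGFloat distance = hypot(c.x - screenCenter.x, c.y - screenCenter.y);
        CGFloat scale = MAX(0.05, 1.0 - distance / maxDistance); // keep a small floor
        iv.transform = CGAffineTransformMakeScale(scale, scale);
    }
}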
Hope it helps!
I have a subclass of NSView where I'm handling the -mouseDown: event to get the position of the click. With this position I define a point that I use to draw a rect in -drawRect:, and it's working fine.
BUT... when I set wantsLayer, things don't work anymore. When I get the position of the input, the Y coordinate is offset by 20 points and I don't know what's happening... Can anyone explain? How do I fix this problem?
Simulation:
I click at coordinate x: 100, y: 100, and drawRect draws the rect at x: 100, y: 100. That's okay; it's what I want.
With setWantsLayer:YES
I click at coordinate x: 100, y: 100, and drawRect draws the rect at x: 100, y: 120 (or something like this).
Is it possible to use CALayers without setting -setWantsLayer: to YES? I'm trying to figure this out but I have no idea what's happening... I need your help.
UPDATE: I'm still trying to figure this out, and I've done a lot of tests now...
Now I can say that the problem is with -mouseDown: from NSView: when I set -setWantsLayer: to YES, it no longer works as expected...
I have a CustomView on my window; I created a subclass of NSView and set it as the CustomView's class. The CustomView is at position (0, 20). The coordinate orientation isn't flipped.
I believe that when I set the NSView to want a layer, -mouseDown: updates the frame to position (0, 0) (in other words, it uses the NSWindow frame) instead of (0, 20). When that happens, every position from -mouseDown: gets an increase of 20 points on the Y-axis. I don't know if what I'm saying is right, but those are the facts I'm getting as results of my tests.
Can someone help me figure this out?
Now, with help from mikeash (in #macdev on Freenode), I solved this question.
The problem was how I was converting the point returned by the -mouseDown: event. I was using -convertPointFromBacking:, and as mikeash said: "the problem is that -convertPointFromBacking: is not correct for converting the point returned from locationInWindow", "because locationInWindow is not in 'its pixel aligned backing store coordinate system'".
I changed it to -convertPoint:fromView:, like this: [sender convertPoint:[mEvent locationInWindow] fromView:nil]; and it's working nicely!
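In context, the handler ends up looking roughly like this (a sketch; clickPoint is a hypothetical ivar read by -drawRect:):
- (void)mouseDown:(NSEvent *)theEvent
{
    // locationInWindow is in window coordinates; convert it into this
    // view's coordinate system. Do NOT use -convertPointFromBacking: here.
    NSPoint p = [self convertPoint:[theEvent locationInWindow] fromView:nil];
    clickPoint = p;              // hypothetical ivar used by -drawRect:
    [self setNeedsDisplay:YES];  // trigger a redraw
}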
Thank you to mikeash.
And I'm posting the answer here to help others with the same question.
I'm implementing a basic speedometer using an image and rotating it. However, when I set the initial rotation (to something like 240 degrees, converted to radians), it rotates the image and makes it much smaller than it otherwise would be. Some values make the image disappear entirely (like M_PI_4).
The slider goes from 0 to 360 for testing.
The following code is called in viewDidLoad and when the slider value is changed:
-(void)updatePointer
{
    double progress = testSlider.value;
    progress += pointerStart;
    // Degrees to radians for CGAffineTransformMakeRotation.
    CGAffineTransform rotate = CGAffineTransformMakeRotation((progress * M_PI) / 180.0);
    [pointerImageView setTransform:rotate];
}
EDIT: It's probably important to note that once the transform is set the first time, the scale remains the same. So if I set pointerStart to 240, the image shrinks, but moving the slider doesn't change the scale (and it rotates as you'd expect). Replacing "progress" with 240 in the transform does the same thing (shrinks it).
I was able to resolve the issue, for anybody who stumbles across this question. Apparently the image view isn't fully loaded/measured when viewDidLoad is called, so the matrix transform applied by CGAffineTransform actually altered the size of the image. Moving the update code to viewDidAppear fixed the problem.
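A minimal sketch of that ordering, assuming updatePointer lives on the same view controller:
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    // By now layout has run and the image view has its final size,
    // so applying the rotation no longer distorts it.
    [self updatePointer];
}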
Take the transform state of the view you want to rotate and then apply the rotation transform to it:
CGAffineTransform trans = pointerImageView.transform;
// CGAffineTransformRotate expects radians; 240 here is degrees, so convert.
pointerImageView.transform = CGAffineTransformRotate(trans, (240.0 * M_PI) / 180.0);
I'm trying to add a box to the centre of the screen in a zoom view. E.g., if I move into an area of the content view and try using the offset coordinates, it becomes erratic when I zoom in or out. I can't seem to figure out the right mathematical formula for this.
If you are working with a UIView or one of its subclasses, you'll always have a center property available. That property is a CGPoint, and you can do something like this to test whether it gives the result you seek:
CGPoint center = [YourUIViewOrSubclass center];
NSLog(@"Center x is '%f' and y is '%f'", center.x, center.y);
I hope this helps you. Otherwise, try to rephrase your question and include a little context.
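If the zoom view is a UIScrollView, one approach (a sketch; scrollView, contentView, and box are assumptions) is to compute the visible center in the scroll view's own coordinates and convert it into the zoomed content view, which accounts for zoomScale:
// Center of the visible region, in the scroll view's coordinates
// (contentOffset is the scroll view's bounds origin).
CGPoint visibleCenter = CGPointMake(scrollView.contentOffset.x + scrollView.bounds.size.width / 2.0,
                                    scrollView.contentOffset.y + scrollView.bounds.size.height / 2.0);
// Convert into the (zoomed) content view's coordinate space.
CGPoint centerInContent = [scrollView convertPoint:visibleCenter toView:contentView];
box.center = centerInContent; // box is assumed to be a subview of contentView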