Bug when setting -setWantsLayer: on Lion - osx-lion

I have a subclass of NSView where I handle the -mouseDown: event to get the position of a click on the screen. With this position I define a point that I use to draw a rect in -drawRect:, and it's working fine.
BUT... when I set wantsLayer, things stop working. When I get the position of the input, I see that the Y-axis has an increase of 20 points and I don't know what's happening... Can anyone explain? How do I fix this problem?
Simulation:
I click at coordinate x: 100; y: 100; and the drawRect draws the rect on x: 100; y: 100; It's okay, it's what I want.
With setWantsLayer:YES
I click at coordinate x: 100; y: 100; and the drawRect draws the rect on x: 100; y: 120; (or something like this)
Is it possible to use CALayers without setting -setWantsLayer to YES? I'm trying to figure this out but I have no idea what's happening... I need your help.
UPDATE: I'm still trying to figure this out and have done a lot of tests now...
Now I can say that the problem is with -mouseDown: from NSView: when I set -setWantsLayer to YES it doesn't work as expected anymore...
I have a CustomView on my window; I created a subclass of NSView and set it as the CustomView's class. The CustomView is at position (0, 20). The coordinate orientation isn't flipped.
I believe that when I set the NSView's wantsLayer, -mouseDown: updates the frame to position (0, 0) (in other words, it uses the NSWindow's frame) instead of (0, 20). When that happens, every position from -mouseDown: gets an increase of 20 points on the Y-axis. I don't know if what I'm saying is right, but these are the facts I'm getting as results of my tests.
Can someone help me figure this out?

Now, with the help of mikeash from #macdev on Freenode, I've solved this.
The problem was how I was converting the point returned from the -mouseDown: event. I was using -convertPointFromBacking:, and as mikeash said: "the problem is that -convertPointFromBacking: is not correct for converting the point returned from locationInWindow", "because locationInWindow is not in 'its pixel aligned backing store coordinate system'".
I changed it to -convertPoint:fromView:, like this: [sender convertPoint:[mEvent locationInWindow] fromView:nil]; and it's working nicely!
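For anyone who wants to see the arithmetic, here is a minimal sketch in plain C (my own helper names, not the AppKit API) of what -convertPoint:fromView: effectively does for a plain, unrotated view: it subtracts the view's origin in window coordinates, which is exactly the 20-point offset my CustomView sits at.

```c
#include <assert.h>

typedef struct { double x, y; } Point;

/* Hypothetical sketch: converting a window-space point into a plain,
 * unrotated view's own coordinates subtracts the view's origin in the
 * window. A view sitting at y = 20 accounts for a 20-point offset. */
static Point convertFromWindow(Point inWindow, Point viewOriginInWindow) {
    Point p = { inWindow.x - viewOriginInWindow.x,
                inWindow.y - viewOriginInWindow.y };
    return p;
}
```

The exact numbers depend on your window geometry; the point is only that the conversion is relative to the view's own origin, not the window's.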
Thank you to mikeash.
And I'm posting the answer here to help others with the same question.

Related

SKScene scale + anchorPoint = strange behavior

I have an empty SKScene which needs to be horizontally centered, so I set its anchorPoint:
self.anchorPoint = CGPointMake(0.5f, 0);
I also need to scale it, per SpriteKit - Set Scale and Physics, so I run an action on it like this:
[self runAction:[SKAction scaleTo:0.5f duration:0.0f]];
My problem is that when I detect touches, I get locations that seem strangely off.
I detect touches using the usual (locationInNode:self) method, and I'm currently adding a little blue square where the touches are. But when I touch the 4 corners of my device, I see a frame that is a quarter of my screen (correctly), yet moved to the left by a seemingly arbitrary amount.
Here are some things I've already checked:
scene is initialized in viewWillLayoutSubviews, I know it has the correct initial dimensions
scene's scaleMode is set to SKSceneScaleModeAspectFill, but I've tried all of them to no avail
I was struggling with the same issue for a while, but I think I finally figured it out. You're not supposed to scale the scene (as it hints if you try setScale); you're supposed to resize it.
Try this:
myScene.scaleMode = SKSceneScaleModeAspectFill;
And then while zooming:
myScene.size = CGSizeMake(newX, newY);
Set the anchor point as
self.anchorPoint = CGPointMake(0.5f, 0);
And set the scene's scale mode to aspect fit, not aspect fill:
SKSceneScaleModeAspectFit
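To illustrate why the scale mode matters, here is a rough model in plain C (my own helper names, not the SpriteKit API) of the uniform scale factor that aspect-fit implies: the smaller of the two view/scene ratios, so the whole scene stays visible.

```c
#include <assert.h>

typedef struct { double w, h; } Size2;

/* Rough model of an aspect-fit mapping (my reading, not Apple's code):
 * scale the scene uniformly by the smaller of the two ratios so the
 * entire scene fits inside the view. Aspect-fill would take the larger
 * ratio and crop, which is one way touch math ends up shifted. */
static double aspectFitScale(Size2 scene, Size2 view) {
    double sx = view.w / scene.w;
    double sy = view.h / scene.h;
    return sx < sy ? sx : sy;
}
```

For example, a 100x100 scene in a 200x400 view fits at a uniform scale of 2 and is letterboxed vertically, rather than filled at 4 and cropped horizontally.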

Get center of a UIView after CGAffineTransform

Suppose I had a small UIView as a child/subview of a larger UIView, and that child could be moved around via some CGAffineTransforms. How might the parent know what the true 'center' of that view is within its own coordinate system? I have tried using the convertPoint routines with whatever is returned by child.center, but it isn't working... Is 'center' completely bogus in this context, or am I just using the wrong method?
EDIT:
After doing a bit of testing I noticed the following:
UIViews don't have an anchorPoint property, but they do have a center property. The center property is always calculated properly after applying transforms, except for a translation transform, for which you have to do the following:
CGPoint realCenter = CGPointMake(myView.center.x + myView.frame.origin.x, ...);
As for CALayers, they do have an anchorPoint property, but they lack a center property. So, what you want to do is calculate the center manually by doing calculations on the position property, anchorPoint property and the translation of your layer.
I can't provide any code, since I am not sure which method you are using, but to wrap it up: you have to roll your own center calculator either way.
Please look at the pictures below carefully (courtesy of Stanford iPhone Development course slides):
Before applying any rotation:
After applying a 45° rotation:
Conclusion:
Notice how the old center was (300, 225) and the new center is, well, not new! It's the same. If you are doing everything correctly, your center should be the same. If you have another point within the view that you'd like to calculate, then you'd have to do that yourself.
Please also notice how the frame changed from (200, 100, 200, 250) to (140, 65, 320, 320). This is just how UIKit does its magic.

CGAffineTransformMakeRotation scales the image

I'm implementing a basic speedometer using an image and rotating it. However, when I set the initial rotation (to something like 240 degrees, converted to radians), it rotates the image but makes it much smaller than it otherwise would be. Some values make the image disappear entirely (like M_PI_4).
The slider goes from 0-360 for testing.
The following code is called in viewDidLoad, and when the slider value is changed.
-(void)updatePointer
{
    double progress = testSlider.value;
    progress += pointerStart;
    CGAffineTransform rotate = CGAffineTransformMakeRotation((progress * M_PI) / 180);
    [pointerImageView setTransform:rotate];
}
EDIT: Probably important to note that once it gets set the first time, the scale remains the same. So, if I were to set pointerStart to 240, it would shrink, but moving the slider wouldn't change the scale (and it would rotate it as you'd suspect) Replacing "progress" with 240 in the transformation does the same thing. (shrinks it.)
I was able to resolve the issue, for anybody who stumbles across this question. Apparently the image is not fully loaded/measured when viewDidLoad is called, so the matrix transforms that CGAffineTransform applies actually altered the size of the image. Moving the update code to viewDidAppear fixed the problem.
Take the transform state of the view which you want to rotate and then apply the rotation transform to it.
CGAffineTransform trans = pointerImageView.transform;
pointerImageView.transform = CGAffineTransformRotate(trans, (240 * M_PI) / 180);

finding the center point of the screen in a zoom view in iphone

I'm trying to add a box to the centre of the screen in a zoom view. E.g. if I move into an area of the content view and try using the offset coordinates, it becomes erratic if I zoom in or out. I can't seem to figure out the right mathematical formula for this.
If you are working with a UIView or one of its subclasses, you'll always have a center property available. That property is a CGPoint, and you can do something like this to test whether it gives the result you seek:
CGPoint center = [YourUIViewOrSubclass center];
NSLog(@"Center x is '%f' and y is '%f'", center.x, center.y);
I hope this helps you. Otherwise try and rephrase your question and include a little context.
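In case a concrete formula helps: for a UIScrollView-style zoom, the point at the center of the visible area maps into the zoomed content's own coordinates by adding the content offset and dividing by the zoom scale. A small C sketch (my own names, not the UIKit API):

```c
#include <assert.h>

typedef struct { double x, y; } Pt;

/* Sketch of the usual scroll-view math: the visible center in the
 * content's own (unzoomed) coordinate system. The content offset and
 * view size are in screen points; dividing by the zoom scale undoes
 * the magnification. */
static Pt visibleCenterInContent(Pt contentOffset, double viewW,
                                 double viewH, double zoomScale) {
    Pt p = { (contentOffset.x + viewW / 2.0) / zoomScale,
             (contentOffset.y + viewH / 2.0) / zoomScale };
    return p;
}
```

Because the division happens after adding the offset, the result stays stable as you zoom in and out, which is likely the erratic behavior the question describes.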

How to shift item position in UIKit

Sorry guys, I hate asking dumb questions but I have seriously been searching for days online to no avail. Every method I've tried with item.bounds or the like has failed miserably. I'm just trying to move some elements, like a UILabel and a UIButton over a few pixels under a certain circumstance. Just a simple point to a tutorial that I missed would be so helpful.
Generally frame is what you want. It is specified in terms of parent-view coordinates. So if my view's frame is CGRectMake(10.f, 20.f, 50.f, 60.f), it appears in the parent's coordinates at x=10, y=20, with width 50 and height 60.
UIView *someView = ...;
CGRect someViewFrame = someView.frame;
someViewFrame.origin.y += 10;
someView.frame = someViewFrame;
moves the view down by 10 points.
If you just want to move the view in a superview, leave bounds alone.
This example should be useful.