How to shift item position in UIKit - iPhone

Sorry guys, I hate asking dumb questions but I have seriously been searching for days online to no avail. Every method I've tried with item.bounds or the like has failed miserably. I'm just trying to move some elements, like a UILabel and a UIButton over a few pixels under a certain circumstance. Just a simple point to a tutorial that I missed would be so helpful.

Generally, frame is what you want. It is specified in the parent view's coordinate system. So if my view's frame is CGRectMake(10.f, 20.f, 50.f, 60.f), it appears in the parent's coordinates at x=10, y=20 with width 50 and height 60.
UIView *someView = ...;
CGRect someViewFrame = someView.frame;
someViewFrame.origin.y += 10.0f; // modify the copy, not the property
someView.frame = someViewFrame;
moves the view down by 10 points. (Note that someView.frame.origin.y += 10; won't compile: frame is a property that returns its struct by value, so you have to copy it, modify the copy, and assign it back.)
If you just want to move the view in a superview, leave bounds alone.


Related

How to change length of line and rotate it dynamically?

I am trying to draw a simple straight line. For this, I fill a UIImageView with some color, with a given width of, say, 2 pixels and some length. The user is provided with two UISliders: one is used to stretch the line and the other to rotate it. I use myImageView.frame to change the height and a CGAffineTransform to rotate. It works fine until I change the rotation angle using the slider; once I rotate the image view, the stretch slider no longer works properly.
I have searched and found that frames won't work after rotating with CGAffineTransform.
Alternatively, bounds should work, but that didn't work either.
Seeking help; any brief code will help me.
Thanks in advance.
Recompute the transformed bounds every time you apply a transform. This should work:
CGRect transformedBounds = CGRectApplyAffineTransform(myView.bounds, myView.transform);

Bug when setting -setWantsLayer: on Lion

I have a subclass of NSView where I'm handling the -mouseDown: event to get the position of the click on the screen. With this position I define a point that I use to draw a rect in -drawRect:, and it works fine.
BUT... when I set wantsLayer, things stop working. When I get the position of the input, I see that the Y-axis value has increased by 20 points, and I don't know what's happening... Can anyone explain? How do I fix this problem?
Simulation:
I click at coordinate x: 100, y: 100, and drawRect draws the rect at x: 100, y: 100. That's okay; it's what I want.
With setWantsLayer:YES
I click at coordinate x: 100, y: 100, and drawRect draws the rect at x: 100, y: 120 (or something like that).
Is it possible to use CALayers without setting -setWantsLayer to YES? I'm trying to figure this out, but I have no idea what's happening... I need your help.
UPDATE: I'm trying to figure this out and I have done a lot of tests now...
Now I can say that the problem is with -mouseDown: from NSView; when I set -setWantsLayer to YES, it no longer works as expected...
I have a CustomView on my window; I created a subclass of NSView and set it as the CustomView's class. The CustomView is at position (0, 20). The coordinate orientation isn't flipped.
I believe that when I tell the NSView it wants a layer, -mouseDown: updates the frame to position (0, 0) (in other words, it uses the NSWindow frame) instead of (0, 20). When that happens, every position from -mouseDown: gets an increase of 20 points on the Y-axis. I don't know if this interpretation is right, but it is what my tests show.
Can someone help me figure this out?
With help from mikeash on #macdev (Freenode), I solved this.
The problem was how I was converting the point returned from the -mouseDown: event. I was using -convertPointFromBacking:, and as mikeash said: "the problem is that -convertPointFromBacking: is not correct for converting the point returned from locationInWindow", "because locationInWindow is not in 'its pixel aligned backing store coordinate system'".
I changed it to -convertPoint:fromView:, like this: [sender convertPoint:[mEvent locationInWindow] fromView:nil]; and it's working nicely!
Thank you to mikeash.
And I'm posting the answer here to help others with the same question.

Is it possible to know the final UIScrollView contentOffset before it ends decelerating?

When user "flicks" a UIScrollView, causing it to scroll with momentum, is there a way to figure out the final contentOffset before the deceleration ends?
Basically, I would like to know what the ultimate contentOffset is from inside scrollViewDidEndDragging:willDecelerate: instead of scrollViewDidEndDecelerating:
There is a float property called decelerationRate, which might be one piece of the puzzle, but I have yet to figure out what to do with it.
PS: I have pagingEnabled set to YES. In iOS 5, there is actually scrollViewWillEndDragging:withVelocity:targetContentOffset:, but the doc says it's not fired if pagingEnabled is YES
As I noticed, the maximum contentOffset can be calculated up front; it's just the difference between the scroll view's contentSize and its frame size.
On the y-axis I calculated it like this: maxOffsetY = scrollView.contentSize.height - scrollView.frame.size.height;
There might be a better way, but off the top of my head I would try this:
implement the scrollViewDidScroll: delegate method and record the contentOffset the first two times it is called (they will have to be stored outside the method).
take the difference between the two readings of the content offset; this is your delta.
now use the delta as the current velocity of your scroll, together with decelerationRate, to figure out how far it will scroll before the delta becomes 0. That value plus or minus (depending on scroll direction) the second content offset reading is your "destination scroll offset".
be sure to check that the scroll view is in fact decelerating before you apply this, and also clamp your final calculation so it isn't "out of bounds" in terms of the content size.
hope this helps

Finding the center point of the screen in a zoom view on iPhone

I'm trying to add a box at the centre of the screen in a zoom view. E.g. if I move into an area of the content view and try using the offset coordinates, the result becomes erratic when I zoom in or out. I can't seem to figure out the right mathematical formula for this.
If you are working with a UIView or one of its subclasses, you'll always have a center property available. That property is a CGPoint, and you can do something like this to test whether it is the result you seek:
CGPoint center = [YourUIViewOrSubclass center];
NSLog(@"Center x is '%f' and y is '%f'", center.x, center.y);
I hope this helps you. Otherwise try and rephrase your question and include a little context.

How to convert window coordinates relative to a specific view?

Example: I have a CGPoint in window coordinates:
CGPoint windowPoint = CGPointMake(220.0f, 400.0f);
There is aView, which sits somewhere deep in the view hierarchy (a superview within a superview within a superview), probably even transformed a few times.
When you get a UITouch, you can ask it for -locationInView: and it will return the coordinates relative to that view.
I need pretty much the same thing. Is there an easy way to accomplish that?
I found a really easy solution:
[self.aView convertPoint:windowPoint fromView:self.window];
Alternatively, you could iterate through the superview tree, drilling down to subviews by tag, and add or subtract each subview's frame.origin values to translate the windowPoint into coordinates relative to the view of interest. Note that this manual approach breaks down as soon as any view in the chain has a transform applied, which is why convertPoint:fromView: is preferable.