UIView frame larger than iPhone's screen

I have made a new iPhone View-based project in Xcode and looked at the generated xib, named myAppViewController.xib. In that xib, the generated view's frame looks like this: { x = 160, y = 230, w = 320, h = 460 }. This view is supposed to fill the entire screen, excluding the status bar. I would expect the view's frame to look like this: { x = 0, y = 0, w = 240, h = 300 } for the older iPhones, and { x = 0, y = 0, w = 480, h = 600 } for the retina display. However, I get those weird values above. Can anyone please explain what is happening here? It is ruining some of my drawing code in another project (the area to draw in is based on the frame of the view).

The retina display doesn't actually increase the point size of the view.
When Apple introduced it, they drew a distinction between points and pixels: the retina display has more pixels, but the same number of points.
Everything in your code related to UIKit will generally be in points, so no changes are needed there; a 100x100 view will still be that size.
I don't fully understand your question, but the frame value can be affected by the bounds and center properties, so check that you're not modifying those.
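A minimal sketch of the point/pixel distinction (the view and variable names are just illustrative):
UIView *box = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
CGFloat scale = [UIScreen mainScreen].scale; // 1.0 on older iPhones, 2.0 on retina
NSLog(@"points: %.0f x %.0f, pixels: %.0f x %.0f",
      box.frame.size.width, box.frame.size.height,
      box.frame.size.width * scale, box.frame.size.height * scale);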

Frame rects are as such:
CGRectMake(CGFloat x, CGFloat y, CGFloat width, CGFloat height)
The screen is 320 points wide by 480 points tall, and the status bar is 20 points. Thus the rect you want is:
CGRectMake(0,0,320,460)
Regardless of whether the display is retina, the device will scale up to pixels as needed.

Coordinates are in points, not pixels. The coordinate system is 320 x 480, regardless of the resolution of the screen.
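If you'd rather not hardcode those numbers, a small sketch (myView is illustrative): applicationFrame already returns the screen in points with the status bar excluded.
// e.g. {0, 20, 320, 460} in points on a portrait iPhone
CGRect appFrame = [UIScreen mainScreen].applicationFrame;
myView.frame = appFrame;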

Related

UIScrollView scale with contentOffset

I'm making an app that allows a user to import an image into a UIScrollView, where they can scroll and position it. Other elements are placed on top of the UIScrollView, and then an image is created by taking a snapshot of the entire view.
This works great and all is fine. Then I had to upload the image to Facebook. Facebook scales the image up significantly, so I need to increase the size of the images. So I should just scale up all of the elements and take a snapshot; that should work fine, right?
However, I cannot match the contentOffset of the UIScrollView after it is scaled up. I've tried all sorts of mathematical equations (some my own, others found here) and nothing seems to match the offset once scaled.
Conceptually, I would think this should work, but it doesn't:
NSLog(#"Starting offset X: %f",self.imageScroll.contentOffset.x);
NSLog(#"Starting offset Y: %f",self.imageScroll.contentOffset.y);
self.imageScroll.transform = CGAffineTransformMakeScale(4, 4);
CGPoint offset = self.imageScroll.contentOffset;
float newX = offset.x*4;
float newY = offset.y*4;
NSLog(#"Scaled offset X: %f",newX);
NSLog(#"Scaled offset Y: %f",newY);
self.imageScroll.contentOffset = CGPointMake(newX, newY);
I appreciate your time.
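One hedged alternative to consider: rather than transforming the scroll view and trying to match its contentOffset, render the composed view into a graphics context with a larger scale factor; that yields a 4x snapshot without moving anything (self.view stands in for whatever view is being snapshotted, and -renderInContext: requires QuartzCore):
// render at 4x resolution; layout and contentOffset stay untouched
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 4.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();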

Calculating the new origin of an inset CGRect after its size changes

I have a CGRect A and a CGRect B, where B is centered inside A (a CGRect contains the x and y origin and the width and height of a rectangle). If I increase the width and height of A by some proportion, and also increase the width and height of B by that same proportion, will multiplying B's x origin and y origin by that same proportion (for the width and height respectively) keep B in the center of A as both grow? I've tested this in a few different scenarios and it works, but I wanted to verify that it works in all situations, as I am not that sharp at math.
Also, I was wondering if there is a method that simply multiplies all values of a CGRect by a proportion, without having to do it manually (I couldn't find one in the docs).
UPDATE: Actually, this will not work... trying to think of an approach that will let me correctly position a view within another view after a proportional increase in the size of both.
Yes, what you proposed works, but only if the origin of the outer CGRect is (0, 0), or if you multiply its origin by the factor too. If you don't, the inner rect will be shifted toward the bottom right. For example, take an outer rect (10, 10, 100, 100) and a centered inner rect (35, 35, 50, 50), and double everything about the inner rect: it becomes (70, 70, 100, 100) with center (120, 120), while the outer rect with only its size doubled, (10, 10, 200, 200), has center (110, 110), so the inner rect sits 10 points down and to the right. Doubling the outer origin as well gives (20, 20, 200, 200) with center (120, 120), and the rects line up again.
From your question, it isn't entirely clear what you're trying to achieve.
If you want to enlarge a CGRect and (re)center it in another one, use these functions:
// center a CGRect in another one
static inline CGRect ALRectCenterInRect(CGRect outerRect, CGRect innerRect)
{
    return CGRectMake(CGRectGetMidX(outerRect) - innerRect.size.width / 2,
                      CGRectGetMidY(outerRect) - innerRect.size.height / 2,
                      innerRect.size.width,
                      innerRect.size.height);
}
// multiply each value of a CGRect by a factor
// combine with CGRectIntegral() to prevent fractions (and the resulting aliasing)
static inline CGRect ALRectMultiply(CGRect rect, CGFloat factor)
{
    return CGRectMake(rect.origin.x * factor,
                      rect.origin.y * factor,
                      rect.size.width * factor,
                      rect.size.height * factor);
}
How to use them:
CGRect centeredInnerRect = ALRectCenterInRect(outerRect, innerRect);
CGRect multipliedRect = ALRectMultiply(someRect, 1.5);
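For the UPDATE in the question, a sketch that combines the two helpers: scale both rects by the same factor, then re-center the inner one inside the scaled outer one (the 1.5 factor is just illustrative):
CGRect scaledOuter = ALRectMultiply(outerRect, 1.5);
CGRect scaledInner = ALRectMultiply(innerRect, 1.5);
// re-centering removes the drift caused by the outer rect's non-zero origin
CGRect centeredInner = ALRectCenterInRect(scaledOuter, scaledInner);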
However, when dealing with CGRects, it's usually about UIViews. If you want to center a UIView in its superview, do this:
someSubview.center = CGPointMake(CGRectGetMidX(someSuperview.bounds), CGRectGetMidY(someSuperview.bounds));
If the inner view has the same superview as the outer view, you can simply do this to center it in the outer view:
innerView.center = outerView.center;

How to get the coordinates of a view according to the coordinates of the UIImage on which it is placed in the iOS SDK?

I have an app where the user takes a picture with the camera and it is placed in a full-screen image view. The user can then draw rectangular/square views on it, and the coordinates/size of the overlaid view are sent to the server. The problem is that I am getting the coordinates of the view relative to the screen, but I want them relative to the image in the image view, because the image from the camera will normally have a larger resolution than the screen.
Please show me a way to get the size of the overlaid view according to the UIImage rather than the screen.
EDIT: For example, if the user draws a view over a human face in the image and sends the coordinates of the crop view to the server, it will be difficult to map those coordinates back onto the real image if they are relative to the screen rather than to the UIImage itself.
The answer is simple:
You have the frame of your UIImageView, and the frame of your drawn square (both relative to self.view).
You only need to find the origin of your square, relative to the UIImageView.
Just subtract:
//get the frame of the square
CGRect correctFrame = square.frame;
//edit square's origin
correctFrame.origin.x -= imageView.frame.origin.x;
correctFrame.origin.y -= imageView.frame.origin.y;
Now correctFrame is the frame of your square relative to the imageView, while square.frame is still relative to self.view (as we didn't change it)
To get the frame according to the image resolution, do exactly the same as above, then:
float xCoefficient = imageResolutionWidth / imageView.frame.size.width;
float yCoefficient = imageResolutionHeight / imageView.frame.size.height;
correctFrame.origin.x *= xCoefficient;
correctFrame.origin.y *= yCoefficient;
correctFrame.size.width *= xCoefficient;
correctFrame.size.height *= yCoefficient;
Why: the image resolution is much greater than the imageView.frame, so you have to calculate a coefficient that adjusts your square.
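Putting both steps together, a sketch as a single helper (the function name is hypothetical, and it assumes the image view uses UIViewContentModeScaleToFill, so the image exactly fills the image view's bounds):
static CGRect frameInImageCoordinates(CGRect squareFrame, UIImageView *imageView)
{
    CGRect r = squareFrame;
    // make the frame relative to the image view
    r.origin.x -= imageView.frame.origin.x;
    r.origin.y -= imageView.frame.origin.y;
    // scale from view points to image coordinates
    CGFloat xCoefficient = imageView.image.size.width / imageView.frame.size.width;
    CGFloat yCoefficient = imageView.image.size.height / imageView.frame.size.height;
    r.origin.x *= xCoefficient;
    r.origin.y *= yCoefficient;
    r.size.width *= xCoefficient;
    r.size.height *= yCoefficient;
    return r;
}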
That's all!

Setting a view's bounds changes the coordinates of the frame, WHY?

Why does setting the bounds property of a UIView mess up its frame's coordinates?
For example:
self.view.frame = CGRectMake(10, 10, 200, 200);
CGRect b = CGRectMake(0, 0, 399, 323);
self.view.bounds = b;
I would expect the view's frame to be (10, 10, 399, 323) but instead the coordinates get some weird values like (-89.5 -51.5; 399 323).
Thanks!
From the UIView class reference:
Changing the bounds size grows or shrinks the view relative to its center point.
So it keeps the center point in the same place, which means the origin of the frame has to adjust. In your example, the center is (10 + 200/2, 10 + 200/2) = (110, 110); with the new 399 x 323 bounds, the new origin becomes (110 - 399/2, 110 - 323/2) = (-89.5, -51.5), which is exactly the "weird" value you're seeing.
If you want to resize the view but keep the origin in the same place, set the frame instead of the bounds.
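A minimal sketch of that, using the question's numbers:
CGRect f = self.view.frame;
f.size = CGSizeMake(399, 323);
self.view.frame = f; // origin stays at (10, 10)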

AFOpenFlow coverFlow

In the AFOpenFlow library, there are two lines that place the coverFlow in the middle of the iPhone screen:
CGPoint newPosition;
newPosition.x = halfScreenWidth + aCover.horizontalPosition;
newPosition.y = halfScreenHeight + aCover.verticalPosition;
But how could I change these lines so that the coverFlow sits in the upper part of the screen?
Marco
I don't know AFOpenFlow, but this math centers the view on the center of the screen. If you want the view to take up only the upper half of the screen, you also need to change its height. It would be something like:
[Updated Code]
CGSize newSize = CGSizeMake(screenWidth, screenHeight / 2);
CGPoint newCenterPosition = CGPointMake(halfScreenWidth + aCover.horizontalPosition,
                                        halfScreenHeight / 2 + aCover.verticalPosition);
// size the cover via bounds, then position it via center
// (setting the bounds origin would not move the view on screen)
aCover.bounds = CGRectMake(0, 0, newSize.width, newSize.height);
aCover.center = newCenterPosition;
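Equivalently, a small sketch that sets the frame in one step (valid as long as aCover has no transform applied):
aCover.frame = CGRectMake(newCenterPosition.x - newSize.width / 2,
                          newCenterPosition.y - newSize.height / 2,
                          newSize.width, newSize.height);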