In Unity I am trying to scale the scene to fit the screen size without losing its aspect ratio. I have tried one aspect-utility solution, but it is not working properly: it shows black strips, so the UI does not look good.
I want to target both Android devices and the iPad (e.g. 16:9 and 4:3 ratios).
Can anybody guide me on how to achieve this scaling on any kind of device?
You can use the NGUI plugin and attach its UIStretch script to the image.
Try this: name the script "LetterboxCamera.cs", add it to your camera, and paste the following code:
using UnityEngine;
using System.Collections;

public class LetterboxCamera : MonoBehaviour
{
    // Use this for initialization
    void Start()
    {
        // Set the desired aspect ratio (the values in this example are
        // hard-coded for 16:9, but you could make them into public
        // variables instead so you can set them at design time).
        float targetAspect = 16.0f / 9.0f;

        // Determine the game window's current aspect ratio.
        float windowAspect = (float)Screen.width / (float)Screen.height;

        // The current viewport height should be scaled by this amount.
        float scaleHeight = windowAspect / targetAspect;

        // Obtain the camera component so we can modify its viewport.
        Camera camera = GetComponent<Camera>();

        // If the scaled height is less than the current height, add a letterbox.
        if (scaleHeight < 1.0f)
        {
            Rect rect = camera.rect;

            rect.width = 1.0f;
            rect.height = scaleHeight;
            rect.x = 0;
            rect.y = (1.0f - scaleHeight) / 2.0f;

            camera.rect = rect;
        }
        else // Otherwise, add a pillarbox.
        {
            float scaleWidth = 1.0f / scaleHeight;

            Rect rect = camera.rect;

            rect.width = scaleWidth;
            rect.height = 1.0f;
            rect.x = (1.0f - scaleWidth) / 2.0f;
            rect.y = 0;

            camera.rect = rect;
        }
    }

    // Update is called once per frame
    void Update()
    {
    }
}
There are only two ways to maintain the game's aspect ratio when the viewport's aspect ratio differs. The first is stretching in a given direction, which is never a good option. The second is letterboxing (black bars), which can hurt usability, especially on handheld screens. My recommendation would be to let the game view scale with the screen's aspect ratio (which I believe is the default behaviour in Unity) and design the GUI to be responsive to the screen size, i.e. don't draw GUI elements with pixel-specific coordinates, but with coordinates relative to the screen size.
I'm just overriding an MKOverlay, so I'm looking for good practice in Apple's developer sample code.
When I see this:
// bite off up to 1/4 of the world to draw into.
MKMapPoint origin = points[0];
origin.x -= MKMapSizeWorld.width / 8.0;
origin.y -= MKMapSizeWorld.height / 8.0;
MKMapSize size = MKMapSizeWorld;
size.width /= 4.0;
size.height /= 4.0;
boundingMapRect = (MKMapRect) { origin, size };
MKMapRect worldRect = MKMapRectMake(0, 0, MKMapSizeWorld.width, MKMapSizeWorld.height);
boundingMapRect = MKMapRectIntersection(boundingMapRect, worldRect);
A few questions:
Why does Apple do the intersection with worldRect after calculating the correct boundingMapRect?
Is it for safety?
Isn't the intersection of boundingMapRect and worldRect always just boundingMapRect?
If I have two MKMapPoints that define my rect, I don't have to do it, right?
EDIT
So I just have to do it like this?
// set the smallest rectangle able to contain this overlay
MKMapPoint leftTopPoint = MKMapPointForCoordinate(leftTopCorner);
MKMapPoint rightBottomPoint = MKMapPointForCoordinate([self bottomRightCoord]);
boundingMapRect = MKMapRectMake(leftTopPoint.x, leftTopPoint.y,
fabs(leftTopPoint.x - rightBottomPoint.x),
fabs(leftTopPoint.y - rightBottomPoint.y));
That code is from the Breadcrumb sample, which is perhaps a little unusual compared to most overlays in that it is dynamically updated. They are actually setting the bounding area to a quarter of the world, which is a bit unusual in itself; normally you would expect it to be much smaller. I think the intersection is just to make sure that the resulting rect is definitely inside the full world rect. I'm not sure I'd recommend this code as best practice unless you are doing something similar (i.e. creating a dynamic overlay). Presumably even this code could be caught out if you were to run the Breadcrumb app while flying around the world non-stop, or maybe even just crossing the International Date Line...
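For a static overlay whose extent is known up front, a minimal Swift sketch of the usual approach looks like the following; the two corner coordinates are assumptions standing in for whatever defines your overlay, and the final intersection is purely defensive:

import MapKit

// Build boundingMapRect from two known corner coordinates and clamp it to
// the world rect. The corner parameters are illustrative, not MapKit API.
func boundingMapRect(topLeft: CLLocationCoordinate2D,
                     bottomRight: CLLocationCoordinate2D) -> MKMapRect {
    let p1 = MKMapPoint(topLeft)
    let p2 = MKMapPoint(bottomRight)

    let rect = MKMapRect(x: min(p1.x, p2.x),
                         y: min(p1.y, p2.y),
                         width: abs(p1.x - p2.x),
                         height: abs(p1.y - p2.y))

    // For a rect built from two valid coordinates this is effectively a no-op;
    // it only matters for rects that could spill outside the world, like the
    // quarter-of-the-world rect in the Breadcrumb sample.
    return rect.intersection(.world)
}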
In my app there is a feature where you can drag and drop an image; the image is in a UIImageView. It's a universal app. I need to store the x,y coordinates relative to 2048x1536. I use CGPointApplyAffineTransform to calculate the relative point, and I invert the transform for the iPhone screen to find the relative position there. Due to the pixel density variation, I am not getting the relative position correctly.
CGFloat TARGET_WIDTH =2048.00;
CGFloat TARGET_HEIGHT =1536.00;
CGPoint thispoint= CGPointApplyAffineTransform(imageView.frame.origin, CGAffineTransformMakeScale(TARGET_HEIGHT/self.view.frame.size.height, TARGET_WIDTH/self.view.frame.size.width));
This is the code I use to find the relative point.
And I invert the transform with the iPhone view size. How do I calculate the relative position correctly given the different pixel densities of the iPad and iPhone?
Try this:
CGFloat TARGET_WIDTH =2048.00;
CGFloat TARGET_HEIGHT =1536.00;
CGFloat xScaleFactor = TARGET_WIDTH / self.view.frame.size.width;
CGFloat yScaleFactor = TARGET_HEIGHT / self.view.frame.size.height;
CGPoint thispoint = CGPointMake(imageView.frame.origin.x*xScaleFactor, imageView.frame.origin.y*yScaleFactor);
//point with respect to your given target size
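To map a stored point back from the 2048x1536 reference space to the current device's view, divide by the same factors. A minimal Swift sketch of that inverse mapping, assuming the same view-based scaling as above; frame values are in points, so the Retina pixel density does not need separate handling:

import UIKit

// Illustrative reference size and inverse mapping; the names are assumptions.
let targetSize = CGSize(width: 2048.0, height: 1536.0)

func viewPoint(fromReferencePoint point: CGPoint, in view: UIView) -> CGPoint {
    let xScale = targetSize.width / view.frame.size.width
    let yScale = targetSize.height / view.frame.size.height
    // Undo the scale that was applied when the point was stored.
    return CGPoint(x: point.x / xScale, y: point.y / yScale)
}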
I have this code in the - (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context method (in an MKOverlayView subclass) to avoid drawing segments that are less than 10 pixels long on a map overlay:
CGPoint origin = [self pointForMapPoint:poly.points[0]];
CGPoint lastPoint = origin;
CGContextMoveToPoint(context, origin.x, origin.y);

for (int i = 1; i < poly.pointCount; i++) {
    CGPoint point = [self pointForMapPoint:poly.points[i]];
    CGFloat xDist = point.x - lastPoint.x;
    CGFloat yDist = point.y - lastPoint.y;
    CGFloat distance = sqrt((xDist * xDist) + (yDist * yDist)) * zoomScale;

    if (distance >= 10.0) {
        lastPoint = point;
        CGContextAddLineToPoint(context, point.x, point.y);
    }
}
Will the >= 10.0 test take care of the screen resolution, or should I introduce a [UIScreen mainScreen].scale parameter?
I believe the >= 10.0 test does not take the screen resolution into account. Apple does most of its drawing arithmetic using "points" instead of pixels; that way, code does not have to change for a Retina display compared to a normal display.
If you want to draw something that is exactly 10.0 pixels wide, you will need to take the screen resolution into account; however, if you do this you'll have to write the method to support both Retina and non-Retina displays.
It depends on how the graphics context is configured. If this is in UIView drawing code, the view's scale factor (which is set automatically) will take care of this; if you're drawing into a bitmap context, you have to do it manually.
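For completeness, a short Swift sketch of the distinction: the 10.0 threshold above is in points, so if you really want the cutoff expressed in device pixels, divide by the screen scale (2.0 on Retina screens). The variable names are illustrative:

import UIKit

// "10 pixels" expressed in points; on a Retina screen this becomes 5 points.
let minimumSegmentLengthInPixels: CGFloat = 10.0
let minimumSegmentLengthInPoints = minimumSegmentLengthInPixels / UIScreen.main.scale

// Inside the drawing loop, the comparison would then read:
// if distance >= minimumSegmentLengthInPoints { ... }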
My application has a UIScrollView with one subview. The subview is a UIView subclass which draws a PDF page into itself using layers in its drawLayer: method.
Zooming with the built-in pinch gesture works great. setZoomScale also works as expected.
I have been struggling with the zoomToRect function. I found an example online which builds a CGRect zoomRect from a given CGPoint.
In touchesEnded, if there was a double tap and the view is zoomed all the way out, I want to zoom in on that PDFUIView I created as though the user were pinching out, with the center of the pinch where they double-tapped.
So assume that I pass the UITouch variable to my function, which uses zoomToRect if they double-tap.
I started with the following function I found on Apple's site:
http://developer.apple.com/iphone/library/documentation/WindowsViews/Conceptual/UIScrollView_pg/ZoomZoom/ZoomZoom.html
The following is a modified version for my UIScrollView subclass:
- (void)zoomToCenter:(float)scale withCenter:(CGPoint)center {
    CGRect zoomRect;

    zoomRect.size.height = self.frame.size.height / scale;
    zoomRect.size.width = self.frame.size.width / scale;
    zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0);
    zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0);

    //return zoomRect;
    [self zoomToRect:zoomRect animated:YES];
}
When I do this, the UIScrollView seems to zoom using the bottom-right edge of the zoomRect above, not its center.
If I make a UIView like this:
UIView *v = [[UIView alloc] initWithFrame:zoomRect];
[v setBackgroundColor:[UIColor redColor]];
[self addSubview:v];
The red box shows up with the touch point dead in the center.
Please note: I am writing this from my PC. I recall messing around with the divide-by-two part on my Mac, so just assume that this draws a rect with the touch point in the center. If the UIView drew off-center but zoomed to the right spot, it would be all good.
However, what happens is that when it performs the zoomToRect, it seems to place the bottom-right of the zoomRect at the top-left of the zoomed-in result.
Also, I noticed that depending on where I tap on the UIScrollView, it anchors to different spots. It almost seems like there is a cross down the middle and it's reflecting the points somehow, as though anywhere left of the middle is a negative reflection and anywhere right of the middle is a positive reflection.
This seems too complicated; shouldn't it just zoom to the rect that was drawn, exactly as the UIView was able to draw it?
I did a lot of research to figure out how to render a PDF that scales in high quality, so I am assuming that using the CALayer may be throwing off the coordinate system? But the UIScrollView should just treat it as a view with 768x985 dimensions.
This is somewhat advanced; please assume the code for creating the zoomRect is fine. There is something deeper going on with the CALayer in the UIView that sits inside the UIScrollView...
OK, another answer:
The Apple-supplied routine works for me, but you need to have the gesture recognizer convert the tap point to the imageView's coordinates, not the scroll view's.
Apple's example does this, but our app works differently (we swap the UIImageView), so the gesture recognizer was set up on the UIScrollView, which works fine, but you then need to do the conversion in handleDoubleTap:
This is loosely based on the Apple example code "TaptoZoom", but as I said, we needed our gesture recognizer hooked up to the scroll view.
- (void)handleDoubleTap:(UIGestureRecognizer *)gestureRecognizer {
    // A double tap zooms in.
    [NSObject cancelPreviousPerformRequestsWithTarget:self
                                             selector:@selector(handleSingleTap:)
                                               object:nil];

    float newScale = [imageScrollView zoomScale] * 1.5;

    // Note we need the location of the tap in the imageView's coords, not the imageScrollView's.
    CGRect zoomRect = [self zoomRectForScale:newScale
                                  withCenter:[gestureRecognizer locationInView:imageView]];
    [imageScrollView zoomToRect:zoomRect animated:YES];
}
Declare BOOL isZoom; in your .h file.
- (void)handleDoubleTap:(UIGestureRecognizer *)recognizer {
    if (isZoom) {
        CGPoint pointInView = [recognizer locationInView:self];

        CGFloat newZoomScale = 3.0;
        newZoomScale = MIN(newZoomScale, self.maximumZoomScale);

        CGSize scrollViewSize = self.bounds.size;
        CGFloat w = scrollViewSize.width / newZoomScale;
        CGFloat h = scrollViewSize.height / newZoomScale;
        CGFloat x = pointInView.x - (w / 2.0);
        CGFloat y = pointInView.y - (h / 2.0);

        CGRect rectToZoom = CGRectMake(x, y, w, h);
        [self zoomToRect:rectToZoom animated:YES];
        [self setZoomScale:3.0 animated:YES];
        isZoom = NO;
    }
    else {
        [self setZoomScale:1.0 animated:YES];
        isZoom = YES;
    }
}
I've noticed that the Apple code you're using doesn't zoom properly if the image starts at a zoomScale of less than 1, because the zoomRect origin is incorrect. I edited it to work correctly. Here's the code:
- (CGRect)zoomRectForScale:(float)scale withCenter:(CGPoint)center {
    CGRect zoomRect;

    // The zoom rect is in the content view's coordinates.
    // At a zoom scale of 1.0, it would be the size of the imageScrollView's bounds.
    // As the zoom scale decreases, so more content is visible, the size of the rect grows.
    zoomRect.size.height = [self frame].size.height / scale;
    zoomRect.size.width = [self frame].size.width / scale;

    // Choose an origin so as to get the right center.
    zoomRect.origin.x = (center.x * (2 - self.minimumZoomScale) - (zoomRect.size.width / 2.0));
    zoomRect.origin.y = (center.y * (2 - self.minimumZoomScale) - (zoomRect.size.height / 2.0));

    return zoomRect;
}
The key part is multiplying the center value by (2 - self.minimumZoomScale).
Hope this helps.
In my case it was:
zoomRect.origin.x = center.x / self.zoomScale - (zoomRect.size.width / 2.0);
zoomRect.origin.y = center.y / self.zoomScale - (zoomRect.size.height / 2.0);
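A minimal Swift sketch of that variant, assuming the tap location is taken in the scroll view's own coordinate space (if the location is taken in the zoomed content view instead, as Apple's sample does, the division by zoomScale is unnecessary). The extension and method names here are illustrative:

import UIKit

extension UIScrollView {
    // Rect (in the unzoomed content view's coordinates) that, when passed to
    // zoom(to:animated:), ends up centered on the tapped point at `scale`.
    func zoomRect(forScale scale: CGFloat, centeredAt scrollViewPoint: CGPoint) -> CGRect {
        // Size of the region that will fill the scroll view at the target scale.
        let width = bounds.size.width / scale
        let height = bounds.size.height / scale

        // Convert the tap point from scroll-view coordinates to content coordinates.
        let centerX = scrollViewPoint.x / zoomScale
        let centerY = scrollViewPoint.y / zoomScale

        return CGRect(x: centerX - width / 2.0,
                      y: centerY - height / 2.0,
                      width: width,
                      height: height)
    }
}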
extension UIScrollView {
    func getRectForVisibleView() -> CGRect {
        var visibleRect: CGRect = .zero
        visibleRect.origin = self.contentOffset
        visibleRect.size = self.bounds.size

        let theScale = 1.0 / self.zoomScale
        visibleRect.origin.x *= theScale
        visibleRect.origin.y *= theScale
        visibleRect.size.width *= theScale
        visibleRect.size.height *= theScale

        return visibleRect
    }

    func moveToRect(rect: CGRect) {
        let scale = self.bounds.width / rect.width
        self.zoomScale = scale
        self.contentOffset = .init(x: rect.origin.x * scale, y: rect.origin.y * scale)
    }
}
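A sketch of how these two helpers might be used together, for example to keep the same content region visible across a layout change such as a rotation; the wrapper function and its parameters are illustrative:

import UIKit

// Capture the visible region, apply the layout change, then restore the region.
func restoreVisibleRegion(of scrollView: UIScrollView, acrossLayoutChange layoutChange: () -> Void) {
    let visibleRect = scrollView.getRectForVisibleView()
    layoutChange()
    scrollView.layoutIfNeeded()
    scrollView.moveToRect(rect: visibleRect)
}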
I had something similar and it was because I didn't adjust the center.x and center.y values by dividing them by the scale also (using center.x/scale and center.y/scale). Maybe I'm not reading your code right.
I am seeing the same behavior and it is quite frustrating... The rectangle being fed to the UIScrollView is perfect, yet no matter what I do, anything that involves changing the zoomScale programmatically always zooms and scales to coordinate 0,0.
I have tried just changing the zoomScale, I've tried zoomToRect, I have tried them all, and with every one, the minute I touch the zoomScale in code it goes to coordinate 0,0.
I also had to add an explicit setContentSize for the resized image in the scroll view after a zooming operation; otherwise I cannot scroll after a zoom or pinch.
Is this a bug in 3.1.3 or what?
I have tried different solutions, but this looks like the best one.
It is really straightforward and conceptually simple:
CGRect frame = [[UIScreen mainScreen] applicationFrame];
scrollView.contentInset = UIEdgeInsetsMake(frame.size.height/2,
frame.size.width/2,
frame.size.height/2,
frame.size.width/2);
I disagree with one of the comments above saying that you should never multiply the center's coordinates by some factor.
Say that you are currently displaying an entire 400x400px image or PDF file in a 100x100 scroll view and want to allow the users to double the size of the content until it's 1:1.
If you double tap at point (75,75), you expect the zoomed-in rectangle to have origin 100,100 and size 100x100 within the new 200x200 content view. So the original tapping point (75,75) is now (150,150) in the new 200x200 space.
Now, after zoom action #1 has completed, if you again double tap at (75,75) inside the new 100x100 rectangle (which is the bottom-right square of the larger 200x200 rectangle), you expect the user to be shown the bottom-right 100x100 square of the larger image, which would now become zoomed to 400x400 pixels.
In order to calculate the origin of this latest 100x100 rectangle within the larger 400x400 rectangle, you would need to consider the scale and current content offset (since before this last zoom action we were displaying the bottom-right 100x100 rectangle within a 200x200 content rectangle).
So the x coordinate of the final rectangle becomes:
center.x/currentScale - (scrollView.frame.size.width/2) + scrollView.contentOffset.x/currentScale
= 75/.5 - 100/2 + 100/.5 = 150 - 50 + 200 = 300.
In this case, being a square, the calculation for the y coordinate is the same.
And we did indeed zoom in the bottom-right 100x100 rectangle, which, in the larger 400x400 content view has origin 300,300.
So here is how you would calculate the zoom rectangle's size and origin:
zoomRect.size.height = mScrollView.frame.size.height/scale;
zoomRect.size.width = mScrollView.frame.size.width/scale;
zoomRect.origin.x = center.x/currentScale - (mScrollView.frame.size.width/2) + mScrollView.contentOffset.x/currentScale;
zoomRect.origin.y = center.y/currentScale - (mScrollView.frame.size.height/2) + mScrollView.contentOffset.y/currentScale;
Hope this made sense; it's hard to explain it in writing without sketching out the various squares/rectangles.