How to handle the resolution change 320x480 => 640x960 in gameplay - iPhone

I have decided to have two sets of images for my iPod game: one for 320x480 and the other for the Retina resolution. I can switch between them happily, but this forces me to add extra code to handle the change in resolution.
My game is played in screen space on a grid, so if I have 32-pixel tiles, I will have to use offsets of 32 in low-res and 64 in Retina (because the resolution doubles). For a simple game this is not a problem, but what about more complex games? How do you handle this without hardcoding things for each target resolution?
Of course, an easy way to bypass this is to just release a 320x480 version and let the hardware upscale, but that is not what I want because the images come out blurry. I'm a bit lost here.

If you have to, you can do the conversion from points to pixels (and vice versa) easily by multiplying or dividing the point/pixel position by the contentScaleFactor of your view. Normally, though, this is handled for you automatically as long as you stick to points instead of pixels.
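For example, a minimal sketch of that conversion in Objective-C (gameView stands in for whatever view you render into; the scale factor is 1.0 on non-Retina and 2.0 on Retina devices):

    #import <UIKit/UIKit.h>

    // Convert a position given in points to pixels for a given view.
    static CGPoint PointsToPixels(CGPoint points, UIView *gameView)
    {
        CGFloat scale = gameView.contentScaleFactor; // 1.0 non-Retina, 2.0 Retina
        return CGPointMake(points.x * scale, points.y * scale);
    }

    // ...and back from pixels to points.
    static CGPoint PixelsToPoints(CGPoint pixels, UIView *gameView)
    {
        CGFloat scale = gameView.contentScaleFactor;
        return CGPointMake(pixels.x / scale, pixels.y / scale);
    }

With this approach the grid can stay at 32 points per tile on both devices; you only convert when you genuinely need pixel coordinates.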

This is automatic. You only need to add image files suffixed '@2x' for the Retina resolution.
Regarding pixels: in your program you work in points, which are translated to pixels by the system. Screen dimensions are 320x480 points on both Retina and non-Retina iPhones.
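A minimal sketch of what that looks like in practice (the file names tile.png / tile@2x.png are just examples):

    #import <UIKit/UIKit.h>

    // In some view controller, e.g. in viewDidLoad:
    // loads tile.png on non-Retina devices and tile@2x.png on Retina devices.
    UIImage *tile = [UIImage imageNamed:@"tile"];

    // Both variants report the same size in points (e.g. 32x32),
    // so the rest of your layout code does not change.
    NSLog(@"size in points: %@, scale: %.1f",
          NSStringFromCGSize(tile.size), tile.scale);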

Related

Pixel-perfect shader in Unity ShaderLab

In Unity, when writing shaders, is it possible for the shader itself to "know" what the screen resolution is, and indeed for the shader to control single physical pixels?
I'm thinking only of the case of writing shaders for 2D objects (such as for UI use, or at any event with an ortho camera).
(Of course, normally to show a physical-pixel-perfect PNG on screen, you merely have, say, a 400-pixel PNG, and you arrange the scaling so that the shader happens to be drawing to precisely 400 physical pixels. What I'm wondering about is a shader that just draws, for example, a physical-pixel-perfect black line - it would have to "know" exactly where the physical pixels are.)
There is a ShaderLab built-in value called _ScreenParams.
_ScreenParams.x is the screen width in pixels.
_ScreenParams.y is the screen height in pixels.
Here's the documentation page: http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
I don't think this is going to happen. Your rendering is tied to the currently selected video mode, and it doesn't even have to match your physical screen size (if that is what you mean by pixel-perfect).
The closest you are going to get is to render at the recommended resolution for your display device and use a pixel shader to shade the entire screen. That way, one 'physical pixel' is going to be roughly equal to one actual rendered pixel. Other than that, it is impossible to associate physical pixels (that is, your display's) with rendered ones.
This is unless, of course, I somehow misunderstood your intentions.
is it possible for the shader itself to "know" what the screen resolution is
I don't think so.
and indeed for the shader to control single physical pixels?
Yes. Pixel shaders know what pixel they are drawing and can also sample other pixels.
First of all, please define 'Pixel perfect' and 'Physical pixel'.
If by physical pixel you mean your display's pixel (monitor, laptop display, any other hardware you might use) then you are out of luck. Shaders don't operate on those, they operate on their own 'abstract pixels'.
You can think about it in this way:
Your graphics are rendered into a picture with some configurable resolution (say 800x600 pixels). You can still display this picture full screen on a 1920x1080 display, no problem; it would look crappy, though. This is what happens with the actual display and the video card's rendering: what determines the actual number of rendered pixels is your video mode (the picture's resolution in the above example), and the physical pixels are your display's pixels. When rendering, you can only operate on the first kind.
This leads us to the conclusion that when you render the graphics at exactly the same resolution as your display's native resolution, you can safely say that you indeed render in 'physical pixels'.
In Unity, you can pass the renderer some external data, which might include your current screen resolution (for example as a Vector2).
However, you most likely don't need any of this, since pixel shaders already operate on pixels (rendered pixels, determined by your current video mode). That means that if you use a resolution lower than your native one, you most likely will not be able to address a single physical pixel.
Hope it helped.

Xcode does not recognize change in size of image

In my program I have an image which has been scaled down to a size which fits perfectly on the screen. Upon further investigation, I realized those dimensions must be doubled to provide maximum quality in iPhone Retina. I doubled those dimensions by using the original image (which was much larger) so there was no loss in quality. However, when I run my program in iPhone simulator (retina display) the image's quality has not changed whatsoever. Is there any apparent reason why Xcode does not seem to recognize that the image has been updated? Any help is appreciated.
When you have two versions of the image (normal and @2x), use the "normal" name (without @2x) in the XIB or with UIImage's imageNamed: method; the system will automatically choose the best version for the current screen. Also check that your UIImageView's size corresponds to the normal (not 2x) resolution of your image. There are several content modes (like Aspect Fit, Aspect Fill, Center, etc.) that will resize or position your image in the UIImageView.
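A minimal sketch of the lookup and sizing described above (the image name "photo" and the 100x100 point frame are placeholders):

    #import <UIKit/UIKit.h>

    // The frame is specified in points, i.e. the "normal" (non-2x) size.
    UIImageView *imageView =
        [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];

    // No @2x in the name: the system picks photo.png or photo@2x.png itself.
    imageView.image = [UIImage imageNamed:@"photo"];

    // Pick a content mode so the image is scaled/positioned inside the view.
    imageView.contentMode = UIViewContentModeScaleAspectFit;
    [self.view addSubview:imageView];   // assumes this runs inside a view controller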

iPhone image resource - 1024 maximum, 2048 pixels @2x?

The restriction of 1024x1024 as the largest image for an iPhone is a problem on the iPhone 4. However, if an @2x image with maximum dimensions of 2048x2048 is used, everything looks just as good on the 4 as it does on a 3 - tried and tested in the simulator and on the device. The question is: does the image dimension restriction relate to the UIImage or to the resource it contains? I can't imagine resources of more than 1024 pixels are discouraged, given the 960-pixel height of the screen.
The right answer is really to use tiles so that things look even better, but the deadline for this deliverable is too close - it's a future thing.
From UIImage class reference:
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
That is, views are rendered and composited by the iPhone's GPU. If your view, for example, overrides drawRect: and tries to render a very big UIImage, you might run into problems. Newer-generation iDevices, such as the iPad and iPhone 4, support bigger textures than 1024x1024 (2048x2048, I think).
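For reference, a minimal sketch of the "code-based manipulation" the documentation mentions - drawing the large image into a bitmap-backed graphics context at a smaller size (the function name and target size are placeholders):

    #import <UIKit/UIKit.h>

    // Downscale a (possibly very large) UIImage by drawing it into a
    // bitmap-backed graphics context of the desired target size.
    static UIImage *ResizedImage(UIImage *source, CGSize targetSize)
    {
        UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0); // 0.0 = screen scale
        [source drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }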
I didn't realise there was a restriction; I'm using a 15198 × 252 image as the scrolling landscape in Scramble... it works really well, though I must say I did have reservations before I tried it out!

iPhone4 UI Element Size in Pixels?

Does anyone know where the various screen dimensions for the iPhone 4 are documented? I have checked the MobileHIG, but all I could find were references back to the old iPhone 3G. Should I just assume that all previous values are doubled (i.e. status bar = 40 pixels), or is there a more accurate illustration (like the one below) hidden somewhere else?
Cheers Gary
Points are the new pixels.
You keep working with the values you're used to, just as if you were still developing for the 3G / 3GS. The unit of these values is now called points instead of pixels, to avoid confusion. On the older iPhone models, a 2x2-point square equals 2x2 pixels on the screen, but on the iPhone 4 the same square equals 4x4 pixels. UI elements are rendered at the appropriate resolution automatically; images and other content you provide will be scaled unless you supply high-resolution versions of those resources.
You might want to read this document for further information.
You shouldn't really assume anything about the screen dimensions. If you need the dimensions, read them via the API.
There is no promise that dimensions or even proportions will stay the same forever. (The iPhone, iPad and new iPhone all have different resolutions and proportions.)
That said, the dimensions on the iPhone 4 should be exactly twice those of earlier iPhone models.
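If it helps, a minimal sketch of reading those values from the API instead of assuming them (LogScreenMetrics is just an illustrative helper):

    #import <UIKit/UIKit.h>

    static void LogScreenMetrics(void)
    {
        UIScreen *screen = [UIScreen mainScreen];
        CGRect bounds = screen.bounds;  // in points: 320x480 on the 3GS and iPhone 4
        CGFloat scale = screen.scale;   // 1.0 on non-Retina, 2.0 on Retina

        CGRect statusBar = [UIApplication sharedApplication].statusBarFrame; // in points

        NSLog(@"screen: %.0fx%.0f points (%.0fx%.0f pixels), status bar: %.0f points",
              bounds.size.width, bounds.size.height,
              bounds.size.width * scale, bounds.size.height * scale,
              statusBar.size.height);
    }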

iPhone camera images as OpenGL ES textures

Is it possible to use an image captured with the iPhone's camera as a texture that is then manipulated in OpenGL ES (flag wave effect, etc.)? The main problem is that the iPhone screen is 320x480 (without the status bar), so the image won't have power-of-two dimensions. Is the main option to copy it into a 512x512 texture and adjust the vertices?
Yes, that's the way to do it.
Just use a larger texture. It's a waste of memory but unfortunately there is no way around this problem.
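A minimal sketch of the padding approach (the camera image is copied into the corner of a 512x512 texture and the texture coordinates are clamped to the used portion; the RGBA pixel format and the function name are assumptions about how you obtain the camera data):

    #import <OpenGLES/ES1/gl.h>

    // Pads a 320x480 RGBA image into a 512x512 texture and returns the
    // texture coordinates that cover only the image region.
    static GLuint CreatePaddedTexture(const void *pixels320x480,
                                      GLfloat *outMaxU, GLfloat *outMaxV)
    {
        const GLsizei texW = 512, texH = 512;
        const GLsizei imgW = 320, imgH = 480;

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Allocate the full power-of-two texture, then upload the image into
        // its lower-left corner; the remaining area is simply never sampled.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imgW, imgH,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels320x480);

        // Use these as the maximum u/v values in your vertex data.
        *outMaxU = (GLfloat)imgW / texW;  // 0.625
        *outMaxV = (GLfloat)imgH / texH;  // 0.9375
        return tex;
    }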
An alternative would be dividing the picture into squares with a width and height of 32 pixels (aka tiling), resulting in a 10x15 grid of tiles. Displaying it, however, would involve many texture switches while drawing, which might become a bottleneck. On the other hand, you would save a lot of memory with a tiled approach.