Why do images for textures on the iPhone need to have power-of-two dimensions? - iphone

I'm trying to solve a flickering problem on the iPhone (an OpenGL ES game). I have a few images that don't have power-of-two dimensions. I'm going to replace them with images of appropriate dimensions... but why do the dimensions need to be powers of two?

The reason that most systems (even many modern graphics cards) demand power-of-2 textures is mipmapping.
What is mipmapping?
Smaller versions of the image are created so that it still looks correct when drawn at a very small size. The dimensions are halved over and over to produce each new level.
So, imagine a 256x128 image. This would have smaller versions created of dimensions 128x64, 64x32, 32x16, 16x8, 8x4, 4x2, 2x1, and 1x1.
If this image were 256x192, it would work fine until you got down to a size of 4x3. The next smaller image would be 2x1.5, which is obviously not a valid size. Some graphics hardware can deal with this, but many types cannot.
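To make the halving rule concrete, here is a tiny sketch that prints both chains (plain C; hardware that does accept such textures simply truncates the fractional size):

```c
#include <stdio.h>

/* Print a texture's mipmap chain. Each level halves the previous one
   with integer division, clamped at 1 -- the fractional 1.5 in the
   256x192 example simply truncates to 1. */
static void print_mip_chain(int w, int h) {
    printf("%dx%d", w, h);
    while (w > 1 || h > 1) {
        w = w > 1 ? w / 2 : 1;
        h = h > 1 ? h / 2 : 1;
        printf(" -> %dx%d", w, h);
    }
    printf("\n");
}

int main(void) {
    print_mip_chain(256, 128); /* clean chain: ... 4x2 -> 2x1 -> 1x1 */
    print_mip_chain(256, 192); /* ... 4x3 -> 2x1 -> 1x1, the 1.5 is lost */
    return 0;
}
```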
Some hardware also requires a square image but this isn't very common anymore.
Why do you need mipmapping?
Imagine that you have a picture that is VERY far away, so far away as to be only the size of 4 pixels. Now, when each pixel is drawn, a position on the image will be selected as the color for that pixel. So you end up with 4 pixels that may not be at all representative of the image as a whole.
Now, imagine that the picture is moving. Every time a new frame is drawn, a new pixel is selected. Because the image is SO far away, you are very likely to see very different colors for small changes in movement. This leads to very ugly flashing.
Lack of mipmapping causes problems for any size that is smaller than the texture size, but it is most pronounced when the image is drawn down to a very small number of pixels.
With mipmaps, the hardware has access to a 2x2 version of the texture, so each of those pixels will be the average color of its quadrant of the image. This eliminates the odd color flashing.
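For concreteness, this is roughly how you would turn mipmapped filtering on in OpenGL ES 1.1 (a minimal sketch, assuming a current context; GL_GENERATE_MIPMAP asks the driver to build the chain when level 0 is uploaded):

```c
#include <OpenGLES/ES1/gl.h>  /* iOS header path for OpenGL ES 1.1 */

/* Assumes a GL ES 1.1 context is current and `tex` names a texture.
   Set the parameters before uploading level 0 so the driver builds
   the full chain from it. */
static void use_mipmaps(GLuint tex) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* ...then glTexImage2D(GL_TEXTURE_2D, 0, ...) as usual. */
}
```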
http://en.wikipedia.org/wiki/Mipmap
Edit, for those saying this isn't true anymore:
It's true that many modern GPUs can support non-power-of-two textures but it's also true that many cannot.
In fact, just last week I had a 1024x768 texture in an XNA app I was working on, and it caused a crash on game load on a laptop that was only about a year old. It worked fine on most machines, though. It's a safe bet that the iPhone's GPU is considerably simpler than a full PC GPU.

Typically, graphics hardware works natively with textures in power-of-2 dimensions. I'm not sure of the implementation/construction details that cause this to be the case, but it's generally how it is everywhere.
EDIT: With a little research, it turns out my knowledge is a little out of date -- a lot of modern graphics cards can handle arbitrary texture sizes now. I would imagine that with the space limitations of a phone's graphics processor though, they'd probably need to omit anything that would require extra silicon like that.

You can find OpenGL ES support info for Apple iPod/iPhone devices here:
Apple OpenGL ES support
OpenGL ES 2.0 is based on desktop OpenGL 2.0.
The constraint on texture sizes was only lifted in version 2.0 (and even then, core ES 2.0 only guarantees non-power-of-two textures without mipmaps and with CLAMP_TO_EDGE wrapping).
So if you are using an OpenGL ES version below 2.0, this is expected behavior.
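If you would rather detect support at runtime than key off the version, you can check the extension string; on Apple's ES 1.1 hardware, limited non-power-of-two support is advertised as GL_APPLE_texture_2D_limited_npot, and full support as GL_OES_texture_npot (a sketch, assuming a current context):

```c
#include <string.h>
#include <OpenGLES/ES1/gl.h>

/* Assumes a GL context is current. A naive substring match is fine
   here since neither name is a prefix of another extension. */
static int has_extension(const char *name) {
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    return all != NULL && strstr(all, name) != NULL;
}

/* Usage:
   if (!has_extension("GL_APPLE_texture_2D_limited_npot") &&
       !has_extension("GL_OES_texture_npot"))
       pad textures up to the next power of two. */
```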

I imagine it's a pretty decent optimization in graphics hardware to be able to assume power-of-2 textures. I bought a new laptop with the latest laptop graphics hardware, and if textures aren't power-of-2 in Maya, the rendering is all messed up.

Are you using PVRTC compression? That requires powers of 2 and square images.

Try implementing wrapping texture-mapping in software and you will quickly discover why power-of-two sizes are desirable.
In short, you will find that if you can assume power-of-2 dimensions then a lot of integer multiplications and divisions turn into bit-shifts.
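For example, here is what a software wrapped-texel fetch looks like both ways (a sketch; assumes non-negative coordinates):

```c
#include <stdint.h>

/* Software wrapped texel fetch, both ways. With power-of-two
   dimensions, "mod" becomes a mask and "times width" a shift; with
   arbitrary dimensions you pay for an integer division and an
   integer multiply on every sample. */
static uint32_t fetch_pot(const uint32_t *texels,
                          int log2_w, int log2_h, int x, int y) {
    x &= (1 << log2_w) - 1;           /* x mod width  */
    y &= (1 << log2_h) - 1;           /* y mod height */
    return texels[(y << log2_w) + x]; /* y * width + x */
}

static uint32_t fetch_npot(const uint32_t *texels,
                           int w, int h, int x, int y) {
    x %= w;                           /* integer division */
    y %= h;
    return texels[y * w + x];         /* integer multiply */
}
```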
I would hazard a guess that the recent trend in relaxing this restriction is due to GPUs moving to floating-point maths.
Edit: The "because of mipmapping" answer is incorrect. Mipmapped, non-power-of-two textures are a common feature of modern GPUs.

Related

Is a non-square tilable atlas effective?

I have a building model with 6K faces and I want to texture it with some pretty high-detail 512x512 tilable textures (each representing about 32cm x 32cm). I'd like to be as mobile-friendly as possible, not necessarily targeting old phones, but GearVR-capable phones.
The model happens to have mostly long horizontal quads, e.g.:
|-----------|---|----------------|
|-----------|---|----------------|
|-----------| |----------------|
|-----------| |----------------|
|-----------|---|----------------|
|-----------|---|----------------|
So the UVs of each of those horizontal sections can be stacked on one tileable texture, achieving both horizontal and vertical tiling.
Further, if the tiles were 512x512 textures, I could stack 8 of them in a 512x4096 non-square (but power of two) texture.
That way I could texture the main mesh with a single texture, plus perhaps one extra for metallic.
Is this reasonable, or should I keep them as separate 512x512 textures? Wouldn't separate textures mean something like 8x the draw calls, which would be far worse than a non-square 512x4096 texture?
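To make the layout concrete, here is a minimal sketch of how a face's UVs would map into one band of such a strip (trim_sheet_uv is a hypothetical helper; assumes the 8-band 512x4096 layout above with GL_REPEAT on the U axis):

```c
/* Hypothetical helper: map a face's UVs into band `tile` (0-7) of a
   512x4096 trim sheet. U is left free to repeat horizontally
   (GL_REPEAT); V is confined to the band, so a face must not need
   vertical repetition within itself. */
typedef struct { float u, v; } UV;

static UV trim_sheet_uv(int tile, float u, float v01) {
    const float band = 1.0f / 8.0f;     /* 8 stacked 512x512 tiles */
    UV out;
    out.u = u;                          /* may exceed [0,1]: tiles horizontally */
    out.v = ((float)tile + v01) * band; /* v01 in [0,1] inside the band */
    return out;
}
```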
After some research, I found that this technique of stacking horizontally tiling textures is called using a trim sheet. It is very much valid, and is used extensively in game development to re-use high-detail textures on many different objects.
https://www.youtube.com/watch?v=IziIY674NAw
The trim sheet info I found, though, did not cover 'non-square', which is the main question. But from several sources I found that some devices do not support non-square textures and some do, and some support them but don't compress non-square well, so it's a 'check your target devices' issue.
Assuming a device does support non-square, it should in fact save memory to have a strip of textures, and it should save draw calls, but your engine may just 'repeat them horizontally until square' for you when importing the texture, to 'be safe' (so again, check target devices and engines). It would perhaps be wise to limit yourself to 4 rather than 16 stacked textures, to avoid worst-case scenarios.
Hopefully the issue will be addressed either by video cards becoming able to draw several materials in one draw call, or by more universal good handling of texture strips, but it seems the state of the art has not focused on that yet.
Another solution is more custom: some people have created custom shaders that use vertex color information on a mesh to choose which part of a texture to use, and then tile from there. Apparently the overhead turned out to be quite low, and it was a success, so it's good to know about such 'backup plans'. This, however, is an engine/environment/device-specific kind of optimization, not a general modeling practice.
http://www.gamasutra.com/blogs/MarkHogan/20140721/221458/Unity_Optimizing_For_Mobile_Using_SubTile_Meshes.php

cocos2d zooming sprite without distortion?

I want to implement zooming of sprites with a pinch gesture in Cocos2d.
How do I achieve it without the image getting pixelated?
I tried with vectors but with no success; it seems I'm doomed to using raster bitmap images.
Do I need the largest possible image with the highest resolution to make it look nice?
What is the size limit for pngs in cocos2d?
What other pitfalls do I need to consider?
Yes. For example, if the sprite should cover an area of 1024x1024 pixels when zoomed in to the maximum, you need to create the image at 1024x1024 and set the scale property to below 1.0 to draw a smaller version. If you use a scale greater than 1.0, the image will always lose detail and become ever more blurred as the scale increases.
There is no size limit in cocos2d; it's the devices that impose the limit. Most devices can handle 2048x2048, except the 1st and 2nd generations, which support only 1024x1024. You wouldn't normally support these older devices though, so 2048x2048 should be the default. Several newer devices (iPad 2+, iPhone 4S+) can use up to 4096x4096 textures.
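Rather than hard-coding those limits, you can ask the driver at runtime (a minimal sketch, assuming a current GL context; the header path is the iOS ES 1.1 one):

```c
#include <OpenGLES/ES1/gl.h>

/* Assumes a GL context is current. Query the actual limit instead of
   hard-coding it per device generation. */
static GLint max_texture_size(void) {
    GLint size = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &size);
    return size; /* 1024, 2048 or 4096 depending on the device */
}
```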
Memory consumption. I'm not sure what you're trying to do, but developers often have little understanding of how much memory textures consume and how much memory is available. For instance, a 2048x2048 texture with 32-bit color consumes 16 MB of memory once loaded, regardless of the PNG's size on disk. Don't plan on using more than 4-5 of these, unless you're able to reduce the color bit depth and use TexturePacker to be able to use the compressed .pvr.ccz format. Read my article about optimizing memory usage for more info.
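For reference, the arithmetic behind that 16 MB figure (plain C, just the width * height * 4 bytes calculation):

```c
#include <stdio.h>

/* A PNG decompresses to raw RGBA in memory, so the file size on disk
   is irrelevant: width * height * 4 bytes at 32-bit color. */
int main(void) {
    const long w = 2048, h = 2048;
    const long bytes = w * h * 4;
    printf("%ldx%ld RGBA8888 = %ld bytes (%.0f MB)\n",
           w, h, bytes, bytes / (1024.0 * 1024.0));
    /* prints: 2048x2048 RGBA8888 = 16777216 bytes (16 MB) */
    return 0;
}
```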

OpenGL ES Tiled Texture Mipmap problem - iPad/iPhone

I'm running into the traditional tile/mipmap problem on the iPad using OpenGL ES. Basically, if you have a large texture (larger than 1k X 1k), you can break it up into pieces and map those pieces onto individual polygons. You can clamp the texture coordinates to the edges and it mostly works, but you get artifacts along the boundaries.
Now I know why you get this and know what the traditional solution is. To wit, you make a border around the outside of each smaller texture (say, 6 pixels). You sample from the big texture into the little ones so that you're only using the inside pixels (say, 256 - 2*6). Then you smear the valid pixels out into the border area. Lastly, you map your texture coordinates to use just those valid inside pixels. Works okay.
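In code, that last remapping step looks something like this (inset_coord is a hypothetical helper, using the 6-pixel border and 256-pixel tile from above):

```c
/* Remap a [0,1] texture coordinate into the valid interior of a tile
   that has a `border`-pixel smear on each side. */
static float inset_coord(float t, int tile_size, int border) {
    float inner = (float)(tile_size - 2 * border);
    return ((float)border + t * inner) / (float)tile_size;
}
/* inset_coord(0.0f, 256, 6) == 6/256; inset_coord(1.0f, 256, 6) == 250/256 */
```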
If you're not nodding along at this point, don't try to answer. :-)
Anyway, OpenGL introduced clamping modes way back in the day to solve this. I don't see those modes in OpenGL ES (at least on this hardware), and I see other references to this problem. What I'm wondering is whether I'm missing something. Is there a newer way to solve the tile/edge problem that I'm not aware of?
[Update]
A screen shot of the result is attached here. The visible line is at the end of one texture and the start of another. This is using CLAMP_TO_EDGE.
GL ES supplies GL_CLAMP_TO_EDGE but not GL_CLAMP. CLAMP_TO_EDGE clamps texture coordinates to the centres of the outermost pixels in a texture rather than to the extreme edges, so out-of-bounds (border or wraparound) accesses are completely prevented; with CLAMP they are not.
CLAMP_TO_EDGE is a part of the GL ES specification (as per here for 1.1 and here for 2.0), so if your hardware doesn't support it then it's not technically GL ES compliant. It's also available in full Open GL, but I think only as of version 1.2. It's implied that CLAMP_TO_EDGE made the leap to ES but CLAMP didn't because the former is considered to be a fixed version of the latter.
It sounds to me like CLAMP_TO_EDGE should be suitable for what you're doing — have I misunderstood?
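For completeness, setting it is one call per axis (a minimal sketch, assuming an ES context is current and the tile's texture is bound):

```c
#include <OpenGLES/ES1/gl.h>

/* Assumes a GL ES context is current and the tile's texture is bound.
   With CLAMP_TO_EDGE the sampler never reads past the outermost row
   or column of texels on either axis. */
static void clamp_tile_edges(void) {
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
```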
In the end the problem was related to texture compression. The lines were due to the compression method assuming the texture wrapped around.
I solved the problem by building slightly larger textures than needed, compressing and then using only an area within each texture, thus leaving a border.

Optimizing OpenGL ES on the iPhone and interpreting Instruments

I'm trying to push my FPS up on iPhone 3Gs from 30 as high as possible... and I'm running into a couple of issues and thought it would be better to ask for advice.
1) What exactly do the Renderer Utilization and Tiler Utilization columns in the OpenGL ES Instrument signify? My Tiler Utilization percentage is extremely low, and my Renderer Utilization tends to drop during user interaction and when the app is flipped to landscape mode. I noticed that my FPS tends to drop whenever the Renderer Utilization value drops as well. The FPS drop in landscape mode is particularly odd to me, because portrait and landscape mode use the exact same game logic and textures... and landscape mode actually renders fewer vertices/triangles to boot (some parts of the UI aren't drawn at all in landscape mode).
2) I've already done most of the recommended optimizations from the ngmoco/Stanford videos, and the only things I think are left are changing GLfloats to GLshorts and interleaving my vertices and texture coordinates into one array (see the sketch after this list). Are any of these likely to have a large effect on my FPS? It's a 2D sprite game with lots of large, detailed textures...
3) Which is a faster way to hide a polygon: setting all of its vertices to the same coordinates (essentially reducing it to a point), or setting its alpha value to 0? I'm guessing it's the former, since blending is slower in general and particularly expensive on the iPhone.
4) Currently, I'm using two 512x512 textures, a 1024x512 texture, and a 256x256 texture. I've sought advice on how best to manage this, and I was told not to combine them into one 1024x1024 texture because of memory problems on the iPhone 3G. I'd like to confirm that here, because if I put everything into one texture, I could eliminate the repeated glBindTexture calls...
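Regarding question 2, here is a minimal sketch of such an interleaved, GLshort-based layout (the struct and function are illustrative; assumes an ES 1.1 context with vertex and texcoord client arrays enabled):

```c
#include <stddef.h>
#include <OpenGLES/ES1/gl.h>

/* One interleaved array of GLshorts instead of two GLfloat arrays:
   position and texcoord for a vertex sit next to each other in
   memory, and shorts halve the bandwidth of floats. */
typedef struct {
    GLshort x, y; /* position */
    GLshort s, t; /* texture coordinate */
} SpriteVertex;

static void bind_interleaved(const SpriteVertex *v) {
    const GLsizei stride = sizeof(SpriteVertex);
    glVertexPointer(2, GL_SHORT, stride,
                    (const char *)v + offsetof(SpriteVertex, x));
    glTexCoordPointer(2, GL_SHORT, stride,
                      (const char *)v + offsetof(SpriteVertex, s));
}
```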
To #4: (a) yes, the iPhone is documented not to deal with images larger than 1024 on a side. 1024x1024 is the maximum theoretical limit, although you may run into problems if you try to push right up against the limit.
(b) all your textures won't fit into one 1024x1024; after the 1024x512 and the two 512x512s fill that space, you'll still have the 256x256 left over.

PVR Texture Compression Tiling (exposing edge context)

I've got PVR texture compression working all happy and good in my iPhone game, but I've got issues when tiling multiple textures together. Basically, I've got a very large background which is split into multiple 512x512 tiles, all PVR compressed, which are then drawn together to look like one big background image. The problem is the way PVR compression works: the encoder doesn't know it's supposed to be compressing each tile as part of a really big texture, i.e. it can't use a neighbor's tile information to determine how to perform the compression, so artifacts appear where the tiles meet.
I can think of maybe a couple ways to do this.
1) Somehow tell the texturetool command-line program to account for the other images that will be adjacent.
2) Use the command line program to generate a huge PVR texture that represents the whole image, then somehow split up the bytes into multiple images - probably impossible.
3) Do some kind of OpenGL ES trickiness that blends the edges nicely.
4) Do some trickiness where I have redundant information in each tile and then clip those areas when the texture is drawn (please no).
Hopefully I can do 1, 2 or 3, or there is some other well known solution.
I ended up going with option 4. I don't think this was a situation where PVRTC isn't appropriate - in fact it's almost a necessity. When I've got a total of 24 512x512 textures in memory at once (representing a very large background and foreground), keeping those uncompressed is suicide. So I simply used PVR compression as normal, then edited a few lines of code in my tiling algorithm so that the tiles overlap and are trimmed by 15 pixels on each end (sketched below). Voila, looks great. It took a couple of days and was pretty annoying, but I think this is a good option for people who need very large tiled backgrounds on the iPhone.
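A rough reconstruction of what that tiling change amounts to (the helper and its numbers are illustrative, based on the description above):

```c
/* 512x512 PVR tiles are laid out 512 - 2*15 = 482 units apart, and
   the texture coordinates are inset by 15/512 on each side so only
   each tile's interior is ever visible. */
enum { TILE = 512, TRIM = 15, STEP = TILE - 2 * TRIM };

static void tile_quad(int col, int row, float pos[4], float tex[4]) {
    pos[0] = (float)(col * STEP);          /* x0 */
    pos[1] = (float)(row * STEP);          /* y0 */
    pos[2] = pos[0] + (float)STEP;         /* x1 */
    pos[3] = pos[1] + (float)STEP;         /* y1 */
    tex[0] = tex[1] = (float)TRIM / TILE;           /* u0, v0 = 15/512  */
    tex[2] = tex[3] = (float)(TILE - TRIM) / TILE;  /* u1, v1 = 497/512 */
}
```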
My best advice, but not what you asked, is to know when PVRTC isn't appropriate. By far the simplest solution is to just not use PVRTC for those tiles. I've spent a lot of time trying to bend PVRTC to work in situations it just isn't suited for.
That said,
When using PVRTC, the texture is always assumed to tile (with itself), thus pixels on the right edge affect the pixels on the left edge (same with top & bottom). So choices 1 or 2 likely won't work.
One possibility is to add an alpha channel to the tiles and allow them to fade out around the edges so that when you overlap them, they fade into each other. Keep in mind PVRTC tends to work better with gradual alpha fades. Hard alpha edges often have artifacts in PVRTC.