OpenGL ES Tiled Texture Mipmap problem - iPad/iPhone

I'm running into the traditional tile/mipmap problem on the iPad using OpenGL ES. Basically, if you have a large texture (larger than 1k X 1k), you can break it up into pieces and map those pieces onto individual polygons. You can clamp the texture coordinates to the edges and it mostly works, but you get artifacts along the boundaries.
Now I know why you get this and know what the traditional solution is. To wit, you make a border around the outside of each smaller texture (say 6 pixels). You sample from the smaller textures into the big one so you're only using those inside pixels (say 256 - 2*6 = 244). Then you smear the valid pixels out into the border area. Lastly, you map your texture coordinates to use only those valid inside pixels. Works okay.
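To make the arithmetic concrete, here is a minimal, hypothetical sketch of that inset mapping, assuming a 256-pixel tile with a 6-pixel smeared border (both numbers are just the ones used above):

    /* Hypothetical sketch: map a logical coordinate t in [0,1] onto only the
       valid interior of a tile that carries a smeared border. */
    float inset_uv(float t, float tile_size, float border)
    {
        float inner = tile_size - 2.0f * border;   /* 256 - 2*6 = 244 valid pixels */
        return (border + t * inner) / tile_size;   /* t=0 -> 6/256, t=1 -> 250/256 */
    }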
If you're not nodding along at this point, don't try to answer. :-)
Anyway, OpenGL introduced clamping modes way back in the day to solve this. I don't see those modes in OpenGL ES (at least on this hardware), and I see other references to this problem. What I'm wondering is whether I'm missing something. Is there a newer way to solve the tile/edge problem that I'm not aware of?
[Update]
A screen shot of the result is attached here. The visible line is at the end of one texture and the start of another. This is using CLAMP_TO_EDGE.

GL ES supplies GL_CLAMP_TO_EDGE but not GL_CLAMP. CLAMP_TO_EDGE clamps to the centres of the outermost pixels in a texture rather than to the extreme edges, so out-of-bounds (border or wraparound) accesses are completely prevented with CLAMP_TO_EDGE but not with CLAMP.
CLAMP_TO_EDGE is part of the GL ES specification (as per here for 1.1 and here for 2.0), so if your hardware doesn't support it then it's not technically GL ES compliant. It's also available in full OpenGL, but I think only as of version 1.2. It's implied that CLAMP_TO_EDGE made the leap to ES but CLAMP didn't because the former is considered to be a fixed version of the latter.
It sounds to me like CLAMP_TO_EDGE should be suitable for what you're doing — have I misunderstood?
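For reference, selecting that wrap mode is one call per axis on the currently bound texture; a minimal sketch (textureId is a placeholder for your texture name):

    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);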

In the end the problem was related to texture compression. The lines were due to the compression method assuming the texture wrapped around.
I solved the problem by building slightly larger textures than needed, compressing and then using only an area within each texture, thus leaving a border.

Related

is non-square tilable atlas effective

I have a building model with 6K faces and I want to texture it with some pretty high-detail 512x512 tileable textures (each representing about 32cm x 32cm). I'd like to be as mobile-friendly as possible, not necessarily for old phones but for GearVR-capable phones.
The model happens to have mostly long horizontal quads, e.g.
|-----------|---|----------------|
|-----------|---|----------------|
|-----------| |----------------|
|-----------| |----------------|
|-----------|---|----------------|
|-----------|---|----------------|
So the UVs of each of those horizontal sections can be stacked on one tileable texture, to achieve both horizontal and vertical tiling.
Further, if the tiles were 512x512 textures, I could stack 8 of them in a 512x4096 non-square (but power of two) texture.
That way I could texture the main mesh with a single texture, plus one extra for metallic.
Is this reasonable, or should I keep them as separate 512x512 textures? Wouldn't separate textures mean something like 8x the draw calls, which would be far worse than a non-square 512x4096 texture?
After some research, I found that the technique of stacking textures and tiling horizontally is called using a trim sheet. It is very much valid, and used extensively in game development to re-use high-detail textures on many different objects.
https://www.youtube.com/watch?v=IziIY674NAw
The trim sheet info I found, though, did not cover 'non-square', which is the main question. But from several sources I found that some devices do not support non-square textures and some do, and some do but don't compress non-square textures well, so it's a 'check your target devices' issue.
Assuming a device does support non-square, it should in fact save memory to have a strip of textures, and should save draw calls, but your engine may just 'repeat them horizontally until square' for you when importing the texture to 'be safe' (so again, check target devices and engines). It would perhaps be wise to limit to 4 rather than 16 stacked textures, to avoid 'worst case scenarios'.
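To make the band layout concrete, here is a hypothetical C sketch of how UVs could be assigned for one band of an eight-high 512x4096 strip; the band count and the idea of letting U run past 1.0 (with GL_REPEAT) to tile horizontally are assumptions for illustration:

    /* Hypothetical sketch: pick the V range for band `index` in a strip of
       BAND_COUNT stacked square tiles. U may exceed 1.0 and tile with
       GL_REPEAT; V must stay inside the band, so vertical tiling is not free. */
    #define BAND_COUNT 8

    void band_uv(int index, float u_tiles,
                 float *u0, float *v0, float *u1, float *v1)
    {
        *u0 = 0.0f;
        *u1 = u_tiles;                          /* e.g. 3.0f repeats the band three times */
        *v0 = (float)index / BAND_COUNT;        /* top edge of the band */
        *v1 = (float)(index + 1) / BAND_COUNT;  /* bottom edge of the band */
    }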
Hopefully, the issue will be addressed either by video cards becoming able to do several materials in one draw call, or by more universal handling of texture strips, but it seems the state of the art has not focused on that yet.
Another solution is more custom, but some people have created custom shaders that use vertex color information on a mesh to choose which part of a texture to use, and then tile from there. Apparently the overhead turned out to be quite low, and it was a success, so it's good to have an idea about 'backup plans'. This, however, would be an engine/environment/device-specific kind of optimization, not a general modeling practice.
http://www.gamasutra.com/blogs/MarkHogan/20140721/221458/Unity_Optimizing_For_Mobile_Using_SubTile_Meshes.php

antialiasing iPhone OpenGLES

I need antialiasing on the iPhone 3G (OpenGL ES 1.1), NOT the iPhone 3GS with OpenGL ES 2.0.
I've drawn a 3D model, and the pixels along the edges of the model look jagged, like teeth.
I've tried setting various texture filters, but those filters only make the INSIDE of the texture look better.
How can I get good antialiasing?
Maybe I should use some kind of smoothing when drawing the triangles? If so, how is that possible in OpenGL ES 1.1?
Thanks.
As of iOS 4.0, full-screen anti-aliasing is directly supported via an Apple extension to OpenGL. The basic concept is similar to epatel's suggestion: render the scene onto a larger framebuffer, then copy that down to a screen-sized framebuffer, then copy that buffer to the screen. The difference is, instead of creating a texture and rendering it onto a quad, the copy/sample operation is performed by a single function call (specifically, glResolveMultisampleFramebufferAPPLE()).
For details on how to set up the buffers and modify your drawing code, you can read a tutorial on the Gando Games blog which is written for OpenGL ES 1.1; there is also a note on Apple's Developer Forums explaining the same thing.
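As a rough, hedged outline of that setup (names like sampleFBO, resolveFBO, width and height are placeholders, and the depth attachment plus error checking are omitted), the ES 1.1 path looks roughly like this:

    GLuint sampleFBO, sampleColorRB;

    /* 1. Create a multisampled FBO to render into. */
    glGenFramebuffersOES(1, &sampleFBO);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFBO);
    glGenRenderbuffersOES(1, &sampleColorRB);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleColorRB);
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES,
                                          width, height);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES, sampleColorRB);

    /* ... draw the scene into sampleFBO as usual ... */

    /* 2. Resolve the samples into the normal screen-sized framebuffer. */
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFBO);
    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, resolveFBO);
    glResolveMultisampleFramebufferAPPLE();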
Thanks to Bersaelor for pointing this out in another SO question.
You can render into a larger FBO and then use that as a texture on a square.
Have a look at this article for an explanation.
Check out the EGL_SAMPLE_BUFFERS and EGL_SAMPLES parameters to eglChooseConfig(), as well as glEnable(GL_MULTISAMPLE).
EDIT: Hrm, apparently you're out of luck, at least as far as standardized approaches go. As mentioned in that thread you can render to a large off-screen texture and scale to a smaller on-screen quad or jitter the view matrix several times.
We found another way to achieve this. If you edit your textures and add, for example, a 2-pixel frame of transparent pixels, the colored pixels in the texture are blended with the transparent pixels where necessary, giving a basic anti-aliasing effect. You can read the full article here on our blog.
The advantage of this approach is that you are not rendering a bigger image, copying a buffer, or, even worse, making a texture from a buffer, so there is no impact on performance.

OpenGL ES. Scrolling 3 layer starfield textures gets me from 60 -> 40 FPS

I need to draw the background for a 2D space scrolling shooter. I need to implement 3 layers of stars: one distant nebula (moving really slow) in the background, one layer of far away stars (moving slow) and one layer of close stars (moving normal) on top of the other two.
The way I first tried this was using 3 textures of 320 x 480 that were transparent PNGs of the stars. I used GL_BLEND and SRC_ALPHA, ONE_MINUS_SRC_ALPHA.
The results were not great, even on the 3GS. On the first-generation devices the FPS dropped to 40-50, so I think I'm doing this the wrong way.
When I disable GL_BLEND everything works great, even on the 1st-gen devices, and the FPS is back to 60 again... so it must be the fact that I'm trying to blend large transparent textures.
The problem is I don't know how to do it some other way...
Should I draw only the first nebula as an opaque texture and then try to emulate the middle and top star layers with small points moving around the screen?
Is there any other approach to the blending issue? How can I speed up the rendering process? Is one big texture (tileset) the answer?
Please help me because I'm stuck here and I can't get out.
I don't know what you want your stars to look like, but you might want to try moving them from a texture to geometry by using GL_POINTS in DrawElements or DrawArrays; maybe just replace the top two layers with layers of geometry. You can manipulate the points using PointSize, PointSizePointerOES and PointParameter to modify how the points are rendered.
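A minimal, hypothetical ES 1.1 sketch of that idea (starPositions and starCount are placeholders for a packed x,y array and its length):

    glDisable(GL_TEXTURE_2D);                  /* plain untextured points */
    glPointSize(2.0f);                         /* one size for the whole layer */

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, starPositions);
    glDrawArrays(GL_POINTS, 0, starCount);
    glDisableClientState(GL_VERTEX_ARRAY);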
You might want to use multi-texturing to see if that speeds it up. Each multi-texture stage can be assigned a unique transformation matrix, so you should be able to translate each layer at different speeds.
I believe all iPhone models support two texture stages, so you should be able to combine two of your layers into a single draw call. You might still need to resort to blending for the third layer.
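A hedged ES 1.1 sketch of that two-stage setup (nebulaTex, starsTex, scrollSlow and scrollFast are placeholders; texcoord arrays still need to be supplied for each unit via glClientActiveTexture):

    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, nebulaTex);
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.0f, scrollSlow, 0.0f);      /* slow-moving background layer */

    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, starsTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);  /* layer stars over the nebula by their alpha */
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.0f, scrollFast, 0.0f);      /* faster star layer */

    glMatrixMode(GL_MODELVIEW);
    /* ...then draw a single full-screen quad with texcoords for both units... */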
Also note that alpha testing could be faster than alpha blending.
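If the star edges can be hard, that might look like this (the 0.5 threshold is just an example):

    glDisable(GL_BLEND);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);   /* discard mostly-transparent fragments instead of blending */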
Good luck!
The back nebula should definitely be opaque; everything else is getting drawn on top of it, and I assume the only thing behind it is black. Also, prideout has a point: assuming your star layers can have effectively 1-bit alpha, that's definitely something you can try. Failing that, the GL_POINTS technique Harald mentions would work as well.

Large scrolling background in OpenGL ES

I am working on a 2D scrolling game for iPhone. I have a large image background, say 480×6000 pixels, of which only a part is visible at any time (exactly one screen’s worth, 480×320 pixels). What is the best way to get such a background on the screen?
Currently I have the background split into several textures (to get around the maximum texture size limit) and draw the whole background in each frame as a textured triangle strip. The scrolling is done by translating the modelview matrix. The scissor box is set to the window size, 480×320 pixels. This is not meant to be fast, I just wanted a working code before I get to optimizing.
I thought that maybe the OpenGL implementation would be smart enough to discard the invisible portion of the background, but according to some measuring code I wrote it looks like the background takes 7 ms to draw on average and 84 ms at maximum. (This is measured in the simulator.) This is about half of the whole render loop, i.e. quite slow for me.
Drawing the background should be as easy as copying some 480×320 pixels from one part of the VRAM to another, or, in other words, blazing fast. What is the best way to get closer to such performance?
That's the fast way of doing it. Things you can do to improve performance:
Try different texture-formats. Presumably the SDK docs have details on the preferred format, and presumably smaller is better.
Cull out entirely offscreen tiles yourself
Split the image into smaller textures
I'm assuming you're drawing at a 1:1 zoom-level; is that the case?
Edit: Oops. Having read your question more carefully, I have to offer another piece of advice: Timings made on the simulator are worthless.
The quick solution:
Create a geometry matrix of tiles (quads preferably) so that there is at least one row/column of off-screen tiles on all sides of the viewable area.
Map textures to all those tiles.
As soon as one tile is outside the viewable area you can release this texture and bind a new one.
Move the tiles using a modulo of the tile width and tile height as the position (so that a tile repositions itself at its starting position once it has moved exactly one tile length). Also remember to remap the textures during that operation. This allows you to have a very small grid and very little texture memory loaded at any given time, which I guess is especially important in GL ES.
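A hypothetical sketch of that wrap-around positioning (tile and grid sizes are illustrative placeholders):

    #include <math.h>

    #define TILE_W  128.0f   /* tile width in world units */
    #define GRID_W  6        /* number of tile columns in the grid */

    float wrapped_tile_x(int column, float scrollX)
    {
        float span = TILE_W * GRID_W;                      /* width covered by the whole grid */
        float x = column * TILE_W - fmodf(scrollX, span);  /* slide left as the view scrolls right */
        if (x < -TILE_W)                                   /* fully off the left edge... */
            x += span;                                     /* ...wrap to the right; rebind its texture here */
        return x;
    }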
If you have memory to spare and are still plagued by slow load speeds (although you shouldn't be, for that amount of textures), you could build a texture streaming engine that preloads textures into faster memory (whatever that may be on your target device) when you reach a new area. Mapping them as textures will in that case read from that faster memory when needed. Just be sure that you are able to preload without using up all memory, and remember to release it dynamically when not needed.
Here is a link to a GL (not ES) tile engine. I haven't used it myself so I cannot vouch for its functionality but it might be able to help you: http://www.mesa3d.org/brianp/TR.html

Why do images for textures on the iPhone need to have power-of-two dimensions?

I'm trying to solve a flickering problem on the iPhone (OpenGL ES game). I have a few images that don't have power-of-2 dimensions. I'm going to replace them with images of appropriate dimensions... but why do the dimensions need to be powers of two?
The reason that most systems (even many modern graphics cards) demand power-of-2 textures is mipmapping.
What is mipmapping?
Smaller versions of the image are created so that the texture still looks correct when drawn at a very small size. The image is divided by 2 over and over to make new images.
So, imagine a 256x128 image. This would have smaller versions created of dimensions 128x64, 64x32, 32x16, 16x8, 8x4, 4x2, 2x1, and 1x1.
If this image was 256x192, it would work fine until you got down to a size of 4x3. The next smaller image would be 2x1.5 which is obviously not a valid size. Some graphics hardware can deal with this, but many types cannot.
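A tiny illustration of that halving rule (plain C, nothing GL-specific):

    #include <stdio.h>

    /* Halve each dimension, clamping at 1, and stop at the first level where
       a plain halving would not be a whole number. */
    static void print_mip_chain(int w, int h)
    {
        while (w > 1 || h > 1) {
            printf("%dx%d ", w, h);
            if ((w > 1 && w % 2) || (h > 1 && h % 2)) {
                printf("<- next level would not be integral\n");
                return;
            }
            if (w > 1) w /= 2;
            if (h > 1) h /= 2;
        }
        printf("1x1\n");
    }

    int main(void)
    {
        print_mip_chain(256, 128);   /* ... 8x4 4x2 2x1 1x1 -- a clean chain */
        print_mip_chain(256, 192);   /* ... 8x6 4x3 <- stops; 2x1.5 is not valid */
        return 0;
    }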
Some hardware also requires a square image but this isn't very common anymore.
Why do you need mipmapping?
Imagine that you have a picture that is VERY far away, so far away as to be only the size of 4 pixels. Now, when each pixel is drawn, a position on the image will be selected as the color for that pixel. So you end up with 4 pixels that may not be at all representative of the image as a whole.
Now, imagine that the picture is moving. Every time a new frame is drawn, a new pixel is selected. Because the image is SO far away, you are very likely to see very different colors for small changes in movement. This leads to very ugly flashing.
Lack of mipmapping causes problems for any size that is smaller than the texture size, but it is most pronounced when the image is drawn down to a very small number of pixels.
With mipmaps, the hardware will have access to 2x2 version of the texture, so each pixel on it will be the average color of that quadrant of the image. This eliminates the odd color flashing.
http://en.wikipedia.org/wiki/Mipmap
Edit to people who say this isn't true anymore:
It's true that many modern GPUs can support non-power-of-two textures but it's also true that many cannot.
In fact, just last week I had a 1024x768 texture in an XNA app I was working on, and it caused a crash upon game load on a laptop that was only about a year old. It worked fine on most machines, though. It's a safe bet that the iPhone's GPU is considerably simpler than a full PC GPU.
Typically, graphics hardware works natively with textures in power-of-2 dimensions. I'm not sure of the implementation/construction details that cause this to be the case, but it's generally how it is everywhere.
EDIT: With a little research, it turns out my knowledge is a little out of date -- a lot of modern graphics cards can handle arbitrary texture sizes now. I would imagine that with the space limitations of a phone's graphics processor though, they'd probably need to omit anything that would require extra silicon like that.
You can find OpenGL ES support info for Apple iPod/iPhone devices here:
Apple OpenGL ES support
OpenGL ES 2.0 is defined as equivalent to OpenGL 2.0.
The constraint on texture sizes only disappeared as of version 2.0.
So if you are using an OpenGL ES version lower than 2.0, this is the normal situation.
I imagine it's a pretty decent optimization in the graphics hardware to assume power-of-2 textures. I bought a new laptop with the latest laptop graphics hardware, and if textures aren't power-of-2 in Maya, the rendering is all messed up.
Are you using PVRTC compression? That requires powers of 2 and square images.
Try implementing wrapping texture-mapping in software and you will quickly discover why power-of-2 sizes are desirable.
In short, you will find that if you can assume power-of-2 dimensions then a lot of integer multiplications and divisions turn into bit-shifts.
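A small sketch of the kind of saving meant here (sizes are illustrative; the texture must be a power of two for the fast path to be valid):

    #include <stdint.h>

    #define TEX_W      256   /* power-of-two width (and height, for a square texture) */
    #define TEX_SHIFT  8     /* log2(TEX_W) */

    static uint32_t sample_wrapped(const uint32_t *texels, int u, int v)
    {
        u &= TEX_W - 1;                         /* u % TEX_W as a single AND */
        v &= TEX_W - 1;                         /* same wrap for v */
        return texels[(v << TEX_SHIFT) + u];    /* v * TEX_W + u as a shift and add */
    }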
I would hazard a guess that the recent trend in relaxing this restriction is due to GPUs moving to floating-point maths.
Edit: The "because of mipmapping" answer is incorrect. Mipmapped, non-power-of-two textures are a common feature of modern GPUs.