Large scrolling background in OpenGL ES - iPhone

I am working on a 2D scrolling game for iPhone. I have a large background image, say 480×6000 pixels, of which only a part is visible at any time (exactly one screen's worth, 480×320 pixels). What is the best way to get such a background on the screen?
Currently I have the background split into several textures (to get around the maximum texture size limit) and I draw the whole background each frame as a textured triangle strip. The scrolling is done by translating the modelview matrix. The scissor box is set to the window size, 480×320 pixels. This is not meant to be fast; I just wanted working code before I get to optimizing.
I thought that maybe the OpenGL implementation would be smart enough to discard the invisible portion of the background, but according to some measuring code I wrote, it looks like the background takes 7 ms to draw on average and 84 ms at maximum. (This is measured in the simulator.) That is about half of the whole render loop, i.e. quite slow for me.
Drawing the background should be as easy as copying some 480×320 pixels from one part of the VRAM to another, or, in other words, blazing fast. What is the best way to get closer to such performance?

That's the fast way of doing it. Things you can do to improve performance:
Try different texture-formats. Presumably the SDK docs have details on the preferred format, and presumably smaller is better.
Cull out entirely offscreen tiles yourself (see the sketch at the end of this answer)
Split the image into smaller textures
I'm assuming you're drawing at a 1:1 zoom-level; is that the case?
Edit: Oops. Having read your question more carefully, I have to offer another piece of advice: Timings made on the simulator are worthless.
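To illustrate the culling point: a minimal sketch, assuming the 480×6000 strip is cut into row tiles of fixed height, the scroll offset is vertical, and each tile owns four consecutive vertices in the bound vertex array (tileTexture, numTiles and the 512-pixel tile height are made up for illustration):

    #include <OpenGLES/ES1/gl.h>

    #define TILE_H   512   /* assumed height of one background tile */
    #define SCREEN_H 320   /* visible height of the window          */

    /* Assumed to be filled in when the background is split into textures;
       the quad for tile i starts at vertex index i*4 in the bound array. */
    extern GLuint tileTexture[];
    extern int    numTiles;

    void drawVisibleTiles(int scrollY)
    {
        int first = scrollY / TILE_H;                   /* first tile overlapping the window */
        int last  = (scrollY + SCREEN_H - 1) / TILE_H;  /* last tile overlapping the window  */
        if (first < 0) first = 0;

        for (int i = first; i <= last && i < numTiles; ++i) {
            glBindTexture(GL_TEXTURE_2D, tileTexture[i]);
            glDrawArrays(GL_TRIANGLE_STRIP, i * 4, 4);  /* only the quads that can be seen */
        }
    }

With 512-pixel tiles at most two rows ever overlap a 320-pixel window, so you submit two quads per frame instead of the whole strip.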

The quick solution:
Create a grid of tile geometry (preferably quads) so that there is at least one row/column of off-screen tiles on every side of the viewable area.
Map textures to all those tiles.
As soon as a tile is outside the viewable area you can release its texture and bind a new one.
Position the tiles using the scroll offset modulo the tile width and tile height (so that a tile repositions itself at its starting position once it has moved exactly one tile length). Also remember to remap the textures during that operation. This allows you to have a very small grid and very little texture memory loaded at any given time, which I guess is especially important in GL ES.
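A minimal sketch of that wrap-around positioning, assuming vertical scrolling, square tiles and a textureForRow() lookup that returns the texture for a given world row (these names, and the 256-pixel tile size, are just for illustration):

    #include <OpenGLES/ES1/gl.h>
    #include <math.h>

    #define GRID_ROWS 3        /* enough rows to cover the screen plus one off-screen row */
    #define TILE      256.0f   /* assumed tile size */

    extern GLuint textureForRow(int worldRow);   /* hypothetical texture lookup/remap */

    /* Assumes a TILE x TILE quad is already set up with glVertexPointer. */
    void drawWrappedBackground(float scrollY)
    {
        int   firstRow = (int)(scrollY / TILE);  /* world row at the bottom of the grid   */
        float offset   = fmodf(scrollY, TILE);   /* how far that row has scrolled already */

        for (int row = 0; row < GRID_ROWS; ++row) {
            glBindTexture(GL_TEXTURE_2D, textureForRow(firstRow + row));
            glPushMatrix();
            glTranslatef(0.0f, row * TILE - offset, 0.0f);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
            glPopMatrix();
        }
    }

The grid itself never grows; only the texture bound to each slot changes as rows scroll out of view.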
If you have memory to spare and are still plagued by slow load times (although you shouldn't be for that number of textures), you could build a texture streaming engine that preloads textures into faster memory (whatever that may be on your target device) when you reach a new area. Binding the textures will then read from that faster memory when needed. Just be sure that you are able to preload without using up all the memory, and remember to release it dynamically when it is no longer needed.
Here is a link to a GL (not ES) tile engine. I haven't used it myself so I cannot vouch for its functionality but it might be able to help you: http://www.mesa3d.org/brianp/TR.html

Related

Is a non-square tileable atlas effective?

I have a building model with 6K faces and I want to texture it with some fairly high-detail 512x512 tileable textures (each representing about 32cm x 32cm). I'd like to be as mobile-friendly as possible, not necessarily targeting old phones, but rather GearVR-capable phones.
The model happens to have mostly long horizontal quads, e.g.
|-----------|---|----------------|
|-----------|---|----------------|
|-----------| |----------------|
|-----------| |----------------|
|-----------|---|----------------|
|-----------|---|----------------|
So the UVs of each of those horizontal sections can be stacked on one tileable texture to achieve both horizontal and vertical tiling.
Further, if the tiles were 512x512 textures, I could stack 8 of them in a 512x4096 non-square (but power of two) texture.
That way I could texture the main mesh with a single texture, plus one extra for metallic.
Is this reasonable, or should I keep them as separate 512x512 textures? Wouldn't separate textures mean roughly 8x the draw calls, which would be far worse than a non-square 512x4096 texture?
After some research, I found that the technique of stacking textures and tiling horizontally is called using a trim sheet. It is very much valid and is used extensively in game development to re-use high-detail textures on many different objects.
https://www.youtube.com/watch?v=IziIY674NAw
The trim sheet info I found, though, did not cover 'non-square', which is the main question. But from several sources I found that some devices do not support non-square textures and some do, and some support them but don't compress them well, so it's a 'check your target devices' issue.
Assuming a device does support non-square, it should in fact save memory to have a strip of textures, and should save draw calls, but your engine may just 'repeat them horizontally until square' for you when importing the texture to 'be safe' (so again, check target devices and engines). It would perhaps be wise to limit to 4 rather than 16 stacked textures, to avoid 'worst case scenarios'.
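Not covered by the sources above, but for illustration, the UV bookkeeping for such a strip is just an offset and a scale on V. A sketch, following the 512x4096 example with eight stacked tiles (the helper name is hypothetical):

    #include <math.h>

    typedef struct { float u, v; } UV;

    /* Sketch: map tile-local (u, v) into the V band of one of the 8 tiles
       stacked in a 512x4096 trim sheet. U can still use GL_REPEAT across
       the strip; V repetition has to stay inside the tile's own band,
       here wrapped on the CPU. */
    UV trimSheetUV(int tileIndex, float u, float v)   /* tileIndex in [0, 7] */
    {
        const float band = 1.0f / 8.0f;               /* each tile owns 1/8 of the V range */
        UV out;
        out.u = u;
        out.v = tileIndex * band + (v - floorf(v)) * band;
        return out;
    }

If the UVs never need to repeat vertically within a face, you can skip the wrap entirely and just bake the V offset into the mesh.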
Hopefully the issue will be addressed either by video cards becoming able to handle several materials in one draw call, or by more universal support for texture strips, but it seems the state of the art has not focused on that yet.
Another, more custom solution: some people have created shaders that use vertex color information on a mesh to choose which part of a texture to use, and then tile from there. Apparently the overhead turned out to be quite low, and it was a success, so it's good to have an idea about 'backup plans'. This, however, would be an engine/environment/device-specific kind of optimization, not a general modeling practice.
http://www.gamasutra.com/blogs/MarkHogan/20140721/221458/Unity_Optimizing_For_Mobile_Using_SubTile_Meshes.php

zoom in the GLPaint sample code

I would like to make an app where you can paint like in the GLPaint sample code, but also zoom in to paint in more detail within your painting.
But I get the feeling that OpenGL ES 1.0, which is used in the GLPaint app, is pretty difficult to learn and could be a bit of overkill for my needs.
If I change the main view's frame with the setFrame method to zoom via a gesture recognizer, the already painted lines get erased with every change of the frame's size.
So I tried to realize it with another idea: in the touchesMoved method I add UIImageViews with an image of the brush at "many" positions. It is slower than the GLPaint app and a bit of a memory-management mess, but I don't see another way to go.
Any suggestions? Learn OpenGL ES 1.0 or 2.0, or try to realize the last idea?
You can certainly achieve what you are after; however, it will require some effort.
Usually zooming is quite straightforward, as most OpenGL scenes do not rely on the accumulation buffer the way the GLPaint sample code does.
If you just zoom your view in GLPaint, new painting will be drawn at the adjusted scale over your original drawing - which is almost certainly not what you want.
A work-around is, instead of drawing directly to your presenting screen buffer, to first render to a texture buffer, then render that texture buffer on a quad (or equivalent). That way the quad scene can be cleared and re-rendered on every frame refresh (at any scale you choose) while your paint buffer retains its accumulated strokes.
This has been tested and works.
I am quite sure the image view method will be overkill after drawing for a few minutes... You can do all the zooming quite nicely with OpenGL and I suggest you do that. Best practice would be to create a canvas as large as possible so that when you zoom in you will not lose any resolution.
About zooming: do not try to resize the GL frame (or any frame, for that matter), because even if you manage to do that successfully you will lose resolution. You should use the standard matrices to translate and scale the scene, or just play around with glOrtho (set its values to the rect you are currently seeing). Once you get that part working there are sadly two more things to do that require a bit of math: first, you will have to compute the new touch positions in the OpenGL scene, since locationInView: will not know about your zooming and translating; second, you probably need to scale the brush as well (make it smaller when the scene is zoomed in so you can draw details).
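For the touch math, a sketch of the idea, assuming the scene is drawn scaled by zoom and translated by (panX, panY) in view units (all names are illustrative; depending on your projection you may also have to flip the y axis, since UIKit and OpenGL use opposite y directions):

    typedef struct { float x, y; } Point2;

    /* Undo the zoom/translation so a point from locationInView: lands on the
       same spot of the canvas no matter how far you are zoomed in. */
    Point2 viewToScene(Point2 touch, float zoom, float panX, float panY)
    {
        Point2 p;
        p.x = (touch.x - panX) / zoom;
        p.y = (touch.y - panY) / zoom;
        return p;
    }

    /* Scale the brush the opposite way so it keeps a constant on-screen size,
       which is what lets you paint fine detail when zoomed in. */
    float sceneBrushSize(float screenBrushSize, float zoom)
    {
        return screenBrushSize / zoom;
    }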
About the canvas: I do suggest you draw to an FBO rather than your main render buffer and present its texture to your main render scene. Note that the FBO will have a texture attached and will need a power-of-two size (create 2048x2048, or 4096x4096 for newer devices), but you will probably only use part of it to keep the same ratio as the screen (glViewport should do the job), so you will have to compute the texture coordinates. Overall the drawing mechanism doesn't change much.
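A sketch of that FBO setup using the OES framebuffer extension that ES 1.1 exposes on the iPhone (the 1024x1024 size and the variable names are just for illustration):

    #include <OpenGLES/ES1/gl.h>
    #include <OpenGLES/ES1/glext.h>

    static GLuint canvasFBO, canvasTex;

    /* Create a power-of-two texture and attach it to an FBO. Strokes are
       accumulated into this texture; the main view then draws the texture
       on a quad at whatever zoom/translation is current. */
    void createCanvas(void)
    {
        glGenTextures(1, &canvasTex);
        glBindTexture(GL_TEXTURE_2D, canvasTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);          /* empty canvas */

        glGenFramebuffersOES(1, &canvasFBO);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, canvasFBO);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, canvasTex, 0);

        if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
            /* handle the error */
        }
    }

To paint, bind canvasFBO, set glViewport to the part of the texture you actually use, and draw the brush quads; to present, bind your screen framebuffer again and draw canvasTex on a single quad.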
So to sum this up: imagine you have a canvas (an FBO) to which you apply a brush of a certain size and position on touch events; then you use that canvas as a texture and draw it on your main GL view.

Handling large maps with CCTMXLayer

I'm just getting things going in my game and I'm using CCTMXLayer for my tiled background. Everything is going fine when my map is 30x30 tiles, but my world is about 500x500 tiles. I would just use a map that size, but it lags terribly during animation. Any ideas on how to handle a really large tiled map without lag?
Being biased here: check out Koboldtouch, specifically the features I added to make tilemaps more useful. Among them: no limits on map size, tilesets, or layers - as much as can fit in memory.
The only alternative is HKTMXTiledMap. I never actually used it, and the forum thread is full of (unresolved?) issues.
CCTMXTiledMap is not only slow; you can only create a 128x128-tile tilemap with a single layer and all tiles set to non-empty. 500x500 is only possible if you leave enough tiles empty that you never go above 16,384 tiles on the map. Unlikely. Restrictive.

OpenGL ES. Scrolling 3 layer starfield textures gets me from 60 -> 40 FPS

I need to draw the background for a 2D space scrolling shooter. I need to implement 3 layers of stars: one distant nebula (moving really slowly) in the background, one layer of far-away stars (moving slowly) and one layer of close stars (moving normally) on top of the other two.
The way I first tried this was using 3 textures of 320x480 that were transparent PNGs of the stars. I used GL_BLEND with SRC_ALPHA, ONE_MINUS_SRC_ALPHA.
The results were not great even on the 3GS. On the first-generation devices the FPS dropped to 40-50, so I think I'm doing this the wrong way.
When I disable GL_BLEND everything works great even on the 1st-gen devices and the FPS is back to 60 again... so it must be the fact that I'm trying to blend large transparent textures.
The problem is I don't know how to do it any other way...
Should I draw only the first nebula as an opaque texture and then try to emulate the middle and top star layers with small points moving around the screen?
Is there any other approach to the blending issue? How can I speed up the rendering process? Is one big texture (tileset) the answer?
Please help me, because I'm stuck here and I can't get out.
I don't know how you want your stars to look, but you might want to try moving them from a texture to geometry by using GL_POINTS in DrawElements or DrawArrays - maybe just replace the top two layers with layers of geometry. You can manipulate the points using PointSize, PointSizePointerOES and PointParameter to modify how the points are rendered.
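A minimal sketch of such a point-based layer, assuming a flat array of x,y star positions (the names and the layer's own scroll offset are illustrative):

    #include <OpenGLES/ES1/gl.h>

    #define NUM_STARS 200

    /* Draw one star layer as untextured points instead of blending a
       full-screen transparent texture. */
    void drawStarLayer(const GLfloat *starPositions, float scroll)
    {
        glDisable(GL_TEXTURE_2D);                 /* plain white points */
        glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
        glPointSize(2.0f);                        /* or per-point sizes via PointSizePointerOES */

        glPushMatrix();
        glTranslatef(0.0f, scroll, 0.0f);         /* this layer's own scroll speed */

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, starPositions);
        glDrawArrays(GL_POINTS, 0, NUM_STARS);

        glPopMatrix();
        glEnable(GL_TEXTURE_2D);
    }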
You might want to use multi-texturing to see if that speeds it up. Each multi-texture stage can be assigned a unique transformation matrix, so you should be able to translate each layer at different speeds.
I believe all iPhone models support two texture stages, so you should be able to combine two of your layers into a single draw call. You might still need to resort to blending for the third layer.
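A sketch of what that two-stage setup could look like in ES 1.1, scrolling each unit with its own texture matrix (the texture IDs, scroll values and the GL_DECAL combine mode are placeholder choices, not the only option):

    #include <OpenGLES/ES1/gl.h>

    /* Draw the nebula (unit 0) with a star layer (unit 1) on top in a single
       draw call; each unit gets its own texture-matrix translation so the
       two layers scroll at different speeds. */
    void drawTwoLayers(GLuint nebulaTex, GLuint starsTex,
                       float nebulaScroll, float starScroll,
                       const GLfloat *quadVerts, const GLfloat *quadTexCoords)
    {
        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, nebulaTex);
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glTranslatef(0.0f, nebulaScroll, 0.0f);   /* slowest layer */

        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, starsTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); /* stars over nebula by their alpha */
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glTranslatef(0.0f, starScroll, 0.0f);     /* faster layer */
        glMatrixMode(GL_MODELVIEW);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, quadVerts);

        /* each texture unit needs its own coordinate array enabled */
        glClientActiveTexture(GL_TEXTURE0);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);
        glClientActiveTexture(GL_TEXTURE1);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);

        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);    /* one full-screen quad */

        glActiveTexture(GL_TEXTURE1);
        glDisable(GL_TEXTURE_2D);                 /* leave only unit 0 enabled */
        glActiveTexture(GL_TEXTURE0);
    }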
Also note that alpha testing could be faster than alpha blending.
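If the star textures can get away with hard-edged (effectively 1-bit) alpha, the switch is just a couple of state changes; a sketch:

    glDisable(GL_BLEND);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);   /* discard fragments with alpha <= 0.5 instead of blending them */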
Good luck!
The back nebula should definitely be opaque; everything else is getting drawn on top of it, and I assume the only thing behind it is black. Also, prideout has a point: assuming your star layers can have effectively 1-bit alpha, that's definitely something you can try. Failing that, the GL_POINTS technique Harald mentions would work as well.

How should I organize OpenGL ES 1.x 2D layer tree?

I'm developing a cute puzzle app - http://gotoandplay.freeblog.hu/categories/compactTangram/ - and for performance reasons I decided to render the view with OpenGL. I started learning it, and I'm OK with buffers, vertices and textures in a really basic way.
The situation:
In the game the user manipulates 7 puzzle pieces, each of which has 5 sublayers to get a pretty lighting feel. Most of the textures are 256x256. The user manipulates only one piece at a time, so the rest are unchanged during play. A skeleton of the app without any graphics is here: http://gotoandplay.freeblog.hu/archives/2009/11/11/compactTangram_v10_-_puzzle_completement_test/
The question:
How should I organize them? Is it a good idea to "predraw" the actual piece states into separate framebuffers/textures, or can I simply redraw every piece/layer (1+7*5=36 sprites) each timestep? If I use "predraw", what should I do? Draw to a puzzlePiece framebuffer? Then how can I draw that into the scene framebuffer? Or is there a simpler way to "merge" textures?
I hope you can understand my question; if it seems too dim, please take a look at my idea of how to render a piece on my blog (there is a simple Flash implementation of what I'm going to do) here: http://gotoandplay.freeblog.hu/archives/2010/01/07/compactTangram_072_-_tan_rendering_labs/
A common way of handling textures is to pack all your images into a 'texture atlas' at the start of the game/level.
Your maximum texture size is 1024x1024 and you can have about three of them in memory on the iPhone.
When you have all the images in these 'super textures' you can just draw the relevant area of the large texture. This has the advantage that you bind textures less often, so you get better performance, and you also cut out the excess space that would otherwise be wasted by padding small images up to power-of-two texture sizes.
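A sketch of what drawing from such an atlas looks like: the sub-image's pixel rectangle is converted to texture coordinates, and the atlas stays bound across many sprites (the 1024 size and all names are illustrative):

    #include <OpenGLES/ES1/gl.h>

    #define ATLAS_SIZE 1024.0f    /* the atlas texture is 1024x1024 */

    /* Draw one sprite stored at pixel rect (px, py, w, h) inside the atlas,
       at position (x, y) on screen. Assumes the atlas texture is bound. */
    void drawAtlasSprite(float x, float y, float px, float py, float w, float h)
    {
        GLfloat u0 = px / ATLAS_SIZE,       v0 = py / ATLAS_SIZE;
        GLfloat u1 = (px + w) / ATLAS_SIZE, v1 = (py + h) / ATLAS_SIZE;

        GLfloat verts[] = { x, y,    x + w, y,    x, y + h,    x + w, y + h };
        GLfloat uvs[]   = { u0, v0,  u1, v0,      u0, v1,      u1, v1 };

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts);
        glTexCoordPointer(2, GL_FLOAT, 0, uvs);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }

For the 36 piece/layer sprites above, this means one texture bind per frame instead of one per sprite.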