Appropriate use of GLKBaseEffect - iOS 5

Ahoy!
I've been looking into updating some old test code in an attempt to brush up on the new features added to GLKit. So far I've managed to set up a GLKViewController and start rendering some basic shapes, but I have struggled to find any decent information regarding GLKBaseEffect.
The GLKBaseEffect documentation states:
At initialization time, your application first creates an OpenGL ES 2.0 context and makes it current. Then, it allocates and initializes a new effect object, configures its properties, and calls its prepareToDraw method. Binding an effect causes a shader to be compiled and bound to the current OpenGL ES context. The base effect also requires vertex data to be supplied by your application. To supply vertex data, create one or more vertex array objects. For each attribute required by the shader, the vertex array object should enable the attribute and point to data stored in a vertex buffer object.
What I'm struggling to discern is:
Do I need a GLKBaseEffect object for each "model" I'm rendering? Or do I use a single GLKBaseEffect for each "scene" and simply change the properties on the fly before calling prepareToDraw?
I've seen a few tutorials for game engines and renderers that simply use a separate GLKBaseEffect for each model, but this seems wholly inefficient if the same result could be achieved with a single instance instead.
From reading the documentation it almost seems like this is the best approach, but considering I've seen so many people using multiple instances, I'm starting to think that this isn't the case.
Can anyone shed any light on this? GLKit is still fairly new to iOS (and to me) so any information would be greatly appreciated.

No, you should not create a unique GLKBaseEffect for each object. For example, if you are drawing a maze, each brick in that maze may be its own object, but they can all share the same GLKBaseEffect. Remember, though, that GLKBaseEffect also stores state such as the transform as well as texture, lighting, fog etc. So if you want to draw the bricks in more than one place (which I assume you do :-) you tweak their transformation matrix and then call the prepareToDraw API.
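To illustrate with a minimal sketch (the Brick class and its properties here are hypothetical, not from the question): one effect is shared across all bricks, and only the per-object state changes between draws.

GLKBaseEffect *effect = [[GLKBaseEffect alloc] init];
effect.transform.projectionMatrix = projectionMatrix; // set once per frame

for (Brick *brick in bricks) {  // hypothetical model objects
    // Only the per-object state changes between draws.
    effect.transform.modelviewMatrix = brick.modelviewMatrix;
    effect.texture2d0.name = brick.textureName;  // GL texture name for this brick
    [effect prepareToDraw];
    glBindVertexArrayOES(brick.vertexArrayObject);
    glDrawArrays(GL_TRIANGLES, 0, brick.vertexCount);
}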
I agree we need more tutorials written by folks who have used GLKBaseEffect extensively to get more information on Best Practices for this new framework.
Happy sailing..

Each change of the "fundamental" properties (lightingType, lightModelTwoSided, colorMaterialEnabled, ...) will cause a new shader program to be loaded with the next "prepareToDraw" call.
So if you don't order your rendering, then it pretty much doesn't matter whether you use one effect for each rendered object or a single, changing effect for all objects. In both cases you will end up with an unnecessary glUseProgram call and lots of unnecessary OpenGL state changes for each object drawn. (Use the "OpenGL ES Analysis" template in Instruments to investigate the generated OpenGL calls.)
That said, your primary concern should be to order your objects for rendering. At least group all objects that use the same shader program, then create and use one GLKBaseEffect object for each of those groups.
If you're not sure whether a GLKBaseEffect property change will cause a new shader program to be loaded, I recommend using Instruments to investigate the OpenGL calls.

Related

SpriteKit where to load texture atlases for thousands of sprites

In my game I have thousands of "tile" nodes which make up a game map (think SimCity), and I am wondering what the most frame-rate/memory efficient route for texturing and animating each node would be. There are a handful of unique tile "types" which each have their own texture atlas / animations, so making sure textures are being reused when possible is key.
All my tile nodes are children of a single map node, should the map node handle recognising a tile type and loading the necessary atlas & animations (e.g. by loading texture & atlas names from a plist?)
Alternatively, each tile type is a certain subclass. Would it be better for each SKSpriteNode tile to handle its own sprite atlas loading, e.g. [tileInstance texturise]; (how does Sprite Kit handle this? Would this method result in the same texture atlas being loaded into memory for each instance of a certain tile type?)
I have been scouring the docs for a deeper explanation of atlases and texture reuse, but I don't know what the typical procedure is for a scenario like this. Any help would be appreciated, thanks.
Memory first: there won't be any noticeable difference. You have to load the tile's textures, textures will account for at least 99% of the memory of the Map+Tiles and that's that.
Texture reuse: textures are being reused/cached automatically. Two sprites using the same texture will reference the same texture rather than each having its own copy of the texture.
Framerate/Batching: this is all about batching properly. Sprite Kit approaches batching children of a node by rendering them in the order they are added to the children array. As long as the next child node uses the same texture as the previous one, they will all be batched into one draw call. Possibly the worst thing you could do is to add a sprite, a label, a sprite, a label and so on. You'll want to add as many sprites using the same texture in consecutive order as is possible.
Atlas Usage: here's where you can win the most. Commonly developers try to categorize their atlases, which is the wrong way to go about it. Instead of creating one atlas per tile (and its animations), you'll want to create as few texture atlases as possible, each containing as many tiles as possible. On all iOS 7 devices a texture atlas can be 2048x2048 and with the exception of iPhone 4 and iPad 1 all other devices can use textures with up to 4096x4096 pixels.
There are exceptions to this rule, say if you have such a large amount of textures that you can't possibly load them all at once into memory on all devices. In that case use your best judgement to find a good compromise on memory usage vs batching efficiency. For example one solution might be to create one or two texture atlases per each unique scene or rather "scenery" even if that means duplicating some tiles in other texture atlases for another scene. If you have tiles that almost always appear in any scenery it would make sense to put those in a "shared" atlas.
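As a rough sketch of the shared-atlas idea (the atlas, texture, and variable names here are invented): load the atlas once, pull the textures from it, and add same-texture tiles consecutively so Sprite Kit can batch them.

SKTextureAtlas *tileAtlas = [SKTextureAtlas atlasNamed:@"WorldTiles"]; // hypothetical atlas name
SKTexture *grassTexture = [tileAtlas textureNamed:@"grass"];
SKTexture *waterTexture = [tileAtlas textureNamed:@"water"];

// Add all grass tiles, then all water tiles, to keep draw calls batched.
for (NSValue *position in grassPositions) {  // hypothetical array of NSValue-wrapped CGPoints
    SKSpriteNode *tile = [SKSpriteNode spriteNodeWithTexture:grassTexture];
    tile.position = [position CGPointValue];
    [mapNode addChild:tile];
}
for (NSValue *position in waterPositions) {
    SKSpriteNode *tile = [SKSpriteNode spriteNodeWithTexture:waterTexture];
    tile.position = [position CGPointValue];
    [mapNode addChild:tile];
}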
As for subclassing tiles, I'm a strong proponent of avoiding subclassing node classes, especially if the main reason to subclass them is merely to change which texture they are using/animating. A sprite already is a container of a texture, so you can just as well change the sprite's texture and animate it from the outside.
To add data or additional code to a node you can use its userData property by creating your own NSMutableDictionary and adding any object you need to it. A typical component-based approach would go like this:
SKSpriteNode* sprite = [SKSpriteNode spriteNodeWithImageNamed:@"tile"]; // or whichever initializer you use
[self addChild:sprite];
// create the controller object and store it in the node's userData dictionary
sprite.userData = [NSMutableDictionary dictionary];
MyTileController* controller = [MyTileController controllerWithSprite:sprite];
[sprite.userData setObject:controller forKey:@"controller"];
This controller object then performs any custom code needed for your tiles. It could be animating the tile and whatever else. The only important bit is to make the reference to the owning node (here: sprite) a weak reference:
@interface MyTileController : NSObject
@property (weak) SKSpriteNode* sprite; // weak is important to avoid a retain cycle!
@end
This is because the sprite retains the dictionary, and the dictionary retains the controller. If the controller also retained the sprite, the sprite could never deallocate because there would still be a retaining reference to it - it would continue to retain the dictionary, which retains the controller, which retains the sprite.
The advantages of using a component-based approach (also favored by and implemented in Kobold Kit):
If properly engineered, it works with any node, or multiple nodes. What if some day you want a label, effect, or shape node tile?
You don't need a subclass for every tile. Some tiles may be simple static sprites. So use simple static SKSpriteNode for those.
It lets you start/stop or add/remove individual aspects as needed. Even on tiles you didn't initially expect to have or need a certain aspect.
Components allow you to build a repertoire of functionality you're going to need often and possibly even in other projects.
Components make for better architecture. A classical OOP design mistake is to have Player and Enemy classes, then realize both need to be able to shoot arrows and equip armor. So you move the code to the root GameObject class, making the code available to all subclasses. With components you simply have an equipment component and a shooting component, and you add them to those objects that need them.
The great benefit of component-based design is that you start developing individual aspects separately from other things, so they can be reused and added as needed. You'll almost naturally write better code because you approach things with a different mindset.
And from my own experience, once you modularize a game into components you get far fewer bugs, and they're easier to solve because you don't have to look at or consider other components' code. Even when one component triggers another you have a clear boundary: is the passed value still correct when the other component takes over? If not, the bug must be in the first component.
This is a good introduction on component-based design. The hybrid approach is certainly the way to go. Here are more resources on component-based design, but I strongly advise against straying from the path and looking into FRP as the "accepted answer's author" suggests - FRP is an interesting concept but has no real-world application (yet) in game development.

Writing a GLKBaseEffect Substitute

Recently, I've been trying to learn how to use OpenGL ES 2.0 and GLKit to create simple 2D games. I've been following this tutorial by Ray Wenderlich and it's been very helpful so far. However, upon profiling my project (and his) for leaks I found that GLKBaseEffect's prepareToDraw (specifically, GLKShaderBlockNode's copyWithZone:) is leaking everywhere - I'm using ARC, by the way. After searching around quite a bit, it seems that this is a bug in GLKBaseEffect and that I can't do anything about it. Is this true? The only solution I've found suggested is scrapping GLKBaseEffect entirely.
If that's the case, I have to roll my own custom vertex and fragment shaders as a result. However, I have no idea how to do this. I would appreciate any resources or help on creating custom shaders and adapting the code in the above tutorial to use those instead.
Thank you very much for your time. :)
For starters, in Xcode do File -> New Project and select the "OpenGL Game" template.
Run it if you like; you get two cubes orbiting each other.
Take a look at Shader.fsh and Shader.vsh. In ViewController.m, examine compileShader, linkProgram and validateProgram (these compile and link the shaders).
Examining that sample app should be enough to get you "in the door" on how to get a shader running, and from that point forward search for some OpenGL ES 2.0 shader tutorials or check out some of the sample apps in the Apple code library.
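If you'd rather see the bare mechanics than the template's helper methods, a minimal compile-and-link sequence looks roughly like this (error handling trimmed; the shader source strings and attribute names are assumptions, not part of the template):

static GLuint CompileShader(GLenum type, const GLchar *source) {
    GLuint shader = glCreateShader(type);  // GL_VERTEX_SHADER or GL_FRAGMENT_SHADER
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);
    GLint status = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    NSCAssert(status == GL_TRUE, @"Shader failed to compile");
    return shader;
}

// Link both shaders into a program and bind the attribute slots you plan to use.
GLuint program = glCreateProgram();
glAttachShader(program, CompileShader(GL_VERTEX_SHADER, vertexSource));
glAttachShader(program, CompileShader(GL_FRAGMENT_SHADER, fragmentSource));
glBindAttribLocation(program, 0, "a_position");  // names must match the vertex shader
glBindAttribLocation(program, 1, "a_texCoord");
glLinkProgram(program);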
Note: Going from Apple's built-in easy effects to shaders is a significantly wide "canyon".
Well, after a decent amount of procrastination, I decided to bite the bullet and just do it. The leaks are gone now, although it took me a while to get it working. I read through OpenGL ES 2.0 for iPhone, Chapter 4 and learned about vertex and fragment shaders along with how to compile them and link them. After understanding how it worked, I put the author's GLProgram helper class into my project to handle all the boilerplate stuff and got to work quickly. Then, I created two shaders, nearly identical to the ones found here.
I followed his instructions on getting the attribute and uniform locations and stored them in a structure, passing them all at once to my sprite objects as they were initialized. Then, when it came time to render, I passed in all the information for my attributes as I had done earlier when I was using GLKBaseEffect; the only difference (aside from manually binding the texture to a texture unit) was that I had to pass the modelViewMatrix and projectionMatrix uniforms in myself instead of setting a GLKBaseEffect property.
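For reference, a GLKBaseEffect-style textured-sprite shader pair can be as small as the sketch below; the attribute/uniform names are my own, and the uniform locations are assumed to have been fetched with glGetUniformLocation after linking.

static const GLchar *vertexSource =
    "attribute vec4 a_position;\n"
    "attribute vec2 a_texCoord;\n"
    "uniform mat4 u_modelViewMatrix;\n"
    "uniform mat4 u_projectionMatrix;\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    v_texCoord = a_texCoord;\n"
    "    gl_Position = u_projectionMatrix * u_modelViewMatrix * a_position;\n"
    "}\n";

static const GLchar *fragmentSource =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_texture, v_texCoord);\n"
    "}\n";

// Per frame, after glUseProgram(program):
glUniformMatrix4fv(modelViewLocation, 1, GL_FALSE, modelViewMatrix.m);   // GLKMatrix4
glUniformMatrix4fv(projectionLocation, 1, GL_FALSE, projectionMatrix.m);
glUniform1i(textureLocation, 0);  // sample from texture unit 0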

Skybox OpenGL ES iPhone and iPad

I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it when he's inside an historic building or when he's visiting an archaeological dig. No need for a Doom-like first-person point of view: just a skybox. The application will have a list of points of interest (POIs). Every POI will have its own skybox.
I thought I could use OpenGL ES to create a sort of textured skybox that could be driven/rotated by touches. The textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where do I have to go to learn how to develop it? Do I have to purchase a book? Which one?
I have just moderate Objective-C and Cocoa Touch skills, since I've built just one application for the iPad. I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
the CoreGraphics means for getting pixel data from a PNG
how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio
how to manipulate the MODELVIEW stack to ensure two-axis rotation (first person shooter or Google StreetView style) of the scene according to member variables and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane (see the sketch after this list)
how to specify vertex locations and texture coordinates to OpenGL
how to specify the triangles OpenGL should construct between vertices
how to set the OpenGL texture parameters accordingly to supply only one level of detail for the texture
how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
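To give a flavour of the projection and model-view items above, the fixed-function (ES 1.x) setup is only a few calls. The field of view, aspectRatio and the two angle variables are placeholders for whatever your own code maintains; this is a sketch, not the project's actual code.

// Perspective projection matching the view's aspect ratio (ES 1.x fixed pipeline).
const GLfloat zNear = 0.1f, zFar = 10.0f;
const GLfloat top = zNear * tanf(45.0f * M_PI / 360.0f);  // roughly a 45 degree vertical FOV
const GLfloat right = top * aspectRatio;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-right, right, -top, top, zNear, zFar);

// Two-axis rotation of the scene, driven by member variables updated from touches.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(pitchDegrees, 1.0f, 0.0f, 0.0f);
glRotatef(yawDegrees, 0.0f, 1.0f, 0.0f);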
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
The main observations are that, beyond knowing the Objective-C signalling mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious but it's all just reading the docs to figure out how each type of thing relates to the next — none of it is really confusing. Just get into your head that you need a data provider for the PNG data, you can create an image from that data provider and then create a bitmap context with memory that you've allocated yourself, draw the image into the context and then release everything except the memory you allocated yourself to be left with the result. That result can be directly uploaded to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
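A condensed version of that CoreGraphics-to-OpenGL path might look like this (assuming an RGBA, power-of-two PNG in the app bundle; the file name is just an example, and this skips the CGDataProvider step by going through UIImage):

UIImage *image = [UIImage imageNamed:@"skybox_face.png"];  // example file name
size_t width  = CGImageGetWidth(image.CGImage);
size_t height = CGImageGetHeight(image.CGImage);

// Draw the PNG into a bitmap context whose backing memory we own.
void *pixels = calloc(width * height, 4);  // 4 bytes per pixel, RGBA
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

// Upload to a new texture name with a single level of detail, as described above.
GLuint textureName;
glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_2D, textureName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
free(pixels);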
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. More routine stuff that is more about knowing the right functions than managing an intuitive leap.
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3d cube on paper and numbering the corners would be a big help. There are three relevant arrays:
the vertex positions
the texture coordinates that go with each vertex location
a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
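As a sketch of what those three arrays look like for a single face of the cube (the other five faces follow the same pattern with different positions):

// Four vertices of the cube's front face, centred on the origin.
static const GLfloat facePositions[] = {
    -1.0f, -1.0f, -1.0f,   // bottom left
     1.0f, -1.0f, -1.0f,   // bottom right
     1.0f,  1.0f, -1.0f,   // top right
    -1.0f,  1.0f, -1.0f,   // top left
};

// One texture coordinate per vertex, covering the whole image.
static const GLfloat faceTexCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 0.0f,
};

// Two triangles referencing the four vertices above.
static const GLubyte faceIndices[] = {
    0, 1, 2,
    0, 2, 3,
};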
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES land the fixed pipeline is completely removed. So if you want one then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D; from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has a skybox (and much more) out of the box, and it should be pretty straightforward to use.
I think you can do this, here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems:
(I would post direct links but Stack Overflow won't let me.)
Look at Stack Overflow questions 2859722 and 2297564.
Some programs and tips to help make the textures:
Spacescape
There are some great OpenGL tutorials here:
nehe.gamedev.net
They are not iPhone specific, but they explain OpenGL pretty well. I think some folks have ported these to the phone as well, I just can't find them now.

RPG Game loop and class structure (cocos2D for iPhone)

I'm looking to make an RPG with Cocos2D on the iPhone. I've done a fair bit of research, and I really like the model Cocos2D uses for scenes. I can instantiate a scene, set up my characters etc. and it all works really nicely... what I have problems with is structuring a game loop and separating the code from the scenes.
For example, where do I put my code that will maintain the state of the game across multiple scenes? and do I put the code for events that get fired in a scene in that scene's class? or do I have some other class that separates the init code from the logic?
Also, I've read a lot of tutorials that mention changing scenes, but I've read none that talk about updating a scene - taking input from the user and updating the display based on that. Does that happen in the scene object, or in a separate display engine type class.
Thanks in advance!
It sounds like you might do well to read up on the Model-View-Controller pattern. You don't have to adhere slavishly to it (for example, in some contexts it makes sense to allow some overlap between Model and View), but having a good understanding of it will help you to build any program that has lots of graphical objects and logic controlling them, and the need to broadcast state or persist it to disc (game save), etc.
You also have to realize that cocos2d provides a good system for structuring the graphical scene graph and rendering it efficiently, but it doesn't provide a complete infrastructure for programming games. In that sense it's more of a graphics engine than a game engine. If you try to fit your game's architecture into the structure of cocos2d, you might not end up with the most maintainable result. Instead, you should treat cocos2d as what it is: a great tool to take care of your display and animation needs.
You should definitely have an object other than the scenes that maintains the game state, because otherwise where will all the state go when you switch between scenes? And within scenes/levels, you should simply try to use good Object Oriented design to have state distributed over objects of various classes. Each character object remembers its own state etc. Here you can see where MVC becomes useful: when you save the game to disc, you want to remember each character's health level, but probably not which exact frame index the sprite animation was showing. So you need to distinguish between the sprite and the character (model) itself. That said, as I mentioned before, for game objects that don't have a lot of logic attached to them, or which don't need to be saved, it might be ok to just fuse the Model and View together into one class (basically by subclassing CCSprite).
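A bare-bones illustration of that separation (the class and property names are invented for this example): the model knows nothing about cocos2d, and the sprite is created from it when the scene is built.

// Model: pure game state, safe to archive when saving the game.
@interface Character : NSObject
@property (nonatomic) NSInteger healthLevel;
@property (nonatomic) CGPoint mapPosition;
@end

// View: created by the scene from the model; only display state lives here.
CCSprite *sprite = [CCSprite spriteWithFile:@"hero.png"];  // example art file
sprite.position = character.mapPosition;
[self addChild:sprite];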
To pull off MVC the way it's supposed to be, you should also learn the basics of Key-Value Observing. (And you'd do well to use this replacement for Apple's interface.) In more intensely real-time games, techniques like this might be too slow, but since you're making a RPG (good choice for starting out) you could probably sacrifice performance for a more maintainable architecture.
The game scene (which is just another cocos2d node) plays the role of Controller, in terms of the MVC pattern. It doesn't draw anything itself, but tells everything else to draw itself based on inputs and state. It's tempting to put all kinds of logic and functionality into the game scene, but when you notice that it swells, you should ask yourself how you could separate that functionality into other classes. Analyze which type of functionality you're implementing. Is it to do with data and state (Model)? Or is it about animation and rendering (View)? Or is it about connecting logic with rendering (in which case you should try to make the View observe the Model directly)?
The game scene/Controller is basically a dispatch center, which takes input events (from the user or from sprites reporting that they've hit something, for example) and decides what to do with them: it might tell one or several of the Model objects to update themselves in some way, or it might just trigger an animation in some other sprites, for example.
In a real-time game, you'd have a "tick" or "step" method in the scene which tells all objects to update themselves. This method (the game loop) is the heart of the program and is run every time a new frame is drawn. (In modern game engines there's a lot of multi-threading but let's not think about that.) But in your case, you might want to create a module that can "play the game" completely separate from the game scene. Imagine creating a program that can play chess through the terminal, using only text input. If you create the whole game system in that manner, and then connect it to the graphics engine through a small and clean interface, you'll have a really maintainable app with lots of reusable code for future projects!
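In cocos2d-iphone the per-frame tick is typically wired up like the sketch below; the gameModel object and its advanceByTime: method are placeholders for whatever "plays the game" in your architecture.

// In the game scene/layer:
- (id)init
{
    if ((self = [super init])) {
        [self scheduleUpdate];  // ask cocos2d to call update: once per frame
    }
    return self;
}

// The game loop: advance the model, then let the views react to the new state.
- (void)update:(ccTime)delta
{
    [self.gameModel advanceByTime:delta];  // hypothetical model object and method
    // Sprites observing the model (via KVO or direct queries) refresh themselves here.
}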
Some good rules of thumb: the model (data) shouldn't know anything about sprites or display states; the view (sprites) shouldn't contain any of the game's actual logic (the game rules) but only know how to do simple things like moving and bouncing and reporting to the scene if something complicated happens. Whenever possible, the view should react to changes in the model directly, without the controller having to interfere.

Textures not drawing if multiple EAGLViews are used

I'm having a bit of a problem with Apples EAGLView and Texture2D. If I create an instance of EAGLView and draw some textures, it works great. However, whenever I create a second instance of EAGLView, the textures in the new view(s) aren't drawn.
Being new to OpenGL, I've got absolutely no clue as to what is causing this behavior. If somebody would like to help, I've created a small project that reproduces the behavior. The project can be found at http://www.cocoabeans.se/OpenGLESBug.zip
Many thanks,
Tim Andersson
Update
I tried using sharegroups but I'm not really sure if I used them correctly. However, it did change the behavior slightly; instead of the texture drawing only in the first instantiated view, it now draws the texture in the last instantiated view and draws white rectangles in the other views. I don't know if that is better or worse, but at least something is showing up in the other views now.
This is driving me crazy and I would be very grateful if somebody could help me with this problem. I've updated the project at http://www.cocoabeans.se/OpenGLESBug.zip to reflect the changes.
Cheers,
Tim
Second Update
After trying some more things, it seems that the problem is related to Apple's Texture2D class, though I'm not sure exactly what is causing the behavior. I think the best thing to do is to write my own texture class (it will help me understand how OpenGL handles textures, which will probably come in handy).
(Haven't downloaded your code.)
The OpenGL drawing contexts are different if you just use two EAGLViews (the code in that base class creates and owns the GL context as well as the render/frame/depth buffers). If you generate/bind some textures in one context, they won't be available in the other. You can share resources between contexts using a sharegroup (see this question for more: How to use OpenGL ES on a separate thread on iPhone?). Or define the textures (if small) in both contexts, etc.
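A sketch of the sharegroup approach, assuming you create the second context yourself rather than letting a second EAGLView create its own:

// First context, created as usual (ES 1.1 here, matching Apple's EAGLView sample).
EAGLContext *firstContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];

// The second context joins the first one's sharegroup, so texture names created
// in either context are visible to both.
EAGLContext *secondContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1
                                                   sharegroup:firstContext.sharegroup];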