Cocos2d - Reusing Sprite & Replacing Scene - iPhone

I have 2 scenes that share the same background image, so I have the code below in both scenes' init methods. Everything works perfectly. My question is: since the same image file is read twice, with memory allocated and deallocated each time, why can't we store it in one common place (a spriteManager) using the singleton pattern and reuse that instance? If I do it this way, will the memory be released when scene one gets replaced with scene 2?
One thing that strikes my mind is retain, but I'm not sure how and where to handle it. If my approach to handling a common reusable sprite is wrong, please suggest the proper way of doing it. I am new to cocos2d and haven't read about sprite sheets yet; for now I want to keep everything simple.
CCSprite *bg = [CCSprite spriteWithFile:@"bg.png"];
Second question: I read that when replacing two heavy scenes there can be a memory leak because releasing the first scene's memory overlaps with allocating the next scene's memory. To avoid this, I read that we should pause for a moment and load a lightweight scene (a loading scene) in between. Is that a good solution, or will replacing the scene directly not create any problems?
It would be great if there were a good article about this post's title (reusing sprites & replacing scenes).

To answer your first question: you don't need a spriteManager, because cocos2d already has a default texture caching mechanism (see the CCTextureCache class). The next time you call spriteWithFile it will simply take the texture from the texture cache. Note: CCTextureCache usually clears its textures when there is a memory warning.
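For illustration, here is a minimal sketch of working with the shared cache directly (cocos2d 1.x/2.x style API; the file name is just the one from the question):
// Optional: preload the texture once, e.g. before the first scene is shown.
[[CCTextureCache sharedTextureCache] addImage:@"bg.png"];
// In each scene's init - this reuses the cached texture, no second copy is made:
CCSprite *bg = [CCSprite spriteWithFile:@"bg.png"];
[self addChild:bg];
// If you ever need to reclaim memory yourself:
[[CCTextureCache sharedTextureCache] removeUnusedTextures];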
Secondly, there shouldn't be any memory leaks when replacing scenes as long as you have deallocated your objects properly. Can you provide a link to where you read about this?

Related

Should I remove all SKSpriteNodes and Labels when I switch from one node to another

My iOS game has a couple of scenes. I've noticed some lag between switching scenes, and I was wondering if it might be because I'm not removing all nodes and labels from parents when I transition to another scene. Is it good practice to remove all nodes from their parent when transitioning to another scene?
Also, I've noticed that when I do remove all nodes, the transition effect is kind of ruined, as the screen goes all black during the transition.
Is it possible to delete nodes (of the previous scene) after the transition to the next scene?
When you perform the transition, the scene and its nodes will be released from memory, unless you have a strong reference cycle. Also, you should know that Sprite Kit has its own cache system for SKTextures, so not all memory will be freed.
The lag could be caused by a lot of things; some possibilities:
If you instantiate the new scene in touchesEnded (or your custom button callback closure), the lag could be caused by doing too much work during initialization. This can be solved by, for example, preloading the scene with a closure that runs in the background, so that when you have to run the transition, everything is already loaded. An example follows:
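A rough Objective-C sketch of that idea (GameScene, skView and the preloadedScene property are placeholders, not names from your project):
// Kick off scene construction on a background queue ahead of time.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    GameScene *scene = [[GameScene alloc] initWithSize:skView.bounds.size];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.preloadedScene = scene; // keep a strong reference until the transition
    });
});
// Later, in touchesEnded (or the button callback), the heavy work is already done:
[skView presentScene:self.preloadedScene
          transition:[SKTransition fadeWithDuration:0.5]];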
Maybe you're using assets that are too large, and because they take longer to load, you get the lag. You could solve this by, for example, converting images that don't need an alpha channel to JPEG.
Another solution would be to preload assets. A code example follows.
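For instance, Sprite Kit's SKTexture class can preload an array of textures before you present the next scene (a sketch; the texture and scene names are made up):
NSArray *textures = @[[SKTexture textureWithImageNamed:@"background"],
                      [SKTexture textureWithImageNamed:@"player"]];
[SKTexture preloadTextures:textures withCompletionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        // The textures are resident in memory now, so the transition won't stall.
        [skView presentScene:nextScene transition:[SKTransition fadeWithDuration:0.5]];
    });
}];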

Does the entire scene get drawn on screen?

OK, so I'm building a game and just using the Web Player.
It plays just fine for me and others with no performance issues.
I loaded it to an iPhone 5 to see how it would handle performance and it's far from acceptable. Likely due to the nature of it and all of the objects and effects being drawn.
Is the entire scene loaded at once or are the items that are out of range only drawn when the camera is in that area?
Here is the game
http://burningfistentertainment.com/3D/DevilsNote/index.html
Any pointers would be great.
Is the entire scene loaded at once?
Yes, it is. When you load a scene, everything it contains is loaded into memory. If necessary, try to split the scene.
Are the items that are out of range only drawn when the camera is in that area?
Each active GameObject inside the camera frustum that has a Renderer component will be rendered. Note that Animator components that are always animating can affect performance even if they are not rendered.
I loaded it to an iPhone 5 to see how it would handle performance and it's far from acceptable. Likely due to the nature of it and all of the objects and effects being drawn.
It's hard to say what the bottleneck could be without knowing more about your code. If you think the problem is GPU related, here are some tips for mobile:
reduce draw calls (it depends on the device, I'd say no more than 50-70) => reduce material count (use atlases), mark non-moving objects as static (so they can be statically batched), ...
limit overdraw: reduce the size and number of transparent objects (what about the rain? how did you implement it?)
consider using occlusion culling (probably not a problem in your game, but if the depth complexity increases it could save you a lot of GPU workload)

SpriteKit where to load texture atlases for thousands of sprites

In my game I have thousands of "tile" nodes which make up a game map (think SimCity), and I am wondering what the most frame-rate/memory efficient route for texturing and animating each node would be. There are a handful of unique tile "types", each with its own texture atlas / animations, so making sure textures are reused where possible is key.
All my tile nodes are children of a single map node; should the map node handle recognising a tile type and loading the necessary atlas & animations (e.g. by loading texture & atlas names from a plist)?
Alternatively, each tile type is a certain subclass. Would it be better for each SKSpriteNode tile to handle its own sprite atlas loading, e.g. [tileInstance texturise]? (How does Sprite Kit handle this? Would this method result in the same texture atlas being loaded into memory for each instance of a certain tile type?)
I have been scouring the docs for a deeper explanation of atlases and texture reuse, but I don't know what the typical procedure is for a scenario like this. Any help would be appreciated, thanks.
Memory first: there won't be any noticeable difference. You have to load the tiles' textures, the textures will account for at least 99% of the memory of the map + tiles, and that's that.
Texture reuse: textures are being reused/cached automatically. Two sprites using the same texture will reference the same texture rather than each having its own copy of the texture.
Framerate/Batching: this is all about batching properly. Sprite Kit approaches batching children of a node by rendering them in the order they are added to the children array. As long as the next child node uses the same texture as the previous one, they will all be batched into one draw call. Possibly the worst thing you could do is to add a sprite, a label, a sprite, a label and so on. You'll want to add as many sprites using the same texture in consecutive order as is possible.
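For example, building the map grouped by texture rather than in row/column order might look like this (a sketch; mapNode, grassPositions and waterPositions are assumed to exist in your code):
// All grass tiles first, then all water tiles: consecutive children share a
// texture, so Sprite Kit can batch each group into fewer draw calls.
SKTexture *grass = [SKTexture textureWithImageNamed:@"grass"];
SKTexture *water = [SKTexture textureWithImageNamed:@"water"];
for (NSValue *value in grassPositions) {
    SKSpriteNode *tile = [SKSpriteNode spriteNodeWithTexture:grass];
    tile.position = value.CGPointValue;
    [mapNode addChild:tile];
}
for (NSValue *value in waterPositions) {
    SKSpriteNode *tile = [SKSpriteNode spriteNodeWithTexture:water];
    tile.position = value.CGPointValue;
    [mapNode addChild:tile];
}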
Atlas Usage: here's where you can win the most. Commonly developers try to categorize their atlases, which is the wrong way to go about it. Instead of creating one atlas per tile (and its animations), you'll want to create as few texture atlases as possible, each containing as many tiles as possible. On all iOS 7 devices a texture atlas can be 2048x2048 and with the exception of iPhone 4 and iPad 1 all other devices can use textures with up to 4096x4096 pixels.
There are exceptions to this rule, say if you have such a large amount of textures that you can't possibly load them all at once into memory on all devices. In that case use your best judgement to find a good compromise on memory usage vs batching efficiency. For example one solution might be to create one or two texture atlases per each unique scene or rather "scenery" even if that means duplicating some tiles in other texture atlases for another scene. If you have tiles that almost always appear in any scenery it would make sense to put those in a "shared" atlas.
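In code, using one shared atlas for many tile types would look roughly like this (the atlas and texture names are placeholders):
SKTextureAtlas *tileAtlas = [SKTextureAtlas atlasNamed:@"AllTiles"];
SKTexture *grassTexture = [tileAtlas textureNamed:@"grass"];
SKTexture *roadTexture  = [tileAtlas textureNamed:@"road"];
SKSpriteNode *tile = [SKSpriteNode spriteNodeWithTexture:grassTexture];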
As for subclassing tiles, I'm a strong proponent of avoiding subclassing node classes, especially if the main reason to subclass them is merely to change which texture they are using/animating. A sprite already is a container of a texture, so you can just as well change the sprite's texture and animate it from the outside.
To add data or additional code to a node you can use its userData property by creating your own NSMutableDictionary and adding any object you need to it. A typical component-based approach would go like this:
SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithImageNamed:@"tile"]; // or however you create the tile
[self addChild:sprite];
// create the controller object and keep it in the node's userData dictionary
sprite.userData = [NSMutableDictionary dictionary];
MyTileController *controller = [MyTileController controllerWithSprite:sprite];
[sprite.userData setObject:controller forKey:@"controller"];
This controller object then performs any custom code needed for your tiles. It could be animating the tile and whatever else. The only important bit is to make the reference to the owning node (here: sprite) a weak reference:
@interface MyTileController : NSObject
@property (weak) SKSpriteNode *sprite; // weak is important to avoid a retain cycle!
@end
This is because the sprite retains the dictionary, and the dictionary retains the controller. If the controller also retained the sprite, the sprite could never deallocate: there would still be a retaining reference to it, so it would continue to retain the dictionary, which retains the controller, which retains the sprite.
The advantages of using a component-based approach (also favored by and implemented in Kobold Kit):
If properly engineered, it works with any node, or multiple nodes. What if some day you want a label, effect, or shape node tile?
You don't need a subclass for every tile. Some tiles may be simple static sprites, so use a plain static SKSpriteNode for those.
It lets you start/stop or add/remove individual aspects as needed. Even on tiles you didn't initially expect to have or need a certain aspect.
Components allow you to build a repertoire of functionality you're going to need often and possibly even in other projects.
Components make for better architecture. A classical OOP design mistake is to have Player and Enemy classes, then realize both need to be able to shoot arrows and equip armor. So you move the code to the root GameObject class, making it available to all subclasses. With components, you simply have an equipment component and a shooting component and add them to the objects that need them.
The great benefit of component-based design is that you start developing individual aspects separately from other things, so they can be reused and added as needed. You'll almost naturally write better code because you approach things with a different mindset.
And from my own experience, once you modularize a game into components you get far fewer bugs, and they're easier to solve because you don't have to look at or consider other components' code. Even when one component triggers another you have a clear boundary: is the passed value still correct when the other component takes over? If not, the bug must be in the first component.
This is a good introduction to component-based design. The hybrid approach is certainly the way to go. Here are more resources on component-based design, but I strongly advise against straying from the path and looking into FRP as the accepted answer's author suggests - FRP is an interesting concept but has no real-world application (yet) in game development.

Equivalent of -viewDidAppear for CCLayer?

Question:
In the realm of cocos2d for iPhone, what's the equivalent of UIKit's -viewDidAppear callback for CCLayer?
And if no equivalent exists (as seems to be the case in the docs), what's your recommended way of knowing when a CCLayer has been rendered?
There are these two methods, which are quite similar to -viewDidAppear:
- (void)onEnter;
- (void)onEnterTransitionDidFinish;
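A minimal sketch of using the second callback in a CCLayer subclass (the scheduleUpdate call is just an example of work you might kick off here):
- (void)onEnterTransitionDidFinish
{
    [super onEnterTransitionDidFinish]; // always call super
    // The layer is on screen and any scene transition has completed,
    // so this is a reasonable place to start gameplay, schedule updates, etc.
    [self scheduleUpdate];
}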
Well, a CCLayer is rendered, usually, when it is created and you add sprites to it. You should do all this up front before the game play begins. Then you can move the layer around and animate its objects without worrying about "when" it will be finished rendering (usually).
Scenes are a different matter. A scene is what sets up the layers and creates them, loads images, sprites, etc., and that can take a bit. For that you have a few options. One effective option is the onEnter and onExit family of methods. However, another really good way is to simply have an intermediate scene, such as a mostly empty, lean "loading" CCScene that you load, and then that scene loads the big scene you are actually trying to get to. When you do this, you are freeing up old memory before adding new memory.
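A sketch of that intermediate loading scene (LoadingScene and HeavyScene are placeholder class names, cocos2d 1.x/2.x style API):
@implementation LoadingScene
- (void)onEnterTransitionDidFinish
{
    [super onEnterTransitionDidFinish];
    // The previous scene has already been released at this point, so its memory
    // is freed before the big scene starts allocating its own.
    [[CCDirector sharedDirector] replaceScene:[HeavyScene node]];
}
@end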
It really depends on what you are trying to accomplish with your equivalent of viewDidAppear.

Textures not drawing if multiple EAGLViews are used

I'm having a bit of a problem with Apples EAGLView and Texture2D. If I create an instance of EAGLView and draw some textures, it works great. However, whenever I create a second instance of EAGLView, the textures in the new view(s) aren't drawn.
Being new to OpenGL, I've got absolutely no clue as to what is causing this behavior. If somebody would like to help, I've created a small project that reproduces the behavior. The project can be found at http://www.cocoabeans.se/OpenGLESBug.zip
Many thanks,
Tim Andersson
Update
I tried using sharegroups but I'm not really sure if I used them correctly. However, it did change the behavior slightly; instead of the texture drawing only in the first instantiated view, it now draws the texture in the last instantiated view and draws white rectangles in the other views. I don't know if that is better or worse, but at least something is showing up in the other views now.
This is driving me crazy and I would be very grateful if somebody could help me with this problem. I've updated the project at http://www.cocoabeans.se/OpenGLESBug.zip to reflect the changes.
Cheers,
Tim
Second Update
After trying some more things, it seems that the problem is related to Apple's Texture2D class, though I'm not sure exactly what is causing the behavior. I think the best thing to do is to write my own texture class (it will help me understand how OpenGL handles textures, which will probably come in handy).
(Haven't downloaded your code.)
The OpenGL drawing contexts are different if you just use two EAGLViews (the code in that base class creates and owns the GL context as well as the render/frame/depth buffers). If you generate/bind some textures in one context, they won't be available in the other. You can share resources between contexts, if you like, by using a sharegroup (see this question for more: How to use OpenGL ES on a separate thread on iPhone?). Or define the textures (if small) in both places, etc.
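A minimal sketch of the sharegroup approach, assuming the first view's context is accessible as firstContext (OpenGL ES 1.1, matching Apple's Texture2D sample):
// The second context joins the first context's sharegroup, so texture objects
// created in either context are visible to both.
EAGLContext *firstContext  = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
EAGLContext *secondContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1
                                                   sharegroup:firstContext.sharegroup];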