This is for Unity3D 4.3+
I have a ridiculously large background I wish to use for a 2D scroller game. The background is 10 times the width of a landscape device (10240 x 1024). (The basic loop background goes behind that and is not an issue.)
I understand I can cut the background into 10 images of 1024 x 1024 each (basic sprites), but I'm unsure of the best approach going forward...
One way is to pre-load all the background sprites and then simply scroll them all. But that would take too much memory.
However, keeping in mind this is aimed at mobiles and tablets, isn't it possible to load/unload the background as the player progresses? Like this: initially load 2 background images (bg-1 and bg-2).
Once the camera has passed bg-1, unload bg-1 and load bg-3. Then, when the player passes bg-2, unload bg-2 and load bg-4, and repeat. Thus only 2 background images are loaded at a time.
The player can not go backwards, so that helps me in this scenario.
Any thoughts on the best approach?
Thank you.
You can use the Resources.Load function to load assets dynamically (link). Or just load them all into a list and reference them from there.
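Since the core of the idea is engine-agnostic, here is a minimal plain-C sketch of the two-slot swapping logic described in the question. The load_background/unload_background helpers are hypothetical stand-ins that just print; in Unity they would wrap something like Resources.Load / Resources.UnloadAsset plus creating and destroying the sprite objects.

```c
#include <stdio.h>

#define TOTAL_SLICES 10        /* 10 background slices of 1024 px each */
#define SLICE_WIDTH  1024.0f

static int leftSlice  = 0;     /* slice currently behind the camera's left edge */
static int rightSlice = 1;     /* slice the camera is scrolling into            */

/* Hypothetical placeholders for the real asset loading/unloading. */
static void load_background(int index)   { printf("load bg-%d\n", index + 1); }
static void unload_background(int index) { printf("unload bg-%d\n", index + 1); }

/* Call every frame with the camera's left edge in world units.
   The player never moves backwards, so we only ever swap to the right. */
static void update_background(float cameraLeftX)
{
    if (cameraLeftX > (leftSlice + 1) * SLICE_WIDTH && rightSlice + 1 < TOTAL_SLICES) {
        unload_background(leftSlice);     /* drop the slice we just passed */
        leftSlice  = rightSlice;
        rightSlice = rightSlice + 1;
        load_background(rightSlice);      /* fetch the next slice ahead    */
    }
}

int main(void)
{
    float x;
    load_background(leftSlice);
    load_background(rightSlice);
    /* Simulate the camera scrolling right across the whole level. */
    for (x = 0.0f; x < TOTAL_SLICES * SLICE_WIDTH; x += 256.0f)
        update_background(x);
    return 0;
}
```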
I'm just wondering what would be the best way to display multiple instances of a small (10x1) image. I have an array of about 480 points and I would like to draw the image at each of these points to draw a path. Would it be faster to use Core Graphics, or should I be using something like cocos2d?
It depends on whether you need it to animate. Core Graphics is probably fine if you are drawing it once and then displaying it as an image, but it will be really slow if you need to redraw it each frame.
UIKit is actually much quicker because UIView drawing is hardware accelerated, so you could just add a UIImageView for each point in the graph, but from my own experiments that will probably be too slow for realtime interaction if there are more than about 200 image views (at least if you want it to run on anything older than an iPhone 4S).
If you do need realtime performance, that really only leaves OpenGL, which is quite fiddly to set up unless you use a library like Cocos2D or Sparrow to simplify it. I'd suggest Sparrow for your purposes because Sparrow views can be used in a regular UIKit application, whereas Cocos2D provides a whole app framework and is harder to use for just a single view in an otherwise regular UIKit app.
http://www.sparrow-framework.org/
Without more context, another option is to use OpenGL and create a display list for the composite image.
Guys, I'm writing a scrolling shooter game that has a very large background image chopped up into smaller images to make it more manageable. Right now it is set up so that when the GameNode is init'd, a big long list of images is added to the ParallaxNode, which is then added to the Game.
I thought I might save a lot of memory if I only kept one or two loaded into memory and swapped them out as the Hero Character moves through the level. The problem is I can't seem to create a parallax background in another method that's called later. No errors; it just doesn't show, and it seems to be the same code. Any ideas?
Well, I created the parallax layer with four layers in it and added it to the level layer, which extends GameNode.
If you can post some code, that may help.
I'm developing a cute puzzle app - http://gotoandplay.freeblog.hu/categories/compactTangram/ - and for performance reasons I decided to render the view with OpenGL. I have started learning it, and I'm OK with buffers, vertices, and textures in a really basic way.
The situation:
In the game the user manipulates 7 puzzle pieces, each of which has 5 sublayers to get a pretty lighting feel. Most of the textures are 256x256. The user manipulates only one piece at a time, so the rest are unchanged during play. A skeleton of the app without any graphics is here: http://gotoandplay.freeblog.hu/archives/2009/11/11/compactTangram_v10_-_puzzle_completement_test/
The question:
How should I organize them? Is it a good idea to "predraw" the actual piece states into separate framebuffers(?)/textures(?), or can I simply redraw every piece/layer (1+7*5=36 sprites) each timestep? If I use "predraw", then what should I do? Draw into a per-puzzlePiece framebuffer? Then how can I draw that into the scene framebuffer? Or is there a simpler way to "merge" textures?
Hope you can understand my question; if it seems too unclear, please take a look at my idea on how to render an actual piece on my blog (there is a simple Flash implementation of what I'm going to do) here: http://gotoandplay.freeblog.hu/archives/2010/01/07/compactTangram_072_-_tan_rendering_labs/
A common way of handling textures is to pack all your images into a 'texture atlas' at the start of the game/level.
Your maximum texture size is 1024x1024 and you can have about three of them in memory on the iPhone.
When you have all the images in these 'super textures' you can just draw the relevant area of the large texture. This has the advantage that you have to bind textures less often, so you gain better performance, and it also cuts out the excess space wasted by padding small images up to power-of-two texture sizes.
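For illustration, here is a hedged OpenGL ES 1.1 sketch (plain C) of drawing one sub-image out of a 1024x1024 atlas; the atlas texture handle and the pixel rectangle of the sub-image are assumptions, and an orthographic projection is assumed to be set up elsewhere.

```c
#include <OpenGLES/ES1/gl.h>

/* Draw the atlas region (sx, sy, sw, sh), given in pixels, as a quad at
   (x, y) with size (w, h) on screen. */
void drawAtlasRegion(GLuint atlasTex,
                     GLfloat x, GLfloat y, GLfloat w, GLfloat h,
                     GLfloat sx, GLfloat sy, GLfloat sw, GLfloat sh)
{
    const GLfloat ATLAS_SIZE = 1024.0f;

    GLfloat verts[8] = { x, y,   x + w, y,   x, y + h,   x + w, y + h };

    /* Normalise the pixel rectangle into 0..1 texture coordinates. */
    GLfloat uvs[8] = {
        sx / ATLAS_SIZE,        sy / ATLAS_SIZE,
        (sx + sw) / ATLAS_SIZE, sy / ATLAS_SIZE,
        sx / ATLAS_SIZE,        (sy + sh) / ATLAS_SIZE,
        (sx + sw) / ATLAS_SIZE, (sy + sh) / ATLAS_SIZE,
    };

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, atlasTex);   /* one bind serves many sprites */

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, uvs);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```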
I'm developing a 2D game for the iPhone using OpenGL ES and I'd like to use a 320x480 bitmapped image as a persistent background.
My first thought was to create a 320x480 quad and then map a texture onto it that represents the background. So... I created a 512x512 texture with a 320x480 image on it. Then I mapped that to the 320x480 quad.
I draw this background every frame and then draw animated sprites on top of it. This works fine except that the drawing of all of these objects (background + sprites) is too slow.
I did some testing and discovered that my slowdown is in the pixel pipeline. Not surprisingly, the large background image is the main culprit. To prove this, I removed the background draw and everything else rendered very fast.
I am looking for advice on how to keep my background and also improve performance.
Here's some more info:
1) I am currently testing on the Simulator (still waiting on Apple for the license)
2) The background is a PVR texture squeezed down to 128k
3) I had hoped that there might be a way to cache this background into a color buffer but haven't had any luck with that. That may be due to my inexperience with OpenGL ES, or it just might be a stupid idea that won't work :)
4) I realize that the entire background does not always have to refresh, just the parts that have been drawn over by the moving sprites. I started to look into techniques for refreshing (as necessary) parts of the background, either as separate textures or with a scissor box, but this seems less than elegant.
Any tips/advice would be greatly appreciated...
Thank you.
Do not do performance testing on the simulator. Ever!
The differences from the real hardware are huge, in both directions.
If you draw the background every frame:
Do not clear the framebuffer. The background will overdraw the whole thing anyway.
Do you really need a background texture?
What about using a color gradient via vertex colors?
Try using the 2-bit (PVRTC 2bpp) mode for the texture.
Turn off all render states that you do not need for the background.
E.g.: lighting, blending, depth test, ... (see the sketch at the end of this answer).
If you could post some of your drawing code it would be a lot easier to help you.
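For what it's worth, here is a minimal sketch (plain C, OpenGL ES 1.1 fixed-function, iPhone headers) of the state setup suggested above, using the asker's 320x480 image packed into a 512x512 texture; the texture handle and an orthographic screen-space projection are assumed to be set up elsewhere, and the image is assumed to sit at the texture's origin.

```c
#include <OpenGLES/ES1/gl.h>

/* 320x480 screen quad... */
static const GLfloat bgVerts[8] = {
    0.0f,   0.0f,
    320.0f, 0.0f,
    0.0f,   480.0f,
    320.0f, 480.0f,
};

/* ...mapped to the 320x480 region of the 512x512 texture. */
static const GLfloat bgUVs[8] = {
    0.0f,            0.0f,
    320.0f / 512.0f, 0.0f,
    0.0f,            480.0f / 512.0f,
    320.0f / 512.0f, 480.0f / 512.0f,
};

void drawBackground(GLuint backgroundTex)
{
    /* No glClear: the opaque background overdraws the whole framebuffer anyway. */
    glDisable(GL_LIGHTING);    /* the background needs no lighting        */
    glDisable(GL_BLEND);       /* it is opaque, so no blending            */
    glDisable(GL_DEPTH_TEST);  /* it is always behind everything else     */

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, backgroundTex);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, bgVerts);
    glTexCoordPointer(2, GL_FLOAT, 0, bgUVs);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```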
If you're making a 2D game, is there any reason you aren't using an existing library? Specifically, the cocos2d for iPhone may be worth your time. I can't answer your question about how to fix the issue doing it all yourself, but I can say that I've done exactly what you're talking about (having one full screen background with sprites on top) with cocos2d and it works great. (Assuming 60 fps is fast enough for you.) You may have your reasons for doing it yourself, but if you can, I would highly suggest at least doing a quick prototype with cocos2d and seeing if that doesn't help you along. (Details and source for the iPhone version are here: http://code.google.com/p/cocos2d-iphone/)
Thanks to everyone who provided info on this. All of the advice helped out in one way or another.
However, I wanted to make it clear that the main issue here turned out to be the behavior of the simulator itself (as implied by Andreas in his response). Once I was able to get the application on the device, it performed much, much better. I mention this because, prior to developing my game, I had seen a lot of posts indicating that the device was much slower than the simulator. This might be true in some instances (e.g. general application logic), but in my experience animation (particularly 3D transformations) is much faster on the device.
I don't have much experience with OpenGL ES, but this problem occurs generally.
Your idea about the 'color buffer' is good intuition: essentially you want to store your background in a framebuffer and load it directly into your rendering buffer before drawing the foreground.
In OpenGL this is fairly straightforward with Frame Buffer Objects (FBOs). Unfortunately I don't think OpenGL ES supports them, but it might give you somewhere to start looking.
You may want to try using VBOs (Vertex Buffer Objects) and see if that speeds things up. A tutorial is here.
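As a hedged example, here is what a static quad in a VBO could look like in OpenGL ES 1.1 (quadVbo and the quad size are placeholders):

```c
#include <OpenGLES/ES1/gl.h>

GLuint quadVbo;

/* Upload the static geometry once... */
void createQuadVbo(void)
{
    /* x,y pairs for a 320x480 quad, laid out as a triangle strip. */
    const GLfloat verts[8] = { 0, 0,  320, 0,  0, 480,  320, 480 };

    glGenBuffers(1, &quadVbo);
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

/* ...then draw from it every frame instead of re-submitting a client array. */
void drawQuadFromVbo(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, (const GLvoid *)0);  /* offset into the VBO */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```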
In addition, I just saw that since OpenGL ES 1.1 there is a function called glDrawTex (Draw Texture) that is designed for
fast rendering of background paintings, bitmapped font glyphs, and 2D framing elements in games
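Concretely, that is the GL_OES_draw_texture extension (glDrawTexfOES and friends); a minimal, hedged sketch for blitting a full-screen background might look like this (texture name and sizes are placeholders):

```c
#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>

void drawBackgroundWithDrawTex(GLuint backgroundTex)
{
    /* Source crop rectangle in texels: x, y, width, height. */
    GLint crop[4] = { 0, 0, 320, 480 };

    glBindTexture(GL_TEXTURE_2D, backgroundTex);
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);

    /* x, y, z in window coordinates, then on-screen width and height. */
    glDrawTexfOES(0.0f, 0.0f, 0.0f, 320.0f, 480.0f);
}
```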
You could use frame buffer objects similar to the GLPaint example from Apple.
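A rough sketch of that render-to-texture approach, using the OES_framebuffer_object entry points that GLPaint relies on (the power-of-two size, texture/framebuffer names, and the mainFramebuffer handle are assumptions for illustration):

```c
#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>

GLuint bgTexture, bgFramebuffer;

/* Render the static background into a 512x512 texture once, then reuse it. */
void createBackgroundTexture(GLuint mainFramebuffer)
{
    glGenTextures(1, &bgTexture);
    glBindTexture(GL_TEXTURE_2D, bgTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* empty render target */

    glGenFramebuffersOES(1, &bgFramebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, bgFramebuffer);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, bgTexture, 0);

    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)
            != GL_FRAMEBUFFER_COMPLETE_OES) {
        /* handle the error: the FBO could not be completed */
    }

    /* ...draw the composed background once, into bgTexture, here... */

    /* Rebind the view's main framebuffer for normal per-frame rendering. */
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, mainFramebuffer);
}
```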
Use a texture atlas to minimize the number of draw calls you make. You can use glTexCoordPointer to set the texture coordinates that map each image to its correct position. Remember to set your vertex buffer too. Ideally one draw call will render your entire 2D scene (see the sketch below).
Avoid enabling/disabling states where possible.
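As an illustration of that batching idea, here is a hedged OpenGL ES 1.1 sketch that collects several atlas sprites into one vertex/texcoord array and submits them with a single texture bind and a single draw call; MAX_SPRITES and the helper names are made up, and the atlas coordinates are assumed to be pre-normalised.

```c
#include <string.h>
#include <OpenGLES/ES1/gl.h>

#define MAX_SPRITES 64

static GLfloat batchVerts[MAX_SPRITES * 12];  /* 6 vertices * 2 floats per sprite */
static GLfloat batchUVs[MAX_SPRITES * 12];
static int     spriteCount = 0;

/* Append one sprite as two triangles; u/v are normalised atlas coordinates. */
void batchSprite(float x, float y, float w, float h,
                 float u0, float v0, float u1, float v1)
{
    GLfloat v[12] = { x, y,      x + w, y,      x, y + h,
                      x + w, y,  x + w, y + h,  x, y + h };
    GLfloat t[12] = { u0, v0,  u1, v0,  u0, v1,
                      u1, v0,  u1, v1,  u0, v1 };

    if (spriteCount >= MAX_SPRITES)
        return;  /* batch full; a real implementation would flush here */

    memcpy(&batchVerts[spriteCount * 12], v, sizeof(v));
    memcpy(&batchUVs[spriteCount * 12],   t, sizeof(t));
    spriteCount++;
}

/* One texture bind and one draw call for everything batched so far. */
void flushBatch(GLuint atlasTex)
{
    glBindTexture(GL_TEXTURE_2D, atlasTex);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, batchVerts);
    glTexCoordPointer(2, GL_FLOAT, 0, batchUVs);
    glDrawArrays(GL_TRIANGLES, 0, spriteCount * 6);
    spriteCount = 0;
}
```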