iPhone - Should I composite two images at runtime, or pre-render them at the cost of memory?

I am building a cocos2d iPhone game.
There will be 6 'enemy spaceship' sprites that vary only by colour, i.e. all the sprites will have the same shape, but some parts of the interior will have different colours.
My two options are:
1)
Create a template shape with a transparent interior.
At runtime, draw this shape on top of a small block of colour X.
The interior of the sprite will be colour X.
2)
Pre-render 6 different sprites
At run time, simply draw the sprite of a given colour.
What are the advantages and disadvantages of each method? Is there a best practice?
If I later wanted to animate the sprites, or dynamically change their colours, would this affect my choice of method?
Thanks!

I think first you need to figure out what it is that you're trying to do... Animation or a large number of color combinations make pre-rendering infeasible. On the other hand, pre-rendering makes sense if you have a large number of ships on-screen at the same time, because it cuts the number of drawing operations per ship in half.

Related

Different ways to detect size of image on mesh versus size of mesh

I'm creating a puzzle game that generates randomly sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel. I posted my solution to this HERE. However, this process adds a few seconds during loading which I'd like to avoid, and I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I could compare to the surface area of the mesh, but everything I find is about rendering outlines with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its path for the outline and the calculation of the surface area. I'm not sure, however, if this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices, Sprite.triangles may also be helpful.
2) You could also improve the performance of your first approach:
instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the array in a single for loop.
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
check only every 2nd or nth pixel, since that should be good enough for an approximation
limit the number of type casts

Unity - Avoid quad clipping or set rendering order

I am using Unity 5 to develop a game. I'm still learning, so this may be a dumb question. I have read about Depth Buffer and Depth Texture, but I cannot seem to understand if that applies here or not.
My setting is simple: I create a grid using several quads (40x40), which I use to snap buildings. Those buildings also have a base, made with quads. Every time I put one on the map, the quads overlap and they look like the picture.
As you can see, the red quad is "merging" with the floor (white quads).
How can I make sure Unity renders the red one first, and the white ones are background? Of course, I can change the red quad Y position, but that seems like the wrong way of solving this.
This is a common issue, called Z-Fighting.
Usually you can reduce it by reducing the range of “Clipping Planes” of the camera, but in your case the quads are at the same Y position, so you can’t avoid it without changing the Y position.
I don't know if it is an option for you, but if you use a SpriteRenderer (Unity 2D) you don’t have that problem, and you can just set “Sorting Layer” or “Order in Layer” if you want to modify the rendering order.

Why does merging geometries improve rendering speed?

In my web application I only need to add static objects to my scene. It ran slowly, so I started searching and found that merging geometries and merging vertices were the solution. When I implemented it, it indeed worked a lot better. All the articles said that the reason for this improvement is the decrease in the number of WebGL calls. As I am not very familiar with things like OpenGL and WebGL (I use Three.js to avoid their complexity), I would like to know why exactly it reduces the WebGL calls.
Because you send one large object instead of many little ones, the overhead is reduced. So I understand that loading one big mesh into the scene is faster than loading many small meshes.
BUT I do not understand why merging geometries also has a positive influence on the rendering calculation. I would also like to know the difference between merging geometries and merging vertices.
Thanks in advance!
three.js is a framework that helps you work with the WebGL API.
What three.js calls a "mesh" is, to WebGL, a series of low-level calls that set up state and issue work to the GPU.
Let's take a sphere for example. With three.js you would create it with a few lines:
var sphereGeometry = new THREE.SphereGeometry(10);
var sphereMaterial = new THREE.MeshBasicMaterial({color:'red'});
var sphereMesh = new THREE.Mesh( sphereGeometry, sphereMaterial);
myScene.add( sphereMesh );
You have your renderer.render() call, and poof, a sphere appears on screen.
A lot of stuff happens under the hood though.
The first line creates the sphere "geometry" - the CPU will do a bunch of math and logic to describe a sphere with points and triangles. Points are vectors, three floats grouped together; triangles are structures that group these points by indices (groups of integers).
Somewhere there is a loop that calculates the vectors using trigonometry (sin, cos), and another that weaves the resulting array of vectors into triangles (take every N, N + M, N + 2M, create a triangle, etc.).
Now these numbers exist in javascript land; it's just a bunch of floats and ints, grouped together in a specific way to describe shapes such as cubes, spheres and aliens.
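To make that concrete, here is a minimal sketch (not the actual three.js source) of the kind of loop a geometry constructor runs - trigonometry produces vertex positions, and a second pass groups those positions into triangles by index:
var segments = 8;
var positions = [];                    // flat list of floats: x, y, z, x, y, z, ...
positions.push(0, 0, 0);               // centre point of a small triangle fan
for (var i = 0; i <= segments; i++) {
  var angle = (i / segments) * Math.PI * 2;
  positions.push(Math.cos(angle), Math.sin(angle), 0);
}
var indices = [];                      // groups of three integers, one triangle each
for (var j = 1; j <= segments; j++) {
  indices.push(0, j, j + 1);           // centre, current rim point, next rim point
}
// Everything here is still plain javascript numbers; nothing has touched the GPU yet.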
You need a way to draw this construct on a screen - a two dimensional array of pixels.
WebGL does not actually know much about 3D. It knows how to manage memory on the gpu, how to compute things in parallel (or gives you the tools), it does know how to do mathematical operations that are crucial for 3d graphics, but the same math can be used to mine bitcoins, without even drawing anything.
In order for WebGL to draw something on screen, it first needs the data put into appropriate buffers, and it needs the shader programs. It needs to be set up for that specific call (is there going to be blending - transparency in three.js land - depth testing, stencil testing, etc.). Then it needs to know what it's actually drawing (so you need to provide strides, sizes of attributes, etc. to let it know where a 'mesh' actually is in memory), how it's drawing it (triangle strips, fans, points...) and what to draw it with - which shaders it will apply to the data you provided.
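As a rough sketch of what that looks like at the WebGL level (assuming gl is a WebGLRenderingContext and that a compiled shader program plus the positions and indices arrays from above already exist - three.js issues calls along these lines for every mesh):
gl.useProgram(program);                                              // which shaders to run
var positionBuffer = gl.createBuffer();                              // upload vertex data
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
var indexBuffer = gl.createBuffer();                                 // upload triangle indices
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);
var positionLocation = gl.getAttribLocation(program, 'position');    // describe the memory layout
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);  // 3 floats per vertex, tightly packed
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0); // the actual draw call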
So, you need a way to 'teach' WebGL to do 3d.
I think the best way to get familiar with this concept is to look at this tutorial, re-reading it if necessary, because it explains what happens to pretty much every single 3d object drawn in perspective, ever.
To sum up the tutorial:
a perspective camera is basically two 4x4 matrices - a perspective matrix, which puts things into perspective, and a view matrix, which moves the entire world into camera space. Every camera you make consists of these two matrices.
Every object exists in its object space. A TRS (translate-rotate-scale) matrix (the world matrix in three.js terms) is used to transform this object into world space.
So this stuff - a concept such as the "projection matrix" - is what teaches WebGL how to draw perspective.
Three.js abstracts this further and gives you things like "field of view" and "aspect ratio" instead of left right, top bottom.
Three.js also abstracts the transformation matrices (view matrix on the camera, and world matrices on every object) because it allows you to set "position" and "rotation" and computes the matrix based on this under the hood.
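For example (regular three.js API, arbitrary numbers): you hand three.js human-friendly values and it builds the matrices WebGL needs.
var camera = new THREE.PerspectiveCamera(
  60,                                        // field of view, in degrees
  window.innerWidth / window.innerHeight,    // aspect ratio
  0.1,                                       // near clipping plane
  1000                                       // far clipping plane
);                                           // -> camera.projectionMatrix (the perspective matrix)
camera.position.set(0, 5, 20);               // -> feeds the view matrix
sphereMesh.position.set(3, 0, 0);            // position + rotation + scale ...
sphereMesh.rotation.y = Math.PI / 4;         // ... become the mesh's world matrix,
                                             // computed for you during renderer.render()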
Since every mesh has to be processed by the vertex shader and the pixel shader in order to appear on the screen, every mesh needs to have all this information available.
When a draw call is being issued for a specific mesh, that mesh will have the same perspective matrix and view matrix as any other object being rendered with the same camera. They will each have their own world matrices - numbers that move them around your scene.
This is transformation alone, happening in the vertex shader. These results are then rasterized, and go to the pixel shader for processing.
Let's consider two materials - black plastic and red plastic. They will have the same shader, perhaps one you wrote using THREE.ShaderMaterial, or maybe one from three's library. It's the same shader, but it has one uniform value exposed - color. This allows you to have many instances of a plastic material - green, blue, pink - but it means that each of these requires a separate draw call.
WebGL will have to issue specific calls to change that uniform from red to black, and then it's ready to draw stuff using that 'material'.
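Roughly, "switching materials" boils down to something like this per object (again assuming the gl, program and buffers from the earlier sketch, and a shader that exposes a uniform named color as described above):
var colorLocation = gl.getUniformLocation(program, 'color');
gl.uniform3f(colorLocation, 1.0, 0.0, 0.0);                          // red plastic
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0); // draw object one
gl.uniform3f(colorLocation, 0.0, 0.0, 0.0);                          // black plastic
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0); // draw object two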
So now imagine a particle system, displaying a thousand cubes each with a unique color. You have to issue a thousand draw calls to draw them all, if you treat them as separate meshes and change colors via a uniform.
If, on the other hand, you assign vertex colors to each cube, you don't rely on the uniform any more, but on an attribute. Now if you merge all the cubes together, you can issue a single draw call, processing all the cubes with the same shader.
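A hedged sketch of that second approach, assuming a BufferGeometry-based BoxGeometry and that three's BufferGeometryUtils helper is loaded (it lives in the examples, not the core build, and the merge function is named mergeGeometries or mergeBufferGeometries depending on your version):
var geometries = [];
for (var i = 0; i < 1000; i++) {
  var box = new THREE.BoxGeometry(1, 1, 1);
  box.translate(Math.random() * 100, 0, Math.random() * 100);        // bake the position in
  var color = new THREE.Color(Math.random(), Math.random(), Math.random());
  var colors = [];
  for (var v = 0; v < box.attributes.position.count; v++) {
    colors.push(color.r, color.g, color.b);                          // bake the color in, per vertex
  }
  box.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
  geometries.push(box);
}
var merged = THREE.BufferGeometryUtils.mergeGeometries(geometries);
var material = new THREE.MeshBasicMaterial({ vertexColors: true });  // reads the attribute, no per-object uniform
myScene.add(new THREE.Mesh(merged, material));                       // 1000 cubes, a single draw call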
You can see why this is more efficient simply by taking a glance at WebGLRenderer from three.js and all the stuff it has to do in order to translate your 3d calls to WebGL. Better done once than a thousand times.
Back to those 3 lines: sphereMaterial can take a color argument; if you look at the source, this will translate to a uniform vec3 in the shader. However, you can also achieve the same thing with vertex colors, by assigning the color you want beforehand.
sphereMesh will wrap that computed geometry into an object that three's WebGLRenderer understands, which in turn sets up WebGL accordingly.

Can Core Animation and CALayer and CATransformLayer show an object with "thickness"?

It seems that we can show layers and even use a different zPosition for different layers in Core Animation -- however, is it true that there is no easy way to show something with some thickness?
For example, a slice of cheese with a 2mm thickness, or a push button or a coin that is tilted and therefore shows a 1mm thickness? Somehow the thickness has to be suggested by adding yet another layer to imitate it? So this 2.5D is a more basic 2.5D, where it is a 3D world limited to flat 2D images... unlike other things that are sometimes also called 2.5D, such as some RPG games (like Diablo), where objects (such as a building) can actually have width, length, and height (thickness). Those are actually quite 3D to me... except that most objects sit on a 2D map that is tilted sideways.
So back to the question... is it true that in iOS, it is fairly limited to a 3D world of flat 2D images, and going to any width x length x height will require going into OpenGL / CAEAGLLayer?
Yes it's true. Core Animation does 3D animation of 2D objects (layers). You can simulate thickness by building a complex assembly of objects, where you add layers for the edges of your object, but it's a pain.
OpenGL is a much better platform for doing 3D.

What is better: one big sprite or many small ones?

I'm new to game programming, and I have a question. I want a dotted circle to be drawn on the screen. I can use one big sprite (for example 256x256 pixels) which contains the whole circle, or I can use many small sprites representing the dots.
I use the cocos2d libs and I'm able to render using a batch. So what is the best way to perform such a task?
In my opinion your best bet (if all the dots are the same) is to have one sprite of the dot, and repeat it in the shape you are looking for.
Generally you'll want a single asset for each unique graphic. You can combine those assets into a single sprite sheet and reuse them. This allows for more flexibility as well as speed.
Most of today's graphics hardware is optimized for texture dimensions that are a power of two, and your sprites are likely to have other dimensions. By combining sprites you can minimize the padding that is needed to fill this space (and thus minimize the CPU/GPU cycles spent on correcting this internally). Besides that, the file size will be smaller, since you need less overhead and compression is likely to be more effective.
Go with one large sprite. It's fewer calls into the rendering engine, and adds flexibility to change the look (for example, if you decide to have the circle made of dashed lines rather than dots).