What's the difference between the two?
I'm sure they have pros and cons, and situations they are better performers in.
Any resources that compare the two?
Is one better for animation (I imagine the CATransform3D)? Why?
Also, I think I read somewhere that text clarity can be an issue; is one better at scaling text?
As MSN said, they are used in different cases. CGAffineTransform is used for 2-D manipulation of NSViews, UIViews, and other 2-D Core Graphics elements.
CATransform3D is a Core Animation structure that can do more complex 3-D manipulations of CALayers. CATransform3D has the same internal structure as an OpenGL model view matrix, which makes sense when you realize that Core Animation is built on OpenGL (CALayers are wrappers for OpenGL textures, etc.). I've found that this similarity of internal structure, combined with some nice helper functions that Apple provides, can let you do some neat OpenGL optimizations, as I describe here.
When it comes down to choosing which to use, ask yourself whether you're going to work with views directly in a 2-D space (CGAffineTransform) or with the underlying Core Animation layers in 3-D (CATransform3D). I use CATransform3D more frequently, but that's because I spend a lot of time with Core Animation.
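To make that split concrete, here's a minimal Swift sketch (the view, frame, angles, and perspective term are made up for illustration): the same view gets a 2-D CGAffineTransform on the view itself and a CATransform3D on its layer.

```swift
import UIKit

// A throwaway view for the sketch; frame and angles are arbitrary.
let someView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))

// 2-D: rotate and scale the view itself with a CGAffineTransform.
someView.transform = CGAffineTransform(rotationAngle: .pi / 4).scaledBy(x: 1.5, y: 1.5)

// 3-D: rotate the view's layer around the y-axis with a CATransform3D,
// adding a simple perspective term in m34 so the rotation reads as depth.
var perspective = CATransform3DIdentity
perspective.m34 = -1.0 / 500.0
someView.layer.transform = CATransform3DRotate(perspective, .pi / 4, 0, 1, 0)
```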
One is for linear 2d transformations, the other is for three dimensional projected transformations. At least that's what I could glean from the documentation.
If you don't need to render 3d projected onto the screen, use the affine transform. Otherwise, use the 3d transform. The 3d transform is essentially a 4x4 matrix, while the 2d affine one is 3x2.
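For reference, here's a small Swift sketch (the translation values are arbitrary) showing the six stored values behind a CGAffineTransform versus the sixteen behind a CATransform3D.

```swift
import CoreGraphics
import QuartzCore

// The same translation expressed both ways.
let affine = CGAffineTransform(translationX: 10, y: 20)
// Six stored values (a 3x2 matrix; the implicit third column is [0 0 1]):
print(affine.a, affine.b, affine.c, affine.d, affine.tx, affine.ty)
// 1.0 0.0 0.0 1.0 10.0 20.0

let transform3D = CATransform3DMakeTranslation(10, 20, 0)
// Sixteen stored values (a full 4x4 matrix, m11...m44); the translation
// lives in the last row:
print(transform3D.m41, transform3D.m42, transform3D.m43, transform3D.m44)
// 10.0 20.0 0.0 1.0
```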
There does not seem to be a simple way to apply an affine transformation to a node in SpriteKit. (For example, in VB, I am used to setting a transform matrix as a property of e.graphics)
I've tried to look up how to do it, but the only answer I can find is this:
SpriteKit missing linear transformation matrices
However, the answer seems to be very complex for what I am trying to achieve, and perhaps it is outdated? Is there a simple way of applying a transformation matrix to any SKNode?
Whilst SpriteKit is likely a tight wrapper around some of Core Animation (which does have affine transformations), the 3D matrix capabilities of Core Animation have not been brought over.
This is why the example you found is complex: the author is "faking" the result of a 3D transformation by using a filter.
Your best option (while staying with SpriteKit) is to use SceneKit and render your SpriteKit content onto SceneKit objects/planes, which have full 3D transformation abilities...
However, whilst these frameworks have been designed to work together in this manner, there are many bugs and issues, very few people doing it, and even fewer working on it at Apple. So it's not necessarily stable, nor easy to find guidance on doing it your way.
Here's a starting point, point 3, using SpriteKit scenes as materials in SceneKit:
http://code.tutsplus.com/tutorials/combining-the-power-of-spritekit-and-scenekit--cms-24049
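As a rough Swift sketch of that idea (the scene size, plane size, and rotation are placeholder values), a SpriteKit scene can be used as the diffuse contents of a SceneKit material, and the hosting node then takes a full 3-D transform:

```swift
import SceneKit
import SpriteKit

// "skScene" stands in for whatever SpriteKit scene holds your sprites.
let skScene = SKScene(size: CGSize(width: 512, height: 512))

// Texture a SceneKit plane with the live SpriteKit scene...
let plane = SCNPlane(width: 5, height: 5)
plane.firstMaterial?.diffuse.contents = skScene
plane.firstMaterial?.isDoubleSided = true

// ...and the hosting node can then be given any 3-D transform.
let node = SCNNode(geometry: plane)
node.transform = SCNMatrix4MakeRotation(.pi / 6, 0, 1, 0)
```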
As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3D images based upon 3D primitives that are defined by vertices. This is not the only way to render images with OpenGL, but it is the most common. The technique that you describe sounds much more like ray tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, it takes the position of each of your vertices and converts it into a different space using linear algebra (matrices).
If you want to start playing around with this, it would be best to do some reading on Matrices, to understand what the graphics card is doing.
You can easily start warping the positions of vertices by writing a vertex shader. However, there is some setup involved. See the Lighthouse3D tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/) to create a fully functioning vertex shader that includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space': world coordinates are very different from eye coordinates!
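If it helps to see the pipeline described above as plain code, here's a CPU-side Swift/simd sketch of what a vertex shader does with its matrices; the function names and the warp in the second function are made up for the example.

```swift
import Foundation
import simd

// Carry a vertex from model space to clip space through the usual matrix chain.
func clipSpacePosition(of vertex: SIMD3<Float>,
                       model: simd_float4x4,
                       view: simd_float4x4,
                       projection: simd_float4x4) -> SIMD4<Float> {
    let v = SIMD4<Float>(vertex.x, vertex.y, vertex.z, 1)
    let world = model * v        // model space -> world space
    let eye   = view * world     // world space -> eye (camera) space
    return projection * eye      // eye space   -> clip space
}

// A custom distortion is just extra math inserted somewhere in that chain,
// e.g. bending eye-space positions before projection.
func warpedClipSpacePosition(of vertex: SIMD3<Float>,
                             model: simd_float4x4,
                             view: simd_float4x4,
                             projection: simd_float4x4) -> SIMD4<Float> {
    var eye = view * model * SIMD4<Float>(vertex.x, vertex.y, vertex.z, 1)
    eye.y += 0.1 * sin(eye.x)    // arbitrary warp, purely for illustration
    return projection * eye
}
```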
I am working on making a sprite class in OpenGL ES 2.0 and have succeeded up to a point. Currently I have a render method for the sprite, and it's called by the render method in my EAGL layer at intervals. I was creating a new vertex buffer and index buffer every time render was called, but that isn't efficient, so I called glDeleteBuffers to free them each frame. Unfortunately, when I do that the frame rate slows down significantly.
So currently I create the VBO and IBO at initialization, which works fine in terms of frame rate and memory consumption, but then I'm unable to update the sprite's position.
I'm at a bit of a loss as I'm just beginning with OpenGL; any help is appreciated.
Typically you want to create your sprite with VBOs and IBOs once, located at the model origin. To translate, rotate, and scale, you would then use the model matrix to transform your sprite into a desired location.
I'm fairly certain the iPhone SDK provides some nice functions to do that, but I don't know any of them :) Basically, in your shader, you take your position coordinates and multiply them by one or more matrices. One of those matrices is the model matrix, which you can set to a translation, rotation, scale, or any combination of those (in fact, it can be any matrix you want and it will produce different results).
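As a hedged sketch of that approach in Swift with GLKit (the uniform name `u_model`, the program handle, and the sprite's position/rotation/scale are assumptions for the example), you rebuild only a small model matrix per frame and upload it, leaving the VBO/IBO untouched:

```swift
import GLKit
import OpenGLES

// Assumptions: "program" is your linked GLES 2.0 program and the vertex
// shader declares "uniform mat4 u_model;".
let program: GLuint = 0            // placeholder; use your real program handle
let spritePosition = GLKVector2Make(120, 80)
let spriteRotation: Float = .pi / 6
let spriteScale: Float = 2

// Compose the model matrix (applied to vertices as scale, then rotate, then translate).
var model = GLKMatrix4MakeTranslation(spritePosition.x, spritePosition.y, 0)
model = GLKMatrix4Rotate(model, spriteRotation, 0, 0, 1)
model = GLKMatrix4Scale(model, spriteScale, spriteScale, 1)

// Upload it each frame instead of rebuilding the buffers.
let modelLoc = glGetUniformLocation(program, "u_model")
withUnsafePointer(to: &model.m) {
    $0.withMemoryRebound(to: GLfloat.self, capacity: 16) { ptr in
        glUniformMatrix4fv(modelLoc, 1, GLboolean(GL_FALSE), ptr)
    }
}
```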
There's a lot of resources out there that explain these transformation matrices. Here's one for instance:
http://db-in.com/blog/2011/04/cameras-on-opengl-es-2-x/
My advice is to find a tutorial that speaks at the same level as your understanding and learn from there...
I'd like to fill the background of my app with animated clouds. I did some research and stumbled upon the Perlin noise algorithm, which seems to be a good fit. However, even in a first test it was extremely expensive to generate a 512x512 (2D) cloud map. I tried simplex noise, but that didn't fix it.
According to http://freespace.virgin.net/hugo.elias/models/m_clouds.htm generating clouds is done by adding several Perlin/simplex noise maps together. That's impossible to do on an iPhone in my app: I need fluid graphics (my optimistic expectation is 60 FPS on an A4).
So my question: Is there a lighter algorithm to generate animated clouds that does not make my frame rate drop too much?
Thanks in advance!
Paul
Unless all you're doing is generating clouds, you'll definitely want them precomputed. Perlin noise can make for nice 2D animations by traversing a set of 3D data, but you could just scroll a 2D image of some noise, or a fractal such as one generated by the diamond-square algorithm. Either way, you should probably precompute it.
If you want some more variation, I would experiment with putting a noise filter over the precomputed clouds.
Pre-generate the clouds and create 2D sprites using Core Animation or otherwise. You can then animate these around cheaply. You may not get 60 fps, but you should get close, depending on how complex the movement is and what other animations are going on at the time. Either way, it's going to be faster than generating clouds yourself.
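Here's a minimal Swift/Core Animation sketch of that approach; the asset name, layer size, and drift values are invented for the example:

```swift
import UIKit

// "cloud" is an assumed asset name for a precomputed cloud/noise texture;
// containerView stands in for whatever background view the app already has.
let containerView = UIView(frame: UIScreen.main.bounds)
let cloudImage = UIImage(named: "cloud")

let cloudLayer = CALayer()
cloudLayer.contents = cloudImage?.cgImage
cloudLayer.frame = CGRect(x: -256, y: 80, width: 256, height: 128)
containerView.layer.addSublayer(cloudLayer)

// Let Core Animation drift the sprite across the screen; no per-frame CPU work.
let drift = CABasicAnimation(keyPath: "position.x")
drift.byValue = 600
drift.duration = 30
drift.repeatCount = .infinity
cloudLayer.add(drift, forKey: "drift")
```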
I'd like to hear what people think the optimal number of draw calls is for OpenGL ES (on the iPhone).
Specifically, I've read in many places that it is best to minimise the number of calls to glDrawArrays/glDrawElements; I think Apple said 10 should be the max in their recent WWDC presentation. As I understand it, to do this you need to put all the vertices into one array if possible, so you only need to make the glDrawArrays call once.
But I am confused, because this surely means you can't use the translate, rotate, and scale functions, since they would apply across the whole geometry. Which is fine, except doesn't that mean you need to pre-calculate every vertex position yourself, rather than getting OpenGL to do it?
Also, doesn't it mean you can't use any of the fan/strip settings unless you just have a continuous shape?
These drawbacks make me think I'm not understanding something correctly, so I guess I'm looking for confirmation that I should:
Be trying to make an uber array of all triangles to draw.
Resign myself to the fact I'll have to work out all the vertex positions myself.
Forget about push'ing and pop'ing each thing to draw into its desired location
Is that what others do?
Thanks
Vast question; batching is always a matter of compromise.
The ideal structure for performance would be, as you mention, one single array containing all the triangles to draw.
Starting from here, we can start adding constraints:
- One additional constraint is that having vertex indices in 16 bits saves bandwidth and memory, and is probably the fast path on your platform. So you could consider grouping triangles into chunks of 65536 vertices.
- Then, if you want to switch the shader/material/glState used to draw the geometry, you have no choice (*) but to emit one draw call per shader/material/glState. So you could group triangles by shaderID/materialID/glStateID.
- Next, if you want to animate things, you have no choice (*) but to transmit your transform matrix to GL and then issue a draw call. So you could group triangles by 'transform group': for example, all static geometry together, and animated geometry that shares a common transform can be grouped too.
In these cases, you'd have to transform the vertices yourself (on the CPU) before merging the meshes together.
Regarding triangle strips, you can transform any mesh into strips, even if it has discontinuities in its topology, by introducing degenerate triangles, so this is a technique that always applies.
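A tiny sketch of that trick, with made-up index values:

```swift
// Two strips that already live in the same VBO.
let stripA: [UInt16] = [0, 1, 2, 3]
let stripB: [UInt16] = [10, 11, 12, 13]

// Duplicate the last index of A and the first index of B. The duplicates
// produce zero-area (degenerate) triangles that the GPU discards, so both
// strips can be drawn with a single glDrawElements call. (If a strip has an
// odd vertex count, one more duplicate is needed to keep winding consistent.)
let merged = stripA + [stripA.last!, stripB.first!] + stripB
// merged == [0, 1, 2, 3, 3, 10, 10, 11, 12, 13]
```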
All in all, reducing draw calls is a game of compromises, some techniques might work well for a 3d model, while others may be more suited for other 3d models. IMHO, the key is to be creative and to carefully benchmark your application to see if your changes actually improve performance on your target platform.
HTH, cheers,
(*) Actually there are techniques that allow you to reduce the number of draw calls in these cases, such as:
- Texture atlases, which group different textures into a single one to avoid switching textures in GL, thus allowing draw calls to be combined (see the UV-remapping sketch below).
- (Pseudo) hardware instancing, which lets shaders fetch transforms from various sources to transform mesh instances in different ways.
- ...
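As a rough illustration of the texture-atlas point (the region coordinates are invented for the example), the per-sprite UVs simply get remapped into the sprite's sub-rectangle of the shared atlas:

```swift
import simd

// A made-up atlas region describing where one sprite's texture sits inside
// the shared atlas, in normalized (0..1) atlas coordinates.
struct AtlasRegion {
    var origin: SIMD2<Float>   // lower-left corner of the region
    var size: SIMD2<Float>     // width/height of the region
}

// Remap a sprite's own 0..1 UVs into its region of the atlas.
func remap(_ uv: SIMD2<Float>, into region: AtlasRegion) -> SIMD2<Float> {
    return region.origin + uv * region.size
}

// e.g. a sprite packed into the top-right quarter of the atlas:
let region = AtlasRegion(origin: SIMD2<Float>(0.5, 0.5), size: SIMD2<Float>(0.5, 0.5))
let corner = remap(SIMD2<Float>(1, 1), into: region)   // (1.0, 1.0)
```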