There does not seem to be a simple way to apply an affine transformation to a node in SpriteKit. (For example, in VB I am used to setting a transform matrix as a property of e.Graphics.)
I've tried to look up how to do it, but the only answer I can find is this:
SpriteKit missing linear transformation matrices
However, that answer seems very complex for what I am trying to achieve, and perhaps it is outdated. Is there a simple way of applying a transformation matrix to any SKNode?
While SpriteKit is likely a thin wrapper around parts of Core Animation (which does have affine transformations), the 3D matrix capabilities of Core Animation have not been brought over.
This is why the answer you linked is complex: its author is "faking" the result of a 3D transformation by using a filter.
Your best option, while staying with SpriteKit, is to bring in SceneKit and render your SpriteKit content onto SceneKit objects/planes, which have full 3D transformation abilities.
However, although these frameworks were designed to work together in this manner, there are many bugs and issues, very few people using the combination, and even fewer working on it at Apple. So it's not necessarily stable, nor is it easy to find guidance for your particular use.
Here's a starting point, point 3, using SpriteKit scenes as materials in SceneKit:
http://code.tutsplus.com/tutorials/combining-the-power-of-spritekit-and-scenekit--cms-24049
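For a flavor of what that looks like, here is a minimal sketch of the material trick from that tutorial; the scene size, color, and rotation below are made-up values for illustration:

```swift
import SceneKit
import SpriteKit

// Render a SpriteKit scene into the material of a SceneKit plane,
// then transform the plane in full 3D.
let skScene = SKScene(size: CGSize(width: 512, height: 512))
skScene.backgroundColor = .blue

let plane = SCNPlane(width: 5, height: 5)
plane.firstMaterial?.diffuse.contents = skScene   // SKScene as a live texture

let planeNode = SCNNode(geometry: plane)
planeNode.eulerAngles.x = -.pi / 6                // any 3D transform now applies
```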
I have been beating my head against this for a while, but I still can't find a way to set up a matrix that projects my Unity game in a Tibia-esque manner:
Reading tutorials on the internet, I could figure out how a normal orthographic projection works, but Tibia's is kind of odd.
Digging around the web, I found a post by Clint Bellanger that describes really well how to get the same perspective in Blender's renderer:
Start with a scene in 45 degree isometric. Video game style, where
the camera angle is Blender (60,0,45).
In Blender if you look at Buttons Window -> Scene -> Render Buttons ->
Format, you can set the render aspect ratio. Set AspY to half of
AspX. This is the same as taking regular rendered output and scaling
X by 50%. If you rendered a cube, the top of the cube will be a
perfect square (though at a 45 degree angle).
We can then use Blender nodes to rotate the result 45 degrees. The output: [image of the rotated render]
Note this started as a cube, so there's a lot of "vertical"
distortion. So you might have to scale meshes to 50% Z before using
this method. Also notice the Edge seems to be applied after the
Aspect, so the edge isn't distorted.
Blend file: http://clintbellanger.net/images/temp/UltimaVII.blend (I'm
a Nodes noob so there might be a smarter setup).
For kicks, here is that tower again. I pulled it into the above
workflow scene and scaled Z by 50%. Click "Re-render this layer" on
the first node to create the composite.
In his method he used things like rescaling the render and changing the scale of the models; I'm convinced I could get by with just the 4x4 matrix in Unity (or in any other 3D environment, really).
I hope someone more experienced with the quirks of 3D math can help me figure it out. Thank you! =D
What you ask for is a simple parallel projection. The typical orthographic projection is just a special case where the projection rays are perpendicular to the image plane. However, every parallel projection can be represented by an affine shear transformation followed by a standard orthogonal projection.
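In matrix terms (a sketch using the column-vector convention; $s_x$ and $s_y$ set the direction of the parallel rays):

$$P = O\,S, \qquad S = \begin{pmatrix} 1 & 0 & s_x & 0 \\ 0 & 1 & s_y & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

so that the shear maps $(x, y, z)$ to $(x + s_x z,\; y + s_y z,\; z)$ before the orthographic projection $O$ flattens out $z$.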
I'm convinced I could get by with just the 4x4 matrix in Unity (or in any other 3D environment, really).
Yes. Using default GL conventions here, all you have to do is take the standard ortho matrix, post-multiply it by an appropriate shear matrix, and use the result as the projection matrix.
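Here's a minimal sketch of that construction. It's written with Swift's simd types for concreteness, but the same 4x4 drops straight into Unity's Camera.projectionMatrix; the frustum bounds and shear factors are made-up values:

```swift
import simd

// Standard GL-style orthographic projection matrix.
func ortho(l: Float, r: Float, b: Float, t: Float, n: Float, f: Float) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.0.x = 2 / (r - l)
    m.columns.1.y = 2 / (t - b)
    m.columns.2.z = -2 / (f - n)
    m.columns.3 = simd_float4(-(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1)
    return m
}

// Shear x and y as a function of eye-space z, tilting the projection rays.
func shear(x sx: Float, y sy: Float) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.2.x = sx   // x' = x + sx * z
    m.columns.2.y = sy   // y' = y + sy * z
    return m
}

// Post-multiply: the shear is applied to vertices first, then the ortho projection.
let projection = ortho(l: -10, r: 10, b: -10, t: 10, n: 0.1, f: 100) * shear(x: 0, y: 1)
```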
A recent question here made me think of SceneKit again, and I remembered a problem I never solved.
My app displays antenna designs using SceneKit. Most antennas use metal rods and mesh reflectors, so I used SCNCylinder for the rods, SCNPlane for the reflector, and SCNFloor for the ground. The whole thing took a couple of hours, and I'm an utter noob at 3D.
But some antennas use wires bent into arcs or helices, and I punted here and made crappy segmented objects using several cylinders end-to-end. It looks ass-tastic.
Ideally I would like a single object that renders the arc or helix with a circular cross section. Basically SCNTorus, but with a start and end angle. This post talks about using a UIBezierPath in SceneKit, but it uses extrusion to produce a ribbon-like shape. Is there a way to do something similar but with a circular cross section (like a partial SCNTorus)?
I know I can make a custom shape by creating the vertices (and normals and such), but I'm hoping I've missed a simpler solution.
An arc you can do with SCNShape. Start with the technique from my other answer to get an extruded, ribbon-like arc. You'll want to make sure that the part where your path traces back on itself is offset by a distance the same as your extrusion depth, so you end up with a shape that's square in cross section.
To make it circular in cross section, use the chamferProfile property — give it a path that's a quarter circle, and set the chamfer radius equal to half the extrusion depth, and the four quarter-circle chamfers will meet, forming a circular cross section.
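Here's a rough sketch of both steps, assuming iOS/UIBezierPath; the radius, arc angles, and thickness are invented, and the quadratic curve only approximates a quarter circle:

```swift
import SceneKit
import UIKit

let thickness: CGFloat = 0.4   // ribbon width; must equal the extrusion depth

// Trace the outer arc, then back along the inner arc, to get a ribbon path.
let path = UIBezierPath(arcCenter: .zero, radius: 2.0,
                        startAngle: 0, endAngle: .pi * 0.75, clockwise: true)
path.addArc(withCenter: .zero, radius: 2.0 - thickness,
            startAngle: .pi * 0.75, endAngle: 0, clockwise: false)
path.close()

let shape = SCNShape(path: path, extrusionDepth: thickness)  // square cross section

// Round the square off: a roughly quarter-circle profile from (0,1) to (1,0),
// with the chamfer radius set to half the depth so the four chamfers meet.
let profile = UIBezierPath()
profile.move(to: CGPoint(x: 0, y: 1))
profile.addQuadCurve(to: CGPoint(x: 1, y: 0), controlPoint: CGPoint(x: 1, y: 1))
shape.chamferRadius = thickness / 2
shape.chamferProfile = profile
```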
A helix is another story. SCNShape takes a planar path — one that varies in only two dimensions — and extrudes it to make a three-dimensional solid. A helix is a path that varies in three dimensions to start with. SceneKit doesn't have anything that describes a shape in such terms, so there's no super simple answer here.
The shader modifier solution @HalMueller alludes to is interesting, but problematic. It's simple to use a modifier at the geometry entry point to make a simple bend: say, offset every y coordinate by some amount, even by an amount that's a function of y. But that's a one-dimensional transform, so you can't use it to wrap a wire around on itself. (It also changes the cross section.) And on top of that, shader modifiers happen on the GPU at render time, so their effects are an illusion: the "real" geometry in SceneKit's model is still a cylinder, so features like hit testing apply to that and not to the transformed geometry.
The best solution to making something like a helix is probably custom geometry — generating your own vertex data (SCNGeometrySource). The math for finding the set of points on a helix is pretty simple if you follow that shape's definition. To wrap a cross section around it, follow the Frenet formulas to create a local coordinate frame at each point on the helix. Then make an index buffer (SCNGeometryElement) to stitch all those points into a surface with triangles or tristrips. (Okay, that's a lot of hand-waving around a deep topic, but a full tutorial is too big for an SO answer. This should be enough of a breadcrumb to get started, though...)
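To make the breadcrumb slightly more concrete, here's a sketch of just the first step: sampling the helix and packing the samples into SCNGeometrySource/SCNGeometryElement. It draws the centerline as line segments; sweeping the circular cross section along it with Frenet frames is the part left as an exercise. All the constants are made-up:

```swift
import Foundation
import SceneKit

let radius: Float = 1.0
let pitch: Float = 0.05     // vertical rise per radian
let turns: Float = 4
let samples = 200

// Points on the helix: (r cos t, pitch * t, r sin t).
var vertices: [SCNVector3] = []
for i in 0...samples {
    let t = Float(i) / Float(samples) * turns * 2 * .pi
    vertices.append(SCNVector3(radius * cos(t), pitch * t, radius * sin(t)))
}

// One geometry source for positions, one element for connectivity.
let source = SCNGeometrySource(vertices: vertices)
var indices: [Int32] = []
for i in 0..<samples {
    indices.append(Int32(i))
    indices.append(Int32(i + 1))
}
let element = SCNGeometryElement(indices: indices, primitiveType: .line)
let helix = SCNGeometry(sources: [source], elements: [element])
```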
Here are some starting points that might help.
One approach would be to use more cylinders and make them shorter. That's the same idea behind the various segmentCount properties on the SCNGeometry primitives. Can we see a screenshot of the current linked cylinders version?
If you increase the heightSegmentCount, you could use the approach outlined here: scenekit, how to bend an object.
I just took a look at SCNShape. I was thinking you could use a shader modifier to warp the extruded shape into a circular cross section. But SCNShape doesn't seem to expose a segment count property, which I think you'd need to create enough extrusion segments for a good look. The chamferRadius and chamferProfile properties look interesting. I wonder if you could use those to create an extrusion that looks good.
As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3d images based upon 3d primitives that are defined by vertices. This is not the only way to render images with OpenGL but it is the most common. The technique that you describe sounds much more like Ray-Tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, what it does is take the positions of each of your vertices, and converts them into a different space using linear algebra (Matrices).
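To make that concrete, here's a sketch (in Swift/simd, since the math is the same everywhere) of roughly the xyPixelPos_for_Vector3 function you describe; the function name and viewport size are assumptions:

```swift
import simd

// The matrix multiply is the programmable (vertex shader) part; the
// perspective divide and viewport mapping are fixed stages you don't control.
func pixelPosition(of v: simd_float3,
                   modelViewProjection mvp: simd_float4x4,
                   viewport: simd_float2) -> simd_float2 {
    let clip = mvp * simd_float4(v, 1)               // vertex shader output
    let ndc = simd_float2(clip.x, clip.y) / clip.w   // perspective divide
    return (ndc * 0.5 + 0.5) * viewport              // map [-1,1] to pixels
}                                                    // (GL: origin at bottom-left)

// e.g. pixelPosition(of: p, modelViewProjection: proj * view * model,
//                    viewport: simd_float2(800, 600))
```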
If you want to start playing around with this, it would be best to do some reading on Matrices, to understand what the graphics card is doing.
You can easily start warping the positions of Vertices by making a vertex shader. However, there is some setup involved. See the Lighthouse tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/), to create a fully functioning vertex shader which includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space': world coordinates are very different from eye coordinates!
I am working on making a sprite class in OpenGL ES 2.0 and have succeeded up to a point. Currently the sprite has a render method that's called by the render method in my EAGL layer at intervals. I was creating a new vertex buffer and index buffer every time render was called, but that isn't efficient, so I deleted them each frame with glDeleteBuffers. Unfortunately, when I do that the frame rate slows down significantly.
So currently I have the VBO and IBO created at initialization, which works fine in terms of frame rate and memory consumption but is unable to update position.
I'm at a bit of a loss as I'm just beginning with OpenGL; any help is appreciated.
Typically you want to create your sprite with VBOs and IBOs once, located at the model origin. To translate, rotate, and scale, you would then use the model matrix to transform your sprite into a desired location.
I'm fairly certain the iPhone SDK provides some nice functions to do that, but I don't know them off-hand. :) Basically, in your shader you take your position coordinates and multiply them by one or more matrices. One of those is the model matrix, which you can set to a translation, rotation, scale, or any combination of those (in fact, it can be any matrix you want, and it will produce different results).
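As a sketch of that idea (using Swift's simd types; the helper names and values are made-up): the buffers never change, only the matrix you upload each frame does.

```swift
import Foundation
import simd

func translation(_ t: simd_float3) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.3 = simd_float4(t, 1)
    return m
}

func rotationZ(_ angle: Float) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.0 = simd_float4(cos(angle), sin(angle), 0, 0)
    m.columns.1 = simd_float4(-sin(angle), cos(angle), 0, 0)
    return m
}

func scale(_ s: Float) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.0.x = s
    m.columns.1.y = s
    return m
}

// Translate * rotate * scale: the scale is applied to the vertex first.
// Upload `model` as a shader uniform each frame; the VBO stays untouched.
let model = translation(simd_float3(2, 1, 0)) * rotationZ(.pi / 4) * scale(1.5)
let corner = model * simd_float4(0.5, 0.5, 0, 1)   // a corner of a unit sprite
```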
There's a lot of resources out there that explain these transformation matrices. Here's one for instance:
http://db-in.com/blog/2011/04/cameras-on-opengl-es-2-x/
My advice is to find a tutorial that speaks at your level of understanding and learn from there...
What's the difference between the two?
I'm sure they have pros and cons, and situations in which each performs better.
Any resources that compare the two?
Is one better for animation (I imagine CATransform3D)? Why?
Also, I think I read somewhere that text clarity can be an issue; is one better at scaling text?
As MSN said, they are used in different cases. CGAffineTransform is used for 2-D manipulation of NSViews, UIViews, and other 2-D Core Graphics elements.
CATransform3D is a Core Animation structure that can do more complex 3-D manipulations of CALayers. CATransform3D has the same internal structure as an OpenGL model view matrix, which makes sense when you realize that Core Animation is built on OpenGL (CALayers are wrappers for OpenGL textures, etc.). I've found that this similarity of internal structure, combined with some nice helper functions that Apple provides, can let you do some neat OpenGL optimizations, as I post here.
When it comes down to choosing which to use, ask yourself whether you're going to work with views directly in 2-D space (CGAffineTransform) or with the underlying Core Animation layers in 3-D (CATransform3D). I use CATransform3D more frequently, but that's because I spend a lot of time with Core Animation.
One is for 2-D transformations; the other is for three-dimensional, projected transformations. At least, that's what I could glean from the documentation.
If you don't need to render 3-D content projected onto the screen, use the affine transform; otherwise, use the 3-D transform. The 3-D transform is essentially a 4x4 matrix, while the 2-D affine one stores a 3x2 matrix (a 3x3 whose last column is implied).
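As a quick illustration of the difference (a sketch; the angle and the perspective term are arbitrary values):

```swift
import UIKit

let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))

// 2-D: set the affine transform on the view itself.
view.transform = CGAffineTransform(rotationAngle: .pi / 4)

// 3-D: set the transform on the underlying Core Animation layer. The m34
// entry adds perspective, which a CGAffineTransform cannot express.
var t = CATransform3DMakeRotation(.pi / 4, 0, 1, 0)   // rotate about the y axis
t.m34 = -1 / 500
view.layer.transform = t
```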