Mathematical movable mesh in Swift with SceneKit

I am a mathematician who wants to program a geometric game.
I have the exact coordinates, and math formulae, of a few meshes I need to display and of their unit normals.
I need only one texture (colored reflective metal) per mesh.
I need to have the user move pieces, i.e. change the coordinates of a mesh, again by a simple math formula.
So I don't need to import 3D files, but rather I can compute everything.
Imagine a kind of Rubik's Cube: the cube coordinates are computed, and cubelets are rotated by the user. I have the program functioning in Mathematica.
I am having a very hard time, for sleepless days now, trying to find exactly how to display a computed mesh in SceneKit - with each vertex and normal animated separately.
ANY working example of, say, a single triangle with computed coordinates (rather than a stock provided shape), displayed with animatable coordinates by SceneKit would be EXTREMELY appreciated.
I looked further, and it seems that individual points of a mesh may not be movable in SceneKit. One feature I like about SceneKit (unlike OpenGL) is that one can get the objects under the user's finger. Can one mix OpenGL and SceneKit together in a project?
I could take over from there....

Animating vertex positions individually is, in general, a tricky problem. But there are good ways to approach it in SceneKit.
A GPU really wants to have vertex data all uploaded in one chunk before it starts rendering a frame. That means that if you're continually calculating new vertex positions/normals/etc on the CPU, you have the problem of schlepping all that data over to the GPU every time even just part of it changes.
Because you're already describing your surface mathematically, you're in a good position to do that work on the GPU itself. If each vertex position is a function of some variable, you can write that function in a shader, and find a way to pass the input variable per vertex.
There are a couple of options you could look at for this:
Shader modifiers. Start with a dummy geometry that has the topology you need (number of vertices and how they're connected as polygons). Pass your input variable as an extra texture, and in your shader modifier code (for the geometry entry point), look up the texture, apply your function, and set the vertex position with the result. (A small sketch of the shader-modifier mechanism follows after this list.)
Metal compute shaders. Create a geometry source backed by a Metal buffer, then at render time, enqueue a compute shader that writes vertex data to that buffer according to your function. (There's skeletal code for part of that at the link.)
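For illustration, here's a minimal sketch of the shader-modifier mechanism. This is not the texture-lookup variant described in option 1; it just displaces each vertex with a time-based formula on the GPU, and someGeometry stands in for your dummy geometry:

// Sketch: a geometry-entry-point shader modifier that recomputes each vertex
// position on the GPU. u_time is a symbol SceneKit provides to shader modifiers.
let displace = """
    float3 p = _geometry.position.xyz;
    p.y += 0.1 * sin(u_time + p.x * 4.0);   // replace with your own formula
    _geometry.position.xyz = p;
    """
someGeometry.shaderModifiers = [.geometry: displace]

In the texture-lookup variant you'd instead bind your per-vertex input data as a custom texture or uniform (set via key-value coding on the material or geometry) and read that in the snippet rather than u_time.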
Update: From your comments it sounds like you might be in an easier position.
If what you have is geometry composed of pieces that are static with respect to themselves and move with respect to each other — like the cubelets of a Rubik's cube — computing vertices at render time is overkill. Instead, you can upload the static parts of your geometry to the GPU once, and use transforms to position them relative to each other.
The way to do this in SceneKit is to create separate nodes, each with its own (static) geometry for each piece, then set node transforms (or positions/orientations/scales) to move the nodes relative to one another. To move several nodes at once, use node hierarchy — make several of them children of another node. If some need to move together at one moment, and a different subset need to move together later, you can change the hierarchy.
Here's a concrete example of the Rubik's cube idea. First, creating some cubelets:
// convenience for creating solid color materials
func materialWithColor(color: NSColor) -> SCNMaterial {
    let mat = SCNMaterial()
    mat.diffuse.contents = color
    mat.specular.contents = NSColor.whiteColor()
    return mat
}

// create and arrange a 3x3x3 array of cubelets
var cubelets: [SCNNode] = []
for x in -1...1 {
    for y in -1...1 {
        for z in -1...1 {
            let box = SCNBox()
            box.chamferRadius = 0.1
            box.materials = [
                materialWithColor(NSColor.greenColor()),
                materialWithColor(NSColor.redColor()),
                materialWithColor(NSColor.blueColor()),
                materialWithColor(NSColor.orangeColor()),
                materialWithColor(NSColor.whiteColor()),
                materialWithColor(NSColor.yellowColor()),
            ]
            let node = SCNNode(geometry: box)
            node.position = SCNVector3(x: CGFloat(x), y: CGFloat(y), z: CGFloat(z))
            scene.rootNode.addChildNode(node)
            cubelets += [node]
        }
    }
}
Next, the process of doing a rotation. This is one specific rotation, but you could generalize this to a function that does any transform of any subset of the cubelets:
// create a temporary node for the rotation
let rotateNode = SCNNode()
scene.rootNode.addChildNode(rotateNode)

// grab the set of cubelets whose position is along the right face of the puzzle,
// and add them to the rotation node
let rightCubelets = cubelets.filter { node in
    return abs(node.position.x - 1) < 0.001
}
rightCubelets.map { rotateNode.addChildNode($0) }

// animate a rotation
SCNTransaction.begin()
SCNTransaction.setAnimationDuration(2)
rotateNode.eulerAngles.x += CGFloat(M_PI_2)
SCNTransaction.setCompletionBlock {
    // after animating, remove the cubelets from the rotation node,
    // and re-add them to the parent node with their transforms altered
    rotateNode.enumerateChildNodesUsingBlock { cubelet, _ in
        cubelet.transform = cubelet.worldTransform
        cubelet.removeFromParentNode()
        scene.rootNode.addChildNode(cubelet)
    }
    rotateNode.removeFromParentNode()
}
SCNTransaction.commit()
The magic part is in the cleanup after the animation. The cubelets start out as children of the scene's root node, and we temporarily re-parent them to another node so we can transform them together. Upon returning them to be the root node's children again, we set each one's local transform to its worldTransform, so that it keeps the effect of the temporary node's transform changes.
You can then repeat this process to grab whatever set of nodes are in a (new) set of world space positions and use another temporary node to transform those.
I'm not sure quite how Rubik's-cube-like your problem is, but it sounds like you can probably generalize a solution from something like this.
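For instance, here's a rough, untested sketch (the function name and parameters are mine) of generalizing the above into a helper that rotates any subset of nodes using a temporary parent node:

// Sketch: rotate an arbitrary subset of nodes around an axis via a temporary node.
func rotatePieces(pieces: [SCNNode], axis: SCNVector3, angle: CGFloat,
                  scene: SCNScene, duration: NSTimeInterval) {
    let rotateNode = SCNNode()
    scene.rootNode.addChildNode(rotateNode)
    for piece in pieces { rotateNode.addChildNode(piece) }

    SCNTransaction.begin()
    SCNTransaction.setAnimationDuration(duration)
    SCNTransaction.setCompletionBlock {
        // bake the temporary node's transform back into each piece
        rotateNode.enumerateChildNodesUsingBlock { piece, _ in
            piece.transform = piece.worldTransform
            piece.removeFromParentNode()
            scene.rootNode.addChildNode(piece)
        }
        rotateNode.removeFromParentNode()
    }
    rotateNode.rotation = SCNVector4(x: axis.x, y: axis.y, z: axis.z, w: angle)
    SCNTransaction.commit()
}

The right-face rotation above would then be something like rotatePieces(rightCubelets, axis: SCNVector3(x: 1, y: 0, z: 0), angle: CGFloat(M_PI_2), scene: scene, duration: 2).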

Related

How to drag SCNNode along specific axis after rotation?

I am currently working on Swift ARKit project.
I am trying to figure out how I can drag a node object along a specific axis even after some rotations. For example, I want to move the node along its Y axis after rotating it, but its axis directions stay the same, so even if I change the Y position it still moves along the world Y. SCNNode.localUp is static and returns SCNVector3(0, 1, 0), and as far as I can see there is no function for a node's local up. If I remember correctly, in Unity it was enough to increment the local axis value to drag after rotating.
Node object before rotation
Before applying any rotations, dragging the object only requires increasing or decreasing the value along the specific axis.
Node object after rotation
After rotating, the green Y axis rotates too, but when I increase or decrease the local Y value the object still moves along the world Y.
Sorry for my bad English. Thanks for your help.
Out of curiosity, how are you currently applying the rotation?
A straightforward way to achieve this without needing to dig into quaternion math would be to wrap your node in question inside a parent node, and apply those transformations separately. You can apply the rotation to the parent node, and then the drag motion along the axis to the child node.
If introducing this layer would be problematic outside of this operation, you can add/rotate/translate/remove as a single atomic operation, using node.convertPosition(_:to:) to interchange between local and world coordinates after you've applied all the transformations.
let parent = SCNNode()
rootNode.addChildNode(parent)
parent.simdPosition = node.simdPosition
// temporarily re-parent the node so the rotation is applied around the parent
parent.addChildNode(node)
node.simdPosition = .zero
parent.simdRotation = /../
node.simdPosition = simd_float3(0, localYAxisShift, 0)
// convert the result back into root-node space, then restore the original hierarchy
node.simdPosition = rootNode.simdConvertPosition(node.simdPosition, from: parent)
rootNode.addChildNode(node)
parent.removeFromParentNode()
I didn't test the above code, but the general approach should work. In general, compound motion as you describe is a bit more complex to do directly on the node itself, and under the hood SceneKit is doing all of that for you when using the above approach.
Edit
Here's a version that does the matrix transform directly rather than relying on the built-in node hierarchy conveniences.
let currentTransform = node.transform
let yShift = SCNMatrix4MakeTranslation(0, localYAxisShift, 0)
node.transform = SCNMatrix4Mult(yShift, currentTransform)
This should shift your object along the 'local' y axis. Note that matrix multiplication is non-commutative, i.e. the order of parameters in the SCNMatrix4Mult call is important (try reversing them to illustrate).
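To make that ordering concrete, here's a quick sketch reusing the names above (the worldShift name is mine); it follows from the point that the first operand is applied first, in the node's own space:

// Ordering illustration (sketch): the first operand is applied in the node's local space.
let localShift = SCNMatrix4Mult(yShift, currentTransform)   // translate along the node's own Y axis
let worldShift = SCNMatrix4Mult(currentTransform, yShift)   // translate along the parent's Y axis instead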

Cross section shader for box bounding using amplify Shader

I am trying to create a shader through Amplify Shader for a cube to cut through a plane or any mesh as a cross-section. I know that I should be using size, rotation and position for that, but what exactly to do with them I don't know. By that I mean I am new to Amplify Shader and to shader programming in general, so please don't provide shader code, as I need to keep it customizable for the future; please help me out with Amplify Shader nodes.
Currently I have this effect, but I want to make it box-bounding specific rather than based on plane normals.
I don't want this effect but the box effect shown below. That was achieved with a ray-marching approach, but I want to achieve it with Amplify Shader. Kindly guide me through this.
This is what I have done so far with the Amplify nodes:
Result:
Here is the result of doing the shader using "Amplify Shader":
Solution:
First we'll call the green cube the "intersector" and the red cube the "intersectee".
So, as you've done with the plane, the cutout works because the back face of the intersector is shown when it is inside the intersectee, and the intersectee's front face is shown when it is inside the intersector.
Create a shader (used by both cubes) and put it into two separate materials; apply an individual material to each cube. After this we can get into the actual shader node work.
First we need to make sure "Cull Mode" is off (Output Node > Cull Mode > Off). This ensures the back face is actually rendered. (This could be optimized by deciding, based on where the cube is relative to the intersector, whether back faces need to be drawn.)
Next we need to get the surface point in object space:
Most of the variables will be defined in script. The rotation matrix is used to rotate a point. However, it is inverted: the rotation matrix rotates the cube into world space, so inverting it rotates the world-space point into object space. We also get "_Cubepos", the position of the cube to intersect with (e.g. it would be the intersector if the shader is on the intersectee). This is subtracted from the world position because the rotation matrix rotates around the origin; afterwards it is added back so the point ends up in the correct place.
This leads to the next section, where "_CubeExtent" is added to and subtracted from "_Cubepos" to find the minimum and maximum extents.
Unfortunately, Amplify Shader has no good way to check whether a vector lies between two vectors, so we have to break it into components. (I encourage you to learn how to write shaders.) Each Compare With Range node returns 1 if the point in object space is within the extents for that axis; if one returns 0, the final multiply node ensures the overall output is 0.
Finally, we get to the last part of the shader. "_IsIntersector" is set in script to 1 or 0 depending on whether the cube in question is used to intersect or is an intersectee. Depending on that, we set the opacity mask here to 1 or 0.
After this we have to define the script to attach to each object. Add a new script and type the following in:
[ExecuteInEditMode]
public class SetVar : MonoBehaviour
{
    // Transform of the opposite cube
    public Transform intersectingCube;
    // Is this an intersector or an intersectee
    public bool isIntersector;
    // Material of the object
    public Material mat;

    // Start is called before the first frame update
    void Start()
    {
        // Get material
        mat = GetComponent<Renderer>().material;
    }

    // Update is called once per frame
    void OnRenderObject()
    {
        // Calculate rotation matrix
        Matrix4x4 m = Matrix4x4.TRS(-intersectingCube.position, intersectingCube.rotation, Vector3.one);
        // Set shader variables
        mat.SetMatrix("RotationMatrix", m);
        mat.SetVector("_Cubepos", intersectingCube.position);
        mat.SetVector("_CubeExtent", intersectingCube.localScale / 2.0f);
        mat.SetFloat("_IsIntersector", (isIntersector) ? 0 : 1);
    }
}
Then we can set the correct inspector values depending on whether the cube is an intersector or an intersectee. Here is an example for the intersector cube:
Make sure to have IsIntersector ticked or unticked depending on whether the cube is an intersector or not.
Here is a link to the shader: http://paste.amplify.pt/view/raw/4b248bc3. Also, doing this for any mesh is a very complicated operation, too complicated for nodes; learn to write shader code and use a raycasting algorithm to determine whether the point is inside the mesh.
Alternatively, for any convex shape, you could calculate each of its planes and then, using the method you already have, check whether the world-space point is on the correct side of every plane. For a cube there would be 6 planes, though this is a bit slower than the method above (which is optimized for a cube).

Unity, Relative dimensions of gameobjects

I saw some documents saying that there is no concept of length in Unity. All you can do to determine the dimensions of the GameObjects is use Scale.
Then how can I set the overall relative dimensions between the GameObjects?
For example, the dimensions of a 1:1:1 plane are obviously different from those of a 1:1:1 sphere! Then how can I know the relative ratio between the plane and the sphere? One unit of length of the plane equals how much of the sphere's diameter? Otherwise, how can I know whether I have set everything in the right proportion?
Well, what you say is right, but consider that objects can have a collider, and in the case of a sphere you can obtain the radius with SphereCollider.radius.
Also consider Bounds.extents, which is relative to the object's bounding box.
Again, considering the sphere, you can obtain the diameter with:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Bounds bounds = mesh.bounds;
float diameter = bounds.extents.x * 2;
All GameObjects in Unity have a Transform component, which determines their position, rotation and scale. Most 3D objects also have a MeshFilter component, which contains a reference to the Mesh object.
The Mesh contains the actual shape of the object, for example the six faces of a cube or the faces of a sphere. Unity provides a handful of built-in objects (cube, sphere, cylinder, plane, quad), but this is just a starter kit. Most of those built-in objects are 1 unit in size, but that is purely because the vertices have been placed at those positions (so you need to scale by 2 to get a 2-unit size).
But there is no limit on positions within a mesh; you can have a tiny, tiny object or a whole terrain object, and have them massively different in size despite keeping their scale at 1.
You should try to learn a 3D modelling application to create arbitrary objects.
Alternatively, try installing a plugin called ProBuilder, which used to be quite expensive and is now free (since being acquired by Unity); it enables in-editor modelling.
Scales are best kept at 1, but it's good to have the option to scale: this way you can re-use the sphere mesh or the cube mesh (less memory wasted) by having them at different scales.
In most Unity applications you set the scale to some arbitrary number.
So typically 1 m = 1 unit.
All things that are 1 unit tall are 1 m tall.
If you import a mesh from a modelling program that is the wrong size, scale it to exactly one meter (use a standard 1,1,1 cube as a reference). Then stick it inside an empty game object to "convert" it into your game's proper scale. Now if you scale the empty object's y axis to 2, the object is 2 meters tall.
A better solution is to keep every object's highest parent in the hierarchy at 1,1,1 scale. Using the 1,1,1 reference cube, scale your object to a size that looks proper. For example, if I had a model of a person I'd want it scaled to be roughly twice as tall as the cube. Then drag it into an empty object of 1,1,1 scale. This way, everything at your scene's "normal" size is 1,1,1; if you want to double the size of something, you then make it 2,2,2. In practice this is much more useful than the first option.
Now, if you change its position by 1 unit, it also moves by what looks like a proper 1 m.
This process also lets you change where the "bottom" of an object is. You can change the position of the object inside the empty, creating an "offset". This is useful for making models stand right on the ground at position y = 0.

ARKit project point with previous device position

I'm combining ARKit with a CNN to constantly update ARKit nodes when they drift. So:
Get estimate of node position with ARKit and place a virtual object in the world
Use CNN to get its estimated 2D location of the object
Update the node position accordingly (to refine its location in 3D space)
The problem is that #2 takes 0.3 s or so. Therefore I can't use sceneView.unprojectPoint, because the point will correspond to a 3D point from the device's world position as of #1.
How do I calculate the 3D vector from my old location to the CNN's 2D point?
unprojectPoint is just a matrix-math convenience function similar to those found in many graphics-oriented libraries (like DirectX, old-style OpenGL, Three.js, etc). In SceneKit, it's provided as a method on the view, which means it operates using the model/view/projection matrices and viewport the view currently uses for rendering. However, if you know how that function works, you can implement it yourself.
An Unproject function typically does two things:
Convert viewport coordinates (pixels) to the clip-space coordinate system (-1.0 to 1.0 in all directions).
Reverse the projection transform (assuming some arbitrary Z value in clip space) and the view (camera) transform to get to 3D world-space coordinates.
Given that knowledge, we can build our own function. (Warning: untested.)
func unproject(screenPoint: float3, // see below for Z depth hint discussion
               modelView: float4x4,
               projection: float4x4,
               viewport: CGRect) -> float3 {
    // viewport to clip: subtract viewport origin, divide by size,
    // scale/offset from 0...1 to -1...1 coordinate space
    let clip = (screenPoint - float3(Float(viewport.minX), Float(viewport.minY), 0))
        / float3(Float(viewport.width), Float(viewport.height), 1)
        * 2 - 1
    // apply the reverse of the model-view-projection transform
    let inversePM = (projection * modelView).inverse
    let result = inversePM * float4(clip.x, clip.y, clip.z, 1)
    return float3(result.x, result.y, result.z) / result.w // perspective divide
}
Now, to use it... The modelView matrix you pass to this function is the inverse of ARCamera.transform, and you can also get projectionMatrix directly from ARCamera. So, if you're grabbing a 2D position at one point in time, grab the camera matrices then, too, so that you can work backward to 3D as of that time.
There's still the issue of that "Z depth hint" I mentioned: when the renderer projects 3D to 2D it loses information (one of those D's, actually). So you have to recover or guess that information when you convert back to 3D. The screenPoint you pass in to the above function is the x and y pixel coordinates plus a depth value between 0 and 1: zero is closer to the camera, 1 is farther away. How you make use of that depends on how the rest of your algorithm is designed. (At the very least, you can unproject both Z=0 and Z=1, and you'll get the endpoints of a line segment in 3D, with your original point somewhere along that line.)
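Putting those pieces together, here's a rough, untested sketch (the function name and the idea of saving sceneView.session.currentFrame?.camera before running the CNN are my assumptions) of recovering a world-space ray for the CNN's 2D result using the camera pose saved at capture time. Make sure the viewport you pass matches whatever space the projection matrix assumes, or build the projection with ARCamera's projectionMatrix(for:viewportSize:zNear:zFar:) for view coordinates.

// Sketch: unproject a 2D point against the camera pose saved when that frame was
// captured, returning a world-space ray the original 3D point must lie on.
func ray(for imagePoint: CGPoint, savedCamera: ARCamera, viewport: CGRect)
    -> (origin: float3, direction: float3) {
    let modelView = savedCamera.transform.inverse     // inverse of ARCamera.transform, as noted above
    let projection = savedCamera.projectionMatrix
    let near = unproject(screenPoint: float3(Float(imagePoint.x), Float(imagePoint.y), 0),
                         modelView: modelView, projection: projection, viewport: viewport)
    let far = unproject(screenPoint: float3(Float(imagePoint.x), Float(imagePoint.y), 1),
                        modelView: modelView, projection: projection, viewport: viewport)
    return (near, simd_normalize(far - near))         // your refined 3D point lies along this ray
}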
Of course, whether this can actually be put together with your novel CNN-based approach is another question entirely. But at least you learned some useful 3D graphics math!

Speeding up rendering in SceneKit

So, I am using SceneKit to render a collection of parametric surfaces (which together make up an object). To put these on screen I am creating custom geometries by sampling points and creating triangles. Here is a quick overview of how I do it.
Loop through the collection of surfaces
Generate a random color C
For each surface calculate a grid of N x N points (both positions and normals)
Assign all vertices for that surface the color C
Add groups of 3 vertices from this surface to the face index list
And that seems to work. After I get all this data, I make it into the proper structures (SCNGeometrySource and SCNGeometryElement) and make a SCNGeometry like so
SCNGeometry(sources: [vertexSource, normalSource, colorSource], elements: [element])
This works and displays my surfaces on the screen fine as one single geometry element. My problem is that I have some really complicated objects to work with, and moving the camera around while looking at the object runs really slowly. Rendering is taking around 500 ms, which makes my frame rate and experience awful.
So the question is, what steps can I take to speed up SceneKit performance? I did this same project in WebGL using Three.js with the same amount of data and was able to use an orbiting camera fine, so I can't believe that SceneKit couldn't at least compete with that. What features can I tweak or turn off to speed up performance? I am using the triangle primitive type, allowsCameraControl = true for the orbiting camera, and Metal for the SCNView.
For those curious, the model I am struggling with generates 231,900 vertices and 347,850 indices for faces (11.1312 MB of vertex data (position and normal) and 1.3914 MB of face data (essentially just the index positions of vertices, in order, for the triangles)).
1) If you are "standing" at the center of your generated surface, then your problem may be that you are drawing a lot offscreen (no frustum culling), and you need to split your surface (a single node) into subsurfaces (child nodes), so that only the nodes visible in the camera's view space are drawn.
That being said, 231,900 vertices is really not much; I draw several million at 60 fps with SceneKit's Metal renderer (roughly 20% faster than the OpenGL renderer) on OS X.
2) If you are looking at your surfaces from a distance and have bad performance, check what bytesPerComponent: you are feeding when creating your SCNGeometrySource. I saw a big performance drop when using CGFloat (double) instead of plain Float on a GeForce GTX (while it was fine on integrated Intel graphics).
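For example, here's a minimal sketch (the variable names and the one-triangle data are illustrative, not from the answer) of building a position source from tightly packed 32-bit Float data, so bytesPerComponent is 4 rather than the 8 you'd get from CGFloat:

import SceneKit

// Sketch: one triangle's worth of packed Float positions; substitute your sampled grid.
let positions: [Float] = [ 0, 0, 0,   1, 0, 0,   0, 1, 0 ]
let vertexData = positions.withUnsafeBufferPointer { Data(buffer: $0) }
let vertexSource = SCNGeometrySource(data: vertexData,
                                     semantic: .vertex,
                                     vectorCount: positions.count / 3,
                                     usesFloatComponents: true,
                                     componentsPerVector: 3,
                                     bytesPerComponent: MemoryLayout<Float>.size,   // 4 bytes, not 8
                                     dataOffset: 0,
                                     dataStride: MemoryLayout<Float>.size * 3)
let indices: [Int32] = [0, 1, 2]
let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
let geometry = SCNGeometry(sources: [vertexSource], elements: [element])

A normal source would be built the same way with semantic: .normal, and you'd add it to the sources array as in the question.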