How to drag SCNNode along specific axis after rotation? - swift

I am currently working on a Swift ARKit project.
I am trying to figure out how I can drag a node object along a specific axis even after some rotations. For example, I want to move a node along its Y axis after rotating it, but the axis directions stay the same, so even if I change the Y position the node still moves along the world Y axis. SCNNode.localUp is static and returns SCNVector3(0, 1, 0), and as far as I can see there is no equivalent for a node's local up. If I remember correctly, in Unity it was enough to increment the local axis value to drag after rotating.
Node object before rotation
Before applying any rotations, all you need to do to drag the object is increase or decrease the value on the specific axis.
Node object after rotation
After rotating, the green Y axis rotates too, but when I increase or decrease the local Y value the object still moves along the world Y axis.
Sorry for my bad English. Thanks for your help.

Out of curiosity, how are you currently applying the rotation?
A straightforward way to achieve this without needing to dig into quaternion math would be to wrap your node in question inside a parent node, and apply those transformations separately. You can apply the rotation to the parent node, and then the drag motion along the axis to the child node.
If introducing this layer would be problematic outside of this operation, you can add/rotate/translate/remove as a single atomic operation, using node.convertPosition(_:to:) to interchange between local and world coordinates after you've applied all the transformations.
// Wrap the node in a temporary parent: rotate the parent, shift the child
// along its local y, then bake the result back into root-space coordinates.
let parent = SCNNode()
rootNode.addChildNode(parent)
parent.simdPosition = node.simdPosition
parent.addChildNode(node)                 // reparent the node under the wrapper
node.simdPosition = .zero
parent.simdRotation = /* ... your rotation ... */
node.simdPosition = simd_float3(0, localYAxisShift, 0)
// convert the child's position back into root-node (world) space
node.simdPosition = rootNode.simdConvertPosition(node.simdPosition, from: parent)
rootNode.addChildNode(node)
parent.removeFromParentNode()
I didn't test the above code, but the general approach should work. In general, compound motion as you describe is a bit more complex to do directly on the node itself, and under the hood SceneKit is doing all of that for you when using the above approach.
Edit
Here's a version that just does the matrix transform directly rather than relying on the built-in accommodations.
// The translation comes first in SCNMatrix4Mult, so it is applied in the
// node's local space before the node's existing rotation/translation.
let currentTransform = node.transform
let yShift = SCNMatrix4MakeTranslation(0, localYAxisShift, 0)
node.transform = SCNMatrix4Mult(yShift, currentTransform)
This should shift your object along the 'local' y axis. Note that matrix multiplication is non-commutative, i.e. the order of parameters in the SCNMatrix4Mult call is important (try reversing them to illustrate).
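As an aside, SceneKit also exposes this kind of local-space translation directly. If you'd rather not compose the matrix yourself, something along these lines should behave the same way (a quick sketch, assuming localYAxisShift is the distance you want to move):
// Translate the node along its own (rotated) y axis.
node.localTranslate(by: SCNVector3(0, localYAxisShift, 0))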

Related

(UNITY) Plane not rotating to normal vector of three points?

I am trying to get a stretched-out cube (which we can call a plane for the sake of discussion) to orient itself to the normal vector of a plane described by three points. I wrote a script to find the normal of the three points, and then used transform.LookAt to have the planes align. However, I am finding that this script is not working at all how it is intended to, and despite my best efforts I cannot figure out why.
Drastic movements of the individual points hardly affect the plane's rotation.
The rotation of the object when using the existing points in the script should be 0,0,0 in the inspector. However, it is always off by a few degrees and, as I said, it does not align itself when I move the points around.
This is the script. I can also post photos showing the behavior or share a small Unity package.
First of all, Transform.LookAt takes a position as a parameter, not a direction!
And then it
Rotates the transform so the forward vector points at worldPosition.
Doesn't sound like what you are trying to achieve.
If you want your object to look with its forward vector along the given normal direction (assuming you are calculating the normal correctly), then you could use Quaternion.LookRotation instead:
transform.rotation = Quaternion.LookRotation(doNormal(cpit, cmit, ctht));
Alternatively, you can also simply assign the corresponding direction vector directly, e.g.
transform.forward = doNormal(cpit, cmit, ctht);
or
transform.up = doNormal(cpit, cmit, ctht);
depending on your needs.

How to smoothly move a node in an ARKit scene view based on device motion?

Swift beginner struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: First detect the floor plane, then place a sphere on the floor. From that point onwards depending on the movement of the device, I want to move the sphere along its x and z axis to move it around the floor of the room. (The sphere once created needs to be in the center of the device screen and locked to that view)
So far I can detect the floor and place a node no problem. I can use device motion to obtain the device attitude (pitch, roll and yaw) but how to translate these values into meaningful x, y, z positions that I can update my node with?
Are there any formulas or methods that are used to calculate such information or is this the wrong approach? I would appreciate a link to some info or an explanation of how to go about this. Also I am unsure how to ensure the node would be always at the center of the device screen.
So, as far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (which is already done).
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e. along its x and z axes, to move it around the floor of the room depending on the movement of the device).
Assuming that Step 1 is done, what you can do is:
Get the position of the camera and the sphere
This should first be done within the function that is invoked after sphere creation (be it a tapGestureRecognizer(), touchesBegan(), etc.).
You can do it by reading the position property of the sphere's SCNNode; for the camera's position and/or orientation, use sceneView.session.currentFrame's .camera.transform, which contains all the necessary parameters about the current position of the camera.
Move the sphere as camera moves
Having the sphere's position in the scene and the transformation matrix of the camera, you can find the distance relation between them. Here you can find a good explanation of how exactly you can do it.
Once you have those, implement the appropriate logic within renderer(_:didUpdate:for:) to keep the ball continuously locked with respect to the camera position.
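To make that concrete, here is a rough, untested sketch of the idea using the per-frame SCNSceneRendererDelegate callback renderer(_:updateAtTime:) (which fires every frame) rather than the anchor-update callback. The names sphereNode, floorY, and sceneView are assumptions standing in for whatever you created in Step 1:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let frame = sceneView.session.currentFrame else { return }
    let cameraTransform = frame.camera.transform            // 4x4 matrix, column-major
    let cameraPosition = simd_float3(cameraTransform.columns.3.x,
                                     cameraTransform.columns.3.y,
                                     cameraTransform.columns.3.z)
    // The camera looks down its -z axis, so the forward direction is -column 2.
    let forward = -simd_float3(cameraTransform.columns.2.x,
                               cameraTransform.columns.2.y,
                               cameraTransform.columns.2.z)
    // Keep the sphere a fixed distance in front of the camera, then drop it
    // onto the detected floor so it slides around the room as the device moves.
    let distance: Float = 1.0
    var target = cameraPosition + forward * distance
    target.y = floorY
    sphereNode.simdPosition = target
}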
If you are interested in the math behind it, you can start by reading more about transformation matrices, which are a big part of image processing and many other areas.
Hope this helps!

ARKit project point with previous device position

I'm combining ARKit with a CNN to constantly update ARKit nodes when they drift. So:
Get an estimate of the node position with ARKit and place a virtual object in the world
Use a CNN to get an estimated 2D location of the object
Update the node position accordingly (to refine its location in 3D space)
The problem is that #2 takes 0.3 s or so. Therefore I can't use sceneView.unprojectPoint, because the point will correspond to a 3D point from the device's world position at the time of #1.
How do I calculate the 3D vector from my old location to the CNN's 2D point?
unprojectPoint is just a matrix-math convenience function similar to those found in many graphics-oriented libraries (like DirectX, old-style OpenGL, Three.js, etc). In SceneKit, it's provided as a method on the view, which means it operates using the model/view/projection matrices and viewport the view currently uses for rendering. However, if you know how that function works, you can implement it yourself.
An Unproject function typically does two things:
Convert viewport coordinates (pixels) to the clip-space coordinate system (-1.0 to 1.0 in all directions).
Reverse the projection transform (assuming some arbitrary Z value in clip space) and the view (camera) transform to get to 3D world-space coordinates.
Given that knowledge, we can build our own function. (Warning: untested.)
func unproject(screenPoint: float3, // see below for Z depth hint discussion
               modelView: float4x4,
               projection: float4x4,
               viewport: CGRect) -> float3 {
    // viewport to clip: subtract viewport origin, divide by size,
    // scale/offset from 0...1 to -1...1 coordinate space
    let clip = (screenPoint - float3(Float(viewport.minX), Float(viewport.minY), 0.0))
        / float3(Float(viewport.width), Float(viewport.height), 1.0)
        * float3(repeating: 2) - float3(repeating: 1)
    // apply the reverse of the model-view-projection transform
    let inversePM = (projection * modelView).inverse
    let result = inversePM * float4(clip.x, clip.y, clip.z, 1.0)
    return float3(result.x, result.y, result.z) / result.w // perspective divide
}
Now, to use it... The modelView matrix you pass to this function is the inverse of ARCamera.transform, and you can also get projectionMatrix directly from ARCamera. So, if you're grabbing a 2D position at one point in time, grab the camera matrices then, too, so that you can work backward to 3D as of that time.
There's still the issue of that "Z depth hint" I mentioned: when the renderer projects 3D to 2D it loses information (one of those D's, actually). So you have to recover or guess that information when you convert back to 3D — the screenPoint you pass in to the above function is the x and y pixel coordinates, plus a depth value between 0 and 1. Zero is closer to the camera, 1 is farther away. How you make use of that sort of depends on how the rest of your algorithm is designed. (At the very least, you can unproject both Z=0 and Z=1, and you'll get the endpoints of a line segment in 3D, with your original point somewhere along that line.)
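For illustration, a hedged sketch of calling the function above with the camera you saved at the time of the 2D observation. The names savedCamera, pixelPoint, and viewportSize are assumptions, not ARKit API; only ARCamera.transform and ARCamera.projectionMatrix are real properties:
let modelView = savedCamera.transform.inverse          // view matrix = inverse of camera transform
let projection = savedCamera.projectionMatrix          // straight from ARCamera
let viewport = CGRect(origin: .zero, size: viewportSize)
let nearPoint = unproject(screenPoint: float3(Float(pixelPoint.x), Float(pixelPoint.y), 0),
                          modelView: modelView, projection: projection, viewport: viewport)
let farPoint = unproject(screenPoint: float3(Float(pixelPoint.x), Float(pixelPoint.y), 1),
                         modelView: modelView, projection: projection, viewport: viewport)
// nearPoint and farPoint are the endpoints of the 3D ray through the original 2D point.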
Of course, whether this can actually be put together with your novel CNN-based approach is another question entirely. But at least you learned some useful 3D graphics math!

Orient SCNNode in an ARKit scene using real-world bearing

I have a simple SCNNode that I want to place in the real-world position, the node corresponds to a landmark with known coordinates.
The ARKit configuration has the worldAlignment property set to .gravityAndHeading so the x and z axes should be oriented with the world already.
After creating the node, I am setting its position 100 m away from the user:
node.position = SCNVector3(0, 0, -100)
Then I would like to place the node with the correct bearing (computed from the user's and landmark's coordinates). I am trying to rotate the node around the y-axis (yaw):
node.eulerAngles = SCNVector3Make(0, bearingRadians, 0)
However, the node still appears to the north, no matter what value I use for bearingRadians.
Do I need to do an extra transformation?
With eulerAngles you just rotate your node in its own coordinate system.
What you actually need is to perform a full transform of your node relative to the camera position. The transformation is a translation along the -z axis followed by a (negative) rotation around the y-axis according to your bearing.
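As a rough, untested sketch of that composition (assuming bearingRadians is a Float and the session origin is the user's position, per .gravityAndHeading):
// Translation 100 m in front of the origin (-z is north with .gravityAndHeading).
var translation = matrix_identity_float4x4
translation.columns.3.z = -100
// Negative rotation around y because bearings are measured clockwise from north.
let rotation = simd_float4x4(simd_quatf(angle: -bearingRadians,
                                        axis: simd_float3(0, 1, 0)))
node.simdTransform = simd_mul(rotation, translation)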
Check out this ARKit WindRose repo, which aligns and projects the cardinal directions in the real world: https://github.com/vasile/ARKit-CompassRose

mathematical movable mesh in swift with SceneKit

I am a mathematician who wants to program a geometric game.
I have the exact coordinates, and math formulae, of a few meshes I need to display and of their unit normals.
I need only one texture (colored reflective metal) per mesh.
I need to have the user move pieces, i.e. change the coordinates of a mesh, again by a simple math formula.
So I don't need to import 3D files, but rather I can compute everything.
Imagine a kind of Rubik's Cube. The cube coordinates are computed, and cubelets are rotated by the user. I have the program functioning in Mathematica.
I am having a very hard time, for sleepless days now, trying to find exactly how to display a computed mesh in SceneKit - with each vertex and normal animated separately.
ANY working example of, say, a single triangle with computed coordinates (rather than a stock provided shape), displayed with animatable coordinates by SceneKit would be EXTREMELY appreciated.
I looked more, and it seems that individual points of a mesh may not be movable in SceneKit. One thing I like about SceneKit (unlike OpenGL) is that one can get the objects under the user's finger. Can one mix OpenGL and SceneKit in a project?
I could take over from there....
Animating vertex positions individually is, in general, a tricky problem. But there are good ways to approach it in SceneKit.
A GPU really wants to have vertex data all uploaded in one chunk before it starts rendering a frame. That means that if you're continually calculating new vertex positions/normals/etc on the CPU, you have the problem of schlepping all that data over to the GPU every time even just part of it changes.
Because you're already describing your surface mathematically, you're in a good position to do that work on the GPU itself. If each vertex position is a function of some variable, you can write that function in a shader, and find a way to pass the input variable per vertex.
There are a couple of options you could look at for this:
Shader modifiers. Start with a dummy geometry that has the topology you need (number of vertices and how they're connected as polygons). Pass your input variable as an extra texture, and in your shader modifier code (for the geometry entry point), look up the texture, apply your function, and set the vertex position with the result (see the sketch after this list).
Metal compute shaders. Create a geometry source backed by a Metal buffer, then at render time, enqueue a compute shader that writes vertex data to that buffer according to your function. (There's skeletal code for part of that at the link.)
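For option 1, the kernel of the idea looks something like this minimal, untested sketch. The texture lookup is omitted; the built-in u_time uniform stands in for your input variable, the sin() is a stand-in for your own math, and dummyGeometry is a hypothetical name for the SCNGeometry you built with the right topology:
// Per-vertex code run at SceneKit's geometry shader-modifier entry point.
let geometryModifier = """
_geometry.position.xyz += _geometry.normal * 0.1 * sin(u_time + _geometry.position.x);
"""
dummyGeometry.shaderModifiers = [.geometry: geometryModifier]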
Update: From your comments it sounds like you might be in an easier position.
If what you have is geometry composed of pieces that are static with respect to themselves and move with respect to each other — like the cubelets of a Rubik's cube — computing vertices at render time is overkill. Instead, you can upload the static parts of your geometry to the GPU once, and use transforms to position them relative to each other.
The way to do this in SceneKit is to create separate nodes, each with its own (static) geometry for each piece, then set node transforms (or positions/orientations/scales) to move the nodes relative to one another. To move several nodes at once, use node hierarchy — make several of them children of another node. If some need to move together at one moment, and a different subset need to move together later, you can change the hierarchy.
Here's a concrete example of the Rubik's cube idea. First, creating some cubelets:
// convenience for creating solid color materials
func materialWithColor(_ color: NSColor) -> SCNMaterial {
    let mat = SCNMaterial()
    mat.diffuse.contents = color
    mat.specular.contents = NSColor.white
    return mat
}
// create and arrange a 3x3x3 array of cubelets
var cubelets: [SCNNode] = []
for x in -1...1 {
    for y in -1...1 {
        for z in -1...1 {
            let box = SCNBox()
            box.chamferRadius = 0.1
            box.materials = [
                materialWithColor(.green),
                materialWithColor(.red),
                materialWithColor(.blue),
                materialWithColor(.orange),
                materialWithColor(.white),
                materialWithColor(.yellow),
            ]
            let node = SCNNode(geometry: box)
            node.position = SCNVector3(x: CGFloat(x), y: CGFloat(y), z: CGFloat(z))
            scene.rootNode.addChildNode(node)
            cubelets.append(node)
        }
    }
}
Next, the process of doing a rotation. This is one specific rotation, but you could generalize this to a function that does any transform of any subset of the cubelets:
// create a temporary node for the rotation
let rotateNode = SCNNode()
scene.rootNode.addChildNode(rotateNode)
// grab the set of cubelets whose position is along the right face of the puzzle,
// and add them to the rotation node
let rightCubelets = cubelets.filter { node in
    return abs(node.position.x - 1) < 0.001
}
rightCubelets.forEach { rotateNode.addChildNode($0) }
// animate a rotation
SCNTransaction.begin()
SCNTransaction.animationDuration = 2
rotateNode.eulerAngles.x += CGFloat.pi / 2
SCNTransaction.completionBlock = {
    // after animating, remove the cubelets from the rotation node,
    // and re-add them to the parent node with their transforms altered
    rotateNode.enumerateChildNodes { cubelet, _ in
        cubelet.transform = cubelet.worldTransform
        cubelet.removeFromParentNode()
        scene.rootNode.addChildNode(cubelet)
    }
    rotateNode.removeFromParentNode()
}
SCNTransaction.commit()
The magic part is in the cleanup after the animation. The cubelets start out as children of the scene's root node, and we temporarily re-parent them to another node so we can transform them together. Upon returning them to be the root node's children again, we set each one's local transform to its worldTransform, so that it keeps the effect of the temporary node's transform changes.
You can then repeat this process to grab whatever set of nodes are in a (new) set of world space positions and use another temporary node to transform those.
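If it helps, that repeat-and-bake process could be wrapped in a reusable helper along these lines (an untested sketch; transformTogether is just an illustrative name, and the closure decides what transform to animate on the temporary node):
func transformTogether(_ nodes: [SCNNode], in root: SCNNode, duration: TimeInterval,
                       apply: (SCNNode) -> Void) {
    // temporary parent that carries the shared transform
    let temp = SCNNode()
    root.addChildNode(temp)
    nodes.forEach { temp.addChildNode($0) }
    SCNTransaction.begin()
    SCNTransaction.animationDuration = duration
    apply(temp)                       // e.g. { $0.eulerAngles.x += .pi / 2 }
    SCNTransaction.completionBlock = {
        // bake each node's world transform back in and return it to the root
        nodes.forEach { node in
            node.transform = node.worldTransform
            node.removeFromParentNode()
            root.addChildNode(node)
        }
        temp.removeFromParentNode()
    }
    SCNTransaction.commit()
}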
I'm not sure quite how Rubik's-cube-like your problem is, but it sounds like you can probably generalize a solution from something like this.