Can't make a node move along the x-axis of pointOfView - Swift

I am working on an ARKit playground project and I just can't get an SCNNode to move along the axis of sceneView.pointOfView. When I use constants like 0.04 it positions the node properly, but when I provide coordinates derived from the frame of pointOfView I can't get it positioned anywhere but the centre.
Here is the code for that part:
let winglevMain = button(ButtonType: .wing)
let wingLevButton = winglevMain.button1
wingLevButton.name = "wing"
let x = (sceneView.pointOfView?.frame.width)!
let y = x/2
let z = x/5
let total = y+z
wingLevButton.position = SCNVector3(total, 0.12, -0.5)
sceneView.pointOfView?.addChildNode(wingLevButton)
P.S. I used separate constants to store each value because when I tried putting the whole expression directly into the position arguments, I got an error indicating the expression was too complex for the playground to work out.
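For context, nodes added to pointOfView are positioned in the camera's local coordinate space, which is measured in meters rather than view points, so values derived from a frame width (hundreds of points) won't map to an on-screen offset. A minimal sketch of positioning directly in camera space (the 0.15 m offset is an illustrative assumption, not a value from the post):
// A sketch, assuming sceneView is the ARSCNView and wingLevButton is the node above.
// Children of pointOfView are laid out in the camera's local space, in meters:
// +x is right, +y is up, and -z is in front of the camera.
if let cameraNode = sceneView.pointOfView {
    // roughly 15 cm to the right, 12 cm up, half a meter in front of the camera
    wingLevButton.position = SCNVector3(0.15, 0.12, -0.5)
    cameraNode.addChildNode(wingLevButton)
}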

Related

Manually write world file (jgw) from Leaflet.js map

I have the need to export georeferenced images from Leaflet.js on the client side. Exporting an image from Leaflet is not a problem as there are plenty of existing plugins for this, but I'd like to include a world file with the export so the resulting image can be read into GIS software. I have a working script for this, but I can't seem to nail down the correct parameters for my world file such that the resulting georeferenced image lines up exactly right.
Here's my current script:
// map is a Leaflet map object
let bounds = map.getBounds(); // Leaflet LatLngBounds
let topLeft = bounds.getNorthWest();
let bottomRight = bounds.getSouthEast();
let width_deg = bottomRight.lng - topLeft.lng;
let height_deg = topLeft.lat - bottomRight.lat;
let width_px = $(map._container).width() // Width of the map in px
let height_px = $(map._container).height() // Height of the map in px
let scaleX = width_deg / width_px;
let scaleY = height_deg / height_px;
let jgwText = `${scaleX}
0
0
-${scaleY}
${topLeft.lng}
${topLeft.lat}`
This seems to work well at large scales (i.e. zoomed in to city level or so), but at smaller scales there is some distortion along the y-axis. One thing I noticed is that all the example world files I can find (and those produced by QGIS or ArcMap) have x-scale and y-scale parameters that are exactly equal in magnitude (oppositely signed). In my calculations, these terms are different unless you are sitting right on the equator.
Example world file produced from QGIS
0.08984380916303301 // x-scale (size of px in x direction)
0 // rotation parameter 1
0 // rotation parameter 2
-0.08984380916303301 // y-scale (size of px in y direction)
-130.8723208723141056 // x-coord of top left px
51.73651369984968085 // y-coord of top left px
Example world file produced from my calcs
0.021972656250000017
0
0
-0.015362443783773333
-130.91308593750003
51.781435604431195
Example of produced image using my calcs with correct state boundaries overlaid:
Does anyone have any idea what I'm doing wrong here?
The problem was solved by using EPSG:3857 for the world file, and ensuring the width and height of the map bounds were also measured in that coordinate system. I had tried using EPSG:3857 for the world file, but measured the width and height of the map bounds using Leaflet's L.map.distance() function. To solve the problem, I instead projected the corner points of the map bounds to EPSG:3857 using L.CRS.EPSG3857.project(), then simply subtracted the X,Y values.
Corrected code is shown below, where map is a Leaflet map object (L.map):
// Get map bounds and corner points in 4326
let bounds = map.getBounds();
let topLeft = bounds.getNorthWest();
let bottomRight = bounds.getSouthEast();
let topRight = bounds.getNorthEast();
// get width and height in px of the map container
let width_px = $(map._container).width()
let height_px = $(map._container).height()
// project corner points to 3857
let topLeft_3857 = L.CRS.EPSG3857.project(topLeft)
let topRight_3857 = L.CRS.EPSG3857.project(topRight)
let bottomRight_3857 = L.CRS.EPSG3857.project(bottomRight)
// calculate width and height in meters using epsg:3857
let width_m = topRight_3857.x - topLeft_3857.x
let height_m = topRight_3857.y - bottomRight_3857.y
// calculate the scale in x and y directions in meters (this is the width and height of a single pixel in the output image)
let scaleX_m = width_m / width_px
let scaleY_m = height_m / height_px
// world files need the CENTRE of the top-left px; what we currently have is the TOP-LEFT corner of that px.
// Adjust by half a pixel: right (+x) and down (-y)
let topLeftCenterPxX = topLeft_3857.x + (scaleX_m / 2)
let topLeftCenterPxY = topLeft_3857.y - (scaleY_m / 2)
// format the text of the worldfile
let jgwText = `
${scaleX_m}
0
0
-${scaleY_m}
${topLeftCenterPxX}
${topLeftCenterPxY}
`
For anyone else with this problem, you'll know things are correct when your scale-x and scale-y values are exactly equal (but oppositely signed)!
Thanks @IvanSanchez for pointing me in the right direction :)

Retrieving bone rotations from 3D Skeleton in ARKit 3

I'm trying to get the bone rotations relative to their parents, but I end up getting pretty weird angles.
I've tried everything: matrix multiplications, offsets, axis swapping, and no luck.
guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
let skeleton = bodyAnchor.skeleton
let jointTransforms = skeleton.jointLocalTransforms
for (i, jointTransform) in jointTransforms.enumerated() {
    // RETRIEVE ANGLES HERE
}
In //RETRIEVE ANGLES HERE I've tried different approaches:
let n = SCNNode()
n.transform = SCNMatrix4(jointTransform)
print(n.eulerAngles)
In this attempt, I assign the joint transform to an SCNNode's transform so I can read the eulerAngles in a human-readable form and try to understand what's happening.
Some joints come out right, but I think that's pure coincidence or luck, because the rest of the bones rotate very strangely.
In another attempt I get them using jointModelTransforms (Model instead of Local), so all transforms are relative to the root bone of the skeleton.
With this approach I do matrix multiplications like this:
LocalMatrix = Inverse(JointModelMatrix) * (ParentJointModelMatrix)
This should give the rotations relative to each parent, but it's the same situation: some bones rotate okay, others rotate weirdly. Pure coincidence, I bet.
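(For reference, a minimal sketch of that parent-relative computation with simd; the inverse(parent) * joint ordering and the parentIndices lookup are assumptions about what the pseudocode intends, not code from the post:)
guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
let skeleton = bodyAnchor.skeleton
let modelTransforms = skeleton.jointModelTransforms

for (i, jointModel) in modelTransforms.enumerated() {
    let parentIndex = skeleton.definition.parentIndices[i]
    guard parentIndex >= 0 else { continue } // the root joint has no parent

    // local = inverse(parent) * joint, i.e. express the joint in its parent's frame
    let parentModel = modelTransforms[parentIndex]
    let local = simd_mul(parentModel.inverse, jointModel)

    // read the rotation part of the 4x4 as a quaternion
    let rotation = simd_quatf(local)
    print("joint \(i) local rotation angle: \(rotation.angle)")
}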
Why do I want to get the bone rotations?
I'm trying to build a MoCap app with my phone that passes the rotations to Blender, building .BVH files from them so I can use them in Blender.
This is my own rig:
I've done this before with Kinect, but I've been trying for days to do it on ARKit 3 with no luck :(
Using simd_quatf(from:to:) with the right input should do it. I had trouble with weird angles until I started normalising the vectors:
guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
let skeleton = bodyAnchor.skeleton
let jointTransforms = skeleton.jointLocalTransforms
for (i, jointTransform) in jointTransforms.enumerated() {
    // First I filter out the root (hip) joint because it doesn't have a parent
    let parentIndex = skeleton.definition.parentIndices[i]
    guard parentIndex >= 0 else { continue } // the root joint has a parent index of -1

    // RETRIEVE ANGLES HERE
    let jointVectorFromParent = simd_make_float3(jointTransform.columns.3)
    let referenceVector: SIMD3<Float>
    if skeleton.definition.parentIndices[parentIndex] >= 0 {
        referenceVector = simd_make_float3(jointTransforms[parentIndex].columns.3)
    } else {
        // The parent joint is the hip joint, which should have a vector of 0
        // going to itself. It's impossible to calculate an angle from a vector
        // of length 0, so we use a vector that just points up.
        referenceVector = SIMD3<Float>(x: 0, y: 1, z: 0)
    }
    // Normalizing is important because simd_quatf gives weird results otherwise
    let jointNormalized = normalize(jointVectorFromParent)
    let referenceNormalized = normalize(referenceVector)
    let orientation = simd_quatf(from: referenceNormalized, to: jointNormalized)
    print("angle of joint \(i) = \(orientation.angle)")
}
One important thing to keep in mind though:
ARKit 3 tracks only some joints (AFAIK the named joints in ARSkeleton.JointName). The other joints are extrapolated from those using a standardized skeleton, which means the angle you get for the elbow, for example, won't be the exact angle of the tracked person's elbow.
Just a guess… does this do the job?
let skeleton = bodyAnchor.skeleton
let jointTransforms = skeleton.jointLocalTransforms
for (i, jointTransform) in jointTransforms.enumerated() {
    print(Transform(matrix: jointTransform).rotation)
}

How can I get the yaw, pitch, roll of an ARAnchor in absolute terms?

I've been trying to figure this out for a few days now.
Given an ARKit-based app where I track a user's face, how can I get the face's rotation in absolute terms, from its anchor?
I can get the transform of the ARAnchor, which is a simd_matrix4x4.
There's a lot of info on how to get the position out of that matrix (it's in the last column, columns.3), but hardly anything on the rotation!
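(For reference, the rotation part of a 4x4 transform can be read off as a quaternion with simd; a small sketch, not from the original question, assuming faceAnchor is the ARFaceAnchor:)
// simd_quatf's matrix initializer builds the quaternion from the
// upper-left 3x3 rotation part of the transform.
let orientation = simd_quatf(faceAnchor.transform)
print("axis: \(orientation.axis), angle: \(orientation.angle)")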
I want to be able to control a 3D object outside of the app, by passing YAW, PITCH and ROLL.
The latest thing I tried actually works somewhat:
let arFrame = session.currentFrame!
guard let faceAnchor = arFrame.anchors[0] as? ARFaceAnchor else { return }
let faceMatrix = SCNMatrix4.init(faceAnchor.transform)
let node = SCNNode()
node.transform = faceMatrix
let rotation = node.worldOrientation
rotation.x, .y and .z have values I could use, but as I move my phone the values change. For instance, if I turn 180° while keeping my face toward the phone, the values change wildly based on the position of the phone.
I tried changing the world alignment in the ARConfiguration, but that didn't make a difference.
Am I reading the wrong parameters? This should have been a lot easier!
I've figured it out...
Once you have the face anchor, some calculations need to happen with its transform matrix, and the camera's transform.
Like this:
let arFrame = session.currentFrame!
guard let faceAnchor = arFrame.anchors[0] as? ARFaceAnchor else { return }
let projectionMatrix = arFrame.camera.projectionMatrix(for: .portrait, viewportSize: self.sceneView.bounds.size, zNear: 0.001, zFar: 1000)
let viewMatrix = arFrame.camera.viewMatrix(for: .portrait)
let projectionViewMatrix = simd_mul(projectionMatrix, viewMatrix)
let modelMatrix = faceAnchor.transform
let mvpMatrix = simd_mul(projectionViewMatrix, modelMatrix)
// This allows me to just get a .x .y .z rotation from the matrix, without having to do crazy calculations
let newFaceMatrix = SCNMatrix4.init(mvpMatrix)
let faceNode = SCNNode()
faceNode.transform = newFaceMatrix
let rotation = vector_float3(faceNode.worldOrientation.x, faceNode.worldOrientation.y, faceNode.worldOrientation.z)
rotation.x, .y and .z will return the face's pitch, yaw and roll, respectively.
I'm adding a small multiplier and inverting two of the axes, so it ends up like this:
yaw = -rotation.y*3
pitch = -rotation.x*3
roll = rotation.z*1.5
Phew!
I understand that you are using the front camera and ARFaceTrackingConfiguration, which is not supposed to give you absolute values. I would try to configure a second ARSession for the back camera with ARWorldTrackingConfiguration, which does provide absolute values. The final solution will probably require values from both ARSessions. I haven't tested this hypothesis yet, but it seems to be the only way.
UPDATE: a quote from the ARWorldTrackingConfiguration documentation -
The ARWorldTrackingConfiguration class tracks the device's movement with six degrees of freedom (6DOF): specifically, the three rotation axes (roll, pitch, and yaw), and three translation axes (movement in x, y, and z). This kind of tracking can create immersive AR experiences: A virtual object can appear to stay in the same place relative to the real world, even as the user tilts the device to look above or below the object, or moves the device around to see the object's sides and back.
Apparently, other tracking configurations do not have this ability.

ARKit: Placing an SCNText at a particular point in front of the camera

I've managed to get a cube (SCNNode) placed on a surface where the camera is pointed; however, I'm finding it very difficult to do the simple (?) task of also placing text in the same position.
I've created the SCNText and the subsequent SCNNode, however when I add it to the rootNode the text always seems to be added above my head and off to the right of the camera (which tells me that's the global origin point).
Even when I use the exact same position values I used for the cube, the SCNText node still gets placed above my head in the same spot.
Apologies if this is a basic question, I've never worked in SceneKit before.
The coordinate center of an SCNGeometry is its center point, but when you create an SCNText the origin is somewhere in the bottom-left corner:
You need to center the text first. This can be done by checking the bounding box of the node containing your text and setting a pivot transform so the text's center becomes its actual center:
func center(node: SCNNode) {
    let (min, max) = node.boundingBox
    let dx = min.x + 0.5 * (max.x - min.x)
    let dy = min.y + 0.5 * (max.y - min.y)
    let dz = min.z + 0.5 * (max.z - min.z)
    node.pivot = SCNMatrix4MakeTranslation(dx, dy, dz)
}
Edit:
Also note this answer that explains some additional pitfalls:
A text with a 16 pt font size is 16 SceneKit units tall. But in ARKit, 1 SceneKit unit = 1 meter!
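Putting the two notes together, a usage sketch might look like the following, assuming sceneView is the ARSCNView; the 0.01 scale factor and the -0.5 m offset are illustrative assumptions, not values from the answer:
let textGeometry = SCNText(string: "Hello", extrusionDepth: 1)
let textNode = SCNNode(geometry: textGeometry)

// Move the pivot so the node's origin sits at the text's visual center.
center(node: textNode)

// SCNText is sized in points, and 1 SceneKit unit = 1 meter in ARKit,
// so scale it down to something sensible (assumed factor for illustration).
textNode.scale = SCNVector3(0.01, 0.01, 0.01)

// Place it half a meter in front of the camera by parenting it to pointOfView.
textNode.position = SCNVector3(0, 0, -0.5)
sceneView.pointOfView?.addChildNode(textNode)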

SceneKit invert Y axis without inverting geometry normals

Generally, what I'm trying to achieve: we have map data that historically was all 2D, and the coordinate system we use has its origin (0,0) at the top left, with positive x going right and positive y going down. We have now added 3D data by adding a z axis, with positive z coming out of the screen towards you (think top-down map view). This is a left-handed coordinate system, but SceneKit uses a right-handed coordinate system. I would like to apply a transform at the top level of my SceneKit scene that converts the scene into a left-handed coordinate system, so that I can modify/position/add nodes in terms of our custom mapping coordinate system and things will just work.
So far I have this:
let scene = SCNScene()
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.scale = SCNVector3(1,-1,1)
scene.rootNode.addChildNode(cameraNode)
This achieves exactly what I want, but has one big problem: it inverts all of the geometry faces, so my geometries disappear unless I change their material's cullMode:
let mapLength: CGFloat = 1000 // max X axis
let mapWidth: CGFloat = 800   // max Y axis
let mapHeight: CGFloat = 100  // max Z axis
cameraNode.position = SCNVector3(mapLength / 2, mapWidth / 2, 2000)
let mapPlane = SCNNode()
mapPlane.position = SCNVector3(mapLength / 2, mapWidth / 2, 0)
mapPlane.geometry = SCNPlane(width: mapLength, height: mapWidth)
mapPlane.geometry?.firstMaterial?.diffuse.contents = UIColor.black
scene.rootNode.addChildNode(mapPlane)
mapPlane doesn't show at all! You have to rotate the camera to the underside of mapPlane in order to see it. You can easily fix this by adding a single line:
mapPlane.geometry?.firstMaterial?.cullMode = .front
But I don't want to have to change the cullMode for every geometry/material. Is there a way to achieve this without requiring extra code at each geometry/material? Some transform that would invert the geometry face normals for all child nodes of rootNode? Ideally this would be achieved entirely by settings on the actual Scene, or by transforms on rootNode or the camera.
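(For what it's worth, one way to avoid touching each material by hand is to automate the existing cullMode workaround in a single pass over the scene; this is only a sketch of that workaround, not a true normals fix, and nodes added later would still need the same treatment:)
// Flip culling for every material currently in the scene in one place,
// instead of adding the extra line to each geometry individually.
scene.rootNode.enumerateHierarchy { node, _ in
    node.geometry?.materials.forEach { $0.cullMode = .front }
}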