Getting SCNNode Bounding Size in Meters - swift

I'm trying to wrap an SCNPlane around a SCNNode. I'm using ARKit so everything is measured in meters, but when I get the boundingBox, I get measurements in some other unit. I looked at Apple's documentation, and they don't specify what the units are.
For example, one of my nodes is roughly 3 meters wide, but the bounding box says it's 26 units.
I could do a rough division to get a constant and use that to do the unit conversions, but I was wondering if there's a less hacky way to do it?
let textContainerSize = textBodyNode.boundingBox
let xSize = textContainerSize.max.x - textContainerSize.min.x
let ySize = textContainerSize.max.y - textContainerSize.min.y
print("size")
print(xSize, ySize) // <-- returns (26,2)
let planeGeometry = SCNPlane(width: CGFloat(xSize), height: CGFloat(ySize))

One SceneKit unit is one meter in ARKit, but the boundingBox is defined in the node's local coordinate system. So your node probably has a parent whose scale is different from 1.
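A minimal sketch of a less hacky conversion, assuming the node is not rotated relative to world space (otherwise the axis-aligned deltas won't match width and height); textBodyNode is the node from the question:
// Convert the local bounding box corners to world space, where
// 1 unit == 1 meter in ARKit. Passing nil converts to world coordinates.
let (minBound, maxBound) = textBodyNode.boundingBox
let worldMin = textBodyNode.convertPosition(minBound, to: nil)
let worldMax = textBodyNode.convertPosition(maxBound, to: nil)
let widthInMeters = CGFloat(abs(worldMax.x - worldMin.x))
let heightInMeters = CGFloat(abs(worldMax.y - worldMin.y))
let planeGeometry = SCNPlane(width: widthInMeters, height: heightInMeters)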

Manually write world file (jgw) from Leaflet.js map

I need to export georeferenced images from Leaflet.js on the client side. Exporting an image from Leaflet is not a problem, as there are plenty of existing plugins for this, but I'd like to include a world file with the export so the resulting image can be read into GIS software. I have a working script for this, but I can't seem to nail down the correct parameters for my world file such that the resulting georeferenced image is positioned exactly correctly.
Here's my current script:
// map is a Leaflet map object
let bounds = map.getBounds(); // Leaflet LatLngBounds
let topLeft = bounds.getNorthWest();
let bottomRight = bounds.getSouthEast();
let width_deg = bottomRight.lng - topLeft.lng;
let height_deg = topLeft.lat - bottomRight.lat;
let width_px = $(map._container).width() // Width of the map in px
let height_px = $(map._container).height() // Height of the map in px
let scaleX = width_deg / width_px;
let scaleY = height_deg / height_px;
let jgwText = `${scaleX}
0
0
-${scaleY}
${topLeft.lng}
${topLeft.lat}`
This seems to work well at large scales (i.e. zoomed in to city level or so), but at smaller scales there is some distortion along the y-axis. One thing I noticed is that all the example world files I can find (including those produced by QGIS or ArcMap) have x-scale and y-scale parameters that are exactly equal (but oppositely signed). In my calculations, these terms differ unless you are sitting right on the equator.
Example world file produced from QGIS
0.08984380916303301 // x-scale (size of px in x direction)
0 // rotation parameter 1
0 // rotation parameter 2
-0.08984380916303301 // y-scale (size of px in y direction)
-130.8723208723141056 // x-coord of top left px
51.73651369984968085 // y-coord of top left px
Example world file produced from my calcs
0.021972656250000017
0
0
-0.015362443783773333
-130.91308593750003
51.781435604431195
Example of produced image using my calcs with correct state boundaries overlaid:
Does anyone have any idea what I'm doing wrong here?
The problem was solved by using EPSG:3857 for the world file and ensuring the width and height of the map bounds were also measured in this coordinate system. I had tried using EPSG:3857 for the world file, but measured the width and height of the map bounds using Leaflet's L.map.distance() function. To solve the problem, I instead projected the corner points of the map bounds to EPSG:3857 using L.CRS.EPSG3857.project(), then simply subtracted the X,Y values.
Corrected code is shown below, where map is a Leaflet map object (L.map):
// Get map bounds and corner points in 4326
let bounds = map.getBounds();
let topLeft = bounds.getNorthWest();
let bottomRight = bounds.getSouthEast();
let topRight = bounds.getNorthEast();
// get width and height in px of the map container
let width_px = $(map._container).width()
let height_px = $(map._container).height()
// project corner points to 3857
let topLeft_3857 = L.CRS.EPSG3857.project(topLeft)
let topRight_3857 = L.CRS.EPSG3857.project(topRight)
let bottomRight_3857 = L.CRS.EPSG3857.project(bottomRight)
// calculate width and height in meters using epsg:3857
let width_m = topRight_3857.x - topLeft_3857.x
let height_m = topRight_3857.y - bottomRight_3857.y
// calculate the scale in x and y directions in meters (this is the width and height of a single pixel in the output image)
let scaleX_m = width_m / width_px
let scaleY_m = height_m / height_px
// World files need the CENTRE of the top-left pixel; what we currently
// have is the TOP-LEFT corner of that pixel.
// Adjust by half a pixel: shift right (+x) and down (-y) in EPSG:3857
let topLeftCenterPxX = topLeft_3857.x + (scaleX_m / 2)
let topLeftCenterPxY = topLeft_3857.y - (scaleY_m / 2)
// format the text of the worldfile
let jgwText = `${scaleX_m}
0
0
-${scaleY_m}
${topLeftCenterPxX}
${topLeftCenterPxY}`
For anyone else with this problem, you'll know things are correct when your scale-x and scale-y values are exactly equal (but oppositely signed)!
Thanks @IvanSanchez for pointing me in the right direction :)

ARKit node disappears after 100m

I'm currently working on an ARKit (SceneKit) app. I've noticed that if I put a node at 100m, the node shows just fine, but if I set it to 101m or farther, it won't show.
Is this the distance limit?
var translation = matrix_identity_float4x4
translation.columns.3.x = 1
translation.columns.3.y = 1
translation.columns.3.z = -100
let transform = simd_mul(currentFrame.camera.transform, translation)
let anchor = ARAnchor(name: "test", transform: transform)
sceneView.session.add(anchor: anchor)
Is there any way to increase this range?
To increase the camera's range, use the Far attribute in the Z Clipping section of the Attributes Inspector.
The default value is 100 meters.
var zFar: Double { get set }
Excerpt from Developer Documentation: The far value determines the maximal distance between the camera and a visible surface. If a surface is farther from the camera than this distance, the surface is clipped and does not appear. The default far value is 100.0.
let camera = SCNCamera()
camera.zFar = 1000
This post provides important info.
It looks like there is no way to increase the maximum Z range in SpriteKit. Only SceneKit allows you to modify it, by updating the zFar property on the camera. Thanks to Gigantic for your help!
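Since the question uses ARKit, here is a hedged sketch of applying this to the view from the question (sceneView, an ARSCNView): the session drives the camera, but the view's point-of-view node exposes an SCNCamera whose zFar can be raised:
// Raise the far clipping plane on the camera ARKit renders with,
// so anchors stay visible out to 1 km instead of 100 m.
sceneView.pointOfView?.camera?.zFar = 1000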

ARKit: Placing an SCNText at a particular point in front of the camera

I've managed to get a cube (SCNNode) placed on the surface where the camera is pointed; however, I am finding it very difficult to do the simple (?) task of also placing text in the same position.
I've created the SCNText and a subsequent SCNNode; however, when I add it to the rootNode, the text always seems to be added above my head and off to the right of the camera (which tells me that's the global origin point).
Even when I use the exact same position values I used for the cube, the SCNText node still gets placed above my head in the same spot.
Apologies if this is a basic question; I've never worked in SceneKit before.
The coordinate center of an SCNGeometry is its center point, but when you create an SCNText the origin is somewhere in the bottom-left corner.
You need to center the text first. This can be done by checking the bounding box of the node containing your text and setting a pivot transform to move the text's origin to its actual center:
func center(node: SCNNode) {
    let (min, max) = node.boundingBox
    // Offset from the node's origin to the center of its bounding box
    let dx = min.x + 0.5 * (max.x - min.x)
    let dy = min.y + 0.5 * (max.y - min.y)
    let dz = min.z + 0.5 * (max.z - min.z)
    // Moving the pivot makes the node position and rotate around its true center
    node.pivot = SCNMatrix4MakeTranslation(dx, dy, dz)
}
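A hedged usage sketch, where textNode and cubeNode are hypothetical names for your text node and the already-placed cube:
center(node: textNode)                 // pivot now sits at the text's center
textNode.position = cubeNode.position  // reuse the cube's position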
Edit:
Also note this answer that explains some additional pitfalls:
A text with a 16 pt font size is 16 SceneKit units tall, but in ARKit 1 SceneKit unit = 1 meter!
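A minimal sketch of the usual workaround, assuming a default 16 pt font size: scale the text node down so it renders a few centimeters tall instead of 16 meters:
let text = SCNText(string: "Hello", extrusionDepth: 1)
let textNode = SCNNode(geometry: text)
// 16 units tall * 0.002 ≈ 0.032 units, i.e. roughly 3 cm in ARKit
textNode.scale = SCNVector3(0.002, 0.002, 0.002)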

swift: orient y-axis toward another point in 3-d space

Suppose you have two points in 3-D space. Call the first o for origin and the other t for target. The rotation axes of each are aligned with the world/parent coordinate system (and with each other). Place a third point r coincident with the origin: same position and rotation.
How, in Swift, can you rotate r such that its y-axis points at t? If pointing the z-axis is easier, I'll take that instead. The resulting orientation of the other two axes is immaterial for my needs.
I've been through many discussions related to this, but none satisfy. I have learned, from reading and experience, that Euler angles are probably not the way to go. We didn't cover this in calculus, and that was 50 years ago anyway.
Got it! Incredibly simple when you add a container node. The following seems to work for any positions in any quadrants.
// pointAt_c is a container node located at, and child of, the originNode
// pointAtNode is its child, position coincident with pointAt_c (and originNode)
// get deltas (positions of target relative to origin)
let dx = targetNode.position.x - originNode.position.x
let dy = targetNode.position.y - originNode.position.y
let dz = targetNode.position.z - originNode.position.z
// rotate container node about y-axis (pointAtNode rotated with it)
let y_angle = atan2(dx, dz)
pointAt_c.rotation = SCNVector4(0.0, 1.0, 0.0, y_angle)
// now rotate the pointAtNode about its z-axis
let dz_dx = sqrt((dz * dz) + (dx * dx))
// (due to rotation the adjacent side of this angle is now a hypotenuse)
let x_angle = atan2(dz_dx, dy)
pointAtNode.rotation = SCNVector4(1.0, 0.0, 0.0, x_angle)
I needed this to replace lookAt constraints, which cannot (easily, anyway) be archived with a node tree. I'm pointing the y-axis because that's how SCN cylinders and capsules are oriented.
If anyone knows how to obviate the container node, please do tell. Every time I try to apply sequential rotations to a single node, the last overwrites the previous one. I don't have the knowledge to formulate a rotation expression that does it in one shot. (One possibility is sketched below.)
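One way to obviate the container node, as a hedged sketch assuming iOS 11+/macOS 10.13+: SCNNode.look(at:up:localFront:) composes the whole orientation in a single call, so nothing gets overwritten:
// Rotate the node so its local +y axis points at the target.
// The up vector only constrains the roll, which is immaterial here.
pointAtNode.look(at: targetNode.worldPosition,
                 up: SCNVector3(0, 1, 0),
                 localFront: SCNVector3(0, 1, 0))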

SceneKit's performance with a cube test

In learning 3D graphics programming for games, I decided to start off simple by using the Scene Kit 3D API. My first gaming goal was to build a very simplified mimic of Minecraft. A game of just cubes - how hard can it be?
Below is a loop I wrote to place a grid of 100 x 100 cubes (10,000 total), and the FPS performance was abysmal (~20 FPS). Is my initial gaming goal too much for Scene Kit, or is there a better way to approach this?
I have read other topics on StackExchange but don't feel they answer my question. Converting the exposed surface blocks to a single mesh won't work, since SCNGeometry is immutable.
func createBoxArray(scene: SCNScene, lengthCount: Int, depthCount: Int) {
    let startX: CGFloat = -(CGFloat(lengthCount) * CUBE_SIZE) + (CGFloat(lengthCount) * CUBE_MARGIN) / 2.0
    let startY: CGFloat = 0.0
    let startZ: CGFloat = -(CGFloat(lengthCount) * CUBE_SIZE) + (CGFloat(lengthCount) * CUBE_MARGIN) / 2.0
    var currentZ: CGFloat = startZ

    for z in 0 ..< depthCount {
        currentZ += CUBE_SIZE + CUBE_MARGIN
        var currentX = startX
        for x in 0 ..< lengthCount {
            currentX += CUBE_SIZE + CUBE_MARGIN
            createBox(scene, x: currentX, y: startY, z: currentZ)
        }
    }
}

func createBox(scene: SCNScene, x: CGFloat, y: CGFloat, z: CGFloat) {
    let box = SCNBox(width: CUBE_SIZE, height: CUBE_SIZE, length: CUBE_SIZE, chamferRadius: 0.0)
    box.firstMaterial?.diffuse.contents = NSColor.purpleColor()
    let boxNode = SCNNode(geometry: box)
    boxNode.position = SCNVector3Make(x, y, z)
    scene.rootNode.addChildNode(boxNode)
}
UPDATE 12-30-2014:
I modified the code so the SCNBox node is created once, and each additional box in the 100 x 100 array is created via:
let newBoxNode = firstBoxNode.clone()
newBoxNode.position = SCNVector3Make(x, y, z)
This change appears to have increased FPS to ~30 fps. The other statistics are as follows (from the statistics displayed in the SCNView):
10K (I assume this is draw calls?)
120K (I assume this is faces)
360K (I assume this is the vertex count)
The bulk of the run loop is in rendering (I'm guesstimating 98%). The total loop time is 26.7 ms (ouch). I'm running on a Late 2013 Mac Pro (6-core with dual D500 GPUs).
Given that a Minecraft-style game has a landscape that constantly changes based on the player's actions, I don't see how I can optimize this within the confines of Scene Kit. That's a big disappointment, as I really like the framework. I'd love to hear ideas on how to address this issue; without that, I'm forced to go with OpenGL.
UPDATE 12-30-2014 @ 2:00pm ET:
I am seeing a significant performance improvement when using flattenedClone(). The FPS is now a solid 60 fps even with more boxes and TWO draw calls. However, accommodating a dynamic environment (as Minecraft supports) is still proving problematic; see below.
Since the array would change composition over time, I added a keyDown handler that adds an even larger box array to the existing one, and timed the difference between adding the boxes individually (resulting in far more draw calls) versus adding them as a flattenedClone(). Here's what I found:
On keyDown I add another array of 120 x 120 boxes (14,400 boxes)
// This took .0070333 milliseconds
scene?.rootNode.addChildNode(boxArrayNode)
// This took .02896785 milliseconds
scene?.rootNode.addChildNode(boxArrayNode.flattenedClone())
Calling flattenedClone() again is 4x slower than adding the array.
This results in two draw calls with 293K faces and 878K vertices. I'm still playing with this and will update if I find anything new. Bottom line: with my additional testing, I still feel Scene Kit's immutable geometry constraints mean I can't leverage the framework.
As you mentioned Minecraft, I think it's worth looking at how it works.
I have no technical details or code sample for you, but everything should be pretty straightforward:
Have you ever played Minecraft online when the terrain hadn't loaded yet, letting you see through it? That's because there is no geometry inside.
Let's assume I have a 2x2x2 array of cubes. That makes 2*2*2*6*2 = 96 triangles.
However, if you test and draw only the polygons visible from the camera's point of view, maybe by testing the normals (easy since they're cubes), this number goes down to 48 triangles.
If you find a way to determine which faces are occluded by other ones (which shouldn't be too hard either, considering you're working with flat, square, grid-based faces), you can draw only those. That way, we're drawing between 8 and 24 triangles. That's up to 90% optimisation.
If you want to go really deep, you can even combine faces, making a single N-gon out of the visible, flat faces. You can do that if you create a new way to generate the geometry on the fly that combines the two previous methods and tests for adjacent visible faces on the same plane.
If you succeed, we're talking 2 to 6 polygons instead of 96 to render 8 cubes.
Note that the last method only works if your blocks are touching each other.
There are probably a ton of papers on Minecraft-like renderers; a few Google searches will help you figure it out! A sketch of the geometry-generation idea follows.
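A minimal sketch of that on-the-fly geometry idea (not the answerer's code): feed only the triangles of visible faces into one custom SCNGeometry, so a whole chunk of cubes becomes a single mesh and a single draw call:
import SceneKit

// vertices: corner positions of the visible faces only;
// indices: three entries per triangle, indexing into vertices.
func geometryFromVisibleFaces(vertices: [SCNVector3], indices: [Int32]) -> SCNGeometry {
    let vertexSource = SCNGeometrySource(vertices: vertices)
    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    return SCNGeometry(sources: [vertexSource], elements: [element])
}
When blocks change, rebuilding and swapping in a fresh geometry sidesteps SCNGeometry's immutability.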
Why do frame drops occur?
September 04, 2022
Almost 8 years have passed since you asked this question, but not much has changed...
1. Polygon count
The number of polygons in a SceneKit or RealityKit scene must not exceed 100,000 triangles. An ideal SceneKit scene, one capable of rendering all its models quickly, should contain fewer than 50,000 polygons. Your scene contains 120,000 polygons. Do not forget that SceneKit renders models on a single thread (unlike the multi-threaded RealityKit renderer).
2. Shaders
In Xcode 14.0+, the default .lightingModel of any primitive set up in the Material Inspector (the UI route) is .physicallyBased, which is the most computationally intensive shader. The programmatic default .lightingModel for SCN procedural geometry is .blinn. The least computationally intensive shader is .constant (it doesn't depend on lighting).
3. What's inside a frustum
If all 10,000 cubes are inside the SceneKit camera's frustum, the frame rate will be 20-30 fps. But if you dolly in on the cube grid so that no more than a ninth of it is visible, the frame rate will be 60 fps: SceneKit does not render objects that lie outside the frustum's bounds.
4. Number of meshes in SCNScene
Each mesh in a scene results in a draw call, and to achieve 60 fps every frame must be rendered in about 16 milliseconds or less. For best performance, Apple engineers advise limiting the number of meshes in a .usdz file to around 50. Unfortunately, I did not find a corresponding value for .scn files in the official documentation.
5. Lighting and shadows
Lighting and shadowing (especially shadowing) are very computationally intensive tasks. The general advice is to avoid .forward shadows and hi-res textures with fake shadows.
Look at this post for details.
SwiftUI code for testing
Xcode 14.0+, SwiftUI 4.0+, Swift 5.7
import SwiftUI
import SceneKit

struct ContentView: View {
    var scene = SCNScene()
    var options: SceneView.Options = [.allowsCameraControl]

    var body: some View {
        SceneView(scene: scene, options: options)
            .ignoresSafeArea()
            .onAppear {
                scene.background.contents = UIColor.black
                // Build a 100 x 100 grid of cubes centered on the origin
                for x in -50...49 {
                    for z in -50...49 {
                        scene.rootNode.addChildNode(createCube(x, 0, z))
                    }
                }
            }
    }

    func createCube(_ posX: Int, _ posY: Int, _ posZ: Int) -> SCNNode {
        let geo = SCNBox(width: 0.5, height: 0.5, length: 0.5, chamferRadius: 0.0)
        geo.firstMaterial?.lightingModel = .constant  // cheapest shading model
        let boxNode = SCNNode(geometry: geo)
        boxNode.position = SCNVector3(posX, posY, posZ)
        return boxNode
    }
}
Here, all the cubes are within the viewing frustum, so there are obvious reasons for frame drops.
And here, just a part of the scene is within the viewing frustum, so there are no frame drops.