Where should a shadow plane be defined in SceneKit - Swift

This is confusing to me; I'd be grateful if anyone could help me with it.
I have a shadow plane to show the shadow below the AR object. I've read some articles where this shadow plane is defined in viewDidLoad and added as a child node to sceneView.scene. The question is: should it be defined only once, for the floor surface?
For instance, I can add the shadow plane in renderer(_:didAdd:for:), which is called once when a new surface is detected. That works nicely for me. But should the position of the shadow plane be updated as well? Can someone explain where it should be defined and where/when it should be updated?
Here is how I define the shadow plane:
private func addShadowPlane(node: SCNNode, planeAnchor: ARPlaneAnchor) {
    let anchorX = planeAnchor.center.x
    let anchorY = planeAnchor.center.y
    let anchorZ = planeAnchor.center.z

    let floor = SCNFloor()
    let floorNode = SCNNode(geometry: floor)
    floorNode.position = SCNVector3(anchorX, anchorY, anchorZ)
    floor.length = CGFloat(planeAnchor.extent.z)
    floor.width = CGFloat(planeAnchor.extent.x)
    floor.reflectivity = 0
    floor.materials = [shadowMaterialStandard()]
    node.addChildNode(floorNode)
}
func shadowMaterialStandard() -> SCNMaterial {
    let material = SCNMaterial()
    material.lightingModel = .physicallyBased
    material.writesToDepthBuffer = true
    material.readsFromDepthBuffer = true
    material.colorBufferWriteMask = []
    return material
}

The issue you might run into is this: do you want one single shadow plane in some initially defined position, which then remains there (or can be repositioned)? Or do you want lots of shadow planes, one on every surface ARKit captures? The problem is that those planes will never match the surfaces they are created on exactly (just more or less). You can build more accurate shapes for surfaces, but they are built up in an ongoing process and need more time to complete (imagine scanning a table by walking around it).

I have also made some AR apps with shadow planes. I usually create one single, large shadow plane (like 20x20 meters) on request using a focus square: I fetch the worldPosition from the focus square, then add a plane at that location using SceneKit (and not the renderer for plane anchors). Keep in mind there are many ways to do this; there is no single best way.
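As a rough sketch of that single-plane approach (the function name, the worldPosition parameter, and sceneView are illustrative assumptions; shadowMaterialStandard() is the material function from the question):

// One large shadow-catcher plane, placed once at a chosen world position
// (e.g. fetched from a focus square), instead of one plane per ARPlaneAnchor.
func addSingleShadowPlane(at worldPosition: SCNVector3, in sceneView: ARSCNView) {
    let floor = SCNFloor()
    floor.reflectivity = 0
    floor.length = 20 // meters; large enough to catch shadows anywhere nearby
    floor.width = 20
    floor.materials = [shadowMaterialStandard()]

    let floorNode = SCNNode(geometry: floor)
    floorNode.position = worldPosition
    sceneView.scene.rootNode.addChildNode(floorNode)
}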
Try studying this Apple sample app for more information on placing objects, casting shadows, etc.:
https://developer.apple.com/documentation/arkit/environmental_analysis/placing_objects_and_handling_3d_interaction

RealityKit – Create line between two points in 3d space

How to create a line between two points in 3d space with RealityKit?
There are examples of creating lines between two points in SceneKit; however, there are basically none using RealityKit.
To create the line, I've created a rectangle model entity and placed it between my first touched point and the current touched point. From here, all I would need to do is rotate the rectangle to face the current touched point. However, using the simd_quatf(from: to:) doesn't work as intended.
rectangleModelEntity.transform.rotation = simd_quatf(from: firstTouchedPoint,
                                                     to: currTouchedPoint)
If I touch a point and then drag directly downwards, the rectangle model should become a straight line between the first touched point and the current touched point, but it stays horizontal with a slight tilt.
To solve this, I tried getting the angle between my initially horizontal line (as a vector) and the vector from the first touched point to the current touched point:
let startVec = currTouchedPoint - firstTouchedPoint
let endVec = endOfModelEntityPoint - modelEntityCenterPoint
let lengthVec = simd_length(cross(startVec, endVec))
let theta = atan2(lengthVec, dot(startVec, endVec))
This gives me the angle between the two vectors in 3d space, which seems correct; when I checked by touching and dragging directly downwards, it gave me 90 degrees.
The problem is I don't know what the axis to rotate it on should be. Since this is 3d space, the line doesn't need to be on a 2d plane, the current touched position can be downwards and in front of the starting touch position.
rectangleModelEntity.transform.rotation = simd_quatf(angle: theta, axis: ???)
Personally, I'm not even sure the above is the right way to create a line between two points. In theory it's rather basic: create a rectangle with a small height/depth to mimic a line, position it at the midpoint of the starting and current touch points, then rotate it so it's oriented correctly.
What should the axis be for the angle between the two vectors above?
Is there a better method of creating a line between two points in 3d space with RealityKit/ARKit?
I have implemented it using a box. Let me know if you have a better way.
let midPosition = SIMD3(x: (position1.x + position2.x) / 2,
                        y: (position1.y + position2.y) / 2,
                        z: (position1.z + position2.z) / 2)

let anchor = AnchorEntity()
anchor.position = midPosition
anchor.look(at: position1, from: midPosition, relativeTo: nil)

let meters = simd_distance(position1, position2)

let lineMaterial = SimpleMaterial(color: .red,
                                  roughness: 1,
                                  isMetallic: false)

let bottomLineMesh = MeshResource.generateBox(width: 0.025,
                                              height: 0.025 / 2.5,
                                              depth: meters)

let bottomLineEntity = ModelEntity(mesh: bottomLineMesh,
                                   materials: [lineMaterial])
bottomLineEntity.position = .init(0, 0.025, 0)
anchor.addChild(bottomLineEntity)
The axis is the cross product of the direction your object is facing at the beginning and the direction it should be facing now.
For example, if it's at position p1 = [x1, y1, z1], initially facing d1 = [0, 0, -1], and you want it to face a point p2 = [x, y, z], the axis would be the (normalized) cross product d1 × (p2 - p1).
You may have to swap the two operands, or just negate the angle, depending on how it works out.
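Putting that together with the angle from the question, a minimal sketch might look like this (it assumes the entity initially faces -Z; firstTouchedPoint, currTouchedPoint, and rectangleModelEntity are the names from the question):

import simd

// Rotate the entity from its assumed initial facing direction (-Z)
// toward the drag direction.
let d1 = simd_float3(0, 0, -1)                                // initial facing
let d2 = simd_normalize(currTouchedPoint - firstTouchedPoint) // desired facing
let cosTheta = min(max(simd_dot(d1, d2), -1), 1)              // clamp for acos
let theta = acos(cosTheta)
// Note: the cross product degenerates to zero when d1 and d2 are parallel;
// guard against that case before normalizing.
let axis = simd_normalize(simd_cross(d1, d2))
rectangleModelEntity.transform.rotation = simd_quatf(angle: theta, axis: axis)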

Swift ARKit: Get face anchor transform relative to camera

My Swift ARKit app needs the position and orientation of the face relative to the front-facing camera. If I set ARConfiguration.worldAlignment = .camera all I need to do is call for the faceAnchor.transform, which works perfectly; but I need to run in the default worldAlignment = .gravity. In this mode I can get faceAnchor.transform and camera.transform, which are both supplied in world coordinates. How can I use those transforms to get the face anchor in camera coordinates? I've tried multiplying those together as well as multiplying one by the other's inverse, in all four order combinations, but none of these results works. I just don't understand matrix operations well enough to succeed here. Can someone shed light on this for me?
I finally figured this out using SceneKit functions!
let currentFaceTransform = currentFaceAnchor!.transform
let currentCameraTransform = frame.camera.transform
let newFaceMatrix = SCNMatrix4.init(currentFaceTransform)
let newCameraMatrix = SCNMatrix4.init(currentCameraTransform)
let cameraNode = SCNNode()
cameraNode.transform = newCameraMatrix
let originNode = SCNNode()
originNode.transform = SCNMatrix4Identity
//Converts a transform from the node’s local coordinate space to that of another node.
let transformInCameraSpace = originNode.convertTransform(newFaceMatrix, to: cameraNode)
let faceTransformFromCamera = simd_float4x4(transformInCameraSpace)
Hope this helps some others out there!
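For what it's worth, the same result can be computed directly with simd, since expressing a world-space transform in camera coordinates is just a multiplication by the inverse of the camera's transform:

// Equivalent simd one-liner: face-in-camera = inverse(camera) * face
let faceTransformFromCamera = currentCameraTransform.inverse * currentFaceTransform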

SpriteKit: how to selectively scale nodes

As background, let's assume I have a map, literally a road map being rendered inside my SKScene. Roads are represented by SKShapeNodes with their path set to an array of CGPoints. I want the user to be able to zoom in/out, so I created a camera node:
var cam: SKCameraNode = SKCameraNode()
and as the user wants to zoom in/out by scrolling on the trackpad:
let zoomInAction = SKAction.scale(to: CGFloat(scale), duration: 0.0)
camera?.run(zoomInAction)
This works great. However, I have an additional complexity which I'm not sure how to handle: I want some nodes (for example road name labels, icons, the map legend) to be exempt from scaling, so that as the user zooms in/out a road name label remains the same size while the road shape scales proportionally.
Not sure how to handle this. Can I have a hierarchy of scenes so one layer scales and the other doesn't? Can that be achieved by attaching the camera node to the "scalable" layer? Any help appreciated!
Here is the deal: if you don't want a node's scale to change with the camera, add the node to the camera's node tree. Don't forget to add the cameraNode to the scene; otherwise, the nodes attached to the camera won't be rendered.
In the following, the label is rendered via the camera and won't change scale.
let label = SKLabelNode(text: "GFFFGGG")
label.fontSize = 30
label.fontColor = UIColor.black
label.name = "cool"
label.zPosition = 100

let camera = SKCameraNode()
camera.addChild(label)
scene.addChild(camera)
scene.camera = camera
camera.position = CGPoint(x: 0, y: 0)
camera.xScale = 2.0
If you previously attached such nodes to the scene, you may remove each node from its parent and then add it to the camera; a function that batch-handles them shouldn't be as hard as it sounds.
Maybe not necessary: you may also transfer their positions into the cameraNode's tree via camera.convert(_:from:) etc.
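A small helper along those lines might look like this (moveToCamera and its parameters are illustrative names, not an established API):

// Re-parent HUD-style nodes from the scene to the camera so they keep
// their on-screen size, preserving each node's current position.
func moveToCamera(_ nodes: [SKNode], camera: SKCameraNode, scene: SKScene) {
    for node in nodes {
        let scenePosition = node.position // position in scene coordinates
        node.removeFromParent()
        node.position = camera.convert(scenePosition, from: scene)
        camera.addChild(node)
    }
}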

Swift 3 (SpriteKit): Locking the x axis of a SKSpriteNode and its physicsBody

I really need to know how to lock the x axis of an SKSpriteNode and its physicsBody. I need to keep the SKSpriteNode dynamic and affectedByGravity. The node is on a slope, which is why its x position moves due to gravity. However, I don't want the x axis of this SKSpriteNode to move due to gravity. Is there a way to lock the x axis to achieve this?
Thanks for any help :D
Edit: I have tried to apply a constraint to the x value like this:
let xConstraint = SKConstraint.positionX(SKRange(constantValue: 195))
node.constraints?.append(xConstraint)
However, this doesn't work, and I'm not sure why or how to fix it. Does anyone know of a solution?
Edit 2: SKPhysicsJointPin is actually looking more promising. In the comments of the first answer to this question, I have been trying to figure out how to use it properly in my situation.
An example of my code:
let node = SKSpriteNode(imageNamed: "node")

enum collisionType: UInt32 {
    case node = 1
    case ground = 2
    case other = 4 // the other node is unrelated to the ground and node
}

class GameScene: SKScene, SKPhysicsContactDelegate {

    override func didMove(to view: SKView) {
        // Setup node physicsBody
        node.physicsBody = SKPhysicsBody(rectangleOf: node.size)
        node.physicsBody?.categoryBitMask = collisionType.node.rawValue
        node.physicsBody?.collisionBitMask = collisionType.other.rawValue // other node that isn't the ground or the node
        node.physicsBody?.contactTestBitMask = collisionType.other.rawValue // other node that isn't the ground or the node
        node.physicsBody?.isDynamic = true
        node.physicsBody?.affectedByGravity = true
        addChild(node)

        // Physics setup
        physicsWorld.contactDelegate = self
    }
The node is on top of the ground, and the ground is composed of various SKSpriteNode lines that have a volume-based physicsBody. The ground keeps adding new lines at the front and removing the ones at the back, shifting the x value of each line by a negative amount (so the ground appears to be moving; this process is performed in an SKAction). These lines (the parts of the ground) are on an angle, which is why the node's x axis moves. I want the node to always be at the front of the ground (e.g. always on top of the newly created line). Currently, setting the position of the node like this locks the x axis (solving my issue):
override func didSimulatePhysics() {
    // Manage node position
    node.position.x = 195
    node.position.y = CGFloat([yPosition of the first line of the ground - the yPosition keeps changing])
}
Note: ^This^ function is inside the GameScene class
The x axis actually stays the same like this. However, the issue is that now the physicsBody of the node is lower than the centre of the node (which didn't happen before).
A node's constraints property is nil by default. You'll need to create an array of one or more constraints and assign it to the property. For example:
let xConstraint = SKConstraint.positionX(SKRange(constantValue: 195))
node.constraints = [xConstraint]
Update
You may want to use a camera node instead of moving the ground in the scene. With a camera node, you move the main character and the camera instead of the ground.
I think you could set the linearDamping property to 0.0
The linearDamping is a property that reduces the body's linear velocity. This property is used to simulate fluid or air friction forces on the body. The property must be a value between 0.0 and 1.0. The default value is 0.1. If the value is 0.0, no linear damping is applied to the object.
You should also pay attention to the other forces applied to your SKSpriteNode, for example the gravitational force applied by the physics world, where the dx value, as you requested, should be set to 0.0:
CGVector(dx: 0.0, dy: -4.9)
Remember also that when you apply other force vectors, such as velocity, you should keep the dx component constant at 0.0 if you want to lock the x axis.
You can find more details in the official docs.
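As a rough sketch of keeping that horizontal component zeroed every frame (node is the sprite from the question; placing this in didSimulatePhysics is an assumption):

// Zero out horizontal velocity after each physics step so gravity on the
// slope cannot push the node along the x axis.
override func didSimulatePhysics() {
    if let body = node.physicsBody {
        body.velocity = CGVector(dx: 0.0, dy: body.velocity.dy)
    }
}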
Update (after your details in the comments below):
You could also anchor your sprite to the ground with an SKPhysicsJoint (I don't know your project, so this is only an example):
let myJoint = SKPhysicsJointPin.joint(withBodyA: yourSprite.physicsBody!,
                                      bodyB: yourGround.physicsBody!,
                                      anchor: CGPoint(x: yourSprite.frame.minX, y: yourGround.frame.minY))
self.physicsWorld.add(myJoint)
You can work with the anchor property to create a joint that behaves as you wish, or add more joints.

getting GKObstacleGraph to work with SKTileMapNode

I'm currently working in Xcode 8, using Swift 3 and the new SKTileMapNode from SpriteKit, to make a 2D dungeon-crawler type of game.
I'm having trouble getting GKObstacleGraph to work with the tile map. Please help!
I tried to loop through all the tiles within the obstacle layer of the tile map, create a polygon for each tile, and store it in the GKObstacleGraph. Each tile in the obstacle layer is a wall tile. The map looks like a typical dungeon crawler, so the walls are all over the place.
I have something like below:
for row in 0..<tileMapNode.numberOfRows {
    for column in 0..<tileMapNode.numberOfColumns {
        // tileDefinition(atColumn:row:) returns nil for empty tiles
        guard let tile = tileMapNode.tileDefinition(atColumn: column, row: row) else { continue }
        let tileCenter = tileMapNode.centerOfTile(atColumn: column, row: row)

        // Find the 4 corners of each tile from its center
        let halfWidth = Float(tile.size.width / 2)
        let halfHeight = Float(tile.size.height / 2)
        let bottomLeft = float2(Float(tileCenter.x) - halfWidth, Float(tileCenter.y) - halfHeight)
        let bottomRight = float2(Float(tileCenter.x) + halfWidth, Float(tileCenter.y) - halfHeight)
        let topRight = float2(Float(tileCenter.x) + halfWidth, Float(tileCenter.y) + halfHeight)
        let topLeft = float2(Float(tileCenter.x) - halfWidth, Float(tileCenter.y) + halfHeight)

        var vertices = [topLeft, bottomLeft, bottomRight, topRight]
        let obstacle = GKPolygonObstacle(points: &vertices, count: 4)
        obstacleGraph.addObstacles([obstacle])
    }
}
However, when I run the app it reports over 80,000 nodes, which is way too many pathfinding paths.
Any help would be appreciated.
I'm not certain that GKObstacleGraph is the right choice of graph here. According to the GameplayKit documentation:
For example, you can design a level with the SpriteKit Scene Editor in Xcode and use physics bodies to mark regions that the player (or other game entities) cannot pass through, then use the obstaclesFromNodePhysicsBodies: method to generate GKPolygonObstacle objects marking impassable regions.
The function obstaclesFromNodePhysicsBodies is used like this to extract obstacles and create the graph:
let obstacles = SKNode.obstacles(fromNodePhysicsBodies: self.children)
graph = GKObstacleGraph(obstacles: obstacles, bufferRadius: 0.0)
For an SKTileMapNode representing a Cartesian grid, GKGridGraph seems the more likely choice.
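As a rough sketch of that alternative (assuming tileMapNode is the obstacle layer, and that a non-nil tile definition marks an impassable wall tile):

// Build a grid graph matching the tile map, then remove the wall nodes
// so pathfinding routes around them.
let graph = GKGridGraph(fromGridStartingAt: vector_int2(0, 0),
                        width: Int32(tileMapNode.numberOfColumns),
                        height: Int32(tileMapNode.numberOfRows),
                        diagonalsAllowed: false)

var walls = [GKGridGraphNode]()
for row in 0..<tileMapNode.numberOfRows {
    for column in 0..<tileMapNode.numberOfColumns {
        if tileMapNode.tileDefinition(atColumn: column, row: row) != nil,
           let wallNode = graph.node(atGridPosition: vector_int2(Int32(column), Int32(row))) {
            walls.append(wallNode)
        }
    }
}
graph.remove(walls)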