How to make RealityKit show only CollisionComponents? - swift

I am trying to see the CollisionComponents on my ARView.
I used .showPhysics as part of the debugOptions, but since I have 20 objects on screen, I get all the normals going crazy, and the color of the CollisionComponents is unclear (some form of weird pink).
Does anyone have an idea how to present only the CollisionComponents, without all the extra data that comes with .showPhysics?

You can extend the standard functionality of RealityKit's ARView with a simple Swift extension:
import RealityKit
import ARKit

fileprivate extension ARView.DebugOptions {

    // Builds a semi-transparent box that roughly visualizes a collision shape
    func showCollisions() -> ModelEntity {
        let box = MeshResource.generateBox(size: 0.04)
        let color = UIColor(white: 1.0, alpha: 0.15)
        let colliderMaterial = UnlitMaterial(color: color)
        return ModelEntity(mesh: box, materials: [colliderMaterial])
    }
}
...and then call this method in ViewController when you tap on the screen:
class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var ballEntity = ModelEntity()
    var sphere: MeshResource?

    @IBAction func onTap(_ sender: UITapGestureRecognizer) {
        sphere = MeshResource.generateSphere(radius: 0.02)
        let material = SimpleMaterial(color: .systemPink, isMetallic: false)
        ballEntity = ModelEntity(mesh: sphere!, materials: [material])

        let point: CGPoint = sender.location(in: arView)

        guard let query = arView.makeRaycastQuery(from: point,
                                                  allowing: .estimatedPlane,
                                                  alignment: .any)
        else { return }

        guard let raycastResult = arView.session.raycast(query).first
        else { return }

        let anchor = AnchorEntity(raycastResult: raycastResult)
        anchor.addChild(ballEntity)
        arView.scene.anchors.append(anchor)

        // Attach the translucent collider visualization to the ball
        let showCollisions = arView.debugOptions.showCollisions()   // here it is
        ballEntity.addChild(showCollisions)
        ballEntity.generateCollisionShapes(recursive: true)
    }
}
Please consider that it's an approximate visualization. This code just shows you one way to approach it.

Related

Show masking on an object that is between the camera and a wall using RealityKit

I am generating a floor plan, for which I need to capture the wall and floor together at a certain position. If the user is too near to the wall, or if any object comes between the camera and the wall/floor, I need to show a "Too Close" mask on that object, something like what is displayed in this video.
I tried to use a raycast in the session(_ session: ARSession, didUpdate frame: ARFrame) method, but I am very new to AR and don't know which method to use.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let query = self.arView?.makeRaycastQuery(from: self.arView?.center ?? CGPoint.zero,
                                                    allowing: .estimatedPlane,
                                                    alignment: .any)
    else { return }

    guard let raycastResult = self.arView?.session.raycast(query).first
    else { return }

    let currentPositionOfCamera = raycastResult.worldTransform.getPosition()
    if currentPositionOfCamera != .zero {
        let distanceFromCamera = frame.camera.transform.getPosition().distanceFrom(position: currentPositionOfCamera)
        print("Distance from raycast:", distanceFromCamera)
        if distanceFromCamera < 0.5 {
            print("Too Close")
        }
    }
}
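For reference, getPosition() and distanceFrom(position:) are not part of ARKit; presumably they are custom helpers along these lines (a hedged sketch, using only the names taken from the snippet above):

import simd

extension simd_float4x4 {
    // Translation component (fourth column) of the transform
    func getPosition() -> SIMD3<Float> {
        SIMD3<Float>(columns.3.x, columns.3.y, columns.3.z)
    }
}

extension SIMD3 where Scalar == Float {
    // Euclidean distance to another point
    func distanceFrom(position: SIMD3<Float>) -> Float {
        simd_distance(self, position)
    }
}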
I am just learning ARKit and RealityKit as well, but wouldn't your code be:
let currentPositionOfCamera = self.arView.cameraTransform.translation
if currentPositionOfCamera != .zero {
    // distance is defined in simd as the distance between 2 points
    let distanceFromCamera = distance(raycastResult.worldTransform.position, currentPositionOfCamera)
    print("Distance from raycast:", distanceFromCamera)
    if distanceFromCamera < 0.5 {
        print("Too Close")
        let rayDirection = normalize(raycastResult.worldTransform.position - self.arView.cameraTransform.translation)

        // This pulls the text back toward the camera from the plane
        let textPositionInWorldCoordinates = raycastResult.worldTransform.position - (rayDirection * 0.1)
        let textEntity = self.tooCloseModel()

        // This scales the text so it is of a consistent size
        textEntity.scale = .one * distanceFromCamera

        var textPositionWithCameraOrientation = self.arView.cameraTransform
        textPositionWithCameraOrientation.translation = textPositionInWorldCoordinates

        // self.textAnchor is defined somewhere in the class as an optional
        let textAnchor = AnchorEntity(world: textPositionWithCameraOrientation.matrix)
        textAnchor.addChild(textEntity)
        self.textAnchor = textAnchor
        self.arView.scene.addAnchor(textAnchor)
    } else {
        guard let textAnchor = self.textAnchor else { return }
        self.arView.scene.removeAnchor(textAnchor)
        self.textAnchor = nil
    }
}

// Creates a "Too Close" text ModelEntity
func tooCloseModel() -> ModelEntity {
    let lineHeight: CGFloat = 0.05
    let font = MeshResource.Font.systemFont(ofSize: lineHeight)
    let textMesh = MeshResource.generateText("Too Close", extrusionDepth: Float(lineHeight * 0.1), font: font)
    let textMaterial = SimpleMaterial(color: .red, isMetallic: true)  // any color; Apple's sample used classification.color
    let model = ModelEntity(mesh: textMesh, materials: [textMaterial])
    // Center the text
    model.position.x -= model.visualBounds(relativeTo: nil).extents.x / 2
    return model
}
This code is adapted from Apple's Visualizing Scene Semantics.
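Note that worldTransform.position on a raycast result is not built-in API either; Apple's sample defines a small helper on simd_float4x4, presumably along these lines (a sketch, assuming only the translation column is needed):

import simd

extension simd_float4x4 {
    // Translation component of the transform, as used by worldTransform.position above
    var position: SIMD3<Float> {
        SIMD3<Float>(columns.3.x, columns.3.y, columns.3.z)
    }
}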

Converting worldPosition into a SceneView position with SceneKit/Swift

iOS 15, Swift 5
I am trying to get SceneView (the SwiftUI SceneKit wrapper) to work with adding/finding nodes within my scene, and I feel I am almost there, yet I am not.
Sadly, the hit test here finds nothing, which is why I ended up commenting it out. I tried using projectPoint after that, but the results make no sense.
Now I did get this working using UIViewRepresentable, but that solution doesn't use SceneView at all.
I want to convert the drag coordinates to SCNView coordinates, since I think I can then find/match objects.
import SwiftUI
import SceneKit

struct ContentView: View {
    let settings = SharedView.shared
    var (scene, cameraNode) = GameScene.shared.makeView()

    var body: some View {
        GameView(scene: scene, cameraNode: cameraNode)
    }
}

struct GameView: View {
    let settings = SharedView.shared
    @State var scene: SCNScene
    @State var cameraNode: SCNNode

    var body: some View {
        SceneView(scene: scene,
                  pointOfView: cameraNode,
                  options: [.autoenablesDefaultLighting, .rendersContinuously],
                  delegate: SceneDelegate())
            .gesture(DragGesture(minimumDistance: 0)
                .onChanged({ gesture in
                    settings.location = gesture.location
                })
                .sequenced(before: TapGesture().onEnded({ _ in
                    GameScene.shared.tapV()
                }))
            )
    }
}

class SharedView: ObservableObject {
    @Published var location: CGPoint!
    @Published var view: UIView!
    @Published var scene: SCNScene!
    @Published var sphereNode: SCNNode!
    static var shared = SharedView()
}

@MainActor class SceneDelegate: NSObject, SCNSceneRendererDelegate {
}
class GameScene: UIView {
    static var shared = GameScene()
    let settings = SharedView.shared
    var view: SCNView!
    var scene: SCNScene!

    func makeView() -> (SCNScene, SCNNode) {
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.red

        let sphere = SCNSphere(radius: 1.0)
        sphere.materials = [material]
        let sphereNode = SCNNode()
        sphereNode.geometry = sphere
        sphere.name = "sphereNode"

        let camera = SCNCamera()
        camera.fieldOfView = 90.0

        let light = SCNLight()
        light.color = UIColor.white
        light.type = .omni

        let cameraNode = SCNNode()
        cameraNode.simdPosition = SIMD3<Float>(0.0, 0.0, 6)
        cameraNode.camera = camera
        cameraNode.light = light

        scene = SCNScene()
        scene.background.contents = UIColor.black
        scene.rootNode.addChildNode(sphereNode)
        scene.rootNode.addChildNode(cameraNode)

        view = SCNView()
        view.scene = scene
        view.pointOfView = cameraNode
        settings.sphereNode = sphereNode
        return (scene, cameraNode)
    }

    func tapV() {
        let location = settings.location
        let hits = view.hitTest(location!, options: [.boundingBoxOnly: true, .firstFoundOnly: true, .searchMode: true])
        print("hits \(hits.count)")
        // for hit in hits {
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.yellow
        let geometry = SCNSphere(radius: 0.1)
        geometry.materials = [material]
        let node = SCNNode()
        node.geometry = geometry
        // node.simdPosition = hit.simdWorldCoordinates
        let projectedOrigin = view.projectPoint(SCNVector3Zero)
        var worldPoint = view.unprojectPoint(SCNVector3(location!.x, location!.y, CGFloat(0)))
        node.worldPosition = worldPoint
        view.scene!.rootNode.addChildNode(node)
        print("node \(node.position) \(hits.count)")
        // }
    }
}
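For what it's worth, one likely reason the projectPoint/unprojectPoint results look wrong: unprojectPoint expects a normalized depth in its z component, and the snippet above computes projectedOrigin but then passes a depth of 0 (the near plane). A hedged sketch of the usual pattern, reusing the names from tapV():

// Project a known world point (here the origin) to get its normalized screen depth,
// then unproject the touch location at that same depth.
let projectedOrigin = view.projectPoint(SCNVector3Zero)
let worldPoint = view.unprojectPoint(SCNVector3(Float(location!.x),
                                                Float(location!.y),
                                                projectedOrigin.z))
node.worldPosition = worldPoint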

AR objects not anchoring or sizing correctly in RealityKit

I have an AR scene with two objects, one brown cow and one black one. They're both supposed to be displayed in the scene, distanced a little apart. I originally only had the brown cow, which was just a little bit too big. I changed something, which I can't remember, and now my scene is from the inside of the cow, and I can't exit the cow's corpse. It seems like it moves around when I do. I think the issue is a positive number for the minimum bounds, but I'm not entirely sure. I've set the z axis for the cow as well. How can I make the cow a little bit smaller and about 5-7 yards away from me at spawn?
import UIKit
import RealityKit
import ARKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self
        showModel()
        overlayCoachingView()
        setupARView()
        arView.addGestureRecognizer(UITapGestureRecognizer(target: self,
                                                           action: #selector(handleTap(recognizer:))))
    }

    func showModel() {
        let anchorEntity = AnchorEntity(plane: .horizontal, minimumBounds: [0.7, 0.7])
        let entity = try! Entity.loadModel(named: "COW_ANIMATIONS")
        entity.setParent(anchorEntity)
        arView.scene.addAnchor(anchorEntity)
    }

    func overlayCoachingView() {
        let coachingView = ARCoachingOverlayView(frame: CGRect(x: 0, y: 0,
                                                               width: arView.frame.width,
                                                               height: arView.frame.height))
        coachingView.session = arView.session
        coachingView.activatesAutomatically = true
        coachingView.goal = .horizontalPlane
        view.addSubview(coachingView)
    }

    // Load the "Box" scene from the "Experience" Reality File
    // let boxAnchor = try! Experience.loadBox()
    // Add the box anchor to the scene
    // arView.scene.anchors.append(boxAnchor)

    func setupARView() {
        arView.automaticallyConfigureSession = false
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        configuration.environmentTexturing = .automatic
        arView.session.run(configuration)
    }

    // object placement
    @objc
    func handleTap(recognizer: UITapGestureRecognizer) {
        let location = recognizer.location(in: arView)
        let results = arView.raycast(from: location, allowing: .estimatedPlane, alignment: .horizontal)
        if let firstResult = results.first {
            let anchor = ARAnchor(name: "COW_ANIMATIONS", transform: firstResult.worldTransform)
            arView.session.add(anchor: anchor)
        } else {
            print("Object placement failed - couldn't find surface.")

            // cow animations
            let robot = try! ModelEntity.load(named: "COW_ANIMATIONS")
            let anchor = AnchorEntity()
            anchor.children.append(robot)
            arView.scene.anchors.append(anchor)
            robot.playAnimation(robot.availableAnimations[0].repeat(duration: .infinity),
                                transitionDuration: 0.5,
                                startsPaused: false)

            // start cow animation
            let brownCow = try! ModelEntity.load(named: "COW_ANIMATIONS")
            let blackCow = try! ModelEntity.load(named: "Cow")
            brownCow.position.x = -1.0
            blackCow.position.x = 1.0
            brownCow.setParent(anchor)
            blackCow.setParent(anchor)
            arView.scene.anchors.append(anchor)

            let cowAnimationResource = brownCow.availableAnimations[0]
            let horseAnimationResource = blackCow.availableAnimations[0]
            brownCow.playAnimation(cowAnimationResource.repeat(duration: .infinity),
                                   transitionDuration: 1.25,
                                   startsPaused: false)
            blackCow.playAnimation(horseAnimationResource.repeat(duration: .infinity),
                                   transitionDuration: 0.75,
                                   startsPaused: false)
            // end cow animations
        }
    }

    func placeObject(named entityName: String, for anchor: ARAnchor) {
        let entity = try! ModelEntity.loadModel(named: entityName)
        entity.generateCollisionShapes(recursive: true)
        arView.installGestures([.rotation, .translation], for: entity)
        let anchorEntity = AnchorEntity(anchor: anchor)
        anchorEntity.addChild(entity)
        arView.scene.addAnchor(anchorEntity)
    }
}

extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            if let anchorName = anchor.name, anchorName == "COW_ANIMATIONS" {
                placeObject(named: anchorName, for: anchor)
            }
        }
    }
}
First step
In RealityKit, if a model is tethered to its personal anchor (the case where one anchor holds just one model), you have two ways to scale it:
cowEntity.scale = [0.7, 0.7, 0.7]
// or
cowAnchor.scale = [1, 1, 1] * 0.7
and you have at least two ways to position the cow model along any axis (for instance, along the Z axis):
cowEntity.position = SIMD3<Float>(0, 0, -2)
// or
cowAnchor.position.z = -2.0
So, as you can see, when you transform cowAnchor, all its children get this transformation as well.
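Putting this first step together, here is a minimal sketch of showModel() from the question that spawns the cow smaller and roughly 5 meters (about 5.5 yards) in front of the anchor; the exact scale factor is just an example:

func showModel() {
    let anchorEntity = AnchorEntity(plane: .horizontal, minimumBounds: [0.7, 0.7])
    let cowEntity = try! Entity.loadModel(named: "COW_ANIMATIONS")
    cowEntity.scale = [0.7, 0.7, 0.7]            // make the cow a bit smaller
    cowEntity.position = SIMD3<Float>(0, 0, -5)  // ~5 m in front of the anchor
    cowEntity.setParent(anchorEntity)
    arView.scene.addAnchor(anchorEntity)
}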
Second step
You need to appropriately place the model's pivot point in a 3D authoring app. At the moment, RealityKit doesn't have a tool to fix a pivot's position the way you can in SceneKit using the simdPivot instance property.
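For comparison, here is what that pivot fix looks like in SceneKit (a sketch; cowNode is a hypothetical SCNNode wrapping the same model):

// Shift the pivot to the bottom of the node's bounding box,
// so the model stands on its anchor instead of being centered on it
var pivot = matrix_identity_float4x4
pivot.columns.3.y = cowNode.boundingBox.min.y
cowNode.simdPivot = pivot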

Smooth object rotation in ARSCNView

I'm trying to make my pan gesture as smooth as the JigSpace app's when rotating 3D objects in AR. Here's what I have right now:
@objc func rotateObject(sender: UIPanGestureRecognizer) {
    let sceneView = sender.view as! ARSCNView
    var currentAngleY: Float = 0.0
    let translation = sender.translation(in: sceneView)
    var newAngleY = Float(translation.x) * Float(Double.pi) / 180

    sceneView.scene.rootNode.enumerateChildNodes { (node, stop) in
        if sender.state == .changed {
            newAngleY -= currentAngleY
            node.eulerAngles.y = newAngleY
        } else if sender.state == .ended {
            currentAngleY = newAngleY
            node.removeAllActions()
        }
    }
}
There seems to be a delay when I'm using it, and I'm trying to figure out how to make the rotation as smooth as possible, again, kind of like JigSpace or the IKEA app.
I've also noticed that when I try to rotate the object at certain angles, it can get quite awkward.
Looking at your rotate object function, it seems like some of the logic is not quite right.
Firstly, I believe that var currentAngleY: Float = 0 should be outside of your function body.
Secondly, you should be adding currentAngleY to the newAngleY variable, e.g.:
/// Rotates The Models On Their YAxis
///
/// - Parameter gesture: UIPanGestureRecognizer
@objc func rotateModels(_ gesture: UIPanGestureRecognizer) {
    let translation = gesture.translation(in: gesture.view!)
    var newAngleY = Float(translation.x) * Float(Double.pi) / 180.0
    newAngleY += currentAngleY

    DispatchQueue.main.async {
        self.sceneView.scene.rootNode.enumerateChildNodes { (node, _) in
            node.eulerAngles.y = newAngleY
        }
    }

    if gesture.state == .ended { currentAngleY = newAngleY }
}
An example of this in a working context would therefore be like so:
class ViewController: UIViewController {

    @IBOutlet var augmentedRealityView: ARSCNView!
    var currentAngleY: Float = 0

    //-----------------------
    // MARK: - View LifeCycle
    //-----------------------

    override func viewDidLoad() {
        super.viewDidLoad()

        //1. Generate Our Three Box Nodes
        generateBoxNodes()

        //2. Create Our Rotation Gesture
        let rotateGesture = UIPanGestureRecognizer(target: self, action: #selector(rotateModels(_:)))
        self.view.addGestureRecognizer(rotateGesture)

        //3. Run The Session
        let configuration = ARWorldTrackingConfiguration()
        augmentedRealityView.session.run(configuration)
    }

    //------------------------
    // MARK: - Node Generation
    //------------------------

    /// Generates Three SCNNodes With An SCNBox Geometry
    func generateBoxNodes() {

        //1. Create An Array Of Colours For Each Face
        let colours: [UIColor] = [.red, .green, .blue, .purple, .cyan, .black]

        //2. Create An SCNNode With An SCNBox Geometry
        let boxNode = SCNNode()
        let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.01)
        boxNode.geometry = boxGeometry

        //3. Create A Different Material For Each Of The Six Faces
        var materials = [SCNMaterial]()

        for i in 0..<colours.count {
            let faceMaterial = SCNMaterial()
            faceMaterial.diffuse.contents = colours[i]
            materials.append(faceMaterial)
        }

        //4. Set The Geometry's Materials
        boxNode.geometry?.materials = materials

        //5. Create Two More Nodes By Cloning The First One
        let secondBox = boxNode.flattenedClone()
        let thirdBox = boxNode.flattenedClone()

        //6. Position Them In A Line & Add To The Scene
        boxNode.position = SCNVector3(-0.2, 0, -1.5)
        secondBox.position = SCNVector3(0, 0, -1.5)
        thirdBox.position = SCNVector3(0.2, 0, -1.5)
        self.augmentedRealityView.scene.rootNode.addChildNode(boxNode)
        self.augmentedRealityView.scene.rootNode.addChildNode(secondBox)
        self.augmentedRealityView.scene.rootNode.addChildNode(thirdBox)
    }

    //----------------------
    // MARK: - Node Rotation
    //----------------------

    /// Rotates The Models On Their YAxis
    ///
    /// - Parameter gesture: UIPanGestureRecognizer
    @objc func rotateModels(_ gesture: UIPanGestureRecognizer) {
        let translation = gesture.translation(in: gesture.view!)
        var newAngleY = Float(translation.x) * Float(Double.pi) / 180.0
        newAngleY += currentAngleY

        DispatchQueue.main.async {
            self.augmentedRealityView.scene.rootNode.enumerateChildNodes { (node, _) in
                node.eulerAngles.y = newAngleY
            }
        }

        if gesture.state == .ended { currentAngleY = newAngleY }
    }
}
Hope it helps...

Access Geometry Child Node SceneKit

I am trying to access the child node boxNode, and then rotate the node using the input from a pan gesture recognizer. How should I access this node so I can manipulate it? Should I declare the cube node in a global scope and modify it that way? I am new to Swift, so I apologize if this code looks sloppy. I would like to add rotation actions inside my panGesture() function. Thanks!
import UIKit
import SceneKit

class GameViewController: UIViewController {

    var scnView: SCNView!
    var scnScene: SCNScene!

    override func viewDidLoad() {
        super.viewDidLoad()
        setupView()
        setupScene()
    }

    // override var shouldAutorotate: Bool {
    //     return true
    // }
    //
    // override var prefersStatusBarHidden: Bool {
    //     return true
    // }

    func setupView() {
        scnView = self.view as! SCNView
    }

    func setupScene() {
        scnScene = SCNScene()
        scnView.scene = scnScene
        scnView.backgroundColor = UIColor.blueColor()
        initCube()
        initLight()
        initCamera()
    }

    func initCube() {
        var cubeGeometry: SCNGeometry
        cubeGeometry = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0.0)
        let boxNode = SCNNode(geometry: cubeGeometry)
        scnScene.rootNode.addChildNode(boxNode)
    }

    func initLight() {
        let light = SCNLight()
        light.type = SCNLightTypeAmbient
        let lightNode = SCNNode()
        lightNode.light = light
        light.castsShadow = false
        lightNode.position = SCNVector3(x: 1.5, y: 1.5, z: 1.5)
        scnScene.rootNode.addChildNode(lightNode)
    }

    func initCamera() {
        let camera = SCNCamera()
        let cameraNode = SCNNode()
        cameraNode.camera = camera
        cameraNode.position = SCNVector3(x: 0.0, y: 0.0, z: 5.0)
        scnScene.rootNode.addChildNode(cameraNode)
    }

    func initRecognizer() {
        let panTestRecognizer = UIPanGestureRecognizer(target: self, action: #selector(GameViewController.panGesture(_:)))
        scnView.addGestureRecognizer(panTestRecognizer)
    }

    //TAKE GESTURE INPUT AND ROTATE CUBE ACCORDINGLY
    func panGesture(sender: UIPanGestureRecognizer) {
        //let translation = sender.translationInView(sender.view!)
    }
}
First of all, you should update your Xcode; it looks like you are using an older Swift version. SCNLightTypeAmbient is .ambient in Swift 3.
So the way to go is to give your nodes names to identify them:
myChildNode.name = "FooChildBar"
and then you can call on your parent node
let theNodeYouAreLookingFor = parentNode.childNode(withName: "FooChildBar", recursively: true)
You can also use this on your scene's root node
let theNodeYouAreLookingFor = scene.rootNode.childNode(withName: "FooChildBar", recursively: true)
which will look at all nodes in your scene and return the one you gave that name.
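Putting this together with the question's code, here is a minimal sketch of panGesture(_:) that looks up the named box and rotates it (Swift 3+ syntax; it assumes you set boxNode.name = "FooChildBar" in initCube(), and the degrees-per-point mapping is just one reasonable choice):

var currentAngleY: Float = 0

@objc func panGesture(_ sender: UIPanGestureRecognizer) {
    // Look up the cube by the name assigned in initCube()
    guard let boxNode = scnScene.rootNode.childNode(withName: "FooChildBar", recursively: true) else { return }

    let translation = sender.translation(in: sender.view!)
    var newAngleY = Float(translation.x) * Float(Double.pi) / 180.0
    newAngleY += currentAngleY
    boxNode.eulerAngles.y = newAngleY

    if sender.state == .ended { currentAngleY = newAngleY }
}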