ARKit – Environment Occlusion (Swift)

In Unity we can implement occlusion with Environment Depth, which uses ARKit behind the scenes. How can I achieve the same behaviour natively in iOS ARKit?
I know we can configure frame semantics with depth, but I doubt it is really the same as Unity's Environment Depth occlusion.
// Build the set of required frame semantics.
let configuration = ARWorldTrackingConfiguration()
let semantics: ARConfiguration.FrameSemantics = [.sceneDepth]
configuration.frameSemantics = semantics
session.run(configuration)
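For reference, a minimal sketch of reading the per-frame depth data that .sceneDepth provides (assuming the reader object is assigned as session.delegate; session(_:didUpdate:) is the standard ARSessionDelegate callback):

import ARKit

final class DepthReader: NSObject, ARSessionDelegate {
    // Called every frame; sceneDepth is populated once .sceneDepth semantics are enabled.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let sceneDepth = frame.sceneDepth else { return }
        let depthMap: CVPixelBuffer = sceneDepth.depthMap             // 32-bit float depth, in metres
        let confidenceMap: CVPixelBuffer? = sceneDepth.confidenceMap  // per-pixel confidence
        _ = (depthMap, confidenceMap)
    }
}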

In ARKit, enable the sceneReconstruction option, and in RealityKit turn on the .occlusion scene-understanding option.
The only drawback is an ugly mask with soft, dilated edges around real-world objects...
import RealityKit
import SwiftUI
import ARKit

struct ContentView: View {
    var body: some View {
        ARContainer().ignoresSafeArea()
    }
}

struct ARContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.cameraMode = .ar
        arView.automaticallyConfigureSession = false

        let config = ARWorldTrackingConfiguration()
        // Scene reconstruction requires a LiDAR-equipped device.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        arView.session.run(config)
        // Let the reconstructed mesh occlude virtual content.
        arView.environment.sceneUnderstanding.options = .occlusion

        let box: MeshResource = .generateBox(size: 0.5)
        let material = SimpleMaterial(color: .green, isMetallic: true)
        let entity = ModelEntity(mesh: box, materials: [material])
        let anchor = AnchorEntity(world: [0, 0, -1.5])
        anchor.addChild(entity)
        arView.scene.anchors.append(anchor)
        return arView
    }
    func updateUIView(_ uiView: ARView, context: Context) { }
}

Related

Import USDZ and manipulate materials for each surface separately at runtime

Basically, my requirement is to import a USDZ model, for example a car, and then apply different materials to each body part, e.g. material A to the body, material B to the side mirrors, material C to the wheels, etc.
I can iterate through the materials in the ModelEntity's ModelComponent and assign a material as below.
func updateUIView(_ uiView: ARView, context: Context) {
    let entity = try! Entity.loadModel(named: "CarScene")
    var material = PhysicallyBasedMaterial()
    material.baseColor = PhysicallyBasedMaterial.BaseColor(tint: .red)
    material.roughness = PhysicallyBasedMaterial.Roughness(floatLiteral: 0.0)
    material.metallic = PhysicallyBasedMaterial.Metallic(floatLiteral: 1.0)
    // Overwrite every material slot with the same red PBR material.
    for i in 0 ..< (entity.model?.materials.count ?? 0) {
        entity.model?.materials[i] = material
    }
    let anchor = AnchorEntity(plane: .horizontal)
    anchor.addChild(entity)
    uiView.scene.addAnchor(anchor)
}
But I don't know which material is which. If I have multiple USDZ files, I want to be able to accurately assign materials to each body part. Is that doable?
Do I need to break up the USDZ model and assign identifiers before importing it into my Xcode project? Any help would be really appreciated.
Use the following sample code, which shows how to modify a model's materials at runtime:
import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {
    let arView = ARView(frame: .zero)
    let fiat = try! Entity.load(named: "Fiat_Uno")

    func makeUIView(context: Context) -> ARView {
        print(fiat)    // prints the entity hierarchy, including node names
        fiat.scale /= 5
        fiat.orientation = .init(angle: .pi/1.5, axis: [0, 1, 0])
        let anchor = AnchorEntity()
        anchor.addChild(fiat)
        arView.scene.anchors.append(anchor)
        return arView
    }
    func updateUIView(_ view: ARView, context: Context) {
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.75) {
            let wheels = fiat.findEntity(named: "Rodas_Material_001_0")?
                             .children[0] as? ModelEntity
            wheels?.model?.materials[0] = UnlitMaterial(color: .green)

            DispatchQueue.main.asyncAfter(deadline: .now() + 0.75) {
                let body = fiat.findEntity(named: "Fiat_UNO_Material_001_0")?
                               .children[0] as? ModelEntity
                body?.model?.materials[0] = UnlitMaterial(color: .blue)
            }
        }
    }
}

struct ContentView : View {
    var body: some View {
        ARViewContainer().ignoresSafeArea()
    }
}
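If you don't know the entity names in advance, a small helper like this can walk the hierarchy and print every ModelEntity together with its material count (a sketch; the dumpModelEntities name is mine, not a RealityKit API):

import RealityKit

// Recursively prints each ModelEntity's name and how many material slots it owns.
func dumpModelEntities(_ entity: Entity, indent: String = "") {
    if let model = (entity as? ModelEntity)?.model {
        print("\(indent)\(entity.name): \(model.materials.count) material(s)")
    }
    for child in entity.children {
        dumpModelEntities(child, indent: indent + "  ")
    }
}

Calling dumpModelEntities(fiat) once after loading reveals which named parts (body, mirrors, wheels) own which material slots, so you can target them by name as above.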

Move entity to center of nonAR view

I'm recreating the ARQuickLook controller in code. One of its behaviors is to move the model to the visible center when entering Obj mode. I've hacked the ARViewContainer of the default Xcode Augmented Reality App to demonstrate what I'm trying to do.
I think that moving the entity to 0,0,0 will generally not do the right thing, because the world origin will be elsewhere. What I'm not clear on is how to specify the translation for entity.move() in the code. I'm assuming I'll need to raycast using a CGPoint describing the view center to obtain the appropriate translation, but I'm not sure about the details.
Thanks for any help with this.
struct ARViewContainer: UIViewRepresentable {
    let arView = ARView(frame: .zero)
    let boxAnchor = try! Experience.loadBox()

    func makeUIView(context: Context) -> ARView {
        arView.scene.anchors.append(boxAnchor)
        return arView
    }
    func updateUIView(_ uiView: ARView, context: Context) {
        DispatchQueue.main.asyncAfter(deadline: .now() + 4) {
            arView.environment.background = .color(.white)
            arView.cameraMode = .nonAR
            if let entity = boxAnchor.children.first {
                let translation = SIMD3<Float>(x: 0, y: 0, z: 0)
                let transform = Transform(scale: .one,
                                          rotation: simd_quatf(),
                                          translation: translation)
                entity.move(to: transform,
                            relativeTo: nil,
                            duration: 2,
                            timingFunction: .easeInOut)
            }
        }
    }
}
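The thread leaves this question open, but one possible approach (a sketch, not a verified recreation of ARQuickLook's behavior) is to skip raycasting entirely in .nonAR mode and instead place the entity a fixed distance in front of the current camera via ARView.cameraTransform, replacing the zero translation in the closure above:

// A sketch: move `entity` to a point 1 m in front of the camera
// (the 1 m distance is an arbitrary assumption).
let camera = arView.cameraTransform
// Column 2 of the camera matrix is its backward axis, so negate it for "forward".
let forward = -SIMD3<Float>(camera.matrix.columns.2.x,
                            camera.matrix.columns.2.y,
                            camera.matrix.columns.2.z)
let target = Transform(scale: .one,
                       rotation: entity.orientation(relativeTo: nil),
                       translation: camera.translation + forward * 1.0)
entity.move(to: target, relativeTo: nil, duration: 2, timingFunction: .easeInOut)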

How to zoom in to a node of a scene that's being displayed in SwiftUI

I have a SceneKit scene in SwiftUI, wrapped in a UIViewRepresentable. How can I zoom in on one of the nodes of the scene when the user touches that specific node in the SceneKit view?
import SwiftUI
import SceneKit

struct HouseView : UIViewRepresentable {
    func makeUIView(context: Context) -> SCNView {
        return SCNView(frame: .zero)
    }
    func updateUIView(_ scnView: SCNView, context: Context) {
        let houseScene = SCNScene(named: "House.scn")
        scnView.scene = houseScene
        // Allows the user to manipulate the camera.
        scnView.allowsCameraControl = true
        // Show statistics such as fps and timing information.
        scnView.showsStatistics = false
        scnView.backgroundColor = UIColor.systemBackground
        scnView.defaultCameraController.maximumVerticalAngle = 10
        scnView.defaultCameraController.minimumVerticalAngle = -10
        scnView.defaultCameraController.maximumHorizontalAngle = 180
        scnView.defaultCameraController.minimumHorizontalAngle = -10
        scnView.isJitteringEnabled = true
        let cameraNode = houseScene?.rootNode.childNode(withName: "CameraNode",
                                                        recursively: true)
        cameraNode?.position = SCNVector3(x: 12, y: 2, z: 0)
    }
}

struct HouseView_Previews: PreviewProvider {
    static var previews: some View {
        HouseView()
    }
}
Please see this post: 54058938.
Create a basic camera class so you can focus on the touched node and set the camera's distance from it.
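As a minimal sketch of that idea (the ZoomController class, the gesture wiring and the "CameraNode" name are assumptions, not taken from the linked post), you can hit-test the tap and animate the camera toward the touched node:

import SceneKit
import UIKit

final class ZoomController: NSObject {
    // Assumed to be installed as a UITapGestureRecognizer target on the SCNView.
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        guard let scnView = gesture.view as? SCNView else { return }
        let location = gesture.location(in: scnView)
        guard let hit = scnView.hitTest(location, options: nil).first,
              let cameraNode = scnView.scene?.rootNode
                  .childNode(withName: "CameraNode", recursively: true) else { return }
        // Animate the camera to a point 2 m in front of the touched node.
        SCNTransaction.begin()
        SCNTransaction.animationDuration = 1.0
        let target = hit.node.worldPosition
        cameraNode.position = SCNVector3(target.x, target.y, target.z + 2)
        cameraNode.look(at: target)
        SCNTransaction.commit()
    }
}

Note that allowsCameraControl hands the camera over to the default camera controller, so with this approach you would typically drive the camera node yourself.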

How to make RealityKit show only CollisionComponents?

I am trying to see the CollisionComponents in my ARView.
I used .showPhysics as part of the debugOptions, but since I have 20 objects on screen, I get all the normals going crazy, and the color of the CollisionComponents is unclear (some form of weird pink).
Does anyone have any idea how to present only the CollisionComponents, without any extra data, as part of .showPhysics?
You can extend the standard functionality of RealityKit's ARView with a simple Swift extension:
import RealityKit
import ARKit

fileprivate extension ARView.DebugOptions {
    func showCollisions() -> ModelEntity {
        print("Code for visualizing collision objects goes here...")
        let vc = ViewController()
        let box = MeshResource.generateBox(size: 0.04)
        let color = UIColor(white: 1.0, alpha: 0.15)
        let colliderMaterial = UnlitMaterial(color: color)
        vc.visualCollider = ModelEntity(mesh: box,
                                        materials: [colliderMaterial])
        return vc.visualCollider
    }
}
...and then call this method in your ViewController when tapping on the screen:
class ViewController: UIViewController {
    @IBOutlet var arView: ARView!
    let anchor = AnchorEntity()
    var ballEntity = ModelEntity()
    var visualCollider = ModelEntity()
    var sphere: MeshResource?

    @IBAction func onTap(_ sender: UITapGestureRecognizer) {
        sphere = MeshResource.generateSphere(radius: 0.02)
        let material = SimpleMaterial(color: .systemPink,
                                      isMetallic: false)
        ballEntity = ModelEntity(mesh: sphere!,
                                 materials: [material])
        let point: CGPoint = sender.location(in: arView)
        guard let query = arView.makeRaycastQuery(from: point,
                                                  allowing: .estimatedPlane,
                                                  alignment: .any)
        else { return }
        let result = arView.session.raycast(query)
        guard let raycastResult = result.first
        else { return }
        let anchor = AnchorEntity(raycastResult: raycastResult)
        anchor.addChild(ballEntity)
        arView.scene.anchors.append(anchor)

        let showCollisions = arView.debugOptions.showCollisions()  // here it is
        ballEntity.addChild(showCollisions)
        ballEntity.generateCollisionShapes(recursive: true)
    }
}
Bear in mind that this is an approximate visualization; the code just shows you one way to go.
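If you want the overlay to match each entity's actual bounds rather than a fixed 4 cm box, a variant like this may help (a sketch; visualBounds(relativeTo:) is RealityKit API, but the helper name is mine):

import RealityKit
import UIKit

// Builds a translucent box matching an entity's visual bounds, which approximates
// its collision shape, since generateCollisionShapes() derives from the same geometry.
func makeColliderOverlay(for entity: Entity) -> ModelEntity {
    let bounds = entity.visualBounds(relativeTo: entity)
    let mesh = MeshResource.generateBox(size: bounds.extents)
    let material = UnlitMaterial(color: UIColor(white: 1.0, alpha: 0.15))
    let overlay = ModelEntity(mesh: mesh, materials: [material])
    overlay.position = bounds.center
    return overlay
}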

How to Add Material to ModelEntity programmatically in RealityKit?

The docs for RealityKit include the structs: OcclusionMaterial, SimpleMaterial, and UnlitMaterial for adding materials to a ModelEntity.
Alternatively you can load in a model with a material attached to it.
I want to add a custom material/texture to a ModelEntity programmatically. How can I achieve this on the fly without adding the material to a model in Reality Composer or some other 3D Software?
Updated: January 26, 2023
RealityKit materials
There are 6 types of materials in RealityKit 2.0 and RealityFoundation at the moment:
SimpleMaterial
UnlitMaterial
OcclusionMaterial (read this post to find out how to set up a SceneKit occlusion shader)
VideoMaterial (look at this post to find out how to set it up)
PhysicallyBasedMaterial
CustomMaterial (Medium story)
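The SwiftUI and Cocoa samples below cover SimpleMaterial and PhysicallyBasedMaterial; for two of the remaining types, minimal examples look like this (the video URL is a placeholder assumption):

import RealityKit
import AVFoundation

// OcclusionMaterial renders nothing itself but hides virtual content behind it.
let occlusion = OcclusionMaterial()

// VideoMaterial uses an AVPlayer's output as a texture.
let player = AVPlayer(url: URL(fileURLWithPath: "/path/to/video.mp4"))
let video = VideoMaterial(avPlayer: player)
player.play()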
SwiftUI version
Here I used two macOS implementations (SwiftUI and Cocoa) to demonstrate how to programmatically assign RealityKit materials.
import SwiftUI
import RealityKit

struct VRContainer : NSViewRepresentable {
    let arView = ARView(frame: .zero)
    let anchor = AnchorEntity()

    func makeNSView(context: Context) -> ARView {
        var smpl = SimpleMaterial()
        smpl.color.tint = .blue
        smpl.metallic = 0.7
        smpl.roughness = 0.2

        var pbr = PhysicallyBasedMaterial()
        pbr.baseColor.tint = .green

        let mesh: MeshResource = .generateBox(width: 0.5,
                                              height: 0.5,
                                              depth: 0.5,
                                              cornerRadius: 0.02,
                                              splitFaces: true)
        let box = ModelEntity(mesh: mesh, materials: [smpl, pbr])
        box.orientation = Transform(pitch: .pi/4,
                                    yaw: .pi/4, roll: 0.0).rotation
        anchor.addChild(box)
        arView.scene.anchors.append(anchor)
        arView.environment.background = .color(.black)
        return arView
    }
    func updateNSView(_ view: ARView, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        VRContainer().ignoresSafeArea()
    }
}
Cocoa version
import Cocoa
import RealityKit

class ViewController: NSViewController {
    @IBOutlet var arView: ARView!

    override func awakeFromNib() {
        let box = try! Experience.loadBox()

        var simpleMat = SimpleMaterial()
        simpleMat.color = .init(tint: .blue, texture: nil)
        simpleMat.metallic = .init(floatLiteral: 0.7)
        simpleMat.roughness = .init(floatLiteral: 0.2)

        var pbr = PhysicallyBasedMaterial()
        pbr.baseColor = .init(tint: .green, texture: nil)

        let mesh: MeshResource = .generateBox(width: 0.5,
                                              height: 0.5,
                                              depth: 0.5,
                                              cornerRadius: 0.02,
                                              splitFaces: true)
        let boxComponent = ModelComponent(mesh: mesh,
                                          materials: [simpleMat, pbr])
        box.steelBox?.children[0].components.set(boxComponent)
        box.steelBox?.orientation = Transform(pitch: .pi/4,
                                              yaw: .pi/4,
                                              roll: 0).rotation
        arView.scene.anchors.append(box)
    }
}
Read this post to find out how to load a texture for RealityKit's shaders.
RealityKit shaders vs SceneKit shaders
We know that there are 5 different shading models in SceneKit, and we can use RealityKit's SimpleMaterial, PhysicallyBasedMaterial and UnlitMaterial to reproduce all five shaders we're accustomed to.
Let's see what that looks like:
SCNMaterial.LightingModel.blinn           – SimpleMaterial(color: .gray,
                                                           roughness: .float(0.5),
                                                           isMetallic: false)
SCNMaterial.LightingModel.lambert         – SimpleMaterial(color: .gray,
                                                           roughness: .float(1.0),
                                                           isMetallic: false)
SCNMaterial.LightingModel.phong           – SimpleMaterial(color: .gray,
                                                           roughness: .float(0.0),
                                                           isMetallic: false)
SCNMaterial.LightingModel.physicallyBased – PhysicallyBasedMaterial()

// All three unlit shaders (`.constant`, `UnlitMaterial` and `VideoMaterial`)
// don't depend on lighting.
SCNMaterial.LightingModel.constant        – UnlitMaterial(color: .gray)
                                          – VideoMaterial(avPlayer: avPlayer)