Move entity to center of nonAR view – ARKit

I'm recreating the ARQuickLook controller in code. One of its behaviors is to move the model to the visible center when entering Object mode. I've hacked the ARViewContainer of the default Xcode Augmented Reality App template to demonstrate what I'm trying to do.
I think that moving the entity to (0, 0, 0) will generally not do the right thing, because the world origin will be elsewhere. What I'm not clear on is how to specify the translation for entity.move() in the code. I'm assuming I'll need to raycast using a CGPoint describing the view center to obtain the appropriate translation, but I'm not sure about the details.
Thanks for any help with this.
import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {
    let arView = ARView(frame: .zero)
    let boxAnchor = try! Experience.loadBox()

    func makeUIView(context: Context) -> ARView {
        arView.scene.anchors.append(boxAnchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        DispatchQueue.main.asyncAfter(deadline: .now() + 4) {
            arView.environment.background = .color(.white)
            arView.cameraMode = .nonAR

            if let entity = boxAnchor.children.first {
                // Moving to the world origin, which is generally not the view center.
                let translation = SIMD3<Float>(x: 0, y: 0, z: 0)
                let transform = Transform(scale: .one,
                                          rotation: simd_quatf(),
                                          translation: translation)
                entity.move(to: transform,
                            relativeTo: nil,
                            duration: 2,
                            timingFunction: .easeInOut)
            }
        }
    }
}
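
For what it's worth, here is a minimal sketch of the raycast idea (an assumption on my part, not a confirmed solution): raycast from the view's center point while the session still has plane estimates, then use the hit's world position as the move target.

// Sketch: raycast from the view center and move the entity to the hit point.
// Assumes this runs while AR tracking data is still available.
let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
if let result = arView.raycast(from: center,
                               allowing: .estimatedPlane,
                               alignment: .any).first {
    let hit = result.worldTransform.columns.3
    var transform = Transform(matrix: entity.transformMatrix(relativeTo: nil))
    transform.translation = SIMD3<Float>(hit.x, hit.y, hit.z)
    entity.move(to: transform, relativeTo: nil, duration: 2, timingFunction: .easeInOut)
}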

Related

Import USDZ and manipulate materials for each surface separately at runtime

Basically, my requirement is to import a USDZ, for example a car, then apply a different material to each body part: e.g. material A to the body, material B to the side mirrors, material C to the wheels, etc.
I could iterate through the materials in the ModelEntity's ModelComponent and assign a material as below.
func updateUIView(_ uiView: ARView, context: Context) {
    let entity = try! Entity.loadModel(named: "CarScene")

    var material = PhysicallyBasedMaterial()
    material.baseColor = PhysicallyBasedMaterial.BaseColor(tint: .red)
    material.roughness = PhysicallyBasedMaterial.Roughness(floatLiteral: 0.0)
    material.metallic = PhysicallyBasedMaterial.Metallic(floatLiteral: 1.0)

    // Assign the same material to every slot.
    for i in 0 ..< (entity.model?.materials.count ?? 0) {
        entity.model?.materials[i] = material
    }

    let anchor = AnchorEntity(plane: .horizontal)
    anchor.addChild(entity)
    uiView.scene.addAnchor(anchor)
}
But I don't know which material is which. If I have multiple USDZ files, I want to be able to accurately assign materials to each body part. Is that doable?
Do I need to break up the USDZ model and assign identifiers before importing it into my Xcode project? Any help would be really appreciated.
Use the following sample code, which shows you how to modify materials at runtime:
import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {
    let arView = ARView(frame: .zero)
    let fiat = try! Entity.load(named: "Fiat_Uno")

    func makeUIView(context: Context) -> ARView {
        print(fiat)    // prints the entity hierarchy, including part names
        fiat.scale /= 5
        fiat.orientation = .init(angle: .pi / 1.5, axis: [0, 1, 0])
        let anchor = AnchorEntity()
        anchor.addChild(fiat)
        arView.scene.anchors.append(anchor)
        return arView
    }

    func updateUIView(_ view: ARView, context: Context) {
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.75) {
            let wheels = fiat.findEntity(named: "Rodas_Material_001_0")?
                             .children[0] as? ModelEntity
            wheels?.model?.materials[0] = UnlitMaterial(color: .green)

            DispatchQueue.main.asyncAfter(deadline: .now() + 0.75) {
                let body = fiat.findEntity(named: "Fiat_UNO_Material_001_0")?
                               .children[0] as? ModelEntity
                body?.model?.materials[0] = UnlitMaterial(color: .blue)
            }
        }
    }
}

struct ContentView: View {
    var body: some View {
        ARViewContainer().ignoresSafeArea()
    }
}
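
As for knowing which material is which: the names passed to findEntity(named:) come from the USDZ's own node hierarchy (that's what the print(fiat) line reveals). A small sketch, with a helper name of my own invention, that walks the hierarchy and prints every ModelEntity together with its material count:

// Hypothetical helper: recursively print each ModelEntity's name and its
// number of material slots, to map model parts to material indices.
func listModelEntities(_ entity: Entity, indent: String = "") {
    if let model = entity as? ModelEntity {
        print("\(indent)\(model.name): \(model.model?.materials.count ?? 0) material(s)")
    }
    for child in entity.children {
        listModelEntities(child, indent: indent + "  ")
    }
}

// Usage: listModelEntities(fiat)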

ARKit – Environment Occlusion

In Unity we can implement occlusion with Environment Depth, which uses ARKit behind the scenes. How can I achieve the same behaviour in iOS ARKit?
I know we can configure frame semantics with depth, but I doubt it is really the same as Unity's environment-depth occlusion.
// Build the set of required frame semantics.
let semantics: ARConfiguration.FrameSemantics = [.sceneDepth]
configuration.frameSemantics = semantics
session.run(configuration)
In ARKit, enable the sceneReconstruction option, and in RealityKit turn on the .occlusion scene-understanding option.
The only drawback is an ugly mask with soft, dilated edges around real-world objects...
import RealityKit
import SwiftUI
import ARKit

struct ContentView: View {
    var body: some View {
        ARContainer().ignoresSafeArea()
    }
}

struct ARContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.cameraMode = .ar
        arView.automaticallyConfigureSession = false

        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .mesh
        arView.session.run(config)

        arView.environment.sceneUnderstanding.options = .occlusion

        let box: MeshResource = .generateBox(size: 0.5)
        let material = SimpleMaterial(color: .green, isMetallic: true)
        let entity = ModelEntity(mesh: box, materials: [material])
        let anchor = AnchorEntity(world: [0, 0, -1.5])
        anchor.addChild(entity)
        arView.scene.anchors.append(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
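
Note that .mesh scene reconstruction requires a LiDAR-equipped device. A minimal sketch of a capability check you could run before starting the session (the depth fallback is my assumption, not part of the original answer):

// Guard for LiDAR support before enabling mesh reconstruction.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh
} else if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    // Assumed fallback: depth-based semantics on devices that support it.
    config.frameSemantics.insert(.sceneDepth)
}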

How to deal with the black area beyond the SKView() (In Swift Playground)

I'm trying to use SpriteKit + SwiftUI in my work running in a Swift Playground. Here is some of my code:
struct SwiftUI: View {
    var body: some View {
        Test()
    }
}

struct Test: UIViewRepresentable {
    func makeUIView(context: Context) -> SKView {
        let sceneView = SKView()
        let gameScene = GameScene()
        gameScene.size = CGSize(width: 500, height: 600)
        gameScene.scaleMode = .aspectFit
        sceneView.presentScene(gameScene)
        return sceneView
    }

    func updateUIView(_ uiView: SKView, context: Context) {
    }
}
It runs well, but there's always this awful black area beyond my SKView, like the image below.
Black area in the View.
I have tried changing sceneView.backgroundColor, and changing gameScene.size and sceneView.size, but those just didn't work. 🤨
Thanks so much if you can give me some advice!
The issue is that the call to Test() also needs to have an explicit frame size set. Otherwise, the Test view takes up the entire available screen space, while the sceneView only takes up a portion of this space.
The example below is the same one that you posted, with an explicit frame set on Test() (and a dummy GameScene). This works in the playground, where the result is just a purple square with no black area beyond the view:
import SwiftUI
import SpriteKit
import PlaygroundSupport

let width: CGFloat = 500
let height: CGFloat = 500

struct ContentView: View {
    var body: some View {
        Test()
            .frame(width: width, height: height)
    }
}

struct Test: UIViewRepresentable {
    func makeUIView(context: Context) -> SKView {
        let sceneView = SKView()
        let gameScene = GameScene()
        gameScene.size = CGSize(width: width, height: height)
        gameScene.scaleMode = .aspectFit
        sceneView.presentScene(gameScene)
        return sceneView
    }

    func updateUIView(_ uiView: SKView, context: Context) {
    }
}

class GameScene: SKScene {
    override func didMove(to view: SKView) {
        let node = SKShapeNode(rect: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        node.fillColor = .purple
        addChild(node)
    }
}

PlaygroundPage.current.setLiveView(ContentView())
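
As an alternative (my own suggestion, not part of the original answer), you can let the scene resize with its view instead of fixing the frame, by using the .resizeFill scale mode. The struct name below is hypothetical:

// Alternative sketch: the scene tracks the SKView's size, so there is
// no letterboxed (black) area regardless of the container's frame.
struct ResizingTest: UIViewRepresentable {
    func makeUIView(context: Context) -> SKView {
        let sceneView = SKView()
        let gameScene = GameScene()
        gameScene.scaleMode = .resizeFill  // scene size follows the view
        sceneView.presentScene(gameScene)
        return sceneView
    }
    func updateUIView(_ uiView: SKView, context: Context) { }
}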

AR objects not anchoring or sizing correctly in RealityKit

I have an AR scene with two objects, one brown cow and one black one. They're both supposed to be displayed in the scene, spaced a little apart. I originally had only the brown cow, which was just a little too big. I changed something, which I can't remember, and now my scene starts from inside the cow, and I can't exit the cow's corpse; it seems to move around when I do. I think the issue is a positive number for the minimumBounds, but I'm not entirely sure. I've set the z axis for the cow as well. How can I make the cow a little smaller and about 5–7 yards away from me at spawn?
import UIKit
import RealityKit
import ARKit

class ViewController: UIViewController {
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self
        showModel()
        overlayCoachingView()
        setupARView()
        arView.addGestureRecognizer(UITapGestureRecognizer(target: self,
                                                           action: #selector(handleTap(recognizer:))))
    }

    func showModel() {
        let anchorEntity = AnchorEntity(plane: .horizontal, minimumBounds: [0.7, 0.7])
        let entity = try! Entity.loadModel(named: "COW_ANIMATIONS")
        entity.setParent(anchorEntity)
        arView.scene.addAnchor(anchorEntity)
    }

    func overlayCoachingView() {
        let coachingView = ARCoachingOverlayView(frame: CGRect(x: 0, y: 0,
                                                               width: arView.frame.width,
                                                               height: arView.frame.height))
        coachingView.session = arView.session
        coachingView.activatesAutomatically = true
        coachingView.goal = .horizontalPlane
        view.addSubview(coachingView)
    }

    // Load the "Box" scene from the "Experience" Reality File
    // let boxAnchor = try! Experience.loadBox()
    // Add the box anchor to the scene
    // arView.scene.anchors.append(boxAnchor)

    func setupARView() {
        arView.automaticallyConfigureSession = false
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        configuration.environmentTexturing = .automatic
        arView.session.run(configuration)
    }

    // Object placement
    @objc
    func handleTap(recognizer: UITapGestureRecognizer) {
        let location = recognizer.location(in: arView)
        let results = arView.raycast(from: location, allowing: .estimatedPlane, alignment: .horizontal)
        if let firstResult = results.first {
            let anchor = ARAnchor(name: "COW_ANIMATIONS", transform: firstResult.worldTransform)
            arView.session.add(anchor: anchor)
        } else {
            print("Object placement failed - couldn't find surface.")

            // Cow animations
            let robot = try! ModelEntity.load(named: "COW_ANIMATIONS")
            let anchor = AnchorEntity()
            anchor.children.append(robot)
            arView.scene.anchors.append(anchor)
            robot.playAnimation(robot.availableAnimations[0].repeat(duration: .infinity),
                                transitionDuration: 0.5,
                                startsPaused: false)

            // Start cow animation
            let brownCow = try! ModelEntity.load(named: "COW_ANIMATIONS")
            let blackCow = try! ModelEntity.load(named: "Cow")
            brownCow.position.x = -1.0
            blackCow.position.x = 1.0
            brownCow.setParent(anchor)
            blackCow.setParent(anchor)
            arView.scene.anchors.append(anchor)

            let cowAnimationResource = brownCow.availableAnimations[0]
            let horseAnimationResource = blackCow.availableAnimations[0]
            brownCow.playAnimation(cowAnimationResource.repeat(duration: .infinity),
                                   transitionDuration: 1.25,
                                   startsPaused: false)
            blackCow.playAnimation(horseAnimationResource.repeat(duration: .infinity),
                                   transitionDuration: 0.75,
                                   startsPaused: false)
            // End cow animations
        }
    }

    func placeObject(named entityName: String, for anchor: ARAnchor) {
        let entity = try! ModelEntity.loadModel(named: entityName)
        entity.generateCollisionShapes(recursive: true)
        arView.installGestures([.rotation, .translation], for: entity)
        let anchorEntity = AnchorEntity(anchor: anchor)
        anchorEntity.addChild(entity)
        arView.scene.addAnchor(anchorEntity)
    }
}

extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            if let anchorName = anchor.name, anchorName == "COW_ANIMATIONS" {
                placeObject(named: anchorName, for: anchor)
            }
        }
    }
}
First step
In RealityKit, if a model is tethered to its own personal anchor (the case where one anchor holds just one model), you have two ways to scale it:
cowEntity.scale = [0.7, 0.7, 0.7]
// or
cowAnchor.scale = [1, 1, 1] * 0.7
and you have at least two ways to position the cow model along any axis (for instance, along the Z axis):
cowEntity.position = SIMD3<Float>(0, 0, -2)
// or
cowAnchor.position.z = -2.0
So, as you can see, when you transform cowAnchor, all its children get that transformation as well.
Second step
You need to place the model's pivot point appropriately in a 3D authoring app. At the moment, RealityKit doesn't have a tool to adjust a pivot's position the way you can in SceneKit using the simdPivot instance property.
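
Putting both steps together for this question, a sketch (reading "5–7 yards" as roughly 5 meters):

// Sketch: shrink the cow and spawn it ~5 m (about 5.5 yards) away
// along the anchor's -Z axis.
let cowAnchor = AnchorEntity(plane: .horizontal, minimumBounds: [0.7, 0.7])
let cowEntity = try! Entity.loadModel(named: "COW_ANIMATIONS")
cowEntity.scale = [0.7, 0.7, 0.7]
cowEntity.position = SIMD3<Float>(0, 0, -5)
cowAnchor.addChild(cowEntity)
arView.scene.addAnchor(cowAnchor)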

How to make RealityKit to show only CollisionComponents?

I am trying to see the CollisionComponents on my ARView.
I used .showPhysics as part of the debugOptions, but since I have 20 objects on screen, I get all the normals going crazy, and the color of the CollisionComponents is unclear (some form of weird pink).
Does anyone have any idea how to present only the CollisionComponents, without any extra data, as part of .showPhysics?
You can extend the standard functionality of RealityKit's ARView by using a simple Swift extension:
import RealityKit
import ARKit

fileprivate extension ARView.DebugOptions {
    func showCollisions() -> ModelEntity {
        print("Code for visualizing collision objects goes here...")
        // A translucent box that stands in for the collision shape.
        let box = MeshResource.generateBox(size: 0.04)
        let color = UIColor(white: 1.0, alpha: 0.15)
        let colliderMaterial = UnlitMaterial(color: color)
        return ModelEntity(mesh: box, materials: [colliderMaterial])
    }
}
...and then call this method in ViewController when the user taps the screen:
class ViewController: UIViewController {
    @IBOutlet var arView: ARView!
    let anchor = AnchorEntity()
    var ballEntity = ModelEntity()
    var visualCollider = ModelEntity()
    var sphere: MeshResource?

    @IBAction func onTap(_ sender: UITapGestureRecognizer) {
        sphere = MeshResource.generateSphere(radius: 0.02)
        let material = SimpleMaterial(color: .systemPink,
                                      isMetallic: false)
        ballEntity = ModelEntity(mesh: sphere!,
                                 materials: [material])

        let point: CGPoint = sender.location(in: arView)
        guard let query = arView.makeRaycastQuery(from: point,
                                                  allowing: .estimatedPlane,
                                                  alignment: .any)
        else { return }

        let result = arView.session.raycast(query)
        guard let raycastResult = result.first
        else { return }

        let anchor = AnchorEntity(raycastResult: raycastResult)
        anchor.addChild(ballEntity)
        arView.scene.anchors.append(anchor)

        let showCollisions = arView.debugOptions.showCollisions()  // here it is
        ballEntity.addChild(showCollisions)
        ballEntity.generateCollisionShapes(recursive: true)
    }
}
Please note that this is an approximate visualization; the code just shows you one way to go.
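
A possible refinement (my assumption, not part of the original answer) is to size the translucent proxy from the entity's visual bounds, so it roughly matches the collision shape produced by generateCollisionShapes(recursive:):

// Hypothetical helper: build a translucent box matching an entity's
// visual bounds as a stand-in for its collision shape.
func collisionProxy(for entity: Entity) -> ModelEntity {
    let bounds = entity.visualBounds(relativeTo: entity)
    let mesh = MeshResource.generateBox(size: bounds.extents)
    let material = UnlitMaterial(color: UIColor(white: 1.0, alpha: 0.15))
    let proxy = ModelEntity(mesh: mesh, materials: [material])
    proxy.position = bounds.center
    return proxy
}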