I am using RealityKit face anchors. I downloaded a model from Sketchfab, but when I try to put the model on the face, it does not work and nothing is displayed.
import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let configuration = ARFaceTrackingConfiguration()
        arView.session.run(configuration)

        let anchor = AnchorEntity(.face)
        let model = try! Entity.loadModel(named: "squid-game")
        anchor.addChild(model)
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
One of the most common problems AR developers deal with is model size. In RealityKit, ARKit, RoomPlan and SceneKit, the working units are meters. Quite often, models created in 3dsMax or Blender are imported into Xcode at centimeter scale, so they are 100 times bigger than they should be. You cannot see your model because you may be inside it, and its inner (back-facing) surfaces are not rendered in RealityKit. So, all you need to do is scale the model down.
anchor.scale /= 100
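Here's a minimal sketch of that fix in context, assuming the same squid-game asset from the question (the helper function name is mine):

import RealityKit

// Sketch: scale a centimeter-authored model to meter scale
// before attaching it to a face anchor.
func addScaledFaceModel(to arView: ARView) {
    let anchor = AnchorEntity(.face)
    let model = try! Entity.loadModel(named: "squid-game")
    model.scale = [0.01, 0.01, 0.01]    // equivalent to anchor.scale /= 100
    anchor.addChild(model)
    arView.scene.addAnchor(anchor)
}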
The second common problem is the pivot point's location. In 99% of cases, the pivot should be inside the model. The model's pivot is like a dart, and the .face anchor is like the "10 points" ring on the board. Unfortunately, RealityKit 2.0 does not have the ability to control the pivot; SceneKit does.
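As a workaround, you can offset the model inside its anchor so that its visible geometry is centered on the anchor. A minimal sketch (the helper name is mine, not an official API):

import RealityKit

// Workaround sketch: RealityKit 2.0 can't move a pivot, so recenter
// the model's visible geometry on its anchor by offsetting its position.
func recenter(_ model: ModelEntity, on anchor: AnchorEntity) {
    let bounds = model.visualBounds(relativeTo: nil)
    model.position = -bounds.center    // shift so the bounding-box center sits on the anchor
    anchor.addChild(model)
}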
There are also hardware constraints. Run the following simple check:
if !ARFaceTrackingConfiguration.isSupported {
    print("Your device isn't supported")
} else {
    let config = ARFaceTrackingConfiguration()
    arView.session.run(config)
}
I also recommend opening your .usdz model in the Reality Composer app to make sure it loads successfully and is not 100% transparent.
Check your model.
Is there any error when you run the demo?
You can use a .reality file to test, and you can also download a sample from the Apple Developer site.
Related
I have a very simple app which loads a scene from an .rcproject file.
import ARKit
import RealityKit

class ViewController: UIViewController {

    private var marLent: Bool = false

    private lazy var arView: ARView = {
        let arview = ARView()
        arview.translatesAutoresizingMaskIntoConstraints = false
        arview.isUserInteractionEnabled = true
        return arview
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let scene = try! Experience.loadScene()
        arView.scene.anchors.append(scene)
        configureUI()
        setupARView()
    }

    private func configureUI() {
        view.addSubview(arView)
        arView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            arView.topAnchor.constraint(equalTo: view.topAnchor),
            arView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            arView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            arView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
        ])
    }

    private func setupARView() {
        arView.automaticallyConfigureSession = false
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        configuration.environmentTexturing = .automatic
        arView.session.run(configuration)
    }
}
How could I create a label for the placed-down entity that looks something like this? So basically, have a text label that points at the entity, where the text is, for example, the entity's name.
There are 4 ways to create info dots with text plates for AR scenes. Here's an animated .gif.
the first way – using Autodesk Maya with a pre-installed USD plugin (it's the most preferable way, because you can apply both animation and Python scripting techniques);
the second one – using Reality Composer (it's a fairly fast way, but you won't be able to exactly replicate the info-dot animation seen in Apple's .reality sample files);
the third one – programmatically in RealityKit;
the fourth way – programmatically, using the Pythonic USD Schema.
Nonetheless, for brevity, let's see how we can do it in the Reality Composer app.
Reality Composer's behaviors
In Reality Composer's scene, drag and drop .png files with transparency (8-bit RGBA) to create an info dot and an info plate – each file will be turned into a plane with its corresponding image. After that, you can apply Reality Composer's behaviors to any separate part of your model.
Create the first custom behavior with a Scene Start trigger, then add LookAtCamera and Hide actions (when the scene starts, both the cylinder primitive and the info plate must be hidden).
Create the second behavior with a Tap trigger, then add LookAtCamera, Show, Wait and Hide actions (three actions must be merged together). If you tap an info dot, both hidden objects will be shown with a fade in/out animation.
Final step: save the scene as a .reality file.
Hope you now have an idea of how it's done.
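For completeness, here's a minimal RealityKit sketch of the third (programmatic) way. The function name, colors and offsets are my own assumptions, not from any Apple sample:

import RealityKit
import UIKit

// Sketch: build a text plate plus an "info dot" programmatically.
func makeLabel(text: String) -> ModelEntity {
    // Text plate
    let textMesh = MeshResource.generateText(text,
                                             extrusionDepth: 0.001,
                                             font: .systemFont(ofSize: 0.04),
                                             containerFrame: .zero,
                                             alignment: .center,
                                             lineBreakMode: .byWordWrapping)
    let textEntity = ModelEntity(mesh: textMesh,
                                 materials: [UnlitMaterial(color: .white)])

    // Info dot – a small sphere placed just below the text
    let dotEntity = ModelEntity(mesh: .generateSphere(radius: 0.005),
                                materials: [UnlitMaterial(color: .cyan)])
    dotEntity.position = [0, -0.02, 0]
    textEntity.addChild(dotEntity)
    return textEntity
}

// Usage: attach the label above the entity you want to annotate.
// let label = makeLabel(text: myEntity.name)
// label.position = [0, 0.15, 0]
// myEntity.addChild(label)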
I'm trying to reverse engineer the 3d Scanner App using RealityKit and am having real trouble getting even a basic model working with all gestures. When I run the code below, I get a cube with scale and rotation (about the y axis only), but no translation interaction. I'm trying to figure out how to get rotation about an arbitrary axis as well as translation, like in the 3d Scanner App above. I'm relatively new to iOS and read that one should use RealityKit since Apple isn't really supporting SceneKit anymore, but am now wondering whether SceneKit would be the way to go, as RealityKit is still young. Or maybe someone knows of an extension to RealityKit's ModelEntity that gives it better interaction capabilities.
I've got my app taking a scan with the LiDAR sensor and saving it to disk as a .usda mesh, per this tutorial, but when I load the mesh as a ModelEntity and attach gestures to it, I don't get any interaction at all.
The below example code recreates the limited gestures for a box ModelEntity, and I have some commented lines showing where I would load my .usda model from disk, but again while it will render, it gets no interaction with gestures.
Any help appreciated!
// ViewController.swift

import UIKit
import RealityKit

class ViewController: UIViewController {

    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        arView = ARView(frame: view.frame, cameraMode: .nonAR, automaticallyConfigureSession: false)
        view.addSubview(arView)

        // create point light
        let pointLight = PointLight()
        pointLight.light.intensity = 10000

        // create light anchor
        let lightAnchor = AnchorEntity(world: [0, 0, 0])
        lightAnchor.addChild(pointLight)
        arView.scene.addAnchor(lightAnchor)

        // eventually want to load my model from disk and give it gestures.
        // guard let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) else {
        //     print("couldn't load scan in this format")
        //     return
        // }

        // entity to add gestures to
        let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: true)
        let myEntity = ModelEntity(mesh: .generateBox(width: 0.1, height: 0.2, depth: 0.3, cornerRadius: 0.01, splitFaces: false),
                                   materials: [cubeMaterial])
        myEntity.generateCollisionShapes(recursive: false)

        let myAnchor = AnchorEntity(world: .zero)
        myAnchor.addChild(myEntity)

        // add collision and interaction
        let scanEntityBounds = myEntity.visualBounds(relativeTo: myAnchor)
        myEntity.collision = CollisionComponent(shapes: [.generateBox(size: scanEntityBounds.extents).offsetBy(translation: scanEntityBounds.center)])

        arView.installGestures(for: myEntity).forEach { gestureRecognizer in
            gestureRecognizer.addTarget(self, action: #selector(handleGesture(_:)))
        }
        arView.scene.addAnchor(myAnchor)

        // without this, get no gestures at all
        let camera = PerspectiveCamera()
        let cameraAnchor = AnchorEntity(world: [0, 0, 0.2])
        cameraAnchor.addChild(camera)
        arView.scene.addAnchor(cameraAnchor)
    }

    @objc private func handleGesture(_ recognizer: UIGestureRecognizer) {
        if recognizer is EntityTranslationGestureRecognizer {
            print("translation!")
        } else if recognizer is EntityScaleGestureRecognizer {
            print("scale!")
        } else if recognizer is EntityRotationGestureRecognizer {
            print("rotation!")
        }
    }
}
To extend ModelEntity's gesture interaction capabilities, set up your own 2D gestures. There are 8 screen gestures in UIKit, and in SwiftUI you have 5 principal gestures plus the Sequence, Simultaneous and Exclusive variations.
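A minimal UIKit sketch of that idea – dragging an entity with a custom UIPanGestureRecognizer instead of the built-in EntityTranslationGestureRecognizer. It assumes a running AR session with plane detection; the class and property names are placeholders:

import UIKit
import RealityKit
import ARKit

// Sketch: translate an entity by panning, using a raycast against estimated planes.
class PanController: NSObject {
    let arView: ARView
    let movableEntity: Entity    // the entity you want to drag

    init(arView: ARView, entity: Entity) {
        self.arView = arView
        self.movableEntity = entity
        super.init()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        arView.addGestureRecognizer(pan)
    }

    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        let location = recognizer.location(in: arView)
        // Raycast from the touch point onto estimated planes
        guard let result = arView.raycast(from: location,
                                          allowing: .estimatedPlane,
                                          alignment: .any).first else { return }
        let translation = result.worldTransform.columns.3
        movableEntity.setPosition([translation.x, translation.y, translation.z],
                                  relativeTo: nil)
    }
}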
From what I understand, the gestures are working for the box but not for your .usdz file/model. If that is the case, then the issue is that the model does not have a collision mesh (HasCollision). If you are using Reality Composer to edit your models, you could do the following:
click on the model
under the Physics dropdown, click Participate
under Collision Shape select Automatic
Overall, make sure that the model has collision and that, in code, you cast it to a type that has collision:
// ModelEntity already conforms to HasCollision, so an upcast is enough
let myEntity = try! Entity.loadModel(named: "fileName") as HasCollision
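A related sketch (my own, not from the answer above): if the loaded model still doesn't respond to gestures, generating collision shapes before installing gestures usually helps. The "fileName" asset and the arView are assumed to exist:

import RealityKit

// Sketch: give a loaded model a collision shape, then install gestures on it.
func addInteractiveModel(named name: String, to arView: ARView) {
    guard let scanEntity = try? Entity.loadModel(named: name) else { return }
    scanEntity.generateCollisionShapes(recursive: true)   // collision mesh for gesture hit-testing
    arView.installGestures([.translation, .rotation, .scale], for: scanEntity)

    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(scanEntity)
    arView.scene.addAnchor(anchor)
}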
I've created an app using the RealityKit template file. Inside RealityComposer there are multiple scenes, all the scenes use image recognition that activates some animations.
Inside Xcode I have to load all the scenes as anchors and append those anchors to the arView.scene.anchors array. The issue is an obvious one: as I present the physical 2D images one after the other, I get multiple anchors piled on top of each other, which is not desirable. I'm aware of calling arView.scene.anchors.removeAll() prior to loading the new anchor, but my issue is this:
How do I check when a certain image has appeared, so I can remove the existing anchor and load the correct one? I've tried to look for something like ARKit's didUpdate, but I can't see anything similar in RealityKit.
Many thanks
Foreword
RealityKit's AnchorEntity(.image), coming from Reality Composer, matches ARKit's ARImageTrackingConfiguration. When an iOS device recognizes a reference image, it creates an image anchor (conforming to the ARTrackable protocol) that tethers a corresponding 3D model. And, as you understand, you must show just one reference image at a time (in your particular case the AR app can't operate normally when you show it two or more images simultaneously).
Here's a code snippet showing what the if-condition logic might look like:
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        return ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let id02Scene = try! Experience.loadID2()
        print(id02Scene)    // prints scene hierarchy

        let anchor = id02Scene.children[0]
        print(anchor.components[AnchoringComponent.self] as Any)

        if anchor.components[AnchoringComponent.self] == AnchoringComponent(
               .image(group: "Experience.reality",
                       name: "assets/MainID_4b51de84.jpeg")) {
            arView.scene.anchors.removeAll()
            print("LOAD SCENE")
            arView.scene.anchors.append(id02Scene)
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
ID2 scene hierarchy printed in console:
P.S.
You should implement a SwiftUI Coordinator class (read about it here), and inside the Coordinator use ARSessionDelegate's session(_:didUpdate:) instance method to update the anchors' properties at 60 fps.
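A minimal sketch of that Coordinator wiring inside the same UIViewRepresentable (the Coordinator body is a placeholder; only the delegate hookup is shown):

import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.delegate = context.coordinator
        context.coordinator.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }

    class Coordinator: NSObject, ARSessionDelegate {
        weak var arView: ARView?

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // called every frame – check the anchors' isActive state here
        }
    }
}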
Also, you may use the following logic: if the anchor of scene 1 or the anchor of scene 3 is active, just delete all anchors from the collection and load scene 2.
var arView = ARView(frame: .zero)

let id01Scene = try! Experience.loadID1()
let id02Scene = try! Experience.loadID2()
let id03Scene = try! Experience.loadID3()

func makeUIView(context: Context) -> ARView {
    arView.session.delegate = context.coordinator
    arView.scene.anchors.append(id01Scene)
    arView.scene.anchors.append(id02Scene)
    arView.scene.anchors.append(id03Scene)
    return arView
}

...

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if arView.scene.anchors[0].isActive || arView.scene.anchors[2].isActive {
        arView.scene.anchors.removeAll()
        arView.scene.anchors.append(id02Scene)
        print("Load Scene Two")
    }
}
I have added content to the face anchor in Reality Composer. Later on, after loading the Experience that I created in Reality Composer, I create a face tracking session like this:
guard ARFaceTrackingConfiguration.isSupported else { return }
let configuration = ARFaceTrackingConfiguration()
configuration.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
configuration.isLightEstimationEnabled = true
arView.session.delegate = self
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
It is not adding the content to all the faces it detects, and I know it is detecting more than one face because the other faces occlude the content that is stuck to the first face. Is this a limitation of RealityKit, or am I missing something in the composer? It's actually pretty hard to miss something, since it is so basic and simple.
Thanks.
You can't succeed in multi-face tracking in RealityKit if you use models with an embedded Face Anchor, i.e. models that come from Reality Composer's Face Tracking preset (you can use just one model with a .face anchor, not three). Or you MAY use such models, but you need to delete these embedded AnchorEntity(.face) anchors. Although there's a better approach – simply load models in .usdz format.
Let's see what Apple documentation says about embedded anchors:
You can manually load and anchor Reality Composer scenes using code, like you do with other ARKit content. When you anchor a scene in code, RealityKit ignores the scene's anchoring information.
Reality Composer supports 5 anchor types: Horizontal, Vertical, Image, Face & Object. It displays a different set of guides for each anchor type to help you place your content. You can change the anchor type later if you choose the wrong option or change your mind about how to anchor your scene.
There are two options:
In a new Reality Composer project, deselect the Create with default content checkbox at the bottom left of the action sheet you see at startup.
In RealityKit code, delete the existing Face Anchor and assign a new one. The latter option is not great, because you need to recreate the objects' positions from scratch:
boxAnchor.removeFromParent()
Nevertheless, I've achieved multi-face tracking using AnchorEntity() with an ARAnchor initializer inside the session(_:didUpdate:) instance method (much like SceneKit's renderer() instance method).
Here's my code:
import ARKit
import RealityKit

extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let faceAnchor = anchors.first as? ARFaceAnchor
        else { return }

        let anchor1 = AnchorEntity(anchor: faceAnchor)
        let anchor2 = AnchorEntity(anchor: faceAnchor)

        anchor1.addChild(model01)
        anchor2.addChild(model02)

        arView.scene.anchors.append(anchor1)
        arView.scene.anchors.append(anchor2)
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    let model01 = try! Entity.load(named: "angryFace")              // USDZ file
    let model02 = try! FacialExpression.loadSmilingFace()           // Reality Composer scene

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self

        guard ARFaceTrackingConfiguration.isSupported else {
            fatalError("Alas, Face Tracking isn't supported")
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let config = ARFaceTrackingConfiguration()
        config.maximumNumberOfTrackedFaces = 2
        arView.session.run(config)
    }
}
I'm using multiple unique marker anchors in a scene, each getting a ModelEntity displayed on it. I have no problem detecting each marker individually, but once one is tracked and the model appears, the others won't track. If the tracked marker moves out of frame, then suddenly another marker will start being tracked.
My suspicion is that there is a setting for the maximum number of tracked markers and it's set to 1 (like maximumNumberOfTrackedImages in ARKit). Is there a setting I'm missing here, is this a limitation of RealityKit, or am I just messing something up when I add my anchors to the scene?
I'm calling the following function for each item in an array:
class RealityViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        let arView = ARView(frame: UIScreen.main.bounds)
        view.addSubview(arView)

        let targets = ["image1", "image2", "image3"]

        for target in targets {
            addTarget(target: target, arView: arView)
        }
    }

    func addTarget(target: String, arView: ARView) {
        let imageAnchor = AnchorEntity(.image(group: "Markers", name: target))
        arView.scene.addAnchor(imageAnchor)

        let plane = MeshResource.generatePlane(width: 0.05, height: 0.05, cornerRadius: 0.0)
        let material = SimpleMaterial(color: .blue, roughness: 1.0, isMetallic: false)
        let model = ModelEntity(mesh: plane, materials: [material])
        imageAnchor.addChild(model)
    }
}
Update:
While @ARGeo's answer did solve the original question, during further testing I found that with the updated code I was only able to track a maximum of 4 targets at a time. Again, I'm not sure whether this is a hard limit of RealityKit, but if anyone has any insight, please add it to the accepted answer.
Below you can see only 4 of 6 unique markers being tracked:
There's no number of markers being tracked property in ARKit and RealityKit.
So, to correct the situation, you need to use this code for adding anchors to the ARView:
arView.scene.anchors.append(imageAnchor)
You might also try this code for the for-in loop (because Xcode 11 beta might run a loop incorrectly):
for i in 0 ..< targets.count {
    addTarget(target: targets[i], arView: arView)
}
P.S.
Look at this post. ARKit 5.0 now has the ability to track more than 4 images at a time (at the moment, up to 100 images simultaneously).
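If you drive the session yourself rather than relying on ARView's automatic configuration, raising the tracked-image count might look like the following sketch ("Markers" is the AR Resource Group name from the question; the function name is mine):

import ARKit
import RealityKit

// Sketch: run an image-tracking configuration that tracks several images at once.
func runImageTracking(on arView: ARView) {
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "Markers", bundle: nil) else { return }

    let config = ARImageTrackingConfiguration()
    config.trackingImages = referenceImages
    config.maximumNumberOfTrackedImages = referenceImages.count   // e.g. 6
    arView.session.run(config)
}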