I'm using QLPreviewController to show AR content. On the newer iPhones with LiDAR it seems that object occlusion is enabled by default.
Is there any way to disable object occlusion in the QLPreviewController without having to build a custom ARKit view controller? Since my models are quite large (life-size buildings), they tend to disappear or get partially cut off.
AR Quick Look is a framework built for quick, high-quality AR visualization. It adopts the RealityKit engine, so all the features supported there – occlusion, anchors, ray-traced shadows, physics, DoF, motion blur, HDR, etc. – look the same way they do in RealityKit.
However, you can't turn these features on or off through QuickLook's API. They are on by default, if your iPhone supports them. If you want to toggle People Occlusion, you have to use the ARKit/RealityKit frameworks, not QuickLook.
import UIKit
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let box = try! Experience.loadBox()        // Reality Composer scene
        arView.scene.anchors.append(box)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        self.switchOcclusion()
    }

    fileprivate func switchOcclusion() {

        guard let config = arView.session.configuration as?
                                           ARWorldTrackingConfiguration
        else { return }

        guard ARWorldTrackingConfiguration.supportsFrameSemantics(
                                           .personSegmentationWithDepth)
        else { return }

        // Toggle People Occlusion on every tap
        switch config.frameSemantics {
            case [.personSegmentationWithDepth]:
                config.frameSemantics.remove(.personSegmentationWithDepth)
            default:
                config.frameSemantics.insert(.personSegmentationWithDepth)
        }
        arView.session.run(config)
    }
}
Note that People Occlusion is supported on devices with the A12 chipset and later, and it requires iOS 13 or higher.
P.S.
The only customizable QuickLook object is an instance of the ARQuickLookPreviewItem class.
Use the ARQuickLookPreviewItem class when you want to control the background, designate which content the share sheet shares, or disable scaling when it's not appropriate to let the user scale a particular model.
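For instance, a minimal sketch of a data source serving a customized ARQuickLookPreviewItem might look like the following (the building.usdz file name and the web page URL are assumptions; note that even here there's no occlusion switch):

import UIKit
import QuickLook
import ARKit

class PreviewHostController: UIViewController, QLPreviewControllerDataSource {

    func showPreview() {
        let previewController = QLPreviewController()
        previewController.dataSource = self
        present(previewController, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        let url = Bundle.main.url(forResource: "building", withExtension: "usdz")!
        let item = ARQuickLookPreviewItem(fileAt: url)
        item.allowsContentScaling = false   // keep life-size models at real-world scale
        item.canonicalWebPageURL = URL(string: "https://example.com/building")
        return item
    }
}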
I am trying to create an app where I can use the depth functionality of RealityKit but the AR drawing capabilities of SceneKit. What I would like to do is recognize an object and place a 3D model over it (which already works).
When that is completed, I would like the user to be able to draw on top of that 3D model (which works fine with SceneKit, but makes the 3D model jitter). I found SCNLine for the drawing, but since it uses SceneKit I cannot use it in the ARView of RealityKit.
I have seen this already, but it does not fully cover what I would like.
Is it possible to use both?
SceneKit and RealityKit are incompatible due to a complete dissimilarity – different scene hierarchies, different rendering and physics engines, and different component content. What's stopping you from using SceneKit + ARKit (the ARSCNView class)?
ARKit 6.0 has a built-in Depth API (the same API is available in RealityKit) that uses the LiDAR scanner to determine distances in the surrounding environment more accurately, allowing you to use plane detection, raycasting and object occlusion more efficiently.
For that, use the sceneReconstruction instance property and ARMeshAnchors.
import UIKit
import ARKit
import SceneKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.scene = SCNScene()
        sceneView.delegate = self

        let config = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh     // requires a LiDAR-equipped device
        }
        config.planeDetection = .horizontal
        sceneView.session.run(config)
    }
}
Delegate's method:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode,
                                                for anchor: ARAnchor) {
        guard let meshAnchor = anchor as? ARMeshAnchor else { return }
        let meshGeo = meshAnchor.geometry
        // logic ...
        node.addChildNode(someModel)   // someModel – a placeholder for your own SCNNode
    }
}
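If you're wondering what the // logic ... part might contain, here's a hedged sketch of one possible approach – converting the ARMeshAnchor's geometry into an SCNGeometry so the reconstructed mesh can be shown as a wireframe (normals and mesh classification are ignored for brevity):

func scnGeometry(from meshGeo: ARMeshGeometry) -> SCNGeometry {
    let vertices = meshGeo.vertices
    let faces = meshGeo.faces

    // Vertex positions live in a Metal buffer; wrap them in Data for SceneKit
    let vertexData = Data(bytes: vertices.buffer.contents().advanced(by: vertices.offset),
                          count: vertices.stride * vertices.count)

    let vertexSource = SCNGeometrySource(data: vertexData,
                                         semantic: .vertex,
                                         vectorCount: vertices.count,
                                         usesFloatComponents: true,
                                         componentsPerVector: 3,
                                         bytesPerComponent: MemoryLayout<Float>.size,
                                         dataOffset: 0,
                                         dataStride: vertices.stride)

    // Triangle indices
    let indexData = Data(bytes: faces.buffer.contents(),
                         count: faces.count * faces.indexCountPerPrimitive * faces.bytesPerIndex)

    let element = SCNGeometryElement(data: indexData,
                                     primitiveType: .triangles,
                                     primitiveCount: faces.count,
                                     bytesPerIndex: faces.bytesPerIndex)

    let geometry = SCNGeometry(sources: [vertexSource], elements: [element])

    let material = SCNMaterial()
    material.fillMode = .lines          // wireframe look for the scanned mesh
    geometry.materials = [material]

    return geometry
}

Inside renderer(_:didAdd:for:) you could then call node.addChildNode(SCNNode(geometry: scnGeometry(from: meshGeo))) – the anchor's node already carries the mesh anchor's transform, and the vertices are expressed in the anchor's local space.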
P.S.
This post will be helpful for you.
I have added content to the face anchor in Reality Composer. Later on, after loading the Experience that I created in Reality Composer, I create a face tracking session like this:
guard ARFaceTrackingConfiguration.isSupported else { return }
let configuration = ARFaceTrackingConfiguration()
configuration.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
configuration.isLightEstimationEnabled = true
arView.session.delegate = self
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
It is not adding the content to all the faces it detects, and I know it is detecting more than one face because the other faces occlude the content that is stuck to the first face. Is this a limitation of RealityKit, or am I missing something in the Composer? Actually, it is pretty hard to miss something since it is so basic and simple.
Thanks.
You can't succeed in multi-face tracking in RealityKit if you use models with an embedded Face Anchor, i.e. models that come from Reality Composer's Face Tracking preset (you can use just one model with a .face anchor, not three). Or you MAY use such models, but you need to delete those embedded AnchorEntity(.face) anchors. There's a better approach, though – simply load models in .usdz format.
Let's see what Apple documentation says about embedded anchors:
You can manually load and anchor Reality Composer scenes using code, like you do with other ARKit content. When you anchor a scene in code, RealityKit ignores the scene's anchoring information.
Reality Composer supports 5 anchor types: Horizontal, Vertical, Image, Face & Object. It displays a different set of guides for each anchor type to help you place your content. You can change the anchor type later if you choose the wrong option or change your mind about how to anchor your scene.
There are two options:
In a new Reality Composer project, deselect the Create with default content checkbox at the bottom left of the action sheet you see at startup.
In RealityKit code, delete the existing Face Anchor and assign a new one. The latter option is not great because you need to recreate the objects' positions from scratch:
boxAnchor.removeFromParent()
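Here's a hedged sketch of that second option (the FacialExpression scene and its loader are just assumed names): detach the content from the Reality Composer scene's embedded face anchor and re-parent copies of it to a manually created face anchor.

let rcScene = try! FacialExpression.loadSmilingFace()   // Reality Composer scene
let manualFaceAnchor = AnchorEntity(.face)              // new face anchor assigned in code

for child in rcScene.children {
    // clone, so rcScene.children isn't mutated while iterating;
    // positions may need to be re-tuned relative to the new anchor
    manualFaceAnchor.addChild(child.clone(recursive: true))
}
arView.scene.anchors.append(manualFaceAnchor)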
Nevertheless, I've achieved multi-face tracking using AnchorEntity() with the ARAnchor initializer inside the session(_:didUpdate:) instance method (similar to SceneKit's renderer() instance method).
Here's my code:
import UIKit
import ARKit
import RealityKit

extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {

        guard let faceAnchor = anchors.first as? ARFaceAnchor
        else { return }

        // session(_:didUpdate:) fires every frame, so append the anchors only once
        guard arView.scene.anchors.isEmpty else { return }

        let anchor1 = AnchorEntity(anchor: faceAnchor)
        let anchor2 = AnchorEntity(anchor: faceAnchor)

        anchor1.addChild(model01)
        anchor2.addChild(model02)

        arView.scene.anchors.append(anchor1)
        arView.scene.anchors.append(anchor2)
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    let model01 = try! Entity.load(named: "angryFace")       // USDZ file
    let model02 = try! FacialExpression.loadSmilingFace()    // Reality Composer scene

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self

        guard ARFaceTrackingConfiguration.isSupported else {
            fatalError("Alas, Face Tracking isn't supported")
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let config = ARFaceTrackingConfiguration()
        config.maximumNumberOfTrackedFaces = 2
        arView.session.run(config)
    }
}
I am working on an iOS app using ARKit.
In the real world, there is a poster on the wall. The poster is a fixed thing, so any needed preprocessing may be applied.
The goal is to make this poster a window into a virtual room, so that when the user approaches the poster, they can look "through" it at some virtual 3D environment (room). Of course, the user cannot go through the "window" and then wander around that 3D environment. They can only observe the virtual room by looking "through" the poster.
I know that it's possible to make this poster detectable by ARKit, and to play some visual effects around it, or even a movie on top of it.
But I did not find information how to turn it into a window into virtual 3D world.
Any ideas and links to sample projects are greatly appreciated.
Look at the video posted on the Augmented Images webpage (use the Chrome browser to watch it).
It's easy to create that type of virtual cube. All you need is a 3D model of a simple cube primitive without a front polygon (so you can see its inner surface), plus a plane with a square hole. Assign an out-of-the-box RealityKit occlusion material or a hand-made SceneKit occlusion material to this plane and it will hide all the outer walls of the cube behind it (look at the picture below).
In Autodesk Maya, an occlusion material is the Hold-Out option in Render Stats (for Viewport 2.0 only):
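For reference, a hand-made SceneKit occlusion material might look like this (a plain SCNPlane stands in here for your plane-with-a-hole geometry, and its size is an assumption). In RealityKit, the out-of-the-box equivalent is simply OcclusionMaterial().

let maskPlane = SCNNode(geometry: SCNPlane(width: 1.0, height: 1.0))

let occlusionMaterial = SCNMaterial()
occlusionMaterial.isDoubleSided = true
occlusionMaterial.colorBufferWriteMask = []     // write depth only, no color
occlusionMaterial.writesToDepthBuffer = true

maskPlane.geometry?.materials = [occlusionMaterial]
maskPlane.renderingOrder = -10                  // render the mask before the cube's walls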
When you track your poster on a wall (with the detectionImages option activated), your app must recognize the picture and "load" the 3D cube together with its masking plane and occlusion shader. Since the ARImageAnchor on the poster and the pivot point of the 3D cube must coincide, the cube's pivot point has to be located on the front face of the cube (at the same level as the wall's surface).
If you wish to download Apple's sample code containing the Image Detection experience, just click the blue button on the same webpage with detectionImages.
Here is a short example of my code:
@IBOutlet var sceneView: ARSCNView!

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self        // for using renderer() methods of ARSCNViewDelegate
    sceneView.scene = SCNScene()
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    resetTrackingConfiguration()
}

func resetTrackingConfiguration() {

    guard let refImages = ARReferenceImage.referenceImages(inGroupNamed: "Poster",
                                                           bundle: nil)
    else { return }

    let config = ARWorldTrackingConfiguration()
    config.detectionImages = refImages
    config.maximumNumberOfTrackedImages = 1

    let options = [ARSession.RunOptions.removeExistingAnchors,
                   ARSession.RunOptions.resetTracking]

    sceneView.session.run(config, options: ARSession.RunOptions(options))
}
...and, of course, the ARSCNViewDelegate's renderer() instance method:
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {

    guard let imageAnchor = anchor as? ARImageAnchor,
          let _ = imageAnchor.referenceImage.name
    else { return }

    anchorsArray.append(imageAnchor)     // anchorsArray – a stored [ARImageAnchor] property

    if anchorsArray.first != nil {
        node.addChildNode(portalNode)    // portalNode – the preassembled cube with its masking plane
    }
}
I am trying to run both People Occlusion and Motion Capture in the same app.
Since ARBodyTrackingConfiguration does not support personSegmentationWithDepth, I am creating 2 ARViews, giving each a different configuration (ARWorldTrackingConfiguration and ARBodyTrackingConfiguration).
The problem is that, for some reason, only one of the delegates' callbacks is fired, and no depth data is available.
What am I doing wrong here?
Is it not OK to have more than one ARSession live at the same time?
In ARKit 4.0 both features can be run simultaneously. However, they are both CPU-intensive.
override func viewDidLoad() {
    super.viewDidLoad()

    guard ARBodyTrackingConfiguration.isSupported
    else { fatalError("MoCap is supported on devices with A12 and higher") }

    guard ARBodyTrackingConfiguration.supportsFrameSemantics(
                                      .personSegmentationWithDepth)
    else { fatalError("People occlusion is not supported on this device.") }

    let config = ARBodyTrackingConfiguration()
    config.frameSemantics = .personSegmentationWithDepth
    config.automaticSkeletonScaleEstimationEnabled = true
    arView.session.run(config, options: [])
}
The following attributes are returning false for me, but I am not able to understand why.
ARImageTrackingConfiguration.isSupported
ARWorldTrackingConfiguration.isSupported
I am testing it on an iPhone XS with iOS 12.1.1, with the code built with Xcode 10.1.
Note that ARConfiguration.isSupported does return true.
Any ideas why this might be happening?
Only one ARTracking configuration is supported at a given time.
You should write your code this way:
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    var configuration: ARConfiguration?
    // .........
    // .........

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        if ARWorldTrackingConfiguration.isSupported {
            configuration = ARWorldTrackingConfiguration()       // 6-DoF
        } else {
            configuration = AROrientationTrackingConfiguration() // 3-DoF
        }
        sceneView.session.run(configuration!)
    }
}
Also, read carefully about these 3 types of tracking configuration:
ARWorldTrackingConfiguration() – rotation and translation along x-y-z (6-DoF)
AROrientationTrackingConfiguration() – rotation only along x-y-z (3-DoF)
ARImageTrackingConfiguration() – 6-DoF, but image-only tracking lets you anchor virtual content to known images only when those images are in view of the camera (see the sketch below)
Because 3-DoF tracking creates limited AR experiences, you should generally not use the AROrientationTrackingConfiguration() class directly. Instead, use the subclass ARWorldTrackingConfiguration() for tracking with six degrees of freedom (6-DoF), plane detection, and hit-testing. Use 3-DoF tracking only as a fallback in situations where 6-DoF tracking is temporarily unavailable.
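For completeness, here's a minimal sketch of the image-only option from the list above (the "AR Resources" group name and the tracked-image count are assumptions):

let imageConfig = ARImageTrackingConfiguration()

if let refImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                    bundle: nil) {
    imageConfig.trackingImages = refImages
    imageConfig.maximumNumberOfTrackedImages = 2
}
sceneView.session.run(imageConfig, options: [.resetTracking, .removeExistingAnchors])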
Hope this helps.