I want to create an AR experience with a full-screen camera feed and AR elements, plus a small view at the bottom of the screen showing some relevant 3D objects.
In the past I implemented such features with an ARSCNView for the AR part and an SCNView (SceneKit view) for the small preview part.
Is there a way to do that with RealityKit for those two views?
I understand that RealityKit needs an ARView to render, which means I would need two ARViews on screen with different cameraMode settings:
let arView = ARView(frame: self.view.frame,
cameraMode: .ar,
automaticallyConfigureSession: true)
view.addSubview(arView)
let newAnchor = AnchorEntity(world: [0, 0, -1])
let newBox = ModelEntity(mesh: .generateBox(size: 0.5))
newAnchor.addChild(newBox)
arView.scene.anchors.append(newAnchor)
let arView2 = ARView(frame: CGRect(x: 50, y: 300, width: 200, height: 200),
cameraMode: .nonAR,
automaticallyConfigureSession: true)
view.addSubview(arView2)
let newAnchor2 = AnchorEntity(world: [0, 0, -1])
let newBox2 = ModelEntity(mesh: .generateBox(size: 0.5))
newAnchor2.addChild(newBox2)
arView2.scene.anchors.append(newAnchor2)
arView shows the video feed with the newBox as expected.
My problem is that arView2 also shows the camera feed, although I expected a black view with only newBox2. In addition, this camera feed is distorted (it seems to be the contents of arView resized to fit arView2).
What am I missing?
Unlike ARKit, RealityKit 2.0 cannot run two ARSessions simultaneously (I mean, for example, a face config as the primary session and a world config as a secondary one); only a single session is allowed. Therefore the running session of the first ARView must be used by both views.
arView2.session = arView.session
arView2.environment.background = .color(.systemIndigo)
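Putting it together, here's a minimal sketch of the second view's setup under that constraint. It reuses the names from the question's code; turning off automaticallyConfigureSession is my assumption (since the session is supplied manually), and .black stands in for the desired black background:
let arView2 = ARView(frame: CGRect(x: 50, y: 300, width: 200, height: 200),
                     cameraMode: .nonAR,
                     automaticallyConfigureSession: false)
view.addSubview(arView2)

// Reuse the primary view's running session instead of configuring a second one
arView2.session = arView.session
// Replace the camera feed with a solid color in the non-AR view
arView2.environment.background = .color(.black)

let newAnchor2 = AnchorEntity(world: [0, 0, -1])
newAnchor2.addChild(ModelEntity(mesh: .generateBox(size: 0.5)))
arView2.scene.anchors.append(newAnchor2)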
I have a ModelEntity illustrating an energy grid and want the whole grid itself to glow and emit light. I tried implementing a PointLight, but I want the light to be emitted from the whole model and not only from one point in the scene.
I already tried:
adding a PointLight as a child of the ModelEntity -> point source; the grid itself is not emitting the light, which makes sense
adding a light component to the ModelEntity -> still a point source; the grid itself is not emitting the light. Shouldn't the grid itself be the source of light now?
let scene = try! Experience.loadEnergy()
let electricityLinesBuilding = scene.findEntity(named: "electricity_lines_building") as! ModelEntity
let lightComponent = PointLightComponent(color: .yellow, intensity: 1000, attenuationRadius: 1000.0)
electricityLinesBuilding.components.set(lightComponent)
using the PhysicallyBasedMaterial properties .emissiveColor and .emissiveIntensity:
let scene = try! Experience.loadEnergy()
let electricityLinesBuilding = scene.findEntity(named: "electricity_lines_building") as! ModelEntity
var electricityLinesMaterial = PhysicallyBasedMaterial()
electricityLinesMaterial.emissiveIntensity = 1.0
electricityLinesMaterial.emissiveColor = PhysicallyBasedMaterial.EmissiveColor(color: .white)
electricityLinesBuilding.model?.materials = [electricityLinesMaterial]
I'm trying to reverse-engineer the 3d Scanner App using RealityKit and am having real trouble getting even a basic model working with all gestures. When I run the code below, I get a cube with scale and rotation (about the y axis only), but no translation interaction. I'm trying to figure out how to get rotation about an arbitrary axis as well as translation, like in the 3d Scanner App above. I'm relatively new to iOS and read that one should use RealityKit since Apple isn't really supporting SceneKit anymore, but now I'm wondering whether SceneKit would be the way to go, as RealityKit is still young. Or does anyone know of an extension to RealityKit ModelEntity objects to give them better interaction capabilities?
I've got my app taking a scan with the LiDAR sensor and saving it to disk as a .usda mesh, per this tutorial, but when I load the mesh as a ModelEntity and attach gestures to it, I don't get any interaction at all.
The example code below recreates the limited gestures with a box ModelEntity, and I have some commented lines showing where I would load my .usda model from disk; but again, while that model renders, it gets no gesture interaction at all.
Any help appreciated!
// ViewController.swift
import UIKit
import RealityKit
class ViewController: UIViewController {
var arView: ARView!
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
arView = ARView(frame: view.frame, cameraMode: .nonAR, automaticallyConfigureSession: false)
view.addSubview(arView)
// create pointlight
let pointLight = PointLight()
pointLight.light.intensity = 10000
// create light anchor
let lightAnchor = AnchorEntity(world: [0, 0, 0])
lightAnchor.addChild(pointLight)
arView.scene.addAnchor(lightAnchor)
// eventually want to load my model from disk and give it gestures.
// guard let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) else {
// print("couldn't load scan in this format")
// return
// }
// entity to add gestures to
let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: true)
let myEntity = ModelEntity(mesh: .generateBox(width: 0.1, height: 0.2, depth: 0.3, cornerRadius: 0.01, splitFaces: false), materials: [cubeMaterial])
myEntity.generateCollisionShapes(recursive: false)
let myAnchor = AnchorEntity(world: .zero)
myAnchor.addChild(myEntity)
// add collision and interaction
let scanEntityBounds = myEntity.visualBounds(relativeTo: myAnchor)
myEntity.collision = CollisionComponent(shapes: [.generateBox(size: scanEntityBounds.extents).offsetBy(translation: scanEntityBounds.center)])
arView.installGestures(for: myEntity).forEach {
gestureRecognizer in
gestureRecognizer.addTarget(self, action: #selector(handleGesture(_:)))
}
arView.scene.addAnchor(myAnchor)
// without this, get no gestures at all
let camera = PerspectiveCamera()
let cameraAnchor = AnchorEntity(world: [0, 0, 0.2])
cameraAnchor.addChild(camera)
arView.scene.addAnchor(cameraAnchor)
}
@objc private func handleGesture(_ recognizer: UIGestureRecognizer) {
if recognizer is EntityTranslationGestureRecognizer {
print("translation!")
} else if recognizer is EntityScaleGestureRecognizer {
print("scale!")
} else if recognizer is EntityRotationGestureRecognizer {
print("rotation!")
}
}
}
To extend ModelEntity's gesture interaction capabilities, set up your own 2D gestures. There are 8 screen gestures in UIKit, and in SwiftUI you have 5 principal gestures plus the Sequence, Simultaneous and Exclusive variations.
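For example, here's a minimal sketch of a custom UIKit pan gesture driving an entity's translation (the controller class, arView and targetEntity names are placeholders for whatever your project already has; this is a plain UIGestureRecognizer, not the built-in installGestures API):
import UIKit
import RealityKit

final class GestureDemoController: UIViewController {
    var arView: ARView!             // assumed to be created elsewhere
    var targetEntity: ModelEntity?  // the entity you want to move

    func installCustomPan() {
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        arView.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ recognizer: UIPanGestureRecognizer) {
        guard let entity = targetEntity else { return }
        // Map the 2D screen-space pan onto the entity's x/y position
        let translation = recognizer.translation(in: arView)
        entity.position.x += Float(translation.x) * 0.001
        entity.position.y -= Float(translation.y) * 0.001
        recognizer.setTranslation(.zero, in: arView)
    }
}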
From what I have understood, the gestures work for the box but not for your .usdz file/model. If this is the case, the issue is that the model does not have a collision shape (HasCollision). If you are using Reality Composer to edit your models, you could do the following:
click on the model
under the Physics dropdown, click Participate
under Collision Shape, select Automatic
Overall, make sure that the model has collision and that, in code, you cast it to a type that has collision:
let myEntity = try? Entity.loadModel(named: "fileName") as! HasCollision
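If you build the collision shape in code instead of Reality Composer, a sketch along these lines should also work (it reuses arView and handleGesture(_:) from the question's ViewController, and "scanModel" is a placeholder asset name):
// "scanModel.usdz" is a hypothetical bundled asset; swap in your own file
if let scanEntity = try? Entity.loadModel(named: "scanModel") {
    // Give the entity collision shapes so installGestures can hit-test it
    scanEntity.generateCollisionShapes(recursive: true)
    arView.installGestures([.translation, .rotation, .scale], for: scanEntity).forEach {
        $0.addTarget(self, action: #selector(handleGesture(_:)))
    }
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(scanEntity)
    arView.scene.addAnchor(anchor)
}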
I'm using multiple unique marker anchors in a scene that each get a ModelEntity displayed on them. I have no problem detecting each marker individually, but once one is tracked and the model appears, the others won't track. If the tracked marker moves out of frame then suddenly another marker will start being tracked.
My suspicion is that there's a setting for the maximum number of tracked markers and it's set to 1 (like maximumNumberOfTrackedImages in ARKit). Is there a setting I'm missing, is this a limitation of RealityKit, or am I just messing something up when I add my anchors to the scene?
I'm calling the following function for each item in an array:
class RealityViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
let arView = ARView(frame: UIScreen.main.bounds)
view.addSubview(arView)
let targets = ["image1", "image2", "image3"]
for target in targets {
addTarget(target:target,arView:arView)
}
}
func addTarget(target: String, arView: ARView) {
let imageAnchor = AnchorEntity(.image(group: "Markers", name: target))
arView.scene.addAnchor(imageAnchor)
let plane = MeshResource.generatePlane(width: 0.05, height: 0.05, cornerRadius: 0.0)
let material = SimpleMaterial(color: .blue, roughness: 1.0, isMetallic: false)
let model = ModelEntity(mesh: plane, materials: [material])
imageAnchor.addChild(model)
}
}
Update:
While @ARGeo's answer did solve the original question, during further testing I found that with the updated code I was only able to track a maximum of 4 targets at a time. Again, I'm not sure whether this is a hard limit of RealityKit, but if anyone has any insight please add it to the accepted answer.
Below you can see only 4 of 6 unique markers being tracked:
There's no "number of tracked markers" property in ARKit or RealityKit.
So, to correct the situation, you need to use this code for adding anchors to the ARView:
arView.scene.anchors.append(imageAnchor)
You might also try this form of the for-in loop (because the Xcode 11 beta might run the loop incorrectly):
for i in 0..<targets.count {
addTarget(target: targets[i], arView: arView)
}
P.S.
Look at this post. ARKit 5.0 can now track more than 4 images at a time (currently up to 100 images simultaneously).
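If you end up driving the ARSession yourself, a hedged sketch of raising the tracking limit might look like this (it assumes the AR Resource Group is named "Markers", as in the question's code):
import ARKit
import RealityKit

func runImageTracking(on arView: ARView) {
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "Markers",
                                                                 bundle: nil) else { return }
    let config = ARWorldTrackingConfiguration()
    config.detectionImages = referenceImages
    // Track several detected images at once instead of only one
    config.maximumNumberOfTrackedImages = referenceImages.count
    arView.session.run(config)
}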
I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by a common SCNNode. It is oriented to show me the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera feed on, e.g., an SCNPlane (as the first material's diffuse contents), like a TV screen. Of course I am aware that it will only display the SceneKit content that is in the camera's view and not the rest of the ARKit image (which is only possible with the main camera, of course). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a realtime camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR content, and position the cameraNode as well (pointing in the direction where the objects sit in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads. The display shows up at the given position, but it stays entirely gray. Nothing from the SCNCamera I put in the scene is displayed at all. Everything else in the AR scene works well; I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize this better, I've added some more screenshots.
The following shows the image through the SCNCamera according to ARGeo's input. But it takes up the whole screen instead of displaying its contents on an SCNPlane, as I need.
The next screenshot shows the actual AR view result I got using my posted code. As you can see, the gray display plane remains gray; it shows nothing.
The last screenshot is a photomontage showing the expected result, as I'd like to get it.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to an SCNMaterial on an SCNPlane, as was the initial idea; instead I use an additional SCNView (for the moment).
In the definitions I add another view like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set things up like so:
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle.
// (exemplary)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera and its node position, and it does so very nicely.
In a common iOS/Swift/ARKit project this construct generates some side effects that one may run into:
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I corrected this by adding/changing the following property, which restores the animation speed almost to its normal (default) behaviour:
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, none of the effects will play any longer, unless I do this:
overlayView.scene = nil
Of course, the additional SCNView then stops working, but everything else goes back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {
@IBOutlet var sceneView: ARSCNView!
override func viewDidLoad() {
super.viewDidLoad()
sceneView.delegate = self
sceneView.showsStatistics = true
let scene = SCNScene(named: "art.scnassets/ship.scn")!
sceneView.scene = scene
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(0, 0, 1)
cameraNode.camera?.focalLength = 70
cameraNode.camera?.categoryBitMask = 1
scene.rootNode.addChildNode(cameraNode)
sceneView.pointOfView = cameraNode
sceneView.allowsCameraControl = true
sceneView.backgroundColor = UIColor.darkGray
let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
plane.position = SCNVector3(0, 0, -1.5)
// ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
scene.rootNode.addChildNode(plane)
}
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
let configuration = ARWorldTrackingConfiguration()
sceneView.session.run(configuration)
}
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk, just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {
override func loadView() {
let projectedScene = createProjectedScene()
let receivingScene = createReceivingScene()
let projectionPlane = receivingScene.scene!.rootNode.childNode(withName: "ProjectionPlane", recursively: true)!
// Here's the important part:
// You can't directly connect a camera to a material's diffuse texture.
// But you can connect a scene's layer as a texture.
projectionPlane.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
projectedScene.layer.contentsScale = 1
// Note how we only need to connect the receiving view to the controller.
// The projected view is not directly connected as a subview,
// but updates in projectedScene will still be reflected in receivingScene.
self.view = receivingScene
}
func createProjectedScene() -> SCNView {
let view = SCNView()
// ... set up scene ...
return view
}
func createReceivingScene() -> SCNView {
let view = SCNView()
// ... set up scene ...
let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
projectionPlane.name = "ProjectionPlane"
view.scene?.rootNode.addChildNode(projectionPlane)
return view
}
}
I use ARKit 1.5 and this func to highlight vertical surfaces, but it doesn't work very well.
func createPlaneNode(planeAnchor: ARPlaneAnchor) -> SCNNode {
let scenePlaneGeometry = ARSCNPlaneGeometry(device: metalDevice!)
scenePlaneGeometry?.update(from: planeAnchor.geometry)
let planeNode = SCNNode(geometry: scenePlaneGeometry)
planeNode.name = "\(currPlaneId)"
planeNode.opacity = 0.25
if planeAnchor.alignment == .vertical {
planeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
}
currPlaneId += 1
return planeNode
}
It always finds some feature points on vertical objects, but only rarely does it actually highlight the surface using the planeNode that I created.
I want to be able to detect and highlight things like a pillar or even a man. How would you approach this?
Image of object with featurePoints
Image with the result in best case scenario
In ARKit 1.5 and ARKit 2.0 there's a .planeDetection instance property that allows you to enable .horizontal detection, .vertical detection, or both simultaneously.
var planeDetection: ARWorldTrackingConfiguration.PlaneDetection { get set }
ViewController's code:
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .vertical
//configuration.planeDetection = [.vertical, .horizontal]
sceneView.session.run(configuration)
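With detection enabled, something along these lines (a sketch, reusing the question's createPlaneNode(planeAnchor:)) wires the detected planes into the ARSCNViewDelegate callbacks so they actually get visualized and kept up to date:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    node.addChildNode(createPlaneNode(planeAnchor: planeAnchor))
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeGeometry = node.childNodes.first?.geometry as? ARSCNPlaneGeometry else { return }
    // Refresh the visualization as ARKit refines its plane estimate
    planeGeometry.update(from: planeAnchor.geometry)
}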
If you want to successfully detect and track vertical objects in your environment, you need good lighting conditions and rich, non-repetitive textures. Look at the picture below: