SceneKit LIDAR iOS: Show unscanned regions of camera view in the background with a different color/texture - swift

I'm building an app similar to Polycam, 3D Scanner App, Scaniverse, etc. I visualize a mesh for the scanned regions and export it in different formats. I would like to show the user which regions are scanned and which are not, so I need to differentiate between them visually.
My idea is to build something like what Polycam does:
< Polycam blue background for unscanned regions >
I tried changing the scene's background contents property, but it replaces the entire camera view with the color:
arSceneView.scene.background.contents = UIColor.black
I'm using ARSCNView and setting up plane detection as follows:
private func setupPlaneDetection() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.sceneReconstruction = .meshWithClassification
    configuration.frameSemantics = .smoothedSceneDepth

    arSceneView.session.run(configuration)
    arSceneView.session.delegate = self
    // arSceneView.scene.background.contents = UIColor.black
    arSceneView.delegate = self

    UIApplication.shared.isIdleTimerDisabled = true
    arSceneView.showsStatistics = true
}
Thanks in advance for any help you can provide!

I’ve done this before by adding a sphere to the scene with a two-sided material (slightly transparent) and with a radius large enough that the camera and the scanned surface will always be inside of it. Here’s an example of how to do that:
let backgroundSphereNode = SCNNode()
backgroundSphereNode.geometry = SCNSphere(radius: 500)

let material = SCNMaterial()
material.isDoubleSided = true   // render the inside of the sphere as well
material.diffuse.contents = UIColor(white: 0, alpha: 0.9)
backgroundSphereNode.geometry?.materials = [material]

// Don't forget to add the sphere to the scene:
arSceneView.scene.rootNode.addChildNode(backgroundSphereNode)
Note that I’m using a black color - you can obviously change this to whatever you need, but keep the alpha channel slightly transparent. And tweak the radius of the sphere so it works for your scene.

Related

Blur face in face detection in vision kit

I'm using Apple's tutorial on face detection with the Vision framework in a live camera feed (not a still image).
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
It detects the face and uses CAShapeLayer to draw lines between the different parts of the face.
fileprivate func setupVisionDrawingLayers() {
    let captureDeviceResolution = self.captureDeviceResolution

    let captureDeviceBounds = CGRect(x: 0,
                                     y: 0,
                                     width: captureDeviceResolution.width,
                                     height: captureDeviceResolution.height)

    let captureDeviceBoundsCenterPoint = CGPoint(x: captureDeviceBounds.midX,
                                                 y: captureDeviceBounds.midY)

    let normalizedCenterPoint = CGPoint(x: 0.5, y: 0.5)

    guard let rootLayer = self.rootLayer else {
        self.presentErrorAlert(message: "view was not properly initialized")
        return
    }

    let overlayLayer = CALayer()
    overlayLayer.name = "DetectionOverlay"
    overlayLayer.masksToBounds = true
    overlayLayer.anchorPoint = normalizedCenterPoint
    overlayLayer.bounds = captureDeviceBounds
    overlayLayer.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)

    let faceRectangleShapeLayer = CAShapeLayer()
    faceRectangleShapeLayer.name = "RectangleOutlineLayer"
    faceRectangleShapeLayer.bounds = captureDeviceBounds
    faceRectangleShapeLayer.anchorPoint = normalizedCenterPoint
    faceRectangleShapeLayer.position = captureDeviceBoundsCenterPoint
    faceRectangleShapeLayer.fillColor = nil
    faceRectangleShapeLayer.strokeColor = UIColor.green.withAlphaComponent(0.7).cgColor
    faceRectangleShapeLayer.lineWidth = 5
    faceRectangleShapeLayer.shadowOpacity = 0.7
    faceRectangleShapeLayer.shadowRadius = 5

    let faceLandmarksShapeLayer = CAShapeLayer()
    faceLandmarksShapeLayer.name = "FaceLandmarksLayer"
    faceLandmarksShapeLayer.bounds = captureDeviceBounds
    faceLandmarksShapeLayer.anchorPoint = normalizedCenterPoint
    faceLandmarksShapeLayer.position = captureDeviceBoundsCenterPoint
    faceLandmarksShapeLayer.fillColor = nil
    faceLandmarksShapeLayer.strokeColor = UIColor.yellow.withAlphaComponent(0.7).cgColor
    faceLandmarksShapeLayer.lineWidth = 3
    faceLandmarksShapeLayer.shadowOpacity = 0.7
    faceLandmarksShapeLayer.shadowRadius = 5

    overlayLayer.addSublayer(faceRectangleShapeLayer)
    faceRectangleShapeLayer.addSublayer(faceLandmarksShapeLayer)
    rootLayer.addSublayer(overlayLayer)

    self.detectionOverlayLayer = overlayLayer
    self.detectedFaceRectangleShapeLayer = faceRectangleShapeLayer
    self.detectedFaceLandmarksShapeLayer = faceLandmarksShapeLayer

    self.updateLayerGeometry()
}
How can I fill the regions inside the lines (the different parts of the face) with a blurred view? I need to blur the face.
You could try placing a UIVisualEffectView on top of your video feed, and then adding a masking CAShapeLayer to that UIVisualEffectView. I don't know if that would work or not.
The docs on UIVisualEffectView say:
When using the UIVisualEffectView class, avoid alpha values that are less than 1. Creating views that are partially transparent causes the system to combine the view and all the associated subviews during an offscreen render pass. UIVisualEffectView objects need to be combined as part of the content they are layered on top of in order to look correct. Setting the alpha to less than 1 on the visual effect view or any of its superviews causes many effects to look incorrect or not show up at all.
I don't know if using a mask layer on a visual effect view would cause the same rendering problems or not. You'd have to try it. (And be sure to try it on a range of different hardware, since the rendering performance varies quite a bit between different versions of Apple's chipsets.)
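If you do try it, something along these lines is roughly what I have in mind – an untested sketch, where previewContainerView and facePath are hypothetical stand-ins for your preview view and a CGPath built from the detected landmarks:
import UIKit

// Sketch only: blur the whole preview, then mask the blur so it is
// visible only inside the detected face path.
func addBlurOverlay(to previewContainerView: UIView, maskedBy facePath: CGPath) {
    let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
    blurView.frame = previewContainerView.bounds

    let maskLayer = CAShapeLayer()
    maskLayer.frame = blurView.bounds
    maskLayer.path = facePath
    maskLayer.fillColor = UIColor.black.cgColor // only the fill's alpha matters for a mask

    blurView.layer.mask = maskLayer
    previewContainerView.addSubview(blurView)
}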
You could also try using a shape layer filled with visual hash or a "pixellated" pattern instead of blurring. That would be faster and probably render more reliably.
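As a rough illustration of that second idea (again untested, and using Core Image rather than anything from Apple's tutorial), you could pixellate the frame and composite the result back over just the face rectangle:
import CoreImage
import CoreGraphics

// Sketch only: pixellate the whole frame, crop to the face rectangle,
// then composite that patch back over the original frame.
func pixellate(_ frame: CIImage, in faceRect: CGRect, scale: Double = 30) -> CIImage {
    let patch = frame
        .applyingFilter("CIPixellate", parameters: [kCIInputScaleKey: scale])
        .cropped(to: faceRect)
    return patch.composited(over: frame)
}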
Note that face detection tends to be a little jumpy. It might drop out for a few frames, or lag on quick pans or change of scene. If you're trying to hide people's faces in a live feed for privacy, it might not be reliable. It would only take a few un-blurred frames for somebody's identity to be revealed.

SceneKit – Loading HDR or EXR lightingEnvironment has no effect

I tried loading an .hdr file to use it as a skybox and to use its lighting information. This is the code I used:
backgroundColor = UIColor.gray
// check if a default skybox is added
let environment = UIImage(named: "studio_small_09_2k.hdr")
scene?.lightingEnvironment.contents = environment
scene?.lightingEnvironment.intensity = 1.0
scene?.background.contents = environment
Unfortunately I receive a grey screen and no errors. Does anyone have experience using HDR files in SceneKit?
Xcode version: 13.2.1
iOS version: 15.3.1
hdr file: https://polyhaven.com/a/studio_small_09
I usually use a Cube Texture Set, where each of the 6 images is square (height == width) – see the sketch after the list below.
The following cube map representations are also supported:
Vertical strip as a single image (height == 6 * width)
Horizontal strip as a single image (6 * height == width)
Spherical projection as a single image (2 * height == width)
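For the six-image case you can also hand SceneKit an array of six images directly – a minimal sketch, assuming hypothetical file names for the +X, -X, +Y, -Y, +Z, -Z faces:
// Placeholder file names – replace with your own six square cube faces.
let cubeFaces = ["px", "nx", "py", "ny", "pz", "nz"].compactMap { UIImage(named: $0) }
scene?.lightingEnvironment.contents = cubeFaces
scene?.background.contents = cubeFaces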
Here's some SwiftUI code:
func makeUIView(context: Context) -> SCNView {
    let sceneView = SCNView(frame: .zero)
    sceneView.scene = SCNScene()

    // if the EXR or HDR is a 2:1 spherical map, it meets the requirements
    sceneView.scene?.lightingEnvironment.contents = UIImage(named: "std.exr")

    sceneView.backgroundColor = .black
    sceneView.autoenablesDefaultLighting = true
    sceneView.allowsCameraControl = true

    let node = SCNNode()
    node.geometry = SCNSphere(radius: 0.1)
    node.geometry?.firstMaterial?.lightingModel = .physicallyBased
    node.geometry?.firstMaterial?.metalness.contents = 1.0
    sceneView.scene?.rootNode.addChildNode(node)

    return sceneView
}
Pay particular attention: you need the .physicallyBased lighting model to get HDR or EXR reflections.
And to set it as the background:
sceneView.scene?.background.contents = UIImage(named: "std.exr")
Why doesn't your .exr work?
The solution is simple: delete the .exr from your project and empty the Trash. Then drag-and-drop the .exr file back in, and in the "Choose options for adding these files" window check Add to targets:
Now your .exr should work.

ARKit – How to display the feed from a virtual SCNCamera placed on SCNPlane?

I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by a common SCNNode. It is oriented to show me the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera's feed on, for example, an SCNPlane (as the first material's diffuse contents), like a TV screen. Of course I am aware that it will only display the SceneKit content that is in that camera's view and not the rest of the ARKit image (which is only available from the main camera). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a real-time camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR content and position the cameraNode as well (pointing in the direction where the objects are in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing is displayed at all from the SCNCamera I put in the scene. Everything else in the AR scene works well, I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize this better, I've added some more screenshots.
The first one shows the image through the SCNCamera according to ARGeo's input, but it takes up the whole screen instead of displaying its contents on an SCNPlane like I need.
The next screenshot shows the current ARView result as I got it using my posted code. As you can see, the gray display plane remains gray - it shows nothing.
The last screenshot is a photomontage showing the expected result I'd like to get.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently the additional SCNCamera feed is not linked to an SCNMaterial on an SCNPlane, as was the initial idea; instead I use an additional SCNView (for the moment).
In the definitions I add another view like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set it up like so:
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle:
// (exemplary)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera at its node's position - and it works very nicely.
In a common iOS/Swift/ARKit project, this construct generates some side effects that one may run into.
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I corrected this by adding/changing the following property, which restores the animation speed to almost normal (default) behaviour:
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, none of their effects will play any more - unless I do this:
overlayView.scene = nil
Of course the additional SCNView then stops working, but everything else goes back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy and paste my code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView.delegate = self
        sceneView.showsStatistics = true

        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)

        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)

        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk, just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
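As a rough, untested sketch of that idea – assuming you already receive one CVPixelBuffer per frame from the recorder, and planeMaterial is a hypothetical reference to the plane's first material – you could convert each buffer to a CGImage and assign it as the diffuse contents:
import SceneKit
import CoreImage
import CoreVideo

let ciContext = CIContext()

// Sketch only: converting every frame on the CPU like this is not cheap;
// a MTLTexture-based path would be faster.
func update(planeMaterial: SCNMaterial, with pixelBuffer: CVPixelBuffer) {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
    planeMaterial.diffuse.contents = cgImage
}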
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {
    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene?.rootNode.childNode(withName: "ProjectionPlane", recursively: true)

        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene view's layer as a texture.
        projectionPlane?.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1

        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly added as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene?.rootNode.addChildNode(projectionPlane)
        return view
    }
}

AR with iOS: putting a light in the scene makes everything black?

OK, I am desperately trying to achieve this sort of warm lighting on my objects when they're added to my AR scene in Swift/Xcode - warm lighting and little glowing lights around them:
To be clear, I do NOT want the objects I add to my scene to look like they belong in the surrounding room. I want them to stand out, look warm, and glow. All the tutorials on ARKit teach you how to mimic the lighting of the actual room.
Xcode has several lighting options that pull from the surroundings gathered by the camera, because with:
if let lightEstimate = session.currentFrame?.lightEstimate
I can print out the warmth, intensity, etc. I also have these properties currently set to match the light of the room:
sceneView.automaticallyUpdatesLighting = true
extension ARSCNView {
    func setup() { // SCENE SETUP
        antialiasingMode = .multisampling4X
        autoenablesDefaultLighting = true
        preferredFramesPerSecond = 60
        contentScaleFactor = 1.3

        if let camera = pointOfView?.camera {
            camera.wantsHDR = true
            camera.wantsExposureAdaptation = true
            camera.exposureOffset = -1
            camera.minimumExposure = -1
            camera.maximumExposure = 3
        }
    }
}
I have tried upping the emission on my objects' textures and more, but nothing achieves the effect. Adding a light just turns the objects black, with no color.
What is wrong here?
To create this type of glowing red neon-light result in ARKit, you can do the following.
You need to create a reactor.scnp (SceneKit Particle System file) and make the following changes to create the glowing red halo. This should be placed in the Resources directory of the playground along with the file spark.png.
These are the settings to change from the default reactor type. Leave all the other settings alone.
Change the Image animate color to red/orange/red/black
speed factor = 0.1
enable lighting checked
Emitter Shape = Sphere
Image Size = 0.5
Image Intensity = 0.1
Simulation Speed Factor = 0.1
Note: The code below is a playground app I use for testing purposes. You just tap anywhere to add the neon light to the scene. You can place as many neon lights as you like.
import ARKit
import SceneKit
import PlaygroundSupport
import SceneKit.ModelIO

class ViewController: NSObject {

    var sceneView: ARSCNView

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
        super.init()

        self.setupWorldTracking()
        self.sceneView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(ViewController.handleTap(_:))))
    }

    private func setupWorldTracking() {
        if ARWorldTrackingConfiguration.isSupported {
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = .horizontal
            configuration.isLightEstimationEnabled = true
            self.sceneView.session.run(configuration, options: [])
        }
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let results = self.sceneView.hitTest(gesture.location(in: gesture.view), types: ARHitTestResult.ResultType.featurePoint)
        guard let result: ARHitTestResult = results.first else {
            return
        }

        let cylinder = SCNCylinder(radius: 0.05, height: 1)
        cylinder.firstMaterial?.emission.contents = UIColor.red
        cylinder.firstMaterial?.emission.intensity = 1

        let spotLight = SCNNode()
        spotLight.light = SCNLight()
        spotLight.scale = SCNVector3(1, 1, 1)
        spotLight.light?.intensity = 1000
        spotLight.castsShadow = true
        spotLight.position = SCNVector3Zero
        spotLight.light?.type = SCNLight.LightType.directional
        spotLight.light?.color = UIColor.white

        let particleSystem = SCNParticleSystem(named: "reactor", inDirectory: nil)
        let systemNode = SCNNode()
        systemNode.addParticleSystem(particleSystem!)

        let node = SCNNode(geometry: cylinder)
        let position = SCNVector3Make(result.worldTransform.columns.3.x, result.worldTransform.columns.3.y, result.worldTransform.columns.3.z)
        systemNode.position = position
        node.position = position

        self.sceneView.scene.rootNode.addChildNode(spotLight)
        self.sceneView.scene.rootNode.addChildNode(node)
        self.sceneView.scene.rootNode.addChildNode(systemNode)
    }
}
let sceneView = ARSCNView()
let viewController = ViewController(sceneView: sceneView)
sceneView.autoenablesDefaultLighting = false
PlaygroundPage.current.needsIndefiniteExecution = true
PlaygroundPage.current.liveView = viewController.sceneView
If you're looking for a neon/glowing effect in your scene, these previous answers to a similar question about glowing/neon lighting should give you some guidance.
As you will see from the answers provided, SceneKit does not have built-in support for volumetric lighting; all of the approaches are hacks to achieve an effect similar to a glowing light.
iOS SceneKit Neon Glow
To add a "red" directional light effect to your scene... which is an alternative to using sceneView.autoenablesDefaultLighting = true
let myLight = SCNNode()
myLight.light = SCNLight()
myLight.scale = SCNVector3(1, 1, 1)
myLight.light?.intensity = 1000
myLight.position = SCNVector3Zero
myLight.light?.type = SCNLight.LightType.directional
myLight.light?.color = UIColor.red

// add the light to the scene
sceneView.scene.rootNode.addChildNode(myLight)
note: This effect makes all the objects in the scene more reddish.
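If what you're after is a warm tint rather than a plainly red one, one thing worth trying (a sketch, not something from the question's code) is SCNLight's color temperature instead of a red color:
// Values below roughly 6500 K shift the light towards warm/orange;
// the exact look depends on your scene and materials.
let warmLight = SCNNode()
warmLight.light = SCNLight()
warmLight.light?.type = .directional
warmLight.light?.intensity = 1000
warmLight.light?.temperature = 3000 // warm, incandescent-like
sceneView.scene.rootNode.addChildNode(warmLight)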

How to programmatically assign a material to a 3D SCNNode?

I'm trying to figure out how to programmatically assign a material to an object (SCNNode) in my ARKit scene (Xcode 9 / Swift 4). I'm doing this programmatically because the same shaped object needs to be rendered with far too many variants (or user-generated images) to assign them via the scene editor. The object is just a cube - for now, I'm just trying to get one side to display a material pulled from the Assets folder.
This is my current code, based on prior Stack Overflow posts, but the object just remains white.
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "texture.jpg")
let nodeObject = self.lastUsedObject?.childNode(withName: "box", recursively: true)
// I believed this lets me grab the last thing I rendered in an ARKit scene - please
// correct me if I'm wrong. My object is also labeled "box".
nodeObject?.geometry?.materials
nodeObject?.geometry?.materials[0] = material // I wanted to grab the first face of the box
Thank you so much in advance! I've been fiddling with this for a while but I can't seem to get the hang of programmatic methods for 3D objects/scenes in Swift.
Overall, I was setting the materials of an object like this and it was working (to only grab one face of the box):
var imageMaterial = SCNMaterial()
imageMaterial.isDoubleSided = false
imageMaterial.diffuse.contents = UIImage(named: "myImage")
var cube: SCNGeometry? = SCNBox(width: 1.0, height: 1.0, length: 1, chamferRadius: 0)
var node = SCNNode(geometry: cube)
node.geometry?.materials = [imageMaterial]
So it could possibly be that you haven't been able to grab the object, as stated in the comments.
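If it helps, here's a sketch of targeting a single face: as far as I recall, SCNBox renders its six sides with the materials array in the order front, right, back, left, top, bottom, so you can pass six materials and put the image on just one of them (double-check the order against the SCNBox documentation):
let imageMaterial = SCNMaterial()
imageMaterial.diffuse.contents = UIImage(named: "myImage")

let plainMaterial = SCNMaterial()
plainMaterial.diffuse.contents = UIColor.white

let cube = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0)
// Order (as I recall): front, right, back, left, top, bottom.
cube.materials = [imageMaterial, plainMaterial, plainMaterial,
                  plainMaterial, plainMaterial, plainMaterial]

let node = SCNNode(geometry: cube)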