.usdz model has no texture when loaded into scene - swift

I'm loading a .usdz model (downloaded from Apple) into my ARSCNView, which works. But unfortunately the model is always rendered without any texture and appears black.
// Get the URL of the .usdz file
guard let usdzURL = Bundle.main.url(forResource: "toy_robot_vintage", withExtension: "usdz") else {
    return
}

// Load the SCNNode from the file
let referenceNode = SCNReferenceNode(url: usdzURL)!
referenceNode.load()

// Add the node to the scene
sceneView.scene.rootNode.addChildNode(referenceNode)

Your scene has no light; that's why the object appears dark. Just add a directional light to your scene:
let directionalLightNode = SCNNode()
directionalLightNode.light = SCNLight()
directionalLightNode.light?.type = .directional
sceneView.scene.rootNode.addChildNode(directionalLightNode)
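A directional light shines along its node's negative Z axis, so depending on where your model sits you may also want to orient the node, for example (the angle here is just illustrative):
// Tilt the light downward so it actually hits the model (illustrative value)
directionalLightNode.eulerAngles = SCNVector3(-Float.pi / 3, 0, 0)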

If you have already implemented lights in your 3D scene and these lights have the necessary intensity level (the default is 1000 lumens), that's OK. If not, just use the following code to enable automatic lighting:
let sceneView = ARSCNView()
sceneView.autoenablesDefaultLighting = true
sceneView.automaticallyUpdatesLighting = true
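If you keep your own lights instead, you can also raise their intensity explicitly, for example:
directionalLightNode.light?.intensity = 2000 // the default is 1000 lumens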
But if you still don't see the robot model's shading:
in Xcode's Scene Inspector, set the Environment property to Procedural Sky in the drop-down menu.

Related

SceneKit LIDAR iOS: Show unscanned regions of camera view in the background with a different color/texture

I'm building an app similar to Polycam, 3D Scanner App, Scaniverse, etc. I visualize a mesh for scanned regions and export it into different formats. I would like to show the user what regions are scanned, and what not. To do so, I need to differentiate between them.
My idea is to build something like Polycam does:
[Image: Polycam's blue background for unscanned regions]
I tried changing the background content property of the scene, but it causes the whole camera view to be replaced by the color.
arSceneView.scene.background.contents = UIColor.black
I'm using ARSCNView and setting up plane detection as follows:
private func setupPlaneDetection() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.sceneReconstruction = .meshWithClassification
    configuration.frameSemantics = .smoothedSceneDepth

    arSceneView.session.run(configuration)
    arSceneView.session.delegate = self
    // arSceneView.scene.background.contents = UIColor.black
    arSceneView.delegate = self
    UIApplication.shared.isIdleTimerDisabled = true
    arSceneView.showsStatistics = true
}
Thanks in advance for any help you can provide!
I’ve done this before by adding a sphere to the scene with a two-sided material (slightly transparent) and with a radius large enough that the camera and the scanned surface will always be inside of it. Here’s an example of how to do that:
let backgroundSphereNode = SCNNode()
backgroundSphereNode.geometry = SCNSphere(radius: 500)

let material = SCNMaterial()
material.isDoubleSided = true
material.diffuse.contents = UIColor(white: 0, alpha: 0.9)
backgroundSphereNode.geometry?.materials = [material]

// Don't forget to add the sphere to the scene
arSceneView.scene.rootNode.addChildNode(backgroundSphereNode)
Note that I’m using a black color - you can obviously change this to whatever you need, but keep the alpha channel slightly transparent. And tweak the radius of the sphere so it works for your scene.

RealityKit – Change the color of a model

I am currently looking into options for changing an object's colour from Swift. The object has been added to the scene from Reality Composer.
I found in the documentation that I can change position, rotation, and scale; however, I am unable to find a way to change the colour.
Xcode 14.2, RealityKit 2.0, Target iOS 16.2
Use the following code to change the color of a box model (found in the Xcode RealityKit template):
let boxScene = try! Experience.loadBox() // Reality Composer scene
let modelEntity = boxScene.steelBox!.children[0] as! ModelEntity
var material = SimpleMaterial()
material.color = .init(tint: .green)
modelEntity.model?.materials[0] = material
arView.scene.anchors.append(boxScene)
print(boxScene)
A downcast to ModelEntity is necessary for accessing the model's components.
If you need to change the transparency of your model, use the following approach.
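A minimal sketch, assuming a SimpleMaterial whose tint color has an alpha channel below 1.0 (which renders the model semi-transparent in RealityKit 2.0):
var material = SimpleMaterial()
// Alpha below 1.0 makes the model semi-transparent
material.color = .init(tint: .green.withAlphaComponent(0.5))
modelEntity.model?.materials[0] = material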
Look into changing the material of the object.

ARKit – How to display the feed from a virtual SCNCamera placed on SCNPlane?

I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by a common SCNNode. It is oriented to show me the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera feed on, e.g., a SCNPlane (as the first material's diffuse contents), like a TV screen. Of course, I am aware that it will only display the SceneKit content that is in that camera's view, and not the rest of the ARKit image (which is only possible with the main camera). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a realtime camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR content, and position the cameraNode as well (pointing in the direction where the objects are in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing at all is displayed from the SCNCamera I placed in the scene. Everything else in the AR scene works well; I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize it better, I add some more screenshots.
The first shows the image through the SCNCamera according to ARGeo's input. But it takes up the whole screen, instead of displaying its contents on a SCNPlane, as I need.
The next screenshot shows the actual ARView result obtained with my posted code. As you can see, the gray display plane remains gray; it shows nothing.
The last screenshot is a photomontage showing the expected result, as I'd like to get it.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to a SCNMaterial on a SCNPlane, as was the initial idea; instead I use an additional SCNView (for the moment).
In the definitions I add another view, like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set things up like so...
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle.
// (exemplary)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
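(Note: degreesToRadians is not part of SceneKit; it is presumably a small helper extension, something like this sketch:)
extension Float {
    // Hypothetical helper for the snippet above, not part of SceneKit
    var degreesToRadians: Float { self * .pi / 180 }
}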
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera at its node's position, and it works very nicely.
In a common iOS/Swift/ARKit project, this construct produces some side effects that one may run into.
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I corrected this by adding/changing the following property, which restores the animation speed behaviour to almost normal (default):
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, all the effects will no longer play - as long as I don't do this:
overlayView.scene = nil
Of course, the additional SCNView stops working, but everything else gets back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true

        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)

        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)

        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk, just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
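A rough sketch of that idea, converting each incoming CVPixelBuffer to a CGImage and assigning it to the plane's material (the update function and its call site are assumptions, not part of any recorder API):
import SceneKit
import CoreImage
import CoreVideo

let ciContext = CIContext()

// Call this for every new frame, e.g. from your recorder's output callback
func update(_ material: SCNMaterial, with pixelBuffer: CVPixelBuffer) {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    if let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) {
        material.diffuse.contents = cgImage
    }
}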
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {
    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene!.rootNode.childNode(withName: "ProjectionPlane", recursively: true)!

        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene's layer as a texture.
        projectionPlane.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1

        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly connected as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        view.scene = SCNScene()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        view.scene = SCNScene()
        // ... set up scene ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene!.rootNode.addChildNode(projectionPlane)
        return view
    }
}

Using an MTLTexture as the environment map of an SCNScene

I want to set an MTLTexture object as the environment map of a scene, which seems to be possible according to the documentation. I can set the environment map to a UIImage with the following code:
let roomImage = UIImage(named: "room")
scene.lightingEnvironment.contents = roomImage
This works and I see the reflection of the image on my metallic objects. I then tried converting the image to an MTLTexture and setting it as the environment map with the following code:
let roomImage = UIImage(named: "room")
let loader = MTKTextureLoader(device: MTLCreateSystemDefaultDevice()!)
let envMap = try? loader.newTexture(cgImage: (roomImage?.cgImage)!, options: nil)
scene.lightingEnvironment.contents = envMap
However, this does not work and I end up with a blank environment map with no reflection on my objects.
Also, instead of passing nil for the options, I tried setting the MTKTextureLoader.Option.textureUsage key to every possible value, but that didn't work either.
Edit: You can have a look at the example project in this repo and use it to reproduce this use case.
Lighting SCN Environment with an MTK texture
Using Xcode 13.3.1 on macOS 12.3.1 for an iOS 15.4 app.
The trick is that environment lighting requires a cube texture, not a flat image.
Create 6 square images for a MetalKit cube texture:
1) in the Xcode Assets folder, create a Cube Texture Set
2) place the textures in their corresponding slots
3) mirror the images horizontally and vertically, if needed
Paste the code:
import ARKit
import MetalKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let scene = SCNScene()

        let imageName = "CubeTextureSet"
        let textureLoader = MTKTextureLoader(device: sceneView.device!)
        let environmentMap = try! textureLoader.newTexture(name: imageName,
                                                           scaleFactor: 2,
                                                           bundle: .main,
                                                           options: nil)
        let daeScene = SCNScene(named: "art.scnassets/testCube.dae")!
        let model = daeScene.rootNode.childNode(withName: "polyCube",
                                                recursively: true)!
        scene.lightingEnvironment.contents = environmentMap
        scene.lightingEnvironment.intensity = 2.5
        scene.background.contents = environmentMap

        sceneView.scene = scene
        sceneView.allowsCameraControl = true
        scene.rootNode.addChildNode(model)
    }
}
Apply metallic materials to your models. Now the Metal environment lighting is on.
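For instance, a quick sketch of a fully metallic, physically based material (the values are illustrative) so the environment map becomes clearly visible:
let metal = SCNMaterial()
metal.lightingModel = .physicallyBased
metal.metalness.contents = 1.0 // fully metallic, so reflections dominate
metal.roughness.contents = 0.1 // low roughness for sharp reflections
model.geometry?.firstMaterial = metal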
If you need a procedural skybox texture, use the MDLSkyCubeTexture class.
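A minimal sketch of that route (the parameter values are just illustrative; a material property accepts an MDLTexture directly):
import ModelIO

let sky = MDLSkyCubeTexture(name: nil,
                            channelEncoding: .uInt8,
                            textureDimensions: vector_int2(256, 256),
                            turbidity: 0.25,    // haziness of the sky
                            sunElevation: 0.6,  // sun height above the horizon
                            upperAtmosphereScattering: 0.25,
                            groundAlbedo: 0.4)
scene.lightingEnvironment.contents = sky
scene.background.contents = sky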
Also, this post may be useful for you.

How to programmatically assign a material to a 3D SCNNode?

I'm trying to figure out how to programmatically assign a material to an object (SCNNode) in my scene for ARKit (Xcode 9 / Swift 4). I'm doing this programmatically because the same shaped object needs to be rendered with far too many variants (or user-generated images) to assign them via the editor in a scene. The object is just a cube; for now, I'm just trying to get one side to display a material pulled from the Assets folder.
This is my current code, based on prior Stack posts, but the object just remains white.
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "texture.jpg")

let nodeObject = self.lastUsedObject?.childNode(withName: "box", recursively: true)
// I believed this lets me grab the last thing I rendered in an ARKit scene - please
// correct me if I'm wrong. My object is also labeled "box".

nodeObject?.geometry?.materials[0] = material // I wanted to grab the first face of the box
Thank you so much in advance! I've been fiddling with this for a while, but I can't seem to get the hang of programmatic methods for 3D objects and scenes in Swift.
Overall, I was setting the materials of an object like this and it was working (to grab only one face of the box):
let imageMaterial = SCNMaterial()
imageMaterial.isDoubleSided = false
imageMaterial.diffuse.contents = UIImage(named: "myImage")

let cube: SCNGeometry = SCNBox(width: 1.0, height: 1.0, length: 1, chamferRadius: 0)
let node = SCNNode(geometry: cube)
node.geometry?.materials = [imageMaterial]
So it could possibly be that you haven't been able to grab the object, as stated in the comments.
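If the goal really is a different image on just one face, note that an SCNBox accepts up to six materials, applied per face in the order front, right, back, left, top, bottom; a minimal sketch:
let imageMaterial = SCNMaterial()
imageMaterial.diffuse.contents = UIImage(named: "myImage")

let plainMaterial = SCNMaterial()
plainMaterial.diffuse.contents = UIColor.white

let cube = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0)
// One material per face: front, right, back, left, top, bottom
cube.materials = [imageMaterial, plainMaterial, plainMaterial,
                  plainMaterial, plainMaterial, plainMaterial]
let node = SCNNode(geometry: cube)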