Getting the SCNRenderer from an instantiated SCNScene - swift

I would like to know how I am supposed to extract the SCNRenderer from an instantiated SceneKit scene. I am trying to get the AVAudioEngine which lies in the SCNRenderer so that I can apply audio filters to my nodes.
Here is the awakeFromNib override, reduced to the relevant code:
override func awakeFromNib() {
    // create a new scene
    let scene = SCNScene()

    // create and add a camera to the scene
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    scene.rootNode.addChildNode(cameraNode)

    // place the camera
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

    // set the scene to the view
    self.gameView!.scene = scene
    gameView.delegate = self
}
If someone can give me a pointer I would really appreciate it. I was able to play sounds positionally, but now I am stuck on using the AVAudioEngine graph to build chains like
AVAudioInput > AVAudioUnitDistortion > AVAudioOutput and start doing some fun mixing.
Edit:
This is what I had in mind for the engine:
distortion = AVAudioUnitDistortion()
let URL = NSURL(fileURLWithPath: dataPath + "/welcome.aiff")
if NSFileManager.defaultManager().fileExistsAtPath(dataPath + "/welcome.aiff") {
    let source = SCNAudioSource(URL: URL)!
    source.volume = 30.0
    source.reverbBlend = 50.0
    source.rate = 0.9
    let clip = SCNAudioPlayer(source: source)
    engine = clip.audioNode!.engine
    distortion.loadFactoryPreset(AVAudioUnitDistortionPreset.SpeechRadioTower)
    engine.attachNode(distortion)
    engine.connect(clip.audioNode!, to: distortion, format: nil)
    engine.connect(self.distortion, to: engine.outputNode, format: nil)
    return clip
}
But I am now getting a nil-related crash on the distortion AVAudioUnitDistortion instance.
Where am I going wrong?

audioEngine is a property of the SCNSceneRenderer protocol, not of the SCNRenderer class. Since SCNView conforms to SCNSceneRenderer, gameView.audioEngine will work.
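As a minimal sketch (using modern Swift API names, and assuming clip is the SCNAudioPlayer from the question), the graph could be wired like this:
import SceneKit
import AVFoundation

let engine = gameView.audioEngine            // exposed via SCNSceneRenderer

let distortion = AVAudioUnitDistortion()
distortion.loadFactoryPreset(.speechRadioTower)
engine.attach(distortion)

// Splice the distortion between the clip's player node and the main mixer.
if let playerNode = clip.audioNode {
    engine.connect(playerNode, to: distortion, format: nil)
    engine.connect(distortion, to: engine.mainMixerNode, format: nil)
}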

Related

SceneKit. How to achieve default scene light behavior without autoenablesDefaultLighting?

I tried setting autoenablesDefaultLighting = true for my SCNView and it looks good. However, I want to achieve the same behavior without autoenablesDefaultLighting, by setting up a light myself and adjusting it a little bit.
I tried omni light with this code:
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light?.castsShadow = true
lightNode.light?.type = .omni
lightNode.light?.intensity = 10000
lightNode.position = SCNVector3(x: 0, y: 0, z: 100)
scene.rootNode.addChildNode(lightNode)
And got this:
And with autoenablesDefaultLighting=true I got this:
Custom Default Lighting
I believe, in SceneKit, the default scene lighting is a Directional Light without any shadows, attached directly to the default camera node (i.e. pointOfView node). To simulate the same lighting conditions as when the .autoenablesDefaultLighting property is true, use the following code:
Delegate's renderer method – the light's position and orientation will be updated 60 times per second:
import SceneKit

extension GameViewController: SCNSceneRendererDelegate {

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        sunNode.transform = (sceneView?.pointOfView?.worldTransform)!

        let cameraAngles = (self.sceneView?.pointOfView?.eulerAngles)!
        let lightAngles = self.sunNode.eulerAngles

        print("Camera: " + String(format: "%.2f, %.2f, %.2f", cameraAngles.x,
                                                              cameraAngles.y,
                                                              cameraAngles.z))
        print("Light: " + String(format: "%.2f, %.2f, %.2f", lightAngles.x,
                                                             lightAngles.y,
                                                             lightAngles.z))
    }
}
Here's the GameViewController class:
class GameViewController: NSViewController {

    var sceneView: SCNView? = nil
    let sunNode = SCNNode()

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView = self.view as? SCNView
        sceneView?.delegate = self

        let scene = SCNScene(named: "ship.scn")!
        sceneView?.scene = scene
        sceneView?.scene?.lightingEnvironment.contents = .none
        sceneView?.scene?.background.contents = .none
        sceneView?.backgroundColor = .black
        sceneView?.allowsCameraControl = true
        // sceneView?.autoenablesDefaultLighting = true

        sunNode.light = SCNLight()
        sunNode.light?.type = .directional
        sceneView?.scene?.rootNode.addChildNode(sunNode)
    }
}
Explanations
I'd like to add that if there is no light in the scene at all (and autoenablesDefaultLighting is off), the only remaining source of light is a non-switchable ambient light that you cannot control.
In addition, the physically based shader always requires an additional ambient light (otherwise a physically based surface renders black). The location and orientation of this light source do not matter.
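For example, a minimal ambient light to keep PBR surfaces from going black might look like this (a sketch; the intensity value is illustrative):
let ambientNode = SCNNode()
ambientNode.light = SCNLight()
ambientNode.light?.type = .ambient
ambientNode.light?.intensity = 1000   // position/orientation are irrelevant for ambient light
scene.rootNode.addChildNode(ambientNode)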
If a directional light hits a surface perpendicularly, the surface is lit at 100% intensity (the default intensity is 1000 lumens); if the rays of the light source are parallel to the surface, the surface receives no light from that source.
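As a worked example of that falloff (this is just the standard Lambert cosine model, an assumption on my part, not a SceneKit API):
import Foundation

let baseIntensity = 1000.0                           // default lumens
let angleToNormal = Double.pi / 3                    // rays arrive 60° off the surface normal
let effective = baseIntensity * cos(angleToNormal)   // 1000 × 0.5 = 500 lumens
// 0° (perpendicular rays) -> 1000 lumens; 90° (parallel rays) -> 0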
As you can see, the first and last images have an identical lighting environment.

ARKit – How to display the feed from a virtual SCNCamera placed on SCNPlane?

I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by a common SCNNode. It is oriented to show me the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera's feed on, e.g., an SCNPlane (as the first material's diffuse contents) – like a TV screen. Of course I am aware that it will only display the SceneKit content within that camera's view, not the rest of the ARKit image (which is only possible with the main camera). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a realtime camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR content, and to position the cameraNode as well (pointing toward where the objects sit in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing from the SCNCamera I put in the scene is displayed at all. Everything else in the AR scene works well; I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize it better, I've added some more screenshots.
The following shows the image through the SCNCamera, based on ARGeo's input. But it takes up the whole screen, instead of displaying its contents on an SCNPlane as I need.
The next screenshot shows the actual AR view result produced by my posted code. As you can see, the gray display plane remains gray – it shows nothing.
The last screenshot is a photomontage showing the expected result, as I'd like to get it.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to a SCNMaterial on a SCNPlane, as was the initial idea; instead I use an additional SCNView (for the moment).
In the definitions I add another view like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set things up like so...
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angles.
// (exemplary)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera at its node's position – and it works very nicely.
In a common iOS/Swift/ARKit project, this construct produces some side effects that one may run into.
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I corrected this by adding/changing the following property, which restores the animation speed to almost-normal (default) behavior:
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, the effects will no longer play – unless I do this:
overlayView.scene = nil
Of course, the additional SCNView then stops working, but everything else goes back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView.delegate = self
        sceneView.showsStatistics = true

        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)

        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)

        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder app that you can tailor to your needs (you don't need to write the video to disk – just use a CVPixelBuffer stream and assign it as a texture for the diffuse material).
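As a rough sketch of that idea (my own illustration, not the linked recorder's actual API; pixelBuffer and plane are assumed to exist), each incoming frame could be converted and assigned like this:
import SceneKit
import CoreImage

let ciContext = CIContext()

func update(plane: SCNNode, with pixelBuffer: CVPixelBuffer) {
    // Convert the frame to a CGImage, a documented contents type for SCNMaterialProperty.
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    if let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) {
        plane.geometry?.firstMaterial?.diffuse.contents = cgImage
    }
}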
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {

    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene!.rootNode.childNode(withName: "ProjectionPlane", recursively: true)!

        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene's layer as a texture.
        projectionPlane.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1

        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly connected as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        // ... set up scene (assigns view.scene) ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene?.rootNode.addChildNode(projectionPlane)
        return view
    }
}

ARAnchor for SCNNode

I'm trying to get a hold of the anchor after adding an SCNNode to the scene of an ARSCNView. My understanding is that the anchor should be created automatically, but I can't seem to retrieve it.
Below is how I add it. The node is saved in a variable called testNode.
let node = SCNNode()
node.geometry = SCNBox(width: 0.5, height: 0.1, length: 0.3, chamferRadius: 0)
node.geometry?.firstMaterial?.diffuse.contents = UIColor.green
sceneView.scene.rootNode.addChildNode(node)
testNode = node
Here is how I try to retrieve it. It always prints nil.
if let testNode = testNode {
    print(sceneView.anchor(for: testNode))
}
Does it not create the anchor? If it does: is there another method I can use to retrieve it?
If you have a look at the Apple docs, they state:
To track the positions and orientations of real or virtual objects
relative to the camera, create anchor objects and use the add(anchor:)
method to add them to your AR session.
As such, I think that since you aren't using plane detection, you would need to create an ARAnchor manually if one is needed:
Whenever you place a virtual object, always add an ARAnchor representing its position and orientation to the ARSession. After moving a virtual object, remove the anchor at the old position and create a new anchor at the new position. Adding an anchor tells ARKit that a position is important, improving world tracking quality in that area and helping virtual objects appear to stay in place relative to real-world surfaces.
You can read more about this in the following thread What's the difference between using ARAnchor to insert a node and directly insert a node?
Anyway, in order to get you started I began by creating an SCNNode called currentNode:
var currentNode: SCNNode?
Then using a UITapGestureRecognizer I created an ARAnchor manually at the touchLocation:
@objc func handleTap(_ gesture: UITapGestureRecognizer) {

    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)

    //2. If We Have Hit A Feature Point Get The Result
    if let hitTest = augmentedRealityView.hitTest(currentTouchLocation, types: [.featurePoint]).last {

        //3. Create An Anchor At The World Transform
        let anchor = ARAnchor(transform: hitTest.worldTransform)

        //4. Add It To The Scene
        augmentedRealitySession.add(anchor: anchor)
    }
}
Having added the anchor, I then used the ARSCNViewDelegate callback to create the currentNode like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if currentNode == nil {
        currentNode = SCNNode()
        let nodeGeometry = SCNBox(width: 0.2, height: 0.2, length: 0.2, chamferRadius: 0)
        nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
        currentNode?.geometry = nodeGeometry
        currentNode?.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
        node.addChildNode(currentNode!)
    }
}
In order to test that it works, e.g. being able to log the corresponding ARAnchor, I changed the tap gesture method to include this at the end:
if let anchorHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {
    print(augmentedRealityView.anchor(for: anchorHitTest.node))
}
Which in my console log prints:
Optional(<ARAnchor: 0x1c0535680 identifier="23CFF447-68E9-451D-A64D-17C972EB5F4B" transform=<translation=(-0.006610 -0.095542 -0.357221) rotation=(-0.00° 0.00° 0.00°)>>)
Hope it helps...

AR with iOS: putting a light in the scene makes everything black?

OK, I am trying desperately to achieve this sort of warm lighting on my objects when they are added to my AR scene in Swift/Xcode – warm lighting and little glowing lights around:
To be clear, I do NOT want the objects I add to my scene to look like they belong in the surrounding room. I want them to stand out, look warm, and glow. All the tutorials on ARKit teach you how to mimic the lighting of the actual room.
Xcode has several lighting options that pull from the surroundings gathered by the camera; with:
if let lightEstimate = session.currentFrame?.lightEstimate
I can print out the warmth, intensity, etc. I also have these properties currently set to match the light of the room:
sceneView.automaticallyUpdatesLighting = true

extension ARSCNView {
    func setup() { //SCENE SETUP
        antialiasingMode = .multisampling4X
        autoenablesDefaultLighting = true
        preferredFramesPerSecond = 60
        contentScaleFactor = 1.3
        if let camera = pointOfView?.camera {
            camera.wantsHDR = true
            camera.wantsExposureAdaptation = true
            camera.exposureOffset = -1
            camera.minimumExposure = -1
            camera.maximumExposure = 3
        }
    }
}
I have tried upping the emission on my objects' textures and everything else, but nothing achieves the effect. Adding a light just turns the objects black (no color).
What is wrong here?
To create this type of glowing red neon-light result in ARKit, you can do the following.
You need to create a reactor.scnp (SceneKit particle system file) and make the following changes to create the glowing red halo. It should be placed in your playground's Resources directory along with the file spark.png.
These are the settings to change from the default reactor type. Leave all the other settings alone.
Change the Image animate color to red/orange/red/black
speed factor = 0.1
enable lighting checked
Emitter Shape = Sphere
Image Size = 0.5
Image Intensity = 0.1
Simulation Speed Factor = 0.1
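If you'd rather configure the particle system in code than in the .scnp editor, a rough equivalent sketch (using SCNParticleSystem properties that mirror the settings above; the red/orange color-ramp animation itself is omitted) would be:
let reactor = SCNParticleSystem()
reactor.particleImage = UIImage(named: "spark")   // the spark.png from Resources
reactor.emitterShape = SCNSphere(radius: 0.1)     // Emitter Shape = Sphere
reactor.particleSize = 0.5                        // Image Size = 0.5
reactor.particleIntensity = 0.1                   // Image Intensity = 0.1
reactor.speedFactor = 0.1                         // Simulation Speed Factor = 0.1
reactor.isLightingEnabled = true                  // "enable lighting" checked
reactor.particleColor = UIColor.red               // base of the animated color ramp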
Note: The code below is a playground app I use for testing purposes. Just tap anywhere to add a neon light to the scene; you can place as many neon lights as you like.
import ARKit
import SceneKit
import PlaygroundSupport
import SceneKit.ModelIO

class ViewController: NSObject {

    var sceneView: ARSCNView

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
        super.init()

        self.setupWorldTracking()
        self.sceneView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(ViewController.handleTap(_:))))
    }

    private func setupWorldTracking() {
        if ARWorldTrackingConfiguration.isSupported {
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = .horizontal
            configuration.isLightEstimationEnabled = true
            self.sceneView.session.run(configuration, options: [])
        }
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let results = self.sceneView.hitTest(gesture.location(in: gesture.view), types: ARHitTestResult.ResultType.featurePoint)
        guard let result: ARHitTestResult = results.first else {
            return
        }

        let cylinder = SCNCylinder(radius: 0.05, height: 1)
        cylinder.firstMaterial?.emission.contents = UIColor.red
        cylinder.firstMaterial?.emission.intensity = 1

        let spotLight = SCNNode()
        spotLight.light = SCNLight()
        spotLight.scale = SCNVector3(1, 1, 1)
        spotLight.light?.intensity = 1000
        spotLight.castsShadow = true
        spotLight.position = SCNVector3Zero
        spotLight.light?.type = SCNLight.LightType.directional
        spotLight.light?.color = UIColor.white

        let particleSystem = SCNParticleSystem(named: "reactor", inDirectory: nil)
        let systemNode = SCNNode()
        systemNode.addParticleSystem(particleSystem!)

        let node = SCNNode(geometry: cylinder)
        let position = SCNVector3Make(result.worldTransform.columns.3.x, result.worldTransform.columns.3.y, result.worldTransform.columns.3.z)
        systemNode.position = position
        node.position = position

        self.sceneView.scene.rootNode.addChildNode(spotLight)
        self.sceneView.scene.rootNode.addChildNode(node)
        self.sceneView.scene.rootNode.addChildNode(systemNode)
    }
}

let sceneView = ARSCNView()
let viewController = ViewController(sceneView: sceneView)
sceneView.autoenablesDefaultLighting = false

PlaygroundPage.current.needsIndefiniteExecution = true
PlaygroundPage.current.liveView = viewController.sceneView
If you're looking for a neon/glowing effect in your scene, these previous answers to a similar question about glowing/neon lighting should give you some guidance.
As you will see from the answers provided, SceneKit has no built-in support for volumetric lighting; all the approaches are hacks that approximate the effect of a glowing light.
iOS SceneKit Neon Glow
To add a "red" directional light effect to your scene... which is an alternative to using sceneView.autoenablesDefaultLighting = true
let myLight = SCNNode()
myLight.light = SCNLight()
myLight.scale = SCNVector3(1, 1, 1)
myLight.light?.intensity = 1000
myLight.position = SCNVector3Zero
myLight.light?.type = SCNLight.LightType.directional
myLight.light?.color = UIColor.red

// add the light to the scene
sceneView.scene.rootNode.addChildNode(myLight)
Note: this effect makes all the objects in the scene more reddish.

SceneKit SCNSceneRendererDelegate - renderer function not called

I recently asked a question which had a pretty obvious answer. I'm still working on the same project and have run into another problem: I need to implement per-frame logic. The SCNSceneRendererDelegate protocol worked perfectly fine on iOS, but on OS X the renderer function is not firing. I have created a little example project to illustrate the problem. It consists of a SceneKit view in a storyboard and the following code in the ViewController class:
import Cocoa
import SceneKit

class ViewController: NSViewController, SCNSceneRendererDelegate {

    @IBOutlet weak var sceneView: SCNView!
    let cubeNode = SCNNode()

    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = SCNScene()

        let sphere = SCNSphere(radius: 0.1)
        sphere.firstMaterial!.diffuse.contents = NSColor.yellowColor()
        let sphereNode = SCNNode(geometry: sphere)
        scene.rootNode.addChildNode(sphereNode)

        let cube = SCNBox(width: 0.2, height: 0.2, length: 0.2, chamferRadius: 0)
        cube.firstMaterial!.diffuse.contents = NSColor.greenColor()
        cubeNode.geometry = cube
        cubeNode.position = SCNVector3(1, 0, 0)
        scene.rootNode.addChildNode(cubeNode)

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(2, 1, 2)
        let constraint = SCNLookAtConstraint(target: cubeNode)
        cameraNode.constraints = [constraint]
        scene.rootNode.addChildNode(cameraNode)

        sceneView.scene = scene
        sceneView.backgroundColor = NSColor(red: 0.5, green: 0, blue: 0.3, alpha: 1)
        sceneView.allowsCameraControl = true
        sceneView.delegate = self
        sceneView.playing = true
    }

    func renderer(renderer: SCNSceneRenderer, updateAtTime time: NSTimeInterval) {
        cubeNode.position.x += 0.1
    }
}
All I want is to basically move the cube every frame, but nothing happens. What is weird is that when I set sceneView.allowsCameraControl to true, the renderer function is called whenever I click or drag on the screen (which makes sense, because the view has to be re-rendered for the new camera angle). But I want it to be called every frame.
Is there an error I don't see, or is this a bug in my Xcode?
Edit:
I have tried following the instructions in the answer below and now have the following code for the ViewController:
import Cocoa
import SceneKit

class ViewController: NSViewController {

    @IBOutlet weak var sceneView: SCNView!
    let scene = MyScene(create: true)

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView.scene = scene
        sceneView.backgroundColor = NSColor(red: 0.5, green: 0, blue: 0.3, alpha: 1)
        sceneView.allowsCameraControl = true
        sceneView.delegate = scene
        sceneView.playing = true
    }
}
And a MyScene class:
import Foundation
import SceneKit

final class MyScene: SCNScene, SCNSceneRendererDelegate {

    let cubeNode = SCNNode()

    convenience init(create: Bool) {
        self.init()

        let sphere = SCNSphere(radius: 0.1)
        sphere.firstMaterial!.diffuse.contents = NSColor.yellowColor()
        let sphereNode = SCNNode(geometry: sphere)
        rootNode.addChildNode(sphereNode)

        let cube = SCNBox(width: 0.2, height: 0.2, length: 0.2, chamferRadius: 0)
        cube.firstMaterial!.diffuse.contents = NSColor.greenColor()
        cubeNode.geometry = cube
        cubeNode.position = SCNVector3(1, 0, 0)
        rootNode.addChildNode(cubeNode)

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(2, 1, 2)
        let constraint = SCNLookAtConstraint(target: cubeNode)
        cameraNode.constraints = [constraint]
        rootNode.addChildNode(cameraNode)
    }

    @objc func renderer(aRenderer: SCNSceneRenderer, updateAtTime time: NSTimeInterval) {
        cubeNode.position.x += 0.01
    }
}
However, it is still not working. What am I doing wrong?
Edit: setting sceneView.loops = true fixes the described problem
I don't understand what's causing the problem, but I was able to replicate it. I got it to work, though, by adding a meaningless SCNAction:
let dummyAction = SCNAction.scaleBy(1.0, duration: 1.0)
let repeatAction = SCNAction.repeatActionForever(dummyAction)
cubeNode.runAction(repeatAction)
The render loop fires only if the scene is "playing" (see SKScene becomes unresponsive while being idle). I expect that setting
sceneView.isPlaying = true
(as you're already doing) would be enough to trigger the render callbacks.
The code I have above is not a solution. It's a nasty hack to work around your problem and allow you to get on with life.
For anyone still having problems, setting the delegate and playing variables will work.
sceneView.delegate = self
sceneView.isPlaying = true
I suspect the answer hinges on what Querent means by "every frame". Querent should probably clarify this, but I'll try to answer anyway because I'm like that.
The simplest interpretation is probably "every frame that would render anyway", but that seems unlikely to be what is desired unless the cube is intended as a kind of activity monitor for the renderer, which doesn't seem likely either; there are much better approaches to that.
What Querent may want is to render repeatedly while the view's playing property is YES. If that's the case, then perhaps the answer is as simple as setting the view's loops property to YES. This recently solved a problem for me in which I wanted rendering to occur while a keyboard key was held down. (I had noticed that setting playing to YES would induce a single call to my delegate.)
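Putting the pieces from this thread together, a minimal setup (Swift 4 property names, assuming self conforms to SCNSceneRendererDelegate) would be:
sceneView.delegate = self     // so the per-frame callback has somewhere to go
sceneView.isPlaying = true    // start the render loop
sceneView.loops = true        // keep rendering instead of stopping when the scene is "idle"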
In addition to the several helpful hints in this thread, the final step that got the delegate called for me was the following: if you use the pre-Swift-4 method signatures for the SCNSceneRendererDelegate protocol, the code compiles fine with no errors or warnings, but the delegate is never called.
Thus the obsolete pre-Swift-4 definition:
func renderer(aRenderer:SCNSceneRenderer, updateAtTime time:TimeInterval) {...}
(which I got from a snippet on the web) compiled just fine and was never called, while the correct definition
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {...}
compiles and gets called!
Since SCNSceneRendererDelegate is a protocol, the usual Swift protection afforded by override does not apply; and since SCNSceneRendererDelegate declares its methods as optional (which I like), the mismatch is not caught that way either.
I've been struggling with some similar bugs using SCNRenderer rather than SCNView, and this was the first hit on Google, so I just wanted to add a note with my solution in case it helps anyone.
I was using
.render(withViewport:commandBuffer:passDescriptor:)
but this does not call the delegate render method. Instead, use
.render(atTime:viewport:commandBuffer:passDescriptor:)
even if you are not using the time interval parameter. Then the delegate method renderer(_:updateAtTime:) will be called properly.
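For context, here is a sketch of that offscreen path (assuming scene, a commandQueue, and a passDescriptor configured with a color attachment already exist, and that self conforms to SCNSceneRendererDelegate):
let device = MTLCreateSystemDefaultDevice()!
let offscreenRenderer = SCNRenderer(device: device, options: nil)
offscreenRenderer.scene = scene
offscreenRenderer.delegate = self   // renderer(_:updateAtTime:) fires with the atTime: variant

if let commandBuffer = commandQueue.makeCommandBuffer() {
    offscreenRenderer.render(atTime: CACurrentMediaTime(),
                             viewport: CGRect(x: 0, y: 0, width: 1024, height: 1024),
                             commandBuffer: commandBuffer,
                             passDescriptor: passDescriptor)
    commandBuffer.commit()
}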
Try this…put your code in a scene class instead – keep the view controller clean.
final class MySCNScene: SCNScene, SCNSceneRendererDelegate {

    @objc func renderer(aRenderer: SCNSceneRenderer, updateAtTime time: NSTimeInterval) {
    }
}
Also set the view's delegate to your scene:
mySCNView!.delegate = mySCNScene