I'm trying to generate a mesh from a depth/displacement map using SceneKit
The source depth map I'm using looks like this:
I then generate a plane with an increased segment count, heavily tessellate it, and apply the displacement material. Here's the code I'm using:
import Cocoa
import SceneKit
import PlaygroundSupport
// MARK: - View setup
let scene = SCNScene()
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.camera?.zFar = 1000
scene.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light!.type = .omni
lightNode.position = SCNVector3(x: 0, y: 10, z: 10)
scene.rootNode.addChildNode(lightNode)
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light!.type = .ambient
ambientLightNode.light!.color = NSColor.darkGray
scene.rootNode.addChildNode(ambientLightNode)
let scnView = SCNView(frame: NSRect(x: 0, y: 0, width: 400, height: 400))
scnView.autoenablesDefaultLighting = true
scnView.scene = scene
scnView.allowsCameraControl = true
scnView.showsStatistics = true
scnView.backgroundColor = NSColor.black
PlaygroundPage.current.liveView = scnView
// MARK: - Plane
let plane = SCNPlane(width: 200, height: 200)
plane.widthSegmentCount = 10
plane.heightSegmentCount = 10
let planeNode = SCNNode(geometry: plane)
planeNode.name = "Plane"
planeNode.geometry?.firstMaterial?.diffuse.contents = NSColor.white
let tessellator = SCNGeometryTessellator()
tessellator.tessellationFactorScale = 25
tessellator.tessellationPartitionMode = .pow2
tessellator.insideTessellationFactor = 4
tessellator.edgeTessellationFactor = 4
tessellator.smoothingMode = .phong
planeNode.geometry?.tessellator = tessellator
planeNode.geometry?.firstMaterial?.displacement.contents = "rabbit"
planeNode.geometry?.firstMaterial?.displacement.textureComponents = .red
planeNode.geometry?.firstMaterial?.displacement.intensity = 200
planeNode.geometry?.firstMaterial?.displacement.maxAnisotropy = 1
planeNode.geometry?.firstMaterial?.displacement.magnificationFilter = .none
planeNode.geometry?.firstMaterial?.lightingModel = .phong
scene.rootNode.addChildNode(planeNode)
Which mostly gives me the 3D mesh I'm aiming for:
But the resulting "blockiness" messes with the lighting. The wireframe view shows it more clearly:
Is there any way to "smoothen" the resulting mesh, average vertex positions, or something similar?
I'm only tangentially familiar with 3D and basically unfamiliar with SceneKit, and Google and the docs haven't yielded much.
Subdivision / adaptive subdivision doesn't solve the problem, and neither does increasing the tessellation detail to the highest possible count. That can make the shadow/highlight patches smaller, but they're still there.
Any help or pointers are much appreciated!
I figured it out!
The ladder effect is an artifact of 8-bit banding in the depth map.
Converting the image to 32-bit depth and adding a slight blur pretty much fixed it:
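For anyone who wants to reproduce that preprocessing in code, here is a rough Core Image sketch (the "rabbit.png" resource name, the blur radius, and the pixel format are assumptions): it blurs the 8-bit map slightly and renders it into a float-per-channel bitmap before it gets assigned to the displacement property.
import CoreImage
// Hypothetical helper: blur the 8-bit depth map and render it into a
// 32-bit-float-per-channel CGImage so the smoothed gradient isn't re-quantized.
func smoothedDepthMap(named name: String) -> CGImage? {
    guard let url = Bundle.main.url(forResource: name, withExtension: "png"),
          let source = CIImage(contentsOf: url) else { return nil }
    // A small Gaussian blur averages neighbouring 8-bit steps into in-between values.
    let blurred = source.applyingGaussianBlur(sigma: 2.0).cropped(to: source.extent)
    // Render with a float pixel format so the intermediate values survive.
    let context = CIContext()
    return context.createCGImage(blurred,
                                 from: blurred.extent,
                                 format: .RGBAf,
                                 colorSpace: CGColorSpace(name: CGColorSpace.extendedLinearSRGB))
}
// planeNode.geometry?.firstMaterial?.displacement.contents = smoothedDepthMap(named: "rabbit")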
Related
I am creating an SK3DNode inside an SKScene:
let ball: SK3DNode = {
let scnScene = SCNScene()
let ballGeometry = SCNSphere(radius: 200)
let ballNode = SCNNode(geometry: ballGeometry)
ballNode.position = SCNVector3(0, 0, 0)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "wall")
ballGeometry.materials = [material]
let light = SCNLight()
light.type = .omni
light.color = UIColor.white
let lightNode = SCNNode()
lightNode.light = light
scnScene.rootNode.addChildNode(ballNode)
scnScene.rootNode.addChildNode(lightNode)
let node = SK3DNode(viewportSize: CGSize(width: 1000, height: 1000))
node.scnScene = scnScene
node.autoenablesDefaultLighting = false
return node
}()
However, the sphere renders black. I tried it with and without the material. Is there something I am missing?
The sphere is placed at (0, 0, 0), and so is the light (its default position), which means the light sits inside the sphere. The sphere's outward-facing surface therefore faces away from the light source and isn't lit.
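A minimal adjustment based on that, keeping the rest of the snippet unchanged (the exact position below is arbitrary), is to move the omni light outside the sphere:
// The sphere has a 200-unit radius, so place the light well outside it
// instead of leaving it at the default (0, 0, 0) inside the sphere.
lightNode.position = SCNVector3(0, 300, 500)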
I have the following code (this can be run by replacing the standard ViewController code in the Game base project for macOS):
let scene = SCNScene()
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
scene.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light!.type = .omni
lightNode.position = SCNVector3(x: 0, y: 10, z: 10)
scene.rootNode.addChildNode(lightNode)
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light!.type = .ambient
ambientLightNode.light!.color = NSColor.darkGray
scene.rootNode.addChildNode(ambientLightNode)
/* RELEVANT CODE BEGINS */
let boxGeo = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
let boxMaterial = SCNMaterial()
boxMaterial.diffuse.contents = NSColor.gray
boxGeo.firstMaterial = boxMaterial
let boxNode = SCNNode(geometry: boxGeo)
scene.rootNode.addChildNode(boxNode)
boxNode.name = "box0"
let sphereGeo = SCNSphere(radius: 0.5)
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = NSColor.green
sphereGeo.firstMaterial = sphereMaterial
let sphereNode = SCNNode(geometry: sphereGeo)
boxNode.addChildNode(sphereNode)
sphereNode.name = "sphere0"
sphereNode.constraints = [SCNConstraint]()
let distance = SCNDistanceConstraint(target: boxNode)
distance.minimumDistance = 2.0
distance.maximumDistance = 5.0
sphereNode.constraints?.append(distance)
let ik = SCNIKConstraint.inverseKinematicsConstraint(chainRootNode: boxNode)
sphereNode.constraints?.append(ik)
let anim = CABasicAnimation(keyPath: "targetPosition.y")
anim.fromValue = -2.0
anim.toValue = 2.0
anim.duration = 1
anim.autoreverses = true
anim.repeatCount = .infinity
ik.addAnimation(anim, forKey: nil)
/* RELEVANT CODE ENDS */
let scnView = self.view as! SCNView
scnView.scene = scene
scnView.allowsCameraControl = true
scnView.showsStatistics = true
scnView.backgroundColor = NSColor.black
From what I can gather from the documentation, the animation (and yes, the SceneKit view's animation setting is set to both play and loop in IB) should move the sphere as close as possible to the points 2.0 and -2.0 on the y-axis by rotating the cube. However, the sphere simply stays stationary. I have also tried setting the initial positions of the sphere and cube by manipulating their position vectors directly instead of via the distance constraint, but again the animation did nothing.
Additionally, I have attempted to combine the distance constraint with a lookAt constraint on the box, so that it rotates to constantly look at the sphere; this caused the rendering of the box and sphere to completely freak out.
I feel as though I am missing something in the documentation here, such as another constraint or some kind of transform matrix needed to set up an initial value. But I have encountered some other issues with constraints, animations, and skeletons that are making me begin to believe there is either a bug or some undocumented aspect of SceneKit.
You have added the sphereNode as a child of the boxNode. If you move the boxNode, all of its children move with it, so the constraint has no effect.
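Based on that, a minimal change (the starting offset below is just a placeholder) is to make the sphere a sibling of the box instead of a child:
// Attach the sphere to the root node so the distance/IK constraints can move it
// relative to the box rather than along with it.
scene.rootNode.addChildNode(sphereNode) // instead of boxNode.addChildNode(sphereNode)
sphereNode.position = SCNVector3(3, 0, 0) // arbitrary starting position outside the box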
I want to translate a plane without rotating the image, but for some reason my image is being rotated.
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul(currentFrame.camera.transform, translation)
planeNode.simdWorldTransform = matrix_multiply(currentFrame.camera.transform, translation)
Also, I notice that matrix_identity_float4x4 contains 4 columns, but the documentation isn't available.
Why 4 columns? Are they the frame of the plane?
The simplest way to do it is to use the following code for positioning:
let planeNode = SCNNode()
planeNode.geometry = SCNPlane(width: 20, height: 20)
// First, rotate the plane about its x axis (the angle is in radians):
planeNode.rotation = SCNVector4(1, 0, 0, -Double.pi/2)
planeNode.geometry?.materials.first?.diffuse.contents = UIColor.red
planeNode.position.x = 10
planeNode.position.z = 10
// planeNode.position = SCNVector3(x: 10, y: 0, z: 10)
scene.rootNode.addChildNode(planeNode)
or this way:
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)
scene.rootNode.addChildNode(cameraNode)
let planeNode = SCNNode()
planeNode.geometry = SCNPlane(width: 20, height: 20)
planeNode.rotation = SCNVector4(1, 0, 0, -Double.pi/2)
planeNode.geometry?.materials.first?.diffuse.contents = UIColor.red
let distance: Float = 50
planeNode.simdPosition = cameraNode.simdWorldFront * distance // along the -Z (forward) axis
planeNode.simdPosition = cameraNode.simdWorldRight * distance // along the +X (right) axis; overrides the line above, so use one or the other
scene.rootNode.addChildNode(planeNode)
If you want to know more about the matrices used in the ARKit and SceneKit frameworks, take a look at Figure 1-8, "Matrix configurations for common transformations".
Hope this helps.
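As for the "why 4 columns" part of the question: the 4x4 matrix uses homogeneous coordinates, so columns 0-2 hold the rotation/scale basis vectors and column 3 holds the translation (x, y, z, 1). That is why the usual ARKit-style "move in front of the camera" snippet (a sketch here, reusing currentFrame and planeNode from the question) only touches column 3:
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2 // 0.2 m along -Z, i.e. in front of the camera
planeNode.simdWorldTransform = simd_mul(currentFrame.camera.transform, translation)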
I'm trying to render a frame with a realistic depth-of-field effect. I've already tried the depth-of-field properties on the camera node, but they don't produce usable results.
Is there a switch to max out the rendering quality of the depth-of-field effect? Performance is not a factor; I just need to render a single frame, and the user can wait for it.
Realistic Depth of Field effect in SceneKit
In SceneKit you can easily achieve good-looking shallow or deep depth of field (DoF), and it isn't particularly processing-intensive. The .focusDistance and .fStop parameters are crucial for applying DoF:
cameraNode.camera?.wantsDepthOfField = true
cameraNode.camera?.focusDistance = 5
cameraNode.camera?.fStop = 0.01
cameraNode.camera?.focalLength = 24
Use the following code for testing (it's the macOS version):
import SceneKit
import Cocoa
class GameViewController: NSViewController {
override func viewDidLoad() {
super.viewDidLoad()
let scene = SCNScene()
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.camera?.wantsDepthOfField = true
cameraNode.camera?.focusDistance = 5
cameraNode.camera?.fStop = 0.01
cameraNode.camera?.focalLength = 24
scene.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light!.type = .omni
lightNode.position = SCNVector3(x: 0, y: 10, z: 10)
scene.rootNode.addChildNode(lightNode)
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light!.type = .ambient
ambientLightNode.light!.color = NSColor.darkGray
scene.rootNode.addChildNode(ambientLightNode)
let cylinderNode01 = SCNNode()
cylinderNode01.geometry = SCNCylinder(radius: 2, height: 10)
cylinderNode01.position = SCNVector3(0, 0, 0)
cylinderNode01.geometry?.materials.first?.diffuse.contents = NSImage(named: NSImage.Name("checker01.png"))
scene.rootNode.addChildNode(cylinderNode01)
let cylinderNode02 = SCNNode()
cylinderNode02.geometry = SCNCylinder(radius: 2, height: 10)
cylinderNode02.position = SCNVector3(5, 0, 5)
cylinderNode02.geometry?.materials.first?.diffuse.contents = NSImage(named: NSImage.Name("checker02.jpg"))
scene.rootNode.addChildNode(cylinderNode02)
let cylinderNode03 = SCNNode()
cylinderNode03.geometry = SCNCylinder(radius: 2, height: 10)
cylinderNode03.position = SCNVector3(10, 0, 10)
cylinderNode03.geometry?.materials.first?.diffuse.contents = NSImage(named: NSImage.Name("checker01.png"))
scene.rootNode.addChildNode(cylinderNode03)
let cylinderNode04 = SCNNode()
cylinderNode04.geometry = SCNCylinder(radius: 2, height: 10)
cylinderNode04.position = SCNVector3(-5, 0, -5)
cylinderNode04.geometry?.materials.first?.diffuse.contents = NSImage(named: NSImage.Name("checker02.jpg"))
scene.rootNode.addChildNode(cylinderNode04)
let cylinderNode05 = SCNNode()
cylinderNode05.geometry = SCNCylinder(radius: 2, height: 10)
cylinderNode05.position = SCNVector3(-10, 0, -10)
cylinderNode05.geometry?.materials.first?.diffuse.contents = NSImage(named: NSImage.Name("checker01.png"))
scene.rootNode.addChildNode(cylinderNode05)
let scnView = self.view as! SCNView
scnView.scene = scene
scnView.allowsCameraControl = true
scnView.backgroundColor = NSColor.black
}
}
Out of the box, SceneKit can't do heavy, high-quality post-processing or still-image rendering computation of this type. Theoretically you could probably build a setup that uses its rendering approaches to do both, but it's not a high-quality renderer. If the user can wait and you really want to focus on image quality, Unreal Engine can do this sort of thing built in, with far higher-quality post-processing, effects, lights, materials, particles, and rendering.
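That said, if all you need is a single still frame and the user can wait, one workaround worth trying (a sketch, not a guaranteed quality boost) is to render the scene offline with SCNRenderer at a much higher resolution and with multisampling, then downscale the result; the DoF itself is still SceneKit's built-in effect. This assumes the scene and cameraNode from the answer above, and the output size and antialiasing mode are arbitrary choices.
import Metal
import SceneKit
// Render one frame offline at high resolution with multisample antialiasing.
let offlineRenderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
offlineRenderer.scene = scene
offlineRenderer.pointOfView = cameraNode
let stillFrame = offlineRenderer.snapshot(atTime: 0,
                                          with: CGSize(width: 3840, height: 2160),
                                          antialiasingMode: .multisampling4X)
// Downscale `stillFrame` afterwards if a smaller final image is needed.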
I am getting my feet wet with iOS SceneKit; however, the following sample code (which executes inside viewDidLoad) is not behaving as expected. I want it to:
place a camera at origin with direction of view towards positive z axis
place a red rectangle parallel to xy-plane at z = 100
Why does the rendering show only a black screen instead of the red rectangle?
let scene = SCNScene()
// prepare camera
let camera = SCNCamera()
camera.zNear = 90
camera.zFar = 110
let cameraNode = SCNNode()
cameraNode.position = SCNVector3Make(0, 0, 0)
cameraNode.rotation = SCNVector4Make(1, 0, 0, Float.pi)
cameraNode.camera = camera
scene.rootNode.addChildNode(cameraNode)
// prepare light
let light = SCNLight()
light.type = .omni
light.color = SKColor(white: 0.3, alpha: 1.0)
let lightNode = SCNNode()
lightNode.light = light;
scene.rootNode.addChildNode(lightNode)
// prepare plane
let plane = SCNPlane(width: 400, height: 400)
plane.firstMaterial!.isDoubleSided = true
plane.firstMaterial!.diffuse.contents = UIColor.red.cgColor
let planeNode = SCNNode(geometry: plane)
planeNode.position = SCNVector3Make(0, 0, 100)
scene.rootNode.addChildNode(planeNode)
// prepare view as SCNView
let sceneView = view as! SCNView
sceneView.backgroundColor = SKColor.black
sceneView.scene = scene
sceneView.delegate = self
sceneView.isJitteringEnabled = true // i.e. improve visual rendering
sceneView.pointOfView = cameraNode
It looks like you're rotating around the x axis instead of the y axis (which would turn the camera to look in the desired direction).
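For completeness, a minimal fix along those lines, keeping the rest of the question's code unchanged:
// Rotate the camera half a turn around the y axis so it looks toward +Z instead of -Z.
cameraNode.rotation = SCNVector4Make(0, 1, 0, Float.pi)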