Swift, SpriteKit: Low FPS with a huge for-loop in update method - swift

Is it normal to have very low FPS (~7fps to ~10fps) with Sprite Kit using the code below?
Use case:
I'm drawing just lines from bottom to top (1024 * 64 lines). A delta value determines how far each line extends for the current frame. These lines make up my CGPath, which is assigned to the SKShapeNode every frame. Nothing else. I'm wondering about the performance of SpriteKit (or maybe of Swift).
Do you have any suggestions to improve the performance?
Code:
import UIKit
import SpriteKit

class SKViewController: UIViewController {

    @IBOutlet weak var skView: SKView!

    var scene: SKScene!
    var lines: SKShapeNode!

    let N: Int = 1024 * 64
    var delta: Int = 0

    override func viewDidLoad() {
        super.viewDidLoad()

        scene = SKScene(size: skView.bounds.size)
        scene.delegate = self

        skView.showsFPS = true
        skView.showsDrawCount = true
        skView.presentScene(scene)

        lines = SKShapeNode()
        lines.lineWidth = 1
        lines.strokeColor = .white

        scene.addChild(lines)
    }
}
extension SKViewController: SKSceneDelegate {

    func update(_ currentTime: TimeInterval, for scene: SKScene) {
        let w: CGFloat = scene.size.width
        let offset: CGFloat = w / CGFloat(N)

        let path = UIBezierPath()

        for i in 0 ..< N { // N -> 1024 * 64 -> 65536
            let x1: CGFloat = CGFloat(i) * offset
            let x2: CGFloat = x1
            let y1: CGFloat = 0
            let y2: CGFloat = CGFloat(delta)

            path.move(to: CGPoint(x: x1, y: y1))
            path.addLine(to: CGPoint(x: x2, y: y2))
        }

        lines.path = path.cgPath

        // Updating delta to simulate the changes
        if delta > 100 {
            delta = 0
        }
        delta += 1
    }
}
Thanks and Best regards,
Aiba ^_^

CPU
65536 is a rather large number. Running that many loop iterations every frame will always be slow. For example, even a test command-line project that only measures the time it takes to run an empty loop:
while true {
    let date = Date().timeIntervalSince1970
    for _ in 1...65536 {}
    let date2 = Date().timeIntervalSince1970
    print(1 / (date2 - date))
}
It will result in ~17 fps. I haven't even applied the CGPath, and it's already appreciably slow.
Dispatch Queue
If you want to keep your game at 60 fps even though building your CGPath is still slow, you can move that work onto a DispatchQueue.
var rendering: Bool = false // remember to make this one an instance property

while true {
    let date = Date().timeIntervalSince1970
    if !rendering {
        rendering = true
        let foo = DispatchQueue(label: "Run The Loop")
        foo.async {
            for _ in 1...65536 {}
            let date2 = Date().timeIntervalSince1970
            print("Render", 1 / (date2 - date))
            rendering = false // reset the flag only once the background work is done
        }
    }
}
This keeps a natural 60 fps experience and lets you keep updating other objects; however, the rendering of your SKShapeNode itself is still quite slow.
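Applied to the question's code, a rough sketch of that idea could build the CGPath on a background queue and only assign it on the main thread (this reuses the lines, N and delta properties from the question; the buildingPath flag is my own addition):
var buildingPath = false // instance property guarding against overlapping background work

func update(_ currentTime: TimeInterval, for scene: SKScene) {
    guard !buildingPath else { return }
    buildingPath = true

    let w = scene.size.width
    let currentDelta = delta

    DispatchQueue.global(qos: .userInteractive).async { [weak self] in
        guard let self = self else { return }
        // Build the path off the main thread.
        let path = CGMutablePath()
        for i in 0 ..< self.N {
            let x = CGFloat(i) * (w / CGFloat(self.N))
            path.move(to: CGPoint(x: x, y: 0))
            path.addLine(to: CGPoint(x: x, y: CGFloat(currentDelta)))
        }
        // Touch the node only on the main thread.
        DispatchQueue.main.async {
            self.lines.path = path
            self.buildingPath = false
        }
    }

    if delta > 100 { delta = 0 }
    delta += 1
}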
GPU
If you'd like to speed up the rendering itself, I would recommend looking into running it on the GPU instead of the CPU. The GPU (Graphics Processing Unit) is much better suited to this kind of work and can handle huge loops without disturbing gameplay. This may require you to write it as an SKShader, for which there are tutorials available.

Check the number of subdivisions
No iOS device has a screen width over 3000 pixels or 1500 points (retina screens have logical points and physical pixels where a point is equivalent to 2 or 3 pixels depending on the scale factor; iOS works with points, but you have to also remember pixels), and the ones that even come close are those with the biggest screens (iPad Pro 12.9 and iPhone Pro Max) in landscape mode.
A typical device in portrait orientation will be less than 500 points and 1500 pixels wide.
You are dividing this width into 65536 parts, and will end up with pixel (not even point) coordinates like 0.00, 0.05, 0.10, 0.15, ..., 0.85, which will actually refer to the same pixel twenty times (my result, rounded up, in an iPhone simulator).
Your code draws twenty to sixty lines in the exact same physical position, on top of each other! Why do that? If you set N to w and use 1.0 for offset, you'll have the same visible result at 60 FPS.
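A minimal sketch of that reduced-subdivision version, reusing the lines and delta properties from the question (the exact count is an assumption; anything around the screen's point width looks the same):
func update(_ currentTime: TimeInterval, for scene: SKScene) {
    let path = UIBezierPath()
    let count = Int(scene.size.width)   // one line per point instead of 65,536
    for i in 0 ..< count {
        let x = CGFloat(i)              // offset of 1.0 point
        path.move(to: CGPoint(x: x, y: 0))
        path.addLine(to: CGPoint(x: x, y: CGFloat(delta)))
    }
    lines.path = path.cgPath

    if delta > 100 { delta = 0 }
    delta += 1
}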
Reconsider the approach
The implementation will still have some drawbacks, though, even if you greatly reduce the amount of work to be done per frame. It's not recommended to advance animation frames in update(_:) since you get no guarantees on the FPS, and you usually want your animation to follow a set schedule, i.e. complete in 1 second rather than 60 frames. (Should the FPS drop to, say, 10, a 60-frame animation would complete in 6 seconds, whereas a 1-second animation would still finish in 1 second, but at a lower frame rate, i.e. skipping frames.)
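If you do stay in update(_:for:), a minimal sketch of driving the height from the clock instead of a frame counter might look like this (the one-second period and the startTime property are my assumptions, not part of the original code):
var startTime: TimeInterval?

func update(_ currentTime: TimeInterval, for scene: SKScene) {
    if startTime == nil { startTime = currentTime }
    let elapsed = currentTime - (startTime ?? currentTime)

    // 0...100 points over one second, regardless of the actual frame rate
    let height = CGFloat(elapsed.truncatingRemainder(dividingBy: 1.0)) * 100.0

    // ... build the path as before, using `height` in place of CGFloat(delta) ...
}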
Visibly, what your animation does is draw a rectangle on the screen whose width fills the screen, and whose height increases from 0 to 100 points. I'd say, a more "standard" way of achieving this would be something like this:
let sprite = SKSpriteNode(color: .white, size: CGSize(width: scene.size.width, height: 100.0))
sprite.yScale = 0.0
scene.addChild(sprite)
sprite.run(SKAction.repeatForever(SKAction.sequence([
    SKAction.scaleY(to: 1.0, duration: 2),
    SKAction.scaleY(to: 0.0, duration: 0.0)
])))
Note that I used SKSpriteNode because SKShapeNode is said to suffer from bugs and performance issues, although people have reported some improvements in the past few years.
But if you do insist on redrawing the entire texture of your sprite every frame due to some specific need, that may indeed be something for custom shaders… But those require learning a whole new approach, not to mention a new programming language.
Your shader would be executed on the GPU for each pixel. I repeat: the code would be executed for each single pixel – a radical departure from the world of SpriteKit.
The shader would have access to a handful of values to work with, such as a normalized set of coordinates (between (0.0, 0.0) and (1.0, 1.0), in a variable called v_tex_coord) and a system time (seconds elapsed since the shader started running, in u_time), and it would need to determine what color value the pixel in question should be – and set it by storing the value in the variable gl_FragColor.
It could be something like this:
void main() {
    // default to a black color, i.e. a three-component vector vec3(0.0, 0.0, 0.0):
    vec3 color = vec3(0.0);

    // take the fractional part of the time in seconds;
    // this value goes from 0.0 to 0.9999… every second, then drops back to 0.0.
    // use it as the relative height of the area we want to paint white:
    float height = fract(u_time);

    // check if the current pixel is below the height of the white area:
    if (v_tex_coord.y < height) {
        // if so, set it to white (a three-component vector vec3(1.0, 1.0, 1.0)):
        color = vec3(1.0);
    }

    gl_FragColor = vec4(color, 1.0); // the fourth component is the alpha
}
Put this in a file called shader.fsh, create a full-screen sprite mySprite, and assign the shader to it:
mySprite.shader = SKShader.init(fileNamed: "shader.fsh")
Once you display the sprite, its shader will take care of all of the rendering. Note, however, that your sprite will lose some SpriteKit functionalities as a result.
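For completeness, a minimal sketch of creating that full-screen sprite and attaching the shader (mySprite and the black placeholder color are assumptions):
// A full-screen sprite whose pixels are produced entirely by the shader above.
let mySprite = SKSpriteNode(color: .black, size: scene.size)
mySprite.position = CGPoint(x: scene.size.width / 2, y: scene.size.height / 2)
mySprite.shader = SKShader(fileNamed: "shader.fsh")
scene.addChild(mySprite)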

Related

Gravity value in SpriteKit game scene

I'm trying to create a game using Apple's SpriteKit game engine.
While implementing some physics-based calculations in the game, I noticed that the calculated results differ from what effectively then happens to objects.
Example: calculating a body's trajectory with the projectile motion equations causes the body to actually fall down much sooner/quicker than calculated.
How can I make the physics engine match the real-world physics laws when calculating something gravity-related?
I think I know what's going on with the sample code you have supplied on GitHub, which I'll reproduce here as questions on SO should contain the code:
//
// GameScene.swift
// SpriteKitGravitySample
//
// Created by Emilio Schepis on 17/01/2020.
// Copyright © 2020 Emilio Schepis. All rights reserved.
//
import SpriteKit
import GameplayKit
class GameScene: SKScene {
    private var subject: SKNode!

    override func didMove(to view: SKView) {
        super.didMove(to: view)

        // World setup (no friction, default gravity)
        // Note that this would work with any gravity set to the scene.
        physicsBody = SKPhysicsBody(edgeLoopFrom: frame)
        physicsBody?.friction = 0

        subject = SKShapeNode(circleOfRadius: 10)
        subject.position = CGPoint(x: frame.midX, y: 30)
        subject.physicsBody = SKPhysicsBody(circleOfRadius: 10)
        subject.physicsBody?.allowsRotation = false

        // Free falling body (no damping)
        subject.physicsBody?.linearDamping = 0
        subject.physicsBody?.angularDamping = 0
        addChild(subject)

        // Set an arbitrary velocity to the body
        subject.physicsBody?.velocity = CGVector(dx: 30, dy: 700)

        // Inaccurate prediction of position over time
        for time in stride(from: CGFloat(0), to: 1, by: 0.01) {
            let inaccuratePosition = SKShapeNode(circleOfRadius: 2)
            inaccuratePosition.strokeColor = .red

            // These lines use the projectile motion equations as-is.
            // https://en.wikipedia.org/wiki/Projectile_motion#Displacement
            let v = subject.physicsBody?.velocity ?? .zero
            let x = v.dx * time
            let y = v.dy * time + 0.5 * physicsWorld.gravity.dy * pow(time, 2)

            inaccuratePosition.position = CGPoint(x: x + subject.position.x,
                                                  y: y + subject.position.y)
            addChild(inaccuratePosition)
        }

        // Actual prediction of position over time
        for time in stride(from: CGFloat(0), to: 1, by: 0.01) {
            let accuratePosition = SKShapeNode(circleOfRadius: 2)
            accuratePosition.strokeColor = .green

            // These lines use the projectile motion equations
            // as if the gravity was 150 times stronger.
            // The subject follows this curve perfectly.
            let v = subject.physicsBody?.velocity ?? .zero
            let x = v.dx * time
            let y = v.dy * time + 0.5 * physicsWorld.gravity.dy * 150 * pow(time, 2)

            accuratePosition.position = CGPoint(x: x + subject.position.x,
                                                y: y + subject.position.y)
            addChild(accuratePosition)
        }
    }
}
What you've done is to:
1) Create an object called subject with a physicsBody and place it on screen with an initial velocity.
2) Plot predicted positions for an object with that velocity under gravity via the inaccuratePosition node, using the equations of motion (s = ut + ½at²).
3) Plot predicted positions for an object with that velocity under gravity * 150 via the accuratePosition node, using the same equations.
All this is in didMove(to:). When the simulation runs, the path of the node subject follows the accuratePosition path accurately.
I think what's happening is that you are calculating the predicted position using subject's physicsBody's velocity, which is in m/s, but the position is in points, so what you should do is convert m/s into point/s first.
So what's the scale factor? Well, from Apple's documentation here, it's.... 150, which is too much of a coincidence 😀, so I think that's the problem.
Bear in mind that you set the vertical velocity of your object to 700m/s - that's 1500mph or 105000 SK point/s. You'd expect it to simply disappear out through the top of the screen at high speed, as predicted by your red path. The screen is somewhere between 1,000 and 2,000 points.
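Putting that together, a hedged sketch of the corrected y prediction, using the 150 points-per-metre figure from the documentation (pointsPerMeter is my own name, not an API constant):
let pointsPerMeter: CGFloat = 150
let v = subject.physicsBody?.velocity ?? .zero
// gravity is documented in metres, so convert it to points before predicting in point space
let gravityInPoints = physicsWorld.gravity.dy * pointsPerMeter
let y = v.dy * time + 0.5 * gravityInPoints * pow(time, 2)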
Edit - I created a sample project to demonstrate the calculated paths with and without the multiplier.
https://github.com/emilioschepis/spritekit-gravity-sample
TL;DR - When calculating something gravity-related in SpriteKit multiply the gravity of the scene by 151 to obtain an accurate result.
When trying to solve this issue I first started reading the SpriteKit documentation related to gravity:
https://developer.apple.com/documentation/spritekit/skphysicsworld/1449623-gravity
The documentation says:
The components of this property are measured in meters per second. The default value is (0.0, -9.8), which represents Earth's gravity.
Gravity, however, is measured in m/s², not in m/s.
Assuming this was an error in the implementation of gravity in SpriteKit, I began to think that maybe real-world physics laws simply could not be applied in the scene.
I did, however, come across another documentation page about the linear gravity field that correctly reported that gravity is measured in m/s^2 in SpriteKit.
https://developer.apple.com/documentation/spritekit/skfieldnode/1520145-lineargravityfield
I then set up a simple free-falling scene where I applied an initial velocity to a physics body and calculated the expected trajectory, comparing it to the actual one.
The x-axis calculations were accurate from the start, suggesting that the only problem was with the gravity's value.
I then tried manually modifying the gravity in the scene until the actual trajectory matched the predicted one.
What I found is that there is a "magic" value of ~151 that has to be factored in when using the physics world's gravity property in the game.
Modifying, for example, the y-axis calculations for the trajectory from
let dy = velocity.dy * time + 0.5 * gravity * pow(time, 2)
to
let dy = velocity.dy * time + 0.5 * 151 * gravity * pow(time, 2)
resulted in accurate calculations.
I hope this is useful to anyone who might encounter the same problem in the future.

How do I create a blinking effect with SKEmitterNode?

I have used the particle emitter to create a background with stars. It looks ok, but I would like them to blink, or flicker. The closest I get is when I change the birthrate and lifetime variables so that particles disappear and appear at different places. I would like the particles to remain in the same place and fade in and out, randomly, though. Any ideas on how to do this? This is what I've got so far:
I don't think you can do much directly in the editor. If you're comfortable working with code for adjusting the emitter, you have a couple of possibilities: setting a particle action to animate color or alpha or scale or texture, or a custom shader to do whatever sort of animation. (I'm assuming based on your picture with a basically infinite lifetime that you don't want things to move or disappear. That may rule out keyframing, but perhaps having the keyframe sequence set to repeat mode with the frames spaced by really tiny values would work.)
Another possibility since positions are static would be to just make some fixed sprites scattered around at random and have them run actions to animate them. We've used this approach before with ~100 animated sprites against a backdrop that has a bunch of dimmer stars, and it looked pretty good. Something along these lines:
let twinklePeriod = 8.0
let twinkleDuration = 0.5
let bright = CGFloat(0.3)
let dim = CGFloat(0.1)
let brighten = SKAction.fadeAlpha(to: bright, duration: 0.5 * twinkleDuration)
brighten.timingMode = .easeIn
let fade = SKAction.fadeAlpha(to: dim, duration: 0.5 * twinkleDuration)
fade.timingMode = .easeOut
let twinkle = SKAction.repeatForever(.sequence([brighten, fade, .wait(forDuration: twinklePeriod - twinkleDuration)]))
for _ in 0 ..< 100 {
    let star = SKSpriteNode(imageNamed: "star")
    star.position = CGPoint(x: .random(in: minX ... maxX), y: .random(in: minY ... maxY))
    star.alpha = dim
    star.speed = .random(in: 0.5 ... 1.5)
    star.run(.sequence([.wait(forDuration: .random(in: 0 ... twinklePeriod)), twinkle]))
    addChild(star)
}
That's cut-and-pasted from various bits and simplified some, so there may be typos, but it should give the idea. If you keep the emitter, you can try something like the twinkle above as the particle action. I don't see how you can change the relative periods of particles though like you could with separate sprites, and the only offsets would come from differences in the birth time of the particles.
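A rough sketch of that, assuming an SKEmitterNode property called emitter and reusing the brighten/fade actions defined above:
// Every particle runs this action over its lifetime.
let particleTwinkle = SKAction.repeatForever(.sequence([
    brighten,
    fade,
    .wait(forDuration: twinklePeriod - twinkleDuration)
]))
emitter.particleAction = particleTwinkle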

Spritekit SKShapeNode Randomize Scale, Swift

How would I make an SKShapeNode scale at randomized sizes forever while not exceeding a maximum set size and not smaller than a minimum set size?
One way to do this (if I understand your question correctly) would be to have 3 properties of type TimeInterval that track how long it has been since the sprite was last scaled, how often the sprite should be scaled (initialised here as 0.5 s), and how long the scale action takes (by default the same as the time between scales):
var timeOfLastScale: CFTimeInterval = 0.0
var timePerScale: CFTimeInterval = 0.5
var scaleTime: CFTimeInterval = 0.5 // duration of each scale action; same as timePerScale by default
We initialise the time since the last scale to 0, as it hasn't happened yet. I also use the timePerScale as the duration of the scale effect, so as soon as it stops scaling, a new scale action starts i.e. the node is constantly scaling. These can be modified in code for different effects.
We also need 2 properties that define the maximum and minimum scale sizes (from 0-100%) and a computed property of the overall scaling range (we make this a computed property so that if you change the max or min scale factors in your code, you don't have to re-calculate scaleRange):
var maxScale: UInt32 = 100
var minScale: UInt32 = 25
var scaleRange: UInt32 {
    return maxScale - minScale
}
(I'm assuming that the node can scale between 25% and 100% of its normal size.)
In update(_:), call a function that checks whether the scale time interval has passed. If it has, create and run a new SKAction to scale the node and reset the time of the last scale.
If the scale time hasn't passed yet, do nothing:
override func update(_ currentTime: TimeInterval) {
    scaleNode(currentTime)
    // Rest of update code
}

func scaleNode(_ currentTime: CFTimeInterval) {
    let timeSinceLastScale = currentTime - timeOfLastScale
    if timeSinceLastScale < timePerScale { return }

    // Time per scale has passed, so calculate a new scale action and re-scale the node...
    let scaleFactor = CGFloat(arc4random_uniform(scaleRange) + minScale) / 100
    let scaleAction = SKAction.scale(to: scaleFactor, duration: scaleTime)
    nodeToBeScaled.run(scaleAction)
    timeOfLastScale = currentTime
}
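As an alternative that avoids the update(_:) bookkeeping entirely, here is a rough, purely action-based sketch using the same 25%-100% range (it assumes nodeToBeScaled is a property of the scene):
// Pick a new random scale every 0.5 s, forever.
let rescale = SKAction.run { [unowned self] in
    let scaleFactor = CGFloat.random(in: 0.25 ... 1.0)
    self.nodeToBeScaled.run(SKAction.scale(to: scaleFactor, duration: 0.5))
}
nodeToBeScaled.run(.repeatForever(.sequence([rescale, .wait(forDuration: 0.5)])))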

Simple SpriteKit game performance issues - Swift

Apologies in advance as I'm not sure exactly what the right question is. The problems that I'm ultimately trying to address are:
1) Game gets laggy at times
2) CPU % can get high, as much as 50-60% at times, but is also sometimes relatively low (<20%)
3) Device (iPhone 6s) can get slightly warm
I believe what's driving the lagginess is that I'm constantly creating and removing circles in the SKScene. It's pretty much unavoidable because the circles are a critical element to the game and I have to constantly change their size and physicsBody properties so there's not much I can do in terms of reusing nodes. Additionally, I'm moving another node almost constantly.
func addCircle() {
    let attributes = getTargetAttributes() // sets size, position, and color of the circle
    let target = /*SKShapeNode()*/ SKShapeNode(circleOfRadius: attributes.size.width)
    let outerPathRect = CGRect(x: 0, y: 0, width: attributes.size.width * 2, height: attributes.size.width * 2)
    target.position = attributes.position
    target.fillColor = attributes.color
    target.strokeColor = attributes.stroke
    target.lineWidth = 8 * attributes.size.width / 35
    target.physicsBody = SKPhysicsBody(circleOfRadius: attributes.size.width)
    addStandardProperties(node: target, name: "circle", z: 5, contactTest: ContactCategory, category: CircleCategory) // Sets physicsBody properties
    addChild(target)
}
The getAttributes() function is not too costly. It does have a while loop to set the circle position, but it doesn't usually get used when the function is called. Otherwise, it's simple math.
Some other details:
1) The app runs at a constant 120 fps. I've tried setting the scene/view lower by adding view.preferredFramesPerSecond = 60 in GameScene.swift and gameScene.preferredFramesPerSecond = 60 in GameViewController. Neither one of these does anything to change the fps. Normally when I've had performance issues in other apps, the fps dipped, however, that isn't happening here.
2) I’ve tried switching the SKShapeNode initializer to use a path versus circleOfRadius and then resetting the path. I’ve also tried images, however, because I have to reset the physicsBody, there doesn’t appear to be a performance gain.
3) I tried changing the physicsWorld speed, but this also had little effect.
4) I've also used Instruments to try to identify the issue. There are big chunks of resources being used by SKRenderer, however, I can't find much information on this.
Creating SKShapeNodes is inefficient; try to do it as few times as you can. Instead, create a template shape and convert it to an SKSpriteNode.
If you need to change the size, use xScale and yScale; if you need to change the color, use color with a colorBlendFactor of 1.
If you need to have a varying color stroke, then change the below code to have 2 SKSpriteNodes, 1 SKSpriteNode that handles only the fill, and 1 SKSpriteNode that handles only the stroke. Have the stroke sprite be a child of the fill sprite with a zPosition of 0 and set the stroke color to white. You can then apply the color and colorBlendFactor to the child node of the circle to change the color.
lazy var circle: SKSpriteNode = {
    let target = SKShapeNode(circleOfRadius: 1000)
    target.fillColor = .white
    //target.strokeColor = .black // if the stroke is anything other than black, you may need 2 SKSpriteNodes that layer each other
    target.lineWidth = 8 * 1000 / 35
    let texture = SKView().texture(from: target)
    let spr = SKSpriteNode(texture: texture)
    spr.physicsBody = SKPhysicsBody(circleOfRadius: 1000)
    self.addStandardProperties(node: spr, name: "circle", z: 5, contactTest: ContactCategory, category: CircleCategory) // Sets physicsBody properties
    return spr
}()
func createCircle(of radius: CGFloat, color: UIColor) -> SKSpriteNode {
    let spr = circle.copy() as! SKSpriteNode
    let scale = radius / 1000.0
    spr.xScale = scale
    spr.yScale = scale
    spr.color = color
    spr.colorBlendFactor = 1.0
    return spr
}
func addCircle() {
    let attributes = getTargetAttributes() // sets size, position, and color of the circle
    let spr = createCircle(of: attributes.size.width, color: attributes.color)
    spr.position = attributes.position
    addChild(spr)
}

SceneKit's performance with a cube test

In learning 3d graphics programming for games I decided to start off simple by using the Scene Kit 3D API. My first gaming goal was to build a very simplified mimic of MineCraft. A game of just cubes - how hard can it be.
Below is a loop I wrote to place a grid of 100 x 100 cubes (10,000) and the FPS performance was abysmal (~20 FPS). Is my initial gaming goal too much for Scene Kit or is there a better way to approach this?
I have read other topics on StackExchange but don't feel they answer my question. Converting the exposed surface blocks to a single mesh won't work as the SCNGeometry is immutable.
func createBoxArray(scene: SCNScene, lengthCount: Int, depthCount: Int) {
    let startX: CGFloat = -(CGFloat(lengthCount) * CUBE_SIZE) + (CGFloat(lengthCount) * CUBE_MARGIN) / 2.0
    let startY: CGFloat = 0.0
    let startZ: CGFloat = -(CGFloat(lengthCount) * CUBE_SIZE) + (CGFloat(lengthCount) * CUBE_MARGIN) / 2.0

    var currentZ: CGFloat = startZ
    for z in 0 ..< depthCount {
        currentZ += CUBE_SIZE + CUBE_MARGIN
        var currentX = startX
        for x in 0 ..< lengthCount {
            currentX += CUBE_SIZE + CUBE_MARGIN
            createBox(scene, x: currentX, y: startY, z: currentZ)
        }
    }
}

func createBox(_ scene: SCNScene, x: CGFloat, y: CGFloat, z: CGFloat) {
    let box = SCNBox(width: CUBE_SIZE, height: CUBE_SIZE, length: CUBE_SIZE, chamferRadius: 0.0)
    box.firstMaterial?.diffuse.contents = NSColor.purple
    let boxNode = SCNNode(geometry: box)
    boxNode.position = SCNVector3Make(x, y, z)
    scene.rootNode.addChildNode(boxNode)
}
UPDATE 12-30-2014:
I modified the code so the SCNBoxNode is created once and then each additional box in the array of 100 x 100 is created via:
var newBoxNode = firstBoxNode.clone()
newBoxNode.position = SCNVector3Make(x, y, z)
This change appears to have increased FPS to ~30fps. The other statistics are as follows (from the statistics displayed in the SCNView):
10K (I assume this is draw calls?)
120K (I assume this is faces)
360K (Assuming this is the vertex count)
The bulk of the run loop is in Rendering (I'm guesstimating 98%). The total loop time is 26.7ms (ouch). I'm running on a Mac Pro Late 2013 (6-core w/Dual D500 GPU).
Given that a MineCraft style game has a landscape that constantly changes based on the player's actions, I don't see how I can optimize this within the confines of Scene Kit. A big disappointment as I really like the framework. I'd love to hear someone's ideas on how I can address this issue - without that, I'm forced to go with OpenGL.
UPDATE 12-30-2014 # 2:00pm ET:
I am seeing a significant performance improvement when using flattenedClone(). The FPS is now a solid 60fps even with more boxes and TWO drawing calls. However, accommodating a dynamic environment (as MineCraft supports) is still proving problematic - see below.
Since the array would change composition over time, I added a keyDown handler to add an even larger box array to the existing one, and timed the difference between adding the array of boxes directly (resulting in far more draw calls) versus adding it as a flattenedClone. Here's what I found:
On keyDown I add another array of 120 x 120 boxes (14,400 boxes)
// This took .0070333 milliseconds
scene?.rootNode.addChildNode(boxArrayNode)
// This took .02896785 milliseconds
scene?.rootNode.addChildNode(boxArrayNode.flattenedClone())
Calling flattenedClone() again is 4x slower than adding the array.
This results in two drawing calls having 293K faces and 878K vertices. I'm still playing with this and will update if I find anything new. Bottom line, with my additional testing I still feel Scene Kit's immutable geometric constraints mean I can't leverage the framework.
As you mentioned Minecraft, I think it's worth looking at how it works.
I have no technical details or code sample for you, but everything should be pretty straightforward:
Have you ever played Minecraft online when the terrain hasn't loaded yet, letting you see right through it? That's because there is no geometry inside.
Let's assume I have a 2x2x2 array of cubes. That makes 2*2*2*6*2 = 96 triangles.
However, if you test and draw only the polygons visible from the camera's point of view, for example by testing the normals (easy since they're cubes), this number goes down to 48 triangles.
If you find a way to detect which faces are occluded by other ones (which shouldn't be too hard either, considering you're working with flat, square, grid-based faces), you can draw only those that are visible. That way, we're drawing between 8 and 24 triangles. That's up to 90% optimisation.
If you want to get really deep, you can even combine faces to make a single N-gon out of the visible, flat faces. You can do that if you create a new way to generate the geometry on the fly that combines the two previous methods and tests for adjacent visible faces on the same plane.
If you succeed, we're talking 2 to 6 polygons instead of 96, to render 8 cubes.
Note that the last method only works if your blocks are touching each other.
There is probably a ton of Minecraft-like renderer papers, a few googles will help you figure it out!
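To make the face-combining idea concrete, here is a rough, hypothetical sketch that builds a single SCNGeometry containing only the top faces of a flat grid of cubes, instead of one SCNBox node per cube (names like cubeSize are my own, and occlusion testing plus the other five faces are left out):
func topFacesGeometry(lengthCount: Int, depthCount: Int, cubeSize: CGFloat) -> SCNGeometry {
    var vertices: [SCNVector3] = []
    var indices: [Int32] = []
    for z in 0 ..< depthCount {
        for x in 0 ..< lengthCount {
            let x0 = CGFloat(x) * cubeSize
            let z0 = CGFloat(z) * cubeSize
            let base = Int32(vertices.count)
            // four corners of this cube's top face
            vertices.append(SCNVector3(x0, cubeSize, z0))
            vertices.append(SCNVector3(x0 + cubeSize, cubeSize, z0))
            vertices.append(SCNVector3(x0 + cubeSize, cubeSize, z0 + cubeSize))
            vertices.append(SCNVector3(x0, cubeSize, z0 + cubeSize))
            // two triangles per face (winding may need flipping depending on your camera)
            indices += [base, base + 2, base + 1, base, base + 3, base + 2]
        }
    }
    let source = SCNGeometrySource(vertices: vertices)
    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    return SCNGeometry(sources: [source], elements: [element])
}
Adding the result once, as a single node, keeps the whole grid in one draw call, which is essentially the effect flattenedClone() gave in the update above.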
Why does drop-frame occur?
September 04, 2022
Almost 8 years have passed since you asked this question, but not much has changed...
1. Polygons' count
The number of polygons in a SceneKit or RealityKit scene must not exceed 100,000 triangles. An ideal SceneKit scene, one capable of rendering all of its models quickly, should contain fewer than 50,000 polygons. Your scene contains 120,000 polygons. Do not forget that SceneKit renders models on a single thread (unlike the multi-threaded RealityKit renderer).
2. Shaders
In Xcode 14.0+, the default .lightingModel of any library primitive set up in the Material Inspector (the UI version) is the .physicallyBased material. This is the most computationally intensive shader. The programmatic default .lightingModel for SCN procedural geometry is the .blinn shading model. The least computationally intensive shader is .constant (it doesn't depend on lighting).
3. What's inside a frustum
If all 10,000 cubes are inside the SceneKit camera's frustum, the frame rate will be 20-30 fps. But if you dolly in on the cube grid so that no more than about a ninth of it is visible, the frame rate will be 60 fps. In other words, SceneKit does not render objects that are outside the frustum's bounds.
4. Number of meshes in SCNScene
Each model mesh results in a draw call. To achieve 60 fps, all the draw calls for a frame must complete in 16 milliseconds or less. For best performance, Apple engineers advise limiting the number of meshes in a .usdz file to around 50. Unfortunately, I did not find a corresponding value for .scn files in the official documentation.
5. Lighting and shadows
Lighting and shadowing (especially shadowing) are very computationally intensive tasks. The general advice is the following – avoid using .forward shadows and hi-rez textures with fake shadows.
Look at this post for details.
SwiftUI code for testing
Xcode 14.0+, SwiftUI 4.0+, Swift 5.7
import SwiftUI
import SceneKit

struct ContentView: View {
    var scene = SCNScene()
    var options: SceneView.Options = [.allowsCameraControl]

    var body: some View {
        ZStack {
            ForEach(-50...49, id: \.self) { x in
                ForEach(-50...49, id: \.self) { z in
                    let _ = DispatchQueue.global().async {
                        scene.rootNode.addChildNode(createCube(x, 0, z))
                    }
                }
            }
            SceneView(scene: scene, options: options)
                .ignoresSafeArea()
            let _ = scene.background.contents = UIColor.black
        }
    }

    func createCube(_ posX: Int, _ posY: Int, _ posZ: Int) -> SCNNode {
        let geo = SCNBox(width: 0.5, height: 0.5, length: 0.5,
                         chamferRadius: 0.0)
        geo.firstMaterial?.lightingModel = .constant
        let boxNode = SCNNode(geometry: geo)
        boxNode.position = SCNVector3(posX, posY, posZ)
        return boxNode
    }
}
Here, all cubes are within the viewing frustum, so there are obvious reasons for a drop-frame.
And here, just a part of a scene is within the viewing frustum, so there is no drop-frame.