How to use SpriteKit Particle File in a project developed with SceneKit? - swift

I'm making a simple ARKit App with SceneKit and need to add some particle effect. Usually I could create a SceneKit Particle File, design some particle effect and use it like this
let smoke = SCNParticleSystem(named: "xxx.scnp", inDirectory: nil)!
let hole = SCNNode()
hole.addParticleSystem(smoke)
But since Xcode 12, I can only create a SpriteKit Particle File, whose suffix is .sks. The code above no longer works with this file. I am a newbie in ARKit. Could anyone tell me how to integrate a particle effect into my SceneKit project? Thanks a lot.

SpriteKit particles implementation
If you want to use 2D particles in a 3D scene, there are two common ways to show SpriteKit content in SceneKit. The first way is to assign an SKScene as the material contents of a SceneKit geometry:
// 2D scene – give it an explicit size so it can be rendered as a texture
let skScene = SKScene(size: CGSize(width: 100, height: 100))
guard let emitter = SKEmitterNode(fileNamed: "sparks.sks")
else { return }
emitter.position = CGPoint(x: 10, y: 25)
emitter.targetNode = skScene
skScene.addChild(emitter)
// 3D scene
let plane = SCNPlane(width: 5, height: 5)
plane.materials.first?.diffuse.contents = skScene
let planeNode = SCNNode(geometry: plane)
sceneView.scene.rootNode.addChildNode(planeNode)
The second way is to assign the SKScene to SceneKit's lighting environment (the scene's virtual surroundings):
sceneView.scene.lightingEnvironment.contents = skScene
SceneKit particles implementation
The regular way for working with particles in SceneKit is its native particle system:
let particleSystem = SCNParticleSystem()
particleSystem.birthRate = 50
particleSystem.particleSize = 0.5
particleSystem.particleLifeSpan = 10
particleSystem.particleColor = .green
let particlesNode = SCNNode()
particlesNode.addParticleSystem(particleSystem)
sceneView.scene.rootNode.addChildNode(particlesNode)
P. S.
You can read about the deprecated preconfigured SCNP file and its replacement HERE.
If you want to set up a phenomenon such as 3D fire, the best way is to do it in the Scene graph; for a better result, use 2 or 3 particle system objects with different parameters.
To better understand how to set up a 3D fire based on PNG samples (sources), open an Apple Motion 5 scene and click the Inspector button.
Then try to implement a fire with the same look in SceneKit's Scene graph.
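As an illustration, here is a minimal sketch (not from the original answer) of layering two particle systems configured in code – the texture names and parameter values are assumptions you would tune for your own assets:
let fireNode = SCNNode()

// Bright, fast inner flames
let flames = SCNParticleSystem()
flames.particleImage = UIImage(named: "flame.png")    // hypothetical texture
flames.birthRate = 150
flames.particleLifeSpan = 1.0
flames.particleSize = 0.3
flames.particleVelocity = 1.5
flames.particleColor = .orange
flames.blendMode = .additive

// Slow, dark smoke above them
let smoke = SCNParticleSystem()
smoke.particleImage = UIImage(named: "smoke.png")     // hypothetical texture
smoke.birthRate = 20
smoke.particleLifeSpan = 4.0
smoke.particleSize = 0.8
smoke.particleVelocity = 0.5
smoke.particleColor = .darkGray

fireNode.addParticleSystem(flames)
fireNode.addParticleSystem(smoke)
sceneView.scene.rootNode.addChildNode(fireNode)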

For some unknown reason Apple removed the SCNP (SceneKit particle file) template from the recent Xcode editions.
You have the following options:
Download any sample code project from Apple or GitHub that contains one or more .scnp files. Grab the file from there and add it to your AR project. You can still edit the file within Xcode as usual (Particle System editor).
Create your particle system entirely in code, like so:
let myParticleSystem = SCNParticleSystem()
Then configure everything you want on the myParticleSystem instance.
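For example, a minimal configuration sketch (the texture name and values are assumptions, not part of the original answer):
myParticleSystem.particleImage = UIImage(named: "spark.png")   // hypothetical texture
myParticleSystem.emitterShape = SCNSphere(radius: 0.1)
myParticleSystem.birthRate = 100
myParticleSystem.particleLifeSpan = 2
myParticleSystem.particleSize = 0.05
myParticleSystem.spreadingAngle = 45

let emitterNode = SCNNode()
emitterNode.addParticleSystem(myParticleSystem)
sceneView.scene.rootNode.addChildNode(emitterNode)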


ARKit – How to track iPhone camera location during ARSession?

I'm very new to Xcode, so any and all help would be a godsend. I'm trying to write an app that saves the positional and rotational data from an iPhone at a set interval and saves it to a file. Right now, I'm not sure where to look when it comes to getting that data.
CoreMotion seems to not be enough so I'm using ARKit. I have a sceneView where I can see the origin and the feature points, but again, I'm stuck when it comes to where or even if the camera's position is tracked.
You can retrieve ARCamera's position and rotation via Transform Matrix (simd_float4x4). This data is contained inside every ARFrame of a running ARSession (for selfie or rear camera).
let sceneView = ARSCNView(frame: .zero)
sceneView.delegate = self

// currentFrame is nil until the session has produced at least one frame
guard let frame = sceneView.session.currentFrame else { return }
let cameraPosition: simd_float4 = frame.camera.transform.columns.3
let cameraRotation: simd_float3 = frame.camera.eulerAngles
The best place for these lines is a SceneKit renderer delegate method, such as renderer(_:didUpdate:for:). Take into consideration that the ARCamera transform values, which come from the IMU sensors, are specially filtered.
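If the goal is to sample the pose at a regular interval and save it (as in the question), here's a minimal sketch using ARSessionDelegate's per-frame callback – the recordedPoses storage is a hypothetical property, not part of the original answer:
// Requires sceneView.session.delegate = self
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Called for every frame (~60 Hz); throttle if you only need a coarser interval
    let t = frame.camera.transform
    let position = simd_float3(t.columns.3.x, t.columns.3.y, t.columns.3.z)
    let rotation = frame.camera.eulerAngles
    recordedPoses.append((frame.timestamp, position, rotation))   // hypothetical storage array
}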

Change Texture In Scene Editor Programmatically

I've added a color sprite, named "arm" to my Scene Editor but for a bunch of reasons I need to access that color sprite and change the texture programmatically. Another way of stating my goal, I want to link the node to my GameScene swift file and change texture using a texture from an array. Does anyone know how to do this?
First things first, you want to link the sprite node from your GameScene.sks file to your GameScene.swift file.
To do this, in GameScene.swift near the top (under the "class GameScene: SKScene {") use this:
var armNode: SKSpriteNode?
In your didMove(to view:) section do this:
armNode = childNode(withName: "arm") as? SKSpriteNode
Now 3 different ways to change the color/texture of the SKSpriteNode depending on what you are doing:
armNode?.color = UIColor.red
armNode?.texture = SKTexture(imageNamed: "your arm image here")
armNode?.texture = textureArray[2]
The third version uses a texture from an array of textures.
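For context, here's a minimal sketch of the whole round trip – the texture names and the tap trigger are assumptions, not part of the original answer:
import SpriteKit

class GameScene: SKScene {
    var armNode: SKSpriteNode?
    // Hypothetical texture set to cycle through
    let textureArray = [SKTexture(imageNamed: "arm1"),
                        SKTexture(imageNamed: "arm2"),
                        SKTexture(imageNamed: "arm3")]
    var textureIndex = 0

    override func didMove(to view: SKView) {
        armNode = childNode(withName: "arm") as? SKSpriteNode
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Swap to the next texture on every tap
        textureIndex = (textureIndex + 1) % textureArray.count
        armNode?.texture = textureArray[textureIndex]
    }
}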

SceneKit shader modifiers

I'm writing a 3D space game using SceneKit and I'm pretty happy with it so far, but I'm starting to hit some limitations and would like some advice.
I have a sphere which represents a star that sits on a point light, and I'd like to add some effects to this to make it look more realistic. I think I should use the sphere's shaderModifiers to do this, but I'm not sure which modifiers I should be looking at i.e. to achieve a lens flare effect. Infact if anyone can give me a clear explanation of the differences between: ShaderModifiers, SCNProgram and SCNTechnique that would be great!
I'd like to draw a 1px circle to represent an orbit. I've tried using a cylinder that is really thin, but this results in some visual artefacts (the ring seems to have gaps at larger distances and break up). Any ideas how I can do this and maintain a nice smooth circle?
The shortest way to get a flare effect for SceneKit models is to use the Core Image framework's capabilities.
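For instance, here's a minimal sketch (not from the original answer) that generates a halo texture with Core Image and puts it on a camera-facing plane; the filter parameters, image size and starNode are assumptions to adapt to your scene:
// Generate a halo texture with Core Image
let haloFilter = CIFilter(name: "CILenticularHaloGenerator")
haloFilter?.setValue(CIVector(x: 256, y: 256), forKey: "inputCenter")
haloFilter?.setValue(CIColor(red: 1.0, green: 0.9, blue: 0.6), forKey: "inputColor")
haloFilter?.setValue(60, forKey: "inputHaloRadius")
haloFilter?.setValue(20, forKey: "inputHaloWidth")

guard let output = haloFilter?.outputImage?.cropped(to: CGRect(x: 0, y: 0, width: 512, height: 512)),
      let haloImage = CIContext().createCGImage(output, from: output.extent)
else { return }

// Put the halo on a plane that always faces the camera
let flarePlane = SCNPlane(width: 3, height: 3)
flarePlane.firstMaterial?.diffuse.contents = haloImage
flarePlane.firstMaterial?.blendMode = .add                 // additive blending reads as a glow
flarePlane.firstMaterial?.writesToDepthBuffer = false
let flareNode = SCNNode(geometry: flarePlane)
flareNode.constraints = [SCNBillboardConstraint()]
starNode.addChildNode(flareNode)                           // starNode is the star sphere from the question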
To make a visible orbit, use a regular PNG-image technique with a premultiplied alpha channel (RGBxA). Look at this link to see what a PNG image with orbits looks like (save the file). And if you want the plane to automatically face the camera, use a constraint.
And here's the code:
let orbitsOnPlane = SCNPlane(width: 10, height: 10)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "overlayOrbits.png")
orbitsOnPlane.materials = [material]
let orbitNode = SCNNode()
orbitNode.geometry = orbitsOnPlane

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()

// Keep the orbit plane oriented toward the camera
let constraint = SCNBillboardConstraint()
orbitNode.constraints = [constraint]

What is ARAnchor exactly?

I'm trying to understand and use ARKit. But there is one thing that I cannot fully understand.
Apple said about ARAnchor:
A real-world position and orientation that can be used for placing objects in an AR scene.
But that's not enough. So my questions are:
What is ARAnchor exactly?
What are the differences between anchors and feature points?
Is ARAnchor just part of feature points?
And how does ARKit determine its anchors?
Updated: February 02, 2023.
TL;DR
ARAnchor
ARAnchor is an invisible null-object that holds a 3D model at the anchor's position. Think of ARAnchor as a parent transform node for your model that you can translate, rotate and scale like any other SceneKit node. Every 3D model has a pivot point, right? Thus, this pivot point must match the location of an ARAnchor in your AR app.
If you're not using anchors in an ARKit or ARCore app (in RealityKit, however, it's impossible not to use anchors, because they are an integral part of the scene), your 3D models may drift from where they were placed, and this will dramatically impact your app's realism and user experience. Thus, anchors are crucial elements of any AR scene.
According to ARKit 2017 documentation:
ARAnchor is a real-world position and orientation that can be used for placing objects in AR Scene. Adding an anchor to the session helps ARKit to optimize world-tracking accuracy in the area around that anchor, so that virtual objects appear to stay in place relative to the real world. If a virtual object moves, remove the corresponding anchor from the old position and add one at the new position.
ARAnchor is the parent class of 10 other anchor types in ARKit, so all of those subclasses inherit from it. Usually you do not use ARAnchor directly. I must also say that ARAnchor and Feature Points have nothing in common – Feature Points are rather special visual elements for tracking and debugging.
ARAnchor doesn't automatically track a real-world target. If you need automation, use the renderer(...) or session(...) instance methods, which you can implement if you conform to the ARSCNViewDelegate or ARSessionDelegate protocol, respectively.
Here's an image with a visual representation of a plane anchor. Keep in mind: by default, you can see neither a detected plane nor its corresponding ARPlaneAnchor. So, if you want to see the anchor in your scene, you may "visualize" it using three thin SCNCylinder primitives, where each cylinder's color represents a particular axis: RGB is XYZ.
In ARKit you can automatically add ARAnchors to your scene using different scenarios:
ARPlaneAnchor
If the horizontal and/or vertical planeDetection instance property is ON, ARKit is able to add ARPlaneAnchors to the anchors collection of the running session. Keep in mind that enabling planeDetection can considerably increase the time required for the scene-understanding stage.
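A minimal sketch of enabling it (sceneView is assumed to be an ARSCNView):
let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal, .vertical]
sceneView.session.run(config)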
ARImageAnchor (conforms to ARTrackable protocol)
This type of anchor contains information about the transform of a detected image – the anchor is placed at the image's center – in a world-tracking or image-tracking configuration. To activate image tracking, use the detectionImages instance property. In ARKit 2.0 you can track up to 25 images in total; in ARKit 3.0 / 4.0 – up to 100 images, respectively. But, in both cases, no more than 4 images simultaneously. It was promised that in ARKit 5.0 / 6.0 you could detect and track up to 100 images at a time, but it's still not implemented yet.
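A minimal sketch of a world-tracking setup with image detection (the resource group name "AR Resources" is an assumption):
let config = ARWorldTrackingConfiguration()
if let refImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                    bundle: nil) {
    config.detectionImages = refImages
    config.maximumNumberOfTrackedImages = 4
}
sceneView.session.run(config)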
ARBodyAnchor (conforms to ARTrackable protocol)
You can turn on body tracking by running a session with ARBodyTrackingConfiguration(). You'll get an ARBodyAnchor at the Root Joint of a real performer's skeleton or, in other words, at the pelvis position of the tracked character.
ARFaceAnchor (conforms to ARTrackable protocol)
A Face Anchor stores information about the head's topology, pose and facial expression. You can track an ARFaceAnchor with the help of the front TrueDepth camera. When a face is detected, the Face Anchor is attached slightly behind the nose, in the center of the face. In ARKit 2.0 you can track just one face; in ARKit 3.0 and higher – up to 3 faces simultaneously. The number of tracked faces depends on the presence of a TrueDepth sensor and on the processor version: devices with a TrueDepth camera can track up to 3 faces, and devices with an A12+ chipset but without a TrueDepth camera can also track up to 3 faces.
ARObjectAnchor
This anchor type keeps information about the 6 Degrees of Freedom (position and orientation) of a real-world 3D object detected in a world-tracking session. Remember that you need to specify ARReferenceObject instances for the detectionObjects property of the session configuration.
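A minimal sketch (the resource group name "AR Objects" is an assumption):
let config = ARWorldTrackingConfiguration()
if let refObjects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Objects",
                                                       bundle: nil) {
    config.detectionObjects = refObjects
}
sceneView.session.run(config)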
AREnvironmentProbeAnchor
Probe Anchor provides environmental lighting information for a specific area of space in a world-tracking session. ARKit's Artificial Intelligence uses it to supply reflective shaders with environmental reflections.
ARParticipantAnchor
This is an indispensable anchor type for multiuser AR experiences. If you want to employ it, set the isCollaborationEnabled property to true in ARWorldTrackingConfiguration, then import the MultipeerConnectivity framework.
ARMeshAnchor
ARKit and LiDAR subdivide the reconstructed real-world scene surrounding the user into mesh anchors with corresponding polygonal geometry. Mesh anchors constantly update their data as ARKit refines its understanding of the real world. Although ARKit updates a mesh to reflect changes in the physical environment, the mesh is not intended to change in real time. Sometimes your reconstructed scene can have 30-40 anchors or even more, because each classified object (wall, chair, door or table) gets its own personal anchor. Each ARMeshAnchor stores data about its vertices, its vertex normals, its faces, and one of eight classification cases.
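A minimal sketch of turning scene reconstruction on (LiDAR devices only):
let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
    config.sceneReconstruction = .meshWithClassification
}
sceneView.session.run(config)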
ARGeoAnchor (conforms to ARTrackable protocol)
In ARKit 4.0+ there's a geo anchor (a.k.a. location anchor) that tracks a geographic location using GPS, Apple Maps and additional environment data coming from Apple servers. This type of anchor identifies a specific area in the world that the app can refer to. When a user moves around the scene, the session updates the location anchor's transform based on the geo anchor's coordinates and the device's compass heading. Look at the list of supported cities.
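A minimal sketch (the coordinates are placeholders; geo tracking also requires a supported city and device, which you should check with ARGeoTrackingConfiguration's availability API first):
let geoConfig = ARGeoTrackingConfiguration()
sceneView.session.run(geoConfig)

let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
let geoAnchor = ARGeoAnchor(coordinate: coordinate, altitude: 35.0)
sceneView.session.add(anchor: geoAnchor)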
ARAppClipCodeAnchor (conforms to ARTrackable protocol)
This anchor tracks the position and orientation of an App Clip Code in the physical environment in ARKit 4.0+. You can use App Clip Codes to let users discover your App Clip in the real world. There are NFC-integrated and scan-only App Clip Codes.
There are also other regular approaches to create anchors in AR session:
Hit-Testing methods
Tapping on the screen projects a point onto an invisible detected plane, placing an ARAnchor at the location where the imaginary ray intersects that plane. By the way, the ARHitTestResult class and its corresponding hit-testing methods for ARSCNView and ARSKView are deprecated as of iOS 14, so you have to get used to Ray-Casting.
Ray-Casting methods
If you're using ray-casting, tapping on the screen results in a projected 3D point on an invisible detected plane. But you can also perform ray-casting between A and B positions in a 3D scene. So, ray-casting can be 2D-to-3D or 3D-to-3D. With tracked ray-casting, ARKit keeps refining the ray-cast as it learns more and more about detected surfaces.
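A minimal sketch of the 2D-to-3D case (tapLocation is an assumed CGPoint coming from a tap gesture):
guard let query = sceneView.raycastQuery(from: tapLocation,
                                         allowing: .estimatedPlane,
                                         alignment: .any),
      let result = sceneView.session.raycast(query).first
else { return }
let anchor = ARAnchor(transform: result.worldTransform)
sceneView.session.add(anchor: anchor)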
Feature Points
Special yellow points that ARKit automatically generates on high-contrast margins of real-world objects can give you a place to put an ARAnchor.
ARCamera's transform
The iPhone's or iPad's camera position and orientation (simd_float4x4) can easily be used as a place for an ARAnchor.
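A minimal sketch:
guard let cameraTransform = sceneView.session.currentFrame?.camera.transform
else { return }
let cameraAnchor = ARAnchor(transform: cameraTransform)
sceneView.session.add(anchor: cameraAnchor)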
Any arbitrary World Position
Place an anchor at any arbitrary world transform in your scene. In ARKit you can simply create it with ARAnchor(transform:) – the counterpart of RealityKit's AnchorEntity(.world(transform: mtx)).
This code snippet shows you how to use an ARPlaneAnchor in a delegate's method: renderer(_:didAdd:for:):
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor
    else { return }
    let grid = Grid(anchor: planeAnchor)   // Grid is a custom SCNNode subclass that visualizes the plane
    node.addChildNode(grid)
}
AnchorEntity
AnchorEntity is alpha and omega in RealityKit. According to RealityKit documentation 2019:
AnchorEntity is an anchor that tethers virtual content to a real-world object in an AR session.
RealityKit framework and the Reality Composer app were announced at WWDC'19. They have a new class named AnchorEntity. You can use AnchorEntity as the root point of any entity hierarchy, and you must add it to the Scene's anchors collection. AnchorEntity automatically tracks its real-world target. In RealityKit and Reality Composer, AnchorEntity is at the top of the hierarchy. This anchor is able to hold a hundred models, and in that case it's more stable than using 100 personal anchors, one for each model.
Let's see how it looks in a code:
func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    let modelAnchor = try! Experience.loadModel()
    arView.scene.anchors.append(modelAnchor)
    return arView
}
AnchorEntity has three components:
Anchoring component
Transform component
Synchronization component
To find out the difference between ARAnchor and AnchorEntity look at THIS POST.
Here are the AnchorEntity cases available in RealityKit 2.0 for iOS:
// Fixed position in the AR scene
AnchorEntity(.world(transform: mtx))
// For body tracking (a.k.a. Motion Capture)
AnchorEntity(.body)
// Pinned to the tracking camera
AnchorEntity(.camera)
// For face tracking (Selfie Camera config)
AnchorEntity(.face)
// For image tracking config
AnchorEntity(.image(group: "GroupName", name: "forModel"))
// For object tracking config
AnchorEntity(.object(group: "GroupName", name: "forObject"))
// For plane detection with surface classification
AnchorEntity(.plane([.any], classification: [.seat], minimumBounds: [1, 1]))
// When you use ray-casting
AnchorEntity(raycastResult: myRaycastResult)
// When you use ARAnchor with a given identifier
AnchorEntity(.anchor(identifier: uuid))
// Creates anchor entity on a basis of ARAnchor
AnchorEntity(anchor: arAnchor)
And here are the only two AnchorEntity cases available in RealityKit 2.0 for macOS:
// Fixed world position in VR scene
AnchorEntity(.world(transform: mtx))
// Camera transform
AnchorEntity(.camera)
It's also worth saying that you can use any subclass of ARAnchor for AnchorEntity's needs:
var anchor = AnchorEntity()
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    guard let faceAnchor = anchors.first as? ARFaceAnchor
    else { return }
    arView.session.add(anchor: faceAnchor)        // ARKit Session
    self.anchor = AnchorEntity(anchor: faceAnchor)
    anchor.addChild(model)
    arView.scene.anchors.append(self.anchor)      // RealityKit Scene
}
Reality Composer's anchors:
At the moment (February 2022) Reality Composer has just 4 types of AnchorEntities:
// 1a
AnchorEntity(plane: .horizontal)
// 1b
AnchorEntity(plane: .vertical)
// 2
AnchorEntity(.image(group: "GroupName", name: "forModel"))
// 3
AnchorEntity(.face)
// 4
AnchorEntity(.object(group: "GroupName", name: "forObject"))
AR USD Schemas
And of course, I should say a few words about preliminary anchors. There are 3 preliminary anchoring types (July 2022) for those who prefer Python scripting for USDZ models – these are the plane, image and face preliminary anchors. Look at this code snippet (shown here in USDA form) to find out how to implement such a schema.
def Cube "ImageAnchoredBox"(prepend apiSchemas = ["Preliminary_AnchoringAPI"])
{
uniform token preliminary:anchoring:type = "image"
rel preliminary: imageAnchoring:referenceImage = <ImageReference>
def Preliminary_ReferenceImage "ImageReference"
{
uniform asset image = #somePicture.jpg#
uniform double physicalWidth = 45
}
}
If you want to know more about AR USD Schemas, read this story on Medium.
Visualizing AnchorEntity
Here's an example of how to visualize anchors in RealityKit (mac version).
import AppKit
import RealityKit
class ViewController: NSViewController {
    @IBOutlet var arView: ARView!
    var model = Entity()
    let anchor = AnchorEntity()

    fileprivate func visualAnchor() -> Entity {
        let colors: [SimpleMaterial.Color] = [.red, .green, .blue]
        for index in 0...2 {
            let box: MeshResource = .generateBox(size: [0.20, 0.005, 0.005])
            let material = UnlitMaterial(color: colors[index])
            let entity = ModelEntity(mesh: box, materials: [material])
            if index == 0 {
                entity.position.x += 0.1
            } else if index == 1 {
                entity.transform = Transform(pitch: 0, yaw: 0, roll: .pi/2)
                entity.position.y += 0.1
            } else if index == 2 {
                entity.transform = Transform(pitch: 0, yaw: -.pi/2, roll: 0)
                entity.position.z += 0.1
            }
            model.scale *= 1.5
            self.model.addChild(entity)
        }
        return self.model
    }

    override func awakeFromNib() {
        anchor.addChild(self.visualAnchor())
        arView.scene.addAnchor(anchor)
    }
}
About ArAnchors in ARCore
At the end of my post, I would like to talk about four types of anchors that are used in ARCore 1.35 and higher. Google's official documentation says the following about anchors: "ArAnchor describes a fixed location and orientation in the real world". ARCore anchors work similarly to ARKit anchors.
Let's take a look at ArAnchors' types:
Local anchors
are stored locally with the app and are valid only for that instance of the app. The user must be physically at the location where they are placing the anchor. An anchor can be attached to a Trackable or to the ARCore Session.
Cloud Anchors
are stored in Google Cloud and may be shared between app instances. The user must be physically at the location where they are placing the anchor. Cloud Anchors are anchors that are hosted in the cloud; thanks to the Persistent Cloud Anchors API, you can create a cloud anchor that can be resolved for 1 to 365 days after creation. They can be resolved by multiple users to establish a common frame of reference across users and their devices.
Geospatial anchors
are based on geodetic latitude, longitude, and altitude, plus Google's Visual Positioning System data, to provide precise location almost anywhere in the world. These anchors may be shared between app instances. The user may place an anchor from a remote location as long as the app is connected to the internet and able to use the VPS.
Terrain anchors
are rather a subtype of the Geospatial anchor that allows you to place AR objects using only latitude and longitude, leveraging information from Google Maps to find the precise altitude above ground.
When anchoring objects in ARCore, make sure that they are close to the anchor you are using. Avoid placing objects farther than 8 meters from the anchor to prevent unexpected rotational movement due to ARCore's updates to world space coordinates. If you need to place an object more than eight meters away from an existing anchor, create a new anchor closer to this position and attach the object to the new anchor.
These Kotlin code snippets show you how to use a Geospatial anchor:
fun configureSession(session: Session) {
    session.configure(
        session.config.apply {
            geospatialMode = Config.GeospatialMode.ENABLED
        }
    )
}

val earth = session?.earth ?: return
if (earth.trackingState != TrackingState.TRACKING) { return }

earthAnchor?.detach()
val altitude = earth.cameraGeospatialPose.altitude - 1
val qx = 0f; val qy = 0f; val qz = 0f; val qw = 1f
earthAnchor = earth.createAnchor(latLng.latitude,
                                 latLng.longitude,
                                 altitude,
                                 qx, qy, qz, qw)

SpriteKit- How to zoom-in and zoom-out of an SKScene?

If I'm making a game in SpriteKit that has a large "world", and I need the user to have the option of zooming in and out of the SKScene, how would I go about this? Or, to make things simpler, in the didMoveToView function, how can I present more of the world to the user's device's screen (without using world.runAction(SKAction.scaleTo(0.5)) or something)?
There's an SKCameraNode that's built specifically for this. The SKCameraNode defines the viewport into your scene. You create a camera node and assign it to the camera property of your scene.
let cameraNode = SKCameraNode()
cameraNode.position = CGPoint(x: scene.size.width / 2, y: scene.size.height / 2)
scene.addChild(cameraNode)
scene.camera = cameraNode
You can then create actions and run those actions on the camera. So to zoom in on the scene, you'd do this.
let zoomInAction = SKAction.scale(to: 0.5, duration: 1)
cameraNode.run(zoomInAction)
The cameraNode is basically a rectangular node in the scene that, by default, takes the proportions of the view (there's no size initializer). Scaling it down makes the scene look zoomed in; to zoom out, run an action that increases its scale. Basically, imagine a rectangle over your entire scene: whatever falls inside the cameraNode's rectangle shows directly on your iPhone screen. You can also add moveTo actions, sequence actions, and set timingModes on those actions, the same as you would for a normal sprite node.
Here's the WWDC where the apple guy shows what I've just said. CameraNode bit is around 3 mins before the end.
https://developer.apple.com/videos/play/wwdc2015-604/
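If you want the user to control the zoom interactively, here's a minimal sketch using a pinch gesture (the handler is assumed to live in your SKScene subclass, with the recognizer added to the view in didMove(to:); the property names are assumptions):
var previousCameraScale: CGFloat = 1.0

@objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
    guard let camera = self.camera else { return }
    if gesture.state == .began {
        previousCameraScale = camera.xScale
    }
    // A larger camera scale shows more of the world (zoom out); a smaller one zooms in
    camera.setScale(previousCameraScale / gesture.scale)
}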
So, the best solution I could find goes something like this. In the didMoveToView function, create an SKSpriteNode called world and make it whatever size you want your world to be. Then, write world.setScale(0.5) if you want a 50% zoom-out. However, if you have a player node that needs to always be centered on the screen, you'll need to add the following to your update function.
override func update(currentTime: CFTimeInterval) {
    world.position.x = -player.position.x * 0.5
    world.position.y = -player.position.y * 0.5
}