I would like to create two 3D objects in the AR scene according to the value of a string that is passed in. The default value of the string is "1"; after that, I jump to another ViewController, change the string value to "2", and then return to the ARViewController.
How can I keep the existing nodes and the world origin of the AR scene unchanged when I move to another ViewController and then come back to the ARViewController? It seems that every time I come back to the AR scene, world tracking starts again and my existing nodes and world origin are removed. I can only display the latest 3D object, the one created for string value "2"; the 3D object created for string "1" is removed.
Here is the code for the AR ViewController:
class ARViewController: UIViewController {

    public var object = Connector()
    @IBOutlet weak var ARScene: ARSCNView!
    public let configuration = ARWorldTrackingConfiguration()

    override func viewDidLoad() {
        super.viewDidLoad()
        self.ARScene.session.run(configuration)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        if object.stringbeingpassed == "1" {
            self.addNode(newposition: SCNVector3(0, 0, 0))
        } else {
            self.addNode(newposition: SCNVector3(0.1, 0.1, 0.1))
        }
    }

    func addNode(newposition: SCNVector3) {
        let ball = SCNNode()
        ball.geometry = SCNSphere(radius: 0.02)
        ball.position = newposition
        self.ARScene.scene.rootNode.addChildNode(ball)
        // degreesToRadians is assumed to be a custom Int extension elsewhere in the project
        let action = SCNAction.rotateBy(x: 0, y: CGFloat(360.degreesToRadians), z: 0, duration: 4)
        let forever = SCNAction.repeatForever(action)
        ball.runAction(forever)
    }
}
Related
I place a 3D object in world space and then move the camera around. Once I know the object is inside the frustum (via the isNode(_:insideFrustumOf:) method), I need to know whether the object is at the center, top, or bottom of the camera view.
For a solution that's not a hack, you can use the projectPoint(_:) API.
It's probably better to work in pixel coordinates, because this method uses the actual camera's settings to determine where the object appears on screen.
let projectedPoint = sceneView.projectPoint(self.sphereNode.worldPosition)
// screenCenter is the center of the view in points; R_squared is the squared radius of the target disc
let xOffset = CGFloat(projectedPoint.x) - screenCenter.x
let yOffset = CGFloat(projectedPoint.y) - screenCenter.y
if xOffset * xOffset + yOffset * yOffset < R_squared {
    // inside a disc of radius 'R' at the center of the screen
}
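For reference, a self-contained version of that check might look like the sketch below; the function name, the default 50-point radius, and reading the center from the view's bounds are illustrative assumptions rather than part of the original answer.
import ARKit

// Sketch: true when `node` projects within `radius` points of the screen center.
// Assumes `sceneView` is the ARSCNView currently rendering the scene.
func isNodeNearScreenCenter(_ node: SCNNode, in sceneView: ARSCNView, radius: CGFloat = 50) -> Bool {
    let projected = sceneView.projectPoint(node.worldPosition) // screen-space point (x, y, depth)
    let center = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
    let dx = CGFloat(projected.x) - center.x
    let dy = CGFloat(projected.y) - center.y
    return dx * dx + dy * dy < radius * radius
}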
Solution
To achieve this you need to use a trick: create a new SCNCamera, make it a child of the default pointOfView camera, and set its field of view to approximately 10 degrees.
Then, inside the renderer(_:updateAtTime:) instance method, use the isNode(_:insideFrustumOf:) method.
Here's working code:
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate, SCNSceneRendererDelegate {

    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet var label: UILabel!

    let cameraNode = SCNNode()
    let sphereNode = SCNNode()
    let config = ARWorldTrackingConfiguration()

    public func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.main.async {
            if self.sceneView.isNode(self.sphereNode, insideFrustumOf: self.cameraNode) {
                self.label.text = "In the center..."
            } else {
                self.label.text = "Out OF CENTER"
            }
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.allowsCameraControl = true

        let scene = SCNScene()
        sceneView.scene = scene

        // Narrow-FoV child camera used only for the frustum test
        cameraNode.camera = SCNCamera()
        cameraNode.camera?.fieldOfView = 10

        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
            self.sceneView.pointOfView!.addChildNode(self.cameraNode)
        }

        sphereNode.geometry = SCNSphere(radius: 0.05)
        sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        sphereNode.position.z = -1.0
        sceneView.scene.rootNode.addChildNode(sphereNode)

        sceneView.session.run(config)
    }
}
Also, in this solution you may turn on an orthographic projection for the child camera instead of a perspective one. This helps when a model is far from the camera.
cameraNode.camera?.usesOrthographicProjection = true
Next steps
In the same way you can add two additional SCNCameras, place them above and below the central SCNCamera, and test your object with two extra isNode(_:insideFrustumOf:) calls.
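A rough sketch of that idea, not taken from the original answer: the ±0.1 m offsets are purely illustrative.
// Two extra narrow-FoV cameras parented to the point of view, one above and one below
// the central test camera, each used for a separate frustum check.
let topCameraNode = SCNNode()
topCameraNode.camera = SCNCamera()
topCameraNode.camera?.fieldOfView = 10
topCameraNode.position.y = 0.1     // illustrative offset above the central camera, in metres

let bottomCameraNode = SCNNode()
bottomCameraNode.camera = SCNCamera()
bottomCameraNode.camera?.fieldOfView = 10
bottomCameraNode.position.y = -0.1

sceneView.pointOfView?.addChildNode(topCameraNode)
sceneView.pointOfView?.addChildNode(bottomCameraNode)

// Inside renderer(_:updateAtTime:), alongside the existing center test:
if sceneView.isNode(sphereNode, insideFrustumOf: topCameraNode) {
    print("Toward the top of the view")
} else if sceneView.isNode(sphereNode, insideFrustumOf: bottomCameraNode) {
    print("Toward the bottom of the view")
}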
I solved the problem another way:
let results = self.sceneView.hitTest(screenCenter!, options: [SCNHitTestOption.rootNode: parentnode])
where parentnode is the parent of the target node (I have multiple nodes).
func nodeInCenter() -> SCNNode? {
    // offsets (in points) between the projected camera position and the projected node position
    let dx = sceneView.projectPoint(sceneView.pointOfView!.worldPosition).x - sceneView.projectPoint(sphereNode.worldPosition).x
    let dy = sceneView.projectPoint(sceneView.pointOfView!.worldPosition).y - sceneView.projectPoint(sphereNode.worldPosition).y
    // the node counts as "in the center" when both squared offsets are below a small threshold
    if dx * dx < 9 && dy * dy < 9 {
        return sphereNode
    }
    return nil
}
I have a simple app where I'm creating a shape dynamically. This shape has physics, but starts out with its dynamics set to false (as intended).
var dot = SKSpriteNode(imageNamed: "ShapeDot.png")

override func sceneDidLoad() {
    dot.name = "MyShapeDot"
    dot.size = CGSize(width: 10, height: 10)
    dot.position = CGPoint(x: 0, y: 0)
    dot.physicsBody = SKPhysicsBody(circleOfRadius: CGFloat(dot.size.width / 2))
    dot.physicsBody?.isDynamic = false
    dot.physicsBody?.allowsRotation = false
    dot.physicsBody?.pinned = false
    dot.physicsBody?.affectedByGravity = true
    // add to the SpriteKit scene
    self.addChild(dot)
}
The shape is successfully added to the .sks and the controller (I see it on the screen). Then on a tap gesture I'm calling a function to turn on dynamics for the physics sprite node.
func MyTapGesture() {
    dot.physicsBody?.isDynamic = true
}
The MyTapGesture is being called (I debugged that it triggers), but the shape doesn't become dynamic and start using gravity... Does anyone know what I'm missing???
I'm calling MyTapGesture from my interface controller. It's wired up like so:
let gameScene = GameScene()

@IBOutlet weak var spriteTapGestures: WKTapGestureRecognizer!

@IBAction func onSpriteTap(_ sender: Any) {
    NSLog("tap")
    gameScene.MyTapGesture()
}
Within the MyTapGesture I've also tried print(dot) and it outputs the following:
name:'MyShapeDot' texture:[<SKTexture> 'ShapeDot.png' (128 x 128)] position:{0, 0} scale:{1.00, 1.00} size:{10, 10} anchor:{0.5, 0.5} rotation:0.00
This leads me to believe it should work and that I'm calling the correct reference to the class attached to the object, but it doesn't. However, if I call MyTapGesture() within the update func of the SpriteKit class where my dot was created
override func update(_ currentTime: TimeInterval) {
    MyTapGesture()
}
it works and the dynamics update! So for some reason my tap gesture must be calling the wrong reference or something. I'm confused, since the debug output shows the correct data for the shape I created...
To solve this, I realized that the gameScene var in my interface controller didn't hold the correct reference. So I declared it as an optional:
var gameScene: GameScene?
Then I assigned the variable in the interface controller's awake func:
if let scene = GameScene(fileNamed: "GameScene") {
gameScene = scene
}
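Putting it together, a minimal sketch of the interface controller; the WKInterfaceSKScene outlet name and the presentScene(_:) call are assumptions about the project rather than part of the original answer.
import WatchKit
import SpriteKit

class InterfaceController: WKInterfaceController {

    @IBOutlet weak var skInterface: WKInterfaceSKScene!   // assumed storyboard outlet
    var gameScene: GameScene?

    override func awake(withContext context: Any?) {
        super.awake(withContext: context)
        // Load the scene once and keep a reference to the same instance that is presented,
        // so gesture handlers talk to the scene that is actually on screen.
        if let scene = GameScene(fileNamed: "GameScene") {
            gameScene = scene
            skInterface.presentScene(scene)
        }
    }

    @IBAction func onSpriteTap(_ sender: Any) {
        gameScene?.MyTapGesture()
    }
}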
I have been working on a game in SceneKit which involves a plane that the user can control with the arrow keys. The plane can fly, but the arrow keys have no effect on it. When I run a test to see whether the arrow-key handling works, I can see that the arrow keys do change the speed variable, but the plane does not actually speed up or slow down. Here is the code:
import SceneKit
import QuartzCore

class GameViewController: NSViewController {

    var height = 0
    var speed = 0 // I should be able to change this value by pressing the up arrow key during the simulation

    override func keyDown(with theEvent: NSEvent) {
        if theEvent.keyCode == 123 { // left
        }
        if theEvent.keyCode == 124 { // right
        }
        if theEvent.keyCode == 125 { // down
        }
        if theEvent.keyCode == 126 { // up
            speed += 1
            print(speed)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = SCNScene(named: "art.scnassets/Plane.scn")!
        let cameraNode = scene.rootNode.childNode(withName: "camera", recursively: true)!
        let plane = scene.rootNode.childNode(withName: "plane", recursively: true)!

        plane.runAction(SCNAction.repeatForever(SCNAction.moveBy(x: 0, y: CGFloat(height), z: CGFloat(speed), duration: 1)))
        cameraNode.runAction(SCNAction.repeatForever(SCNAction.moveBy(x: 0, y: CGFloat(height), z: CGFloat(speed), duration: 1)))

        let scnView = self.view as! SCNView
        scnView.scene = scene // show the scene in the view
    }
}
Once you have created an SCNAction object with parameters that depend on the speed variable, any later change to speed won't update the action.
This is because speed is a value type (Int), and there's no connection between the SCNAction instance and your view controller's speed instance variable. This is not specific to SceneKit; any API that takes an Int as a parameter behaves the same way.
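One hedged way around this, not part of the original question or answer, is to rebuild the action whenever speed changes so that the new value is baked into a fresh SCNAction; the "fly" action key and the node lookup below are illustrative.
// Sketch: re-create the repeating move action each time the speed changes.
override func keyDown(with theEvent: NSEvent) {
    guard theEvent.keyCode == 126 else { return } // up arrow only, for brevity
    speed += 1

    let scnView = self.view as! SCNView
    guard let plane = scnView.scene?.rootNode.childNode(withName: "plane", recursively: true) else { return }

    plane.removeAction(forKey: "fly")
    let move = SCNAction.repeatForever(
        SCNAction.moveBy(x: 0, y: CGFloat(height), z: CGFloat(speed), duration: 1))
    plane.runAction(move, forKey: "fly")
}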
I'm in the process of learning both ARKit and SceneKit concurrently, and it's been a bit of a challenge.
With an ARWorldTrackingSessionConfiguration session created, I was wondering if anyone knew of a way to get the position of the user's 'camera' in the scene session. The idea is that I want to animate an object towards the user's current position.
let reaperScene = SCNScene(named: "reaper.dae")!
let reaperNode = reaperScene.rootNode.childNode(withName: "reaper", recursively: true)!
reaperNode.position = SCNVector3Make(0, 0, -1)
let scene = SCNScene()
scene.rootNode.addChildNode(reaperNode)
// some unknown amount of time later
let currentCameraPosition = sceneView.pointOfView?.position
let moveAction = SCNAction.move(to: currentCameraPosition!, duration: 1.0)
reaperNode.runAction(moveAction)
However, it seems that currentCameraPosition is always [0,0,0], even though I am moving the camera around. Any idea what I'm doing wrong? Eventually the idea is I would rotate the object around an invisible sphere until it is in front of the camera and then animate it in, doing something similar to this: Rotate SCNCamera node looking at an object around an imaginary sphere (that way the user sees the object animate towards them)
Thanks for any help.
Set yourself as the ARSession's delegate. Then you can implement session(_:didUpdate:), which will give you an ARFrame for every frame processed in your session. The frame has a camera property that holds information on the camera's transform, rotation, and position.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Do something with the new transform
    let currentTransform = frame.camera.transform
    doSomething(with: currentTransform)
}
As rickster pointed out, you can always get the current ARFrame, and the camera position through it, by calling session.currentFrame.
This is useful if you need the position just once, e.g. to move a node to where the camera has been, but you should use the delegate method if you want continuous updates on the camera's position.
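For the one-off case, a minimal sketch (reusing the sceneView and reaperNode names from the question) might look like this:
// Read the camera position once from the session's current frame and move a node there.
if let transform = sceneView.session.currentFrame?.camera.transform {
    let cameraPosition = SCNVector3(transform.columns.3.x,
                                    transform.columns.3.y,
                                    transform.columns.3.z)
    reaperNode.runAction(SCNAction.move(to: cameraPosition, duration: 1.0))
}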
I know it has already been solved, but I have a neat little solution for it. I would prefer adding a renderer delegate method; it's a method in ARSCNViewDelegate:
func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }
    let transform = pointOfView.transform
    let orientation = SCNVector3(-transform.m31, -transform.m32, transform.m33)
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)
    let currentPositionOfCamera = orientation + location
    print(currentPositionOfCamera)
}
Of course you can't add two SCNVector3 values together out of the box, so you need to add the following operator outside the class:
func + (lhv: SCNVector3, rhv: SCNVector3) -> SCNVector3 {
    return SCNVector3(lhv.x + rhv.x, lhv.y + rhv.y, lhv.z + rhv.z)
}
ARKit + SceneKit
For your convenience, you can create a ViewController extension with the instance method session(_:didUpdate:), in which the update occurs.
import ARKit
import SceneKit

extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let transform = frame.camera.transform
        let position = transform.columns.3
        print(position.x, position.y, position.z) // UPDATING
    }
}

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.delegate = self // ARSESSION DELEGATE

        let config = ARWorldTrackingConfiguration()
        sceneView.session.run(config)
    }
}
RealityKit
In RealityKit, an ARView object exposes the camera's transform as well:
import RealityKit
import UIKit
import Combine

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var subs: [AnyCancellable] = []

    override func viewDidLoad() {
        super.viewDidLoad()

        arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
            let camTransform = self.arView.cameraTransform.matrix
            print(camTransform) // UPDATING
        }.store(in: &subs)
    }
}
I'm attempting to add support for VoiceOver accessibility in a puzzle game which has a fixed board. However, I'm having trouble getting UIAccessibilityElements to show up.
Right now I'm overriding accessibilityElementAtIndex, accessibilityElementCount and indexOfAccessibilityElement in my SKScene.
These return elements from an array of accessible elements, which is built like this:
func loadAccessibleElements()
{
    self.isAccessibilityElement = false
    let pieces = getAllPieces()
    accessibleElements.removeAll(keepCapacity: false)

    for piece in pieces
    {
        let element = UIAccessibilityElement(accessibilityContainer: self.usableView!)
        element.accessibilityFrame = piece.getAccessibilityFrame()
        element.accessibilityLabel = piece.getText()
        element.accessibilityTraits = UIAccessibilityTraitButton
        accessibleElements.append(element)
    }
}
Where piece is a subclass of SKSpriteNode and getAccessibilityFrame is defined:
func getAccessibilityFrame() -> CGRect
{
    return parentView!.convertRect(frame, toView: nil)
}
Right now one (wrongly sized) accessibility element seems to appear on the screen in the wrong place.
Could someone point me in the right direction?
Many thanks
EDIT:
I've tried a hack-ish workaround by placing a UIView over the SKView, with UIButton elements in the same locations as the SKSpriteNodes. However, accessibility still doesn't want to work. The view is loaded like this:
func loadAccessibilityView()
{
    view.isAccessibilityElement = false
    view.accessibilityElementsHidden = false
    skView.accessibilityElementsHidden = false

    let accessibleSubview = UIView(frame: view.frame)
    accessibleSubview.userInteractionEnabled = true
    accessibleSubview.isAccessibilityElement = false
    view.addSubview(accessibleSubview)
    view.bringSubviewToFront(accessibleSubview)

    let pieces = (skView.scene! as! GameScene).getAllPieces()
    for piece in pieces
    {
        let pieceButton = UIButton(frame: piece.getAccessibilityFrame())
        pieceButton.isAccessibilityElement = true
        pieceButton.accessibilityElementsHidden = false
        pieceButton.accessibilityTraits = UIAccessibilityTraitButton
        pieceButton.setTitle(piece.getText(), forState: UIControlState.Normal)
        pieceButton.setBackgroundImage(UIImage(named: "blue-button"), forState: UIControlState.Normal)
        pieceButton.alpha = 0.2
        pieceButton.accessibilityLabel = piece.getText()
        pieceButton.accessibilityFrame = pieceButton.frame
        pieceButton.addTarget(self, action: Selector("didTap:"), forControlEvents: UIControlEvents.TouchUpInside)
        accessibleSubview.addSubview(pieceButton)
    }
    UIAccessibilityPostNotification(UIAccessibilityScreenChangedNotification, nil)
}
The buttons are placed correctly, however accessibility just isn't working at all. Something seems to be preventing it from working.
I've searched in vain for a description of how to implement VoiceOver in Swift using SpriteKit, so I finally figured out how to do it. Here's some working code that converts an SKNode into an accessible push button when added to an SKScene class:
// Add the following code to a scene where you want to make the SKNode variable named "leave" an accessible button.
// "leave" must already be initialized and added as a child of the scene, or a child of other SKNodes in the scene.
// screenHeight must already be defined as the height of the device screen, in points.

// Accessibility

private var accessibleElements: [UIAccessibilityElement] = []

private func nodeToDevicePointsFrame(node: SKNode) -> CGRect {

    // first convert from the frame in the SKNode to the frame in the SKScene's coordinates
    var sceneFrame = node.frame
    sceneFrame.origin = node.scene!.convertPoint(node.frame.origin, fromNode: node.parent!)

    // convert the frame from SKScene coordinates to device points
    // the SpriteKit scene origin is in the lower left; the accessibility screen origin is at the upper left
    // assumes the scene is initialized using SKSceneScaleMode.Fill with dimensions the same as device points
    var deviceFrame = sceneFrame
    deviceFrame.origin.y = CGFloat(screenHeight - 1) - (sceneFrame.origin.y + sceneFrame.size.height)
    return deviceFrame
}

private func initAccessibility() {
    if accessibleElements.count == 0 {
        let accessibleLeave = UIAccessibilityElement(accessibilityContainer: self.view!)
        accessibleLeave.accessibilityFrame = nodeToDevicePointsFrame(leave)
        accessibleLeave.accessibilityTraits = UIAccessibilityTraitButton
        accessibleLeave.accessibilityLabel = "leave" // the accessible name of the button
        accessibleElements.append(accessibleLeave)
    }
}

override func didMoveToView(view: SKView) {
    self.isAccessibilityElement = false
    leave.isAccessibilityElement = true
}

override func willMoveFromView(view: SKView) {
    accessibleElements = []
}

override func accessibilityElementCount() -> Int {
    initAccessibility()
    return accessibleElements.count
}

override func accessibilityElementAtIndex(index: Int) -> AnyObject? {
    initAccessibility()
    if index < accessibleElements.count {
        return accessibleElements[index] as AnyObject
    } else {
        return nil
    }
}

override func indexOfAccessibilityElement(element: AnyObject) -> Int {
    initAccessibility()
    return accessibleElements.indexOf(element as! UIAccessibilityElement)!
}
Accessibility frames are defined in the fixed physical screen coordinates, not UIView coordinates, and transforming between them is kind of tricky.
The device origin is at the lower left of the screen, with X pointing up, when the device is in landscape-right mode.
It's a pain to convert; I have no idea why Apple did it that way.
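If you would rather not do that math by hand, UIKit can perform the final conversion for you. The sketch below is an assumption-laden alternative, written with the newer Swift API names (UIAccessibility.convertToScreenCoordinates(_:in:); in the older Swift used above, the same call was the UIAccessibilityConvertFrameToScreenCoordinates function), and it assumes the node belongs to a scene presented by an SKView with no additional scene scaling.
import SpriteKit
import UIKit

// Sketch: node frame -> scene -> view -> fixed screen coordinates for accessibilityFrame.
func accessibilityFrame(for node: SKNode, in skView: SKView) -> CGRect {
    guard let scene = node.scene, let parent = node.parent else { return .zero }

    // Top-left corner of the node's frame, expressed in scene coordinates
    let topLeftInScene = scene.convert(CGPoint(x: node.frame.minX, y: node.frame.maxY), from: parent)

    // Scene -> view coordinates (this also flips the Y axis)
    let topLeftInView = skView.convert(topLeftInScene, from: scene)
    let rectInView = CGRect(origin: topLeftInView, size: node.frame.size)

    // View -> screen coordinates expected by UIAccessibilityElement.accessibilityFrame
    return UIAccessibility.convertToScreenCoordinates(rectInView, in: skView)
}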