properly rotate camera to see the side of the scene - swift

I'm using SceneKit to display a 3D car on a Google map, just like Uber.
I managed to place and rotate the car, but the camera looks straight down from above; I would like to rotate the camera when the location manager's heading changes so I can see the sides of the car.
I tried changing the Euler angles, but that didn't work well.
I tried using the default camera, but I didn't find out how to place it on top, to the left, or to the right ... I can only change it from the .scn file, not in code.
https://github.com/HilalAH/Uber3dModel

FYI: you have allowsCameraControl = true; you probably don't want that.
This is a pretty quick way to get you started.
Set a constraint on your camera so that it always looks at your car; then you can adjust the cameraEye using simple vectors. You can do some math on the car's heading and place the cameraEye wherever you want it in relation to the direction it's moving (behind, to the side, etc.), but the car will always be the focal point.
Something like this code with cameraFocus being your car:
class Camera
{
    var cameraEye = SCNNode()
    var cameraFocus = SCNNode()

    init()
    {
        cameraEye.name = "Camera Eye"
        cameraFocus.name = "Camera Focus"

        // The focus node is an invisible target that the camera always looks at.
        cameraFocus.isHidden = true
        cameraFocus.position = SCNVector3(x: 0, y: 0, z: 0)

        cameraEye.camera = SCNCamera()
        cameraEye.position = SCNVector3(x: 0, y: 0, z: 20)

        // Keep the camera pointed at the focus node at all times.
        let vConstraint = SCNLookAtConstraint(target: cameraFocus)
        vConstraint.isGimbalLockEnabled = true
        cameraEye.constraints = [vConstraint]
    }
}
Make sure you've added your camera objects:
scene.rootNode.addChildNode(camera.cameraEye)
scene.rootNode.addChildNode(camera.cameraFocus)
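For the heading math mentioned above, a minimal sketch might look like this (the function and parameter names are my own, not from the original code, and heading is assumed to be the car's course in radians):

func updateCameraEye(heading: Float, carPosition: SCNVector3)
{
    let distance: Float = 15   // how far behind the car
    let height: Float = 8      // how far above the car
    camera.cameraFocus.position = carPosition
    // Offset the eye opposite to the direction of travel; the
    // SCNLookAtConstraint keeps the car centered automatically.
    camera.cameraEye.position = SCNVector3(
        x: carPosition.x - distance * sin(heading),
        y: carPosition.y + height,
        z: carPosition.z - distance * cos(heading))
}

Calling this from your heading-change handler gives you a chase-camera view; swap the sin/cos offsets to look at the car from the side instead.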

Related

ARKit – How to display the feed from a virtual SCNCamera placed on SCNPlane?

I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by a regular SCNNode, and oriented to show the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera's feed on, for example, an SCNPlane (as the first material's diffuse contents), like a TV screen. Of course I am aware that it will only display the SceneKit content in that camera's view, not the rest of the ARKit image (which is only possible with the main camera). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a real-time camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR content and to position the cameraNode as well (pointing in the direction of the objects in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing from the SCNCamera I put in the scene is displayed at all. Everything else in the AR scene works well; I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize the problem better, I've added some screenshots.
The first shows the image through the SCNCamera, according to ARGeo's input. But it takes up the whole screen instead of displaying its contents on an SCNPlane, as I need.
The next screenshot shows the actual AR view result produced by my posted code. As you can see, the gray display plane remains gray; it shows nothing.
The last screenshot is a photomontage showing the expected result as I'd like to get it.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to an SCNMaterial on an SCNPlane, as was the initial idea; instead, I use an additional SCNView (for the moment).
In the definitions I add another view, like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set things up like so...
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle.
// (exemplary; the + operator on SCNVector3 and degreesToRadians are custom extensions)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera and its node position, and it works very nicely.
In a common iOS/Swift/ARKit project, this construct produces some side effects that one may run into:
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I corrected this by adding/changing the following property, which restores the animation speed almost back to its normal (default) behaviour:
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, none of those effects will play anymore, unless I do this:
overlayView.scene = nil
Of course, the additional SCNView stops working then, but everything else goes back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true

        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        // Set up the virtual camera and make it the point of view.
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)

        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)

        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk; just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
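A hedged sketch of that last step, assuming a recorder that hands you a CVPixelBuffer per frame (the recorder object and its onFrame callback are hypothetical; substitute the frame hook of whatever recorder you use):

import SceneKit
import CoreImage

let ciContext = CIContext() // reuse one context; creating one per frame is expensive

recorder.onFrame = { (pixelBuffer: CVPixelBuffer) in
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
    DispatchQueue.main.async {
        // `plane` is the SCNPlane node from the sample above.
        plane.geometry?.firstMaterial?.diffuse.contents = cgImage
    }
}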
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {
    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene?.rootNode.childNode(withName: "ProjectionPlane", recursively: true)

        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene's layer as a texture.
        projectionPlane?.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1

        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly connected as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        view.scene = SCNScene()
        // ... set up scene ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene?.rootNode.addChildNode(projectionPlane)
        return view
    }
}

How to get the SCNVector3 position of the camera in relation to its direction ARKit Swift

I am trying to attach an object in front of the camera, but the issue is that its position is always relative to the initial camera direction. How can I adjust/get the SCNVector3 position to place the object in front of the camera even when the camera is pointed up or down?
This is how I do it now:
let ballShape = SCNSphere(radius: 0.03)
let ballNode = SCNNode(geometry: ballShape)
let viewPosition = sceneView.pointOfView!.position
ballNode.position = SCNVector3Make(viewPosition.x, viewPosition.y, viewPosition.z - 0.4)
sceneView.scene.rootNode.addChildNode(ballNode)
Edited to better answer the question, now that it's been clarified in a comment.
New Answer:
You are using only the position of the camera, so if the camera is rotated, it doesn't affect the ball.
What you can do is take the ball's transform matrix and multiply it by the camera's transform matrix; that way the ball's position is relative to the camera's full transformation, including rotation.
e.g.
let ballShape = SCNSphere(radius: 0.03)
let ballNode = SCNNode(geometry: ballShape)
ballNode.position = SCNVector3Make(0.0, 0.0, -0.4)
let ballMatrix = ballNode.transform
let cameraMatrix = sceneView.pointOfView!.transform
let newBallMatrix = SCNMatrix4Mult(ballMatrix, cameraMatrix)
ballNode.transform = newBallMatrix
sceneView.scene.rootNode.addChildNode(ballNode)
Or, if you only want the SCNVector3 position, to answer your question exactly (this way the ball will not rotate):
...
let newBallMatrix = SCNMatrix4Mult(ballMatrix, cameraMatrix)
let newBallPosition = SCNVector3Make(newBallMatrix.m41, newBallMatrix.m42, newBallMatrix.m43)
ballNode.position = newBallPosition
sceneView.scene.rootNode.addChildNode(ballNode)
Old Answer:
You are using only the position of the camera, so when the camera rotates, it doesn't affect the ball.
SceneKit uses a hierarchy of nodes, so when a node is a "child" of another node, it follows the position, rotation, and scale of its "parent". The proper way of attaching an object to another object, in this case the camera, is to make it a "child" of the camera.
Then, when you set the position, rotation or any other aspect of the transform of the "child" node, you are setting it relative to its parent. So if you set the position to SCNVector3Make(0.0, 0.0, -0.4), it's translated -0.4 units in Z on top of its "parent" translation.
So to make what you want, it should be:
let ballShape = SCNSphere(radius: 0.03)
let ballNode = SCNNode(geometry: ballShape)
ballNode.position = SCNVector3Make(0.0, 0.0, -0.4)
let cameraNode = sceneView.pointOfView
cameraNode?.addChildNode(ballNode)
This way, when the camera rotates, the ball follows its rotation exactly, but stays 0.4 units in front of the camera.
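As an aside, on iOS 11+ there is also a shortcut worth knowing (my own sketch, not part of the original answer): SCNNode exposes its world-space forward vector directly, so you can position the ball in front of the camera without touching transforms or the node hierarchy.

if let pov = sceneView.pointOfView {
    // simdWorldFront points along the node's -Z axis in world space,
    // i.e. the direction the camera is looking.
    ballNode.simdPosition = pov.simdWorldPosition + pov.simdWorldFront * 0.4
    sceneView.scene.rootNode.addChildNode(ballNode)
}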

Aligning ARFaceAnchor with SpriteKit overlay

I'm trying to calculate SpriteKit overlay content positions (not just overlay visual content) over specific geometry points of an ARFaceGeometry/ARFaceAnchor.
I'm using SCNSceneRenderer.projectPoint from the calculated world coordinate, but the result is y-inverted and not aligned with the camera image:
let vertex4 = vector_float4(0, 0, 0, 1)
let modelMatrix = faceAnchor.transform
let world_vertex4 = simd_mul(modelMatrix, vertex4)
let pt3 = SCNVector3(x: Float(world_vertex4.x),
                     y: Float(world_vertex4.y),
                     z: Float(world_vertex4.z))
let sprite_pt = renderer.projectPoint(pt3)

// To visualize sprite_pt
let dot = SKSpriteNode(imageNamed: "dot")
dot.size = CGSize(width: 7, height: 7)
dot.position = CGPoint(x: CGFloat(sprite_pt.x),
                       y: CGFloat(sprite_pt.y))
overlayScene.addChild(dot)
In my experience, the screen coordinates given by ARKit's projectPoint function are directly usable when drawing to, for example, a CALayer. This means they follow iOS coordinates as described here, where the origin is in the upper left and y is inverted.
SpriteKit has its own coordinate system:
The unit coordinate system places the origin at the bottom left corner of the frame and (1,1) at the top right corner of the frame. A sprite’s anchor point defaults to (0.5,0.5), which corresponds to the center of the frame.
Finally, SKNodes are placed in an SKScene, which has its origin at the bottom left. You should ensure that your SKScene is the same size as your actual view, or else the origin may not be at the bottom left of the view, and thus your positioning of the node from view coordinates may be incorrect. The answer to this question may help; in particular, check your view's aspectFit or aspectFill scaling to ensure your scene is being scaled correctly.
The Scene's origin is in the bottom left and depending on your scene size and scaling it may be off screen. This is where 0,0 is. So every child you add will start there and work its way right and up based on position. A SKSpriteNode has its origin in the center.
So the two basic steps to convert from view coordinates to SpriteKit coordinates are: 1) invert the y-axis so your origin is in the bottom left, and 2) ensure that your SKScene frame matches your view frame.
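As a minimal sketch of those two steps (the helper name is mine, and it assumes the SKScene is sized to match the view):

// Convert a point projected into view space (origin top-left, y down)
// into SpriteKit scene space (origin bottom-left, y up).
func toSpriteKitCoordinates(_ viewPoint: CGPoint, sceneSize: CGSize) -> CGPoint {
    return CGPoint(x: viewPoint.x, y: sceneSize.height - viewPoint.y)
}

// e.g. with the projected point from the question:
// dot.position = toSpriteKitCoordinates(CGPoint(x: CGFloat(sprite_pt.x),
//                                               y: CGFloat(sprite_pt.y)),
//                                       sceneSize: overlayScene.size)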
I can test this out more fully in a bit and edit if there are any issues
Found the transformation that works: use camera.projectPoint instead of renderer.projectPoint.
To scale the points correctly in the SpriteKit overlay, set scaleMode = .aspectFill.
I updated https://github.com/AnsonT/ARFaceSpriteKitMapping to demo this.
guard let faceAnchor = anchor as? ARFaceAnchor,
      let camera = sceneView.session.currentFrame?.camera,
      let size = overlayScene?.size
else { return }

let modelMatrix = faceAnchor.transform
let vertices = faceAnchor.geometry.vertices

for vertex in vertices {
    let vertex4 = vector_float4(vertex.x, vertex.y, vertex.z, 1)
    let world_vertex4 = simd_mul(modelMatrix, vertex4)
    let world_vector3 = simd_float3(x: world_vertex4.x, y: world_vertex4.y, z: world_vertex4.z)
    let pt = camera.projectPoint(world_vector3, orientation: .portrait, viewportSize: size)

    let dot = SKSpriteNode(imageNamed: "dot")
    dot.size = CGSize(width: 7, height: 7)
    // Flip y to convert from view coordinates to SpriteKit coordinates.
    dot.position = CGPoint(x: CGFloat(pt.x), y: size.height - CGFloat(pt.y))
    overlayScene?.addChild(dot)
}

Object hovers above ground surface in ARKit

I placed an object on a plane, but it's displayed about 10-15 cm above said plane.
What code should I use to place it on the plane?
Here is the code
let scene = SCNScene(named: "art.scnassets/cup.scn")!
// Set the scene to the view
sceneView.scene = scene
Here is a screenshot of the current scenario:
The object is not touching the plane.
Your question isn't clear, but please check this code; it may help you:
// helicopter is an object of SCNNode
let dragonScene = SCNScene(named: "media.scnassets/helicopter.dae")!
for childNode in dragonScene.rootNode.childNodes {
    // Adding all the child nodes
    helicopter.addChildNode(childNode)
}
helicopter.scale = SCNVector3(x: 0.002, y: 0.002, z: 0.002)
helicopter.position = SCNVector3(x: 2.0, y: 0.0, z: -1.6)
sceneView.scene.rootNode.addChildNode(helicopter)
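If the model's origin sits at the center of its geometry, the gap is often the model's own offset rather than a plane-detection problem. A hedged sketch of one common fix (cupNode is an assumed name for the cup model's node): shift the pivot so that y = 0 lies at the bottom of the bounding box, and the node will stand on whatever y you place it at.

// Shift the pivot down to the bottom of the model's bounding box.
let (minBounds, _) = cupNode.boundingBox
cupNode.pivot = SCNMatrix4MakeTranslation(0, minBounds.y, 0)
// Now setting cupNode.position.y to the detected plane's y
// puts the cup directly on the plane.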

SpriteKit: Coloring the Background

I'm still trying to learn Swift and SpriteKit, and I have a new question.
I am following this tutorial/video: Noise Field
My project now looks like this: My Project
I have 100 particle objects moving around, and I want to blend some of the particles' color into the white background. In the tutorial this is quite easy: you create the background once, and inside the draw function (in SpriteKit this would be the update() function) you give your objects an alpha value like 0.1.
SpriteKit works quite differently. If I change the alpha value in update(), my particles are almost hidden, but the color is not being "drawn" onto the background.
I know this is because SpriteKit works differently than the p5 library for JavaScript, but I wonder how I could get the same effect in SpriteKit.
Inside the update function I have two loops, one for columns and one for rows. I have x columns and y rows, and for each "cell" I create a random CGVector. My particles move around based on the CGVector of the cell nearest to each particle's position.
My Particle Class looks like this:
class Particle: SKShapeNode {
    var pos = CGPoint(x: 0, y: 0)
    var vel = CGVector(dx: 0, dy: 0)
    var acc = CGVector(dx: 0, dy: 0)
    var radius: CGFloat
    var maxSpeed: CGFloat
    var color: SKColor
    // ... (initializer and update methods omitted)
}
And I have a function like this to show the particles:
func show() {
    self.position = self.pos
    let rect = CGRect(origin: CGPoint(x: 0.5, y: 0.5),
                      size: CGSize(width: self.radius * 2, height: self.radius * 2))
    let bezierP = UIBezierPath(ovalIn: rect)
    self.path = bezierP.cgPath
    self.fillColor = self.color
    self.strokeColor = .clear
}
And in the update function I have this loop:
for particle in partikelArr {
    particle.updatePos()
    particle.show()
}
How could I now "colorize" the white background, or "draw" my particles onto the background, based on each particle's position, color, shape, and size?
Thanks and best regards
EDIT:
I now create a new SKShapeNode for each particle inside the update function; this works, and it looks like the particles have colorized the background.
This was done with the following code inside the update function:
for particle in partikelArr {
    let newP = SKShapeNode(path: particle.path!)
    newP.position = particle.pos
    newP.fillColor = particle.color // was `particle.farbe`; the Particle class above defines `color`
    newP.strokeColor = .clear
    newP.zPosition = 1000
    newP.alpha = 0.1
    self.addChild(newP)
}
But I do not want to create SKShapeNodes for each particle (in the screenshot I used 5 particles, and after a few seconds I already have over 800 nodes in the scene). I would like my 100 particles to really "draw" their shape and color onto the white background node.
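One way to keep the painted-trail effect without unbounded node growth (a sketch of mine, untested; it assumes each trail node is given name = "trail" when created, and that canvas is a full-screen background SKSpriteNode) is to periodically bake the accumulated shapes into a single texture with SKView.texture(from:) and then remove the individual nodes:

func bakeTrails(view: SKView, scene: SKScene, canvas: SKSpriteNode) {
    // Snapshot the whole scene (including previously baked trails on the
    // canvas) into one texture, so the color keeps accumulating.
    guard let snapshot = view.texture(from: scene) else { return }
    canvas.texture = snapshot
    canvas.size = scene.size
    // Remove the individual shape nodes that are now part of the texture,
    // keeping the node count bounded.
    for node in scene.children where node.name == "trail" {
        node.removeFromParent()
    }
}

Calling this every few dozen frames keeps the scene at a handful of nodes while preserving the accumulated color.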