I am studying RealityKit and I'm trying to drop a set of red cards, arranged in rows, onto the floor at the tapped position.
See the picture attached.
I noticed that if the number of cards is too large, ARKit displays the cards in the scene for a few seconds, but then they disappear from the scene,
and on the console I get the following printout:
2021-10-10 15:31:31.851724+0800 indovinaWho[12166:4021700] [Session] ARSession <0x111038050>: ARSessionDelegate is retaining 14 ARFrames. This can lead to future camera frames being dropped.
Here is the code I use to drop the cards.
Basically, the following code creates an AnchorEntity at the tapped location and then appends the "modCard" entities, adjusting each position accordingly.
If, for example, I ask for 8 cards per row in 3 rows, I see the cards for a few seconds and then they suddenly disappear from my scene.
Does RealityKit have a maximum number of entities that can be attached to an AnchorEntity?
func createCard(raycast: ARRaycastResult, maxrow: Int, cardPerRow: Int) {
    let anchor = AnchorEntity(raycastResult: raycast)
    var prPosMod = simd_make_float3(0, 0, 0) // previous card position
    let cardByside = cardPerRow / 2
    DispatchQueue.global().sync {
        for _ in 1...maxrow {
            for ncard in 1...cardPerRow {
                let modEntity = self.modelCard()
                // Shift each card 10 cm to the left or right of the previous one
                if ncard < cardByside {
                    modEntity.position = simd_make_float3(prPosMod.x - 0.1, prPosMod.y, prPosMod.z)
                } else {
                    modEntity.position = simd_make_float3(prPosMod.x + 0.1, prPosMod.y, prPosMod.z)
                }
                prPosMod = modEntity.position
                anchor.addChild(modEntity)
            }
            // Start the next row 10 cm further back
            prPosMod = simd_float3(0, prPosMod.y, prPosMod.z - 0.1)
        }
        self.arView.scene.addAnchor(anchor)
    }
}

func modelCard() -> ModelEntity {
    let mesh = MeshResource.generateBox(width: 0.08, height: 0.01, depth: 0.08, cornerRadius: 0.005, splitFaces: false)
    let material = SimpleMaterial(color: .red, isMetallic: false)
    return ModelEntity(mesh: mesh, materials: [material])
}
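For context, the ARRaycastResult passed to createCard could come from a tap handler along these lines; this is a minimal sketch, not the original code, and the 3-by-8 arguments simply mirror the counts mentioned above:

@objc func handleTap(_ sender: UITapGestureRecognizer) {
    let point = sender.location(in: arView)
    // Raycast from the tapped screen point onto an existing horizontal plane.
    if let result = arView.raycast(from: point,
                                   allowing: .existingPlaneGeometry,
                                   alignment: .horizontal).first {
        createCard(raycast: result, maxrow: 3, cardPerRow: 8)
    }
}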
Related
Here is my previous question about applying force to a certain point of an AR object, which got a perfect answer.
I have managed to apply force to a given point with a little bit of tinkering to get exactly the effect I wanted. Let me also show some code.
I get the AR object from the Experience like this:
if let skateAnchor = try? Experience.loadSkateboard(),
   let skateEntity = skateAnchor.skateboard {
    guard let entity = skateEntity as? HasPhysicsBody else { return }
    skateAnchor.generateCollisionShapes(recursive: true)
    entity.collision?.filter.mask = [.sceneUnderstanding]
    skateboard = entity
}
Afterwards I set up the plane and the LiDAR scanner (sketched below, after the gesture code) and add some gestures to the view like this:
let arViewTap = UITapGestureRecognizer(target: self,
                                       action: #selector(tapped(sender:)))
arView.addGestureRecognizer(arViewTap)

let arViewLongPress = UILongPressGestureRecognizer(target: self,
                                                   action: #selector(longPressed(sender:)))
arView.addGestureRecognizer(arViewLongPress)
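The plane and LiDAR setup mentioned above could look roughly like this; it is a sketch assuming a LiDAR-equipped device, not the asker's exact code:

let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal]
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh
}
// Let the reconstructed mesh take part in collisions and physics, so the
// .sceneUnderstanding collision filter used above has geometry to interact with.
arView.environment.sceneUnderstanding.options.insert([.collision, .physics])
arView.session.run(config)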
So far so good: on the tap gesture I apply the logic from the previously linked answer and apply a force impulse like this:
if let sk8 = skateboard as? HasPhysics {
    sk8.applyImpulse(direction, at: position, relativeTo: nil)
}
My issue comes with my "catching" logic, where I want to use the long press and apply a downward force to my skateboard AR object like this:
@objc func longPressed(sender: UILongPressGestureRecognizer) {
    if sender.state == .began || sender.state == .changed {
        let location = sender.location(in: arView)
        if arView.entity(at: location) is HasPhysics {
            if let ray = arView.ray(through: location) {
                let results = arView.scene.raycast(origin: ray.origin,
                                                   direction: ray.direction,
                                                   length: 100.0,
                                                   query: .nearest,
                                                   mask: .all,
                                                   relativeTo: nil)
                if let _ = results.first,
                   let position = results.first?.position,
                   let normal = results.first?.normal {
                    // test different kinds of forces
                    let direction = SIMD3<Float>(0, -20, 0)
                    if let sk8 = skateboard as? HasPhysics {
                        sk8.addForce(direction, at: position, relativeTo: nil)
                    }
                }
            }
        }
    }
}
Right now I know that I am ignoring the raycast results, but this is still in a pure development state. My issue is that when I apply a positive or negative x/z force the object responds well: it slides back and forth or left and right. A positive y also works, dragging the board up into the air. The only problematic force direction, and the one I am striving to achieve, is the downward-facing negative y. The object just sits there with no effect at all.
Let me also share how my object is defined inside Reality Composer:
Ollie trick
In real life, if you shift your entire body's weight to the nose of the skateboard's deck (like doing the Ollie Maneuver), the skateboard's center of mass shifts from the middle towards the point where the force is being applied. In RealityKit, if you need to tear the rear (front) wheels of the skateboard off the floor, move the model's center of mass towards the slope.
The repositioning of the center of mass occurs in a local coordinate system.
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        ARViewContainer().ignoresSafeArea()
    }
}

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.debugOptions = .showPhysics       // shape visualization
        let scene = try! Experience.loadScene()
        let name = "skateboard_01_base_stylized_lod0"
        typealias ModelPack = ModelEntity & HasPhysicsBody & HasCollision
        let model = scene.findEntity(named: name) as! ModelPack
        model.physicsBody = .init()
        model.generateCollisionShapes(recursive: true)
        model.physicsBody?.massProperties.centerOfMass.position = [0, 0, -27]
        arView.scene.anchors.append(scene)
        return arView
    }
    func updateUIView(_ uiView: ARView, context: Context) { }
}
Physics shape
The second problem that you need to solve is replacing the model's default box-shaped physics body (RealityKit and Reality Composer generate this type of shape by default). The shape cannot be a monolithic box, quite obviously, because a box does not allow the force to be applied at the intended point. You need a shape that follows the outline of the model.
So, you can use the following code to create a custom shape (four spheres for the wheels and a box for the deck):
let shapes: [ShapeResource] = [
    .generateBox(size: [20, 4, 78])
        .offsetBy(translation: [ 0.0, 11,  0.0]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [ 7.5,  3,  21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [ 7.5,  3, -21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [-7.5,  3,  21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [-7.5,  3, -21.4])
]
// model.physicsBody = PhysicsBodyComponent(shapes: shapes, mass: 4.5)
model.collision = CollisionComponent(shapes: shapes)
P.S.
Reality Composer model's settings (I used Xcode 14.0 RC 1).
I'm trying to reverse-engineer the 3d Scanner App using RealityKit and am having real trouble getting even a basic model working with all gestures. When I run the code below, I get a cube with scale and rotation (about the y axis only), but no translation interaction. I'm trying to figure out how to get rotation about an arbitrary axis as well as translation, like in the 3d Scanner App above. I'm relatively new to iOS and have read that one should use RealityKit since Apple isn't really supporting SceneKit anymore, but I'm now wondering whether SceneKit would be the way to go, as RealityKit is still young. Or does anyone know of an extension to RealityKit ModelEntity objects to give them better interaction capabilities?
I've got my app taking a scan with the LiDAR sensor and saving it to disk as a .usda mesh, per this tutorial, but when I load the mesh as a ModelEntity and attach gestures to it, I don't get any interaction at all.
The example code below recreates the limited gestures for a box ModelEntity, and I have some commented lines showing where I would load my .usda model from disk; again, while it renders, it gets no interaction from gestures.
Any help appreciated!
// ViewController.swift
import UIKit
import RealityKit

class ViewController: UIViewController {

    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        arView = ARView(frame: view.frame, cameraMode: .nonAR, automaticallyConfigureSession: false)
        view.addSubview(arView)

        // create point light
        let pointLight = PointLight()
        pointLight.light.intensity = 10000

        // create light anchor
        let lightAnchor = AnchorEntity(world: [0, 0, 0])
        lightAnchor.addChild(pointLight)
        arView.scene.addAnchor(lightAnchor)

        // eventually want to load my model from disk and give it gestures.
        // guard let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) else {
        //     print("couldn't load scan in this format")
        //     return
        // }

        // entity to add gestures to
        let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: true)
        let myEntity = ModelEntity(mesh: .generateBox(width: 0.1, height: 0.2, depth: 0.3, cornerRadius: 0.01, splitFaces: false), materials: [cubeMaterial])
        myEntity.generateCollisionShapes(recursive: false)

        let myAnchor = AnchorEntity(world: .zero)
        myAnchor.addChild(myEntity)

        // add collision and interaction
        let scanEntityBounds = myEntity.visualBounds(relativeTo: myAnchor)
        myEntity.collision = CollisionComponent(shapes: [.generateBox(size: scanEntityBounds.extents).offsetBy(translation: scanEntityBounds.center)])

        arView.installGestures(for: myEntity).forEach { gestureRecognizer in
            gestureRecognizer.addTarget(self, action: #selector(handleGesture(_:)))
        }
        arView.scene.addAnchor(myAnchor)

        // without this, get no gestures at all
        let camera = PerspectiveCamera()
        let cameraAnchor = AnchorEntity(world: [0, 0, 0.2])
        cameraAnchor.addChild(camera)
        arView.scene.addAnchor(cameraAnchor)
    }

    @objc private func handleGesture(_ recognizer: UIGestureRecognizer) {
        if recognizer is EntityTranslationGestureRecognizer {
            print("translation!")
        } else if recognizer is EntityScaleGestureRecognizer {
            print("scale!")
        } else if recognizer is EntityRotationGestureRecognizer {
            print("rotation!")
        }
    }
}
To extend a ModelEntity's gesture interaction capabilities, set up your own 2D gestures. There are 8 screen gestures in UIKit, and in SwiftUI you have 5 principal gestures plus Sequence, Simultaneous and Exclusive variations.
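As a minimal sketch of that idea (not part of the original answer): a plain UIPanGestureRecognizer on the ARView can drive translation of an entity. It assumes arView and myEntity have been stored as properties of the view controller, and the 0.001 points-to-metres factor is an arbitrary illustration value:

// In viewDidLoad:
let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
arView.addGestureRecognizer(pan)

// Elsewhere in the view controller:
@objc func handlePan(_ sender: UIPanGestureRecognizer) {
    // Translate the entity in the view's X/Y plane as the finger moves.
    let t = sender.translation(in: arView)
    myEntity.position += SIMD3<Float>(Float(t.x) * 0.001, Float(-t.y) * 0.001, 0)
    sender.setTranslation(.zero, in: arView)
}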
From what I understand, the gestures are working for the box but not for your .usdz file/model. If this is the case, then the issue is that the model does not have a collision mesh (HasCollision). If you are using Reality Composer to edit your models, you could do the following:
Click on the model.
Under the Physics dropdown, click Participate.
Under Collision Shape, select Automatic.
Overall, make sure that the model has collision and that, in code, you cast it to a type that has collision:
let myEntity = try? Entity.loadModel(named: "fileName") as! HasCollision
I am trying to place an object above a QR code in Swift. I am able to detect the QR code, however the location is wrong for the placement of the box. I don't really understand how the placement works. I know how to place objects down on planes. Do I need to relate the detected planes somehow with where it says it detects the QR code? Any information on how this SCNVector3-from-transform-columns thing works would be appreciated as well. Also, is CIDetector outdated, and is there a newer method?
Here is a snippet of detecting the QRCode and placing the box:
var discoveredQRCodes = [String]()
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    //print("Updated")
    if time != 0.5 {
        return
    }
    DispatchQueue.global(qos: .background).async {
        let image = CIImage(cvPixelBuffer: frame.capturedImage)
        let detector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil, options: nil)
        let features = detector!.features(in: image)
        for feature in features as! [CIQRCodeFeature] {
            if !self.discoveredQRCodes.contains(feature.messageString!) {
                self.discoveredQRCodes.append(feature.messageString!)
                let url = URL(string: feature.messageString!)
                let position = SCNVector3(frame.camera.transform.columns.3.x,
                                          frame.camera.transform.columns.3.y,
                                          frame.camera.transform.columns.3.z)
                // add3DModel(fromURL: url!, toPosition: getPositionBasedOnQRCode(frame: frame, position: "df"))
                print(position)
                print(url)
                DispatchQueue.main.async {
                    let boxNode = SCNNode()
                    boxNode.geometry = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0.002)
                    boxNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
                    boxNode.position = position
                    boxNode.name = "node"
                    self.arView.scene.rootNode.addChildNode(boxNode)
                }
                //add3dInstance(fromURL: url!, toPosition: position)
            }
        }
    }
}
Here is an image of the result:
Here is some debug output:
SCNVector3(x: 0.023941405, y: 0.040143043, z: 0.056782123)
First of all, you set the boxNode's position to the camera's position, which is not what you want.
Secondly, any QR code detector provides 2D bounding box coordinates in image space. To translate those 2D coordinates into scene coordinates you need to cast a ray from the camera towards the QR code plane.
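For illustration only, a rough sketch (not the linked code) of one way to do this: map the detected QR code's centre from the captured image into view coordinates with ARFrame's displayTransform, then hit-test that screen point into the scene. Orientation handling and Y-flipping may need adjustment for your setup, and the UIKit/hit-test calls should run on the main thread:

// `image`, `feature` and `frame` as in the question's code; `self.arView` is the ARSCNView.
let imageSize = image.extent.size
let normalized = CGPoint(x: feature.bounds.midX / imageSize.width,
                         y: feature.bounds.midY / imageSize.height)
let viewSize = self.arView.bounds.size
let toView = frame.displayTransform(for: .portrait, viewportSize: viewSize)
let viewPoint = normalized.applying(toView)
let screenPoint = CGPoint(x: viewPoint.x * viewSize.width,
                          y: viewPoint.y * viewSize.height)

// Hit-test from the screen point into the AR scene and use the world position.
if let hit = self.arView.hitTest(screenPoint, types: [.existingPlaneUsingExtent, .featurePoint]).first {
    let t = hit.worldTransform.columns.3
    boxNode.position = SCNVector3(t.x, t.y, t.z)
}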
Please, check the code here.
I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is attached to and positioned by a common SCNNode placed elsewhere in the scene. It is oriented to show me the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera's feed on, for example, an SCNPlane (as the first material's diffuse contents), like a TV screen. Of course I am aware that it will only display the SceneKit content that is in that camera's view, not the rest of the ARKit image (which is only possible with the main camera, of course). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a real-time camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR stuff, and position the cameraNode as well (pointing in the direction where the objects are in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing from the SCNCamera I put in the scene is displayed at all. Everything else in the AR scene works well; I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize this better, I'm adding some more screenshots.
The following shows the image through the SCNCamera according to ARGeo's input. But it takes up the whole screen, instead of displaying its contents on an SCNPlane as I need.
The next screenshot shows the actual ARView result I get using my posted code. As you can see, the gray display plane remains gray; it shows nothing.
The last screenshot is a photomontage showing the expected result, as I'd like to get it.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to an SCNMaterial on an SCNPlane, as was the initial idea; instead I use an additional SCNView (for the moment).
In the definitions I add another view like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set things up like so:
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle.
// (exemplary)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as in the main sceneView, viewed through the perspective of the SCNCamera and its node position, and it does so very nicely.
In a common iOS/Swift/ARKit project, this construct produces some side effects that one may run into.
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is actually NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both entire scenes DOUBLES! (Why?)
I corrected this by adding/changing the following property, which restores the animation speed behaviour almost back to normal (default):
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, none of those effects will play any more, as long as I don't do this:
overlayView.scene = nil
Of course, the additional SCNView then stops working, but everything else gets back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)

        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)

        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk, just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
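As a rough sketch of that idea (assuming you already receive a CVPixelBuffer per rendered frame from your recorder callback; the update(material:with:) helper name is made up for illustration):

let ciContext = CIContext()

func update(material: SCNMaterial, with pixelBuffer: CVPixelBuffer) {
    // Convert the pixel buffer to a CGImage and push it onto the material.
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
    DispatchQueue.main.async {
        material.diffuse.contents = cgImage
    }
}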
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
import UIKit
import SceneKit

class MyViewController: UIViewController {
    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene!.rootNode.childNode(withName: "ProjectionPlane", recursively: true)!

        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene's layer as a texture.
        projectionPlane.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1

        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly connected as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene?.rootNode.addChildNode(projectionPlane)
        return view
    }
}
I have two objects (two cubes). First, I add the first cube to the scene. Then I want to add the second one, and I want it to be stuck to the first one, on a side of the first cube that I will select by clicking on it (like in the image below). Is it possible to just click a face of the first cube and have the second one automatically appear in the scene and stick to the first cube? I cannot figure out how to do this.
Photo
When you create an SCNBox geometry:
The SCNBox class automatically creates SCNGeometryElement objects as needed to handle the number of materials.
As such, in order to access these elements you need to create an SCNMaterial for each face of the box. Then you can perform an SCNHitTest to detect which face has been hit:
When you perform a hit-test search, SceneKit looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry's surface.
So how would we approach this?
Let's assume you have created two SCNNodes called:
var cubeOne = SCNNode()
var cubeTwo = SCNNode()
These are both assigned 6 different SCNMaterials (one for each face) like so:
override func viewDidLoad() {
    super.viewDidLoad()

    cubeOne.geometry = cubeGeometry()
    cubeOne.position = SCNVector3(0, -0.5, -1.5)
    cubeOne.name = "Cube One"

    cubeTwo.geometry = cubeGeometry2() // analogous to cubeGeometry(), with different colours
    cubeTwo.position = SCNVector3(30, 0, -1.5)
    cubeTwo.name = "Cube Two"

    self.augmentedRealityView.scene.rootNode.addChildNode(cubeOne)
    self.augmentedRealityView.scene.rootNode.addChildNode(cubeTwo)
}

/// Returns The 6 Faces Of An SCNBox
///
/// - Returns: SCNGeometry
func cubeGeometry() -> SCNGeometry {
    let colours: [UIColor] = [.red, .green, .cyan, .yellow, .purple, .blue]
    var faces = [SCNMaterial]()
    let cubeGeometry = SCNBox(width: 0.2, height: 0.2, length: 0.2, chamferRadius: 0)
    // One material per face (6 in total)
    for faceIndex in 0..<6 {
        let material = SCNMaterial()
        material.diffuse.contents = colours[faceIndex]
        faces.append(material)
    }
    cubeGeometry.materials = faces
    return cubeGeometry
}
Now that you have assigned 6 materials to the faces, you will have six geometry elements, one corresponding to each side of your SCNBox.
Having done this, let's quickly create an enum that corresponds to the order of the faces:
enum BoxFaces: Int {
    case Front, Right, Back, Left, Top, Bottom
}
Now when we perform a hit test we can log the location of the hit, e.g.:
/// Detects Which Cube Was Hit & Logs The Geometry Index
///
/// - Parameter gesture: UITapGestureRecognizer
@IBAction func cubeTapped(_ gesture: UITapGestureRecognizer) {
    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)

    //2. Perform An SCNHitTest
    guard let hitTest = self.augmentedRealityView.hitTest(currentTouchLocation, options: nil).first else { return }

    //3. If The Node Is Cube One Then Get The Index Of The Touched Material
    if let namedNode = hitTest.node.name {
        if namedNode == "Cube One" {
            //4. Get The Geometry Index
            if let faceIndex = BoxFaces(rawValue: hitTest.geometryIndex) {
                print("User Has Hit \(faceIndex)")
                //5. Position The Second Cube
                positionStickyNode(faceIndex)
            }
        }
    }
}
In part 5 you will notice the call to the function positionStickyNode, which places the second cube at the corresponding location on the first cube:
/// Positions The Second Cube Based On The Face Tapped
///
/// - Parameter index: BoxFaces
func positionStickyNode(_ index: BoxFaces) {
    let (min, max) = cubeTwo.boundingBox
    let cubeTwoWidth = max.x - min.x
    let cubeTwoHeight = max.y - min.y

    switch index {
    case .Front:
        cubeTwo.simdPosition = float3(cubeOne.position.x, cubeOne.position.y, cubeOne.position.z + cubeTwoWidth)
    case .Right:
        cubeTwo.simdPosition = float3(cubeOne.position.x + cubeTwoWidth, cubeOne.position.y, cubeOne.position.z)
    case .Back:
        cubeTwo.simdPosition = float3(cubeOne.position.x, cubeOne.position.y, cubeOne.position.z - cubeTwoWidth)
    case .Left:
        cubeTwo.simdPosition = float3(cubeOne.position.x - cubeTwoWidth, cubeOne.position.y, cubeOne.position.z)
    case .Top:
        cubeTwo.simdPosition = float3(cubeOne.position.x, cubeOne.position.y + cubeTwoHeight, cubeOne.position.z)
    case .Bottom:
        cubeTwo.simdPosition = float3(cubeOne.position.x, cubeOne.position.y - cubeTwoHeight, cubeOne.position.z)
    }
}
This is a very crude example and will only work properly when your cubes are the same size... You have more than enough, however, to figure out how this would work for different sizes etc.; one sketch of that follows below.
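For instance, a sketch (not part of the original answer; the function name is made up) that offsets by half of each cube's extent along the chosen axis, so the faces touch regardless of the two cubes' sizes; cubeOne, cubeTwo and BoxFaces are the names used above:

func positionStickyNodeForAnySize(_ index: BoxFaces) {
    let (minOne, maxOne) = cubeOne.boundingBox
    let (minTwo, maxTwo) = cubeTwo.boundingBox
    // Combined half-extents of both cubes along each axis.
    let halfX = (maxOne.x - minOne.x) / 2 + (maxTwo.x - minTwo.x) / 2
    let halfY = (maxOne.y - minOne.y) / 2 + (maxTwo.y - minTwo.y) / 2
    let halfZ = (maxOne.z - minOne.z) / 2 + (maxTwo.z - minTwo.z) / 2

    var offset = SCNVector3Zero
    switch index {
    case .Front:  offset = SCNVector3(0, 0,  halfZ)
    case .Back:   offset = SCNVector3(0, 0, -halfZ)
    case .Right:  offset = SCNVector3( halfX, 0, 0)
    case .Left:   offset = SCNVector3(-halfX, 0, 0)
    case .Top:    offset = SCNVector3(0,  halfY, 0)
    case .Bottom: offset = SCNVector3(0, -halfY, 0)
    }
    cubeTwo.position = SCNVector3(cubeOne.position.x + offset.x,
                                  cubeOne.position.y + offset.y,
                                  cubeOne.position.z + offset.z)
}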
Hope it helps...