I'm using the newest Geospatial API from ARCore and trying to build with SwiftUI and RealityKit. I have the SDK and API key set up properly, and all coordinates and accuracy info are updated properly every frame. Whenever I use the GARSession.createAnchor method, it returns a GARAnchor. I use the GARAnchor's transform property to create an ARAnchor(transform: GARAnchor.transform), then I create an AnchorEntity with this ARAnchor and add the AnchorEntity to ARView.scene. However, the model never shows up. I have checked the coordinates and altitude, still no luck at all. Could anyone help me out? Thank you so much.
do {
    let garAnchor = try parent.garSession.createAnchor(coordinate: CLLocationCoordinate2D(latitude: xx.xxxxxxx, longitude: xx.xxxxxx),
                                                       altitude: xxx,
                                                       eastUpSouthQAnchor: simd_quatf(ix: 0, iy: 0, iz: 0, r: 0))
    if garAnchor.hasValidTransform && garAnchor.trackingState == .tracking {
        let arAnchor = ARAnchor(transform: garAnchor.transform)
        let anchorEntity = AnchorEntity(anchor: arAnchor)
        let mesh = MeshResource.generateSphere(radius: 2)
        let material = SimpleMaterial(color: .red, isMetallic: true)
        let sphere = ModelEntity(mesh: mesh, materials: [material])
        anchorEntity.addChild(sphere)
        parent.arView.scene.addAnchor(anchorEntity)
        print("Anchor has valid transform, and anchor is tracking")
    } else {
        print("Anchor has invalid transform")
    }
} catch {
    print("Add garAnchor failed: \(error.localizedDescription)")
}
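One thing I'd flag here, as an observation rather than part of the original question: simd_quatf(ix: 0, iy: 0, iz: 0, r: 0) is a zero quaternion, not a valid rotation. The identity rotation is the unit quaternion with r: 1, so a sketch of the anchor call with a valid east-up-south orientation would be:
// Identity rotation: a unit quaternion needs r: 1; a zero quaternion
// (r: 0) has no defined orientation and can produce an unusable transform.
let identity = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1)
let garAnchor = try parent.garSession.createAnchor(
    coordinate: CLLocationCoordinate2D(latitude: xx.xxxxxxx, longitude: xx.xxxxxx),
    altitude: xxx,
    eastUpSouthQAnchor: identity)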
Here is my previous question about applying force at a certain point of an AR object, which got a perfect answer.
I have managed to apply force at a given point, and with a little bit of tinkering got exactly the effect I wanted. Let me also show some code.
I get the AR object from Experience like:
if let skateAnchor = try? Experience.loadSkateboard(),
   let skateEntity = skateAnchor.skateboard {
    guard let entity = skateEntity as? HasPhysicsBody else { return }
    skateAnchor.generateCollisionShapes(recursive: true)
    entity.collision?.filter.mask = [.sceneUnderstanding]
    skateboard = entity
}
Afterwards I set up the plane and the LiDAR scanner, and add some gestures to it:
let arViewTap = UITapGestureRecognizer(target: self,
                                       action: #selector(tapped(sender:)))
arView.addGestureRecognizer(arViewTap)
let arViewLongPress = UILongPressGestureRecognizer(target: self,
                                                   action: #selector(longPressed(sender:)))
arView.addGestureRecognizer(arViewLongPress)
So far so good. On the tap gesture I apply the logic from the previously linked answer and apply a force impulse like:
if let sk8 = skateboard as? HasPhysics {
    sk8.applyImpulse(direction, at: position, relativeTo: nil)
}
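A note worth adding here: applyImpulse is a one-shot change in momentum, while addForce only produces visible motion if it keeps acting over time (for example, on every frame of a long press). A minimal sketch of the difference, reusing the skateboard and position names from above:
if let sk8 = skateboard as? HasPhysics {
    // One-shot kick: an instantaneous change in momentum.
    sk8.applyImpulse([0, 2, 0], at: position, relativeTo: nil)
    // Continuous push: needs to be re-applied (e.g. on each frame of a
    // long press) to keep acting on the body.
    sk8.addForce([0, -20, 0], at: position, relativeTo: nil)
}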
My issue comes with my "catching" logic, where I want to use the long press and apply a downward force to my skateboard AR object like this:
@objc func longPressed(sender: UILongPressGestureRecognizer) {
    if sender.state == .began || sender.state == .changed {
        let location = sender.location(in: arView)
        if arView.entity(at: location) is HasPhysics {
            if let ray = arView.ray(through: location) {
                let results = arView.scene.raycast(origin: ray.origin,
                                                   direction: ray.direction,
                                                   length: 100.0,
                                                   query: .nearest,
                                                   mask: .all,
                                                   relativeTo: nil)
                if let _ = results.first,
                   let position = results.first?.position,
                   let normal = results.first?.normal {
                    // test different kinds of forces
                    let direction = SIMD3<Float>(0, -20, 0)
                    if let sk8 = skateboard as? HasPhysics {
                        sk8.addForce(direction, at: position, relativeTo: nil)
                    }
                }
            }
        }
    }
}
Right now I know that I am ignoring the raycast results, but this is in a pure development state. My issue is that when I apply a positive or negative x/z force the object responds well: it slides back and forth, or left and right. Positive y also works, dragging the board up into the air. The only error-prone force direction, and the one I am striving to achieve, is the downward-facing negative y. The object just sits there with no effect at all.
Let me also share how my object is defined inside Reality Composer:
Ollie trick
In real life, if you shift your entire body's weight to the nose of the skateboard's deck (like doing an Ollie maneuver), the skateboard's center of mass shifts from the middle towards the point where the force is being applied. In RealityKit, if you need to tear the rear (or front) wheels of the skateboard off the floor, move the model's center of mass towards the slope.
The repositioning of the center of mass occurs in a local coordinate system.
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        ARViewContainer().ignoresSafeArea()
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.debugOptions = .showPhysics    // shape visualization
        let scene = try! Experience.loadScene()
        let name = "skateboard_01_base_stylized_lod0"
        typealias ModelPack = ModelEntity & HasPhysicsBody & HasCollision
        let model = scene.findEntity(named: name) as! ModelPack
        model.physicsBody = .init()
        model.generateCollisionShapes(recursive: true)
        model.physicsBody?.massProperties.centerOfMass.position = [0, 0, -27]
        arView.scene.anchors.append(scene)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
Physics shape
The second problem you need to solve is replacing the box shape of the model's physics body (RealityKit and Reality Composer generate this type of shape by default). Quite obviously, the shape can't be a monolithic box, because a box-shaped body doesn't let force be applied at the appropriate points. You need a shape that follows the outline of the model.
So, you can use the following code to create a custom shape:
(four spheres for the wheels and a box for the deck)
let shapes: [ShapeResource] = [
    .generateBox(size: [20, 4, 78])
        .offsetBy(translation: [0.0, 11, 0.0]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [7.5, 3, 21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [7.5, 3, -21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [-7.5, 3, 21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [-7.5, 3, -21.4])
]

// model.physicsBody = PhysicsBodyComponent(shapes: shapes, mass: 4.5)
model.collision = CollisionComponent(shapes: shapes)
P.S.
Reality Composer model's settings (I used Xcode 14.0 RC 1).
I am trying to place an object above a QR code in Swift. I am able to detect the QR code, but the placement location of the box is wrong. I don't really understand how the placement works. I know how to place objects down on planes. Do I need to relate the detected planes somehow to where it says it detects the QR code? Any information on how this SCNVector columns thing works would be appreciated as well, haha. Also, is CIDetector outdated, and is there a newer method?
Here is a snippet of detecting the QRCode and placing the box:
var discoveredQRCodes = [String]()

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    //print("Updated")
    if time != 0.5 {
        return
    }
    DispatchQueue.global(qos: .background).async {
        let image = CIImage(cvPixelBuffer: frame.capturedImage)
        let detector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil, options: nil)
        let features = detector!.features(in: image)
        for feature in features as! [CIQRCodeFeature] {
            if !self.discoveredQRCodes.contains(feature.messageString!) {
                self.discoveredQRCodes.append(feature.messageString!)
                let url = URL(string: feature.messageString!)
                // Note: this is the camera's position, not the QR code's
                let position = SCNVector3(frame.camera.transform.columns.3.x,
                                          frame.camera.transform.columns.3.y,
                                          frame.camera.transform.columns.3.z)
                // add3DModel(fromURL: url!, toPosition: getPositionBasedOnQRCode(frame: frame, position: "df"))
                print(position)
                print(url)
                DispatchQueue.main.async {
                    let boxNode = SCNNode()
                    boxNode.geometry = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0.002)
                    boxNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
                    boxNode.position = position
                    boxNode.name = "node"
                    self.arView.scene.rootNode.addChildNode(boxNode)
                }
                //add3dInstance(fromURL: url!, toPosition: position)
            }
        }
    }
}
Here is an image of the result:
Here is some debug output:
SCNVector3(x: 0.023941405, y: 0.040143043, z: 0.056782123)
First of all, you set the boxNode's position to the camera's position. That's not what you want.
Secondly, any QR code detector provides 2D bounding box coordinates in image space. To translate 2D coordinates to scene coordinates, you need to find a ray from the camera to the QR code plane.
Please check the code here.
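For illustration, here is a minimal sketch of that idea using ARFrame's hitTest (a legacy but workable API); the conversion from CIDetector's bottom-left-origin image coordinates to ARKit's top-left-origin normalized coordinates is my assumption and may need adjusting for your device orientation:
// Project the QR code's 2D center onto scene geometry.
// Assumes `frame` is the current ARFrame and `feature` the CIQRCodeFeature.
let imageExtent = CIImage(cvPixelBuffer: frame.capturedImage).extent
let center = CGPoint(x: feature.bounds.midX, y: feature.bounds.midY)
// Normalize to [0, 1] and flip y (CIDetector uses a bottom-left origin,
// ARFrame.hitTest expects a top-left origin).
let normalized = CGPoint(x: center.x / imageExtent.width,
                         y: 1 - center.y / imageExtent.height)
// Cast a ray from the camera through that image point.
if let hit = frame.hitTest(normalized, types: [.featurePoint, .estimatedHorizontalPlane]).first {
    boxNode.position = SCNVector3(hit.worldTransform.columns.3.x,
                                  hit.worldTransform.columns.3.y,
                                  hit.worldTransform.columns.3.z)
}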
I am starting to use ARKit, and I have a use case where I want to know the motion from one known position to another.
So I was wondering: is it possible (as in other tracking solutions) to set a known position and orientation as the starting point of the tracking in ARKit?
Regards
There are at least six approaches that allow you to set a starting point for a model. However, using no ARAnchors at all in your AR scene is considered bad AR practice (although Apple's Augmented Reality app template has no ARAnchors in its code).
First approach
This is the approach that Apple engineers propose in the Augmented Reality app template in Xcode. This approach doesn't use anchoring, so all you need to do is position a model in the air with coordinates like (x: 0, y: 0, z: -0.5); in other words, your model will be 50 cm away from the camera.
override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.scene = SCNScene(named: "art.scnassets/ship.scn")!
    let model = sceneView.scene.rootNode.childNode(withName: "ship",
                                                   recursively: true)
    model?.position.z = -0.5
    sceneView.session.run(ARWorldTrackingConfiguration())
}
Second approach
The second approach is almost the same as the first one, except that it uses an ARKit anchor:
guard let sceneView = self.view as? ARSCNView
else { return }

if let currentFrame = sceneView.session.currentFrame {
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.5
    let transform = simd_mul(currentFrame.camera.transform, translation)
    let anchor = ARAnchor(transform: transform)
    sceneView.session.add(anchor: anchor)
}
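To attach visible content to that anchor, you would typically implement the ARSCNViewDelegate callback. Here is a minimal sketch of this common pattern (my addition, with a hypothetical sphere as the content):
// Called when SceneKit creates a node for the newly added ARAnchor;
// attach geometry to it to make the anchor visible.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    let sphere = SCNNode(geometry: SCNSphere(radius: 0.05))
    node.addChildNode(sphere)
}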
Third approach
You can also pin a pre-defined model position with an ARAnchor using the third approach, where you need to import the RealityKit module as well:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    let model = ModelEntity(mesh: MeshResource.generateSphere(radius: 1.0))
    // ARKit's anchor (an identity transform, i.e. the world origin;
    // note that a 4x4 diagonal needs four elements)
    let anchor = ARAnchor(transform: simd_float4x4(diagonal: [1, 1, 1, 1]))
    // RealityKit's anchor based on the position of the ARAnchor
    let anchorEntity = AnchorEntity(anchor: anchor)
    anchorEntity.addChild(model)
    arView.scene.anchors.append(anchorEntity)
}
Fourth approach
If you have turned on the plane detection feature, you can use ray-casting or hit-testing methods. As a target object you can use a little sphere (located at 0, 0, 0) that will be moved to the ray-cast result.
// `screenCenter` is assumed to be the center point of the view
guard let query = arView.raycastQuery(from: screenCenter,
                                      allowing: .estimatedPlane,
                                      alignment: .any)
else { return }

let raycast = session.trackedRaycast(query) { results in
    if let result = results.first {
        object.transform = Transform(matrix: result.worldTransform)
    }
}
Fifth approach
This approach focuses on saving and sharing ARKit's world maps.
func writeWorldMap(_ worldMap: ARWorldMap, to url: URL) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                requiringSecureCoding: true)
    try data.write(to: url)
}

func loadWorldMap(from url: URL) throws -> ARWorldMap {
    let mapData = try Data(contentsOf: url)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                from: mapData)
    else {
        throw ARError(.invalidWorldMap)
    }
    return worldMap
}
Sixth approach
In ARKit 4.0 a new ARGeoTrackingConfiguration is implemented with the help of the MapKit module, so now you can use pre-defined GPS data.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for geoAnchor in anchors.compactMap({ $0 as? ARGeoAnchor }) {
        arView.scene.addAnchor(Entity.placemarkEntity(for: geoAnchor))
    }
}
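For completeness, a minimal sketch of creating the geo anchor itself (my addition; the coordinates are just example values, and geo tracking only works on supported devices and in supported locations):
// Check support before running the geo-tracking configuration.
guard ARGeoTrackingConfiguration.isSupported else { return }
arView.session.run(ARGeoTrackingConfiguration())

// Pin an anchor to latitude/longitude; without an explicit altitude
// ARKit estimates ground level.
let coordinate = CLLocationCoordinate2D(latitude: 37.7749, longitude: -122.4194)
arView.session.add(anchor: ARGeoAnchor(coordinate: coordinate))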
I'm working with Project Dent, trying to put a 3D object at absolute GPS coordinates.
The Readme shows how to put a 2D information annotation object into AR space, but I can't get it to place a 3D object at GPS coordinates.
Project Dent doesn't use the standard SceneView, which makes it hard to do this based on a lot of the tutorials out there. It uses SceneLocationView, based on ARCL.
Here's the sample code for a 2D annotation
let coordinate = CLLocationCoordinate2D(latitude: 51.504571, longitude: -0.019717)
let location = CLLocation(coordinate: coordinate, altitude: 300)
let view = UIView() // or a custom UIView subclass
let annotationNode = LocationAnnotationNode(location: location, view: view)
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: annotationNode)
Here's what I've been trying to do to get it to work with a 3D object:
let coordinate = CLLocationCoordinate2D(latitude: 51.504571, longitude: -0.019717)
let location = CLLocation(coordinate: coordinate, altitude: 300)
let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
let objectNode = LocationNode(location: location, SCNbox: box)
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: objectNode)
Ideally, I'd like this code to simply place a 3D box at these GPS coordinates in AR space.
Sadly, I can't even get it to build at present.
As an update to this, I've done the following: created a new class in Nodes, based on LocationNode, called ThreeDNode:
open class ThreeDNode: LocationNode {
    // Class for placing 3D objects in AR space
    public let threeDObjectNode: LocationNode

    public init(location: CLLocation?, scene: SCNScene) {
        let boxGeometry = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
        //let boxNode = SCNNode(geometry: boxGeometry)
        threeDObjectNode = LocationNode(location: location)
        threeDObjectNode.geometry = boxGeometry
        threeDObjectNode.removeFlicker()
        super.init(location: location)
        addChildNode(threeDObjectNode)
    }

    required public init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
and then in POIViewController I tried to place the 3D object in AR space with the following code:
//example using 3d box object
let coordinate2 = CLLocationCoordinate2D(latitude: 52.010339, longitude: -8.351157)
let location2 = CLLocation(coordinate: coordinate2, altitude: 300)
let asset = SCNScene(named: "art.scnassets/ship.scn")!
let object = ThreeDNode(location: location2, scene: asset)
//add to scene with confirmed location
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: object)
No joy :( Any help, much appreciated.
You'll want to create a location node and add your box as its geometry, or as a child node:
let boxNode = SCNNode(geometry: box)
let locationNode = LocationNode(location: location)
locationNode.addChildNode(boxNode)
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: locationNode)
I didn't run this code in Xcode so you may need to tweak.
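The geometry route mentioned above would look like this (also a sketch I haven't run):
// Alternative: attach the box directly as the location node's geometry;
// LocationNode is an SCNNode subclass, so this behaves the same way.
let locationNode = LocationNode(location: location)
locationNode.geometry = box
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: locationNode)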
I'm trying to add a node (a sphere) to a body model but it doesn't work properly after I rotate the model through a pan gesture.
Here's how I'm adding the node (using a long tap gesture):
func addSphere(sender: UILongPressGestureRecognizer) {
    switch sender.state {
    case .began:
        let location = sender.location(in: bodyView)
        let hitResults = bodyView.hitTest(location, options: nil)
        if hitResults.count > 0 {
            let result = hitResults.first!
            let secondSphereGeometry = SCNSphere(radius: 0.015)
            secondSphereGeometry.firstMaterial?.diffuse.contents = UIColor.red
            let secondSphereNode = SCNNode(geometry: secondSphereGeometry)
            let vpWithZ = SCNVector3(x: result.worldCoordinates.x,
                                     y: result.worldCoordinates.y,
                                     z: result.worldCoordinates.z)
            secondSphereNode.position = vpWithZ
            bodyView.scene!.rootNode.addChildNode(secondSphereNode)
        }
    default:
        break
    }
}
Here is how I rotate the view:
func rotateGesture(sender: UIPanGestureRecognizer) {
    let translation = sender.translation(in: sender.view)
    var newZAngle = Float(translation.x) * Float.pi / 180.0
    newZAngle += currentZAngle
    bodyView.scene!.rootNode.transform = SCNMatrix4MakeRotation(newZAngle, 0, 0, 1)
    if sender.state == .ended {
        currentZAngle = newZAngle
    }
}
And to load the 3D model I just do:
bodyView.scene = SCNScene(named: "male_body.dae") // bodyView is a SCNView in the storyboard
I found something related to the worldTransform property and also the function convertPosition(_:to:), but couldn't find an example that works well.
The problem is that, if I rotate the model, the spheres are not positioned properly. They're always positioned as if the model were in its initial state.
If I turn the body and long-tap his arm (on the side), the sphere is added somewhere floating in front of the body, as you can see above.
I don't know how to fix this. I'd appreciate it if someone could help me. Thanks!
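For what it's worth, a minimal sketch of the convertPosition idea mentioned in the question (my suggestion, untested against this exact project): parent the sphere to the node that was actually hit and use the hit's local coordinates, so the sphere follows any later rotation.
// Inside the hit-test handler, where `result` is the SCNHitTestResult:
let sphereGeometry = SCNSphere(radius: 0.015)
sphereGeometry.firstMaterial?.diffuse.contents = UIColor.red
let sphereNode = SCNNode(geometry: sphereGeometry)
// Local coordinates are relative to the hit node, so they stay
// correct when the model is rotated afterwards.
sphereNode.position = result.localCoordinates
result.node.addChildNode(sphereNode)
Alternatively, rootNode.convertPosition(result.worldCoordinates, from: nil) converts the world-space point into the rotated root node's local space before addChildNode.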