When you create a scene in Reality Composer, you first have to choose an anchor type (floor, wall, face, or object). This means that when you load the scene, it automatically places itself on the specified anchor type.
My question is: is there any way to set the anchor manually from code, so that I could, for example, do a hit test and then anchor the scene to a specific point myself?
Thanks.
The official documentation has no reference to changing the default anchor at runtime, but from your description it sounds like you could try "Select Object Anchoring to Place a Scene Near Detected Objects", as described here:
https://developer.apple.com/documentation/realitykit/creating_3d_content_with_reality_composer/selecting_an_anchor_for_a_reality_composer_scene
You can easily apply another type of anchor (when implementing hit-testing or ray-casting) using the following code (the default anchor in Reality Composer is .horizontal):
import ARKit
import RealityKit
@IBAction func onTap(_ sender: UITapGestureRecognizer) {
    let estimatedPlane: ARRaycastQuery.Target = .estimatedPlane
    let alignment: ARRaycastQuery.TargetAlignment = .vertical
    let tapLocation: CGPoint = sender.location(in: arView)

    let result: [ARRaycastResult] = arView.raycast(from: tapLocation,
                                               allowing: estimatedPlane,
                                              alignment: alignment)
    guard let rayCast: ARRaycastResult = result.first
    else { return }

    // Anchor the Reality Composer scene at the ray-cast hit point.
    let anchor = AnchorEntity(world: rayCast.worldTransform)
    anchor.addChild(myScene)
    arView.scene.anchors.append(anchor)
}
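For completeness, here is a minimal setup sketch showing where myScene and the running session might come from. The Experience / loadBox() names are assumptions standing in for whatever loader Reality Composer generated for your own .rcproject:
import ARKit
import RealityKit
import UIKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    // The scene that the tap handler above re-anchors; loaded once at startup.
    var myScene: Entity!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Run a world-tracking session so the ray cast has real-world geometry to hit.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        arView.session.run(config)

        // Hypothetical loader generated by Reality Composer for an "Experience.rcproject"
        // containing a scene named "Box"; adjust to your project's names.
        myScene = try! Experience.loadBox()
    }
}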
Or you can place anchors automatically (for example, an ARFaceAnchor for a detected face):
extension ViewController: ARSessionDelegate {
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    guard let faceAnchor = anchors.first as? ARFaceAnchor
    else { return }

    let anchor = AnchorEntity(anchor: faceAnchor)
    // RealityKit's facial analog:
    // AnchorEntity(.face)
    anchor.addChild(glassModel)
    arView.scene.anchors.append(anchor)
}
}
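One caveat worth adding: the session(_:didUpdate:) method above only receives ARFaceAnchor updates while a face-tracking configuration is running. A minimal sketch of that setup, assuming the same arView outlet and delegate wiring:
override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self

    // Face anchors require the TrueDepth camera; bail out on unsupported devices.
    guard ARFaceTrackingConfiguration.isSupported else { return }

    let config = ARFaceTrackingConfiguration()
    arView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}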
...or you can place an ARImageAnchor the same way:
extension ViewController: ARSessionDelegate {
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    guard let imageAnchor = anchors.first as? ARImageAnchor,
          let _ = imageAnchor.referenceImage.name
    else { return }

    let anchor = AnchorEntity(anchor: imageAnchor)
    // RealityKit's image anchor analog:
    // AnchorEntity(.image(group: "Group", name: "model"))
    anchor.addChild(imageModel)
    arView.scene.anchors.append(anchor)
}
}
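Another option, if you prefer to keep the Reality Composer scene itself as the anchor, is to replace its anchoring component at runtime. This is a hedged sketch; Experience.loadBox() is an assumed, project-specific loader name generated by Reality Composer:
@IBAction func onTap(_ sender: UITapGestureRecognizer) {
    let tapLocation = sender.location(in: arView)

    guard let result = arView.raycast(from: tapLocation,
                                  allowing: .estimatedPlane,
                                 alignment: .any).first
    else { return }

    let boxScene = try! Experience.loadBox()

    // Re-target the scene to the ray-cast hit point instead of
    // the anchor type chosen in Reality Composer.
    boxScene.anchoring = AnchoringComponent(.world(transform: result.worldTransform))
    arView.scene.anchors.append(boxScene)
}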
I am trying to render a 2D image on the detected plane instead of the white/colored plane I currently have. The following code renders the plane as a solid color; I tried to make
the 2D image appear over the colored plane instead. The image I am using is in the Assets folder and is a PNG image (966 KB). I have looked through and researched the Apple docs and Stack Overflow for how to do this.
import UIKit
import SceneKit
import ARKit
import RealityKit
class ViewController: UIViewController, ARSCNViewDelegate {
@IBOutlet var sceneView: ARSCNView!
override func viewDidLoad() {
super.viewDidLoad()
// Set the view's delegate
sceneView.delegate = self
// Show statistics such as fps and timing information
sceneView.showsStatistics = true
}
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
// Create a session configuration
let configuration = ARImageTrackingConfiguration()
if let imageToTrack = ARReferenceImage.referenceImages(inGroupNamed: "BdayCardImages", bundle: Bundle.main) {
configuration.trackingImages = imageToTrack
configuration.maximumNumberOfTrackedImages = 1
}
// Run the view's session
sceneView.session.run(configuration)
}
override func viewWillDisappear(_ animated: Bool) {
super.viewWillDisappear(animated)
// Pause the view's session
sceneView.session.pause()
}
// MARK: - ARSCNViewDelegate
// This method finds and triggers the image plane
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode?
{
let node = SCNNode()
if let imageAnchor = anchor as? ARImageAnchor
{
let plane = SCNPlane(
width: imageAnchor.referenceImage.physicalSize.width,
height: imageAnchor.referenceImage.physicalSize.height
)
plane.firstMaterial?.diffuse.contents =
UIColor(red: 1, green: 1.3, blue: 0.5, alpha: 0.8)
let planeNode = SCNNode(geometry: plane)
if imageAnchor.referenceImage.name == "TamBDYCArd3"
{
if let bdayImage = ARImageAnchor(named: "assets/TamBDYCArd3") //Error here: 'Argument passed to call that takes no arguments'
{
if let imageNode = bdayImage.rootNode.childNode.first //Error here: Type of expression is ambiguous without more context'
{
imageNode.update(with: imageNode)
planeNode.addChildNode(imageNode)
if imageAnchor.referenceImage.name == "TamBDYCArd3" //the png image
{
if let bdayImage = ARImageAnchor(named: "assets/TamBDYCArd3")
{
if let imageNode = bdayImage.rootNode.childNode.first
{
imageNode.update(with: imageNode)
planeNode.addChildNode(imageNode)
planeNode.addChildNode(bdayImage)
}
}
}
}
return node
}
}
}
}
}
Any help is appreciated. I am a newb so thanks for your consideration and time.
I was expecting to render the 2D image over the plane. Thanks
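Since the goal is simply to show the 2D picture on the plane, here is a hedged sketch of one common approach: assign the UIImage to the plane's diffuse material instead of a color. The asset name "TamBDYCArd3" is taken from the question and assumed to exist in the asset catalog:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    guard let imageAnchor = anchor as? ARImageAnchor else { return node }

    let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                        height: imageAnchor.referenceImage.physicalSize.height)

    // Use the PNG from the asset catalog as the plane's texture instead of a solid color.
    plane.firstMaterial?.diffuse.contents = UIImage(named: "TamBDYCArd3")

    let planeNode = SCNNode(geometry: plane)
    // SCNPlane is vertical by default; rotate it to lie flat on the detected image.
    planeNode.eulerAngles.x = -.pi / 2
    node.addChildNode(planeNode)
    return node
}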
I spent many days trying to understand and follow examples without success.
My goal is to place a virtual AR object into the real world previously scanned with the LiDAR sensor. With showSceneUnderstanding enabled I can see the real-time mesh being created, and that works fine.
With a tap function I can insert a usdz file; that also works fine.
Because I have toyModel.physicsBody?.mode = .kinematic and self.arView.installGestures(for: toyRobot), I can move/scale the model.
Now I want to be able to move the model AND have it collide with the mesh generated by the LiDAR, so that, for example, when I move the model into a scanned wall it is stopped by the mesh.
Here is my complete code:
import UIKit
import RealityKit
import ARKit
class ViewController: UIViewController, ARSessionDelegate {
@IBOutlet var arView: ARView!
var tapRecognizer = UITapGestureRecognizer()
override func viewDidLoad() {
super.viewDidLoad()
self.arView.session.delegate = self
//Scene Understanding options
self.arView.environment.sceneUnderstanding.options.insert([.physics, .collision, .occlusion])
//Only for dev
self.arView.debugOptions.insert(.showSceneUnderstanding)
self.tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(placeObject(_:)))
self.arView.addGestureRecognizer(self.tapRecognizer)
}
@objc func placeObject(_ sender: UITapGestureRecognizer) {
// Perform a ray cast against the mesh (sceneUnderstanding)
// Note: Ray-cast option ".estimatedPlane" with alignment ".any" also takes the mesh into account.
let tapLocation = sender.location(in: arView)
if let result = arView.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .any).first {
// Load the "Toy robot"
let toyRobot = try! ModelEntity.loadModel(named: "toy_robot_vintage.usdz")
// Add gestures to the toy (only available if physicsBody mode == .kinematic)
self.arView.installGestures(for: toyRobot)
// Toy Anchor to place the toy on surface
let toyAnchor = AnchorEntity(world: result.worldTransform)
toyAnchor.addChild(toyRobot)
// Create a "Physics" model of the toy in order to add physics mode
guard let toyModel = toyAnchor.children.first as? HasPhysics else {
return
}
// Because toyModel is a fresh new model we need to init physics
toyModel.generateCollisionShapes(recursive: true)
toyModel.physicsBody = .init()
// Add the physics body mode
toyModel.physicsBody?.mode = .kinematic
// Generate a convex collision shape from the robot's mesh
let convexShape = ShapeResource.generateConvex(from: toyRobot.model!.mesh)
toyModel.components[CollisionComponent.self] = CollisionComponent(shapes: [convexShape], mode: .default, filter: .default)
// Finally add the toy anchor to the scene
self.arView.scene.addAnchor(toyAnchor)
}
}
}
Does someone know if it's possible to achieve that? Many thanks in advance!
Following the previous discussion with @AndyFedoroff, I added a convex ray cast so that the placed 3D object always collides with the mesh created by the LiDAR. Here is my full code. I don't know if I'm doing this correctly... in any case it still doesn't work.
import UIKit
import RealityKit
import ARKit
class ViewController: UIViewController, ARSessionDelegate {
@IBOutlet var arView: ARView!
var tapRecognizer = UITapGestureRecognizer()
var panGesture = UIPanGestureRecognizer()
var panGestureEntity: Entity? = nil
override func viewDidLoad() {
super.viewDidLoad()
self.arView.session.delegate = self
//Scene Understanding options
self.arView.environment.sceneUnderstanding.options.insert([.physics, .collision, .occlusion])
//Only for dev
self.arView.debugOptions.insert(.showSceneUnderstanding)
self.tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(self.placeObject))
self.arView.addGestureRecognizer(self.tapRecognizer)
self.panGesture = UIPanGestureRecognizer(target: self, action: #selector(self.didPan))
self.arView.addGestureRecognizer(self.panGesture)
}
@objc func placeObject(_ sender: UITapGestureRecognizer) {
// Perform a ray cast against the mesh (sceneUnderstanding)
// Note: Ray-cast option ".estimatedPlane" with alignment ".any" also takes the mesh into account.
let tapLocation = sender.location(in: arView)
if let result = self.arView.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .any).first {
// Load the "Toy robot"
let toyRobot = try! ModelEntity.loadModel(named: "toy_robot_vintage.usdz")
// Add gestures to the toy (only available if physicsBody mode == .kinematic)
self.arView.installGestures(for: toyRobot)
// Toy Anchor to place the toy on surface
let toyAnchor = AnchorEntity(world: result.worldTransform)
toyAnchor.addChild(toyRobot)
// Create a "Physics" model of the toy in order to add physics mode
guard let model = toyAnchor.children.first as? (Entity & HasPhysics)
else { return }
model.generateCollisionShapes(recursive: true)
model.physicsBody = PhysicsBodyComponent(shapes: [.generateBox(size: .one)],
mass: 1.0,
material: .default,
mode: .kinematic)
// Finally add the toy anchor to the scene
self.arView.scene.addAnchor(toyAnchor)
}
}
@objc public func didPan(_ sender: UIPanGestureRecognizer) {
print(sender.state)
let point = sender.location(in: self.arView)
switch sender.state {
case .began:
if let entity = self.arView.hitTest(point).first?.entity {
self.panGestureEntity = entity
} else {
self.panGestureEntity = nil
}
case .changed:
if let entity = self.panGestureEntity {
if let raycast = arView
.scene
.raycast(origin: .zero, direction: .one, length: 1, query: .all, mask: .all, relativeTo: entity)
.first
{
print("hit", raycast.entity, raycast.distance)
}
}
case .ended:
self.panGestureEntity = nil
default: break;
}
}
}
Try downcasting to Entity & HasPhysics:
guard let model = anchor.children.first as? (Entity & HasPhysics)
else { return }
model.generateCollisionShapes(recursive: true)
model.physicsBody = PhysicsBodyComponent(shapes: [.generateBox(size: .one)],
mass: 1.0,
material: .default,
mode: .kinematic)
Official documentation says:
For non-model entities, generateCollisionShapes(recursive:) method has no effect. Nevertheless, the method is defined for all entities so that you can call it on any entity, and have the calculation propagate recursively to all that entity’s descendants.
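Following that quote, here is a hedged sketch of the same idea applied to the question's code: generate the collision shapes and the physics body on the loaded ModelEntity itself (toyRobot), which does have mesh data, rather than on the anchor's first child. "result" is assumed to be the tap ray-cast result from the question's placeObject(_:):
let toyRobot = try! ModelEntity.loadModel(named: "toy_robot_vintage.usdz")

// ModelEntity carries mesh data, so this produces real collision shapes.
toyRobot.generateCollisionShapes(recursive: true)

toyRobot.physicsBody = PhysicsBodyComponent(shapes: [.generateConvex(from: toyRobot.model!.mesh)],
                                              mass: 1.0,
                                          material: .default,
                                              mode: .kinematic)

// Anchor at the tap ray-cast hit point, as in the question.
let anchor = AnchorEntity(world: result.worldTransform)
anchor.addChild(toyRobot)
arView.scene.addAnchor(anchor)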
I currently have a group of 5 reference images that my app is tracking. However, I want it to be able to distinguish between the reference images (i.e. whether the detected image is image A, image B, or image C; I'm tracking real photos I print out on postcards).
Currently, my code is able to detect an image and put a plane (i.e. a simple rectangle) over it. However, my question is: is it possible to distinguish whether the app has detected picture A vs. picture B? If so, how?
I know there's an option to use ML, but I wanted to see if there were any easier options in SceneKit/ARKit that I haven't considered, especially since I'm using the exact image and not trying to have the app guess an object.
class ViewController: UIViewController, ARSCNViewDelegate {
@IBOutlet var sceneView: ARSCNView!
override func viewDidLoad() {
super.viewDidLoad()
sceneView.delegate = self
}
// MARK: - WHERE I CONFIGURE MY APP TO DETECT AN IMAGE
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
let configuration = ARImageTrackingConfiguration()
guard let trackedImages = ARReferenceImage.referenceImages(inGroupNamed: "Photos", bundle: Bundle.main) else {
print("No images available")
return
}
configuration.trackingImages = trackedImages
configuration.maximumNumberOfTrackedImages = 7
sceneView.session.run(configuration)
}
// MARK: - WHERE I PLACE A PLANE OVER THE DETECTED IMAGE
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
let node = SCNNode()
if let imageAnchor = anchor as? ARImageAnchor {
let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
plane.firstMaterial?.diffuse.contents = UIColor(white: 1, alpha: 0.8)
let planeNode = SCNNode(geometry: plane)
planeNode.eulerAngles.x = -.pi / 2
node.addChildNode(planeNode)
}
return node
}
}
You can easily do it using such instance properties as referenceImage and name.
// The detected image referenced by the image anchor.
var referenceImage: ARReferenceImage { get }
and:
// A descriptive name for your reference image.
var name: String? { get set }
Here's what they look like in code:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let _ = imageAnchor.referenceImage.name
    else { return }

    anchorsArray.append(imageAnchor)

    if imageAnchor.referenceImage.name == "apple" {
        print("Image with apple is successfully detected...")
    }
}
As per Andy's solution, I used the code below, and used the variable imageName as a reference.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
guard let imageAnchor = anchor as? ARImageAnchor else { return }
let referenceImage = imageAnchor.referenceImage
let imageName = referenceImage.name ?? "no name"
}
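Building on that, here is a short hedged sketch of how imageName can drive what gets rendered for each postcard; the names "postcardA" and "postcardB" are hypothetical and stand in for the names set in the AR Resource Group:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let imageName = imageAnchor.referenceImage.name ?? "no name"

    let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                        height: imageAnchor.referenceImage.physicalSize.height)

    // Pick different content per detected reference image.
    switch imageName {
    case "postcardA":
        plane.firstMaterial?.diffuse.contents = UIColor.systemRed
    case "postcardB":
        plane.firstMaterial?.diffuse.contents = UIColor.systemBlue
    default:
        plane.firstMaterial?.diffuse.contents = UIColor(white: 1, alpha: 0.8)
    }

    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2
    node.addChildNode(planeNode)
}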
I want to detect 2D images using ARKit and RealityKit. I don't want to use SceneKit because many of my implementations are based on RealityKit, and I couldn't find any examples of detecting images with RealityKit. I referred to Apple's sample code at https://developer.apple.com/documentation/arkit/detecting_images_in_an_ar_experience, but it uses SceneKit and ARSCNViewDelegate.
let arConfiguration = ARWorldTrackingConfiguration()
arConfiguration.planeDetection = [.vertical, .horizontal]
arConfiguration.isLightEstimationEnabled = true
arConfiguration.environmentTexturing = .automatic
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "sanitzer", bundle: nil) {
arConfiguration.maximumNumberOfTrackedImages = 1
arConfiguration.detectionImages = referenceImages
}
self.session.run(arConfiguration, options: [.resetTracking, .removeExistingAnchors])
I have implemented ARSessionDelegate but am not able to detect the image:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
//how to capture image anchor?
}
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
//how to capture image anchor?
}
Apple uses ARSCNViewDelegate to capture the detected images. What is the equivalent of ARSCNViewDelegate in RealityKit? How do I detect an ARImageAnchor?
In an ARKit/RealityKit project, use the following code for the session() instance method:
import ARKit
import RealityKit

class ViewController: UIViewController, ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let imageAnchor = anchors.first as? ARImageAnchor,
              let _ = imageAnchor.referenceImage.name
        else { return }

        let anchor = AnchorEntity(anchor: imageAnchor)
        // Add the model entity to the anchor
        anchor.addChild(model)
        arView.scene.anchors.append(anchor)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        arView.session.delegate = self
        resetTrackingConfig()
    }

    func resetTrackingConfig() {
        guard let refImg = ARReferenceImage.referenceImages(inGroupNamed: "Sub",
                                                               bundle: nil)
        else { return }

        let config = ARWorldTrackingConfiguration()
        config.detectionImages = refImg
        config.maximumNumberOfTrackedImages = 1

        let options = [ARSession.RunOptions.removeExistingAnchors,
                       ARSession.RunOptions.resetTracking]
        arView.session.run(config, options: ARSession.RunOptions(options))
    }
}
And take into consideration that the group holding your reference images (in .png or .jpg format) must be an AR Resource Group, i.e. a folder with the .arresourcegroup extension inside the asset catalog.
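If you would rather not maintain an .arresourcegroup, reference images can also be built programmatically from an image bundled with the app. A minimal sketch, assuming a hypothetical asset named "sanitizer" and a known physical width of the printed image:
func makeDetectionImages() -> Set<ARReferenceImage> {
    guard let cgImage = UIImage(named: "sanitizer")?.cgImage else { return [] }

    // The physical width (in meters) must match the real printed image.
    let referenceImage = ARReferenceImage(cgImage,
                                          orientation: .up,
                                          physicalWidth: 0.1)   // 10 cm wide
    referenceImage.name = "sanitizer"
    return [referenceImage]
}
Then config.detectionImages = makeDetectionImages() would replace the referenceImages(inGroupNamed:bundle:) call in resetTrackingConfig().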
I want to detect object categories like doors and windows using CoreML and ARKit, and I want to find the measurements (height, width and area) of a door.
How can I detect such objects and add an overlay shape on them so that I can find the real-world position and measurements of the object?
Use ARKit's built-in object detection algorithm for that task. It's simple and powerful.
With ARKit's object detection you can detect your door (previously scanned as a reference object on your device).
The following code helps you detect real world objects (like door) and place 3D object or 3D text at ARObjectAnchor position:
import ARKit
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {

        if let _ = anchor as? ARObjectAnchor {
            let text = SCNText(string: "SIZE OF THIS OBJECT IS...",
                               extrusionDepth: 0.05)
            text.flatness = 0.5
            text.font = UIFont.boldSystemFont(ofSize: 10)

            let textNode = SCNNode(geometry: text)
            textNode.geometry?.firstMaterial?.diffuse.contents = UIColor.white
            textNode.scale = SCNVector3(0.01, 0.01, 0.01)

            node.addChildNode(textNode)
        }
    }
}
And supply Xcode's Resources group with reference objects (.arobject files) scanned from your real-life objects.
class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!
    let configuration = ARWorldTrackingConfiguration()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.debugOptions = .showFeaturePoints
        sceneView.delegate = self

        guard let dObj = ARReferenceObject.referenceObjects(inGroupNamed: "Resources",
                                                               bundle: nil)
        else {
            fatalError("There's no reference object")
        }

        configuration.detectionObjects = dObj
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
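Regarding the measurement part of the question, a hedged sketch: once an ARObjectAnchor is delivered, the scanned object's bounding box is available through referenceObject.extent (in meters), which could be used to fill in the text node above. Note this is the extent of the scanned reference object, not a live measurement of the detected instance:
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {

    guard let objectAnchor = anchor as? ARObjectAnchor else { return }

    // Approximate width / height / depth of the scanned object, in meters.
    let extent = objectAnchor.referenceObject.extent   // simd_float3
    print(String(format: "Detected object ~ %.2f m x %.2f m x %.2f m",
                 extent.x, extent.y, extent.z))
}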