Does ARKit ever automatically update or "detect"/add a pure ARAnchor by itself? (Like how it generates ARPlaneAnchors?)

What is the point of a "pure" ARAnchor (that isn't an ARPlaneAnchor)?
Does ARKit ever automatically update/track or add/detect a pure ARAnchor? (Like how it does with ARPlaneAnchor?)
Why would anyone want to add a custom ARAnchor to ARKit if it isn't being tracked?
Can a pure ARAnchor ever be automatically "upgraded" to an ARPlaneAnchor by ARKit?

The main reason why you might use a plain ARAnchor:
ARAnchor is mainly useful for non-SceneKit engines (SpriteKit, Metal, etc.). With ARSCNView it makes little difference by itself, since SceneKit's world coordinate space is already kept aligned with ARKit's.
If you're using ARSKView, you need a way to reference positions / orientations in 3D (real-world) space, because SpriteKit isn't 3D. You need ARAnchor to keep track of positions in 3D so that they can get mapped into 2D.
If you're building your own non-SceneKit engine with Metal (or GL)... that's not a 3D scene description API — it's a GPU programming API — so it doesn't really have a notion of world space. You can use ARAnchor as a bridge between ARKit's notion of world space and whatever you build.
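As a rough illustration of that SpriteKit bridge — a minimal sketch, assuming sceneView is an ARSKView with its delegate set, and hitResult is a hypothetical hit-test result:

import ARKit
import SpriteKit

// Drop a plain ARAnchor at a 3D point obtained elsewhere (e.g. a hit test).
let anchor = ARAnchor(transform: hitResult.worldTransform)
sceneView.session.add(anchor: anchor)

// ARSKViewDelegate: ARKit maps the anchor's 3D position into 2D screen space
// and asks which SpriteKit node should appear there.
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    let label = SKLabelNode(text: "📍")   // placeholder content
    label.fontSize = 40
    return label
}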
What's the difference between using ARAnchor to insert a node and directly insert a node?

Related

ARKit shows a map on floor/bottom of a screen

Using ARKit, I want to place an indoor map on the floor.
So far I've tried two things:
I placed a large plane below the camera and above the floor, but it drifts quite a bit. It doesn't stay in place well when we walk, and the overall experience is underwhelming.
I saw a solution that identifies a horizontal plane, but it has its own issues.
So is it really possible with good results?
Devices with LiDAR
The LiDAR scanner has its advantages and disadvantages. The main advantage of LiDAR is its ability to reconstruct floors and walls almost instantly; you can then easily attach any 3D model to the resulting surface – the model will be stable, it will not drift, so the user's AR experience won't be underwhelming. Another important advantage of LiDAR is its excellent performance in environments with poor lighting and poor textures.
Here you can read about the Occlusion feature and some of the LiDAR scanner's peculiarities. Good news: LiDAR works perfectly in conjunction with the Plane Detection option.
ARKit subdivides the reconstructed scene into ARMeshAnchors which give you access to polygonal geometry and surface classification.
ARMeshAnchor().geometry.classification
ARMeshAnchor().geometry.faces
ARMeshAnchor().geometry.vertices
ARMeshAnchor().transform.columns.3
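A minimal sketch of reading those properties from an ARSCNView delegate, assuming Scene Reconstruction has been enabled on the session's configuration:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor else { return }
    // Polygonal geometry of this reconstructed surface patch.
    let geometry = meshAnchor.geometry
    print("faces:", geometry.faces.count, "vertices:", geometry.vertices.count)
    // Per-face classification (wall, floor, ceiling, …) is only present
    // when the configuration uses .meshWithClassification.
    if let classification = geometry.classification {
        print("classification entries:", classification.count)
    }
    // World position of the mesh anchor (4th column of its transform).
    print("anchored at", meshAnchor.transform.columns.3)
}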
Devices without LiDAR
In the absence of a LiDAR scanner, we can only detect horizontal and vertical surfaces using the Plane Detection feature. I can say that all AR frameworks (including ARKit and RealityKit) are much better and faster at detecting horizontal surfaces than vertical ones.
However, Detected Planes are less stable than Reconstructed Surfaces, so slight drifting is possible at times. To successfully complete the Plane Detection stage, you need a well-lit room and surrounding objects with textures that are good for tracking.
ARKit calls your delegate's renderer(_:didAdd:for:) with an ARPlaneAnchor for each unique vertical and/or horizontal surface. Each plane anchor provides details about the surface – its world position, its dimensions, and the classification of the real-world surface.
In addition to the above, the renderer(_:didUpdate:for:) delegate method is needed to follow how ARKit merges multiple coplanar Detected Planes into one bigger resulting Detected Plane (the surface of a floor, for example).
ARPlaneAnchor().classification
ARPlaneAnchor().extent
ARPlaneAnchor().alignment
ARPlaneAnchor().center
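A minimal sketch of how those ARPlaneAnchor properties are typically consumed; the semi-transparent SCNPlane here is just an illustrative visualization:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    print(planeAnchor.alignment, planeAnchor.classification)
    // Visualize the detected plane.
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.4)
    let planeNode = SCNNode(geometry: plane)
    planeNode.simdPosition = planeAnchor.center
    planeNode.eulerAngles.x = -.pi / 2    // SCNPlane is vertical by default
    node.addChildNode(planeNode)
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let plane = node.childNodes.first?.geometry as? SCNPlane else { return }
    // ARKit grows and merges coplanar planes over time – keep the visualization in sync.
    plane.width = CGFloat(planeAnchor.extent.x)
    plane.height = CGFloat(planeAnchor.extent.z)
    node.childNodes.first?.simdPosition = planeAnchor.center
}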
Is it really possible with good results?
Yes, in both cases, it's possible to attach a map without drifting – whether you're using Plane Detection or Scene Reconstruction.

Trackable custom anchors in ARKit

Is there a way to create a custom anchor with ARKit that is trackable? I am trying to build an AR app, put an anchor on a moving 3D object, and have a SceneKit node render on that object. For example, can I create an anchor on someone's hand/leg/toe/dog/ball (without a VNHandPoseRequest) that conforms to ARTrackable? I have a model that gives me the exact 3D location of this initial anchor, and that is how I plan on initializing it, but now I need a way to track that 3D point in real time in an ARScene.

Scanning Real-World Object and generating 3D Mesh from it

An ARKit app allows us to create an ARReferenceObject and, using it, reliably recognize the position and orientation of real-world objects. We can also save the finished .arobject file.
However, ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object, and is not a displayable 3D reconstruction of that object.
func createReferenceObject(transform: simd_float4x4,
                           center: simd_float3,
                           extent: simd_float3,
                           completionHandler: @escaping (ARReferenceObject?, Error?) -> Void)
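For context, a hedged sketch of how that call is typically used to save the .arobject file — it assumes a session running an ARObjectScanningConfiguration and an ARSCNView called sceneView; the transform and extent values are placeholders:

sceneView.session.createReferenceObject(
    transform: matrix_identity_float4x4,   // placeholder: origin of the reference object
    center: simd_float3(0, 0, 0),
    extent: simd_float3(0.5, 0.5, 0.5)     // placeholder: 0.5 m scan box
) { referenceObject, error in
    guard let referenceObject = referenceObject else {
        print("scan failed:", error?.localizedDescription ?? "unknown error")
        return
    }
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("scan.arobject")
    try? referenceObject.export(to: url, previewImage: nil)   // the saved .arobject file
}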
My question:
Is there a method that allows us to reconstruct digital 3D geometry (low-poly or high-poly) from the .arobject file using Poisson Surface Reconstruction or Photogrammetry?
RealityKit 2.0 | Object Capture API
Object Capture API, announced at WWDC 2021, provides you with the long-awaited tools for photogrammetry. At the output we get a USDZ model with a hi-res texture.
Read about photogrammetry HERE.
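A minimal command-line sketch of driving the Object Capture API on macOS 12+; the input and output paths are placeholders:

import Foundation
import RealityKit

let imagesFolder = URL(fileURLWithPath: "/path/to/captured/images")   // placeholder
let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")           // placeholder

let session = try! PhotogrammetrySession(input: imagesFolder)

// Listen for progress, errors, and the finished model.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("progress:", fraction)
        case .requestComplete(_, .modelFile(let url)):
            print("USDZ written to", url)
            exit(0)
        case .requestError(_, let error):
            print("reconstruction failed:", error)
            exit(1)
        default:
            break
        }
    }
}

// Ask for a medium-detail USDZ model reconstructed from the photos.
try! session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
RunLoop.main.run()   // keep the tool alive while processing runs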
ARKit | Mesh Reconstruction
Using an iOS device with LiDAR and ARKit 3.5/4.0/5.0, you can easily reconstruct a topological map of the surrounding environment. The Scene Reconstruction feature starts working immediately after launching the current ARSession.
Apple's LiDAR works within a 5-meter range. The scanner helps you improve the quality of the ZDepth channel, as well as features such as People/Real-World Object Occlusion, motion tracking, immediate physics contact bodies, and raycasting.
Other awesome peculiarities of the LiDAR scanner are:
you can use your device in a poorly lit room
you can track pure white walls with no features at all
you can detect planes almost instantaneously
Keep in mind that the quality of a scanned object, when you're using LiDAR, isn't as good as you might expect. Small details are not scanned, because the resolution of Apple's LiDAR isn't high enough.
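A hedged sketch of turning those LiDAR-dependent features on, assuming an ARSCNView called sceneView:

let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh                           // topological map of the environment
}
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    config.frameSemantics.insert(.sceneDepth)                    // LiDAR-driven ZDepth channel
}
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    config.frameSemantics.insert(.personSegmentationWithDepth)   // People Occlusion
}
sceneView.session.run(config)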
You answered your own question with a quote from Apple's documentation:
An ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object, and is not a displayable 3D reconstruction of that object.
If you run that sample code, you can see for yourself the visualizations it creates of the reference object during scanning and after a test recognition — it's just a sparse 3D point cloud. There's certainly no photogrammetry in what Apple's API provides you, and there's not much to go on in terms of recovering realistic structure in a mesh.
That's not to say that such efforts are impossible — there have been some third parties demoing photogrammetry experiments built on top of ARKit (Here). But:
1. that's not using ARKit 2 object scanning, just the raw pixel buffer and feature points from ARFrame.
2. the level of extrapolation in those demos would require non-trivial original R&D, as it's far beyond the kind of information ARKit itself supplies.

What's the difference between using ARAnchor to insert a node and directly insert a node?

In ARKit, I have found 2 ways of inserting a node after the hitTest
Insert an ARAnchor then create the node in renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode?
let anchor = ARAnchor(transform: hit.worldTransform)
sceneView.session.add(anchor: anchor)
Insert the node directly
node.position = SCNVector3(hit.worldTransform.columns.3.x, hit.worldTransform.columns.3.y, hit.worldTransform.columns.3.z)
sceneView.scene.rootNode.addChildNode(node)
Both seem to work for me, but why choose one way over the other?
Update: As of iOS 11.3 (aka "ARKit 1.5"), there is a difference between adding an ARAnchor to the session (and then associating SceneKit content with it through ARSCNViewDelegate callbacks) and just placing content in SceneKit space.
When you add an anchor to the session, you're telling ARKit that a certain point in world space is relevant to your app. ARKit can then do some extra work to make sure that its world coordinate space lines up accurately with the real world, at least in the vicinity of that point.
So, if you're trying to make virtual content appear "attached" to some real-world point of interest, like putting an object on a table or wall, you should see less "drift" due to world-tracking inaccuracy if you give that object an anchor than if you just place it in SceneKit space. And if that object moves from one static position to another, you'll want to remove the original anchor and add one at the new position afterward.
Additionally, in iOS 11.3 you can opt in to "relocalization", a process that helps ARKit resume a session after it gets interrupted (by a phone call, switching apps, etc). The session still works while it's trying to figure out how to map where you were before to where you are now, which might result in the world-space positions of anchors changing once relocalization succeeds.
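Opting in is done from the session delegate — a one-method sketch:

// ARSessionObserver: return true to let ARKit attempt relocalization after an
// interruption instead of starting world tracking from scratch.
func sessionShouldAttemptRelocalization(_ session: ARSession) -> Bool {
    return true
}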
(On the other hand, if you're just making space invaders that float in the air, perfectly matching world space isn't as important, and thus you won't really see much difference between anchor-based and non-anchor-based positioning.)
See the bit around "Use anchors to improve tracking quality around virtual objects" in Apple's Handling 3D Interaction and UI Controls in Augmented Reality article / sample code.
The rest of this answer remains historically relevant to iOS 11.0-11.2.5 and explains some context, so I'll leave it below...
Consider first the use of ARAnchor without SceneKit.
If you're using ARSKView, you need a way to reference positions / orientations in 3D (real-world) space, because SpriteKit isn't 3D. You need ARAnchor to keep track of positions in 3D so that they can get mapped into 2D.
If you're building your own engine with Metal (or GL, for some strange reason)... that's not a 3D scene description API — it's a GPU programming API — so it doesn't really have a notion of world space. You can use ARAnchor as a bridge between ARKit's notion of world space and whatever you build.
So in some cases you need ARAnchor because that's the only sensible way to refer to 3D positions. (And of course, if you're using plane detection, you need ARPlaneAnchor because ARKit will actually move those relative to scene space as it refines its estimates of where planes are.)
With ARSCNView, SceneKit already has a 3D world coordinate space, and ARKit does all the work of making that space match up to the real-world space ARKit maps out. So, given a float4x4 transform that describes a position (and orientation, etc) in world space, you can either:
Create an ARAnchor, add it to the session, and respond to ARSCNViewDelegate callback to provide SceneKit content for each anchor, which ARKit will add to and position in the scene for you.
Create an SCNNode, set its simdTransform, and add it as a child of the scene's rootNode.
As long as you have a running ARSession, there's no difference between the two approaches — they're equivalent ways to say the same thing. So if you like doing things the SceneKit way, there's nothing wrong with that. (You can even use SCNVector3 and SCNMatrix4 instead of SIMD types if you want, but you'll have to convert back and forth if you're also getting SIMD types from ARKit APIs.)
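For illustration, a minimal sketch of both routes — the box and sphere geometry are just placeholders:

// Route 1 – anchor plus delegate: ARKit parents this node and keeps it
// positioned at the anchor's transform.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    return SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
}

// Route 2 – place a node directly in SceneKit world space.
let directNode = SCNNode(geometry: SCNSphere(radius: 0.05))
directNode.simdTransform = hit.worldTransform   // float4x4 from a hit test / raycast
sceneView.scene.rootNode.addChildNode(directNode)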
The one time these approaches differ is when the session is reset. If world tracking fails, if you resume an interrupted session, and/or if you start a session over again, "world space" may no longer line up with the real world in the same way it did when you placed content in the scene.
In this case, you can have ARKit remove anchors from the session — see the run(_:options:) method and ARSession.RunOptions. (Yes, all of them, because at this point you can't trust any of them to be valid anymore.) If you placed content in the scene using anchors and delegate callbacks, ARKit will nuke all the content. (You get delegate callbacks that it's being removed.) If you placed content with SceneKit API, it stays in the scene (but most likely in the wrong place).
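A hedged sketch of that reset, assuming sceneView is an ARSCNView:

// Restart tracking and drop all existing anchors. Content placed via anchors and
// delegate callbacks is removed along with them; nodes added directly to the
// SceneKit scene stay where they are.
let config = ARWorldTrackingConfiguration()
sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])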
So, which to use sort of depends on how you want to handle session failures and interruptions (and outside of that there's no real difference).
SCNVector3 is just "a representation of a three-component vector" (SCNVector3 docs).
When using ARAnchor, you don't just get a three-component vector — you are also able "to track the positions and orientations of real or virtual objects relative to the camera" (ARAnchor docs). And that's why you use the session to add the anchor instead of using the scene.
See the docs and you can see the difference in terms of the API :)
Hope it helps.

Unity3D: terrain object's textures flickering on long distances

I'm making an RTS project in Unity3D.
I created terrain with Unity's standard Terrain tool and added textures of grass, mud, etc. to it. Then, to create the "man-made" parts of the terrain (roads, sidewalks, road curbs, etc.), I created these objects as separate assets and placed them on the terrain.
And I have one issue with this solution: when moving the camera away from the terrain, the terrain's texture (e.g. mud) flickers through the roads, sidewalks and other objects. AFAIK, this bug is caused by insufficient accuracy of floating-point coordinates in the Unity3D engine (?).
Now I suspect my approach to creating terrain objects is not correct. Should I instead create one mesh containing the terrain and all man-made objects in 3D modeling software, and then create a UV map for texturing all of it? Is this approach correct? If "yes", is there any special approach for modelling and texturing an object as large and complex as a terrain?
I had the same issue a while ago and I solved it by increasing the camera's near plane value. I had many objects on the plane that were flickering when moving the camera, and it was due to having the near plane value at 0.01. I changed it to 0.5 and none of the objects were flickering anymore.
I hope this helps!