Why don't anchors' positions change after ARKit reconciles the recorded world map with the current environment?

Environment: ARKit 2.0, iPhone X, iOS 12.1
While running an ARSession in the ARWorldMappingStatusMapped state,
a few custom anchors at specified positions (#A) were added to the world map via ARSession.AddAnchor.
The world map was saved to a file.
Then the app was closed and the ARSession was restarted with:
configuration: the saved world map loaded from the file and assigned to ARKitWorldTrackingSessionConfiguration.initialWorldMap
run options: reset tracking and remove existing anchors
The session was then restarted by calling ARSession.runWithConfiguration with the configuration and run options above.
When the state of the ARSession indicates that it has reconciled the recorded world map with the current environment, the positions of the anchors (#B) are read from ARWorldMap.anchors.
But I found that the positions of the read anchors (#B) are unchanged compared to their positions when they were saved (#A). That seems incorrect. Why?
Because the initial position of the phone becomes the coordinate origin, and the app was closed and the phone was moved to another place before restarting, the position of the coordinate origin changed after restarting the ARSession. The ARSession should therefore also reconcile the positions of the ARAnchors saved in the world map to their proper places, so the positions of the ARAnchors should have changed.
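For reference, a minimal native-ARKit sketch of the save/restore flow described above (the question uses the Unity ARKit plugin names; saveWorldMap, restartSession, and mapURL here are illustrative placeholders, and error handling is omitted):

import ARKit

func saveWorldMap(from session: ARSession, to mapURL: URL) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let worldMap = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL)   // persist the map, including the custom anchors (#A)
    }
}

func restartSession(_ session: ARSession, with mapURL: URL) {
    guard let data = try? Data(contentsOf: mapURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    // Same run options as described above: reset tracking and remove existing anchors.
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}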

From the initialWorldMap documentation:
If successful, the tracking state becomes ARCamera.TrackingState.normal after a short time, indicating that the current world coordinate system and anchors match those from the recorded world map.
The positions of those anchors should be the same; however, the origin of the world coordinate system should change after relocalization.

I found the cause. An initial session run, with initialWorldMap nil and run options 0, was started after I restarted the session for reconciling. It conflicted with the reconciling run and cancelled the relocalization. As a result, the positions of the loaded anchors never change, because those anchors are unknown to the ARSession.

Related

How do you get AVAudioEnvironmentNode.listenerPosition to update?

I am trying to use an AVAudioEnvironmentNode() with AVAudioEngine and ARKit.
I am not using ARSCNView or any of the other AR views; this is just a plain ARSession.
I have a sound source -> AVAudioEnvironmentNode -> AVAudioEngine.mainOut.
I understand how to set the position of the sound source. I am trying to figure out how to move the audio listener, because I want to walk around the sound source in space.
The Apple documentation says that to update the node's listener position you use:
AVAudioEnvironmentNode.listenerPosition = AVAudio3DPoint(newX, newY, newZ)
When I pass the ARCamera's forward and up vectors to the node, that seems to change fine. However, when I try to change the position, I do not hear anything, and when I print a debug of listenerPosition, the output stays at the zero origin, even though the camera's position is moving.
Is there something I have to do to make the AVAudioEnvironmentNode's listener movable so that it takes a new position?
Thanks.
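For context, the per-frame update being described would look roughly like this. This is a sketch only: it assumes an ARSessionDelegate is wired up and that environmentNode is the AVAudioEnvironmentNode already attached to a running engine (setup not shown):

import ARKit
import AVFoundation

final class ListenerUpdater: NSObject, ARSessionDelegate {
    let environmentNode: AVAudioEnvironmentNode

    init(environmentNode: AVAudioEnvironmentNode) {
        self.environmentNode = environmentNode
        super.init()
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let t = frame.camera.transform
        // Camera translation (4th column) -> listener position.
        environmentNode.listenerPosition = AVAudio3DPoint(x: t.columns.3.x,
                                                          y: t.columns.3.y,
                                                          z: t.columns.3.z)
        // Camera orientation: forward is -Z, up is +Y of the camera transform.
        environmentNode.listenerVectorOrientation = AVAudio3DVectorOrientation(
            forward: AVAudio3DVector(x: -t.columns.2.x, y: -t.columns.2.y, z: -t.columns.2.z),
            up: AVAudio3DVector(x: t.columns.1.x, y: t.columns.1.y, z: t.columns.1.z))
    }
}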

Get Orientation of SCNNode Swift [duplicate]

I am working on a basic racing game using Apple's SceneKit and am running into issues simulating a car. Using the SCNPhysicsVehicle behavior, I am able to properly set up the car and drive it using the documented methods.
However, I am unable to get the car's position. It seems logical that the SCNPhysicsVehicle would move the SCNNodes that contain the chassis and the wheels as the car moves but the SCNNodes remain at their original position and orientation. Strangely enough, the chassis' SCNPhysicsBody's velocity remains accurate throughout the simulation so I can assume that the car's position is based off of the SCNPhysicsBody and not the SCNNode. Unfortunately, there is no documented method that I have found to get an SCNPhysicsBody's position.
Getting a car's position should be trivial and is essential to create a racing game but I can't seem to find any way of getting it. Any thoughts or suggestions would be appreciated.
SceneKit automatically updates the position of the node that owns an SCNPhysicsBody based on the physics simulation, so SCNNode.position is the right property to look for.
The catch is that there are actually two versions of that node in play. The one you typically access is called the "model" node. It reflects the target values for properties you set, even if you set those properties through an animation. The presentationNode reflects the state of the node currently being rendered — if an animation is in progress, the node's properties have intermediate values, not the target values of the animation.
Actions, physics, constraints, and any scene graph changes you make inside update/render loop methods directly target the "presentation" version of your scene graph. So, to read node properties that have been set by the physics simulation, get the presentationNode for the node you're interested in (the node that owns the vehicle's chassisBody physics body), then read the presentation node's position (or other properties).
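For example, reading the physics-driven values each frame might look like this (a sketch; chassisNode stands in for the node that owns the vehicle's chassisBody, and the property is spelled presentation in current Swift, presentationNode in older versions):

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // The presentation node carries the values the physics simulation set for this frame.
    let carPosition = chassisNode.presentationNode.position
    let carOrientation = chassisNode.presentationNode.orientation
    print("car at \(carPosition), orientation \(carOrientation)")   // use in game logic instead
}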
I have the same problem with my player node.
I move it with applyForce (to manage collision detection).
But when I check the node position after some movement, the node position has not moved (the presentation node holds the actual position, as rickster writes in his answer).
I managed to update the SCNNode.position in the renderer loop.
You have to set the position of your node from the presentationNode position:
node.position = node.presentationNode.position
Set this in renderer(_:updateAtTime:) and your node position will stay in sync with any movement the physics simulation applies to the physicsBody.
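Put together, that looks roughly like this (a sketch; playerNode stands in for your physics-driven node, and the view's delegate must be set to the object implementing this method):

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Copy the physics-driven (presentation) position back onto the model node.
    playerNode.position = playerNode.presentationNode.position
}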

Apple ARKit inaccurate on iPhone X

I work for Stanley Black&Decker, doing high accuracy measuring with ARKit. I have been testing with an iPhone7+ and iPad Pro (extensively since July), and accuracy between AR and real world is pretty good (within a few inches over 40' for example). However, with iPhone X, the accuracy is off - by a foot or more over 40'. In fact the iPhone X seems to incorrectly scale everything by maybe 3% to 8% too small (for example 45' reality shows as 42' 2" AR). Has anyone else seen differences between models?
UPDATE: Excellent. There are (as you mentioned) several layers of abstraction. At the base is Visual Inertial Odometry (VIO), which uses the (random) feature point "cloud", gyro, and accelerometer to establish a world origin. The next layer is horizontal plane detection (plane anchors). It appears that every frame (60 fps) ARKit re-calculates (re-estimates) the world origin based on VIO. This induces a background jitter (usually ±1 mm per axis). If the feature point cloud gets too small, or changes too fast, the world origin becomes hard to estimate or is inconclusive, and origin continuity is lost.
But there is another condition where the origin and plane anchors have NOT changed, yet the POV instantaneously (within 16 ms) jumps by 0.5 to 2.5 meters. So ARKit incorrectly thinks the POV has moved, i.e. that the iPhone physically jumped. This is somewhat the opposite of the elevator case, where the iPhone DID move but the feature point cloud did not.
An unknown is whether plane anchors "feed back" into the world origin (or POV) estimation. I do not think so. If one or more planes are in view (in the frustum), there should not be any slippage, but there is. So it appears the world origin is determined only by VIO and the feature point cloud; hence plane anchors can move relative to the origin, and jitter, and they do.
On the original question: I used an iPhone 7 and an iPhone X side by side, and both detected the same (single) plane on the floor. But as I slowly move from the starting point, the iPhone 7 position (either by scnHit or POV) is pretty accurate (4 m reads as 4 m), while the iPhone X seems to underestimate the position (4 m shows as 3.5 m).
Yes, the model shifts over longer distances in ARKit.
ARKit works by mapping the environment and placing virtual coordinates on top of it. When you start an ARKit app, it first searches for and creates an anchor at a real-world spot where it can find enough feature points. As you move around, more anchors are added for different real-world objects or places, and ARKit tries to match already-seen places with the created anchors and position the virtual world (3D coordinates) accordingly.
If not enough feature points are found, the model shifts from its place because ARKit gets confused between real and virtual positioning. And when an anchor is added in this situation, the origin of the virtual world ends up shifted for that anchor.
Say that when the AR session started, the origin was in one corner of a table and you placed a model in the center of the table. Now you move to the other end of the table and the model shifts to the edge of the table because not enough feature points were found, and suddenly a new anchor is created while the model is on the edge. What happens now is that there are two anchors for the two ends of the table. If you move your camera to the first end of the table, it matches the first anchor and the model is placed in the center of the table; if you move your camera to the other end, it matches the second anchor and the model shifts to the edge of the table.
And the chance of this happening increases with distance.

Fluent movement with NetworkTransform & NetworkAnimator

My player character moves around the world using the Animator with root motion activated. Basically, the AI system sets the velocities on the Animator, which in turn controls the animation clips that drive character motion. As this is a standard feature that ensures very realistic animation without noticeable sliding, I thought this was a good idea ...until I added network synchronization.
Synching the characters over the Network using NetworkTransform and NetworkAnimation causes those two components to conflict:
NetworkTransform moves the character to whichever position the host commands.
NetworkAnimator syncs the animation vars and plays the Animation clips as host instructs it to, while those Animation clips also apply root motion.
The result is precise (meaning the character reaches the exact target destination), but the movement is very stuttering (noticeable jumps).
Removing NetworkTransform, the host and client instances of the characters desynchronize very quickly, meaning they end up at different positions in the world when controlled solely by the timing-dependent Animator.
Removing NetworkAnimator, client instances won't play the same animations as the host, if any animations at all.
I tried keeping both Components while disabling root motion for the Animator (on client only). In that case however, NetworkTransform does not seem to interpolate at all. The character just jumps from synched position to synched position in steps of about 0.02 units. Same with rotation.
NetworkTransform is configured to "Sync Transform", as the character neither has a RigidBody nor a CharacterController. All other values are the defaults: sync rate of 9 (also tried higher values there), movement threshold of 0.001, snap threshold of 5, interpolate movement = 1.
How do I get fluent root motion based movement on the Network? I expected that to be a standard scenario...
What you need is to disable the Root Motion flag on non-local instances, but also to interpolate the rotation, not just the movement.
Moreover, an interpolation of 1 seems high, as does the snap threshold of 5: those values do not seem realistic unless you are not using the Unity standard where 1 unit = 1 meter. I would use a 25 cm (0.25) interpolation for the movement and 3 degrees for the rotation. The sync rate of 9 could be enough, but in my experience it has to be recomputed based on the packet loss.

How can you obtain the ILPlotCube rotation information at runtime?

Suppose I plot a surface and at runtime I use the mouse to rotate it. Once the right rotation of the surface is achieved, how can I get its state?
Each driver creates a clone of the global scene, which is constantly synched and updated with changes from its source. The rotation is done on the clone. I have not tested it, but I think you can query objects (e.g. the plot cube) in the clone by:
panel.GetCurrentScene().First<ILPlotCube>(/*your filter if needed*/)
This instance will reflect all changes done by the user.
The method pointed out in user492238's answer does work. However, GetCurrentScene() assembles a new scene as a composition of the global and the local (to the current driver) scene. This can get costly if called frequently. If only individual objects / properties are needed, panel.SceneSyncRoot can be used instead.
Also, the rotation of a plot cube is exposed by the ILPlotCube.Rotation property. So, in order to get the current rotation of a plot cube (including the rotation due to user input):
panel.SceneSyncRoot.First<ILPlotCube>().Rotation