Apple ARKit inaccurate on iPhone X

I work for Stanley Black & Decker, doing high-accuracy measuring with ARKit. I have been testing extensively (since July) with an iPhone 7 Plus and an iPad Pro, and the accuracy between AR and the real world is pretty good (within a few inches over 40', for example). With the iPhone X, however, the accuracy is off by a foot or more over 40'. In fact, the iPhone X seems to incorrectly scale everything roughly 3% to 8% too small (for example, 45' in reality shows as 42' 2" in AR). Has anyone else seen differences between models?
UPDATE: Excellent. There are (as you mentioned) several layers of abstraction. At the base is visual-inertial odometry (VIO), which uses the (random) feature point "cloud", gyro, and accelerometer to establish a world origin. The next layer is horizontal plane detection (plane anchors). It appears that every frame (60 fps) ARKit re-calculates (re-estimates) the world origin based on VIO. This induces a background jitter (usually about +-1 mm per axis). If the feature point cloud gets too small, or changes too fast, the world origin becomes hard to estimate or is inconclusive, and origin continuity is lost.
But there is another condition where the origin and plane anchors have NOT changed, yet the POV instantaneously (within 16 ms) jumps by 0.5 to 2.5 meters. So ARKit incorrectly thinks the POV has moved, i.e. that the iPhone physically jumped. This is somewhat the opposite of the elevator case, where the iPhone DID move but the feature point cloud did not.
An unknown is whether plane anchors "feed back" into the world origin (or POV) estimation. I do not think so. If one or more planes are in view (in the frustum), then there should not be any slippage, but there is. So it appears the world origin is determined only by VIO and the feature point cloud; hence, plane anchors can move relative to the origin, and jitter, and they do.
On the original question: I use an iPhone 7 and an iPhone X side by side, and both detect the same (single) plane on the floor. But as I slowly move from the starting point, the iPhone 7 position (whether by hit test or POV) is pretty accurate (4 m reads as 4 m), while the iPhone X seems to underestimate the position (4 m shows as 3.5 m).
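For context, here is roughly the kind of measurement I am comparing across devices: a minimal sketch that assumes a floor plane has already been detected. The class and property names are just for illustration, and on newer iOS versions an ARRaycastQuery would replace the hit-test call used here.

import ARKit
import UIKit
import simd

// Minimal sketch: measure the straight-line distance between two tapped
// points on a detected plane, so the same physical distance can be
// compared across devices (e.g. iPhone 7 vs iPhone X).
class MeasureViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!
    private var markedPoints: [simd_float4] = []

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let location = gesture.location(in: sceneView)
        // Hit-test against already detected plane geometry.
        guard let result = sceneView.hitTest(location, types: .existingPlaneUsingExtent).first else { return }
        markedPoints.append(result.worldTransform.columns.3)

        if markedPoints.count == 2 {
            let a = markedPoints[0], b = markedPoints[1]
            let d = simd_distance(simd_float3(a.x, a.y, a.z), simd_float3(b.x, b.y, b.z))
            print(String(format: "Measured distance: %.3f m", d))
            markedPoints.removeAll()
        }
    }
}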

Yes, the model shifts at longer distances in ARKit.
ARKit works by mapping the environment and placing virtual coordinates on top of it. When you start an ARKit app, it first searches for a region of the real world where it can find enough feature points and creates an anchor there. As you move around, more anchors are added for different real-world objects or places, and ARKit tries to match already-seen places with the created anchors and position the virtual world (3D coordinates) accordingly.
If enough feature points are not found, the model shifts from its place because ARKit gets confused between real and virtual positioning. And when an anchor is added in this situation, the origin of the virtual world ends up shifted relative to that anchor.
Say that when the AR session started, the origin was at one corner of a table and you placed a model at the center of the table. Now you move to the other end of the table, and the model shifts to the edge of the table because ARKit did not find enough feature points; suddenly it creates a new anchor while the model is at the edge. What happens now is that it has two anchors for the two ends of the table. If you move your camera to the first end of the table, it matches the first anchor and the model is placed at the center of the table. If you move your camera to the other end, it matches the second anchor and shifts the model to the edge of the table.
And the chance of this happening increases with distance.
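One way to watch this happen is to log anchor updates: when ARKit re-matches a previously seen area, it adjusts existing anchors, and content attached to them appears to jump. A rough sketch, assuming an ARSCNView whose delegate is the hypothetical AnchorLogger below:

import ARKit

// Rough sketch: log where each anchor ends up whenever ARKit adjusts it,
// which makes the "model shift" described above visible in the console.
final class AnchorLogger: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        // Called whenever ARKit updates an anchor's transform
        // (for example after matching a previously mapped area).
        let p = anchor.transform.columns.3
        print(String(format: "anchor %@ now at (%.2f, %.2f, %.2f)",
                     anchor.identifier.uuidString, p.x, p.y, p.z))
    }
}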

How to smoothly move a node in an ARKit scene view based on device motion?

I'm a Swift beginner struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: First detect the floor plane, then place a sphere on the floor. From that point onwards depending on the movement of the device, I want to move the sphere along its x and z axis to move it around the floor of the room. (The sphere once created needs to be in the center of the device screen and locked to that view)
So far I can detect the floor and place a node, no problem. I can use device motion to obtain the device attitude (pitch, roll and yaw), but how do I translate these values into meaningful x, y, z positions that I can update my node with?
Are there any formulas or methods that are used to calculate such information or is this the wrong approach? I would appreciate a link to some info or an explanation of how to go about this. Also I am unsure how to ensure the node would be always at the center of the device screen.
So, as far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (which is already done)
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e. along its x and z axes, moving it around the floor of the room depending on the movement of the device)
Assuming that Step 1 is done, here is what you can do:
Get the position of the camera and the sphere
This should first be done within the function that is invoked after the sphere is created (be it a tap gesture recognizer action, touchesBegan(), etc.).
You can do it by reading the position property of the sphere's SCNNode, and, for the camera's position and/or orientation, by reading sceneView.session.currentFrame?.camera.transform, which contains all the necessary parameters about the current position of the camera.
Move the sphere as camera moves
Having the sphere's position in the scene and the transformation matrix of the camera, you can find the distance relation between them. Here you can find a good explanation of how exactly to do it.
After you have those things, you should implement the proper logic within renderer(_:didUpdate:for:) to keep the sphere continuously locked relative to the camera position; a rough sketch of one way to do this follows.
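As a sketch of that per-frame logic, assume a hypothetical ViewController that keeps references to sceneView, the sphereNode created in Step 1, and the floor height floorY; renderer(_:updateAtTime:) is used here as the per-frame hook (ARSCNViewDelegate inherits it from SCNSceneRendererDelegate):

import ARKit
import SceneKit
import simd

// Sketch: each frame, re-place the sphere a fixed distance in front of the
// camera, projected onto the floor plane, so it stays centered on screen
// while only moving along x/z. sceneView, sphereNode and floorY are assumed
// to be properties set up elsewhere.
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let frame = sceneView.session.currentFrame else { return }
        let cam = frame.camera.transform

        // Camera position and forward direction (the camera looks along -Z).
        let position = simd_float3(cam.columns.3.x, cam.columns.3.y, cam.columns.3.z)
        var forward = -simd_float3(cam.columns.2.x, cam.columns.2.y, cam.columns.2.z)

        // Flatten the forward vector onto the horizontal plane so the sphere
        // slides along the floor instead of rising and falling with camera pitch.
        forward.y = 0
        guard simd_length(forward) > 0.001 else { return }
        forward = simd_normalize(forward)

        let target = position + forward * 1.0   // 1 m in front of the camera
        sphereNode.simdPosition = simd_float3(target.x, floorY, target.z)
    }
}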
If you are interested in the math behind it, you can start by reading more about transformation matrices, which are a big part of image processing and many other areas.
Hope that this will help!

HoloLens: How to stabilize holograms at far distances

I want to place virtual objects (holograms) at far distances (20+ meters) in the HoloLens 1. However, at such distances holograms become unstable and appear to "swim" in the display. Has anyone had success with this? What worked for you?
Some potential fixes include:
Ensure 60 FPS
Adjust Stabilization Plane
Employ visual markers (Vuforia)
Use static room scan (may not scale well)
For me, frame rate is not an issue. And I am using Unity 2017.4.4f1. Currently, I have a single world anchor and all objects are set relative to this anchor.
20+ meters is a lot, and I am not sure if this will work well enough.
Ensuring 60 fps, or at least 50-55+, is important, but this won't solve the swimming at this distance. A low frame rate might only cause additional swimming :)
Everything that should appear statically placed in the room should be on or very close to the stabilization plane. So what you want to avoid is having the far objects at very different distances from the user; that would cause the ones farthest from the stabilization plane to swim.
If you only have the far-away object, try placing the stabilization plane at the same distance as the object. If the distances change a lot, you can also update the stabilization plane distance at runtime so that it always matches the current distance to the object.
Would be interesting to hear if it worked out :)
One more thing: if I remember correctly, objects should ideally be placed directly at or in close proximity to their world anchor to help stabilization.
20 metres is too far. The docs say:
Best practices: When holograms cannot be placed at 2m and conflicts between convergence and accommodation cannot be avoided, the optimal zone for hologram placement is between 1.25m and 5m. In every case, designers should structure content to encourage users to interact 1+ m away (e.g. adjust content size and default placement parameters).

Transformations in Unity 3D

I'm learning Unity from the book "Unity Game Development in 24 Hours". The book says:
Translation: Translation is an inert transformation. That means any changes applied after it won't be affected.
Scaling: Scaling effectively changes the size of the local coordinate grid. Basically, when you scale an object to be larger, you are really scaling the local coordinate system to be larger. This causes the object to seem to grow. This change is multiplicative. For example, if an object is scaled to 1 (its natural, default size) and then translated 5 units along the x axis, the object appears to move 5 units to the right. If the same object were to be scaled to 2, however, then translating 5 units on the x axis would result in the object appearing to move 10 units to the right. This is because the local coordinate system is now double the size and 5 times 2 equals 10. Inversely, if the object were scaled to .5 and then moved, it would appear to only move 2.5 units (.5 x 5 = 2.5)
I tried to experiment with these two effects, but it didn't work that way. For the translation, I can still apply changes after it. And for the scaling, it scaled the local coordinate system multiplicatively, but it didn't multiply the effect of the translation. Am I understanding this wrong, or is it the book?
Translating (using the Transform.Translate method) means moving the object's transform by some vector. Simple as that.
Local scale is a little bit more complicated. It scales not only the object itself but also all objects that are children of it. And the distance moved is relative: if you have a cube that is 1x1x1 in size and you move it by 1 unit, it moves its full length. If, however, you scale it by 2 and then move it by 1 unit, it moves only half its size.
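This is not Unity API code, but the underlying matrix math can be shown with a small Swift/simd sketch of what "multiplicative" means here: a translation expressed inside a scaled local space gets multiplied by the scale, whereas a translation applied independently of the scale does not. The numbers are the ones from the book's example.

import simd

// Build a translation matrix for a given offset.
func translation(_ t: SIMD3<Float>) -> float4x4 {
    var m = matrix_identity_float4x4
    m.columns.3 = SIMD4<Float>(t.x, t.y, t.z, 1)
    return m
}

// Build a uniform scale matrix.
func scale(_ s: Float) -> float4x4 {
    return float4x4(diagonal: SIMD4<Float>(s, s, s, 1))
}

let moveFive = translation(SIMD3<Float>(5, 0, 0))

// A 5-unit move expressed inside a local space that is scaled by 2:
let scaledLocalMove = scale(2) * moveFive
print(scaledLocalMove.columns.3.x)   // 10.0: the move was multiplied by the scale

// The same 5-unit move applied outside the scaled space:
let worldMove = moveFive * scale(2)
print(worldMove.columns.3.x)         // 5.0: the move is unaffected by the scale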
According to what you wrote, the book is probably a really bad source to learn Unity3D from. Try doing some official tutorials; they are really good and explain the basics really well. This one is pretty good, this one as well. And remember, anytime you are in doubt with Unity, try searching their really good documentation first.

iOS: Is it possible to create a rangefinder with 2 laser pointers and an iPhone?

I'm working on an iPhone robot that would be moving around. One of the challenges is estimating distance to objects - I don't want the robot to run into things. I saw some very expensive (~$1000) laser rangefinders and would like to emulate one using the iPhone.
I have one or two camera feeds and two laser pointers. The laser pointers are mounted about 6 inches apart, at an angle. The angle of the lasers in relation to the cameras is known. The angle of the cameras to each other is known.
The lasers are pointing ahead of the cameras, creating 2 dots in a camera feed. Is it possible to estimate the distance to the dots by looking at the distance between the dots in a camera image?
The lasers form a trapezoid, roughly like this:

      /  wall   \
     /           \
    / laser mount \

As the laser mount gets closer to the wall, the points should move further away from each other.
Is what I'm talking about feasible? Has anyone done something like that?
Would I need one or two cameras for such calculation?
If you just don't want to run into things, rather than have an accurate idea of the distance to them, then you could go "dambusters" on it and just detect when the two points become one - this would be at a known distance from the object.
For calculation, it is probably cheaper to have four lasers instead, in two pairs, each pair at a different angle, one pair above the other. Then a comparison between the relative differences of the dots would probably let you work out a reasonably accurate distance. Math overflow for that one, though.
In theory, yes, something like this can work. Google "light striping" or "structured light depth measurement" for some good discussions of using this sort of idea on a larger scale.
In practice, your measurements are likely to be crude. There are a number of factors to consider: the camera intrinsic parameters (focal length, etc) and extrinsic parameters will affect how the dots appear in the image frame.
With only two sample points (note that structured light methods use lines, etc), the environment will present difficulties for distance measurement. Surfaces that are directly perpendicular to the floor (and direction of travel) can be handled reasonably well. Slopes and off-angle walls may be detectable, but you will find many situations that will give ambiguous or incorrect distance measures.
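To make the geometry concrete, here is a hedged sketch of the single-camera version under a simple pinhole-camera assumption: two lasers a known baseline apart, each tilted inward by a known angle, viewed by a camera whose focal length is expressed in pixels. All of the numeric values are made up for illustration.

import Foundation

// World separation of the dots at range d:  s(d) = baseline - 2 * d * tan(tilt)
// Pixel separation in the image (pinhole):  p(d) = focalPx * s(d) / d
// Solving for d gives the estimator below.
func estimatedRange(pixelSeparation p: Double,
                    baseline: Double,     // metres between the two lasers
                    tilt: Double,         // inward tilt of each laser, radians
                    focalPx: Double) -> Double {   // focal length in pixels
    return focalPx * baseline / (p + 2 * focalPx * tan(tilt))
}

// Example: 6-inch (0.1524 m) baseline, 2-degree inward tilt, ~800 px focal length.
let range = estimatedRange(pixelSeparation: 40,
                           baseline: 0.1524,
                           tilt: 2.0 * Double.pi / 180.0,
                           focalPx: 800)
print(String(format: "Estimated range: %.2f m", range))

Note that this only holds for a flat surface roughly perpendicular to the camera, which is exactly the limitation described above.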

iPhone 3D compass

I am trying to build an app for the iPhone 4 which enables the user to "point" at a hardcoded destination and have a dot appear where the destination is located.
First, I use the compass to make a horizontal compass (this covers the left/right rotation):
// Heading
nowHeading = heading.trueHeading;
// Shift image (horizontal compass)
float shift = bearing - nowHeading;
destinationImage.center = CGPointMake(shift+160, destinationImage.center.y);
I shift the dot by 160 pixels because the screen is 320 pixels wide. My question is: how can I expand this code to handle up and down? Meaning that if I point the phone down at the table, the dot won't show; I have to point (as if taking a picture) at the destination in order for it to be drawn on the screen. I have already implemented the accelerometer, but I don't know how to combine these components to solve my problem.
The bearing shift should depend on the field of view of the camera. For the iPhone 4 the horizontal angular view is 47.5°, so 320 points / 47.5° ≈ 6.7 points per degree; use that to shift horizontally. You also have to add an adaptive filter to the accelerometer readings; you can get one from Apple's AccelerometerGraph sample project.
You have the rotation about one axis (the bearing); you should get the rotation about the other two from the accelerometers. The atan2 of two axes gives you the rotation about the third. Look at UIAcceleration and imagine an axis physically piercing the device, if that helps, and do double xAngle = atan2(acceleration.y, acceleration.z);. Once you have the up/down rotation, you can repeat what you did for the horizontal axis using the vertical field of view, e.g. about 60° for the iPhone.
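Putting the two together, here is a rough Swift sketch of the mapping, assuming the iPhone 4's 320x480 point screen and the field-of-view figures mentioned above (47.5° horizontal, roughly 60° vertical). The pitchAtHorizon parameter is a calibration constant that depends on how the device is held; the function and parameter names are only illustrative.

import CoreMotion
import UIKit

let pointsPerDegreeX = 320.0 / 47.5   // ≈ 6.7 points per degree
let pointsPerDegreeY = 480.0 / 60.0   // 8 points per degree

func dotCenter(trueHeading: Double,          // degrees, from CLHeading.trueHeading
               bearing: Double,              // degrees to the destination
               acceleration: CMAcceleration,
               pitchAtHorizon: Double) -> CGPoint {
    // Horizontal: angular difference between the target bearing and where we point.
    let headingDelta = bearing - trueHeading

    // Vertical: atan2 of two accelerometer axes gives the rotation about the
    // third axis (device pitch), converted here to degrees.
    let pitchDegrees = atan2(acceleration.y, acceleration.z) * 180.0 / Double.pi

    return CGPoint(x: 160.0 + headingDelta * pointsPerDegreeX,
                   y: 240.0 + (pitchDegrees - pitchAtHorizon) * pointsPerDegreeY)
}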
That is going to be a rough implementation :) and achieving smooth movement is difficult. One thing you can do is use the gyros to get a faster response and periodically correct their signal with the accelerometers. See this talk for the troubles ahead: Sensor Fusion on Android Devices. Here is a website dedicated to the Kalman filter. If you dare to use quaternions, I recommend "Visualizing Quaternions" by Andrew J. Hanson.
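Short of a full Kalman filter, a complementary filter is a common lightweight way to do that gyro/accelerometer correction. A minimal sketch (the alpha value and the choice of axis are illustrative, not tuned):

import CoreMotion

// Minimal complementary filter: the gyro gives a fast but drifting pitch,
// the accelerometer gives a slow but drift-free absolute pitch, and the
// filter blends the two on every update.
final class ComplementaryFilter {
    private(set) var pitch: Double = 0      // radians
    private let alpha = 0.98                // weight given to the gyro path

    func update(rotationRate: CMRotationRate,
                gravity: CMAcceleration,
                dt: Double) {
        // Integrate the gyro's pitch rate for a responsive estimate...
        let gyroPitch = pitch + rotationRate.x * dt
        // ...and compute an absolute pitch from gravity to cancel the drift.
        let accelPitch = atan2(gravity.y, gravity.z)
        pitch = alpha * gyroPitch + (1 - alpha) * accelPitch
    }
}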
It sounds like you are trying to do a style of augmented reality. If that is the case, there are several libraries and sample code suggested here:
Augmented Reality