Does plane detection increase the accuracy of Augmented Image tracking? - unity3d

I have an application for visualizing a scan of a room, and I am using an Augmented Image to align my points to the real world. I am not using plane detection in my application, so it is optional for me.
However, I have some questions regarding tracking accuracy, because the accuracy of my alignment currently depends solely on how accurately I can detect the center position and corners of the augmented image.
Does using plane detection increase the accuracy of detecting an image's position in an Augmented Image application?
Does it also affect the accuracy of tracking and of ARCore's environmental understanding? Users can move around the room and inspect the scan. I also tested my application with and without plane detection, and it appears that with plane detection my alignment changes over time because of ARCore's environmental understanding: there is a shift in the anchors. This happens far less without plane detection.
Thanks in advance for any help!

Related

Robot movement measuring using matlab video processing

I'm doing a robot project. It needs to measure subtle movements in the XY direction while driving in the Z direction.
So I was thinking of using a camera with MATLAB and a blinking LED attached to a wall. That way, using image subtraction, I can identify the LED and, with a weight matrix, locate the center of the light.
Then, every period of time, I can log the number of pixels the center has moved in the left-right or up-down direction and check the accuracy of the motion.
But when attempting this sensing solution I had some challenges I couldn't overcome:
a light source like an LED/laser has soft edges, so the center is not accurate
the camera is not calibrated (and I'm not sure how to calibrate it)
Is there another simple solution to this problem?
Note: the amount of motion only needs to be proportional (relative measurements are fine).
You might be able to improve the accuracy of the LED's location by applying some kind of peak interpolation.
For the calibration: MATLAB offers an app for camera calibration; maybe that helps you.
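To illustrate the peak-interpolation idea, here is a minimal sketch (written in C# for illustration; the same three-point parabola fit is a one-liner in MATLAB). It refines the brightest pixel's position to sub-pixel accuracy by fitting a parabola through the peak and its two neighbours, once per axis, which helps with the soft edges of an LED. The image array I and the peak coordinates in the usage lines are hypothetical names:

// Sub-pixel refinement of a peak location along one axis.
// left, peak, right are the intensities at x-1, x and x+1 of the brightest pixel.
static double ParabolicPeakOffset(double left, double peak, double right)
{
    double denom = left - 2.0 * peak + right;
    if (denom == 0.0) return 0.0;            // flat top: no refinement possible
    return 0.5 * (left - right) / denom;     // offset in the range [-0.5, 0.5] pixels
}

// usage (hypothetical difference image I indexed [row, column]):
// double subPixelX = peakX + ParabolicPeakOffset(I[peakY, peakX - 1], I[peakY, peakX], I[peakY, peakX + 1]);
// double subPixelY = peakY + ParabolicPeakOffset(I[peakY - 1, peakX], I[peakY, peakX], I[peakY + 1, peakX]);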

ARKit shows a map on floor/bottom of a screen

Via ARKit, I want to place an indoor map on the floor.
So far I have tried two things:
I placed a large plane below the camera and above the floor, but it causes quite a lot of drift. It does not move well when we walk, and the overall experience is not convincing.
I saw a solution where you identify a horizontal plane, but it has its own issues.
So is it really possible with good results?
Devices with LiDAR
The LiDAR scanner has its advantages and disadvantages. The main advantage of LiDAR is its ability to almost instantly reconstruct the floor and walls; you can then easily attach any 3D model to the resulting surface. The model will be stable and will not drift, so the user's AR experience will be convincing. Another important advantage of LiDAR is its excellent performance in environments with poor lighting and poor textures.
Here you can read about the Occlusion feature and some of LiDAR's peculiarities. Good news: LiDAR works perfectly in conjunction with the Plane Detection option.
ARKit subdivides the reconstructed scene into ARMeshAnchors, which give you access to polygonal geometry and surface classification.
ARMeshAnchor().geometry.classification    // per-face surface classification
ARMeshAnchor().geometry.faces             // triangle indices of the mesh
ARMeshAnchor().geometry.vertices          // vertex positions of the mesh
ARMeshAnchor().transform.columns.3       // the anchor's position in world space
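If you are working in Unity, ARFoundation exposes the same scene reconstruction data through ARMeshManager on LiDAR devices. A minimal sketch, assuming ARFoundation 4.x (the manager must sit under the session origin and have a mesh prefab assigned):

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class MeshLogger : MonoBehaviour
{
    [SerializeField] ARMeshManager meshManager;   // assign in the Inspector

    void OnEnable()  { meshManager.meshesChanged += OnMeshesChanged; }
    void OnDisable() { meshManager.meshesChanged -= OnMeshesChanged; }

    void OnMeshesChanged(ARMeshesChangedEventArgs args)
    {
        // Each added chunk is a piece of the reconstructed scene mesh.
        foreach (MeshFilter chunk in args.added)
            Debug.Log($"New mesh chunk: {chunk.sharedMesh.vertexCount} vertices");
    }
}

(Per-face classification is ARKit-specific and is not shown here.)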
Devices without LiDAR
In the absence of a LiDAR scanner, we can only detect horizontal and vertical surfaces using the Plane Detection feature. I can say that all AR frameworks (including ARKit and RealityKit) are much better and faster at detecting horizontal surfaces than vertical ones.
However, Detected Planes are less stable than Reconstructed Surfaces, so some slight drifting is possible. To successfully complete the Plane Detection stage, you need a well-lit room and surrounding objects with textures that are good for tracking.
ARKit calls your delegate's renderer(_:didAdd:for:) with an ARPlaneAnchor for each unique vertical and/or horizontal surface. Each plane anchor provides details about the surface: its world position, dimensions, and the real-world surface's classification.
In addition, the renderer(_:didUpdate:for:) delegate method is needed to merge multiple coplanar Detected Planes into one bigger resulting Detected Plane (the surface of a floor, for example).
ARPlaneAnchor().classification    // floor, wall, table, seat, etc.
ARPlaneAnchor().extent            // the plane's estimated dimensions
ARPlaneAnchor().alignment         // horizontal or vertical
ARPlaneAnchor().center            // center of the plane in the anchor's coordinate space
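For a Unity project, ARFoundation wraps ARKit's plane detection in ARPlaneManager and exposes roughly the same per-plane data. A minimal sketch, assuming ARFoundation 4.x (property names vary slightly between versions):

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class PlaneLogger : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;   // assign in the Inspector

    void OnEnable()  { planeManager.planesChanged += OnPlanesChanged; }
    void OnDisable() { planeManager.planesChanged -= OnPlanesChanged; }

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        foreach (ARPlane plane in args.added)
        {
            // Rough equivalents of ARPlaneAnchor's classification, alignment,
            // center and extent, as surfaced through ARFoundation.
            Debug.Log($"classification={plane.classification} alignment={plane.alignment} " +
                      $"center={plane.center} size={plane.size}");
        }
    }
}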
Is it really possible with good results?
Yes, in both cases it's possible to attach a map without drifting, whether you're using Plane Detection or Scene Reconstruction.

How to generate a surface/plane around a real-world object (like a bottle) using Unity & ARCore?

I built an APK using the HelloAR scene (which is provided with the ARCore package). The app only detects horizontal surfaces, like a table, and creates its own semi-transparent plane over them. When I moved my phone around a bottle, the app again only created a horizontal plane cutting through the bottle. I expected ARCore to create planes along the bottle as I moved my phone around it, like polygons in a mesh.
In another scenario, I placed two books on the floor, each with a different thickness. But the HelloAR app creates only one semi-transparent horizontal surface over the thicker book, instead of creating two surfaces (one for each book).
What is going wrong here? How can I fix it and make the HelloAR app work more precisely? Please help.
Software: Unity v2018.2,
ARCore v1.11.0
ARCore generates an approximate point cloud as you move the device slowly; it identifies feature points by contrast in the different shapes. If you run your application in test mode in Unity, you can see how the points are placed in your empty scene.
Once the program has enough points at the "same height" (I don't know the exact precision), it generates the plane that you can see, but it won't detect separate planes whose heights differ by 5 cm or even somewhat more.
If you want to know the approximate accuracy of the app, test it with Unity and write a script to capture the generated points that were used to build the planes, then check the Y differences to see what the tolerance distance is (see the sketch below).
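A rough sketch of such a script, assuming the legacy GoogleARCore Unity SDK that ships with HelloAR (SDK 1.x; member names may differ slightly between versions). It logs the height of every detected plane so you can compare, for example, the floor and the two books:

using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class PlaneHeightLogger : MonoBehaviour
{
    readonly List<DetectedPlane> _planes = new List<DetectedPlane>();

    void Update()
    {
        // Query all planes ARCore currently knows about and print their Y positions.
        Session.GetTrackables<DetectedPlane>(_planes, TrackableQueryFilter.All);
        foreach (DetectedPlane plane in _planes)
            Debug.Log($"plane y = {plane.CenterPose.position.y:F3} m");
    }
}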
Vuforia is currently one of the leading SDKs for augmented reality, providing a wide range of detection options (images, ground, points, 3D objects, ...).
So regarding your question about detecting a bottle, I would most certainly use the 3D model detection feature (Model Targets). You can read the official docs here.
You first need to generate an approximation of the object in 3D modeling software and then use their generator tool to create the detection model. Then you put this into Unity and set up the detection (no coding needed).
I have some experience with this kind of detection. I used it to detect a large 2 m x 2 m scale model of an electric vehicle. It works great: you can walk around it and it tracks it through and through. You can see a short official demo here.
Hope this short explanation helps!

How to fix objects in place in an indoor environment with ARCore?

I need to insert some virtual objects into an indoor environment, but I need the positions of these objects to be fixed. I have already tried using markers with Vuforia, but it is complicated and takes time to recognize them. I'm thinking of using Google's ARCore. Does anyone know if this is possible and, if so, how to do it?
I'm using Unity to do this. Can someone help me?
ARCore places the camera relative to the detected plane, so you will need a plane at some point so the application can locate the camera in the scene.
HelloAR shows how this works; you can test it in the Unity editor and see how the camera moves around the points and the detected plane.
One solution to your problem may be ARCore's image detection combined with plane detection: you place the image on the floor, and once the image is detected your objects stay in place while you move around. But you need a detected plane to track against, not only the image detection, because otherwise you will lose the objects once the camera loses sight of the image. A rough sketch of the image-anchoring part is shown below.
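This is only a sketch, assuming the legacy GoogleARCore Unity SDK (names may differ slightly between versions): it creates an anchor on the Augmented Image once it is tracked and parents the virtual objects to that anchor, so ARCore's motion tracking keeps them in place afterwards.

using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class ImageAnchoredContent : MonoBehaviour
{
    public GameObject contentPrefab;                        // the virtual objects to place
    readonly List<AugmentedImage> _images = new List<AugmentedImage>();
    Anchor _anchor;

    void Update()
    {
        if (_anchor != null) return;                        // content already placed
        Session.GetTrackables<AugmentedImage>(_images, TrackableQueryFilter.Updated);
        foreach (AugmentedImage image in _images)
        {
            if (image.TrackingState != TrackingState.Tracking) continue;
            _anchor = image.CreateAnchor(image.CenterPose);  // anchor at the image center
            Instantiate(contentPrefab, _anchor.transform);   // content stays fixed to the anchor
            break;
        }
    }
}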

Camera-Offset | Project Tango

I am developing an augmented reality app for Project Tango using Unity3D.
Since I want virtual objects to interact with the real world, I use the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17 degrees so the Dynamic Mesh matches the room; however, there is still a significant offset to the live preview from the camera.
I was wondering if anyone who has dealt with this before could share their solution for aligning the Dynamic Mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera's offset from the device origin, which requires you to get the color camera's pose relative to the device. You can't query this directly, but you can get the device pose in the IMU frame and the color camera pose in the IMU frame, and from those work out the color camera pose in the device frame. The links above show example code.
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match what I'd expect.
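For reference, a minimal Unity sketch of both steps described above; treat the constants as approximations, as noted:

using UnityEngine;

public static class TangoCameraOffset
{
    // Compose the color-camera-in-device pose from the two IMU-frame poses you can query:
    // cameraInDevice = inverse(deviceInImu) * cameraInImu
    public static Matrix4x4 ColorCameraInDeviceFrame(Matrix4x4 deviceInImuFrame, Matrix4x4 cameraInImuFrame)
    {
        return deviceInImuFrame.inverse * cameraInImuFrame;
    }

    // Fallback: apply the rough constants from this answer to the Tango AR Camera,
    // relative to the Tango Delta Camera it is parented under (signs may need flipping).
    public static void ApplyApproximateOffset(Transform arCamera)
    {
        arCamera.localPosition = new Vector3(0.061f, 0.004f, -0.001f);
        arCamera.localRotation = Quaternion.Euler(13f, 0f, 0f);   // ~13 degrees around the x axis
    }
}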