How to measure floor area using ARCore? - unity3d

I'm a beginner, take it easy on me.
How would you measure the area of the floor using ARCore/Unity? I figure you somehow measure the area of the plane visualiser, or sum the area of each individual triangle, but I have no idea how to attack it.
The closest thing I can find is measuring distance...

You can get a (somewhat imprecise) estimate by multiplying
plane.getExtentX() * plane.getExtentZ();
This will most likely be too high, because it assumes the plane is rectangular, and it will also suffer from measurement errors in both directions. But it might be good enough depending on your use case.
A slightly more precise alternative would be to get the plane's polygon
plane.getPolygon()
and then compute the polygon's area.
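The polygon-area step is just the shoelace formula. Here is a minimal Python sketch of the math (the list-of-(x, z)-pairs input is a simplification: ARCore's `Plane.getPolygon()` actually hands back a flat buffer of interleaved x,z values in the plane's local frame, so you would pair them up first):

```python
def polygon_area(points):
    """Area of a simple polygon given as [(x, z), ...] vertices,
    via the shoelace formula."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, z1 = points[i]
        x2, z2 = points[(i + 1) % n]  # wrap around to the first vertex
        area += x1 * z2 - x2 * z1
    return abs(area) / 2.0

# A 2 m x 3 m rectangular patch of floor:
print(polygon_area([(0, 0), (2, 0), (2, 3), (0, 3)]))  # 6.0
```

Summing the per-triangle areas of the plane visualiser's mesh gives the same result, since that mesh triangulates the same boundary polygon.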

Related

Specifications of Checkerboard (Calibration) for obtaining maximum accuracy in stereo reconstruction

I have to reconstruct an object which will be placed around 1 to 1.5 meters away from the baseline of my stereo setup. The images captured by both cameras have high resolution (10 MP).
The accuracy with which I have to detect its position is +/- 0.5 mm along all three coordinate axes. (If you require more details, please let me know.)
For these, what should the optimal specifications of my checkerboard (for calibration) be?
I only know that it should be an asymmetric board, that it should be placed in the same distance range as where the object is expected to be, and that it should be captured at all possible orientations (making sure all corners are seen by both cameras).
What about:
Number of squares horizontally and vertically? (Also, which side should have more squares, and should the counts be odd or even?)
Dimension of each square on checkerboard?
What effect does the baseline distance have on this?
Do these parameters of the checkerboard affect my accuracy in anyway? Are there any other parameters I need to consider for calibration?
I am using the MATLAB Stereo Calibrator App.
I will try to answer as well as I can:
Number of squares. Well, as you can guess, the more squares (actually, it is the corners between the squares that are used!), the better the result, because you have a more overdetermined system of equations to solve. Additionally, the size of the chequerboard doesn't matter here; only the odd/even pattern of squares does.
Dimension of each square. The size does not matter much in the "mathematical" representation, but it matters practically. If your squares are very small, your printer probably won't produce sharp enough corners, and that will make your data noisier. In the past, for a really small calibration target, I had to go to a specialised printing shop so they could print it at the maximum possible quality. Of course, if you make the squares very big, you won't be able to fit many of them in the image, which is not good either.
The baseline distance only affects how well you can see the corners between the squares. The more accurately (in mm, real distance!) you detect these corners, the better. Obviously, if you make the squares small and place the board far away, you won't see much; this ties in with questions 1 and 2. Another problem you may have is depth of field. In an application I worked on, some really small, close objects had to be imaged. That was a problem while calibrating, because the range of z-distance I could see without blur was around 2 mm. This really crippled my ability to calibrate properly, because I couldn't use large angles in the z direction without getting blurred corners.
TL;DR: You want to have lots of corners between squares of the chequerboard but you want to see them as precisely as possible.
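To make the "overdetermined system" point concrete, here is a rough back-of-the-envelope count in Python (the figure of nine intrinsic parameters is an assumption; the exact count depends on which distortion model your toolbox fits):

```python
def calibration_dof(inner_rows, inner_cols, n_views, n_intrinsics=9):
    """Equations vs. unknowns for single-camera chequerboard calibration.
    Each detected corner contributes one (u, v) residual pair; each view
    adds six pose unknowns (rotation + translation)."""
    corners = inner_rows * inner_cols
    equations = 2 * corners * n_views
    unknowns = n_intrinsics + 6 * n_views
    return equations, unknowns

# A 7x10-square board has 6x9 inner corners; say 15 views:
eq, un = calibration_dof(6, 9, 15)
print(eq, un)  # 1620 99
```

The ratio of equations to unknowns grows with the corner count, which is why more (well-detected) corners improve the fit.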

Project GPS coordinates to Euclidean space

There are a lot of similar questions but I can't get a clear answer out of them. So, I want to represent latitude and longitude in a 2D space such that I can calculate the distances if necessary.
There is the equirectangular approach which can calculate the distances but this is not exactly what I want.
There is UTM, but it has many zones and letters, so computing a distance would have to take zone changes into account, which is not trivial.
I want a representation such that I can treat x,y as numbers in Euclidean space and apply the standard distance formula to them, without multiplying by the radius of the Earth every time I need the distance between two points.
Is there anything in Matlab that can change lat/long to x,y in Euclidean space?
I am not a matlab specialist, but the answer is not limited to matlab. Generally in GIS, when you want to perform calculations in Euclidean space, you have to apply a 'projection' to the data. There are various types of projections, one of the most popular being the Transverse Mercator.
The common feature of such projections is that you can't precisely represent the whole world with them. The projection is based on a chosen meridian and is precise enough only up to some distance from it (e.g. the Gauss-Krüger projection is quite accurate within about +-500 km of its central meridian).
You will always have to choose some kind of 'zone' or 'meridian', regardless of which projection you pick, because it is impossible to flatten a sphere into a plane without some deformation (be it of distance, angle, or area).
So if you are working on a set of data located in one geographical area, you can simply transform (project) the data and treat it as a normal Euclidean 2D space.
But if you intend to process data spread around the whole world, you will have to cluster it properly and project each cluster using the proper zone.
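As a sketch of the "project locally, then treat as Euclidean" idea, here is a minimal equirectangular projection around a reference point. It is Python rather than MATLAB, uses a spherical Earth (an approximation), and is accurate only near the reference point, which is exactly the zone limitation described above. In MATLAB, the Mapping Toolbox's `projfwd` performs proper map projections.

```python
import math

R = 6371000.0  # mean Earth radius in metres (spherical approximation)

def project(lat, lon, lat0, lon0):
    """Project (lat, lon) in degrees to local x, y metres around (lat0, lon0)."""
    x = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * R
    return x, y

def dist(p, q):
    """Plain Euclidean distance in the projected plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

a = project(52.00, 13.00, 52.0, 13.0)
b = project(52.01, 13.00, 52.0, 13.0)
print(dist(a, b))  # roughly 1112 m for 0.01 degrees of latitude
```

Once projected, all the standard Euclidean formulas apply directly, as the asker wanted.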

Stereo Camera Geometry

This paper describes nicely the geometry of a stereo image system. I am trying to figure out: if the cameras are tilted towards each other at a certain angle, how does the calculation change? I looked around but couldn't find any reference to tilted camera systems.
Unfortunately, the calculation changes significantly. The rectified case (where both cameras are well aligned with each other) has the advantage that you can calculate a disparity, and depth is inversely proportional to that disparity. This does not hold in the general case.
When you introduce tilts, you end up with something called epipolar geometry. Here is a paper about this I just googled. In order to calculate the depth of a pixel pair, you need the fundamental matrix or the essential matrix, neither of which is easy to obtain from the image pair alone. If, however, you have the geometric relation between the two cameras (translation and rotation), calculating these matrices is a lot easier.
There are several ways to calculate the depth of a pixel pair. One way is to use the fundamental matrix to rectify both images (although rectification is not easy either, nor even unique) and then run a simple disparity check.
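For the rectified case, the depth relation is simple enough to write down (the numbers are illustrative; this assumes the focal length is expressed in pixels and the baseline in metres):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo: Z = f * B / d, so depth is inversely
    proportional to disparity."""
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, baseline = 12 cm, disparity = 35 px:
print(depth_from_disparity(700, 0.12, 35))  # 2.4 (metres)
```

This one-liner is exactly what breaks down in the tilted case, which is why the fundamental/essential matrix machinery is needed there.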

MATLAB - What are the units of Matlab Camera Calibration Toolbox

When showing the extrinsic parameters of calibration (the 3D model including the camera position and the positions of the calibration checkerboards), the toolbox does not include units for the axes. It seemed logical to assume that they are in mm, but the z values displayed cannot possibly be correct if they are indeed in mm. I'm assuming that there is some transformation going on, perhaps having to do with optical coordinates and units, but I can't figure it out from the documentation. Has anyone solved this problem?
If you specified the side length of your squares in mm, then the z-distances shown are in mm.
I know next to nothing about matlab's tracking utilities (not entirely true, but I avoid matlab wherever I can, and that is almost always possible), but here's some general info.
Pixel dimensions on the sensor have nothing to do with the size of a pixel on screen or in model space. For all practical purposes, a camera produces a picture that has no meaningful units, and a tracking process is unaware of the scale of the scene (the perspective projection takes care of that). You can re-insert a scale by taking two tracked points and measuring the distance between them; the solver-space distance is pretty much arbitrary. If you know the real distance between these points, you can get a conversion factor by computing:
real distance / solver-space distance.
There's really no way of knowing this distance from the camera's settings, because the camera cannot differentiate between different scales of a scene. A perfect 1:100 replica is no different to the solver than the real thing. So you must always relate the reconstruction to something you can measure separately for each measuring session; the camera only ever produces something that is relative in nature.
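That conversion step is trivial to state as code (the distances below are made-up illustration values, not from any real solver):

```python
def scale_factor(real_distance, solver_distance):
    """Units-per-solver-unit: multiply any solver-space length by this
    to get real-world units."""
    return real_distance / solver_distance

# Two tracked points are 1.5 m apart in reality, 0.37 units in solver space:
s = scale_factor(1.5, 0.37)
print(s)  # metres per solver unit; s * any solver length gives metres
```

This is exactly why calibration targets have a printed, known square size: they provide the one real-world measurement that pins down the scale.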

how accurate is the altitude measurement in mobile phones

How accurate is the altitude measurement from a mobile phone's GPS? I've gathered that the lat/long can vary by hundreds of meters but is that same level of uncertainty present in the altitude values?
In particular I'm working with Windows Phone 7 but I'm sure that this question applies to other mobile devices. I expect that there are only a few GPS chip manufacturers and the same chip would be used by different phones.
This question deals with how it is calculated but it doesn't mention anything about accuracy or reliability.
I don't know specifically about the iPhone, but elevation is often much less accurate than X,Y information from a GPS. Here are some sources of information about this.
It requires fairly complicated math to understand fully, but no, the altitude on ANY GPS is not as accurate as the lat/long position.
http://weather.gladstonefamily.net/gps_elevation.html
http://gpsinformation.net/main/altitude.htm
http://www.sawmillcreek.org/showthread.php?83752-Do-civilian-GPS-unts-do-accurate-altitude
Quote from the third link: "The altitude error is much greater because it is a satellite based system. If you think about it, the best satellite positions for a perfect read are going to be evenly distributed in an imaginary sphere surrounding you. Unfortunately, since you are standing on the earth, that rules out half the sphere because you need line-of-sight to the satellite. As a practical matter, it even rules out a constellation with satellites close to the horizon. So, generally speaking, your fixes will be overhead--which means that the cumulative error is mainly in the vertical plane. So, I think the offhand estimate is vertical error = 1.5x horizontal error."
Vertical error is typically specified as 1.5 x the horizontal error. You must also allow for local deviation between the geodetic model and the actual planetary surface, because the oblate-spheroid geodetic model is only an approximation, even when local correction tables are in use.
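As a trivial worked example of that rule of thumb (the 1.5x factor is the rough figure quoted above, not a guarantee from any spec):

```python
def vertical_error(horizontal_error_m, factor=1.5):
    """Rule-of-thumb vertical uncertainty from horizontal uncertainty."""
    return factor * horizontal_error_m

# A typical ~5 m horizontal fix would imply roughly:
print(vertical_error(5.0))  # 7.5 (metres)
```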
Since the trilateration hands back a point in space, the same inaccuracies apply to the Z axis as to X and Y.
In other words, it's no more and no less accurate than the accuracy of the LAT/LONG.