I'm studying 3D reconstruction from two views with a fixed, known camera focal length. Something that is unclear to me: does triangulation give us the real-world scale of an object, or is the scale of the result different from the actual one? If the scale is different from the actual size, how can I find the depth of points from it? I was wondering whether there is more information I need in order to recover the real-world scale of the object.
Scale is arbitrary in SfM tasks, so the result may differ between reconstructions: the scene is only recovered up to a global scale factor, and points are initially placed at an essentially arbitrary depth.
You need at least one known distance in your scene to recover the absolute (real-world) scale. For example, include one object of known size in the scene; you can then compute the ratio between its real and reconstructed size and rescale the whole reconstruction afterwards.
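As a rough illustration of that rescaling step (not tied to any particular library; the coordinates and the 1.5 m known distance below are made-up example values):

using System;
using System.Numerics;

class ScaleRecovery
{
    static void Main()
    {
        // Two triangulated points whose real-world separation is known.
        Vector3 a = new Vector3(0.12f, 0.40f, 2.10f);
        Vector3 b = new Vector3(0.75f, 0.38f, 2.30f);
        float knownDistanceMeters = 1.5f;

        // Ratio between the true distance and the arbitrary-scale reconstructed distance.
        float scale = knownDistanceMeters / Vector3.Distance(a, b);

        // Multiplying every reconstructed point (and the camera baseline) by this factor
        // converts the whole reconstruction, including depths, to metric units.
        Console.WriteLine("Scale factor: " + scale);
        Console.WriteLine("Metric position of a: " + (a * scale));
    }
}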
I want to export an image in Google Earth Engine, and I want the pixel size to match some in-situ plots with dimensions 2 m x 30 m. How can I set the scale parameter to match these dimensions?
What I currently have (for a pixel size of 30 m x 30 m):
var myimage = sat_image.reduceRegions(my_points, ee.Reducer.first(), 30);
print(myimage);
Export.table.toDrive(myimage,
    "pts30",
    "insitu_points");
Instead of specifying the scale parameter, specify a crsTransform parameter. It is a “row-major ordering of the 3x2 transform matrix”, as the documentation puts it, which means that you should specify a list of numbers like
crsTransform=[xScaling, 0, 0, 0, yScaling, 0]
Note that these numbers are not the same as the scale parameter. The scale parameter specifies meters of nominal scale per pixel. These numbers specify distance in the projection's coordinate system per pixel. For example, if the projection's numerical units are degrees, then the crsTransform factors are degrees per pixel. Thus, it is usually a good idea to specify a crs together with crsTransform so that you know which projection you're measuring with.
Also, this may not be the best option for such very non-square pixels. Consider, instead of using reduceRegions on points, converting your points to rectangle features and then using reduceRegion with a mean reducer. This will have a similar effect, but lets you choose the exact shape you want to sample with.
I'm not sure how well either option will work or whether there are further issues to deal with, since I haven't done anything like this myself. But I looked around and there is very little discussion of crsTransform at all, so I figured it was worth writing up.
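For reference, a minimal sketch of what the reduceRegions call might look like with crsTransform (the EPSG code, the negative y scaling, and the 2 m x 30 m factors are assumptions to adapt to your data, not something from the original question):

var myimage = sat_image.reduceRegions({
  collection: my_points,
  reducer: ee.Reducer.first(),
  crs: 'EPSG:32634',                   // assumed UTM zone; pick one appropriate for your area
  crsTransform: [2, 0, 0, 0, -30, 0]   // 2 m per pixel in x, 30 m per pixel in y (the y scale is usually negative)
});
print(myimage);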
I saw some documents saying that there is no concept of length in Unity. All you can do to determine the dimensions of GameObjects is use Scale.
Then how can I set the overall relative dimensions between GameObjects?
For example, the dimensions of a 1:1:1 plane are obviously different from those of a 1:1:1 sphere. So how can I know the relative ratio between the plane and the sphere? One unit of length on the plane equals how many units of the sphere's diameter? Otherwise, how can I know whether I have set everything in the right proportions?
Well, what you say is right, but consider that objects can have a collider. In the case of a sphere, you can obtain the radius with SphereCollider.radius.
Also consider Bounds.extents, which is relative to the object's bounding box.
Again, considering the sphere, you can obtain the diameter with:
Mesh mesh = GetComponent<MeshFilter>().mesh;   // mesh of this GameObject
Bounds bounds = mesh.bounds;                   // local-space bounding box (does not include the Transform's scale)
float diameter = bounds.extents.x * 2;         // extents is half the size, so double it
// Multiply by transform.localScale.x if you need the world-space diameter.
All GameObjects in Unity have a Transform component, which determines their position, rotation, and scale. Most 3D objects also have a MeshFilter component, which holds a reference to the Mesh object.
The Mesh contains the actual shape of the object, for example the six faces of a cube or the faces of a sphere. Unity provides a handful of built-in objects (cube, sphere, cylinder, plane, quad), but this is just a starter kit. Most of those built-in objects are 1 unit in size, but that is purely because their vertices have been placed in those positions (so you need to scale by 2 to get a 2-unit size).
But there is no limit on positions within a mesh: you can have a tiny object or a whole terrain object, and they can be massively different in size while both keeping their scale at 1.
You should try to learn a 3D modelling application to create arbitrary objects.
Alternatively, try installing a plugin called ProBuilder, which used to be quite expensive and is now free (since it was acquired by Unity); it enables in-editor modelling.
Scales are best kept at 1, but it's good to have the option to scale: this way you can re-use the sphere mesh or the cube mesh at different scales (less memory wasted).
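If you want to check relative sizes in code, here is a minimal sketch (the field names and the assumption that both objects have Renderer components are mine). Renderer.bounds is a world-space box, so unlike MeshFilter.mesh.bounds it already includes the Transform's scale:

using UnityEngine;

public class SizeComparer : MonoBehaviour
{
    // Hypothetical references assigned in the Inspector.
    public GameObject plane;
    public GameObject sphere;

    void Start()
    {
        // World-space axis-aligned bounds, scale included.
        Vector3 planeSize = plane.GetComponent<Renderer>().bounds.size;
        Vector3 sphereSize = sphere.GetComponent<Renderer>().bounds.size;

        Debug.Log("Plane size in world units: " + planeSize);
        Debug.Log("Sphere diameter in world units: " + sphereSize.x);
    }
}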
In most Unity applications you settle on some arbitrary convention for scale.
So typically 1 m = 1 unit.
All things that are 1 unit tall are 1 m tall.
If you import a mesh from a modelling program that is the wrong size, scale it to exactly one meter (use a standard 1,1,1 cube as reference). Then, stick it inside an empty game object to “convert” it into your game’s proper scale. So now if you scale the empty object’s y axis to 2, the object is now 2 meters tall.
A better solution is to keep every object's highest parent in the hierarchy at 1,1,1 scale. Using the 1,1,1 reference cube, scale your object to a size that looks right; for example, if I had a model of a person, I'd want it scaled to roughly twice the cube's height. Then drag it into an empty object of 1,1,1 scale. This way, everything at your scene's “normal” size is 1,1,1, and if you want to double the size of something you make it 2,2,2. In practice this is much more useful than the first option.
Now, if you change its position by 1 unit, it effectively moves by what looks like the proper 1 m as well.
This process also lets you change where the “bottom” of an object is: you can change the position of the object inside the empty, creating an offset. This is useful for making models stand right on the ground at position y = 0.
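A rough sketch of that workflow done from a script rather than by hand (the field name and the single-Renderer assumption are mine, not part of the answer):

using UnityEngine;

public class ImportNormalizer : MonoBehaviour
{
    // Hypothetical reference to the imported model, assigned in the Inspector.
    public Transform importedModel;

    void Start()
    {
        // Measure the model's current world-space height from its renderer bounds.
        float height = importedModel.GetComponentInChildren<Renderer>().bounds.size.y;

        // Uniformly rescale the model so it is roughly 1 unit (1 m) tall.
        importedModel.localScale *= 1f / height;

        // Wrap it in an empty parent kept at 1,1,1 scale; scale and move the parent from now on.
        GameObject root = new GameObject(importedModel.name + "_root");
        root.transform.position = importedModel.position;
        importedModel.SetParent(root.transform, true);
    }
}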
Which MATLAB functions or examples should be used to (1) track the distance from a moving object to a stereo (binocular) camera pair, and (2) track the centroid (X, Y, Z) of moving objects, ideally in the range of 0.6 m to 6 m from the cameras?
I've used the MATLAB example that uses the PeopleDetector function, but this becomes inaccurate when a person is within 2 m because it begins clipping heads and legs.
The first thing you need to deal with is how to detect the object of interest (I suppose you have already resolved this issue). There are a lot of approaches for detecting moving objects. If your cameras will stand in a fixed position, you can work with only one camera and use background subtraction to get the objects that appear in the scene (some info here). If your cameras are moving, I think the best approach is to work with the optical flow of the two cameras (instead of using a previous frame to get the flow map, the stereo pair images are used to get the optical flow map in each frame).
In MATLAB there is a disparity computation function; this can help you detect the objects in the scene. After this you need to add a stage to extract the objects of interest, for which you can use some thresholds. Once you have the desired objects, put them in a binary mask. On this mask you can use an image-moments extractor (check this and this) to calculate the centroids. If the images in the binary mask look noisy, you can use some morphological operations to improve the results (watch this).
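A very rough sketch of that pipeline using Computer Vision Toolbox functions (the exact disparity function depends on your MATLAB release, and the variable names, thresholds, and the assumption that you already have calibrated stereoParams are mine):

% Rectify the calibrated stereo pair and compute a disparity map.
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparitySGM(rgb2gray(J1), rgb2gray(J2));

% Turn disparity into 3-D points (units follow the calibration, often millimetres).
xyzPoints = reconstructScene(disparityMap, stereoParams);

% Keep only points in the 0.6 m to 6 m working range, then clean up the mask.
Z = xyzPoints(:, :, 3);
mask = Z > 600 & Z < 6000;             % assumes the calibration units are millimetres
mask = imopen(mask, strel('disk', 5)); % morphological opening to remove noise

% Centroids of the remaining blobs (image moments via regionprops).
stats = regionprops(mask, 'Centroid');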
Is it possible for the Unity TerrainData structure to take absolute elevations? I have a terrain generator that generates absolute elevations, but they are huge. The Perlin octave with the highest amplitude is the one that decides what altitude the entire map sits at, with an amplitude of 2500 and a wavelength of 10000. In order for my map to tile properly and transition between altitudes seamlessly, I need to be able to use this system of absolute altitudes. I could scale my generator's output down to fit in the limited space (between 0 and 1) and stretch the y scale of the TerrainData, but it would lose too much precision.
What can I do? Is there a way I can use elevations that may vary by as much as 2500 meters?
One thing that might be important is that there will never be that much variation in the space of a single Terrain object, but across many, many Terrain objects, it is possible for the player to traverse that kind of altitude.
I've tested changing different variables, and I've reached the following conclusion...
Heightmap Resolution does not mean the precision of the data (some people I asked believed it determined the number of possible height values). It means the number of samples per row and column. This, along with Size, determines how far apart the samples are, and effectively how large the polygons of the terrain are. My impression is that there is no way to improve precision, although I now know how to increase the height of the terrain object. Instead, since I will never have 2500 meters of elevation difference within the same terrain object, I will put each piece of terrain produced by my generator into a Terrain object that is positioned and sized to contain all of the data in that square. The data will also have to be converted so that it fits, but other than that I see no drawbacks to this method.
Important note: the resolution must be 2^n + 1 for some integer n. If you provide a different value for the resolution, the next permitted value below your choice will be selected.
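A small sketch of that per-tile approach (the 1000 m tile size, the resolution handling, and the method name are illustrative, not from the answer): each tile stores heights relative to its own minimum elevation, and the Terrain object's world-space Y position carries the absolute offset.

using UnityEngine;

public class TerrainTileBuilder : MonoBehaviour
{
    public void ApplyTile(Terrain terrain, float[,] absoluteHeights, Vector3 tileOrigin)
    {
        // Find this tile's elevation range.
        float min = float.MaxValue, max = float.MinValue;
        foreach (float h in absoluteHeights)
        {
            min = Mathf.Min(min, h);
            max = Mathf.Max(max, h);
        }

        TerrainData data = terrain.terrainData;
        int res = absoluteHeights.GetLength(0);
        data.heightmapResolution = res;                                   // must be 2^n + 1, e.g. 513
        data.size = new Vector3(1000f, Mathf.Max(max - min, 1f), 1000f);  // 1000 m tile is an example value

        // Normalize into the 0..1 range SetHeights expects, relative to this tile only.
        float[,] normalized = new float[res, res];
        for (int y = 0; y < res; y++)
            for (int x = 0; x < res; x++)
                normalized[y, x] = (absoluteHeights[y, x] - min) / data.size.y;

        data.SetHeights(0, 0, normalized);

        // The absolute altitude is restored by where the tile sits in the world.
        terrain.transform.position = new Vector3(tileOrigin.x, min, tileOrigin.z);
    }
}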
I am having a problem transferring the positions of some objects in a still image (an RGB image) into a 2D plan view of the room where the image was taken. I have the coordinates of about 3 objects in the image (their x, y pixel coordinates) as well as the distances between them, and I want to transfer the positions of these 3 objects into the plan view.
Any help is much appreciated
You will probably need to clarify your question, but if I'm reading it the right way, it could be as simple as taking the ratio from one object to another.
For example, if your sensor is 640px wide, and that covers a horizontal length of 10 meters, then you know that every 64 pixels represents one meter in the real world.
Bear in mind that this assumes the objects in the real world are in the same plane, orthogonal to the lens vector. If the objects are in different planes (depths), then you have a bigger problem on your hands.
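A toy illustration of that ratio approach (the 640 px sensor width and 10 m ground coverage are the example numbers above; the pixel coordinates are invented, and it only holds under the same-plane assumption):

using System;

class PlanViewMapping
{
    static void Main()
    {
        // 640 px of image width covering 10 m of the scene.
        double imageWidthPx = 640.0;
        double groundWidthMeters = 10.0;
        double metersPerPixel = groundWidthMeters / imageWidthPx;   // about 0.0156 m per pixel

        // Hypothetical pixel coordinates of one detected object.
        double objectXPx = 320.0, objectYPx = 150.0;

        // Plan-view coordinates, assuming the object lies in the reference plane.
        Console.WriteLine("Plan position: (" + objectXPx * metersPerPixel + " m, "
                                             + objectYPx * metersPerPixel + " m)");
    }
}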