I'm trying to capture the "floor texture" based on an ARCore-detected plane and the environment (camera) texture, and then reapply this floor texture to a plane mesh, creating a digital floor based on reality.
I've uploaded an image to illustrate this:
This is not an ARCore-specific question; I think it can be solved with math and graphics programming, perhaps by unprojecting the plane based on the camera matrix, but I do not know exactly how to do that.
Can someone help me?
Thanks!!
Essentially, we have three important coordinate systems in this problem:
We have the usual 3D world coordinate system, which can be defined arbitrarily. We have a 2D coordinate system for the plane: the origin of that coordinate system is at the plane's center (note that I use the term plane synonymously with rectangle for this purpose) and the coordinates range from -1 to +1. And finally, we have a 2D coordinate system for the image. Actually, we have two for the image: an unsigned coordinate system (as shown in the figure) with the origin in the bottom left and coordinates ranging from 0 to 1, and a signed one with coordinates ranging from -1 to 1.
We know the four corners of our plane in world space and the 3x4 view/projection matrix P that allows us to project any point in world space to image space using homogeneous coordinates:
p_image,signed = P * p_world
If your projection matrix is 4x4, simply drop the third row (the second-to-last one), as we are not interested in image-space depth.
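In MATLAB notation, for example (P4 here is a hypothetical name for the full 4x4 view/projection matrix):

P = P4([1 2 4], :);   % keep rows 1, 2, 4; drop the image-space depth row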
We do not really care about world space as this is somewhat arbitrary. Given a 2D point in plane space, we can transform it to world space using:
p_world = 1/4 (p0 + p1 + p2 + p3) + u * 1/2 * (p1 - p0) + v * 1/2 * (p3 - p0)
The first part is the plane's origin and the point differences in the second and third terms are the coordinate axes. We can represent this in matrix form as
/ x \   / 1/2 (p1_x - p0_x)   1/2 (p3_x - p0_x)   1/4 (p0_x + p1_x + p2_x + p3_x) \   / u \
| y | = | 1/2 (p1_y - p0_y)   1/2 (p3_y - p0_y)   1/4 (p0_y + p1_y + p2_y + p3_y) | * | v |
| z |   | 1/2 (p1_z - p0_z)   1/2 (p3_z - p0_z)   1/4 (p0_z + p1_z + p2_z + p3_z) |   \ 1 /
\ 1 /   \ 0                   0                   1                               /
Let us call this matrix M.
Now, we can go directly from plane space to image space with:
p_image,signed = P * M * p_plane
The matrix P * M is now a 3x3 matrix. It is the homography between your ground plane and the image plane.
So, what can we do with it? We can use it to draw the image in plane space. So, here is what we are going to do:
We will generate a render target that we will fill with a single draw call. This render target will then contain the texture of our plane. To draw this, we:
1. Upload the camera image to the GPU as a texture
2. Bind the render target
3. Draw a full-screen quad with corners (-1, -1), (1, -1), (1, 1), (-1, 1)
4. In the vertex shader, calculate texture coordinates in image space from plane space
5. In the pixel shader, sample the camera image at the interpolated texture coordinates
The interesting part is number 4. We almost know what we need to do. We already know how to go to signed image space. Now, we just need to go to unsigned image space. And this is a simple shift and scale:
/ 1/2 0 1/2 \
p_image,unsigned = | 0 1/2 1/2 | * p_image,signed
\ 0 0 1 /
If we call this matrix S, we can then calculate S * P * M to get a single 3x3 matrix T. This matrix can be used in the vertex shader to calculate the texture coordinates from the plane-space points that you pass in:
texCoords = p_image,unsigned = T * p_plane
It is important that you pass the entire 3D vector to the fragment shader and do the perspective divide only in the pixel shader; otherwise the interpolation across the triangle will not be perspective-correct.
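To make the matrix plumbing concrete, here is a small MATLAB sketch (my own illustration; all variable names are hypothetical) that builds M from the corners, composes T, and maps one plane-space point to texture coordinates:

% p0..p3: 3x1 world-space corners of the plane; P: 3x4 view/projection matrix
M = [ (p1 - p0)/2, (p3 - p0)/2, (p0 + p1 + p2 + p3)/4 ;
      0, 0, 1 ];                  % 4x3: plane space -> world space
S = [ 0.5 0   0.5 ;
      0   0.5 0.5 ;
      0   0   1 ];                % signed -> unsigned image space
T = S * P * M;                    % 3x3: plane space -> texture space
q = T * [u; v; 1];                % homogeneous texture coordinate
texCoords = q(1:2) / q(3);        % perspective divide (do this per pixel)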
I'm trying to estimate a position based on the signal strength received from 4 Wi-Fi access points. I measure the signal strength from 4 access points, one located in each corner of a square room of 100 square meters (10x10 m). I recorded the signal strengths at a known position (x, y) = (9.5, 1.5) using an Android phone. Now I want to check how accurate a multilateration method can be under these circumstances.
Using MATLAB, I applied a formula to calculate distance using the signal strength. The following MATLAB function shows the application of the formula:
function [ d_vect ] = distance( RSS )
% Estimate distance (in meters) from signal strength using the free-space
% path loss model at f = 2400 MHz:
%   FSPL(dB) = 20*log10(d) + 20*log10(f) - 27.55
result = (27.55 - (20 * log10(2400)) + abs(RSS)) / 20;
d_vect = power(10, result);
end
The input RSS is a vector with the four signal strengths measured at the test point (x, y) = (9.5, 1.5). The RSS vector looks like this:
RSS =
-57.6000
-60.4000
-44.7000
-54.4000
and the resulting vector with the estimated distances to each access point looks like this:
d_vect =
7.5386
10.4061
1.7072
5.2154
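For reference, these distances come from calling the function directly on the measured vector:

RSS = [-57.6; -60.4; -44.7; -54.4];
d_vect = distance(RSS)   % the four distance estimates above, in meters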
Now I want to estimate my position based on these distances and the access points' positions, in order to find the error between the estimated position and the known position (9.5, 1.5). To estimate a position, I want to find the intersection area between four circles, where each access point is the center of one circle and the corresponding estimated distance is its radius.
I want to find the grey area as shown in this image :
http://www.biologycorner.com/resources/venn4.gif
If you want an alternative way of estimating the location without computing the intersection of circles, you can use trilateration. It is a common technique in navigation (e.g. GPS) to estimate a position given a set of distance measurements.
Also, if you wanted the area because you also need an estimate of the uncertainty of the position, I would recommend solving the trilateration problem using least squares, which readily gives you both the parameter estimates and an error propagation yielding the uncertainty of the location.
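As a sketch of the idea (my own minimal MATLAB illustration, not tied to any toolbox): subtracting the last circle equation from the others cancels the quadratic terms, leaving a linear system that an ordinary least-squares solve handles directly.

function p = trilatLS( AP, d )
% AP: n-by-2 access point coordinates, d: n-by-1 measured distances.
% Linearize (x - xi)^2 + (y - yi)^2 = di^2 by subtracting the n-th equation.
n = size(AP, 1);
A = 2 * (AP(1:n-1, :) - AP(n, :));
b = sum(AP(1:n-1, :).^2, 2) - sum(AP(n, :).^2) - d(1:n-1).^2 + d(n)^2;
p = A \ b;   % least-squares position estimate [x; y]
end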
I found an answer that solves the question perfectly. It is explained in detail at this link:
https://gis.stackexchange.com/questions/40660/trilateration-algorithm-for-n-amount-of-points
I also developed some MATLAB code for the problem. Here it goes:
Estimate distances from the Access Points:
function [ d_vect ] = distance( RSS )
result = (27.55 - (20 * log10(2400)) + abs(RSS)) / 20;
d_vect = power(10, result);
end
The trilateration function:
function [] = trilat( X, d, real1, real2 )
cla
% circles() is a third-party plotting helper (not a MATLAB built-in)
circles(X(1), X(5), d(1), 'edgecolor', [0 0 0], 'facecolor', 'none', 'linewidth', 4); % AP1 - black
circles(X(2), X(6), d(2), 'edgecolor', [0 1 0], 'facecolor', 'none', 'linewidth', 4); % AP2 - green
circles(X(3), X(7), d(3), 'edgecolor', [0 1 1], 'facecolor', 'none', 'linewidth', 4); % AP3 - cyan
circles(X(4), X(8), d(4), 'edgecolor', [1 1 0], 'facecolor', 'none', 'linewidth', 4); % AP4 - yellow
axis([0 10 0 10])
hold on
tbl = table(X, d);
d = d.^2;
weights = d.^(-1);              % weight each observation by 1/d^2
weights = transpose(weights);
beta0 = [5, 5];                 % initial guess: center of the room
modelfun = @(b,X)(abs(b(1)-X(:,1)).^2 + abs(b(2)-X(:,2)).^2).^(1/2);
mdl = fitnlm(tbl, modelfun, beta0, 'Weights', weights);
b = mdl.Coefficients{1:2, {'Estimate'}}
scatter(b(1), b(2), 70, [0 0 1], 'filled')    % estimated position - blue
scatter(real1, real2, 70, [1 0 0], 'filled')  % real position - red
hold off
end
Where,
X: matrix with APs coordinates
d: distance estimation vector
real1: real position x
real2: real position y
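For completeness, an example call (the AP layout here is my assumption: one AP in each corner of the 10x10 room, given as [x y] rows, so that X(1..4) are the x coordinates and X(5..8) the y coordinates):

X = [0 0; 0 10; 10 10; 10 0];          % assumed AP corner positions
RSS = [-57.6; -60.4; -44.7; -54.4];
trilat(X, distance(RSS), 9.5, 1.5)     % blue dot: estimate, red dot: truth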
If you have three sets of measurements with (x, y) coordinates of the location and the corresponding signal strength, such as:
m1 = (x1,y1,s1)
m2 = (x2,y2,s2)
m3 = (x3,y3,s3)
Then you can calculate distances between each of the point locations:
d12 = Sqrt((x1 - x2)^2 + (y1 - y2)^2)
d13 = Sqrt((x1 - x3)^2 + (y1 - y3)^2)
d23 = Sqrt((x2 - x3)^2 + (y2 - y3)^2)
Now consider that each signal strength measurement signifies an emitter for that signal, located somewhere at a distance. That distance would be a radius around the location where the signal strength was measured, because at this point one does not know the direction the signal came from. Also, the weaker the signal, the larger the radius. In other words, the radius would be inversely proportional to the signal strength: the smaller the signal strength, the larger the radius, and vice versa. So, calculate the proportional, although not yet accurate, radii of our three points:
r1 = 1/s1
r2 = 1/s2
r3 = 1/s3
So now, for each point pair, set apart by their distance, we can calculate a constant (C) at which the radii from the two locations will just touch one another. For example, for the point pair 1 & 2:
Ca * r1 + Ca * r2 = d12
... solving for the constant Ca:
Ca = d12 / (r1 + r2)
... and we can do this for the other two pairs, as well.
Cb = d13 / (r1 + r3)
Cc = d23 / (r2 + r3)
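A quick numeric illustration of these constants, in MATLAB for consistency with the rest of the thread (all values made up):

m = [0 0; 8 0; 4 6];          % hypothetical (x, y) measurement locations
s = [40; 55; 30];             % hypothetical signal strengths
r = 1 ./ s;                   % proportional radii
d12 = norm(m(1,:) - m(2,:));
d13 = norm(m(1,:) - m(3,:));
d23 = norm(m(2,:) - m(3,:));
Ca = d12 / (r(1) + r(2))
Cb = d13 / (r(1) + r(3))
Cc = d23 / (r(2) + r(3))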
All right... select the largest C constant, either Ca, Cb, or Cc. Then, use the parametric equation for a circle to find where the coordinates meet. I will explain.
The parametric equation for a circle is:
x = radius * Cos(theta)
y = radius * Sin(theta)
If Ca was the largest constant found, then you would compare points 1 & 2, offsetting each circle by its own measurement location, such as:
x1 + Ca * r1 * Cos(theta1) == x2 + Ca * r2 * Cos(theta2) &&
y1 + Ca * r1 * Sin(theta1) == y2 + Ca * r2 * Sin(theta2)
... iterating theta1 and theta2 from 0 to 360 degrees, for both circles. You might write code like:
for theta1 in 0..<360 {
    for theta2 in 0..<360 {
        // cos/sin expect radians, so convert the degree counters first
        let t1 = Double(theta1) * .pi / 180
        let t2 = Double(theta2) * .pi / 180
        if abs((x1 + Ca*r1*cos(t1)) - (x2 + Ca*r2*cos(t2))) < 0.01 &&
           abs((y1 + Ca*r1*sin(t1)) - (y2 + Ca*r2*sin(t2))) < 0.01 {
            print("point is: (", x1 + Ca*r1*cos(t1), y1 + Ca*r1*sin(t1), ")")
        }
    }
}
Depending on what your tolerance was for a match, you wouldn't have to do too many iterations around the circumferences of each signal radius to determine an estimate for the location of the signal source.
So basically you need to intersect 4 circles. There are many possible approaches; here are two that produce the exact intersection area.
The first approach is to start with one circle, intersect it with the second circle, then intersect the resulting area with the third circle, and so on. That is, at each step you know the current intersection area and you intersect it with a new circle. The intersection area will always be a region bounded by circular arcs. To intersect it with a new circle, you walk along the boundary of the area and check whether each bounding arc intersects the new circle. If it does, you keep only the part of the arc that lies inside the new circle, remember that you should continue with an arc from the new circle, and continue traversing the boundary until you find the next intersection.
Another approach, which seems to have worse time complexity (though with only 4 circles this will not matter), is to find all the pairwise intersection points of the circles and keep only those points that lie inside all the other circles. These points are the corners of your area, and from them it is rather easy to reconstruct the area. After googling a bit, I even found a live demo of this approach.
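A minimal MATLAB sketch of that second, point-based approach (my own illustration; it does not handle degenerate cases such as identical circles):

function corners = circleIntersectionCorners( c, r )
% c: n-by-2 circle centers, r: n-by-1 radii. Returns the pairwise
% intersection points that lie inside all circles, i.e. the corners
% of the common intersection area.
n = size(c, 1);
corners = [];
tol = 1e-9;
for i = 1:n-1
    for j = i+1:n
        d = norm(c(j,:) - c(i,:));
        if d > r(i) + r(j) || d < abs(r(i) - r(j)) || d == 0
            continue                              % no intersection points
        end
        a = (r(i)^2 - r(j)^2 + d^2) / (2*d);      % distance to chord midpoint
        h = sqrt(max(r(i)^2 - a^2, 0));           % half chord length
        m = c(i,:) + a * (c(j,:) - c(i,:)) / d;   % chord midpoint
        nv = [-(c(j,2) - c(i,2)), c(j,1) - c(i,1)] / d;   % unit normal
        for p = [m + h*nv; m - h*nv]'             % the two candidate points
            if all(sqrt(sum((c - p').^2, 2)) <= r + tol)
                corners = [corners; p'];          %#ok<AGROW>
            end
        end
    end
end
end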
I am in need of an idea! I want to model the vascular network on the eye in 3D. I have made statistics on the branching behaviour in relation to vessel diameter, length etc. What I am stuck at right now is the visualization:
The eye is approximated as a sphere E with center at the origin, C = [0, 0, 0], and a radius r.
What I want to achieve is that based on the following input parameters, it should be able to draw a segment on the surface/perimeter of E:
Input:
Cartesian position of previous segment ending: P_0 = [x_0, y_0, z_0]
Segment length: L
Segment diameter: d
Desired angle relative to the previous segment: a (1)
Output:
Cartesian position of resulting segment ending: P_1 = [x_1, y_1, z_1]
What I do now, is the following:
From P_0, generate a sphere with radius L, representing all the points we could possibly draw to with the correct length. This set is called pool.
Limit pool to only include points with a distance to C between r*0.95 and r, so only the points around the perimeter of the eye are included.
Select only the point that would generate a relative angle (2) closest to the desired angle a.
The problem is that whatever angle a I desire is not actually what is measured by the dot product. Say I want an angle of 0 (i.e. the new segment continues in the same direction as the previous one); what I actually get is an angle of around 30 degrees because of the curvature of the sphere. I guess what I want is more like the 2D angle seen when looking at the branching point from a direction orthogonal to the sphere. Please take a look at the screenshots below for a visualization.
Any ideas?
(1) The reason for this is that the child node with the greatest diameter usually follows the path of the previous segment, whereas smaller child nodes tend to branch off at different angles.
(2) Calculated by acos(dot(v1/norm(v1), v2/norm(v2)))
Screenshots explaining the problem:
Yellow line: previous segment
Red line: "new" segment to one of the points (not necessarily the correct one)
Blue x'es: Pool (text=angle in radians)
I will restate the problem with my own notation:
Given two points P and Q on the surface of a sphere centered at C with radius r, find a new point T such that the angle of the turn from PQ to QT is A and the length of QT is L.
Because the segments are small in relation to the sphere, we will use a locally-planar approximation of the sphere at the pivot point Q. (If this isn't an okay assumption, you need to be more explicit in your question.)
You can then compute T as follows.
% First compute an aligned orthonormal basis {U, V, W}:
% - {U, V} is a basis for the plane tangent to the sphere at Q.
% - W is the normal of that tangent plane.
% - U points along PQ, projected into the tangent plane.
W = (Q - C) / norm(Q - C);
U = (Q - P) / norm(Q - P);
U = U - W * dot(W, U);              % remove the component along W
U = U / norm(U);
V = cross(W, U);                    % already unit length
% Next compute the point S in the plane tangent at Q.
% In a regular plane, the parametric equation of a unit circle
% centered at the origin is
%   f(A) = (cos A, sin A) = (1,0) cos A + (0,1) sin A.
% We just do the same thing with the {U, V} basis instead of the
% standard basis {(1,0), (0,1)}.
S = Q + L * (U * cos(A) + V * sin(A));
% Finally project S onto the sphere, obtaining the segment endpoint T.
T = C + r * (S - C) / norm(S - C);
I have random non-overlapping spheres (stationary) in a cube. I am trying to implement periodic boundary conditions on the walls, so that if a sphere is cut in half by an edge, the other half appears on the opposite side. This seems to be a well-studied problem, but I am not sure how I should implement it for spheres.
At the start of the code, I have a random sphere position (c, r), where c = [x y z] lies inside or on the edges of a cube of dimensions dims = [l b h].
Code for selecting a random sphere in a cube:
function [ c, r ] = randomSphere_1( dims )
% creating one sphere at random inside [0..dims(1)]x[0..dims(2)]x...
% radius and center coordinates are sampled from a uniform distribution
% over the relevant domain.
%
% output: c - center of sphere (vector cx, cy, ...)
%         r - radius of sphere (scalar)
r = 0.15 + (0.55 - 0.15) .* rand(1);        % radius varies between 0.15mm - 0.55mm
c = bsxfun(@times, dims, rand(1,3)) + r;    % to make sure the sphere is placed inside or on the edge of the cube
% periodic condition
if ...   % this is the part I am stuck on
I am trying to write code such that if the center coordinates fall on an edge or beyond it, the spheres are periodic, that is, the other half appears on the other side. I have added an image just to give an idea of what I mean by a periodic boundary condition, so the spheres on the edges are exactly how I want them. How should I proceed with the problem?
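To make the goal concrete, here is the kind of wrapping I have in mind, as a rough sketch (my own attempt, not verified): for each dimension where the sphere pokes out of a face, add an image of the sphere shifted by the box length, and take the Cartesian product of the shifts.

function centers = periodicImages( c, r, dims )
% All periodic image centers (including the original) for a sphere with
% center c = [x y z] and radius r in the box [0,dims(1)] x [0,dims(2)] x [0,dims(3)].
shifts = cell(1, 3);
for k = 1:3
    s = 0;
    if c(k) - r < 0,       s = [s,  dims(k)]; end   % pokes out of the low face
    if c(k) + r > dims(k), s = [s, -dims(k)]; end   % pokes out of the high face
    shifts{k} = s;
end
centers = [];
for sx = shifts{1}
    for sy = shifts{2}
        for sz = shifts{3}
            centers = [centers; c + [sx, sy, sz]]; %#ok<AGROW>
        end
    end
end
end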
I have three properties of a surface: Easting, Northing, and depth at each (E, N) location.
I want to fit a surface to these points, then calculate the volume under this fitted surface for a given dx, dy, and dz, and then compare it with some other data.
Can you help me do that?
To calculate the volume of your surface compared to the flat plane at depth 0, just do
volume_ref = sum(sum(data)) * dx * dy * dz;
To get the volume compared to another surface (e.g. an anticline), calculate the volume of the anticline with respect to the same reference (depth 0) and then subtract.
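If the (E, N, depth) measurements are scattered rather than already gridded, one way to obtain "data" in the first place is to fit and resample the surface. A minimal sketch (my assumption: depth is already in length units, so no separate dz factor is needed; grid spacing is up to you):

% E, N, depth: column vectors of the scattered measurements
F = scatteredInterpolant(E, N, depth, 'natural', 'none');   % fitted surface
dx = 1; dy = 1;                                 % grid spacing, adjust to your data
[Eg, Ng] = meshgrid(min(E):dx:max(E), min(N):dy:max(N));
Dg = F(Eg, Ng);                                 % surface sampled on the grid
volume_ref = sum(Dg(:), 'omitnan') * dx * dy    % volume relative to depth 0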