WGS to Mercator - unity3d

We have a function that we use to convert WGS coordinates to Mercator. The goal is to have thin latitude bands towards the poles and wide ones close to the equator, so that the 3D geometry matches the imagery of the texture, all in an equirectangular projection.
Currently our function looks like this:
WgsToMercator(coord)
{
    yRadian = coord.y * Math.PI / 180.0;
    sinLat = Math.Sin(yRadian);
    y = 0.5 - Math.Log((1 + sinLat) / (1 - sinLat)) / (Math.PI * 4); // value between 0 and 1; 1 corresponds to -90 degrees and 0 to 90 degrees
    return y;
}
Current result is:
WgsToMercator(90) = 0;
WgsToMercator(45) = 0.36
WgsToMercator(0) = 0.5;
WgsToMercator(-45) = 0.64
WgsToMercator(-90) = 1;
Expected result would be:
WgsToMercator(90) = 0;
WgsToMercator(45) = 0.14
WgsToMercator(0) = 0.5;
WgsToMercator(-45) = 0.86
WgsToMercator(-90) = 1;
My math is rusty and I can't find a way to get the expected result. Thanks a lot in advance.

I can get approximately your numbers with sine squared: subtract your input from 90, take half of the result, convert to radians, and square the sine.
Something like (in Octave/Matlab):
function output = WgsToMercator(coord)
    rebased_angle = 90 - coord;                  % measure the angle down from the north pole
    half_angle = 0.5 * rebased_angle;
    angle_radians = half_angle * (3.1415/180.0); % truncated pi, hence 0.49998 rather than 0.5 below
    output = sin(angle_radians)*sin(angle_radians);
end
Which gets:
>> WgsToMercator(90)
ans = 0
>> WgsToMercator(45)
ans = 0.14644
>> WgsToMercator(0)
ans = 0.49998
>> WgsToMercator(-45)
ans = 0.85353
>> WgsToMercator(-90)
ans = 1.00000
In C# it's:
public float WgsToMercator(float coord)
{
    var rebasedAngle = 90.0f - coord;
    var halfAngle = 0.5f * rebasedAngle;
    var angleRadians = halfAngle * (Mathf.PI / 180.0f);
    return Mathf.Sin(angleRadians) * Mathf.Sin(angleRadians);
}
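Incidentally, the reason this matches is the half-angle identity sin^2((90 - phi)/2) = (1 - sin(phi))/2: the mapping is just a linear rescaling of sin(latitude) onto [0, 1]. A quick check of the expected values (a standalone Python sketch, separate from the Unity code above):

import math

def wgs_to_v(lat_deg):
    # Closed form equivalent to sin^2((90 - lat)/2): maps +90 -> 0, 0 -> 0.5, -90 -> 1.
    return (1.0 - math.sin(math.radians(lat_deg))) / 2.0

for lat in (90, 45, 0, -45, -90):
    print(lat, round(wgs_to_v(lat), 4))  # 0.0, 0.1464, 0.5, 0.8536, 1.0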
EDIT:
Here's a plot of the function I provided (in blue) and the points you've given (red circles).

Related

How to convert Bounding Box coordinates to Yolo Coordinates?

I am trying to convert Bounding box coordinates to Yolo coordinates. The bounding box coordinates are not in the typical format. They look like this:
1,-1,855,884,94,195,1,-1,-1,-1
1,-1,1269,830,103,202,0,-1,-1,-1
1,-1,1023,909,86,170,0,-1,-1,-1
1,-1,879,681,76,191,0,-1,-1,-1
How do I use the 1s, -1s, and 0s to convert these coordinates to Yolo format?
I tried this code to convert them to Yolo:
def convert(filename_str, coords):
    os.chdir("..")
    image = cv2.imread(filename_str + ".jpg")
    coords[2] -= coords[0]
    coords[3] -= coords[1]
    x_diff = int(coords[2]/2)
    y_diff = int(coords[3]/2)
    coords[0] = coords[0]+x_diff
    coords[1] = coords[1]+y_diff
    coords[0] /= int(image.shape[1])
    coords[1] /= int(image.shape[0])
    coords[2] /= int(image.shape[1])
    coords[3] /= int(image.shape[0])
    os.chdir("Label")
    return coords
I get negative Yolo coordinates with this format:
0 0.2871825876662636 0.5 -0.46009673518742444 -0.637962962962963
0 0.4147521160822249 0.4777777777777778 -0.7049576783555018 -0.5814814814814815
0 0.3355501813784764 0.5 -0.5665054413542926 -0.6842592592592592
Thanks in advance
Try this:
from PIL import Image

def convert(size, box):
    # size = (image_width, image_height); box = (xmin, xmax, ymin, ymax) in pixels
    dw = 1./size[0]
    dh = 1./size[1]
    x = (box[0] + box[1])/2.0  # box center, x
    y = (box[2] + box[3])/2.0  # box center, y
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x, y, w, h)

im = Image.open(img_path)
w = int(im.size[0])
h = int(im.size[1])
print(xmin, xmax, ymin, ymax)  # define your x,y coordinates
b = (xmin, xmax, ymin, ymax)
bb = convert((w, h), b)
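If the third through sixth fields of each row are x, y, width, height in pixels (an assumption on my part — the layout looks like MOT-style annotations, but the question doesn't say), you can also go straight from x/y/w/h to YOLO's normalized center format. A minimal sketch:

def xywh_to_yolo(x, y, w, h, img_w, img_h):
    # YOLO wants the box center and size, each normalized to [0, 1].
    return ((x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h)

# e.g. the first row "1,-1,855,884,94,195,..." on a hypothetical 1920x1080 image:
print(xywh_to_yolo(855, 884, 94, 195, 1920, 1080))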

Convert Fisheye Video into regular Video

I have a video stream coming from a 180 degree fisheye camera. I want to do some image-processing to convert the fisheye view into a normal view.
After some research and a lot of reading I found this paper.
They describe an algorithm (and some formulas) to solve this problem.
I tried to implement this method in Matlab. Unfortunately it doesn't work: the "corrected" image looks exactly like the original photograph, with no removal of distortion. On top of that, I only get the top-left part of the image rather than the complete image; changing the value of 'K' to 1.9 gives me the whole image, but it is still exactly the same image.
Input image:
Result:
When the value of K is 1.15 as mentioned in the article
When the value of K is 1.9
Here is my code:
image = imread('image2.png');
[Cx, Cy, channel] = size(image);
k = 1.5;
f = (Cx * Cy)/3;
opw = fix(f * tan(asin(sin(atan((Cx/2)/f)) * k)));
oph = fix(f * tan(asin(sin(atan((Cy/2)/f)) * k)));
image_new = zeros(opw, oph, channel);
for i = 1:opw
    for j = 1:oph
        [theta, rho] = cart2pol(i, j);
        R = f * tan(asin(sin(atan(rho/f)) * k));
        r = f * tan(asin(sin(atan(R/f))/k));
        X = ceil(r * cos(theta));
        Y = ceil(r * sin(theta));
        for k = 1:3
            image_new(i, j, k) = image(X, Y, k);
        end
    end
end
image_new = uint8(image_new);
warning('off', 'Images:initSize:adjustingMag');
imshow(image_new);
This is what solved my problem.
input:
    strength as floating point >= 0 (0 = no change; higher numbers mean stronger correction)
    zoom as floating point >= 1 (1 = no change in zoom)

algorithm:
    set halfWidth = imageWidth / 2
    set halfHeight = imageHeight / 2
    if strength = 0 then strength = 0.00001
    set correctionRadius = squareroot(imageWidth ^ 2 + imageHeight ^ 2) / strength
    for each pixel (x, y) in destinationImage
        set newX = x - halfWidth
        set newY = y - halfHeight
        set distance = squareroot(newX ^ 2 + newY ^ 2)
        set r = distance / correctionRadius
        if r = 0 then
            set theta = 1
        else
            set theta = arctangent(r) / r
        set sourceX = halfWidth + theta * newX * zoom
        set sourceY = halfHeight + theta * newY * zoom
        set color of pixel (x, y) to color of source image pixel at (sourceX, sourceY)
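For anyone who wants to run this, here is a minimal vectorized transcription of that pseudocode in Python/NumPy (my own sketch, with nearest-neighbor sampling and edge clamping added; not from the original source):

import numpy as np

def undistort(src, strength=1.0, zoom=1.0):
    # src: HxWxC uint8 array. Follows the pseudocode above: map each
    # destination pixel back to a source pixel via theta = atan(r) / r.
    h, w = src.shape[:2]
    half_w, half_h = w / 2.0, h / 2.0
    if strength == 0:
        strength = 0.00001
    correction_radius = np.sqrt(w**2 + h**2) / strength
    ys, xs = np.indices((h, w), dtype=np.float64)
    new_x, new_y = xs - half_w, ys - half_h
    r = np.sqrt(new_x**2 + new_y**2) / correction_radius
    safe_r = np.where(r == 0, 1.0, r)          # avoid division by zero at the center
    theta = np.where(r == 0, 1.0, np.arctan(r) / safe_r)
    src_x = np.clip(half_w + theta * new_x * zoom, 0, w - 1).astype(int)
    src_y = np.clip(half_h + theta * new_y * zoom, 0, h - 1).astype(int)
    return src[src_y, src_x]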

Why are the final image dimensions in my perspective projection not in (-1,-1) to (1,1)?

I have implemented a perspective projection algorithm according to chapter 6 of Computer Graphics: Principles and Practice (CGP&P) by Foley, van Dam, Feiner, Hughes (2nd edition). I have
N'per = M * Sper * SHpar * T(-prp) * R * T(-vrp).
As I understand it, the final image should be in canonical form, sized (-1,-1) to (1,1) with z in (0,-1). However, the final image X-Y dimensions (see Figure 1) do not seem correct. I'm mostly trying to understand how the final image size is determined. I have included the Matlab code below.

My frustum (f) is defined by an eyepoint (EP) at a specified lat/lon that has been converted to ECEF; distances: near plane (nDist) = 300; view plane (vDist) = 900; far plane (fDist) = 25000. A line of sight (LOS) vector created at the EP is the center of projection. The frustum correctly finds and returns the buildings that are within it along the LOS. The field of view is 10 deg x 10 deg. Now I'm just trying to project those buildings onto a defined window so that I can "quantize" (paint?) the grid and indicate which building is located at which x-y pair in the view plane. Unfortunately, because the window does not come back at the indicated size, the painting is more difficult for me. And besides, I'd just like to know what I'm doing wrong to not end up with the correct dimensions.
Matlab code (no attempt at optimization; just a brute-force implementation!):
function iPersProj = getPersProj(bldg, bi, f, plotpersp, fPersPlot)
    color = [rand rand rand];
    face = eFaces.bottom;
    iPersProjBtm = persproj(f, bldg, face);
    face = eFaces.top;
    iPersProjTop = persproj(f, bldg, face);
    iPersProj = [iPersProjTop; iPersProjBtm];
    hold on;
    scatter3(iPersProjTop(:,1), ...
             iPersProjTop(:,2), ...
             iPersProjTop(:,3), '+', 'CData', color);
    scatter3(iPersProjBtm(:,1), ...
             iPersProjBtm(:,2), ...
             iPersProjBtm(:,3), 'o', 'CData', color);
    pPersProj = [iPersProjTop; ...
                 iPersProjTop(1,:); ...
                 iPersProjBtm; ...
                 iPersProjBtm(1,:); ...
                 iPersProjBtm(2,:); ...
                 iPersProjTop(4,:); ...
                 iPersProjTop(3,:); ...
                 iPersProjBtm(3,:); ...
                 iPersProjBtm(4,:); ...
                 iPersProjTop(2,:); ...
                 iPersProjTop(1,:)];
    line(pPersProj(:,1), pPersProj(:,2), 'Color', color);
    text(pPersProj(1,1), pPersProj(1,2), int2str(bi));
end
function proj = persproj(f, bldg, face)
    vrp = f.vC;  % center of view plane
    vpn = f.Z;   % LOS for frustum
    cop = -f.EP;
    F = f.vDist - f.nDist;
    B = f.vDist - f.fDist;
    umin = -5;
    vmin = -5;
    umax = 5;
    vmax = 5;
    R = getrotation(f);
    Tvrp = gettranslation(-vrp);
    ed = R * Tvrp * [f.EP 1]'; % translate eyepoint to camera?
    prp = [0 0 ed(3)];
    sh = getsh(prp, umax, umin, vmax, vmin);
    Tprp = gettranslation(-prp);
    vrpp = -prp(3); %(sh * Tprp * [0;0;0;1]); % vrp-prime per CGP&P
    zmin = -(vrpp + F)/(vrpp + B);
    zmax = -(vrpp + B)/(vrpp + B);
    zprj = -vrpp/(vrpp + B);
    sper = getsper(vrpp, B, umax, umin, vmax, vmin);
    M = [1 0 0 0; ...
         0 1 0 0; ...
         0 0 1/(1+zmin) -zmin/(1+zmin); ...
         0 0 -1 0];
    proj = zeros(4,4);
    for i = 1:4
        Q = bldg.coords(i,:,face);
        uvdw = M * sper * sh * Tprp * R * Tvrp * [Q'; 1];
        proj(i,1) = uvdw(1);
        proj(i,2) = uvdw(2);
        proj(i,3) = uvdw(3);
    end
end
function sper = getsper(vrpz, B, umax, umin, vmax, vmin)
    dx = umax - umin;
    dy = vmax - vmin;
    sper = zeros(4,4);
    sper(1,1) = 2*vrpz/(dx*(vrpz+B));
    sper(2,2) = 2*vrpz/(dy*(vrpz+B));
    sper(3,3) = -1/(vrpz+B);
    sper(4,4) = 1;
end
function sh = getsh(prp, umax, umin, vmax, vmin)
    sx = umax + umin;
    sy = vmax + vmin;
    cw = [sx/2 sy/2 0 1]';
    dop = cw - [prp 1]';
    shx = -dop(1)/dop(3);
    shy = -dop(2)/dop(3);
    sh = zeros(4,4);
    sh(1,1) = 1;
    sh(2,2) = 1;
    sh(3,3) = 1;
    sh(4,4) = 1;
    sh(1,3) = shx;
    sh(2,3) = shy;
end
function R = getrotation(f)
    rz = f.Z;
    rx = cross(f.Y, rz);
    rx = rx/norm(rx);
    ry = cross(rz, rx);
    R = zeros(4,4);
    R(1,1:3) = rx;
    R(2,1:3) = ry;
    R(3,1:3) = rz;
    R(4,4) = 1;
end
function T = gettranslation(p)
    T = zeros(4,4);
    T(1:3,4) = p';
    T(1,1) = 1;
    T(2,2) = 1;
    T(3,3) = 1;
    T(4,4) = 1;
end
Figure 1: Perspective projection, but the dimensions are not (-1,-1) to (1,1)

iOS OpenGL ES 2.0 Quaternion Rotation Slerp to XYZ Position

I am following the quaternion tutorial: http://www.raywenderlich.com/12667/how-to-rotate-a-3d-object-using-touches-with-opengl and am trying to rotate a globe to some XYZ location. I have an initial quaternion and generate a random XYZ location on the surface of the globe. I pass that XYZ location into the following function. The idea was to generate a lookAt vector with GLKMatrix4MakeLookAt and define the end Quaternion for the slerp step from the lookAt matrix.
- (void)rotateToLocationX:(float)x andY:(float)y andZ:(float)z {
    // Turn on the interpolation for smooth rotation
    _slerping = YES; // Begin auto rotating to this location
    _slerpCur = 0;
    _slerpMax = 1.0;
    _slerpStart = _quat;

    // The eye location is defined by the look-at location multiplied by this modifier
    float modifier = 1.0;

    // Create a look-at vector from which we will create a GLKMatrix4
    float xEye = x;
    float yEye = y;
    float zEye = z;
    //NSLog(@"%f %f %f %f %f %f", xEye, yEye, zEye, x, y, z);
    _currentSatelliteLocation = GLKMatrix4MakeLookAt(xEye, yEye, zEye, 0, 0, 0, 0, 1, 0);
    _currentSatelliteLocation = GLKMatrix4Multiply(_currentSatelliteLocation, self.effect.transform.modelviewMatrix);

    // Turn our 4x4 matrix into a quat and use it to mark the end point of our interpolation
    //_currentSatelliteLocation = GLKMatrix4Translate(_currentSatelliteLocation, 0.0f, 0.0f, GLOBAL_EARTH_Z_LOCATION);
    _slerpEnd = GLKQuaternionMakeWithMatrix4(_currentSatelliteLocation);

    // Print info on the quat
    GLKVector3 vec = GLKQuaternionAxis(_slerpEnd);
    float angle = GLKQuaternionAngle(_slerpEnd);
    //NSLog(@"%f %f %f %f", vec.x, vec.y, vec.z, angle);
    NSLog(@"Quat end:");
    [self printMatrix:_currentSatelliteLocation];
    //[self printMatrix:self.effect.transform.modelviewMatrix];
}
The interpolation works and I get a smooth rotation; however, the ending location is never the XYZ I input. I know this because my globe is a sphere and I am calculating XYZ from lat/lon. After the rotation I want to look directly down the 'lookAt' vector toward the center of the earth from that lat/lon location on the surface of the globe. I think it may have something to do with the up vector, but I've tried everything that made sense.
What am I doing wrong? How can I define a final quaternion such that, when the rotation finishes, I am looking down a vector to the XYZ on the surface of the globe? Thanks!
Is the following what you mean:
Your globe center is (0, 0, 0), the radius is R, the start position is (0, 0, R), and your final position is (0, R, 0), so you rotate the globe 90 degrees around the X-axis?
If so, just set the lookAt function's eye position to your final position and its look-at target to the globe center.
m_target.x = 0.0f;
m_target.y = 0.0f;
m_target.z = 1.0f;

m_right.x = 1.0f;
m_right.y = 0.0f;
m_right.z = 0.0f;

m_up.x = 0.0f;
m_up.y = 1.0f;
m_up.z = 0.0f;

void CCamera::RotateX(float amount)
{
    Point3D target = m_target;
    Point3D up = m_up;
    amount = amount / 180 * PI;
    m_target.x = (cos(PI / 2 - amount) * up.x) + (cos(amount) * target.x);
    m_target.y = (cos(PI / 2 - amount) * up.y) + (cos(amount) * target.y);
    m_target.z = (cos(PI / 2 - amount) * up.z) + (cos(amount) * target.z);
    m_up.x = (cos(amount) * up.x) + (cos(PI / 2 + amount) * target.x);
    m_up.y = (cos(amount) * up.y) + (cos(PI / 2 + amount) * target.y);
    m_up.z = (cos(amount) * up.z) + (cos(PI / 2 + amount) * target.z);
    Normalize(m_target);
    Normalize(m_up);
}

void CCamera::RotateY(float amount)
{
    Point3D target = m_target;
    Point3D right = m_right;
    amount = amount / 180 * PI;
    m_target.x = (cos(PI / 2 + amount) * right.x) + (cos(amount) * target.x);
    m_target.y = (cos(PI / 2 + amount) * right.y) + (cos(amount) * target.y);
    m_target.z = (cos(PI / 2 + amount) * right.z) + (cos(amount) * target.z);
    m_right.x = (cos(amount) * right.x) + (cos(PI / 2 - amount) * target.x);
    m_right.y = (cos(amount) * right.y) + (cos(PI / 2 - amount) * target.y);
    m_right.z = (cos(amount) * right.z) + (cos(PI / 2 - amount) * target.z);
    Normalize(m_target);
    Normalize(m_right);
}

void CCamera::RotateZ(float amount)
{
    Point3D right = m_right;
    Point3D up = m_up;
    amount = amount / 180 * PI;
    m_up.x = (cos(amount) * up.x) + (cos(PI / 2 - amount) * right.x);
    m_up.y = (cos(amount) * up.y) + (cos(PI / 2 - amount) * right.y);
    m_up.z = (cos(amount) * up.z) + (cos(PI / 2 - amount) * right.z);
    m_right.x = (cos(PI / 2 + amount) * up.x) + (cos(amount) * right.x);
    m_right.y = (cos(PI / 2 + amount) * up.y) + (cos(amount) * right.y);
    m_right.z = (cos(PI / 2 + amount) * up.z) + (cos(amount) * right.z);
    Normalize(m_right);
    Normalize(m_up);
}

void CCamera::Normalize(Point3D &p)
{
    float length = sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    if (1 == length || 0 == length)
    {
        return;
    }
    float scaleFactor = 1.0 / length;
    p.x *= scaleFactor;
    p.y *= scaleFactor;
    p.z *= scaleFactor;
}
The answer to this question is a combination of the following rotateTo function and a change to the code from Ray's tutorial at http://www.raywenderlich.com/12667/how-to-rotate-a-3d-object-using-touches-with-opengl. As one of the comments on that article says, there is an arbitrary factor of 2.0 being multiplied in GLKQuaternion Q_rot = GLKQuaternionMakeWithAngleAndVector3Axis(angle * 2.0, axis);. Remove that factor of 2.0 and use the following function to create _slerpEnd; after that the globe will rotate smoothly to the specified XYZ.
// Rotate the globe using slerp interpolation to an XYZ coordinate
- (void)rotateToLocationX:(float)x andY:(float)y andZ:(float)z {
    // Turn on the interpolation for smooth rotation
    _slerping = YES; // Begin auto rotating to this location
    _slerpCur = 0;
    _slerpMax = 1.0;
    _slerpStart = _quat;

    // Create a look-at vector from which we will create a GLKMatrix4
    float xEye = x;
    float yEye = y;
    float zEye = z;
    _currentSatelliteLocation = GLKMatrix4MakeLookAt(xEye, yEye, zEye, 0, 0, 0, 0, 1, 0);

    // Turn our 4x4 matrix into a quat and use it to mark the end point of our interpolation
    _slerpEnd = GLKQuaternionMakeWithMatrix4(_currentSatelliteLocation);
}
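For reference, the slerp step that _slerpCur/_slerpMax drive each frame looks like this (a minimal Python sketch of the standard formula; GLKit's GLKQuaternionSlerp implements the same thing):

import math

def slerp(q0, q1, t):
    # q0, q1: unit quaternions as (x, y, z, w) tuples; t in [0, 1].
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                    # flip one input to take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                 # nearly parallel: plain lerp, renormalized
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))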

Transform screen coordinates to model coordinates

I've got something of a newbie question.
In my application (Processing.js) I use scale() and translate() to allow the user to zoom and scroll through the scene. As long as I keep the scale set to 1.0 I have no issues. But whenever I use the scale (e.g. scale(0.5)) I'm lost...
I need the mouseX and mouseY translated to scene coordinates, which I use to determine the mouseOver state of the objects I draw on the scene.
Can anybody help me translate these coordinates?
Thanks in advance!
/Richard
Unfortunately for me this required a code modification. I'll look at submitting this to the Processing.JS code repository at some point, but here's what I did.
First, you'll want to use modelX() and modelY() to get the coordinates of the mouse in world view. That will look like this:
float model_x = modelX(mouseX, mouseY);
float model_y = modelY(mouseX, mouseY);
Unfortunately Processing.JS doesn't seem to calculate the modelX() and modelY() values correctly in a 2D environment. To correct that I changed the functions to be as follows. Note the test for mv.length == 16 and the section at the end for 2D:
p.modelX = function(x, y, z) {
    var mv = modelView.array();
    if (mv.length == 16) {
        var ci = cameraInv.array();
        var ax = mv[0] * x + mv[1] * y + mv[2] * z + mv[3];
        var ay = mv[4] * x + mv[5] * y + mv[6] * z + mv[7];
        var az = mv[8] * x + mv[9] * y + mv[10] * z + mv[11];
        var aw = mv[12] * x + mv[13] * y + mv[14] * z + mv[15];
        var ox = ci[0] * ax + ci[1] * ay + ci[2] * az + ci[3] * aw;
        var ow = ci[12] * ax + ci[13] * ay + ci[14] * az + ci[15] * aw;
        return ow !== 0 ? ox / ow : ox;
    }
    // We assume that we're in 2D
    var mvi = modelView.get();
    // NOTE that the modelViewInv doesn't seem to be correct in this case, so
    // we have to re-derive the inverse
    mvi.invert();
    return mvi.multX(x, y);
};

p.modelY = function(x, y, z) {
    var mv = modelView.array();
    if (mv.length == 16) {
        var ci = cameraInv.array();
        var ax = mv[0] * x + mv[1] * y + mv[2] * z + mv[3];
        var ay = mv[4] * x + mv[5] * y + mv[6] * z + mv[7];
        var az = mv[8] * x + mv[9] * y + mv[10] * z + mv[11];
        var aw = mv[12] * x + mv[13] * y + mv[14] * z + mv[15];
        var oy = ci[4] * ax + ci[5] * ay + ci[6] * az + ci[7] * aw;
        var ow = ci[12] * ax + ci[13] * ay + ci[14] * az + ci[15] * aw;
        return ow !== 0 ? oy / ow : oy;
    }
    // We assume that we're in 2D
    var mvi = modelView.get();
    // NOTE that the modelViewInv doesn't seem to be correct in this case, so
    // we have to re-derive the inverse
    mvi.invert();
    return mvi.multY(x, y);
};
I hope that helps someone else who is having this problem.
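As an aside: if the only transforms applied are translate(tx, ty) followed by scale(s), as in the question, the inverse can also be computed by hand without modifying the library. A minimal sketch (Python for illustration, hypothetical variable names):

def screen_to_model(mouse_x, mouse_y, tx, ty, s):
    # Undo the transforms in reverse order: subtract the translation,
    # then divide out the scale factor.
    return ((mouse_x - tx) / s, (mouse_y - ty) / s)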
Have you tried another method?
For example, assuming that you are in a 2D environment, you can "map" the whole frame into a sort of grid.
Something like this:
int fWidth = 30;
int fHeight = 20;
float objWidth = 10;
float objHeight = 10;

void setup(){
    fWidth = 30;
    fHeight = 20;
    objWidth = 10;
    objHeight = 10;
    size(fWidth * objWidth, fHeight * objHeight);
}
In this case you will have a 300x200 frame, but divided into 30x20 sections.
This allows you to move your objects around in a somewhat ordered way.
When you draw an object you have to give its sizes, so you can use objWidth and objHeight.
Here's the deal: you can make a "zoom method" that edits the values of the object sizes.
In this way you draw a smaller or bigger object without editing any frame property.
This is a simple example, since your question is not very specific.
You can do it in more complex ways, and in a 3D environment too.
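A sketch of the cell lookup this grid idea implies (Python for brevity; objWidth/objHeight as in the snippet above):

def mouse_to_cell(mouse_x, mouse_y, obj_width=10, obj_height=10):
    # Integer-divide the screen coordinates by the cell size
    # to find which grid section the mouse is over.
    return (int(mouse_x // obj_width), int(mouse_y // obj_height))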