For some reason, the clock is not getting positioned properly, and I have no idea why. I have tried changing the values, but this is the best I could get. I really hope someone can help me out with this. I am also trying to keep the hands of the clock from sticking out too far.
In your loop where you draw the dashed outline:
var outerCircleRadius = radius;
var innerCircleRadius = radius - 14;
for (double i = 0; i < 360; i += 12) {
var x1 = centerX + outerCircleRadius * cos(i * pi / 180);
var y1 = centerX + outerCircleRadius * sin(i * pi / 180);
var x2 = centerX + innerCircleRadius * cos(i * pi / 180);
var y2 = centerX + innerCircleRadius * sin(i * pi / 180);
canvas.drawLine(Offset(x1, y1), Offset(x2, y2), dashBrush);
}
In the assignment for y1 and y2, centerX should be centerY.
EDIT: Looking at the rest of the code, there are a lot of places where centerX should be centerY. Always double-check code that you copy-paste.
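With the corrections applied, the tick-mark geometry looks like this (a Python sketch of the same math; the center and radius values are hypothetical stand-ins for the Flutter canvas values):
import math

center_x, center_y = 160.0, 160.0  # hypothetical canvas center
radius = 140.0
outer_r = radius
inner_r = radius - 14

for deg in range(0, 360, 12):
    a = math.radians(deg)
    x1 = center_x + outer_r * math.cos(a)
    y1 = center_y + outer_r * math.sin(a)  # center_y here, not center_x
    x2 = center_x + inner_r * math.cos(a)
    y2 = center_y + inner_r * math.sin(a)  # center_y here, not center_x
    # draw the dash from (x1, y1) to (x2, y2)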
I'm looking to draw a line on a map from the current user position along the user's bearing/heading.
But the line always has the same direction, even if I rotate the phone to point another way. I wonder if I'm missing something in my calculation.
final la1 = userPosition.latitude;
final lo1 = userPosition.longitude;
const r = 6367; // earth radius in km
const d = 40; // distance in km
const dist = d / r;
final bearing = vector.radians(direction.value);
final la2 = asin(sin(la1) * cos(dist) + cos(la1) * sin(dist) * cos(bearing));
final lo2 = lo1 +
    atan2(sin(bearing) * sin(dist) * cos(la1),
        cos(dist) - sin(la1) * sin(la2));
return LatLng(la2, lo2);
As you can see in the screenshots below, I create two lines with different bearings (check the map orientation), but they look the same.
The formula needs its inputs in radians, with φ as the latitude and λ as the longitude, converting back to degrees only at the end:
import 'dart:math';
import 'package:google_maps_flutter/google_maps_flutter.dart';
import 'package:vector_math/vector_math.dart';

LatLng createCoord(LatLng coord, double bearing, double distance) {
  var radius = 6371e3; // meters
  var delta = distance / radius; // angular distance in radians
  var theta = radians(bearing);
  var phi1 = radians(coord.latitude);
  var lambda1 = radians(coord.longitude);
  var phi2 = asin(sin(phi1) * cos(delta) + cos(phi1) * sin(delta) * cos(theta));
  var lambda2 = lambda1 +
      atan2(sin(theta) * sin(delta) * cos(phi1),
          cos(delta) - sin(phi1) * sin(phi2));
  lambda2 = (lambda2 + 3 * pi) % (2 * pi) - pi; // normalise to -180..+180°
  return LatLng(degrees(phi2), degrees(lambda2)); // (lat, lng)
}
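To see the fix in isolation, here is the same destination-point formula as a standalone Python sketch, with all angles handled in radians internally (the function name and the final sanity check are illustrative only):
import math

def destination(lat_deg, lon_deg, bearing_deg, distance_m, radius_m=6371e3):
    # Great-circle destination point on a spherical Earth
    phi1 = math.radians(lat_deg)
    lam1 = math.radians(lon_deg)
    theta = math.radians(bearing_deg)
    delta = distance_m / radius_m  # angular distance
    phi2 = math.asin(math.sin(phi1) * math.cos(delta) +
                     math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    lam2 = (lam2 + 3 * math.pi) % (2 * math.pi) - math.pi  # normalise to -180..+180
    return math.degrees(phi2), math.degrees(lam2)

# 40 m due north of (0, 0) should change only the latitude, by a tiny amount
print(destination(0.0, 0.0, 0.0, 40.0))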
I am following the quaternion tutorial: http://www.raywenderlich.com/12667/how-to-rotate-a-3d-object-using-touches-with-opengl and am trying to rotate a globe to some XYZ location. I have an initial quaternion and generate a random XYZ location on the surface of the globe. I pass that XYZ location into the following function. The idea was to build a look-at matrix with GLKMatrix4MakeLookAt and define the end quaternion for the slerp step from that matrix.
- (void)rotateToLocationX:(float)x andY:(float)y andZ:(float)z {
// Turn on the interpolation for smooth rotation
_slerping = YES; // Begin auto rotating to this location
_slerpCur = 0;
_slerpMax = 1.0;
_slerpStart = _quat;
// The eye location is defined by the look at location multiplied by this modifier
float modifier = 1.0;
// Create a look-at vector from which we will create a GLKMatrix4
float xEye = x;
float yEye = y;
float zEye = z;
//NSLog(#"%f %f %f %f %f %f",xEye, yEye, zEye, x, y, z);
_currentSatelliteLocation = GLKMatrix4MakeLookAt(xEye, yEye, zEye, 0, 0, 0, 0, 1, 0);
_currentSatelliteLocation = GLKMatrix4Multiply(_currentSatelliteLocation,self.effect.transform.modelviewMatrix);
// Turn our 4x4 matrix into a quat and use it to mark the end point of our interpolation
//_currentSatelliteLocation = GLKMatrix4Translate(_currentSatelliteLocation, 0.0f, 0.0f, GLOBAL_EARTH_Z_LOCATION);
_slerpEnd = GLKQuaternionMakeWithMatrix4(_currentSatelliteLocation);
// Print info on the quat
GLKVector3 vec = GLKQuaternionAxis(_slerpEnd);
float angle = GLKQuaternionAngle(_slerpEnd);
//NSLog(#"%f %f %f %f",vec.x,vec.y,vec.z,angle);
NSLog(#"Quat end:");
[self printMatrix:_currentSatelliteLocation];
//[self printMatrix:self.effect.transform.modelviewMatrix];
}
The interpolation works and I get a smooth rotation; however, the ending location is never the XYZ I input. I know this because my globe is a sphere and I am calculating XYZ from lat/lon. I want to look directly down the 'lookAt' vector toward the center of the earth from that lat/lon location on the surface of the globe after the rotation. I think it may have something to do with the up vector, but I've tried everything that made sense.
What am I doing wrong - How can I define a final quaternion that when I finish rotating, looks down a vector to the XYZ on the surface of the globe? Thanks!
Is the following what you mean:
Your globe center is (0, 0, 0), its radius is R, the start position is (0, 0, R), and your final position is (0, R, 0), so you rotate the globe 90 degrees around the X-axis?
If so, just set the look-at function's eye position to your final position and its look-at target to the globe center.
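For reference, the camera code below keeps an orthonormal basis of target, right, and up vectors, initialised to look along +Z with +Y up: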
m_target.x = 0.0f;
m_target.y = 0.0f;
m_target.z = 1.0f;
m_right.x = 1.0f;
m_right.y = 0.0f;
m_right.z = 0.0f;
m_up.x = 0.0f;
m_up.y = 1.0f;
m_up.z = 0.0f;
void CCamera::RotateX( float amount )
{
Point3D target = m_target;
Point3D up = m_up;
amount = amount / 180 * PI;
m_target.x = (cos(PI / 2 - amount) * up.x) + (cos(amount) * target.x);
m_target.y = (cos(PI / 2 - amount) * up.y) + (cos(amount) * target.y);
m_target.z = (cos(PI / 2 - amount) * up.z) + (cos(amount) * target.z);
m_up.x = (cos(amount) * up.x) + (cos(PI / 2 + amount) * target.x);
m_up.y = (cos(amount) * up.y) + (cos(PI / 2 + amount) * target.y);
m_up.z = (cos(amount) * up.z) + (cos(PI / 2 + amount) * target.z);
Normalize(m_target);
Normalize(m_up);
}
void CCamera::RotateY( float amount )
{
Point3D target = m_target;
Point3D right = m_right;
amount = amount / 180 * PI;
m_target.x = (cos(PI / 2 + amount) * right.x) + (cos(amount) * target.x);
m_target.y = (cos(PI / 2 + amount) * right.y) + (cos(amount) * target.y);
m_target.z = (cos(PI / 2 + amount) * right.z) + (cos(amount) * target.z);
m_right.x = (cos(amount) * right.x) + (cos(PI / 2 - amount) * target.x);
m_right.y = (cos(amount) * right.y) + (cos(PI / 2 - amount) * target.y);
m_right.z = (cos(amount) * right.z) + (cos(PI / 2 - amount) * target.z);
Normalize(m_target);
Normalize(m_right);
}
void CCamera::RotateZ( float amount )
{
Point3D right = m_right;
Point3D up = m_up;
amount = amount / 180 * PI;
m_up.x = (cos(amount) * up.x) + (cos(PI / 2 - amount) * right.x);
m_up.y = (cos(amount) * up.y) + (cos(PI / 2 - amount) * right.y);
m_up.z = (cos(amount) * up.z) + (cos(PI / 2 - amount) * right.z);
m_right.x = (cos(PI / 2 + amount) * up.x) + (cos(amount) * right.x);
m_right.y = (cos(PI / 2 + amount) * up.y) + (cos(amount) * right.y);
m_right.z = (cos(PI / 2 + amount) * up.z) + (cos(amount) * right.z);
Normalize(m_right);
Normalize(m_up);
}
void CCamera::Normalize( Point3D &p )
{
float length = sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
if (1 == length || 0 == length)
{
return;
}
float scaleFactor = 1.0 / length;
p.x *= scaleFactor;
p.y *= scaleFactor;
p.z *= scaleFactor;
}
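A side note that makes the rotation code above easier to read: cos(PI / 2 - a) equals sin(a) and cos(PI / 2 + a) equals -sin(a), so each RotateX/RotateY/RotateZ is a standard planar rotation of two of the basis vectors. A quick numeric check in Python:
import math

a = 0.3  # arbitrary test angle in radians
print(math.cos(math.pi / 2 - a), math.sin(a))    # both ~0.295520
print(math.cos(math.pi / 2 + a), -math.sin(a))   # both ~-0.295520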
The answer to this question is a combination of the following rotateToLocation method and a change to the code from Ray's tutorial (http://www.raywenderlich.com/12667/how-to-rotate-a-3d-object-using-touches-with-opengl). As one of the comments on that article says, there is an arbitrary factor of 2.0 in GLKQuaternion Q_rot = GLKQuaternionMakeWithAngleAndVector3Axis(angle * 2.0, axis);. Remove that 2.0 and use the following function to create _slerpEnd; after that, the globe will rotate smoothly to the specified XYZ.
// Rotate the globe using Slerp interpolation to an XYZ coordinate
- (void)rotateToLocationX:(float)x andY:(float)y andZ:(float)z {
// Turn on the interpolation for smooth rotation
_slerping = YES; // Begin auto rotating to this location
_slerpCur = 0;
_slerpMax = 1.0;
_slerpStart = _quat;
// Create a look at vector for which we will create a GLK4Matrix from
float xEye = x;
float yEye = y;
float zEye = z;
_currentSatelliteLocation = GLKMatrix4MakeLookAt(xEye, yEye, zEye, 0, 0, 0, 0, 1, 0);
// Turn our 4x4 matrix into a quat and use it to mark the end point of our interpolation
_slerpEnd = GLKQuaternionMakeWithMatrix4(_currentSatelliteLocation);
}
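The per-frame interpolation itself isn't shown above, only the setup (_slerpStart, _slerpEnd, _slerpCur, _slerpMax). For anyone following along, a minimal Python sketch of what a slerp step typically looks like, with quaternions as (x, y, z, w) tuples; in GLKit itself you would simply call GLKQuaternionSlerp:
import math

def slerp(q0, q1, t):
    # Spherical linear interpolation between two unit quaternions
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                # flip one end so we take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:             # nearly parallel: plain lerp avoids dividing by ~0
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))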
[SOLVED - see answer at the bottom]
I'm trying to draw a cube with perspective, using OpenGL ES 2.0 on iOS (iPhone), but it's appearing as a rectangular shape.
From what I've gathered searching the web, it seems to be related to the viewport / projection matrix, but I can't seem to put my finger on the actual cause.
If I set the viewport to a square measure (width == height) it draws perfectly well (a cube), but if I set it correctly (width = screen_width, height = screen_height) then the cube is drawn as a rectangular shape.
Shouldn't setting the projection matrix in accordance with the viewport keep the cube a cube?!
My code (please let me know if more info is needed):
Render method:
// viewportSize is SCREEN_WIDTH and SCREEN_HEIGHT
// viewportLowerLeft is 0.0 and 0.0
ivec2 size = this->viewportSize;
ivec2 lowerLeft = this->viewportLowerLeft;
glViewport(lowerLeft.x, lowerLeft.y, size.x, size.y); // if I put size.x, size.x it draws well
mat4 projectionMatrix = mat4::FOVFrustum(45.0, 0.1, 100.0, size.x / size.y);
glUniformMatrix4fv(uniforms.Projection, 1, 0, projectionMatrix.Pointer());
Matrix operations:
static Matrix4<T> Frustum(T left, T right, T bottom, T top, T near, T far)
{
T a = 2 * near / (right - left);
T b = 2 * near / (top - bottom);
T c = (right + left) / (right - left);
T d = (top + bottom) / (top - bottom);
T e = - (far + near) / (far - near);
T f = -2 * far * near / (far - near);
Matrix4 m;
m.x.x = a; m.x.y = 0; m.x.z = 0; m.x.w = 0;
m.y.x = 0; m.y.y = b; m.y.z = 0; m.y.w = 0;
m.z.x = c; m.z.y = d; m.z.z = e; m.z.w = -1;
m.w.x = 0; m.w.y = 0; m.w.z = f; m.w.w = 1;
return m;
}
static Matrix4<T> FOVFrustum(T fieldOfView, T near, T far, T aspectRatio)
{
T size = near * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
return Frustum(-size, size, -size / aspectRatio, size / aspectRatio, near, far);
}
If you haven't figured this out already, change:
return Frustum(-size, size, -size / aspectRatio, size / aspectRatio, near, far);
to
return Frustum(-size / aspectRatio, size / aspectRatio, -size, size, near, far);
and it should draw correctly.
(or simply change the ratio from size.x/size.y to size.y/size.x)
OK, I found the problem: size.x and size.y are ints, so the division returns an int.
1.0 * size.x / size.y
solves the problem.
facepalm
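The same pitfall, distilled into two lines (Python's // mimics what the C++ int division does here; the viewport dimensions are hypothetical):
width, height = 480, 320
print(width // height)         # 1   -- truncated, which squashes the frustum
print(1.0 * width / height)    # 1.5 -- the intended aspect ratio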
I want to create an augmented reality view on the iPhone. As a starting point, I took a look at Apple's pARk demo project. There, however, the deviceMotion property is used to get the rotation matrix to do the camera transformation with. But since deviceMotion uses the gyroscope (available on the iPhone 4 and newer) and I want to support the 3GS as well (in fact, a 3GS is my only development device), I cannot use this approach. So I want to create the rotation matrix myself using the data available from the accelerometer and compass.
Unfortunately, I lack the math skills to do so myself. Searching around, it seemed to me that this is the most relevant hands-on guide for my problem, but following the implementation there doesn't seem to adapt to my problem (the POI-views only appear momentarily and seemingly more due to device movement than to its heading; I've posted my onDisplayLink method (the only method with major changes) below). I've tried to read up on the relevant math, but at this point I simply don't know enough about it to find an approach on my own or to find the error in my code. Any help, please?
Edit: I've since recognized that the sensor data is better stored in doubles than in ints, and I've added a bit of smoothing. Now I can see more clearly how POIs that should appear from the side upon device rotation instead come down from above. Maybe that helps point to what's wrong.
CMAccelerometerData* orientation = motionManager.accelerometerData;
CMAcceleration acceleration = orientation.acceleration;
vec4f_t normalizedAccelerometer;
vec4f_t normalizedMagnetometer;
// Low-pass filter the raw accelerometer and magnetometer values
xG = (acceleration.x * kFilteringFactor) + (xG * (1.0 - kFilteringFactor));
yG = (acceleration.y * kFilteringFactor) + (yG * (1.0 - kFilteringFactor));
zG = (acceleration.z * kFilteringFactor) + (zG * (1.0 - kFilteringFactor));
xB = (heading.x * kFilteringFactor) + (xB * (1.0 - kFilteringFactor));
yB = (heading.y * kFilteringFactor) + (yB * (1.0 - kFilteringFactor));
zB = (heading.z * kFilteringFactor) + (zB * (1.0 - kFilteringFactor));
double accelerometerMagnitude = sqrt(pow(xG, 2) + pow(yG, 2) + pow(zG, 2));
double magnetometerMagnitude = sqrt(pow(xB, 2) + pow(yB, 2) + pow(zB, 2));
normalizedAccelerometer[0] = xG/accelerometerMagnitude;
normalizedAccelerometer[1] = yG/accelerometerMagnitude;
normalizedAccelerometer[2] = zG/accelerometerMagnitude;
normalizedAccelerometer[3] = 1.0f;
normalizedMagnetometer[0] = xB/magnetometerMagnitude;
normalizedMagnetometer[1] = yB/magnetometerMagnitude;
normalizedMagnetometer[2] = zB/magnetometerMagnitude;
normalizedMagnetometer[3] = 1.0f;
// East: cross product of gravity and the magnetic field
vec4f_t eastDirection;
eastDirection[0] = normalizedAccelerometer[1] * normalizedMagnetometer[2] - normalizedAccelerometer[2] * normalizedMagnetometer[1];
eastDirection[1] = normalizedAccelerometer[0] * normalizedMagnetometer[2] - normalizedAccelerometer[2] * normalizedMagnetometer[0];
eastDirection[2] = normalizedAccelerometer[0] * normalizedMagnetometer[1] - normalizedAccelerometer[1] * normalizedMagnetometer[0];
eastDirection[3] = 1.0f;
double eastDirectionMagnitude = sqrt(pow(eastDirection[0], 2) + pow(eastDirection[1], 2) + pow(eastDirection[2], 2));
vec4f_t normalizedEastDirection;
normalizedEastDirection[0] = eastDirection[0]/eastDirectionMagnitude;
normalizedEastDirection[1] = eastDirection[1]/eastDirectionMagnitude;
normalizedEastDirection[2] = eastDirection[2]/eastDirectionMagnitude;
normalizedEastDirection[3] = 1.0f;
// North: the magnetic field with its gravity-parallel component removed
vec4f_t northDirection;
northDirection[0] = (pow(normalizedAccelerometer[0], 2) + pow(normalizedAccelerometer[1],2) + pow(normalizedAccelerometer[2],2)) * xB - (normalizedAccelerometer[0] * xB + normalizedAccelerometer[1] * yB + normalizedAccelerometer[2] * zB)*normalizedAccelerometer[0];
northDirection[1] = (pow(normalizedAccelerometer[0], 2) + pow(normalizedAccelerometer[1],2) + pow(normalizedAccelerometer[2],2)) * yB - (normalizedAccelerometer[0] * xB + normalizedAccelerometer[1] * yB + normalizedAccelerometer[2] * zB)*normalizedAccelerometer[1];
northDirection[2] = (pow(normalizedAccelerometer[0], 2) + pow(normalizedAccelerometer[1],2) + pow(normalizedAccelerometer[2],2)) * zB - (normalizedAccelerometer[0] * xB + normalizedAccelerometer[1] * yB + normalizedAccelerometer[2] * zB)*normalizedAccelerometer[2];
northDirection[3] = 1.0f;
double northDirectionMagnitude;
northDirectionMagnitude = sqrt(pow(northDirection[0], 2) + pow(northDirection[1], 2) + pow(northDirection[2], 2));
vec4f_t normalizedNorthDirection;
normalizedNorthDirection[0] = northDirection[0]/northDirectionMagnitude;
normalizedNorthDirection[1] = northDirection[1]/northDirectionMagnitude;
normalizedNorthDirection[2] = northDirection[2]/northDirectionMagnitude;
normalizedNorthDirection[3] = 1.0f;
// Assemble the rotation matrix from the east, north, and gravity vectors
CMRotationMatrix r;
r.m11 = normalizedEastDirection[0];
r.m21 = normalizedEastDirection[1];
r.m31 = normalizedEastDirection[2];
r.m12 = normalizedNorthDirection[0];
r.m22 = normalizedNorthDirection[1];
r.m32 = normalizedNorthDirection[2];
r.m13 = normalizedAccelerometer[0];
r.m23 = normalizedAccelerometer[1];
r.m33 = normalizedAccelerometer[2];
transformFromCMRotationMatrix(cameraTransform, &r);
[self setNeedsDisplay];
When the device is placed on a table and roughly (using Compass.app) pointing to north, I log this data:
Accelerometer: x: -0.016692, y: 0.060852, z: -0.998007
Magnetometer: x: -0.016099, y: 0.256711, z: -0.966354
North Direction x: 0.011472, y: 8.561041, z:0.521807
Normalized North Direction x: 0.001338, y: 0.998147, z:0.060838
East Direction x: 0.197395, y: 0.000063, z:-0.003305
Normalized East Direction x: 0.999860, y: 0.000319, z:-0.016742
Does that appear sane?
Edit 2: I have updated the assignment of r to one that apparently gets me halfway to my goal: when the device is upright, I now see the landmarks near the horizontal plane; however, they are about 90° clockwise of their expected location. Also, here is the output after the movement suggested by Beta:
Accelerometer: x: 0.074289, y: -0.997192, z: -0.009475
Magnetometer: x: 0.031341, y: -0.986382, z: -0.161458
North Direction x: -1.428996, y: -0.057306, z:-5.172881
Normalized North Direction x: -0.266259, y: -0.010678, z:-0.963842
East Direction x: 0.151658, y: -0.011698, z:-0.042025
Normalized East Direction x: 0.961034, y: -0.074126, z:-0.266305
After getting hold of an iPhone 4, I was able to compare the data generated by the code above with CoreMotion's attitude output. With this, I found out that I should assign the values to my rotation matrix in the following manner:
CMRotationMatrix r;
r.m11 = normalizedNorthDirection[0];
r.m21 = normalizedNorthDirection[1];
r.m31 = normalizedNorthDirection[2];
r.m12 = 0 - normalizedEastDirection[0];
r.m22 = normalizedEastDirection[1];
r.m32 = 0 - normalizedEastDirection[2];
r.m13 = 0 - normalizedAccelerometer[0];
r.m23 = 0 - normalizedAccelerometer[1];
r.m33 = 0 - normalizedAccelerometer[2];
This gives roughly similar values, but of course the data produced by CoreMotion using the gyro is much better. Anyway, it's a reasonable starting point for supporting the 3GS. Some additional accuracy might be gained with filtering, but I haven't decided yet whether that's worth the effort.
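Language aside, the underlying recipe is: normalize gravity to get "up", cross it with the magnetic field to get "east", and cross again to get "north". A minimal Python sketch (sign conventions are device-dependent, which is exactly what the edits above wrestle with):
import math

def normalized(v):
    m = math.sqrt(sum(c * c for c in v))
    return [c / m for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def rotation_from_sensors(gravity, magnetic):
    # Orthonormal (east, north, up) basis from the two sensor vectors
    up = normalized(gravity)
    east = normalized(cross(magnetic, up))  # perpendicular to both inputs
    north = cross(up, east)                 # completes the right-handed frame
    return east, north, up                  # use as rows or columns as needed

Whether the accelerometer reading needs negating first depends on whether it reports gravity or the device's reaction to it, which is the sign issue worked through above.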
I've got a bit of a newbie question.
In my application (Processing.js) I use scale() and translate() to allow the user to zoom and scroll through the scene. As long as I keep the scale set to 1.0 I've got no issues. BUT whenever I use the scale (e.g. scale(0.5)) I'm lost...
I need the mouseX and mouseY translated to scene coordinates, which I use to determine the mouseOver state of the objects I draw on the scene.
Can anybody help me translate these coordinates?
Thanks in advance!
/Richard
Unfortunately for me this required a code modification. I'll look at submitting this to the Processing.JS code repository at some point, but here's what I did.
First, you'll want to use modelX() and modelY() to get the coordinates of the mouse in world view. That will look like this:
float model_x = modelX(mouseX, mouseY);
float model_y = modelY(mouseX, mouseY);
Unfortunately Processing.JS doesn't seem to calculate the modelX() and modelY() values correctly in a 2D environment. To correct that I changed the functions to be as follows. Note the test for mv.length == 16 and the section at the end for 2D:
p.modelX = function(x, y, z) {
var mv = modelView.array();
if (mv.length == 16) {
var ci = cameraInv.array();
var ax = mv[0] * x + mv[1] * y + mv[2] * z + mv[3];
var ay = mv[4] * x + mv[5] * y + mv[6] * z + mv[7];
var az = mv[8] * x + mv[9] * y + mv[10] * z + mv[11];
var aw = mv[12] * x + mv[13] * y + mv[14] * z + mv[15];
var ox = ci[0] * ax + ci[1] * ay + ci[2] * az + ci[3] * aw;
var ow = ci[12] * ax + ci[13] * ay + ci[14] * az + ci[15] * aw;
return ow !== 0 ? ox / ow : ox;
}
// We assume that we're in 2D
var mvi = modelView.get();
// NOTE that the modelViewInv doesn't seem to be correct in this case, so
// having to re-derive the inverse
mvi.invert();
return mvi.multX(x, y);
};
p.modelY = function(x, y, z) {
var mv = modelView.array();
if (mv.length == 16) {
var ci = cameraInv.array();
var ax = mv[0] * x + mv[1] * y + mv[2] * z + mv[3];
var ay = mv[4] * x + mv[5] * y + mv[6] * z + mv[7];
var az = mv[8] * x + mv[9] * y + mv[10] * z + mv[11];
var aw = mv[12] * x + mv[13] * y + mv[14] * z + mv[15];
var oy = ci[4] * ax + ci[5] * ay + ci[6] * az + ci[7] * aw;
var ow = ci[12] * ax + ci[13] * ay + ci[14] * az + ci[15] * aw;
return ow !== 0 ? oy / ow : oy;
}
// We assume that we're in 2D
var mvi = modelView.get();
// NOTE that the modelViewInv doesn't seem to be correct in this case, so
// having to re-derive the inverse
mvi.invert();
return mvi.multY(x, y);
};
I hope that helps someone else who is having this problem.
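If the only transforms applied are translate(tx, ty) followed by scale(s), the inverse mapping can also be done by hand, without touching Processing.js internals. A minimal sketch (Python; the variable names are hypothetical):
def screen_to_scene(mouse_x, mouse_y, tx, ty, s):
    # Invert translate(tx, ty) followed by scale(s)
    return (mouse_x - tx) / s, (mouse_y - ty) / s

# With scale(0.5) and no translation, a click at (100, 100)
# corresponds to scene coordinate (200, 200).
print(screen_to_scene(100, 100, 0, 0, 0.5))

This only holds for that exact transform order; once rotations are involved you need the full inverse matrix, which is what modelX()/modelY() compute.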
Have you tried another approach?
For example, assuming you are in a 2D environment, you can "map" the whole frame in a sort of grid.
Something like this:
int fWidth = 30;
int fHeight = 20;
float objWidth = 10;
float objHeight = 10;
void setup(){
fWidth = 30;
fHeight = 20;
objWidth = 10;
objHeight = 10;
size((int)(fWidth * objWidth), (int)(fHeight * objHeight));
}
In this case you will have a 300*200 frame, divided into 30*20 sections.
This lets you move your objects around in a somewhat ordered way.
When you draw an object you have to give its sizes, so you can use objWidth and objHeight.
Here's the deal: you can write a "zoom method" that edits the object sizes.
That way you draw a smaller or bigger object without editing any frame property.
This is a simple example, given how open-ended the question is; you can do the same in more complex ways, and in a 3D environment too.