I have a question about the Qibla direction. I am building an iPhone application which will show both the north direction and the Qibla direction. I am showing the north direction with the help of CLLocationManager, updating it from CLHeading as newHeading.magneticHeading, and I am showing the Qibla direction with the following code:
double A = MECCA_LONGITUDE - lon;   // longitude difference
double b = 90.0 - lat;              // colatitude of the current location
double c = 90.0 - MECCA_LATITUDE;   // colatitude of Mecca
NSLog(@"tan -1( sin(%f) / ( sin(%f) * cot(%f) - cos(%f) * cos(%f)))", A, b, c, b, A);
double qibAngle = atan(sin(A) / (sin(b) * (1 / tan(c)) - cos(b) * cos(A)));
NSLog(@"qib Angle - %f", qibAngle);
qibla.transform = CGAffineTransformMakeRotation(qibAngle * M_PI / 180);
So I am getting the angle here, but it does not update when I rotate the device. Can anyone help me out? I know I need to do something with the heading, but I don't know what to do.
I assume the code you posted computes the angle between geographical north and the direction towards Mecca for the current location. All you need to do now is take into account the user's heading.
For example, suppose the user is located so that Mecca is directly due west, and the user is facing directly due east. The qibla angle relative to geographical north is -90 degrees, and the user's heading is +90 degrees. Now the adjustment should be obvious: you need to subtract the heading (+90) from the qibla angle (-90) to arrive at -180 degrees, which is how much the user needs to turn in order to face Mecca.
Simply put, you need to "undo" the user's deviation, and you do this by subtracting the user's heading, which is relative to geographical north, from the qibla angle.
With the maths out of the way, now you need to observe heading changes and recompute the qibla angle when the heading changes. Lastly, make sure to use the trueHeading property.
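For illustration, here's a minimal Swift sketch of that adjustment (the names qiblaFromTrueNorth and qiblaArrow are placeholders, not from your code; the angle is assumed to be in radians, clockwise from true north):

import CoreLocation
import UIKit

class QiblaViewController: UIViewController, CLLocationManagerDelegate {
    let locationManager = CLLocationManager()
    var qiblaFromTrueNorth: Double = 0          // recompute this whenever the location changes
    @IBOutlet var qiblaArrow: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        locationManager.delegate = self
        // (location-authorization handling omitted for brevity)
        locationManager.startUpdatingLocation()
        locationManager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        guard newHeading.headingAccuracy >= 0 else { return }     // negative means the reading is invalid
        // trueHeading is in degrees clockwise from true north; convert to radians.
        let headingRadians = newHeading.trueHeading * .pi / 180
        // "Undo" the user's deviation: rotate the arrow by (qibla angle - heading).
        qiblaArrow.transform = CGAffineTransform(rotationAngle: CGFloat(qiblaFromTrueNorth - headingRadians))
    }
}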
I'm probably going to lose points on this answer because I know absolutely nothing about iOS, but I believe atan returns a value in radians, and CGAffineTransformMakeRotation takes its argument in radians as well, so the conversion qibAngle * M_PI / 180 is not needed.
You might also want to re-title your post, since most people have no idea what Qibla is and wouldn't realize that it's about math and iOS. I only looked because I've heard calculating the right direction to Mecca is kind of a neat math problem.
I'm currently working on an app where I have a node that rotates by an angle every time I touch the screen, using the touchesBegan method. Now I've been trying to figure out if there's a way to tell which way a node is oriented.
For example, if you have a square, is there a way to give every side a different value (1, 2, 3, 4)? Can you tell which value is facing down?
I was thinking that if I could tell what angle the node has been rotated by (one touch = 90 degrees, two touches = 180 degrees, ...), I could use that value for features I'll be needing in the future. However, I don't know if that value is ever saved, or how to go about saving it.
Thank you for any help!
To get the angle your SKSpriteNode is facing, use the zRotation property on your SKSpriteNode. Bear in mind this is measured in radians; if you specifically need it in degrees, you can convert from radians to degrees with the following code:
let degrees = sprite.zRotation * 180 / CGFloat(M_PI)
Alternatively, if all you wanted to do was know how many times the user had touched the screen, you could use a variable that you increment every time touchesBegan is called.
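Putting the two together, here's a rough Swift sketch (assuming one touch = a 90° rotation; sideFacingDown is just an illustrative helper, not a SpriteKit API):

import SpriteKit

class GameScene: SKScene {
    let square = SKSpriteNode(color: .red, size: CGSize(width: 100, height: 100))
    var touchCount = 0

    override func didMove(to view: SKView) {
        square.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(square)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        touchCount += 1
        square.run(SKAction.rotate(byAngle: .pi / 2, duration: 0.2))
    }

    // Which of the four sides (1...4) currently faces down, derived from zRotation.
    var sideFacingDown: Int {
        let degrees = square.zRotation * 180 / .pi
        let normalised = (degrees.truncatingRemainder(dividingBy: 360) + 360)
            .truncatingRemainder(dividingBy: 360)
        return Int(round(normalised / 90)) % 4 + 1
    }
}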
Hope that helps!
e = ephem.readtle(...)
e.compute('2012/02/04 07:55:00')
As far as I can see there's only e.elevation as a measure of distance, and that is relative to sea level. At the moment I'm using e.elevation/1000 + 6371 to estimate the distance from the center of the earth in kilometres.
I'm pretty sure that the exact earth center distance at the requested point in time is needed for the ephemeris calculations. Is this distance somewhere exposed and if not, why not and can that be changed?
I had thought that the answer would involve having to expose an ellipsoidal model of the earth from deep inside the C code to Python to get you the information you need. But, having just gone through the Earth-satellite code, it turns out that the way it converts the satellite's distance from the Earth's center to its height is simply (from earthsat.c):
#if SSPELLIPSE
#else
*Height = r - EarthRadius;
#endif
Apparently the programmer planned to someday implement an ellipsoidal earth, and had an #if statement ready to guard the new code, but never wrote any.
So you can convert the height (“elevation”) back to a distance from the Earth's center by adding the value EarthRadius which is defined as:
#define EarthRadius 6378.16 /* Kilometers */
Since the elevation is, I believe, in meters, you will want to multiply EarthRadius by 1000.0, or else divide the elevation by 1000.0, to get a consistent result; in kilometers, that is e.elevation / 1000.0 + 6378.16.
I'm using iPhone ARToolkit and I'm wondering how it works.
I want to know how, given a destination location, the user's location and a compass reading, this toolkit can tell whether the user is looking towards that destination.
How can I learn the maths behind these calculations?
The maths that AR ToolKit uses is basic trigonometry. It doesn't use the technique that Thomas describes, which I think would be a better approach (apart from step 5; see below).
Overview of the steps involved.
The iPhone's GPS supplies the device's location and you already have the coordinates of the location you want to look at.
First it calculates the difference between the latitude and longitude values of the two points. With these two differences you can construct a right-angled triangle and work out the angle from your current position to the other position. This is the relevant code:
- (float)angleFromCoordinate:(CLLocationCoordinate2D)first toCoordinate:(CLLocationCoordinate2D)second {
    float longitudinalDifference = second.longitude - first.longitude;
    float latitudinalDifference = second.latitude - first.latitude;
    float possibleAzimuth = (M_PI * .5f) - atan(latitudinalDifference / longitudinalDifference);

    if (longitudinalDifference > 0) return possibleAzimuth;
    else if (longitudinalDifference < 0) return possibleAzimuth + M_PI;
    else if (latitudinalDifference < 0) return M_PI;

    return 0.0f;
}
At this point you can read the compass value from the phone and determine which compass angle (azimuth) your device is pointing at. The compass reading is the angle directly in the center of the camera's view. AR ToolKit then calculates the full range of angles currently displayed on screen, since the iPhone's field of view is known.
In particular it does this by calculating what the angle of the leftmost part of the view is showing:
double leftAzimuth = centerAzimuth - VIEWPORT_WIDTH_RADIANS / 2.0;

if (leftAzimuth < 0.0) {
    leftAzimuth = 2 * M_PI + leftAzimuth;
}
And then calculates the rightmost:
double rightAzimuth = centerAzimuth + VIEWPORT_WIDTH_RADIANS / 2.0;

if (rightAzimuth > 2 * M_PI) {
    rightAzimuth = rightAzimuth - 2 * M_PI;
}
We now have:
The angle relative to our current position of something we want to display
A range of angles which are currently visible on the screen
This is enough to plot a marker on the screen in the correct position (kind of... see the problems section below); a rough sketch of that mapping follows.
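As a rough illustration of that last step, here's a sketch (in Swift) of how a point's azimuth could be mapped to a horizontal screen position. The function and parameter names are mine, and the 0.7392 value is only a stand-in for the camera's horizontal field of view:

import CoreGraphics

let VIEWPORT_WIDTH_RADIANS = 0.7392   // placeholder for the camera's horizontal FOV

// Returns nil when the point is outside the visible range of azimuths.
func screenX(forAzimuth pointAzimuth: Double,
             centerAzimuth: Double,
             screenWidth: CGFloat) -> CGFloat? {
    // Signed angular offset of the point from the centre of the view, wrapped to (-pi, pi].
    var offset = pointAzimuth - centerAzimuth
    if offset > .pi { offset -= 2 * .pi }
    if offset < -.pi { offset += 2 * .pi }

    guard abs(offset) <= VIEWPORT_WIDTH_RADIANS / 2 else { return nil }

    // Linear mapping: centre of the view maps to the centre of the screen.
    let fraction = offset / VIEWPORT_WIDTH_RADIANS + 0.5
    return CGFloat(fraction) * screenWidth
}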
It also does similar calculations for the device's inclination, so if you look at the sky you hopefully won't see a city marker up there, and if you point it at your feet you should in theory see cities on the opposite side of the planet. There are problems with these calculations in this toolkit, however.
The problems...
Device orientation is not perfect
The calculation I've just described assumes you're holding the device in an exact position relative to the earth, i.e. perfectly landscape or portrait. Your user probably won't always be doing that. If you tilt the device slightly, your horizon line will no longer be horizontal on screen.
The earth is actually 3D!
The earth is 3-dimensional. Few of the calculations in the toolkit account for that. The calculations it performs are only really accurate when you're pointing the device towards the horizon.
For example, if you try to plot a point on the opposite side of the globe (directly under your feet), this toolkit behaves very strangely. The approach used to calculate the azimuth range on screen is only valid when looking at the horizon. If you point your camera at the floor you can actually see every single compass point, but the toolkit thinks you're still only looking at compass reading ± (width of view / 2). If you rotate on the spot you'll see your marker move to the edge of the screen, disappear and then reappear on the other side. What you would expect is for the marker to stay on screen as you rotate.
The solution
I've recently implemented an app with AR for which I initially hoped AR Toolkit would do the heavy lifting. I came across the problems just described, which aren't acceptable for my app, so I had to roll my own.
Thomas' approach is a good method up to point 5, which, as I explained above, only works when pointing towards the horizon. If you need to plot anything outside of that, it breaks down. In my case I have to plot objects that are overhead, so it's completely unsuitable.
I addressed this by using OpenGL ES to plot my markers where they actually are in 3D space and move the OpenGL viewport around according to readings from the gyroscope while continuously re-calibrating against the compass. The 3D engine handles all the hard work of determining what's on screen.
Hope that's enough to get you started. I wish I could provide more detail than that but short of posting a lot of hacky code I can't. This approach however did address both problems described above. I hope to open source that part of my code at some point but it's very rough and coupled to my problem domain at the moment.
That is all the information needed. With the iPhone's location and the destination location you can calculate the destination angle (with respect to true north).
The only missing piece is knowing where the iPhone is currently pointing, which is returned by the compass (magnetic north + current location -> true north).
Edit: Calculations (this is just an idea; there may be a better solution without so many coordinate transformations):
convert the current and destination locations to ECEF coordinates
transform the destination ECEF coordinate to ENU (east, north, up) local coordinates with the current location as the reference location. You can also use this.
ignore the height value and use the ENU coordinates to get the direction: atan2(deast, dnorth)
The compass already returns the angle the iPhone is looking at.
display the destination on the screen if dest_angle - 10° <= compass_angle <= dest_angle + 10°, with respect to the cyclic angle space. The constant of 10° is just a guessed value; you should either try a few values to find a useful one, or analyse some properties of the iPhone camera.
The coordinate transformation equations become much simpler if you assume that the earth is a sphere and not an ellipsoid. Most links I have posted assume a WGS-84 ellipsoid (because GPS does too, afaik).
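To make those steps concrete, here's a sketch in Swift under that spherical-earth simplification (all names are mine; angles are in radians and the bearing is clockwise from true north):

import CoreLocation
import Foundation

let earthRadius = 6_371_000.0   // metres, mean radius (sphere, not WGS-84)

// Step 1: geodetic (lat, lon) -> ECEF, assuming a sphere of radius earthRadius.
func ecef(_ coord: CLLocationCoordinate2D) -> (x: Double, y: Double, z: Double) {
    let lat = coord.latitude * .pi / 180
    let lon = coord.longitude * .pi / 180
    return (earthRadius * cos(lat) * cos(lon),
            earthRadius * cos(lat) * sin(lon),
            earthRadius * sin(lat))
}

// Steps 2 and 3: ECEF difference -> ENU at the current location, then atan2(east, north).
func bearing(from current: CLLocationCoordinate2D,
             to destination: CLLocationCoordinate2D) -> Double {
    let p0 = ecef(current), p1 = ecef(destination)
    let (dx, dy, dz) = (p1.x - p0.x, p1.y - p0.y, p1.z - p0.z)

    let lat = current.latitude * .pi / 180
    let lon = current.longitude * .pi / 180

    // Standard ECEF -> ENU rotation with the current location as the reference point.
    let east  = -sin(lon) * dx + cos(lon) * dy
    let north = -sin(lat) * cos(lon) * dx - sin(lat) * sin(lon) * dy + cos(lat) * dz

    var angle = atan2(east, north)          // (-pi, pi]
    if angle < 0 { angle += 2 * .pi }       // normalise to [0, 2*pi)
    return angle
}

As in step 5 above, you would then show the destination whenever the compass angle falls within your ±10° window around that bearing, taking the wrap-around at 0°/360° into account.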
I would like to create an MKCoordinateRegion (to zoom to the right region on the map) from the northeast and southwest points given by Google. For that I need to compute the coordinate of the center between these two coordinates. Any clue? I could do simple math but I will have problems with the equator...
Thanks!!!
Assuming you mean the anti-meridian and not the equator, here goes. (While all this works on a flattened map and should be good enough for your purpose, it's completely bung on a sphere; see the note at the bottom.)
What I've done in other cases is start at either point, and if the other point is more than 180 degrees to the east, I convert it so that it is less than 180 to the west, like so:
if (pointa.lon - pointb.lon > 180)
    pointb.lon += 360;
else if (pointa.lon - pointb.lon < -180)
    pointb.lon -= 360;
At this point pointb.lon might be an invalid longitude like 190, but you can at least work out the mid-point between pointa and pointb because they are now on a continuous scale. So you might have points at 175 and 190; the mid-point between them is 182.5, and converting that back into the usual limits gives -177.5 as the longitude mid-way between the two points. Working out the latitude is easy.
Of course on a sphere this is wrong because the midpoint between (-180,89) and (180,89) is (0*,90) not (0,89).
* = could be anything
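For what it's worth, here's a rough Swift version of that longitude trick (the function and parameter names are mine; it uses the same flat-map approximation as above):

import CoreLocation

func center(northEast: CLLocationCoordinate2D,
            southWest: CLLocationCoordinate2D) -> CLLocationCoordinate2D {
    var neLon = northEast.longitude
    let swLon = southWest.longitude

    // If the region crosses the anti-meridian, shift one corner onto a
    // continuous scale (this may temporarily produce a value like 190).
    if swLon - neLon > 180 { neLon += 360 }
    else if swLon - neLon < -180 { neLon -= 360 }

    var midLon = (neLon + swLon) / 2
    // Bring the result back into the usual -180...180 range.
    if midLon > 180 { midLon -= 360 }
    else if midLon < -180 { midLon += 360 }

    // Latitude is the easy part: a plain average.
    let midLat = (northEast.latitude + southWest.latitude) / 2
    return CLLocationCoordinate2D(latitude: midLat, longitude: midLon)
}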
Also, couldn't you just use zoomToRect with a rect made from the given corners? It'd save you doing this calculation, and then the next one, which would be to work out what zoom level you need when centered on that point to include the two corners you know about. Since the Maps app doesn't appear to scroll over the anti-meridian, I assume MKMapView can't either, so your rectangle is going to have to have the northeast coord as the top right and the southwest as the bottom left.
This SO post has the code to zoom a map view to fit all its annotations.
I am trying to make an application that would detect what kind of shape you made with your iPhone using the accelerometer.
As an example, if you draw a circle with your hand holding the iPhone, the app would be able to redraw it on the screen.
This could also work with squares, or even more complicated shapes.
The only example of an application I've seen doing such a thing is AirPaint (http://vimeo.com/2276713), but it doesn't seem to be able to do it in real time.
My first try is to apply a low-pass filter on the X and Y parameters from the accelerometer, and to make a pointer move toward these values, proportionally to the size of the screen.
But this is clearly not enough: I have very low accuracy, and if I shake the device it also makes the pointer move...
Any ideas about that?
Do you think accelerometer data is enough to do it? Or should I consider using other data, such as the compass?
Thanks in advance!
OK I have found something that seems to work, but I still have some problems.
Here is how I proceed (assuming the device is held vertically):
1 - I have my default x, y, and z values.
2 - I extract the gravity vector from this data using a low pass filter.
3 - I subtract the normalized gravity vector from each of x, y, and z, and get the movement acceleration.
4 - Then, I integrate this acceleration value with respect to time, so I get the velocity.
5 - I integrate this velocity again with respect to time, and find a position.
All of the code below is in the accelerometer:didAccelerate: delegate method of my controller.
I am trying to make a ball move according to the position I found.
Here is my code :
NSTimeInterval interval = 0;
NSDate *now = [NSDate date];

if (previousDate != nil)
{
    interval = [now timeIntervalSinceDate:previousDate];
}
previousDate = now;

// Isolating gravity vector
gravity.x = currentAcceleration.x * kFileringFactor + gravity.x * (1.0 - kFileringFactor);
gravity.y = currentAcceleration.y * kFileringFactor + gravity.y * (1.0 - kFileringFactor);
gravity.z = currentAcceleration.z * kFileringFactor + gravity.z * (1.0 - kFileringFactor);

float gravityNorm = sqrt(gravity.x * gravity.x + gravity.y * gravity.y + gravity.z * gravity.z);

// Removing gravity vector from initial acceleration
filteredAcceleration.x = acceleration.x - gravity.x / gravityNorm;
filteredAcceleration.y = acceleration.y - gravity.y / gravityNorm;
filteredAcceleration.z = acceleration.z - gravity.z / gravityNorm;

// Calculating velocity related to time interval
velocity.x = velocity.x + filteredAcceleration.x * interval;
velocity.y = velocity.y + filteredAcceleration.y * interval;
velocity.z = velocity.z + filteredAcceleration.z * interval;

// Finding position
position.x = position.x + velocity.x * interval * 160;
position.y = position.y + velocity.y * interval * 230;
If I execute this, I get fairly good values; I can see the acceleration going positive or negative according to the movements I make.
But when I try to apply that position to my ball view, I can see it moving, but with a propensity to go more in one direction than the other. For example, if I draw circles with my device, I will see the ball describing curves towards the top-left corner of the screen.
Something like that: http://img685.imageshack.us/i/capturedcran20100422133.png/
Do you have any ideas about what is happening?
Thanks in advance!
The problem is that you can't integrate acceleration twice to get position. Not without knowing the initial position and velocity. Remember the +C term that you added in school when learning about integration? Well, by the time you get to position it is a ct+k term, and it is significant. That's before you consider that the acceleration data you're getting back is quantised and averaged, so you're not actually integrating the actual acceleration of the device. Those errors end up being large when integrated twice.
Watch the AirPaint demo closely and you'll see exactly this happening, the shapes rendered are significantly different to the shapes moved.
Even devices that have some position and velocity sensing (a Wiimote, for example) have trouble doing gesture recognition. It is a tricky problem that folks pay good money (to companies like AILive, for example) to solve for them.
Having said that, you can probably quite easily distinguish between certain types of gesture, if their large-scale characteristics are different. A circle can be detected if the device has received accelerations in each of six angle ranges (for example). You could distinguish between swiping the iPhone through the air and shaking it.
To tell the difference between a circle and a square is going to be much more difficult.
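For what it's worth, here's a toy Swift sketch of that six-angle-ranges idea (the class name, magnitude threshold and so on are made up for illustration; a real recogniser would need much more care):

import Foundation

final class CircleGestureDetector {
    private var sectorsSeen = Set<Int>()
    private let minimumMagnitude = 0.15   // ignore tiny accelerations (value is a guess)

    // Feed in user acceleration in the screen plane (gravity already removed).
    func add(x: Double, y: Double) {
        let magnitude = sqrt(x * x + y * y)
        guard magnitude > minimumMagnitude else { return }
        var angle = atan2(y, x)                      // (-pi, pi]
        if angle < 0 { angle += 2 * .pi }            // [0, 2*pi)
        sectorsSeen.insert(Int(angle / (.pi / 3)))   // six 60° sectors
    }

    // Crude test: every sector has seen some acceleration.
    var looksLikeCircle: Bool { sectorsSeen.count == 6 }

    func reset() { sectorsSeen.removeAll() }
}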
You need to look up how acceleration relates to velocity and velocity to position. My mind is having a wee fart at the moment, but I am sure it's the integral... you want to integrate acceleration with respect to time. Wikipedia should help you with the maths, and I am sure there is a good library somewhere that can help you out.
Just remember though that the accelerometers are not perfect nor polled fast enough. Really sudden movements may not be picked up that well. But for gently drawing in the air, it should work fine.
It seems like you are normalizing your gravity vector before subtracting it from the instantaneous acceleration. This keeps the relative orientation but removes any relative scale. The latest device I tested (admittedly not an iDevice) returned gravity at roughly -9.8, which is probably calibrated to m/s². Assuming no other acceleration, if you normalize this and then subtract it from the filtered value, you end up with a current accel of -8.8 instead of 0.0.
Two options:
- You can just subtract out the gravity vector after the filter pass (sketched below).
- Capture the initial accel vector length, normalize the accel and gravity vectors, and scale the accel vector by the dot product of the accel and gravity normals.
Also worth remembering to take the orientation of the device into account.
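A minimal sketch of the first option (the Vector3 type and function are illustrative; the filter itself mirrors the code in the question):

import Foundation

struct Vector3 { var x = 0.0, y = 0.0, z = 0.0 }

let kFilteringFactor = 0.1
var gravity = Vector3()

func userAcceleration(from raw: Vector3) -> Vector3 {
    // Low-pass filter to isolate gravity, kept in the same units as the raw samples.
    gravity.x = raw.x * kFilteringFactor + gravity.x * (1.0 - kFilteringFactor)
    gravity.y = raw.y * kFilteringFactor + gravity.y * (1.0 - kFilteringFactor)
    gravity.z = raw.z * kFilteringFactor + gravity.z * (1.0 - kFilteringFactor)

    // Subtract the full (unnormalised) gravity estimate, so a resting device reads roughly zero.
    return Vector3(x: raw.x - gravity.x,
                   y: raw.y - gravity.y,
                   z: raw.z - gravity.z)
}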