How to calculate Sun coordinates in the ECEF (geocentric) frame?

I want to find a Python library with which I can obtain the Sun's coordinates in the ECEF (Earth-Centered, Earth-Fixed) frame, i.e. geocentric coordinates. I have tried jplephem, pyephem, and others, but none of them seem able to give these coordinates directly. Please suggest a library, algorithm, or other approach with which I can obtain them.
Best regards.

You are correct that neither of those libraries has built-in support for an ECEF reference frame, though it can be faked in PyEphem by creating an Observer at latitude 0° and longitude 0° whose elevation is negative enough to put it at the center of the Earth.
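As a rough sketch of that workaround (my own illustration, not an officially supported PyEphem workflow), you could place the observer at the geocenter and rotate the Sun's apparent right ascension and declination into Earth-fixed axes yourself, ignoring polar motion and other small frame effects:
import math
import ephem
obs = ephem.Observer()
obs.lat = '0'
obs.lon = '0'
obs.elevation = -6371000          # roughly one Earth radius below the surface
obs.date = ephem.now()
sun = ephem.Sun(obs)              # apparent position as seen from the geocenter
gst = float(obs.sidereal_time())  # with lon = 0 this is Greenwich sidereal time
ra, dec = float(sun.ra), float(sun.dec)
d = sun.earth_distance            # distance in astronomical units
# Rotate the equatorial direction by the sidereal angle to get Earth-fixed axes
x = d * math.cos(dec) * math.cos(ra - gst)
y = d * math.cos(dec) * math.sin(ra - gst)
z = d * math.sin(dec)
print(x, y, z)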
If you are interested in a more modern library, the new 1.34 version of Skyfield directly supports the standard ITRS reference frame, which is ECEF:
from skyfield import framelib
from skyfield.api import load
ts = load.timescale()
t = ts.now()                       # current moment
planets = load('de421.bsp')        # JPL DE421 ephemeris
sun = planets['sun']
earth = planets['earth']
apparent = earth.at(t).observe(sun).apparent()   # apparent geocentric position
vector = apparent.frame_xyz(framelib.itrs)       # express it in the ITRS (ECEF) frame
print(vector.au)
The result:
[-0.653207 -0.62839897 -0.38480108]
The operations you can perform with reference frames are explained in more detail here:
https://rhodesmill.org/skyfield/positions.html#coordinates-in-other-reference-frames
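If you would rather have the components in kilometers or meters than astronomical units, the same Skyfield Distance object also exposes .km and .m:
print(vector.km)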

Related

Instantiate a pyephem EarthSatellite from positional coordinates without a TLE

I have the position of a satellite at a given time, in altitude/latitude/longitude coordinates.
I'd like to calculate the azimuth/elevation from an observer on earth (given in lat/long) to this satellite at that position.
The ephem.EarthSatellite object only operates on TLEs and a desired timestamp. Is there any way to instantiate a satellite from positional coordinates? Maybe with a different ephem.Body type?
No, there is no way to create your own objects with Earth-fixed coordinates in PyEphem. You might want to take a look at its replacement that I am writing, though, called Skyfield — you should be able to create a Topos object with any lat / lon / elevation you want, and then observe it from any other location you define with a Topos and get an alt / az back.
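A minimal sketch of that Skyfield approach (the latitude, longitude, and elevation values below are placeholders, not anything from the question): subtract the observer's Topos from the satellite's Topos and ask the resulting vector for altitude and azimuth.
from skyfield.api import Topos, load
ts = load.timescale()
t = ts.now()
observer = Topos(latitude_degrees=52.0, longitude_degrees=13.0, elevation_m=0.0)
satellite_spot = Topos(latitude_degrees=48.0, longitude_degrees=11.0, elevation_m=550000.0)
# Vector from the observer to the satellite's position, then alt/az at time t
alt, az, distance = (satellite_spot - observer).at(t).altaz()
print(alt.degrees, az.degrees, distance.km)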

Converting Veins Coordinates to GPS

I am using realistic street networks imported from OpenStreetMap for simulations with Veins, for example the Luxembourg scenario from Lara Codeca. Now, to prepare a visualisation (using Google Earth), I want to convert the vehicle positions in the simulation from SUMO or OMNeT++ coordinates to GPS coordinates.
As material I have the OSM file used for generating the scenario, including the GPS positions of all nodes there. I was hoping to find a simple mapping from the simulation coordinates to GPS coordinates, for example, by knowing the GPS coordinates of the corners of the bounding box and the simulation playground.
Is there a simple way to make this conversion, and how can I find the actual corners that were used by the OSM conversion when generating the playground?
The conversion works as follows:
1. Accessing the Location Information from OMNeT++
// Adapt your path to the mobility module here
Veins::TraCIMobility* mobility = check_and_cast<Veins::TraCIMobility*>(
    getParentModule()->getSubmodule("veinsmobility"));
Veins::TraCICommandInterface* traci = mobility->getCommandInterface();
Coord currPos = mobility->getCurrentPosition();
std::pair<double, double> currLonLat = traci->getLonLat(currPos);
getLonLat() returned absolute 2D coordinates for me, so an additional conversion step is required.
2. Finding the Transformation
The .net.xml file from SUMO contains the required transformation. The <location> tag contains the attributes netOffset and projParameter that are needed.
For the Luxembourg scenario, these are
netOffset="-285448.66,-5492398.13"
projParameter="+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs"
3. Inverting the Transformation
The PROJ.4 library can be used to do the inversion. A Python interface, pyproj, is also available.
import pyproj
projection = pyproj.Proj(
    "+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")
# x, y obtained from OMNeT++
lon, lat = projection(x, y, inverse=True)
If only the relative (network) location information is available, the netOffset values must first be subtracted from x and y (with the negative offsets above, this effectively adds their magnitudes) before applying the inverse projection.
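Putting steps 2 and 3 together, a minimal sketch for the Luxembourg values above (net_x and net_y are hypothetical network coordinates you would take from the simulation):
import pyproj
net_offset = (-285448.66, -5492398.13)   # from the <location> tag
projection = pyproj.Proj(
    "+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")
def net_to_lonlat(net_x, net_y):
    # Undo the netOffset to recover absolute UTM coordinates,
    # then apply the inverse projection to get lon/lat
    utm_x = net_x - net_offset[0]
    utm_y = net_y - net_offset[1]
    return projection(utm_x, utm_y, inverse=True)
print(net_to_lonlat(1000.0, 2000.0))     # hypothetical network coordinates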
Edit
Only the first step is necessary when SUMO is built with --with-proj-gdal support; the result of getLonLat() is then already in the desired lon/lat format.

Relative coordinate calculation in custom map

I'm currently working on a mapping app for iPhone. I've created some custom maps of various sizes, but I've run into an issue:
I would like to implement the ability for users' locations to be checked automatically, but since I'm not using a MapView this is much more difficult (see below).
Given the different coordinate systems, I would like to receive a geolocation (green dot) and translate it into a pixel location on a custom map.
I've got the geolocations for the 4 corners, but the rect is askew. I've calculated the angle of rotation, but I'm just generally confused.
Note: the maps aren't big enough for the spherical nature of the earth to come into the calculation.
Any help is appreciated!
To convert a geolocation to a point you first need to understand the mapping; assume it is a Mercator projection.
x = R * long
y = R * ln((1 + sin(lat)) / cos(lat))
where lat and long are in radians and R is the radius of the Earth. The projected x values span from -R*PI to R*PI, so to fit the result within view.frame.size you may have to divide by a scale factor.
For the difference between two points:
x2 - x1 = R * (long2 - long1)
y2 - y1 = R * ( ln((1 + sin(lat2)) / cos(lat2)) - ln((1 + sin(lat1)) / cos(lat1)) )
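For illustration, here is a minimal Python sketch of that projection mapped onto an image (the corner geolocations and image size are assumptions, and it ignores the rotation of an askew map, which would need an extra step):
import math
R = 6371000.0  # mean Earth radius in meters
def mercator_xy(lat_deg, lon_deg):
    # Project latitude/longitude in degrees to Mercator x, y in meters
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return R * lon, R * math.log((1.0 + math.sin(lat)) / math.cos(lat))
def to_pixels(lat_deg, lon_deg, top_left, bottom_right, image_w, image_h):
    # Map a geolocation into pixel coordinates of a north-up map whose
    # top-left and bottom-right corner geolocations are known
    x, y = mercator_xy(lat_deg, lon_deg)
    x0, y0 = mercator_xy(*top_left)
    x1, y1 = mercator_xy(*bottom_right)
    px = (x - x0) / (x1 - x0) * image_w
    py = (y - y0) / (y1 - y0) * image_h
    return px, py
# Hypothetical corner coordinates and image size
print(to_pixels(52.51, 13.40, (52.55, 13.35), (52.48, 13.45), 1024, 768))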

iOS is it possible to convert CLLocation into some sort of XYZ metric coordinate system?

I'm building an augmented reality game, and working with CLLocation is rather cumbersome.
Is there some way to locally approximate CLLocation as XYZ coordinate, expressed in meters with the origin starting at some arbitrary point (for example the initial position when the game was started)?
Let's say I'm working with a 1 mile radius and do not really care about the curvature of the earth. Is it possible to approximate or somehow simplify the location-based calculations for local position tracking?
Alternatively, is there a coordinate system that can be used with CLLocation that also incorporates the roll, pitch, yaw of the CMAttitude as well as compass orientation?
Clarification: As far as I understand, the problem with latitude and longitude is that their units vary in size, depending on the position on the globe. I should've specified that X,Y,Z should be in standard units, like meters or feet.
Thank you!
The Haversine formula may be useful.
I found a good article on it at http://www.jaimerios.com/?p=39 with code examples.
You could get the initial point at the app's launch and calculate the relative points based on the user coordinates as he or she moves. Admittedly, this is not super elegant, but if you are just trying to do some simple comparisons based on the user's location relative to an arbitrary origin, this should work. For the Z, Alex Stone's suggestion of calculating it based on the altitude should be fine.
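A minimal Python sketch of that idea (the origin is simply whichever fix you record at launch; over a 1 mile radius a flat-earth approximation like this is usually fine):
import math
R = 6371000.0  # mean Earth radius in meters
def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points given in degrees
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
def local_xyz(origin_lat, origin_lon, origin_alt, lat, lon, alt):
    # Approximate east (x), north (y), up (z) offsets in meters from the origin
    x = math.copysign(haversine_m(origin_lat, origin_lon, origin_lat, lon), lon - origin_lon)
    y = math.copysign(haversine_m(origin_lat, origin_lon, lat, origin_lon), lat - origin_lat)
    z = alt - origin_alt
    return x, y, z
print(local_xyz(37.3318, -122.0312, 10.0, 37.3330, -122.0290, 15.0))  # hypothetical fixes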

Maths behind iPhone AR ToolKit

I'm using iPhone ARToolkit and I'm wondering how it works.
I want to know how, with a destination location, a user location, and a compass reading, this toolkit can tell whether the user is looking towards that destination.
Can someone explain the maths behind these calculations?
The maths that AR ToolKit uses is basic trigonometry. It doesn't use the technique that Thomas describes, which I think would be a better approach (apart from step 5; see below).
Overview of the steps involved.
The iPhone's GPS supplies the device's location and you already have the coordinates of the location you want to look at.
First it calculates the difference between the latitude and the longitude values of the two points. These two differences let you construct a right-angled triangle and work out at what angle another given position lies from your current position. This is the relevant code:
- (float)angleFromCoordinate:(CLLocationCoordinate2D)first toCoordinate:(CLLocationCoordinate2D)second {
    float longitudinalDifference = second.longitude - first.longitude;
    float latitudinalDifference = second.latitude - first.latitude;
    float possibleAzimuth = (M_PI * .5f) - atan(latitudinalDifference / longitudinalDifference);
    if (longitudinalDifference > 0) return possibleAzimuth;
    else if (longitudinalDifference < 0) return possibleAzimuth + M_PI;
    else if (latitudinalDifference < 0) return M_PI;
    return 0.0f;
}
At this point you can read the compass value from the phone and determine which compass angle (azimuth) your device is pointing at. The reading from the compass is the angle directly in the center of the camera's view. AR ToolKit then calculates the full range of angles currently displayed on screen, since the iPhone's field of view is known.
In particular, it does this by calculating the angle at the leftmost edge of the view:
double leftAzimuth = centerAzimuth - VIEWPORT_WIDTH_RADIANS / 2.0;
if (leftAzimuth < 0.0) {
    leftAzimuth = 2 * M_PI + leftAzimuth;
}
And then the rightmost:
double rightAzimuth = centerAzimuth + VIEWPORT_WIDTH_RADIANS / 2.0;
if (rightAzimuth > 2 * M_PI) {
    rightAzimuth = rightAzimuth - 2 * M_PI;
}
We now have:
The angle relative to our current position of something we want to display
A range of angles which are currently visible on the screen
This is enough to plot a marker on the screen in the correct position (kind of... see the problems section below); a small sketch of this mapping follows.
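As a rough illustration (not AR ToolKit's actual code) of how an azimuth inside that visible range maps to a horizontal screen position, ignoring the wrap-around problems described below:
import math
VIEWPORT_WIDTH_RADIANS = math.radians(30.0)  # assumed horizontal field of view
SCREEN_WIDTH = 320.0                         # assumed screen width in points
def screen_x(marker_azimuth, center_azimuth):
    # Signed angular offset from the center of the view, wrapped into [-pi, pi)
    offset = (marker_azimuth - center_azimuth + math.pi) % (2 * math.pi) - math.pi
    half_width = VIEWPORT_WIDTH_RADIANS / 2.0
    if abs(offset) > half_width:
        return None  # outside the visible azimuth range
    # Linear mapping: left edge of the view -> 0, right edge -> SCREEN_WIDTH
    return (offset + half_width) / VIEWPORT_WIDTH_RADIANS * SCREEN_WIDTH
print(screen_x(math.radians(95.0), math.radians(90.0)))  # marker 5 degrees right of center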
It also does similar calculations related to the device's inclination, so if you look at the sky you hopefully won't see a city marker up there, and if you point the device at your feet you should in theory see cities on the opposite side of the planet. There are problems with these calculations in this toolkit, however.
The problems...
Device orientation is not perfect
The calculation I've just explained assumes you're holding the device in an exact orientation relative to the earth, i.e. perfectly landscape or portrait. Your user probably won't always be doing that. If you tilt the device slightly, your horizon line will no longer be horizontal on screen.
The earth is actually 3D!
The earth is 3-dimensional. Few of the calculations in the toolkit account for that. The calculations it performs are only really accurate when you're pointing the device towards the horizon.
For example, if you try to plot a point on the opposite side of the globe (directly under your feet), this toolkit behaves very strangely. The approach used to calculate the azimuth range on screen is only valid when looking at the horizon. If you point your camera at the floor you can actually see every single compass point, but the toolkit still thinks you're only looking at compass reading ± (width of view / 2). If you rotate on the spot, you'll see your marker move to the edge of the screen, disappear, and then reappear on the other side. What you would expect is for the marker to stay on screen as you rotate.
The solution
I've recently implemented an AR app for which I initially hoped AR Toolkit would do the heavy lifting. I came across the problems just described, which aren't acceptable for my app, so I had to roll my own solution.
Thomas' approach is a good method up to point 5, which, as I explained above, only works when pointing towards the horizon. If you need to plot anything outside of that, it breaks down; in my case I have to plot objects that are overhead, so it's completely unsuitable.
I addressed this by using OpenGL ES to plot my markers where they actually are in 3D space and move the OpenGL viewport around according to readings from the gyroscope while continuously re-calibrating against the compass. The 3D engine handles all the hard work of determining what's on screen.
Hope that's enough to get you started. I wish I could provide more detail than that but short of posting a lot of hacky code I can't. This approach however did address both problems described above. I hope to open source that part of my code at some point but it's very rough and coupled to my problem domain at the moment.
That is all the information needed. With the iPhone's location and the destination location you can calculate the destination angle (with respect to true north).
The only missing piece is the direction the iPhone is currently pointing, which the compass returns (magnetic north + current location -> true north).
Edit: Calculations (this is just an idea; there may be a better solution with fewer coordinate transformations):
1. Convert the current and destination locations to ECEF coordinates.
2. Transform the destination ECEF coordinate into the ENU (east, north, up) local coordinate system, using the current location as the reference location.
3. Ignore the height value and use the ENU coordinates to get the direction: atan2(d_east, d_north); see the sketch below.
4. The compass already returns the angle the iPhone is looking at.
5. Display the destination on screen if dest_angle - 10° <= compass_angle <= dest_angle + 10°, with respect to the cyclic angle space. The constant of 10° is just a guessed value; you should either try some values to find a useful one, or analyse the properties of the iPhone camera.
The coordinate transformation equations become much simpler if you assume that the earth is a sphere rather than an ellipsoid. Most links I have posted assume a WGS-84 ellipsoid, because GPS does too, afaik.
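A rough Python sketch of steps 1-3, using the spherical-earth simplification mentioned above (the radius value and the sample coordinates are my own assumptions):
import math
R = 6371000.0  # spherical-earth radius in meters
def to_ecef(lat_deg, lon_deg, alt_m=0.0):
    # Spherical latitude/longitude/altitude to ECEF x, y, z in meters
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = R + alt_m
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))
def destination_azimuth(cur_lat, cur_lon, dest_lat, dest_lon):
    # Angle to the destination with respect to true north, in degrees
    cx, cy, cz = to_ecef(cur_lat, cur_lon)
    dx, dy, dz = to_ecef(dest_lat, dest_lon)
    vx, vy, vz = dx - cx, dy - cy, dz - cz
    lat, lon = math.radians(cur_lat), math.radians(cur_lon)
    # Rotate the ECEF difference vector into the local ENU frame
    east = -math.sin(lon) * vx + math.cos(lon) * vy
    north = (-math.sin(lat) * math.cos(lon) * vx
             - math.sin(lat) * math.sin(lon) * vy
             + math.cos(lat) * vz)
    # Step 3: ignore "up" and take atan2(d_east, d_north)
    return math.degrees(math.atan2(east, north)) % 360.0
print(destination_azimuth(48.1351, 11.5820, 48.8566, 2.3522))  # hypothetical current/destination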