I am using realistic street networks imported from OpenStreetMap for simulations with Veins, for example the Luxembourg scenario from Lara Codeca. Now, to prepare a visualisation (using Google Earth), I want to convert the vehicle positions in the simulation from SUMO or OMNeT++ coordinates to GPS coordinates.
As material I have the OSM file used for generating the scenario, including the GPS positions of all nodes there. I was hoping to find a simple mapping from the simulation coordinates to GPS coordinates, for example, by knowing the GPS coordinates of the corners of the bounding box and the simulation playground.
Is there a simple way to make this conversion, and how can I find the actual corners that were used by the OSM conversion when generating the playground?
The conversion works as follows:
1. Accessing the Location Information from OMNeT++
// Adapt your path to the mobility module here
Veins::TraCIMobility* mobility =
    check_and_cast<Veins::TraCIMobility*>(
        getParentModule()->getSubmodule("veinsmobility"));
Veins::TraCICommandInterface* traci = mobility->getCommandInterface();
Coord currPos = mobility->getCurrentPosition();
std::pair<double, double> currLonLat = traci->getLonLat(currPos);
getLonLat() returned absolute 2D coordinates for me (projected metres rather than longitude/latitude), so a conversion step is required.
2. Finding the Transformation
The .net.xml file from SUMO contains the required transformation: its <location> tag carries the netOffset and projParameter attributes that are needed.
For the Luxembourg scenario, these are
netOffset="-285448.66,-5492398.13"
projParameter="+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs"
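If you want to read these values programmatically, here is a small sketch using only the Python standard library (the file name is a placeholder for your scenario's network file):
import xml.etree.ElementTree as ET

# 'lust.net.xml' is a placeholder; point this at your scenario's network file
location = ET.parse('lust.net.xml').getroot().find('location')

net_offset = tuple(float(v) for v in location.get('netOffset').split(','))
proj_params = location.get('projParameter')

print(net_offset)    # (-285448.66, -5492398.13) for the Luxembourg scenario
print(proj_params)   # +proj=utm +zone=32 ...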
3. Inverting the Transformation
The PROJ.4 library can be used to perform the inversion; a Python interface (pyproj) is also available.
import pyproj

projection = pyproj.Proj(
    "+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")

# x, y obtained from OMNeT++
lon, lat = projection(x, y, inverse=True)
In case only the relative (network) location information is available, the x, y values must first be adjusted by subtracting the netOffset values from them, as in the sketch below.
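Putting steps 2 and 3 together, here is a minimal sketch of the full conversion for the case where only network (relative) coordinates are available; the hard-coded Luxembourg values and the example coordinates are for illustration only:
import pyproj

# Values taken from the <location> tag of the Luxembourg .net.xml
NET_OFFSET = (-285448.66, -5492398.13)
PROJECTION = pyproj.Proj(
    "+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")

def net_xy_to_lonlat(x, y):
    # Undo the network offset to get back to UTM metres ...
    utm_x = x - NET_OFFSET[0]
    utm_y = y - NET_OFFSET[1]
    # ... then invert the UTM projection to obtain lon/lat in degrees
    return PROJECTION(utm_x, utm_y, inverse=True)

lon, lat = net_xy_to_lonlat(5000.0, 4000.0)   # example network coordinates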
Edit
Only the first step is necessary if SUMO was built with --with-proj-gdal support; in that case the result of getLonLat() is already in the desired format.
Related
I want to find a Python library with which I can obtain the Sun's coordinates in the ECEF (Earth-Centered, Earth-Fixed) frame, i.e. geocentric coordinates. I have tried jplephem, pyephem, etc., but none of them seems able to give these coordinates. Please point me to a library, an algorithm, or anything else with which I can obtain them.
Regards.
You are correct that neither of those libraries has quick support for an ECEF reference frame, though it can be faked in PyEphem by creating an Observer at latitude 0° and longitude 0° whose elevation is negative enough to put it at the center of the Earth.
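A rough sketch of that trick in PyEphem follows; the elevation value is only a nominal Earth radius, and turning the resulting alt/az and distance into an x, y, z vector is left as an exercise, so treat this as an illustration rather than an official recipe:
import ephem

geocenter = ephem.Observer()
geocenter.lat = '0'
geocenter.lon = '0'
geocenter.elevation = -6371000   # roughly one Earth radius below sea level, in metres
geocenter.pressure = 0           # switch off atmospheric refraction

sun = ephem.Sun()
sun.compute(geocenter)
print(sun.az, sun.alt, sun.earth_distance)   # direction and distance (AU) as seen from the geocenter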
If you are interested in a more modern library, the new 1.34 version of Skyfield directly supports the standard ITRS reference frame, which is ECEF:
from skyfield import framelib
from skyfield.api import load

ts = load.timescale()
t = ts.now()

planets = load('de421.bsp')
sun = planets['sun']
earth = planets['earth']

# Apparent position of the Sun as seen from the Earth's center,
# expressed in the Earth-fixed ITRS frame (i.e. ECEF)
apparent = earth.at(t).observe(sun).apparent()
vector = apparent.frame_xyz(framelib.itrs)
print(vector.au)
The result:
[-0.653207 -0.62839897 -0.38480108]
The operations you can perform with reference frames are explained in more detail here:
https://rhodesmill.org/skyfield/positions.html#coordinates-in-other-reference-frames
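For example, if what you want is the direction of the Sun as Earth-fixed angles (roughly the sub-solar point; note that frame_latlon() returns geocentric, not geodetic, latitude), you can continue the example above:
# 'apparent' is the Sun's apparent position from the code above
lat, lon, distance = apparent.frame_latlon(framelib.itrs)
print(lat.degrees, lon.degrees, distance.au)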
I'm working with Leaflet JS, where I expect users to input coordinates in X, Y form, i.e. the Ghana Metre Grid, and I need to convert the X, Y into latitudes and longitudes so that they can be plotted as markers in Leaflet.
Since you are working with Javascript, I suggest approaching this problem with proj4js (the Javascript implementation of OSGeo's PROJ, the industry standard for converting coordinates between coordinate reference systems).
First, grab a release of proj4js, or use a CDN-hosted release, e.g.:
<script src='https://unpkg.com/proj4@2.6.2/dist/proj4.js'></script>
Proj4js doesn't come with a full list of CRS (Coordinate Reference System) definitions, so you'll have to define the CRSs you want to work with. In your case, that's going to be EPSG:25000 AKA "Ghana Metre Grid" and EPSG:4326 AKA WGS84 latitude-longitude:
proj4.defs("EPSG:4326","+proj=longlat +datum=WGS84 +no_defs");
proj4.defs("EPSG:25000","+proj=tmerc +lat_0=4.666666666666667 +lon_0=-1 +k=0.99975 +x_0=274319.51 +y_0=0 +ellps=clrk80 +towgs84=-130,29,364,0,0,0,0 +units=m +no_defs");
You can find the PROJ definitions of CRSs either in the data files of a PROJ release or on websites such as epsg.io.
Once the CRSs have been defined, call proj4js with their names and the coordinates to transform, e.g. to transform from EPSG:4326 to EPSG:25000...
console.log( proj4("EPSG:4326", "EPSG:25000", [-0.187, 5.6037]) );
...or from EPSG:25000 to EPSG:4326...
console.log( proj4("EPSG:25000", "EPSG:4326", [364346.57, 103339.95]) );
See a working example here.
Be wary of the order of coordinates (lat-lon vs lon-lat, or x-y vs y-x). Leaflet uses lat-lng, but proj uses x-y and lng-lat, so you'll have to flip the coordinates, e.g.
var accraLngLat = proj4("EPSG:25000", "EPSG:4326", [364346.57, 103339.95]);
L.marker([accraLngLat[1], accraLngLat[0]]).addTo(map);
or
var accraLngLat = proj4("EPSG:25000", "EPSG:4326", [364346.57, 103339.95]);
var accraLatLng = L.GeoJSON.coordsToLatLng(accraLngLat);
L.marker(accraLatLng).addTo(map);
See a working example here.
Also note that proj4js does all the reprojection work and there are no API calls involved.
Since you are specifically working with Leaflet, you might also be interested in proj4leaflet, although you might not need it. It'll be useful if you want to use Leaflet to display raster data (or map tiles) in different projections.
You can use this online converter: http://epsg.io/?q=Ghana
Using API parameters, you may be able to perform a GET request when the user inputs coordinates for conversion via your app.
I have a micro:bit project where the micro:bit is inserted vertically into a Kitronik robot.
I would like to get the heading of the robot, but
compass.heading()
only works if the micro:bit is horizontal. I have tried reading the x, y, z coordinates of the compass using get_x(), get_y(), get_z(), but the ranges of numbers I am getting are scaled differently for the z axis than for the x and y axes.
Does anyone know what the ranges are for the different sensors?
I used the test code below. I can get an accurate compass reading if I run compass.calibrate() first, even with the magnetometer vertical.
After commenting out the compass.calibrate() line, when moving the board around 3 axes in free space, I can see that the z value does not vary as much as x and y. So I got a small magnet. Moving it around the magnetometer makes the x, y, z values appear to change within roughly the same limits - this is a rough eyeball experiment.
Looking at the data sheet for the MAG3110 magnetometer, I can't see any indication that the 3 magnetometer axes are different. So why are the z readings different without an external field? I hypothesise that there is a ground plane in the PCB (this is common in PCB construction), which could be acting as a shield for the z axis.
from microbit import *

# compass.calibrate()

while True:
    sleep(250)
    # c = compass.heading()
    x = compass.get_x()
    y = compass.get_y()
    z = compass.get_z()
    print('x:{} y:{} z: {}'.format(x, y, z))
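If you need a heading while the board is mounted vertically, one possible workaround is to compute it from two raw axes with atan2. This is only a sketch: which two axes to use and their signs depend on how the board is mounted in the robot, and any magnetic offsets from the robot itself are ignored.
from microbit import *
import math

compass.calibrate()

while True:
    sleep(250)
    # Assumes the horizontal plane is spanned by the x and z axes for a
    # vertically mounted board; swap axes/signs to suit your mounting.
    heading = math.degrees(math.atan2(compass.get_x(), compass.get_z()))
    heading = (heading + 360) % 360   # normalise to 0..359 degrees
    print(heading)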
For those who are working with the micro:bit v2, you can get the compass readings by using the compass.heading() function too.
Code example:
from microbit import *

while True:
    if button_a.was_pressed():
        display.scroll(str(compass.heading()))
The first time you run this code, the micro:bit will ask you to tilt the board in different directions (to calibrate the compass), and after that it will start giving you the heading.
[A summary image of the compass ranges accompanied this answer; it is not reproduced here.]
I'm currently working on a mapping app for iPhone. I've created some custom maps of various sizes, but I've run into an issue:
I would like to implement the ability for users' locations to be checked automatically, but since I'm not using a MapView this is much more difficult (see below).
Given the different coordinate systems, I would like to receive a geolocation (green dot) and translate it into a pixel location on a custom map.
I've got the geolocations for the 4 corners, but the rect is askew. I've calculated the angle of rotation, but I'm just generally confused.
Note: the maps aren't big enough for the spherical nature of the earth to come into the calculation.
Any help is appreciated!
To convert a geolocation to a point you first need to understand the mapping; assuming you are using the Mercator projection:
x = R * long
y = R * ln((1 + sin(lat)) / cos(lat))
where lat and long are in radians and R is the radius of the Earth. The scale of the image would be of the order of R * PI,
so to get it within view.frame.size you may have to divide by a scale factor.
For the difference between two points:
x2 - x1 = R * (long2 - long1)
y2 - y1 = R * ( ln((1 + sin(lat2)) / cos(lat2)) - ln((1 + sin(lat1)) / cos(lat1)) )
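Putting this together, here is a sketch (plain Python rather than Objective-C, to keep the maths readable) of mapping a geolocation to a pixel position on a custom map image. It assumes the image is a rotated rectangle in Mercator space and that you know the geolocations of its top-left, top-right and bottom-left corners; the function and variable names are made up for illustration.
import math

R = 6378137.0  # Earth radius in metres (spherical approximation)

def mercator(lat_deg, lon_deg):
    # Project latitude/longitude in degrees to Mercator x, y in metres
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    return R * lon, R * math.log((1 + math.sin(lat)) / math.cos(lat))

def geo_to_pixel(lat, lon, top_left, top_right, bottom_left, image_w, image_h):
    # Corner arguments are (lat, lon) tuples of the image's corner geolocations
    p = mercator(lat, lon)
    tl = mercator(*top_left)
    tr = mercator(*top_right)
    bl = mercator(*bottom_left)

    # Vectors along the image's top and left edges, in metres
    ux, uy = tr[0] - tl[0], tr[1] - tl[1]   # left -> right
    vx, vy = bl[0] - tl[0], bl[1] - tl[1]   # top -> bottom
    dx, dy = p[0] - tl[0], p[1] - tl[1]

    # Fraction of the way along each edge (projection onto the edge vectors)
    fu = (dx * ux + dy * uy) / (ux * ux + uy * uy)
    fv = (dx * vx + dy * vy) / (vx * vx + vy * vy)

    return fu * image_w, fv * image_h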
I would like to add placemarks to parts of a Collada geometry in Google Earth. To do this I need to translate the geometry coordinates in the XML to match the transformation applied to the model in Google Earth. Given longitude, latitude and orientation, how do I translate a geometry coordinate to match the Google Earth transformation?
I believe models are located on the earth using only one coordinate, which is used to position the 'center' of the model. How that is determined I am not sure; it might even depend on the kind/shape of the model.
The coordinate is easily found in the KML structure for the model, which is referenced here:
http://code.google.com/apis/kml/documentation/models.html
Determining how far a certain part of your geometry is from the middle might be very complex, but here is an example of some code that will give you the distance between two points:
http://earth-api-utility-library.googlecode.com/svn/trunk/extensions/examples/ruler.html
It is based upon the google earth extensions (gex) library
http://code.google.com/p/earth-api-utility-library/
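In case that example is no longer reachable, here is a minimal sketch of the great-circle (haversine) distance calculation it is based on; this is generic code, not the gex library API, and the example coordinates are arbitrary:
import math

def haversine_distance(lat1, lon1, lat2, lon2, radius=6371000.0):
    # Great-circle distance in metres between two lat/lon points given in degrees
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

print(haversine_distance(49.61, 6.13, 49.62, 6.14))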
Perhaps you could use SketchUp to help determine the location of your points of interest within the geometry?
http://sketchup.google.com/