Seems like a simple question, but I have been tearing my hair out for hours now.
I have a series of files, e.g.:
kml_image_L1_0_0.jpg
kml_image_L2_0_0.jpg
kml_image_L2_0_1.jpg
kml_image_L2_1_0.jpg
kml_image_L2_1_1.jpg
etc. However, just plotting them on the Leaflet map surface understandably puts the images at 0,0 on the Earth's surface, and the zoom level 0 inferred from the file names should really be about 15 or so.
So I want to specify the latitude and longitude where the images should originate, and what zoom level they should start at. I have tried bounds (which doesn't display anything) and I have tried playing with offsetting the zoom level.
I need this because a user needs to click on an offline map to specify where they are and I need the GPS coordinates.
I also have a KML file but it seems to be of more help for plotting vector data on the map.
Any help is much appreciated, cheers.
If I understand correctly, the "kml_image_Lz_x_y.jpg" images that you have are actually tiles, with zoom, horizontal and vertical indices in their file names?
And your issue is that they use (z,x,y) numbers as if they started from the top-most level (zoom 0, single tile for entire world), but in fact they are just a small portion of the pyramid of tiles?
And you cannot use them as is because you still want to get actual geographic coordinates (latitude, longitude), which would be totally wrong if you used the tiles as if they were showing the entire world?
In that case, you have several options as workarounds:
The simplest and most reliable would probably be to write a small script to rename all your tiles to their true (z, x, y) numbers.
Another option would be to modify the (z, x, y) numbers before they are written into the tile src attribute, applying the appropriate offset (a constant offset for z, and offsets for x and y scaled by 2 raised to the zoom difference). That should probably happen in the L.TileLayer.getTileUrl() method.
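For what it's worth, here is a rough sketch of what that override could look like. The base zoom, the origin tile indices, and the row/column order in the file names are all assumptions you would have to adjust to your own pyramid:

```typescript
import * as L from "leaflet";

// Assumptions to adjust: "L1" in the file names is taken to correspond to map
// zoom 15, and tile kml_image_L1_0_0 is taken to sit at world tile index
// (X0, Y0) of the standard grid at that zoom.
const BASE_ZOOM = 15;
const X0 = 17896;
const Y0 = 10472;

const OffsetTileLayer = L.TileLayer.extend({
  getTileUrl(coords: L.Coords): string {
    const level = coords.z - BASE_ZOOM + 1;   // L1, L2, ... in the file names
    const scale = 1 << (level - 1);           // 2 raised to the zoom difference
    const x = coords.x - X0 * scale;          // local column within the pyramid
    const y = coords.y - Y0 * scale;          // local row within the pyramid
    return `tiles/kml_image_L${level}_${y}_${x}.jpg`; // row/column order is a guess
  },
});

// Only request tiles at zoom levels the pyramid actually covers.
const layer = new OffsetTileLayer("", { minZoom: BASE_ZOOM, maxZoom: BASE_ZOOM + 4 });
// layer.addTo(map);
```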
Good luck! :-)
I was wondering how I can go about storing and displaying small but geographically accurate distances in the Mapbox Unity SDK?
I'm storing radii around markers on a map. I get the value in meters (from ~0.5 m to 10 m), and then, adaptively with the zoom level, I want to accurately display those meters in Unity world space (draw an ellipse) using these stored values. The problem is that the Mapbox API, from my understanding, only lets you convert lat/long to Unity world coordinates, and I'm running into precision errors. I can get adequate precision when using the CheapRuler class and meters, but as soon as I use the _map.GeoToWorld(latlon) method the precision is lost.
How would I go about keeping adequate precision? Is there a way I can use the marker as the reference point and the radius as the offset, and get the relative Unity world-coordinate distance (of the radius) that way? I know you can also store scale relative to the Mapbox tiles, but I'm not sure how I can convert that back to a Unity world distance. I'm operating on very small distances, so any warping due to lat/long being in a Mercator projection can probably be ignored.
I figured out a round-about solution.
First I convert the meters into unity world space using whatever IMapScalingStrategy Mapbox is currently using.
Then I convert from world to the view space of whatever camera I want to scale to the given bounds.
After that, I find out the scale of the bounds, solving for:
UnityRelativeScaleChange = 2^(MapZoomLevelChange)
which (by my estimation) is the relationship between Unity scale and Mapbox zoom levels.
This solution works great as long as you don't have to zoom in/out by too much; otherwise you'll run into precision problems, as the functions rely on the relative view-based size of a given bounds to do their calculations, which leads to unstable results if those bounds initially take up a tiny portion of the screen.
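In case it helps anyone, here is a tiny sketch of that zoom/scale relationship. This is my own illustration of the arithmetic, not SDK code, and the names are made up:

```typescript
// If a radius was converted to Unity world units once at some reference zoom,
// it can be rescaled as the map zooms: each zoom level doubles the map scale.
function radiusInUnityUnits(
  radiusAtReferenceZoom: number, // Unity-world size measured at referenceZoom
  referenceZoom: number,
  currentZoom: number
): number {
  return radiusAtReferenceZoom * Math.pow(2, currentZoom - referenceZoom);
}
```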
I have a historical city map that I want to display using Leaflet.
I'd like to set the coordinates of this image to reflect the real world, e.g. so I can click on the image and get the real coordinates.
I guess I could just make it an overlay on a real map, but there must be a better solution: just defining the coordinates of the corners of the image.
For this image, the approximate real-world corner coordinates are NW: 60.34343, 18.43360 and SE: 60.33761, 18.44819.
My code, so far, is here:
http://stage1876.xn--regrund-80a.se/example3.html
Any ideas how to proceed? It feels like there should be an easy way to do this.
Any help would be so appreciated!
EDIT: The implementation with tiles (so far) is optional. I could go for a one-image map as well.
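In case it's useful, a minimal sketch of the one-image route with Leaflet's L.imageOverlay, using the corner coordinates given above. The image URL, container id, and view settings are placeholders:

```typescript
import * as L from "leaflet";

// Roughly the midpoint of the two corners, at a zoom where the image fills the view.
const map = L.map("map").setView([60.3405, 18.4409], 15);

// L.imageOverlay stretches a single image between two opposite corners,
// so no tiling is needed.
const corners = L.latLngBounds(
  [60.34343, 18.43360], // NW corner
  [60.33761, 18.44819]  // SE corner
);

L.imageOverlay("historical-map.jpg", corners).addTo(map);

// Clicking anywhere now yields real-world coordinates directly.
map.on("click", (e) => {
  const { latlng } = e as L.LeafletMouseEvent;
  console.log(latlng.lat, latlng.lng);
});
```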
I have a table that contains a bunch of Earth coordinates (latitude/longitude) and associated radii. I also have a table containing a bunch of points that I want to match with those circles, and vice versa. Both are dynamic; that is, a new circle or a new point can be added or deleted at any time. When either is added, I want to be able to match the new circle or point with all applicable points or circles, respectively.
I currently have a PostgreSQL module containing a C function to find the distance between two points on earth given their coordinates, and it seems to work. The problem is scalability. In order for it to do its thing, the function currently has to scan the whole table and do some trigonometric calculations against each row. Both tables are indexed by latitude and longitude, but the function can't use them. It has to do its thing before we know whether the two things match. New information may be posted as often as several times a second, and checking every point every time is starting to become quite unwieldy.
I've looked at PostgreSQL's geometric types, but they seem more suited to rectangular coordinates than to points on a sphere.
How can I arrange/optimize/filter/precalculate this data to make the matching faster and lighten the load?
You haven't mentioned PostGIS - why have you ruled that out as a possibility?
http://postgis.refractions.net/documentation/manual-2.0/PostGIS_Special_Functions_Index.html#PostGIS_GeographyFunctions
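Purely as an illustration of the kind of query PostGIS gives you: the table and column names below are made up, and node-postgres is used only to show the shape of the call. The interesting part is ST_DWithin on the geography type, which takes a distance in meters and can use a GiST index on the geography column instead of re-running trig over every row:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the usual PG* env vars

// Find every stored circle whose radius (in meters) reaches the new point.
async function circlesCoveringPoint(lat: number, lon: number) {
  const { rows } = await pool.query(
    `SELECT id
       FROM circles  -- hypothetical table: center geometry, radius_m double precision
      WHERE ST_DWithin(center::geography,
                       ST_SetSRID(ST_MakePoint($1, $2), 4326)::geography,
                       radius_m)`,
    [lon, lat] // note: ST_MakePoint takes (longitude, latitude)
  );
  return rows;
}
```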
Thinking out loud a bit here... you have a point (lat/long) and a radius, and you want to find all existing point-radius combinations that may overlap? (Or something like that...)
Seems you might be able to store a few more bits of information along with those numbers that could help you rule out others that are nowhere close during your query... This might avoid a lot of trig operations.
For example, with point (x, y) and radius r, you could easily calculate a range of feasible lat/long values (a squarish area) that could be used to rule out needless calculations against other points.
You could then store the max and min lat and long along with that point in the database. Then, before running your trig on every row, you could filter your results to eliminate points obviously out of bounds.
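Something like this sketch could compute the box to store. It assumes a roughly spherical Earth and ignores pole/antimeridian edge cases; the constant and function name are my own:

```typescript
// Meters per degree of latitude (approximately constant on a spherical Earth).
const EARTH_METERS_PER_DEG_LAT = 111_320;

// Bounding box (min/max lat and long) that fully contains a circle of
// radiusMeters around (latDeg, lonDeg). Store these four numbers per circle
// so an indexed range filter can discard most rows before the exact check.
function boundingBox(latDeg: number, lonDeg: number, radiusMeters: number) {
  const dLat = radiusMeters / EARTH_METERS_PER_DEG_LAT;
  const dLon =
    radiusMeters /
    (EARTH_METERS_PER_DEG_LAT * Math.cos((latDeg * Math.PI) / 180));
  return {
    minLat: latDeg - dLat,
    maxLat: latDeg + dLat,
    minLon: lonDeg - dLon,
    maxLon: lonDeg + dLon,
  };
}
```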
If I understand you correctly, then my first idea would be to cache some data and eliminate most of the checking.
Imagine your circle is actually a box with 4 sides.
You could store the coordinates of those edges, much like the grid lines (a mesh) on a real map. So you store the east, west, north, and south edge of each circle.
If you get your coordinate and it's outside of that box, you can be sure it won't be inside the circle either, since the box is bigger than the circle.
If it isn't, then you have to check like you do now. But I guess this eliminates most of the checks already.
Maybe I'm asking this too soon in my research, but I'd better know if this is possible sooner than later.
Imagine I have the following square printed on a paper on top of a table:
The table is brown, so it does not match any of the colors in the square. Is there a way for me, from a common iPhone camera (non-stereo view), to figure out the distance and angle from which I'm looking at the square on the table?
In the end what I'm looking for is being able to draw a 3D square on top of this one using the camera image, but I'm not sure if I am going to be able to figure out the distance and position of the object in space using only a 2D image. Any hints are well appreciated.
Short answer: http://weblog.bocoup.com/javascript-augmented-reality
Big answer:
First posterize, then vectorize. With the vectors in hand, you may need to do some math tricks to determine, based on the vectors' positions, the perspective and then the camera position.
Maybe these help:
www.pixastic.com/lib/docs/actions/posterize/
github.com/selead/cl-vectorizer
vectormagic.com/home
autotrace.sourceforge.net
www.scipy.org/PyLab
raphaeljs.com/
technabob.com/blog/2007/12/29/video-games-get-vectorized/
superuser.com/questions/88415/is-there-an-open-source-alternative-to-vector-magic
Oughta be possible. Scan the image for the red/blue/yellow pattern, then do edge detection to figure out how warped the squares are (they'll be parallelograms in anything but straight-on view). Distance would depend on the camera's zoom setting and scan resolution. But basically you'd count how many pixels are visible in each of the squares, run that past the camera's specs and you should be able to determine a rough distance.
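As a rough illustration of the "count pixels and run that past the camera's specs" part, the pinhole-camera relation looks like this. The focal length in pixels and the printed square's real side length are things you would have to look up or calibrate:

```typescript
// Pinhole model: apparentSizePx = focalLengthPx * realSize / distance,
// so distance = focalLengthPx * realSize / apparentSizePx.
function roughDistanceMeters(
  squareSideMeters: number, // real-world side length of the printed square
  squareSidePixels: number, // measured side length in the image
  focalLengthPx: number     // focal length in pixel units for that resolution
): number {
  return (focalLengthPx * squareSideMeters) / squareSidePixels;
}
```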
I have a series of nature reserves that need to be plotted, as polygon overlays, on a map using the coordinates contained within KML data. I’ve found a tutorial on the Apple website for displaying KML overlays on map instances.
The problem is that the reserves vary greatly in size - from a small pond right up to several hundred kilometers across. As a result I can't use the coordinates of the center point to find the nearest reserves. Instead I need to calculate the nearest point of each reserve's polygon to find the nearest one. With the data in KML, how would I go about trying to achieve this?
I've only managed to find one other person ask this and no one had replied :(
Well, there are a couple different solutions depending on your needs. The higher the accuracy required, the more work required. I like Phil's meanRadius parameter idea. That would give you a rough idea of which polygon is closest and would be pretty easy to calculate. This idea works best if the polygons are "circlish". If the polygons are very irregular in shape, this idea loses its accuracy.
From a math standpoint, here is what you want to do. Loop through all points of all polygons. Calculate the distance from those points to your current coordinate. Then just keep track of which one is closest. There is one final wrinkle. Imagine two points making a very long line segment, and you are located one meter away from the midpoint of that line. The distance to the two endpoints is large, while in fact you are very close to the polygon. So you will need to calculate the distance from your coordinate to every possible line segment, which you can do in a variety of ways that are outlined here:
http://www.worsleyschool.net/science/files/linepoint/distance.html
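Here is a minimal sketch of that point-to-segment distance. Planar coordinates are assumed; for short distances you can project lat/long to a local plane first (e.g. scale longitude by cos(latitude)), which I'm treating as already done:

```typescript
type Pt = { x: number; y: number };

function distanceToSegment(p: Pt, a: Pt, b: Pt): number {
  const abx = b.x - a.x;
  const aby = b.y - a.y;
  const lenSq = abx * abx + aby * aby;
  // Degenerate segment: both endpoints coincide.
  if (lenSq === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  // Project p onto the line through a and b, then clamp to the segment.
  let t = ((p.x - a.x) * abx + (p.y - a.y) * aby) / lenSq;
  t = Math.max(0, Math.min(1, t));
  const cx = a.x + t * abx;
  const cy = a.y + t * aby;
  return Math.hypot(p.x - cx, p.y - cy);
}
```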
Finally, you need to ask yourself, am I in any polygons? If you're 10 meters away from a point on a polygon, but are, in fact, inside the polygon, obviously, you need to consider that. The best way to do that is to use a ray casting algorithm:
http://en.wikipedia.org/wiki/Point_in_polygon#Ray_casting_algorithm
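And for the point-in-polygon check, a standard even-odd ray-casting test, again on planar coordinates (my own sketch, not code from the link):

```typescript
type Pt = { x: number; y: number };

// Casts a horizontal ray from p and counts how many polygon edges it crosses:
// an odd count means p is inside the polygon.
function pointInPolygon(p: Pt, polygon: Pt[]): boolean {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const a = polygon[i];
    const b = polygon[j];
    const crosses =
      (a.y > p.y) !== (b.y > p.y) &&
      p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x;
    if (crosses) inside = !inside;
  }
  return inside;
}
```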