I am bringing up a tile server with renderd/Mapnik/PostgreSQL. Everything works fine; the problem is that I cannot restrict my map to a specific region, because I need the whole world to be available. On the other hand, I really do not need all the data that the map has (roads, ski slopes, terrain, etc.). The planet OSM file is too large, and importing it into PostgreSQL takes days.
Is there a way to prepare, or to get from somewhere, a slimmer OSM file that contains only borders and cities?
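One way to build such a slimmer extract yourself (a sketch, assuming the `osmium` CLI from osmium-tool is installed, and that "borders and cities" means administrative boundary relations plus populated-place nodes) is to tag-filter the planet file before importing:

```python
import shlex
import subprocess

def osmium_filter_cmd(src: str, dst: str) -> list:
    """Build an `osmium tags-filter` invocation that keeps only
    administrative boundary relations and populated-place nodes."""
    return [
        "osmium", "tags-filter", src,
        "r/boundary=administrative",   # country/state/municipal borders
        "n/place=city,town",           # city and town nodes
        "-o", dst,
    ]

cmd = osmium_filter_cmd("planet-latest.osm.pbf", "borders-and-cities.osm.pbf")
print(" ".join(shlex.quote(c) for c in cmd))
# Actually run it (commented out so the sketch stays side-effect free):
# subprocess.run(cmd, check=True)
```

The filtered .pbf should import into PostgreSQL in minutes rather than days; starting from a regional Geofabrik extract instead of the planet file shrinks it further.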
I'm using OpenMapTiles to download OSM data and create an MBTiles file with Mapbox vector tiles. This all works great, except that I'm targeting an embedded platform.
At zoom level 14 with the default extent of 4096, a single tile can be over 1 MB and cover an entire city. Not only is that a huge file to process on an embedded platform, it also means you're potentially sifting through every house in an entire city. I went as far as writing a streaming protobuf parser, but it takes 10 minutes just to parse such a file.
How can I generate mapbox vector tiles with a smaller extent?
There appears to be a parameter for it, but I can't figure out where it actually gets used to generate tiles or how to modify it: https://github.com/openmaptiles/openmaptiles-tools/blob/4cc6e88dfdef83de69bd49845e0f23908d9edecc/openmaptiles/sqltomvt.py#L25
I'm not married to OpenMapTiles, but it's what I'm currently using to download and process OpenStreetMap data.
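For context on what the extent means: if the pipeline ultimately renders tiles with PostGIS (as OpenMapTiles does), the extent is the third argument of `ST_AsMVTGeom(geom, bounds, extent, buffer, clip_geom)`, defaulting to 4096. It is simply the integer grid that coordinates inside a tile are quantized to. A sketch of that quantization (standard Web Mercator math, not OpenMapTiles code):

```python
import math

def in_tile_coords(lon, lat, z, x, y, extent=4096):
    """Quantize lon/lat to integer coordinates on the extent grid of
    tile (z, x, y), the way an MVT encoder does (Web Mercator)."""
    n = 2 ** z
    fx = (lon + 180.0) / 360.0 * n                  # global fractional x
    fy = (1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n
    return int((fx - x) * extent), int((fy - y) * extent)

# The same point at two extents: only the coordinate resolution changes.
print(in_tile_coords(0.0, 0.0, 0, 0, 0, extent=4096))  # (2048, 2048)
print(in_tile_coords(0.0, 0.0, 0, 0, 0, extent=256))   # (128, 128)
```

Note that a smaller extent mainly reduces coordinate precision (consecutive points can collapse onto the same grid cell and be dropped); the feature count itself is better reduced by dropping layers or lowering the max zoom.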
In case others need it: use https://github.com/mapbox/tilelive and specify the max zoom you want:
bin/tilelive-copy --minzoom=0 --maxzoom=10 ~/Apps/tileserver/tiles/osm-2017-07-03-v3.6.1-planet.mbtiles ~/Apps/tileserver/tiles/small-osm-2017-07-03-v3.6.1-planet.mbtiles
On my map, users will be saving coordinates in an external database, which will show up as markers on the map. I was wondering whether it would make more sense to keep the data 100% in the external database and just pull the data for those markers on the fly, or to import the data into a dataset and then a tileset. The thing is, the data will be added over time, so I would need to continually add new points to the tileset.
Is it faster when data is kept in the dataset/tileset versus pulled from another server? Are there size limitations when pulling external data versus importing it into a dataset/tileset?
What you're really asking is the difference between bulk loading a whole dataset as GeoJSON (or perhaps GeoBuf) and loading it via vector tiles.
As a basic rule of thumb, if the size of your data as GeoJSON is under 5 MB, save yourself the complexity of vector tiles and just load the GeoJSON.
If it's over, say, 20 MB, you really have to use vector tiles. In the middle? Up to you.
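That rule of thumb can be written down directly (thresholds from above; they are guidance, not hard limits):

```python
def loading_strategy(size_bytes: int) -> str:
    """Pick a client loading strategy from the raw GeoJSON size."""
    mb = 1024 * 1024
    if size_bytes < 5 * mb:
        return "plain GeoJSON"      # simplest; one fetch, no tiling
    if size_bytes > 20 * mb:
        return "vector tiles"       # too big to ship in one payload
    return "either"                 # judgment call
```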
I've created a base layer and 6 different overlay (Points of Interest) layers for a leaflet map.
The base layer of markers can appear on the map almost anywhere in the world, but I want the POI layers to appear only if they are in the same area as the base layer (mapBounds), roughly the visible screen area.
All the data is pulled from a MySQL database, and using Ajax I create the various sets of markers from two different tables, base and poi. This much is all done and working; you can see it at https://net-control.us/map2.php. The green and blue markers are from the base table; other markers are selected for view by clicking on the appropriate icon in the lower right. The only one active at the moment is 'Fire Station'. But if you zoom out far enough, you will see additional fire stations in the Kansas City area and in Florida. Those sets are not needed.
After the query runs, I create a fitBounds variable for the base layer and another, poiBounds, for the POI layer, but I'm not sure I need the poiBounds. The number of base markers is generally fewer than 50, but if all the POI markers are pulled worldwide, that number could be very large.
So I'm hoping someone can help me determine a best practice for this kind of scenario and maybe offer up an example of how it should be done. Should I...
1) Download all POIs and not worry about them appearing outside the base bounds? Should I stop them from showing in the JavaScript or in the SQL? How?
2) If I filter the unwanted points out in SQL, do I test one POI at a time to see whether it is included in the base bounds? How? Are there MySQL functions for working with this kind of data?
I'm fairly new to Leaflet maps and would appreciate examples if appropriate.
You probably want a column of type POINT, a spatial index on that column (which internally is likely implemented as an R-tree), and spatial relation functions in your SQL query so that the query can use the index.
Start by reading https://dev.mysql.com/doc/refman/8.0/en/spatial-types.html. Take your time, as spatial databases, spatial data types, and spatial indexes work a bit differently from their non-spatial equivalents.
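A minimal sketch of what that looks like in practice (the table name `poi` and column name `location` are hypothetical; assumes MySQL 8 with a `POINT SRID 4326` column and a SPATIAL index on it). The bounding-box test stays in SQL so the index is used, instead of shipping every POI to the browser:

```python
# Hypothetical schema:
#   poi(id, name, location POINT NOT NULL SRID 4326, SPATIAL INDEX(location))
POI_IN_BOUNDS_SQL = """
SELECT id, name, ST_Latitude(location) AS lat, ST_Longitude(location) AS lng
FROM poi
WHERE MBRContains(ST_GeomFromText(%s, 4326), location)
"""

def bounds_wkt(south, west, north, east):
    """WKT polygon for the corners of Leaflet's map.getBounds().
    MySQL 8 uses latitude/longitude axis order for SRID 4326."""
    ring = [(south, west), (south, east), (north, east),
            (north, west), (south, west)]
    return "POLYGON((%s))" % ", ".join(f"{lat} {lng}" for lat, lng in ring)
```

`MBRContains` is a pure bounding-box test, which is exactly what a map viewport is, and it can be answered from the spatial index alone; you would pass `bounds_wkt(...)` as the query parameter from your Ajax endpoint.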
I created a website (http://www.cartescolaire.paris) that basically loads a GeoJSON file and displays it on a map using Leaflet.
This GeoJSON is pretty large (over 2 MB), so loading can take a very long time (it doesn't even load in IE 11). More importantly, the resulting map is not very responsive when zooming or panning.
There are around 110 zones (clicking on a point in the map highlights the zone it belongs to), each of them made from dozens of polygons.
However, the only important information that I want to visualize is the external boundary of each zone.
Such a simplified geometry would be much more efficient performance-wise.
The complexity arises from the constraint that the zones shouldn't overlap.
The final result should be disjoint clusters.
Any idea how I could do that?
Thanks a lot!
Hello,
It sounds like you need to merge polygons, which would decrease the number of vector features and the size of your GeoJSON file, and improve map responsiveness. Keeping the resulting polygons disjoint should not be difficult.
You should find plenty of resources on this on SO / GIS Stack Exchange and Google, for example:
https://gis.stackexchange.com/questions/118223/merge-geojson-polygons-with-wgs84-coordinate
https://gis.stackexchange.com/questions/118547/convert-geojson-with-multiple-polygons-and-multipolygons-into-single-multipolygo
http://morganherlocker.com/post/Merge-a-Set-of-Polygons-with-turf/
(see also the related posts in the right-hand menu of this page, just above "Hot Network Questions")
Your case might be slightly different, as most of your polygons are not adjacent but are actually separated by empty areas / a margin (streets).
You might also be interested in UTFGrid for the interaction (clicking on the map to open the school associated with that area), as it would dramatically improve your map's responsiveness: instead of vector shapes, you have the equivalent of tiles. See an example: http://danzel.github.io/Leaflet.utfgrid/example/map.html
However, I do not think you can visually show the areas with UTFGrid.
But you could combine this approach with canvas-based tiles, or even pre-generate tiles on your server and have them ready for display, rather than keeping a GeoJSON file for client-side rendering.
Good luck!
I'm working on a project using Leaflet. For this project I've created an interface to draw all the roofs of a large city as polygons. A number of scripts calculate the surfaces, the addresses, the orientation, and so on. I'll store each roof's data in a GeoJSON file (or files). We expect about 10,000 roofs or more. I don't know whether Leaflet draws only the polygons visible within the window's bounds or draws all of them, and my problem is to find the best way to store this data.
1) In one GeoJSON file. This may be a problem because all 10,000 roofs would be loaded at the same time, and waiting for the polygons to load could be very tedious for users.
2) In separate GeoJSON files: for each roof I can approximately calculate the coordinates of its center and put the roof in the right GeoJSON file depending on its lat/lng. This way I can create (say 20 or 50) different GeoJSON files and request the right one depending on the bounds. Then the question is: to create all the polygons, is it better to request the 6 (or 8 or 10) useful GeoJSON files for the on-screen area, or to create a new dynamic GeoJSON file?
3) All the roof data is stored in a database or an XML file, and I detect the bounds and automatically create a dynamic GeoJSON file. But each time the user scrolls, drags, or zooms the map, I would have to recreate this single GeoJSON file...
Have you ever had to solve a similar problem, and how did you solve it? Thanks.
I think the second scenario is the most robust: you need tiled GeoJSON. Have a look at the Leaflet plugins list; there is one called TileLayer.GeoJSON. Some links on how to create these tiles are on the OpenStreetMap wiki.
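The tiling scheme behind such plugins is the standard slippy-map grid (the same z/x/y as OSM raster tiles). A sketch of the index math used to decide which file a roof centroid, or a viewport corner, falls into:

```python
import math

def lonlat_to_tile(lon: float, lat: float, zoom: int):
    """Web Mercator slippy-map tile index containing a point."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

# Store each roof polygon in tiles/{z}/{x}/{y}.geojson keyed by its centroid;
# the client then fetches only the tiles covering the current map bounds.
print(lonlat_to_tile(0.0, 0.0, 1))  # (1, 1)
```

Pre-cutting the files this way keeps the server static (no per-pan regeneration, unlike scenario 3) while the client still downloads only what is on screen.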