Leaflet.js and complex polygons

For a small project I am allowing users to add areas to the database. Their query is sent to http://nominatim.openstreetmap.org and I store the latitude and longitude. If available, I also store the geoJSON polygon outline data.
Example output: http://nominatim.openstreetmap.org/search?q=wyoming&format=xml&polygon_geojson=1&addressdetails=1
This outlined area is then displayed on a map using leaflet.js. For a lot of polygons this works out just fine, but it seems that there is a limit to the amount of data the library can process. Some rather complex areas (that require a longtext to store in mysql) simply do not get displayed at all, without an error being thrown.
I guess my question has two parts:
1 - Am I right to assume that the large datasets are the root of the problem or should leaflet.js be able to handle those?
2 - What would be the best way of simplifying such datasets? Leaflet has such an algorithm for displaying areas, but that seems to be the failing point already.
And while we are on the topic: Right now I'm converting Nominatim's lng/lat polygons to Leaflet's lat/lng by splitting up the data and patching it back together in JavaScript. Is there an easier/safer way to do that? Should I rather move that task to the server and use some PHP library/function?
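For reference, a minimal sketch of both approaches (assuming Leaflet 1.x and that geojson holds the raw Polygon geometry from Nominatim's response; the variable names are made up). Leaflet's own GeoJSON layer already understands GeoJSON's [lng, lat] order, so the manual swap is usually unnecessary:

// Manual swap: GeoJSON stores [lng, lat], while Leaflet wants (lat, lng); outer ring only
const latlngs = geojson.coordinates[0].map(([lng, lat]) => L.latLng(lat, lng));
L.polygon(latlngs).addTo(map);

// Simpler: let Leaflet's GeoJSON layer handle the coordinate order itself
L.geoJSON(geojson).addTo(map);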
I appreciate your help!
Edit: Forgot to mention: on the occasion that the polygon fails to render, my console gives me this error: TypeError: t is null

are the large datasets the root of the problem or should leaflet.js be able to handle those?
Leaflet.js will handle whatever you throw at it. There is, however, a limit on what your web browser can handle without slowing down.
Remember that every modern web browser has performance analysis tools that you can use to see which parts of the Leaflet code (or of the browser's internals) are taking up most of your time.
What would be the best way of simplifying such datasets?
You probably want to look at the Douglas-Peucker algorithm as a starting point for these simplification algorithms.
Keep in mind that Leaflet already uses this algorithm internally to simplify polygons at each zoom level, down to a tolerance of about one pixel.
For some big complex polygons, slicing them up with something like Leaflet.VectorGrid might improve the performance.
There is no silver bullet for simplifying datasets, however. The best way will depend on the specific data you are using.
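If you do decide to pre-simplify the data yourself before handing it to Leaflet, here is a minimal sketch using Turf.js (an assumption on my part; geojson is the Nominatim outline and the tolerance is something you would have to tune per dataset):

// Douglas-Peucker style simplification with Turf, then render the lighter geometry
const simplified = turf.simplify(geojson, { tolerance: 0.01, highQuality: false });
L.geoJSON(simplified).addTo(map);

The same call can also run server-side (e.g. in Node) so the browser never sees the full-resolution polygon.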
on the occasion that the polygon fails to render, my console gives me this error: TypeError: t is null
This is a different matter, and might be a symptom of malformed data.
In order to display prettier error messages, use the leaflet-src.js file instead of the leaflet.js file. Starting with Leaflet 1.0.0-beta2, leaflet-src.js has a sourcemap, which can point to the individual original files, allowing for better debugging.

Related

Best Practice to display local markers and a wider area of points of interest markers?

I've created a base layer and 6 different overlay (Points of Interest) layers for a leaflet map.
The base layer of markers can appear on the map almost anywhere in the world, but I want the POI layers to appear only if they are in the same area (mapBounds) of the base layer. Probably the screen size.
All the data is pulled from a MySQL database and using Ajax I create the various sets of markers from two different tables, base and poi. This much is all done and working, you can see it at https://net-control.us/map2.php. The green and blue markers are from the base table, other markers are currently selected for view by clicking on the appropriate icon in the lower right. The only one active at the moment is 'Fire Station'. But if you zoom out far enough you will see additional fire stations in the Kansas City area, and in Florida. Those sets are not needed.
After the query runs I create a fitBounds variable of the base layer and another poiBounds for the poi layer. But I'm not sure I need the poiBounds. The number of base markers is generally less than 50 for the base query, but if all the poi markers are pulled world wide that number could be very large.
So I'm hoping someone can help me determine a best practice for this kind of scenario and maybe offer up an example of how it should be done. Should I...
1) Download all POIs and not worry about them appearing outside the base bounds layer? Should I inhibit them from showing in the JavaScript or in the SQL? How?
2) If I inhibit the unwanted points from SQL, do I test one POI at a time to see if it's included in the base bounds? How? Are there MySQL functions perhaps to work with this kind of data?
I'm fairly new at leaflet maps and would appreciate examples if appropriate.
2) If I inhibit the unwanted points from SQL, do I test one POI at a time to see if it's included in the base bounds? How? Are there MySQL functions perhaps to work with this kind of data?
You probably want a column of type POINT, a spatial index on that column (which internally is likely to be implemented as an R-tree), and spatial relation functions in your SQL query to make use of that index.
Start by reading https://dev.mysql.com/doc/refman/8.0/en/spatial-types.html. Take your time, as spatial databases, spatial data types and spatial indices work a bit differently than their non-spatial equivalents.
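As a rough sketch of what the Ajax endpoint could then do (assuming Node with the mysql2 package, a poi table that has gained a location POINT SRID 4326 column with a SPATIAL INDEX next to its existing lat/lng columns, and that the client posts the base layer's bounds; all names here are illustrative):

const mysql = require('mysql2/promise');   // hypothetical setup: conn = await mysql.createConnection({...})

// bounds = { south, west, north, east }, e.g. taken from baseLayer.getBounds() on the client
async function poisInBounds(conn, bounds) {
  // MySQL 8 with SRID 4326 expects latitude first in WKT; the ring must be closed
  const bbox =
    `POLYGON((${bounds.south} ${bounds.west}, ${bounds.south} ${bounds.east}, ` +
    `${bounds.north} ${bounds.east}, ${bounds.north} ${bounds.west}, ` +
    `${bounds.south} ${bounds.west}))`;
  // MBRContains is a simple bounding-box test and can make use of the spatial index
  const [rows] = await conn.execute(
    'SELECT id, name, lat, lng FROM poi WHERE MBRContains(ST_GeomFromText(?, 4326), location)',
    [bbox]
  );
  return rows;
}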

Optimizing and tracking Mapbox mapviews

I have a Mapbox mapping application that gets a LOT of map views per user — on the order of 30 per hit. Which seems kind of high! (And expensive!)
(The application tries to use MapboxGL by default and falls back to Mapbox.js if it can't. All versions also use Leaflet for whatever that is worth.)
I've been trying to debug this, but it's been hard for me to measure how different changes affect the overall number of map views.
Is there any way to get a real-time count of how many map views my application is generating? I am reasonably sure there isn't some kind of simple variable to query (that I have found), but maybe there is some way to count/keep track either in JS or the developer console? Any thoughts would be appreciated.
Are you using a Mapbox Studio style or your own tiles? In both cases you can count the tiles being requested by your app using the data event:
let tileCount = 0;
// Each tile request fires a 'data' event that carries a tile property
map.on('data', (event) => {
  if (event.tile) tileCount++;
});
That's a very simple example. AFAIK one map view consists of four tile requests.
If you have a lot of different tile sources loading in parallel, you end up with a lot of requests and hence map views. If possible, you could merge multiple sources into a single tile set (if you are using vector tiles).
If you use your own tiles, e.g. raster tiles, you can increase your tile size from 256px to 512px, which should result in fewer requests. For vector tiles the size is fixed at 256.

Simplify GeoJSON by retaining external boundaries (clustering)

I created a website (http://www.cartescolaire.paris) that basically loads a GeoJSON and displays it on a map using Leaflet.
This GeoJSON is pretty large (over 2 MB), so the loading time can be very long (it doesn't even load on IE 11). More importantly, the resulting map is not very responsive when zooming/navigating.
There are around 110 zones (clicking on a point in the map highlights the zone it belongs to), each of them made from dozens of polygons.
However the only important information that I want to visualize is the external boundaries of each zone.
Such a compressed geometry would be much more efficient performance-wise.
The complexity arises from the constraint that the zones shouldn't overlap.
The final result should be disjoint clusters.
Any idea how I could do that?
Thanks a lot!
Hello,
It sounds like you need to merge (dissolve) your polygons, so that you decrease the number of vector features and the weight of your GeoJSON file, and improve map responsiveness. Keeping the resulting polygons disjoint should not be difficult.
You should have plenty of resources on SO / GIS Stack Exchange and Google on this, for example:
https://gis.stackexchange.com/questions/118223/merge-geojson-polygons-with-wgs84-coordinate
https://gis.stackexchange.com/questions/118547/convert-geojson-with-multiple-polygons-and-multipolygons-into-single-multipolygo
http://morganherlocker.com/post/Merge-a-Set-of-Polygons-with-turf/
(see also the related posts on SO on the right menu of this page, just above "Hot Network Questions")
Your case might be slightly different as most of your polygons are not adjacent, but are actually separated by empty areas / a margin (streets).
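A minimal sketch of the merge step with Turf.js (assuming Turf 6.x, where turf.union takes two features; zones is an invented object mapping each zone id to the array of its polygon features):

// Dissolve each zone's dozens of polygons into one (Multi)Polygon feature
const merged = Object.entries(zones).map(([zoneId, polygons]) => {
  const feature = polygons.reduce((acc, poly) => turf.union(acc, poly));
  feature.properties = { zone: zoneId };
  return feature;
});
L.geoJSON({ type: 'FeatureCollection', features: merged }).addTo(map);

Because most of your polygons are separated by streets, each union will still be a MultiPolygon with internal gaps; applying a small turf.buffer before the union is one way to close those gaps, at the cost of slightly inflated boundaries.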
You might also be interested in UTFGrid for the interaction (clicking on the map to open the school associated with that area), as it would dramatically improve your map responsiveness: instead of vector shapes, you have the equivalent of tiles. See an example: http://danzel.github.io/Leaflet.utfgrid/example/map.html
However, I do not think you can visually show the areas with UTFGrid.
But you could combine this approach with canvas-based tiles, or even pre-generate tiles on your server and have them ready for display, rather than keeping a GeoJSON for client-side computation.
Good luck!

Translating GPS coordinates to map tile-like structure

I'm a complete illiterate when it comes to working with geographical data, so bear with me.
For our application we will be tracking a fairly large amount of rapidly changing points on a map. It would be nice to be able to cache the location of these points in some kind of map-tile structure so it would be easy to find all points currently in the same tile or neighbouring tiles, making it easier to quickly determine the nearest neighbours and have special logic for specific tiles, etc.
Although we're working with one specific (but already large) location, it would be nice if a solution would scale to other locations as well. Since we would only cache tiles that concern the system, would just tiling the entire planet be the best option? The dimensions of a tile would then be measured in arc seconds/minutes, or is that a bad idea?
We already work with Postgres, and this seems like something that could be done with PostGIS (is this what rasters are?), but jumping into the documentation/tutorials without knowing what exactly I'm looking for is proving difficult. Any ideas?
PostGIS is all that you need. It can store your points in any coordinate reference system, but you'll probably be using longitude/latitude. Are your points coming from a GPS device?
PostGIS uses GIST indexing, making the search for points close to a given point quite efficient. One option you might want to look at, seeing that you are interested in tiling, is to "geohash" your points. Basically, this turns an (X,Y) coordinate pair into a single "string" with a length depending on the level of partitioning. Nearby points will have the same geohash value (= 1 tile) and are then easily identified with standard database search tools. See this answer and related question for some more considerations and an example in PostgreSQL.
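For illustration, a rough sketch of that bucketing idea in Node using the ngeohash package (the precision, the points array and the helper names are all made up; PostGIS can also compute hashes directly with ST_GeoHash):

const geohash = require('ngeohash');

const precision = 7;                      // cells of roughly 150 m x 150 m, tune as needed
const buckets = new Map();
for (const p of points) {                 // points = [{ id, lat, lon }, ...]
  const hash = geohash.encode(p.lat, p.lon, precision);
  if (!buckets.has(hash)) buckets.set(hash, []);
  buckets.get(hash).push(p);
}

// Nearest-neighbour candidates = the point's own cell plus the 8 surrounding cells
function candidatesNear(p) {
  const hash = geohash.encode(p.lat, p.lon, precision);
  return [hash, ...geohash.neighbors(hash)].flatMap((h) => buckets.get(h) || []);
}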
You do not want to look at rasters. These are gridded data, think aerial photography or satellite images, weather maps, etc.
But if you want a more specific answer you should give some more details:
How many points? How are they collected?
Do you have large clusters?
Local? Regional? Global?
What other data does this relate to?
Pseudo table structure? Data layout?
etc
More info = better answer
Cheers, hope you get your face back

Is using MapServer to merge several MapLayers on runtime to use with Leaflet a good idea?

MY PROBLEM
We're doing a project right now where we have to display a huge image (containing chemical compounds and elements, so not georeferenced) as a map within a web application (with Leaflet). The image itself is an Adobe Illustrator file, so it's actually a bunch of vector graphics. To make things easy, we just converted it into a large .png (27,000 x 19,000 px) and then used MapTiler to create the map resources needed for Leaflet, easily included within a TileLayer.
The Problem is:
The user needs to be able to dynamically add and remove different layers (i.e. filters) of the map to show more or less information from the picture. So we first created those layers within the Illustrator file, then exported every layer as its own transparent .png file, tiled it with MapTiler and included it as its own Leaflet layer.
Right now, we have 6 filter layers and two more base layers for the background and an overlay. This means that when all filters are activated (which is the default), we have 8 Leaflet layers stacked on top of each other at once. As you can imagine, this causes some performance issues in the browser, since Leaflet has to load and render 8 layers with all their tiles (depending on screen size, up to 25 at once) for every zoom or drag action. It's still bearable at this point, but we are expecting several more filters to come and therefore want to stay scalable in the future.
This means we will somehow have to change our approach of generating the Layers.
MY APPROACH SO FAR
Since we actually have a vector-graphics based map, I thought there had to be better alternatives. But it seems we have a rare set of requirements, since my research mostly ended in dead ends, especially because most of the material only covers REAL geographical maps, while what we have is a raster map. I also thought about somehow putting the map into a GeoJSON or redrawing it directly with SVG, but since we have LOTS of single elements on the map (> 20k), I don't think this would perform much better.
So I kind of need to stay with the bitmaps, and therefore my main goal is simple: I want to reduce the number of layers by merging the tiles of the currently activated filters into one single .png, which then gets delivered to Leaflet within ONE layer. I spent some hours researching this, but kept running into dead ends, since most people deal with georeferenced data, not with custom raster maps.
So right now, I can think of 2 different options:
Create ONE layer for every filter combination. This means we would have to create 2^n layers, so this would only work up to a certain number of filters (which will probably increase) - therefore, I would prefer another solution (this is only a last resort)
Use MapServer and somehow import my layers. Then we could merge the layers at runtime with a query (I read about Union Layer here) and therefore only deliver ONE layer to Leaflet.
MY QUESTION
I have absolutely no experience with MapServer and I'm therefore not even sure whether this is a valid use case or whether it is capable of doing this, and more importantly: whether it would really give us a performance boost, since it probably requires a lot of logic server-side.
Before I spend more hours trying this out:
Can someone who has already worked with MapServer give me some feedback on whether that is even a good idea, or whether I am misunderstanding something about MapServer completely?
Also, if someone has another alternative or idea for me, you're more than welcome to share it; I'm grateful for every input. :)
Thanks in advance!
You might want to look at OpenLayers, where you can display a mix of raster and vector layers. Another option might be MapCache, a tile caching engine that is part of the MapServer project. It has the ability to do vertical assembly of tiles, so in your case, where you have 8 layers, you can ask MapCache to stack all eight tiles into a single tile. You give it a list of layers to stack and it takes care of it for you. You can also do this with MapServer itself. The difference is that MapCache is a lightweight Apache module that just works with tiles and is probably a little faster, whereas MapServer is a CGI process that is efficient at rendering and combining raster layers but is probably not as fast as MapCache for simple assembly of tiles.
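Whichever tool does the merging, the goal on the Leaflet side is that the client only ever loads one layer. A hedged sketch of what that could look like against a MapServer WMS endpoint (the URL, mapfile path and layer names are invented; a MapCache tile URL template would plug into L.tileLayer the same way):

// The active filters are merged server-side into a single image per tile request
const activeFilters = ['background', 'filter1', 'filter4', 'overlay'];
const combined = L.tileLayer.wms('https://example.com/cgi-bin/mapserv?map=/maps/chem.map', {
  layers: activeFilters.join(','),   // MapServer composites these bottom-to-top
  format: 'image/png'
});
combined.addTo(map);

Toggling a filter then means recreating this single layer with a different layers list instead of adding or removing one of eight stacked layers.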