I have a Mapbox mapping application that generates a LOT of map views per user, on the order of 30 per hit, which seems kind of high! (And expensive!)
(The application tries to use MapboxGL by default and falls back to Mapbox.js if it can't. All versions also use Leaflet for whatever that is worth.)
I've been trying to debug this, but it's been hard for me to measure how different changes affect the overall number of map views.
Is there any way to get a realtime count of how many map views my application is generating? I'm reasonably sure there isn't some kind of simple variable to query (at least none that I have found), but maybe there is some way to count or keep track, either in JS or in the developer console? Any thoughts would be appreciated.
Are you using a Mapbox Studio style or your own tiles? In either case, you can count the tiles being requested by your app using the data event:
let tileCount = 0;
map.on('data', (event) => {
  // 'data' fires for style, source, and tile events; count only tile loads
  if (event.tile) tileCount++;
});
That's a very simple example. AFAIK, one map view consists of four tile requests.
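If you want to watch the number climb in realtime, here is a minimal sketch building on the counter above, using that four-tiles-per-view rule of thumb (the interval length is arbitrary):

// Log the running totals every 5 seconds
setInterval(() => {
  console.log(`tiles: ${tileCount}, ~map views: ${Math.ceil(tileCount / 4)}`);
}, 5000);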
If you have a lot of different tile sources loading in parallel, you end up with a lot of requests and hence map views. If possible, you could merge multiple sources into a single tile set (if you are using vector tiles).
If you use your own tiles, e.g. raster tiles, you can increase your tile size from 256px to 512px, which should result in fewer requests. For vector tiles the size is fixed at 256.
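For example, in Mapbox GL JS a raster source can declare the larger tile size; a sketch where the source name and URL template are illustrative:

map.addSource('my-raster', {
  type: 'raster',
  tiles: ['https://example.com/tiles/{z}/{x}/{y}.png'], // illustrative URL
  tileSize: 512 // serve 512px tiles so fewer are needed per screenful
});
map.addLayer({ id: 'my-raster-layer', type: 'raster', source: 'my-raster' });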
I am generating a UK-scale layer composed of several polygons. The idea is to display the layer in full and then let the user zoom in to a more specific area. The main problem is that, because of the number of polygons, when I create the .mbtiles some of the polygons get split at certain zoom levels.
I have tried different tippecanoe options like --no-duplication and --extend-zooms-if-still-dropping ... but I couldn't nail down the correct combination to make it work.
tippecanoe -zg --read-parallel -l $LAYER -o "$OUT_MBTILES" "$OUT_JSON" --coalesce-densest-as-needed --extend-zooms-if-still-dropping --no-duplication
I suspect this is somehow related to --no-duplication. Per the documentation:
--no-duplication: As with --no-clipping, each feature is included intact instead of cut to tile boundaries. In addition, it is included only in a single tile per zoom level rather than potentially in multiple copies. Clients of the tileset must check adjacent tiles (possibly some distance away) to ensure they have all features.
I'm not sure why you're using that option, but disable it if possible.
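For instance, your original command with only that flag dropped (untested, but a reasonable starting point):
tippecanoe -zg --read-parallel -l $LAYER -o "$OUT_MBTILES" "$OUT_JSON" --coalesce-densest-as-needed --extend-zooms-if-still-dropping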
It's hard to be more specific without seeing your data and having a lot more information about what you're trying to achieve, zoom levels, attributes, data size etc.
I've created a base layer and 6 different overlay (Points of Interest) layers for a leaflet map.
The base layer of markers can appear on the map almost anywhere in the world, but I want the POI layers to appear only if they are in the same area (mapBounds) as the base layer, probably the visible screen area.
All the data is pulled from a MySQL database and using Ajax I create the various sets of markers from two different tables, base and poi. This much is all done and working, you can see it at https://net-control.us/map2.php. The green and blue markers are from the base table, other markers are currently selected for view by clicking on the appropriate icon in the lower right. The only one active at the moment is 'Fire Station'. But if you zoom out far enough you will see additional fire stations in the Kansas City area, and in Florida. Those sets are not needed.
After the query runs I create a fitBounds variable of the base layer and another poiBounds for the poi layer. But I'm not sure I need the poiBounds. The number of base markers is generally less than 50 for the base query, but if all the poi markers are pulled world wide that number could be very large.
So I'm hoping someone can help me determine a best practice for this kind of scenario and maybe offer up an example of how it should be done. Should I...
1) Download all POIs and not worry about them appearing outside the base bounds layer? Should I inhibit them from showing in the javascript or in the SQL? How?
2) If I inhibit the unwanted points from SQL do I test one POI at a time to see if its included in the base bounds? How? Are there MySQL functions perhaps to work with this kind of data?
I'm fairly new at leaflet maps and would appreciate examples if appropriate.
2) If I inhibit the unwanted points from SQL do I test one POI at a time to see if its included in the base bounds? How? Are there MySQL functions perhaps to work with this kind of data?
You probably want a column of type POINT, a spatial index on that column (which internally is likely to be implemented as an R-tree), and spatial relation functions in your SQL query to make use of that index.
Start by reading https://dev.mysql.com/doc/refman/8.0/en/spatial-types.html. Take your time, as spatial databases, spatial data types and spatial indices work a bit differently than their non-spatial equivalents.
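To give a rough idea of how the pieces could fit together, here is a minimal sketch; the poi.php endpoint and the table and column names are all illustrative, and it assumes your base markers live in an L.featureGroup:

// Client side (Leaflet): send the base layer's bounds to the server
const b = baseGroup.getBounds(); // LatLngBounds of the base markers
const params = new URLSearchParams({
  south: b.getSouth(), west: b.getWest(),
  north: b.getNorth(), east: b.getEast()
});
fetch('poi.php?' + params)
  .then((r) => r.json())
  .then((pois) => pois.forEach((p) => L.marker([p.lat, p.lng]).addTo(map)));

// Server side, the bounds become a single indexed spatial query instead
// of testing one POI at a time, e.g. in MySQL:
//   SELECT id, name, ST_Y(location) AS lat, ST_X(location) AS lng
//     FROM poi
//    WHERE MBRContains(
//            ST_GeomFromText('POLYGON((w s, e s, e n, w n, w s))'),
//            location);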
For a small project I am allowing users to add areas to the database. Their query is sent to http://nominatim.openstreetmap.org and I store the latitude and longitude. If available, I also store the geoJSON polygon outline data.
Example output: http://nominatim.openstreetmap.org/search?q=wyoming&format=xml&polygon_geojson=1&addressdetails=1
This outlined area is then displayed on a map using leaflet.js. For a lot of polygons this works out just fine, but it seems that there is a limit to the amount of data the library can process. Some rather complex areas (that require a LONGTEXT column to store in MySQL) simply do not get displayed at all, without an error being thrown.
I guess my question has two parts:
1 - Am I right to assume that the large datasets are the root of the problem or should leaflet.js be able to handle those?
2 - What would be the best way of simplifying such datasets? Leaflet has such an algorithm for displaying areas, but that seems to be the failing point already.
And while we are on the topic: Right now I'm converting Nominatim's lnglat polygons to leaflet's latlng by splitting up the data and patching it back together in javascript. Is there an easier/safer way to do that? Should I rather move that task to the server and use some php library/function?
I appreciate your help!
Edit: Forgot to mention: on the occasion that the polygon fails to render, my console gives me this error: TypeError: t is null
are the large datasets the root of the problem or should leaflet.js be able to handle those?
Leaflet.js will handle whatever you throw at it. There is, however, a limit on what your web browser can handle without slowing down.
Remember that every modern web browser has performance analysis tools that you can use to see which parts of the Leaflet code (or of the browser's internals) are taking up most of your time.
What would be the best way of simplifying such datasets?
You probably want to look at the Douglas-Peucker algorithm as a starting point for these simplification algorithms.
Keep in mind that Leaflet uses this algorithm internally to simplify polygons at each zoom level, to within a pixel.
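In fact, you can reuse Leaflet's own implementation to pre-simplify your data; a minimal sketch, assuming latlngs holds one ring of your polygon (L.LineUtil.simplify operates on planar L.Point objects, so project first; the tolerance value is illustrative):

const zoom = map.getZoom();
const points = latlngs.map((ll) => map.project(ll, zoom)); // LatLng -> pixel Point
const simplified = L.LineUtil.simplify(points, 3);         // tolerance in pixels
const simpleLatLngs = simplified.map((p) => map.unproject(p, zoom));
L.polygon(simpleLatLngs).addTo(map);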
For some big complex polygons, slicing them up with something like Leaflet.VectorGrid might improve the performance.
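Once that plugin is loaded, slicing is nearly a one-liner; a sketch ('sliced' is the layer name VectorGrid assigns to GeoJSON input, and the style is illustrative):

L.vectorGrid.slicer(bigGeoJson, {
  vectorTileLayerStyles: { sliced: { color: '#3388ff', weight: 1 } }
}).addTo(map);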
There is no silver bullet for simplifying datasets, however. The best way will depend on the specific data you are using.
on the occasion that the polygon fails to render, my console gives me this error: TypeError: t is null
This is a different matter, and might be a symptom of malformed data.
In order to display prettier error messages, use the leaflet-src.js file instead of the leaflet.js file. Starting with Leaflet 1.0.0-beta2, leaflet-src.js has a sourcemap, which can point to the individual original files, allowing for better debugging.
I created a website (http://www.cartescolaire.paris) that basically loads a GeoJSON and displays it on a map using Leaflet.
This GeoJSON is pretty large (over 2 MB), so the loading time can be very long (it doesn't even load in IE 11). More importantly, the resulting map is not very responsive when zooming / navigating.
There are around 110 zones (clicking on a point in the map highlights the zone it belongs to), each of them made from dozens of polygons.
However the only important information that I want to visualize is the external boundaries of each zone.
Such a compressed geometry would be much more efficient performance-wise.
The complexity arises from the constraint that the zones shouldn't overlap.
The final result should be disjoint clusters.
Any idea how I could do that?
Thanks a lot!
Hello,
It sounds like you need to merge your polygons, so that you decrease the number of vector features and the weight of your GeoJSON file, and improve map responsiveness. Keeping your resulting polygons disjoint should not be difficult.
You should have plenty of resources on SO / GIS Stack Exchange and Google on this, for example:
https://gis.stackexchange.com/questions/118223/merge-geojson-polygons-with-wgs84-coordinate
https://gis.stackexchange.com/questions/118547/convert-geojson-with-multiple-polygons-and-multipolygons-into-single-multipolygo
http://morganherlocker.com/post/Merge-a-Set-of-Polygons-with-turf/
(see also the related posts on SO on the right menu of this page, just above "Hot Network Questions")
Your case might be slightly different as most of your polygons are not adjacent, but are actually separated by empty areas / a margin (streets).
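For instance, a minimal sketch with Turf.js (assuming the v6-style API where turf.union takes two features; zoneFeatures stands for the array of polygon features making up one zone):

const merged = zoneFeatures.reduce(
  (acc, feature) => (acc ? turf.union(acc, feature) : feature),
  null
);
L.geoJSON(merged).addTo(map); // one merged shape per zone instead of dozens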
You might also be interested in UTFGrid for the interaction (clicking on the map to open the school associated with that area), as it would dramatically improve your map responsiveness: instead of vector shapes, you have the equivalent of tiles. See an example: http://danzel.github.io/Leaflet.utfgrid/example/map.html
However, I do not think you can visually show the areas with UTFGrid.
But you could combine this approach with canvas-based tiles, or even pre-generate tiles on your server and have them ready for display, rather than keeping a GeoJSON for client-side computation.
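For the interaction itself, a rough sketch with the Leaflet.utfgrid plugin linked above (the URL template and the data property are illustrative):

const grid = new L.UtfGrid('https://example.com/tiles/{z}/{x}/{y}.grid.json?callback={cb}', {
  resolution: 4
});
grid.on('click', (e) => {
  if (e.data) alert('School: ' + e.data.school); // e.data is null over empty areas
});
map.addLayer(grid);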
Good luck!
MY PROBLEM
We're doing a project right now where we have to display a huge image (containing chemical compounds and elements, so not georeferenced) as a map within a web application (with Leaflet). The image itself is an Adobe Illustrator file, so it's actually a bunch of vector graphics. To make things easy, we just converted it into a large .png (27,000 x 19,000 px) and then used MapTiler to create the map resources needed for Leaflet, easily included within a TileLayer.
The Problem is:
The user needs to be able to dynamically add and remove different layers (i.e. filters) of the map to show more or less information from the picture. So we first created those layers within the Illustrator file, then exported every layer as its own transparent .png file, tiled it with MapTiler, and included it as its own Leaflet layer.
Right now, we have 6 filter layers and two more base layers for the background and an overlay. This means that when all filters are activated (which is the default), we have 8 Leaflet layers stacked on top of each other at once. As you can imagine, this causes some performance issues in the browser, since Leaflet has to load and render 8 layers with all their tiles (depending on screen size, up to 25 at once) for every zoom or drag action. It's still bearable at this point, but we are expecting several more filters to come and therefore want to stay scalable in the future.
This means we will somehow have to change our approach of generating the Layers.
MY APPROACH SO FAR
Since we actually have a vector-graphics based map, I thought there had to be better alternatives. But it seems we have a rare case of requirements, since my research mostly ended in dead ends, especially since most cases only cover REAL geographical maps, whereas what we have is a raster map. I also thought about somehow putting the map into a GeoJSON or redrawing it directly with SVG, but since we have LOTS of single elements on the map (> 20k), I don't think this would perform much better.
So I kind of need to stay with the bitmaps, and therefore my main goal is simple: I want to reduce the number of layers by merging the tiles of the currently activated filters into one single .png, which then gets delivered to Leaflet within ONE layer.
So right now, I can think of 2 different options:
Create ONE layer for every filter combination. This means we would have to create 2^n layers, so it would only work up to a certain number of filters (which will probably increase); therefore, I would prefer another solution (this is only a last resort).
Use MapServer and somehow import my layers. Then we could merge the layers at runtime with a query (I read about Union layers here) and therefore deliver only ONE layer to Leaflet.
MY QUESTION
I have absolutely no experience with MapServer, so I'm not even sure if this is a valid use case or if it's capable of doing this, and more importantly: whether it would really give us a performance boost, since it probably requires a lot of logic server-side.
Before I spend more hours trying this out:
Can someone who has already worked with MapServer give me some feedback on whether that is even a good idea, or whether I am misunderstanding MapServer completely?
Also, if someone has another alternative or idea for me, you're more than welcome to share it; I'm grateful for every input. :)
Thanks in advance!
You might want to look at OpenLayers, where you can display a mix of raster and vector layers.
Another option might be MapCache, a tile caching engine that is part of the MapServer project. It has the ability to do vertical assembly of tiles, so in your case, where you have 8 layers, you can ask MapCache to stack all eight tiles into a single tile. You give it a list of layers to stack and it takes care of it for you.
You can also do this with MapServer itself. The difference is that MapCache is a lightweight Apache module that works only with tiles and is probably a little faster, whereas MapServer is a CGI process that is efficient at rendering and combining raster layers but is probably not as fast as MapCache for simple assembly of tiles.
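Either way, server-side stacking keeps the client down to a single layer no matter how many filters are active. A minimal sketch of the Leaflet side, assuming the layers are exposed through MapServer's WMS interface (the URL, mapfile path, and layer names are illustrative):

// One WMS tile layer; the server composites the requested layers into one image
const active = ['background', 'filter1', 'filter3']; // currently enabled filters
const combined = L.tileLayer.wms('https://example.com/cgi-bin/mapserv?map=/maps/chem.map', {
  layers: active.join(','), // comma-separated list, drawn bottom to top
  format: 'image/png',
  transparent: true
});
combined.addTo(map);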