Leaflet.markercluster cluster of clusters

I'm using multiple Leaflet.markercluster groups to cluster data from different layers, meaning a given cluster contains only markers from a single layer. I'm able to customize the icon for each type (i.e. provide iconCreateFunction) so the user can distinguish them visually. So far, so good.
What I'd like to do is combine clusters of different types, or even individual markers, when their icons are very close to one another. That is, the maxClusterRadius for homogeneous clusters might be 80 pixels, while heterogeneous clusters would use a radius of something like 5 pixels.
Has anyone tackled this, or at least given serious thought to what a solution might look like?

Related

How to avoid polygons being clipped in a tileset?

I am generating a layer at UK scale, composed of several polygons. The idea is to display the layer in full and then let the user zoom in to a more specific area. The main problem is that, because of the number of polygons, some of them get split at certain zoom levels when I create the .mbtiles.
I have tried different tippecanoe options such as --no-duplication and --extend-zooms-if-still-dropping, but I couldn't nail down the correct combination of commands to make it work.
tippecanoe -zg --read-parallel -l $LAYER -o "$OUT_MBTILES" "$OUT_JSON" --coalesce-densest-as-needed --extend-zooms-if-still-dropping --no-duplication
I suspect this is somehow related to --no-duplication. Per the documentation:
--no-duplication: As with --no-clipping, each feature is included intact instead of cut to tile boundaries. In addition, it is included only in a single tile per zoom level rather than potentially in multiple copies. Clients of the tileset must check adjacent tiles (possibly some distance away) to ensure they have all features.
I'm not sure why you're using that option, but disable it if possible.
It's hard to be more specific without seeing your data and having a lot more information about what you're trying to achieve, zoom levels, attributes, data size etc.

DBSCAN clustering - what happens when a border point of one cluster is also a core point of another cluster?

I would like to know your opinion about DBSCAN clustering. I am trying to implement the algorithm as published here. In my opinion, it is possible for a point on the border of one cluster to be a core point of another one.
I think there are a few possible solutions:
we could treat the point as permanently assigned to its first cluster, so its assignment cannot change - but we could lose the second cluster because of that
we could allow changing a border point's cluster, but without recomputing its epsilon neighbourhood.
we could allow adding the point to multiple clusters (the worst option).
What do you think is the best? Or am I getting something completely wrong?
The core-point property is not cluster specific.
Either the point is a core point, or it is not; independent of which cluster it is in.
If it is a core point, then it cannot be a noise or border point anymore.
Whenever two core points are neighbors, they by definition are in the same cluster.
The known special case that can happen is that one point is a border point of more than one cluster. See the end of page 229.
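To make that concrete, here is a minimal Swift sketch (the Point type and the eps/minPts parameters are made up for illustration) of the core-point test. Note that it only looks at the size of the eps-neighbourhood and never consults any cluster label:

    struct Point { let x: Double; let y: Double }

    func distance(_ a: Point, _ b: Point) -> Double {
        ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
    }

    // The eps-neighbourhood of p, including p itself (as in the paper).
    func neighbourhood(of p: Point, in points: [Point], eps: Double) -> [Point] {
        points.filter { distance($0, p) <= eps }
    }

    // Core-point test: a property of the point and the data set only.
    // No cluster assignment is consulted here.
    func isCore(_ p: Point, in points: [Point], eps: Double, minPts: Int) -> Bool {
        neighbourhood(of: p, in: points, eps: eps).count >= minPts
    }

If the contested point passes this test, it is simply a core point; once the expansion of the first cluster reaches it, everything density-reachable from it ends up in that same cluster - exactly the "neighboring core points are in the same cluster" rule above.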

GKGraph, GKGraphNode, GKGridGraphNode - what's the relationship between them?

I've read the documentation but I'm still confused by them. Could anyone give me a clear explanation, e.g. an image comparison? Thanks.
The Wikipedia article on Pathfinding might help, as might the related topics on graphs and graph search algorithms linked from there. Beyond that, here's an attempt at a quick explainer.
Nodes are places that someone can be, and their connections to other nodes define how someone can travel between places. Together, a collection of (connected) nodes forms a graph.
GKGraphNode is the most general form of node — these nodes don't know anything about where they are in space, just about their connections to other nodes. (That's enough for basic pathfinding, though... if you have a graph where A is connected to B and B is connected to C, the path from A to C goes through B regardless of where those nodes are located.)
GKGraph is a collection of nodes, and provides functions that work on the graph as a whole, like the important one for finding paths.
GKGridGraphNode and GKGraphNode2D are specialized versions of GKGraphNode that add knowledge of the node's position in space — either integer grid space (like a chessboard) or open 2D space. Once you've added that kind of information, a GKGraph containing these kinds of nodes can take distance into account when pathfinding.
For example, take a graph where A connects to both B and C, and B and C each connect to D.
If we're just using GKGraphNode, all we're talking about is which nodes are connected to which. So if we ask for the shortest path from A to D, we can get either ACD or ABD, because it's an equal number of connections either way. But if we use GKGridGraphNode or GKGraphNode2D, we're also looking at the lengths of the lines between nodes, in which case the geometrically shorter route (say, ACD) is the shortest path.
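A rough Swift sketch of that example (the coordinates are invented purely so that the A-C-D route comes out geometrically shorter):

    import GameplayKit

    // Topology only: GKGraphNode knows about connections, not positions.
    let a = GKGraphNode(), b = GKGraphNode(), c = GKGraphNode(), d = GKGraphNode()
    a.addConnections(to: [b, c], bidirectional: true)
    d.addConnections(to: [b, c], bidirectional: true)

    let graph = GKGraph(nodes: [a, b, c, d])
    // A-B-D and A-C-D are both two hops, so either may come back here.
    let hopPath = graph.findPath(from: a, to: d)

    // With GKGraphNode2D each node also has a position in 2D space.
    let a2 = GKGraphNode2D(point: vector_float2(0, 0))
    let b2 = GKGraphNode2D(point: vector_float2(0, 10))   // long detour via B
    let c2 = GKGraphNode2D(point: vector_float2(1, 1))    // C sits near the straight line
    let d2 = GKGraphNode2D(point: vector_float2(2, 2))
    a2.addConnections(to: [b2, c2], bidirectional: true)
    d2.addConnections(to: [b2, c2], bidirectional: true)

    let graph2D = GKGraph(nodes: [a2, b2, c2, d2])
    // Connection lengths now matter, so this should come back as A-C-D.
    let shortestPath = graph2D.findPath(from: a2, to: d2)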
Once you start locating your nodes in (some sort of coordinate) space, it helps to be able to operate on the graph as a whole in that space. That's where GKGridGraph and GKObstacleGraph come in.
GKGridGraph works with GKGridGraphNodes and lets you do things like create a graph to fill a set of dimensions (say, a 10x10 grid, with diagonal movement allowed) instead of making you create and connect a bunch of nodes yourself.
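For instance, a minimal sketch of that 10x10 case (just a sketch; the grid nodes are created and connected for you, and findPath is inherited from GKGraph):

    import GameplayKit

    // A 10x10 grid of GKGridGraphNodes with diagonal movement allowed.
    let grid = GKGridGraph<GKGridGraphNode>(fromGridStartingAt: vector_int2(0, 0),
                                            width: 10, height: 10,
                                            diagonalsAllowed: true)

    if let start = grid.node(atGridPosition: vector_int2(0, 0)),
       let goal  = grid.node(atGridPosition: vector_int2(9, 9)) {
        // Grid positions (and therefore distances) are taken into account here.
        let path = grid.findPath(from: start, to: goal)
        print(path.count)   // number of nodes along the route
    }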
GKObstacleGraph adds more to free-2D-space graphs by letting you mark areas as impassable obstacles and automatically managing the nodes and connections to route around obstacles.
Hopefully this helps a bit. For more, besides the reference docs and guide, Apple also has a WWDC video that shows how this stuff works.

Is using MapServer to merge several map layers at runtime for use with Leaflet a good idea?

MY PROBLEM
We're doing a project right now where we have to display a huge image (containing chemical compounds and elements, so not georeferenced) as a map within a web application (with Leaflet). The image itself is an Adobe Illustrator file, so it's actually a bunch of vector graphics. To make things easy, we just converted it into a large .png (27,000 x 19,000 px) and then used MapTiler to create the map resources Leaflet needs, easily included within a TileLayer.
The Problem is:
The user needs to be able to dynamically add and remove different layers (i.e. filters) of the map to show more or less information from the picture. So we first created those layers within the Illustrator file, then exported every layer as its own transparent .png file, ran it through MapTiler and included it as its own Leaflet layer.
Right now, we have 6 filter layers and two more base layers for the background and an overlay. This means that when all filters are activated (which is the default), we have 8 Leaflet layers stacked on top of each other at once. As you can imagine, this causes some performance issues in the browser, since Leaflet has to load and render 8 layers with all their tiles (depending on screen size, up to 25 at once) for every zoom or drag action. It's still bearable at the moment, but we are expecting several more filters to come and therefore want to stay scalable.
This means we will somehow have to change our approach of generating the Layers.
MY APPROACH SO FAR
Since we actually have a vector-graphics based map, I thought there had to be better alternatives. But my research mostly ended in dead ends, especially since most cases only cover real geographical maps, whereas what we have is a raster map. I also thought about somehow putting the map into GeoJSON or redrawing it directly with SVG, but since we have LOTS of single elements on the map (> 20k), I don't think this would perform much better.
So I kind of need to stay with the bitmaps, and therefore my main goal is simple: I want to reduce the number of layers by merging the tiles of the currently activated filters into one single .png, which then gets delivered to Leaflet within ONE layer. I've spent some hours researching, but I keep running into dead ends, since it seems we have a rare case of requirements here (especially since most people deal with georeferenced data, not with custom raster maps).
So right now, I can think of 2 different options:
Create ONE layer for every filter combination. This means we would have to create 2^n layers, so it would only work up to a certain number of filters (which will probably increase) - therefore I would prefer another solution (this is only a last resort).
Use MapServer and somehow import my layers. Then we could merge the layers at runtime with a query (I read about the Union layer here) and therefore only deliver ONE layer to Leaflet.
MY QUESTION
I have absolutely no experience with MapServer, so I'm not even sure whether this is a valid use case or whether it's capable of doing this, and more importantly, whether it would really give us a performance boost, since it probably requires a lot of logic server-side.
Before I spend more hours trying this out:
Can someone who has already worked with MapServer give me some feedback on whether this is even a good idea, or whether I'm misunderstanding MapServer completely?
Also, if someone has another alternative or idea for me, you're more than welcome to share it; I'm grateful for every input. :)
Thanks in advance!
You might want to look at OpenLayers, where you can display a mix of raster and vector layers. Another option might be MapCache, a tile caching engine that is part of the MapServer project. It has the ability to do vertical assembly of tiles, so in your case, where you have 8 layers, you can ask MapCache to stack all eight tiles into a single tile. You give it a list of layers to stack and it takes care of it for you. You can also do this with MapServer itself. The difference is that MapCache is a lightweight Apache module that just works with tiles and is probably a little faster, whereas MapServer is a CGI process that is efficient at rendering and combining raster layers but is probably not as fast as MapCache for simple assembly of tiles.

Multiple annotations (iOS) - easiest way

I'm using annotations in iOS to display London Tube stations, but I'm looking at the numbers and there are 280 or so.
What's the easiest way to do this?
Individually, or is there another option?
Cheers for all the advice
David
The performance is good with 280 annotations; the appearance is not. You have to group them into clusters when the user zooms out.
One way to do it is:
Decide how many cluster annotations you want to show.
Split the screen into x*y tiles so that roughly x*y ≈ numClusters and x/y = 480/320 = 1.5 (the screen's aspect ratio).
Add a cluster annotation per tile (each one is a normal annotation holding an array of 0 or more member annotations).
Run the k-means algorithm:
Iterate all annotations and add each one to the closest cluster.
Calculate a new center for each cluster, which will be the average of the coordinates of all its members.
Empty each cluster.
Repeat until no cluster moves any longer.
Remove empty clusters, if any.
You end up with numClusters clusters positioned according to the annotation density.
You can also leave a number of normal annotations on their own if they are far away from the clusters. It depends on how you want it to look.
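A minimal Swift sketch of that loop (the Cluster type, the cluster(_:seeds:) function and the convergence threshold are all made up for illustration; seeding from screen tiles and the MKAnnotation plumbing are left out):

    import MapKit

    // A hypothetical cluster: a centre plus the annotations currently assigned to it.
    final class Cluster {
        var center: CLLocationCoordinate2D
        var members: [MKAnnotation] = []
        init(center: CLLocationCoordinate2D) { self.center = center }
    }

    // Squared "distance" in coordinate space; good enough for comparing nearness.
    func squaredDistance(_ a: CLLocationCoordinate2D, _ b: CLLocationCoordinate2D) -> Double {
        let dLat = a.latitude - b.latitude
        let dLon = a.longitude - b.longitude
        return dLat * dLat + dLon * dLon
    }

    // k-means over the annotations: assign, re-centre, repeat until nothing moves.
    func cluster(_ annotations: [MKAnnotation], seeds: [CLLocationCoordinate2D]) -> [Cluster] {
        guard !seeds.isEmpty else { return [] }
        let clusters = seeds.map(Cluster.init)
        var moved = true
        while moved {
            // Empty each cluster, then add every annotation to the closest one.
            clusters.forEach { $0.members.removeAll() }
            for annotation in annotations {
                let nearest = clusters.min {
                    squaredDistance(annotation.coordinate, $0.center) <
                    squaredDistance(annotation.coordinate, $1.center)
                }!
                nearest.members.append(annotation)
            }
            // Move each cluster centre to the average position of its members.
            moved = false
            for c in clusters where !c.members.isEmpty {
                let lat = c.members.map { $0.coordinate.latitude }.reduce(0, +) / Double(c.members.count)
                let lon = c.members.map { $0.coordinate.longitude }.reduce(0, +) / Double(c.members.count)
                if abs(lat - c.center.latitude) > 1e-9 || abs(lon - c.center.longitude) > 1e-9 {
                    moved = true
                }
                c.center = CLLocationCoordinate2D(latitude: lat, longitude: lon)
            }
        }
        // Remove empty clusters, if any.
        return clusters.filter { !$0.members.isEmpty }
    }

One caveat: clustering raw latitude/longitude values ignores map projection and the current zoom level, so in practice you would probably convert the coordinates to screen points (or MKMapPoints) before running the loop.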