Is there a way to inject a custom clustering algorithm in mapbox-gl?

I have a specific use case where each cluster may only contain a specific 'type' of geo unit. I have written the code to partition them correctly, but I'm at a loss as to how to substitute the default clustering algorithm. Is there some sort of plugin architecture for specifying a custom clustering algorithm?
Similar to how one can use Supercluster with Leaflet (as mentioned in the supercluster readme, https://github.com/mapbox/supercluster).
I am also passing the type data into the algorithm and had to modify the type validation to get it to work in my custom build. What's the correct way of notifying mapbox-gl of additional 'valid' parameters to the clustering algorithm?
If the above is not possible, is the only alternative a custom build of mapbox-gl that replaces supercluster? One final question: if that is the case, are there instructions on how to do a mapbox-gl build? When I attempt npm run build-dev or npm run build-prod, I get complaints about a missing rollup config file.
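
As far as I know there is no official plugin hook for the clustering step. One workaround is to skip mapbox-gl's built-in clustering entirely and push pre-clustered GeoJSON into a plain source yourself, reclustering on map moves. A minimal sketch in JavaScript, using Supercluster as a stand-in for your own partition-aware index (any function mapping a bbox and zoom to clustered features would do; names like features are placeholders):

import Supercluster from 'supercluster';

// Build the index outside mapbox-gl; your type-partitioning logic
// could wrap one index per 'type' and merge the results.
const index = new Supercluster({ radius: 60, maxZoom: 16 });
index.load(features); // `features`: your array of GeoJSON point features

// assumes `map` is an existing mapboxgl.Map instance
map.on('load', () => {
  // Note: no `cluster: true` -- clustering happens outside mapbox-gl,
  // so its parameter validation never sees your custom options.
  map.addSource('units', {
    type: 'geojson',
    data: { type: 'FeatureCollection', features: [] }
  });

  const update = () => {
    const b = map.getBounds();
    const bbox = [b.getWest(), b.getSouth(), b.getEast(), b.getNorth()];
    const clusters = index.getClusters(bbox, Math.floor(map.getZoom()));
    map.getSource('units').setData({ type: 'FeatureCollection', features: clusters });
  };

  map.on('moveend', update);
  update();
});

Because the source is just ordinary GeoJSON, this also sidesteps the validation problem: mapbox-gl never sees any non-standard clustering parameters.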

Related

Mapbox-gl and Supercluster: how to access the reduce method of the supercluster instance in mapbox-gl

As far as I can see, mapbox-gl uses supercluster as a dependency.
Mapbox provides clustering out of the box, and this is thanks to supercluster.
Supercluster has a very important option, reduce: a reduce function that merges the properties of two clusters into one.
But I can't find how to access this option from the mapbox instance.
Is there any way to access reduce? Or is there any other way to manage how properties are merged?
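
For what it's worth, here is a sketch of both routes in JavaScript: using supercluster directly with its map/reduce options, and, if you stay with mapbox-gl's built-in clustering, the clusterProperties source option that newer mapbox-gl releases expose (it drives supercluster's map/reduce internally). The numeric value property is an assumed per-point attribute:

// (1) supercluster used directly: `map` seeds per-point values,
// `reduce` merges them as clusters combine.
const index = new Supercluster({
  map: props => ({ total: props.value }),
  reduce: (accumulated, props) => { accumulated.total += props.total; }
});

// (2) built-in clustering: `clusterProperties` pairs an accumulate
// expression with a per-point map expression.
// assumes `map` is an existing mapboxgl.Map instance
map.addSource('points', {
  type: 'geojson',
  data: geojson, // your FeatureCollection
  cluster: true,
  clusterProperties: { total: ['+', ['get', 'value']] }
});

With (2), each cluster feature then carries a total property you can use in layer expressions.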

Is there a good way to set dynamic labels for k8s resources?

I'm attempting to add some recommended labels to several k8s resources, and I can't see a good way to add labels for things that would change frequently, in this case "app.kubernetes.io/instance" and "app.kubernetes.io/version". Instance seems like a label that should change every time a resource is deployed, and version seems like it should change when a new version is released, by git release or similar. I know that I could write a script to generate these values and interpolate them, but that's a lot of overhead for what seems like a common task. I'm stuck using Kustomize, so I can't just use Helm and have whatever variables I want. Is there a more straightforward way to apply labels like these?
Kustomize's commonLabels transformer is a common way to handle this, sometimes via a component. It really depends on your overall layout.
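
A minimal sketch of that kustomization.yaml (the label values are placeholders; a CI step would typically rewrite them per deploy, e.g. with kustomize edit set label):

# kustomization.yaml -- commonLabels stamps these labels onto every
# resource it renders. Values below are illustrative only.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
commonLabels:
  app.kubernetes.io/instance: my-app-prod   # e.g. set per deployment from CI
  app.kubernetes.io/version: "1.2.3"        # e.g. set from the git tag

One caveat: commonLabels is also applied to selectors, which are immutable on some resources, so a frequently-changing value like version can break re-applies; newer Kustomize versions have a labels transformer that can be told to skip selectors.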

Deploy Knowledge Studio dictionary pre-annotator to Natural Language Understanding

I'm getting started with Knowledge Studio and Natural Language Understanding.
I'm able to deploy a machine-learning model to Natural Language Understanding and use the API to query it.
I would like to know if there's a way to deploy only the pre-annotator.
I read from Knowledge Studio's documentation that
You can deploy or export a machine-learning annotator. A dictionary pre-annotator can only be used to pre-annotate documents within Watson Knowledge Studio.
Does exist a workaround to create a model that simply does the job of the pre-annotator, i.e. use dictionaries to find entities instead of the machine-learning model?
You may need to explain what you need in a bit more detail.
WKS allows you to pre-annotate documents with dictionaries you upload. Once you have created an ML model, you can alternatively use that to annotate your training documents and then manually correct them. As you continue, the amount of manual work will decrease with each model iteration.
The assumption is that you are creating a model with a reasonable number of examples. In your model results, you will want the mentions/relations to be outside, or close to outside, the gray area of the report.
The other interpretation of your request is that you want to create a dictionary-based model only. This is possible using the "Rule-Based Model" functionality. You would have to create the parsing rules, but you just map what you want to find to the dictionary/rule.
Using this in production, though, is still limited. You should get a warning when you deploy these kinds of models.
It's slightly better than just a keyword search, as you can map items to parts of speech.
One last point: the purpose of WKS is to create a machine-learning model which will do the work of discovering new terms you haven't seen before. The rule-based engine can only find what you explicitly tell it to find.
If all you want is dictionary entries, then you can create a very simple string-comparison solution, but you lose the linguistic features.
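
To make that last point concrete, here is a rough sketch of such a string-comparison baseline in JavaScript (dictionary contents and entity types are made up). It finds exact surface forms only, with none of the lemmatization or part-of-speech handling a real annotator gives you:

// Naive dictionary annotator: exact, case-sensitive substring matches.
const dictionary = {
  ORGANIZATION: ['IBM'],
  PRODUCT: ['Watson Knowledge Studio']
};

function annotate(text) {
  const mentions = [];
  for (const [type, terms] of Object.entries(dictionary)) {
    for (const term of terms) {
      let i = text.indexOf(term);
      while (i !== -1) {
        mentions.push({ type, term, begin: i, end: i + term.length });
        i = text.indexOf(term, i + term.length);
      }
    }
  }
  return mentions;
}

console.log(annotate('IBM built Watson Knowledge Studio.'));
// [{ type: 'ORGANIZATION', ... }, { type: 'PRODUCT', ... }]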

How to plot a JSON file generated by osrm for route optimization into an OSM map

I am a complete newbie to GPS computation, GIS, and all this geoinformatics stuff. First I describe the lessons learned on my way to getting GPS data. You can skip this and go to the last two paragraphs, where I explain my problem with osrm and how to map a route onto an OSM map via QGIS or similar tools.
I tried to do some route optimization for a bunch of addresses to support my son's newspaper delivery job. I was able to generate a list of GPS coordinates by using the Nominatim engine that is available via geopy.geocoders. It's kind of a Travelling Salesman Problem (TSP).
Using geopy's distance calculator and the OR-Tools from Google, I generated a shortest-route recommendation. That worked well, but it was only a TSP solution for straight-line distances :-(.
Then I was looking for route optimization toolkits, but I struggled to find a free one. I thought osrm could be the right tool. I followed the descriptions given at GitHub, see here. I was able to generate a JSON file - at least I suppose it's kind of a JSON file. But I was unable to project this back onto a map in QGIS or any online tool from OSM. Can anyone help me?
The file has JSON-like formatting like this:
{"code":"Ok","waypoints":[{"hint":"Jh4BgEUzI4BhAAAACwAAAKIAAABZAAAAkLAjQgpyikBay4dCWsuHQmEAAAALAAAAogAAAFkAAAArAAAAxwB4AARI3AI3AXgAWEbcAgIADwXVhXd1","location":...
Due to privacy issues I cannot post it here with the actual locations, sorry. But does anyone have a recipe / step-by-step guide for what I need to do to plot it? I don't even know how to "open" a map within QGIS. Apparently you need to load it as a kind of database, but this is totally new to me. I would prefer an easier method to plot it.
Thanks in advance for any help.
Please follow the API documentation here. From that documentation:
hint: Unique internal identifier of the segment (ephemeral, not constant over data updates). This can be used on subsequent requests to significantly speed up the query and to connect multiple services. E.g. you can use the hint value obtained by the nearest query as hint values for route inputs.
You can get the geometry in many ways. GeoJSON is widely used by developers, and OSRM returns very clean GeoJSON which can easily be used with Leaflet, Mapbox, or other map APIs. You need to set the steps parameter to true to get the full step-by-step directions; each step's segment is then available in the legs. So loop over every leg and collect the GeoJSON from the geometry of each step. You can also get the geometry without setting steps to true; in that case you get the full geometry as a single GeoJSON inside the routes property.
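
A sketch of that flow in JavaScript, assuming an already initialized Leaflet map (the demo server URL is illustrative; point it at your own osrm-routed instance, and note OSRM expects lon,lat order):

const coords = '13.388860,52.517037;13.397634,52.529407';
const url = 'https://router.project-osrm.org/route/v1/driving/' + coords +
            '?overview=full&geometries=geojson&steps=true';

fetch(url)
  .then(res => res.json())
  .then(data => {
    // Full route geometry as one GeoJSON LineString:
    L.geoJSON(data.routes[0].geometry).addTo(map);
    // Or, with steps=true, one geometry per step inside each leg:
    data.routes[0].legs.forEach(leg =>
      leg.steps.forEach(step => L.geoJSON(step.geometry).addTo(map)));
  });

For the QGIS part of the question: saving data.routes[0].geometry to a .geojson file and dragging it into QGIS (or Layer > Add Layer > Add Vector Layer) should draw the route over an OSM basemap, no database setup required.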

Transform HBase Scan to RowFilter

I'm using scio from Spotify for my Dataflow jobs.
The latest scio version uses the new Bigtable Java API (com.google.bigtable.v2).
The scio Bigtable entry point now requires a RowFilter for filtering instead of an HBase Scan. Is there a simple way to transform a Scan into a RowFilter? I looked for adapters in the source code, but I'm not sure how to use them.
I can't find documentation on how to easily migrate from the HBase API to the "new" API.
A simple scan I used in my code that I need to transform:
val scan = new Scan()
scan.setRowPrefixFilter("helloworld".getBytes)
scan.addColumn("family".getBytes, "qualifier".getBytes)
scan.setMaxVersions()
In theory, you can add the bigtable-hbase dependency to the project and call com.google.cloud.bigtable.hbase.adapters.Adapters.SCAN_ADAPTER.adapt(scan) to convert the Scan to a RowFilter, or more specifically a ReadRowsRequest, which contains a RowFilter. (Their protobuf definitions contain the fields along with extensive comments.)
That said, the bigtable-hbase dependency adds quite a few transitive dependencies. I would use the bigtable-hbase SCAN_ADAPTER in a standalone project, and then print the RowFilter to see how it's constructed.
In the specific case that you mention, the RowFilter is quite simple, but there may be additional complications. You have three parts to your scan, so I'll give a breakdown of how to achieve them:
scan.setRowPrefixFilter("helloworld".getBytes). This translates to a start key and end key on BigtableIO. "helloworld" is the start key, and you can calculate the end key with RowKeyUtil.calculateTheClosestNextRowKeyForPrefix. The default BigtableIO does not expose setters for the start and end key, so the scio version would have to change to make those setters public.
scan.addColumn("family".getBytes, "qualifier".getBytes) translates to two RowFilters combined with a Chain (mostly analogous to an AND). The first RowFilter has familyNameRegexFilter set, and the second has columnQualifierRegexFilter set.
scan.setMaxVersions() converts to a RowFilter with cellsPerColumnLimitFilter set, which would need to be added to the chain from #2. Warning: if you use a timestampRangeFilter or a value filter to limit the range of the columns, make sure to put the cellsPerColumnLimitFilter at the end of the chain. A sketch of the combined filter follows.
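
Putting #2 and #3 together, a rough sketch of the resulting filter built directly with the com.google.bigtable.v2 protobuf builders (note that setMaxVersions() with no argument means "all versions" in HBase, so the cap of 1 below is only illustrative):

import com.google.bigtable.v2.RowFilter
import com.google.protobuf.ByteString

// A Chain applies its filters in order; a row cell must pass all of
// them (roughly an AND).
val filter = RowFilter.newBuilder()
  .setChain(RowFilter.Chain.newBuilder()
    .addFilters(RowFilter.newBuilder()
      .setFamilyNameRegexFilter("family"))
    .addFilters(RowFilter.newBuilder()
      .setColumnQualifierRegexFilter(ByteString.copyFromUtf8("qualifier")))
    // Version cap; keep it last if you also add timestamp or value filters.
    .addFilters(RowFilter.newBuilder()
      .setCellsPerColumnLimitFilter(1)))
  .build()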