Separate files to load or one long file of code - Leaflet

http://rich.littlebigfoot.org.uk/test7.html
I am creating a map and will be loading 20 or so walks onto it. Each walk will have upwards of 50 points, which will make for a very long file. Is it better to create a separate file for each walk, which would make editing easier, or just to load one very long file, please?
If I create separate walk files, do I simply load them in the normal way?
Thanks
Rich

20 paths with ~50 points each (i.e. 20 * 50 = 1,000 coordinate pairs) is actually not that long or big. See for example this GeoJSON file with the shapes of all countries in the world: it has 10k+ points.
A good practice is indeed to separate your data and your application into different files, so that you can update them independently.
Whether to split the data further, into one file per path, is up to you: it depends on how often you will update the paths, whether you have an automated process to generate them, and whether your visitors are limited in bandwidth. Weigh the benefit of the browser caching files that do not change against the number of network requests needed to download separate files.
By the way, note that you can build a Polyline by passing an array of arrays of coordinates; you do not have to build actual L.latLng objects first:
var polyline = L.polyline(
    [
        [50.2184, -5.4793],
        [50.2166, -5.4850],
        [50.2168, -5.4884] // etc.
    ],
    polylineOptions // the usual options object, e.g. { color: 'red', weight: 3 }
);

Related

How to use VTK to efficiently write time-varying field data on a fixed mesh?

I am working on physics simulation research. In one of my projects I have a large fixed grid that does not vary with time; the fields on the grid, on the other hand, vary with time during the simulation. I need to use VTK to record the field data at each step for visualization (ParaView).
The method I am using is to write a separate *.vtu file to disk at each time step. This basically serves the purpose, but it writes a lot of duplicate data (re-recording the geometry of the mesh at every step), which not only consumes more disk space but also wastes time on encoding and parsing.
I would like a way to write the mesh information only once, with only the new field data written at each subsequent step, while still getting the same visualization. Please let me know whether VTK and ParaView provide such an interface and how to use it.
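For reference, here is a minimal sketch of the per-step writing pattern described above, using VTK's Python bindings. The tiny one-tetrahedron grid and the "pressure" field are stand-ins for the real mesh and data:

import vtk

# Build a trivial one-tetrahedron grid standing in for the real fixed mesh.
points = vtk.vtkPoints()
for xyz in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    points.InsertNextPoint(*xyz)
grid = vtk.vtkUnstructuredGrid()
grid.SetPoints(points)
grid.InsertNextCell(vtk.VTK_TETRA, 4, [0, 1, 2, 3])

writer = vtk.vtkXMLUnstructuredGridWriter()
for step in range(10):
    field = vtk.vtkFloatArray()
    field.SetName("pressure")  # illustrative time-varying field
    for i in range(4):
        field.InsertNextValue(float(step + i))
    grid.GetPointData().AddArray(field)  # replaces the previous step's array
    writer.SetFileName("step_%04d.vtu" % step)
    writer.SetInputData(grid)
    writer.Write()  # geometry is re-encoded and re-written every single step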
Using a .pvtu file and referring to the same .vtu as a Piece for each step should do the trick.
See this similar post on the ParaView Discourse, and the pvtu documentation.
EDIT
This seems to be a side effect of the format; it is not supported by the writer.
The correct solution is to use another file format ...
Let me provide my own research findings for reference.
As Nico said, with a combination of pvtu/vtu files we could in theory store the geometry in a single separate vtu file that is referenced by a pvtu file. Setting the NumberOfPieces attribute of the pvtu writer to 1 makes it produce just one vtu file.
However, the VTK library does not expose a dedicated interface to control what the vtu writer emits. No matter how it is configured, as long as the writer's input contains geometry, the writer will write that geometry to disk, and this cannot be skipped through the exposed interface.
However, it is indeed possible to make multiple pvtu files point to the same vtu file by manually editing the Piece node in each pvtu file, and ParaView can recognize and visualize such a file group properly.
I did not proceed to try adding arrays to the unstructured grid and using pvtu output.
So, I think the conclusion is:
If you don't want to dive into VTK's library code and XML implementation, this approach doesn't make sense.
If you are willing to write the full series of files, delete all but one of the vtu files, and then point every pvtu's Piece node at the single surviving vtu file by editing the pvtu files (see the sketch below), you can save a lot of disk space, but you will not shorten the write, read, and parse times.
If you implement an XML writer yourself, you can in theory achieve all the requirements, but it takes a lot of coding work.
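For illustration, a minimal Python sketch of that manual-editing workaround: generate one pvtu per time step whose single Piece points at a shared vtu. All file names here are hypothetical, and the header must match what your writer actually produces (data types, ghost level, and the declared point/cell data arrays):

# Emit one .pvtu per step; every one of them references the same piece on disk.
PVTU_TEMPLATE = """<?xml version="1.0"?>
<VTKFile type="PUnstructuredGrid" version="0.1" byte_order="LittleEndian">
  <PUnstructuredGrid GhostLevel="0">
    <PPoints>
      <PDataArray type="Float32" NumberOfComponents="3" Name="Points"/>
    </PPoints>
    <Piece Source="shared_mesh.vtu"/>
  </PUnstructuredGrid>
</VTKFile>
"""

for step in range(10):
    with open("step_%04d.pvtu" % step, "w") as f:
        f.write(PVTU_TEMPLATE)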

Identify crossroad nodes in openstreetmap data (.pbf)

Does anybody know if there is a way I can separate out only the crossroad nodes contained in a .pbf file? Is this clue (whether a node is a crossroad or not) included in the file format?
Another option to solve your issue would be to use the new Atlas project.
As part of loading .osm.pbf files into in-memory Atlas files, it takes care of doing way sectioning on roads:
Load your pbf file into an Atlas. You will then have an Atlas object that you could save to a file and re-use.
Use the Atlas APIs to access all the intersections.
In the end, each Atlas Node which is connected to more than 4 Edges on a two-way road, or 2 Edges on a one-way road, would be a candidate, if I understand your question correctly.
I'm not aware of a ready-made solution for this task, but it should still be relatively easy to do.
For parsing the .pbf file, I recommend using an existing library like Osmosis or Osmium. That way, you only need to implement the actual semantics of your use case.
The nodes themselves don't have any special attributes that mark them as crossroads. So instead, you will have to look at the ways containing the nodes.
Some considerations when implementing this:
You need to check the way's tags to find out whether it's a road. The most relevant key for that is highway. The details depend on your specific use case – for example, you need to decide whether footways, forestry tracks, driveways, ... should count.
What matters is the number of connecting way segments at a node, not the number of ways. For example, a node that is part of two ways may be a crossroads (if at least one of the ways continues beyond that node), or may not (if both ways start/end at that node).
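To make the above concrete, here is a hedged sketch using the pyosmium library: it counts, for every node, the number of connecting road segments, and keeps nodes with three or more as crossroad candidates. The set of highway values that count as roads, the input file name, and the threshold are assumptions to adapt to your use case:

import osmium
from collections import Counter

ROAD_VALUES = {"motorway", "trunk", "primary", "secondary",
               "tertiary", "unclassified", "residential"}  # assumption

class SegmentCounter(osmium.SimpleHandler):
    def __init__(self):
        super().__init__()
        self.segments = Counter()  # node id -> number of connecting segments

    def way(self, w):
        if w.tags.get("highway") not in ROAD_VALUES:
            return
        refs = [n.ref for n in w.nodes]
        for i, ref in enumerate(refs):
            # endpoints touch one segment of this way, interior nodes two
            self.segments[ref] += 1 if i in (0, len(refs) - 1) else 2

handler = SegmentCounter()
handler.apply_file("extract.osm.pbf")  # hypothetical input file
crossroads = [ref for ref, n in handler.segments.items() if n >= 3]
print(len(crossroads), "crossroad candidate nodes")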

Where to store real-time strategy data?

I'm trying to make a basic RTS, but I have no idea where I can store data, for example units, buildings, etc. I'd like to avoid making hundreds of .txt files (or one very big .txt file). Well, I could just write a header with a class for every single object, but wouldn't that be too much? I mean, if I make about 20 units (in total, of course) with similar stats (range, attack value, health, etc.) that differ only in their special abilities, it seems quite strange to hard-code everything in 20 constructors, doesn't it?
Another problem is storing the map. I think I'll try the .txt solution here, but I'm probably going to write some kind of map editor in WinAPI or something like that, since setting up the map directly in a .txt file would be a torment. So I know how to represent tiles (I want the map to be tiled; it will be much easier to implement, I suppose), but what if a unit takes up more than one tile? How can I deal with that?
Txt and XML are not great solutions, and writing to and reading from disk isn't the cheapest operation you can do in real time. The way to do this in Unity is through serialization: basically, you write a class that allows you to store data without instantiating a GameObject for it, and you can save or load it at runtime whenever you like. There is also a great tutorial about data persistence on the Unity Tutorials page. (Link Here)
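As a language-agnostic illustration of that idea (sketched here in Python rather than Unity C#; all names are made up): unit stats live in plain serializable data instead of twenty hand-written constructors, and can be saved and loaded at runtime:

import json
from dataclasses import dataclass, asdict

@dataclass
class UnitStats:
    name: str
    range: int
    attack: int
    health: int
    ability: str  # the one thing that differs per unit

def save_units(units, path):
    with open(path, "w") as f:
        json.dump([asdict(u) for u in units], f)

def load_units(path):
    with open(path) as f:
        return [UnitStats(**d) for d in json.load(f)]

units = [UnitStats("archer", 5, 3, 40, "volley"),
         UnitStats("knight", 1, 7, 90, "charge")]
save_units(units, "units.json")  # one data file instead of 20 constructors
print(load_units("units.json")[0])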
I highly recommend the Easy Save plugin. I'd set it up so it only saves to disk every few seconds, not a constant stream. Also, with Easy Save you can save just bits and pieces to a larger save file rather than saving everything with each pass. If the game crashes, you might lose a couple seconds of progress, but that should be an acceptable loss in the case of a crash or quit.

Loading big part of the graph in a single traversal/query

I would like to load a bigger part of a graph with one query/traversal (in order to save network requests).
So what I would like to do:
One traversal.
Retrieve a big part of the graph.
Start from a given vertex (normally by id).
Get results in tree format (for processing afterwards).
Include edges (for processing afterwards).
Process the data afterwards to put it into a data structure.
For example, in the graph below I would like to get everything starting from "hercules", but I do not want the "lives" edge or the data beyond it.
So far I got this:
GraphTraversal traversal = titanGraph.traversal()
        .V().has("name", "hercules").as("v")
        .outE("battled").as("e")
        .inV()
        .tree(); // collects the traversed paths into a Tree
traversal.next();
[Image: the Titan "Graph of the Gods" sample graph; source: thinkaurelius.com]
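The snippet above only goes one hop. For a deeper subgraph, a repeat()/emit()/tree() traversal is the usual shape; here is a hedged sketch using gremlinpython against a Gremlin Server endpoint (the URL and the depth limit are assumptions, and the same steps apply in the original Java/Titan code):

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

g = traversal().withRemote(
    DriverRemoteConnection("ws://localhost:8182/gremlin", "g"))  # assumed endpoint

tree = (g.V().has("name", "hercules")
         .repeat(__.outE("battled").inV())  # following only "battled" skips "lives"
         .emit()                            # keep every intermediate vertex too
         .times(3)                          # assumed depth limit
         .tree()                            # paths (vertices and edges) as a tree
         .next())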

How should I store my large MATLAB data files during analysis?

I am having issues with 'data overload' while processing point cloud data in MATLAB. This is what I am currently doing:
I begin with my raw data files, each on the order of ~30 MB.
I then do initial processing on them to extract n individual objects and remove outlying points; these are combined into a 1 x n structure, testset, saved into testset.mat (~100 MB).
So far so good. Now things become complicated:
For each point in each object in testset, I compute one of a number of features, which ends up being a matrix of some size for each point. The size of the matrix, and some other properties of the computation, are parameters of the calculation. I save these computed features in a 1 x n cell array, each cell of which contains an array of the per-point matrices.
I then save this cell array in a .mat file, whose name specifies the parameters, the name of the test data used, and the types of features extracted. For example:
testset_feature_type_A_5x5_0.2x0.2_alpha_3_beta_4.mat
Now for each of these files, I then do some further processing (using a classification algorithm). Again there are more parameters to set.
So now I am in a tricky situation, where each final piece of data has come through some path, but the path taken (and the parameters set along it) is not intrinsically held with the data itself.
So my question is:
Is there a better way to do this? Can anyone who has experience in working with large datasets in MATLAB suggest a way to store the data and the parameter settings more efficiently, and more integrally?
Ideally, I would be able to look up a certain piece of data without having to run regexes over the file names, but there is also an incentive to keep individually processed files separate, to limit memory use when loading them (and to reduce the risk of corruption).
The time taken for each calculation (around 2 hours) prohibits computing the data 'on the fly'.
For a similar problem, I have created a class structure that does the following:
Each object is linked to a raw data file
For each processing step, there is a property
The set method of each property saves the data to file (in a directory with the same name as the raw data file), stores the file name, and updates a "status" property to indicate that this step is done.
The get method of the properties loads the data if the file name has been stored and the status indicates "done".
Finally, the objects can be saved/loaded, so that I can do some processing now, save the object, later load it and I immediately know how far along the particular data set is in the processing pipeline.
Thus, the only data in memory is the data that is currently being worked on, and you can easily know which data set is at which processing stage. Furthermore, if you set up your methods to accept arrays of objects, you can do very convenient batch processing.
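The answer above describes a MATLAB class; for illustration, here is a hedged Python analogue of the same pattern (step names, file layout, and the pickle format are all assumptions):

import os
import pickle

class PipelineData:
    STEPS = ("features", "classification")  # hypothetical step names

    def __init__(self, raw_path):
        self.raw_path = raw_path
        self.dir = os.path.splitext(raw_path)[0]  # folder named after the raw file
        os.makedirs(self.dir, exist_ok=True)
        self.status = {s: os.path.exists(self._file(s)) for s in self.STEPS}

    def _file(self, step):
        return os.path.join(self.dir, step + ".pkl")

    def set_step(self, step, data):
        with open(self._file(step), "wb") as f:
            pickle.dump(data, f)  # write to disk; only the status stays in memory
        self.status[step] = True

    def get_step(self, step):
        if not self.status.get(step):
            raise KeyError("step %r has not been computed yet" % step)
        with open(self._file(step), "rb") as f:
            return pickle.load(f)

d = PipelineData("raw_scan_01.mat")  # hypothetical raw file
d.set_step("features", [[1.0, 2.0], [3.0, 4.0]])
print(d.get_step("features"))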
I'm not completely sure if this is what you need, but the save command allows you to store multiple variables inside a single .mat file. If your parameter settings are stored in, for example, an array or struct, then you can save them together with the data set in a single .mat file, e.g. save('testset_features.mat', 'features', 'params'). Upon loading the file, both the dataset and the parameters are restored.
Or do you want to be able to inspect the parameters without loading the data? Then I would personally opt for the cheap solution of having a second set of files containing just the parameters (with similar file names).