Building a Vector Tile API from a netCDF file - mapbox-gl-js

Objective
I am trying to create a Vector Tile API (in JavaScript) that can be used in a React application with the mapbox-gl library.
Expected endpoint: url/{tileset_id}/{zoom}/{x}/{y}.{format}.
I only have a netCDF file for the moment.
My approach
From what I currently understand, I need to convert my netCDF file to vector tiles (perhaps using Tippecanoe), then serve those tiles through an API (perhaps using TileServer GL). Is this the best way to proceed, or is there a simpler solution?
I'm not necessarily expecting specific answers, but rather some possible leads / solutions to achieve this goal.
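Tippecanoe consumes GeoJSON, so the main custom step is turning the netCDF variables into GeoJSON features. A minimal sketch, assuming you have already read lat/lon/value arrays out of the file (e.g. with the netCDF4 or xarray packages; the coordinates and values below are placeholder data):

```python
import json

def to_geojson(lats, lons, values, prop_name="value"):
    """Build a GeoJSON FeatureCollection of points from parallel arrays."""
    features = []
    for lat, lon, val in zip(lats, lons, values):
        features.append({
            "type": "Feature",
            # GeoJSON coordinate order is [longitude, latitude]
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {prop_name: val},
        })
    return {"type": "FeatureCollection", "features": features}

# The arrays would normally come from netCDF4.Dataset(...).variables[...]
fc = to_geojson([48.85, 45.76], [2.35, 4.83], [13.2, 9.7])
with open("points.geojson", "w") as f:
    json.dump(fc, f)
```

The resulting file can then be tiled with Tippecanoe (e.g. `tippecanoe -o tiles.mbtiles points.geojson`) and served with TileServer GL, which exposes the `/{tileset_id}/{z}/{x}/{y}.{format}` style of endpoint you describe.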

Related

H3 DGGS: General questions

Good afternoon,
I'm a newbie to H3. Before reading the documentation in depth and going further with H3 tests, I'm taking the liberty of asking you some general questions about it. Apologies in advance if my questions seem naive or clumsy.
Which bindings are recommended for using H3? Is one more suitable than another for each functionality (data integration, display, raster support, sampling/quantification): Python, geopandas with a Jupyter notebook, PostGIS, R, BigQuery, JS, etc.?
We are wondering whether H3 could support DGGS maritime trafficability shortest-path analysis under some constraints. I have pasted a screenshot below.
Does H3 allow the integration/fusion/combination of data? We would like to run some tests with multi-source/multi-date data fusion for the creation of a DTM (topographic or bathymetric).
Is it possible to assign a weight to the VHR data (an importance flag, so as not to decimate the Very High Resolution data)? More generally, is it possible to manage and define metadata?
Which types of data is the tool able to integrate (raster, polygon, line, point, point cloud)?
Does the tool offer different methods for sampling and quantification? Can the user decide at which level of the cell hierarchy the data is assigned?
Finally, is H3 compliant with the OGC DGGS abstract standard? If not, do you know the existing gaps?
In advance, thank you very much for your useful replies.
Kind regards.
Best-effort answers to your questions:
A. Bindings: The bindings we're aware of are listed here. The bindings for Java, JavaScript, and Python are probably the best-maintained (though Python has been undergoing a major refactor and might be best used when this is finished).
B. Path Analysis: I haven't worked with this, but this tutorial suggests that all you need to implement this in a hex grid are neighbors and a distance function. Neighbors in H3 are available via kRing(origin, 1) and distance can be calculated via h3Distance(origin, target) (with some limitations at present - the two cells cannot be too far apart and the path cannot cross a pentagon).
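To make that concrete, here is a minimal, dependency-free sketch of shortest-path search on a hex grid, using axial coordinates in place of H3 indexes; with H3 you would substitute kRing(cell, 1) for the neighbors function and model your trafficability constraints as the blocked set:

```python
from collections import deque

# The six axial-coordinate offsets of a hexagon's neighbors.
HEX_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbors(cell):
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in HEX_DIRECTIONS]

def shortest_path(origin, target, blocked=frozenset()):
    """Breadth-first search over an unbounded hex grid.

    `blocked` models impassable cells; assumes the target is reachable.
    """
    frontier = deque([origin])
    came_from = {origin: None}
    while frontier:
        current = frontier.popleft()
        if current == target:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            if nxt not in came_from and nxt not in blocked:
                came_from[nxt] = current
                frontier.append(nxt)
    return None
```

Blocking the two cells that sit directly between origin and target forces the search to route around them, which is exactly the kind of constraint a trafficability analysis would impose.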
C. Merging Data Sources: H3 is an excellent choice as a common unit for analysis that merges multiple data sources - you can convert multiple sources into H3 and then e.g. perform cell-based raster arithmetic to get a value for each hexagon. The H3 library itself only offers conversion functions, not data merging functions.
D. I don't fully understand this question, but it would be outside the purview of the H3 library.
E. Data Type Conversion: The library provides strong support for converting polygon data (via polyfill) and point data (via geoToH3). Raster data would probably need to be converted into a grid of points for conversion to cells. H3 models the surface of a sphere and doesn't consider altitude, so it can't be used to convert a 3D point cloud without external logic for projecting the points onto the surface. Note that the H3 library itself has no logic to deal with file formats, etc.
F. Sampling/Quantification: The choice of resolution is user-specified, but otherwise the H3 library does not explicitly deal with sampling or quantification. Points are assigned to the cells in which they are found; when using polyfill, cells are assigned to polygons in which their centers are found. Further sampling choices are left to the user.
G. Adherence to DGGS Standard: See this paper for an assessment of H3 and an alternative DGGS in relation to the standard.

How to create a "Denoising Autoencoder" in Matlab?

I know MATLAB has the function trainAutoencoder(input, settings) to create and train an autoencoder. The result is capable of running the two functions encode and decode.
But this is only applicable to normal autoencoders. What if you want a denoising autoencoder? I searched and found some sample code where the autoencoder is converted to a normal network via the network function and then trained with train(network, noisyInput, smoothOutput) like a denoising autoencoder.
But there are multiple missing parts:
How do I use this new network object to "encode" new data points? It doesn't support encode().
How do I get the "latent" variables, i.e. the features, out of this "network"?
I appreciate if anyone could help me resolve this issue.
Thanks,
-Moein
At present (2019a), MATLAB does not let users add layers to an autoencoder object manually. If you want to build your own, you will have to start from scratch using the layers provided by MATLAB.
In order to use trainNetwork(...) to train your model, you will have to find a way to get your data into an imageDatastore object. The difficulty with autoencoder data is that there are NO labels, which imageDatastore requires, so you will have to find a smart way around that - essentially you are dealing with a so-called OCC (One-Class Classification) problem.
https://www.mathworks.com/help/matlab/ref/matlab.io.datastore.imagedatastore.html
Use activations(...) to dump outputs from intermediate (hidden) layers
https://www.mathworks.com/help/deeplearning/ref/activations.html?searchHighlight=activations&s_tid=doc_srchtitle
I swung between using MATLAB and Python (Keras) for deep learning for a couple of weeks; eventually I chose the latter, although I am a long-term and loyal MATLAB user and a rookie in Python. My two cents: there are too many restrictions in the former regarding deep learning.
Good luck.:-)
If by 'simulation' you mean prediction/inference, simply use activations(...) to dump outputs from any intermediate (hidden) layer, as I mentioned earlier, so that you can check them.
Another way is to construct an identical network with only the encoding part, copy your trained parameters into it, and feed it your simulated signals.
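Since the thread recommends dropping down a level anyway, here is a minimal, framework-free sketch of the idea behind a denoising autoencoder: a single hidden layer trained to map noisy inputs back to clean ones. All sizes, the noise level, and the learning rate are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: clean signals and noisy copies of them.
n_samples, n_features, n_hidden = 256, 16, 4
clean = rng.normal(size=(n_samples, n_features))
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# One-hidden-layer autoencoder: encode then decode.
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_features))
b2 = np.zeros(n_features)

def encode(x):
    return np.tanh(x @ W1 + b1)  # the latent features ("hidden layer")

def decode(h):
    return h @ W2 + b2

lr = 0.05
losses = []
for epoch in range(200):
    h = encode(noisy)
    out = decode(h)
    err = out - clean                      # denoising: target is the CLEAN signal
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation by hand.
    g_out = 2 * err / n_samples
    g_W2 = h.T @ g_out
    g_b2 = g_out.sum(axis=0)
    g_h = g_out @ W2.T * (1 - h ** 2)      # tanh derivative
    g_W1 = noisy.T @ g_h
    g_b1 = g_h.sum(axis=0)
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

latent = encode(noisy)  # the "latent variables" the question asks about
```

The point of the sketch is structural: once you own the encode function, extracting latent features is trivial, which is exactly what activations(...) gives you back in the MATLAB/trainNetwork world.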

matlab osm webread slow

I am using MATLAB's webread() function to download OpenStreetMap .osm files.
My goal is to write an application that downloads OSM data, parses it in MATLAB, then reads out coordinate values and plots them.
Unfortunately, the code overall runs pretty slowly, and the profiler showed that most of the runtime comes from the webread() call, even though the requested .osm files are comparatively small (<500 kB) and download in a browser within a fraction of the time the function takes.
What can I do to speed up the downloading process? Is webread recommended for this task at all? I was also considering downloading the files with another language / a bash application that can be called from within MATLAB.
EDIT: the OSM data is downloaded as an XML file
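One lead, since the question already considers calling another language from MATLAB: a small stdlib-only Python downloader (a hypothetical sketch, using the public OSM API v0.6 map endpoint) can be invoked via MATLAB's system() and avoids webread()'s content-type processing:

```python
from urllib.request import urlopen

def osm_map_url(min_lon, min_lat, max_lon, max_lat):
    """Build the OSM API 0.6 'map' request URL for a bounding box."""
    return ("https://api.openstreetmap.org/api/0.6/map"
            f"?bbox={min_lon},{min_lat},{max_lon},{max_lat}")

def download_osm(path, min_lon, min_lat, max_lon, max_lat):
    """Fetch the raw .osm XML and write it to disk unchanged."""
    with urlopen(osm_map_url(min_lon, min_lat, max_lon, max_lat)) as resp, \
            open(path, "wb") as f:
        f.write(resp.read())
```

From MATLAB this could be called with something like system('python download_osm.py ...'); alternatively, MATLAB's websave (which writes the raw bytes to a file instead of parsing the response) is worth profiling against webread for the same URL.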

LMDB files and how they are used for caffe deep learning network

I am quite new to deep learning and I am having some problems using the Caffe deep learning network. Basically, I didn't find any documentation explaining how to solve the series of questions and problems I am dealing with right now.
Please, let me explain my situation first.
I have thousands of images and I must do a series of pre-processing operations on them. For each pre-processing operation, I have to save the pre-processed images as 4D matrices and also store a vector with the image labels. I will store this information as LMDB files that will be used as input for the Caffe GoogLeNet deep learning network.
I tried to save my images as .HD5 files, but the final file size is 80GB, which is impossible to process with the memory I have.
So, the other option is using LMDB files, right? I am quite new to this file format and would appreciate your help in understanding how to create such files in Matlab. Basically, my rookie questions are:
1- These LMDB files have the extension .MDB, right? Is this the same extension used by Microsoft Access, or is the right extension .lmdb and they are different?
2- I found this solution for creating .mdb files (https://github.com/kyamagu/matlab-leveldb); does it create the file format needed by Caffe?
3- For Caffe, do I have to create one .mdb file for labels and another for images, or can both be fields of the same .mdb file?
4- When I create an .mdb file I have to label the database fields. Can I label one field as image and another as label? Does Caffe understand what each field means?
5- What do the functions (in https://github.com/kyamagu/matlab-leveldb) database.put('key1', 'value1') and database.put('key2', 'value2') do? Should I save my 4D matrices in one field and the label vector in another?
There is no connection between LMDB files and MS Access files.
As I see it you have two options:
Use the "convert_imageset" tool - it is located in Caffe under the tools folder - to convert a list of image files and labels to LMDB.
Instead of a "data layer", use an "image data layer" as the input to the network. This type of layer takes as its source a file with a list of image file names and labels, so you don't have to build a database (another benefit for training: you can use the shuffle option and get slightly better training results).
In order to use an image data layer, just replace the layer type Data with ImageData. The source file is the path to a file containing, on each line, the path of an image file and its label separated by a space. For example:
/path/to/filename.png 23
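Putting that together, the layer definition in the prototxt would look roughly like this (the paths and batch size are placeholders):

```protobuf
layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  image_data_param {
    source: "/path/to/list.txt"   # each line: <image path> <label>
    batch_size: 32
    shuffle: true
  }
}
```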
If you want to do some preprocessing of the data without saving the preprocessed file to disk you can use the transformations available by caffe (mirror and cropping) (see here for information http://caffe.berkeleyvision.org/tutorial/data.html) or implement your own DataTransformer.
Note that the matlab-leveldb wrapper linked above produces LevelDB databases; Caffe's standard tooling uses LMDB, the 'Lightning' memory-mapped database from Symas.
You can try using this Matlab LMDB wrapper
I personally have no experience using LMDB from Matlab, but there is a nice library for doing this from Python: py-lmdb
LMDB database is a Key/Value db (similar to HashMap in Java or dict in Python). In order to store 4D matrices you need to understand the convention Caffe uses to save images into LMDB format.
This means that the best approach to convert images to LMDB for Caffe would be doing this with Caffe.
There are examples in Caffe on how to convert images into LMDB - I would try to repeat them and then modify scripts to use your images.
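To illustrate the key/value convention from question 5, here is a small sketch using a plain dict to stand in for the LMDB handle, with a simplified serialization. Real Caffe stores protobuf-encoded Datum messages keyed by a zero-padded index, so treat the byte layout here as hypothetical; only the key convention and the one-record-per-image pattern carry over:

```python
import numpy as np

db = {}  # stand-in for an LMDB environment/transaction

def put_image(db, index, image, label):
    """Store one HxWxC image plus its label under a zero-padded key."""
    key = f"{index:08d}".encode()  # Caffe-style keys: "00000000", "00000001", ...
    header = np.array(image.shape, dtype=np.int32).tobytes()
    value = header + np.int32(label).tobytes() + image.astype(np.uint8).tobytes()
    db[key] = value

img = np.zeros((4, 4, 3), dtype=np.uint8)
put_image(db, 0, img, label=7)
```

With the real py-lmdb library, the dict would be replaced by a write transaction (env.begin(write=True)) and txn.put(key, value); the important part is the convention: one record per image, keyed by index, with the label stored alongside the pixel data rather than in a separate database.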

Plot data in google map from matlab

Is there any way to plot my data, consisting of lat/lon and some feature values, on Google Maps from MATLAB? I have data points with different properties, and based on those I want to show markers with different colors/sizes on Google Maps. Is that possible?
Google Maps allows you to import data in the form of a KML file. There are various tutorials available online that show how to perform this import step (here's one that I just quickly found). Also, here is some basic info on KML from google.
So then the only challenge becomes exporting your data from MATLAB into KML form. If you have MATLAB's Mapping Toolbox, then this is extremely easy. Just use the kmlwrite command.
If you don't have the Mapping Toolbox already, it's probably a good idea to get it if you are performing any sort of complex mapping operations (things get pretty complicated when you try to flatten a round globe into a map). If this is just a one-off project and that toolbox is overkill, then you may be able to create a KML file manually by writing XML from MATLAB (either using xmlwrite or going the very manual route of writing with fprintf).
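For the manual route, the KML that Google Maps imports is simple enough to emit directly. A minimal sketch (shown in Python for brevity; the same string structure is what you would produce with fprintf in MATLAB - note KML coordinates are in lon,lat order):

```python
def points_to_kml(points):
    """points: iterable of (name, lat, lon) tuples -> KML document string."""
    placemarks = "\n".join(
        f"  <Placemark>\n"
        f"    <name>{name}</name>\n"
        f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
        f"  </Placemark>"
        for name, lat, lon in points
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            '<Document>\n' + placemarks + '\n</Document>\n</kml>\n')

kml = points_to_kml([("A", 48.85, 2.35), ("B", 40.71, -74.01)])
```

Marker color/size per feature value would be added via KML Style elements referenced from each Placemark, which is where kmlwrite saves you the boilerplate.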
Additionally, I would not be too surprised if Google Maps allows you to import certain data in the form of CSV files (though perhaps this has limitations compared to KML). If so, you can simply make use of csvwrite from MATLAB to export your data (no extra toolboxes required).
==EDIT==
If you'd like to find out how to convert from CSV to KML, this previous SO post might help.
There is the KML Toolbox, which doesn't require the MATLAB Mapping Toolbox:
http://www.mathworks.com/matlabcentral/fileexchange/34694
It should do the job.