Is there a way to generate a .mbtiles file from a .osm.pbf file?

I have an .osm.pbf file which I want to use to generate vector tiles (.mbtiles).
I'm currently on a Windows machine using Docker. I have tried the tool tilemaker (https://github.com/systemed/tilemaker), but I cannot get it to work on my files; I get errors like this:
"
terminate called after throwing an instance of 'std::runtime_error'
what(): Exception during zlib decompression: (-5)
"
Has anyone else been able to generate these tiles from this file type? If so, could you provide a detailed, low-level guide on how you did it? I am new to vector tiles and am getting confused in places.
For anyone interested, this is the command I use to run Docker:
docker run tilemaker tilemaker --input=sud-latest.osm.pbf --output=sud.mbtiles
I have to put tilemaker twice, because otherwise it says it cannot open the .osm.pbf file.
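One thing worth checking with this invocation (an assumption on my part, not confirmed in the thread): nothing mounts the host directory into the container, so tilemaker may be reading a missing or truncated file, and zlib error -5 (Z_BUF_ERROR) is consistent with truncated input. A sketch with a bind mount, using PowerShell's ${PWD} (adjust the volume syntax for cmd.exe):
docker run -it --rm -v ${PWD}:/data tilemaker tilemaker --input=/data/sud-latest.osm.pbf --output=/data/sud.mbtiles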

I made a tutorial on how to generate tiles using tilemaker:
https://blog.kleunen.nl/blog/tilemaker-generate-map
It is focused on Linux, but you can run it on Windows as well. You can find a pre-built version of tilemaker on the CI:
https://github.com/systemed/tilemaker/pull/208/checks?check_run_id=2143761163
They will probably also become available on the GitHub page soon.
Once you have the prebuilt executable and the resources (the config JSON and process Lua files), you can simply do:
tilemaker.exe --input=sud-latest.osm.pbf --output=sud.mbtiles --process resources/process-openmaptiles.lua --config resources/config-openmaptiles.json
The output works best from zoom levels 8 to 14; borders are still missing, so lower zoom levels look pretty empty.

You can use ogr2ogr (see other answer here) to translate osm.pbf into geojson, and then Mapbox's tippecanoe tool to convert the geojson to mbtiles.
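A hedged sketch of that pipeline (the layer name and flags are illustrative; check the ogr2ogr OSM driver docs and the tippecanoe README for your versions):
# export the "lines" layer of the PBF to GeoJSON; the OSM driver also exposes
# points, multipolygons, etc.
ogr2ogr -f GeoJSON lines.geojson sud-latest.osm.pbf lines
# build an .mbtiles tileset, letting tippecanoe guess a suitable maxzoom (-zg)
tippecanoe -o sud.mbtiles -zg lines.geojson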

Possible solutions:
1. It might be a RAM issue; try running tilemaker on a smaller .osm.pbf file.
2. Run a tilemaker.exe that you built yourself from a clone of the tilemaker GitHub repository; this may solve most issues.


dsl.ContainerOp with Python

What are the options to download .py files into the execution environment?
In this example:
class Preprocess(dsl.ContainerOp):
    def __init__(self, name, bucket, cutoff_year):
        super(Preprocess, self).__init__(
            name=name,
            # image needs to be a compile-time string
            image='gcr.io/<project>/<image-name>/cpu:v1',
            command=['python3', 'run_preprocess.py'],
            arguments=[
                '--bucket', bucket,
                '--cutoff_year', cutoff_year,
                '--kfp',
            ],
            file_outputs={'blob-path': '/blob_path.txt'}
        )
The run_preprocess.py file is being called from the CLI.
The question is: how do I get that file in there?
I have seen this interesting example: https://github.com/benjamintanweihao/kubeflow-mnist/blob/master/pipeline.py , and it clones the code before running the pipeline.
The other way would be git cloning with Dockerfile (although the image would take forever to build).
What are other options?
To kickstart KFP development using Python, try the following tutorial: Data passing in Python components.
"it clones the code before running the pipeline"
"The other way would be git cloning with Dockerfile (although the image would take forever to build)"
Ideally, the files should be inside the container image (the Dockerfile method). This ensures maximum reproducibility.
For not very complex Python scripts, the lightweight Python component feature allows you to create a component from a Python function. In this case the script code is stored in the component's command line, so you do not need to upload the code anywhere.
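A minimal sketch of such a component, assuming the KFP v1 SDK (the function name and body here are illustrative, not taken from the question):
from kfp.components import create_component_from_func

def preprocess(bucket: str, cutoff_year: int) -> str:
    # hypothetical stand-in for the logic in run_preprocess.py
    return f'gs://{bucket}/preprocessed/{cutoff_year}'

# The function's source code is embedded in the generated component's
# command line, so nothing has to be uploaded or baked into an image.
preprocess_op = create_component_from_func(preprocess, base_image='python:3.9')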
Putting scripts somewhere remote (e.g. cloud storage or website) is possible, but can reduce reliability and reproducibility.
P.S.
"although the image would take forever to build"
It shouldn't. The first time it might be slow due to having to pull the base image, but after that it should be fast, since only the new layers are being pushed. (This requires choosing a good base image that has all dependencies installed, so that your Dockerfile only adds your scripts.)
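For illustration, a minimal Dockerfile following that advice (the base image name is hypothetical):
# base image already contains Python and all heavy dependencies,
# so only the small COPY layer changes between builds
FROM gcr.io/<project>/preprocess-base:v1
COPY run_preprocess.py /
ENTRYPOINT ["python3", "/run_preprocess.py"]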

Buildroot - extract a custom board/buildroot config/kernel config out of tree

I have customized Buildroot with a new board (derived from the Raspberry Pi Zero). My changes are (in-tree):
.config
board/passkeeper/genimage-passkeeper.cfg
board/passkeeper/post-build.sh
board/passkeeper/post-image.sh
board/passkeeper/rootfs_overlay/etc/init.d/S41passkeeper
board/passkeeper/rootfs_overlay/etc/mdev.conf
board/passkeeper/rootfs_overlay/etc/udhcpd.conf
configs/passkeeper_defconfig
output/build/linux-custom/.config
Now, reading the documentation, I am a bit confused about how to put all these things into a separate folder via BR2_EXTERNAL. I'm also not sure how to move the Linux configuration out of output/build/linux-custom/.config.
make linux-update-defconfig BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE=/tmp/passkeeper/linux/linux-config
results in
Unable to perform linux-update-defconfig when using a defconfig rule
Can somebody please provide a step-by-step guide for that?
[You are asking two questions. I will answer only the question about saving the linux .config file; the other question is too generic.]
You need to set the appropriate options in menuconfig, not just override them on the command line; otherwise they will be inconsistent.
The complete process for creating a Linux defconfig based on a pre-existing in-tree defconfig is the following (you have already done steps 1, 2 and 3):
1. In the Buildroot configuration, select BR2_LINUX_KERNEL_USE_DEFCONFIG or BR2_LINUX_KERNEL_USE_ARCH_DEFAULT_CONFIG.
2. Run make linux-menuconfig and adapt the Linux configuration to your needs.
3. Build and test; iterate over step 2 until you have the configuration you want.
4. In the Buildroot configuration, switch to BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG and set BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE to the place where you want to save it (typically board/passkeeper/linux.config, or $(BR2_EXTERNAL_PASSKEEPER)/board/passkeeper/linux.config if you are using an external).
5. Run make linux-update-defconfig. It is essential that you do this before doing anything else, otherwise Buildroot will complain that the file doesn't exist.
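In command form, the last two steps amount to something like this (a sketch; the config file path is the example used above):
make menuconfig              # switch to BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG and set
                             # BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/passkeeper/linux.config"
make linux-update-defconfig  # saves the current kernel configuration to that file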

Including multiple folders (with images, scripts, etc) within a Matlab standalone GUI.exe

I have a piece of software with multiple GUIs. To organize things better (or at least that was my thought), I created several folders within the root directory, as can be seen in this image.
Within the folders I have both files of various formats and some Matlab scripts.
When creating the Matlab executable using the Application Compiler, after selecting the main file, Matlab did not automatically detect that these folders are needed for the code to run, so I decided to add them manually.
Once the setup was created and installed, I ran the application from within the Matlab environment and was able to identify one possible reason why the software is not running.
As you can see in the first image, "play.png" is within the Images folder.
My question is pretty straightforward: how do I force the Matlab Compiler to include all these folders in the setup, and not only include them but also their paths?
Two things could be going on:
You are not including the files in the package.
Make sure that you include them using the -a option of mcc:
mcc -m hello.m -a ./testdir/*
You can also use the GUI, of course, see here.
You are looking for the included files in the wrong place. Use ctfroot as the root of all paths in your code:
img_file_name = fullfile(ctfroot,'Images','brain.jpg');
Check the unpacked CTF file (it is automatically unpacked when executed) to see the directory structure in it. ctfroot points to the root of the unpacked CTF file.
PS: This blog post might give you some more pointers.

Is there a way to run storage statistics on a Yocto-produced filesystem?

I used Yocto to build a filesystem, using a .bbappend of core-image-minimal. Two questions:
How can I figure out which packages are taking up the most storage space on the rootfs?
I can't think of a way other than looking into the ${D} of every package and seeing how big its components are. There has to be a more systematic and intelligent way to do that.
From what I can decipher from the manifest, it contains nothing related to the size of the included packages.
Also, removing some of the packages I added via IMAGE_INSTALL seems to remove them, but the size of the resulting image doesn't change!
I compared the size of a particular .so file on the build machine and on the installation device (a VM) and found that the size on the installation device was 20-30% of the original size seen on the build machine. Any explanation?
Thanks!
1) One way is to enable buildhistory by adding the following to local.conf:
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "1"
This will create a directory (git repo) buildhistory in your $BUILDDIR. There you'll be able to find e.g.
images/$MACHINE/eglibc/$IMAGE/installed-package-sizes.txt
That file will give you the sizes of all installed packages.
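The file lists each installed package with its size in KiB; on the releases I have seen it is sorted largest-first (verify on yours), so a quick look at the top entries is enough. A sketch, using the path above:
head -n 20 buildhistory/images/$MACHINE/eglibc/$IMAGE/installed-package-sizes.txt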
There are a lot more things you can learn from buildhistory, see buildhistory introduction
2) Where did you compare the particular .so file? If it was from the package's ${B} (i.e. where the library is built), it's not surprising, as the installed .so file will be stripped. The debug information is installed into the -dbg package (as the debug info is usually useless on the target, and the smaller size is of much higher importance).
After some looking inside the scripts/ subdirectory and some googling about the existing scripts, it turns out that the good people of Yocto have scripts for this working out of the box: scripts/tiny/dirsize.py and scripts/tiny/ksize.py.
dirsize.py will give you a breakdown of package sizes for your rootfs, while ksize.py gives you the equivalent information for the kernel.

Developing with Qooxdoo and multiple developers

I'm interested in Qooxdoo as a possible web development framework. I have downloaded the SDK and installed it in a central location on my PC as I expect to use it on multiple projects. I used the create-application.py script to make a new test application and added all the generated files to my version control system.
I would like to be able to collaborate on this with other developers on other PCs. They are likely to have the SDK installed in a different location. The auto-generated files in Qooxdoo seem to include the SDK path in both config.json and generator.py: if the SDK path moves, the generator.py script stops working. generator.py doesn't seem to be too much of a problem as it looks in config.json for an updated path, but I'm not sure how best to handle config.json.
The only options I've thought of so far are:
1. Exclude it from the VCS; but there doesn't seem to be a script to regenerate it automatically, so this could be dangerous.
2. Add it to the VCS but have each developer modify the path line, and accept that it might need to be adjusted whenever changes are merged.
3. Change config.json to contain just the path and a single 'include' line that points to a second file containing all the non-SDK-path-related information.
4. Use a relative path to the SDK and keep a separate, closely located copy of the SDK for every project that uses it.
Approach 1 would be ideal if the generation script existed; approach 2 is really nasty; I couldn't get approach 3 to work and approach 4 is a bit messy as it means multiple copies of the SDK littered about the place.
The Android SDK seems to deal with this very well (using approach 1), with the SDK path in its own file with a script that automatically generates that file. As far as I can tell, Qooxdoo puts lots of other important information in config.json and the only way to automatically generate that file is to create a new project.
Is there a better/recommended way to deal with this?
As an alternative to using symlinks, you can override the QOOXDOO_PATH macro on the command line:
./generate.py source -m QOOXDOO_PATH:<local_path_to_qooxdoo>
(Depending on the shell you are using you might have to apply some proper quoting of the -m argument). This way, every programmer can use his locally installed qooxdoo SDK. You can even drop the QOOXDOO_PATH entry from config.json to enforce this.
We work with a symbolic link pointing to the SDK; config.json contains just the path of the link.
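A minimal sketch of that setup (the SDK location is an example; each developer points the link at their own install):
# project-local link to wherever the SDK lives on this machine
ln -s /opt/qooxdoo-sdk ./qooxdoo-sdk
The QOOXDOO_PATH macro in config.json can then be set to ./qooxdoo-sdk, and the link itself would be left out of version control.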