I am new to OpenMapTiles. I eventually want to buy a package and run my own map tile server, so I can fetch and display the map from my own server. Before that, though, I want to understand how to create my own map server and display the map using the Leaflet library.
So I followed the 'Docker' way to download the free map version and deployed it. But I do not know how to integrate it with the Leaflet library. I read the documentation, but I was not able to understand it.
Question 1: Can someone explain how to integrate the created tile server with the Leaflet library?
Question 2: How do I run the docker command while explicitly specifying which .mbtiles file to use?
'docker run --rm -it -v $(pwd):/data -p 8080:80 klokantech/openmaptiles-server openmaptiles-2017-07-03_planet_z0_z14.mbtiles'.
I tried the above command, but it is not working.
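As far as I know, klokantech/openmaptiles-server picks its .mbtiles files up from the mounted /data directory and is configured through its web wizard rather than a command-line argument, which would explain why appending the file name fails. The sibling image klokantech/tileserver-gl does accept the file explicitly; a sketch, assuming the file sits in the current directory (depending on the image version the file is passed positionally or via --mbtiles/--file, see the tileserver-gl docs):
docker run --rm -it -v $(pwd):/data -p 8080:80 klokantech/tileserver-gl openmaptiles-2017-07-03_planet_z0_z14.mbtiles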
The simple answer for running the command is to install Docker first: Docker
Docker will help you run the example and see everything working together on your PC. But the real point of your question is that you want a server that can serve .mbtiles to your remote application; for that I have set up a PHP server, which is cheaper to host than Node.js: .mbtiles Server PHP
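For Question 1, here is a minimal Leaflet sketch. It assumes the server exposes raster tiles at a URL like http://localhost:8080/styles/klokantech-basic/{z}/{x}/{y}.png; the actual style name and URL template depend on your setup, and the server's built-in web UI shows the exact endpoint to copy:
<link rel="stylesheet" href="https://unpkg.com/leaflet/dist/leaflet.css" />
<script src="https://unpkg.com/leaflet/dist/leaflet.js"></script>
<div id="map" style="height: 400px"></div>
<script>
  // Point Leaflet at your own tile server instead of a public one
  var map = L.map('map').setView([47.37, 8.54], 10); // Zurich, as an example
  L.tileLayer('http://localhost:8080/styles/klokantech-basic/{z}/{x}/{y}.png', {
    maxZoom: 14, // the OpenMapTiles planet extract goes up to z14
    attribution: '&copy; OpenMapTiles &copy; OpenStreetMap contributors'
  }).addTo(map);
</script>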
I read that Docker support was not ready yet, but Docker recently published a preview of their new version that seems to be working. The next issue is that MySQL images are only available for x86_64 and not ARM, but MariaDB has one. Unfortunately, I'm not versed enough in Docker to get that working. Finally, I tried installing Prisma with a Postgres DB but couldn't get that working either.
Has anyone had success getting Prisma working on the new Apple M1 devices? If so, please advise how you got it working.
Edit: here is the error message when I run prisma init for a MySQL database and then docker-compose up -d:
"ERROR: no matching manifest for linux/arm64/v8 in the manifest list entries"
I've come to learn that this is because there isn't a MySQL image for ARM yet (but there is one for MariaDB).
So when I changed the docker-compose file to load the MariaDB image instead, docker-compose up -d runs successfully, but when I open http://localhost:4466 in my browser, localhost refuses to connect.
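For reference, the image swap amounts to something like this in the generated docker-compose.yml (a sketch; the exact keys and credentials that prisma init generates may differ in your version):
  db:
    image: mariadb:10.5   # instead of mysql:5.7, which has no arm64 image
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: prisma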
The same "localhost refused to connect" result also occurs if I set up Prisma for Postgres or MongoDB.
Any help is appreciated.
I'm setting up OpenMapTiles-server-dev to work in a Docker container, and as a map source I downloaded and configured the planet file. The map is not showing up in the view or in the main UI.
My source for the OMT server (I also tried other servers):
docker pull klokantech/openmaptiles-server
I granted all permissions, gave more resources to Docker, installed Node.js, etc. The best part is that when I run another map source, like Spain or Switzerland, it works like a charm.
PowerShell commands:
docker run -it -v D:\spain:/data -p 8080:80 klokantech/tileserver-gl
docker run -it -v D:\planet:/data -p 8080:80 klokantech/tileserver-gl
The output of both maps is identical after executing the above commands.
So both maps (planet, Spain) were configured successfully, but only Spain works properly. Also, using another PC with a normal Windows 10, I was able to display the planet map properly.
[SOLVED]
I just installed Windows 10 Education for testing and it works fine. So if you have a similar problem, keep in mind that Windows Server 2019 is not compatible with running larger maps.
I'm new to docker. I'm still trying to wrap my head around all this.
I'm building a node application (REST api), using Postgresql to store my data.
I've spent a few days learning about docker, but I'm not sure whether I'm doing things the way I'm supposed to.
So here are my questions:
I'm using the official Docker postgres 9.5 image as a base to build my own (my Dockerfile only adds PL/Python on top of it and installs a custom Python module for use within PL/Python stored procedures). I created my container as suggested by the postgres image docs:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container, I cannot run it again using the above command because the container already exists. So I start it using docker start instead of docker run. Is this the normal way to do things? Will I generally use docker run the first time and docker start every time after that?
Persistence: I created a database and populated it on the running container, connecting with pgAdmin3. I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. I can see in the Dockerfile of the official postgres image that a volume is created (VOLUME /var/lib/postgresql/data), but I'm not sure that's the reason persistence is working. Could you briefly explain (or point to an explanation of) how this all works?
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run three separate containers: one for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working OK without one.
Is there anything else I should pay attention to?
Thanks
EDIT: adding to my confusion, I just ran a new container from the official Debian image (no Dockerfile, just docker run -i -t -d --name debtest debian /bin/bash). With the container running in the background, I attached to it using docker attach debtest and then proceeded to apt-get install postgresql. Once it was installed, I ran psql (still from within the container), created a table in the default postgres database, and populated it with one record. Then I exited the shell, and the container stopped automatically since the shell wasn't running anymore. I started the container again using docker start debtest, attached to it, and finally ran psql again. I found everything had persisted since the first run: PostgreSQL is installed, my table is there, and of course the record I inserted is there too. I'm really confused as to why I need a VOLUME to persist data, since this quick test didn't use one and everything appears to work just fine. Am I missing something here?
Thanks again
1.
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists.
Correct. You named it (--name some-postgres), hence before starting a new one, the old one has to be deleted, e.g. docker rm -f some-postgres.
So I start it using docker start instead of docker run. Is this the normal way to do things? Will I generally use docker run the first time and docker start every other time?
No, it is by no means normal for Docker. Docker containers are normally supposed to be ephemeral, that is, easily thrown away and started anew.
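In other words, instead of docker start you would typically remove the old container and run a fresh one (a sketch, reusing the names from the question):
docker rm -f some-postgres
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
Anything that must survive that cycle belongs in a volume, which is what the persistence point below is about.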
Persistence: ... I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. ...
That's because you are reusing the same container. Remove the container and the data is gone.
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run three separate containers: one for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working OK without one.
Yes, separate containers for separate concerns is the good way to go. This comes in handy in many cases, for example when you need to upgrade the postgres base image without losing your data (which is exactly where the data container starts to play its role).
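A minimal sketch of that data-container pattern (the container names are just examples):
# data-only container that owns the volume; it never needs to run
docker create -v /var/lib/postgresql/data --name pgdata postgres /bin/true
# the actual database mounts the volume from the data container
docker run --name some-postgres --volumes-from pgdata -e POSTGRES_PASSWORD=mysecretpassword -d postgres
# later: replace or upgrade the database container without losing data
docker rm -f some-postgres
docker run --name some-postgres --volumes-from pgdata -e POSTGRES_PASSWORD=mysecretpassword -d postgres:9.5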
Is there anything else I should pay attention to?
Once acquainted with the Docker basics, you may take a look at Docker Compose or similar tools that will help you run multi-container applications more easily.
Short and simple:
What you get from the official postgres image is a ready-to-go postgres installation along with some gimmicks which can be configured through environment variables. With docker run you create a container. The container lifecycle commands are docker start/stop/restart/rm. Yes, this is the Docker way of things.
Everything inside a volume is persisted. Every container can have an arbitrary number of volumes. Volumes are directories defined either inside the Dockerfile, inside the parent Dockerfile, or via the command docker run ... -v /yourdirectoryA -v /yourdirectoryB .... Everything outside volumes is lost with docker rm. Everything, including volumes, is lost with docker rm -v.
It's easier to show than to explain. See this readme with Docker commands on GitHub, where I use the official PostgreSQL image for Jira and also add NGINX to the mix: Jira with Docker PostgreSQL. A data container is also a cheap trick for being able to remove, rebuild, and renew a container without having to move the persisted data.
Congratulations, you have managed to grasp the basics! Keep it up! Try docker-compose to better manage those nasty docker run ... commands and to handle multi-container setups and data containers.
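As a taste of it, a docker-compose.yml for this app might look roughly like this (a sketch; the service names, ports, and named volume are assumptions for illustration):
version: "2"
services:
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume instead of a data container
  app:
    build: .            # your node REST API image
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  pgdata:
With that in place, docker-compose up -d starts both containers together, and docker-compose down removes them.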
Note: you need a blocking process in order to keep a container running! Either this command must be set explicitly inside the Dockerfile (see CMD), or given at the end of the docker run -d ... /usr/bin/myexamplecommand command. If your command is non-blocking, e.g. /bin/bash without an attached terminal, then the container will stop immediately after executing the command.
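You can see the difference directly (a small illustration):
# exits at once: bash finds no attached TTY and returns immediately
docker run -d --name dies debian /bin/bash
# keeps running: tail blocks forever
docker run -d --name stays debian tail -f /dev/null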
I'd like to make deployment to production as easy as possible, but I am struggling with how to do it.
If I use Docker for production, it would be nice to have a Docker image with my application deployables in it, but I'm not sure whether that is a good approach.
I have several concerns:
wouldn't the layer system bloat when I replace the file in every new version of the image?
Is it a good idea to make the DB scripts and migration tool part of this image?
The last concern is how to run it conveniently. I don't want to have to stop the Tomcat container and start it again using a volume from the new application image (as the new app container cannot have the same name).
I have seen ways to do that, but I don't like them very much, i.e. deploying to a Tomcat Docker image, creating a Tomcat image with the application already bundled, or using a host system volume. I'd like to have something like an install "CD". I'd like to evaluate my idea against other approaches; the proper tool to run it is perhaps a matter for another question.
wouldn't the layer system bloat when I replace the file in every new version of the image?
No, because you can clean up dangling images:
docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
Is it a good idea to make the DB scripts and migration tool part of this image?
Yes, provided your startup script knows how to detect whether it needs to apply them.
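A sketch of such a startup script (the paths and the use of Flyway are assumptions; Flyway itself tracks which migrations have already been applied, so a plain migrate call is effectively idempotent):
#!/bin/sh
# entrypoint.sh: apply pending DB migrations, then hand over to Tomcat
/opt/flyway/flyway -url="$DB_URL" -user="$DB_USER" -password="$DB_PASS" migrate
exec /usr/local/tomcat/bin/catalina.sh run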
I don't like them very much, i.e. deploying to a Tomcat Docker image, creating a Tomcat image with the application already bundled, or using a host system volume.
If your data volume container is separate from the app, that shouldn't be an issue.
From the discussion, the OP adds:
using docker create --name <container_name> <image_name> with a different image name, I can retain the container name and run the Tomcat container with the same volumes-from?
docker run -it --rm -p 8888:8080 --volumes-from <container_name> <image_name>
That is the idea, but it won't work if there is already a created data container with that name.
If there is no persistent data in it, you can docker rm that data container and recreate it with the same name.
If there is persistent data, then it is best to copy the new updated data in through an intermediate (docker run) container which temporarily mounts the data container.
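Something along these lines (a sketch; the paths and names are examples):
# temporarily mount the data container's volume plus the new deployables,
# copy the update in, and let the helper container remove itself (--rm)
docker run --rm --volumes-from <container_name> -v $(pwd)/dist:/src busybox \
    cp -r /src/. /path/to/the/volume/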
I am trying to deploy a Scala-based application to dokku; the application runs an HTTP server and a customised sshd server.
The problem I have is that it seems dokku only supports one port per application.
I need dokku to expose both of my application's ports to the web.
In Docker this is possible and quite straightforward to do, but when I implement the same technique with dokku, I get an error.
Any suggestions on how to make both ports accessible?
Since this is, after all, Docker, you can use an ambassador...
You will need a line like:
docker run -t -i --link mysql:mysql --name mysql_ambassador -p 3306:3306 ctlc/ambassador
Replace the port with your own, and mysql with your container name (from docker ps).
See https://www.ctl.io/developers/blog/post/deploying-multi-server-docker-apps-with-ambassadors
NOTE: Make sure you run docker pull svendowideit/ambassador:latest first.
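Adapted to the question, the idea would look roughly like this (a sketch; the container name, the 2222 host port, and the assumption that sshd listens on port 22 inside the dokku-managed app container are all examples):
# publish the app's sshd port alongside dokku's HTTP routing
docker run -d --link <app_container>:app --name sshd_ambassador \
    -p 2222:22 svendowideit/ambassador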