I want to use Docker to run a ZooKeeper image. The command from the instructions is
docker run -d -p 2181:2181 -p 2888:2888 -p 3888:3888 --name zookeeper confluent/zookeeper
I am not clear on the -p option. What do these three -p options mean in this ZooKeeper example, and why does the same port value appear twice in a single -p option? I would have expected something like 2181:localhost, not 2181:2181.
The -p flag specifies which ports of the container you choose to publish on your host (they are all closed to the outside by default).
The : notation maps a host port to a container port: the value on the left is the port on your host, and the value on the right is the port inside the container that traffic is forwarded to.
Referring to your question: a mapping like 2181:localhost would mean nothing, because the host side is always localhost anyway; what Docker needs to know is the port on each side. Docker gives you the choice because port 2181 could already be occupied on your host, so you are free to pick a different host port to forward to the container.
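For example, if 2181 is already taken on your host, you could publish ZooKeeper's client port on a different host port; 12181 below is an arbitrary choice, the rest of the command is unchanged:
docker run -d -p 12181:2181 -p 2888:2888 -p 3888:3888 --name zookeeper confluent/zookeeper
Clients on the host would then connect to localhost:12181, while inside the container ZooKeeper still listens on 2181.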
Related
I want to change the database port of the postgres Docker image. The following properly opens port 5000 but does not do the first step of telling postgres to start on a non-default port:
docker run -it -p 5000:5000 --env POSTGRES_DB=mydb --env POSTGRES_PASSWORD=mypw --env POSTGRES_USER=myuser my.company.repo/postgres:11-alpine
Do I need to create my own image that does
FROM my.company.repo/postgres:11-alpine
# some command to change the pg port to 5000
Is there another way to simply change the postgres port?
Update: Thinking on this, I'm guessing the way to do it is to ensure the Docker image expects POSTGRES_PORT as an environment variable. If not, the image has to be changed.
Instead of changing the port that PostgreSQL uses, you can just map the host port you want to use to the default port in the container (5432):
docker run -p 5000:5432 ...
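With that mapping, clients on the host talk to port 5000 while PostgreSQL inside the container keeps listening on its default 5432. As a quick check, and assuming the credentials from your command, something like this should connect:
psql -h localhost -p 5000 -U myuser -d mydb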
When I start a Docker container like this:
sudo docker run -p5432:5432 -d -e POSTGRES_PASSWORD=test_pass -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --name postgres postgres:12
I can see that it has started using sudo docker ps. But when I try to connect to the container from the host using
psql -Utest_user -p5432 -h localhost -d test_db
it just hangs for several minutes and then reports that it wasn't able to connect.
But when I add the --net host option like this:
sudo docker run --net host -p5432:5432 -d -e POSTGRES_PASSWORD=test_pass -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --name postgres postgres:12
everything starts working as expected, and I can connect to PostgreSQL with the same psql command.
The same happens with other containers I run, not only ones created from the postgres:12 image.
I can only make requests to them when I set the --net host option.
But I need to map different ports, for example 2000:5432, so that I can run several postgres containers simultaneously.
What should I do to make it work? My machine runs Ubuntu 20.04, in case it matters, and Docker is a fresh install from yesterday, following the instructions on the official site.
You can't connect to the database container because, by default, it only allows connections from localhost (machines in the same network).
When you start a Docker container it gets its own network (usually an IP in the 172.17.0.0/16 range).
When you set the --net host flag, Docker uses your host's network (and therefore its IP address) for the container, and that's why you are able to connect to the database (because you are both on the same network).
The solution is either to use the --net host flag, or to edit the configuration of the database container to allow external connections, which is not recommended.
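If you want to see the address Docker assigned to a container on its own bridge network, docker inspect can print it; this assumes the container is named postgres, as in your command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres
On Linux you can usually reach that bridge IP directly from the host as well, which is another way to talk to the container without --net host.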
I would like to run MongoDB in a container, this works:
docker run -p 27017:27017 --name cdt -d mongo
then I want to run a server in another container, like so:
docker run --name foo --link cdt:mongo exec /bin/bash -c "node server.js"
The node.js server attempts to make a mongodb connection to localhost:27017, but it fails to make the connection.
Anyone know why that might happen? Am I not linking the containers correctly?
Note that I can successfully connect to the mongodb container from outside a container, but not from the server inside the "foo" container.
So localhost from a container is always (99.5% of the time) referring to the container itself. This is also 99.5% of the time not what you want. If you're using links like this, you need to change localhost:27017 to mongo:27017 as that's what you're 'mounting' the link as (--link cdt:mongo).
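With the legacy --link flag, Docker writes the alias into the linked container's /etc/hosts, so from inside foo the name mongo resolves to the cdt container. A quick way to confirm this, assuming the foo container is running:
docker exec foo cat /etc/hosts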
A better option is to use Docker networks instead of links (which are deprecated). So:
$ docker network create my-net
$ docker run --name cdt --net my-net -d mongo
$ docker run --name foo --net my-net exec /bin/bash -c "node server.js"
Now you'd refer to your db via cdt:27017, as the names of the containers become resolvable via DNS on the same network. Note that you don't need to expose a port if you're not intending to connect from the outside world; inter-connectivity between containers on the same network doesn't require port mapping.
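For a quick check that the name resolves on that network, you can start a throwaway container on the same network and ping the database by name (busybox here is just a convenient small image):
docker run --rm --net my-net busybox ping -c 1 cdt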
I have two Docker containers running, following the instructions given here: https://github.com/swri-robotics/bag-database.
I can now look at the database in the browser using localhost:8080, so it's set up properly and running fine.
So my question is: how can I get the other container, which is running on port 5432, to list the database alongside all the other databases that I have locally using psql -l?
Right now I can only look at it if I open the container first.
I run it like this:
docker run -d -p 5432:5432 --name bagdb-postgres -v /var/lib/bagdb-postgres:/var/lib/postgresql/data -h 127.0.0.1 -e POSTGRES_PASSWORD=password -e POSTGRES_USER=user -e POSTGRES_DB=bag_database mdillon/postgres:latest
Thanks!
The program is executed in a container. The intention of containers is to create an environment isolated from your host operating system. You added some flags like -p and -v which define specific connections between the host and the container. These are the only connections you have, and you can use docker commands to work with your container. It is not intended that you can execute code inside a container as if it were not inside a container. It is not exposed to your operating system and, as far as I know, there is no way to change that.
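That said, because your run command publishes the port with -p 5432:5432, the PostgreSQL inside the container is reachable from the host through that published port. Assuming nothing else on the host is already listening on 5432, and using the credentials from your command, something like this should list its databases:
psql -h localhost -p 5432 -U user -d bag_database -l
Note that this lists only the databases inside that container; it will not merge them with the databases of a PostgreSQL server installed directly on the host.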
I've created a docker container which hosts a postgres server. I'm trying to get two instances of this running which index two completely different databases, and thus rely on a different set of volumes.
I'm running the following two commands one after the other:
docker run -v ... -p 5432:9001 -P --name psql-data postgres-docker
docker run -v ... -p 5432:9002 -P --name psql-transactions postgres-docker
The first container is created and runs, but the second call throws the following error:
Error response from daemon: failed to create endpoint psql-transactions on network bridge: Bind for 0.0.0.0:5432 failed. Port already in use.
I'm finding this a little confusing, because I thought the point of containers was to isolate port binding. I could understand it if I'd had both containers map 5432 onto the same port on the host machine, but I'm trying to map them to 9001 and 9002 respectively.
How do I prevent this issue?
The order of the ports should be reversed. It should be -p host_port:container_port
First of all, only publish (-p) ports if you need to access them from outside the Docker host. If the database is only used by other services running in a container, there's no need to publish the ports; containers can access the database through the docker network.
If you intend to access the database externally, you need to swap the order of the ports in your -p option: -p <host-port>:<container-port>. So in your case:
docker run -v ... -p 9001:5432 -P --name psql-data postgres-docker
docker run -v ... -p 9002:5432 -P --name psql-transactions postgres-docker
To avoid the port clash you need to run it like this:
docker run -v ... -p 9001:5432 -P --name psql-data postgres-docker
docker run -v ... -p 9002:5432 -P --name psql-transactions postgres-docker
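If you want to double-check what actually got published for each container, docker port prints the mappings (psql-data here is the name used above):
docker port psql-data
For the first container it should show something like 5432/tcp -> 0.0.0.0:9001.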