No access to docker container's exposed port from host - postgresql

When I start a docker container like this:
sudo docker run -p5432:5432 -d -e POSTGRES_PASSWORD=test_pass -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --name postgres postgres:12
I can see that it has started via sudo docker ps. But when I try to connect to the container from the host using
psql -Utest_user -p5432 -h localhost -d test_db
it just hangs for several minutes and then reports that it wasn't able to connect.
But when I add the --net host option like this:
sudo docker run --net host -p5432:5432 -d -e POSTGRES_PASSWORD=test_pass -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --name postgres postgres:12
everything works as expected, and I can connect to PostgreSQL with the same psql command.
The same happens with other containers I run, not only ones created from the postgres:12 image.
I can only make requests to them when I set the --net host option.
But I need to map different host ports, for example 2000:5432, so that I can run several postgres containers simultaneously.
What should I do to make it work? My machine runs Ubuntu 20.04, in case it matters, and Docker is a fresh install, set up yesterday following the instructions on the official site.

You can't connect to the database container because by default it only allows connections from localhost (machines on the same local network).
When you start a docker container, it gets its own network (usually an IP in the 172.17.0.0/16 range).
When you set the --net host flag, docker gives the container your host's network stack and IP address, and that's why you are able to connect to the database (you are both on the same network).
The solution is either to use the --net host flag, or to edit the database container's configuration to allow external connections, which is not recommended.
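If you do go the config-editing route, for Postgres that usually means adjusting two files in the container's data directory (a sketch only; exact paths and auth method depend on your image and version, and note the official postgres image already listens on all interfaces by default):
# postgresql.conf -- listen on all interfaces, not just localhost
listen_addresses = '*'
# pg_hba.conf -- allow password auth from any address (use a narrower CIDR in practice)
host    all    all    0.0.0.0/0    md5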

Related

Postgres container does not accept connections when run in host mode

I am trying to run Postgres in a container.
When I start the container using the following command, wherein I map port 5432 of my machine to that of the container, Postgres accepts connections from another process and everything works as intended.
docker run --name postgres --rm -e POSTGRES_HOST_AUTH_METHOD=trust -p 5432:5432 -d postgres
Checking the port with nc also works, i.e.
nc -z localhost 5432
Connection to localhost port 5432 [tcp/postgresql] succeeded!
Now if I run the postgres container in host mode, it stops accepting connections. Basically, the following doesn't work:
docker run --name postgres --rm -e POSTGRES_HOST_AUTH_METHOD=trust --net=host -d postgres
I saw a similar question on StackOverflow, but it doesn't explain why things don't work. Here is the link to that question:
Connection Error with docker postgres using network=host
Any ideas why the second command doesn't work are appreciated. Thank you.
I wasn't aware that host networking is not supported on Mac.
A snippet from https://docs.docker.com/network/host/
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
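On a Mac, the practical workaround is to skip host networking, publish the port as in the first command, and connect through localhost (a sketch; it assumes psql is installed on the host, and trust auth means no password is required):
docker run --name postgres --rm -e POSTGRES_HOST_AUTH_METHOD=trust -p 5432:5432 -d postgres
psql -h localhost -p 5432 -U postgres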
Another related SO question:
How to tell docker on mac to use host network for all purpose?

Access Docker postgres container from another container

I am trying to make a portable solution to having my application container connect to a postgres container. By 'portable' I mean that I can give the user two docker run commands, one for each container, and they will always work together.
I have a postgres docker container running on my local PC, and I run it like this:
docker run -p 5432:5432 -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
and I am able to access it from a python flask app, using the address 127.0.0.1:5432.
I put the python app in a docker container as well, and I am having trouble connecting to the postgres container.
Address 127.0.0.1:5432 does not work.
Address 172.17.0.2:5432 DOES work (172.17.0.2 is the address of the docker container running postgres). However, I consider this not portable, because I can't guarantee what the postgres container's IP will be.
I am aware of the --add-host flag, but it also asks for a host IP, which I want to be localhost (127.0.0.1). Despite several hits on --add-host, I wasn't able to get it to work so that the final docker run commands would be the same on any computer they are run on.
I also tried this: docker container port accessed from another container
My situation is that the postgres and myApp will be containers running on the same computer. I would prefer a non-Docker compose solution.
The comment from Truong had me try that approach (again), and I got it working. Here are my steps in case they help someone else. The crux of the problem was needing one container to address another in a way that was static (didn't change). Using a user-defined network was the answer, because you can name a container and then reference it by that name; Docker's embedded DNS resolves the name to the container's IP.
My steps:
docker network create mynet
docker run --net mynet --name mydb -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
Now the postgres database is reachable at the hostname mydb, and all ports of this container are open to any other container running on this network.
Now add the front-end app:
docker run --net mynet -ti -p 80:80 mydockerhubaccount/myapp
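To confirm the name resolves, you can run psql from a throwaway container on the same network (a sketch, assuming the password from the run command above and the image's default postgres user):
docker run --net mynet --rm -e PGPASSWORD=qwerty postgres:11 psql -h mydb -U postgres -c '\l'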

'--link' does not seem to work to connect two Docker containers

I would like to run MongoDB in a container, this works:
docker run -p 27017:27017 --name cdt -d mongo
then I want to run a server in another container, like so:
docker run --name foo --link cdt:mongo exec /bin/bash -c "node server.js"
The node.js server attempts to make a mongodb connection to localhost:27017, but it fails to make the connection.
Anyone know why that might happen? Am I not linking the containers correctly?
Note that I can successfully connect to the mongodb container from outside a container, but not from the server inside the "foo" container.
localhost inside a container refers (99.5% of the time) to the container itself, which is also 99.5% of the time not what you want. If you're using links like this, you need to change localhost:27017 to mongo:27017, since mongo is the alias you're 'mounting' the link as (--link cdt:mongo).
A better option is to use Docker networks instead of links (which are deprecated). So:
$ docker network create my-net
$ docker run --name cdt --net my-net -d mongo
$ docker run --name foo --net my-net exec /bin/bash -c "node server.js"
Now you'd refer to your db via cdt:27017, as container names become resolvable via DNS on the same network. Note that you don't need to expose a port if you're not intending to connect from the outside world; inter-container connectivity on the same network doesn't require port mapping.
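To check that the name actually resolves, you can ping it from a throwaway container on the same network (a sketch; busybox is just a convenient tiny image):
$ docker run --net my-net --rm busybox ping -c 1 cdt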

List docker database with local databases

I have two docker containers running, following the instructions given here: https://github.com/swri-robotics/bag-database.
I can now look at the database in the browser using: localhost:8080, so it's set up properly and running fine.
So my question is: how can I get the other container, the one running on port 5432, to list its database alongside all the other databases that I have locally when I run psql -l?
Right now I can only look at it if I open a shell inside the container first.
I run it like this:
docker run -d -p 5432:5432 --name bagdb-postgres -v /var/lib/bagdb-postgres:/var/lib/postgresql/data -h 127.0.0.1 -e POSTGRES_PASSWORD=password -e POSTGRES_USER=user -e POSTGRES_DB=bag_database mdillon/postgres:latest
Thanks!
The program is executed in a container. The intention of containers is to create an environment that is sealed off from your host operating system. You added flags like -p and -v, which define specific connections between the host and the container; these are the only connections you have, and you can use docker commands to attach to your container. It is not intended that you can execute code inside a container as if it were not inside a container. The container's Postgres server is not exposed to your operating system as a local instance, so its databases will never appear merged into your local psql -l listing, and as far as I know there is no way to change that.
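What you can do, since the run command publishes port 5432, is point psql at the published port to list the databases inside the container, separately from your local ones (a sketch, assuming the credentials from your run command and no local Postgres already occupying port 5432):
PGPASSWORD=password psql -h 127.0.0.1 -p 5432 -U user -l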

Postgres in Docker; two instances clashing ports

I've created a docker container which hosts a postgres server. I'm trying to get two instances of it running which serve two completely different databases, and thus rely on different sets of volumes.
I'm running the following two commands one after the other:
docker run -v ... -p 5432:9001 -P --name psql-data postgres-docker
docker run -v ... -p 5432:9002 -P --name psql-transactions postgres-docker
The first container is created and runs, but the second call throws the following error:
Error response from daemon: failed to create endpoint psql-transactions on network bridge: Bind for 0.0.0.0:5432 failed. Port already in use.
I'm finding this a little confusing, because I thought the point of containers was to isolate port binding. I could understand if I'd had both containers map 5432 onto the same port on the host machine, but I'm trying to map them to 9001 and 9002 respectively.
How do I prevent this issue?
The order of the ports should be reversed: it should be -p host_port:container_port.
First of all, only publish (-p) ports if you need to access them from outside the Docker host; if the database is only used by other services running in a container, there's no need to publish the ports; containers can access the database through the docker network.
If you intend to access the database externally, you need to swap the order of the ports in your -p: -p <host-port>:<container-port>. So in your case:
docker run -v ... -p 9001:5432 -P --name psql-data postgres-docker
docker run -v ... -p 9002:5432 -P --name psql-transactions postgres-docker
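With the ports published this way, each instance is reachable on its own host port (a sketch; the actual user and database depend on how the postgres-docker image is configured):
psql -h localhost -p 9001 -U postgres
psql -h localhost -p 9002 -U postgres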