Access Docker postgres container from another container - postgresql

I am trying to make a portable solution to having my application container connect to a postgres container. By 'portable' I mean that I can give the user two docker run commands, one for each container, and they will always work together.
I have a postgres docker container running on my local PC, and I run it like this,
docker run -p 5432:5432 -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
and I am able to access it from a python flask app, using the address 127.0.0.1:5432.
I put the python app in a docker container as well, and I am having trouble connecting to the postgres container.
Address 127.0.0.1:5432 does not work.
Address 172.17.0.2:5432 DOES work (172.17.0.2 is the address of the docker container running postgres). However I consider this not portable because I can't guarantee what the postgres container IP will be.
I am aware of the --add-host flag, but it also asks for a host IP, which I want to be localhost (127.0.0.1). Despite several hits on --add-host I wasn't able to get it to work in a way that lets the final docker run commands be the same on any computer they are run on.
I also tried this: docker container port accessed from another container
My situation is that the postgres and myApp will be containers running on the same computer. I would prefer a non-Docker compose solution.
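A sketch of the --add-host idea mentioned above, assuming a Docker version that supports the host-gateway alias (the alias name host.docker.internal is arbitrary and the app image name is a placeholder):
docker run --add-host=host.docker.internal:host-gateway -d mydockerhubaccount/myapp
# the app could then reach the host-published postgres port at host.docker.internal:5432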

The comment from Truong had me try that approach (again) and I got it working. Here are my steps in case they help someone else. The crux of the problem was needing one container to address another container in a way that was static (didn't change). Using a user-defined network was the answer, because you can name a container and then reference that container by its name.
My steps,
docker network create mynet
docker run --net mynet --name mydb -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
Now the postgres database can be addressed by the hostname mydb, and all of this container's ports are reachable from any other container running on this network.
Now add the front end app,
docker run --net mynet -ti -p 80:80 mydockerhubaccount/myapp
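To verify the setup, a throwaway client container on the same network can reach the database by name. This is only a sketch, reusing the password from the run command above and the psql client that ships in the postgres image:
# connect by container name on the container port 5432; no published port is needed
docker run --rm --net mynet -e PGPASSWORD=qwerty postgres:11 psql -h mydb -U postgres -c 'SELECT 1'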

Related

No access to docker container's exposed port from host

When I start a docker container like this:
sudo docker run -p5432:5432 -d -e POSTGRES_PASSWORD=test_pass -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --name postgres postgres:12
I can see that it has started via sudo docker ps. But when I try to connect to the container from the host using
psql -Utest_user -p5432 -h localhost -d test_db
it just hangs for several minutes and then reports that it wasn't able to connect.
But when I add --net host option like this:
sudo docker run --net host -p5432:5432 -d -e POSTGRES_PASSWORD=test_pass -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --name postgres postgres:12
everything starts working as expected, and I can connect to PostgreSQL with the same psql command.
The same happens with other containers I run, not only ones created from the postgres:12 image.
I can only make requests to them when I set the --net host option.
But I need to publish different ports, for example 2000:5432, so that I can run several postgres containers simultaneously.
What should I do to make it work? My machine runs Ubuntu 20, in case it matters, and Docker is a fresh install, set up yesterday following the instructions on the official site.
You can't connect to the database container because by default it only allows connections from localhost (local machines in the same network).
When you start a docker container it creates its own network (usually in the 172.17.0.0/16 IP range).
When you set the --net host flag, docker takes your host's IP address for its own, and that's why you are able to connect to the database (because then you are both on the same network).
The solution is either to use the --net host flag, or to edit the config file of the database container to allow external connections, which is not recommended.
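As a sketch of the --net host route combined with the requirement of running several instances: with host networking the -p mapping is ignored, so each server needs its own listening port, which the official image accepts as a server flag after the image name (the container name postgres_2000 is made up; port 2000 is taken from the question):
sudo docker run --net host -d -e POSTGRES_PASSWORD=test_pass -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --name postgres_2000 postgres:12 -c port=2000
# then, from the host:
psql -U test_user -p 2000 -h localhost -d test_db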

Docker port mismatch from inside another container

I have a simple setup of 2 docker containers, one for the database and one for the web service.
I start the DB docker container like so:
docker run -d --name dbs.service -p 5434:5432 -e POSTGRES_DB=my_app -e POSTGRES_USER=my_user -e POSTGRES_PASSWORD=my_password postgres:9.6.2
This works fine, and from localhost I can connect to it fine as well (using pgcli for the connection):
pgcli postgres://my_user:my_password@dbs.service:5434/my_app
Now I start the web service container, which works fine
docker run -d --name web.service --link dbs.service:dbs.service web-service:latest
However here's the problem. From inside the container, I cannot connect to DB using port 5434 but I can connect to DB using port 5432.
So I login to container using
docker exec -it web.service bash
Now this works
pgcli postgres://my_user:my_password@dbs.service:5432/my_app
but this does not
pgcli postgres://my_user:my_password@dbs.service:5434/my_app
I can't understand why it can connect to 5432 but not 5434. Any suggestions?
-p 5434:5432
This option publishes the port for access from outside of the docker host to your container. The host will listen on 5434 and route the traffic through to the container's port 5432.
However, container-to-container traffic doesn't use that. Container to container traffic simply needs a common docker network. From there, any container can talk to any other container on the same network. The port used is the container listening port, not the published port on the host. You don't even need to publish the port for it to be accessible by other containers.
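Put differently, with the names from the question, the two paths look like this (a sketch, not output from the original setup):
# from the docker host: use the published port
pgcli postgres://my_user:my_password@localhost:5434/my_app
# from another container on the same docker network: use the container port
pgcli postgres://my_user:my_password@dbs.service:5432/my_app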

'--link' does not seem to work to connect two Docker containers

I would like to run MongoDB in a container, this works:
docker run -p 27017:27017 --name cdt -d mongo
then I want to run a server in another container, like so:
docker run --name foo --link cdt:mongo exec /bin/bash -c "node server.js"
The node.js server attempts to make a mongodb connection to localhost:27017, but it fails to make the connection.
Anyone know why that might happen? Am I not linking the containers correctly?
Note that I can successfully connect to the mongodb container from outside a container, but not from the server inside the "foo" container.
So localhost from a container is always (99.5% of the time) referring to the container itself. This is also 99.5% of the time not what you want. If you're using links like this, you need to change localhost:27017 to mongo:27017 as that's what you're 'mounting' the link as (--link cdt:mongo).
A better option is to use Docker networks instead of links (which are deprecated). So:
$ docker network create my-net
$ docker run --name cdt --net my-net -d mongo
$ docker run --name foo --net my-net exec /bin/bash -c "node server.js"
Now you'd refer to your db via cdt:27017, as the names of the containers become resolvable via DNS on the same network. Note that you don't need to expose a port if you're not intending to connect from the outside world; inter-container connectivity on the same network doesn't require port mapping.
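A quick way to confirm the name resolution, assuming the network and container names above and a mongo image that still ships the legacy mongo shell:
# ping the database by container name from a throwaway container on my-net
docker run --rm --net my-net mongo mongo --host cdt --eval 'db.runCommand({ ping: 1 })'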

List docker database with local databases

I have two docker containers running, following the instructions given here: https://github.com/swri-robotics/bag-database.
I can now look at the database in the browser using: localhost:8080, so it's set up properly and running fine.
So my question is: how can I get the database in the other container, which is running on port 5432, to show up alongside all the other databases I have locally when I run psql -l?
Right now I can only look at it if I enter the container first.
I run it like this:
docker run -d -p 5432:5432 --name bagdb-postgres -v /var/lib/bagdb-postgres:/var/lib/postgresql/data -h 127.0.0.1 -e POSTGRES_PASSWORD=password -e POSTGRES_USER=user -e POSTGRES_DB=bag_database mdillon/postgres:latest
Thanks!
The program is executed in a container. The intention of containers is to create an environment that is sealed off from your host operating system. You added some flags like -p and -v which define specific connections between the host and the container. These are the only connections you have, and you can use docker commands to connect to your container. It is not intended that you can execute code inside a container as if it were not inside a container. It is not exposed to your operating system, and as far as I know there is no way to change that.
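In practice that means going through Docker, or through the published port, rather than expecting the container's databases to appear in the local psql -l listing; a sketch using the container name and credentials from the question:
# list the container's databases by exec-ing into the container
docker exec -it bagdb-postgres psql -U user -l
# or from the host through the published port (it will prompt for the password set above)
psql -h localhost -p 5432 -U user -l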

Docker container not able to access port 5432

I have a Spring application which is trying to connect to a postgres DB (jdbc:postgresql://xxxxxxx:5432/My_DB). It connects fine when I run the jar using the "java -jar app.jar" command. But when I run it inside a docker container it fails to connect. Below is the command I used to run it:
docker run -p 5432:5432 my_image:latest
It looks like 5432 is not open inside the container. I came across a similar post for this, but it didn't give any solutions:
Docker container for Postgres 9.1 not exposing port 5432 to host
Any thoughts on this?
Thanks
You have to specifically use EXPOSE 5432 in your Dockerfile. Otherwise it is not exposed by the container.
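For what it's worth, the port can also be exposed at run time with docker run's --expose flag instead of an EXPOSE line in the Dockerfile; a sketch using the image name from the question:
docker run --expose 5432 -p 5432:5432 my_image:latest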