I have a simple setup of 2 docker containers, one for the database and one for the web service.
I start the DB docker container like so:
docker run -d --name dbs.service -p 5434:5432 -e POSTGRES_DB=my_app -e POSTGRES_USER=my_user -e POSTGRES_PASSWORD=my_password postgres:9.6.2
This works fine. And from the host, I can connect to it fine as well (using pgcli for the connection):
pgcli postgres://my_user:my_password@dbs.service:5434/my_app
Now I start the web service container, which works fine
docker run -d --name web.service --link dbs.service:dbs.service web-service:latest
However, here's the problem: from inside the web.service container, I cannot connect to the DB on port 5434, but I can connect on port 5432.
So I log in to the container using
docker exec -it web.service bash
Now this works
pgcli postgres://my_user:my_password@dbs.service:5432/my_app
but this does not
pgcli postgres://my_user:my_password@dbs.service:5434/my_app
I can't understand why it can connect to 5432 but not 5434. Any suggestions?
-p 5434:5432
This option publishes the port for access from outside of the docker host to your container. The host will listen on 5434 and route the traffic through to the container's port 5432.
However, container-to-container traffic doesn't use that; it simply needs a common Docker network. From there, any container can talk to any other container on the same network. The port used is the container's listening port, not the published port on the host. You don't even need to publish the port for it to be accessible by other containers.
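For example, a minimal sketch of your setup on a user-defined network (the network name appnet is arbitrary, and --link is no longer needed):
docker network create appnet
docker run -d --name dbs.service --net appnet -p 5434:5432 -e POSTGRES_DB=my_app -e POSTGRES_USER=my_user -e POSTGRES_PASSWORD=my_password postgres:9.6.2
docker run -d --name web.service --net appnet web-service:latest
Inside web.service, pgcli postgres://my_user:my_password@dbs.service:5432/my_app should then work, while connections from the host still go to port 5434.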
When I start a Docker container like this:
sudo docker run -p5432:5432 -d -e POSTGRES_PASSWORD=test_pass -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --name postgres postgres:12
I can see that it has started with sudo docker ps. But when I try to connect to the container from the host using
psql -Utest_user -p5432 -h localhost -d test_db
it just hangs for several minutes and then reports that it wasn't able to connect.
But when I add the --net host option like this:
sudo docker run --net host -p5432:5432 -d -e POSTGRES_PASSWORD=test_pass -e POSTGRES_USER=test_user -e POSTGRES_DB=test_db --name postgres postgres:12
everything starts working as expected, and I can connect to PostgreSQL with the same psql command.
The same happens with other containers I run, not only ones created from the postgres:12 image.
I can only reach them when I set the --net host option.
But I need to publish different ports, for example 2000:5432, so that I can run several postgres containers simultaneously.
What should I do to make it work? My machine runs Ubuntu 20, in case it matters, and Docker is a fresh install done yesterday following the instructions from the official site.
You can't connect to the database container because, by default, it only accepts connections from localhost (machines in the same network).
When you start a Docker container, it gets its own network (usually an IP in the 172.17.0.0/16 range).
When you set the --net host flag, Docker uses your host's IP address as the container's own, and that's why you are able to connect to the database (because then you are both on the same network).
The solution is either to use the --net host flag, or to edit the database container's configuration to allow external connections, which is not recommended.
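For reference, if you do go the configuration route, the relevant PostgreSQL settings usually look like this (a generic sketch only; the official postgres image already ships with listen_addresses = '*', so adjust to your own image and security requirements):
# postgresql.conf: listen on all interfaces, not just localhost
listen_addresses = '*'
# pg_hba.conf: allow password-authenticated connections from any address
host    all    all    0.0.0.0/0    md5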
All the questions on SO about this seem to refer to the opposite case: creating a postgres container and connecting to it from the Mac host. But I am trying to do the reverse, without success. I have Postgres running on localhost on my Mac host machine, and despite setting port flags, I cannot get code inside my container to talk to it (it talks to a remote Postgres just fine).
docker run -it -p 5000:5000 -p 5432:5432 yard-stats
Then inside docker:
telnet 0.0.0.0 5432
Trying 0.0.0.0...
telnet: Unable to connect to remote host: Connection refused
The same happens with telnet 127.0.0.1 or telnet localhost: the connection is refused.
Edit: I also tried with flag --network="host", which did not change anything except break inbound connections to the container on localhost:5000 as well.
If you are using Docker for Mac, you can use the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
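For example, something like this (assuming your app reads the database host and port from environment variables; the variable names here are just for illustration):
docker run -it -p 5000:5000 -e DB_HOST=host.docker.internal -e DB_PORT=5432 yard-stats
The code inside the container then connects to host.docker.internal:5432 instead of localhost:5432.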
You can also use --network="host" with your docker run command to run the container on the host network. Then the localhost interface inside the container will be the same as the localhost interface of the host machine. So you should be able to use localhost:5432 to connect to PostgreSQL. You can remove the -p option, as it has no effect when running with --network="host".
docker run -it --network=host yard-stats
I am trying to make a portable solution to having my application container connect to a postgres container. By 'portable' I mean that I can give the user two docker run commands, one for each container, and they will always work together.
I have a postgres docker container running on my local PC, and I run it like this,
docker run -p 5432:5432 -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
and I am able to access it from a python flask app, using the address 127.0.0.1:5432.
I put the python app in a docker container as well, and I am having trouble connecting to the postgres container.
Address 127.0.0.1:5432 does not work.
Address 172.17.0.2:5432 DOES work (172.17.0.2 is the address of the docker container running postgres). However I consider this not portable because I can't guarantee what the postgres container IP will be.
I am aware of the --add-host flag, but it also asks for a host IP, which I want to be localhost (127.0.0.1). Despite several search hits on --add-host, I wasn't able to get it working in a way that keeps the final docker run commands the same on any computer they are run on.
I also tried this: docker container port accessed from another container
My situation is that postgres and myApp will be containers running on the same computer. I would prefer a solution that does not use Docker Compose.
The comment from Truong had me try that approach (again), and I got it working. Here are my steps in case they help someone else. The crux of the problem was needing one container to address another container in a way that was static (didn't change). Using a user-defined network was the answer, because you can name a container and then reference it by that name; Docker's embedded DNS resolves the name to the container's IP.
My steps,
docker network create mynet
docker run --net mynet --name mydb -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
Now the postgres database is reachable under the host name mydb, and all of this container's ports are reachable from any other container running in this network.
Now add the front end app,
docker run --net mynet -ti -p 80:80 mydockerhubaccount/myapp
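Inside the app, the connection string can then use the container name as the host, for example (the user and database here are just the postgres image defaults, so adjust them to your setup):
postgres://postgres:qwerty@mydb:5432/postgres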
This community is my last resort for this problem, as I have been fighting with this for several hours now.
I have a Go app running in one container and a postgres DB running in another container. I am able to connect to the postgres DB from my Go application as long as only postgres is in a container and my Go app is running locally as usual. However, when my Go app tries to access postgres from within a Docker container, I get the following error:
dial tcp 127.0.0.1:8080: connect: connection refused
Below I try to provide enough information, but will gladly add more if requested.
I have 2 docker containers running with the following ports:
go application, port info: 8081/tcp -> 0.0.0.0:8081
postgres db, port info: 5432/tcp -> 0.0.0.0:8080
I am running the go app with:
docker run -it --rm --name gographqlserver --link postgresdb:postgres -d -p 8081:8081 gogogopher;
and the postgres db with:
docker run -it --rm --name postgresdb -e POSTGRES_PASSWORD=hello123 -d -p 8080:5432 postgresimage;
Both containers start without any problems.
I have also tried connecting both containers within a docker network, which did not help.
Help would be immensely appreciated!
You are using the localhost address within the container, which is not the same as your host's address. You should do one of the following instead:
Use your actual host's IP from the app's container
Use the postgresdb container's IP with the native port (5432). You can discover this IP using docker inspect postgresdb.
Use postgresdb as the host name and the native port (5432) when connecting both containers to the same network, as sketched below
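A minimal sketch of that last option, reusing the names from the question (the connection string details are assumptions, so adjust the user and database to your setup):
docker network create mynetwork
docker run -d --rm --name postgresdb --net mynetwork -e POSTGRES_PASSWORD=hello123 postgresimage
docker run -d --rm --name gographqlserver --net mynetwork -p 8081:8081 gogogopher
# in the Go app, use something like:
# postgres://postgres:hello123@postgresdb:5432/postgres?sslmode=disable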
I've got a docker container which is supposed to run a (HTTP) service.
This container should be able to connect to PostgreSQL running on the host machine (so it's not part of the container). The container uses the host's network settings:
docker run -e "DBHOST=localhost:5432" -e "DB=somedb" -e "AUTH=user:pw" -i -t --net="host" myservice
I'm using Mac OS X, so Docker is running in a VirtualBox VM. I guess I need port forwarding to make this work. I've tried to configure that:
VBoxManage controlvm "default" natpf1 "rule1,tcp,,5432,,5432";
But this doesn't work. If I start up the service, all I get is a connection refused message and the service cannot connect to Postgres.
Postgres is running on port 5432 on the host machine. "default" is the name of the VM created by the Docker installer.
What am I doing wrong? Please help!
I've had success with this using the --add-host flag, which adds an entry to /etc/hosts in your container. Boot2docker and docker-machine both assign an IP you can use to reach your localhost from inside a container, so you just want to add an entry that points back to it.
With boot2docker, where the default host IP is 192.168.59.3, you can just do docker run --add-host=my_localhost:192.168.59.3 ...
With docker-machine, I think you'll need to look up your localhost's mapped IP in VirtualBox, and then you can do the same: docker run --add-host=my_localhost:[localhost_mapped_ip_from_docker] ...
Try setting that up and then connecting to your Postgres instance through my_localhost. Make sure you also configure access and allowed inbound IPs correctly in Postgres, because if it isn't listening on the address the container reaches it through (or on 0.0.0.0), it won't work no matter what.
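Applied to the original command, that would look roughly like this (assuming the boot2docker default host IP of 192.168.59.3, and dropping --net="host" so the container uses its own network):
docker run -e "DBHOST=my_localhost:5432" -e "DB=somedb" -e "AUTH=user:pw" --add-host=my_localhost:192.168.59.3 -i -t myservice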