I have a VPC with a public subnet and a private subnet. The private subnet has an instance for a database server. On the database server I have successfully installed Docker with a Postgres container. However, after getting access to bash with the command:
docker run -it --rm postgres /bin/bash
I now have to create a user that I will link to a database I will create later:
createuser cs594user -P --createdb -h cs594db -U postgres
After using the command above in bash I get the error:
createuser: error: could not connect to database postgres: could not translate host name "cs594db" to address: Name or service not known
I checked that the Docker container was running, and it was. I was able to ping localhost successfully. I am able to access the Postgres console with the command:
docker run -it --rm --link cs594db:postgres postgres psql -h postgres -U postgres
I am at a loss as to how to solve this issue, considering this is my first time working with Docker and Postgres.
After some time reading, editing /etc/hosts, and replacing the hostname in the command with the IP address of the database server instance, I was able to connect and run the command.
For example:
docker run -it --rm --link cs594db:postgres postgres psql -h 10.0.1.24 -U postgres
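A minimal sketch of an alternative, assuming the database container is named cs594db: putting both containers on a user-defined bridge network lets the container name resolve through Docker's built-in DNS, so neither --link nor the /etc/hosts edit is needed.
# create a user-defined network and attach the running database container to it
docker network create pg-net
docker network connect pg-net cs594db
# a client container on the same network can now reach the database by name
docker run -it --rm --network pg-net postgres createuser cs594user -P --createdb -h cs594db -U postgres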
I use the official Postgres image from the Docker Hub
docker pull postgres
I start my container locally on my machine:
docker run --name some-postgres -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password -e POSTGRES_DB=test postgres
The container should create my test database with my user on port 5432.
I get this error when I try to connect to my db:
$ sudo docker run -it postgres /bin/bash
root@ef4407c26a96:/# su postgres
postgres@ef4407c26a96:/$ psql
psql: error: could not connect to server: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
postgres@ef4407c26a96:/$
You need to specify the database you want to connect to. As you've provided POSTGRES_DB=test, use it when connecting with psql:
psql -d test -U user
Also, I was able to connect with psql both as root and postgres user.
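A hedged side note: docker run -it postgres /bin/bash starts a brand-new container in which no PostgreSQL server is running, which is why psql falls back to the missing Unix socket. To reach the database started above, exec into that same container instead, for example:
# open psql inside the container that is actually running the server (named some-postgres above)
docker exec -it some-postgres psql -U user -d test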
I'm currently trying to run Fossology in GitLab CI. Fossology requires an external database that can be set up from a schema created using pg_dump. When I try to use psql I get the error in the title.
At the moment, I have a script that sets up a container running the required version of Postgres (9.6). It then tries to run an .sql script with psql inside the Postgres container via docker exec, which is where the error occurs.
I have tried specifying both a port and a host when issuing the psql statement, neither of which worked. I have tried using localhost, 127.0.0.1, the IP address of the postgres container and the name of the container as a host. I have tried rewriting things in different scripts, but nothing seems to work.
After extensive Googling, many people seem to have the same error message, but not for the same reasons and not usually when using a Docker container to host the database.
When I run the contents of my script on the command line, I do not get this error; the script works fine and I can connect to Fossology. The issue only arises when trying to do the same in GitLab CI.
The sequence of steps (i.e. pasted line by line) that works when using the command line on Mac:
# creates blank database and hosts it in a docker container
docker run -d --name fossdb -p 5432:5432 postgres:9.6
docker cp /fossology_db_schema.sql fossdb:/fossy.sql
docker exec -it fossdb bash
psql postgres -U postgres
# creates user needed for database to work with fossology
create user fossy with password 'fossy';
create database fossology;
grant all privileges on database fossology to fossy;
\q
# builds the fossology database in the hosted blank database
psql fossology < fossy.sql
psql postgres -U postgres
\connect fossology
exit
What I am attempting in GitLab CI:
# creates container with postgres image
docker run -d --name fossdb -p 5432:5432 --network foss-net postgres:9.6
# creates blank database (error occurs here)
docker exec fossdb psql -h $(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fossdb) -f ./createBlank.sql -U postgres
# builds fossology database from schema
docker exec fossdb psql -h $(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fossdb) fossology < ./schema.sql -U postgres
createBlank.sql:
create user fossy with password 'fossy';
create database fossology;
grant all privileges on database fossology to fossy;
Expected results: runs createBlank.sql to create a blank database called fossology, then builds fossology database from schema
Actual results: psql: could not connect to server: Connection refused
Is the server running on host "172.19.0.2" and accepting
TCP/IP connections on port 5432?
Are you sure you set up postgres completely?
A few quick checks you can perform:
Suggestion 1: Did you tell Postgres there is a user with a password? (the createuser command)
https://www.postgresql.org/docs/9.2/app-createuser.html
(Excuse me, you DID do that. Go to suggestion 2.)
Suggestion 2: Did you tell Postgres that user can connect, and how? (TCP or local sockets)
https://www.postgresql.org/docs/9.2/auth-pg-hba-conf.html
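To illustrate suggestion 2 with a hedged sketch: a pg_hba.conf line roughly like the one below would allow password logins over TCP from the Docker bridge network (the 172.19.0.0/16 range is inferred from the error message and is an assumption about your bridge subnet):
# TYPE  DATABASE  USER  ADDRESS          METHOD
host    all       all   172.19.0.0/16    md5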
I am attempting to run both PostgreSQL and pgAdmin in Docker containers. The idea is that the PostgreSQL database should be accessible to any applications I have running on the host machine, and also to pgAdmin.
I am using this command to run PostgreSQL:
docker run -d -e POSTGRES_USER=username -e POSTGRES_PASSWORD=password --name postgres -p 5432:5432 postgres
And to run pgAdmin:
docker run -d -p 1111:1111 --name pgadmin -e "PGADMIN_LISTEN_PORT=1111" -e "PGADMIN_DEFAULT_EMAIL=admin@test.com" -e "PGADMIN_DEFAULT_PASSWORD=test" dpage/pgadmin4
If I go to localhost:1111, I can reach pgAdmin and log in. However, when I try to connect to my local PostgreSQL instance from it, I get no response.
Therefore, I tried to run pgAdmin with access to the host network, using --net=host instead of -p 1111:1111:
docker run -d --net=host --name pgadmin -e "PGADMIN_LISTEN_PORT=1111" -e "PGADMIN_DEFAULT_EMAIL=admin@test.com" -e "PGADMIN_DEFAULT_PASSWORD=test" dpage/pgadmin4
Now, when I try to go to localhost:1111 to connect to pgAdmin, I get no response in my browser.
Docker Compose is a possible solution, as I could link the two containers together so they could access each other without having to worry about ports, but I also need pgAdmin to be able to access PostgreSQL instances on other machines, as well as my local one.
I feel like --net=host is broken in Docker. There's a whole thread here with a lot of confusion.
My setup:
Host: Windows 10
Docker: Docker Desktop Community v2.0.0.3 (31259)
Update
I have now tried using --link postgres on the pgAdmin container, and it allows me to connect to my local instance of PostgreSQL but not to non-local ones. The full command is:
docker run -d -p 1111:1111 --link postgres --name pgadmin -e "PGADMIN_LISTEN_PORT=1111" -e "PGADMIN_DEFAULT_EMAIL=admin@test.com" -e "PGADMIN_DEFAULT_PASSWORD=test" dpage/pgadmin4
The commands were not wrong; only the connection settings in pgAdmin were. The full list of commands is:
For PostgreSQL:
docker run -d -e POSTGRES_USER=username -e POSTGRES_PASSWORD=password --name postgres -p 5432:5432 postgres
And pgAdmin:
docker run -d -p 1111:1111 --name pgadmin -e "PGADMIN_LISTEN_PORT=1111" -e "PGADMIN_DEFAULT_EMAIL=admin@test.com" -e "PGADMIN_DEFAULT_PASSWORD=test" dpage/pgadmin4
From the pgAdmin container, localhost won't work; you need the IP of the PostgreSQL container instead. Run:
docker inspect postgres
Inspect result:
[
    {
        ...
        "NetworkSettings": {
            ...
            "Networks": {
                ...
                "IPAddress": "172.17.0.3",
                ...
            }
        }
    }
]
We're only interested in the IPAddress from the inspect command. This is the IP which pgAdmin should connect to.
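If you prefer not to scan the full JSON, the same --format filter used earlier on this page pulls out just that field (assuming the container is named postgres):
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres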
pgAdmin is also capable of accessing external IPs from your machine.
When you create Docker containers, they are attached to a bridge network. First find the address range of the bridge network; you can use ifconfig to find it. Let's say 172.17.0.5 is the IP of the pgAdmin container. Then create a PostgreSQL user, allow connections from that address in pg_hba.conf (PostgreSQL roles are not host-scoped the way MySQL accounts are), and give database permissions to that user. Then you can connect to the database. Also check whether port 5432 is accessible using telnet.
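A hedged sketch of those steps, with a placeholder role name and password, and assuming the container names and IPs used earlier:
# create the role and grant it access (pgadmin_user and 'secret' are placeholders)
docker exec -it postgres psql -U username -c "CREATE USER pgadmin_user WITH PASSWORD 'secret';"
docker exec -it postgres psql -U username -c "GRANT ALL PRIVILEGES ON DATABASE username TO pgadmin_user;"
# check that the PostgreSQL port answers on the container's bridge IP
telnet 172.17.0.3 5432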
I am running a postgres docker container by using the commands below: (reference: https://docs.docker.com/engine/examples/postgresql_service/)
docker build -t eg_postgresql .
docker run --rm -P --name pg_test eg_postgresql
This works, but the port number is dynamic. I can connect to the database by giving that port number (the one I see in the docker ps output).
I would like to connect to this docker database from Python so I need a static port number.
I tried the parameters below:
-p 127.0.0.1:5432:5432
-p 5432:5432
In that case, the Docker container's port was set to 5432. However, I could not connect to the database; I get a "docker user does not exist" error message.
What is your advice?
I took the Dockerfile from the link you posted. After building the image with
docker build -t eg_postgresql .
I started the container with
docker run --rm -p 5432:5432 --name pg_test eg_postgresql (which binds localhost port 5432 to the container port 5432)
and then I tried to connect with
psql -h localhost -p 5432 -d docker -U docker --password
It works like a charm. If you get a message that the docker user does not exist, please double-check that all steps from the Dockerfile execute successfully during the docker build command, as the creation of the docker user is done in this step:
RUN /etc/init.d/postgresql start &&\
    psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
    createdb -O docker docker
Also make sure that no PostgreSQL server is running on your localhost, so that you can be sure you are connecting to the PostgreSQL inside the container.
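As a hedged way to rule out that last point, you could check whether something on the host already holds port 5432, or simply publish the container on a different host port (15432 here is an arbitrary choice):
# see whether a host process is already listening on 5432 (Linux/macOS)
sudo lsof -i :5432
# or publish the container on a different host port to avoid any clash
docker run --rm -p 15432:5432 --name pg_test eg_postgresql
psql -h localhost -p 15432 -d docker -U docker --password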
I have a Postgres 9.5.4 database running inside a Docker container. I am using the official Postgres images and I have been trying to create backups of my database, but no luck so far. According to the Docker documentation, I was expecting the following command to store the dump.tar file in the /releases/ folder on the host machine (outside the temporary Docker container):
docker run -i --rm --link my_postgres:postgres --volume /releases/:/tmp/ postgres:latest pg_dumpall > /tmp/dump.tar
But this throws the following error:
pg_dumpall: could not connect to database "template1": could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Any idea what could be wrong?
Actually, I have found the solution:
docker run -i --rm --link my_container:postgres -v /releases/:/tmp/ postgres:latest bash -c 'exec pg_dumpall -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres > /tmp/my_backup.tar'
Where the Docker arguments are:
-i For interactive processes, like a shell, bash, etc.
--rm For deleting the container once the backup is finished
--link For linking this temporary container to my_container, which contains the currently running Postgres database.
-v /releases/:/tmp/ Creates a shared volume. The content inside the /tmp/ folder in the container (in this case my_backup.tar) is automatically visible in the /releases/ folder on the host machine.
And the pg_dumpall arguments inside the bash command:
pg_dumpall Exports the whole PostgreSQL cluster into a script file.
-h "$POSTGRES_PORT_5432_TCP_ADDR" Obtains the IP address from the environment variable set by the link.
-p "$POSTGRES_PORT_5432_TCP_PORT" Obtains the port from the environment variable set by the link (these variables are defined only in the temporary container we just created, as a result of linking it to my_container).
-U postgres The username used to connect to the database.
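For what it's worth, a shorter alternative (a hedged sketch, assuming the running database container is named my_container and has a postgres superuser) is to exec into the existing container and let the host shell capture the output; pg_dumpall emits a plain SQL script, so a .sql extension fits better than .tar:
# run the dump inside the already-running container; the redirect is evaluated on the host
docker exec my_container pg_dumpall -U postgres > /releases/my_backup.sql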