I am trying some recommended commands on my localhost to learn to play with Docker.
The exact command is:
docker run -it --rm postgres psql
The error message i get is :
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
In fact, the file .s.PGSQL.5432 does not exist in the container, while it does exist
on the host machine.
So, what is wrong with my reasoning/command?
You should think of containers as, conceptually, separate machines. Separate from the host and separate from each other.
When you run psql without any parameters, as you do here, it looks for a Postgres server on the local machine, by default via the Unix socket /var/run/postgresql/.s.PGSQL.5432. But since psql is running in a container, it looks for the server inside the container, and there isn't one. That's what the error message is telling you.
To get it to work, you need to specify the -h parameter on the psql command to tell it where the database is located. To get the address of the host machine, you can add --add-host=host.docker.internal:host-gateway to the docker run command. It's customary to call the host host.docker.internal.
So you end up with the command
docker run -it --rm --add-host=host.docker.internal:host-gateway postgres psql -h host.docker.internal
which should then let you connect to the postgres database on the host machine.
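Note that the Postgres server on the host must also accept TCP connections from the Docker bridge: listen_addresses in postgresql.conf has to cover more than localhost, and pg_hba.conf needs a matching host line. A fuller sketch of the command, with a placeholder user and password:
docker run -it --rm \
  --add-host=host.docker.internal:host-gateway \
  -e PGPASSWORD=secret \
  postgres \
  psql -h host.docker.internal -U postgres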
I have a private subnet for my EC2 instance and use a NAT Gateway for it to access the internet. I created a user-data script for my EC2 instance in which I write a docker-compose.yml with a postgres service defined. After defining this, I have:
docker-compose up -d
# I use Terraform for the ${PGPASSWORD} template variable
export PGPASSWORD="${PGPASSWORD}"
psql -h localhost -U my_user -p 5432 -d my_db < ./mydump.sql
Inside the EC2 instance I already have the mydump.sql file.
The problem is that psql gives me an error like this:
psql: error: connection to server at "localhost" (127.0.0.1), port 5432 failed: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
I can't understand why, although my postgres container is up and logs LOG: database system is ready to accept connections. Can someone help with this?
Maybe the problem relates to the user-data script being executed as root and the session closing, which interrupts the connection for psql. Or maybe it relates to Docker port forwarding. I can't work out the actual reason.
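One thing I have considered: docker-compose up -d returns as soon as the container starts, which can be before the server inside it is ready to accept connections, so maybe psql simply runs too early. A sketch of a wait loop I could add before the restore (pg_isready ships with the Postgres client tools):
until pg_isready -h localhost -p 5432; do sleep 1; done
psql -h localhost -U my_user -p 5432 -d my_db < ./mydump.sql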
This is seemingly the same as this issue, though I thought I'd provide a simple example:
docker run -it \
-v /pg_socket_on_host:/pg_socket_in_container \
-e PGPASSWORD=${PGPASSWORD} \
postgres \
psql -h /pg_socket_in_container -U postgres postgres
Where the path /pg_socket_on_host is a directory containing the file .s.PGSQL.5432. I've tried a few different versions of this, but I keep ending up with the same result:
psql: error: connection to server on socket "/pg_socket_in_container/.s.PGSQL.5432" failed: Connection refused
Is the server running locally and accepting connections on that socket?
Is there a reason that this is a problem with Docker?
Follow-up:
I ensured that the permissions and the user (name and id, as well as group and id) for the host and container path/volume line up based on this post, but I still get the same error. I am able to connect to the socket on the host machine from the host machine. I am also able to connect to the host via host.docker.internal from the docker container. Any other ideas about debugging strategies?
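The debugging steps I can still think of (sketches, assuming a default server setup): ask the server where it actually puts its socket, and inspect what the container sees at the mount point:
# on the host: confirm the server's socket directory
psql -h /pg_socket_on_host -U postgres -c 'SHOW unix_socket_directories;'
# from a container: list what arrived through the bind mount
docker run -it --rm -v /pg_socket_on_host:/pg_socket_in_container postgres \
  ls -l /pg_socket_in_container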
I pulled the pgAdmin 4 Docker image onto my Debian Linux machine and followed the process specified here to configure the container. I run docker run -p 8000:8000 --env-file ./pgadmin_docker_env.list -d dpage/pgadmin4. For clarity, the pgadmin_docker_env.list specified in the command contains the environment variables PGADMIN_DEFAULT_EMAIL=my_email@example.com and PGADMIN_DEFAULT_PASSWORD=my_password. With the container running in detached mode, I open localhost:8000 in my web browser to access pgAdmin 4 in server mode. However, I was unable to create a server connection to the localhost postgres database from inside pgAdmin. I got the following error after entering the connection parameters (shown in the screenshot attached below):
Unable to connect to server: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections?
UPDATE
I used host.docker.internal in place of localhost, but I still got an error:
Unable to connect to server: could not translate host name "host.docker.internal" to address: Name does not resolve
You can skip a step if you've already done it.
Using psql, alter the authentication credentials of the default postgres user, postgres, with the following commands:
sudo -u postgres psql
ALTER USER postgres PASSWORD 'newPassword';
Optionally, you can also create a user for your current account as a superuser with CREATE ROLE user_name WITH LOGIN SUPERUSER CREATEDB CREATEROLE REPLICATION;
Modify /etc/postgresql/13/main/pg_hba.conf and add the following line to the end of the file:
host all all 0.0.0.0/0 md5
Modify the pgadmin_docker_env.list file to include your choice port
PGADMIN_LISTEN_PORT=8000
Stop the previously running pgadmin container with docker stop pgadmin and remove it with docker rm pgadmin. Then run docker run --env-file ./pgadmin_docker_env.list --network="host" --name pgadmin dpage/pgadmin4 to start the container in host network mode. See more on host network mode.
Open localhost:8000 in your web browser and create a server connection using the same connection parameters as in the screenshot.
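As a quick sanity check from the host before going through pgAdmin, a sketch with placeholder credentials:
psql -h localhost -p 5432 -U postgres -W -c 'SELECT version();'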
localhost in this scenario refers to the PgAdmin container, where there is not a Postgres instance running.
You want to connect to Postgres running on the host machine from the container (from what I can tell, anyway), so use host.docker.internal instead of localhost.
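On Linux, host.docker.internal is not defined inside containers by default, which matches the Name does not resolve error in the update. It has to be mapped explicitly when starting the container; a sketch reusing the run command from the question:
docker run -p 8000:8000 --env-file ./pgadmin_docker_env.list \
  --add-host=host.docker.internal:host-gateway -d dpage/pgadmin4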
I'm currently trying to run Fossology in GitLab CI. Fossology requires an external database that can be set up from a schema created using pg_dump. When I try to use psql I get the error in the title.
At the moment, I have a script that sets up a container running the required version of Postgres (9.6). It then tries to run an .sql script via psql in the postgres container via docker exec. Upon doing so I get the error in the title.
I have tried specifying both a port and a host when issuing the psql statement, neither of which worked. I have tried using localhost, 127.0.0.1, the IP address of the postgres container and the name of the container as a host. I have tried rewriting things in different scripts, but nothing seems to work.
After extensive google searching, many people seem to have the same error message but not for the same reasons and not usually when using a docker container to host the database.
When I run the contents of my script on the command line, I do not get this error; the script works fine and I can connect to Fossology. The issue only arises when trying to do the same in GitLab CI.
The sequence of steps (pasted line by line) that works when using the command line on a Mac:
# creates blank database and hosts it in a docker container
docker run -d --name fossdb -p 5432:5432 postgres:9.6
docker cp /fossology_db_schema.sql fossdb:/fossy.sql
docker exec -it fossdb bash
psql postgres -U postgres
# creates user needed for database to work with fossology
create user fossy with password 'fossy';
create database fossology;
grant all privileges on database fossology to fossy;
\q
# builds the fossology database in the hosted blank database
psql fossology < fossy.sql
psql postgres -U postgres
\connect fossology
exit
What I am attempting in GitLab CI:
# creates container with postgres image
docker run -d --name fossdb -p 5432:5432 --network foss-net postgres:9.6
# creates blank database (error occurs here)
docker exec fossdb psql -h $(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fossdb) -f ./createBlank.sql -U postgres
# builds fossology database from schema
docker exec fossdb psql -h $(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fossdb) fossology < ./schema.sql -U postgres
createBlank.sql:
create user fossy with password 'fossy';
create database fossology;
grant all privileges on database fossology to fossy;
Expected results: runs createBlank.sql to create a blank database called fossology, then builds fossology database from schema
Actual results: psql: could not connect to server: Connection refused
Is the server running on host "172.19.0.2" and accepting
TCP/IP connections on port 5432?
Are you sure you set up postgres completely?
A few quick checks you can perform:
Suggestion 1: did you tell Postgres there is a user with a password? (createuser command)
https://www.postgresql.org/docs/9.2/app-createuser.html
(Excuse me, you DID do that. Go to suggestion 2.)
Suggestion 2: did you tell Postgres that the user can connect, and how? (TCP or local sockets)
https://www.postgresql.org/docs/9.2/auth-pg-hba-conf.html
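For suggestion 2, a sketch of what that can look like (paths vary by installation, and the official postgres Docker image typically already listens on all interfaces):
# pg_hba.conf: allow password-authenticated TCP connections from any address
host    all    all    0.0.0.0/0    md5
# postgresql.conf: listen on all interfaces, not just localhost
listen_addresses = '*'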
How to run a Postgresql command inside a docker container?
I tried using this line:
docker-compose run db psql pfe
But I get an error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I need to run the command in the already-running container, so that's why I need to use docker-compose exec instead of docker-compose run. I also need to specify the user by adding the -U flag to the command:
docker-compose exec db psql pfe -U admin
db: the service name (as defined in docker-compose.yml)
pfe: the database name
admin: the database user
WORKS VERY WELL!
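For reference, the above assumes a docker-compose.yml along these lines (a sketch; the db service, admin user, and pfe database names are the ones used above, and the password is a placeholder):
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret  # placeholder
      POSTGRES_DB: pfe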