How do I run a PostgreSQL command inside a Docker container?
I tried using this line:
docker-compose run db psql pfe
But I get an error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I need to run the command in the already-running container, so that's why I need to use docker-compose exec instead of docker-compose run. I also need to specify the user by adding the -U flag to the command:
docker-compose exec db psql pfe -U admin
db: the service name in docker-compose.yml
pfe: the database name
admin: the database user
Works perfectly!
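For a one-off statement you don't even need an interactive session; psql's -c flag runs a single command and exits. A small sketch, assuming the same service, database and user names as above:
# run a single SQL statement against the running db service (sketch)
docker-compose exec db psql -U admin -d pfe -c "SELECT current_database();"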
I am trying the recommended commands on my localhost to learn how to play with Docker.
The exact command is:
docker run -it --rm postgres psql
The error message I get is:
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
In fact, the file .s.PGSQL.5432 does not exist in the container, while it does exist on the host machine.
So, what is wrong with my reasoning/command?
You should think of containers as, conceptually, separate machines. Separate from the host and separate from each other.
When you run psql without any parameters, as you do here, it will look for a Postgres database running on the local machine, on port 5432. But since psql is running in a container, it looks for the database inside the container. And there isn't one. That's what the error message is trying to tell you.
To get it to work, you need to specify the -h parameter on the psql command to tell it where the database is located. To get the address of the host machine, you can add --add-host=host.docker.internal:host-gateway to the docker run command. It's customary to call the host host.docker.internal.
So you end up with the command
docker run -it --rm --add-host=host.docker.internal:host-gateway postgres psql -h host.docker.internal
which should then let you connect to the postgres database on the host machine.
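Depending on how authentication is configured on the host, you may also need to name the database user explicitly; the postgres user below is only an assumption about your host setup:
# same command, additionally specifying the user on the host (assumed here to be postgres)
docker run -it --rm --add-host=host.docker.internal:host-gateway postgres psql -h host.docker.internal -U postgres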
I can run SQL commands inside a Postgres Docker container using psql:
psql -U postgres -c "SELECT * FROM USERS;"
Now, I want to run this command using docker-compose. Here's what I have so far but no luck:
docker-compose run db psql -U postgres -c "SELECT * FROM USERS;"
# psql: error: could not connect to server: could not connect to server: No such file or directory
# Is the server running locally and accepting
# connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Any idea what I'm doing wrong or is this not possible?
I think you can't. See the postgres image's Dockerfile:
CMD ["postgres"]
The final CMD is postgres, which starts the PostgreSQL server.
If you use docker-compose run db psql -U postgres -c "SELECT * FROM USERS;", see this:
the command passed by run overrides the command defined in the service configuration. For example, if the web service configuration is started with bash, then
docker-compose run web python app.py overrides it with python app.py.
So, the original postgres command never gets a chance to run, which means the PostgreSQL server never starts, and that is why you get the connection error.
I don't know if the following alternative meets your requirement, but just FYI:
First, use docker-compose up -d db to start the service.
Second, use docker-compose exec db psql -U postgres -c "select * from pg_user;" for example to run your command.
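For reference, a minimal compose file this would work against could look like the sketch below; the image tag and credentials are assumptions, not taken from your setup:
# docker-compose.yml (sketch)
version: "3"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres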
I have a Postgres 9.6 instance on OSX that is up and running, but Sqitch throws the following error when I try sqitch status in a working directory with a sqitch.conf:
$ sqitch status
# On database db:pg:my_db
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
This is odd because I have already checked that Postgres is running by checking its status and logging in directly:
$ pg_isready
/tmp:5432 - accepting connections
$ psql -U postgres
psql (9.6.19)
Type "help" for help.
postgres=#
This seems to be just a problem with sqitch.
For more detail, this Postgres was installed via brew install postgresql@9.6 and is located in the default directory:
$ which psql
/usr/local/opt/postgresql@9.6/bin/psql
Regarding Sqitch, I have tried both installing with Homebrew and using Docker (my current approach). The docker install is based on the official instructions:
docker pull sqitch/sqitch
curl -L https://git.io/JJKCn -o sqitch && chmod +x sqitch
./sqitch status
I tried setting psql explicitly as well with sqitch config --user engine.pg.client /usr/local/opt/postgresql@9.6/bin/psql
Regardless, I still get the following with any sqitch command:
$ sqitch status
# On database db:pg:my_db
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I'm not sure what I'm missing and could use some input. Thanks in advance.
I don't know what's up with the Brew-installed Sqitch, but when you run it from the Docker image, that container does not have Postgres running inside it, so connections to localhost will fail. You instead need to connect to Postgres outside the container. If you're running it on your Mac, this is straightforward to do: Make sure Postgres is listening on the IP ports, not just a Unix domain socket, and specify host.docker.internal as the host name, like so:
sqitch status db:pg://host.docker.internal/my_db
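To illustrate the first part of that (listening on the IP ports, not just a socket), the relevant settings live in postgresql.conf on the host; the values below are a sketch for a typical Homebrew install, so adjust the path and addresses to your machine:
# postgresql.conf on the host (sketch; restart Postgres after editing)
listen_addresses = 'localhost'   # or '*' to listen on all interfaces
port = 5432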
I'm currently trying to run Fossology in GitLab CI. Fossology requires an external database that can be set up from a schema created using pg_dump. When I try to use psql, I get the error in the title.
At the moment, I have a script that sets up a container running the required version of Postgres (9.6). It then tries to run an .sql script via psql in the Postgres container using docker exec. Upon doing so, it gets the error in the title.
I have tried specifying both a port and a host when issuing the psql statement, neither of which worked. I have tried using localhost, 127.0.0.1, the IP address of the postgres container and the name of the container as a host. I have tried rewriting things in different scripts, but nothing seems to work.
After extensive google searching, many people seem to have the same error message but not for the same reasons and not usually when using a docker container to host the database.
When I run the contents of my script on the command line, I do not get this error; the script works fine and I can connect to Fossology. The issue only arises when trying to do the same in GitLab CI.
The sequence of steps (i.e. pasted line by line) that works when using the command line on Mac:
# creates blank database and hosts it in a docker container
docker run -d --name fossdb -p 5432:5432 postgres:9.6
docker cp /fossology_db_schema.sql fossdb:/fossy.sql
docker exec -it fossdb bash
psql postgres -U postgres
# creates user needed for database to work with fossology
create user fossy with password 'fossy';
create database fossology;
grant all privileges on database fossology to fossy;
\q
# builds the fossology database in the hosted blank database
psql fossology < fossy.sql
psql postgres -U postgres
\connect fossology
exit
What I am attempting in GitLab CI:
# creates container with postgres image
docker run -d --name fossdb -p 5432:5432 --network foss-net postgres:9.6
# creates blank database (error occurs here)
docker exec fossdb psql -h $(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fossdb) -f ./createBlank.sql -U postgres
# builds fossology database from schema
docker exec fossdb psql -h $(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fossdb) fossology < ./schema.sql -U postgres
createBlank.sql:
create user fossy with password 'fossy';
create database fossology;
grant all privileges on database fossology to fossy;
Expected results: runs createBlank.sql to create a blank database called fossology, then builds fossology database from schema
Actual results: psql: could not connect to server: Connection refused
Is the server running on host "172.19.0.2" and accepting
TCP/IP connections on port 5432?
Are you sure you set up postgres completely?
A few quick checks you can perform:
Suggestion 1: Did you tell Postgres there is a user with a password? (the createuser command)
https://www.postgresql.org/docs/9.2/app-createuser.html
(Excuse me, you DID do that. Go to suggestion 2.)
Suggestion 2: Did you tell Postgres that user can connect, and how? (TCP or local sockets)
https://www.postgresql.org/docs/9.2/auth-pg-hba-conf.html
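For suggestion 2, a pg_hba.conf entry that lets a user connect over TCP might look like the line below; the user name and network range are assumptions, so adjust them to your Docker network:
# pg_hba.conf (sketch): allow md5 password authentication over TCP from the Docker network
host    all    fossy    172.19.0.0/16    md5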
I have a working Postgres Dockerfile that I modify, and unfortunately after applying the modifications the Postgres container stops working as expected. I'd like to ask you for an explanation of what I'm doing wrong.
Working example
Here's the Postgres Dockerfile that works and which I modify:
# Use ubuntu image
FROM ubuntu
# Install database
RUN apt-get update && apt-get install -y postgresql-9.3
# Switch to postgres user.
USER postgres
# Create database and user with all privileges on the database.
RUN /etc/init.d/postgresql start && \
psql --command "CREATE DATABASE docker;" && \
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
psql --command "GRANT ALL PRIVILEGES ON DATABASE docker TO docker;"
# Allow remote connections to the database.
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
I build it like that:
docker build --tag postgres-image .
Then I create a container:
docker run -d -it -p 32768:5432 --name=postgres postgres-image
And I connect with database:
psql -h localhost -p 32768 -d docker -U docker --password
First modification
I don't need to have any volumes because I'm going to use a data-only container that will store all the Postgres data. When I remove the line:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and do all the steps as in the working example, I get the following error after entering the password in the last step:
psql: FATAL: the database system is starting up
FATAL: the database system is starting up
So the question is: Why do I need VOLUME instruction in the Dockerfile?
Second modification
This modification doesn't include the first one; both modifications are independent.
The parameters used in the CMD instruction point to the default Postgres data directory and configuration file, so I wanted to simplify it by setting CMD to the command I always use to start Postgres:
service postgres start
After setting CMD to:
CMD ["service", "postgres", "start]
and doing all the steps as in the working example, I get the following error after entering the password in the last step:
psql: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 32768?
The question is: why does a command that works on my host system not work in a Docker container?
I'm not sure about the first problem. It may be that Postgres doesn't like running on top of Docker's union filesystem.
The second problem is simply that a container exits when its main process ends. The command "service postgres start" runs, starts Postgres in the background, then immediately exits, and the container halts. The first version works because Postgres stays running in the foreground.
But why are you doing this? Why not just use the official Postgres image?
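If you go that route, a minimal sketch with the official image could look like this; the tag, names and password are placeholders rather than your actual values:
# start the official image; it handles initialization and keeps Postgres in the foreground
docker run -d --name postgres -p 32768:5432 -e POSTGRES_USER=docker -e POSTGRES_PASSWORD=docker -e POSTGRES_DB=docker postgres:9.3
# then connect exactly as before
psql -h localhost -p 32768 -d docker -U docker --password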