Docker - How can I run the psql command in the postgres container? - postgresql

I would like to use psql in the postgres image in order to run some queries on the database.
But unfortunately, when I attach to the postgres container, I get an error saying the psql command is not found...
It is a bit of a mystery to me how I can run PostgreSQL queries or commands in the container.
How do I run the psql command in the postgres container? (I am new to the Docker world.)
I use Ubuntu as the host machine, and I did not install Postgres on the host machine; I use the postgres container instead.
docker-compose ps
Name                    Command                          State   Ports
---------------------------------------------------------------------------------------------
yiialkalmi_app_1        /bin/bash                        Exit 0
yiialkalmi_nginx_1      nginx -g daemon off;             Up      443/tcp, 0.0.0.0:80->80/tcp
yiialkalmi_php_1        php-fpm                          Up      9000/tcp
yiialkalmi_postgres_1   /docker-entrypoint.sh postgres   Up      5432/tcp
yiialkalmi_redis_1      docker-entrypoint.sh redis ...   Up      6379/tcp
Here are the containers:
docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED        STATUS       PORTS                         NAMES
315567db2dff   yiialkalmi_nginx   "nginx -g 'daemon off"   18 hours ago   Up 3 hours   0.0.0.0:80->80/tcp, 443/tcp   yiialkalmi_nginx_1
53577722df71   yiialkalmi_php     "php-fpm"                18 hours ago   Up 3 hours   9000/tcp                      yiialkalmi_php_1
40e39bd0329a   postgres:latest    "/docker-entrypoint.s"   18 hours ago   Up 3 hours   5432/tcp                      yiialkalmi_postgres_1
5cc47477b72d   redis:latest       "docker-entrypoint.sh"   19 hours ago   Up 3 hours   6379/tcp                      yiialkalmi_redis_1
And this is my docker-compose.yml:
app:
  image: ubuntu:16.04
  volumes:
    - .:/var/www/html
nginx:
  build: ./docker/nginx/
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app
  volumes:
    - ./docker/nginx/conf.d:/etc/nginx/conf.d
php:
  build: ./docker/php/
  expose:
    - 9000
  links:
    - postgres
    - redis
  volumes_from:
    - app
postgres:
  image: postgres:latest
  volumes:
    - /var/lib/postgres
  environment:
    POSTGRES_DB: project
    POSTGRES_USER: project
    POSTGRES_PASSWORD: project
redis:
  image: redis:latest
  expose:
    - 6379

docker exec -it yiialkalmi_postgres_1 psql -U project -W project
Some explanation
docker exec -it
The command to run a command in a running container. The -it flags open an interactive TTY; basically, they attach you to the container's terminal. If you want to open a bash shell instead, you can do this:
docker exec -it yiialkalmi_postgres_1 bash
yiialkalmi_postgres_1
The container name (you could use the container ID instead, which in your case would be 40e39bd0329a).
psql -U project -W project
The command to execute in the running container.
-U project: connect as the user project.
-W: tell psql that the user should be prompted for the password at connection time. This parameter is optional; without it, there is an extra connection attempt which will usually find out that a password is needed, see the PostgreSQL docs.
project: the database you want to connect to. There is no need for the -d parameter to mark it as the dbname when it is the first non-option argument, see the docs: -d "is equivalent to specifying dbname as the first non-option argument on the command line."
These are specified by you here:
environment:
  POSTGRES_DB: project
  POSTGRES_USER: project
  POSTGRES_PASSWORD: project
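For example, you can also run a one-off, non-interactive query with psql's -c flag. The container, user, and database names come from the setup above; the query itself is just an illustration:
docker exec -it yiialkalmi_postgres_1 psql -U project -d project -c 'SELECT version();'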

This worked for me:
Go to bash:
docker exec -it <container-name> bash
From bash:
psql -U <databaseUserName> <databaseName>
Or just use this one-liner:
docker exec -it <container-name> psql -U <databaseUserName> <databaseName>
Hope this helps!

After the Postgres container is up and running, open a bash shell inside it:
docker exec -it <container name or ID> bash
Switch to the postgres user:
su - postgres
Then run:
psql
This opens the psql terminal for Postgres.
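Once you are in psql, the usual meta-commands work. A few common ones (standard psql, nothing Docker-specific; "project" stands in for your database name):
\l          -- list databases
\c project  -- connect to the project database
\dt         -- list tables in the current database
\q          -- quit psql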

If you need to restore a database into a container, you can do this:
docker exec -i app_db_1 psql -U postgres < app_development.back
Don't forget to add -i.
:)
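For completeness, the matching backup can be taken the same way in the other direction. A sketch, assuming a plain-format dump; <db_name> is a placeholder for your database:
docker exec app_db_1 pg_dump -U postgres <db_name> > app_development.back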

You can enter the postgres container using docker-compose by typing the following:
docker-compose exec postgres bash
Here postgres is the name of the service; replace it with the name of the PostgreSQL service in your docker-compose file.
If you have many docker-compose files, you have to specify the docker-compose.yml file you want to execute the command with. Use the following command instead:
docker-compose -f <specific docker-compose.yml> exec postgres bash
For example, if you want to run the command with a docker-compose file called local.yml, the command would be:
docker-compose -f local.yml exec postgres bash
Then use the psql command and specify the database name with the -d flag and the username with the -U flag:
psql -U <database username you want to connect with> -d <database name>
Baam! You are in.
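You can also skip the bash step and run psql in one shot through docker-compose, with the same placeholders as above:
docker-compose exec postgres psql -U <database username you want to connect with> -d <database name>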

If you have a running "postgres" container:
docker run -it --rm --link postgres:postgres postgres:9.6 sh -c "exec psql -h \$POSTGRES_PORT_5432_TCP_ADDR -p \$POSTGRES_PORT_5432_TCP_PORT -U postgres"
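Note that --link is a legacy Docker feature. On current Docker, a user-defined network gives the same name resolution. A sketch, assuming your running container is named postgres; pg-net is a network name invented for this example:
docker network create pg-net
docker network connect pg-net postgres
docker run -it --rm --network pg-net postgres:9.6 psql -h postgres -U postgres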

We can enter the container with a terminal (sh or bash) by using:
docker exec -it <container id | name> <sh | bash>
(Note that it is docker exec, not docker run: exec targets an already running container.) Assuming it is sh,
psql -U postgres
will work.

In a Dockerfile, you can start Postgres and run psql commands at build time:
RUN /etc/init.d/postgresql start &&\
    psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
    createdb -O docker docker

Just fired up a local test; not sure if -c is what you were after from the CLI.
docker run -it --rm --name psql-test-connection -e PGPASSWORD=1234 postgres psql -h kubernetes.docker.internal -U awx -c "\conninfo"
You are connected to database "awx" as user "awx" on host "kubernetes.docker.internal" (address "192.168.65.4") at port "5432".

In many common setups, the PostgreSQL port is published out to the host.
postgres:
  ports:
    - '12345:5432'
If this is the case, you don't need to do anything Docker-specific to connect to the database. You can use the psql client directly on your host system pointing to the first ports: number.
psql -h localhost -p 12345 -U project
This approach only requires psql or another ordinary PostgreSQL client to be installed on the host, and that the database container be configured with ports: making it accessible from outside Docker. (The ports: are not necessary for inter-container communication, and a production-oriented setup could reasonably omit them.) It does not require the ability to run docker commands, with their attendant security concerns, and it avoids multiple layers of additional command quoting from a docker exec sh -c '...' sequence.
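Equivalently, everything can be passed as a single libpq connection URI, using the credentials and mapped port from the example above:
psql postgres://project:project@localhost:12345/project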

Without using an external terminal, you can run SQL commands within the container CLI:
psql -d [database-name] -U [username] -W
** Don't forget to replace [database-name] with your db name and [username] with your actual username.
Flags:
-d : Specify the database name you want to connect to
-U : Specify the username as whom you want to connect
-W : Prompt for the password

Related

POSTGRES_DB and POSTGRES_USER don't work in Docker run

So I am running the following command:
docker run --name psql-instance -d -p 5432:5432 -e POSTGRES_DB=mydb -e POSTGRES_USER=root -e POSTGRES_PASSWORD=pass postgres
This creates the container.
However when I run:
docker exec -it psql-instance psql -U root
I get the following error:
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: database "root" does not exist
I've seen many similar questions, and they all say it's docker-compose causing the error for them and that docker run works fine, but for me docker run doesn't work either.
How can I fix this?
You need to tell psql to connect to the mydb database. By default, psql assumes the database name is the same as the user name (here root), which is why it complains that database "root" does not exist:
docker exec -it psql-instance psql -U root mydb
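For the record, two equivalent forms: the database can also be named explicitly with the -d flag, or via the standard PGDATABASE environment variable:
docker exec -it psql-instance psql -U root -d mydb
docker exec -it -e PGDATABASE=mydb psql-instance psql -U root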

Docker compose cannot run command service not running

I created a docker-compose.yml, whose content you can find below. I navigated to the folder where the file resides and ran the command:
docker-compose up -d
This was shown:
Starting postgres ... done
Then I ran this query:
docker-compose ps
Result:
Name       Command                         State    Ports
---------------------------------------------------------
postgres   docker-entrypoint.sh postgres   Exit 1
Now I wanted to run a command:
docker exec -it postgres psql -h localhost -p 54320 -U robert
This is what I got:
Error response from daemon: Container ae1565a84bcf0b3662b47d4f277efd2830273554b6bcf4437129e33b31c88b35 is not running
Is my container not running, or what? Please help.
docker-compose.yml:
version: "3"
services:
  # Create a service named db.
  db:
    # Use the Docker image postgres. This will pull the newest release.
    image: "postgres"
    # Give the container the name postgres. You can change it to something else.
    container_name: "postgres"
    # Set up the username, password, and database name. You can change these values.
    environment:
      - POSTGRES_USER=robert
      - POSTGRES_PASSWORD=robert
      - POSTGRES_DB=mydb
    # Map port 54320 on localhost to port 5432 in the container. You can change the ports to fit your needs.
    ports:
      - "54320:5432"
    # Set a volume so that the database is not lost after shutting down the container.
    # I used the name postgres-data but you can change it to something else.
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
Can you try this?
docker run -it postgres psql -h localhost -p 54320 -U robert
$ docker exec --help
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Since your container has the status Exit, you can't use docker exec.
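To find out why it exited, check the container logs first (standard Docker commands; db is the service name from your file):
docker logs postgres
docker-compose logs db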
Can you use this docker-compose file?
version: "3"
volumes:
  postgres_app: ~
services:
  # Create a service named postgres.
  postgres:
    image: "postgres"
    environment:
      POSTGRES_USER: robert
      POSTGRES_PASSWORD: robert
      POSTGRES_DB: "mydb"
    volumes:
      - "postgres_app:/var/lib/postgresql/data"
    ports:
      - "54320:5432"
    restart: always
And then run this command:
docker-compose exec postgres psql -U robert -d mydb
I hope this will help! I executed this file on my computer.
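A side note on your original attempt: the "54320:5432" mapping applies on the host, not inside the container. So once the container stays up, you can also connect from the host with a locally installed psql:
psql -h localhost -p 54320 -U robert -d mydb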

Docker container can't connect to CircleCI postgres database

I am trying to set up a CircleCI test. I have created a database in CircleCI, and I have a docker container which needs to connect to the database, but it can't. Inside my docker container is a script which, before it does anything else, runs pg_isready; this cannot connect to the database. Here's my CircleCI job definition:
postgres_tests:
  docker:
    - image: circleci/python:3.7
    - image: circleci/postgres:9.6.2-alpine
      environment:
        POSTGRES_USER: postgres
        POSTGRES_DB: my_test
  steps:
    - setup_remote_docker:
        docker_layer_caching: true
    - attach_workspace:
        at: /tmp/workspace
    - run:
        name: Install awscli docker-squash
        working_directory: /
        command: sudo pip3 install awscli docker-squash
    - run: eval `aws ecr get-login --no-include-email --region eu-west-1`
    - checkout
    - run: echo 'export PATH=/usr/lib/postgresql/9.6/bin/:$PATH' >> $BASH_ENV
    - run: sudo apt-get update && sudo apt-get install -y postgresql-client
    - run: psql -h localhost -U postgres --command "ALTER USER postgres WITH PASSWORD 'password';"
    - run:
        name: run_pg_tests
        working_directory: /tmp/workspace
        command: |
          /tmp/workspace/sql/t/run_tests.sh
The run_tests.sh is a script which pulls my docker image from the company repo and then does a docker run on that image.
I have read that other people have issues where the database isn't ready, so to test this I added pg_isready before the docker run.
So my script looks like this:
DB_HOST=`psql -X -A -h localhost -U postgres -p 5432 -t -c "select inet_server_addr()"`
DB_PORT=5432
DB_NAME=my_test
DB_USER=postgres
DB_PASSWORD=password

pg_isready -h "${DB_HOST}" -p "${DB_PORT}"

# restore database from supplied image
docker run \
  -e SAPIENTIA_DB_HOST=$DB_HOST \
  -e SAPIENTIA_DB_PORT=$DB_PORT \
  -e SAPIENTIA_DB_NAME=$DB_NAME \
  -e SAPIENTIA_DB_PASSWORD=$DB_PASSWORD \
  -e SAPIENTIA_DB_USER=$DB_USER \
  $EMPTY_DB_FULL_PATH \
  path_to_file/file
I have also tried setting the DB_HOST variable directly to 'localhost'; the result is exactly the same.
Here's what I get as a result:
127.0.0.1:5432 - accepting connections
127.0.0.1:5432 - no response
I have also tried re-running the test with SSH and connecting myself. Same result: I can connect to the database, but if I then run docker exec and try to connect from inside the docker container, it can't connect.
I'm pretty stumped here, so any help would be useful.
EDIT: I've found this documentation page about your issue:
It is not possible to start a service in remote docker and ping it directly from a primary container or to start a primary container that can ping a service in remote docker. To solve that, you’ll need to interact with a service from remote docker, as well as through the same container
That line is not 100% clear to me, but I understand it to mean that we should manually start, inside remote Docker itself, the containers we want to communicate with. Therefore:
- run:
    name: run_pg_tests
    working_directory: /tmp/workspace
    command: |
      docker run -d --name postgres --env POSTGRES_USER=postgres --env POSTGRES_DB=my_test circleci/postgres:9.6.2-alpine
      /tmp/workspace/sql/t/run_tests.sh
Since the postgres container is no longer accessible through the local network, your readiness check could be docker exec postgres pg_isready
You can then set your DB_HOST to postgres in your run script.
Original answer:
I'm not well versed into CircleCI configuration, but my guess would be that your Docker container you run manually is not attached to the same network as the containers launched by CircleCI.
From what I see in the documentation, you can specify the hostname of the service container:
The name the container is reachable by. By default, container services are accessible through localhost
So maybe try something like this:
- image: circleci/postgres:9.6.2-alpine
  name: postgres
  environment:
    POSTGRES_USER: postgres
    POSTGRES_DB: my_test
You can then set your DB_HOST to postgres in your run script.
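With the service reachable by name, the IP discovery via psql at the top of run_tests.sh is no longer needed. A minimal sketch of that change (pg_isready ships with the postgresql-client package installed earlier in the job):
DB_HOST=postgres
pg_isready -h "${DB_HOST}" -p "${DB_PORT}"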

Docker container Postgres connection error

I have been trying out a docker container for postgres. So far I have not been able to connect to the database inside the container.
My steps to recreate the problem are below.
Dockerfile:
FROM postgres
ENV POSTGRES_DB db
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY db_schema.sql /docker-entrypoint-initdb.d/
I built the Dockerfile like so:
$ docker build -t db_con .
And created a container:
$ docker run -it -p 5432:5432 --name test_db db_con /bin/bash
A view of the running container is below:
$ docker ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED       STATUS     PORTS                    NAMES
82347f1114c4   db      "docker-entrypoint..."   3 hours ago   Up 2 sec   0.0.0.0:5432->5432/tcp   test_db
I inspected the container for the address info:
$ docker inspect test_db
--extract start--
"Networks": {
    "bridge": {
        "Gateway": "172.17.0.1",
        "IPAddress": "172.17.0.2",
    }
}
--extract end--
Now, I have tried from within the container, but I have NOT been successful.
I have tried all the commands below, and each fails with the error shown.
root@82347f1114c4:/# psql -U postgres -h 0.0.0.0 -p 5432 -d db
root@82347f1114c4:/# psql -U postgres -h 172.17.0.1 -p 5432 -d db
root@82347f1114c4:/# psql -U postgres -h 172.17.0.2 -p 5432 -d db
**Response for all of the above:**
psql: could not connect to server: Connection refused
Is the server running on host "0.0.0.0" and accepting
TCP/IP connections on port 5432?
I will be delighted if anyone can point me in the right direction. I've hit a wall here; any assistance is much appreciated.
It looks like you override the default postgres CMD with /bin/bash.
Why do you put /bin/bash at the end of the command?
docker run -it -p 5432:5432 --name test_db db_con /bin/bash
Try to execute
docker run -it -p 5432:5432 --name test_db db_con
Also, Postgres will only become available once the db dump (your db_schema.sql) has been restored.
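If the container then stays up, you can verify that the init script ran by connecting with the credentials from your Dockerfile (db and postgres come from its ENV lines; \dt lists the tables created by db_schema.sql):
docker exec -it test_db psql -U postgres -d db -c '\dt'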
You need to add a new rule in your pg_hba.conf:
nano /etc/postgresql/9.3/main/pg_hba.conf
Add:
host all all [Docker Server IP]/16 md5
Next, you need to uncomment the following line in postgresql.conf:
listen_addresses = '*' # what IP address(es) to listen on;
Now restart your Postgres service and try again.

How do I back up a database in Docker

I'm running my app using docker-compose with the below yml file:
postgres:
  container_name: postgres
  image: postgres:${POSTGRES_VERSION}
  volumes:
    - postgresdata:/var/lib/postgresql/data
  expose:
    - "5432"
  environment:
    - POSTGRES_DB=42EXP
    - POSTGRES_USER=${POSTGRES_USER}
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
node:
  container_name: node
  links:
    - postgres:postgres
  depends_on:
    - postgres

volumes:
  postgresdata:
As you can see here, I'm using a named volume to manage postgres state.
According to the official docs, I can back up a volume like below:
docker run --rm -v postgresdata:/var/lib/postgresql/data -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/postgresql/data
Some other tutorials suggested I use the pg_dump function provided by postgres for backups:
pg_dump -Fc database_name_here > database.bak
I guess I would have to go inside the postgres container to perform this function and mount the backup directory onto the host.
Is one approach better/preferable than the other?
To run pg_dump you can use the docker exec command:
To back up:
docker exec -u <your_postgres_user> <postgres_container_name> pg_dump -Fc <database_name_here> > db.dump
To drop the db (don't do this in production; for test purposes only!!!):
docker exec -u <your_postgres_user> <postgres_container_name> psql -c 'DROP DATABASE <your_db_name>'
To restore:
docker exec -i -u <your_postgres_user> <postgres_container_name> pg_restore -C -d postgres < db.dump
Also, you can use the docker-compose analog of exec. In that case you can use the short service name (postgres) instead of the full container name (composeproject_postgres).
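For example, a compose-based dump and restore might look like the following. A sketch, assuming ${POSTGRES_USER} resolves to postgres and using the 42EXP database from your file; -T disables TTY allocation so the shell redirections work:
docker-compose exec -T -u postgres postgres pg_dump -Fc 42EXP > db.dump
docker-compose exec -T -u postgres postgres pg_restore -C -d postgres < db.dump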
Since you have
expose:
  - "5432"
you can run
pg_dump -U <user> -h localhost -Fc <db_name> > 1.dump
pg_dump connects to port 5432 to make the dump; since Postgres listens on it in the container, you will dump the db from the container.
You can also run
docker-compose exec -T postgres sh -c 'pg_dump -cU $POSTGRES_USER $POSTGRES_DB' | gzip > netbox.sql.gz
where netbox.sql.gz is the name of the backup file.
Restoring would be:
gunzip -c netbox.sql.gz | docker-compose exec -T postgres sh -c 'psql -U $POSTGRES_USER $POSTGRES_DB'