How to start Phoenix using PostgreSQL from a container? - postgresql

I tried:
$ alias psql="docker exec -ti pg-hello-phoenix sh -c 'exec psql -h localhost -p 5432 -U postgres'"
$ mix ecto.create
but got:
** (RuntimeError) could not find executable psql in path, please guarantee it is available before running ecto commands
lib/ecto/adapters/postgres.ex:106: Ecto.Adapters.Postgres.run_with_psql/2
lib/ecto/adapters/postgres.ex:83: Ecto.Adapters.Postgres.storage_up/1
lib/mix/tasks/ecto.create.ex:34: anonymous fn/2 in Mix.Tasks.Ecto.Create.run/1
(elixir) lib/enum.ex:604: Enum."-each/2-lists^foreach/1-0-"/2
(elixir) lib/enum.ex:604: Enum.each/2
(mix) lib/mix/cli.ex:58: Mix.CLI.run_task/2
(elixir) lib/code.ex:363: Code.require_file/2
I also tried creating a wrapper script at /usr/local/bin/psql:
#!/usr/bin/env bash
docker exec -ti pg-hello-phoenix sh -c "exec psql -h localhost -p 5432 -U postgres $@"
and then:
$ sudo chmod +x /usr/local/bin/psql
check:
$ which psql
/usr/local/bin/psql
$ psql --version
psql (PostgreSQL) 9.5.1
run again:
$ mix ecto.create
** (Mix) The database for HelloPhoenix.Repo couldn't be created, reason given: cannot enable tty mode on non tty input.
The container with PostgreSQL is running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
013464d7227e postgres "/docker-entrypoint.s" 47 minutes ago Up 47 minutes 5432/tcp pg-hello-phoenix

I was able to do this by going into config/<env>.exs. In my case it was development, so config/dev.exs. I left the hostname as localhost but added another setting, port: 32768, because that's the host port Docker exposed.
Make sure to put a space between port: and the number, and make it an integer, not a string. Otherwise it won't work.
It worked as usual after that. The natural assumption is that the username/password also match what is configured on the container.
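For reference, a minimal sketch of what that block in config/dev.exs might look like (the HelloPhoenix names come from the question; the credentials and database name are assumptions):
config :hello_phoenix, HelloPhoenix.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "hello_phoenix_dev",
  hostname: "localhost",
  port: 32768 # an integer, with a space after the colon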

As for me, I did the following:
sudo docker exec -it postgres-db bash
After I got the interactive shell, I connected with:
psql -h localhost -p 5432 -U postgres
Then I created my db manually:
CREATE DATABASE cars_dev;
Then finally:
mix ecto.migrate
Everything worked fine after that :) hope it helps.
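The same steps can also be scripted without an interactive shell. A rough sketch, reusing the postgres-db container and cars_dev database from above (-i without -t avoids the tty error from the first question):
sudo docker exec -i postgres-db psql -h localhost -p 5432 -U postgres -c "CREATE DATABASE cars_dev;"
mix ecto.migrate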

Related

bash script to have a postgres DB in a docker container

I'm having trouble creating a Postgres DB using this bash script:
#! /bin/bash
docker pull postgres
docker run --name coverage-postgres -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres
export CONTAINER_ID=$(sudo docker ps -a | grep coverage-postgres | head -c12)
sleep 2s
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "create user coverage_user with password 'password';"
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "create database coverage owner coverage_user;"
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "grant all privileges on database coverage to coverage_user;"
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "\c coverage coverage_user" # it seems useless...
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "CREATE TABLE IF NOT EXISTS postal_codes (id,...;"
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "CREATE UNIQUE INDEX ... ;"
# exit from container
exit
# restart container
docker start $CONTAINER_ID
In particular, the database is created, the user is created, and the table is created, but... the table ends up not in the coverage db but in the postgres db.
I've tried to add "CREATE TABLE coverage.postal_codes" but coverage is a db and not a schema and it didn't work.
I've tried to use psql -U coverage_user, but the system tells me that database coverage_user doesn't exist.
So of course I thought "I have to specify the database, of course!". Then I tried psql -U coverage, with the name of the database, but this time the system makes fun of me and, changing its mind, tells me that the role coverage doesn't exist.
I tried a workaround: within the command -c "\c coverage coverage_user" I concatenated the other commands this way:
-c "\c coverage coverage_user; CREATE TABLE...; CREATE UNIQUE INDEX...;"
but, of course, this didn't work either.
I make a premise: I know there are other ways to do this but I would like to understand what I am missing with these specific commands.
Solution
#! /bin/bash
docker pull postgres
docker run --name coverage-postgres -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres
export CONTAINER_ID=$(sudo docker ps -a | grep coverage-postgres | head -c12)
sleep 2s
docker exec -it $CONTAINER_ID psql -U postgres -c "create user coverage_user with password 'password';"
sleep 0.5
docker exec -it $CONTAINER_ID psql -U postgres -c "create database coverage owner coverage_user;"
sleep 0.5
docker exec -it $CONTAINER_ID psql -U postgres -c "grant all privileges on database coverage to coverage_user;"
sleep 0.5
docker exec -it $CONTAINER_ID psql -U coverage_user -c "CREATE TABLE IF NOT EXISTS postal_codes (id int)" coverage
Explanation
https://www.postgresql.org/docs/9.2/app-psql.html
psql [option...] [dbname [username]]
Just add the dbname after the options, and set the user with the -U option. You can also pass the dbname as the -d argument. Note that each docker exec starts a fresh psql session, so the \c in one invocation has no effect on the next, and when no dbname is given psql connects to the database named after the user. That is why the table ended up in the postgres db.
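For example, these two invocations are equivalent ways to connect as coverage_user to the coverage database (the \dt meta-command just lists its tables):
docker exec -it $CONTAINER_ID psql -U coverage_user coverage -c "\dt"
docker exec -it $CONTAINER_ID psql -U coverage_user -d coverage -c "\dt"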

Docker exec - cannot call postgres with environment variables

I have multiple environment variables defined on my Postgres container, such as POSTGRES_USER. The container is running and I want to connect to Postgres from the command line using exec.
I'm unable to connect with the following:
docker exec -it <CONTAINER-ID> psql -U $POSTGRES_USER -d <DB NAME>
I understand that the variable is defined on the container and the following does work:
docker exec -it <CONTAINER-ID> bash -c 'psql -U $POSTGRES_USER -d <DB NAME>'
Is there a way for me to execute the psql command directly from docker exec and call the environment variable on the container?
docker exec -it <CONTAINER-ID> psql -U ????? -d <DB NAME>
Depending on your use case, instead of passing a user to the psql command, you could define the environment variable PGUSER on the container at boot time.
That way it becomes the default user for psql whenever you do not specify one, so you won't even have to pass it in order to connect:
$ docker run --name postgres -e POSTGRES_PASSWORD=bar -e POSTGRES_USER=foo -e PGUSER=foo -d postgres
e250f0821613a5e2021e94772a732f299874fc7a16b340ada4233afe73744423
$ docker exec -ti postgres psql -d postgres
psql (12.4 (Debian 12.4-1.pgdg100+1))
Type "help" for help.
postgres=#
The reason this isn't working for you is that when you run the command
docker exec -it <CONTAINER-ID> psql -U $POSTGRES_USER -d <DB NAME>
You're running it on your host. So, $POSTGRES_USER refers to the environment variable on your host, not your container. That variable isn't set on your host.
The second command
docker exec -it <CONTAINER-ID> bash -c 'psql -U $POSTGRES_USER -d <DB NAME>'
works because you're passing the command in the quotes to the shell in the container, where that variable actually exists.
The method in the second command is the way to do what you're trying to do, unless you set the variable on your host somehow and make sure it has the same value as it does in your container.
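The contrast is easiest to see with the two quoting styles side by side (my-pg and mydb are placeholder names):
# single quotes: $POSTGRES_USER survives the host shell and expands inside the container
docker exec -it my-pg bash -c 'psql -U $POSTGRES_USER -d mydb'
# double quotes: the host shell expands $POSTGRES_USER before docker even runs
docker exec -it my-pg bash -c "psql -U $POSTGRES_USER -d mydb"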
The easiest way to do that would be to fix the value at image build time. Note, though, that writing ENV POSTGRES_USER=${POSTGRES_USER} in a Dockerfile does not read the host environment by itself: ${POSTGRES_USER} there is resolved from a build argument, so you need to declare ARG POSTGRES_USER and pass the host value in with docker build --build-arg.
If you set the variable this way, then your command will work.
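A sketch of that build-time wiring (the my-postgres tag is an assumption):
# Dockerfile fragment: ARG takes the value at build time, ENV persists it into containers
FROM postgres
ARG POSTGRES_USER
ENV POSTGRES_USER=${POSTGRES_USER}
Then build the image while forwarding the variable from the host shell:
docker build --build-arg POSTGRES_USER="$POSTGRES_USER" -t my-postgres .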

What is the difference between psql and postgresql-client?

I have access to two postgres database servers on different hosts. On server A I access the client using:
psql -h localhost -U user -W db_name
db_name=>
And on the second host B I access the client using (docker image):
docker run -it --rm --network fiware_default jbergknoff/postgresql-client\
postgresql://postgres:password@postgres-db:5432/postgres
postgres=#
Now I need to load the database dump file copied from A (now on B) using:
psql -U postgres -d targetdb -f sourcedb.sql
However, the psql command isn't recognised on the second host B; I cannot run psql commands on B at all.
What, then, is the difference between psql and postgresql-client here?
The docker image postgresql-client has psql defined as an entrypoint. See https://github.com/jbergknoff/Dockerfile/blob/master/postgresql-client/Dockerfile#L3 .
So you basically ran psql psql and psql does not understand that. Just leave psql out and start straight with the args.
You can read up on CMD vs ENTRYPOINT in "What is the difference between CMD and ENTRYPOINT in a Dockerfile?" or at http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/ .
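Putting that together, a sketch of loading the dump through the client image, assuming sourcedb.sql sits in the current directory (the mount, working directory, and targetdb name are assumptions):
docker run -it --rm --network fiware_default -v "$(pwd)":/work -w /work \
  jbergknoff/postgresql-client \
  "postgresql://postgres:password@postgres-db:5432/targetdb" -f sourcedb.sql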

docker-machine ssh command for mongodump

Setup for the problem:
Create a data volume container
$ docker create --name dbdata -v /dbdata mongo /bin/true
Start mongo in a container linked to the data volume container
$ docker run -d --name mongo --volumes-from dbdata mongo
Verify you can connect to mongo using the mongo client
$ docker run -it --link mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
The problem:
The docker-machine ssh takes a host and a command argument to execute on the host. I'd like to execute the following mongodump command, which works once I ssh into the docker host:
$ docker-machine ssh demo
root@demo:~# docker run --rm --link mongo:mongo -v $HOME:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
2015-09-15T16:34:02.676+0000 writing test.samples to /backup/test/samples.bson
2015-09-15T16:34:02.678+0000 writing test.samples metadata to /backup/test/samples.metadata.json
2015-09-15T16:34:02.678+0000 done dumping test.samples (1 document)
2015-09-15T16:34:02.679+0000 writing test.system.indexes to /backup/test/system.indexes.bson
However, using the docker-machine ssh command to execute the above command in a single step doesn't work for me:
$ docker-machine ssh demo -- docker run --rm --link mongo:mongo -v $HOME:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
SSH cmd error!
command: docker run --rm --link mongo:mongo -v /Users/tony:/backup mongo bash -c mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR
err : exit status 1
output : 2015-09-15T16:53:07.717+0000 Failed: error connecting to db server: no reachable servers
So if the container running the mongodump command can't connect to the mongo container, I figure there's probably an issue with --host $MONGO_PORT_27017_TCP_ADDR (it should be passed as-is into the container, so is premature expansion producing an empty string?), but I'm a bit stumped trying to get it right. Any ideas are appreciated.
Update: I'm one step closer. The following appears to execute the command correctly, although the data isn't written to the system and the session hangs:
$ docker-machine ssh demo -- $(docker run --rm --link mongo:mongo -v $HOME:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR')
2015-09-15T18:02:03.347+0000 writing test.samples to /backup/test/samples.bson
2015-09-15T18:02:03.349+0000 writing test.samples metadata to /backup/test/samples.metadata.json
2015-09-15T18:02:03.349+0000 done dumping test.samples (1 document)
2015-09-15T18:02:03.350+0000 writing test.system.indexes to /backup/test/system.indexes.bson
The question asked for a solution based on docker-machine ssh, but since no one responded, I'll answer the question myself with what is a better solution anyway.
As Nathan LeClaire (@upthecyberpunks) suggested to me over Twitter, the better solution is to avoid the hassle altogether and simply run a container to execute the mongodump command.
$ docker run \
--rm \
--link mongo:mongo \
-v /root:/backup mongo bash \
-c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
Not technically required for the answer, but the resulting test db backup file can then be transferred from the docker host machine to your current directory via docker-machine scp:
$ docker-machine scp -r dev:/root/test .
Since I cannot add a comment to the original nice answer, just a little explanation here: $MONGO_PORT_27017_TCP_ADDR should be the IP of your machine. For example, my VirtualBox VM's IP is 100.100.100.10, so the last line should be:
-c 'mongodump --out /backup --host 100.100.100.10' or
-c 'mongodump --out /backup --host 100.100.100.10:27017'.
If the host field is not added, chances are that you will encounter an error like:
*** Failed: error connecting to db server: no reachable servers.
And thanks again to the original answer ^_^.

How to generate a Postgresql Dump from a Docker container?

I would like to have a way to enter the PostgreSQL container and get a data dump from it.
Use the following command from a UNIX or a Windows terminal:
docker exec <container_name> pg_dump <db_name> > backup
The following command will dump only inserts from all tables:
docker exec <container_name> pg_dump --column-inserts --data-only <db_name> > inserts.sql
I have a container named postgres with a mounted volume -v /backups:/backups.
To back up the gzipped DB my_db I use:
docker exec postgres pg_dump -U postgres -F t my_db | gzip >/backups/my_db-$(date +%Y-%m-%d).tar.gz
Now I have
user#my-server:/backups$ ls
my_db-2016-11-30.tar.gz
Although the mountpoint solution above looked promising, the following is the only solution that worked for me after multiple iterations:
docker run -it -e PGPASSWORD=my_password postgres:alpine pg_dump -h hostname -U my_user my_db > backup.sql
What was unique in my case: I have a password on the database that needs to be passed in; I needed to specify the image tag (alpine); and finally the host's version of the psql tools differed from the Docker versions.
This one, which uses the container name instead of the database schema, works for me:
docker exec {container_name} pg_dump -U {user_name} > {backup_file_name}
In my case, the database name, user, and password are assumed to be declared in docker-compose.yaml.
I hope it helps someone.
For those who struggled with permissions, I used the following command successfully to perform my dump:
docker exec -i MY_CONTAINER_NAME /bin/bash -c "PGPASSWORD=MY_PASSWORD pg_dump -Fc -h localhost -U postgres MY_DB_NAME" > /home/MY_USER/db-$(date +%d-%m-%y).backup
This will mount the current directory and pass through your environment variables (note that the > redirect runs in the host shell, so the dump lands in your current directory):
docker run -it --rm \
--env-file <(env) \
-w /working \
--volume $(pwd):/working \
postgres:latest /usr/bin/pg_dump -Fc -h localhost -U postgres MY_DB_NAME > "$(pwd)"/db-$(date +%d-%m-%y).backup
Another workaround is to start PostgreSQL with a mountpoint to the location of the dump in Docker,
like docker run -v <location of the files>.
Then perform a docker inspect on the running container:
docker inspect <container_id>
You will find a "Volumes" tag inside with the corresponding location. Go to that location and you will find all the postgresql/mysql files. It worked for me; let us know if it worked for you too.
Good luck
To run the container with the Postgres user and password, you need the credentials preconfigured as container environment variables.
For example:
docker run -it --rm --link <container_name>:<data_container_name> -e POSTGRES_PASSWORD=<password> postgres /usr/bin/pg_dump -h <data_container_name> -d <database_name> -U <postgres_username> > dump.sql