How to recreate a Docker container? - postgresql

I'm new to Docker and I'm using Docker Compose. For some reason my postgres container is now broken.
I'm trying this command: docker-compose up --no-deps --build db
And it's returning this:
MacBook-Pro-de-Javier:goxo.api javier$ docker-compose up --no-deps --build db
Recreating testapi_db_1
Attaching to testapi_db_1
db_1 | LOG: database system was shut down at 2017-04-20 17:19:05 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
Whenever I try to connect (with the same connection arguments as before) I get this:
db_1 | FATAL: database "test" does not exist
This is part of my docker-compose.yml
version: "3"
services:
db:
image: postgres
ports:
- "3700:5432"
environment:
POSTGRES_HOST: "127.0.0.1"
POSTGRES_DB: "test"
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "postgres1"
tmpfs:
- /tmp
- /var/run/postgresql
volumes:
- db:/var/lib/postgresql/data
- ./config/postgres-initdb.sh:/docker-entrypoint-initdb.d/initdb.sh
Any ideas on how I can recreate the docker image so it is how it was before? It was working as when it was first created.
Thanks
EDIT 1: If I run docker-compose build && docker-compose up
Terminal throws this:
db uses an image, skipping
EDIT 2: This command does not recreate the database either:
docker-compose up --force-recreate --abort-on-container-exit --build db

Have you tried to rebuild your single postgres container?
docker build -t <postgrescontainer> .
or with docker-compose:
docker-compose up --build
to recreate the images and not use the old 'used' ones.
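If rebuilding does not help, note that your compose file mounts the named volume db at /var/lib/postgresql/data, and Postgres only runs initdb on an empty data directory, so the broken cluster survives every rebuild. A minimal reset sketch, assuming your Compose project is named testapi so the volume is called testapi_db (check docker volume ls for the real name) -- this permanently deletes the database data:
docker-compose down            # stop and remove the project's containers
docker volume ls               # find the data volume; testapi_db is an assumption
docker volume rm testapi_db    # delete it so initdb runs again on the next start
docker-compose up --build db   # the database and init script run fresh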

You can have a look at the images on your system with
docker images
which should show your image, and then
docker history --no-trunc your_image
should show the commands used for the creation of the image.
This may be insufficient, as when you see something like
ADD * /opt
you do not know exactly which files were copied, and what those files contained.
There is also dockerfile-from-image
https://github.com/CenturyLinkLabs/dockerfile-from-image
which seems to have had a bug recently (I do not know if it is fixed)

Related

Docker runs PostgreSQL in "trust" mode

I am currently learning docker and trying to run a docker container with the PostgreSQL database. I managed that once, and everything seemed to work fine. After some time, I tried to run another docker container with almost identical settings; however, it didn't go as expected. My problem is that now, whenever I try to run a PostgreSQL container, initdb initializes the database in "trust" mode and accepts any connections without the password.
So far, I've tried running the command from the console:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -p 32000:5432 -d postgres:14.5-alpine
As well as running the docker-compose.yaml:
services:
  db:
    container_name: Test_container
    image: postgres:14.5-alpine
    restart: unless-stopped
    ports:
      - "32000:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: mysecretpassword
Additionally, I tried ordering the tags differently, different images, and different values; I also cleaned up Docker (removing all containers, images, and volumes) and even reinstalled Docker. However, whenever I inspect the logs of a newly created container, I get:
sh: locale: not found
2022-08-16 09:35:50.709 UTC [30] WARNING: no usable system locales were found
performing post-bootstrap initialization ... ok
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
syncing data to disk ... ok
One of my assumptions was that docker, for some reason, doesn't see the password I am specifying and thus starts the database in "trust" mode. However, if I add
environment:
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  POSTGRES_DB: test_db
to the docker-compose.yaml, the test_db database is created.
I'd appreciate any suggestions on how to make docker run PostgreSQL containers not in a "trust" mode as it should by default if the password is specified.
Juan González pointed out:
Note 1: The PostgreSQL image sets up trust authentication locally so you may notice a password is not required when connecting from localhost (inside the same container). However, a password will be required if connecting from a different host/container.
So, according to the docs, I updated my docker-compose.yaml file:
environment:
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  POSTGRES_DB: test_db
  POSTGRES_HOST_AUTH_METHOD: scram-sha-256
  POSTGRES_INITDB_ARGS: --auth-host=scram-sha-256
and once again tried swapping the order and/or removing POSTGRES_INITDB_ARGS, but the database still runs in "trust" mode.
As stated in Postgres' DockerHub documentation:
Note 1: The PostgreSQL image sets up trust authentication locally so you may notice a password is not required when connecting from localhost (inside the same container). However, a password will be required if connecting from a different host/container.
However, if you don't want trust mode even for local connections, you can set the POSTGRES_HOST_AUTH_METHOD environment variable to override this behavior. More info in the documentation mentioned above.
As @jjanes pointed out in the comment to my question, the solution is to add POSTGRES_INITDB_ARGS: --auth=scram-sha-256, which sets the auth method for both local and host connections.
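For reference, a minimal sketch of the resulting environment block, with the values carried over from the compose file above; note that initdb only runs on an empty data directory, so any existing volume must be removed for this to take effect:
environment:
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  POSTGRES_DB: test_db
  POSTGRES_INITDB_ARGS: --auth=scram-sha-256   # --auth covers both --auth-local and --auth-host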

User not created in postgres docker image

I have a docker image that's not accepting credentials for a user that is defined in the docker-compose YAML file. When I go to the docker console for the container and check users, it only lists postgres. Not sure what I am missing - here's the YAML file:
version: '3.8'
services:
  db:
    container_name: drewreport_container
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: mpassword
      POSTGRES_USER: thedrewreport
      POSTGRES_DB: thedrewreportdb
    ports:
      - "5432:5432"
    volumes:
      - thedrewreportdata:/var/lib/postgresql/data/
volumes:
  thedrewreportdata:
Any ideas?
I can't reproduce your problem. Running docker-compose up, I see:
Creating network "docker_default" with the default driver
Creating volume "docker_thedrewreportdata" with default driver
Creating docker_client_1 ...
Creating docker_db_1 ...
Creating docker_client_1 ... done
Creating docker_db_1 ... done
Attaching to docker_db_1, docker_client_1
[...]
db_1 | 2021-07-19 23:03:39.676 UTC [1] LOG: database system is ready to accept connections
If I then connect with psql, I can authenticate using the username and password you've defined in your docker-compose.yml:
# psql -h localhost -U thedrewreport thedrewreportdb
Password for user thedrewreport:
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.
thedrewreportdb=# \du
                                List of roles
   Role name   |                         Attributes                         | Member of
---------------+------------------------------------------------------------+-----------
 thedrewreport | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
thedrewreportdb=#
Note that any volumes specified in your docker-compose.yml will persist between a docker-compose down and a docker-compose up, so if you ever brought your stack up with different credentials, those will never be replaced unless you explicitly destroy the volume by running docker-compose down -v.
You can tell that docker-compose is re-using a volume if you don't see a message like this when you run docker-compose up:
Creating volume "docker_thedrewreportdata" with default driver
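If that is what happened here, a minimal reset sketch (this permanently deletes the database contents):
docker-compose down -v   # -v also removes the named volumes declared in the compose file
docker-compose up        # initdb runs again and creates the role from the environment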
If present, existing DB data mounted at /var/lib/postgresql/data/ takes precedence over the environment variables meant to initialize it.
You have 2 options:
Update the existing DB data to add your user / password / database. To do so you can use docker compose exec db bash and then connect with psql to make your changes (see the sketch after this list).
Delete or move your existing thedrewreportdata local volume, for instance changing the mount to ./thedrewreportdata_postgres:/var/lib/postgresql/data/
Once done, you can use docker compose exec db psql --username thedrewreport --dbname thedrewreportdb to double-check that you can connect with your credentials to the updated DB.
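For option 1, a minimal sketch of the statements involved, assuming the old data directory still has the default postgres superuser (the attributes below are illustrative, not prescriptive):
docker compose exec db psql -U postgres
CREATE USER thedrewreport WITH PASSWORD 'mpassword';
CREATE DATABASE thedrewreportdb OWNER thedrewreport;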

PostgreSQL not creating tables when running on docker

I am trying to run PostgreSQL in a Linux docker container on my Windows server, but when I run it, it creates the database but there are no tables and no data in it, while it should create all the tables and add data to them using Actibook_latest.sql.
Here's the code of Dockerfile
# Dockerfile
FROM postgres:9.4
RUN mkdir -p /tmp/psql_data/
COPY Actibook_latest.sql /tmp/psql_data/
COPY init_docker_postgres.sh /docker-entrypoint-initdb.d/
EXPOSE 5432
Here's the code of init_docker_postgres.sh
#!/bin/bash
# this script is run when the docker container is built
# it imports the base database structure and creates the database for the tests
DATABASE_NAME="postgres"
DB_DUMP_LOCATION="/tmp/psql_data/Actibook_latest.sql"
echo "*** CREATING DATABASE ***"
psql "$DATABASE_NAME" < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
And here's the code of docker-compose
version: '2'
services:
  db:
    build: '.\Main Database Backup'
    environment:
      POSTGRES_DB: ${DB_POSTGRES_APP_DATABASE}
      POSTGRES_USER: ${DB_POSTGRES_APP_USER}
      POSTGRES_PASSWORD: ${DB_POSTGRES_APP_PASSW}
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
      - "5432:5432"
    restart: unless-stopped
Main Database Backup is the folder that contains the Dockerfile and init_docker_postgres.sh. Actibook_latest.sql contains the SQL to create the tables, data, etc.
And when I run docker-compose up, while the other services come up and run, here's what it shows in the logs:
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /var/lib/postgresql/data/pgdata/base/1 ... ok
initializing pg_authid ... ok
setting password ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
postgres -D /var/lib/postgresql/data/pgdata
or
pg_ctl -D /var/lib/postgresql/data/pgdata -l logfile start
waiting for server to start....LOG: database system was shut down at 2018-11-14 16:18:07 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
done
server started
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init_docker_postgres.sh
/usr/local/bin/docker-entrypoint.sh: /docker-entrypoint-initdb.d/init_docker_postgres.sh: /bin/bash^M: bad interpreter: No such file or directory
LOG: database system was interrupted; last known up at 2018-11-14 16:18:16 UTC
LOG: database system was not properly shut down; automatic recovery in progress
LOG: record with zero length at 0/16A4780
LOG: redo is not required
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
I'm thinking that there might be a firewall issue, since it's running perfectly on other servers that I have; the problem appears only on this one. Is there any chance that PostgreSQL on this machine is preventing it?
Update
I tried to restart docker and then used these commands:
docker rm $(docker ps -a -q)
docker rmi $(docker images -a -q)
docker volume rm $(docker volume ls -q)
now it's showing this error:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init_docker_postgres.sh
*** CREATING DATABASE ***
: No such file or directory/init_docker_postgres.sh: line 4: /tmp/psql_data/Actibook_latest.sql
/docker-entrypoint-initdb.d/init_docker_postgres.sh: line 4: $'\r': command not found
*** DATABASE CREATED! ***
and when I go into docker to check for the file:
docker exec -it db bash
cd /tmp/psql_data
ls
it shows that Actibook_latest.sql exists.
It seems that your entrypoint init script was not run; make sure the data volume is empty before running your container.
This is because docker-entrypoint.sh (at line 57) checks your data volume; if data already exists, it won't execute your init script.
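A minimal cleanup sketch; the ^M and $'\r' in your logs also suggest the script has Windows (CRLF) line endings, so converting them before rebuilding is likely needed too (sed -i assumes GNU sed; dos2unix works as well):
sed -i 's/\r$//' 'Main Database Backup/init_docker_postgres.sh'   # strip carriage returns
docker-compose down -v     # remove containers and their data volumes so initdb reruns
docker-compose up --build  # rebuild the image and run the init script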

DB, user not created using postgres docker-compose

I was setting up my Django project and postgres, but every time I was getting this error:
role doesn't exist
or
DB doesn't exist
So I tried to set up only postgres and see whether it was creating the user and DB correctly, but it wasn't.
Here is my docker compose file :
version: "3"
services:
templates_db:
image: postgres:9.6
ports:
- "5432:5432"
environment:
- POSTGRES_USER=my_user
- POSTGRES_PASSWORD=my_pass
- POSTGRES_DB=my_db
volumes:
- ./data/postgres:/var/lib/postgresql/data
I ran my compose file using docker-compose up --build and got the logs below:
templates_db_1 | LOG: database system was interrupted; last known up at 2017-05-16 05:48:39 UTC
templates_db_1 | LOG: database system was not properly shut down; automatic recovery in progress
templates_db_1 | LOG: invalid record length at 0/14F0080: wanted 24, got 0
templates_db_1 | LOG: redo is not required
templates_db_1 | LOG: MultiXact member wraparound protections are now enabled
templates_db_1 | LOG: database system is ready to accept connections
templates_db_1 | LOG: autovacuum launcher started
When I logged into the postgres shell:
docker exec -it **container_id** sh
su - postgres
$ psql
postgres=# \l
It didn't have the my_db database I specified in the compose file.
I had the same issue, but my problem was that I had also installed PostgreSQL itself.
This was caching more than it needed to and also interfering with user and password validation. After uninstalling PostgreSQL, running docker-compose down -v && docker-compose up -d, and starting the application, it worked swimmingly.

Docker-compose environment variables

I am trying to set up a postgres container and want to set up the postgres login with:
POSTGRES_USER: docker
POSTGRES_PASSWORD: docker
So I have created the docker-compose.yml like so
web:
  build: .
  ports:
    - "62576:62576"
  links:
    - redis
    - db
db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: docker
    POSTGRES_USER: docker
redis:
  image: redis
I have also tried the other syntax for environment variables, declaring the db section as:
db:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=docker
    - POSTGRES_USER=docker
However, neither of these options seems to work, because whenever I try to connect to the postgres database using the various connection strings:
postgres://postgres:postgres@db:5432/users
postgres://postgres:docker@db:5432/users
postgres://docker:docker@db:5432/users
they all give me auth failures as opposed to complaining that there is no users database.
I struggled with this for a while and wasn't having luck with the accepted answer; I finally got it to work by removing the container:
docker-compose rm postgres
And then the volume as well:
docker volume rm myapp_postgres
Then when I did a fresh docker-compose up I saw CREATE ROLE fly by, which I'm assuming is what was missed on the initial up.
The reasons for this are elaborated on here, on the Git repo for the Docker official image for postgres.
If you're using Docker, check whether your local DB is active, because it often conflicts with Docker; if so, you can deactivate it, change the port number, or uninstall it to avoid the conflict.
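For example, a quick check sketch (assumes a Linux host and the default port; the service name is an assumption and varies by distro):
sudo lsof -i :5432               # see whether a local postgres is already listening
sudo systemctl stop postgresql   # stop the local instance
Alternatively, publish the container on another host port, e.g. "5433:5432" in the compose file.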
I had the same problem, and in my case the problem was fixed with a single command:
docker-compose up --force-recreate
The authentication error you got would help a lot!
I fired up the postgres image with your arguments:
docker run --name db -d -e POSTGRES_PASSWORD=docker -e POSTGRES_USER=docker postgres
Then I exec'ed in:
docker exec -it db psql -U docker user
psql: FATAL: database "user" does not exist
I get the error message you are expecting because I have trust authentication:
docker exec -it db cat /var/lib/postgresql/data/pg_hba.conf | grep -v '^#'
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
host all all 0.0.0.0/0 md5
To simulate your web container, I'll run another instance of the postgres container and link the db container and then connect back to the db container:
core@ku1 /tmp/i $ docker run --rm --name web --link db:db -it postgres psql -h db -Udocker user
Password for user docker:
psql: FATAL: password authentication failed for user "docker"
I get an authentication error if I enter the incorrect password. But, if I enter the correct password:
core@ku1 /tmp/i $ docker run --rm --name web --link db:db -it postgres psql -h db -Udocker user
Password for user docker:
psql: FATAL: database "user" does not exist
It all seems to be working correctly. I put it all in a yaml file and tested it that way as well:
web:
  image: postgres
  command: sleep 999
  ports:
    - "62576:62576"
  links:
    - db
db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: docker
    POSTGRES_USER: docker
then fired it up with docker-compose:
core@ku1 /tmp/i $ docker-compose -f dc.yaml up
Creating i_db_1...
Creating i_web_1...
Attaching to i_db_1, i_web_1
db_1 | ok
db_1 | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
db_1 | initializing pg_authid ... ok
db_1 | initializing dependencies ... ok
db_1 | creating system views ... ok
db_1 | loading system objects' descriptions ... ok
db_1 | creating collations ... ok
db_1 | creating conversions ... ok
db_1 | creating dictionaries ... ok
db_1 | setting privileges on built-in objects ... ok
db_1 | creating information schema ... ok
db_1 | loading PL/pgSQL server-side language ... ok
db_1 | vacuuming database template1 ... ok
db_1 | copying template1 to template0 ... ok
db_1 | copying template1 to postgres ... ok
db_1 | syncing data to disk ... ok
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | postgres -D /var/lib/postgresql/data
db_1 | or
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 |
db_1 | PostgreSQL stand-alone backend 9.4.1
db_1 | backend> statement: CREATE DATABASE "docker" ;
db_1 |
db_1 | backend>
db_1 |
db_1 | PostgreSQL stand-alone backend 9.4.1
db_1 | backend> statement: CREATE USER "docker" WITH SUPERUSER PASSWORD 'docker' ;
db_1 |
db_1 | backend>
db_1 | LOG: database system was shut down at 2015-04-12 22:01:12 UTC
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
^Z
[1]+ Stopped docker-compose -f dc.yaml up
core@ku1 /tmp/i $ bg
you can see that the user and password were created. I exec in:
core@ku1 /tmp/i $ docker exec -it i_web_1 psql -Udocker -h db user
Password for user docker:
psql: FATAL: password authentication failed for user "docker"
core@ku1 /tmp/i $
db_1 | FATAL: password authentication failed for user "docker"
db_1 | DETAIL: Connection matched pg_hba.conf line 95: "host all all 0.0.0.0/0 md5"
core@ku1 /tmp/i $ docker exec -it i_web_1 psql -Udocker -h db user
Password for user docker:
psql: FATAL: database "user" does not exist
db_1 | FATAL: database "user" does not exist
So the only thing I can think of is that you are trying to connect to the database from your host, not the web container? Or your web container is not using 'db' as the host to connect to? Your definition for the web container does not contain any errors that I can see.
Thanks to Bryan and docker-compose exec containername env, I discovered that the volumes also need to be deleted. Since docker volume rm volumename requires knowing the exact name, it is easier just to delete them all with:
docker-compose down --volumes
This helped me
docker stop $(docker ps -qa) && docker system prune -af --volumes && docker compose up
In my case, running postgres:13-alpine on Windows 10 under WSL2, none of the above solutions did the trick.
My mistake was that the docker network name I was using was shared with another project. Let's say I have projects A and B, both with the following structure:
myappfolder
- docker-compose.yml
- services
  - app (depends on db)
  - db
It happens that, by default, docker-compose takes the network name from the parent directory name of the docker-compose.yml file. Therefore both projects, A and B were trying to connect to the same network: myappfolder_default.
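If you want to confirm the clash, docker network inspect shows which containers are attached; a quick sketch:
docker network inspect myappfolder_default   # lists the attached containers, here from both projects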
To solve this:
1. Ensure network names are unique among projects:
   a. either change the name of the root folder to be unique
   b. or edit your docker-compose.yml to set an explicit network name (see the sketch below)
2. Do docker-compose down -v; this will reset all the DBs you had defined in your network, so make sure you take a psql dump before proceeding.
3. Do docker-compose up.
More networking docs here: https://docs.docker.com/compose/networking/
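For option b, a minimal sketch of an explicit network name (the name itself is illustrative; the name field for networks requires compose file format 3.5+):
networks:
  default:
    name: project_a_net   # any name unique among your projects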
I had a similar situation. Following the answer from @Greg, I did a docker-compose up, and it picked up the environment variable.
Prior to that, I had just been using docker-compose run and it wasn't picking up the environment variable as proven by running docker-compose exec task env. Strangely, docker-compose run task env showed the environment variable I was expecting.