Docker postgres persisting data without volumes (should not persist) - postgresql

I am using a Docker container to run Postgres for testing purposes; it should NOT persist data between different runs.
This is the Dockerfile:
FROM postgres:alpine
ENV POSTGRES_PASSWORD=1234
EXPOSE 5432
And this is my compose file:
version: "3.9"
services:
web:
build:
context: ../../.
dockerfile: ./services/web/Dockerfile
ports:
- "3000:3000"
db:
build: ../db
ports:
- "5438:5432"
graphql:
build:
context: ../../.
dockerfile: ./services/graphql/Dockerfile
ports:
- "4000:4000"
indexer:
build:
context: ../../.
dockerfile: ./services/indexer-ts/Dockerfile
volumes:
- ~/.aws/:/root/.aws:ro
However, I find that all data is persisted between sessions and I have no clue why. This is completely messing up my tests and is not expected to happen.
Even after running docker system prune, all data still persists, which suggests the container is somehow using a volume.
Does anyone know why this is happening and how to not persist the data?

When you stop your docker-compose environment by typing CTRL-C or similar, the next time you run docker-compose up it will restart the same container if the configuration hasn't changed. So even without volumes, any data that was there previously will still be there.
To ensure you're starting with fresh containers, always run:
docker-compose down
If you have explicit volumes defined in your configuration, adding -v will also delete those volumes:
docker-compose down -v
(That's not necessary in this situation.)
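For a test setup you can script the whole cycle so every run starts from a clean slate. A rough sketch (the --build and -d flags and the port 5438 come from your Compose file; adjust the test step to your own runner):
docker-compose down           # remove any containers left over from the previous run
docker-compose up --build -d  # build and start fresh containers in the background
# ... run your tests against localhost:5438 ...
docker-compose down           # tear everything down so no state leaks into the next run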
Unrelated to your question, but why are you building a custom postgres image? You could just set things up in your docker-compose.yaml file:
db:
  image: postgres:alpine
  environment:
    POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
  ports:
    - "5438:5432"
(And then set POSTGRES_PASSWORD in your .env file.)
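For instance, a minimal .env file next to the Compose file (the value here is just an illustration) could be:
POSTGRES_PASSWORD=1234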

You are correct, it is using a volume.
You can use the -v switch to remove the container together with its anonymous volume:
docker-compose rm -v db

Related

Problem with create database in docker-compose / postgres

This is my docker-compose.yml
version: '3.8'
services:
  db:
    container_name: "postgres"
    image: postgres:14
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB:database
    ports:
      - "127.0.0.1:5434:5432"
    volumes:
      - ./db:/var/lib/postgresql/data
Docker doesn't create the database for this container; what is wrong?
Assuming it is not simply the typo here (- POSTGRES_DB:database instead of - POSTGRES_DB=database):
Did you ever start the Postgres container without specifying the user and database?
This can happen when the Docker volume already holds data from previous runs in which you did not set the user and DB; later runs with different environment variables will not change the users, because that is only done on the first initialization.
So you need to run
docker volume prune
There is already a closed issue on GitHub about that, see: https://github.com/docker-library/postgres/issues/453
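Note that in the Compose file above the data directory is a bind mount (./db on the host), not a named volume, so pruning volumes may not be enough: the Postgres init scripts only run when the data directory is empty. A sketch of a full reset under that assumption:
docker-compose down
rm -rf ./db          # remove the old data directory so Postgres re-initializes
docker-compose up -d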

Docker compose: Error: role "hleb" does not exist

I kindly ask you to help with Docker and Postgres.
I have a local Postgres database and a project on NestJS.
I killed the process listening on port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
  main:
    container_name: main
    build:
      context: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 4000:4000
      - 9229:9229
    command: yarn start:dev
    depends_on:
      - postgres
    restart: always
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - .env
    environment:
      PG_DATA: /var/lib/postgresql/data
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
volumes:
  pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
running sudo docker-compose build - NO ERRORS
running sudo docker-compose up --force-recreate - ERROR
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!
Do not use sudo - unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides several environment variables that help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it will default to postgres.
Your .env file used for Docker Compose does not contain the Docker-specific environment variables.
So amending/extending it to:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. If the data directory already exists, you will have to re-create the volume (delete it) for this to take effect.
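A minimal way to do that with Compose (a sketch, assuming the pgdata volume from the file above and no data you want to keep):
docker-compose down -v     # removes the containers and the named pgdata volume
docker-compose up --build  # Postgres re-initializes with the new variables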

Docker DB Migration/Deployment to DigitalOcean

Warning: I am fairly new to Docker and cloud hosting, so this is likely a dumb question.
I have a local web app which has 3 images associated with it: the app itself, the db and a phpMyAdmin image. All works well locally, and if I transfer all the files to my DigitalOcean droplet and bring up my containers it works fine there as well, but this is not how I want to deploy: I don't want every file from every library residing on my droplet.
I have been experimenting with creating a docker-machine on my droplet and deploying my containers remotely to it. This seems to work fine, except that my db image does not reference my database and is simply an empty db. I tried to migrate the db in this fashion, which I saw in a tutorial:
docker-compose run --rm web db:create db:migrate
But I got the following error. I assume this is because my dev machine is running Windows 10, not Linux, but I cannot find anywhere what the equivalent command would be for a Windows machine.
Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"db:create\": executable file not found in $PATH": unknown
I know I am probably missing something really stupid and easy but I am having difficulties figuring out how to migrate the data for my db image. Thanks in advance.
UPDATE:
As requested here is my docker-compose:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ./data:/docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
volumes:
data:
UPDATE #2:
I transferred the db file to /docker-entrypoint-initdb.d (I tried this yesterday too but couldn't get it working) and created a new production docker-compose-prod.yml. I must still be missing something, though, as the DB is still empty. Below is my new docker-compose-prod.yml:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- /docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
Your strategy is sound.
Actually, you can take it a further step by automating the Droplet provisioning to e.g. use a container-oriented OS and access your Compose file. But that's not this question ;-)
I think the fact that you're using Windows is probably not relevant and makes little difference; it may require some tweaks to the answer, but that's about it.
The challenge is that you need to move (or recreate) the database state on the remote machine. There are several ways the DB state could be persisted: in-container (not ideal), using volume mounts (good), or others.
Each is "moveable", but it would help if you could add your Compose file to your question so that we may see which approach is being used.
In full disclosure, I'm not familiar with the approach that you referenced, but that does not mean it's inaccurate; I'm just not familiar with it.
Update: docker-entrypoint-initdb.d
See: "Initializing a fresh instance" on MySQL
So, any files within that directory are run to initialize the database container when it's created from the image.
In your Compose file you mount your host's ./data directory into this directory. Presumably that directory contains at least one file that performs your intended initialization.
NB The volumes: data: section at the end of the Compose file appears redundant: you're actually using a host-mounted directory (./data), not this named volume.
When you run the Compose file on the Droplet, those files aren't present and you'll need to copy them.
The simplest way to do this is to use scp, which gives you two alternatives:
Either retain the data directory:
IP=[DROPLET-IP]
scp -r ./data root@${IP}:/data
NB The remote destination is /data, not ./data. You will need to revise the Compose file on the Droplet (!) too: volumes: - /data:/docker-entrypoint-initdb.d (see the sketch after these alternatives).
Or move the files directly to the Droplet's /docker-entrypoint-initdb.d:
scp -r ./data root@${IP}:/docker-entrypoint-initdb.d
NB Now there's no need for the volume mapping. You may remove: volumes: - ./data:/docker-entrypoint-initdb.d
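For reference, under the first alternative the db service on the Droplet would look roughly like this (a sketch; only the volumes line differs from your file):
db:
  image: mysql:latest
  environment:
    MYSQL_ROOT_PASSWORD: mypass
    MYSQL_DATABASE: mydb
  ports:
    - "3306:3306"
  volumes:
    - /data:/docker-entrypoint-initdb.d
  restart: always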
Update: repro (works)
I used a tweaked docker-compose.yaml but it's essentially the same:
version: "3.4"
services:
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ${PWD}/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
restart: always
adminer:
image: adminer
restart: always
ports:
- 8080:8080
Then I ran mkdir ${PWD}/docker-entrypoint-initdb.d and created a file in it called freddie.sql:
create database if not exists frederik;
use frederik;
create table treats (
  TreatID INT NOT NULL AUTO_INCREMENT,
  TreatName VARCHAR(255) NOT NULL,
  PRIMARY KEY (TreatID));
insert into treats (TreatName)
values
("Dried Salmon"),
("Meatballs");
Then docker-compose rm --force && docker-compose up
I was then able to browse the Adminer UI (:8080), log in (root / mypass) and browse the frederik database.

How to force postgres docker container to start with new DB

I have a docker compose file that looks like:
version: "3"
services:
redis:
image: 'redis:3.2.7'
# command: redis-server --requirepass redispass
postgres:
image: postgres:9.6
ports:
- "5432:5432"
environment:
- POSTGRES_USER=airflow
- POSTGRES_PASSWORD=airflow
- POSTGRES_DB=airflow
# - PGDATA=/var/lib/postgresql/data
webserver:
image: airflow:develop
depends_on:
- postgres
ports:
- "8080:8080"
command:
- webserver
After I run docker-compose up I see all the services started and seemingly working well. My webserver service connects to postgres with the following SQLAlchemy connection string: postgresql+psycopg2://airflow:airflow@postgres/airflow
Whenever I kill the composition (with ctrl-c or docker-compose stop) and then restart it, the data in postgres seems to persist. The reason I believe it persists is because my webserver starts with data from the previous session.
I read the docs for the postgres Docker image and found the PGDATA environment variable. I tried to force it to the default (as seen in the commented line in my docker-compose file), but that didn't help. I'm not sure how else to debug why data seems to be persisting between container starts.
How can I force my postgres container to start fresh with each new initialization?
Your data persisted because you did not destroy the postgres container. You used docker-compose stop, which only stops the containers without removing them. Use docker-compose down instead; it completely removes the containers (but not the images).
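If you want to be certain the anonymous volume the postgres image creates for /var/lib/postgresql/data is gone too (the image declares that volume even though your Compose file doesn't), add the -v flag, e.g.:
docker-compose down -v   # removes the containers and their anonymous volumes
docker-compose up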

docker-compose: postgres data not persisting

I have a main service in my docker-compose file that uses the postgres image and, though I seem to be successfully connecting to the database, the data that I'm writing to it is not being kept beyond the lifetime of the container (what I did is based on this tutorial).
Here's my docker-compose file:
main:
  build: .
  volumes:
    - .:/code
  links:
    - postgresdb
  command: python manage.py insert_into_database
  environment:
    - DEBUG=true
postgresdb:
  build: utils/sql/
  volumes_from:
    - postgresdbdata
  ports:
    - "5432"
  environment:
    - DEBUG=true
postgresdbdata:
  build: utils/sql/
  volumes:
    - /var/lib/postgresql
  command: true
  environment:
    - DEBUG=true
and here's the Dockerfile I'm using for the postgresdb and postgresdbdata services (which essentially creates the database and adds a user):
FROM postgres
ADD make-db.sh /docker-entrypoint-initdb.d/
How can I get the data to stay after the main service has finished running, in order to be able to use it in the future (such as when I call something like python manage.py retrieve_from_database)? Is /var/lib/postgresql even the right directory, and would boot2docker have access to it given that it's apparently limited to /Users/?
Thank you!
The problem is that Compose creates a new version of the postgresdbdata container each time it restarts, so the old container and its data get lost.
A secondary issue is that your data container shouldn't actually be running; data containers are really just a namespace for a volume that can be imported with --volumes-from, which still works with stopped containers.
For the time being the best solution is to take the postgresdbdata container out of the Compose config. Do something like:
$ docker run --name postgresdbdata postgresdb echo "Postgres data container"
Postgres data container
The echo command will run and the container will exit, but as long as you don't docker rm it, you will still be able to refer to it in --volumes-from and your Compose application should work fine.