How to make sure docker-compose will not remove my volume with postgres data - postgresql

I am running a simple django webapp with docker-compose. I define both a web service and a db service in a docker-compose.yml file:
version: "3.8"
services:
db:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
env_file:
- ./.env.dev
depends_on:
- db
volumes:
postgres_data:
I start the service by running:
docker-compose up -d
I can load some data in there with a custom django command that I wrote for my app. Everything is running fine (with data) on localhost:8000.
However, when I run
docker-compose down
(so without -v) and then again
docker-compose up -d
the database is empty again. The volume was not persisted. From what I read in the docker-compose docs and also in several posts here at SO, persisting the volume and reusing it when you start a new container should be the default behavior (which, if I understand it correctly, you can disable by using the --renew-anon-volumes flag).
However in my case, the volume is not persisted. Or maybe it is, but my data is gone.
By doing docker volume ls I can see that my volume (I'll use the name my_volume here) still exists after the docker-compose down command. However, the CreatedAt value has been changed. This makes me think it's a different volume with the same name, and my data is already gone, but I don't know how to confirm that.
This SO answer suggests mounting the volume on /var/lib/postgresql instead of /var/lib/postgresql/data. However, I've seen other resources (like this one) where the opposite is suggested. I've tried both, but neither option works.
Thanks for any advice.

It turns out that the Dockerfile of my app used an entrypoint that ran python manage.py flush, which clears all data in the database. Because the entrypoint runs every time the app container starts, the data was wiped on every restart. It had nothing to do with docker-compose.
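For anyone hitting the same thing, a minimal sketch of what such an entrypoint might look like is below; the file name entrypoint.sh and the surrounding lines are assumptions, only the flush call comes from the answer above. Swapping flush for migrate (or simply deleting that line) stops the wipe:

#!/bin/sh
# Hypothetical entrypoint.sh (sketch only, not the original file).

# The offending line: flush truncates every table on each container start.
# python manage.py flush --no-input

# Safer default: apply pending migrations instead of wiping the data.
python manage.py migrate --no-input

# Hand control over to the container's CMD (e.g. runserver).
exec "$@"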

Related

Postgres running via docker not persisting data after initialization script

I'm using docker for the first time to set up a test database that my team can then use. I'm having some trouble getting my data to show up in DBeaver after running my docker-compose file. The issue I'm facing is that my database does not show up in DBeaver (along with the relevant schemas and tables that I also create/populate in my initialization SQL script).
Here is my docker-compose.yml
version: "3"
services:
test_database:
image: postgres:latest
build:
context: ./
dockerfile: Dockerfile
restart: always
ports:
- 5432:5432
environment:
- POSTGRES_USER=dev
- POSTGRES_PASSWORD=test1234
- POSTGRES_DB=testdb
container_name: test_database
In this, I specify the Dockerfile I want it to use for building. Here is the Dockerfile:
# syntax = docker/dockerfile:1.3
FROM postgres:latest
ADD test_data.tar .
COPY init_test_db.sql /docker-entrypoint-initdb.d/
Now, when I run docker-compose build and docker-compose up, I can see through the logs that my SQL commands (CREATE, COPY, etc.) do get executed and the rows do get added. But when I connect to this instance through DBeaver, I can't see any of it. In fact, the only database on there is the default postgres database, even though the logs say I'm connected to test_database.
I followed some other solutions and used docker volume prune as well, but that didn't affect anything (I read some solutions about clearing up volumes, and at that point, I had volumes: /tmp:/tmp as well). Any ideas?
Wow, this wasn't an error after all. All I had to do was go to the connection settings in DBeaver and check 'Show all databases' under the Postgres tab. Hope this can help someone :)
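As a side note, if you want to confirm from the command line that the database and tables really exist before fiddling with DBeaver settings, something along these lines should work (container name and credentials are taken from the compose file above; adjust if yours differ):

# List all databases in the running container; "testdb" should appear.
docker exec -it test_database psql -U dev -d testdb -c '\l'

# List the tables created by the init script.
docker exec -it test_database psql -U dev -d testdb -c '\dt'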

Can't connect with docker-compose to Postgres database

I'm trying to build a docker-compose file that will spin up my EF Core web api project, connecting to my Postgres database.
I'm having a hard time getting the EF project connecting to the database.
This is what I currently have for my docker-compose.yml:
version: '3.8'
services:
  web:
    container_name: 'mybackendcontainer'
    image: 'myuser/mybackend:0.0.6'
    build:
      context: .
      dockerfile: backend.dockerfile
    ports:
      - 8080:80
    depends_on:
      - postgres
    networks:
      - mybackend-network
  postgres:
    container_name: 'postgres'
    image: 'postgres:latest'
    environment:
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=MySuperSecurePassword!
      - POSTGRES_DB=MyDatabase
    networks:
      - mybackend-network
    expose:
      - 5432
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - mybackend-network
    volumes:
      - ./pgadmin-data/:/var/lib/pgadmin/
networks:
  mybackend-network:
    driver: bridge
And my web project Dockerfile looks like this:
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Copy the CSPROJ file and restore any dependencies (via NUGET)
COPY *.csproj ./
RUN dotnet restore
# Copy the project files and build our release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image - do not include the whole SDK to save image space
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
EXPOSE 80
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyBackend.dll"]
And my connection string looks like this:
User ID =bootcampdb;Password=MySuperSecurePassword!;Server=postgres;Port=5432;Database=MyDatabase; Integrated Security=true;Pooling=true;
Currently I have two problems:
I'm getting Npgsql.PostgresException (0x80004005): 57P03: the database system is starting up when I do docker-compose up. I tried to add a healthcheck to my postgres db but that did not work. When I go to my Docker Desktop app and start my backend again, that message goes away and I get my second problem...
Secondly, after the DB has started it says: FATAL: password authentication failed for user "username". It looks like it's not creating my user for the database. I even changed it to not use .env files and put the values directly in my docker-compose file, but it's still not working. I've tried docker-compose down -v to ensure my volumes get deleted.
Sorry these might be silly questions, I'm still new to containerization and trying to get this to work.
Any help will be appreciated!
Problem 1: Having depends_on only means that docker-compose will wait until your postgres container is started before it starts the web container. The postgres container needs some time to get ready to accept connections, and if you attempt to connect before it's ready, you get the error you're seeing. You need to code your backend so that it waits until Postgres is ready, retrying the connection with a delay.
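If you would rather not (or cannot) change the backend code, another option is a Compose healthcheck on the postgres service combined with the condition form of depends_on, so the web container only starts once Postgres accepts connections. This is a sketch rather than the asker's original attempt; it reuses the credentials from the compose file above and requires a Compose version that supports depends_on conditions:

services:
  postgres:
    image: 'postgres:latest'
    healthcheck:
      # pg_isready ships with the postgres image and exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U username -d MyDatabase"]
      interval: 5s
      timeout: 5s
      retries: 10
  web:
    depends_on:
      postgres:
        condition: service_healthy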
Problem 2: The postgres image only creates the user and database when its data directory is empty, i.e. on first initialization. You probably have an existing database in ./db-data/ on the host. Try deleting ./db-data/, and Postgres should create the user and database from the environment variables you've set.

Postgres inside docker; reload database / init script every time the container is started

Following the official postgres Docker image, you can set up an entrypoint where you put your initialization scripts.
This works fine. For development/testing, I want a clean database on every container startup, not only on its first.
All scripts inside the docker-entrypoint-initdb.d are only run once (the first time the container is started).
Is there an easy way to execute the script every time the container is started via docker-compose?
I put DROP TABLE IF EXISTS in front of every CREATE TABLE, so the .sql script will work even on a new startup.
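For reference, a minimal sketch of that idempotent pattern is below; the table and columns are made up purely for illustration:

-- Hypothetical init script: safe to re-run because each table is dropped first.
DROP TABLE IF EXISTS users;
CREATE TABLE users (
    id   SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);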
Relevant part of the docker-compose if anyone needs that:
postgres-myname:
  image: postgres:12.1-alpine
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres-db
  ports:
    - "54320:5432"
  build:
    context: .
    dockerfile: postgresql-config/Dockerfile
  networks:
    - my-network
You need a "cycle script" for restarting, which should contain:
docker-compose rm -vs postgres-myname
docker volume prune -f --filter label=postgres-myname
docker-compose up -d
I recommend exploring docker volume prune before using it in a script.
I also recommend having a named volume mapped to the Postgres data directory (/var/lib/postgresql/data) and removing that volume explicitly instead of pruning.
# docker-compose.override.yml
postgres-myname:
  volumes:
    - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
And then
docker volume rm -f "$(basename "$PWD")_pgdata"
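Compose prefixes named volumes with the project name, which defaults to the name of the directory containing docker-compose.yml; that is what the basename call above recovers. Assuming a project directory called myproject (a placeholder, not from the question), you could check and remove the volume like this:

# List volumes created by this project (Compose names them <project>_<volume>)
docker volume ls --filter name=myproject_

# Remove the Postgres data volume explicitly
docker volume rm -f myproject_pgdata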

Creating mongo docker container with local storage on host

I want to run MongoDB in a Docker container. I've pulled the image and run it, and it seems to work OK.
But every time I start it, the DB is overwritten, so I lose any changes. I want to map the container's internal storage to a folder on my local host somehow.
Should I write a Dockerfile and/or docker-compose.yaml? I suppose this is a simple question, but being new to Docker I can't work out what to read to get a full understanding.
You do not need to write a Dockerfile and make things complex; just use the official image as shown in the command or compose file below.
You can use either option, docker run or docker-compose, but the path mapping must be correct to keep the data persistent.
Here is the way:
Create a data directory on a suitable volume on your host system, e.g. /my/own/datadir.
Start your mongo container like this:
$ docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo
The -v /my/own/datadir:/data/db part of the command mounts the /my/own/datadir directory from the underlying host system as /data/db inside the container, where MongoDB by default will write its data files.
mongo docker volume
With docker-compose:
version: "2"
services:
mongo:
image: mongo:latest
restart: always
ports:
- "27017:27017"
environment:
- MONGO_INITDB_DATABASE=pastime
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=root_password
volumes:
- /my/own/datadir:/data/db

docker-compose on Windows volume not working

I've been playing with Docker for the past week and think the container idea is very useful, but despite reading everything I can for the past 3 days I can't get the volume mapping to work or get docker-compose to use my existing volume.
Docker Version: 18.03.1-ce
docker-compose version 1.21.1, build 7641a569
I created a volume using the following via a Dockerfile
# Reference SQL image
FROM microsoft/mssql-server-windows-developer
# Create directory within SQL container for database files mapped to the volume
VOLUME sqldata:c:/MSSQL
and here it shows:
C:\ProgramData\Docker\volumes>docker volume ls
DRIVER              VOLUME NAME
local               sqldata
Now I've tried probably 60+ different "solutions" based on StackOverflow and Docker forums, but none of them work. (Note: despite the Azure names below, I am simply trying to get this to run locally; Azure is the next hurdle.)
Docker-compose.yaml:
version: '3.4'
services:
  ws:
    image: wsManager
    container_name: azure-wcf
    ports:
      - "80"
    depends_on:
      - db
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - \sqldata:/mssql
      # - type: volume
      #   source: sqldata
      #   target: /mssql
    ports:
      - "1433"
I've added a volumes section but it does not help,
volumes:
  sqldata:
    external:
      name: sqldata
changed the - \sqldata:/mssql to every possible slash variation (.., ., ~, whatever), and moved the yaml file to C:\ProgramData\Docker\volumes - basically any suggestion that showed up in my search results. The dbimage is a SQL Server image whose data I need to persist, but I'm wondering what the magic is, as nothing I've tried works. Any help is GREATLY appreciated.
I'm running on Windows 10 Pro build 1803.
Why does this have to be so hard?
Than you to whomever knows how to make this actually work.
The solution is to reference the true path on Windows using the volumes: option as below:
sqldb:
  image: sqlimage
  container_name: azure-db
  volumes:
    - "C:\\ProgramData\\Docker\\volumes\\sqldata:c:\\mssql"
To persist the data I used the following:
environment:
  - "sa_password=ddsql2017##"
  - "ACCEPT_EULA=Y"
  - 'attach_dbs=[{"dbName":"MyDb","dbFiles":["C:\\MSSQL\\MyDb.mdf","C:\\MSSQL\\MyDb.ldf"]}]'
Hope this helps someone else, as many of the examples I found searching both on SO and elsewhere did not work for me, and in the Docker forums there are a lot of posts saying mounting volumes does not work on Windows.
For those who are using Ubuntu on WSL:
sudo mkdir /c
sudo mount --bind /mnt/c /c
navigate to your project folder using the new path (/c/your-project-path and not /mnt/c/your-project-path)
edit your docker-compose.yml and use a relative path for the volume (like ./src instead of /c/your-project-path/src)
docker-compose up
I was struggling with a similar problem when trying to mount a volume to a specific path on my Windows machine: basically it didn't work, so every time I restarted my Docker instance I lost all my DB data.
I finally found out that this is because Docker for Windows by default cannot interpret Windows paths, so the flag COMPOSE_CONVERT_WINDOWS_PATHS has to be activated. To do so:
Run the command "set COMPOSE_CONVERT_WINDOWS_PATHS=1"
Restart Docker
Go to Settings > Shared Drives > Reset credentials and then select drive and then apply
From the command line, kill the containers (docker container rm -f <container-id>)
Re-run the containers
Hope it helps
If your Windows account credentials have been changed, you also have to reset credentials for shared drives (Settings > Shared Drives > Reset credentials).
In my case, the password was changed by my company's security policy.
Are you sure you really need to map to a certain host directory? If not, my solution is to create a volume beforehand and use it in docker-compose.yaml. I use the same scripts for both Windows and Linux. That is the beauty of Docker.
Here is what I did to start both postgres and mysql:
create_db.sh (you can run it in Git Bash or a similar environment on Windows):
docker volume create --name postgres-data -d local
docker volume create --name mysql-data -d local
docker-compose up -d
docker-compose.yaml:
version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_DB: datasource
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  mysql:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: 'train'
      MYSQL_USER: 'mysql'
      MYSQL_PASSWORD: 'mysql'
      MYSQL_ROOT_PASSWORD: 'mysql'
    ports:
      - 3306:3306
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  postgres-data:
    external: true
  mysql-data:
    external: true
By default, it looks like sharing of drives is disabled after installing Docker on Windows, so you won't be able to use volumes (which are stored on disks).
Enabling such sharing, through Docker in the tray - right click - Settings, helped for me; volumes started working fine.
Docker on Windows has strange behavior, as Windows has limitations with credentials and also with the virtual machine that Docker is using (Hyper-V, VirtualBox - depending on your Docker version and setup).
Basically, you are correct to map a folder in the volumes: section of your service. The path is:
version: '3.4'
services:
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - c:/Temp/sqldata:/mssql
The important thing is that you do not need to explicitly create the volume in a volumes: section; docker-compose up will create it (the same goes for docker run).
The strange thing is that it will never show up in
docker volume ls
but it will be usable, with the same files inside the Windows directory and inside the container path /mssql.
You can test it with:
docker run --rm -v c:/Temp/sqldata:/data alpine ls /data
or
docker run --rm -v c:/Temp:/data alpine ls /data
If it disappears, it has probably lost the credentials; reset them via Docker -> Settings -> Shared Drives -> Reset credentials.
I hope it was clear and covered all the aspects for you.
Launch Docker from your Windows taskbar
Click on the Settings icon at the top
Click Resources
Click File Sharing
Click on the (+) sign and add the path of the local folder to which you want to map the container volume.
It worked for me.