Save Postgres Data to Directory in Docker Named Volume

Problem
I have an application with Postgres. I want to be able to back up the initial database data so that I don't have to re-enter it on each deployment. However, the named volume I set up in my compose file doesn't seem to accomplish this.
What I'm not sure of is how to have Postgres save its data into the directory associated with the volume. I'm also not sure exactly how to associate a directory with the named volume in the first place. What I want is for the Docker host server to be able to see the Postgres data in the named volume's associated directory.
Could someone please provide an explanation/some examples of how to handle this? Right now, even though the volume is attached to the db service in the compose file, no data is ever written to the database_volume/ directory. This is what I would like to address.
Code
Here's my Dockerfile:
FROM python:3.6
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=sasite.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY sasite sasite
COPY templates templates
COPY logs logs
ADD /scripts/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
EXPOSE 8001
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=sasite.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "sasite.wsgi:application"]
And my docker-compose.yml:
version: "3.2"
services:
app:
restart: always
build:
context: .
dockerfile: Dockerfile.prodtest
args:
requirements: requirements/production.txt
container_name: dj01
environment:
- DJANGO_SETTINGS_MODULE=sasite.settings.production_test
- PYTHONDONTWRITEBYTECODE=1
volumes:
- ./:/app
- /static:/static
- /media:/media
networks:
- main
depends_on:
- db
db:
restart: always
image: postgres:10.1-alpine
container_name: ps01
environment:
POSTGRES_DB: sasite_db
POSTGRES_USER: pguser
POSTGRES_PASSWORD: pguser123
ports:
- "5432:5432"
volumes:
- database_volume:/var/lib/postgresql/data
networks:
- main
nginx:
restart: always
image: nginx
container_name: ng01
volumes:
- ./config/nginx-prodtest.conf:/etc/nginx/conf.d/default.conf:ro
- ./static:/usr/share/nginx/sasite/static
- ./media:/usr/share/nginx/sasite/media
ports:
- "80:80"
- "443:443"
networks:
- main
depends_on:
- app
networks:
main:
volumes:
database_volume:
driver_opts:
type: none
device: ./database_volume
o: bind
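One note on the volume definition above, offered as an assumption since the thread leaves this question open: with the local driver, the device value is handed to the mount call as-is, so it generally must be an absolute path to a directory that already exists on the host; a relative ./database_volume is not resolved against the compose file's location. A sketch of a bind-backed named volume along those lines, with /srv/app/database_volume standing in for your real path:

volumes:
  database_volume:
    driver: local
    driver_opts:
      type: none
      o: bind
      # Assumption: this directory already exists on the host
      # (mkdir -p /srv/app/database_volume before docker-compose up).
      device: /srv/app/database_volume

With a definition like this, the host directory holds the live Postgres data directory, so the host can see (and back up) the files directly.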

Related

Docker wipes out mongoDB container data

I have created a program and tested that it works just fine. I decided to dockerize it, and it seems that after maybe some hours or a few days, the data in the MongoDB container all gets deleted. The docker-compose.yml file:
version: '3'
services:
  node:
    restart: always
    build: ./nodeServer
    container_name: nodeserver
    ports:
      - 5000:5000
    depends_on:
      - database
    networks:
      twitter_articles:
        ipv4_address: 172.24.0.2
    environment:
      - TZ=Europe/Athens

  database:
    restart: always
    build: ./mongoDump/database
    container_name: mongodb
    ports:
      - 27017:27017
    networks:
      twitter_articles:
        ipv4_address: 172.24.0.4
    volumes:
      - ./data:/data/db
    environment:
      - TZ=Europe/Athens

  pythonscript:
    restart: always
    build: ./python
    container_name: pythonscript
    depends_on:
      - database
    networks:
      twitter_articles:
        ipv4_address: 172.24.0.3
    environment:
      - TZ=Europe/Athens

networks:
  twitter_articles:
    ipam:
      config:
        - subnet: 172.24.0.0/24
And the three Dockerfiles they are built from:
nodeserver:
FROM node:14.16.1
COPY package*.json ./
RUN npm install
COPY . ./
CMD [ "npm", "start"]
mongodb:
FROM mongo:5.0.3
CMD docker-entrypoint.sh mongod
pythonscript:
FROM python:3.9
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./
CMD [ "python", "-u", "./init2.py" ]
As mentioned before, without Docker the app works just fine and the database never shows this wiping behaviour. I have also tried Docker's internal storage, which does the same thing. Checking the logs, I saw that an error occurs in the pythonscript container each time the database is wiped. I know an error is expected in pythonscript, but there is no code anywhere in the app that deletes collections or databases (and without Docker this error still happens, but nothing gets deleted).
Any ideas?
You can create an external volume and store the MongoDB data in it. That way your data doesn't get wiped even when you take your docker-compose stack down.
version: '3'
services:
  node:
    restart: always
    build: ./nodeServer
    container_name: nodeserver
    ports:
      - 5000:5000
    depends_on:
      - database
    networks:
      twitter_articles:
        ipv4_address: 172.24.0.2
    environment:
      - TZ=Europe/Athens

  database:
    restart: always
    build: ./mongoDump/database
    container_name: mongodb
    ports:
      - 27017:27017
    networks:
      twitter_articles:
        ipv4_address: 172.24.0.4
    volumes:
      - mongo_data:/data/db
    environment:
      - TZ=Europe/Athens

  pythonscript:
    restart: always
    build: ./python
    container_name: pythonscript
    depends_on:
      - database
    networks:
      twitter_articles:
        ipv4_address: 172.24.0.3
    environment:
      - TZ=Europe/Athens

networks:
  twitter_articles:
    ipam:
      config:
        - subnet: 172.24.0.0/24

volumes:
  mongo_data:
    external: true
Now create the volume in Docker:
docker volume create --name=mongo_data
then
docker-compose down
and
docker-compose up --build -d
I have been advised that it is always a better idea to save data outside of the Docker container, in a separate volume. Look for a tutorial on Docker volumes.
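A minimal sketch of that idea, trimmed to the database service (names are illustrative): the named volume lives outside the container's writable layer, so it survives container recreation.

services:
  database:
    image: mongo:5.0.3
    volumes:
      # mongo_data is managed by Docker and persists across docker-compose down/up
      - mongo_data:/data/db

volumes:
  mongo_data: {}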
You need to make a persistent volume for your database, because as you noted your docker-compose.yml file has:
restart: always
so every time your Python script hits an error it stops, and since it depends on the database, the database container is restarted and the data gets wiped.
Make sure the data is stored outside the Docker container, because containers are treated like cattle, not pets: new containers are created fresh, with no data from the previous version.
I'd ensure that the container user has a pre-configured ID with write access to the host folder targeted for DB data persistence.
I'd also use an absolute path on the host side when mapping persistent data folders in Docker.
Referring to:
volumes:
  - ./data:/data/db
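A sketch combining both suggestions (the host path and IDs are illustrative; 999 is assumed to be the mongodb user in the official image):

services:
  database:
    build: ./mongoDump/database
    # Assumption: UID/GID 999 matches the mongodb user inside the image;
    # the host directory below must be writable by that ID.
    user: "999:999"
    volumes:
      # Absolute host path instead of ./data
      - /srv/twitter_articles/mongo-data:/data/db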

docker compose phpmyadmin php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution

I am trying to set up a Docker pod with Laravel, MariaDB, nginx, Redis and phpMyAdmin. The Laravel webspace works fine, but if I switch to port 10081, as configured in the docker-compose.yml, I am not able to log in with the root account.
It says "mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution".
I already tried to configure a "my-network" that links all of the containers, but if I understand Docker right there is already a "default" network which does this. It didn't change the error message anyway.
Here is my full docker-compose file:
version: "3.8"
services:
redis:
image: redis:6.0-alpine
expose:
- "6380"
db:
image: mariadb:10.4
ports:
- "3307:3306"
environment:
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: laravel
volumes:
- db-data:/var/lib/mysql
nginx:
image: nginx:1.19-alpine
build:
context: .
dockerfile: ./docker/nginx.Dockerfile
restart: always
depends_on:
- php
ports:
- "10080:80"
networks:
- default
environment:
VIRTUAL_HOST: cockpit.example.de
volumes:
- ./docker/nginx.conf:/etc/nginx/nginx.conf:ro
- ./public:/app/public:ro
php:
build:
target: dev
context: .
dockerfile: ./docker/php.Dockerfile
working_dir: /app
env_file: .env
restart: always
expose:
- "9000"
depends_on:
- composer
- redis
- db
volumes:
- ./:/app
- ./docker/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
links:
- db:mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 10081:80
restart: always
environment:
PMA_HOST : db
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
depends_on:
- db
#user: "109:115"
links:
- db:mysql
node:
image: node:12-alpine
working_dir: /app
volumes:
- ./:/app
command: sh -c "npm install && npm run watch"
composer:
image: composer:1.10
working_dir: /app
#environment:
#SSH_AUTH_SOCK: /ssh-auth.sock
volumes:
- ./:/app
#- "$SSH_AUTH_SOCK:/ssh-auth.sock"
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
command: composer install --ignore-platform-reqs --no-scripts
volumes:
db-data:
Make sure you have defined all attributes correctly for the phpmyadmin container; in the current case the network definition was missing:
phpmyadmin:
  image: phpmyadmin/phpmyadmin:latest
  container_name: phpmyadmin
  restart: always
  ports:
    # 8080 is the host port and 80 is the container port
    - 8080:80
  environment:
    - PMA_ARBITRARY=1
    - PMA_HOST=mysql
    - MYSQL_USERNAME=root
    - MYSQL_ROOT_PASSWORD=secret
  depends_on:
    - mysql
  networks:
    # define your network where all containers are connected to each other
    - laravel
  volumes:
    # define the directory path where you shall store your persistent data
    # and config files of phpmyadmin
    - ./docker/phpmyadmin
Maybe your container cannot start because its volume contains incompatible data. This can happen if you downgrade the version of the mysql or mariadb image.
You can resolve the problem by removing the volume and importing the database again. You may want to create a backup first.
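As a sketch of that recovery path (the volume name is hypothetical; check docker volume ls for the real one, and take a dump first if the old data still matters):

docker-compose down
docker volume rm myproject_db-data   # hypothetical <project>_<volume> name
docker-compose up -d                 # database re-initializes / re-imports on first start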

How to solve problem with empty docker-entrypoint-initdb.d? (PostgresQL + Docker)

Here is a part of my project structure, a part of my docker-compose.yml file, and my Dockerfile (which is inside the postgres-passport folder); these appeared as screenshots in the original post.
I have an init.sql script which should create the user, the database and the tables (the user and db are the same as in the docker-compose.yml file).
But when I look into my docker-entrypoint-initdb.d folder it is empty (there is no init.sql file). I use this command:
docker exec latest_postgres-passport_1 ls -l docker-entrypoint-initdb.d/
On my server (Ubuntu) the listing comes back empty.
I need your help: what am I doing wrong? How can I copy a folder with the init.sql script? Postgres tells me
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
(as it can't find this folder).
All code in text format below:
Full docker-compose.yml:
version: '3'

volumes:
  redis_data: {}
  proxy_certs: {}
  nsq_data: {}
  postgres_passport_data: {}
  storage_data: {}

services:
  # ####################################################################
  # Http services
  # ####################################################################
  back-passport:
    image: ${REGISTRY_BASE_URL}/backend:${TAG}
    restart: always
    expose:
      - 9000
    depends_on:
      - postgres-passport
      - redis
      - nsq
    environment:
      ACCESS_LOG: ${ACCESS_LOG}
      AFTER_CONFIRM_BASE_URL: ${AFTER_CONFIRM_BASE_URL}
      CONFIRM_BASE_URL: ${CONFIRM_BASE_URL}
      COOKIE_DOMAIN: ${COOKIE_DOMAIN}
      COOKIE_SECURE: ${COOKIE_SECURE}
      DEBUG: ${DEBUG}
      POSTGRES_URL: ${POSTGRES_URL_PASSPORT}
      NSQ_ADDR: ${NSQ_ADDR}
      REDIS_URL: ${REDIS_URL}
      SIGNING_KEY: ${SIGNING_KEY}
    command: "passport"

  # ####################################################################
  # Background services
  # ####################################################################
  back-email:
    image: ${REGISTRY_BASE_URL}/backend:${TAG}
    restart: always
    depends_on:
      - nsqlookup
    environment:
      DEFAULT_FROM: ${EMAIL_DEFAULT_FROM}
      NSQLOOKUP_ADDR: ${NSQLOOKUP_ADDR}
      MAILGUN_DOMAIN: ${MAILGUN_DOMAIN}
      MAILGUN_API_KEY: ${MAILGUN_API_KEY}
      TEMPLATES_DIR: "/var/templates/email"
    command: "email"

  # ####################################################################
  # Frontend apps
  # ####################################################################
  front-passport:
    image: ${REGISTRY_BASE_URL}/frontend-passport:${TAG}
    restart: always
    expose:
      - 80

  # ####################################################################
  # Reverse proxy
  # ####################################################################
  proxy:
    image: ${REGISTRY_BASE_URL}/proxy:${TAG}
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - "proxy_certs:/root/.caddy"
    environment:
      CLOUDFLARE_EMAIL: ${CLOUDFLARE_EMAIL}
      CLOUDFLARE_API_KEY: ${CLOUDFLARE_API_KEY}
      # ACME_AGREE: 'true'

  # ####################################################################
  # Services (database, event bus etc)
  # ####################################################################
  postgres-passport:
    image: postgres:latest
    restart: always
    expose:
      - 5432
    volumes:
      - "./postgres-passport:/docker-entrypoint-initdb.d"
      - "./data/postgres_passport_data:/var/lib/postgresql/data"
    environment:
      POSTGRES_DB: ${POSTGRES_PASSPORT_DB}
      POSTGRES_USER: ${POSTGRES_PASSPORT_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSPORT_PASSWORD}

  redis:
    image: redis
    restart: always
    expose:
      - 6379
    volumes:
      - "redis_data:/data"

  nsqlookup:
    image: nsqio/nsq:v1.1.0
    restart: always
    expose:
      - 4160
      - 4161
    command: /nsqlookupd

  nsq:
    image: nsqio/nsq:v1.1.0
    restart: always
    depends_on:
      - nsqlookup
    expose:
      - 4150
      - 4151
    volumes:
      - "nsq_data:/data"
    command: /nsqd --lookupd-tcp-address=nsqlookup:4160 --data-path=/data

  # ####################################################################
  # Ofelia cron job scheduler for docker
  # ####################################################################
  scheduler:
    image: mcuadros/ofelia
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./etc/scheduler:/etc/ofelia"
Dockerfile:
FROM postgres:latest
COPY init.sql /docker-entrypoint-initdb.d/
In your docker-compose.yml file, you say in part:
postgres-passport:
  image: postgres:latest
  volumes:
    - "./postgres-passport:/docker-entrypoint-initdb.d"
    - "./data/postgres_passport_data:/var/lib/postgresql/data"
So you're running the stock postgres image (the Dockerfile you show never gets called); and whatever's in your local postgres-passport directory, starting from the same directory as the docker-compose.yml file, appears as the /docker-entrypoint-initdb.d directory inside the container.
In the directory tree you show, if you run
cd deploy/latest
docker-compose up
then ./postgres-passport is expected to be in the deploy/latest tree. Since it's not actually there, Docker doesn't complain, but just creates it as an empty directory.
If you're just trying to inject this configuration file, using a volume is a reasonable way to do it; you don't need the Dockerfile. However, you need to give the correct path to the directory you're trying to mount into the container.
postgres-passport:
  image: postgres:latest
  volumes:
    # vvv Change this path vvv
    - "../../postgres-passport/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d"
    - "./data/postgres_passport_data:/var/lib/postgresql/data"
If you want to use that Dockerfile instead, you need to tell Docker Compose to build the custom image instead of using the standard one. Since you're building the init file into the image, you don't also need a bind-mount of the same file.
postgres-passport:
  build: ../../postgres-passport
  volumes:
    # Only this one
    - "./data/postgres_passport_data:/var/lib/postgresql/data"
(You will also need to adjust the COPY statement to match the path layout; just copying the entire local docker-entrypoint-initdb.d directory into the image is probably the most straightforward thing.)
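A sketch of that adjusted Dockerfile, assuming the init scripts sit in a docker-entrypoint-initdb.d/ directory next to it:

FROM postgres:latest
# Assumption: init.sql (and any other init scripts) live in
# docker-entrypoint-initdb.d/ alongside this Dockerfile; COPY with a
# trailing slash copies the directory's contents into the target.
COPY docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/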

Docker compose fiware WireCloud data persistance not loaded from volume

I am using a Docker container for my FiWare WireCloud. It is working properly, but when I stop my container with docker-compose down and restart it with docker-compose up, all my data is erased, even though I specified a volume for the PostgreSQL database, and I get the following error:
ERROR: relation "wirecloud_workspace" does not exist at character 370
If I want to make it work again, I have to recreate the whole database from scratch (initdb & createsuperuser).
What I would like is to be able to save my WireCloud data inside a volume, and to be able to back it up and reload it. Here is my current docker-compose.yml file in version 3:
version: '3.3'
services:
  iot-mongo:
    image: mongo:3.2
    ports:
      - "27017:27017"
    volumes:
      - ./data/mongo:/data/db

  orion:
    image: fiware/orion:1.9.0
    links:
      - iot-mongo
    ports:
      - "1026:1026"
    command: -dbhost iot-mongo

  nginx:
    restart: always
    image: nginx:1.13
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/sites-available:/etc/nginx/sites-available
      - ./letsencrypt/well-known:/www/letsencrypt
      - /etc/letsencrypt/:/etc/letsencrypt/
      - wirecloudwww:/var/www/static
      - wirecloudinstance:/opt/wirecloud_instance
    links:
      - wirecloud:wirecloud
      - orion:orion

  postgres:
    restart: always
    image: postgres:latest
    ports:
      - "5432:5432"
    volumes:
      - postgresdata:/var/lib/postgresql
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: postgres
      PGDATA: /tmp

  wirecloud:
    restart: always
    image: fiware/wirecloud:1.0-composable
    links:
      - postgres:postgres
    volumes:
      - wirecloudwww:/var/www/static
      - wirecloudinstance:/opt/wirecloud_instance

volumes:
  wirecloudwww: {}
  wirecloudinstance: {}
  postgresdata: {}
I also tried with the docker-compose version 1 file format, like they show in the documentation, but the result is the same.
The problem is the definition of the postgres volume combined with the PGDATA environment variable. PGDATA is telling PostgreSQL to store its data in /tmp, so it is not going to store data inside the volume (you could create a volume on /tmp, but that seems a bit strange). If you remove the PGDATA environment variable, postgres will store its data in /var/lib/postgresql/data. Using this definition for the postgres service should do the trick:
postgres:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - postgresdata:/var/lib/postgresql/data
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD:
    POSTGRES_DB: postgres
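To check that the data now lands in the volume (a quick verification, not part of the original answer; the volume name prefix depends on your project directory):

docker-compose exec postgres ls /var/lib/postgresql/data   # should list PG_VERSION, base/, ...
docker volume inspect <project>_postgresdata               # shows the mountpoint on the host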

How to connect to localhost postgres database from docker container?

I've configured my project for Docker. I have a database that was used in the pre-Docker period, and now I want to connect my docker-compose db service to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the Docker container simply doesn't see the database). If I'm doing something nonsensical, please let me know. Maybe I should migrate my server DB into the container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p#ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location where
      # Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data

  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db

  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk).
So I would use Postgres in Docker as your server DB, as you suggested.
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"

  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host IP.
Regards
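As a side note beyond the original answer: on Docker Engine 20.10 and later you can avoid hard-coding the LAN IP by mapping the hostname to the special host-gateway value, which resolves to the host's IP from inside the container:

services:
  web:
    build: .
    extra_hosts:
      # host-gateway (Docker Engine 20.10+) resolves to the host's IP,
      # so the app can keep using `db` as the database hostname.
      - "db:host-gateway"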