Here is my docker-compose file:
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data: {}
I would like to back up the data volume.
Is there a command for that in Docker? I am using a Mac locally and Linux on my server.
Volumes are nothing special in Docker: they are just directories/files on the host. When you use a named volume in a compose file, Docker creates a directory and mounts it inside your container when the container runs. You can see this with:
docker inspect <container name/id>
In the output you will find information about the volumes, including the directory they live in on the host OS.
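For example, to print only the mount details (the -f/--format flag is a standard docker inspect option; the container name is whatever compose generated for the postgres9 service):
docker inspect -f '{{ json .Mounts }}' <container name/id>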
To back up your volume you can simply compress and store that directory using tar. To do that you need to know the path of the directory. You can either mount a directory from the host OS, like:
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - /var/lib/postgresql/data:/var/lib/postgresql/data
and then back up /var/lib/postgresql/data either by mounting it into another container or directly from the host OS.
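For example, a backup taken directly on the host might look like this (the archive path is just an illustration; stop the database container first so the files are in a consistent state):
sudo tar czf /tmp/pg-data-backup.tar.gz -C /var/lib/postgresql data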
Alternatively, you can mount the same volume into another container in read-only mode and back up the directory from there:
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
  backuptool:
    image: busybox:latest
    volumes:
      - data:/data:ro
volumes:
  data: {}
You can then tar and upload the backup of /data from the backuptool container.
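A minimal sketch of that backup step, assuming the compose file above (the -T flag disables the pseudo-TTY so the tar stream can be redirected to a file; the archive name is just an illustration):
docker-compose run --rm -T backuptool tar czf - /data > pg-data-backup.tar.gz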
Related
I am trying to insert data into Postgres using Docker.
I have a folder in my code named data which contains insert commands in a single file named init.sql.
I want to insert the data from init.sql in the data folder into the tables in the Dockerized database.
version: '3.1'
services:
  postgres:
    image: postgres:11.6-alpine
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PORT: 5432
    volumes:
      - ./tables:/docker-entrypoint-initdb.d/
      - ./data:/var/lib/postgresql/data
volumes:
  data: {}
I am trying this but I get the error:
initdb: directory "/var/lib/postgresql/data" exists but is not empty
I think I am not using the correct approach; I am new to Docker Compose.
Is there any way to satisfy my use case?
This is caused by an improper usage of the volumes syntax for your named volume.
In order to mount a named volume, you just use its name, like this:
volumes:
  - data:/var/lib/postgresql/data
If your syntax begins with a . then it will be a bind mount from your host.
volumes:
  - ./data:/var/lib/postgresql/data
The above mounts the host folder data relative to where your docker-compose.yml is located.
This docker-compose.yml should do what you expect.
version: '3.1'
services:
  postgres:
    image: postgres:11.6-alpine
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PORT: 5432
    volumes:
      - ./tables:/docker-entrypoint-initdb.d/
      - data:/var/lib/postgresql/data
volumes:
  data:
If for some reason your volume has already been created, with an empty database or no database at all, your first step should be running:
docker-compose down --volumes
From the documentation:
-v, --volumes Remove named volumes declared in the `volumes`
section of the Compose file and anonymous volumes
attached to containers.
From: https://docs.docker.com/compose/reference/down/
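Assuming you can discard the mis-initialized data, a typical sequence after correcting the compose file looks like this (the scripts mounted into /docker-entrypoint-initdb.d only run when the data directory is empty):
docker-compose down --volumes   # removes the containers and the old named volume
docker-compose up -d            # recreates the volume; the init scripts run again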
I have a Docker Compose file with several services. One of them is the database, whose volumes I would like to back up so I can migrate all the data to another machine.
My docker-compose.yml looks like this
version: '3'
services:
  service1:
    ...
  serviceN:
  db:
    image: postgres:11
    ports:
      - 5432:5432
    networks:
      - postgresnet
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
volumes:
  postgresql:
  postgresql_data:
networks:
  postgresnet:
    driver: bridge
How could I back up the data in the postgresql and postgresql_data volumes and migrate it to another machine?
The easiest way is to share an external volume between your docker-compose files.
First, create the volume:
docker volume create shared-data
Next, modify your yml:
...
volumes:
  postgresql:
  postgresql_data:
    external:
      name: shared-data
...
Now postgresql_data is mapped to the external volume, and everything you save there is visible outside this compose project. Just use the same configuration in another docker-compose.yml and you're done.
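To actually move the data to the other machine, one possible sketch (file names and paths here are only illustrative) is to archive the volume with a throwaway container, copy the archive over, and unpack it into a volume of the same name on the target host:
# on the source machine: archive the contents of the shared volume
docker run --rm -v shared-data:/source:ro -v "$(pwd)":/backup busybox \
  tar czf /backup/shared-data.tar.gz -C /source .
# copy shared-data.tar.gz to the target machine, then restore it there
docker volume create shared-data
docker run --rm -v shared-data:/target -v "$(pwd)":/backup busybox \
  tar xzf /backup/shared-data.tar.gz -C /target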
I have Docker installed on Windows. I am trying to install this application, which provides the following docker-compose.yml file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "8085:80"
    networks:
      - attendizenet
    volumes:
      - .:/usr/share/nginx/html/attendize
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    networks:
      - attendizenet
  php-worker:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    command: php artisan queue:work --daemon
    networks:
      - attendizenet
  db:
    image: postgres
    environment:
      - POSTGRES_USER=attendize
      - POSTGRES_PASSWORD=attendize
      - POSTGRES_DB=attendize
    ports:
      - "5433:5432"
    volumes:
      - ./docker/pgdata:/var/lib/postgresql/data
    networks:
      - attendizenet
  maildev:
    image: djfarrelly/maildev
    ports:
      - "1080:80"
    networks:
      - attendizenet
  redis:
    image: redis
    networks:
      - attendizenet
networks:
  attendizenet:
    driver: bridge
The installation goes well, but the PostgreSQL container stops a moment after starting, giving the following error:
2018-03-07 08:24:47.927 UTC [1] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2018-03-07 08:24:47.927 UTC [1] HINT: The server must be started by the user that owns the data directory
A plain PostgreSQL container from Docker Hub works smoothly, but the error occurs when we attach a volume to the container.
I am new to Docker, so please excuse any incorrect terminology.
This is a documented problem with the Postgres Docker image on Windows [1][2][3][4]. Currently, there doesn't appear to be a way to correctly mount Windows directories as volumes. You could instead use a persistent Docker volume, for example:
db:
  image: postgres
  environment:
    - POSTGRES_USER=attendize
    - POSTGRES_PASSWORD=attendize
    - POSTGRES_DB=attendize
  ports:
    - "5433:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data
  networks:
    - attendizenet
volumes:
  pgdata:
Other things that didn't work:
Set PGDATA to a subdirectory (See PGDATA Setting)
environment:
  - PGDATA=/var/lib/postgresql/data/mnt
volumes:
  - ./pgdata:/var/lib/postgresql/data
Use a Bind Mount (docker-compose 3.2)
volumes:
  - type: bind
    source: ./pgdata
    target: /var/lib/postgresql/data
Running as POSTGRES_USER=root
More Information:
GitHub
data directory "/var/lib/postgresql/data" has wrong ownership
Docker Forums
postgresql-data-pgdata-has-wrong-ownership
postgres-to-work-on-persistent-windows-mount
Please refer to reinierkors' answer here. The answer below is copied as-is from that link for the reader's convenience; it works for me.
I solved this by mapping my local volume one directory below the one Postgres needs:
version: '3'
services:
  postgres:
    image: postgres
    restart: on-failure
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_DB=postgres
    volumes:
      - ./postgres_data:/var/lib/postgresql
    ports:
      - 5432:5432
I was having the same issue after downgrading Docker from WSL 2 to WSL 1. As in Thomas Taylor's answer, I solved the issue by using a named volume.
version: '3.8'
services:
  postgres:
    image: timescale/timescaledb:latest-pg12
    ...
    volumes:
      - pgdata:/var/lib/postgresql/data
    ...
volumes:
  pgdata:
Map the local volume (e.g. C:\docker\pgdata) to one level (one directory) above what PostgreSQL needs. You can also do it from the command line when starting the container:
docker run -itd -e POSTGRES_USER=pguser -e POSTGRES_PASSWORD=pgpasswd \
-e PGDATA=/var/lib/postgresql/data/pgdata -p 5432:5432 \
-v c:\docker\pgdata:/var/lib/postgresql --name postgresql postgres
I hit this issue when I reinstalled Docker and used the WSL 1 backend.
Solution: switch Docker to the WSL 2 backend.
I had the same problem, and I had to copy the data directory out at regular intervals:
docker cp <container-name>:/var/lib/postgresql/data C:/docker/volumes/postgres
Inside the container, the Postgres data folder is owned by the postgres user, so your current user may not have access privileges on the mounted folder. You can grant the needed permissions with the command below:
chmod 777 ./docker/pgdata
If this command does not resolve the issue, refer to the following link to map the user inside the container to a user outside the container:
https://docs.docker.com/engine/security/userns-remap/#prerequisites
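If you prefer a less permissive alternative to chmod 777, and assuming the postgres user inside the official image is uid/gid 999 (the long-standing default, but verify it for your image), you can instead change the ownership of the host directory:
sudo chown -R 999:999 ./docker/pgdata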
I'm trying to persist postgres data in a docker container so that as soon as you docker-compose down and docker-compose up -d you don't lose data from your previous session. I haven't been able to make it do much of anything - pulling the container down and back up again routinely deletes the data.
Here's my current docker-compose.yml:
version: '2'
services:
  api:
    build: .
    ports:
      - '8245:8245'
    volumes:
      - .:/home/app/api
      - /home/app/api/node_modules
      - /home/app/api/public/src/bower_components
    links:
      - db
  db:
    build: ./database
    env_file: .env
    ports:
      - '8246:5432'
    volumes_from:
      - dbdata
  dbdata:
    image: "postgres:9.5.2"
    volumes:
      - /var/lib/postgresql/data
Help?
According to the Docker Compose documentation, when you write something like:
volumes:
  - /var/lib/postgresql/data
it creates a new anonymous Docker volume and maps it to /var/lib/postgresql/data inside the container.
Therefore, each time you run docker-compose down and docker-compose up, a new volume is created. You can confirm this behavior with docker volume ls.
To avoid it, you have two options:
(A) Map a host directory into the container
You can map a host directory into the container using <HOST_PATH>:<CONTAINER_PATH>:
volumes:
  - /path/to/your/host/directory:/var/lib/postgresql/data
The PostgreSQL data will be saved to /path/to/your/host/directory on the container host.
(B) Use an external volume
docker-compose has an external volume option.
When it is set to true, docker-compose won't create a new volume on each run.
Here's an example.
version: '2'
services:
  dbdata:
    image: postgres:9.5.2
    volumes:
      - mypostgresdb:/var/lib/postgresql/data
volumes:
  mypostgresdb:
    external: true
With external: true, docker-compose won't create the mypostgresdb volume, so you have to create it yourself using the following command:
docker volume create --name=mypostgresdb
The PostgreSQL data will be saved into the Docker volume named mypostgresdb. Read the reference for more detail.
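You can then confirm the volume exists (and see where Docker stores it on the host) with:
docker volume inspect mypostgresdb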
I have a multi-container application that uses the postgres image in its docker-compose.yml file. The Postgres container has a volume on the host machine for persistent storage.
When I run docker-compose up the first time, everything is fine and Postgres creates its db files in my host folder.
After that, I need to shut the application down temporarily with docker-compose down whenever I change the code of the web container.
When I run docker-compose up a second time, Postgres overwrites all the db files, but I need that data to stay unchanged. How can I solve this issue?
My docker-compose.yml
version: '2'
services:
  web:
    build: ./web
    command: python3 main.py
    volumes:
      - ./web:/app
    ports:
      - "80:80"
    depends_on:
      - db
      - redis
    links:
      - db:db
      - redis:redis
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD:0000
    volumes:
      - ./pgdb:/var/lib/postgresql/data
  redis:
    image: redis
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
    volumes:
      - ./redisdb:/data
I solved this problem. It probably occurred because I had changed the permissions of the pgdb directory with the host root user. By default I couldn't open pgdb on the host machine because its owner is the postgres user. I could be wrong, but after I stopped changing the permissions the problem went away.