I deploy a service on a standard Docker for AWS stack (using this template).
I deploy using docker stack deploy -c docker-compose.yml pos with this compose file:
version: "3.2"
services:
  postgres_vanilla:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql
volumes:
  db-data:
    driver: "cloudstor:aws"
    driver_opts:
      size: "6"
      ebstype: "gp2"
      backing: "relocatable"
I then change some data in the db and force an update of the service with docker service update --force pos_postgres_vanilla
The problem is that the data I changed doesn't persist after the update.
I've noticed that the postgres initdb script runs every time I update, so I assume it's related.
Is there something I'm doing wrong?
The issue was that cloudstor:aws creates the volume with a lost+found directory under it, so when postgres starts it finds that the data directory isn't empty and complains about it. To fix that, I changed the volume to be mounted one directory above the data directory, at /var/lib/postgresql, but that caused postgres not to find the PG_VERSION file, which in turn caused it to run initdb every time the container starts (https://github.com/docker-library/postgres/blob/master/11/docker-entrypoint.sh#L57).
So to work around it, instead of mounting the volume one directory above the data directory, I changed the data directory to be one level below the volume mount by overriding the PGDATA environment variable (to something like /var/lib/postgresql/data/db/).
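For reference, a minimal sketch of that workaround in the compose file (the exact PGDATA path is illustrative):
version: "3.2"
services:
  postgres_vanilla:
    image: postgres
    environment:
      # point PGDATA one level below the volume mount so lost+found stays outside it
      PGDATA: /var/lib/postgresql/data/db
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
    driver: "cloudstor:aws"
    driver_opts:
      size: "6"
      ebstype: "gp2"
      backing: "relocatable"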
I am not able to log in to my postgres database deployed in docker. Please find below my docker-compose.yml:
discountdb:
  image: postgres
docker-compose.override.yml
discountdb:
  container_name: discountdb
  environment:
    - POSTGRES_USER=admin
    - POSTGRES_PASSWORD=admin1234
    - POSTGRES_DB=DiscountDb
  restart: always
  ports:
    - "5432:5432"
  volumes:
    - postgres_data:/var/lib/postgresql/data/
When I try to create a server from pgAdmin4, I get the following error.
Following are the db logs.
What did I miss?
The environment variables you set are only used if Postgres can't find an existing database when the container starts.
Since you map a docker volume to /var/lib/postgresql/data/, chances are that you already have a database with existing users defined.
Try removing the volume mapping, so you're sure that Postgres creates a fresh database. If that solves it, then you have two options:
If you don't need the data in the volume, you can delete the postgres_data volume so Postgres creates a new database
If you need the data, you need to find out what userid/password you need to use to access the existing database in the volume
Nothing is wrong here, but POSTGRES_PASSWORD is only taken into account the first time you start the container, i.e. when your postgres_data volume is still empty. If you changed the password but postgres_data is not empty, the new password is ignored; you must log in with the first one.
The issue was with my postgres_data volume. I removed the volume using the command
docker volume rm -f discountdb
then ran docker-compose again, which resolved the issue.
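As a side note, volumes created by docker-compose usually carry the project name as a prefix, so it is worth listing them first; a sketch (the discount project name is an assumption):
# list volumes to find the exact prefixed name
docker volume ls
# remove the stack together with its named volumes
docker-compose down --volumes
# or remove just the data volume (project prefix assumed)
docker volume rm discount_postgres_data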
I have a docker-compose.yml which I am using to deploy to a remote host from my local (Mac) machine using docker context. The compose config is as follows:
database:
  image: postgres:14.2
  restart: on-failure
  volumes:
    - ./db-data:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: db
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
In order to persist data, I have defined a volume ./db-data:/var/lib/postgresql/data. This db-data folder does not exist on my local machine. I want to delete this mount completely because I don't want any of the previously persisted data. I know I could define a new volume directory, but I would like to use the same directory name (db-data). I have tried the following:
docker compose down --volumes --remove-orphans - when I recreate the container, the previously persisted data still exists
There is no folder called ./db-data in my Mac working directory.
I tried searching /var/lib/docker on my Mac, but that directory does not exist.
Docker for Mac app doesn't list any volumes
There is no db-data in the remote host where the database is deployed
Running docker inspect <container-id> listed the mount directory for the container. The mount directory resembled an absolute path on my local computer, for example /Users/<user-name>/dir/db-data. When I saw this I assumed it had to be on the local computer because of the /Users/<user-name> prefix, but the path was actually found in the root of the remote machine.
That's because on macOS the docker volumes directory lives inside the Docker VM.
Where is /var/lib/docker on Mac/OS X
You would have to follow that to see the volume.
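To clean up remote data in a case like this, you can run the inspect against the remote engine and remove the bind directory on the remote host itself; a sketch, assuming a docker context named remote and SSH access (names are illustrative):
# inspect against the remote engine, not the local VM
docker --context remote inspect --format '{{json .Mounts}}' <container-id>
# the bind source lives on the remote host's filesystem, so delete it there
ssh user@remote-host 'rm -rf /Users/<user-name>/dir/db-data'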
Let's say I have the following setup in my docker-compose.yml.
services:
  postgres:
    image: postgres:11.6
    env_file:
      - .local.env
    volumes:
      - ./database/:/docker-entrypoint-initdb.d
    ports:
      - 5432:5432
...
where ./database contains some SQL files that initialize the database. Here's my question... is initdb run every single time the stopped postgres container starts running again (via $ docker-compose up)?
Thus, is it fair to say that every time I restart my postgres container, it builds the entire database from scratch all over again?
My guess is 'yes' as in the documentation it says
The default postgres user and database are created in the entrypoint with initdb.
The answer is no: when you stop your container it is not deleted, only stopped, and you can start it again later, the same way your computer does not vanish from your desk when you shut it down. :)
You can even restart it while it is running, just as you would restart your computer.
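To make the stop/start side concrete, a minimal sketch using the service name from the question:
# stop keeps the container and its writable layer intact
docker-compose stop postgres
# start brings the same container back; no initdb run, data intact
docker-compose start postgres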
However when you remove/delete the container with
docker rm -f containername
or
docker-compose rm
then it is truly deleted, the equivalent of making your computer vanish from your desk.
But even then you can still persist your data with volume mounts: for example, the ./database directory in your compose file will not be deleted from your host machine even when you delete the containers using it. It is the equivalent of using an external USB drive with your computer: when you make your computer vanish from your desk by deleting it, you still have the USB drive with the data that was on it.
So you can persist your database files with the same technique in a volume mount like this:
services:
  postgres:
    image: postgres:11.6
    env_file:
      - .local.env
    volumes:
      - ./database/:/docker-entrypoint-initdb.d
      - ./postgres-data/data:/var/lib/postgresql/data
    ports:
      - 5432:5432
...
This way, when you delete your container(s) and run docker-compose up again with the same compose file, postgres will not run its init scripts, because the /var/lib/postgresql/data directory inside the container is already populated.
However, my computer analogy is valid only in this context; please do not think of containers as mini computers or mini virtual machines, they are not! But that's another discussion.
I am playing with MongoDB and Docker and at this point I am trying to create a useful image for myself to use at work. I have created the following Dockerfile:
FROM mongo:2.6
VOLUME /data/db /data/configdb
CMD ["mongod"]
EXPOSE 27017
And I have added it to my docker-compose.yml file:
version: '2'
services:
  ### PHP/Apache Container
  php-apache:
    container_name: "php55-dev"
    image: reynierpm/php55-dev
    ports:
      - "80:80"
    environment:
      PHP_ERROR_REPORTING: 'E_ALL & ~E_DEPRECATED & ~E_NOTICE'
    volumes:
      - ~/mmi:/var/www
      - ~/data:/data
    links:
      - mongodb
  ### MongoDB Container
  mongodb:
    container_name: "mongodb"
    build: ./mongo
    environment:
      MONGODB_USER: "xxxx"
      MONGODB_DATABASE: "xxxx"
      MONGODB_PASS: "xxxx"
    ports:
      - "27017:27017"
    volumes:
      - ~/data/mongo:/data/db
I have some questions regarding this setup:
Do I need VOLUME /data/db /data/configdb in the Dockerfile, or would it be enough to have the line ~/data/mongo:/data/configdb in docker-compose.yml?
I am assuming (and I took it from here) that as soon as I build the Mongo image, a database will be created and the user given full permissions with the password from the environment variables. Am I right? (I couldn't find anything helpful here.)
How do I import an existing mongo backup (several JSON files) into the database that should be created in the mongo container? I believe I need to run the mongorestore command, but how? Do I need to create a script and run it each time the container starts, or should I run it during the image build? What's the best approach?
Do I need VOLUME /data/db /data/configdb in the Dockerfile, or would it be enough to have the line ~/data/mongo:/data/configdb in docker-compose.yml?
VOLUME is not required when you are mounting a host directory, but it is helpful as metadata. VOLUME does provide special "copy data on volume creation" semantics when mounting a Docker volume (not a host dir), which will impact your choice of data initialisation method.
I am assuming (and I took it from here) that as soon as I build the Mongo image, a database will be created and the user given full permissions with the password from the environment variables. Am I right? (I couldn't find anything helpful here.)
MONGODB_USER, MONGODB_DATABASE and MONGODB_PASS do not do anything in the official mongo Docker image or in mongod itself.
The mongo image has added support for similar environment variables:
MONGO_INITDB_ROOT_USERNAME
MONGO_INITDB_ROOT_PASSWORD
MONGO_INITDB_DATABASE
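So with a recent official image, the intended setup would look roughly like this (the values are placeholders):
mongodb:
  image: mongo
  environment:
    MONGO_INITDB_ROOT_USERNAME: "xxxx"
    MONGO_INITDB_ROOT_PASSWORD: "xxxx"
    MONGO_INITDB_DATABASE: "xxxx"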
How do I import an existing mongo backup (several JSON files) into the database that should be created in the mongo container? I believe I need to run the mongorestore command, but how? Do I need to create a script and run it each time the container starts, or should I run it during the image build? What's the best approach?
Whether you initialise data at build time or run time is up to your usage. As mentioned previously, Docker can copy data from a specified VOLUME into a volume it creates. If you are mounting a host directory, you probably need to do the initialisation at run time.
mongorestore requires a running server to restore to. During a build you would need to launch the server and restore in the same RUN step. At runtime you might need to include a startup script that checks for the existence of your database.
Mongo is able to initialise any empty directory into a blank mongo instance, so you don't need to worry about mongo not starting.
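For runtime initialisation with a recent official mongo image (not 2.6), one option is a script in /docker-entrypoint-initdb.d, which the entrypoint runs only when the data directory is empty; a sketch, assuming the dump is mounted at /dump (the path and the file name are assumptions):
#!/bin/bash
# /docker-entrypoint-initdb.d/restore.sh
# runs only on first start, while the entrypoint's temporary mongod is up
mongorestore --dir /dump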
It is quite easy to run MongoDB containerised using docker, though each time you start a new mongodb container, you get a new empty database.
What should I do in order to keep the database content between container restarts? I tried to bind an external directory to the container using the -v option, but without any success.
I tried using the ehazlett/mongodb image and it worked fine.
With this image, you can easily specify where mongo stores its data with the DATA_DIR env variable. I am sure it must not be very difficult to change on your image too.
Here is what I did:
mkdir test; docker run -v `pwd`/test:/tmp/mongo -e DATA_DIR=/tmp/mongo ehazlett/mongodb
Notice the `pwd` within the -v option: as the server and the client might have different paths, it is important to specify the absolute path.
With this command, I can run mongo as many times as I want and the database will always be stored in the ./test directory I just created.
When using the official Mongo docker image, which is version mongo:4.2.2-bionic as of writing this answer, and using docker-compose, you can achieve persistent data storage using this docker-compose.yml file example.
In the official mongo image, data is stored in the container under the root directory in the folder /data/db by default.
Map this folder to a folder in your local working directory called data (in this example).
Make sure ports are set and mapped, default 27017-27019:27017-27019.
Example of my docker-compose.yml:
version: "3.2"
services:
  mongodb:
    image: mongo:4.2.2-bionic
    container_name: mongodb
    restart: unless-stopped
    ports:
      - 27017-27019:27017-27019
    volumes:
      - ./data:/data/db
Run docker-compose up in the directory where the yml file is located to run the mongodb container with persistent storage. If you do not have the official image yet, it will be pulled from Docker Hub first.
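To convince yourself the data survives a container recreation, something along these lines should work (the test collection and document are arbitrary):
# write a document, recreate the container, then read it back
docker-compose exec mongodb mongo --eval 'db.test.insertOne({ok: 1})'
docker-compose down && docker-compose up -d
docker-compose exec mongodb mongo --eval 'db.test.find()'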
Old post, but maybe someone still needs a quick and easy solution...
The easiest way I found is using a bind volume.
Following that way you can easily attach existing MongoDB data, and it will live on even after you destroy the container.
Create a volume that points to your folder (it may include an existing db). In my case it's done under Windows, but you can do it on any file system:
docker volume create --opt type=none --opt o=bind --opt device=d:/data/db db
Create/run a docker container with MongoDB using that volume binding:
docker run --name mongodb -d -p 27017:27017 -v db:/data/db mongo
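You can verify the binding afterwards; docker volume inspect should show the device path you passed in:
# confirm the volume points at the host folder
docker volume inspect --format '{{.Options.device}}' db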