Return docker to a previous state - postgresql

I have a docker-compose file to run a Postgres DB like this:
version: '3.8'
services:
  postgres:
    image: postgres:9.6.2
    container_name: postgres
    ports:
      - "5432:5432"
Then I can start it with this command:
docker-compose up postgres
Then I run some scripts to create a new schema, then create tables and populate them with some data.
At this point the db is ready to be used by an application, make some tests, insert/update/delete data in tables, etc.
After running these tests is there any way to go back or restore the db to the status it was before running the tests?
Maybe there's some command in docker to take a snapshot.
Is that possible?

You can simply create a Dockerfile, include your initialization scripts in it, and build an image that contains your prepared database. Once you have that, just use this same image in your docker-compose file. Then a docker-compose down followed by docker-compose up will give you a container in its previous state.
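If you would rather snapshot the running database than rebuild the image, a rough sketch using the standard client tools shipped in the postgres image (the postgres user/database names and the snapshot.sql file name are assumptions; adjust them to your setup):

# take a snapshot of the current schema and data before the tests
docker exec postgres pg_dump -U postgres --clean --if-exists postgres > snapshot.sql

# ... run the tests, modify data ...

# replay the dump; --clean above makes it drop and recreate the objects first
docker exec -i postgres psql -U postgres postgres < snapshot.sql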

Related

Postgres running via docker not persisting data after initialization script

I'm using Docker for the first time to set up a test database that my team can then use. I'm having some trouble getting my data to show up in DBeaver after running my docker-compose file. The issue I'm facing is that my database does not appear in DBeaver (along with the relevant schemas and tables that I also create and populate in my initialization SQL script).
Here is my docker-compose.yml
version: "3"
services:
test_database:
image: postgres:latest
build:
context: ./
dockerfile: Dockerfile
restart: always
ports:
- 5432:5432
environment:
- POSTGRES_USER=dev
- POSTGRES_PASSWORD=test1234
- POSTGRES_DB=testdb
container_name: test_database
In this, I specify the Dockerfile I want it to use for building. Here is the Dockerfile:
# syntax = docker/dockerfile:1.3
FROM postgres:latest
ADD test_data.tar .
COPY init_test_db.sql /docker-entrypoint-initdb.d/
Now, when I run docker-compose build and docker-compose up, I can see through the logs that my SQL commands (CREATE, COPY, etc.) do get executed and the rows do get added. But when I connect to this instance through DBeaver, I can't see any of this. In fact, the only database on there is the default postgres database, even though the logs say I'm connected to test_database.
I followed some other solutions and used docker volume prune as well, but that didn't affect anything (I read some solutions about clearing up volumes, and at that point, I had volumes: /tmp:/tmp as well). Any ideas?
Wow, this wasn't an error after all. All I had to do was go into the connection settings in DBeaver and check 'Show all databases' under the PostgreSQL tab. Hope this can help someone :)

Mongodb authentication issue when using same volume

I am using MongoDB in a Docker container and spin it up with Docker Compose. We have old Mongo containers running where authentication is not enforced. In order to enforce authentication and run a startup script, I am using an .env file in my docker-compose file as below. But the .env file and the startup script only take effect if I change the volume; when I keep the same volume, neither of them has any effect. Is there any way to use the same volume, create the users via .env, and also run the startup script?
services:
  mongo:
    image: mongo
    container_name: mongot
    restart: always
    env_file:
      - .env
    ports:
      - 27019:27017
    volumes:
      - /data/db8/configdb:/data/configdb
      - /data/db8/db:/data/db
      - $PWD/mongoentry/:/docker-entrypoint-initdb.d/
    network_mode: "bridge"
    command: mongod
Environment files and startup scripts are only used when creating a new database. If a database already exists, they aren't used.
In your case, you already have a database in the volume, so they aren't used. But if you change the volume so no database exists, Mongo creates a new one using your values and scripts.
From the docs:
When you start the mongo image, you can adjust the initialization of the MongoDB instance by passing one or more environment variables on the docker run command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database.
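If the existing data is disposable, you can force re-initialization without changing the volume paths by clearing the bind-mounted host directories instead. A rough sketch (paths are the ones from your compose file; stop the container first and only do this if you can afford to lose the old data):

# stop the container, wipe the bind-mounted data, then start again so the
# .env values and /docker-entrypoint-initdb.d scripts run on a fresh database
docker-compose down
sudo rm -rf /data/db8/db/* /data/db8/configdb/*
docker-compose up -d mongo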

How to make sure docker-compose will not remove my volume with postgres data

I am running a simple django webapp with docker-compose. I define both a web service and a db service in a docker-compose.yml file:
version: "3.8"
services:
db:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
env_file:
- ./.env.dev
depends_on:
- db
volumes:
postgres_data:
I start the service by running:
docker-compose up -d
I can load some data in there with a custom django command that I wrote for my app. Everything is running fine (with data) on localhost:8000.
However, when I run
docker-compose down
(so without -v) and then again
docker-compose up -d
the database is empty again. The volume was not persisted. From what I read in the docker-compose docs and also in several posts here at SO, persisting the volume and reusing it when you start a new container should be the default behavior (which, if I understand it correctly, you can disable by using the --renew-anon-volumes flag).
However in my case, the volume is not persisted. Or maybe it is, but my data is gone.
By doing docker volume ls I can see that my volume (I'll use the name my_volume here) still exists after the docker-compose down command. However, the CreatedAt value has been changed. This makes me think it's a different volume with the same name, and my data is already gone, but I don't know how to confirm that.
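A quick way to check would be to look inside the volume directly, roughly like this (my_volume stands in for the real name, which Compose usually prefixes with the project directory, so check docker volume ls for the exact name):

# show where the volume lives and when it was created
docker volume inspect my_volume
# peek at its contents with a throwaway container
docker run --rm -v my_volume:/mnt alpine ls -la /mnt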
This SO answer suggests to mount the volume on /var/lib/postgresql instead of /var/lib/postgresql/data. However, I've seen other resources (like this one) where the opposite is suggested. I've tried both, but neither option works.
Thanks for any advice.
It turns out that the Dockerfile of my app was using an entrypoint in which the following command was executed: python manage.py flush, which clears all data in the database. As this gets executed every time the app container starts, it kept wiping the data. It had nothing to do with docker-compose.
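For reference, a sketch of what a non-destructive entrypoint might look like instead (the file name and the migrate step are assumptions, not the actual project's entrypoint):

#!/bin/sh
# entrypoint.sh - runs on every container start
# python manage.py flush --no-input   # removed: this wiped the database on every start
python manage.py migrate --no-input   # apply pending migrations instead; safe to re-run
exec "$@"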

Creating mongo docker container with local storage on host

I want to run MongoDB in a Docker container. I've pulled the image and run it, and it seems to work OK.
But every time I start it, the DB is overwritten, so I lose any changes. I want to map the container's internal storage onto a folder on my local host.
Should I write a Dockerfile and/or a docker-compose.yaml? I suppose this is a simple question, but being new to Docker I can't work out what to read to get a full understanding.
You do not need to write a Dockerfile and make things complex; just use the official image as shown in the command or compose file below.
You can use either option, docker run or docker-compose, but the path in the volume mapping must be correct for the data to persist.
Here is the way:
Create a data directory on a suitable volume on your host system, e.g. /my/own/datadir.
Start your mongo container like this:
$ docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo
The -v /my/own/datadir:/data/db part of the command mounts the /my/own/datadir directory from the underlying host system as /data/db inside the container, where MongoDB by default will write its data files.
mongo docker volume
with docker-compose
version: "2"
services:
mongo:
image: mongo:latest
restart: always
ports:
- "27017:27017"
environment:
- MONGO_INITDB_DATABASE=pastime
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=root_password
volumes:
- /my/own/datadir:/data/db

Using docker-compose to create tables in postgresql database

I am using docker-compose to deploy a multicontainer python Flask web application. I'm having difficulty understanding how to create tables in the postgresql database during the build so I don't have to add them manually with psql.
My docker-compose.yml file is:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/flask-app/static
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
data:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"
postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"
I don't want to have to enter psql in order to type in:
CREATE DATABASE my_database;
CREATE USER this_user WITH PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE "my_database" to this_user;
\i create_tables.sql
I would appreciate guidance on how to create the tables.
The COPY approach in the Dockerfile didn't work for me, but I managed to run my init.sql file by adding the following to docker-compose.yml:
volumes:
  - ./init.sql:/docker-entrypoint-initdb.d/init.sql
init.sql was in the same directory as my docker-compose.yml.
I picked the solution from this gist. Check this article for more information.
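One caveat: like the Mongo image discussed above, the postgres entrypoint only runs the files in /docker-entrypoint-initdb.d against an empty data directory, so during development you may need to throw the old volume away for a changed init.sql to take effect. Roughly:

# remove containers and volumes, then recreate so init.sql runs again
docker-compose down -v
docker-compose up --build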
I don't want to have to enter psql in order to type in
You can simply use the container's built-in init mechanism:
COPY init.sql /docker-entrypoint-initdb.d/10-init.sql
This makes sure that your SQL is executed after the DB server is properly booted up.
Take a look at their entrypoint script. It does some preparation to start psql correctly and looks into the /docker-entrypoint-initdb.d/ directory for files ending in .sh, .sql and .sql.gz.
The 10- prefix in the filename is there because files are processed in ASCII order. You can name your other init files like 20-create-tables.sql and 30-seed-tables.sql.gz, for example, and be sure they are processed in the order you need.
Also note that the invoking command does not specify the database. Keep that in mind if you are, say, migrating to docker-compose and your existing .sql files don't specify a DB either.
Your files will be processed at the container's first start, though, not at the build stage. Since Docker Compose stops containers and then resumes them, there's almost no difference, but if it's crucial for you to initialize the DB at build stage, I suggest still using the built-in init method by calling /docker-entrypoint.sh from your Dockerfile and then cleaning up the /docker-entrypoint-initdb.d/ directory.
I would create the tables as part of the build process. Create a new Dockerfile in a new directory ./database/:
FROM postgres:latest
COPY . /fixtures
WORKDIR /fixtures
RUN /fixtures/setup.sh
./database/setup.sh would look something like this:
#!/bin/bash
set -e
/etc/init.d/postgresql start
psql -f create_fixtures.sql
/etc/init.d/postgresql stop
Put your create user, create database, create table sql (and any other fixture data) into a create_fixtures.sql file in the ./database/ directory.
and finally your postgres service will change to use build:
postgres:
  build: ./database/
  ...
Note: Sometimes you'll need a sleep 5 (or even better a script to poll and wait for postgresql to start) after the /etc/init.d/postgresql start line. In my experience either the init script or the psql client handles this for you, but I know that's not the case with mysql, so I thought I'd call it out.
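If you want the poll-and-wait variant instead of sleep 5, a minimal sketch using pg_isready, which ships with the postgres image (the 30-second cap is an arbitrary choice):

# wait up to ~30 seconds for the local server to accept connections
for i in $(seq 1 30); do
    pg_isready -q && break
    sleep 1
done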