Dockerfile ENV variables not being accepted - postgresql

I'm using docker and dockerfile to build images. I want to build a PostgreSQL image so I'm using this dockerfile:
ARG POSTGRES_USER=vetouz
ARG POSTGRES_PASSWORD=***
ARG POSTGRES_DB=vetouz_mediatheque
FROM postgres:latest
USER postgres
EXPOSE 5432
Then I run the image using this command:
docker run -e POSTGRES_PASSWORD=vetouz -d --name postgres postgres:latest
When I do that, the role vetouz, the password and the db vetouz_mediatheque are not created, and I don't understand why. I know this because when I access my container with sudo docker exec -it postgres bash and then run psql -U vetouz, I get the error role vetouz does not exist.
It works if I run my image with the following command:
docker run -e POSTGRES_PASSWORD=*** -e POSTGRES_USER=vetouz -e POSTGRES_DB=vetouz_mediatheque -d --name postgres postgres:latest
But I would rather define my variables in the dockerfile.
Any idea why it's not working?

Use ENV instead of ARG. ARG values are only available while the image is being built; ENV values are also available at runtime. Note as well that an ARG declared before FROM is outside any build stage, so it can only be used in the FROM line itself, never by the instructions that follow it.
Source
https://docs.docker.com/engine/reference/builder/#arg
https://docs.docker.com/engine/reference/builder/#env
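For example, a minimal corrected version of the Dockerfile above (the password placeholder is kept as-is; note it will still be stored in plain text in the image, which the next answer covers):
FROM postgres:latest
ENV POSTGRES_USER=vetouz
ENV POSTGRES_PASSWORD=***
ENV POSTGRES_DB=vetouz_mediatheque
EXPOSE 5432
Also make sure you build and run this image rather than the stock postgres:latest, which knows nothing about your variables (vetouz-postgres is just an illustrative tag):
docker build -t vetouz-postgres .
docker run -d --name postgres vetouz-postgres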

YOUR PROBLEM
As already stated, you are using ARG, which is only available when building the Docker image. But using env variables to bake sensitive information into a Docker image is not a safe approach either, and I will explain why.
SECURITY CONCERNS
But I would rather define my variables in the dockerfile.
This is fine for information that is not sensitive, but it is not a best practice for sensitive information like passwords: the database credentials will be stored in plain text in the Dockerfile, and even if you use ARG to set the ENV vars they will still be visible in the docker image layers (docker history will show them).
docker run -e POSTGRES_PASSWORD=*** -e POSTGRES_USER=vetouz -e POSTGRES_DB=vetouz_mediatheque -d --name postgres postgres:latest
This is also a bad practice in terms of security, because your database credentials end up saved in the shell history.
On a Linux machine you can check with:
history | grep -i POSTGRES
A MORE SECURE APPROACH
Create a .env file:
POSTGRES_USER=vetouz
POSTGRES_PASSWORD=your-password-here
POSTGRES_DB=vetouz_mediatheque
Don't forget to add the .env file to .gitignore:
echo ".env" >> .gitignore
RUNNING THE DOCKER CONTAINER
Now run the docker container with:
docker run --env-file ./.env -d --name postgres postgres:latest
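To check that the variables actually reached the container, you can run (note this prints the password to your terminal):
docker exec postgres env | grep POSTGRES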

Related

create postgres db container in local docker on mac

I am new to using docker, and I'm confused about creating a postgres container in docker.
What is -v /data:/var/lib/postgresql/data in the command line below, which creates a container in docker? Is it for setting the volume? Can I change the path? I cannot find postgresql under /var/lib on my machine, so I cannot find the file path when I want to add the file permission under file sharing in the docker settings.
sudo docker run -d --name mybd --network mydb-network -p 5432:5432 -v /data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mydb -e PGDATA=/var/lib/postgresql/data/pgdata postgres
Running container error
When I tried to run the container in docker, it showed this error. Then I went to the docker settings and wanted to add the /data path to file sharing; however, I cannot find the /data path.
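For what it's worth, -v host_path:container_path bind-mounts a host directory into the container, so /data refers to a path on your Mac, not inside the image, and you can point it anywhere you have access to. A hypothetical variant using a home-directory path, which Docker Desktop on macOS shares by default:
sudo docker run -d --name mydb --network mydb-network -p 5432:5432 \
  -v $HOME/pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=mydb -e PGDATA=/var/lib/postgresql/data/pgdata postgres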

PostgreSQL docker container not writing data to disk

I am having some difficulty with docker and the postgres image from the Docker Hub. I am developing an app and using the postgres docker to store my development data. I am using the following command to start my container:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser -e PGDATA="/pgdata" --mount source=mydata,target=/home/myuser/pgdata -p 5432:5432/tcp postgres
When I finish working on my app, I usually have to run "docker container prune" in order to free up the container name and be able to run it again later. This worked until recently, when I upgraded my postgres image to run version 11 of PostgreSQL. Now, when I start my container and create data in it, the next time I use it the data is gone. I've been reading about volumes in the docker documentation, but I cannot find anything that tells me why this is not working. Can anyone please shed some light on this?
Specify a volume mount with -v $PGDATA_HOST:/var/lib/postgresql/data.
The default PGDATA inside the container is /var/lib/postgresql/data so there is no need to change that if you're not modifying the Docker image.
e.g. to mount the data directory on the host at /srv/pgdata/:
$ PGDATA_HOST=/srv/pgdata/
$ docker run -d -p 5432:5432 --name=some-postgres \
-e POSTGRES_PASSWORD=secret \
-v $PGDATA_HOST:/var/lib/postgresql/data \
postgres
The backslashes (\) are only needed if you break the command over multiple lines, which I did here for the sake of clarity.
Since you specified -e PGDATA="/pgdata", the database data will be written to /pgdata within the container. If you want the files in /pgdata to survive container deletion, that location must be a docker volume. To make that location a docker volume, use --mount source=mydata,target=/pgdata.
In the end, it would be simpler to just run:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser --mount source=mydata,target=/var/lib/postgresql/data -p 5432:5432/tcp postgres
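A quick way to verify the fix (assuming the volume name mydata from above): stop and remove the container, then run the same command again; the data you created should still be there, because the named volume outlives the container.
docker stop some-postgresql && docker rm some-postgresql
docker volume inspect mydata   # the volume still exists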

How can I persist my data in docker/postgres container?

I know there are probably many ways to do this. What I am looking for is a way to do it using (preferably) only my DockerFile and one container.
Here is my current dockerfile:
FROM postgres:latest
ENV POSTGRES_USER=myuser
ENV POSTGRES_PASSWORD=mypassword
Here is the command I used to build this container:
docker build -t my_db .
And here is the command that I use to run the container:
docker run -p 5432:5432 my_db
What I would like to do is have the data stored in the container if possible, but I don't seem to understand how or where postgres stores its data. I saw on another Stack Overflow post that postgres will store it by default in /var/lib/postgresql/data, however when I look in that folder I see nothing. I can however verify that postgres is running, because I am using a client called TeamSQL and from that client I can create tables and insert/read data.
I can also verify that when I stop the container and restart it, the data is definitely not persisted.
Note: this is running in OSx but I don't think that is relevant.
You should use Docker volumes, so that when you stop your container the data persists on the host machine, and when you start the container again the data is mounted back into it:
docker volume create pgdata
docker run -p 5432:5432 -v pgdata:/var/lib/postgresql/data my_db
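If you are wondering where the data actually lives, you can inspect the volume; the Mountpoint field shows the host path (on Docker for Mac this path is inside the Docker VM rather than directly on OSx):
docker volume inspect pgdata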

How can I keep changes I made to Postgresql Docker container?

I'm using the official postgresql docker image to start a container.
Afterwards, I install some software and use psql to create some tables etc. I am doing this by first starting the postgres container as follows:
docker run -it --name="pgtestcontainer" -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:9.6.6
Then I attach to this container with
docker exec -it pgtestcontainer bash
and I install software, create db tables etc.
Afterwards, I first quit from the second terminal session (that I used to install software) and do a ctrl + c in the first one to stop the postgres container.
At this point my expectation is that if I commit this postgres image with
docker commit xyz...zxy pg-commit-test
and then run a new container based on the committed image with:
docker run -it --name="modifiedcontainer" -e POSTGRES_PASSWORD=postgres -p 5432:5432 pg-commit-test
then I should have all the software and tables in place.
The outcome of the process above is that the software I've installed is in the modifiedcontainer, but the SQL tables etc. are gone. So my guess is that my approach is more or less correct, but there is something specific to the postgres docker image that I'm missing.
I know that it creates the db from scratch if no external directory or docker volume is bound to
/var/lib/postgresql/data
but I'm not doing that and after the commit I'd expect the contents of the db to stay as they are.
How do I follow the procedure above (or the right one) and keep the changes to database(s)?
The postgres Dockerfile creates a mount point at /var/lib/postgresql/data which you must mount an external volume onto if you want persistent data.
ENV PGDATA /var/lib/postgresql/data
RUN mkdir -p "$PGDATA" && chown -R postgres:postgres "$PGDATA" && chmod 777 "$PGDATA" # this 777 will be replaced by 700 at runtime (allows semi-arbitrary "--user" values)
VOLUME /var/lib/postgresql/data
https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
You can create a volume using
docker volume create mydb
Then you can use it in your container
docker run -it --name="pgtestcontainer" -v mydb:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:9.6.6
https://docs.docker.com/engine/admin/volumes/volumes/#create-and-manage-volumes
In my opinion, the best way is to create your own image with a /docker-entrypoint-initdb.d folder and your script inside.
See the "How to extend this image" section of the image documentation.
But without a volume you can't (I think) save your data.
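A minimal sketch of that approach (create-tables.sql is a hypothetical script name; anything copied into /docker-entrypoint-initdb.d/ runs once, when the data directory is first initialized):
FROM postgres:9.6.6
COPY create-tables.sql /docker-entrypoint-initdb.d/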
I solved this by passing the PGDATA parameter with a value different from the path that is bound to the docker volume, as suggested in one of the responses to this question.
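The idea is that docker commit does not capture volume contents, but it does capture the container filesystem, so pointing PGDATA at a path that is not a declared VOLUME keeps the database files committable. A sketch under that assumption (the pgdata path is illustrative):
docker run -it --name="pgtestcontainer" -e PGDATA=/var/lib/postgresql/pgdata \
  -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:9.6.6
# create your tables, then:
docker commit pgtestcontainer pg-commit-test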

Initialize data on dockerized mongo

I'm running a dockerized mongo container.
I'd like to create a mongo image with some initialized data.
Any ideas?
A more self-contained approach:
create javascript files that initialize your database
create a derived MongoDB docker image that contains these files
There are many answers that use disposable containers or create volumes and link them, but this seems overly complicated. If you take a look at the mongo docker image's docker-entrypoint.sh, you see that line 206 executes /docker-entrypoint-initdb.d/*.js files on initialization, using the syntax mongo <db> <js-file>. If you create a derived MongoDB docker image that contains your seed data, you can:
have a single docker run command that stands up a mongo with seed data
have data persisted through container stops and starts
reset that data with docker stop, rm, and run commands
easily deploy with runtime schedulers like k8s, mesos, swarm, rancher
This approach is especially well suited to:
POCs that just need some realistic data for display
CI/CD pipelines that need consistent data for black box testing
example deployments for product demos (sales engineers, product owners)
How to:
Create and test your initialization scripts (grooming data as appropriate)
Create a Dockerfile for your derived image that copies your init scripts
FROM mongo:3.4
COPY seed-data.js /docker-entrypoint-initdb.d/
Build your docker image
docker build -t mongo-sample-data:3.4 .
Optionally, push your image to a docker registry for others to use
Run your docker image
docker run \
--name mongo-sample-data \
-p 27017:27017 \
--restart=always \
-e MONGO_INITDB_DATABASE=application \
-d mongo-sample-data:3.4
By default, docker-entrypoint.sh will apply your scripts to the test db; the above run command env var MONGO_INITDB_DATABASE=application will apply these scripts to the application db instead. Alternatively, you could create and switch to different dbs in the js file.
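For illustration, a minimal seed-data.js could look like this (collection and field names are made up; db is the database selected via MONGO_INITDB_DATABASE):
// seed-data.js
db.products.insertMany([
  { name: "widget", qty: 10 },
  { name: "gadget", qty: 5 }
]);
// or switch databases explicitly:
// db = db.getSiblingDB("another_db");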
I have a github repo that does just this - here are the relevant files.
With the latest release of the mongo Docker image, something like this works for me:
FROM mongo
COPY dump /home/dump
COPY mongo_restore.sh /docker-entrypoint-initdb.d/
The mongo restore script looks like this:
#!/bin/bash
# Restore from dump
mongorestore --drop --gzip --db "<RESTORE_DB_NAME>" /home/dump
And you can build the image normally:
docker build -t <TAG> .
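For completeness, the dump directory used above is typically produced beforehand with something like this, matching the --gzip flag used in the restore (note mongodump writes to dump/<SOURCE_DB_NAME>/, so adjust the path passed to mongorestore accordingly):
mongodump --gzip --db "<SOURCE_DB_NAME>" --out dump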
First create a docker volume:
docker volume create --name mongostore
Then create your mongo container:
docker run -d --name mongo -v mongostore:/data/db mongo:latest
The -v switch here is responsible for mounting the volume mongostore at the /data/db location, which is where mongo saves its data. The volume is persistent (on the host). Even with no containers running, you will see your mongostore volume listed by:
docker volume ls
You can kill the container and create a new one (same line as above) and the new mongo container will pick up the state of the previous container.
Initializing the volume
Mongo initializes a new database if none is present. This is responsible for creating the initial data in the mongostore. Let's say that you want to create a brand new environment using a pre-seeded database. The problem becomes how to transfer data from your local environment (for instance) to the volume before creating the mongo container. I'll list two cases.
Local environment
You're using either Docker for Mac/Windows or Docker Toolbox. In this case you can easily mount a local drive to a temporary container to initialize the volume. Eg:
docker run --rm -v /Users/myname/work/mongodb:/incoming \
-v mongostore:/data alpine:3.4 cp -rp /incoming/* /data
This doesn't work for cloud storage. In that case you need to copy the files.
Remote environment (AWS, GCP, Azure, ...)
It's a good idea to tar/compress things up to speed the upload.
tar czf mongodata.tar.gz -C /Users/myname/work mongodb
(The -C flag makes the archive extract as mongodb/ rather than the full host path.)
Then create a temporary container to untar and copy the files into the mongostore. The tail -f /dev/null just makes sure that the container doesn't exit:
docker run -d --name temp -v mongostore:/data alpine:3.4 tail -f /dev/null
Copy files to it
docker cp mongodata.tar.gz temp:.
Untar and move to the volume
docker exec temp sh -c 'tar xzf mongodata.tar.gz && cp -rp mongodb/* /data'
The sh -c wrapper ensures both commands run inside the container; without it, your local shell would interpret the && and run cp on the host.
Cleanup
docker rm temp
You could also copy the files to the remote host and mount from there, but I tend to avoid interacting with the remote host at all.
Disclaimer. I'm writing this from memory (no testing).
Here is how it's done with docker-compose. I use an older image of mongo, but docker-entrypoint.sh accepts *.js and *.sh files for all versions of the image.
docker-compose.yaml
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo:3.2.12
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db:cached
      - ./deploy/local/mongo_fixtures:/fixtures
      - ./deploy/local/mongo_import.sh:/docker-entrypoint-initdb.d/mongo_import.sh
volumes:
  mongo-data:
    driver: local
mongo_import.sh:
#!/bin/bash
# Import from fixtures
mongoimport --db wcm-local --collection clients --file /fixtures/properties.json && \
mongoimport --db wcm-local --collection configs --file /fixtures/configs.json
And my mongo_fixtures json files are the product of mongoexport and have the following format:
{"_id":"some_id","field":"value"}
{"_id":"another_id","field":"value"}
This should help those using this approach without a custom Dockerfile, just using the image straight away with the right entrypoint set up right in your docker-compose file. Cheers!
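To use it, run docker-compose up -d from the directory containing the file. As with the other approaches here, the import script only runs when the mongo-data volume is empty; to re-import from scratch you can remove the volume first:
docker-compose down -v
docker-compose up -d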
I've found a way that is somehow easier for me.
Say you have a database in a docker container on your server and you want to back it up; here's what you could do.
What might differ from your setup to mine is the name of your mongo docker container [mongodb] (default when using elastic_spence). So make sure you start your container first with --name mongodb to match the following steps:
$ docker run \
--rm \
--link mongodb:mongo \
-v /root:/backup \
mongo \
bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
And to restore the database from a dump.
$ docker run \
--rm \
--link mongodb:mongo \
-v /root:/backup \
mongo \
bash -c 'mongorestore /backup --host $MONGO_PORT_27017_TCP_ADDR'
If you need to download the dump from your server, you can use scp:
$ scp -r root@IP:/root/backup ./backup
Or upload it:
$ scp -r ./backup root@IP:/root/backup
P.S: Original source by Tim Brandin available at https://blog.studiointeract.com/mongodump-and-mongorestore-for-mongodb-in-a-docker-container-8ad0eb747c62
Thank you!