Docker Postgres Image not accessible - postgresql

So, I pulled the postgres image down from Docker. I followed a tutorial which explained what's going on with the command below and the whole docker pull. I can log in to the instance fine. But when I restart my computer or shut down Docker, I end up going through similar setup steps and am no longer able to access the postgres instance. Can someone explain what's going on here:
Run this command:
docker run --rm --name pg-docker -e POSTGRES_PASSWORD=docker -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres
Log in via pgAdmin.
Nothing; the instance is not available.
So, I feel like I am missing a step. At one point I had executed a command like this:
docker exec -it c5b8bdd0820b35a01ea153a44e82458a6285cf484b701b2b2d6d4210266fb4f8 bash
which gave me access to a shell inside the container. After doing that I was able to use pgAdmin; however, I feel like that may have been a coincidence, as this does not work currently.
So, what am I doing wrong? What's an easier way to do this?

The --rm flag causes Docker to automatically remove the container when it exits, which is why the container is gone after a restart. Remove it.
You can also add --restart always and your container will come back up after a restart. (Note that --restart conflicts with --rm, so --rm has to go first anyway.)
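For example, the corrected command could look something like this (a sketch, keeping the tutorial's names and paths):
docker run --name pg-docker --restart always -e POSTGRES_PASSWORD=docker -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres
After a reboot, docker ps should show pg-docker running again, and pgAdmin can reconnect on port 5432 as before.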

Related

How to reconnect to same postgres database on Docker

I'm very new to using docker and I've created a postgres container using
docker run --name mytrainingdb -e POSTGRES_PASSWORD=mysecretpassword -d postgres. Then I connected to it with docker exec -it <container-id> bash and then psql.
Then I stop the container.
My question is, what do I do to reconnect to the same database? I tried to run the same docker run command, but it says the name 'mytrainingdb' is already in use, which means it is trying to create the container afresh, which is not what I want. I hope my expectation is right: when I restart my laptop or resume work, I can just restart the same container and my data/config will be preserved?
The documentation also mentions that we can bind a host directory to the volume of the pg container to have the stored data accessible to us, but I'm OK with Docker managing the storage for that database.
You get an error when you re-run the same command because Docker is trying to create a new container with the same name as the previous one, "mytrainingdb". If you close Docker and reopen it you will still find your container, but it is not running; you can start it again with docker start mytrainingdb, or remove it with docker rm mytrainingdb.
However, don't restart Docker just because you want to create a new container with the same name! If you want to start a new container with the same name and your container is still running, you can first stop it with docker stop mytrainingdb and then docker rm mytrainingdb, or you can just do docker rm -f mytrainingdb (this forcibly removes the running container) and then create a new container.
As for the volumes, you just created one by default; its name is a hash, and it lives under /var/lib/docker/volumes/. Containers such as PostgreSQL, and databases in general, persist their data in volumes. The volume gets created when you run the container and is handy for saving persistent data, whether you start the container with -v or not.
The volume you talked about in your question is a mounted host volume (a bind mount): you bind a certain directory or file from the host (outside) to a path inside the container:
docker run -v /hostdir:/containerdir, in your case docker run -v /hostdir:/var/lib/postgresql/data
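If you want to find the Docker-managed volume mentioned above, something like this should do it (the hash name below is illustrative):
docker volume ls                      # the postgres volume shows up with a long hash as its name
docker volume inspect <volume-hash>   # prints the Mountpoint under /var/lib/docker/volumes/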
If you restart Docker or your computer, running containers won't be automatically restarted. You can start your container again with docker start mytrainingdb (related question), then connect with your docker exec command.
(one tip: instead of running bash and then psql, you can run psql directly, e.g. docker exec -it mytrainingdb psql --user postgres)
Your understanding of data persistence is correct: Docker will manage the data and it will still be around.
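Putting that together, a typical resume-work session might look like this (a sketch, using the names from the question):
docker start mytrainingdb                            # restart the stopped container; data is preserved
docker exec -it mytrainingdb psql --user postgres    # open a psql shell directly
docker stop mytrainingdb                             # stop it again when you're done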
From the postgres image documentation:
There are several ways to store data used by applications that run in Docker containers. We encourage users of the postgres images to familiarize themselves with the options available, including:
Let Docker manage the storage of your database data by writing the database files to disk on the host system using its own internal volume management. This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
You can add the --rm argument so that whenever you stop the container manually, or the container stops for any reason (its task is done or it fails), Docker will remove that container.
In your case, you can use this:
docker run --name mytrainingdb --rm -e POSTGRES_PASSWORD=mysecretpassword -d postgres
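One caveat worth stating, since the question is about keeping data: when a container started with --rm is removed, Docker also removes the anonymous volumes associated with it, so with the command above the Docker-managed data volume disappears along with the container:
docker stop mytrainingdb   # --rm removes the container on exit...
docker ps -a               # ...so it is no longer listed, and its anonymous data volume is gone too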

I cannot run more than 2 containers in Docker

I'm running Docker for a project on GitHub (https://github.com/mydexchain/mydexchain).
I'm running the command below, which pulls the image and creates a container.
docker run -d --rm -p 2020:2020 -v mydexchain:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain mydexchain/mydexchain:latest
I set my tracker address on port 2020.
I attach to the container with "docker attach mydexchain".
I'm running the command below and creating a second container.
docker run -d --rm -p 2021:2020 -v mydexchain1:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain1 mydexchain/mydexchain:latest
I set my tracker address on port 2021.
I attach to the container with "docker attach mydexchain1".
-So far everything is normal-
I'm running the command below and creating a third container.
docker run -d --rm -p 2022:2020 -v mydexchain2:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain2 mydexchain/mydexchain:latest
I'm checking the containers with the docker ps command, and the third container shows up in the list.
But as soon as I want to do anything with this container, like attach or set the tracker, the container disappears from docker ps.
When I check the logs, I don't see an obvious error.
When I did these procedures for the first time, I did not encounter any errors.
I would be glad if you could help; I have been working on this for a week and could not find a solution.
Have you ensured that all the volumes are clean and mounted the same way? Usually the path /var/lib/postgresql/data is mounted into the container, not /var/lib/postgresql/.
It might also be related to you pausing or closing your previous attach session, which kills that container, since you are missing the -i and -t flags when launching it. Those should be used to keep it from closing. See the documentation of the attach command for more.
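A hedged sketch of a corrected launch command, combining both points (the volume path and the -i/-t flags; names are taken from the question):
docker run -itd --rm -p 2022:2020 -v mydexchain2:/var/lib/postgresql/data --privileged --log-driver=none --name mydexchain2 mydexchain/mydexchain:latest
With -i and -t set, docker attach mydexchain2 gets a proper interactive TTY, and detaching with Ctrl-P Ctrl-Q leaves the container running instead of killing it.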

How to store data in external drive with Docker Postgres:9.3 image?

I want to set up my database inside a container using the Postgres:9.3 Docker image; however, I want to store my data on an external drive.
I attempted it using the command
docker run -dit -p 5432:5432 -v /mnt/external/docker_volume:/var/lib/postgresql/data --name mydatabase postgres:9.3
The container got created, as the command echoes the container id, but it is not shown as running by docker ps. The above command works for other images, so my gut feeling is that there is a conflict because this image has VOLUME defined in its Dockerfile, but I haven't figured out a way to get around it. Any help will be appreciated!
Apparently, the problem was that my /mnt/external/docker_volume was not empty, and the Postgres init script didn't like that. I found this out after running with the -it option (dropping -d) and watching the output in the terminal.
I answered my own question. I hope somebody in the future will find this helpful. :)
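For anyone debugging the same symptom, the foreground check described above might look like this (a sketch; the path comes from the question):
docker run -it --rm -p 5432:5432 -v /mnt/external/docker_volume:/var/lib/postgresql/data postgres:9.3
initdb refuses to run when the data directory exists but is not empty, and the error is printed straight to the terminal, so pointing -v at an empty directory fixes it.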

How to alter the official mongo docker for authentication and data separation?

I want to make two minor improvements to the official MongoDB Docker image so that it starts with --auth enabled and uses a separate data container to store the data. What's the best way to do this?
If all of that is set up, how should I start the shell? Will it be possible for someone without a username and password to access any of the available databases? Which directory should I back up?
EDIT
Apparently, this is not enough:
docker run --name mymongoname1 -v /my/local/dir:/data/db -d -P mongo:latest
OK, so this is a partial answer, because I haven't messed around with Docker auth.
Containerising storage is done with a storage container. That's basically a container created off a token image, with some volumes assigned; it is created but never started.
So for elasticsearch (which I know isn't mongo, but it is at least a NoSQL db) I've been using:
docker create -v /es_data:/es_data --name elasticsearch_data es-base /bin/true
Then:
docker run -d -p 9200:9200 --volumes-from elasticsearch_data elasticsearch-2.1.0
This connects the container's volume to my es container. In this example it passes through a host directory, but you don't actually need to any more, because the container can hold the data in the Docker filesystem. (And then I think you can push the data container around too, but I've not got that far!)
If you run docker ps -a you will see the data container in the Created state. If you use a cleanup script, just watch that you don't delete it, because unlike running containers it can be deleted freely...
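Adapting the same pattern to mongo might look like this (an untested sketch; the container names are made up, but /data/db is the image's data directory and --auth is an argument the image passes through to mongod):
docker create -v /data/db --name mongo_data mongo:latest /bin/true
docker run -d -P --volumes-from mongo_data --name mymongo mongo:latest --auth
With --auth on, nobody can touch the databases without a username and password (apart from the localhost exception used to create the first user), and /data/db, held by mongo_data, is the directory to back up.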

docker with postgres and bash

Today I was researching and trying out Docker, and I was impressed with most of it. There are still some questions for me about Docker.
Can anyone more experienced with Docker tell me the best way to log in to a postgres container (run bash) in order to view the postgres configuration files, view the postgres logs, open a postgres shell, execute pg_dump, and so on, all while the postgres process is running?
I see that people usually run one process per container, and with that approach I am not sure of the best way to do the actions mentioned above on a container which runs postgres.
Any advice?
Thanks!
You can usually get a shell like this:
docker exec -it some-node bash
The canonical Docker way would be not to log in to the running db container, but instead to use docker logs, or to link other containers for maintenance tasks (e.g. docker run -it --rm --link <my-pg-container>:pg <my-pg-image> psql --host pg ...).
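Concretely, the everyday tasks from the question could map to commands like these (a sketch; some-node is the container name from above and mydb is a hypothetical database name):
docker exec -it some-node psql -U postgres                    # interactive postgres shell
docker logs some-node                                         # server logs (the image logs to stdout/stderr)
docker exec some-node pg_dump -U postgres mydb > mydb.sql     # dump a database to a file on the host
docker exec -it some-node bash                                # full shell for inspecting config files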