I am having some difficulty with Docker and the postgres image from Docker Hub. I am developing an app and using a postgres container to store my development data. I am using the following command to start my container:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser -e PGDATA="/pgdata" --mount source=mydata,target=/home/myuser/pgdata -p 5432:5432/tcp postgres
When I finish working on my app, I usually have to run "docker container prune" in order to free up the container name and be able to run it again later. This worked until recently, when I upgraded my postgres image to run version 11 of PostgreSQL. Now, when I start my container and create data in it, the data is gone the next time I use it. I've been reading about volumes in the Docker documentation but cannot find anything that tells me why this is not working. Can anyone please shed some light on this?
Specify a volume mount with -v $PGDATA_HOST:/var/lib/postgresql/data.
The default PGDATA inside the container is /var/lib/postgresql/data so there is no need to change that if you're not modifying the Docker image.
e.g. to mount the data directory on the host at /srv/pgdata/:
$ PGDATA_HOST=/srv/pgdata/
$ docker run -d -p 5432:5432 --name=some-postgres \
-e POSTGRES_PASSWORD=secret \
-v $PGDATA_HOST:/var/lib/postgresql/data \
postgres
The backslashes (\) are only needed if you break the command over multiple lines, which I did here for the sake of clarity.
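To sanity-check that the database files actually landed in the host directory (my own quick verification, not part of the answer above), list its contents after the container has initialized:
ls $PGDATA_HOST
# once initdb finishes, you should see PG_VERSION, base/, pg_wal/, postgresql.conf, ...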
Since you specified -e PGDATA="/pgdata", the database data will be written to /pgdata within the container, but your --mount targets /home/myuser/pgdata, so nothing at /pgdata is backed by a volume. If you want the files in /pgdata to survive container deletion, that location must be a Docker volume. To make that location a Docker volume, use --mount source=mydata,target=/pgdata.
In the end, it would be simpler to just run:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser --mount source=mydata,target=/var/lib/postgresql/data -p 5432:5432/tcp postgres
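If you want to check where the named volume mydata actually lives on the host, you can inspect it (standard Docker CLI; this check is my addition, not part of the answer):
docker volume inspect mydata
# the "Mountpoint" field shows the host path that backs the volume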
Related
I am new to Docker, and I'm confused about creating a postgres container.
What is -v /data:/var/lib/postgresql/data in the command line below, which creates the container? Is it for setting the volume? Can I change the path? I cannot find postgresql under /lib, so I cannot find the file path when I want to add file permissions under File Sharing in the Docker settings.
sudo docker run -d --name mybd --network mydb-network -p 5432:5432 -v /data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mydb -e PGDATA=/var/lib/postgresql/data/pgdata postgres
When I tried to run the container, Docker showed an error (see the "running container error" screenshot). I then went to the Docker settings and wanted to add the /data path to File Sharing; however, I could not find the /data path.
I am trying to run PostgreSQL in Docker using this command in a terminal:
winpty docker run -it \
  -e POSTGRES_USER="root" \
  -e POSTGRES_PASSWORD="root" \
  -e POSTGRES_DB="ny_taxi" \
  -v C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:13
and I keep running into this error: Error response from daemon: The system cannot find the file specified.
I have looked up this error and the solutions I see online (such as restarting Docker Desktop, reinstalling Docker, updating Docker) did not work for me.
I think the issue is with the volume part (designated by -v) because when I remove it, it works just fine. However, I want to be able to store the contents in a volume permanently, so running it without the -v is not a long-term solution.
Has anyone run into a similar issue before?
First, check whether you can access this path on the host:
dir C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data
Then check whether you can mount that path as a volume inside a container:
winpty docker run -v C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data:/data alpine ls /data
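If both checks pass, the problem may be how Git Bash (MINGW, which is why winpty is needed) rewrites Windows paths before Docker sees them. A common workaround, offered here as a suggestion rather than a confirmed fix, is to write the path with forward slashes and a doubled leading slash so the MSYS path conversion leaves it alone:
winpty docker run -it \
  -e POSTGRES_PASSWORD="root" \
  -v "//c/Users/SomeUser/OneDrive/Documents/ny_taxi_postgres_data":/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:13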
I want to create a Docker container or image that has both TensorFlow and MongoDB installed. I have seen that there are Docker images for each application, but I need them working together: from a MongoDB database I must extract the data to feed a model created in TensorFlow.
So I want to know whether a configuration like that is possible. I have tried with an Ubuntu container, installing the applications I need inside it, but I don't know if there is another way to do it.
Thanks.
Interesting that I found this post, as I had just worked out a solution for myself. It may not be the one for you, by the way.
What I did is: docker pull mongo and run it as a daemon:
#!/bin/bash
export VOLUME='/home/user/code'
docker run -itd \
--name mongodb \
--publish 27017:27017 \
--volume ${VOLUME}:/code \
mongo
Here:
the 'd' in '-itd' means run as a daemon (detached, like a service, not interactive);
the --volume mapping is optional and can be omitted.
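To confirm the daemonized container is actually up before moving on (a quick check of my own, using standard Docker commands):
docker ps --filter name=mongodb
docker logs --tail 5 mongodb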
Then docker pull tensorflow/tensorflow and run it with:
#!/bin/bash
export VOLUME='/home/user/code'
docker run \
-u 1000:1000 \
-it --rm \
--name tensorflow \
--volume ${VOLUME}:/code \
-w /code \
-e HOME=/code/tf_mongodb \
tensorflow/tensorflow bash
Here:
the -u runs bash in the container with the same ownership (UID:GID) as the host user;
the --volume maps the host folder /home/user/code to /code in the container;
the -w makes bash start in /code, which is /home/user/code on the host;
the -e HOME= option sets bash's $HOME folder so that you can pip install later.
Now you have a bash prompt from which you can:
create a virtual env folder under /code (which maps to /home/user/code),
activate the venv,
pip install pymongo,
and then connect to the MongoDB instance running in Docker (localhost may not work; use the host's IP address).
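An alternative to connecting via the host IP address, offered as my own suggestion rather than part of the recipe above, is to put both containers on a user-defined Docker network so the TensorFlow container can reach MongoDB by container name:
# 'ml-net' is a hypothetical network name
docker network create ml-net
docker run -itd --name mongodb --network ml-net --publish 27017:27017 mongo
docker run -it --rm --name tensorflow --network ml-net --volume ${VOLUME}:/code -w /code tensorflow/tensorflow bash
# inside this shell, the database is reachable at mongodb:27017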
I have performed the following steps:
docker run -d --name demo-mongo -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret -e MONGO_INITDB_DATABASE=testdb mongo (to create a new MongoDB container)
Create a database inside the running container by connecting to it with a mongo client.
docker commit demo-mongo demo-mongo-updated (to create an image from the running container)
However, Docker does not by default (perhaps unsurprisingly) retain the data of the newly created database (likely stored in /data/db) in the newly created image.
Is it possible by any means to preserve the state of a container while creating an image from it?
You can create a volume that points to a host directory and mount it to the container:
docker run -v <PATH_TO_THE_HOST_DIR>:/data -d --name demo-mongo -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret -e MONGO_INITDB_DATABASE=testdb mongo
The next time you create a mongo container pointing to the same host path, the data will be available in the container.
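The underlying reason docker commit loses the data is that the mongo image declares /data/db as a VOLUME, and volume contents are never included in a commit. A named volume works the same way as the host directory above and sidesteps host-path permission issues (the volume name mongodata is my own choice):
docker volume create mongodata
docker run -v mongodata:/data/db -d --name demo-mongo -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret -e MONGO_INITDB_DATABASE=testdb mongo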
I'm using the official PostgreSQL Docker image to start a container.
Afterwards, I install some software and use psql to create some tables etc. I am doing this by first starting the postgres container as follows:
docker run -it --name="pgtestcontainer" -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:9.6.6
Then I attach to this container with
docker exec -it pgtestcontainer bash
and I install software, create db tables etc.
Afterwards, I first quit the second terminal session (the one I used to install software) and press Ctrl+C in the first one to stop the postgres container.
At this point my expectation is that if I commit this postgres container with
docker commit xyz...zxy pg-commit-test
and then run a new container based on the committed image with:
docker run -it --name="modifiedcontainer" -e POSTGRES_PASSWORD=postgres -p 5432:5432 pg-commit-test
then I should have all the software and tables in place.
The outcome of the process above is that the software I've installed is present in modifiedcontainer, but the SQL tables etc. are gone. So my guess is that my approach is more or less correct, but there is something specific to the postgres Docker image that I'm missing.
I know that it creates the db from scratch if no external directory or docker volume is bound to
/var/lib/postgresql/data
but I'm not doing that, and after the commit I'd expect the contents of the db to stay as they are.
How do I follow the procedure above (or the right one) and keep the changes to database(s)?
The postgres Dockerfile declares a VOLUME at /var/lib/postgresql/data, so every container gets an anonymous volume there, and docker commit never includes volume contents. If you want persistent data, you must mount an external volume onto that path.
ENV PGDATA /var/lib/postgresql/data
RUN mkdir -p "$PGDATA" && chown -R postgres:postgres "$PGDATA" && chmod 777 "$PGDATA" # this 777 will be replaced by 700 at runtime (allows semi-arbitrary "--user" values)
VOLUME /var/lib/postgresql/data
https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
You can create a volume using
docker volume create mydb
Then you can use it in your container
docker run -it --name="pgtestcontainer" -v mydb:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:9.6.6
https://docs.docker.com/engine/admin/volumes/volumes/#create-and-manage-volumes
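Because the data now lives in the named volume rather than in the container's writable layer, you can delete the container and start a fresh one against the same volume without losing your tables (a quick sanity check of my own, using only standard Docker commands):
docker rm -f pgtestcontainer
docker run -it --name pgtestcontainer2 -v mydb:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:9.6.6
# tables created in the first container are still present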
In my opinion, the best way is to create your own image with your scripts inside a /docker-entrypoint-initdb.d folder.
See "How to extend this image" in the postgres image documentation.
But without a volume you can't (I think) save your data.
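A minimal sketch of that approach (the init.sql file and initdb folder names are my own illustration; the official postgres image runs any *.sql scripts found in /docker-entrypoint-initdb.d the first time the data directory is initialized):
# init.sql contains your CREATE TABLE / INSERT statements
mkdir initdb && cp init.sql initdb/
docker run -d --name pginit \
  -e POSTGRES_PASSWORD=postgres \
  -v "$PWD/initdb":/docker-entrypoint-initdb.d \
  -v mydb:/var/lib/postgresql/data \
  -p 5432:5432 postgres:9.6.6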
I solved this by passing the PGDATA parameter with a value different from the path that is declared as a volume, as suggested in one of the responses to this question.
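For example (my reconstruction of that workaround, not the answerer's exact command): only /var/lib/postgresql/data is declared as a VOLUME in the image, so pointing PGDATA anywhere else keeps the database files in the container's writable layer, where docker commit can capture them:
docker run -it --name pgtestcontainer \
  -e POSTGRES_PASSWORD=postgres \
  -e PGDATA=/var/lib/postgresql/pgdata \
  -p 5432:5432 postgres:9.6.6
# the data files now live outside the declared VOLUME and survive docker commit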