I made a dumb mistake: when I was creating my docker-compose file, I forgot to add the volume line. So when I restarted my docker-compose stack, everything was gone. Is there any way I can restore the data?
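For context, the missing piece is a volume mapped onto Postgres's data directory. A minimal sketch of what the compose file should have contained (the service and volume names here are made up):

services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:

With a named volume like this, the data survives container recreation; without it, the data is tied to the container and is lost when the container is removed.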
Has anyone solved automatically importing a Postgres database when creating a Docker image? The traditional method is to put files into docker-entrypoint-initdb.d, but that does not work for me, because I need to import via pg_restore (my dump is in the custom format). I do not know how to start the postgres service from the Dockerfile; the problem is that every RUN instruction executes in a separate intermediate container. Thank you for your help.
I solved this by putting a .sh script (which contains the pg_restore commands) into docker-entrypoint-initdb.d; see the sketch below. I use the official Postgres image, which runs any .sql and .sh files found in docker-entrypoint-initdb.d the first time the container is initialized.
More info https://github.com/docker-library/docs/tree/master/postgres#how-to-extend-this-image
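A minimal sketch of such a script, assuming the custom-format dump was copied to /dump/testdb.dump in the Dockerfile (both the path and the file name are placeholders):

#!/bin/bash
# Runs during first-time init, while a temporary server is listening on the
# local socket, so pg_restore can connect without a password.
set -e
pg_restore --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" --no-owner /dump/testdb.dump

Dropped into /docker-entrypoint-initdb.d/ and marked executable, the entrypoint picks it up automatically.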
I'd like to create a docker image with data.
My attempt is as follows:
FROM postgres
COPY dump_testdb /image
RUN pg_restore /image
RUN rm -rf /image
Then I run docker build -t testdb . and docker run -d testdb.
When I connect to the container I don't see the restored db. How do I get an image with the restored data?
COPY the dump file, with a .sql extension, into /docker-entrypoint-initdb.d/. Do not try to RUN anything. The postgres image will run everything in that directory the first time a container is started on a particular data directory.
You generally can’t RUN commands that interact with the database in a Dockerfile because the database won’t be running at that point. (There is a script in the base image that goes through some complicated gymnastics to do the first-time setup.) In any case, because of the mechanics of Docker’s volume system, you can’t create an image that contains prepopulated database data; you have to use a mechanism like this to cause the image to restore a dump or otherwise set itself up at first start.
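Putting those two points together, a corrected Dockerfile could look like this, assuming dump_testdb is a plain-SQL dump (a custom-format dump would instead need a small .sh wrapper around pg_restore, as in the answer above):

FROM postgres
# The entrypoint runs *.sql files from this directory on first start,
# so the dump needs a .sql extension to be picked up.
COPY dump_testdb /docker-entrypoint-initdb.d/dump_testdb.sql

The restore then happens the first time you docker run the image against an empty data directory, not at build time.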
I am trying to run my PostGIS database in a Docker container, so I dumped my database and created a Dockerfile like this:
FROM mdillon/postgis
COPY z_myDump.sql /docker-entrypoint-initdb.d/
I use mdillon/postgis as the base image (the PostGIS extensions are already included) and copy my dump. The container exits after a few seconds with the following error:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/z_myDump.sql
/docker-entrypoint-initdb.d/z_myDump.sql: Permission denied
Any idea?
Changing the permissions of the .sql file before building the image did the job ... my bad.
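For anyone who hits the same error: the file just has to be readable inside the container, so something along these lines before the build should be enough (the image tag is arbitrary):

chmod a+r z_myDump.sql
docker build -t my-postgis-db .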
I have a Docker image for PostgreSQL 10.4. I have old database files from PostgreSQL 8.4 that I want to upgrade for use with 10.4, but I don't really have a good way to do this. Is it possible to use the Docker images to upgrade the old files?
I think you can run the postgres:8.4 image, execute pg_dumpall inside it, and save the result to the host using, for example, a volume or the docker cp command.
After that you can run the postgres:10 image, provide the result file to it (a volume or docker cp again), and restore the data.
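A rough, untested sketch of that dance (the container names, host paths, and default trust authentication are all assumptions on my part):

# 1. Start an 8.4 server on top of the old data files.
docker run -d --name pg84 -v /path/to/old/data:/var/lib/postgresql/data postgres:8.4
# 2. Dump every database to a file on the host
#    (give the server a few seconds to start first).
docker exec pg84 pg_dumpall -U postgres > dump.sql
# 3. Start a fresh 10.x server, copy the dump in, and restore it.
docker run -d --name pg10 -v pgdata10:/var/lib/postgresql/data postgres:10
docker cp dump.sql pg10:/dump.sql
docker exec pg10 psql -U postgres -f /dump.sql

The pgdata10 named volume keeps the upgraded cluster around after the helper containers are removed.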
A quick question about how Docker and Mongo coexist.
When I deploy my app to Docker Hub, does the image include DB records?
And when does Docker remove the Mongo records: when I stop the container, or only when I remove it?
The answer is: it depends...
You could create an image with your records, but that would increase your image size, and if someone mounted a volume at the path /data/db they would lose your database. So I do not recommend uploading an image with a preloaded database; instead, use a custom entrypoint script to initialize your database.
As for when the records are destroyed: that happens when you remove the container, but only if you did not mount a volume at /data/db. If you did mount one, the database is persisted even if you remove the container.
You can see more info about how to use the image at: https://hub.docker.com/_/mongo/
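A minimal sketch of the volume approach described above (the volume name mongodata is arbitrary):

docker run -d --name mongo -v mongodata:/data/db mongo

Removing this container and starting a new one with the same -v flag brings the existing data back.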