I am trying to run my PostGIS database in a Docker container, so I dumped my database and created a Dockerfile like this:
FROM mdillon/postgis
COPY z_myDump.sql /docker-entrypoint-initdb.d/
I use mdillon/postgis as the base image (the PostGIS extensions are already included) and copy my dump in. The container exits after a few seconds with the following error:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/z_myDump.sql
/docker-entrypoint-initdb.d/z_myDump.sql: Permission denied
Any ideas?
Changing the permissions of the .sql file before building the image did the job ... my bad.
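For anyone hitting the same error, the fix can be as simple as the following (the file name is the one from the Dockerfile above; touch merely stands in for the real dump so the snippet is self-contained):

```shell
touch z_myDump.sql        # stands in for the real dump file
chmod a+r z_myDump.sql    # make it readable by everyone inside the build context
ls -l z_myDump.sql        # permissions now include r for user, group and other
```

After that, rebuild the image and the entrypoint can read the file.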
I'd like to create a docker image with data.
My attempt is as follows:
FROM postgres
COPY dump_testdb /image
RUN pg_restore /image
RUN rm -rf /image
Then I run docker build -t testdb . and docker run -d testdb.
When I connect to the container I don't see the restored db. How do I get an image with the restored data?
COPY the dump file, with a .sql extension, into /docker-entrypoint-initdb.d/. Do not try to RUN anything. The postgres image will run everything in that directory the first time a container is started on a particular data directory.
You generally can’t RUN commands that interact with the database in a Dockerfile because the database won’t be running at that point. (There is a script in the base image that goes through some complicated gymnastics to do the first-time setup.) In any case, because of the mechanics of Docker’s volume system, you can’t create an image that contains prepopulated database data; you have to use a mechanism like this to cause the image to restore a dump or otherwise set itself up at first start.
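Concretely, the attempt above could be rewritten along these lines (a minimal sketch; the dump file needs a .sql extension for the entrypoint to pick it up, and dump_testdb.sql is assumed to be a plain-SQL dump rather than a custom-format archive):

```dockerfile
FROM postgres
# The entrypoint runs every *.sql file in this directory the first time
# a container starts on an empty data directory.
COPY dump_testdb.sql /docker-entrypoint-initdb.d/
```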
Until now I've been backing up my PostgreSQL data using pg_dump, which exports the data to an SQL file mydb.sql, and then restoring from that file using psql -U user -d db < mydb.sql.
For one reason or another it would be more convenient to restore the database content more directly, in an environment where psql does not exist... specifically on a host server where PostgreSQL is installed in a Docker container running on the host, but not on the host itself.
My plan is to back up the content of /var/lib/postgresql/data/ to a tar file, and when required (e.g. when a new server is created that hosts the postgresql container) just restore that to the same path. The folder /var/lib/postgresql/data/ in the docker container is mapped to a folder on the host server, so I would create this backup on the host, not inside the postgres container.
Is this a valid approach? Any "gotchas"? And are there any subfolders within /var/lib/postgresql/data/ that I can exclude from the tar file? I don't want to back up mere 'housekeeping' information.
You can do that, but you have to do it properly if you don't want your database to become corrupted.
Either stop PostgreSQL before copying the data directory or follow the instructions from the documentation for an online backup.
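A minimal sketch of the offline variant (on a real host you would first run docker stop on the Postgres container so the files are consistent; /tmp/pgdata_demo here stands in for the host folder mapped to /var/lib/postgresql/data, which is an assumption):

```shell
# Stand-in for the mapped data directory on the host
mkdir -p /tmp/pgdata_demo
echo "9.6" > /tmp/pgdata_demo/PG_VERSION

# Archive the whole data directory while the server is stopped
tar czf /tmp/pgdata_backup.tar.gz -C /tmp/pgdata_demo .
tar tzf /tmp/pgdata_backup.tar.gz   # lists ./PG_VERSION among the contents

# ...and afterwards start the container again
```

Restoring is the reverse: extract the tar into the same host path before starting the container.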
I have installed Postgres 9.5 on my Linux (16.04) machine. I started the service using the command below:
sudo service postgresql start
This starts the Postgres service as the postgres user.
But I want to run Postgres as a different user (my own user).
How can I do that? Please help!
You have to recursively change the ownership of the database directory to the new user.
If the WAL directory or tablespaces are outside the data directory, you have to change their ownership too.
Then you will have to configure the startup script so that it starts PostgreSQL as the new user. Watch out: if you installed the startup script with an installation package, any changes to it will probably be lost after an update.
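An illustrative sketch of the ownership step only (/tmp/pgdata_demo and the current user stand in for the real data directory and the new service user; on a real system you would stop PostgreSQL first and run the chown as root on the actual directory, e.g. /var/lib/postgresql/9.5/main on Debian/Ubuntu):

```shell
# Stand-in data directory
mkdir -p /tmp/pgdata_demo/base
# Recursively hand the tree to the new owner: chown -R newuser:newgroup <datadir>
chown -R "$(id -un)":"$(id -gn)" /tmp/pgdata_demo
ls -ld /tmp/pgdata_demo   # owner column now shows the new user
```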
I recommend that you don't do all that and continue running PostgreSQL as postgres.
I have a Docker image for PostgreSQL 10.4, and old database files from PostgreSQL 8.4. I want to upgrade these for use with 10.4 but don't really have a good way to do it. Is it possible to use the Docker image to upgrade the old files?
I think you can run the postgres:8.4 image, execute pg_dumpall inside it, and save the result to the host using, for example, a volume or the docker cp command.
After that you can run postgres:10 image, provide the result file to it (a volume or docker cp again) and restore the data.
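A hedged sketch of that dump-and-restore sequence, assuming a postgres:8.4 image is available as the answer suggests, with placeholder container names and paths throughout:

```shell
# 1. Start a throwaway 8.4 container on the old data files
docker run -d --name pg84 \
  -v /path/to/old/data:/var/lib/postgresql/data postgres:8.4
# (wait until the server reports it is ready to accept connections)

# 2. Dump everything to a file on the host
docker exec pg84 pg_dumpall -U postgres > dump.sql
docker rm -f pg84

# 3. Start a fresh 10.x container and restore the dump into it
docker run -d --name pg10 postgres:10
docker cp dump.sql pg10:/tmp/dump.sql
docker exec pg10 psql -U postgres -f /tmp/dump.sql
```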
I need to create a Docker image based on postgres and launch an SQL statement, I suppose by using psql, after the image has started.
The idea is that I should create a .sh script like the following:
psql -U username database -f statement.sql
(since we're on localhost, psql should be allowed to connect to the db).
Is this the correct approach? Anyway, I am not able to launch the script after the server has started; I get a connection failure because the server is not up yet.
What's the right way to extend an existing Docker image without copying all the setup of the base image?
Thanks !