Import database when creating a Docker container - need to use pg_restore - postgresql

Has anyone solved automatic PostgreSQL database import when creating a Docker image? The traditional method is to put files into docker-entrypoint-initdb.d, but that does not work for me because I need to import via pg_restore (my dump is in the custom format). I also do not know how to start the postgres service from a Dockerfile: every RUN instruction executes in a separate container layer. Thank you for your help.

I solved this by putting a .sh script (which contains the pg_restore commands) into docker-entrypoint-initdb.d. I use the official Postgres image, which runs any .sql and .sh files placed in docker-entrypoint-initdb.d the first time the container starts.
More info https://github.com/docker-library/docs/tree/master/postgres#how-to-extend-this-image
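A minimal sketch of such a script, assuming the custom-format dump was COPYed to /docker-entrypoint-initdb.d/db.dump in the Dockerfile (the file name and pg_restore flags are placeholders to adapt):

#!/bin/sh
# restore.sh - dropped into /docker-entrypoint-initdb.d/; the official image
# runs it on first start, after the init server is already accepting connections.
set -e
pg_restore -U "$POSTGRES_USER" -d "$POSTGRES_DB" --no-owner /docker-entrypoint-initdb.d/db.dump

POSTGRES_USER and POSTGRES_DB are the environment variables the official image uses for first-time setup; both default to postgres if unset.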

Related

Connect Tarantool Docker image to Postgres

I'm trying to connect the Tarantool Docker image to a local PostgreSQL instance, to replicate some test data, and I ran into the following problems:
- It seems there is no CLI (except the Tarantool console) to check which files are in place (exec bin/bash fails)
- pg = require('pg') leads to an error: "init.lua:4: module 'pg.driver' not found", despite the presence of the pg module in the Docker description
- I have doubts about how to efficiently replicate 4 tables, and the relations between them, to the container from outside Postgres
Does anyone know sources to dig into for solutions to these problems? Any direction would be greatly appreciated.
To get a shell in the container, use sh rather than bash:
docker exec -ti tnt_container sh
The pg driver seems to have been dropped from recent images, which causes the issue. You should find an older base image or build it yourself.
The last point is a PostgreSQL question rather than a Tarantool one. You may pass batches of data to pg functions, or use an intermediate application to transfer the data via COPY; it looks like Tarantool's pg driver does not support COPY.

Why does my postgres docker image not contain any data?

I'd like to create a docker image with data.
My attempt is as follows:
FROM postgres
COPY dump_testdb /image
RUN pg_restore /image
RUN rm -rf /image
Then I run docker build -t testdb . and docker run -d testdb.
When I connect to the container I don't see the restored db. How do I get an image with the restored data?
COPY the dump file, with a .sql extension, into /docker-entrypoint-initdb.d/. Do not try to RUN anything. The postgres image will run everything in that directory the first time a container is started on a particular data directory.
You generally can’t RUN commands that interact with the database in a Dockerfile because the database won’t be running at that point. (There is a script in the base image that goes through some complicated gymnastics to do the first-time setup.) In any case, because of the mechanics of Docker’s volume system, you can’t create an image that contains prepopulated database data; you have to use a mechanism like this to cause the image to restore a dump or otherwise set itself up at first start.
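For reference, a minimal sketch of that approach, assuming a plain-SQL dump named dump_testdb.sql (the name is a placeholder):

FROM postgres
# the entrypoint executes everything in this directory on first start
COPY dump_testdb.sql /docker-entrypoint-initdb.d/

If the dump is in pg_restore's custom format instead, wrap the pg_restore call in a .sh script and COPY that script into the same directory, as in the first answer above.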

Dump Contents of RDS Postgres Query

Short Version of this Question:
I'd like to dump the contents of a Postgres query from a db instance hosted in RDS inside of a shell script.
Complete Version:
Right now I'm writing a shell script that should dump the contents of a query from a source database into a .dump file, and then run that dump file against a destination database instance. Both db instances are hosted in RDS.
MySQL lets you do this with the mysqldump tool, but the recommended answer to this problem in Postgres seems to be the COPY command. However, the COPY command isn't available on RDS instances (it requires access to the server's filesystem). The recommended solution in that case seems to be the '\copy' command, which does the same thing client-side using the psql tool, but it doesn't seem like that is a supported option inside of a shell script.
What's the best way to accomplish this?
Thank you!
I am not familiar with shell, but I have used a batch file on Windows to dump the output of a query to a file and to import that file on another instance.
Here is what I used to export from a Postgres RDS instance to a file on Windows:
SET PGPASSWORD=your_password
cd "C:\Program Files (x86)\pgAdmin 4\v3\runtime"
psql -h your_host -U your_username -d your_databasename -c "\copy (your_query) TO 'path\file_name.sql'"
All of the above commands go in one batch file.
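For a plain shell script, a hedged sketch of the same idea (hosts, query, table, and file names are placeholders); psql accepts \copy via -c, so it runs fine non-interactively:

#!/bin/sh
set -e
export PGPASSWORD=your_password
# dump the query result from the source RDS instance to a local CSV file
psql -h source_host -U your_username -d source_db \
    -c "\copy (your_query) TO 'data.csv' WITH CSV"
# load the file into the matching table on the destination RDS instance
psql -h dest_host -U your_username -d dest_db \
    -c "\copy your_table FROM 'data.csv' WITH CSV"

Both \copy directions run client-side, so no server filesystem access is needed on RDS.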

Postgres: launching an sql after the docker image started

I need to create a Docker image based on "postgres", and to launch an SQL statement, I suppose by using psql, after the container has started.
The idea is that I should create a .sh script like the following:
psql -U username database -f statement.sql
(since we're on localhost, psql should be allowed to connect to the db).
Is this the correct approach? In any case, I am not able to launch the script after the server has started; I get a connection failure because the server is not up yet.
What's the right way to extend an existing Docker image without copying all the setup of the base image?
Thanks!
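As the other answers in this thread describe, the usual way is to let the image's init directory do this, so no manual psql call after startup is needed. A minimal sketch (file names are placeholders):

FROM postgres
# run by the entrypoint on first start, once the init server accepts connections
COPY statement.sql /docker-entrypoint-initdb.d/

The entrypoint feeds .sql files to psql only after the temporary init server is up, which avoids the connection failure described above.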

The best practices for PostgreSQL docker container initialization with some data

I've created a Docker image with PostgreSQL running inside and exposing port 5432.
This image doesn't contain any database inside; the container is just an empty PostgreSQL database server.
During (or in) the "docker run" command I'd like to:
- attach a db file
- create a db via sql query execution
- restore a db from a dump
I don't want to keep the data after the container is stopped; it's just a temporary development server.
I suspect it's possible to keep my "docker run" command string quite short/simple.
It is probably possible to mount some external folder with the db/sql/dump in the run command and then create the db during container initialization.
What are the best/recommended ways and practices to accomplish this task? Perhaps somebody can point me to corresponding Docker examples.
This is a good question, and probably something other folks have asked themselves more than once.
According to the Docker guide you would not do this in a RUN command. Instead you would create an ENTRYPOINT or CMD in your Dockerfile that calls a custom shell script instead of calling the postgres process directly. In this scenario the DB is created on a "real" filesystem, but then cleaned up during shutdown of the container.
How would this work? The container starts, calls the ENTRYPOINT or CMD as usual, and consumes the init script to get the DB filled. Then, at the moment the container is stopped, the same script is notified with a signal and manually drops the database content.
CMD ["cleanAndRun.sh"]
A sketch of such a "cleanAndRun.sh" script, taken from the Docker documentation and modified for these needs. Please remember it is a sketch only and needs modification:
#!/bin/sh
# The command run by the trap must also stop the DB; the dropdb call below is
# not enough on its own, it just demonstrates how to run something in the
# stop-container scenario.
trap "dropdb <params>" HUP INT QUIT TERM
# (re)initialize the DB -every- time the container starts
<init script to clean and import dump>
# start the postgres server; note that for the trap to fire while the server
# is running, you may need to start postgres in the background and `wait`
postgres
echo "exited $0"