I need to create a Docker image based on "postgres", and to run an SQL statement, I suppose by using psql, after the container has started.
The idea is that I should create a .sh script, like the following:
psql -U username database -f statement.sql
(Since we're on localhost, psql should be allowed to connect to the db.)
Is this the correct approach? In any case, I am not able to run the script after the server starts; I get a connection failure because the server is not yet running.
What's the right way to extend an existing Docker image without copying all the setup of the base image?
Thanks!
I am using a Docker container for a Postgres database, behind a UI that is powered by that Postgres. I deleted an entry from Postgres via Postico (a Postgres client for Mac); now my database looks completely empty in Postico, but when I refresh the UI I still see the data!
Please correct me if I am wrong: Postico is just a UI to see what is in the Postgres instance running in the container. In that case I expected the application UI to be empty as well, but it is not.
Can someone tell me why my expectation is wrong, and please let me know if any more info is needed?
I am currently working on a school project and I need to connect to a PostgreSQL container through pgAdmin. I used docker-compose to create the container instances of PostgreSQL and PostGIS.
But when I try to connect to my PostgreSQL container, it does not work. Maybe I have entered the details incorrectly? I have attached a screenshot of the docker-compose.yml file and of the parameters I filled in on pgAdmin Desktop.
What am I doing wrong?
[screenshot: netstat output]
Can anyone please help me? I would really appreciate it!
Your db container's outside (host) port is 54320, not 5432. You are trying to connect to the postgis container instead of the postgres one, and the two containers have different passwords.
You can do one of the following:
If you still want to connect to postgis, use its password accordingly.
If you want to connect to postgres, change the port to 54320 in pgAdmin.
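As a quick sanity check from the host, something like the following should show which server answers on that port (the user name here is an assumption; use whatever is set in the compose file):
psql -h localhost -p 54320 -U postgres -c "SELECT version();"
If that connects, the same host, port and password should also work from pgAdmin.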
I'd like to create a docker image with data.
My attempt is as follows:
FROM postgres
COPY dump_testdb /image
RUN pg_restore /image
RUN rm -rf /image
Then I run docker build -t testdb . and docker run -d testdb.
When I connect to the container I don't see the restored db. How do I get an image with the restored data?
COPY the dump file, with a .sql extension, into /docker-entrypoint-initdb.d/. Do not try to RUN anything. The postgres image will run everything in that directory the first time a container is started on a particular data directory.
You generally can’t RUN commands that interact with the database in a Dockerfile because the database won’t be running at that point. (There is a script in the base image that goes through some complicated gymnastics to do the first-time setup.) In any case, because of the mechanics of Docker’s volume system, you can’t create an image that contains prepopulated database data; you have to use a mechanism like this to cause the image to restore a dump or otherwise set itself up at first start.
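For example, a minimal sketch of a Dockerfile following that approach, assuming dump_testdb is (or can be exported as) a plain-SQL file:
FROM postgres
# executed automatically on the first start against an empty data directory
COPY dump_testdb.sql /docker-entrypoint-initdb.d/
If the dump is in pg_restore's custom format instead, a small .sh script placed in that same directory can call pg_restore during that first-start initialization.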
Currently we have a single all-in-one Docker container for our production GitLab, where we use the bundled Postgres and Redis, so everything is in the same container. To follow production standards we want to use an external Postgres db and a separate container for Redis as well.
How can I migrate from the internal Postgres db to an external Postgres db? If anyone can provide the process and steps, that would be really helpful; we are new to this process.
Thank you everyone for your inputs,
PRS
You can follow the article "Migrating GitLab from internal to external PostgreSQL", which involves:
a database dump/reload, using pg_dumpall
sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dumpall \
--username=gitlab-psql --host=/var/opt/gitlab/postgresql > /var/lib/pgsql/database.sql
sudo -u postgres psql -f /var/lib/pgsql/database.sql
Note: you can also use a backup of the database, but only if the external PostgreSQL version exactly matches the embedded one.
setting the gitlab database user's password
sudo -u postgres psql -c "ALTER USER gitlab ENCRYPTED PASSWORD '***' VALID UNTIL 'infinity';"
and modifying the GitLab configuration, that is:
# Disable the built-in Postgres
postgresql['enable'] = false
# Fill in the connection details
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'utf8'
gitlab_rails['db_host'] = '127.0.0.1'
gitlab_rails['db_port'] = 5432
gitlab_rails['db_database'] = "gitlabhq_production"
gitlab_rails['db_username'] = 'gitlab'
gitlab_rails['db_password'] = '***'
apply your changes:
gitlab-ctl reconfigure && gitlab-ctl restart
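Optionally, as a follow-up that is not part of the article's steps, GitLab's built-in check task can help confirm that the new database connection works:
sudo gitlab-rake gitlab:check SANITIZE=true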
@VonC
Hi, please let me know about the process I have done below.
We currently have a single all-in-one Docker GitLab container using the bundled Postgres and Redis. To follow production standards we want to maintain separate Postgres and Redis instances for our production GitLab. We already had data in the bundled db, so we took a backup of the current GitLab with bundled Postgres, which generated a .tar file. Next we changed gitlab.rb to point to the external Postgres db (same version); we could then connect to GitLab but didn't see any data, because the external db was fresh and empty. Later we restored the backup against the external Postgres db, and now we can see all the data. Can we do it this way? Our GitLab is now attached to the external Postgres and I can see all the restored data. Will this process work? Are there any downsides?
How is this process different from a pg_dump and import?
I've created a Docker image with PostgreSQL running inside and exposing port 5432.
The image doesn't contain any database; the container is an empty PostgreSQL database server.
In (or during) the "docker run" command, I'd like to:
attach a db file
create a db by executing an SQL query
restore a db from a dump
I don't want to keep the data after the container is stopped; it's just a temporary development server.
I suspect it's possible to keep my "docker run" command string quite short/simple.
It is probably possible to mount some external folder with the db/sql/dump in the run command and then create the db during container initialization.
What is the best/recommended way, and what are the best practices, to accomplish this task? Perhaps somebody can point me to corresponding Docker examples.
This is a good question and probably something other folks have asked themselves more than once.
According to the Docker guide you would not do this in a RUN command. Instead you would create an ENTRYPOINT or CMD in your Dockerfile that calls a custom shell script rather than starting the postgres process directly. In this scenario the DB is created on a "real" filesystem, but then cleaned up during shutdown of the container.
How would this work? The container starts, calls the ENTRYPOINT or CMD as usual and runs the init script to get the DB filled. Then, at the moment the container is stopped, the same script is notified with a signal and drops the database content.
CMD ["cleanAndRun.sh"]
A sketch of the "cleanAndRun.sh" script itself, adapted from the Docker documentation and modified for your needs. Please remember it is a sketch only and needs modifications:
#!/bin/sh
# The command run by the trap must also stop the DB; the dropdb call below is
# not enough on its own, it just demonstrates how to hook into the stop-container scenario!
trap "dropdb <params>" HUP INT QUIT TERM
# init your DB -every- time the container starts
<init script to clean and import the dump>
# start postgres in the background and wait, so the shell can react to the signal
postgres &
wait $!
echo "exited $0"