Custom postgres shared_preload_libraries dir - how to configure? - postgresql

I am trying to install pg_cron extension in postgres in alpine linux docker.
When running
CREATE EXTENSION pg_cron;
in psql console I get:
ERROR: could not open extension control file "/usr/local/share/postgresql/extension/pg_cron.control": No such file or directory
The problem is that the actual pg_cron.control is not under /usr/local/share/... but under /usr/share/...
Where in postgresql.conf can I define the path?
Steps taken:
docker run --name postgres-0 -e POSTGRES_PASSWORD=Password1 -p 5432:5432 -d postgres:10-alpine
docker exec -it postgres-0 /bin/bash
apk update
apk add postgresql-pg_cron --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community
cat <<EOT >> /var/lib/postgresql/data/postgresql.conf
shared_preload_libraries='pg_cron'
EOT
pg_ctl reload

PostgreSQL expects to find the extension files in the SHAREDIR/extension/ directory associated with the installation (execute pg_config --sharedir to confirm the value of SHAREDIR for your particular installation).
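In this case the error message shows the server looking under /usr/local, so SHAREDIR will be something like:
$ pg_config --sharedir
/usr/local/share/postgresql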
There is, however, no facility for specifying an alternative location for extension files; it looks like something is wrong with the packaging.
I'm not familiar with Alpine Linux, but a quick Google search brings up, e.g., this issue: Postgres extensions are installed into incorrect path. The suggested solution is to use a bare Alpine Linux image and install PostgreSQL via the apk command, so you might want to try that.
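A rough, untested sketch of that approach (the Alpine tag and package names are assumptions based on the question):
FROM alpine:edge
# Install server and extension from the same apk repositories, so that
# pg_cron.control lands in the SHAREDIR reported by pg_config --sharedir.
RUN apk add --no-cache postgresql postgresql-pg_cron \
    --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community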

Related

Docker image wait for postgres image to start - without Bash

I have a standard Python docker image that needs to start after postgres is properly started in its standard image.
I understand that I can add this Bash command in the docker-compose file:
command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
depends_on:
  - mypostgres
But I don't have bash installed in the standard python docker image, and I'm trying to keep the installation minimal.
Is there a way to wait for postgres without having bash installed in my image?
I have a standard Python docker image that needs to start after postgres is properly started in its standard image.
You mentioned "Python docker image", but you appear to be calling npm start, which is a node.js application, not a Python application.
The standard Python images do have bash installed (as do the official Node images):
$ docker run -it --rm python:3.10 bash
root@c9bdac2e23f9:/#
However, just checking for the port to be available may be insufficient in any case, so really what you want is to execute a query against the database and only continue once the query is successful.
A common solution is to install the PostgreSQL client and run psql in a loop, like this:
until psql -h "$HOST" -U "$USER" -d "$DB_NAME" -c 'select 1' >/dev/null 2>&1; do
    echo 'Waiting for database...'
    sleep 1
done
You can use environment variables or a .pgpass file to provide the appropriate password.
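For example, a ~/.pgpass file (which must have 0600 permissions) holds one line per connection in the form hostname:port:database:username:password; the values below are placeholders:
db:5432:mydb:myuser:s3cret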
If you are building a custom image, it may be better to place this logic in your ENTRYPOINT script rather than embedding it in the command field of your docker-compose.yaml.
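A minimal sketch of such an entrypoint script (the variables are placeholders mirroring the loop above):
#!/bin/sh
# Block until the database accepts a trivial query, then run the real command.
until psql -h "$HOST" -U "$USER" -d "$DB_NAME" -c 'select 1' >/dev/null 2>&1; do
    sleep 1
done
exec "$@"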
If you don't want to use psql, you can write the same logic in Python or Node utilizing whatever Postgres bindings are available (e.g., something like psycopg2 for Python).
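A minimal sketch with psycopg2 (the environment variable names are placeholders mirroring the loop above; the password can come from PGPASSWORD or ~/.pgpass):
import os
import time

import psycopg2

# Retry a trivial query once a second until the database accepts connections.
while True:
    try:
        conn = psycopg2.connect(
            host=os.environ["HOST"],
            user=os.environ["USER"],
            dbname=os.environ["DB_NAME"],
        )
        conn.cursor().execute("SELECT 1")
        conn.close()
        break
    except psycopg2.OperationalError:
        print("Waiting for database...")
        time.sleep(1)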
A better solution is to make your application robust in the face of database failures, because this allows your application to continue running if the database is briefly unavailable during a restart.

Docker Compose - Container Bash Forking

I am trying to run netbox based on their standard guide on Docker Hub, with the slight difference that I need our existing postgres dump to be restored when the postgres container starts.
I have tried a few approaches, like defining a command option in the docker-compose file (and a few more combinations):
sleep 2 && psql -U netbox -f netbox.sql
sleep is required to prevent the psql command from running before the postgres service has started.
I also tried defining a bash script that does the database restore, but all these approaches cause the container to exit after that command/script is run.
My last resort was to utilize bash forking and this is what the postgres snippet of docker-compose looks like:
postgres:
  image: postgres:13-alpine
  env_file: env/postgres.env
  command:
    - sh
    - -c
    - (sleep 3 && cd /home && psql -U netbox -f netbox.sql) & su -c postgres postgres
  volumes:
    - ./my_db:/home/
    - netbox-postgres-data:/var/lib/postgresql/data
Sadly this results in:
postgres: could not access the server configuration file
"/var/lib/postgresql/data/postgresql.conf": No such file or directory
If I omit the command section of docker-compose, the container starts up fine, and I can navigate to and ls the directory in the error message. But that is not what I need, because this container will go on to be part of a much larger jungle of an ecosystem with little to no control over it afterwards.
Could it be my bash forking, or does the problem lie somewhere else?
Thanks in advance
I was able to find a solution by going through the thread that David Maze shared in the comments.
In my case, placing the *.sql file inside /docker-entrypoint-initdb.d did not work, but a bash script placed in the /docker-entrypoint-initdb.d directory did get triggered.
The bash script was a very simple one: it would cd to the directory containing the SQL dump and then restore it by running psql:
psql -U netbox -f netbox.sql
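For reference, a sketch of such a script (the filename is arbitrary; the paths assume the dump is mounted at /home as in the compose file above):
#!/bin/bash
set -e
# cd to the directory containing the SQL dump, then restore it with psql
cd /home
psql -U netbox -f netbox.sql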

How to run the 'postgres' command (single-user)

I have postgres installed on an ubuntu machine, and I am able to enter into the command line via something along the lines of:
$ sudo -u postgres psql
psql (10.15 (Ubuntu 10.15-0ubuntu0.18.04.1))
Type "help" for help.
postgres=#
And I can start/stop the server by doing something like:
$ sudo service postgresql
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
Those both seem fine. However, I would like to run postgres in single-user mode to do a couple of tests. The postgres reference page gives a few examples, such as:
To start a single-user mode server, use a command like
postgres --single -D /usr/local/pgsql/data other-options my_database
However, if I use the 'postgres' command, I just get an error saying I don't have that command:
$ postgres
Command 'postgres' not found, did you mean:
What do I need to install to run the 'postgres' command in order to enter single-user mode?
You have not exported the binary path, which is why the shell can't find your postgres binary.
Use the full path:
/usr/lib/postgresql/10/bin/postgres --single -D /usr/local/pgsql/data other-options my_database
Or you can export the path in bash:
First, open your bashrc: nano ~/.bashrc
Add this line at the end: PATH="/usr/lib/postgresql/10/bin:$PATH"
Reload it: source ~/.bashrc
Then just use: postgres --single -D /usr/local/pgsql/data other-options my_database
You can also find where your binary is with: find /usr/lib -iname 'postgres'
It is already installed; it is just not in your PATH, as it is not anticipated that you would use it manually.
It is probably somewhere like /usr/lib/postgresql/10/bin/postgres, or you can use locate or find to find it.
Ubuntu has conf files spread over several places so:
/usr/lib/postgresql/13/bin/postgres --single -D /var/lib/postgresql/13/main -c "config_file=/etc/postgresql/13/main/postgresql.conf"

How to build an image of Postgres:11 with HLL extension?

I want to make a Dockerfile to build an image of Postgres:11 that already has the postgresql-hll extension installed.
I'm not experienced with Docker, so I have no idea how to follow the instructions for installing this extension properly.
In order to do this you need to:
Clone the git repository:
git clone https://github.com/citusdata/postgresql-hll.git
Create a file called Dockerfile (at the same level as the postgresql-hll folder created in step 1) with the contents:
ARG psversion=11
FROM postgres:$psversion
COPY postgresql-hll /postgresql-hll
RUN apt-get update -y && apt-get install -y postgresql-server-dev-${PG_MAJOR} make gcc g++
WORKDIR /postgresql-hll
RUN PG_CONFIG=/usr/bin/pg_config make
RUN PG_CONFIG=/usr/bin/pg_config make install
RUN echo "shared_preload_libraries = 'hll'" >> /usr/share/postgresql/postgresql.conf.sample
COPY create_extension.sql /docker-entrypoint-initdb.d/
Create a file create_extension.sql at the same level as the Dockerfile, with the contents:
CREATE EXTENSION hll;
Build your image:
# build for POSTGRES 11
docker build -t hll:1.0 --build-arg psversion=11 .
# build for POSTGRES 9.6
docker build -t hll:1.0 --build-arg psversion=9 .
NOTE: The version for POSTGRES 9.6 gives an error when trying to load the library. It is here for completeness, and maybe somebody can contribute a fix.
Run a container based on this image:
docker run -d --name hll hll:1.0
Open a shell in the newly created container:
docker exec -ti hll bash
Inside the container run:
su postgres
psql
\dx
The output should show the hll extension as installed.
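For example (versions and descriptions may differ slightly):
  Name   | Version |   Schema   |            Description
---------+---------+------------+-----------------------------------
 hll     | 2.14    | public     | type for storing hyperloglog data
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language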

Boot2Docker (on Windows) running Mongo with shared folder (This file system is not supported)

I am trying to start a Mongo container using shared folders on Windows using Boot2Docker. When starting it with docker run -it -v /c/Users/310145787/Desktop/mongo:/data/db mongo, I get a warning message inside the container saying:
WARNING: This file system is not supported.
After starting, mongo shuts down immediately.
Any hints or tips on how to solve this?
Apparently, according to this gist from Sev (sevastos), mongo doesn't support mounting a volume through a VirtualBox shared folder:
See the MongoDB Production Notes:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this operation.
The easiest solution of all, and a proper way to handle data persistence, is data volumes:
Assuming you have a container that has VOLUME ["/data"]
# Create a data volume
docker create -v /data --name yourData busybox true
# and use
docker run --volumes-from yourData ...
This isn't always ideal, though (the following is for Mac, by Edward Chu (chuyik)):
I don't think it's a good solution, because the data has just moved to another container, right?
But it is still inside a container rather than on the local system (the Mac's disk).
I found another solution: use sshfs to map data between the boot2docker VM and your Mac, which may be better since the data is not stored inside a Linux container.
Create a directory to store data inside boot2docker:
boot2docker ssh
mkdir -p /mnt/sda1/dev
Use sshfs to link boot2docker and mac:
echo tcuser | sshfs docker@localhost:/mnt/sda1/dev <your mac dir path> -p 2022 -o password_stdin
Run image with mongo installed:
docker run -v /mnt/sda1/dev:/data/db <mongodb-image> mongod
The corresponding boot2docker issue points to docker issue 12590 (Problem with -v shared folders in 1.6 #12590), which mentions the workaround of using a double slash.
Using a double slash seems to work. I checked it locally and it works.
docker run -d -v //c/Users/marco/Desktop/data:/data <image name>
it also works with
docker run -v /$(pwd):/data
As a workaround I just copy from a folder before the mongo daemon starts. Also, in my case I don't care about the journal files, so I only copy the database files.
I've used this command in my docker-compose.yml:
command: bash -c "(rm /data/db/*.lock && cd /prev && cp *.* /data/db) && mongod"
And every time before stopping the container I use:
docker exec <container_name> bash -c 'cd /data/db && cp $(ls *.* | grep -v "\.lock$") /prev'
Note: /prev is set as a volume (path/to/your/prev:/prev).
Another workaround is to use mongodump and mongorestore.
In docker-compose.yml: command: bash -c "(sleep 30; mongorestore --quiet) & mongod"
In the terminal: docker exec <container_name> mongodump
Note: I use sleep because I want to make sure that mongo has started, and that takes a while.
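Put together, the relevant snippet of docker-compose.yml might look like this (the image tag and the mounted dump directory are assumptions; mongorestore reads from ./dump by default):
services:
  mongo:
    image: mongo:3.6
    command: bash -c "(sleep 30; mongorestore --quiet) & mongod"
    volumes:
      - ./dump:/dump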
I know this involves manual work, etc., but I am happy that at least I got mongo with existing data running on my Windows 10 machine, and can still work on my MacBook when I want.
It seems like you don't need the data directory for MongoDB; removing those lines from your docker-compose.yml should run without problems.
The data directory is only used by mongo as a cache.