I am trying to set up a dockerised PostgreSQL service with plpython. I am able to build the image successfully, but when I come to run the container, I get the following error:
ERROR: could not open extension control file
"/usr/share/postgresql/9.5/extension/plpython3u.control": No such file
or directory STATEMENT: CREATE EXTENSION "plpython3u";
psql:/docker-entrypoint-initdb.d/create_db.sql:7: ERROR: could not
open extension control file
"/usr/share/postgresql/9.5/extension/plpython3u.control": No such file
or directory
My directory layout:
me@yourbox:~/Projects/experimental/docker/scratchdb$ tree
.
├── Dockerfile
└── sql
    ├── create_db.sql
    └── schemas
        └── DDL
            └── db_schema_foo.sql
Dockerfile
FROM library/postgres:9.6
FROM zitsen/postgres-pgxn
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
python3 postgresql-plpython3-9.6
RUN pgxn install quantile
COPY sql /docker-entrypoint-initdb.d/
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Set the default command to run when starting the container
# CMD ["/usr/lib/postgresql/9.6/bin/postgres", "-D", "/var/lib/postgresql/9.6/main", "-c", "config_file=/etc/postgresql/9.6/main/postgresql.conf"]
create_db.sql
-- Uncomment line below for debugging purposes
set client_min_messages TO debug1;
CREATE EXTENSION "quantile";
CREATE EXTENSION "plpython3u";
-- Create myappuser
CREATE ROLE myappuser LOGIN ENCRYPTED PASSWORD 'passw0rd123' NOINHERIT;
CREATE DATABASE only_foo_and_horses WITH ENCODING 'UTF8' TEMPLATE template1;
-- \l+
GRANT ALL PRIVILEGES ON DATABASE only_foo_and_horses TO myappuser;
-- Import only_foo_and_horses DDL and initialise database data
\c only_foo_and_horses;
\i /docker-entrypoint-initdb.d/schemas/DDL/db_schema_foo.sql;
-- enable python in database
Edit:
These are the commands I use to build and run the container:
docker build -t scratch:pg .
docker run -it --rm scratch:pg
How do I install plpython in a dockerised PostgreSQL service?
I think your error was because of the initial erroneous CMD which pointed to the wrong location of PostgreSQL for this image (9.5 vs 9.6).
However, I think I've spotted the mistake for why the SQL isn't being imported.
The default ENTRYPOINT for this image (at https://github.com/docker-library/postgres/blob/bef8f02d1fe2bb4547280ba609f19abd20230180/9.6/docker-entrypoint.sh) is responsible for importing from /docker-entrypoint-initdb.d/. Since you are overriding CMD and it is not equal to just postgres, it skips this part.
The default ENTRYPOINT should do what you want. Try removing your CMD.
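The relevant decision in the entrypoint can be sketched like this (a simplification for illustration, not a verbatim copy of docker-entrypoint.sh): the init files only run when the first argument is exactly `postgres`.

```shell
# Simplified sketch of the entrypoint's check: files in
# /docker-entrypoint-initdb.d/ are only processed when "$1" is "postgres".
entrypoint_sketch() {
  if [ "$1" = "postgres" ]; then
    echo "initdb + run /docker-entrypoint-initdb.d/ scripts"
  else
    echo "init scripts skipped; exec $*"
  fi
}

entrypoint_sketch postgres
entrypoint_sketch /usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main
```

This is why an absolute-path CMD like the commented-out one above bypasses the import step entirely.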
I have just run this all from scratch and it seems the extensions have been created successfully. Is there still an issue?
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/create_db.sql
SET
CREATE EXTENSION
CREATE ROLE
CREATE DATABASE
GRANT
LOG: received fast shutdown request
waiting for server to shut down...LOG: aborting any active transactions
LOG: autovacuum launcher shutting down
LOG: shutting down
.LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
Related
I am doing (or planning to do) some spatial DB work with OpenStreetMap data using Postgres. I am working on a Mac M1, and decided the best way is to run a Postgres DB with PostGIS installed in a Docker container. I can't connect from my host machine, but I can connect from another container added in the same docker-compose.
In order to set the database up for importing OSM data, I need to change some of the config values, as well as install PostGIS. So I have written my own Dockerfile that completes these tasks on startup; to do so I use two .sh files. This Dockerfile works mostly correctly, as I can connect from another Docker container running pgadmin4, but my backend application, Azure Data Studio, and psql from the terminal cannot connect, getting could not connect to server: Operation timed out Is the server running on host "172.31.0.2" and accepting TCP/IP connections on port 5432?.
My assumption is that I'm missing or adding something incorrectly when I run the Postgres container but no amount of googling bears fruit! Any help would be amazing.
Docker compose
version: '3.8'
services:
  db:
    build: ./database
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    ports:
      - "5432:5432"
Dockerfile
FROM postgres:14-bullseye
LABEL maintainer="PostGIS Project - https://postgis.net"
ENV POSTGIS_MAJOR 3
ENV POSTGIS_VERSION 3.3.2+dfsg-1.pgdg110+1
RUN apt-get update \
&& apt-cache showpkg postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR \
&& apt-get install -y --no-install-recommends \
# ca-certificates: for accessing remote raster files;
# fix: https://github.com/postgis/docker-postgis/issues/307
ca-certificates \
\
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR=$POSTGIS_VERSION \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR-scripts \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /docker-entrypoint-initdb.d
COPY ./initdb-postgis.sh /docker-entrypoint-initdb.d/10_postgis.sh
COPY ./update-postgis.sh /usr/local/bin
Essentially a copy of the postgis Docker image, but one that allows me to install other extensions in the initdb-postgis.sh and update-postgis.sh files, shown below:
initdb-postgis.sh
#!/bin/sh
set -e
export PGUSER="$POSTGRES_USER"
"${psql[@]}" <<- 'EOSQL'
ALTER SYSTEM SET listen_addresses = '*';
ALTER SYSTEM SET wal_level = minimal;
ALTER SYSTEM SET max_wal_senders = 0;
ALTER SYSTEM SET checkpoint_timeout = '1d';
ALTER SYSTEM SET checkpoint_completion_target = 0.90;
ALTER SYSTEM SET shared_buffers = '8GB';
ALTER SYSTEM SET max_wal_size = '10GB';
ALTER SYSTEM SET min_wal_size = '1GB';
CREATE DATABASE template_postgis IS_TEMPLATE true;
EOSQL
for DB in template_postgis "$POSTGRES_DB"; do
echo "Loading PostGIS extensions into $DB"
"${psql[@]}" --dbname="$DB" <<-'EOSQL'
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS postgis_topology;
CREATE EXTENSION IF NOT EXISTS hstore;
EOSQL
done
update-postgis.sh
#!/bin/sh
set -e
export PGUSER="$POSTGRES_USER"
POSTGIS_VERSION="${POSTGIS_VERSION%%+*}"
for DB in template_postgis "$POSTGRES_DB" "${@}"; do
echo "Updating PostGIS extensions '$DB' to $POSTGIS_VERSION"
psql --dbname="$DB" -c "
-- Upgrade PostGIS (includes raster)
CREATE EXTENSION IF NOT EXISTS postgis VERSION '$POSTGIS_VERSION';
ALTER EXTENSION postgis UPDATE TO '$POSTGIS_VERSION';
-- Upgrade Topology
CREATE EXTENSION IF NOT EXISTS postgis_topology VERSION '$POSTGIS_VERSION';
ALTER EXTENSION postgis_topology UPDATE TO '$POSTGIS_VERSION';
-- Upgrade hstore
CREATE EXTENSION IF NOT EXISTS hstore;
"
done
I'm connecting from the host using the IP given when I docker inspect the container id, so the docker compose network shouldn't be an issue. As I mentioned before, connection works fine from inside the docker compose, with both a pgadmin4 and unix script running osm2pgsql. Both can connect using either the IP from docker inspect, or the name of the service, in this case 'db'.
Whilst I admit I am relatively inexperienced with both Docker and PostGIS, normally some amount of googling helps me find the issue, but not this time. Please help!
It turns out that other processes (namely other Postgres DBs) were running on port 5432. So following this post fixed the issue, and I can now connect using psql.
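An alternative to stopping the other server is to publish the container on a different host port; a compose fragment sketching this (5433 is an arbitrary free host port — psql and other host clients would then need `-p 5433`):

```yaml
    ports:
      - "5433:5432"  # host port 5433 -> container port 5432
```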
I tried to install Kong using the instructions available on https://docs.konghq.com/install/ubuntu/ and also from the Snap Store, but I get the same error. I don't know if it's relevant or not, but I am using postgres-12.2, which comes pre-installed with Ubuntu-20.04. The directory structure in postgres-12.2 is different from the earlier ones.
error: cannot perform the following tasks:
- Run install hook of "kong" snap if present (run hook "install":
-----
The files belonging to this database system will be owned by user "snap_daemon".
This user must also own the server process.
The database cluster will be initialized with locale "C.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
creating directory /var/snap/kong/172/postgresql/10/main ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Asia/Kolkata
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
/snap/kong/172/usr/lib/postgresql/10/bin/pg_ctl -D /var/snap/kong/172/postgresql/10/main -l logfile start
createuser: could not connect to database postgres: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/snap/kong/common/sockets/.s.PGSQL.5432"?
-----)
Finally, it was done after a lot of research and endless tries. If you face the same issue, follow these steps (for Ubuntu 20.04 only):
1. It's better not to install using apt-get or snap (at least until a snap for Ubuntu 20.04 is available). Install using the .deb package available at https://docs.konghq.com/install/ubuntu/#packages. After downloading the package, navigate to the Downloads folder in the terminal and run the following commands to install:
sudo apt-get install openssl libpcre3 procps perl
sudo dpkg -i kong-2.0.4.*.deb
I don't know exactly what the first command installs, but it is recommended by the official Kong documentation.
2. In another terminal, create a new user and database in Postgres for Kong connectivity:
sudo -i -u postgres
psql
CREATE USER kong;
CREATE DATABASE kong OWNER kong;
3. Back in the terminal where you were installing Kong, try running the command:
sudo kong migrations bootstrap
If everything goes without a glitch, consider yourself lucky and go to step 5.
4. If an error occurs at step 3 such as Error: missing password, required for connect, there's more work to be done. Run the command kong check. This should list an error such as [error] no file at: /etc/kong/kong.conf. Create a file named kong.conf in the directory /etc/kong and paste into it the contents available at https://github.com/Kong/kong/blob/master/kong.conf.default. Thereafter, uncomment the lines which initialise the variables listed at https://docs.konghq.com/2.0.x/configuration/#postgres-settings. If in step 2 you created the user and database with different names, make sure to modify the credentials in kong.conf with your own user (role) and database names.
[You may face trouble creating files and editing them at this step.]
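For reference, the Postgres-related settings to uncomment look like this (the values shown assume the kong user and database from step 2 and a local server on the default port; adjust them to your setup):

```
pg_host = 127.0.0.1
pg_port = 5432
pg_user = kong
pg_database = kong
```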
5. Run the command sudo kong start to verify the correct configuration. Open your browser and navigate to http://localhost:8001. If a page opens, Kong is correctly configured on your device. Go back to the terminal and stop Kong using the command sudo kong stop.
I am using a postgis/postgis Docker image to set up a database server for my application.
The database server must have a tablespace created, then a database.
Then each time another application will start from another container, it will run a Liquibase script that will update the database schema (create tables, index...) when needed.
On a terminal, to prepare the database container, I'm running these commands:
# Run a naked Postgis container
sudo docker run --name ecoemploi-postgis \
  -e POSTGRES_PASSWORD=postgres \
  -d -v /data/comptes-france:/data/comptes-france postgis/postgis

# Send 'bash level' commands to create the directory for the tablespace
sudo docker exec -it ecoemploi-postgis \
  bin/sh -c 'mkdir /tablespace && chown postgres:postgres /tablespace'
Then to complete my step 1, I have to run SQL statements to create the tablespace from the database's point of view, and create the database with a CREATE DATABASE.
I connect manually to psql inside my container:
sudo docker exec -it ecoemploi-postgis bin/sh \
  -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
And I run these commands manually:
CREATE TABLESPACE data LOCATION '/tablespace';
CREATE DATABASE comptesfrance TABLESPACE data;
exit
But I would like to have a container created from a single Dockerfile that has done all the needed work. The difficulty is that it has to be done in two parts:
One before the container is started (creating directories, granting them user:group).
One after it is started for the first time: declaring the tablespace and creating the database. If I understand the base image correctly, this should be done after its docker-entrypoint.sh entrypoint has run?
What is the good way to write a Dockerfile creating a container having done all these steps ?
The PostGIS image "is based on the official postgres image", so it should be able to use the /docker-entrypoint-initdb.d mechanism. Any files you put in that directory will be run the first time the database container is started. The postgis Dockerfile already uses this directory to install the PostGIS extensions into the default database.
That means you can put your build-time setup directly into the Dockerfile, and copy the startup-time script into that directory.
FROM postgis/postgis:12-3.0
RUN mkdir /tablespace && chown postgres:postgres /tablespace
COPY createdb.sql /docker-entrypoint-initdb.d/20-createdb.sql
# Use default ENTRYPOINT/CMD from base image
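The createdb.sql copied above can contain exactly the statements you were running manually:

```sql
CREATE TABLESPACE data LOCATION '/tablespace';
CREATE DATABASE comptesfrance TABLESPACE data;
```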
For the particular setup you describe, this may not be necessary. Each database runs in an isolated filesystem space and starts with an empty data directory, so there's not a specific need to create an alternate data directory; Docker style is to just run multiple databases if you need isolated storage. Similarly, the base postgres image will create a database for you at first start (named by the POSTGRES_DB environment variable).
In order to run a container, your Dockerfile must be functional and complete. You can put the queries in a bash script and, in the last line, add an ENTRYPOINT that runs this bash script.
I'm using a Dockerfile to build an Ubuntu image that has PostgreSQL installed, but I can't wait for the postgres service status to be OK.
FROM ubuntu:18.04
....
RUN apt-get update && apt-get install -y postgresql-11
RUN service postgresql start
RUN su postgres
RUN psql
RUN CREATE USER kong; CREATE DATABASE kong OWNER kong;
RUN \q
RUN exit
Everything seems okay, but RUN su postgres will throw an error because the postgresql service has not yet started after RUN service postgresql start. How can I do that?
First thing: each RUN command in a Dockerfile runs in a separate shell, and RUN should be used for installation or configuration, not for starting processes. The process should be started in CMD or the entrypoint.
RUN vs CMD
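The "separate shell" point can be seen outside Docker too: each `sh -c` below is a fresh process, so state from the first doesn't reach the second, just as a server started in one RUN isn't running in the next.

```shell
# Each RUN line behaves like its own `sh -c`: variables set (and
# processes started) in one do not survive into the next.
sh -c 'X=1; echo "first shell sees X=$X"'
sh -c 'echo "second shell sees X=${X:-unset}"'
```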
Better to use the official postgres Docker image:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
The default postgres user and database are created in the entrypoint with initdb.
Or you can build your image based on postgres:
FROM postgres:11
ENV POSTGRES_USER=kong
ENV POSTGRES_PASSWORD=example
COPY seed.sql /docker-entrypoint-initdb.d/seed.sql
This will create an image with the user and password set, and the entrypoint will insert the seed data as well when the container starts.
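The seed.sql here is whatever initial data you want loaded at first start; a made-up example (the table and row are purely illustrative):

```sql
CREATE TABLE IF NOT EXISTS services (
    id   serial PRIMARY KEY,
    name text NOT NULL
);
INSERT INTO services (name) VALUES ('example-service');
```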
POSTGRES_USER
This optional environment variable is used in conjunction with
POSTGRES_PASSWORD to set a user and its password. This variable will
create the specified user with superuser power and a database with the
same name. If it is not specified, then the default user of postgres
will be used.
Some advantages of the official Docker image:
Create DB from ENV
Create DB user from ENV
Start container with seed data
Will wait for postgres to be up and running
All you need
# Use postgres/example user/password credentials
version: '3.1'
services:
db:
image: postgres
restart: always
environment:
POSTGRES_PASSWORD: example
Initialization scripts
If you would like to do additional initialization in an image derived
from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under
/docker-entrypoint-initdb.d (creating the directory if necessary).
After the entrypoint calls initdb to create the default postgres user
and database, it will run any *.sql files, run any executable *.sh
scripts, and source any non-executable *.sh scripts found in that
directory to do further initialization before starting the service.
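That dispatch can be sketched as follows (a simplification of the real entrypoint's per-file handling, for illustration only):

```shell
# Simplified sketch of how the entrypoint treats each file found in
# /docker-entrypoint-initdb.d/: run or source *.sh depending on the
# executable bit, feed *.sql to psql, decompress *.sql.gz first.
process_init_file() {
  f="$1"
  case "$f" in
    *.sh)
      if [ -x "$f" ]; then echo "run $f"; else echo "source $f"; fi
      ;;
    *.sql)    echo "psql -f $f" ;;
    *.sql.gz) echo "gunzip -c $f | psql" ;;
    *)        echo "ignore $f" ;;
  esac
}

process_init_file 10_postgis.sh
process_init_file seed.sql
```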
With a freshly installed version of Postgres 9.2 via yum repository on Centos 6, how do you run postgres as a different user when it is configured to run as 'postgres:postgres' (u:g) out of the box?
In addition to AndrewPK's explanation, I'd like to note that you can also start new PostgreSQL instances as any user by stopping and disabling the system Pg service, then using:
initdb -D /path/to/data/directory
pg_ctl start -D /path/to/data/directory
This won't auto-start the server on boot, though. For that you must integrate into your init system. On CentOS 6 a simple System V-style init script in /etc/init.d/ and a suitable symlink into /etc/rc3.d/ or /etc/rc5.d/ (depending on default runlevel) is sufficient.
If running more than one instance at a time they must be on different ports. Change the port directive in postgresql.conf in the datadir or set it on startup with pg_ctl -o "-p 5433" .... You may also need to override the unix_socket_directories if your user doesn't have write permission to the default socket directory.
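When picking the port for a second instance, a small helper can probe for a free one (this uses bash's /dev/tcp, so run it under bash; 5433 is just the first candidate — any unused port works):

```shell
# Probe successive TCP ports on localhost until one is not accepting
# connections, and print it. A port that refuses the connection is
# treated as free; pass the result to pg_ctl -o "-p $PORT".
find_free_port() {
  p="$1"
  while (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; do
    p=$((p + 1))
  done
  echo "$p"
}

find_free_port 5433
```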
pg_ctl
initdb
This is only for a fresh installation (as it pertained to my situation) as it involves blowing away the data dir.
The steps I took to resolve this issue while utilizing the packaged startup scripts for a fresh installation:
Remove the postgres data dir /var/lib/pgsql/9.2/data if you've already gone through the initdb process with the postgres user:group configured as default.
Modify the startup script (/etc/init.d/postgresql-9.2) to replace all instances of postgres:postgres with NEWUSER:NEWGROUP.
Modify the startup script to replace all instances of postgres in any $SU -l postgres lines with the NEWUSER.
Run /etc/init.d/postgresql-9.2 initdb to regenerate the cluster using the new username.
Make sure any logs created are owned by the new user, or remove old logs if initdb errors (the configuration file in my case was found in /var/lib/pgsql/9.2/data/postgresql.conf).
Startup postgres and it should now be running under the new user/group.
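Steps 2–3 above can be scripted as a sed rewrite. This is demonstrated on a stand-in fragment in /tmp rather than the real /etc/init.d/postgresql-9.2, and NEWUSER/NEWGROUP are placeholders for your actual user and group:

```shell
# Stand-in fragment resembling the lines the startup script contains;
# the real target is /etc/init.d/postgresql-9.2 (back it up first).
cat > /tmp/pg-init-demo <<'EOF'
chown postgres:postgres "$PGDATA"
$SU -l postgres -c "$PGENGINE/postmaster -D '$PGDATA' &"
EOF

# Swap the packaged user/group for the new ones, as in steps 2-3.
sed -i -e 's/postgres:postgres/NEWUSER:NEWGROUP/g' \
       -e 's/-l postgres/-l NEWUSER/g' /tmp/pg-init-demo

cat /tmp/pg-init-demo
```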
I understand this might not be what other people are looking for if they have existing postgres db's and want to restart the server to run as a different user/group combo - this was not my case, and I didn't see an answer posted anywhere for a 'fresh' install utilizing the pre-packaged startup scripts.