Install Postgres extensions in bitbucket pipeline - postgresql

So I've set up a bitbucket-pipelines.yml for my python app. It needs a postgres database, so I've followed the tutorial here (https://confluence.atlassian.com/bitbucket/test-with-databases-in-bitbucket-pipelines-856697462.html), which led me to the following config:
image: node
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - postgres
definitions:
  services:
    postgres:
      image: postgres
      environment:
        POSTGRES_DB: 'pipelines'
        POSTGRES_USER: 'test_user'
        POSTGRES_PASSWORD: 'test_user_password'
I need some specific extensions in my db; how can I add these? I tried adding an extra line to the script that installs them, but at that point postgres doesn't seem to be up and running yet.

This worked for me, without the need to build my own image (I add two extensions to postgres):
image: node:8.11.1 # or any image you need
clone:
  depth: 1 # include the last commit
definitions:
  services:
    postgres:
      image: postgres
      environment:
        POSTGRES_DB: test
        POSTGRES_USER: postgres
        POSTGRES_PASSWORD: your_password
pipelines:
  default:
    - step:
        caches:
          - node
        script:
          - npm install
          - apt-get update
          - apt-get -y install postgresql-client
          - ./bin/utilities/wait-for-it.sh -h localhost -p 5432 -t 30
          - PGPASSWORD=your_password psql -h localhost -p 5432 -c "create extension if not exists \"uuid-ossp\"; create extension if not exists pg_trgm;" -U postgres test;
          - npm test test/drivers/* test/helpers/* test/models/*
        services:
          - postgres
wait-for-it.sh makes sure that postgres is ready to accept connections; you can find it here: https://github.com/vishnubob/wait-for-it
Then I run psql to create the extensions in the test database. Note the PGPASSWORD variable I set before running it, and that I use the -h and -p parameters to connect to the running postgres instance (otherwise psql tries to connect over a unix socket, which doesn't seem to work here).
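As an aside, if you'd rather not vendor a script, the postgresql-client package installed above also ships pg_isready, so a small retry loop can do the same job. A minimal sketch (the 30-attempt limit is an arbitrary choice):

for i in $(seq 1 30); do
  pg_isready -h localhost -p 5432 && break  # exits 0 once the server accepts connections
  sleep 1
done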

You need to create your own image based on postgres, then push it to a registry and use it in your pipeline:
definitions:
  services:
    postgres:
      image: your_custom_image_based_on_postgres
      environment:
        POSTGRES_DB: 'pipelines'
        POSTGRES_USER: 'test_user'
        POSTGRES_PASSWORD: 'test_user_password'
You can also look for an image that suits your requirements on https://hub.docker.com/
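For example, a minimal sketch of building and pushing such an image (the image name is a placeholder, and pg_trgm stands in for whatever extensions you need; .sql files in /docker-entrypoint-initdb.d run automatically when the database is first initialized):

cat > Dockerfile <<'EOF'
FROM postgres
# executed automatically on first database initialization
COPY install-extensions.sql /docker-entrypoint-initdb.d/
EOF
echo 'CREATE EXTENSION IF NOT EXISTS pg_trgm;' > install-extensions.sql
docker build -t your_account/postgres-with-extensions .
docker push your_account/postgres-with-extensions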

Related

Docker and Postgis - How do I access shp2pgsql inside my docker container?

I am running postgres in a docker container using docker-compose, and it spins up with no issue and I am able to connect to the database. But now I want to go into the container and run the PostGIS shp2pgsql tool to load a shapefile, and the command seems to be missing. Below is my code:
docker-compose.yaml
version: '3'
services:
  db:
    container_name: pg_container
    image: postgis/postgis
    restart: always
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: test_db
    volumes:
      - ./data:/var/lib/postgresql/
      - ./postgres_init:/postgres_init
    ports:
      - 5433:5433
    networks:
      - ch_ntw
networks:
  ch_ntw:
    driver: bridge
    ipam:
      config:
        - subnet: 10.222.1.0/24
Getting into the container:
docker exec -it pg_container bash
Connecting to the db without issue using psql:
psql --host=pg_container --dbname=test_db --username=root
But then if I try to invoke shp2pgsql from bash I get the following:
shp2pgsql -s 2263:4326 postgres_init/nyct2010_15b/nyct2010.shp | psql -d test_db
bash: shp2pgsql: command not found
I would think that since this is a postgis container, the tool should be accessible, no?
shp2pgsql is a client-side tool. The postgis/postgis image contains only the PostGIS server components. If you want to use shp2pgsql or other client tools, install them locally on your host, or in another container.
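For example, on a Debian/Ubuntu host (an assumption about your OS) the postgis package provides shp2pgsql, and you can pipe its output to the containerized server through the published port. Note that postgres listens on 5432 inside the container, so the compose mapping would need to be 5433:5432 for this to reach it on host port 5433:

# install the client tools on the host, then load the shapefile over TCP
sudo apt-get update && sudo apt-get install -y postgis
shp2pgsql -s 2263:4326 postgres_init/nyct2010_15b/nyct2010.shp | psql -h localhost -p 5433 -U root -d test_db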

Cannot connect to postgres in Gitlab CI

I am trying to use the GitLab CI PostgreSQL service for my integration tests, but it doesn't work.
Here's the code of the stage:
integration_test:
  stage: test
  tags:
    - custom_tag
  services:
    - postgres
  variables:
    POSTGRES_DB: test
    POSTGRES_HOST: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - docker login -u ${DOCKER_USER} -p ${DOCKER_PASSWORD} ${DOCKER_REGISTRY}
    - docker pull ${DOCKER_IMAGE_CI}
    - export PGPASSWORD=${POSTGRES_PASSWORD}
    - docker run --rm postgres psql -h ${POSTGRES_HOST} -U ${POSTGRES_USER} -d ${POSTGRES_DB} -c "SELECT 'OK' AS status;"
It returns an error like this:
psql: error: could not connect to server: could not translate host name "postgres" to address: Name or service not known
Can anybody help me?
Perhaps it's better to look at dockerizing your test functions. This approach also provides better control over networking, by means of a docker bridge network.
That way, your config could look like this:
.gitlab-ci.yml:
stages:
  - test
before_script:
  - docker login -u ${DOCKER_USER} -p ${DOCKER_PASSWORD} ${DOCKER_REGISTRY}
integration_test:
  stage: test
  script:
    - docker-compose build
    - docker-compose up
docker-compose.yml:
version: '3'
networks:
  database:
services:
  postgres-db:
    image: ${DOCKER_IMAGE_CI}
    networks:
      - database
    container_name: postgres
  test-container:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - database
    container_name: testcon
Dockerfile:
FROM postgres
ENV POSTGRES_DB=test \
    POSTGRES_HOST=postgres \
    POSTGRES_USER=postgres \
    POSTGRES_PASSWORD=postgres \
    POSTGRES_HOST_AUTH_METHOD=trust
CMD psql -h ${POSTGRES_HOST} -U ${POSTGRES_USER} -d ${POSTGRES_DB} -c "SELECT 'OK' AS status;"
It looks like your pipeline uses the shell executor with GitLab services.
The command docker run --rm postgres <docker command> does not automatically connect to the postgres service's network. You could try running your docker image with --link postgres, more details here. Note that --link is a legacy feature and may be removed in the future.
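A sketch of what that could look like, assuming the service container is actually reachable under the name postgres (no password is needed here because the job sets POSTGRES_HOST_AUTH_METHOD: trust):

docker run --rm --link postgres:postgres postgres psql -h postgres -U postgres -d test -c "SELECT 'OK' AS status;"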
Personally, I would try running the pipeline job using a docker image. If your image is not publicly visible, you can get around that with DOCKER_AUTH_CONFIG.
If you used a docker runner with a password-protected image, the YAML would look like this:
integration_test:
  image: ${DOCKER_IMAGE_CI}
  stage: test
  tags:
    - custom_tag
  services:
    - postgres
  variables:
    POSTGRES_DB: test
    POSTGRES_HOST: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - PGPASSWORD=${POSTGRES_PASSWORD} psql -h ${POSTGRES_HOST} -U ${POSTGRES_USER} -d ${POSTGRES_DB} -c "SELECT 'OK' AS status;"
The DOCKER_AUTH_CONFIG environment variable would be as follows (gitlab docs):
{
  "auths": {
    "${DOCKER_REGISTRY}": {
      "auth": "(Base64 content from ${DOCKER_USER}:${DOCKER_PASSWORD})"
    }
  }
}
To generate the base64 auth string, you can use echo -n "${DOCKER_USER}:${DOCKER_PASSWORD}" | base64
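Putting it together, a small sketch that produces the whole JSON value from the same variables:

AUTH=$(echo -n "${DOCKER_USER}:${DOCKER_PASSWORD}" | base64)
printf '{"auths":{"%s":{"auth":"%s"}}}\n' "${DOCKER_REGISTRY}" "${AUTH}"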

PostgreSQL Container in Docker Not Authorizing a Correct Password

I have arranged a node.js back end to connect to a redis cache and a postgres database.
The app I have created is running, but I would like to do some database admin and have attempted to log in using pgAdmin; however, my details were rejected.
I thought it might be a pgAdmin thing, so I attempted to use the login URI in PowerShell, but again it was rejected.
I checked that the postgres service is running on the exposed port (in case I messed up the docker-compose config) and it is... not sure where to go from here.
My docker-compose config for the database is:
# PostgreSQL
postgres:
  container_name: postgres
  build: ./postgres
  environment:
    POSTGRES_USER: admin
    POSTGRES_PASSWORD: password
    POSTGRES_URL: postgres://admin:password@localhost:5432/myapp
    POSTGRES_DB: myapp
    POSTGRES_HOST: postgres
  ports:
    - "5432:5432"
I should note that the database is running - I can log in to my front end and access data, etc...
My login attempt:
psql postgres://admin:password@localhost:5432/myapp
And the response:
psql: FATAL: password authentication failed for user "admin"
I think your docker-compose file is not formatted well, if it's not a copy-paste issue: the environment variables are not placed properly. It should be:
# PostgreSQL
postgres:
  image: postgres
  container_name: postgres
  environment:
    POSTGRES_USER: admin
    POSTGRES_PASSWORD: password
    POSTGRES_URL: postgres://admin:password@localhost:5432/myapp
    POSTGRES_DB: myapp
    POSTGRES_HOST: postgres
  ports:
    - "5432:5432"
Or you can try
version: '3.7'
services:
  postgresdb:
    container_name: postgres
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: appdb
      POSTGRES_PASSWORD: 123123
    image: bitnami/postgresql:latest
    ports:
      - "5432:5432"
Better yet, post your Dockerfile, as I see you are building your own Docker image; it's generally better to use an official Postgres image, like the one I posted above.
I would also suggest debugging the database container on its own first and verifying connectivity on the container's localhost; debugging and testing through dependent containers (like connecting from nodejs) right away makes it easy to lose sight of the actual problem.
Check whether your environment variables are set properly:
docker exec postgres bash -c "printenv"
or
docker exec postgres bash -c "printenv | grep POSTGRES_"
or
docker exec -it postgres bash -c "psql -U admin myapp"
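If the environment looks correct and the password is still rejected, one common cause worth checking: the POSTGRES_* variables only take effect when the data directory is first initialized, so a pre-existing volume keeps its old credentials. Assuming the data is disposable, you can reinitialize:

docker-compose down -v   # removes containers and volumes, discarding the old data directory
docker-compose up --build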

Docker-Compose and Postgres Extensions

This is my docker-compose file. Is there any easy way to get a postgres extension installed? I'm trying to install pg_trgm.
Edit: I now have two dockerfiles and an install script. It doesn't seem to be working when I run docker-compose up --build:
Internal server error: pq: operator does not exist: character varying % unknown
services:
  db:
    build:
      context: .
      dockerfile: db/Dockerfile
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=x
      - POSTGRES_PASSWORD=x
      - POSTGRES_DB=x
  api:
    build:
      context: .
      args:
        app_env: ${APP_ENV}
    volumes:
      - .:/go/src/x/y/z
    ports:
      - "8080:8080"
db/Dockerfile:
FROM postgres
COPY db/install-extensions.sql /docker-entrypoint-initdb.d
db/install-extensions.sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
Try this
FROM postgres
COPY ./install-extensions.sql /docker-entrypoint-initdb.d
and remove the db/ prefix from the paths in your file.
OR you can write
version: "3.1"
services:
db:
image: postgres:9.6
restart: always
environment:
POSTGRES_PASSWORD: unit
POSTGRES_USER: unit
POSTGRES_DB: unit
ports:
- 5432:5432
volumes:
- ./scripts:/docker-entrypoint-initdb.d
then:
- create a directory scripts
- put your .sql or .sh file in it
- remove the created containers: docker-compose rm -v
- start docker: docker-compose up --build
In the logs you should see something like this:
created_extension
I'm not sure precisely why, but in order to get this working I had to use a shell script:
#!/usr/bin/env bash
echo "enabling pg_trgm on database $POSTGRES_DB"
psql -U $POSTGRES_USER --dbname="$POSTGRES_DB" <<-'EOSQL'
create extension if not exists pg_trgm;
EOSQL
echo "finished with exit code $?"
possibly because I was overriding the default database and user name.
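For completeness, a sketch of wiring the script in (the paths are assumptions; init scripts only run against an empty data directory, so any existing volume has to be removed first):

# in docker-compose.yml, mount the directory holding the script:
#   volumes:
#     - ./scripts:/docker-entrypoint-initdb.d
chmod +x scripts/install-extensions.sh   # executable .sh files are run; non-executable ones are sourced
docker-compose rm -v
docker-compose up --build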

How to create postgres database and run migration when docker-compose up

I'm setting up a simple backend that performs CRUD actions against a postgres database, and I want the database creation and migration to happen automatically when docker-compose up runs.
I have already tried adding the code below to the Dockerfile and to entrypoint.sh, but neither works.
createdb --host=localhost -p 5432 --username=postgres --no-password pg_development
sequelize db:migrate
This code works if run separately after docker is fully up.
I have already tried adding - ./db-init:/docker-entrypoint-initdb.d to the volumes, but that didn't work either.
This is the Dockerfile
FROM node:10.12.0
# Create app directory
RUN mkdir -p /restify-pg
WORKDIR /restify-pg
EXPOSE 1337
ENTRYPOINT [ "./entrypoint.sh" ]
This is my docker-compose.yml
version: '3'
services:
  db:
    image: "postgres:11.2"
    ports:
      - "5432:5432"
    volumes:
      - ./pgData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: pg_development
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - .:/restify-pg
    environment:
      DB_HOST: db
entrypoint.sh (in here I get createdb: command not found)
#!/bin/bash
cd app
createdb --host=localhost -p 5432 --username=postgres --no-password pg_development
sequelize db:migrate
npm install
npm run dev
I expect the database creation and migration to happen when I run docker-compose up.
entrypoint.sh (in here I get createdb: command not found)
Running createdb in the nodejs container will not work, because it is a postgres-specific command that is not installed by default in the nodejs image.
If you specify the POSTGRES_DB: pg_development env var on the postgres container, the database will be created automatically when the container starts, so there is no need to run createdb at all in the entrypoint.sh that is mounted into the nodejs container.
In order to make sequelize db:migrate work you need to:
- add sequelize-cli to the dependencies in package.json
- run npm install so it gets installed
- run npx sequelize db:migrate
Here is a proposal:
# docker-compose.yml
version: '3'
services:
  db:
    image: "postgres:11.2"
    ports:
      - "5432:5432"
    volumes:
      - ./pgData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: pg_development
  app:
    working_dir: /restify-pg
    entrypoint: ["/bin/bash", "./entrypoint.sh"]
    image: node:10.12.0
    ports:
      - "3000:3000"
    volumes:
      - .:/restify-pg
    environment:
      DB_HOST: db
# package.json
{
  ...
  "dependencies": {
    ...
    "pg": "^7.9.0",
    "pg-hstore": "^2.3.2",
    "sequelize": "^5.2.9",
    "sequelize-cli": "^5.4.0"
  }
}
# entrypoint.sh
npm install
npx sequelize db:migrate
npm run dev
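One caveat: with docker-compose, the app container can start before postgres is ready to accept connections, so the first db:migrate may fail. A sketch of a retry loop for entrypoint.sh (the attempt count and sleep interval are arbitrary choices):

npm install
for i in $(seq 1 10); do
  npx sequelize db:migrate && break   # stop retrying once the migration succeeds
  echo "database not ready yet (attempt $i), retrying..."
  sleep 3
done
npm run dev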
If you can run your migrations from nodejs instead of docker, then consider that solution instead.