Gitlab CI not able to use pg_prove - postgresql

I'm struggling to get a Gitlab CI up and running that uses the correct version of postgres (13) and has PGTap installed.
I deploy my project locally using a Dockerfile which uses postgres:13.3-alpine and then installs PGTap too. However, I'm not sure if I can use this Dockerfile to help with my CI issues.
In my gitlab-ci.yml file, I currently have:
variables:
  GIT_SUBMODULE_STRATEGY: recursive

pgtap:
  only:
    refs:
      - merge_request
      - master
    changes:
      - ddl/**/*
  image: postgres:13.1-alpine
  services:
    - name: postgres:13.1-alpine
      alias: db
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - psql postgres://postgres@db/postgres -c 'create extension pgtap;'
    - psql postgres://postgres@db/postgres -f ddl/01.sql
    - cd ddl/
    - psql postgres://postgres@db/postgres -f 02.sql
    - psql postgres://postgres@db/postgres -f 03.sql
    - pg_prove -d postgres://postgres@db/postgres --recurse test_*
The above works until it gets to the pg_prove command at the bottom, where I get the following error:
pg_prove: command not found
Is there a way I can install pg_prove using the script commands? Or is there a better way to do this?

There is an old, now-closed issue about this.
To summarize: either you build your own image based on postgres:13.1-alpine with PGTap installed, or you use a non-official image that already ships it, such as 1maa/postgres:13-alpine:
docker run -it 1maa/postgres:13-alpine sh
/ # which pg_prove
/usr/local/bin/pg_prove
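
If you go the non-official image route, a rough sketch (untested, and it assumes that image's psql and pg_prove can reach the service container the same way yours do) would be to use it as the job image while keeping your existing service:

pgtap:
  image: 1maa/postgres:13-alpine
  services:
    - name: postgres:13.1-alpine
      alias: db
  script:
    # ... same psql steps against postgres://postgres@db/postgres as in your job ...
    - pg_prove -d postgres://postgres@db/postgres --recurse test_*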

Since your step image is alpine based, you can try:
script:
  - apk add --no-cache --update build-base make perl perl-dev git openssl-dev
  - cpan TAP::Parser::SourceHandler::pgTAP
  - psql ... # etc.
You can probably omit some of the packages...
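
Putting that together with the job from your question, a rough sketch (untested; cpan may need non-interactive configuration on its first run) might look like:

pgtap:
  image: postgres:13.1-alpine
  services:
    - name: postgres:13.1-alpine
      alias: db
  variables:
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    # build tools plus the CPAN distribution that ships pg_prove
    - apk add --no-cache --update build-base make perl perl-dev git openssl-dev
    - cpan TAP::Parser::SourceHandler::pgTAP
    - psql postgres://postgres@db/postgres -c 'create extension pgtap;'
    - psql postgres://postgres@db/postgres -f ddl/01.sql
    - cd ddl/
    - psql postgres://postgres@db/postgres -f 02.sql
    - psql postgres://postgres@db/postgres -f 03.sql
    - pg_prove -d postgres://postgres@db/postgres --recurse test_*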

Related

Run Pytest in Gitlab-CI as User

I had the following gitlab-ci.yml in my python-package repository:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
using this tox.ini file:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
This did work as I wanted it to.
However, I then added tests that run my code against a local PostgreSQL database using https://pypi.org/project/pytest-postgresql/. For this, I had to install PostgreSQL (apt -y install postgresql postgresql-contrib libpq5).
When I added this to my gitlab-ci.yml:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - apt -y install postgresql postgresql-contrib libpq5
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
I got an error from tox saying that part of Postgres (pg_ctl) refuses to be run as root. Log here: https://pastebin.com/fMu1JY5L
So I must execute tox as a non-root user.
My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password.
From a quick Google search, the easiest way to get such a user seemed to be to build a new Docker image and use Docker-in-Docker.
So, as of now I have this configuration:
gitlab-ci.yml:
image: docker:19.03.12

services:
  - docker:19.03.12-dind

stages:
  - build
  - test

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - docker info

docker-build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

formatting-check:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE black --check .

unit-test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE tox
Dockerfile:
FROM python:latest
RUN apt update
RUN apt -y install postgresql postgresql-contrib libpq5
RUN useradd -m exec_user
USER exec_user
ENV PATH "$PATH:/home/exec_user/.local/bin"
RUN pip install black tox
(I had to add ENV PATH "$PATH:/home/exec_user/.local/bin" because pip would cry about it not being in the Path)
tox.ini:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
The job docker-build completes — the other two fail.
As for formatting-check:
$ docker run $CONTAINER_TEST_IMAGE black --check .
ERROR: Job failed: execution took longer than 1h0m0s seconds
The black command usually executes extremely fast (<1s).
As for unit-test:
/bin/sh: eval: line 120: tox: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
I have also found, that replacing docker run $CONTAINER_TEST_IMAGE tox with docker run $CONTAINER_TEST_IMAGE python3 -m tox doesn't work. Here, python3 isn't found (which seems odd given that the base image is python:latest).
If you have any idea how to solve this issue, let me know :D
My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password.
This should work for your use case. Running su as root will not require a password. You could also use sudo -u postgres tox (must apt install sudo first).
Here is a basic working example using su with the postgres user, which is created automatically when postgresql is installed:
myjob:
  image: python:3.9-slim
  script:
    - apt update && apt install -y --no-install-recommends libpq-dev postgresql-client postgresql build-essential
    - pip install psycopg2 psycopg pytest pytest-postgresql
    - su postgres -c pytest
    # or in your case, you might use: su postgres -c tox
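
For the sudo variant mentioned above, a minimal sketch (untested; the postgres user also needs read access to the checkout) could be:

myjob:
  image: python:3.9-slim
  script:
    - apt update && apt install -y --no-install-recommends sudo libpq-dev postgresql-client postgresql build-essential
    - pip install tox
    # run tox as the postgres user instead of root
    - sudo -u postgres tox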
Alternatively, you might consider just using GitLab's services feature to run your postgres server if that's the only obstacle in your way. You can pass --postgresql-host and --postgresql-password to pytest to tell the extension to use the services.
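
A rough sketch of that services-based approach (untested; the exact flags depend on how your fixtures are set up, and they may need the pytest-postgresql "noproc" factory to honour a remote host):

unit-test:
  stage: test
  image: python:latest
  services:
    - name: postgres:13-alpine
      alias: postgres
  variables:
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - pip install -r requirements.txt pytest pytest-postgresql
    # point the tests at the service container instead of spawning pg_ctl locally
    - pytest tests -s --postgresql-host=postgres --postgresql-password=""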

Run postgreSQL gitlab docker

I'm trying to run a GitLab pipeline for a Python web app made with Django that uses a Postgres database. After installing Postgres, the psql command gives the error:
psql: error: could not connect to server: No such file or directory
Here's (part of) my .gitlab-ci.yml file:
image: python:latest

# Install postgreSQL service on container
services:
  - postgres:12.2-alpine

# Change pip's cache directory to be inside the project directory
variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  DJANGO_SETTINGS_MODULE: "my_app.settings"
  POSTGRES_DB: $POSTGRES_DB
  POSTGRES_USER: $POSTGRES_USER
  POSTGRES_PASSWORD: $POSTGRES_PASSWORD
  POSTGRES_HOST_AUTH_METHOD: trust

# Let's cache also the packages
# Install packages in a virtualenv and cache it as well
cache:
  paths:
    - .cache/pip
    - venv/

before_script:
  - pip install virtualenv --upgrade pip
  - virtualenv venv
  - source venv/bin/activate
  - apt-get update
  #- apt-get install -y postgresql postgresql-client libpq-dev # postgres db requirements

stages: # List of stages for jobs, and their order of execution
  - build
  - verify
  - unit_test
  - integration_test
  - package
  - release
  - deploy

build:
  stage: build
  script:
    - pip install -r requirements.txt
    - echo "Build stage finished"

verify:
  stage: verify
  script:
    - prospector -X ./my_app # static code analysis
    - bandit -r ./my_app # static code analysis pt. 2
    - echo "Verify stage finished"

unit_test:
  stage: unit_test
  script:
    - echo "Running unit_test 1"
    - pytest ./my_app/unit_test.py # running unit_test
    - echo "Creating db"
    - apt-get install -y postgresql postgresql-client libpq-dev # postgres db
    - psql -U postgres
    - psql -d "CREATE USER $POSTGRES_USER WITH PASSWORD $POSTGRES_PASSWORD CREATEDB;"
    - psql -d "CREATE DATABASE $POSTGRES_DB OWNER $POSTGRES_USER;"
    - echo "Unit testing stage finished"
How can I make psql work on gitlab CI/CD pipeline?
You're on the right track with the "services" keyword, which will cause a postgres database to run on the host "postgres" (the DNS of the service is based on the name of the container unless you specify an "alias" with the service).
Your issue is that psql attempts to connect to localhost unless you specify otherwise, so your psql -U postgres attempts to connect on localhost. Try using psql -U postgres -p 5432 -h postgres instead.
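
Applied to the unit_test job above, a sketch of the relevant lines (untested) might be:

unit_test:
  stage: unit_test
  script:
    - apt-get install -y postgresql-client libpq-dev
    # connect to the service host "postgres" instead of localhost;
    # note -c (run a command) rather than -d (which only selects a database)
    - psql -h postgres -p 5432 -U postgres -c "CREATE USER $POSTGRES_USER WITH PASSWORD '$POSTGRES_PASSWORD' CREATEDB;"
    - psql -h postgres -p 5432 -U postgres -c "CREATE DATABASE $POSTGRES_DB OWNER $POSTGRES_USER;"
    - echo "Unit testing stage finished"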

How to add scripts to image before running them in gitlab CI

I am trying to run a CI job in gitlab, where the integration tests depend on postgresql.
In gitlab I've used the postgresql runner. The issue is that the integration tests require the extension uuid-ossp. I could run the SQL commands before each test to ensure the extension is applied, but I'd rather apply it once before running all the tests.
So I've used the image tag in the CI script to add a .sh file to the postgres image under /docker-entrypoint-initdb.d/, and then try to run the integration tests with the same image. The problem is that the extension doesn't seem to be applied, as the integration tests fail where the uuid functions are used: function uuid_generate_v4() does not exist
prep-postgres:
  stage: setup-db
  image: postgres:12.2-alpine
  script:
    - echo "#!/bin/bash
      set -e
      psql \"$POSTGRES_DB\" -v --username \"$POSTGRES_USER\" <<-EOSQL
        create extension if not exists \"uuid-ossp\";
      EOSQL" > /docker-entrypoint-initdb.d/create-uuid-ossp-ext.sh
  artifacts:
    untracked: true

test-integration:
  stage: test
  services:
    - postgres:12.2-alpine
  variables:
    POSTGRES_DB: db_name
    POSTGRES_USER: postgres
  script:
    - go test ./... -v -race -tags integration
An alternative I was hoping would work was:
prep-postgres:
  stage: setup-db
  image: postgres:12.2-alpine
  script:
    - psql -d postgresql://postgres@localhost:5432/db_name -c "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";"
  artifacts:
    untracked: true
But in this case the client is unable to connect to postgres (I imagine it's because I'm editing the image, not running it?).
I must be missing something obvious, or is this even possible?
In both cases, in the prep-postgres job you make changes in a running container (from the postgres:12.2-alpine image) but you don't save those changes anywhere, so the test-integration job can't use them.
I advise you to build your own image using a Dockerfile and an entrypoint script for the Postgres Docker image. This answer from @Elton Stoneman could help.
After that, you can reference your previously built image under services: in the test-integration job and you will benefit from the created extension.
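
For example, once such an image has been built and pushed (the image name below is purely hypothetical), the test job could reference it roughly like this:

test-integration:
  stage: test
  services:
    # hypothetical image built FROM postgres:12.2-alpine with
    # create-uuid-ossp-ext.sh baked into /docker-entrypoint-initdb.d/
    - name: registry.example.com/my-group/postgres-uuid:12.2-alpine
      alias: postgres
  variables:
    POSTGRES_DB: db_name
    POSTGRES_USER: postgres
  script:
    - go test ./... -v -race -tags integration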
At the moment I've had to do something a little smelly and install the postgres client before creating the extension:
.prepare_db: &prepare_db |
  apt update \
  && apt install -y postgresql-client \
  && psql -d postgresql://postgres@localhost/db_name -c "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";"

test-integration:
  stage: test
  services:
    - postgres:12.2-alpine
  variables:
    POSTGRES_DB: db_name
    POSTGRES_USER: postgres
  script:
    - *prepare_db
    - go test ./... -v -race -tags integration
This isn't perfect. I was hoping there was a way to save the state of the Docker image between stages, but there doesn't seem to be that option. So the options seem to be either:
1. install it during the test-integration stage, or
2. create a base image specifically for this purpose, where the installation of the extension has already been done.
I've gone with option 1 for now, but will reply if I find something more concise, easier to maintain and faster.

CircleCI 2.0 testing with docker-compose and code checkout

This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and I build a new image for my web container. However, my working code is not inside the freshly built image (so "cd myDir" always fails).
I figure the following lines in my Dockerfile should make my code available when it's built but it appears that it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Your Dockerfile should look something like this:
FROM image
COPY . /opt/app
WORKDIR "/opt/app"
(More commands)
ENTRYPOINT

How to build/run one Dockerfile per build job in Travis-CI?

I've got a repository with multiple Dockerfiles which take ~20min each to build: https://github.com/fredrikaverpil/pyside2-wheels
I'd like to divide these Dockerfiles efficiently so that each is built in its own job.
Right now, this is my .travis.yml:
language: python
sudo: required
dist: trusty
python:
  - 2.7
  - 3.5
services:
  - docker
install:
  - docker build -f Dockerfile-Ubuntu16.04-py${TRAVIS_PYTHON_VERSION} -t fredrikaverpil/pyside2-ubuntu16.04-py${TRAVIS_PYTHON_VERSION} .
  - docker run --rm -v $(pwd):/pyside-setup/dist fredrikaverpil/pyside2-ubuntu16.04-py${TRAVIS_PYTHON_VERSION}
script:
  - ls -al *.whl /
This creates two jobs, one per Python version. However, I'd rather have one job per Dockerfile, as I'm about to add more such files.
How can this be achieved?
Managed to solve it, I think.
language: python
sudo: required
dist: trusty
services:
  - docker
matrix:
  include:
    - env: DOCKER_OS=ubuntu16.04
      python: 2.7
    - env: DOCKER_OS=ubuntu16.04
      python: 3.5
    - env: DOCKER_OS=centos7
      python: 2.7
install:
  - docker build -f Dockerfile-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION} -t fredrikaverpil/pyside2-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION} .
  - docker run --rm -v $(pwd):/pyside-setup/dist fredrikaverpil/pyside2-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION}
script:
  - ls -al *.whl /
This results in three job builds.
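
Adding more Dockerfiles then only requires another matrix entry, for example (assuming a matching Dockerfile-centos7-py3.5 exists, which is hypothetical here):

matrix:
  include:
    # ... existing entries ...
    - env: DOCKER_OS=centos7
      python: 3.5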