Run PostgreSQL in a GitLab Docker pipeline - postgresql

I'm trying to run a GitLab pipeline for a Python web app made with Django that uses a Postgres database. After installing Postgres, the psql command gives the error:
psql: error: could not connect to server: No such file or directory
Here's (part of) my .gitlab-ci.yml file:
image: python:latest

# Install postgreSQL service on container
services:
  - postgres:12.2-alpine

# Change pip's cache directory to be inside the project directory
variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  DJANGO_SETTINGS_MODULE: "my_app.settings"
  POSTGRES_DB: $POSTGRES_DB
  POSTGRES_USER: $POSTGRES_USER
  POSTGRES_PASSWORD: $POSTGRES_PASSWORD
  POSTGRES_HOST_AUTH_METHOD: trust

# Let's cache also the packages
# Install packages in a virtualenv and cache it as well
cache:
  paths:
    - .cache/pip
    - venv/

before_script:
  - pip install virtualenv --upgrade pip
  - virtualenv venv
  - source venv/bin/activate
  - apt-get update
  #- apt-get install -y postgresql postgresql-client libpq-dev # postgre db requirements

stages: # List of stages for jobs, and their order of execution
  - build
  - verify
  - unit_test
  - integration_test
  - package
  - release
  - deploy

build:
  stage: build
  script:
    - pip install -r requirements.txt
    - echo "Build stage finished"

verify:
  stage: verify
  script:
    - prospector -X ./my_app # static code analysis
    - bandit -r ./my_app # static code analysis pt. 2
    - echo "Verify stage finished"

unit_test:
  stage: unit_test
  script:
    - echo "Running unit_test 1"
    - pytest ./my_app/unit_test.py # running unit_test
    - echo "Creating db"
    - apt-get install -y postgresql postgresql-client libpq-dev # postgre db
    - psql -U postgres
    - psql -d "CREATE USER $POSTGRES_USER WITH PASSWORD $POSTGRES_PASSWORD CREATEDB;"
    - psql -d "CREATE DATABASE $POSTGRES_DB OWNER $POSTGRES_USER;"
    - echo "Unit testing stage finished"
How can I make psql work in the GitLab CI/CD pipeline?

You're on the right track with the "services" keyword, which causes a Postgres database to run on the host "postgres" (the DNS name of the service is derived from the image name unless you specify an "alias" for the service).
Your issue is that psql tries to connect to the local Unix socket unless you tell it otherwise (hence the "No such file or directory" error), so psql -U postgres looks for a server inside the job container itself. Try psql -U postgres -p 5432 -h postgres instead.
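As a minimal sketch (untested), the database-setup lines of the unit_test job could then look like this; note that SQL statements go to psql's -c flag (a command), not -d (which names a database), and the password literal needs quotes:

unit_test:
  stage: unit_test
  script:
    - apt-get install -y postgresql-client libpq-dev # the client alone is enough, no local server needed
    # connect to the service container instead of the local Unix socket
    - psql -h postgres -p 5432 -U postgres -c "CREATE USER $POSTGRES_USER WITH PASSWORD '$POSTGRES_PASSWORD' CREATEDB;"
    - psql -h postgres -p 5432 -U postgres -c "CREATE DATABASE $POSTGRES_DB OWNER $POSTGRES_USER;"
    # note: because POSTGRES_USER/POSTGRES_DB are also passed to the service, the service image
    # may already have created this role and database at startup, making these statements redundant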

Related

Run Pytest in GitLab CI as a non-root user

I had the following gitlab-ci.yml in my python-package repository:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
using this tox.ini file:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
This did work as I wanted it to.
However, I then added tests that exercise my code against a local PostgreSQL database using https://pypi.org/project/pytest-postgresql/. For this, I had to install PostgreSQL (apt -y install postgresql postgresql-contrib libpq5).
When I added this to my gitlab-ci.yml:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - apt -y install postgresql postgresql-contrib libpq5
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
I got an error from tox because one of the PostgreSQL binaries (pg_ctl) refuses to be run as root. Log here: https://pastebin.com/fMu1JY5L
So I need to execute tox as a non-root user.
My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password.
From a quick Google search, the easiest way to run the job as a new user seemed to be building a custom Docker image using Docker-in-Docker.
So, as of now I have this configuration:
gitlab-ci.yml:
image: docker:19.03.12

services:
  - docker:19.03.12-dind

stages:
  - build
  - test

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - docker info

docker-build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

formatting-check:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE black --check .

unit-test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE tox
Dockerfile:
FROM python:latest
RUN apt update
RUN apt -y install postgresql postgresql-contrib libpq5
RUN useradd -m exec_user
USER exec_user
ENV PATH "$PATH:/home/exec_user/.local/bin"
RUN pip install black tox
(I had to add ENV PATH "$PATH:/home/exec_user/.local/bin" because pip complained about that directory not being on the PATH.)
tox.ini:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
The docker-build job completes, but the other two fail.
As for formatting-check:
$ docker run $CONTAINER_TEST_IMAGE black --check .
ERROR: Job failed: execution took longer than 1h0m0s seconds
The black command usually executes extremely fast (<1s).
As for unit-test:
/bin/sh: eval: line 120: tox: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
I have also found that replacing docker run $CONTAINER_TEST_IMAGE tox with docker run $CONTAINER_TEST_IMAGE python3 -m tox doesn't work either: here, python3 isn't found (which seems odd given that the base image is python:latest).
If you have any idea how to solve this issue, let me know :D
My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password.
This should work for your use case. Running su as root will not require a password. You could also use sudo -u postgres tox (you must apt install sudo first).
Here is a basic working example using su with the postgres user, which is created automatically when postgres is installed:
myjob:
  image: python:3.9-slim
  script:
    - apt update && apt install -y --no-install-recommends libpq-dev postgresql-client postgresql build-essential
    - pip install psycopg2 psycopg pytest pytest-postgresql
    - su postgres -c pytest
    # or in your case, you might use: su postgres -c tox
Alternatively, you might consider just using GitLab's services feature to run your postgres server, if that's the only obstacle in your way. You can pass --postgresql-host and --postgresql-password to pytest to tell the extension to use the service, as sketched below.
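A minimal sketch of that alternative (untested; it assumes the tests use pytest-postgresql's fixtures against an existing server and that tox.ini forwards {posargs} to pytest):

unit-test:
  stage: test
  image: python:3.9-slim
  services:
    - postgres:13-alpine
  variables:
    POSTGRES_PASSWORD: testpass # placeholder password for the service container
  script:
    - pip install tox
    # forward the service host/credentials to pytest-postgresql
    # (requires "python -m pytest tests -s {posargs}" in tox.ini)
    - tox -- --postgresql-host=postgres --postgresql-password=$POSTGRES_PASSWORD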

Running a cron job inside a container with Python code and a Postgres database

I am new to Docker, and I have created a container with Python + Postgres which runs a Python script that collects some data and writes it to the SQL database. Now I need to schedule this job to run each day, and that is where the nightmare started. I did not manage to create a separate container for this job, so I tried to create a cron file and copy it into the container via the Dockerfile (see below). I did not manage to run cron as the entrypoint of the container, because then my database was not mounted. So I build the container, access it, give full permissions to /var/www/html, and create the database table. Then I run cron: no error, but nothing happens, and no log is written to /var/log/cron.log. Here are my files:
Dockerfile:
FROM postgres:latest
USER root
RUN apt-get update && apt-get install -y python3 python3-pip
RUN apt-get -y install cron nano
RUN apt-get -y install postgresql-server-dev-10 gcc python3-dev musl-dev
RUN pip3 install psycopg2 \
    bs4 \
    requests \
    pytz
COPY temp-alerts-cron /etc/cron.d/temp-alerts-cron
RUN chmod 0777 /etc/cron.d/temp-alerts-cron
RUN chmod gu+rw /var/run/
RUN chmod gu+s /usr/sbin/cron
RUN touch /var/log/cron.log
RUN chmod 0777 /var/log/cron.log
RUN crontab /etc/cron.d/temp-alerts-cron
USER postgres
EXPOSE 5432
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
The temp-alerts-cron file:
20 13 * * * root /var/www/html/run.sh >> /var/log/cron.log 2>&1
# Don't remove the empty line at the end of this file. It is required to run the cron job
And the called script:
echo 'inside thingy' >> /var/log/cron.log 2>&1
python3 /var/www/html/nuria_main.py
In case it is needed, here is the docker-compose.yml:
services:
  postgres:
    container_name: 'temp-postgres'
    build: # build the image from Dockerfile
      context: ${PWD}
    volumes: # bind mount volume for Postgres data
      - pg-data:/var/lib/postgresql/data
      - ./python-app:/var/www/html
    restart: unless-stopped
    environment:
      - POSTGRES_USR=xxadmin
      - POSTGRES_DB=tempdb
      - POSTGRES_PASSWORD=secret
    expose:
      - "5432"
    networks:
      kong:

networks:
  kong:
    external:
      name: kong_net

volumes:
  pg-data:
Hope somebody knows what I am doing wrong. I do not get any log or error, so I am lost.
Thanks!

GitLab CI not able to use pg_prove

I'm struggling to get a GitLab CI pipeline up and running that uses the correct version of Postgres (13) and has pgTAP installed.
I deploy my project locally using a Dockerfile based on postgres:13.3-alpine which also installs pgTAP. However, I'm not sure if I can use this Dockerfile to help with my CI issues.
In my gitlab-ci.yml file, I currently have:
variables:
  GIT_SUBMODULE_STRATEGY: recursive

pgtap:
  only:
    refs:
      - merge_request
      - master
    changes:
      - ddl/**/*
  image: postgres:13.1-alpine
  services:
    - name: postgres:13.1-alpine
      alias: db
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - psql postgres://postgres@db/postgres -c 'create extension pgtap;'
    - psql postgres://postgres@db/postgres -f ddl/01.sql
    - cd ddl/
    - psql postgres://postgres@db/postgres -f 02.sql
    - psql postgres://postgres@db/postgres -f 03.sql
    - pg_prove -d postgres://postgres@db/postgres --recurse test_*
The above works until it gets to the pg_prove command at the bottom, where I get the following error:
pg_prove: command not found
Is there a way I can install pg_prove using the script commands? Or is there a better way to do this?
There is an old issue about this that has since been closed.
To summarize: either you build your own image based on postgres:13.1-alpine that installs pgTAP, or you use a non-official image where pgTAP is already installed, such as 1maa/postgres:13-alpine:
docker run -it 1maa/postgres:13-alpine sh
/ # which pg_prove
/usr/local/bin/pg_prove
Since your job image is Alpine-based, you can try:
script:
  - apk add --no-cache --update build-base make perl perl-dev git openssl-dev
  - cpan TAP::Parser::SourceHandler::pgTAP
  - psql ... etc
You can probably omit some of the packages...
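Folded into your existing pgtap job, that could look roughly like this (untested sketch; variables and the remaining psql calls as in your original job, and the cpan step can take a while on first run):

pgtap:
  image: postgres:13.1-alpine
  services:
    - name: postgres:13.1-alpine
      alias: db
  script:
    - apk add --no-cache --update build-base make perl perl-dev git openssl-dev
    - cpan TAP::Parser::SourceHandler::pgTAP # installs pg_prove into the job image
    - psql postgres://postgres@db/postgres -c 'create extension pgtap;'
    - pg_prove -d postgres://postgres@db/postgres --recurse test_*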

GitLab CI - pg_dump error in pipeline stage

gitlab-ci.yaml file:
liquibase:
  stage: liquibase
  image: openjdk:8-jre-alpine
  services:
    - postgres
  script:
    - INIT_PATH='pwd'
    - apk upgrade
    - apk add bash
    - apk add postgresql
    - cd migrations
    - mkdir /liquibase
    - mkdir /Downloads
    - cd /Downloads
    - wget "https://github.com/liquibase/liquibase/releases/download/liquibase-parent-3.7.0-bin.zip"
    - wget "https://repo1.maven.org/maven2/org/postgresql/postgresql/42.2.8/postgresql-42.2.8.jar"
    - unzip liquibase-3.7.0-bin.zip -d /liquibase -q
    - cd ../../liquibase
    - export PATH=$PATH:/liquibase
    - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -l
    - liquibase --changeLogFile=$INIT_PATH/migrations/baseline_postgres.xml --url="jdbc:postgresql://postgres:5432/custom_baseline" --username $POSTGRES_USER" -d "$POSTGRES_DB" -l
    - cd ../../..
    - pg_dump -h "postgres" -U "$POSTGRES_USER" -d "POSTGRES_DB" > baseline_schema.sql
This stage in my gitlab-ci pipeline (kubernetes executor) returns the following error:
pg_dump: server version: 13.1 (Debian 13.1-1.pgdg100+1); pg_dump version: 11.10
pg_dump: aborting because of server version mismatch
I have tried adding symbolic links as other posts have suggested, but I haven't succeeded. Any suggestions on resolving the pg_dump error for this stage in my GitLab CI pipeline?
You are using pg_dump from the wrong PostgreSQL version. Change the PATH environment variable so that the matching client comes first, or call the correct binary with an absolute path.
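A hedged sketch of what that could look like in the job script; the v13 client's install path below is a placeholder and depends on where your chosen package puts it:

    - command -v pg_dump && pg_dump --version # check which client is currently first on PATH
    # either put the matching v13 client first on PATH ...
    - export PATH=/usr/libexec/postgresql13:$PATH
    # ... or call the matching binary by absolute path
    - /usr/libexec/postgresql13/pg_dump -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" > baseline_schema.sql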

Setting up PostGIS on GitLab CI

I've been trying to set up GitLab CI for my Django project. The project uses the PostGIS extension. After all this setup I still get an error that the postgis.control file could not be found:
$ export PGPASSWORD=$POSTGRES_PASSWORD
$ psql -c "CREATE EXTENSION IF NOT EXISTS postgis;" -d $POSTGRES_DB -U $POSTGRES_USER -h "postgres"
ERROR: could not open extension control file "/usr/share/postgresql/11/extension/postgis.control": No such file or directory
ERROR: Job failed: exit code 1
Here is my .gitlab-ci.yml file
image: python:3.6

stages:
  - test

services:
  - mdillon/postgis
  - postgres

variables:
  POSTGRES_DB: my_db
  POSTGRES_USER: my_user
  POSTGRES_PASSWORD: ""
  TESTFOLDER: "myapp/apps/api myapp/apps/logger"
  DATABASE_URL: "postgres://my_user:@mdillon-postgis/my_db"

test:
  stage: test
  image: mdillon/postgis
  before_script:
    - apt-get update -qy
    - export PGPASSWORD=$POSTGRES_PASSWORD
    - psql -c "CREATE EXTENSION IF NOT EXISTS postgis;" -d $POSTGRES_DB -U $POSTGRES_USER -h "postgres"
    - psql -c "CREATE EXTENSION IF NOT EXISTS postgis_topology;" -d $POSTGRES_DB -U $POSTGRES_USER -h "postgres"
    - apt-get install -y openjdk-8-jre-headless libjpeg-dev zlib1g-dev software-properties-common ghostscript libxslt1-dev binutils libproj-dev libgdal-dev gdal-bin memcached libmemcached-dev
    - export DEBIAN_FRONTEND=noninteractive;
    - pip install --upgrade pip
    - pip install -r requirements/base.pip
    - pip install flake8
  script:
    - python manage.py test $TESTFOLDER --noinput --settings=myapp.settings.gitlab_ci --parallel 4 --verbosity=2
  only:
    - master
In my case, I discovered that the host I was using to connect to the database was the cause of the problem.
After reading through the GitLab documentation, I discovered that GitLab uses the name of the service as the host for the connection (slashes in the image name are replaced, so mdillon/postgis becomes mdillon-postgis). So in my case, when connecting from my Python application, I used mdillon-postgis as my host.
You can find more details here https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services
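If you'd rather not depend on that derived name, here is a short sketch (the alias is arbitrary) that pins the hostname explicitly:

services:
  - name: mdillon/postgis
    alias: postgis # the service is now reachable as "postgis"
variables:
  DATABASE_URL: "postgres://my_user:@postgis/my_db" # same empty password as in the question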
You probably don't have PostGIS installed on the database server. You need to run: sudo apt-get install postgis