Using psql in a github action - postgresql

I am trying to use psql in a GitHub Action but am seeing the following error:

psql: error: could not connect to server: could not connect to server: No such file or directory
	Is the server running locally and accepting
	connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

My GitHub Actions YAML file is shown below (run_all_tests.sh just calls a subprocess that tries to run the psql command). Does anyone know why this could be happening?
name: Python application

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  build:
    runs-on: ubuntu-latest

    # Service containers to run with `container-job`
    services:
      # Label used to access the service container
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Copy the code
        uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install dependencies
        run: |
          python3 setup.py install
      - name: Test with unittest
        run: |
          cd backend/py
          source run_all_tests.sh
        env:
          # The hostname used to communicate with the PostgreSQL service container
          POSTGRES_HOST: postgres
          # The default PostgreSQL port
          POSTGRES_PORT: 5432

Since I was having the same issue, I tried a different approach that worked for me.
First, I run the job within a container:
jobs:
  build:
    container: gradle:jdk11
That won't make the psql command available, so you need to add a run step to install it. The installation method may differ depending on the Docker image you choose:
jobs:
  build:
    container: gradle:jdk11
    ...
    steps:
      - run: |
          apt-get update
          apt-get install --yes --no-install-recommends postgresql-client
Please note you may have different steps above or below.
Now it's time to execute all the SQL you need. The most important thing here: the database hostname is postgres, which is the label of the service container.
jobs:
  build:
    container: gradle:jdk11
    ...
    steps:
      - run: |
          apt-get update
          apt-get install --yes --no-install-recommends postgresql-client
      - run: |
          psql -h postgres -U postgres -c 'CREATE DATABASE ...'
          psql -h postgres -U postgres -c 'CREATE ROLE ...'
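One caveat: psql will prompt for a password unless one is supplied non-interactively. A sketch using libpq's PGPASSWORD variable, assuming the POSTGRES_PASSWORD: postgres from the service definition in the question:

      - run: |
          psql -h postgres -U postgres -c 'CREATE DATABASE ...'
          psql -h postgres -U postgres -c 'CREATE ROLE ...'
        env:
          PGPASSWORD: postgres  # matches the service's POSTGRES_PASSWORD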

Since the job is running directly on the runner machine (not within a Docker container), you need to connect to "localhost" instead of "postgres". It should work if you change POSTGRES_HOST: postgres to POSTGRES_HOST: localhost.
This is described in detail in the docs: https://docs.github.com/en/actions/using-containerized-services/creating-postgresql-service-containers
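Note that psql only falls back to the Unix domain socket in the error above when it is invoked without a host, so the script must also pass the host and port through. A minimal sketch, assuming run_all_tests.sh reads these variables (alternatively, export PGHOST/PGPORT, which libpq picks up directly):

      - name: Test with unittest
        run: |
          cd backend/py
          source run_all_tests.sh
        env:
          POSTGRES_HOST: localhost  # the service publishes port 5432 to the runner
          POSTGRES_PORT: 5432

and, inside the script, an invocation such as psql -h "$POSTGRES_HOST" -p "$POSTGRES_PORT" -U postgres rather than a bare psql.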

Related

Github Actions: Connecting to postgres database in diesel-rs

I'm trying to run cargo test inside a CI workflow for my Actix Web app. Each test creates its own database by first connecting to the default database ("postgres") and then executing SQL queries.
This is the workflow currently used; the "Test postgres connection" step runs successfully, but "Cargo test" fails:
on: [push, pull_request]

name: CI

env:
  CARGO_TERM_COLOR: always

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    container: rust:latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      - name: Install PSQL
        run: |
          apt update
          apt install -y postgresql-client
      - name: Test postgres connection
        run: psql -h postgres -d postgres -U postgres -c 'SELECT 1;'
        env:
          PGPASSWORD: postgres
      - name: Cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
Here's an example of one of the tests:
struct Context {
    pub psql_user: String,
    pub psql_pw: String,
}

impl Context {
    fn new() -> Self {
        dotenv().ok();
        let psql_user =
            env::var("POSTGRES_USER").expect("POSTGRES_USER must be set for integration tests");
        let psql_pw = env::var("POSTGRES_PASSWORD")
            .expect("POSTGRES_PASSWORD must be set for integration tests");
        let database_url = format!(
            "postgres://{}:{}@localhost:5432/postgres",
            psql_user, psql_pw
        );
        let mut conn = PgConnection::establish(&database_url)
            .expect("Failed to connect to the database 'postgres'"); // This panics
        // ...
    }
}

#[actix_web::test]
async fn test_create_task_req() {
    let ctx = Context::new("create_task_test");
    // ...
}
I assume the mistake is somewhere in my code, as everything runs fine in the workflow until cargo test, which throws this error:
---- test_create_task_req stdout ----
thread 'test_create_task_req' panicked at 'Failed to connect to the database 'postgres':
BadConnection("could not connect to server: Connection refused
Is the server running on host \"localhost\" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host \"localhost\" (::1) and accepting
TCP/IP connections on port 5432?
")',
tests/tasks_crud_integration.rs:42:14
When running cargo test locally, no problems occur.
Through trial and error I ended up finding a working solution:
on: [push, pull_request]

name: CI

env:
  CARGO_TERM_COLOR: always

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    # Removed the container 'rust:latest'
    services:
      postgres:
        image: postgres # Removed version notation
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      # Removed the 'Install PSQL' step; the psql client is preinstalled on the ubuntu-latest runner image
      - name: Test postgres connection
        run: psql postgres://postgres:postgres@localhost:5432/postgres -c 'SELECT 1;'
      - name: Cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
As seen above, the most critical change was removing the rust container (which was unnecessary for the workflow anyway). In hindsight, the likely cause is that with container: rust:latest the steps ran inside that container, where "localhost" refers to the container itself, not the runner VM where the service's port 5432 is published; from inside the container the database is instead reachable via the service label, postgres.
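A sketch of how the container-based variant could have been kept instead, assuming the test code were changed to read the host from a (hypothetical) POSTGRES_HOST variable rather than hardcoding localhost in the connection string:

      - name: Cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_HOST: postgres # hypothetical variable; the service label resolves from inside the job container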

Could not translate host name "postgres" to address on github actions

I'm configuring my first GitHub Actions workflow. It requires Postgres, so I added a service like this:
services:
  postgres:
    image: postgres:latest
    env:
      POSTGRES_DB: postgres_db
      POSTGRES_PASSWORD: postgres_password
      POSTGRES_PORT: 5432
      POSTGRES_USER: postgres_user
    ports:
      - 5432:5432
    # set health checks to wait until postgres has started
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
But the workflow is unable to use the postgres host. Here are the steps:
steps:
  - uses: actions/checkout@v3
  - name: Set up Python ${{ matrix.python-version }}
    uses: actions/setup-python@v3
    with:
      python-version: ${{ matrix.python-version }}
  - name: Install PostgreSQL client
    run: |
      sudo apt-get update
      sudo apt-get install --yes postgresql-client
  - name: Test database connection
    run: psql -h postgres -d postgres_db -U postgres_user -c 'SELECT 1;'
    env:
      PGPASSWORD: postgres_password
And the failure:
Run psql -h postgres -d postgres_db -U postgres_user -c 'SELECT 1;'
psql: error: could not translate host name "postgres" to address: Temporary failure in name resolution
The GitHub Actions workflow page is available here: https://github.com/buxx/rolling/actions/runs/3092460076/jobs/5003751698, and the entire config file here: https://github.com/buxx/rolling/blob/3e4d200e5e111d3731d7fc8d18e5795a0b82ca9a/.github/workflows/pr-tests.yml
How do I configure the GitHub Actions workflow to be able to use the PostgreSQL service?
Without any special indication to execute steps in a container, the steps run directly on the runner. So the Postgres host to use is localhost, as in the corrected step below.
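A sketch of the corrected step from the question, changing only the host (the service publishes port 5432 to the runner, so localhost works):

  - name: Test database connection
    run: psql -h localhost -d postgres_db -U postgres_user -c 'SELECT 1;'
    env:
      PGPASSWORD: postgres_password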

Github Actions to Connect Postgres service with custom container image

In my Django project, I have a CI workflow for running tests, which requires a Postgres service. Recently a new app introduced heavier packages such as pandas, matplotlib, and pytorch, and this increased the run-tests job time from 2 to 12 minutes, which is absurd. My project also has a base Docker image containing Python and these heavier packages, used to speed up image builds. So I was thinking of using that same image in the workflow when running the steps, because the packages would already be installed.
Unfortunately, all goes well until it reaches the step that actually runs the tests, because it seems that the postgres service is not connected to the container, and I get the following error:
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
This is my workflow right now. Any ideas on what I am doing wrong?
name: server-ci

on:
  pull_request:
    types: [opened]

env:
  DJANGO_SETTINGS_MODULE: settings_test

jobs:
  run-tests:
    name: Run tests
    runs-on: ubuntu-latest
    container:
      image: myimage/django-server:base
      credentials:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_PASSWORD }}
      ports:
        - 8000:8000
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: admin
          POSTGRES_DB: mydb
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    env:
      POSTGRES_HOST: localhost
      POSTGRES_PORT: 5432
      POSTGRES_PASSWORD: admin
      POSTGRES_USER: postgres
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: /opt/venv
          key: /opt/venv-${{ hashFiles('**/requirements.txt') }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -r requirements.txt
        if: steps.cache.outputs.cache-hit != 'true'
      - name: Run tests
        run: |
          ./manage.py test --parallel --verbosity=2
It turns out that the workflow is now running in a container of its own, next to the postgres container. So the port mapping to the runner VM doesn’t do anything any more (because it affects the host, not the Docker containers on it).
The job and service containers are attached to the same Docker network, so all I need to do is change POSTGRES_HOST to postgres (the name of the service container) and Docker’s DNS does the rest; see the sketch below.
Credits: https://github.community/t/connect-postgres-service-with-custom-container-image/189994/2?u=everspader
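Concretely, the only change needed in the workflow above is the job-level env block; a sketch:

    env:
      POSTGRES_HOST: postgres # the service container name, instead of localhost
      POSTGRES_PORT: 5432
      POSTGRES_PASSWORD: admin
      POSTGRES_USER: postgres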

CircleCI Swift with Postgres connection issues

I am working in my repo to build a test app for Swift with CircleCI and Postgres, but when it comes to testing I can't seem to grasp how to connect the two images during the testing phase.
I am running:
circleci local execute --job build
which should build both the swift and postgres images. I give them both the same environment variables I use in the application. However, I get the error below when trying to run it. In my experience setting up the two Docker containers with Compose, this error appeared when my API could not reach the db container over the network.
Test Case 'AppTests.RemoveUserTest' started at 2019-04-09 19:46:15.380
Fatal error: 'try!' expression unexpectedly raised an error: NIO.ChannelError.connectFailed(NIO.NIOConnectionError(host: "db", port: 5432, dnsAError: Optional(NIO.SocketAddressError.unknown(host: "db", port: 5432)), dnsAAAAError: Optional(NIO.SocketAddressError.unknown(host: "db", port: 5432)), connectionErrors: [])): file /home/buildnode/jenkins/workspace/oss-swift-4.2-package-linux-ubuntu-16_04/swift/stdlib/public/core/ErrorType.swift, line 184
I know it says it failed because of a try statement, but that try statement is failing because it's requesting actions from Postgres, which is not there. Any ideas?
My current config.yml for CircleCI:
version: 2
jobs:
  build:
    docker:
      - image: swift:4.2
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
          DB_HOSTNAME: db
          PORT: 5432
      - image: postgres:11.2-alpine
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
    steps:
      - checkout
      - run: apt-get update -qq
      - run: apt-get install -yq libssl-dev pkg-config wget
      - run: apt-get install -y postgresql-client || true
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
          environment:
            DOCKERIZE_VERSION: v0.3.0
      - run:
          name: Wait for db
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Compile code
          command: swift build
      - run:
          name: Run unit tests
          command: swift test
  release:
    docker:
      - image: swift:4.2
    steps:
      - checkout
      - run:
          name: Compile code with optimizations
          command: swift build -c release
  push-to-docker-hub:
    docker:
      - image: docker:latest
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --update --no-cache curl jq python py-pip
      - run:
          name: Build Docker Image
          command: |
            docker build -t api .
            docker tag api <>/repo:latest
            docker tag api <>/repo:$CIRCLE_SHA1
            docker login -u $DOCKER_USER -p $DOCKER_PASS
            docker push <>/repo:latest
            docker push <>/repo:$CIRCLE_SHA1
      # - persist_to_workspace:
      #     root: ./
      #     paths:
      #       - k8s-*.yml

workflows:
  version: 2
  tests:
    jobs:
      - build
      - push-to-docker-hub:
          requires:
            - build
          context: dockerhub
          filters:
            branches:
              only: master
      #- linux-release
You're setting the hostname for the database to db but not defining it anywhere. You need to name your Docker container to match the DB_HOSTNAME environment variable, like so: https://github.com/vapor/postgresql/blob/master/circle.yml#L8
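In CircleCI config that is done with the name key on a secondary container, which sets the hostname the container is reachable by. A sketch against the config above:

    docker:
      - image: swift:4.2
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
          DB_HOSTNAME: db
          PORT: 5432
      - image: postgres:11.2-alpine
        name: db # matches DB_HOSTNAME so the app can resolve the database
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test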

How to setup Postgres database schema for python Bitbucket Pipeline

I am trying to integrate a Bitbucket pipeline with my Python package, which uses a Postgres database.
To achieve this I am using the Postgres service, but I am not able to find any way in bitbucket-pipelines.yml to populate the database schema.
Below is my bitbucket-pipelines.yml; right now I am getting the error "bash: psql: command not found".
image: python:2.7.13

definitions:
  services:
    postgres:
      image: postgres

pipelines:
  default:
    - step:
        caches:
          - pip
        script:
          - python setup.py sdist
        services:
          - postgres
  branches:
    master:
      - step:
          name: Run unit/integration tests
          deployment: test
          caches:
            - pip
          script:
            - sudo apt-get update && sudo apt-get install -y postgresql-client
            - psql -c 'drop database if exists testdb;' -U postgres
            - psql -c 'create database testdb;' -U postgres
            - python setup.py sdist
            - python -m unittest discover tests/
This worked for me (I had to remove the sudos before the apt-get):
image: atlassian/default-image:2

clone:
  depth: 5 # include the last five commits

definitions:
  services:
    postgres:
      image: postgres
      environment:
        POSTGRES_DB: test_annotation
        POSTGRES_USER: user
        POSTGRES_PASSWORD: password

pipelines:
  default:
    - step:
        caches:
          - node
        script:
          - apt-get update && apt-get install -y postgresql-client
          - PGPASSWORD=password psql -h localhost -p 5432 -U user test_annotation;
          - chmod 755 ./scripts/create-test-database.sh
          - ./scripts/create-test-database.sh
        services:
          - postgres
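The referenced ./scripts/create-test-database.sh is not shown; if the goal is just to populate the schema, the same can be done inline in the step. A sketch, assuming a hypothetical schema.sql checked into the repository:

        script:
          - apt-get update && apt-get install -y postgresql-client
          # load the schema into the service database (schema.sql is a hypothetical file)
          - PGPASSWORD=password psql -h localhost -p 5432 -U user -d test_annotation -f schema.sql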
Make sure services is indented correctly; otherwise, the db won't start.
Julien