In my Django project, I have a CI workflow for running tests, which requires a Postgres service. Recently a new app introduced heavier packages such as pandas, matplotlib, and pytorch, and this increased the run-tests job time from 2 to 12 minutes, which is absurd. I also have a base Docker image with Python and these heavier packages preinstalled to speed up image builds, so I was thinking of using that same image in the workflow, since the packages would already be available when the steps run.
Unfortunately, all goes well until the step that actually runs the tests: it seems the postgres service is not reachable from the container, and I get the following error:
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
This is my workflow right now. Any ideas on what I am doing wrong?
name: server-ci

on:
  pull_request:
    types: [opened]

env:
  DJANGO_SETTINGS_MODULE: settings_test

jobs:
  run-tests:
    name: Run tests
    runs-on: ubuntu-latest
    container:
      image: myimage/django-server:base
      credentials:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_PASSWORD }}
      ports:
        - 8000:8000
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: admin
          POSTGRES_DB: mydb
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    env:
      POSTGRES_HOST: localhost
      POSTGRES_PORT: 5432
      POSTGRES_PASSWORD: admin
      POSTGRES_USER: postgres
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Cache dependencies
        id: cache  # needed so the `if:` below can reference this step's outputs
        uses: actions/cache@v2
        with:
          path: /opt/venv
          key: /opt/venv-${{ hashFiles('**/requirements.txt') }}
      - name: Install dependencies
        if: steps.cache.outputs.cache-hit != 'true'
        run: |
          python -m pip install --upgrade pip
          python -m pip install -r requirements.txt
      - name: Run tests
        run: |
          ./manage.py test --parallel --verbosity=2
It turns out that the workflow is now running in a container of its own, next to the postgres container. So the port mapping to the runner VM doesn’t do anything any more (because it affects the host, not Docker containers on it).
The job and service containers get attached to the same Docker network, so all I need to do is change POSTGRES_HOST to postgres (the name of the service container) and Docker’s DNS should do the rest.
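A minimal sketch of the change, using the service label from the workflow above; everything else stays the same:

env:
  POSTGRES_HOST: postgres  # the service container's label, resolved by Docker's DNS
  POSTGRES_PORT: 5432
  POSTGRES_PASSWORD: admin
  POSTGRES_USER: postgres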
Credits: https://github.community/t/connect-postgres-service-with-custom-container-image/189994/2?u=everspader
I'm trying to run cargo test inside a CI workflow for my Actix Web app. Each test creates its own database by first connecting to the default database ("postgres") and then executing SQL queries.
This is the workflow currently used, the "Test postgres connection" runs successfully, but "Cargo test" fails:
on: [push, pull_request]

name: CI

env:
  CARGO_TERM_COLOR: always

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    container: rust:latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      - name: Install PSQL
        run: |
          apt update
          apt install -y postgresql-client
      - name: Test postgres connection
        run: psql -h postgres -d postgres -U postgres -c 'SELECT 1;'
        env:
          PGPASSWORD: postgres
      - name: Cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
Here's an example of one of the tests:
struct Context {
    pub psql_user: String,
    pub psql_pw: String,
}

impl Context {
    fn new(test_name: &str) -> Self {
        dotenv().ok();
        let psql_user =
            env::var("POSTGRES_USER").expect("POSTGRES_USER must be set for integration tests");
        let psql_pw = env::var("POSTGRES_PASSWORD")
            .expect("POSTGRES_PASSWORD must be set for integration tests");
        let database_url = format!(
            "postgres://{}:{}@localhost:5432/postgres",
            psql_user, psql_pw
        );
        let mut conn = PgConnection::establish(&database_url)
            .expect("Failed to connect to the database 'postgres'"); // This panics
        // ...
    }
}

#[actix_web::test]
async fn test_create_task_req() {
    let ctx = Context::new("create_task_test");
    // ...
}
I assume the mistake is somewhere in my code, as everything runs fine in the workflow until cargo test, which throws this error:
---- test_create_task_req stdout ----
thread 'test_create_task_req' panicked at 'Failed to connect to the database 'postgres':
BadConnection("could not connect to server: Connection refused
Is the server running on host \"localhost\" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host \"localhost\" (::1) and accepting
TCP/IP connections on port 5432?
")',
tests/tasks_crud_integration.rs:42:14
When running cargo test locally, no problems occur.
With trial and error I ended up finding a working solution:
on: [push, pull_request]

name: CI

env:
  CARGO_TERM_COLOR: always

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    # Removed the container 'rust:latest'
    services:
      postgres:
        image: postgres # Removed version notation
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      # Removed 'Install PSQL' step, as psql comes preinstalled on the ubuntu-latest runner
      - name: Test postgres connection
        run: psql postgres://postgres:postgres@localhost:5432/postgres -c 'SELECT 1;'
      - name: Cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
As seen above, the most critical change was removing the rust container (which was unnecessary for the workflow anyway). At the time I didn't know exactly what in that Docker image caused the problem, but the likely explanation is the same as in the answer above: when the job runs inside a container, localhost refers to that container rather than to the runner VM where the service's port is mapped, so the tests' hard-coded localhost:5432 had nothing listening.
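If keeping the rust container were preferable, a sketch of the alternative (untested; it assumes the tests read the host from an environment variable such as POSTGRES_HOST instead of hard-coding localhost):

container: rust:latest
services:
  postgres:
    image: postgres
    env:
      POSTGRES_PASSWORD: postgres
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
steps:
  # inside the job container, the service is reachable by its label, not localhost
  - name: Cargo test
    uses: actions-rs/cargo@v1
    with:
      command: test
      args: --verbose
    env:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_HOST: postgres  # hypothetical variable the tests would read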
I am trying to use psql in a github action but am seeing the following error:
psql: error: could not connect to server: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
My GitHub Actions YAML file is shown below (the run_all_tests.sh file just calls a subprocess that tries to run the psql command). Does anyone know why this could be happening?
name: Python application

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  build:
    runs-on: ubuntu-latest
    # Service containers to run with `container-job`
    services:
      # Label used to access the service container
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Copy the code
        uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install dependencies
        run: |
          python3 setup.py install
      - name: Test with unittest
        run: |
          cd backend/py
          source run_all_tests.sh
        env:
          # The hostname used to communicate with the PostgreSQL service container
          POSTGRES_HOST: postgres
          # The default PostgreSQL port
          POSTGRES_PORT: 5432
Since I was having the same issue, I tried a different approach that worked for me.
In the first place, I run the job within a container:

jobs:
  build:
    container: gradle:jdk11

That won't make the psql command available, so you need to add a run step to install it. The particular installation method may differ depending on the Docker image you choose:

jobs:
  build:
    container: gradle:jdk11
    ...
    steps:
      - run: |
          apt-get update
          apt-get install --yes --no-install-recommends postgresql-client

Please note you may have different steps above or below.
Now it's time to execute all the SQL you need. The most important thing here: the database hostname is postgres, which is the id of the service container.

jobs:
  build:
    container: gradle:jdk11
    ...
    steps:
      - run: |
          apt-get update
          apt-get install --yes --no-install-recommends postgresql-client
      - run: |
          psql -h postgres -U postgres -c 'CREATE DATABASE ...'
          psql -h postgres -U postgres -c 'CREATE ROLE ...'
Since the job is running directly on a runner machine (not within a docker container), you need to connect to "localhost" instead of "postgres". It should work if you change POSTGRES_HOST: postgres to POSTGRES_HOST: localhost.
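In the workflow above, that means the "Test with unittest" step's env becomes:

env:
  POSTGRES_HOST: localhost
  POSTGRES_PORT: 5432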
This is described in detail in the docs: https://docs.github.com/en/actions/using-containerized-services/creating-postgresql-service-containers
I am trying GitHub Actions for CI with a Ruby on Rails application.
My setup runs on the VM; the Ruby build is not in a container.
This is my workflow yml. It runs all the way without errors until the step "Setup Database".
name: Rails CI

on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:10.10
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: db_test
        ports:
          - 5432/tcp
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      redis:
        image: redis:latest
        ports:
          - 6379/tcp
    steps:
      - uses: actions/checkout@v1
      - name: Set up ruby 2.5
        uses: actions/setup-ruby@v1
        with:
          ruby-version: 2.5.5
      - name: Set up node 8.14
        uses: actions/setup-node@v1
        with:
          node-version: '8.14'
      - name: Setup system dependencies
        run: sudo apt-get install libpq-dev
      - name: Setup App Dependencies
        run: |
          gem install bundler -v 1.17.3 --no-document
          bundle install --jobs 4 --retry 3
          npm install
          npm install -g yarn
      - name: Run rubocop
        run: bundle exec rubocop
      - name: Run brakeman
        run: bundle exec brakeman
      - name: Setup Database
        env:
          RAILS_ENV: test
          POSTGRES_HOST: localhost
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: |
          cp config/database.yml.ci config/database.yml
          bundle exec rails db:create
          bundle exec rails db:schema:load
      - name: Run rspec
        env:
          RAILS_ENV: test
          REDIS_HOST: redis
          REDIS_PORT: ${{ job.services.redis.ports[6379] }}
          POSTGRES_HOST: localhost
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: bundle exec rspec --tag ~type:system
I am able to install ruby, node, the images, Postgres as a service, etc, and run Rubocop and Brakeman. But when I try to set up the DB before running Rspec it says it cannot connect to the DB.
As far as I've been able to ascertain, the host is localhost when running the VM configuration as opposed to a container configuration.
This is the database.yml.ci that the "Setup Database" step copies to the database.yml to be used by Rails.
test:
  adapter: postgresql
  encoding: unicode
  database: db_test
  pool: 5
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>
I expected Postgres to be correctly set up and bundle exec rails db:create to create the database. However, it throws the following error:
rails aborted!
PG::ConnectionBad: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
I've tried all sorts of different configurations, but unfortunately, Actions is sort of new and there doesn't seem to be a lot of material available online.
Any ideas on how to fix this?
===========================
EDIT:
So I was able to sort this out through trial and error. I ended up running the job in a Docker container that includes both Ruby and Node. This is the working configuration:
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
      - development
      - release

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: timbru31/ruby-node:latest
    services:
      postgres:
        image: postgres:11
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      chrome:
        image: selenium/standalone-chrome:latest
        ports:
          - 4444:4444
    steps:
      - uses: actions/checkout@v1
      - name: Setup app dependencies
        run: |
          gem install bundler -v 1.17.3 --no-document
          bundle install --jobs 4 --retry 3
          npm install
          npm install -g yarn
      - name: Run rubocop
        run: bundle exec rubocop
      - name: Run brakeman
        run: bundle exec brakeman
      - name: Setup database
        env:
          RAILS_ENV: test
          POSTGRES_HOST: postgres
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: |
          cp config/database.yml.ci config/database.yml
          bundle exec rails db:create
          bundle exec rails db:schema:load
      - name: Run rspec
        env:
          RAILS_ENV: test
          POSTGRES_HOST: postgres
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
          SELENIUM_URL: 'http://chrome:4444/wd/hub'
        run: bundle exec rspec
And the CI DB configuration database.yml.ci
default: &default
  adapter: postgresql
  encoding: unicode
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>
  pool: 5
  database: <%= ENV['POSTGRES_DB'] %>

test:
  <<: *default
I have a slightly different setup, but this was the most relevant question when I encountered the same error, so I wanted to post here in case it can help. The two things that were critical for me were:
1) Set DB_HOST=localhost
2) Pass the --network="host" argument when starting the Docker container with your Rails app
name: Master Build

on: [push]

env:
  registry: my_registry_name
  # Not sure these are actually being passed down to rails, set them as the default in database.yml
  DB_HOST: localhost
  DB_USERNAME: postgres
  DB_PASSWORD: postgres

jobs:
  my_image_test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_DB: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_USER: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Build my_image docker image
        uses: whoan/docker-build-with-cache-action@v5
        with:
          username: "${{secrets.aws_ecr_access_key_id}}"
          password: "${{secrets.aws_ecr_secret_access_key}}"
          registry: "${{env.registry}}"
          image_name: my_image
          context: my_image
      - name: Lint rubocop
        working-directory: ./my_image
        run: docker run $registry/my_image bundle exec rubocop
      - name: Run rails tests
        working-directory: ./my_image
        run: docker run --network="host" $registry/my_image bash -c "RAILS_ENV=test rails db:create && RAILS_ENV=test rails db:migrate && rails test"
Your problem appears to be that Postgres is not exposed on port 5432. Try to replace the port number with ${{ job.services.postgres.ports[5432] }}.
There are examples here: https://github.com/actions/example-services/blob/master/.github/workflows/postgres-service.yml
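For illustration, a minimal sketch of consuming a dynamically mapped port via the job context (the step name is just an example):

services:
  postgres:
    image: postgres
    env:
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432/tcp  # host port is assigned dynamically
steps:
  - name: Show the mapped port
    run: echo "postgres is on localhost:$PG_PORT"
    env:
      PG_PORT: ${{ job.services.postgres.ports[5432] }}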
I had this challenge when trying to set up GitHub actions for a Rails Application.
Here's what worked for me:
name: Ruby

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        ruby-version:
          - '2.7.2'
        node-version:
          - '12.22'
        database-name:
          - my-app
        database-password:
          - postgres
        database-user:
          - postgres
        database-host:
          - 127.0.0.1
        database-port:
          - 5432
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_DB: ${{ matrix.database-name }}
          POSTGRES_USER: ${{ matrix.database-user }}
          POSTGRES_PASSWORD: ${{ matrix.database-password }}
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Check out Git Repository
        uses: actions/checkout@v2
      - name: Set up Ruby, Bundler and Rails
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: ${{ matrix.ruby-version }}
          bundler-cache: true # runs 'bundle install' and caches installed gems automatically
      - name: Set up Node
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install packages
        run: |
          yarn install --check-files
      - name: Setup test database
        env:
          RAILS_ENV: test
          DATABASE_NAME_TEST: ${{ matrix.database-name }}
          DATABASE_USER: ${{ matrix.database-user }}
          DATABASE_PASSWORD: ${{ matrix.database-password }}
          DATABASE_HOST: ${{ matrix.database-host }}
          DATABASE_PORT: ${{ matrix.database-port }}
          POSTGRES_DB: ${{ matrix.database-name }}
        run: |
          bundle exec rails db:migrate
          bundle exec rails db:seed
Note:
Replace my-app with the name of your app.
You can leave the database-password and database-user as postgres
That's all.
I hope this helps
I'm trying to set up a CI/CD pipeline in GitHub Actions for my Elixir project.
I can fetch dependencies, compile them, check formatting, credo... But when the tests start, I'm not able to reach the PostgreSQL service declared in the YAML.
How can I link both containers? (Elixir and PostgreSQL)
According to the logs shown on GitHub Actions, both containers are on the same Docker network, so they should be reachable from each other using their network aliases. However, when I try to connect to the postgres one, it says NXDOMAIN. Also the ping doesn't work, as expected.
The content of my workflow:
name: Elixir CI

on: push

jobs:
  build:
    runs-on: ubuntu-18.04
    container:
      image: elixir:1.9.1
    services:
      postgres:
        image: postgres
        ports:
          - 5432:5432
        env:
          POSTGRES_USER: my_app
          POSTGRES_PASSWORD: my_app
          POSTGRES_DB: my_app_test
    steps:
      - uses: actions/checkout@v1
      - name: Install Dependencies
        env:
          MIX_ENV: test
        run: |
          cp config/test.secret.ci.exs config/test.secret.exs
          mix local.rebar --force
          mix local.hex --force
          apt-get update -qqq && apt-get install make gcc -y -qqq
          mix deps.get
      - name: Compile
        env:
          MIX_ENV: test
        run: mix compile --warnings-as-errors
      - name: Run formatter
        env:
          MIX_ENV: test
        run: mix format --check-formatted
      - name: Run Credo
        env:
          MIX_ENV: test
        run: mix credo
      - name: Run Tests
        env:
          MIX_ENV: test
        run: mix test
Also, on the Elixir side I have set up the test task to connect to postgres:5432, but it says the host does not exist.
According to some tutorials and examples I found on the Internet, this configuration looks valid, but nothing I could do made it work.
You need to pass the name of the service ("postgres") as POSTGRES_HOST to the application and set the port with POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }} (spaces matter).
GitHub Actions routes the host and port to the service container dynamically.
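For example, changing only the Run Tests step of the workflow above (a sketch; the test config would read these variables instead of hard-coding the host):

- name: Run Tests
  env:
    MIX_ENV: test
    POSTGRES_HOST: postgres
    POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
  run: mix test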
I wrote a blog post on the subject a couple of days ago.
GitHub Actions allow you to run background services on a per-job basis. After following the examples, I can't figure out how to connect to a running PostgreSQL container.
I've attempted a few different approaches in this pull request, but none of them have worked.
name: dinosql test suite

on: [push]

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432/tcp
    steps:
      - uses: actions/checkout@master
      - name: Test dinosql/ondeck
        run: go test -v ./...
        working-directory: internal/dinosql/testdata/ondeck
        env:
          PG_USER: postgres
          PG_DATABASE: postgres
          PG_PASSWORD: postgres
          PG_PORT: ${{ job.services.postgres.ports['5432'] }}
This setup results in the following error:
Run go test -v ./...
=== RUN TestQueries
=== PAUSE TestQueries
=== RUN TestPrepared
=== PAUSE TestPrepared
=== CONT TestQueries
=== CONT TestPrepared
--- FAIL: TestPrepared (0.00s)
##[error] db_test.go:212: db: postgres://postgres:postgres@127.0.0.1:32768/postgres?sslmode=disable
##[error] db_test.go:212: dial tcp 127.0.0.1:32768: connect: connection refused
--- FAIL: TestQueries (0.00s)
##[error] db_test.go:83: db: postgres://postgres:postgres@127.0.0.1:32768/postgres?sslmode=disable
##[error] db_test.go:83: dial tcp 127.0.0.1:32768: connect: connection refused
FAIL
FAIL example.com/ondeck 0.005s
? example.com/ondeck/prepared [no test files]
##[error]Process completed with exit code 1.
The tests should pass if a valid database connection could be made.
I ran into the same problem and found this example via GitHub's code search after a lot of trial and error.
name: dinosql test suite

on: [push]

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432/tcp
        # needed because the postgres container does not provide a healthcheck
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      - uses: actions/checkout@master
      - name: Test dinosql/ondeck
        run: go test -v ./...
        working-directory: internal/dinosql/testdata/ondeck
        env:
          # use postgres for the host here because we have specified a container for the job.
          # If we were running the job on the VM this would be localhost
          PG_HOST: postgres
          PG_USER: postgres
          PG_DATABASE: postgres
          PG_PASSWORD: postgres
          PG_PORT: ${{ job.services.postgres.ports['5432'] }}
Adding the healthcheck options and changing the database hostname from 127.0.0.1 to postgres should do the trick.
Without the healthcheck options, the job's steps can start before postgres is ready to accept connections, so the tests run against a database that isn't up yet.
If you're not running your job in a container, like in this example which runs directly on the ubuntu-latest VM, you should still use localhost and just map the ports:
services:
  # Label used to access the service container
  postgres:
    # Docker Hub image
    image: postgres
    # Provide the password for postgres
    env:
      POSTGRES_PASSWORD: postgres
    # Set health checks to wait until postgres has started
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
    ports:
      # Maps tcp port 5432 on service container to the host
      - 5432:5432
https://docs.github.com/en/actions/using-containerized-services/creating-postgresql-service-containers#running-jobs-directly-on-the-runner-machine