How to connect to Postgres in GitHub Actions - postgresql

I am trying GitHub Actions for CI with a Ruby on Rails application.
My setup runs on the VM directly, not running the Ruby build in a container.
This is my workflow yml. It runs all the way without errors until the step "Setup Database".
name: Rails CI
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:10.10
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: db_test
        ports:
          - 5432/tcp
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      redis:
        image: redis:latest
        ports:
          - 6379/tcp
    steps:
      - uses: actions/checkout@v1
      - name: Set up ruby 2.5
        uses: actions/setup-ruby@v1
        with:
          ruby-version: 2.5.5
      - name: Set up node 8.14
        uses: actions/setup-node@v1
        with:
          node-version: '8.14'
      - name: Setup system dependencies
        run: sudo apt-get install libpq-dev
      - name: Setup App Dependencies
        run: |
          gem install bundler -v 1.17.3 --no-document
          bundle install --jobs 4 --retry 3
          npm install
          npm install -g yarn
      - name: Run rubocop
        run: bundle exec rubocop
      - name: Run brakeman
        run: bundle exec brakeman
      - name: Setup Database
        env:
          RAILS_ENV: test
          POSTGRES_HOST: localhost
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: |
          cp config/database.yml.ci config/database.yml
          bundle exec rails db:create
          bundle exec rails db:schema:load
      - name: Run rspec
        env:
          RAILS_ENV: test
          REDIS_HOST: redis
          REDIS_PORT: ${{ job.services.redis.ports[6379] }}
          POSTGRES_HOST: localhost
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: bundle exec rspec --tag ~type:system
I am able to install Ruby, Node, the images, Postgres as a service, etc., and run RuboCop and Brakeman. But when I try to set up the DB before running RSpec, it says it cannot connect to the DB.
As far as I've been able to ascertain, the host is localhost when running the VM configuration as opposed to a container configuration.
This is the database.yml.ci that the "Setup Database" step copies to the database.yml to be used by Rails.
test:
  adapter: postgresql
  encoding: unicode
  database: db_test
  pool: 5
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>
I expected Postgres to be correctly set up and bundle exec rails db:create to create the database. However, it throws the following error:
rails aborted!
PG::ConnectionBad: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
I've tried all sorts of different configurations, but unfortunately GitHub Actions is still fairly new and there doesn't seem to be much material available online.
Any ideas on how to fix this?
===========================
EDIT:
So I was able to sort this out through trial and error. I ended up running the job inside a Docker container that ships both Ruby and Node. This is the working configuration:
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
      - development
      - release
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: timbru31/ruby-node:latest
    services:
      postgres:
        image: postgres:11
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      chrome:
        image: selenium/standalone-chrome:latest
        ports:
          - 4444:4444
    steps:
      - uses: actions/checkout@v1
      - name: Setup app dependencies
        run: |
          gem install bundler -v 1.17.3 --no-document
          bundle install --jobs 4 --retry 3
          npm install
          npm install -g yarn
      - name: Run rubocop
        run: bundle exec rubocop
      - name: Run brakeman
        run: bundle exec brakeman
      - name: Setup database
        env:
          RAILS_ENV: test
          POSTGRES_HOST: postgres
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: |
          cp config/database.yml.ci config/database.yml
          bundle exec rails db:create
          bundle exec rails db:schema:load
      - name: Run rspec
        env:
          RAILS_ENV: test
          POSTGRES_HOST: postgres
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
          SELENIUM_URL: 'http://chrome:4444/wd/hub'
        run: bundle exec rspec
And the CI DB configuration database.yml.ci
default: &default
  adapter: postgresql
  encoding: unicode
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>
  pool: 5
  database: <%= ENV['POSTGRES_DB'] %>

test:
  <<: *default

I have a slightly different setup, but this was the most relevant question when I encountered the same error, so I wanted to post here in case it helps. The two things that were critical for me were:
1) Setting DB_HOST=localhost
2) Passing the --network="host" argument when starting the Docker container with the Rails app
name: Master Build
on: [push]
env:
  registry: my_registry_name
  # Not sure these are actually being passed down to rails; set them as the default in database.yml (see the sketch after this workflow)
  DB_HOST: localhost
  DB_USERNAME: postgres
  DB_PASSWORD: postgres
jobs:
  my_image_test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_DB: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_USER: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Build my_image docker image
        uses: whoan/docker-build-with-cache-action@v5
        with:
          username: "${{secrets.aws_ecr_access_key_id}}"
          password: "${{secrets.aws_ecr_secret_access_key}}"
          registry: "${{env.registry}}"
          image_name: my_image
          context: my_image
      - name: Lint rubocop
        working-directory: ./my_image
        run: docker run $registry/my_image bundle exec rubocop
      - name: Run rails tests
        working-directory: ./my_image
        run: docker run --network="host" $registry/my_image bash -c "RAILS_ENV=test rails db:create && RAILS_ENV=test rails db:migrate && rails test"
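A minimal sketch of what "set them as the default in database.yml" could look like, assuming standard ERB fallbacks that mirror the DB_* values above (the names and the test database name are illustrative, not taken from the original project):

# config/database.yml (sketch)
default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5
  host: <%= ENV.fetch('DB_HOST', 'localhost') %>
  username: <%= ENV.fetch('DB_USERNAME', 'postgres') %>
  password: <%= ENV.fetch('DB_PASSWORD', 'postgres') %>

test:
  <<: *default
  database: my_app_test

Because the tests run inside docker run with --network="host", the container shares the runner's network and can reach the service published on localhost:5432 without any extra port wiring.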

Your problem appears to be that Postgres is not exposed on port 5432. Try to replace the port number with ${{ job.services.postgres.ports[5432] }}.
There are examples here: https://github.com/actions/example-services/blob/master/.github/workflows/postgres-service.yml
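In the question's setup, - 5432/tcp publishes the container's port 5432 on a random free port of the runner, so connecting to localhost:5432 fails; the mapped port has to be read from job.services.postgres.ports[5432] and passed all the way into the Rails config. The question's database.yml.ci never reads POSTGRES_PORT, so a minimal sketch of the missing piece (assuming the same variable names already used in the question) would be:

# config/database.yml.ci (sketch) - note the added port key
test:
  adapter: postgresql
  encoding: unicode
  database: db_test
  pool: 5
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>
  port: <%= ENV['POSTGRES_PORT'] %>

Alternatively, mapping the port explicitly with 5432:5432 (as the later answers do) pins it to 5432 on the runner, and no port lookup is needed.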

I had this challenge when trying to set up GitHub actions for a Rails Application.
Here's what worked for me:
name: Ruby
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        ruby-version:
          - '2.7.2'
        node-version:
          - '12.22'
        database-name:
          - my-app
        database-password:
          - postgres
        database-user:
          - postgres
        database-host:
          - 127.0.0.1
        database-port:
          - 5432
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_DB: ${{ matrix.database-name }}
          POSTGRES_USER: ${{ matrix.database-user }}
          POSTGRES_PASSWORD: ${{ matrix.database-password }}
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Check out Git Repository
        uses: actions/checkout@v2
      - name: Set up Ruby, Bundler and Rails
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: ${{ matrix.ruby-version }}
          bundler-cache: true # runs 'bundle install' and caches installed gems automatically
      - name: Set up Node
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install packages
        run: |
          yarn install --check-files
      - name: Setup test database
        env:
          RAILS_ENV: test
          DATABASE_NAME_TEST: ${{ matrix.database-name }}
          DATABASE_USER: ${{ matrix.database-user }}
          DATABASE_PASSWORD: ${{ matrix.database-password }}
          DATABASE_HOST: ${{ matrix.database-host }}
          DATABASE_PORT: ${{ matrix.database-port }}
          POSTGRES_DB: ${{ matrix.database-name }}
        run: |
          bundle exec rails db:migrate
          bundle exec rails db:seed
Note:
Replace my-app with the name of your app.
You can leave database-password and database-user as postgres.
That's all. I hope this helps.
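One thing worth noting: the DATABASE_* variables above only take effect if config/database.yml actually reads them (they are not Rails defaults). A minimal sketch, assuming those same names:

# config/database.yml (sketch)
test:
  adapter: postgresql
  encoding: unicode
  host: <%= ENV['DATABASE_HOST'] %>
  port: <%= ENV['DATABASE_PORT'] %>
  username: <%= ENV['DATABASE_USER'] %>
  password: <%= ENV['DATABASE_PASSWORD'] %>
  database: <%= ENV['DATABASE_NAME_TEST'] %>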

Related

Can't connect to container in CI

I can't connect to my Postgres container in CI. What am I missing? The containers in CI do get built. My Postgres connection string should look like this: "Server=localhost;Port=5432;User Id=root;Password=root;Database=employee_expenses_db;". I use this connection string when testing on my local machine, and it works fine with the containers on my local PC, but it does not work in CI: the containers are built, yet no connection to them can be established.
In CI, the tests must connect to the containers running there so that my integration tests pass.
ci.yml file
name: CI
on:
  push:
    branches: [master]
  pull_request:
  release:
    types: [published]
env:
  NUGET_PACKAGES: /opt/github/cache/${{ github.repository }}
  DOTNET_VERSION: 6.0.x
jobs:
  build:
    name: Build
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}
      - name: Build
        run: dotnet build
  test:
    name: Test
    runs-on: [self-hosted, linux]
    needs: build
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}
      - name: Run tests
        run: |
          docker-compose -f ./test/dockerCompose.yml build
          docker-compose -f ./test/dockerCompose.yml up -d
          dotnet test --configuration ${DOTNET_CONFIGURATION=Release} ./test/EmployeeExpensesApi.Tests
dockerCompose.yml
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    hostname: "rabbitmq"
    labels:
      NAME: "rabbitmq"
    ports:
      - '4369:4369'
      - '5551:5551'
      - '5552:5552'
      - '5672:5672'
      - '25672:25672'
      - '15672:15672'
    networks:
      - test-network
  postgres:
    image: postgres
    container_name: postgres
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: employee_expenses_db
      PGUSER: "root"
      POSTGRES_HOST_AUTH_METHOD: trust
    ports:
      - "5432:5432"
    restart: unless-stopped
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -d employee_expenses_db" ]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - test-network
  liquibase:
    container_name: liquibase
    build: ./liquibase
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - test-network
networks:
  test-network:
    driver: bridge

Github Actions: Connecting to postgres database in diesel-rs

I'm trying to run cargo test inside a CI workflow for my Actix Web app. Each test creates its own database by first connecting to the default database ("postgres") and then executing SQL queries.
This is the workflow currently used; the "Test postgres connection" step runs successfully, but "Cargo test" fails:
on: [push, pull_request]
name: CI
env:
  CARGO_TERM_COLOR: always
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    container: rust:latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      - name: Install PSQL
        run: |
          apt update
          apt install -y postgresql-client
      - name: Test postgres connection
        run: psql -h postgres -d postgres -U postgres -c 'SELECT 1;'
        env:
          PGPASSWORD: postgres
      - name: Cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
Here's an example of one of the tests:
struct Context {
    pub psql_user: String,
    pub psql_pw: String,
}

impl Context {
    fn new() -> Self {
        dotenv().ok();
        let psql_user =
            env::var("POSTGRES_USER").expect("POSTGRES_USER must be set for integration tests");
        let psql_pw = env::var("POSTGRES_PASSWORD")
            .expect("POSTGRES_PASSWORD must be set for integration tests");
        let database_url = format!(
            "postgres://{}:{}@localhost:5432/postgres",
            psql_user, psql_pw
        );
        let mut conn = PgConnection::establish(&database_url)
            .expect("Failed to connect to the database 'postgres'"); // This panics
        // ...
    }
}

#[actix_web::test]
async fn test_create_task_req() {
    let ctx = Context::new("create_task_test");
    // ...
}
I assume the mistake is somewhere in my code, as everything runs fine in the workflow until cargo test, which throws this error:
---- test_create_task_req stdout ----
thread 'test_create_task_req' panicked at 'Failed to connect to the database 'postgres':
BadConnection("could not connect to server: Connection refused
Is the server running on host \"localhost\" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host \"localhost\" (::1) and accepting
TCP/IP connections on port 5432?
")',
tests/tasks_crud_integration.rs:42:14
When running cargo test locally, no problems occur.
Through trial and error I ended up finding a working solution:
on: [push, pull_request]
name: CI
env:
  CARGO_TERM_COLOR: always
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    # Removed the container 'rust:latest'
    services:
      postgres:
        image: postgres # Removed version notation
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      # Removed 'Install PSQL' step as psql comes preinstalled in the postgres Docker Hub image
      - name: Test postgres connection
        run: psql postgres://postgres:postgres@localhost:5432/postgres -c 'SELECT 1;'
      - name: Cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
As seen above, the seemingly most critical change was removing the rust container (which was unnecessary for the workflow anyway). Despite having found a solution, I still don't know exactly what in that Docker image caused the problem in the first place.
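If keeping the rust:latest job container is preferable (for example, to pin the toolchain), the pattern described in the answer about custom container images further down suggests an alternative: connect to the service by its name instead of localhost, since the job container and the postgres service share a Docker network. A rough sketch, assuming the tests are changed to read the host from a hypothetical POSTGRES_HOST variable instead of hardcoding localhost:

jobs:
  test:
    runs-on: ubuntu-latest
    container: rust:latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_PASSWORD: postgres
        # No ports mapping needed: the job container reaches the service
        # directly over the shared Docker network as postgres:5432.
    steps:
      - uses: actions/checkout@v2
      - name: Cargo test
        run: cargo test --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_HOST: postgres  # hypothetical variable; the test code must read it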

Github Actions to Connect Postgres service with custom container image

In my Django project, I have a CI workflow for running tests, which requires a Postgres service. Recently a new app introduced heavier packages such as pandas, matplotlib, and pytorch, and this increased the run-tests job time from 2 to 12 minutes, which is absurd. I also maintain a base Docker image with Python and these heavier packages preinstalled to speed up image builds, so I was thinking of using that same image in the workflow, since the packages would already be there.
Unfortunately, everything goes well until it reaches the step that actually runs the tests, because the postgres service does not seem to be reachable from the container, and I get the following error:
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
This is my workflow right now. Any ideas on what I am doing wrong?
name: server-ci
on:
  pull_request:
    types: [opened]
env:
  DJANGO_SETTINGS_MODULE: settings_test
jobs:
  run-tests:
    name: Run tests
    runs-on: ubuntu-latest
    container:
      image: myimage/django-server:base
      credentials:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_PASSWORD }}
      ports:
        - 8000:8000
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: admin
          POSTGRES_DB: mydb
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    env:
      POSTGRES_HOST: localhost
      POSTGRES_PORT: 5432
      POSTGRES_PASSWORD: admin
      POSTGRES_USER: postgres
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: /opt/venv
          key: /opt/venv-${{ hashFiles('**/requirements.txt') }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -r requirements.txt
        if: steps.cache.outputs.cache-hit != 'true'
      - name: Run tests
        run: |
          ./manage.py test --parallel --verbosity=2
It turns out that the workflow is now running in a container of its own, next to the postgres container. So the port mapping to the runner VM doesn’t do anything any more (because it affects the host, not Docker containers on it).
The job and service containers get attached to the same Docker network, so all I need to do is change POSTGRES_HOST to postgres (the name of the service container) and Docker’s DNS should do the rest.
Credits: https://github.community/t/connect-postgres-service-with-custom-container-image/189994/2?u=everspader
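Concretely, that means the only change needed in the workflow above is the job-level env block (a sketch of just the affected lines):

env:
  POSTGRES_HOST: postgres  # the service container's name, instead of localhost
  POSTGRES_PORT: 5432
  POSTGRES_PASSWORD: admin
  POSTGRES_USER: postgres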

stuck on "starting your workflow run" on Github Actions

I wanted to try out the GitHub Actions feature, but my workflow is not starting; it is stuck and just says "Starting your workflow run...".
Is there something wrong in my build.yml file?
This is my build.yml file:
name: CI
on:
  pull_request:
    branches:
      - master
  workflow_dispatch:
env:
  POSTGRESQL_VERSION: 13.1
  POSTGRESQL_DB: students_info
  POSTGRESQL_USER: postgres
  POSTGRESQL_PASSWORD: password
  JAVA_VERSION: 1.15
jobs:
  build:
    runs-on: ubuntu-16.04
    services:
      postgres:
        image: postgres:13.1
        env:
          POSTGRES_DB: ${{ env.POSTGRESQL_DB }}
          POSTGRES_USER: ${{ env.POSTGRESQL_USER }}
          POSTGRES_PASSWORD: ${{ env.POSTGRESQL_PASSWORD }}
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1.4.3
        with:
          java-version: ${{ env.JAVA_VERSION }}
      - name: Maven Clean Package
        run: |
          ./mvnw --no-transfer-progress clean package -P build-frontend
PS: I have tried with ubuntu-latest as well.
There is currently a problem with GitHub Actions:
https://www.githubstatus.com/incidents/zbpwygxwb3gw

github actions - run sql script in postgres service

I want to run a script in the postgres service in GitHub Actions that creates a database and adds an extension. How can I do that? Do I need to make a shell script, or can I do it right in the YAML file?
sql script
drop database mydb;
create database mydb;
\c mydb;
CREATE EXTENSION "pgcrypto";
workflow
name: API Integration Tests
on:
pull_request:
push:
branches:
-master
env:
DB_HOST: localhost
DB_USERNAME: postgres
DB_PASSWORD: rt
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [10.x, 12.x, 13.x]
services:
postgres:
image: postgres:latest
env:
POSTGRES_DB: mydb
POSTGRES_PASSWORD: helloworl
POSTGRES_USER: postgres
ports:
- 5433:5432
# Set health checks to wait until postgres has started
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout#v1
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node#v1
with:
node-version: ${{ matrix.node-version }}
- name: npm install
run: npm ci
- name: npm test
run: npm run test
You can add a step that runs psql commands.
Here's an example step that creates your database:
- name: Create database
  run: |
    # Connect through the mapped host port (5433 above) and pipe the statement
    # through \gexec so the generated CREATE DATABASE actually runs.
    echo "SELECT 'CREATE DATABASE mydb' WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'mydb')\gexec" | PGPASSWORD=helloworl psql -h localhost -p 5433 -U postgres
By the way, I note that the next command you wanted was: CREATE EXTENSION "pgcrypto";, which I assume is because you want to generate UUIDs (a common use case). Please note that you do not need this for gen_random_uuid(), as it is natively supported in Postgres from v13 onwards.
However if you really, really, really wanted to add pgcrypto, you can use this step:
- name: Enable pgcrypto extension
  run: |
    PGPASSWORD=helloworl psql -h localhost -p 5433 -U postgres -d mydb -c 'CREATE EXTENSION IF NOT EXISTS "pgcrypto";'
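If you would rather run the whole SQL script as-is instead of inlining statements, psql can also execute a file from the checkout; a sketch, assuming the script is committed at a hypothetical path db/init.sql and using the 5433 host port mapped in the workflow above:

- name: Run SQL script
  run: |
    PGPASSWORD=helloworl psql -h localhost -p 5433 -U postgres -f db/init.sql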