GitHub Actions cannot connect to MongoDB service

I'm having trouble running my automated tests with GitHub Actions. I can't figure out why my integration tests can't connect to the MongoDB service. I tried different hosts (localhost, 127.0.0.1, 0.0.0.0), but none of them can connect to the database.
It works perfectly fine in my Docker setup, but for some reason not with GitHub Actions.
name: CI master
on: [push, pull_request]
env:
  RUST_BACKTRACE: 1
  CARGO_TERM_COLOR: always
  APP_ENV: development
  APP_MONGO_USER: test
  APP_MONGO_PASS: password
  APP_MONGO_DB: test
jobs:
  # Run tests
  test:
    name: Test
    runs-on: ubuntu-latest
    services:
      mongo:
        image: mongo
        env:
          MONGO_INITDB_ROOT_USERNAME: ${APP_MONGO_USER}
          MONGO_INITDB_ROOT_PASSWORD: ${APP_MONGO_PASS}
          MONGO_INITDB_DATABASE: ${APP_MONGO_DB}
        ports:
          - 27017:27017
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      - uses: actions-rs/cargo@v1
        with:
          command: test
Config file (development.toml).
[application]
host = "127.0.0.1"
port = 8080
Connecting to the database. The environment variables and config file get merged and I'm accessing them here through config: &Settings.
pub async fn init(config: &Settings) -> Result<Database> {
    let client_options = ClientOptions::parse(
        format!(
            "mongodb://{}:{}@{}:27017",
            config.mongo.user, config.mongo.pass, config.application.host
        )
        .as_str(),
    )
    .await?;
    let client = Client::with_options(client_options)?;
    let database = client.database("test"); // TODO: replace with env var
    database.run_command(doc! {"ping": 1}, None).await?;
    println!("Connected successfully.");
    Ok(database)
}
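One thing worth checking with this connection-string format: MongoDB requires the user and password parts of the URI to be percent-encoded if they contain reserved characters (`:`, `/`, `@`, `%`, and so on). A minimal, ASCII-only sketch of that encoding; the helper names here are illustrative and not part of the mongodb crate:

```rust
/// Percent-encode the characters that are reserved in the userinfo part of a
/// MongoDB connection string. Assumes ASCII input (a sketch, not RFC-complete).
fn percent_encode(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for b in s.bytes() {
        match b {
            b':' | b'/' | b'?' | b'#' | b'[' | b']' | b'@' | b'%' => {
                out.push_str(&format!("%{:02X}", b));
            }
            _ => out.push(b as char),
        }
    }
    out
}

/// Build the URI with encoded credentials, mirroring the format! call above.
fn mongo_uri(user: &str, pass: &str, host: &str) -> String {
    format!(
        "mongodb://{}:{}@{}:27017",
        percent_encode(user),
        percent_encode(pass),
        host
    )
}
```

With plain credentials like test/password the output is unchanged, so this only matters once the password picks up special characters.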
Calling the init function.
// Mongo
let mongo = mongo::init(&config).await.expect("Failed to init mongo");
The error I get.
thread 'health_check' panicked at 'Failed to init mongo: Error { kind: ServerSelectionError { message: "Server selection timeout: No available servers. Topology: { Type: Unknown, Servers: [ { Address: 127.0.0.1:27017, Type: Unknown, Error: Connection refused (os error 111) }, ] }" }, labels: [] }', tests/health_check.rs:31:44

I eventually solved it by adding a health check to my service. It seems the issue was that the database was not up yet when the tests started running.
services:
  mongodb:
    image: mongo
    env:
      MONGO_INITDB_ROOT_USERNAME: test
      MONGO_INITDB_ROOT_PASSWORD: password
      MONGO_INITDB_DATABASE: test
    options: >-
      --health-cmd mongo
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
    ports:
      - 27017:27017
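Besides the service-level health check, the same race can be guarded against from the test side by waiting until the port accepts TCP connections before calling the init function. A minimal sketch using only the standard library; the address and retry counts are illustrative:

```rust
use std::net::TcpStream;
use std::thread::sleep;
use std::time::Duration;

/// Poll `addr` until a TCP connection succeeds, or give up after `retries` attempts.
fn wait_for_port(addr: &str, retries: u32, delay: Duration) -> bool {
    for _ in 0..retries {
        if TcpStream::connect(addr).is_ok() {
            return true;
        }
        sleep(delay);
    }
    false
}
```

In a test harness this could run before mongo::init, e.g. `assert!(wait_for_port("127.0.0.1:27017", 30, Duration::from_secs(1)))`, turning a hard "Connection refused" into a bounded wait.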

Add a health check to the service that uses mongosh to check whether the database can be pinged.
services:
  mongo:
    image: mongo
    env:
      MONGO_INITDB_ROOT_USERNAME: username
      MONGO_INITDB_ROOT_PASSWORD: password
    options: >-
      --health-cmd "echo 'db.runCommand(\"ping\").ok' | mongosh --quiet"
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
      --name mongo_container

Related

Github Actions: Connecting to postgres database in diesel-rs

I'm trying to run cargo test inside a CI workflow for my Actix Web app. Each test creates its own database by first connecting to the default database ("postgres") and then executing SQL queries.
This is the workflow currently used; the "Test postgres connection" step runs successfully, but "Cargo test" fails:
on: [push, pull_request]
name: CI
env:
  CARGO_TERM_COLOR: always
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    container: rust:latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      - name: Install PSQL
        run: |
          apt update
          apt install -y postgresql-client
      - name: Test postgres connection
        run: psql -h postgres -d postgres -U postgres -c 'SELECT 1;'
        env:
          PGPASSWORD: postgres
      - name: Cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
Here's an example of one of the tests:
struct Context {
    pub psql_user: String,
    pub psql_pw: String,
}

impl Context {
    fn new(test_name: &str) -> Self {
        dotenv().ok();
        let psql_user =
            env::var("POSTGRES_USER").expect("POSTGRES_USER must be set for integration tests");
        let psql_pw = env::var("POSTGRES_PASSWORD")
            .expect("POSTGRES_PASSWORD must be set for integration tests");
        let database_url = format!(
            "postgres://{}:{}@localhost:5432/postgres",
            psql_user, psql_pw
        );
        let mut conn = PgConnection::establish(&database_url)
            .expect("Failed to connect to the database 'postgres'"); // This panics
        // ...
    }
}

#[actix_web::test]
async fn test_create_task_req() {
    let ctx = Context::new("create_task_test");
    // ...
}
I assume the mistake is somewhere in my code, as everything runs fine in the workflow until cargo test, which throws this error:
---- test_create_task_req stdout ----
thread 'test_create_task_req' panicked at 'Failed to connect to the database 'postgres':
BadConnection("could not connect to server: Connection refused
Is the server running on host \"localhost\" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host \"localhost\" (::1) and accepting
TCP/IP connections on port 5432?
")',
tests/tasks_crud_integration.rs:42:14
When running cargo test locally, no problems occur.
Through trial and error I ended up finding a working solution:
on: [push, pull_request]
name: CI
env:
  CARGO_TERM_COLOR: always
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    # Removed the container 'rust:latest'
    services:
      postgres:
        image: postgres # Removed version notation
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      # Removed 'Install PSQL' step as psql comes preinstalled on the runner
      - name: Test postgres connection
        run: psql postgres://postgres:postgres@localhost:5432/postgres -c 'SELECT 1;'
      - name: Cargo test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --verbose
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
As seen above, the most critical change seems to have been removing the rust container (which was unnecessary for the workflow anyway). Even though I found a working solution, I still don't know exactly what in that Docker image caused the problem in the first place.
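The usual explanation is that inside a job container the service is reachable via its service label (postgres), while on the plain VM runner it is localhost. One way to make the same test code work in both setups is to read the host from an environment variable with a localhost fallback. A sketch, assuming a POSTGRES_HOST variable that the workflow may or may not set:

```rust
/// Build the test database URL. Inside a job container the service is reached
/// via its label (pass Some("postgres")); on a VM runner the variable is unset
/// and we fall back to localhost.
fn database_url(user: &str, pass: &str, host_env: Option<String>) -> String {
    let host = host_env.unwrap_or_else(|| "localhost".to_string());
    format!("postgres://{}:{}@{}:5432/postgres", user, pass, host)
}
// usage: database_url(&psql_user, &psql_pw, std::env::var("POSTGRES_HOST").ok())
```

With this, keeping or removing the rust container only changes the value of POSTGRES_HOST in the workflow, not the test code.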

Disable caddy ssl to enable a deploy to Cloud Run through Gitlab CI

I am trying to deploy api-platform docker-compose images to Cloud Run. I have that working up to the point where the image starts to run. The system listens on port 8080, but I cannot turn off the HTTPS redirect, so it shuts down. Here is the message I receive:
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
run: loading initial config: loading new config: loading http app module: provision http: server srv0: setting up route handlers: route 0: loading handler modules: position 0: loading module 'subroute': provision http.handlers.subroute: setting up subroutes: route 0: loading handler modules: position 0: loading module 'subroute': provision http.handlers.subroute: setting up subroutes: route 1: loading handler modules: position 0: loading module 'mercure': provision http.handlers.mercure: a JWT key for publishers must be provided
I am using Cloud Build to build the container and deploy to Run. This is kicked off through Gitlab CI.
This is my cloudbuild.yaml file.
steps:
  - name: docker/compose
    args: ['build']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['tag', 'workspace_caddy:latest', 'gcr.io/$PROJECT_ID/apiplatform']
  - name: docker
    args: ['push', 'gcr.io/$PROJECT_ID/apiplatform']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'erp-ui', '--image', 'gcr.io/$PROJECT_ID/apiplatform', '--region', 'us-west1', '--platform', 'managed', '--allow-unauthenticated']
docker-compose.yml
version: "3.4"
services:
  php:
    build:
      context: ./api
      target: api_platform_php
    depends_on:
      - database
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
    healthcheck:
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 30s
  pwa:
    build:
      context: ./pwa
      target: api_platform_pwa_prod
    environment:
      API_PLATFORM_CLIENT_GENERATOR_ENTRYPOINT: http://caddy
  caddy:
    build:
      context: api/
      target: api_platform_caddy
    depends_on:
      - php
      - pwa
    environment:
      PWA_UPSTREAM: pwa:3000
      SERVER_NAME: ${SERVER_NAME:-localhost, caddy:8080}
      MERCURE_PUBLISHER_JWT_KEY: ${MERCURE_PUBLISHER_JWT_KEY:-!ChangeMe!}
      MERCURE_SUBSCRIBER_JWT_KEY: ${MERCURE_SUBSCRIBER_JWT_KEY:-!ChangeMe!}
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
      - caddy_data:/data
      - caddy_config:/config
    ports:
      # HTTP
      - target: 8080
        published: 8080
        protocol: tcp
  database:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=api
      - POSTGRES_PASSWORD=!ChangeMe!
      - POSTGRES_USER=api-platform
    volumes:
      - db_data:/var/lib/postgresql/data:rw
      # you may use a bind-mounted host directory instead, so that it is harder to accidentally remove the volume and lose all your data!
      # - ./api/docker/db/data:/var/lib/postgresql/data:rw
volumes:
  php_socket:
  db_data:
  caddy_data:
  caddy_config:

Docker Containers up and running but cannot load the localhost

I was able to run Wiki.js and MongoDB containers using docker-compose. Both are running without any errors.
This is my docker-compose file.
version: '3'
services:
  wikidb:
    image: mongo:3
    expose:
      - '27017'
    command: '--smallfiles --bind_ip ::,0.0.0.0'
    environment:
      - 'MONGO_LOG_DIR=/dev/null'
    volumes:
      - $HOME/mongo/db:/data/db
  wikijs:
    image: 'requarks/wiki:1.0'
    links:
      - wikidb
    depends_on:
      - wikidb
    ports:
      - '8000:3000'
    environment:
      WIKI_ADMIN_EMAIL: myemail@gmail.com
    volumes:
      - $HOME/wiki/config.yml:/var/wiki/config.yml
This is the config.yml file. I didn't set up Git for this project.
title: Wiki
host: http://localhost
port: 8000
paths:
  repo: ./repo
  data: ./data
uploads:
  maxImageFileSize: 3
  maxOtherFileSize: 100
db: mongodb://wikidb:27017/wiki
git:
  url: https://github.com/Organization/Repo
  branch: master
  auth:
    type: ssh
    username: marty
    password: MartyMcFly88
    privateKey: /etc/wiki/keys/git.pem
    sslVerify: true
serverEmail: marty@example.com
showUserEmail: true
But I cannot load localhost on port 8000. Is there a specific reason for this?

How to connect to Postgres in GitHub Actions

I am trying GitHub Actions for CI with a Ruby on Rails application.
My setup runs directly on the VM, not with the Ruby build in a container.
This is my workflow yml. It runs all the way without errors until the step "Setup Database".
name: Rails CI
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:10.10
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: db_test
        ports:
          - 5432/tcp
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      redis:
        image: redis:latest
        ports:
          - 6379/tcp
    steps:
      - uses: actions/checkout@v1
      - name: Set up ruby 2.5
        uses: actions/setup-ruby@v1
        with:
          ruby-version: 2.5.5
      - name: Set up node 8.14
        uses: actions/setup-node@v1
        with:
          node-version: '8.14'
      - name: Setup system dependencies
        run: sudo apt-get install libpq-dev
      - name: Setup App Dependencies
        run: |
          gem install bundler -v 1.17.3 --no-document
          bundle install --jobs 4 --retry 3
          npm install
          npm install -g yarn
      - name: Run rubocop
        run: bundle exec rubocop
      - name: Run brakeman
        run: bundle exec brakeman
      - name: Setup Database
        env:
          RAILS_ENV: test
          POSTGRES_HOST: localhost
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: |
          cp config/database.yml.ci config/database.yml
          bundle exec rails db:create
          bundle exec rails db:schema:load
      - name: Run rspec
        env:
          RAILS_ENV: test
          REDIS_HOST: redis
          REDIS_PORT: ${{ job.services.redis.ports[6379] }}
          POSTGRES_HOST: localhost
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: bundle exec rspec --tag ~type:system
I am able to install Ruby, Node, the images, Postgres as a service, etc., and run Rubocop and Brakeman. But when I try to set up the DB before running RSpec, it says it cannot connect to the DB.
As far as I've been able to ascertain, the host is localhost when running the VM configuration as opposed to a container configuration.
This is the database.yml.ci that the "Setup Database" step copies to the database.yml to be used by Rails.
test:
  adapter: postgresql
  encoding: unicode
  database: db_test
  pool: 5
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>
I expected Postgres to be correctly set up and bundle exec rails db:create to create the database. However, it throws the following error:
rails aborted!
PG::ConnectionBad: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
I've tried all sorts of different configurations, but unfortunately, Actions is sort of new and there doesn't seem to be a lot of material available online.
Any ideas on how to fix this?
===========================
EDIT:
So I was able to sort this out through trial and error. I ended up running the build in a Docker container with Ruby and Node. This is the working configuration:
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
      - development
      - release
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: timbru31/ruby-node:latest
    services:
      postgres:
        image: postgres:11
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      chrome:
        image: selenium/standalone-chrome:latest
        ports:
          - 4444:4444
    steps:
      - uses: actions/checkout@v1
      - name: Setup app dependencies
        run: |
          gem install bundler -v 1.17.3 --no-document
          bundle install --jobs 4 --retry 3
          npm install
          npm install -g yarn
      - name: Run rubocop
        run: bundle exec rubocop
      - name: Run brakeman
        run: bundle exec brakeman
      - name: Setup database
        env:
          RAILS_ENV: test
          POSTGRES_HOST: postgres
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: |
          cp config/database.yml.ci config/database.yml
          bundle exec rails db:create
          bundle exec rails db:schema:load
      - name: Run rspec
        env:
          RAILS_ENV: test
          POSTGRES_HOST: postgres
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
          SELENIUM_URL: 'http://chrome:4444/wd/hub'
        run: bundle exec rspec
And the CI DB configuration database.yml.ci
default: &default
  adapter: postgresql
  encoding: unicode
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>
  pool: 5
  database: <%= ENV['POSTGRES_DB'] %>
test:
  <<: *default
I have a slightly different setup, but this was the most relevant question when I encountered the same error, so I wanted to post here in case it can help. The two things that were critical for me were:
1) Set the DB_HOST=localhost
2) Set the --network="host" argument when you start the docker container with your rails app
name: Master Build
on: [push]
env:
  registry: my_registry_name
  # Not sure these are actually being passed down to rails, set them as the default in database.yml
  DB_HOST: localhost
  DB_USERNAME: postgres
  DB_PASSWORD: postgres
jobs:
  my_image_test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_DB: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_USER: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Build my_image docker image
        uses: whoan/docker-build-with-cache-action@v5
        with:
          username: "${{secrets.aws_ecr_access_key_id}}"
          password: "${{secrets.aws_ecr_secret_access_key}}"
          registry: "${{env.registry}}"
          image_name: my_image
          context: my_image
      - name: Lint rubocop
        working-directory: ./my_image
        run: docker run $registry/my_image bundle exec rubocop
      - name: Run rails tests
        working-directory: ./my_image
        run: docker run --network="host" $registry/my_image bash -c "RAILS_ENV=test rails db:create && RAILS_ENV=test rails db:migrate && rails test"
Your problem appears to be that Postgres is not exposed on port 5432. Try to replace the port number with ${{ job.services.postgres.ports[5432] }}.
There are examples here: https://github.com/actions/example-services/blob/master/.github/workflows/postgres-service.yml
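When the workflow exposes a random host port like that (ports: - 5432/tcp), the test code has to pick the mapped port up from an environment variable instead of assuming 5432. A sketch of that lookup in the test harness; the PG_PORT name follows the env block above:

```rust
/// Resolve the Postgres port from an environment value, falling back to the
/// default 5432 when the workflow did not export a mapped port (or the value
/// is not a valid port number).
fn pg_port(env_value: Option<String>) -> u16 {
    env_value
        .and_then(|v| v.parse::<u16>().ok())
        .unwrap_or(5432)
}
// usage: let port = pg_port(std::env::var("PG_PORT").ok());
```

This keeps local runs (where PG_PORT is usually unset) and CI runs (where the mapped port might be, say, 32768) on the same code path.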
I had this challenge when trying to set up GitHub actions for a Rails Application.
Here's what worked for me:
name: Ruby
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        ruby-version:
          - '2.7.2'
        node-version:
          - '12.22'
        database-name:
          - my-app
        database-password:
          - postgres
        database-user:
          - postgres
        database-host:
          - 127.0.0.1
        database-port:
          - 5432
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_DB: ${{ matrix.database-name }}
          POSTGRES_USER: ${{ matrix.database-user }}
          POSTGRES_PASSWORD: ${{ matrix.database-password }}
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Check out Git Repository
        uses: actions/checkout@v2
      - name: Set up Ruby, Bundler and Rails
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: ${{ matrix.ruby-version }}
          bundler-cache: true # runs 'bundle install' and caches installed gems automatically
      - name: Set up Node
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install packages
        run: |
          yarn install --check-files
      - name: Setup test database
        env:
          RAILS_ENV: test
          DATABASE_NAME_TEST: ${{ matrix.database-name }}
          DATABASE_USER: ${{ matrix.database-user }}
          DATABASE_PASSWORD: ${{ matrix.database-password }}
          DATABASE_HOST: ${{ matrix.database-host }}
          DATABASE_PORT: ${{ matrix.database-port }}
          POSTGRES_DB: ${{ matrix.database-name }}
        run: |
          bundle exec rails db:migrate
          bundle exec rails db:seed
Note:
1) Replace my-app with the name of your app.
2) You can leave the database-password and database-user as postgres.
That's all. I hope this helps.

How do I connect to a GitHub Action's job's service?

GitHub Actions allow you to run background services on a per-job basis. After following the examples, I can't figure out how to connect to a running PostgreSQL container.
I've attempted a few different approaches in this pull request, but none of them have worked.
name: dinosql test suite
on: [push]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432/tcp
    steps:
      - uses: actions/checkout@master
      - name: Test dinosql/ondeck
        run: go test -v ./...
        working-directory: internal/dinosql/testdata/ondeck
        env:
          PG_USER: postgres
          PG_DATABASE: postgres
          PG_PASSWORD: postgres
          PG_PORT: ${{ job.services.postgres.ports['5432'] }}
This setup results in the following error:
Run go test -v ./...
=== RUN TestQueries
=== PAUSE TestQueries
=== RUN TestPrepared
=== PAUSE TestPrepared
=== CONT TestQueries
=== CONT TestPrepared
--- FAIL: TestPrepared (0.00s)
##[error] db_test.go:212: db: postgres://postgres:postgres@127.0.0.1:32768/postgres?sslmode=disable
##[error] db_test.go:212: dial tcp 127.0.0.1:32768: connect: connection refused
--- FAIL: TestQueries (0.00s)
##[error] db_test.go:83: db: postgres://postgres:postgres@127.0.0.1:32768/postgres?sslmode=disable
##[error] db_test.go:83: dial tcp 127.0.0.1:32768: connect: connection refused
FAIL
FAIL example.com/ondeck 0.005s
? example.com/ondeck/prepared [no test files]
##[error]Process completed with exit code 1.
The tests should pass if a valid database connection could be made.
I ran into the same problem and found this example via GitHub's code search after a lot of trial and error.
name: dinosql test suite
on: [push]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432/tcp
        # needed because the postgres container does not provide a healthcheck
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      - uses: actions/checkout@master
      - name: Test dinosql/ondeck
        run: go test -v ./...
        working-directory: internal/dinosql/testdata/ondeck
        env:
          # use postgres for the host here because we have specified a container for the job.
          # If we were running the job on the VM this would be localhost
          PG_HOST: postgres
          PG_USER: postgres
          PG_DATABASE: postgres
          PG_PASSWORD: postgres
          PG_PORT: ${{ job.services.postgres.ports['5432'] }}
Adding the healthcheck options and changing the database hostname from 127.0.0.1 to postgres should do the trick.
It appears that without the healthcheck options, the postgres container will be shut down and won't be available for the tests.
If you're not running your job in a container (unlike the example above, i.e. the job runs directly on the ubuntu-latest VM), you should still use localhost and just map the ports:
services:
  # Label used to access the service container
  postgres:
    # Docker Hub image
    image: postgres
    # Provide the password for postgres
    env:
      POSTGRES_PASSWORD: postgres
    # Set health checks to wait until postgres has started
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
    ports:
      # Maps tcp port 5432 on service container to the host
      - 5432:5432
https://docs.github.com/en/actions/using-containerized-services/creating-postgresql-service-containers#running-jobs-directly-on-the-runner-machine