CircleCI with Postgres - cannot connect - postgresql

In my Circle build, I'm trying to use a Postgres container for testing. The test DB will be created automatically, but Python's not finding the DB. Here's my config.yml:
version: 2.1

orbs:
  python: circleci/python@0.2.1

jobs:
  build:
    executor: python/default
    docker:
      - image: circleci/python:3.8.0
        environment:
          - ENV: CIRCLE
      - image: circleci/postgres:9.6
        environment:
          POSTGRES_USER: circleci
          POSTGRES_DB: circle_test
          POSTGRES_HOST_AUTH_METHOD: trust
    steps:
      - checkout
      - python/load-cache
      - run:
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip3 install -r requirements.txt
      - python/save-cache
      - run:
          name: Running tests
          command: |
            . venv/bin/activate
            python3 ./api/manage.py test
      - store_artifacts:
          path: test-reports/
          destination: python_app

workflows:
  main:
    jobs:
      - build
It seems like everything's fine until the tests start:
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
There's no such folder:
ls -a /var/run/
. .. exim4 lock utmp

It looks like psycopg2 defaults to using the UNIX socket, so you'll need to specify the db host as localhost in your connect() call.
As an aside, you're also specifying an additional, unneeded Python image. The job's executor is already set to python/default, so the circleci/python:3.8.0 image doesn't appear to be doing anything in the config you've pasted.
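In concrete terms (a sketch of my own, not from the asker's project; the helper name and env-var fallbacks are assumptions), the fix is just to pass an explicit host so libpq opens a TCP connection instead of looking for the UNIX socket:

```python
import os

def pg_connect_kwargs(env=os.environ):
    """Build connection kwargs for psycopg2.connect().

    With no explicit host, libpq looks for the UNIX socket at
    /var/run/postgresql/.s.PGSQL.5432, which doesn't exist in the
    CircleCI primary container. An explicit host forces TCP.
    """
    return {
        "host": env.get("PGHOST", "localhost"),
        "user": env.get("PGUSER", "circleci"),
        "dbname": env.get("POSTGRES_DB", "circle_test"),
        "port": int(env.get("PGPORT", "5432")),
    }

# conn = psycopg2.connect(**pg_connect_kwargs())
```

With PGHOST unset in the primary container this resolves to localhost, which is where CircleCI exposes the secondary (Postgres) container's ports.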

Some changes in the PGHOST and DATABASE_URL config solved it:
jobs:
  build:
    executor: python/default
    docker:
      - image: circleci/python:3.8.0
        environment:
          ENV: CIRCLE
          DATABASE_URL: postgresql://circleci@localhost/circle_test
      - image: circleci/postgres:9.6
        environment:
          PGHOST: localhost
          PGUSER: circleci
          POSTGRES_USER: circleci
          POSTGRES_DB: circle_test
          POSTGRES_HOST_AUTH_METHOD: trust
    steps:
      - checkout
      - python/load-cache
      - run:
          name: Wait for db to run
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip3 install -r requirements.txt
      - python/save-cache
      - run:
          name: Running tests
          command: |
            . venv/bin/activate
            cat /etc/hosts
            python3 ./django/gbookapi/manage.py test gbook.tests
      - store_test_results:
          path: test-reports.xml
      - store_artifacts:
          path: test-reports/
          destination: python_app

workflows:
  main:
    jobs:
      - build
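As a sanity check on that DATABASE_URL (my own snippet using only the standard library, not part of the original answer), the URL decomposes into exactly the pieces a Postgres driver needs:

```python
from urllib.parse import urlparse

url = urlparse("postgresql://circleci@localhost/circle_test")
host = url.hostname            # "localhost" -- forces TCP rather than the UNIX socket
user = url.username            # "circleci"
dbname = url.path.lstrip("/")  # "circle_test"
print(host, user, dbname)
```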

Related

Celery integration in CircleCI

I want to know how to run Celery in CircleCI.
Below is my initial config.yml file.
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.7
      - image: circleci/postgres:9.6.2-alpine
        environment:
          POSTGRES_USER: $POSTGRES_USER
          POSTGRES_PASSWORD: $POSTGRES_PASSWORD
          POSTGRES_DB: $POSTGRES_DB
    steps:
      - checkout
      - run:
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt
      - run:
          name: Running tests
          command: |
            . venv/bin/activate
            cd app/
            python3 manage.py test

Github Actions to Connect Postgres service with custom container image

In my Django project, I have a CI workflow for running tests, which requires a Postgres service. Recently a new app introduced heavier packages such as pandas, matplotlib, pytorch, and so on, and this increased the run-tests job time from 2 to 12 minutes, which is absurd. My project also has a base Docker image containing Python and these heavier packages, used to speed up image builds. So I was thinking of using this same image in the workflow when running the steps, since the packages would already be installed.
Unfortunately, all goes well until it reaches the step that actually runs the tests, because it seems the postgres service is not connected to the container, and I get the following error:
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
This is my workflow right now. Any ideas on what I am doing wrong?
name: server-ci

on:
  pull_request:
    types: [opened]

env:
  DJANGO_SETTINGS_MODULE: settings_test

jobs:
  run-tests:
    name: Run tests
    runs-on: ubuntu-latest
    container:
      image: myimage/django-server:base
      credentials:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_PASSWORD }}
      ports:
        - 8000:8000
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: admin
          POSTGRES_DB: mydb
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    env:
      POSTGRES_HOST: localhost
      POSTGRES_PORT: 5432
      POSTGRES_PASSWORD: admin
      POSTGRES_USER: postgres
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: /opt/venv
          key: /opt/venv-${{ hashFiles('**/requirements.txt') }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -r requirements.txt
        if: steps.cache.outputs.cache-hit != 'true'
      - name: Run tests
        run: |
          ./manage.py test --parallel --verbosity=2
It turns out that the workflow is now running in a container of its own, next to the postgres container. So the port mapping to the runner VM doesn’t do anything any more (because it affects the host, not Docker containers on it).
The job and service containers get attached to the same Docker network, so all I need to do is change POSTGRES_HOST to postgres (the name of the service container) and Docker’s DNS should do the rest.
Credits: https://github.community/t/connect-postgres-service-with-custom-container-image/189994/2?u=everspader
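The rule of thumb can be summed up in a line (my own hypothetical helper, not from the linked thread):

```python
def postgres_host(job_runs_in_container: bool) -> str:
    """Pick the Postgres host for a GitHub Actions service container.

    - Container job: the job and service containers share a Docker
      network, so the service name ("postgres") resolves via Docker DNS.
    - VM job: the service's port is mapped onto the runner itself,
      so connect to localhost (with the mapped port).
    """
    return "postgres" if job_runs_in_container else "localhost"
```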

How to connect to Postgres in GitHub Actions

I am trying GitHub Actions for CI with a Ruby on Rails application.
My setup runs the build on the VM, not inside a container.
This is my workflow yml. It runs all the way without errors until the step "Setup Database".
name: Rails CI

on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:10.10
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: db_test
        ports:
          - 5432/tcp
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      redis:
        image: redis:latest
        ports:
          - 6379/tcp
    steps:
      - uses: actions/checkout@v1
      - name: Set up ruby 2.5
        uses: actions/setup-ruby@v1
        with:
          ruby-version: 2.5.5
      - name: Set up node 8.14
        uses: actions/setup-node@v1
        with:
          node-version: '8.14'
      - name: Setup system dependencies
        run: sudo apt-get install libpq-dev
      - name: Setup App Dependencies
        run: |
          gem install bundler -v 1.17.3 --no-document
          bundle install --jobs 4 --retry 3
          npm install
          npm install -g yarn
      - name: Run rubocop
        run: bundle exec rubocop
      - name: Run brakeman
        run: bundle exec brakeman
      - name: Setup Database
        env:
          RAILS_ENV: test
          POSTGRES_HOST: localhost
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: |
          cp config/database.yml.ci config/database.yml
          bundle exec rails db:create
          bundle exec rails db:schema:load
      - name: Run rspec
        env:
          RAILS_ENV: test
          REDIS_HOST: redis
          REDIS_PORT: ${{ job.services.redis.ports[6379] }}
          POSTGRES_HOST: localhost
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: bundle exec rspec --tag ~type:system
I am able to install ruby, node, the images, Postgres as a service, etc, and run Rubocop and Brakeman. But when I try to set up the DB before running Rspec it says it cannot connect to the DB.
As far as I've been able to ascertain, the host is localhost when running the VM configuration as opposed to a container configuration.
This is the database.yml.ci that the "Setup Database" step copies to the database.yml to be used by Rails.
test:
  adapter: postgresql
  encoding: unicode
  database: db_test
  pool: 5
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>
I expected Postgres to be correctly set up and bundle exec rails db:create to create the database. However, it throws the following error:
rails aborted!
PG::ConnectionBad: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
I've tried all sorts of different configurations, but unfortunately Actions is still fairly new and there doesn't seem to be much material available online.
Any ideas on how to fix this?
===========================
EDIT:
So I was able to sort this out through trial and error. I ended up running the build inside a container, using an image that includes both Ruby and Node. This is the working configuration:
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
      - development
      - release

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: timbru31/ruby-node:latest
    services:
      postgres:
        image: postgres:11
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      chrome:
        image: selenium/standalone-chrome:latest
        ports:
          - 4444:4444
    steps:
      - uses: actions/checkout@v1
      - name: Setup app dependencies
        run: |
          gem install bundler -v 1.17.3 --no-document
          bundle install --jobs 4 --retry 3
          npm install
          npm install -g yarn
      - name: Run rubocop
        run: bundle exec rubocop
      - name: Run brakeman
        run: bundle exec brakeman
      - name: Setup database
        env:
          RAILS_ENV: test
          POSTGRES_HOST: postgres
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: |
          cp config/database.yml.ci config/database.yml
          bundle exec rails db:create
          bundle exec rails db:schema:load
      - name: Run rspec
        env:
          RAILS_ENV: test
          POSTGRES_HOST: postgres
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: ci_db_test
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
          SELENIUM_URL: 'http://chrome:4444/wd/hub'
        run: bundle exec rspec
And the CI DB configuration database.yml.ci
default: &default
  adapter: postgresql
  encoding: unicode
  username: <%= ENV['POSTGRES_USER'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>
  pool: 5
  database: <%= ENV['POSTGRES_DB'] %>

test:
  <<: *default
I have a slightly different setup but this was the most relevant question when I encountered the same error so wanted to post here in case it can help. The two things that were critical for me were:
1) Set the DB_HOST=localhost
2) Set the --network="host" argument when you start the docker container with your rails app
name: Master Build

on: [push]

env:
  registry: my_registry_name
  # Not sure these are actually being passed down to rails, set them as the default in database.yml
  DB_HOST: localhost
  DB_USERNAME: postgres
  DB_PASSWORD: postgres

jobs:
  my_image_test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_DB: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_USER: postgres
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Build my_image docker image
        uses: whoan/docker-build-with-cache-action@v5
        with:
          username: "${{secrets.aws_ecr_access_key_id}}"
          password: "${{secrets.aws_ecr_secret_access_key}}"
          registry: "${{env.registry}}"
          image_name: my_image
          context: my_image
      - name: Lint rubocop
        working-directory: ./my_image
        run: docker run $registry/my_image bundle exec rubocop
      - name: Run rails tests
        working-directory: ./my_image
        run: docker run --network="host" $registry/my_image bash -c "RAILS_ENV=test rails db:create && RAILS_ENV=test rails db:migrate && rails test"
Your problem appears to be that Postgres is not exposed on port 5432. Try to replace the port number with ${{ job.services.postgres.ports[5432] }}.
There are examples here: https://github.com/actions/example-services/blob/master/.github/workflows/postgres-service.yml
I had this challenge when trying to set up GitHub actions for a Rails Application.
Here's what worked for me:
name: Ruby

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        ruby-version:
          - '2.7.2'
        node-version:
          - '12.22'
        database-name:
          - my-app
        database-password:
          - postgres
        database-user:
          - postgres
        database-host:
          - 127.0.0.1
        database-port:
          - 5432
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_DB: ${{ matrix.database-name }}
          POSTGRES_USER: ${{ matrix.database-user }}
          POSTGRES_PASSWORD: ${{ matrix.database-password }}
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Check out Git Repository
        uses: actions/checkout@v2
      - name: Set up Ruby, Bundler and Rails
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: ${{ matrix.ruby-version }}
          bundler-cache: true # runs 'bundle install' and caches installed gems automatically
      - name: Set up Node
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install packages
        run: |
          yarn install --check-files
      - name: Setup test database
        env:
          RAILS_ENV: test
          DATABASE_NAME_TEST: ${{ matrix.database-name }}
          DATABASE_USER: ${{ matrix.database-user }}
          DATABASE_PASSWORD: ${{ matrix.database-password }}
          DATABASE_HOST: ${{ matrix.database-host }}
          DATABASE_PORT: ${{ matrix.database-port }}
          POSTGRES_DB: ${{ matrix.database-name }}
        run: |
          bundle exec rails db:migrate
          bundle exec rails db:seed
Note:
Replace my-app with the name of your app.
You can leave the database-password and database-user as postgres
That's all.
I hope this helps

Gitlab CI with Docker images - Flask microservice testing database

I am trying to deploy a Flask app/service, built into a Docker container, to GitLab CI. I am able to get everything working via docker-compose, except when I try to run tests against the postgres database I am getting the below error:
Is the server running on host "events_db" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
Presumably this is because the containers can't see each other. I've tried many different approaches; below is my latest. I have attempted to have docker-compose spin up both containers (just like it does locally), to run the postgres db as a GitLab service, to run from a python image instead of a docker image, and to use a docker.prod.yml where I remove the volumes and variables.
Nothing is working. I've checked just about every link that shows up on Google for 'gitlab ci docker flask postgres', and I believe I am massively misunderstanding the implementation.
I do have gitlab runner up and going.
.gitlab-ci.yml
image: docker:latest

services:
  - docker:dind
  - postgres:latest

stages:
  - test

variables:
  POSTGRES_DB: events_test
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  DATABASE_URL: postgres://postgres@postgres:5432/events_test
  FLASK_ENV: development
  APP_SETTINGS: app.config.TestingConfig
  DOCKER_COMPOSE_VERSION: 1.23.2

before_script:
  #- rm /usr/local/bin/docker-compose
  - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
  - pip install docker-compose
  #- mv docker-compose /usr/local/bin
  - docker-compose up -d --build

test:
  stage: test
  #coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
  script:
    - docker-compose exec -T events python manage.py test
  after_script:
    - docker-compose down
docker-compose.yml
version: '3.3'

services:
  events:
    build:
      context: ./services/events
      dockerfile: Dockerfile
    volumes:
      - './services/events:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=app.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@events_db:5432/events_dev # new
      - DATABASE_TEST_URL=postgres://postgres:postgres@events_db:5432/events_test # new
  events_db:
    build:
      context: ./services/events/app/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
What is the executor type of your Gitlab Runner?
If you're using the Kubernetes executor, add this variable:
DOCKER_HOST: tcp://localhost:2375/
For non-Kubernetes executors, we use tcp://docker:2375/
DOCKER_HOST: tcp://docker:2375/
Also, the Gitlab Runner should be in "privileged" mode.
More info:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#help-and-feedback
Hope that helps!

CircleCI Swift with Postgres connection issues

I am working in my repo to build a test app for Swift with CircleCI and Postgres, but I can't seem to grasp how to connect the two images during the testing phase.
I am running
circleci local execute --job build
This should build both the swift and postgres images. I give them both the same env variables I use in the application. However, I get this error when trying to run it. In my experience setting up the two Docker containers with compose, this error appeared when my API container could not reach the db container over the network.
Test Case 'AppTests.RemoveUserTest' started at 2019-04-09 19:46:15.380
Fatal error: 'try!' expression unexpectedly raised an error: NIO.ChannelError.connectFailed(NIO.NIOConnectionError(host: "db", port: 5432, dnsAError: Optional(NIO.SocketAddressError.unknown(host: "db", port: 5432)), dnsAAAAError: Optional(NIO.SocketAddressError.unknown(host: "db", port: 5432)), connectionErrors: [])): file /home/buildnode/jenkins/workspace/oss-swift-4.2-package-linux-ubuntu-16_04/swift/stdlib/public/core/ErrorType.swift, line 184
I know it says it failed because of a try! statement, but that statement is failing because it's making requests to Postgres, which isn't there. Any ideas?
My current config.yml for circleci
version: 2
jobs:
  build:
    docker:
      - image: swift:4.2
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
          DB_HOSTNAME: db
          PORT: 5432
      - image: postgres:11.2-alpine
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
    steps:
      - checkout
      - run: apt-get update -qq
      - run: apt-get install -yq libssl-dev pkg-config wget
      - run: apt-get install -y postgresql-client || true
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
          environment:
            DOCKERIZE_VERSION: v0.3.0
      - run:
          name: Wait for db
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Compile code
          command: swift build
      - run:
          name: Run unit tests
          command: swift test
  release:
    docker:
      - image: swift:4.2
    steps:
      - checkout
      - run:
          name: Compile code with optimizations
          command: swift build -c release
  push-to-docker-hub:
    docker:
      - image: docker:latest
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --update --no-cache curl jq python py-pip
      - run:
          name: Build Docker Image
          command: |
            docker build -t api .
            docker tag api <>/repo:latest
            docker tag api <>/repo:$CIRCLE_SHA1
            docker login -u $DOCKER_USER -p $DOCKER_PASS
            docker push <>/repo:latest
            docker push <>/repo:$CIRCLE_SHA1
      # - persist_to_workspace:
      #     root: ./
      #     paths:
      #       - k8s-*.yml

workflows:
  version: 2
  tests:
    jobs:
      - build
      - push-to-docker-hub:
          requires:
            - build
          context: dockerhub
          filters:
            branches:
              only: master
      #- linux-release
You're setting the hostname for the database to db, but not defining that anywhere. You need to name your Docker container to match the DB_HOSTNAME environment variable, like so: https://github.com/vapor/postgresql/blob/master/circle.yml#L8
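A sketch of what the linked config boils down to (adapted by me, not the poster's actual file): CircleCI lets you set a name on a secondary container, which becomes the hostname it's reachable by from the primary container.

```yaml
jobs:
  build:
    docker:
      - image: swift:4.2
        environment:
          DB_HOSTNAME: db
      - image: postgres:11.2-alpine
        name: db   # makes this container reachable as "db"
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
```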