I have the following GitHub workflow for building my project:
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Build with Maven
        run: mvn clean compile test
The build works just fine.
However, the project's JUnit tests require that a server is listening on localhost port 4444, and I get the following error:
Connection refused: localhost/127.0.0.1:4444
The server is spun up before each JUnit test and is part of the test suite.
How do I tell the Docker container that network connections are allowed on this port? Or are any ports open by default?
I'm sharing my solution; hopefully it will help.
First, the Dockerfile for the test server listening on a port (in my case 8080). As the comments above mention, you need to expose your port:
FROM golang:1.15.2-alpine
# Set up your server
# This container exposes port 8080 to the outside world
EXPOSE 8080
# Run the executable
CMD ["./main"]
The docker-compose file (this is not a must, but in my case I had to start multiple dependent containers). Again, make sure the port is published:
services:
  main-server:
    build: ./
    container_name: main-server
    image: artofimagination/main-server
    ports:
      - 8080:8080
The GitHub Action: just run the docker-compose command for your server, then the application that needs to connect to the port. In this case pytest sends requests to main-server through port 8080.
It is worth mentioning that in my example pytest accesses 127.0.0.1:8080:
- name: Check out code into the Go module directory
  uses: actions/checkout@v2
- name: Start test server
  run: docker-compose up -d main-server
- name: Run functional test
  run: pip3 install -r test/requirements.txt && pytest -v test
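If pytest ever starts before the server has finished booting, you will see the same "Connection refused" symptom. A minimal wait step you could slot in between the two; the health URL is an assumption, point it at whatever your server answers on:
- name: Wait for test server
  run: timeout 30 sh -c 'until curl -sf http://127.0.0.1:8080; do sleep 1; done' # assumed health URL; poll until the server responds or 30 s elapse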
Good luck and I hope this helps.
In GitHub Actions I want to test my frontend with Cypress. I run my Jekyll application in a Docker container and then run the tests in the next step. However, when connecting the Cypress tests to the Docker container I get a 'Connection refused' error. Does anyone know how I can access my container after running docker-compose?
This is my GitHub Actions YAML file:
name: Cypress Tests
on: [push]
jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Cypress run
        uses: cypress-io/github-action@v4.2.0
        with:
          start: docker-compose up -d
This is my docker-compose file:
version: '2'
services:
  jekyll:
    image: jekyll/jekyll:latest
    command: jekyll serve --watch --force_polling --verbose
    ports:
      - 4000:4000
    volumes:
      - .:/srv/jekyll
So apparently, although my Jekyll container started perfectly fine, the actual Jekyll server crashed on startup. I needed to pass the UID and GID of the GitHub Actions user to my Jekyll environment for it to start correctly.
This is my new docker-compose file:
version: '2'
services:
  jekyll:
    image: jekyll/jekyll:latest
    command: jekyll serve --watch --force_polling --verbose
    hostname: localhost
    environment:
      JEKYLL_UID: 1001
      JEKYLL_GID: 1001
    ports:
      - 4000:4000
    volumes:
      - .:/srv/jekyll
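If the server needs a moment to boot even after the UID/GID fix, the Cypress action can also be told to wait for the site before starting the tests. A minimal sketch using the action's wait-on input; the URL is an assumption based on the 4000:4000 port mapping above:
- name: Cypress run
  uses: cypress-io/github-action@v4.2.0
  with:
    start: docker-compose up -d
    wait-on: 'http://localhost:4000' # assumed URL; the action polls it until Jekyll responds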
I have a TestNG project with Selenium for integration testing of a Vue.js frontend app and a Spring Boot backend. In order to run the tests I first need to bring up all dependent projects:
Spring Boot and MongoDB
Vue.js frontend app
Each project is in its own repo.
So I have created Docker images of the Spring Boot and frontend apps and will put them in the GitLab container registry.
Then in the TestNG project I plan to use docker-compose in .gitlab-ci.yml. Here is the docker-compose.yml for the TestNG project:
version: '3.7'
services:
  frontendapp:
    image: demo.app-frontend-selenium
    container_name: frontend-app-selenium
    depends_on:
      - demoapi
    ports:
      - 8080:80
  demoapi:
    image: demo.app-backend-selenium
    container_name: demo-api-selenium
    depends_on:
      - mongodb
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SCOUNT_API_ENDPOINTS_WEB_CORS_OPTIONS_ALLOWEDORIGINS=*
      - SPRING_DATA_MONGODB_HOST=mongodb
      - SPRING_DATA_MONGODB_DATABASE=demo-api-selenium
      - KEYCLOAK_AUTH-SERVER-URL=https://my-keycloak-url/auth
    ports:
      - 8082:80
  mongodb:
    image: mongo:4-bionic
    container_name: mongodb-selenium
    environment:
      MONGO_INITDB_DATABASE: demo-api-selenium
    ports:
      - 27017:27017
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
After running docker-compose in .gitlab-ci.yml, what will be the URL of the frontend app for executing the tests?
When I run it locally I use the following URLs for testing:
frontend app: http://localhost:8080
api: http://localhost:8082
But when running on GitLab CI, what will be the URLs to access the frontend and API?
TL;DR: instead of localhost you need to use the hostname of your Docker daemon (docker:dind) service. If you set up docker-in-docker for your GitLab job per the usual setup, this is most likely docker.
So the URLs you need to use, according to your compose file, are:
frontend app: http://docker:8080
api: http://docker:8082
my_job:
  services:
    - name: docker:dind
      alias: docker # this is the hostname of the daemon
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: "tcp://docker:2375"
  image: docker:stable
  script:
    - docker run -d -p 8000:80 strm/helloworld-http
    - apk update && apk add curl # install curl and let server start
    - curl http://docker:8000 # use the daemon to reach your containers
For a full explanation of this, read on.
Docker port mapping in GitLab CI vs. locally
How it works locally
Normally, when you use docker-compose locally on your system, you are typically running the Docker daemon on your localhost (e.g. using Docker Desktop).
When you provide a port mapping like 8080:80, it means: publish port 8080 on the daemon host, bound to port 80 in the container. When running locally, that means you can reach the container via localhost.
In GitLab
However, when you're running docker-in-docker on GitLab CI, the important difference is that the Docker daemon is remote. So when you publish ports through the Docker API, they are exposed on the Docker daemon host, not locally in your job container.
Hence, you must use the hostname of the Docker daemon, not localhost, to reach your started containers.
Alternative solutions
An alternative would be to conduct your testing inside the same Docker network that you create with your compose stack. That way, your testing is agnostic of where the Docker environment lives and can, for example, leverage the service aliases in your compose file (like frontendapp, demoapi, etc.) instead of relying on published ports.
For example, you may choose to add a test container to your compose stack, as sketched below. Some testing libraries like Testcontainers can help set this up, too.
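A minimal sketch of that idea against the compose file above; the tests service, its image, and its command are hypothetical placeholders:
  tests:
    image: python:3.9-alpine # hypothetical test-runner image
    depends_on:
      - frontendapp
    environment:
      # inside the compose network, use service aliases and *container* ports
      FRONTEND_URL: http://frontendapp:80
      API_URL: http://demoapi:80
    command: sh -c "pip install pytest requests && pytest tests/" # placeholder command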
In my Django project I have a CI workflow for running tests, which requires a Postgres service. Recently a new app introduced heavier packages such as pandas, matplotlib, PyTorch, and so on, and this increased the run-tests job time from 2 to 12 minutes, which is absurd. In my project I also have a base Docker image with Python and these heavier packages, used to speed up image builds. So I was thinking of using that same image for the workflow steps, since the packages would already be installed.
Unfortunately, all goes well until the step that actually runs the tests; it seems the Postgres service is not connected to the container and I get the following error:
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
This is my workflow right now. Any ideas on what I am doing wrong?
name: server-ci
on:
  pull_request:
    types: [opened]
env:
  DJANGO_SETTINGS_MODULE: settings_test
jobs:
  run-tests:
    name: Run tests
    runs-on: ubuntu-latest
    container:
      image: myimage/django-server:base
      credentials:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_PASSWORD }}
      ports:
        - 8000:8000
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: admin
          POSTGRES_DB: mydb
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    env:
      POSTGRES_HOST: localhost
      POSTGRES_PORT: 5432
      POSTGRES_PASSWORD: admin
      POSTGRES_USER: postgres
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Cache dependencies
        id: cache # needed so the "if" below can read steps.cache.outputs.cache-hit
        uses: actions/cache@v2
        with:
          path: /opt/venv
          key: /opt/venv-${{ hashFiles('**/requirements.txt') }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -r requirements.txt
        if: steps.cache.outputs.cache-hit != 'true'
      - name: Run tests
        run: |
          ./manage.py test --parallel --verbosity=2
It turns out that the workflow is now running in a container of its own, next to the postgres container. So the port mapping to the runner VM doesn’t do anything any more (because it affects the host, not Docker containers on it).
The job and service containers get attached to the same Docker network, so all I need to do is change POSTGRES_HOST to postgres (the name of the service container) and Docker’s DNS should do the rest.
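In other words, only the job-level env block changes; a minimal sketch of the fix:
env:
  POSTGRES_HOST: postgres # the service name doubles as the hostname on the shared Docker network
  POSTGRES_PORT: 5432
  POSTGRES_PASSWORD: admin
  POSTGRES_USER: postgres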
Credits: https://github.community/t/connect-postgres-service-with-custom-container-image/189994/2?u=everspader
I'm trying to set up a CI/CD pipeline in GitHub Actions for my Elixir project.
I can fetch dependencies, compile them, check formatting, run Credo... But when the tests start, I'm not able to reach the PostgreSQL service declared in the YAML.
How can I link both containers? (Elixir and PostgreSQL)
According to the logs shown on GitHub Actions, both containers are on the same Docker network, so they should be reachable from each other using their network aliases. However, when I try to connect to the postgres one, it says NXDOMAIN. Also the ping doesn't work, as expected.
The content of my workflow:
name: Elixir CI
on: push
jobs:
  build:
    runs-on: ubuntu-18.04
    container:
      image: elixir:1.9.1
    services:
      postgres:
        image: postgres
        ports:
          - 5432:5432
        env:
          POSTGRES_USER: my_app
          POSTGRES_PASSWORD: my_app
          POSTGRES_DB: my_app_test
    steps:
      - uses: actions/checkout@v1
      - name: Install Dependencies
        env:
          MIX_ENV: test
        run: |
          cp config/test.secret.ci.exs config/test.secret.exs
          mix local.rebar --force
          mix local.hex --force
          apt-get update -qqq && apt-get install make gcc -y -qqq
          mix deps.get
      - name: Compile
        env:
          MIX_ENV: test
        run: mix compile --warnings-as-errors
      - name: Run formatter
        env:
          MIX_ENV: test
        run: mix format --check-formatted
      - name: Run Credo
        env:
          MIX_ENV: test
        run: mix credo
      - name: Run Tests
        env:
          MIX_ENV: test
        run: mix test
Also, on the Elixir side I have set up the test task to connect to postgres:5432, but it says the host does not exist.
According to some tutorials and examples I found on the Internet, this configuration looks valid, but nothing I did made it work.
You need to pass the name of the service ("postgres") as POSTGRES_HOST to the application and set the port with POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }} (spaces matter).
GitHub CI dynamically routes the port and host to it.
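A minimal sketch of the Run Tests step with those variables; this assumes your application reads POSTGRES_HOST and POSTGRES_PORT from the environment:
- name: Run Tests
  env:
    MIX_ENV: test
    POSTGRES_HOST: postgres # the service name from the workflow above
    POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }} # the port GitHub mapped for the service
  run: mix test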
I wrote a blog post on the subject a couple of days ago.
I am working on a test app for Swift with CircleCI and Postgres, but I can't seem to grasp how to connect the two images in the testing phase.
I am running
circleci local execute --job build
This should build both the Swift and Postgres images. I give them both the same env variables I use in the application. However, I get this error when trying to run it. In my experience, when setting up the two Docker containers with Compose, this error showed up when my API could not connect to the db container over the network.
Test Case 'AppTests.RemoveUserTest' started at 2019-04-09 19:46:15.380
Fatal error: 'try!' expression unexpectedly raised an error: NIO.ChannelError.connectFailed(NIO.NIOConnectionError(host: "db", port: 5432, dnsAError: Optional(NIO.SocketAddressError.unknown(host: "db", port: 5432)), dnsAAAAError: Optional(NIO.SocketAddressError.unknown(host: "db", port: 5432)), connectionErrors: [])): file /home/buildnode/jenkins/workspace/oss-swift-4.2-package-linux-ubuntu-16_04/swift/stdlib/public/core/ErrorType.swift, line 184
I know it says it failed because of a try! statement, but that statement is failing because it's requesting data from Postgres, which is not there. Any ideas?
My current config.yml for CircleCI:
version: 2
jobs:
  build:
    docker:
      - image: swift:4.2
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
          DB_HOSTNAME: db
          PORT: 5432
      - image: postgres:11.2-alpine
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
    steps:
      - checkout
      - run: apt-get update -qq
      - run: apt-get install -yq libssl-dev pkg-config wget
      - run: apt-get install -y postgresql-client || true
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
          environment:
            DOCKERIZE_VERSION: v0.3.0
      - run:
          name: Wait for db
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Compile code
          command: swift build
      - run:
          name: Run unit tests
          command: swift test
  release:
    docker:
      - image: swift:4.2
    steps:
      - checkout
      - run:
          name: Compile code with optimizations
          command: swift build -c release
  push-to-docker-hub:
    docker:
      - image: docker:latest
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --update --no-cache curl jq python py-pip
      - run:
          name: Build Docker Image
          command: |
            docker build -t api .
            docker tag api <>/repo:latest
            docker tag api <>/repo:$CIRCLE_SHA1
            docker login -u $DOCKER_USER -p $DOCKER_PASS
            docker push <>/repo:latest
            docker push <>/repo:$CIRCLE_SHA1
      # - persist_to_workspace:
      #     root: ./
      #     paths:
      #       - k8s-*.yml
workflows:
  version: 2
  tests:
    jobs:
      - build
      - push-to-docker-hub:
          requires:
            - build
          context: dockerhub
          filters:
            branches:
              only: master
      #- linux-release
You're setting the hostname for the database to db, but not defining that anywhere. You need to name your Docker container to match the DB_HOSTNAME environment variable, like so: https://github.com/vapor/postgresql/blob/master/circle.yml#L8
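A minimal sketch of that change in the build job, based on the linked example; the name key gives the secondary container a hostname on the job's network:
- image: postgres:11.2-alpine
  name: db # makes the container reachable as "db", matching DB_HOSTNAME
  environment:
    POSTGRES_USER: test
    POSTGRES_PASSWORD: test
    POSTGRES_DB: test
With that in place, the dockerize wait step may also need to target tcp://db:5432 instead of tcp://localhost:5432.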