Testing containerized microservice with external dependency - REST

So I built a REST API microservice that queries a local Elasticsearch instance and translates the results according to an internal protocol. I built it into a Docker image and I would like to run some unit tests on it during the build. Since ES is attached to a private Docker network, it isn't reachable by the microservice during the build, so the tests obviously fail. I was wondering: is there a way around this situation without having to use some complicated testing framework to do dependency injection? How do you test this kind of container in your work practice?

I would build the application without any testing. Then I would test it using docker run, so you can take advantage of Docker networking.
Roughly, this is more elegant than testing in the middle of the build:
docker build -t my_app:1.0-early . to build your application and obtain an image.
docker run --network my_test_network my_app:1.0-early /run_test_cases.sh. Have the script return the proper exit code (or output).
Depending on whether the tests succeed, re-tag: docker tag my_app:1.0-early my_app:1.0.
You will need to have already created a Docker network (docker network create my_test_network), or better, use docker-compose.
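For reference, a rough sketch of that flow, assuming ES itself runs as a container on the same test network (the image tags, the network name and /run_test_cases.sh are placeholders):

# build the candidate image
docker build -t my_app:1.0-early .

# isolated test network with the Elasticsearch dependency attached to it
docker network create my_test_network
docker run -d --name es --network my_test_network \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.10
# the microservice must be configured to reach ES by this container name ("es")

# run the test suite inside the candidate image; the script must exit non-zero on failure
docker run --rm --network my_test_network my_app:1.0-early /run_test_cases.sh

# only re-tag if the previous command exited 0 (e.g. run this under `set -e`)
docker tag my_app:1.0-early my_app:1.0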

Related

Cannot connect to Kafka using Testcontainers while writing an integration test

Using Testcontainers, I'm trying to write an integration test for my project with Kafka, PostgreSQL and Elasticsearch. When I bring up my docker-compose_v2.yml file and run my tests, they pass, but when I use Testcontainers the tests fail: I can't connect to Kafka while the tests are running.
With Docker Compose you should not use IP addresses; use service names directly for endpoint resolution.
Your Compose file is not attached to the Testcontainers network when you use Network.newNetwork().
If you want to use Compose in your tests, then do that instead of running individual GenericContainers - https://www.testcontainers.org/modules/docker_compose/
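To make the service-name point concrete, a compose sketch along these lines addresses each dependency by its service name (service names, image tags and environment variables are placeholders; the Kafka broker configuration is left out here, but a broker service would be reached the same way, e.g. kafka:9092):

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: test
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
    environment:
      discovery.type: single-node
  tests:
    build: .
    depends_on:
      - postgres
      - elasticsearch
    environment:
      # service names resolve on the Compose network; no hard-coded IPs
      DB_URL: jdbc:postgresql://postgres:5432/postgres
      ES_HOSTS: http://elasticsearch:9200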

Tests that require orchestration of multiple containers in GitLab CI/CD

I am considering porting a legacy pipeline that builds and tests Docker/OCI images into GitLab CI/CD. I already have a GitLab Runner in a Kubernetes cluster and it's registered to a GitLab instance. Testing a particular image requires running certain commands inside (for running unit tests, etc.). Presumably this could be modeled by a job my_test like so:
my_test:
  stage: test
  image: my_image_1
  script:
    - my_script.sh
However, these tests are not completely self-contained but also require the presence of a second container (e.g. a database). At the outset, I can imagine one, perhaps suboptimal, way of handling this (there would also have to be some logic for waiting until my_image2 has started up, and a way for kubectl to obtain sufficient credentials):
before_script: kubectl create deployment my_deployment2 ...
after_script: kubectl delete deployment my_deployment2 ...
I am fairly new to GitLab CI/CD, so I am wondering: what is best practice for modeling a test like this one, i.e. situations where tests require orchestration of multiple containers? (Does this fit into the scope of a GitLab job, or should it better be delegated to other software that my_test could talk to?)
Your first look should be at Services.
With services you can start a container running MySQL or Postgres alongside the job and run tests that connect to it.
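A hedged sketch of what that looks like in .gitlab-ci.yml, reusing the job from the question (the Postgres variables and DATABASE_HOST are assumptions; the service container is reachable under the hostname postgres):

my_test:
  stage: test
  image: my_image_1
  services:
    - postgres:15
  variables:
    POSTGRES_DB: testdb
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: secret
    DATABASE_HOST: postgres   # hypothetical variable read by my_script.sh
  script:
    - my_script.sh

GitLab starts and tears down the service container together with the job, which also covers the cleanup the before_script/after_script approach would otherwise need.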

Docker compose build order

I have a problem with Docker Compose and build order. Below is my Dockerfile for my .NET application.
As you can see, as part of my build process I run some tests using "RUN dotnet test backend_test/backend_test.csproj".
These tests require a mongodb database to be present.
I try to solve this dependency with docker-compose and its "depends_on" feature, see below.
However, this doesn't seem to work: when I run "docker-compose up" I get the following:
The tests eventually time out since there is no MongoDB present.
Does depends_on actually affect build order at all, or does it only affect start order (i.e. build everything, then proceed to start in the correct order)?
Is there another way of doing this? (I want tests to run as part of building my final app.)
Thanks in advance; let me know if you need extra information.
As you guessed, depends_on is for runtime order only, not build time - it just affects docker-compose up and docker-compose stop.
I highly recommend you make all the builds independent of each other. Perhaps you need to consider separate builder and runtime images here, and/or use a Docker-based CI (GitLab, Travis, Circle, etc.) to have these dependencies available for testing.
Note also that depends_on often disappoints people - it just waits for Docker's container startup to finish, not the application startup. So your DB / service / whatever may still be starting up when the container that depends on it starts using it, causing timeouts etc. This is why the HEALTHCHECK instruction now exists (with a similar healthcheck feature in Docker Compose).
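To illustrate that last point, a minimal compose sketch assuming a MongoDB service (image tag, service names and the test command are assumptions; the long depends_on form with condition: service_healthy needs a Compose version that supports it):

services:
  mongodb:
    image: mongo:6
    healthcheck:
      # succeeds only once the server answers a ping
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 3s
      retries: 10
  backend:
    build: .
    depends_on:
      mongodb:
        condition: service_healthy

Note that this only fixes start-up ordering at run time; the RUN dotnet test step still executes at image build time, which is why moving the tests out of the image build (as suggested above) is the real fix.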

Managing resources (database, elasticsearch, redis, etc) for tests using Docker and Jenkins

We need to use Jenkins to test some web apps that each need:
a database (postgres in our case)
a search service (ElasticSearch in our case, but only sometimes)
a cache server, such as redis
So far, we've just had these services running on the Jenkins master, but this causes problems when we want to upgrade Postgres, ES or Redis versions. Not all apps can move in lock step, and we want to run the tests on new versions before committing to moving an app in production.
What we'd like to do is have these services provided on a per-job-run basis, each one running in its own container.
What's the best way to orchestrate these containers?
How do you start up these ancillary containers and tear them down, regardless of whether the job succeeds or not?
How do you prevent port collisions between, say, the database in a run of a job for one web app and the database in the job for another web app?
Check out docker-compose and write a docker-compose file for your tests.
The newer networking features of Docker (private networks) will help you isolate builds running in parallel.
However, start by learning docker-compose as if you only had one build at a time. Once confident with that, look further into the advanced Docker documentation around networking.
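A rough sketch of a per-job compose file (image tags and service names are placeholders; nothing publishes host ports, so concurrent jobs cannot collide on them):

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: test
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
    environment:
      discovery.type: single-node
  redis:
    image: redis:7

Each job run can then bring the stack up with docker-compose -p "$BUILD_TAG" -f docker-compose.test.yml up -d, run its tests from a container attached to the same network (e.g. another service in the file, or docker run --network "${BUILD_TAG}_default" ...), and always tear down with docker-compose -p "$BUILD_TAG" -f docker-compose.test.yml down -v in a cleanup/post step, so the containers disappear whether the job succeeds or not. Using a unique (lowercase) project name per run, such as one derived from Jenkins' BUILD_TAG, gives every job its own network and container names.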

Docker: Running multiple applications VS running multiple containers

I am trying to run Wildfly, Jenkins and Postgresql in Docker container(s).
As far as I could understand from articles I've read, the Docker way is to have each application run in a different container.
Is my assumption correct or is it better to have only one container containing these three applications?
AFAIK the basic philosophy behind Docker is to run one service per container. You can run a whole application inside a single container, but I don't think that goes well with the way Docker works. Running different services in different containers gives you more flexibility and better modularity for your app.
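As a sketch of that one-service-per-container layout for this particular stack (image tags and ports are placeholders; pin specific versions in practice):

services:
  wildfly:
    image: quay.io/wildfly/wildfly:latest
    ports:
      - "8080:8080"
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8081:8080"   # remapped so it doesn't clash with WildFly on the host
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret

Each service can then be upgraded, restarted or replaced independently, and they still reach each other by service name over the Compose network.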