Start a depends_on docker only if it is not started - docker-compose

I have a service that runs in a Docker container. For various reasons I want to run a suite of tests on it in parallel, for example integration tests and performance tests.
I have a docker-compose.yaml that looks like this:
# My service - the thing under test in this scenario
service:
  ports:
    - 4000:4000
  ...
# Integration tests
integration:
  depends_on:
    - service
  ...
# Performance tests
performance:
  depends_on:
    - service
  ...
I would like to continue to expose port 4000 so that components outside of the Docker world can interact with the service. However, when I run these tests in parallel, I get this error for one of them:
Cannot start service service ... 0.0.0.0:4000 failed: port is already in use.
This is because docker-compose tries to start an instance of service for each of the tests. Is it possible to tell docker-compose to use the same instance of service? Is there a better way to achieve the same result?

I've solved this for myself and I'll document it here for anyone who faces a similar problem in the future.
Publishing ports from the service by default is the problem here. Depending on the context in which the service is started, the ports may or may not need to be published. It is better to use the Docker network for communication between the containers.
The docker-compose.yaml would look more like this now:
service:
  # no ports declaration
  ...
integration:
  depends_on:
    - service
  environment:
    - SERVICE_URL=http://service:4000
  ...
performance:
  depends_on:
    - service
  environment:
    - SERVICE_URL=http://service:4000
  ...
Instead, ports are published only when needed, by whatever starts the service:
docker-compose run -p 4000:4000 service
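An alternative that keeps plain docker-compose up working (a sketch, assuming default compose file naming so the override file is picked up automatically) is to put the ports declaration in a docker-compose.override.yml that is only present where the port should be exposed:
# docker-compose.override.yml - merged automatically with the main
# compose file by `docker-compose up`; omit this file in test runs
# so the parallel suites don't fight over port 4000
service:
  ports:
    - 4000:4000
The tests keep talking to http://service:4000 over the Docker network either way.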

Related

How to access dockerized app under test in gitlab CI

I have a TestNG project with Selenium for integration testing of a frontend app in Vue.js and a Spring Boot backend. So in order to run the tests I first need to bring up all dependent projects:
Spring Boot and MongoDB
Vue.js frontend app
Each project is in its own repo.
So I have created Docker images of the Spring Boot and frontend apps and will put them up in the GitLab container registry.
Then in the TestNG project I plan to use docker-compose in .gitlab-ci.yml. Here is the docker-compose.yml for the TestNG project:
version: '3.7'
services:
  frontendapp:
    image: demo.app-frontend-selenium
    container_name: frontend-app-selenium
    depends_on:
      - demoapi
    ports:
      - 8080:80
  demoapi:
    image: demo.app-backend-selenium
    container_name: demo-api-selenium
    depends_on:
      - mongodb
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SCOUNT_API_ENDPOINTS_WEB_CORS_OPTIONS_ALLOWEDORIGINS=*
      - SPRING_DATA_MONGODB_HOST=mongodb
      - SPRING_DATA_MONGODB_DATABASE=demo-api-selenium
      - KEYCLOAK_AUTH-SERVER-URL=https://my-keycloak-url/auth
    ports:
      - 8082:80
  mongodb:
    image: mongo:4-bionic
    container_name: mongodb-selenium
    environment:
      MONGO_INITDB_DATABASE: demo-api-selenium
    ports:
      - 27017:27017
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
After running docker-compose in .gitlab-ci.yml, what will the URL of the frontend app be in order to execute the tests?
When I do it locally I am using the following URLs for testing:
frontend app: http://localhost:8080
api: http://localhost:8082
But when running on GitLab CI, what will the URLs be to access the frontend and the API?
TL;DR: instead of using localhost you need to use the hostname of your docker daemon (docker:dind) service. If you set up docker-in-docker for your GitLab job per the usual setup, this is most likely docker.
So the URLs you need to use according to your compose file are:
frontend app: http://docker:8080
api: http://docker:8082
my_job:
  services:
    - name: docker:dind
      alias: docker # this is the hostname of the daemon
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: "tcp://docker:2375"
  image: docker:stable
  script:
    - docker run -d -p 8000:80 strm/helloworld-http
    - apk update && apk add curl # install curl and let server start
    - curl http://docker:8000 # use the daemon to reach your containers
For a full explanation of this, read on.
Docker port mapping in Gitlab CI vs locally
How it works locally
When you use docker-compose locally on your system, you are typically running the docker daemon on your localhost (e.g. using Docker Desktop).
When you provide a port mapping like 8080:80 it means to publish port 8080 on the daemon host bound to port 80 in the container. When running locally, that means you can reach the container via localhost.
In GitLab
However, when you're running docker-in-docker on GitLab CI the important difference in this environment is that the docker daemon is remote. So, when you expose ports through the docker API, the ports are exposed on the docker daemon host not locally in your job container.
Hence, you must use the hostname of the docker daemon, not localhost, to reach your started containers.
Alternative solutions
An alternative to this would be to conduct your testing inside the same docker network that you create with your compose stack. That way, your testing is agnostic of where the docker environment lives and can, for example, leverage the service aliases in your compose file (like frontendapp, demoapi, etc.) instead of relying on published ports.
For example, you may choose to add a test container to your compose stack, as sketched below. Some testing libraries, like Testcontainers, can help set this up, too.
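A rough sketch of such a test service appended to the compose file above (the runner image and variable names are hypothetical; the container ports of 80 come from the 8080:80 and 8082:80 mappings):
  # hypothetical test-runner service sharing the compose default network,
  # so the other services resolve by name regardless of the daemon host
  testng-tests:
    image: my-testng-runner # hypothetical image containing the suite
    depends_on:
      - frontendapp
      - demoapi
    environment:
      - FRONTEND_URL=http://frontendapp:80 # service alias, not localhost
      - API_URL=http://demoapi:80
The job script then becomes something like docker-compose run testng-tests, with no published ports required at all.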

ECS: Generate 2 Task Definitions from a single Docker Compose File

I have an architecture represented in this docker-compose.yml file:
version: '3'
services:
  flask:
    container_name: flask
    image: "user/demo_flask"
    ports:
      - "5000:5000"
    links:
      - mysql
  mysql:
    container_name: mysql
    image: "user/demo_mysql"
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
Here the flask service is a Flask app connecting to the mysql DB, which is just mysql:5.7 with some custom configuration. I need the services to communicate (in particular, flask has to be able to reach mysql).
I want to deploy this architecture to ECS using the EC2 launch type. I plan to use ecs-cli to generate the task definitions. As far as I understand, if I include in my directory the file ecs-params.yml:
version: 1
task_definition:
  services:
    flask:
      cpu_shares: 50
      mem_limit: 262144000
    mysql:
      cpu_shares: 50
      mem_limit: 262144000
I get a single Task Definition, which is not what I want. I would like two separate Task Definitions, each one with a single container. Is it possible to get this?
Thanks.
An ECS service cannot run with two task definitions.
What you can do here is create two task definitions (one for flask and another for mysql) and create two services from these two task definitions. Then the services can communicate using service discovery in the ECS cluster. Please check the AWS service discovery documentation.
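Since ecs-cli compose builds one task definition per compose project, one way to get two (a sketch; the directory layout and discovery namespace are assumptions) is to split the stack into two compose projects and run ecs-cli in each directory:
# flask/docker-compose.yml -> first task definition / service
version: '3'
services:
  flask:
    image: "user/demo_flask"
    ports:
      - "5000:5000"
# mysql/docker-compose.yml -> second task definition / service
version: '3'
services:
  mysql:
    image: "user/demo_mysql"
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
With service discovery enabled, flask would then reach mysql through its discovered DNS name (e.g. mysql.your-namespace) instead of the links: entry, which only works between containers in a single task.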

How to get long hostname of a docker container inside a container?

Our application is behind a traefik reverse proxy. We manage many subdomains and we use the watch-file ability of traefik to dynamically set up new subdomains to proxy.
So our application generates a traefik .yaml dynamic config file.
The same traefik instance will manage many instances of the same application.
For that purpose we need to tell traefik how to reach our application inside its own network.
We know we can use the simple hostname, the one which is the container name.
But this only works inside the default docker-compose network of the app instance, not on the external network shared with traefik.
On that network we need the long hostname version, so we are sure it reaches the right application instance
(<compose_name>_<container_name>_1, or depending on the docker-compose version, <compose_name>_<container_name>_1_<hash>).
Do you know a way to get the long version of the hostname of a docker-compose container from inside another container on the same docker-compose default network?
For better context, here is a simple docker-compose.yaml file:
version: "3"
services:
app:
image: app_image
networks:
- app_network
restart: unless-stopped
nginx:
image: nginx
links:
- app
networks:
- app_network
- traefik_traefik
restart: unless-stopped
networks:
traefik_traefik:
external: true
app_network:
driver: bridge
We want, from inside the app container, to get the nginx long hostname version, so we can use it to generate the dynamic configuration for traefik.
Thanks for your help.
We believe we have found a solution: querying for the FQDN of the short-named hostname gives the needed long hostname.
dig +short -x `dig +short nginx`
returns composename_nginx_1.composename_app_network.
In our Python app we can get the same result with:
import socket
socket.getfqdn('nginx')
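The long hostname can then be written into the dynamic config that traefik watches. A minimal sketch of that generated file (the router name, domain, and resolved hostname are hypothetical), assuming the traefik v2 file-provider format:
# dynamic config generated by the app and watched by traefik
http:
  routers:
    tenant1:
      rule: "Host(`tenant1.example.com`)" # hypothetical subdomain
      service: tenant1
  services:
    tenant1:
      loadBalancer:
        servers:
          # long hostname resolved as shown above; reachable on the
          # shared traefik_traefik network
          - url: "http://composename_nginx_1:80"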

Ambassador API Gateway doesn't pickup services

I'm a new Ambassador user here. I have walked through the tutorial in an effort to understand how to use the Ambassador gateway. I am attempting to run this locally via Docker Compose until it's ready for deployment to K8s in production.
My use case is that all HTTP traffic comes in on port 80 and is then directed to the appropriate service. Is it considered best practice to have a docker-compose.yaml file in the working directory that refers to services in the /config directory? I ask because this doesn't appear to actually pick up my files (the postgres startup doesn't show in the console). And when I run "docker ps" I only see:
CONTAINER ID IMAGE PORTS NAMES
8bc8393ac04c 05a916199684 k8s_statsd_ambassador-8564bfb874-q97l9_default_e775d686-a93c-11e8-9caa-025000000001_0
1c00f2341caf d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-q97l9_default_e775d686-a93c-11e8-9caa-025000000001_0
fe20c4819514 05a916199684 k8s_statsd_ambassador-8564bfb874-xzvkl_default_e775ffe6-a93c-11e8-9caa-025000000001_0
ba6415b028ba d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-xzvkl_default_e775ffe6-a93c-11e8-9caa-025000000001_0
9df07dc5083d 05a916199684 k8s_statsd_ambassador-8564bfb874-w5vsq_default_e773ed53-a93c-11e8-9caa-025000000001_0
682e1f9902a0 d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-w5vsq_default_e773ed53-a93c-11e8-9caa-025000000001_0
bb6d2f749491 quay.io/datawire/ambassador:0.40.2 0.0.0.0:80->80/tcp apigateway_ambassador_1
I have a docker-compose.yaml:
version: '3.1'
# Define the services/containers to be run
services:
  ambassador:
    image: quay.io/datawire/ambassador:0.40.2
    ports:
      - 80:80
    volumes:
      # mount a volume where we can inject configuration files
      - ./config:/ambassador/config
  postgres:
    image: my-postgresql
    ports:
      - '5432:5432'
and in /config/mapping-postgres.yaml:
---
apiVersion: ambassador/v0
kind: Mapping
name: postgres_mapping
rewrite: ""
service: postgres:5432
volumes:
  - ../my-postgres:/docker-entrypoint-initdb.d
environment:
  - POSTGRES_MULTIPLE_DATABASES=db1, db2, db3
  - POSTGRES_USER=<>
  - POSTGRES_PASSWORD=<>
volumes and environment are not valid configs for Ambassador Mappings. Ambassador lets you proxy to postgres, but the authentication has to be handled by your application.
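A sketch of the corrected split, moving those keys onto the postgres service in docker-compose.yaml and leaving only Mapping fields in the config file (assuming your my-postgresql image honors those environment variables):
# /config/mapping-postgres.yaml - Mapping fields only
---
apiVersion: ambassador/v0
kind: Mapping
name: postgres_mapping
rewrite: ""
service: postgres:5432
# docker-compose.yaml - volumes/environment belong on the service
  postgres:
    image: my-postgresql
    ports:
      - '5432:5432'
    volumes:
      # note: path adjusted, as it is now relative to the compose file
      - ./my-postgres:/docker-entrypoint-initdb.d
    environment:
      - POSTGRES_MULTIPLE_DATABASES=db1, db2, db3
      - POSTGRES_USER=<>
      - POSTGRES_PASSWORD=<>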
Having said that, it looks like your Postgres container is not starting. (Perhaps because it needs an initial config). You can check for errors with:
$ docker ps -a | grep postgres
$ docker logs <container-id-from-previous-step>
You can also check a postgres docker compose example here.
Is it considered best practice to have a docker-compose.yaml file in the working directory that refers to services in the /config directory?
It's pretty standard, but you can use any directory you like for this.

Spring Boot with MongoDB on Docker

These days, I am trying to deploy my Spring Boot OAuth2 project. It has 3 different modules (authentication server, resource server and front-end).
The authentication and resource servers each have their own *.yml file for configuration such as the MongoDB name and port, server profile and IP, etc.
What am I trying to do exactly? I want to deploy the Spring Boot applications on Docker, but I don't want to put my database (MongoDB) on Docker as a container.
I am not sure whether this structure is possible or not.
Because when I run MongoDB on my local machine (localhost:27017) and then try to deploy the Spring Boot application to local Docker as a container, I get a timeout exception for MongoDB. The application couldn't connect to the external MongoDB (non-Docker container).
What should I do? Should I run MongoDB on Docker? I tried that as well; Mongo runs successfully, but the Spring container still couldn't run and connect to Mongo.
I tried to run another Spring Boot app without MongoDB; it works successfully, and I made a request from the browser by IP & port and got a response from the application as expected.
*** MONGO URL ****
mongodb://127.0.0.1:27017/db-localhost
**** Authentication server .yml file ****
server:
  port: 9080
  contextPath: /auth-service
  tomcat:
    access_log_enabled: true
    basedir: target/tomcat
security:
  basic:
    enabled: false
spring:
  profiles:
    active: development
  thymeleaf:
    cache: false
mongo:
  db:
    server: 127.0.0.1
    port: 27017
logging:
  level:
    org.springframework.security: DEBUG
---
spring:
  profiles: development
  data:
    mongodb:
      database: db-localhost
---
spring:
  profiles: production
  data:
    mongodb:
      database: db-prod
---
***** DOCKER FILE *******
FROM java:8
VOLUME /tmp
ADD auth-server-1.0-SNAPSHOT.jar app.jar
EXPOSE 9080
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
**** DOCKER COMMAND *******
docker run -it -P --name authserver authserver
The issue with your configuration is referencing mongodb from inside the authserver container via 127.0.0.1, which is the loopback adapter inside that container. So you are telling your Spring application that MongoDB runs in the same container as the auth server, which is not the case.
Either run your database as its own container (which requires handling the data volumes correctly) and reference it using the container name as hostname (via a link), or reference the externally running mongodb instance with the correct address. That would be the IP address of the machine running the docker daemon (for your local environment I assume something like 192.168.0.xxx).
Question: What should I do?
At least for development purposes I would recommend using docker for your mongodb instance as well. I had a similar setup with RabbitMQ in addition, and using docker for those too solved a lot of problems. Using docker-compose to set everything up makes it even easier. Later you can still specify which mongodb instance you want to use through your Spring properties.
Problem: I tried it also; Mongo runs successfully but the spring container still couldn't run and connect to mongo.
The problem is probably that you have not set up any networks or hostnames for your services. Your spring application cannot resolve the hostname of your mongo server, since you specified 127.0.0.1 as the mongodb server in your properties.
I would recommend using docker for your mongodb and a docker-compose.yml file like this to set everything up:
version: '3.7'
services:
  resource-server:
    image: demo/resource-server:latest
    container_name: resource-server
    depends_on:
      - mongodb-example
    networks:
      - your-network
    ports:
      - 8080:8080
  auth-server:
    image: demo/auth-server:latest
    container_name: auth-server
    depends_on:
      - mongodb-example
    networks:
      - your-network
    ports:
      - 8081:8080
  mongodb-example:
    image: mongo:latest
    container_name: mongo-example
    hostname: mongo-example
    networks:
      - your-network
    ports:
      - 27017:27017
networks:
  your-network:
    name: network-name
Of course you then need to adapt your property file or specify environment variables through your docker-compose.yml file.
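For instance (a sketch, relying on Spring Boot's relaxed binding to map environment variables onto the mongo.db.server property from the .yml above), the hostname could be overridden per container:
  # added to the auth-server service above: point the app at the
  # mongodb-example service instead of the 127.0.0.1 from the .yml
  auth-server:
    environment:
      - MONGO_DB_SERVER=mongodb-example
      - SPRING_PROFILES_ACTIVE=development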