I have an architecture represented in this docker-compose.yml file:
version: '3'
services:
  flask:
    container_name: flask
    image: "user/demo_flask"
    ports:
      - "5000:5000"
    links:
      - mysql
  mysql:
    container_name: mysql
    image: "user/demo_mysql"
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
Here the flask service is a Flask app connecting to the mysql DB, which is just mysql:5.7 with some custom configuration. I need the services to communicate (in particular, flask has to be able to reach mysql).
I want to deploy this architecture to ECS using the EC2 launch type. I plan to use ecs-cli to generate the Task Definitions. As far as I understand, if I include the file ecs-params.yml in my directory:
version: 1
task_definition:
  services:
    flask:
      cpu_shares: 50
      mem_limit: 262144000
    mysql:
      cpu_shares: 50
      mem_limit: 262144000
I get a single Task Definition, which is not what I want. I would like two separate Task Definitions, each one with a single container. Is it possible to get this?
Thanks.
An ECS service cannot be run from two task definitions.
What you can do here is create two task definitions (one for flask and another for mysql) and create two services from them, as sketched below. The containers can then communicate using service discovery in the ECS cluster. Please check the AWS service discovery documentation.
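A minimal sketch of how that could look with ecs-cli, assuming each service gets its own directory with its own docker-compose.yml and ecs-params.yml; the directory names, cluster name, and exact flags below are illustrative, so verify them against your ecs-cli version:

# Bring up mysql as its own ECS service (one task definition, one container),
# registering it with service discovery so flask can resolve it by DNS name.
ecs-cli compose --project-name mysql \
    --file mysql/docker-compose.yml --ecs-params mysql/ecs-params.yml \
    service up --cluster my-cluster --launch-type EC2 --enable-service-discovery

# Bring up flask the same way; it can then reach mysql through its service
# discovery name (e.g. mysql.<namespace>) instead of the compose link.
ecs-cli compose --project-name flask \
    --file flask/docker-compose.yml --ecs-params flask/ecs-params.yml \
    service up --cluster my-cluster --launch-type EC2 --enable-service-discovery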
I have a Spring Boot application which uses a PostgreSQL database and a MongoDB database. I have been able to configure them correctly, but now that I want to dockerize my application to later deploy it on a Kubernetes cluster, I am completely clueless. Most of the YouTube tutorials and articles cover dockerizing simple Spring Boot applications or Spring Boot applications that use only one database, so any input on how I can proceed to dockerize my application would be really appreciated!
Edit:
I am following this tutorial:
https://www.section.io/engineering-education/running-a-multi-container-springboot-postgresql-application-with-docker-compose/
Here is its docker-compose.yml file:
version: '3.1'
services:
  API:
    image: 'blog-api-docker.jar'
    ports:
      - "8080:8080"
    depends_on:
      PostgreSQL:
        condition: service_healthy
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://PostgreSQL:5432/postgres
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  PostgreSQL:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
Only one PostgreSQL datasource is defined. In my project, besides a PostgreSQL datasource similar to the one in the tutorial, I am also using a MongoDB database which is running on Atlas.
I am also including my application.properties file for your reference:
spring.ds-psql.datasource.jdbcUrl=jdbc:postgresql://localhost:5432/devicestatspsql
spring.ds-psql.datasource.username=
spring.ds-psql.datasource.password=
spring.data.mongodb.users-mongo-atlas.uri=*mongodb database url here*
spring.jpa.generate-ddl=true
spring.jpa.show-sql=true
So I just need to know what changes are required in my docker-compose.yml file to accommodate this MongoDB database in the Docker image.
You can use a Kubernetes ConfigMap and Secret to store the configuration of your application.
ConfigMaps and Secrets are meant for storing configuration such as database connection strings, usernames, and passwords.
You can create different ConfigMaps per environment (Dev, Staging, and Prod) and inject the appropriate one into your Deployment, so the application picks up those values either from an .env file or from OS environment variables.
Here's the reference article.
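A minimal sketch of what that could look like, with hypothetical names (app-config, app-secret) and environment variable names derived from the property keys in the question via Spring Boot's relaxed binding; adjust them to match how your ds-psql and users-mongo-atlas properties are actually bound:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # hypothetical name
data:
  # non-sensitive configuration, e.g. the PostgreSQL JDBC URL
  SPRING_DSPSQL_DATASOURCE_JDBCURL: jdbc:postgresql://postgres:5432/devicestatspsql
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret                 # hypothetical name
type: Opaque
stringData:
  SPRING_DSPSQL_DATASOURCE_USERNAME: postgres
  SPRING_DSPSQL_DATASOURCE_PASSWORD: password
  SPRING_DATA_MONGODB_USERSMONGOATLAS_URI: "*mongodb database url here*"
---
# In the Deployment's container spec, inject both so the app reads them
# from the OS environment:
#   envFrom:
#     - configMapRef:
#         name: app-config
#     - secretRef:
#         name: app-secret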
I have a TestNG project with Selenium for integration testing of a frontend app in Vue.js and a Spring Boot backend. In order to run the tests I first need to bring up all the dependent projects:
Spring Boot and MongoDB
Vue.js frontend app
Each project is in its own repo.
So I have created Docker images of the Spring Boot and frontend apps and will push them to the GitLab container registry.
Then, in the TestNG project, I plan to use docker-compose in .gitlab-ci.yml. Here is the docker-compose.yml for the TestNG project:
version: '3.7'
services:
  frontendapp:
    image: demo.app-frontend-selenium
    container_name: frontend-app-selenium
    depends_on:
      - demoapi
    ports:
      - 8080:80
  demoapi:
    image: demo.app-backend-selenium
    container_name: demo-api-selenium
    depends_on:
      - mongodb
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SCOUNT_API_ENDPOINTS_WEB_CORS_OPTIONS_ALLOWEDORIGINS=*
      - SPRING_DATA_MONGODB_HOST=mongodb
      - SPRING_DATA_MONGODB_DATABASE=demo-api-selenium
      - KEYCLOAK_AUTH-SERVER-URL=https://my-keycloak-url/auth
    ports:
      - 8082:80
  mongodb:
    image: mongo:4-bionic
    container_name: mongodb-selenium
    environment:
      MONGO_INITDB_DATABASE: demo-api-selenium
    ports:
      - 27017:27017
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
After running docker-compose in gitlab-ci.yml, what will be the URL of the frontend app in order to execute the tests?
When I do it locally I am using the following URLs for testing:
frontend app: http://localhost:8080
api: http://localhost:8082
But when running on GitLab CI, what will be the URLs to access the frontend and the API?
TL;DR instead of using localhost you need to use the hostname of your docker daemon (docker:dind) service. If you set up docker-in-docker for your GitLab job per the usual setup, this is most likely docker.
So the URLs you need to use, according to your compose file, are:
frontend app: http://docker:8080
api: http://docker:8082
my_job:
  services:
    - name: docker:dind
      alias: docker # this is the hostname of the daemon
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: "tcp://docker:2375"
  image: docker:stable
  script:
    - docker run -d -p 8000:80 strm/helloworld-http
    - apk update && apk add curl # install curl and let server start
    - curl http://docker:8000 # use the daemon to reach your containers
For a full explanation of this, read on.
Docker port mapping in Gitlab CI vs locally
How it works locally
Normally, when you use docker-compose locally on your system, you are running the docker daemon on your localhost (e.g. using Docker Desktop).
When you provide a port mapping like 8080:80, it means: publish port 8080 on the daemon host, bound to port 80 in the container. When running locally, that means you can reach the container via localhost.
In GitLab
However, when you're running docker-in-docker on GitLab CI, the important difference in this environment is that the docker daemon is remote. So, when you expose ports through the docker API, the ports are exposed on the docker daemon host, not locally in your job container.
Hence, you must use the hostname of the docker daemon, not localhost, to reach your started containers.
Alternative solutions
An alternative to this would be to conduct your testing inside the same docker network that you create with your compose stack. That way, your testing is agnostic of where the docker environment lives and can, for example, leverage the service aliases in your compose file (like frontendapp, demoapi, etc.) instead of relying on published ports.
For example, you may choose to add a test container to your compose stack, as sketched below. Some testing libraries like Testcontainers can help set this up, too.
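A minimal sketch of that idea, added under services: in the compose file from the question; the tests image name and the environment variable names are hypothetical placeholders for however your TestNG suite reads its target URLs:

  tests:
    image: demo.app-selenium-tests      # hypothetical image containing the TestNG/Selenium suite
    depends_on:
      - frontendapp
      - demoapi
    environment:
      # reach the other containers by their compose service names on the shared network
      - FRONTEND_URL=http://frontendapp:80
      - API_URL=http://demoapi:80

Because the tests run on the same compose network, no published ports are needed for them at all.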
I have a service that runs in Docker. For reasons, I want to run a suite of tests on it in parallel, for example integration tests and performance tests.
I have a docker-compose.yaml that looks like this:
# My service - the thing under test in this scenario
service:
  ports:
    - 4000:4000
  ...
# Integration tests
integration:
  depends_on:
    - service
  ...
# Performance tests
performance:
  depends_on:
    - service
  ...
I would like to continue to expose port 4000 so that components outside of the Docker world can interact with it. However, when I run these tests in parallel I get this error for one of the tests:
Cannot start service service ... 0.0.0.0:4000 failed: port is already in use.
This is because docker-compose is trying to start the service for each of the tests. Is it possible to tell docker-compose to use the same instance of the service? Is there a better way to achieve the same results?
I've solved this for myself and I'll document it here for anyone who faces a similar problem in the future.
The problem here is that the service publishes its ports by default. Instead, whether the ports get published should depend on the context in which the service is started, and communication between the Docker containers themselves is better done over the Docker network.
The docker-compose.yaml would look more like this now:
service:
  # no ports declaration
  ...
integration:
  depends_on:
    - service
  environment:
    - SERVICE_URL=http://service:4000
  ...
performance:
  depends_on:
    - service
  environment:
    - SERVICE_URL=http://service:4000
  ...
Instead, ports are published when needed by whatever starts the service:
docker-compose run -p 4000:4000 service
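As a usage sketch with the compose file above: the service can be started once and both test suites run in parallel against that single instance, each reaching it through SERVICE_URL on the compose network:

# start the service once; publish the port only if something outside Docker needs it
docker-compose up -d service
# or: docker-compose run -d -p 4000:4000 service

# run both test suites in parallel against the same running instance
docker-compose run --rm integration &
docker-compose run --rm performance &
wait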
These days, I am trying to deploy my Spring Boot OAuth2 project. It has 3 different modules (Authentication Server, Resource Server and Front-end).
The Authentication and Resource servers have their own *.yml files for configuration such as the MongoDB name and port, server profile and IP, etc.
What am I trying to do exactly? I want to deploy the Spring Boot application on Docker, but I don't want to put my database (MongoDB) on Docker as a container.
I am not sure whether this structure is possible or not.
When I run MongoDB locally (localhost:27017) and then deploy the Spring Boot application on local Docker as a container, I get a timeout exception for MongoDB. The application couldn't connect to the external MongoDB (non-Docker container).
What should I do? Should I run MongoDB on Docker? I tried that as well; Mongo runs successfully, but the Spring container still couldn't run and connect to Mongo.
I tried running another Spring Boot app without MongoDB; it works successfully, and when I made a request from the browser by IP and port, I got the expected response from the application.
*** MONGO URL ****
mongodb://127.0.0.1:27017/db-localhost
**** Authentication server .yml file ****
server:
  port: 9080
  contextPath: /auth-service
  tomcat:
    access_log_enabled: true
    basedir: target/tomcat
security:
  basic:
    enabled: false
spring:
  profiles:
    active: development
  thymeleaf:
    cache: false
mongo:
  db:
    server: 127.0.0.1
    port: 27017
logging:
  level:
    org.springframework.security: DEBUG
---
spring:
  profiles: development
  data:
    mongodb:
      database: db-localhost
---
spring:
  profiles: production
  data:
    mongodb:
      database: db-prod
---
***** DOCKER FILE *******
FROM java:8
VOLUME /tmp
ADD auth-server-1.0-SNAPSHOT.jar app.jar
EXPOSE 9080
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
**** DOCKER COMMAND *******
docker run -it -P --name authserver authserver
The issue with your configuration is that you reference MongoDB from inside the auth service at 127.0.0.1, which is the loopback adapter inside the auth service container. So you are telling your Spring application that MongoDB runs in the same container as the auth service, which is not the case.
Either you run your database as its own container (which requires handling the data volumes correctly) and reference it using the container name as hostname (via a link), or you reference the externally running MongoDB instance with the correct address. That would be the IP address of the machine running the Docker daemon (for your local environment I assume something like 192.168.0.xxx).
Question: What should I do?
At least for development purposes I would recommend also using Docker for your MongoDB instance. I had a similar setup with RabbitMQ in addition, and it solved a lot of problems when I used Docker for those as well. Using docker-compose to set everything up makes it even easier. Later you can still specify which MongoDB instance you want to use through your Spring properties.
Problem: I tried it also, Mongo runs successfully but still spring container couldnt run and connect to mongo
The problem is probably that you have not set up any networks or hostnames for your services. Your Spring application cannot resolve the hostname of your Mongo server, since you specified 127.0.0.1 for your MongoDB server in your properties.
I would recommend using Docker for your MongoDB and using a docker-compose.yml file like this to set everything up:
version: '3.7'
services:
  resource-server:
    image: demo/resource-server:latest
    container_name: resource-server
    depends_on:
      - mongodb-example
    networks:
      - your-network
    ports:
      - 8080:8080
  auth-server:
    image: demo/auth-server:latest
    container_name: auth-server
    depends_on:
      - mongodb-example
    networks:
      - your-network
    ports:
      - 8081:8080
  mongodb-example:
    image: mongo:latest
    container_name: mongo-example
    hostname: mongo-example
    networks:
      - your-network
    ports:
      - 27017:27017
networks:
  your-network:
    name: network-name
Of course you then need to adapt your property files or specify environment variables through your docker-compose.yml file, for example as sketched below.
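A hedged sketch of that for the auth-server service, assuming Spring Boot's relaxed binding maps these environment variables onto the mongo.db.server and spring.data.mongodb.database keys from the .yml above (verify the names against your own configuration classes):

  auth-server:
    image: demo/auth-server:latest
    environment:
      - MONGO_DB_SERVER=mongodb-example            # overrides mongo.db.server (was 127.0.0.1)
      - SPRING_DATA_MONGODB_DATABASE=db-localhost  # overrides spring.data.mongodb.database
      - SPRING_PROFILES_ACTIVE=development
    networks:
      - your-network

With that in place the auth server resolves the database by the service name mongodb-example on the shared network instead of its own loopback address.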
I'm using a Dockerfile in combination with a docker-compose.yml to start two services:
My app service
A MongoDB service
My docker-compose.yml:
web:
  build: .
  ports:
    - "80:3000"
  environment:
    NODE_ENV: production
  links:
    - mongo
mongo:
  image: mongo
  command: --smallfiles
  ports:
    - "27017:27017"
I can't seem to figure out how to control access to the MongoDB container (like with the --auth flag), and how to have external access (say a GUI) using a username/password.
The two services get redeployed via Tutum by a webhook after a Docker Automated Build. In other words, I don't want to manually configure the database every time.
How do I control access a.k.a. set a root/admin user to secure my MongoDB database using the Dockerfile or the docker-compose.yml file?
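A minimal sketch of one possible approach, assuming the official mongo image's MONGO_INITDB_ROOT_USERNAME / MONGO_INITDB_ROOT_PASSWORD variables (which create a root user and enable authentication on first initialization); the credentials below are placeholders, and this is an assumption to verify against the image documentation rather than a confirmed fix for the Tutum setup described above:

mongo:
  image: mongo
  command: --auth --smallfiles
  environment:
    MONGO_INITDB_ROOT_USERNAME: admin        # hypothetical admin user
    MONGO_INITDB_ROOT_PASSWORD: changeme     # hypothetical password, better injected from the environment
  ports:
    - "27017:27017"

The web service (and any external GUI) would then have to connect with those credentials, e.g. via a mongodb://admin:changeme@mongo:27017/... connection string.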