How can I use docker compose to deploy to ECS

I'm trying to deploy a docker compose file to Amazon ECS.
I have created this docker compose file:
services:
  consul-server:
    container_name: consul-server
    hostname: consul-server
    image: consul:1.12.2
    command: agent -server -ui -node=server1 -bootstrap-expect=1 -client=0.0.0.0
    ports:
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - xp_network
  infra-service:
    image: infra-service-docker-img:latest
    build: .
    container_name: infra-service
    hostname: infra-service
    ports:
      - 5011:5011
    networks:
      - xp_network
networks:
  dxp_network:
    name: xp-vpc
I create an ECS context to target Amazon ECS using the following command:
docker context create ecs ecscontext
I have AWS credentials set up in my local environment for authenticating with the ECS platform (I ran aws configure and added the keys),
and when creating the context I chose an existing AWS profile. After I checked that the new context was created (docker context ls), I made sure I was using it.
Run --> docker compose up
When I do docker-compose up -d I can see Container infra-service Started and Container consul-server Started.
When I check the state of the services I cannot see any difference in the PORTS. There is no connection to AWS and no resources created there either :S
basically I did:
$ aws configure
--> keys
--> region
$ docker compose build
$ docker context create ecs ecscontext
--> An existing AWS profile
$ docker context use ecscontext
$ docker compose up
$ docker compose ps
PLEASE!!! Can anyone tell me what I'm doing wrong? Do you think it's something related to the credentials setup?

Related

Docker with postgresql in flask web application (part 2)

I am building a Flask application in Python. I'm using SQLAlchemy to connect to PostgreSQL.
In the flask application, I'm using this to connect SQLAlchemy to PostgreSQL
engine = create_engine('postgresql://postgres:[mypassword]#db:5432/employee-manager-db')
And this is my docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin
  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db
  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db
I can do "docker build -t employee-manager ." to build the image. However, when I do "docker run -p 5000:5000 employee-manager" to run the image, I get an error saying
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again
Does anybody know how to fix this? Thank you so much for your help
Your containers are on different networks and that is why they don't see each other.
When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.
When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
There are several ways to fix this, but this one will serve you in many other scenarios:
Run docker container ls and identify the name or ID of the db container that was started with docker-compose
Then run your container with:
# ID_or_name from the previous point
docker run -p 5000:5000 --network container:<ID_or_name> employee-manager
This attaches the new container to the same network as your database container.
Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
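As an illustration of that manual-network approach, a minimal sketch (the network name appnet is an assumption, not from the answer):
docker network create appnet
Then, in docker-compose.yml, make the default network point at the pre-created one:
version: '3.8'
services:
  db:
    image: postgres:14.5
    # ... rest of the db service as in the question
networks:
  default:
    external:
      name: appnet
After that, docker run --network appnet -p 5000:5000 employee-manager can resolve the database container by the hostname db.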
docker run doesn't read any of the information in the docker-compose.yml file, and it doesn't see things like the Docker network that Compose automatically creates.
In your case you already have the service fully defined in the docker-compose.yml file, so you can use Compose commands to build and restart it:
docker-compose build
docker-compose up -d # will delete and recreate changed containers
(If the name of the image is important to you – maybe you're pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I'd also avoid replacing the image's content with volumes:, since this misses any setup or modification that's done in the Dockerfile and it means you're running untested code if you ever deploy the image without the mount.)
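For instance, a trimmed-down backend service along those lines might look like this (a sketch only; the registry/image name is an assumption, not from the question):
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    image: registry.example.com/employee-manager:latest   # only needed if you push the built image
    ports:
      - 8000:8000
    depends_on:
      - db
Here links: and the .:/app volume are dropped, so the container runs exactly what was built into the image.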

How to access dockerized app under test in gitlab CI

I have a TestNG project with Selenium for integration testing of a frontend app in Vue.js and a Spring Boot backend. So in order to run tests I first need to bring up all dependent projects:
Spring Boot backend and MongoDB
Vue.js frontend app
Each project is in its own repo.
So I have created Docker images of the Spring Boot and frontend apps and will push them to the GitLab container registry.
Then in the TestNG project I plan to use docker-compose in .gitlab-ci.yml. Here is the docker-compose.yml for the TestNG project:
version: '3.7'
services:
  frontendapp:
    image: demo.app-frontend-selenium
    container_name: frontend-app-selenium
    depends_on:
      - demoapi
    ports:
      - 8080:80
  demoapi:
    image: demo.app-backend-selenium
    container_name: demo-api-selenium
    depends_on:
      - mongodb
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SCOUNT_API_ENDPOINTS_WEB_CORS_OPTIONS_ALLOWEDORIGINS=*
      - SPRING_DATA_MONGODB_HOST=mongodb
      - SPRING_DATA_MONGODB_DATABASE=demo-api-selenium
      - KEYCLOAK_AUTH-SERVER-URL=https://my-keycloak-url/auth
    ports:
      - 8082:80
  mongodb:
    image: mongo:4-bionic
    container_name: mongodb-selenium
    environment:
      MONGO_INITDB_DATABASE: demo-api-selenium
    ports:
      - 27017:27017
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
After running docker-compose in gitlab-ci.yml, what will be the URL of the frontend app in order to execute the tests?
When I do it locally I am using following urls for testing:
frontend app: http://localhost:8080
api: http://localhost:8082
But when running on GitLab CI, what will be the URLs to access the frontend and the API?
TL;DR: instead of using localhost you need to use the hostname of your docker daemon (docker:dind) service. If you set up docker-in-docker for your GitLab job per the usual setup, this is most likely docker.
So the URLs you need to use according to your compose file are:
frontend app: http://docker:8080
api: http://docker:8082
my_job:
  services:
    - name: docker:dind
      alias: docker # this is the hostname of the daemon
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: "tcp://docker:2375"
  image: docker:stable
  script:
    - docker run -d -p 8000:80 strm/helloworld-http
    - apk update && apk add curl # install curl and let server start
    - curl http://docker:8000 # use the daemon to reach your containers
For a full explanation of this, read on.
Docker port mapping in Gitlab CI vs locally
How it works locally
Normally, when you use docker-compose locally on your system, you are typically running the docker daemon on your localhost (e.g. using docker desktop).
When you provide a port mapping like 8080:80 it means to publish port 8080 on the daemon host bound to port 80 in the container. When running locally, that means you can reach the container via localhost.
In GitLab
However, when you're running docker-in-docker on GitLab CI the important difference in this environment is that the docker daemon is remote. So, when you expose ports through the docker API, the ports are exposed on the docker daemon host not locally in your job container.
Hence, you must use the hostname of the docker daemon, not localhost, to reach your started containers.
Alternative solutions
An alternative to this would be to conduct your testing inside the same docker network that you create with your compose stack. That way, your testing is agnostic of where the docker environment lives and can, for example, leverage the service aliases in your compose file (like frontendapp, demoapi, etc) instead of relying on published ports.
For example, you may choose to add a test container to your compose stack, as sketched below. Some testing libraries like Testcontainers can help set this up, too.
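A minimal sketch of that test-container idea, added under services: in the compose file from the question (the service name, image, and environment variables are assumptions, not from the answer). Because the test container sits on the same Compose network, it reaches the other services by their service names and container ports rather than the published ports:
  tests:
    image: demo.app-testng-selenium   # hypothetical image containing the TestNG suite
    depends_on:
      - frontendapp
      - demoapi
    environment:
      # reach services by service name on the Compose network
      - FRONTEND_URL=http://frontendapp:80
      - API_URL=http://demoapi:80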

How to communicate multiple containers with each other in Docker?

I'm trying to containerize my application. I use MongoDB and two more microservices.
As you can see in the docker compose file below, I have some problems.
My requirements:
main_image needs to connect to MongoDB.
gui_image needs to connect to MongoDB.
gui_image needs to show its GUI on port 8080 (Can use another port as well)
gui_image has to read and write to a file inside my computer.
MongoDB has to access a volume inside my computer.
main_image needs access to the internet.
Here are my questions:
1- Is exposing ports in the Dockerfile the same thing as in docker-compose?
2- How do I mount a volume to MongoDB as best practice?
3- How do I accomplish the requirements above, with the diagram below, in docker-compose?
Here is my docker-compose file:
version: "3"
services:
mongo:
image: mongo:latest
ports:
- 27017:27017
main_image:
build:
context: .
dockerfile: .\my_project\dockerfile
depends_on:
- mongo
gui_image:
build:
context: .
dockerfile: .\my_gui\dockerfile
ports:
- 8080:8080
- 27017:27017
depends_on:
- mongo
Here is my dockerfile under my_gui directory:
FROM continuumio/miniconda3
WORKDIR /app
COPY . .
RUN pip install dash
EXPOSE 8080
EXPOSE 27017
ENTRYPOINT [ "python","gui_script.py"]
And lastly, here is my dockerfile under my_project directory:
FROM continuumio/miniconda3
WORKDIR /app
COPY . .
EXPOSE 27017
ENTRYPOINT [ "python","main_script.py"]
1. The EXPOSE instruction in a Dockerfile informs Docker that the container listens on the specified network ports at runtime (like when using the docker run -p command).
However, using ports in Compose is a dynamic way of specifying these ports. So images like nginx or Apache, which are always supposed to run on port 80 inside the container, will use EXPOSE in the Dockerfile itself.
An image whose port is dynamic and may be controlled using an environment variable will instead publish it via docker run or the compose file:
some_webapi:
  environment:
    - ASPNETCORE_URLS=http://*:80
  build:
    context: .
    dockerfile: ./Dockerfile
2. As documented on the Docker Hub page for the mongo image (https://hub.docker.com/_/mongo/) you can use
volumes:
  - '/path/to/your/pc/folder:/path/inside/docker'
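For the mongo image specifically, the database files live in /data/db inside the container, so a bind mount could look like this (the host path is an assumption, not from the answer):
mongo:
  image: mongo:latest
  volumes:
    - ./mongo-data:/data/db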
3. And for the last question you might want to use Networking in Compose.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Services can join networks like this
gui_image:
  build:
    context: .
    dockerfile: .\my_gui\dockerfile
  ports:
    - 8080:8080
    - 27017:27017
  depends_on:
    - mongo
  networks:
    - gui
And you also have to define all the networks used by services at the top level of the compose file:
version: '3'
services:
  # ...
networks:
  gui:
After that, containers will be able to see each other even by their container_name, which you can define in services:
mongo:
  image: mongo:latest
  ports:
    - 27017:27017
  container_name: gui_mogno
Then you will be able to connect to mongo with a connection string like this: mongodb://gui_mogno:27017/
You can get more information about networking here

docker-compose external_link mongo network not reachable

I am having a strange situation where I cannot connect to my running mongo DB from my docker compose. My compose file:
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    external_links:
      - myname:mongo
    environment:
      - MONGO_URL=mongodb://myname:27017/test
I have found a few pieces of information on this, but none of them solved my issue. E.g. I tried:
1)
Create a custom network:
docker network create mongonet
Then start mongo with the --network mongonet flag and add to the compose:
networks:
  default:
    external:
      name: mongonet
Got nothing there either.
I looked into the /etc/hosts file of my compose container, and it did not list any DNS entry.
If I do a docker inspect, grab the mongo IP and add it to my compose, that is fine and works like a charm.
I start mongo like this:
docker run -d -p 27017:27017 -v ~/mongo_data:/data/db mongo
I am really rather confused as I believed this to be an out-of-the-box kind of thing. Strangely I can't make it work. I have found examples on internal links (vs external_link), but that does not work for me as I have many services that I would like to run like this and not all of them should run at the same time.
I start my docker compose as this:
docker-compose up --force-recreate
My versions are:
docker-compose version 1.17.1, build 6d101fb
Docker version 17.05.0-ce, build 89658be
My question: How do I successfully link a running mongo container as an external link into my application containers such that they can connect to them?
My docker PS:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5cf6e08d6fde mongo "docker-entrypoint..." About an hour ago Up About an hour 0.0.0.0:27017->27017/tcp gallant_feynman
Links are deprecated, use networks instead.
Notes:
If you’re using the version 2 or above file format, the
externally-created containers must be connected to at least one of the
same networks as the service which is linking to them. Links are a
legacy option. We recommend using networks instead.
The network way should work. I think you are missing some pieces. Make sure to give the mongo container a name, and make sure to attach the app container to the network in the compose file:
docker network create mongonet
docker run -d -p 27017:27017 --network mongonet --name mongo -v ~/mongo_data:/data/db mongo
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    environment:
      - MONGO_URL=mongodb://mongo:27017/test
    networks:
      - mongonet
networks:
  default:
    external:
      name: mongonet
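If it still does not connect, one quick check (a suggestion, not part of the original answer) is to confirm that both containers are actually attached to the network:
docker network inspect mongonet
Both the mongo container and the app container should show up in the "Containers" section of the output.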

Docker compose yml static IP addressing

I have this docker-compose.yml (not the full file here):
version: '2'
services:
  nginx:
    build: ./nginx/
    ports:
      - 8080:80
    links:
      - php
    volumes_from:
      - app
    networks:
      app_subnet:
        ipv4_address: 172.16.1.3
  php:
    build: ./php/
    expose:
      - 9000
    volumes_from:
      - app
    networks:
      app_subnet:
        ipv4_address: 172.16.1.4
networks:
  app_subnet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.1.0/24
          gateway: 172.16.1.1
After docker-compose up I get this error:
User specified IP address is supported only when connecting to
networks with user configured subnets
So I create the subnet with docker network create --gateway 172.16.1.1 --subnet 172.16.1.0/24 app_subnet
But this doesn't solve the problem because docker-compose creates the network with the name dev_app_subnet on the fly. My network is not used and I'm getting the same error.
The main purpose of doing this is to assign a static IP to the nginx service so I can open my project web URL via an /etc/hosts entry.
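For reference, such an /etc/hosts entry would simply map the static IP to a hostname of your choice (the hostname below is an assumption, not from the question):
172.16.1.3   myproject.local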
[SOLVED] Found the solution. When pointing to the network, we should use the "external" flag, telling Compose that the network is already created and should be taken from outside (otherwise it will be created on the fly with the project prefix):
networks:
  app_subnet:
    external: true
So after that, docker-compose will attach containers to the existing app_subnet.
Before that the subnet must be created:
docker network create --gateway 172.16.1.1 --subnet 172.16.1.0/24 app_subnet
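Putting the pieces together, a minimal sketch of the relevant parts (based on the file from the question, other services omitted) would be:
version: '2'
services:
  nginx:
    build: ./nginx/
    networks:
      app_subnet:
        ipv4_address: 172.16.1.3
networks:
  app_subnet:
    external: true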
In my case, the first time I ran docker-compose up it failed, but the network was already created, which you can see using docker network ls.
In this case, just run docker-compose down, fix the yml, and rerun docker-compose up.
It is probable that from a previous run of the script the network was already created, but without the subnet parameter.
To fix it, run
docker network ls
and remove the network that is blocking the creation of the service
docker network rm <network interface id>
Adding to Qiushi's answer, you can use compose file version 3.9 to specify the external network as below.
version: "3.9"
networks:
network1:
external: true
name: etl_subnet
Ref: https://docs.docker.com/compose/compose-file/compose-file-v3/#network-configuration-reference
To be specific, if you're using docker stack deploy to deploy to a swarm cluster, you need to specify the scope of the network as 'swarm':
docker network create --gateway 172.16.1.1 --subnet 172.16.1.0/24 --scope swarm app_subnet
In the docker-compose.yml, you need to specify the external network as:
networks:
  default:
    external:
      name: etl_subnet
Use it as default.
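For completeness, the stack would then typically be deployed with something like this (the stack name is an assumption):
docker stack deploy -c docker-compose.yml mystack
so that the services attach to the pre-created swarm-scoped network.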