ECS Fargate application container cannot establish connection with Postgres database container - postgresql

I am trying to use ecs-cli to push a two-container Docker Compose file up to ECS Fargate. This is for a preview environment only. The first container is postgres:12 and the second is hasura/graphql-engine:v1.3.3.
The docker-compose.yml looks like the following:
version: '3'
services:
  postgres:
    image: postgres:12
    ports:
      - "5432:5432"
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgrespassword
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-east-1
        awslogs-stream-prefix: postgres
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "80:80"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@127.0.0.1:5432/postgres
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-east-1
        awslogs-stream-prefix: hasura
volumes:
  db_data:
The ecs-params.yml looks like the following:
version: 1
task_definition:
  ecs_network_mode: awsvpc
  task_role_arn: "arn:aws:iam::***:role/ecsTaskExecutionRole"
  task_execution_role: "arn:aws:iam::***:role/ecsTaskExecutionRole"
  task_size:
    cpu_limit: "256"
    mem_limit: "512"
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-***"
        - "subnet-***"
      security_groups:
        - "sg-***"
      assign_public_ip: "ENABLED"
I am using the following command line call to trigger the push:
ecs-cli compose --file docker-compose.yml --ecs-params ecs-params.yml --debug service up --deployment-max-percent 100 --deployment-min-healthy-percent 0 --region us-east-1 --cluster "{ARN CLUSTER VALUE}" --create-log-groups --launch-type "FARGATE"
In ECS I can see the new service created, and its single Fargate task spins up. If I open the task, the containers move from PENDING -> RUNNING. After some time, the application container moves to STOPPED, and eventually the database container moves to STOPPED as well. Once this happens, the task stops and a new task goes through the same cycle.
Here is the log for the application container
Here is the log for the database container
In the docker-compose I have tried setting the environment variable for the PG database connection string to both postgres://postgres:postgrespassword@127.0.0.1:5432/postgres and postgres://postgres:postgrespassword@localhost:5432/postgres; both result in the same issue.
Any idea what might be going on here? This is inspired by this article: https://dev.to/raphaelmansuy/10-minutes-to-deploy-a-docker-compose-stack-on-aws-illustrated-with-hasura-and-postgres-3f6e
The only difference is that the article uses EC2, not Fargate.

Try adding
links:
  - postgres
to your graphql-engine service instead of depends_on, which doesn't seem to work with AWS ECS.
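For reference, a minimal sketch of how the suggested change would slot into the compose file above (this snippet is not part of the original answer):
graphql-engine:
  image: hasura/graphql-engine:v1.3.3
  links:
    - postgres
  # ports, restart, environment, and logging unchanged from the original file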

Related

Docker compose read connection reset by peer error on pipeline

When running a docker compose in a pipeline, I'm getting this error when the tests in the pipeline make use of mycontainer's API.
panic: Get "http://localhost:8080/api/admin/folder": read tcp 127.0.0.1:60066->127.0.0.1:8080: read: connection reset by peer [recovered]
panic: Get "http://localhost:8080/api/admin/folder": read tcp 127.0.0.1:60066->127.0.0.1:8080: read: connection reset by peer
This is my docker compose file:
version: "3"
volumes:
postgres_vol:
driver: local
driver_opts:
o: size=1000m
device: tmpfs
type: tmpfs
networks:
mynetwork:
driver: bridge
services:
postgres:
image: postgres:14
container_name: postgres
restart: always
environment:
- POSTGRES_USER=xxx
- POSTGRES_PASSWORD=xxx
- POSTGRES_DB=newdatabase
volumes:
#- ./postgres-init-db.sql:/docker-entrypoint-initdb.d/postgres-init-db.sql
- "postgres_vol:/var/lib/postgresql/data"
ports:
- 5432:5432
networks:
- mynetwork
mycontainer:
image: myprivaterepo/mycontainer-image:1.0.0
container_name: mycontainer
restart: always
environment:
- DATABASE_HOST=postgres
- DATABASE_PORT=5432
- DATABASE_NAME=newdatabase
- DATABASE_USERNAME=xxx
- DATABASE_PASSWORD=xxx
- DATABASE_SSL=false
depends_on:
- postgres
ports:
- 8080:8080
networks:
- mynetwork
mycontainer is listening on port 8080 and locally everything works fine.
However, I get the error when I run the pipeline that starts this docker compose.
Basically, I'm running some tests in the pipeline that make use of mycontainer API (http://localhost:8080/api/admin/folder).
If I run the docker compose locally and reproduce the steps followed in my pipeline to use the API, everything works fine. I can communicate locally with both containers through localhost.
Also, I tried using healthchecks on the containers and 127.0.0.1:8080:8080 on mycontainer & 127.0.0.1:5432:5432 in postgres (including 0.0.0.0:8080:8080 & 0.0.0.0:5342:5432 just in case).
Any idea about that?
I was able to reproduce your error in a pipeline.
Make sure that you are not caching anything (e.g. the code that interacts with your container's API).
You did not mention anything about your pipeline, but just in case, remove the caching from your pipeline.
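For reference, the healthcheck idea mentioned in the question is usually wired up along these lines. This is only a sketch: the service names and credentials are taken from the compose file above, and the condition form of depends_on requires a Compose version that supports it.
services:
  postgres:
    # ...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U xxx -d newdatabase"]
      interval: 5s
      timeout: 5s
      retries: 10
  mycontainer:
    # ...
    depends_on:
      postgres:
        condition: service_healthy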

Rundeck from docker: no logs?

I am running Rundeck from Docker (default backend), but noticed there are no logs. This documentation seems incomplete / not valid for a Docker deployment: https://docs.rundeck.com/docs/administration/maintenance/logs.html
All the logs inside the container at /home/rundeck/server/logs have 0 size.
How do I review the logs when running as a Docker container?
Thanks,
The execution logs are stored at the /home/rundeck/var/logs/rundeck path, so a good idea is to mount it as a volume (to see them in your local filesystem). Take a look at this docker-compose example:
version: '3'
services:
  rundeck:
    image: rundeck/rundeck:4.2.1
    environment:
      RUNDECK_GRAILS_URL: http://localhost:4440
      RUNDECK_DATABASE_DRIVER: org.mariadb.jdbc.Driver
      RUNDECK_DATABASE_USERNAME: rundeck
      RUNDECK_DATABASE_PASSWORD: rundeck
      RUNDECK_DATABASE_URL: jdbc:mariadb://mysql/rundeck?autoReconnect=true&useSSL=false&allowPublicKeyRetrieval=true
      RUNDECK_LOGGING_STRATEGY: FILE
    volumes:
      - ./data/logs/:/home/rundeck/var/logs/rundeck/
    ports:
      - 4440:4440
    tty: true
  mysql:
    image: mysql:8
    expose:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=rundeck
      - MYSQL_USER=rundeck
      - MYSQL_PASSWORD=rundeck
The service.log is available via the docker logs command; to see it, just run docker logs <container_id> -f.
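For example, with the compose file above (the container name is an assumption; adjust to whatever docker ps shows):
docker-compose up -d
docker logs <rundeck_container_id> -f   # follow service.log
ls ./data/logs/                         # execution logs mounted from /home/rundeck/var/logs/rundeck/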

How to use fluent-bit with Docker-compose

I want to use the fluent-bit docker image to help me persist the ephemeral docker container logs to a location on my host (and later use it to ship logs elsewhere).
I am facing issues such as:
Cannot start service clamav: failed to initialize logging driver: dial tcp 127.0.0.1:24224: connect: connection refused
I have read a number of posts, including configuring fluentbit with docker, but I'm still at a loss.
My docker-compose is made up of nginx, our app, keycloak, elasticsearch and clamav. I have added fluent-bit and made it start first via depends_on. I changed the other services to use the fluentd logging driver.
Part of config:
clamav:
  container_name: clamav-app
  image: tiredofit/clamav:latest
  restart: always
  volumes:
    - ./clamav/data:/data
    - ./clamav/logs:/logs
  environment:
    - ZABBIX_HOSTNAME=clamav-app
    - DEFINITIONS_UPDATE_FREQUENCY=60
  networks:
    - iris-network
  expose:
    - "3310"
  depends_on:
    - fluentbit
  logging:
    driver: fluentd
fluentbit:
  container_name: iris-fluent
  image: fluent/fluent-bit:latest
  restart: always
  networks:
    - iris-network
  volumes:
    - ./fluent-bit/etc:/fluent-bit/etc
  ports:
    - "24224:24224"
    - "24224:24224/udp"
I have tried to proxy_pass 24224 to fluentbit in nginx and start nginx first, and that avoided the error on clamav and es, but I get the same error with keycloak.
So how can I configure the service to use the host, or is it that localhost is not the "external" host?
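Not part of the original post, but a direction often suggested for this kind of "connection refused" error: the fluentd logging driver connects from the Docker daemon on the host, not from inside the compose network, so the driver is usually pointed at the port published on the host. A sketch:
clamav:
  logging:
    driver: fluentd
    options:
      fluentd-address: localhost:24224
      fluentd-async-connect: "true"   # don't fail container start if fluent-bit isn't up yet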

Get docker-compose up to only run certain containers

So I currently can use "docker-compose up test", which only runs my database and my testing scripts. I want to be able to use, say, "docker-compose up app" or something like that which runs everything besides testing. That way I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible I'd appreciate some links to projects that already do that, and I can figure out the rest. Basically, can I only run certain containers with a single command without running the others?
YAML
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command: npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which only runs the postgres container. But I'd like to be able to just run my app without having to run my testing container.
Edit
Thanks to @ideam for the link.
I was able to create an additional yaml file for just testing.
For those that don't want to look it up, simply create a new yaml file like so:
docker-compose.dev.yml
Replace dev with whatever you like, besides override, which docker-compose up automatically picks up unless otherwise specified.
To run the new file, simply call
docker-compose -f docker-compose.dev.yml up
The -f is a flag for selecting a certain file to run. You can run multiple files to have different environments set up.
Appreciate the help
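As an illustration, a minimal docker-compose.dev.yml built from the services above might look like this (the exact contents are a guess, since the original file isn't shown):
version: '3'
services:
  postgres:
    build: './database-creation'
    restart: 'always'
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    networks:
      - literate-net
  test:
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command: npm run mocha
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
    networks:
      - literate-net
networks:
  literate-net:
    driver: bridge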
docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also define multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note - what does "start the service and its dependencies" mean?
Usually your production services (containers) are attached to each other via the depends_on chain, so you can start only the last containers of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained via the depends_on option, while the testing container sits below the frontend. So when you run docker-compose up frontend, Docker will start fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
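For example, with the file above:
docker-compose up frontend   # starts fpm, backend and frontend, but not testing
docker-compose up testing    # starts the whole chain, including testing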
Starting with docker-compose 1.28.0, the new service profiles are made just for that! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up                  # start only your app services
docker-compose --profile test up   # start app and test services
docker-compose run test            # run test service
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations, have a look at https://docs.docker.com/compose/extends/#example-use-case, which explains how to combine multiple configuration files to reuse configs for different use cases (test, production, etc.).
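As a concrete illustration of that approach (the file names here are assumptions), a base file and an additional file can be combined on the command line:
docker-compose -f docker-compose.yml -f docker-compose.test.yml up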

Docker containers with volume mounting exits immediately on using docker-compose up

I am using the docker-compose up command to spin up a few containers on an AWS RHEL 7.6 AMI instance. I observe that whichever containers have a volume mount exit with status Exited(1) immediately after starting, while the remaining containers stay up. I tried using tty: true and stdin_open: true, but it didn't help. Surprisingly, the setup works fine on another instance, which is basically what I am trying to replicate on this new one.
The stopped containers are Fabric v1.2 peers, CAs and orderer.
The docker-compose.yml file, which is in the root folder where I run the docker-compose up command:
version: '2.1'
networks:
  gcsbc:
    name: gcsbc
services:
  ca.org1.example.com:
    extends:
      file: fabric/docker-compose.yml
      service: ca.org1.example.com
fabric/docker-compose.yml
version: '2.1'
networks:
  gcsbc:
services:
  ca.org1.example.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    ports:
      - '7054:7054'
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./artifacts/channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerorg1
    networks:
      - gcsbc
    hostname: ca.org1.example.com
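Not part of the original question, but when a container exits with status 1 like this, a typical first step is to check its logs and exit state; for the CA container above that could look like:
docker-compose logs ca.org1.example.com
docker inspect ca_peerorg1 --format '{{.State.ExitCode}} {{.State.Error}}'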