Setting up localstack resources in a docker-compose file results in a connection aborted failure - docker-compose

I have a docker compose file that looks like the following:
version: "3"
services:
localstack:
image: localstack/localstack:latest
ports:
- "4567-4597:4567-4597"
- "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- SERVICES=s3
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/private${TMPDIR}:/tmp/localstack"
networks:
- my_awesome_network
setup-resources:
image: mesosphere/aws-cli
volumes:
- ./dev_env:/project/dev_env
environment:
- AWS_ACCESS_KEY_ID=dummyaccess
- AWS_SECRET_ACCESS_KEY=dummysecret
- AWS_DEFAULT_REGION=us-east-1
entrypoint: /bin/sh -c
command: >
"
sleep 10;
# aws kinesis create-stream --endpoint-url=http://localstack:4568 --stream-name my_stream --shard-count 1;
aws --endpoint-url=http://localhost:4572 s3 mb s3://demo-bucket
"
networks:
- my_awesome_network
depends_on:
- localstack
networks:
my_awesome_network:
This was copied from a blog post I found, but when I run docker-compose up, the bucket fails to create with the following error: ('Connection aborted.', error(99, 'Address not available'))

I ran it with one small change and now it works correctly: change localhost to localstack.
version: "3"
services:
localstack:
image: localstack/localstack:latest
ports:
- '4568-4576:4568-4576'
- '8055:8080'
environment:
- SERVICES=s3
- DOCKER_HOST=unix:///var/run/docker.sock
- DEFAULT_REGION=us-east-1
- DEBUG=1
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/private${TMPDIR}:/tmp/localstack"
networks:
- my_awesome_network
setup-resources:
image: mesosphere/aws-cli
volumes:
- ./dev_env:/project/dev_env
environment:
- AWS_ACCESS_KEY_ID=dummyaccess
- AWS_SECRET_ACCESS_KEY=dummysecret
- AWS_DEFAULT_REGION=us-east-1
entrypoint: /bin/sh -c
command: >
"
sleep 10;
aws --endpoint-url=http://localstack:4572 s3 mb s3://demo-bucket
"
networks:
- my_awesome_network
depends_on:
- localstack
networks:
my_awesome_network:

A small but crucial detail: it should not be aws --endpoint-url=http://localhost:4572 s3 mb s3://demo-bucket; it should instead be aws --endpoint-url=http://localstack:4572 s3 mb s3://demo-bucket. That's right: localhost becomes localstack. Inside the Compose network, containers reach each other by service name, while localhost inside the setup-resources container refers to that container itself, which is why the connection is refused.
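From the host machine, by contrast, localhost does work, because the compose file publishes the port range. A quick sanity check from the host (a sketch, assuming the stack is up and the dummy credentials from the compose file are exported):
$ export AWS_ACCESS_KEY_ID=dummyaccess AWS_SECRET_ACCESS_KEY=dummysecret
$ aws --endpoint-url=http://localhost:4572 s3 ls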

Starting with version 0.11.0, all APIs are exposed via a single edge service, which is accessible on http://localhost:4566 by default (EDGE_PORT=4566). Found in this article.
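On LocalStack versions with the edge service, the per-service ports (4572 for S3, and so on) are deprecated, so the same bucket creation would target port 4566 instead (a sketch; the localstack hostname still assumes you run this from a container on the same Compose network):
aws --endpoint-url=http://localstack:4566 s3 mb s3://demo-bucket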

Related

How to pass environment variables in Compose (docker-compose)

There are multiple parts of Compose that deal with environment variables in one sense or another. So how do I pass environment variables in Compose (docker-compose)?
According to the documentation, if you have multiple environment variables, you can substitute them by adding them to a default environment variable file named .env, or by providing a path to your environment variables file using the --env-file command line option.
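For example, a minimal .env file placed next to the compose file might look like this (a sketch covering a subset of the variables referenced below; the values are placeholders):
DB_USER=app_user
DB_PASSWORD=app_secret
DB_DATABASE=app_db
Alternatively, point Compose at a specific file:
$ docker-compose --env-file ./config/.env.dev up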
version: '3.9'
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
  postgres:
    container_name: postgres
    image: postgres:13-alpine
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
    volumes:
      - ./pgdata:/var/lib/postgresql/data
      - ./database/app.sql:/docker-entrypoint-initdb.d/app.sql
    restart: always
    ports:
      - "35000:5432"
    networks:
      - app_network
  app-api:
    container_name: app-api
    build:
      dockerfile: Dockerfile
      context: ./app-api
      target: production
    environment:
      - DB_TYPE=${DATABASE_TYPE}
      - POSTGRES_HOST=${DB_HOST}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASS=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
      - POSTGRES_PORT=${DB_PORT}
      - APP_PORT=${SERVER_PORT}
      - NODE_ENV=production
      ## AWS
      - AWS_S3_ACCESS_KEY=${AWS_S3_ACCESS_KEY}
      - AWS_S3_SECRET_ACCESS_KEY=${AWS_S3_SECRET_ACCESS_KEY}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - AWS_S3_REGION=${AWS_S3_REGION}
    ports:
      - "5050:80"
    volumes:
      - ./pgadmin-data:/var/lib/pgadmin
    depends_on:
      - postgres
    links:
      - postgres
    networks:
      - app_network
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
      - PGADMIN_LISTEN_PORT=${PGADMIN_LISTEN_PORT}
    ports:
      - "5400:5400"
    depends_on:
      - postgres
    links:
      - postgres
    networks:
      - app_network
networks:
  app_network:
You can use it like this; you have to pass a value for each environment variable:
  server:
    environment:
      - AWS_S3_ACCESS_KEY=ABCJQHWEQJHWQ
      - AWS_S3_SECRET_ACCESS_KEY=ASKJHDAKJHNAWKLHEN
      - AWS_S3_BUCKET=abc-text
      - AWS_S3_REGION=eu-west-1
    ports:
      - "5050:80"
    volumes:
      - ./pgadmin-data:/var/lib/pgadmin
    depends_on:
      - postgres
    links:
      - postgres
    networks:
      - app_network
When you run docker-compose up, the services defined above use these resolved values. You can verify that you are passing the proper environment variables with the convert command, which prints your resolved application config to the terminal:
$ docker compose convert
https://docs.docker.com/compose/environment-variables/

Mongo Express to wait for MongoDB Cluster in Docker Compose

I'm trying to set up a local development environment with Docker Compose that has a MongoDB cluster as the database. I chose Mongo Express as the database admin user interface so I can check inside the MongoDB database. It does take some time for the cluster to accept connections; I have the 3 db containers in depends_on, but it seems I have to do more than that, based on the Docker Compose documentation here. I can't seem to find a good example for waiting for MongoDB clusters. Has anyone figured this out already? Please share, that would be great. Thank you in advance!
Here's the docker-compose.yml file:
version: '3.9'
services:
  mongodb-primary:
    image: 'bitnami/mongodb:latest'
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-primary
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=password
      - MONGODB_REPLICA_SET_KEY=replicasetkey
    volumes:
      - 'mongodb_master_data:/bitnami'
  mongodb-secondary:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-secondary
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
      - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password
      - MONGODB_REPLICA_SET_KEY=replicasetkey
  mongodb-arbiter:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-arbiter
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
      - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password
      - MONGODB_REPLICA_SET_KEY=replicasetkey
  dbadmin:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    depends_on:
      - mongodb-primary
      - mongodb-secondary
      - mongodb-arbiter
    environment:
      ME_CONFIG_MONGODB_URL: mongodb://root:password@mongodb-primary:27017,mongodb-secondary:27017,mongodb-arbiter:27017?replicaSet=replicaset
      ME_CONFIG_BASICAUTH_USERNAME: admin
      ME_CONFIG_BASICAUTH_PASSWORD: mexpress
volumes:
  mongodb_master_data:
    driver: local
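One common pattern for the waiting problem (a sketch, not from this thread; it assumes the Bitnami image ships a Mongo shell and that your Compose version supports the long depends_on syntax from the Compose Specification) is to give the primary a healthcheck and have mongo-express wait for it to become healthy:
  mongodb-primary:
    image: 'bitnami/mongodb:latest'
    healthcheck:
      # succeeds once mongod accepts connections
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 10
  dbadmin:
    image: mongo-express
    depends_on:
      mongodb-primary:
        condition: service_healthy
Older images may only ship the legacy mongo shell; in that case swap mongosh for mongo. The restart: always on the dbadmin service also papers over early connection failures by simply retrying until the cluster is up.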

Airflow 1.10: the scheduler does not appear to be running

I run Airflow on my local machine with docker-compose:
version: '2'
services:
  postgresql:
    image: bitnami/postgresql:10
    volumes:
      - postgresql_data:/bitnami/postgresql
    environment:
      - POSTGRESQL_DATABASE=bitnami_airflow
      - POSTGRESQL_USERNAME=bn_airflow
      - POSTGRESQL_PASSWORD=bitnami1
      - ALLOW_EMPTY_PASSWORD=yes
  redis:
    image: bitnami/redis:5.0
    volumes:
      - redis_data:/bitnami
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  airflow-scheduler:
    image: bitnami/airflow-scheduler:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_LOAD_EXAMPLES=no
    volumes:
      - airflow_scheduler_data:/bitnami
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
  airflow-worker:
    image: bitnami/airflow-worker:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_LOAD_EXAMPLES=no
    volumes:
      - airflow_worker_data:/bitnami
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
  airflow:
    image: bitnami/airflow:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_USERNAME=user
      - AIRFLOW_PASSWORD=password
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_LOAD_EXAMPLES=yes
    ports:
      - '8080:8080'
    volumes:
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
volumes:
  airflow_scheduler_data:
    driver: local
  airflow_worker_data:
    driver: local
  airflow_data:
    driver: local
  postgresql_data:
    driver: local
  redis_data:
    driver: local
But when I sign in to the UI, I see:
"The scheduler does not appear to be running.
The DAGs list may not update, and new tasks will not be scheduled."
Why? I use official Docker images, and there should be no problem with this.
Another problem: unless I toggle AIRFLOW_LOAD_EXAMPLES between yes and no and restart docker-compose, I don't see an updated list of DAGs.
When I used Puckel's docker-compose for Airflow 1, everything worked: https://github.com/puckel/docker-airflow/blob/master/README.md
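A reasonable first diagnostic (a general suggestion, not specific to the Bitnami images) is to confirm that the scheduler container is actually running and to read its logs, since the UI banner only means the scheduler has not recently written a heartbeat into the shared metadata database:
$ docker-compose ps airflow-scheduler
$ docker-compose logs --tail=100 airflow-scheduler
If the scheduler is crash-looping on a database or Redis connection error, the banner in the webserver is just the symptom.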

Airflow via docker-compose keeps trying to access sqlite although postgres configured

I am trying to set up a Dockerized Airflow instance, but whatever I do (so far..) it keeps trying to access some sqlite3 database, and I do not know where that instruction comes from. I point to the Postgres instance everywhere (deemed) possible through AIRFLOW__CORE__SQL_ALCHEMY_CONN, and even AIRFLOW_CONN_METADATA_DB.
A typical error message when starting up is like:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: job
Full docker-compose.yml:
version: '3'
x-airflow-common:
  &airflow-common
  image: apache/airflow:2.0.0
  environment:
    - AIRFLOW__CORE__EXECUTOR=LocalExecutor
    - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW_CONN_METADATA_DB=postgres+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW__CORE__FERNET_KEY=FB0o_zt4e3Ziq3LdUUO7F2Z95cvFFx16hU8jTeR1ASM=
    - AIRFLOW__CORE__LOAD_EXAMPLES=True
    - AIRFLOW__CORE__LOGGING_LEVEL=INFO
  volumes:
    - /home/x/docker/airflow/dags:/opt/airflow/dags
    - /home/x/docker/airflow/airflow-data/logs:/opt/airflow/logs
    - /home/x/docker/airflow/airflow-data/plugins:/opt/airflow/plugins
    - /home/x/docker/airflow/airflow-data/airflow.cfg:/opt/airlfow/airflow.cfg
  depends_on:
    - db
services:
  db:
    image: postgres:12
    #image: postgres:12.1-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=airflow
      - POSTGRES_PORT=9501
      - POSTGRES_HOST_AUTH_METHOD=trust
    ports:
      - 9501:9501
    command:
      - -p 9501
  airflow-init:
    << : *airflow-common
    container_name: airflow_init
    entrypoint: /bin/bash
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    command:
      - -c
      - airflow users list || ( airflow db init &&
        airflow users create
          --role Admin
          --username airflow
          --password airflow
          --email airflow@airflow.com
          --firstname airflow
          --lastname airflow )
    restart: on-failure
  airflow-webserver:
    << : *airflow-common
    command: airflow webserver
    ports:
      - 9500:8080
    container_name: airflow_webserver
    environment:
      - AIRFLOW_USERNAME=airflow
      - AIRFLOW_PASSWORD=airflow
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
  airflow-scheduler:
    << : *airflow-common
    command: airflow scheduler
    container_name: airflow_scheduler
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
Solved by following this docker-compose.yaml file:
https://github.com/apache/airflow/blob/master/docs/apache-airflow/start/docker-compose.yaml
Instead of trying to tweak the ports of postgres (and redis), I used the "expose" option, which avoids conflicts with other containers on the same host.
So not:
  environment:
    POSTGRES_PORT: 9501
  ports:
    - 9501:9501
but instead: run it internally with the default port and do not publish it externally:
  expose:
    - 5432
Still not sure what the problem was with using the higher ports. There may be some default fallback to sqlite when the configured DB cannot be reached for some reason.
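To check which database Airflow actually resolved at runtime, you can ask the CLI inside a running container (a sketch; the service name matches the compose files above):
$ docker-compose exec airflow-webserver airflow config get-value core sql_alchemy_conn
If this prints a sqlite:/// URL, the environment variable never reached the process, for example because of a typo in the variable name (Airflow expects the AIRFLOW__CORE__ prefix with double underscores, which the bare SQL_ALCHEMY_CONN above lacks) or a mounted airflow.cfg shadowing it.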

docker-compose.yml for Bitnami Apache, MariaDB, PrestaShop and PHPMyAdmin is not working correctly

My 1st goal is to write a docker-compose.yml file with the following:
1 docker for the MariaDB server
1 docker for the PrestaShop-1.7 server
1 docker for the PHPMyAdmin server
Can you please help me get it working correctly?
Then, my 2nd goal is to set passwords and disallow the "no password" option.
Kind regards,
Arnaud.
I'm using the Bitnami Docker images, so I've started with the following file:
version: "3"
networks:
prestashop-network:
driver: bridge
services:
mariadb:
image: 'bitnami/mariadb:10.3'
environment:
- MARIADB_USER=bn_prestashop
- MARIADB_DATABASE=bitnami_prestashop
- ALLOW_EMPTY_PASSWORD=yes
networks:
- prestashop-network
volumes:
- 'mariadb_data:/bitnami'
ports:
- 3307:3306
phpmyadmin:
image: bitnami/phpmyadmin:latest
volumes:
- 'phpmyadmin_data:/bitnami'
depends_on:
- mariadb
ports:
- 81:80
environment:
- PHPMYADMIN_ALLOW_NO_PASSWORD=true
networks:
- prestashop-network
prestashop_1.7:
image: 'bitnami/prestashop:1.7'
volumes:
- 'prestashop_data:/bitnami'
- ./docker/prestashop/custom-php.ini:/usr/local/etc/php/conf.d/custom.ini
- ./docker/prestashop/phpinfo.php:/var/www/html/phpinfo.php
depends_on:
- mariadb
ports:
- 8085:80
- 8086:443
environment:
- PRESTASHOP_FIRST_NAME=Toto
- PRESTASHOP_LAST_NAME=FAMILLE
- PRESTASHOP_PASSWORD=bitnami1
- PRESTASHOP_EMAIL=user#example.com
- PRESTASHOP_HOST=localhost
- PRESTASHOP_COUNTRY=fr
- PRESTASHOP_LANGUAGE=fr
- MARIADB_HOST=mariadb
- MARIADB_PORT_NUMBER=3306
- PRESTASHOP_DATABASE_USER=bn_prestashop
- PRESTASHOP_DATABASE_NAME=bitnami_prestashop
- PRESTASHOP_DATABASE_PASSWORD=bitnami1
- ALLOW_EMPTY_PASSWORD=yes
- MARIADB_ROOT_USER=root
- MARIADB_ROOT_PASSWORD=
- MYSQL_CLIENT_CREATE_DATABASE_NAME=bitnami_prestashop
- MYSQL_CLIENT_CREATE_DATABASE_USER=bn_prestashop
- SMTP_HOST=smtp.gmail.com
- SMTP_PORT=587
- SMTP_PROTOCOL=tls
- SMTP_USER=your_email#gmail.com
- SMTP_PASSWORD=your_password
networks:
- prestashop-network
volumes:
mariadb_data:
driver: local
prestashop_data:
driver: local
phpmyadmin_data:
driver: local
For information, I use Mac OS X Mojave with the following docker tools version:
$ docker-compose version
docker-compose version 1.24.1, build 4667896b
docker-py version: 3.7.3
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018
When I launch with the following command:
docker-compose up
Then the different images are downloaded and started.
When I access the PhpMyAdmin instance at http://localhost:81, I can reach it correctly using root and no password.
I get two major problems:
The 'prestashop' database is created but empty.
When I try to access the PrestaShop instance at http://localhost:8085, I get a 500 error.
When trying your docker-compose file, I got these errors:
mariadb_1 | 2019-08-15 9:28:47 13 [Warning] Access denied for user 'bn_prestashop'@'192.168.48.4' (using password: YES)
prestashop_1.7_1 | mysql-c ERROR [canConnect] Connection with 'bn_prestashop' user is unsuccessful
You need to set up the user password in the mariadb container too.
This docker-compose file worked for me; maybe you can build up from here.
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:10.1'
    environment:
      - MARIADB_USER=bn_prestashop
      - MARIADB_DATABASE=bitnami_prestashop
      - MARIADB_PASSWORD=my_passwd
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami'
  prestashop:
    image: 'bitnami/prestashop:1.7'
    environment:
      - MARIADB_HOST=mariadb
      - MARIADB_PORT_NUMBER=3306
      - PRESTASHOP_DATABASE_USER=bn_prestashop
      - PRESTASHOP_DATABASE_NAME=bitnami_prestashop
      - PRESTASHOP_DATABASE_PASSWORD=my_passwd
      - ALLOW_EMPTY_PASSWORD=yes
      - PRESTASHOP_FIRST_NAME=Toto
      - PRESTASHOP_LAST_NAME=FAMILLE
      - PRESTASHOP_PASSWORD=bitnami1
      - PRESTASHOP_EMAIL=user@example.com
      - PRESTASHOP_HOST=localhost
      - PRESTASHOP_COUNTRY=fr
      - PRESTASHOP_LANGUAGE=fr
      - SMTP_HOST=smtp.gmail.com
      - SMTP_PORT=587
      - SMTP_PROTOCOL=tls
      - SMTP_USER=your_email@gmail.com
      - SMTP_PASSWORD=your_password
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - 'prestashop_data:/bitnami'
    depends_on:
      - mariadb
  phpmyadmin:
    image: 'bitnami/phpmyadmin:4'
    ports:
      - '8080:80'
      - '8443:443'
    depends_on:
      - mariadb
    volumes:
      - 'phpmyadmin_data:/bitnami'
volumes:
  mariadb_data:
    driver: local
  prestashop_data:
    driver: local
  phpmyadmin_data:
    driver: local
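For the second goal (disallowing empty passwords), a sketch based on Bitnami's documented environment variables: drop ALLOW_EMPTY_PASSWORD, set an explicit root password on the mariadb service, and pass matching credentials to the PrestaShop container so its setup client can connect (all password values here are placeholders):
  mariadb:
    image: 'bitnami/mariadb:10.1'
    environment:
      - MARIADB_ROOT_PASSWORD=root_passwd   # placeholder
      - MARIADB_USER=bn_prestashop
      - MARIADB_PASSWORD=my_passwd          # placeholder
      - MARIADB_DATABASE=bitnami_prestashop
  prestashop:
    image: 'bitnami/prestashop:1.7'
    environment:
      - MARIADB_ROOT_PASSWORD=root_passwd   # must match the mariadb service
      - PRESTASHOP_DATABASE_USER=bn_prestashop
      - PRESTASHOP_DATABASE_PASSWORD=my_passwd
      - PRESTASHOP_DATABASE_NAME=bitnami_prestashop
For phpmyadmin, the analogous step would be removing PHPMYADMIN_ALLOW_NO_PASSWORD=true so the login form requires the database credentials.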