Found out about appwrite.io and I really like the features Appwrite offers. It's similar to Firebase, but open source.
I'm trying to make Appwrite work with Python/FastAPI.
Below is the folder structure of the project. The api folder will contain all the additional logic. The Dockerfile is taken from the uvicorn-gunicorn-fastapi-docker repo.
project
├── docker-compose.yml
└── app
    ├── api
    ├── Dockerfile
    ├── main.py
    └── requirements.txt
In the docker-compose file I added the app service which starts FastAPI.
version: '3'
services:
  app:
    build: ./app
    ports:
      - 3000:3000
    restart: unless-stopped
    volumes:
      - ./app:/usr/src/app
  traefik:
    image: traefik:v2.1.4
    command:
      - --providers.file.directory=/storage/config
      - --providers.file.watch=true
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
    restart: unless-stopped
    ports:
      - 5000:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - appwrite-config:/storage/config:ro
      - appwrite-certificates:/storage/certificates:ro
    depends_on:
      - appwrite
    networks:
      - gateway
      - appwrite
  appwrite:
    image: appwrite/appwrite:0.6.2
    restart: unless-stopped
    networks:
      - appwrite
    labels:
      - traefik.http.routers.appwrite.rule=PathPrefix(`/`)
      - traefik.http.routers.appwrite-secure.rule=PathPrefix(`/`)
      - traefik.http.routers.appwrite-secure.tls=true
    volumes:
      - appwrite-uploads:/storage/uploads:rw
      - appwrite-cache:/storage/cache:rw
      - appwrite-config:/storage/config:rw
      - appwrite-certificates:/storage/certificates:rw
    depends_on:
      - mariadb
      - redis
      - smtp
      - influxdb
      - telegraf
    environment:
      - _APP_ENV=production
      - _APP_OPENSSL_KEY_V1=your-secret-key
      - _APP_DOMAIN=localhost
      - _APP_DOMAIN_TARGET=localhost
      - _APP_REDIS_HOST=redis
      - _APP_REDIS_PORT=6379
      - _APP_DB_HOST=mariadb
      - _APP_DB_PORT=3306
      - _APP_DB_SCHEMA=appwrite
      - _APP_DB_USER=user
      - _APP_DB_PASS=password
      - _APP_INFLUXDB_HOST=influxdb
      - _APP_INFLUXDB_PORT=8086
      - _APP_STATSD_HOST=telegraf
      - _APP_STATSD_PORT=8125
      - _APP_SMTP_HOST=smtp
      - _APP_SMTP_PORT=25
  mariadb:
    image: appwrite/mariadb:1.0.3
    restart: unless-stopped
    networks:
      - appwrite
    volumes:
      - appwrite-mariadb:/var/lib/mysql:rw
    environment:
      - MYSQL_ROOT_PASSWORD=rootsecretpassword
      - MYSQL_DATABASE=appwrite
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
    command: 'mysqld --innodb-flush-method=fsync'
  smtp:
    image: appwrite/smtp:1.0.1
    restart: unless-stopped
    networks:
      - appwrite
    environment:
      - MAILNAME=appwrite
      - RELAY_NETWORKS=:192.168.0.0/24:10.0.0.0/16
  redis:
    image: redis:5.0
    restart: unless-stopped
    networks:
      - appwrite
    volumes:
      - appwrite-redis:/data:rw
  influxdb:
    image: influxdb:1.6
    restart: unless-stopped
    networks:
      - appwrite
    volumes:
      - appwrite-influxdb:/var/lib/influxdb:rw
  telegraf:
    image: appwrite/telegraf:1.0.0
    restart: unless-stopped
    networks:
      - appwrite

networks:
  gateway:
  appwrite:

volumes:
  appwrite-mariadb:
  appwrite-redis:
  appwrite-cache:
  appwrite-uploads:
  appwrite-certificates:
  appwrite-influxdb:
  appwrite-config:
I tried docker-compose links and networks to connect the app service to appwrite, but neither worked.
Below is the error I get when I try to use the Python Appwrite SDK.
{"result":"HTTPConnectionPool(host='localhost', port=5000): Max retries exceeded with url: /v1/database/collections (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5517795d60>: Failed to establish a new connection: [Errno 111] Connection refused'))"}
Adding custom Docker containers to Appwrite: https://dev.to/streamlux/adding-custom-docker-containers-to-appwrite-2chp
See also: Learn How to Run Appwrite With Your Own Custom Proxy or Load Balancer: https://dev.to/appwrite/learn-how-to-run-appwrite-with-your-own-custom-proxy-or-load-balancer-28k
Related
There are multiple parts of Compose that deal with environment variables in one sense or another, so how do I pass environment variables in Compose (docker-compose)?
According to the documentation, if you have multiple environment variables, you can substitute them by adding them to a default environment variable file named .env or by providing the path to your environment variables file using the --env-file command-line option.
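As a minimal sketch, a .env file placed next to the docker-compose.yml might look like this (every name and value below is an assumption, matching the variables referenced in the file that follows):

# .env (assumed values)
DB_USER=app
DB_PASSWORD=changeme
DB_DATABASE=appdb
DB_HOST=postgres
DB_PORT=5432
DATABASE_TYPE=postgres
SERVER_PORT=80
PGADMIN_DEFAULT_EMAIL=admin@example.com
PGADMIN_DEFAULT_PASSWORD=changeme
PGADMIN_LISTEN_PORT=5400

Compose substitutes ${DB_USER} and friends from this file when you run docker-compose up.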
version: '3.9'
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
  postgres:
    container_name: postgres
    image: postgres:13-alpine
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
    volumes:
      - ./pgdata:/var/lib/postgresql/data
      - ./database/app.sql:/docker-entrypoint-initdb.d/app.sql
    restart: always
    ports:
      - "35000:5432"
    networks:
      - app_network
  app-api:
    container_name: app-api
    build:
      dockerfile: Dockerfile
      context: ./app-api
      target: production
    environment:
      - DB_TYPE=${DATABASE_TYPE}
      - POSTGRES_HOST=${DB_HOST}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASS=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
      - POSTGRES_PORT=${DB_PORT}
      - APP_PORT=${SERVER_PORT}
      - NODE_ENV=production
      ## AWS
      - AWS_S3_ACCESS_KEY=${AWS_S3_ACCESS_KEY}
      - AWS_S3_SECRET_ACCESS_KEY=${AWS_S3_SECRET_ACCESS_KEY}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - AWS_S3_REGION=${AWS_S3_REGION}
    ports:
      - "5050:80"
    volumes:
      - ./pgadmin-data:/var/lib/pgadmin
    depends_on:
      - postgres
    links:
      - postgres
    networks:
      - app_network
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
      - PGADMIN_LISTEN_PORT=${PGADMIN_LISTEN_PORT}
    ports:
      - "5400:5400"
    depends_on:
      - postgres
    links:
      - postgres
    networks:
      - app_network

networks:
  app_network:   # network used by postgres, app-api and pgadmin above
You can use them like this; you have to pass a value for each environment variable:
server:
  environment:
    - AWS_S3_ACCESS_KEY=ABCJQHWEQJHWQ
    - AWS_S3_SECRET_ACCESS_KEY=ASKJHDAKJHNAWKLHEN
    - AWS_S3_BUCKET=abc-text
    - AWS_S3_REGION=eu-west-1
  ports:
    - "5050:80"
  volumes:
    - ./pgadmin-data:/var/lib/pgadmin
  depends_on:
    - postgres
  links:
    - postgres
  networks:
    - app_network
When you run docker-compose up, the services defined above are started with the substituted values. You can verify that you are passing the proper environment variables with the convert command, which prints your resolved application config to the terminal (add --env-file <path> if your variables are not in the default .env file):
$ docker compose convert
https://docs.docker.com/compose/environment-variables/
I'm working with docker-compose.yml files and have done so for about three years now.
In my solution, I have six containers that reside on one network, defined as a bridge. One of the containers must have access to a specific internet host, which it does without any issue. I'll call this the "original" solution.
My next development task is to replicate the original solution on my workstation, ensuring that the replica can't interact with the original. My understanding is that I simply create another network, define it as a "bridge", and I should be good to go.
The issue is that the one container in my replica solution is unable to resolve a cURL call to the internet host. The code is nigh on identical between original and replica (I've added curl debugging to the replica, which is how I caught the unresolved-host error), and the only difference I can see is that it's on its own network (but it's bridged, so it should still be able to reach the internet host).
The YAML files that I'm using are as follows:
original-docker-compose.yml
version: '3.8'
services:
  console-mysql:
    container_name: console-mysql
    image: PRIVATE_ECR_ADDRESS/mysql:5.7.34
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - "7306:3306"
    volumes:
      - ./perImageFiles/console-mysql/db:/var/lib/mysql
      - ./perImageFiles/console-mysql/seed:/docker-entrypoint-initdb.d
    env_file:
      - ./.env
    networks:
      - dev-vlt-console
  console-www:
    image: PRIVATE_ECR_ADDRESS/bnoe-console:latest
    container_name: console-www
    env_file:
      - ./.env
    ports:
      - 7080:80
    volumes:
      - ../../console-www:/var/www/secure
      - ./perImageFiles/console/console-nginx.conf:/etc/nginx/conf.d/default.conf
      - ./perImageFiles/console/nginx.conf:/etc/nginx/nginx.conf
      - ./perImageFiles/console/logs-nginx:/var/log/nginx/
      - ./perImageFiles/console/console-www-entrypoint.sh:/console-www-entrypoint.sh
    depends_on:
      - console-www-php
    networks:
      - dev-vlt-console
  console-proc:
    container_name: console-proc
    image: PRIVATE_ECR_ADDRESS/bnoe-processor:latest
    env_file:
      - ./.env
    ports:
      - 7082:9001
    volumes:
      - ../../console-www:/var/www/secure
      - ./perImageFiles/proc/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ../../console-www/scripts/docker/console-proc-docker-run.sh:/console-proc-docker-run.sh
    entrypoint: /console-proc-docker-run.sh
    depends_on:
      - console-mysql
    networks:
      - dev-vlt-console
  console-www-php:
    container_name: console-www-php
    image: PRIVATE_ECR_ADDRESS/bnoe-console-php:latest
    volumes:
      - ../../console-www:/var/www/secure
      - ./perImageFiles/console/php-log.conf:/usr/local/etc/php-fpm.d/zz-log.conf
      - ./perImageFiles/console/www-php.ini:/usr/local/etc/php/conf.d/www-php.ini
      - ./perImageFiles/console/console-php-entrypoint.sh:/console-php-entrypoint.sh
    env_file:
      - ./.env
      - ./.setupenv
    entrypoint: /console-php-entrypoint.sh
    depends_on:
      - console-mysql
    networks:
      - dev-vlt-console
  console-dicom:
    image: jodogne/orthanc-plugins:1.9.7
    container_name: console-dicom
    depends_on: [console-mysql]
    ports: [7084:8042, 10401:10401]
    volumes:
      - ./perImageFiles/dicom/orthanc.json:/etc/orthanc/orthanc.json
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ./perImageFiles/dicom/plugins:/usr/share/orthanc/plugins
      - ./perImageFiles/dicom/lua-scripts:/usr/share/orthanc/lua-scripts
    env_file:
      - ./.env
    networks:
      - dev-vlt-console
  console-vpacs:
    image: PRIVATE_ECR_ADDRESS/vpacs:latest
    container_name: console-vpacs
    depends_on: [console-mysql]
    ports: ["7085:8042"]
    secrets:
      - vpacs-orthanc.json
    networks:
      - dev-vlt-console

secrets:
  console-orthanc.json:
    file: ./perImageFiles/dicom/orthanc.json
  vpacs-orthanc.json:
    file: ./perImageFiles/vpacs/orthanc.json

networks:
  dev-vlt-console:
    name: dev-vlt-console
    driver: bridge
replica-docker-compose.yml
version: '3.8'
services:
  console-mysql-dev3:
    container_name: console-mysql-dev3
    image: PRIVATE_ECR_ADDRESS/mysql:5.7.34
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - "60001:3306"
    volumes:
      - ./perImageFiles/console-mysql/db:/var/lib/mysql
      - ./perImageFiles/console-mysql/seed:/docker-entrypoint-initdb.d
    env_file:
      - ./.env
    networks:
      - dev-vlt-console-dev3
  console-www-dev3:
    image: PRIVATE_ECR_ADDRESS/bnoe-console:latest
    container_name: console-www-dev3
    env_file:
      - ./.env
    ports:
      - 60002:80
    volumes:
      - ../console-www:/var/www/secure
      - ./perImageFiles/console/console-nginx.conf:/etc/nginx/conf.d/default.conf
      - ./perImageFiles/console/nginx.conf:/etc/nginx/nginx.conf
      - ./perImageFiles/console/logs-nginx:/var/log/nginx/
      - ./perImageFiles/console/console-www-entrypoint.sh:/console-www-entrypoint.sh
    depends_on:
      - console-www-php-dev3
    networks:
      - dev-vlt-console-dev3
  console-proc-dev3:
    container_name: console-proc-dev3
    image: PRIVATE_ECR_ADDRESS/bnoe-processor:latest
    env_file:
      - ./.env
    ports:
      - 60003:9001
    volumes:
      - ../console-www:/var/www/secure
      - ./perImageFiles/proc/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ../console-www/scripts/docker/console-proc-docker-run.sh:/console-proc-docker-run.sh
    entrypoint: /console-proc-docker-run.sh
    depends_on:
      - console-mysql-dev3
    networks:
      - dev-vlt-console-dev3
  console-www-php-dev3:
    container_name: console-www-php-dev3
    image: PRIVATE_ECR_ADDRESS/bnoe-console-php:latest
    volumes:
      - ../console-www:/var/www/secure
      - ./perImageFiles/console/php-log.conf:/usr/local/etc/php-fpm.d/zz-log.conf
      - ./perImageFiles/console/www-php.ini:/usr/local/etc/php/conf.d/www-php.ini
      - ./perImageFiles/console/console-php-entrypoint.sh:/console-php-entrypoint.sh
    env_file:
      - ./.env
      - ./.setupenv
    entrypoint: /console-php-entrypoint.sh
    depends_on:
      - console-mysql-dev3
    networks:
      - dev-vlt-console-dev3
  console-dicom-dev3:
    image: jodogne/orthanc-plugins:1.9.7
    container_name: console-dicom-dev3
    depends_on: [console-mysql-dev3]
    ports: [60004:8042, 60005:10401]
    volumes:
      - ./perImageFiles/dicom/orthanc.json:/etc/orthanc/orthanc.json
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ./perImageFiles/dicom/plugins:/usr/share/orthanc/plugins
      - ./perImageFiles/dicom/lua-scripts:/usr/share/orthanc/lua-scripts
    env_file:
      - ./.env
    networks:
      - dev-vlt-console-dev3
  console-vpacs-dev3:
    image: PRIVATE_ECR_ADDRESS/vpacs:latest
    container_name: console-vpacs-dev3
    depends_on: [console-mysql-dev3]
    ports: ["60006:8042"]
    secrets:
      - vpacs-orthanc-dev3.json
    networks:
      - dev-vlt-console-dev3

secrets:
  console-orthanc-dev3.json:
    file: ./perImageFiles/dicom/orthanc.json
  vpacs-orthanc-dev3.json:
    file: ./perImageFiles/vpacs/orthanc.json

networks:
  dev-vlt-console-dev3:
    name: dev-vlt-console-dev3
    driver: bridge
Am I missing something here? I've added tens of other containers, all with their own networks configured as "bridge", and they are all able to access the internet without issue.
I've read posts from five years ago about this, and the only resolution that seemed to work was rebooting the Docker host - which I've done, but it didn't help.
Any thoughts / comments?!
Thanks
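One thing worth checking when many bridge networks coexist on one host: Docker carves each new bridge network's subnet out of its default address pools, and a new network can land on a range that overlaps an existing route (the host LAN, a VPN, or another bridge), which breaks outbound traffic for just that network. A sketch of pinning the replica network to a known-free range (172.29.0.0/24 is an assumption; compare it against docker network inspect output on your host):

networks:
  dev-vlt-console-dev3:
    name: dev-vlt-console-dev3
    driver: bridge
    ipam:
      config:
        - subnet: 172.29.0.0/24   # assumed free; pick a range no other network or route uses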
I have a VM with Docker containers in the cloud.
It has two containers: wireguard and redmine.
I use LDAP authorization in Redmine.
The LDAP server is located in a private LAN (behind NAT), and I have a VPN to this LAN via WireGuard.
I need to add a route in the Redmine container so that Redmine can access the private LAN through the WireGuard container.
Right now I do it by hand after the containers start: docker-compose exec redmine ip route add 192.168.42.0/23 via 172.20.0.50
Could you advise me how to implement this in my pipeline?
P.S. The redmine container already has entrypoint and cmd directives in its Dockerfile.
version: '3.9'
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    volumes:
      - ./wireguard-config:/config
      - /lib/modules:/lib/modules
    networks:
      default:
        ipv4_address: 172.20.0.50
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1 # for clients mode
    restart: unless-stopped
  postgres:
    image: postgres:14.2-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_PASSWORD=MySUperSecret'
      - 'POSTGRES_DB=redmine'
  redmine:
    image: redmine:5.0.1-alpine
    cap_add:
      - NET_ADMIN
    volumes:
      - redmine-files:/usr/src/redmine/files
      - ./redmine-plugins:/usr/src/redmine/plugins
      - ./configuration.yml:/usr/src/redmine/config/configuration.yml
    ports:
      - 80:3000
    depends_on:
      - postgres
    environment:
      - 'REDMINE_DB_POSTGRES=postgres'
      - 'REDMINE_DB_DATABASE=redmine'
      - 'REDMINE_DB_PASSWORD=MySUperSecret'
      - 'REDMINE_PLUGINS_MIGRATE=true'
    restart: unless-stopped

networks:
  default:
    ipam:
      config:
        - subnet: 172.20.0.0/24

volumes:
  postgres-data:
  redmine-files:
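One way to automate the route without touching the image is to override the entrypoint in the compose file. A sketch, assuming the official redmine image's stock /docker-entrypoint.sh and its default rails server -b 0.0.0.0 command (verify both against your image's Dockerfile):

  redmine:
    image: redmine:5.0.1-alpine
    cap_add:
      - NET_ADMIN            # needed for `ip route add` inside the container
    entrypoint:
      - sh
      - -c
      # add the LAN route via the wireguard container, then hand off to the stock entrypoint
      - ip route add 192.168.42.0/23 via 172.20.0.50 && exec /docker-entrypoint.sh rails server -b 0.0.0.0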
I solved my problem:
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    ports:
      - 3000:3000
    environment:
      - TZ=Europe/Moscow
    volumes:
      - ./wireguard-config:/config
      - /lib/modules:/lib/modules
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1 # for clients mode
    restart: unless-stopped
  postgres:
    image: postgres:14.2-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_PASSWORD=MySUperSecret'
      - 'POSTGRES_DB=redmine'
  redmine:
    image: redmine:5.0.2-alpine
    network_mode: service:wireguard
    volumes:
      - redmine-files:/usr/src/redmine/files
      - ./redmine-plugins:/usr/src/redmine/plugins
      - ./configuration.yml:/usr/src/redmine/config/configuration.yml
    # ports:
    #   - 80:3000
    depends_on:
      - postgres
    environment:
      - 'REDMINE_DB_POSTGRES=postgres'
      - 'REDMINE_DB_DATABASE=redmine'
      - 'REDMINE_DB_PASSWORD=MySUperSecret'
      - 'REDMINE_PLUGINS_MIGRATE=true'
    restart: unless-stopped

volumes:
  postgres-data:
  redmine-files:
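For context on why this works: network_mode: service:wireguard places the Redmine container inside the WireGuard container's network namespace, so Redmine automatically uses whatever routes WireGuard configures (including the route into the private LAN) with no manual ip route add. The trade-off is that published ports must move to the wireguard service, which is why 3000:3000 now lives there and Redmine's own ports: block is commented out.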
I'm using a .yml file to launch an Odoo instance and its PostgreSQL database through Traefik, and it works perfectly fine. It assigns the domain and subdomain with a Let's Encrypt certificate, perfect.
My issue now is that I wish to launch more of these, but I haven't succeeded in doing so.
I've tried:
Changing the ports: in the odoo section
Removing the ports: section from the odoo section
Editing the ports: in the traefik section
The .yml:
version: "3"
services:
odoo:
image: odoo:12.0
depends_on:
- db
restart: unless-stopped
networks:
- internal
ports:
- "8069:8069"
- "8072:8072"
environment:
- HOST=db
- USER=${ODOO_USER}
- PASSWORD=${ODOO_PASS}
volumes:
- ./odoo/odoo-web-data:/var/lib/odoo
- ./odoo/config:/etc/odoo
- ./odoo/addons:/mnt/extra-addons
- ./odoo/logs:/var/log/odoo
labels:
- 'traefik.http.routers.odoo.rule=Host(`${ODOO_TRAEFIK_URL}`)'
- 'traefik.http.routers.odoo.entrypoints=websecure'
- 'traefik.http.routers.odoo.tls.certresolver=odoo'
- 'traefik.port=8069'
- "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
- "traefik.http.routers.http-catchall.entrypoints=web"
- "traefik.http.routers.http-catchall.middlewares=redirect-to-https#docker"
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
db:
image: postgres:10
restart: unless-stopped
networks:
- internal
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=${ODOO_USER}
- POSTGRES_PASSWORD=${ODOO_PASS}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- ./pgdata:/var/lib/postgresql/data/pgdata
traefik:
image: traefik:v2.0
networks:
- internal
- web
ports:
# The HTTP port
- "80:80"
- "443:443"
# The Web UI (enabled by --api.insecure=true)
- "8080:8080"
volumes:
- "./traefik/letsencrypt:/letsencrypt"
- "./traefik/traefik.yml:/etc/traefik.yml"
- "/var/run/docker.sock:/var/run/docker.sock"
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker"
- "--providers.docker.defaultRule=Host(`{{ trimPrefix `/` .Name }}.${TRAEFIK_DEFAULT_DOMAIN}`)"
- "--entryPoints.web.address=:80"
- "--entryPoints.websecure.address=:443"
- "--certificatesResolvers.odoo.acme.httpchallenge=true"
- "--certificatesresolvers.odoo.acme.httpchallenge.entrypoint=web"
- "--certificatesresolvers.odoo.acme.email=${ACME_EMAIL}"
- "--certificatesresolvers.odoo.acme.storage=/letsencrypt/acme.json"
networks:
internal:
web:
external: true
Assuming I already have a Traefik + Odoo + Postgres already running, how could I go about launching more instances of these together?
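A sketch of one way to do it, assuming each Odoo+Postgres pair runs as its own Compose project while a single shared Traefik stays on the external web network: remove the host ports: bindings from odoo (Traefik reaches it over the Docker network, so published ports only collide needlessly) and make every router and service label name unique per project. The ${COMPOSE_PROJECT_NAME} substitution and per-project .env files below are assumptions, not a tested recipe:

# started per copy with something like:
#   docker-compose -p odoo2 --env-file ./odoo2.env up -d
services:
  odoo:
    image: odoo:12.0
    networks:
      - web   # the shared external network Traefik also joins
    labels:
      - 'traefik.http.routers.${COMPOSE_PROJECT_NAME}-odoo.rule=Host(`${ODOO_TRAEFIK_URL}`)'
      - 'traefik.http.routers.${COMPOSE_PROJECT_NAME}-odoo.entrypoints=websecure'
      - 'traefik.http.routers.${COMPOSE_PROJECT_NAME}-odoo.tls.certresolver=odoo'
      - 'traefik.http.services.${COMPOSE_PROJECT_NAME}-odoo.loadbalancer.server.port=8069'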
My 1st goal is to write a docker-compose.yml file with the following:
1 docker for the MariaDB server
1 docker for the PrestaShop-1.7 server
1 docker for the phpMyAdmin server
Can you please help me get it working correctly?
Then, my 2nd goal is to set passwords and disallow the "no password" option.
Kind regards,
Arnaud.
I'm using Bitnami's Docker images, so I've started with the following script:
version: "3"
networks:
prestashop-network:
driver: bridge
services:
mariadb:
image: 'bitnami/mariadb:10.3'
environment:
- MARIADB_USER=bn_prestashop
- MARIADB_DATABASE=bitnami_prestashop
- ALLOW_EMPTY_PASSWORD=yes
networks:
- prestashop-network
volumes:
- 'mariadb_data:/bitnami'
ports:
- 3307:3306
phpmyadmin:
image: bitnami/phpmyadmin:latest
volumes:
- 'phpmyadmin_data:/bitnami'
depends_on:
- mariadb
ports:
- 81:80
environment:
- PHPMYADMIN_ALLOW_NO_PASSWORD=true
networks:
- prestashop-network
prestashop_1.7:
image: 'bitnami/prestashop:1.7'
volumes:
- 'prestashop_data:/bitnami'
- ./docker/prestashop/custom-php.ini:/usr/local/etc/php/conf.d/custom.ini
- ./docker/prestashop/phpinfo.php:/var/www/html/phpinfo.php
depends_on:
- mariadb
ports:
- 8085:80
- 8086:443
environment:
- PRESTASHOP_FIRST_NAME=Toto
- PRESTASHOP_LAST_NAME=FAMILLE
- PRESTASHOP_PASSWORD=bitnami1
- PRESTASHOP_EMAIL=user#example.com
- PRESTASHOP_HOST=localhost
- PRESTASHOP_COUNTRY=fr
- PRESTASHOP_LANGUAGE=fr
- MARIADB_HOST=mariadb
- MARIADB_PORT_NUMBER=3306
- PRESTASHOP_DATABASE_USER=bn_prestashop
- PRESTASHOP_DATABASE_NAME=bitnami_prestashop
- PRESTASHOP_DATABASE_PASSWORD=bitnami1
- ALLOW_EMPTY_PASSWORD=yes
- MARIADB_ROOT_USER=root
- MARIADB_ROOT_PASSWORD=
- MYSQL_CLIENT_CREATE_DATABASE_NAME=bitnami_prestashop
- MYSQL_CLIENT_CREATE_DATABASE_USER=bn_prestashop
- SMTP_HOST=smtp.gmail.com
- SMTP_PORT=587
- SMTP_PROTOCOL=tls
- SMTP_USER=your_email#gmail.com
- SMTP_PASSWORD=your_password
networks:
- prestashop-network
volumes:
mariadb_data:
driver: local
prestashop_data:
driver: local
phpmyadmin_data:
driver: local
For information, I use Mac OS X Mojave with the following Docker tool versions:
$ docker-compose version
docker-compose version 1.24.1, build 4667896b
docker-py version: 3.7.3
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018
When I launch it with the following command:
docker-compose up
the different images are downloaded and started.
When I try to access the phpMyAdmin instance at http://localhost:81, I can log in correctly using root and no password.
I get two major problems:
The 'prestashop' database is created but empty.
When I try to access the PrestaShop instance at http://localhost:8085, I get an error 500.
When trying your docker-compose file, I got these errors:
mariadb_1 | 2019-08-15 9:28:47 13 [Warning] Access denied for user 'bn_prestashop'@'192.168.48.4' (using password: YES)
prestashop_1.7_1 | mysql-c ERROR [canConnect] Connection with 'bn_prestashop' user is unsuccessful
You need to set the user's password in the mariadb container too.
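The essential change, pulled out of the full file below: give the MariaDB side the same password PrestaShop is configured to connect with.

  mariadb:
    environment:
      - MARIADB_USER=bn_prestashop
      - MARIADB_PASSWORD=my_passwd   # must match PRESTASHOP_DATABASE_PASSWORD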
This full docker-compose file worked for me; maybe you can build up from here:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:10.1'
    environment:
      - MARIADB_USER=bn_prestashop
      - MARIADB_DATABASE=bitnami_prestashop
      - MARIADB_PASSWORD=my_passwd
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami'
  prestashop:
    image: 'bitnami/prestashop:1.7'
    environment:
      - MARIADB_HOST=mariadb
      - MARIADB_PORT_NUMBER=3306
      - PRESTASHOP_DATABASE_USER=bn_prestashop
      - PRESTASHOP_DATABASE_NAME=bitnami_prestashop
      - PRESTASHOP_DATABASE_PASSWORD=my_passwd
      - ALLOW_EMPTY_PASSWORD=yes
      - PRESTASHOP_FIRST_NAME=Toto
      - PRESTASHOP_LAST_NAME=FAMILLE
      - PRESTASHOP_PASSWORD=bitnami1
      - PRESTASHOP_EMAIL=user@example.com
      - PRESTASHOP_HOST=localhost
      - PRESTASHOP_COUNTRY=fr
      - PRESTASHOP_LANGUAGE=fr
      - SMTP_HOST=smtp.gmail.com
      - SMTP_PORT=587
      - SMTP_PROTOCOL=tls
      - SMTP_USER=your_email@gmail.com
      - SMTP_PASSWORD=your_password
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - 'prestashop_data:/bitnami'
    depends_on:
      - mariadb
  phpmyadmin:
    image: 'bitnami/phpmyadmin:4'
    ports:
      - '8080:80'
      - '8443:443'
    depends_on:
      - mariadb
    volumes:
      - 'phpmyadmin_data:/bitnami'

volumes:
  mariadb_data:
    driver: local
  prestashop_data:
    driver: local
  phpmyadmin_data:
    driver: local