docker-compose.yml for Bitnami Apache, MariaDB, PrestaShop and PHPMyAdmin is not working correctly

My first goal is to write a docker-compose.yml file with the following:
one container for the MariaDB server
one container for the PrestaShop 1.7 server
one container for the PHPMyAdmin server
Can you please help me get it working correctly?
Then, my second goal is to set passwords and disallow the "no password" option.
Kind regards,
Arnaud.
I'm using the Bitnami images, so I've started with the following file:
version: "3"
networks:
prestashop-network:
driver: bridge
services:
mariadb:
image: 'bitnami/mariadb:10.3'
environment:
- MARIADB_USER=bn_prestashop
- MARIADB_DATABASE=bitnami_prestashop
- ALLOW_EMPTY_PASSWORD=yes
networks:
- prestashop-network
volumes:
- 'mariadb_data:/bitnami'
ports:
- 3307:3306
phpmyadmin:
image: bitnami/phpmyadmin:latest
volumes:
- 'phpmyadmin_data:/bitnami'
depends_on:
- mariadb
ports:
- 81:80
environment:
- PHPMYADMIN_ALLOW_NO_PASSWORD=true
networks:
- prestashop-network
prestashop_1.7:
image: 'bitnami/prestashop:1.7'
volumes:
- 'prestashop_data:/bitnami'
- ./docker/prestashop/custom-php.ini:/usr/local/etc/php/conf.d/custom.ini
- ./docker/prestashop/phpinfo.php:/var/www/html/phpinfo.php
depends_on:
- mariadb
ports:
- 8085:80
- 8086:443
environment:
- PRESTASHOP_FIRST_NAME=Toto
- PRESTASHOP_LAST_NAME=FAMILLE
- PRESTASHOP_PASSWORD=bitnami1
- PRESTASHOP_EMAIL=user#example.com
- PRESTASHOP_HOST=localhost
- PRESTASHOP_COUNTRY=fr
- PRESTASHOP_LANGUAGE=fr
- MARIADB_HOST=mariadb
- MARIADB_PORT_NUMBER=3306
- PRESTASHOP_DATABASE_USER=bn_prestashop
- PRESTASHOP_DATABASE_NAME=bitnami_prestashop
- PRESTASHOP_DATABASE_PASSWORD=bitnami1
- ALLOW_EMPTY_PASSWORD=yes
- MARIADB_ROOT_USER=root
- MARIADB_ROOT_PASSWORD=
- MYSQL_CLIENT_CREATE_DATABASE_NAME=bitnami_prestashop
- MYSQL_CLIENT_CREATE_DATABASE_USER=bn_prestashop
- SMTP_HOST=smtp.gmail.com
- SMTP_PORT=587
- SMTP_PROTOCOL=tls
- SMTP_USER=your_email#gmail.com
- SMTP_PASSWORD=your_password
networks:
- prestashop-network
volumes:
mariadb_data:
driver: local
prestashop_data:
driver: local
phpmyadmin_data:
driver: local
For information, I'm using macOS Mojave with the following Docker tool versions:
$ docker-compose version
docker-compose version 1.24.1, build 4667896b
docker-py version: 3.7.3
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018
When I launch it with the following command:
docker-compose up
the different images are downloaded and the containers start.
When I access the PHPMyAdmin instance at http://localhost:81, I can log in correctly as root with no password.
I get two major problems:
the 'prestashop' database is created but stays empty;
when I try to access the PrestaShop instance at http://localhost:8085, I get a 500 error.

When trying your docker-compose file I got these errors:
mariadb_1 | 2019-08-15 9:28:47 13 [Warning] Access denied for user 'bn_prestashop'@'192.168.48.4' (using password: YES)
prestashop_1.7_1 | mysql-c ERROR [canConnect] Connection with 'bn_prestashop' user is unsuccessful
You need to set the user's password in the mariadb container too.
This docker-compose file worked for me; maybe you can build up from here.
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:10.1'
    environment:
      - MARIADB_USER=bn_prestashop
      - MARIADB_DATABASE=bitnami_prestashop
      - MARIADB_PASSWORD=my_passwd
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami'
  prestashop:
    image: 'bitnami/prestashop:1.7'
    environment:
      - MARIADB_HOST=mariadb
      - MARIADB_PORT_NUMBER=3306
      - PRESTASHOP_DATABASE_USER=bn_prestashop
      - PRESTASHOP_DATABASE_NAME=bitnami_prestashop
      - PRESTASHOP_DATABASE_PASSWORD=my_passwd
      - ALLOW_EMPTY_PASSWORD=yes
      - PRESTASHOP_FIRST_NAME=Toto
      - PRESTASHOP_LAST_NAME=FAMILLE
      - PRESTASHOP_PASSWORD=bitnami1
      - PRESTASHOP_EMAIL=user@example.com
      - PRESTASHOP_HOST=localhost
      - PRESTASHOP_COUNTRY=fr
      - PRESTASHOP_LANGUAGE=fr
      - SMTP_HOST=smtp.gmail.com
      - SMTP_PORT=587
      - SMTP_PROTOCOL=tls
      - SMTP_USER=your_email@gmail.com
      - SMTP_PASSWORD=your_password
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - 'prestashop_data:/bitnami'
    depends_on:
      - mariadb
  phpmyadmin:
    image: 'bitnami/phpmyadmin:4'
    ports:
      - '8080:80'
      - '8443:443'
    depends_on:
      - mariadb
    volumes:
      - 'phpmyadmin_data:/bitnami'
volumes:
  mariadb_data:
    driver: local
  prestashop_data:
    driver: local
  phpmyadmin_data:
    driver: local
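For the second goal (no empty passwords), here is a minimal sketch of the changes, reusing only variables that already appear in the files above (MARIADB_ROOT_PASSWORD, MARIADB_PASSWORD, PHPMYADMIN_ALLOW_NO_PASSWORD). Only the environment entries that change are shown, and the password values are placeholders to replace:

services:
  mariadb:
    environment:
      # drop ALLOW_EMPTY_PASSWORD entirely and set explicit passwords instead
      - MARIADB_ROOT_PASSWORD=change_me_root   # placeholder value, pick your own
      - MARIADB_USER=bn_prestashop
      - MARIADB_PASSWORD=my_passwd             # must match PRESTASHOP_DATABASE_PASSWORD
      - MARIADB_DATABASE=bitnami_prestashop
  prestashop:
    environment:
      - PRESTASHOP_DATABASE_USER=bn_prestashop
      - PRESTASHOP_DATABASE_PASSWORD=my_passwd
      - MARIADB_ROOT_USER=root
      - MARIADB_ROOT_PASSWORD=change_me_root   # same root password as the mariadb service
  phpmyadmin:
    environment:
      - PHPMYADMIN_ALLOW_NO_PASSWORD=false     # force phpMyAdmin logins to provide a password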

Related

Custom installation of docker nextcloud

I'm trying to configure my Nextcloud on my DigitalOcean server (Debian 11), using Nginx Proxy Manager and Nextcloud under Docker.
I changed the root directory of the docker-compose setup (since I was out of disk space, I added a volume and mounted it at /var/lib/docker/volumes/volume_nyc1_01).
I created a new folder called nextcloud, and inside it I created the docker-compose.yml below:
Version: "3"
volumes:
nextcloud-data:
nextcloud-db:
npm-data:
npm-ssl:
npm-db:
networks:
frontend:
# add this if the network is already existing!
# external: true
backend:
services:
nextcloud-app:
image: nextcloud
restart: always
volumes:
- nextcloud-data:/var/lib/docker/volumes/volume_nyc1_01/var/www/html
environment:
- MYSQL_PASSWORD=raspberrypi
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_HOST=nextcloud-db
networks:
- frontend
- backend
nextcloud-db:
image: mariadb
restart: always
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
volumes:
- nextcloud-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=raspberrypi
- MYSQL_PASSWORD=raspberrypi
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
networks:
- backend
npm-app:
image: jc21/nginx-proxy-manager:latest
restart: always
ports:
- "80:80"
- "81:81"
- "443:443"
environment:
- DB_MYSQL_HOST=npm-db
- DB_MYSQL_PORT=3306
- DB_MYSQL_USER=npm
- DB_MYSQL_PASSWORD=raspberrypi
- DB_MYSQL_NAME=npm
volumes:
- npm-data:/var/lib/docker/volumes/volume_nyc1_01/data
- npm-ssl:/var/lib/docker/volumes/volume_nyc1_01/etc/letsencrypt
networks:
- frontend
- backend
npm-db:
image: jc21/mariadb-aria:latest
restart: always
environment:
- MYSQL_ROOT_PASSWORD=raspberrypi
- MYSQL_DATABASE=npm
- MYSQL_USER=npm
- MYSQL_PASSWORD=raspberrypi
volumes:
- npm-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
networks:
- backend
As you can notice, I created each folder after volume_nyc1_01 with mkdir.
Finally I started the server from /var/lib/docker/volumes/volume_nyc1_01/nextcloud, using docker-compose up -d.
Once logged in at ip-address-of-server:81, I created the proxy host with the domain name mydomain.com and the forward hostname/IP nextcloud-app, port 80, and saved.
When I check the domain name, it just doesn't show anything. The same happens when I try to establish the SSL.
I know I'm missing something, but I searched a lot and couldn't find anything. I really appreciate any help or suggestion.
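One thing worth double-checking in the file above (a sketch, not a verified fix): in a named-volume mapping the left side is just the volume name and the right side is the path inside the container, so the host path under /var/lib/docker/volumes/volume_nyc1_01 does not belong on the right. Docker already stores named volumes under /var/lib/docker/volumes, so mounting the containers' standard internal paths keeps the data on the new disk:

services:
  nextcloud-app:
    volumes:
      - nextcloud-data:/var/www/html   # Nextcloud's web root inside the container
  nextcloud-db:
    volumes:
      - nextcloud-db:/var/lib/mysql    # MariaDB's data directory inside the container
  npm-app:
    volumes:
      - npm-data:/data                 # paths documented by nginx-proxy-manager
      - npm-ssl:/etc/letsencrypt
  npm-db:
    volumes:
      - npm-db:/var/lib/mysql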

Mongo Express to wait for MongoDB Cluster in Docker Compose

I'm trying to set up a local development environment with Docker Compose that has a MongoDB cluster as the database. I chose Mongo Express as the database admin user interface so I can look inside the MongoDB database. It does take some time for the cluster to accept connections; I have the three db containers in depends_on, but it seems I have to do more than that, based on the Docker Compose documentation. I can't seem to find a good example of waiting for MongoDB clusters. Has anyone figured this out already? Please share, that would be great. Thank you in advance!
Here's the docker-compose.yml file:
version: '3.9'
services:
  mongodb-primary:
    image: 'bitnami/mongodb:latest'
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-primary
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=password
      - MONGODB_REPLICA_SET_KEY=replicasetkey
    volumes:
      - 'mongodb_master_data:/bitnami'
  mongodb-secondary:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-secondary
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
      - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password
      - MONGODB_REPLICA_SET_KEY=replicasetkey
  mongodb-arbiter:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-arbiter
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
      - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password
      - MONGODB_REPLICA_SET_KEY=replicasetkey
  dbadmin:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    depends_on:
      - mongodb-primary
      - mongodb-secondary
      - mongodb-arbiter
    environment:
      ME_CONFIG_MONGODB_URL: mongodb://root:password@mongodb-primary:27017,mongodb-secondary:27017,mongodb-arbiter:27017?replicaSet=replicaset
      ME_CONFIG_BASICAUTH_USERNAME: admin
      ME_CONFIG_BASICAUTH_PASSWORD: mexpress
volumes:
  mongodb_master_data:
    driver: local
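One common approach (a sketch, untested against this exact setup): give the primary a healthcheck and use the long form of depends_on so mongo-express waits until the primary reports healthy. The condition: service_healthy form is honoured by Docker Compose v2 / the Compose Specification, and the healthcheck below assumes the Bitnami image has mongosh available on its PATH:

services:
  mongodb-primary:
    healthcheck:
      # ping needs no authentication; assumes mongosh exists in bitnami/mongodb:latest
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping').ok"]
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 40s
  dbadmin:
    depends_on:
      mongodb-primary:
        condition: service_healthy
      mongodb-secondary:
        condition: service_started
      mongodb-arbiter:
        condition: service_started

Since mongo-express already has restart: always, it will also keep retrying on its own if the replica set is not ready at first start.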

Airflow 1.10: the scheduler does not appear to be running

I run Airflow on my local machine with docker-compose:
version: '2'
services:
  postgresql:
    image: bitnami/postgresql:10
    volumes:
      - postgresql_data:/bitnami/postgresql
    environment:
      - POSTGRESQL_DATABASE=bitnami_airflow
      - POSTGRESQL_USERNAME=bn_airflow
      - POSTGRESQL_PASSWORD=bitnami1
      - ALLOW_EMPTY_PASSWORD=yes
  redis:
    image: bitnami/redis:5.0
    volumes:
      - redis_data:/bitnami
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  airflow-scheduler:
    image: bitnami/airflow-scheduler:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_LOAD_EXAMPLES=no
    volumes:
      - airflow_scheduler_data:/bitnami
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
  airflow-worker:
    image: bitnami/airflow-worker:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_LOAD_EXAMPLES=no
    volumes:
      - airflow_worker_data:/bitnami
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
  airflow:
    image: bitnami/airflow:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_USERNAME=user
      - AIRFLOW_PASSWORD=password
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_LOAD_EXAMPLES=yes
    ports:
      - '8080:8080'
    volumes:
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
volumes:
  airflow_scheduler_data:
    driver: local
  airflow_worker_data:
    driver: local
  airflow_data:
    driver: local
  postgresql_data:
    driver: local
  redis_data:
    driver: local
But when I sign in to the UI, I see:
"The scheduler does not appear to be running.
The DAGs list may not update, and new tasks will not be scheduled."
Why?
When I use the official Docker images there is no problem with this.
And another problem: unless I toggle AIRFLOW_LOAD_EXAMPLES between yes and no and restart docker-compose, I don't see an updated DAGs list.
With Puckel's docker-compose for Airflow 1, everything worked: https://github.com/puckel/docker-airflow/blob/master/README.md
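Not a diagnosis of the scheduler banner itself, but one inconsistency is visible in the file: the webserver runs with AIRFLOW_LOAD_EXAMPLES=yes while the scheduler and worker run with no, so the three components are not configured identically. A sketch of keeping the shared settings in one place with a YAML anchor (assumption: the compose file format is bumped to one that allows top-level extension fields, e.g. 3.4+):

x-airflow-env: &airflow-env
  # shared settings copied from the services above; keep them identical everywhere
  AIRFLOW_FERNET_KEY: 46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
  AIRFLOW_DATABASE_NAME: bitnami_airflow
  AIRFLOW_DATABASE_USERNAME: bn_airflow
  AIRFLOW_DATABASE_PASSWORD: bitnami1
  AIRFLOW_EXECUTOR: CeleryExecutor
  AIRFLOW_LOAD_EXAMPLES: "no"
services:
  airflow-scheduler:
    environment: *airflow-env
  airflow-worker:
    environment: *airflow-env
  airflow:
    environment:
      <<: *airflow-env
      AIRFLOW_USERNAME: user
      AIRFLOW_PASSWORD: password

It is also worth checking docker-compose logs airflow-scheduler: if the scheduler container keeps restarting, the banner in the UI will not go away.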

IdentityServer4: Unable to obtain configuration from 'http://localhost:44338/.well-known/openid-configuration'

I'm working on microservices in .NET Core 3.1 with docker-compose. We are using Linux-based Docker containers. The MVC client application runs perfectly in IIS Express along with the IdentityServer4 API, but when I run the microservices with docker-compose I get an error in the client application:
{"StatusCode":500,"Message":"Internal Server Error."}
In the MVC client container's log I found the error below:
Unable to obtain configuration from 'http://localhost:44338/.well-known/openid-configuration'
The URL above points to IdentityServer4, which is also running in a container.
The relevant part of my docker-compose is:
version: '3.4'
services:
  identity.api:
    environment:
      - ASPNETCORE_URLS=https://+:443;http://+:80
      - MvcClient=http://localhost:44353
    ports:
      - "44338:80"
      - "443"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
  rinmvc:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80;
      - ASPNETCORE_URLS=http://0.0.0.0:80
      - IdentityUrl=http://localhost:44338
    ports:
      - "44353:80"
      - "443"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
After spending a few days on this issue, I found the solution.
I modified my docker-compose.override file as below:
version: '3.4'
services:
  rabbitmq:
    ports:
      - "15672:15672"
      - "5672:5672"
  identity.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:80
      - MvcClient=http://rinmvc          # refer to the MVC container by its compose service name
    ports:
      - "44338:80"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
  rinmvc:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:80
      - IdentityUrl=http://identity.api  # containers cannot reach each other via localhost
    ports:
      - "44353:80"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro

Launching more containers with a Traefik+Odoo+Postgres .yml

I'm using a .yml file to launch an Odoo instance and its PostgreSQL behind Traefik, and it works perfectly fine. It assigns the domain and subdomain with a Let's Encrypt certificate.
My issue now is that I wish to launch more of these, but I haven't succeeded in doing so.
I've tried:
changing the ports: in the odoo section
removing the ports: section from the odoo section
editing the ports: in the traefik section
The .yml:
version: "3"
services:
odoo:
image: odoo:12.0
depends_on:
- db
restart: unless-stopped
networks:
- internal
ports:
- "8069:8069"
- "8072:8072"
environment:
- HOST=db
- USER=${ODOO_USER}
- PASSWORD=${ODOO_PASS}
volumes:
- ./odoo/odoo-web-data:/var/lib/odoo
- ./odoo/config:/etc/odoo
- ./odoo/addons:/mnt/extra-addons
- ./odoo/logs:/var/log/odoo
labels:
- 'traefik.http.routers.odoo.rule=Host(`${ODOO_TRAEFIK_URL}`)'
- 'traefik.http.routers.odoo.entrypoints=websecure'
- 'traefik.http.routers.odoo.tls.certresolver=odoo'
- 'traefik.port=8069'
- "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
- "traefik.http.routers.http-catchall.entrypoints=web"
- "traefik.http.routers.http-catchall.middlewares=redirect-to-https#docker"
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
db:
image: postgres:10
restart: unless-stopped
networks:
- internal
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=${ODOO_USER}
- POSTGRES_PASSWORD=${ODOO_PASS}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- ./pgdata:/var/lib/postgresql/data/pgdata
traefik:
image: traefik:v2.0
networks:
- internal
- web
ports:
# The HTTP port
- "80:80"
- "443:443"
# The Web UI (enabled by --api.insecure=true)
- "8080:8080"
volumes:
- "./traefik/letsencrypt:/letsencrypt"
- "./traefik/traefik.yml:/etc/traefik.yml"
- "/var/run/docker.sock:/var/run/docker.sock"
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker"
- "--providers.docker.defaultRule=Host(`{{ trimPrefix `/` .Name }}.${TRAEFIK_DEFAULT_DOMAIN}`)"
- "--entryPoints.web.address=:80"
- "--entryPoints.websecure.address=:443"
- "--certificatesResolvers.odoo.acme.httpchallenge=true"
- "--certificatesresolvers.odoo.acme.httpchallenge.entrypoint=web"
- "--certificatesresolvers.odoo.acme.email=${ACME_EMAIL}"
- "--certificatesresolvers.odoo.acme.storage=/letsencrypt/acme.json"
networks:
internal:
web:
external: true
Assuming I already have one Traefik + Odoo + Postgres stack running, how could I go about launching more instances of these together?
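A rough sketch of the idea (untested; the names odoo2, db2, ODOO2_USER, ODOO2_PASS and ODOO2_TRAEFIK_URL are made up for illustration): keep the single Traefik as the only service publishing 80/443, give each extra Odoo+Postgres pair its own router name and Host rule, and let Traefik reach Odoo over the Docker network instead of publishing Odoo's ports on the host, which is what causes the port conflicts. In Traefik v2 the backend port is declared with the http.services ... loadbalancer.server.port label rather than the old traefik.port label:

  odoo2:
    image: odoo:12.0
    depends_on:
      - db2
    restart: unless-stopped
    networks:
      - internal
    # no ports: section - Traefik reaches it over the Docker network
    environment:
      - HOST=db2
      - USER=${ODOO2_USER}       # hypothetical variables for the second instance
      - PASSWORD=${ODOO2_PASS}
    labels:
      - 'traefik.http.routers.odoo2.rule=Host(`${ODOO2_TRAEFIK_URL}`)'
      - 'traefik.http.routers.odoo2.entrypoints=websecure'
      - 'traefik.http.routers.odoo2.tls.certresolver=odoo'
      - 'traefik.http.services.odoo2.loadbalancer.server.port=8069'
  db2:
    image: postgres:10
    restart: unless-stopped
    networks:
      - internal
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=${ODOO2_USER}
      - POSTGRES_PASSWORD=${ODOO2_PASS}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./pgdata2:/var/lib/postgresql/data/pgdata

If the second pair lives in its own compose project instead of the same file, attach its odoo2 service to the shared external web network that Traefik joins, rather than to internal.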