Trying to stand up 2 Ghost containers with MySQL on the back end and eeacms/haproxy as the load balancer, all in Docker containers - error 503 - docker-compose

I have tried many configurations and scenarios based around this, which is mostly a tutorial that stops at one Ghost instance. I am trying to scale it to 2 with docker-compose up -d --scale ghost=2. When I hit the individual IPs of the Ghost containers they work, but port 80 returns a 503.
version: "3.1"
volumes:
mysql-volume:
ghost-volume:
networks:
ghost-network:
services:
mysql:
image: mysql:5.7
container_name: mysql
volumes:
- mysql-volume:/var/lib/mysql
networks:
- ghost-network
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: db
MYSQL_USER: blog-user
MYSQL_PASSWORD: supersecret
ghost:
build: ./ghost
image: laminar/ghost:3.0
volumes:
- ghost-volume:/var/lib/ghost/content
networks:
- ghost-network
restart: always
ports:
- "2368"
environment:
database__client: mysql
database__connection__host: mysql
database__connection__user: blog-user
database__connection__password: supersecret
database__connection__database: db
depends_on:
- mysql
entrypoint: ["wait-for-it.sh", "mysql", "--", "docker-entrypoint.sh"]
command: ["node", "current/index.js"]
haproxy:
image: eeacms/haproxy
depends_on:
- ghost
ports:
- "80:5000"
- "1936:1936"
environment:
BACKENDS: "ghost"
DNS_ENABLED: "true"
LOG_LEVEL: "info"
What I get on localhost:80 is a 503 error. This particular eeacms/haproxy image is supposed to be self-configuring. Any help appreciated.

I needed to add a backend URL to the environment and also tell Ghost it was installed in an alternate location by adding URL: localhost:5050.
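For reference, a minimal sketch of how those two settings might be wired into the compose file above. This is hedged: BACKENDS_PORT is the variable the eeacms/haproxy image documents for the backend port, and url is Ghost's public-URL config key; verify both against the image versions you run, and use whatever host/port you actually expose (the answer above used localhost:5050).

  haproxy:
    image: eeacms/haproxy
    ports:
      - "80:5000"
    environment:
      BACKENDS: "ghost"
      BACKENDS_PORT: "2368"     # assumption: the port the ghost containers listen on
      DNS_ENABLED: "true"
  ghost:
    environment:
      url: http://localhost:80  # tell Ghost the public URL it is served from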

Related

Custom installation of docker nextcloud

I'm trying to configure Nextcloud on my DigitalOcean server (Debian 11), using Nginx Proxy Manager and Nextcloud under Docker.
I changed the root directory of Docker (since I was out of disk space, I added a volume and mounted it at /var/lib/docker/volumes/volume_nyc1_01).
I created a new folder called nextcloud, and inside it created the docker-compose file below.
Version: "3"
volumes:
nextcloud-data:
nextcloud-db:
npm-data:
npm-ssl:
npm-db:
networks:
frontend:
# add this if the network is already existing!
# external: true
backend:
services:
nextcloud-app:
image: nextcloud
restart: always
volumes:
- nextcloud-data:/var/lib/docker/volumes/volume_nyc1_01/var/www/html
environment:
- MYSQL_PASSWORD=raspberrypi
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_HOST=nextcloud-db
networks:
- frontend
- backend
nextcloud-db:
image: mariadb
restart: always
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
volumes:
- nextcloud-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=raspberrypi
- MYSQL_PASSWORD=raspberrypi
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
networks:
- backend
npm-app:
image: jc21/nginx-proxy-manager:latest
restart: always
ports:
- "80:80"
- "81:81"
- "443:443"
environment:
- DB_MYSQL_HOST=npm-db
- DB_MYSQL_PORT=3306
- DB_MYSQL_USER=npm
- DB_MYSQL_PASSWORD=raspberrypi
- DB_MYSQL_NAME=npm
volumes:
- npm-data:/var/lib/docker/volumes/volume_nyc1_01/data
- npm-ssl:/var/lib/docker/volumes/volume_nyc1_01/etc/letsencrypt
networks:
- frontend
- backend
npm-db:
image: jc21/mariadb-aria:latest
restart: always
environment:
- MYSQL_ROOT_PASSWORD=raspberrypi
- MYSQL_DATABASE=npm
- MYSQL_USER=npm
- MYSQL_PASSWORD=raspberrypi
volumes:
- npm-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
networks:
- backend
As you may notice, I created each folder after volume_nyc1_01 with mkdir.
Finally I started the server from /var/lib/docker/volumes/volume_nyc1_01/nextcloud using docker-compose up -d.
Once logged in at ip-address-of-server:81, I created the proxy host with domain name mydomain.com and forward hostname/IP nextcloud-app, port 80, and saved.
When I open the domain name, it just doesn't show anything. The same happens when I try to set up SSL.
I know I'm missing something, but I searched a lot and couldn't find anything. I really appreciate any help or suggestion.
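One detail worth checking in the file above (an observation, not a confirmed fix): in a named-volume mapping, the part after the colon is the path inside the container, not a location on the host, so with the stock images the usual targets would look like this:

  services:
    nextcloud-app:
      image: nextcloud
      volumes:
        - nextcloud-data:/var/www/html    # path inside the nextcloud container
    nextcloud-db:
      image: mariadb
      volumes:
        - nextcloud-db:/var/lib/mysql     # path inside the mariadb container

Where the named volume is physically stored on the host is governed by Docker's data-root setting, not by this container-side path.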

Can't connect containers mariadb and phpmyadmin

I get the error "mysqli::real_connect(): (HY000/2002): No such file or directory" when trying to login to phpmyadmin. I verified I can connect to the DB container from the localhost using mysql -h 127.0.0.1 -P 3306 -u root -p. Below is my docker-compose file:
version: "3.7"
########################### SECRETS
secrets:
mysql_root_password:
file: $DOCKERDIR/secrets/mysql_root_password
########################### SERVICES
services:
# Portainer - WebUI for Containers
portainer:
container_name: portainer
image: portainer/portainer-ce:latest
restart: unless-stopped
command: -H unix:///var/run/docker.sock
security_opt:
- no-new-privileges:true
ports:
- "$PORTAINER_PORT:9000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- $DOCKERDIR/portainer/data:/data
environment:
- TZ=$TZ
# MariaDB - MySQL Database
db:
container_name: db
image: linuxserver/mariadb:latest
restart: always
security_opt:
- no-new-privileges:true
ports:
- "$MARIADB_PORT:3306"
volumes:
- $DOCKERDIR/mariadb/data:/config
environment:
- PUID=$PUID
- PGID=$PGID
- TZ=$TZ
- FILE__MYSQL_ROOT_PASSWORD=/run/secrets/mysql_root_password
secrets:
- mysql_root_password
# phpMyAdmin - Database management
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
container_name: phpmyadmin
restart: unless-stopped
depends_on:
- db
security_opt:
- no-new-privileges:true
ports:
- "$PHPMYADMIN_PORT:80"
volumes:
- $DOCKERDIR/phpmyadmin:/etc/phpmyadmin
environment:
- PMA_HOST=db
#- PMA_ARBITRARY=1
- MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
secrets:
- mysql_root_password
# Dozzle - Real-time Docker Log Viewer
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
restart: unless-stopped
security_opt:
- no-new-privileges:true
ports:
- "$DOZZLE_PORT:8080"
environment:
DOZZLE_LEVEL: info
DOZZLE_TAILSIZE: 300
DOZZLE_FILTER: "status=running"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
For the life of me, I can't figure out what I am doing wrong to log into Phpmyadmin. Can someone explain my mistake or mistakes and point me in the right direction? Thanks
I figured the issue out: I had a network set on the phpmyadmin section but not on db. Once I added the network statement to the db section as well, I was able to connect.
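A minimal sketch of that fix, assuming a user-defined network (my_network is just a placeholder name; the file above does not show which network was actually used):

  networks:
    my_network:

  services:
    db:
      image: linuxserver/mariadb:latest
      networks:
        - my_network        # the missing piece: db has to join the same network
    phpmyadmin:
      image: phpmyadmin/phpmyadmin:latest
      environment:
        - PMA_HOST=db       # this hostname only resolves if both services share a network
      networks:
        - my_network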

docker compose phpmyadmin php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution

I am trying to set up a Docker stack with Laravel, MariaDB, Nginx, Redis and phpMyAdmin. The Laravel webspace works fine, but if I switch to port 10081, as configured in the docker-compose.yml, I am not able to log in with the root account.
It says "mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution".
I already tried to configure a "my-network" which links all of the containers, but if I understand Docker right there is already a "default" network which does this. It didn't change the error message anyway.
Here is my full docker-compose file:
version: "3.8"
services:
redis:
image: redis:6.0-alpine
expose:
- "6380"
db:
image: mariadb:10.4
ports:
- "3307:3306"
environment:
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: laravel
volumes:
- db-data:/var/lib/mysql
nginx:
image: nginx:1.19-alpine
build:
context: .
dockerfile: ./docker/nginx.Dockerfile
restart: always
depends_on:
- php
ports:
- "10080:80"
networks:
- default
environment:
VIRTUAL_HOST: cockpit.example.de
volumes:
- ./docker/nginx.conf:/etc/nginx/nginx.conf:ro
- ./public:/app/public:ro
php:
build:
target: dev
context: .
dockerfile: ./docker/php.Dockerfile
working_dir: /app
env_file: .env
restart: always
expose:
- "9000"
depends_on:
- composer
- redis
- db
volumes:
- ./:/app
- ./docker/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
links:
- db:mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 10081:80
restart: always
environment:
PMA_HOST : db
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
depends_on:
- db
#user: "109:115"
links:
- db:mysql
node:
image: node:12-alpine
working_dir: /app
volumes:
- ./:/app
command: sh -c "npm install && npm run watch"
composer:
image: composer:1.10
working_dir: /app
#environment:
#SSH_AUTH_SOCK: /ssh-auth.sock
volumes:
- ./:/app
#- "$SSH_AUTH_SOCK:/ssh-auth.sock"
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
command: composer install --ignore-platform-reqs --no-scripts
volumes:
db-data:
Make sure you have defined all attributes correctly for the phpmyadmin container; in this case the network definition was missing:
phpmyadmin:
  image: phpmyadmin/phpmyadmin:latest
  container_name: phpmyadmin
  restart: always
  ports:
    # 8080 is the host port and 80 is the container port
    - 8080:80
  environment:
    - PMA_ARBITRARY=1
    - PMA_HOST=mysql
    - MYSQL_USERNAME=root
    - MYSQL_ROOT_PASSWORD=secret
  depends_on:
    - mysql
  networks:
    # define your network where all containers are connected to each other
    - laravel
  volumes:
    # define directory path where you shall store your persistent data and config
    # files of phpmyadmin
    - ./docker/phpmyadmin
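For that snippet to work, the laravel network it references also has to be declared at the top level of the Compose file (or marked external if it is created elsewhere), and the database service that phpmyadmin points at has to join it. A minimal sketch, assuming the database service is the one called mysql in the snippet above:

  networks:
    laravel:

  services:
    mysql:
      image: mariadb:10.4    # assumption: whatever database image the stack actually uses
      networks:
        - laravel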
Maybe your container cannot start because its volume contains incompatible data. This can happen if you downgrade the version of the mysql or mariadb image.
You can resolve the problem by removing the volume and importing the database again. You may have to create a backup first.

Containers launched with Docker-Compose cannot connect to each other

I have a beginner question about Docker Compose. I am trying to extend the docker-compose-slim.yml example file from the Zipkin GitHub repository.
I need to change it so that it includes a simple FastAPI app that I have written. Unfortunately, I cannot make them connect to each other. FastAPI gets rejected when it attempts to send a POST request to the Zipkin container, even though they are both connected to the same network with explicit links and port mappings defined in the YAML file. I am, however, able to connect to both of them from the host.
Could you please tell me what I have done wrong?
Here is the error message:
Error emitting zipkin trace. ConnectionError(MaxRetryError("HTTPConnectionPool(host='127.0.0.1', port=9411): Max retries exceeded with url: /api/v2/spans (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fce354711c0>: Failed to establish a new connection: [Errno 111] Connection refused'))"))
Here is the Docker Compose YAML file:
version: '2.4'
services:
  zipkin:
    image: openzipkin/zipkin-slim
    container_name: zipkin
    environment:
      - STORAGE_TYPE=mem
    ports:
      # Port used for the Zipkin UI and HTTP Api
      - 9411:9411
    depends_on:
      - storage
  storage:
    image: busybox:1.31.0
    container_name: fake_storage
  myfastapi:
    build: .
    ports:
      - 8000:8000
    links:
      - zipkin
    depends_on:
      - zipkin
  dependencies:
    image: busybox:1.31.0
    container_name: fake_dependencies
networks:
  default:
    name: foo_network
Here is the Dockerfile:
FROM python:3.8.5
ADD . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["uvicorn", "wsgi:app", "--host", "0.0.0.0", "--port", "8000"]
You must tell the containers about the network "foo_network". The external flag indicates whether the network was created outside of this Compose file; with external: false, Compose creates it itself. You don't have to set it, of course, but I thought it would be good as an example.
As for the "links" option, look here: Link
version: '2.4'
services:
  zipkin:
    image: openzipkin/zipkin-slim
    container_name: zipkin
    environment:
      - STORAGE_TYPE=mem
    ports:
      # Port used for the Zipkin UI and HTTP Api
      - 9411:9411
    depends_on:
      - storage
    networks:
      - foo_network
  storage:
    image: busybox:1.31.0
    container_name: fake_storage
    networks:
      - foo_network
  myfastapi:
    build: .
    ports:
      - 8000:8000
    links:
      - zipkin
    depends_on:
      - zipkin
    networks:
      - foo_network
  dependencies:
    image: busybox:1.31.0
    container_name: fake_dependencies
    networks:
      - foo_network
networks:
  foo_network:
    external: false
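Note also that the error in the question shows the app connecting to host='127.0.0.1', which from inside the myfastapi container points at that container itself. Once both services share foo_network, the tracer has to target the service name instead. A sketch of the idea (ZIPKIN_ENDPOINT is a hypothetical variable name; wire it up however your FastAPI app actually configures its Zipkin reporter):

  myfastapi:
    build: .
    environment:
      # hypothetical variable; address Zipkin by its Compose service name,
      # not 127.0.0.1, from inside the shared network
      ZIPKIN_ENDPOINT: "http://zipkin:9411/api/v2/spans"
    networks:
      - foo_network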

Integrate elasticsearch with multiple mongodb in docker-compose

I have implemented a microservice architecture with several servers and databases. I have installed Elasticsearch with Docker, and when I do docker-compose up everything seems to run fine.
However, I would like to integrate Elasticsearch with the several databases (2 MongoDB instances in the sample below) in the system. How do I sync the two MongoDB instances in two different containers with Elasticsearch so that I can search them?
client:
  container_name: client
  stdin_open: true
  build:
    context: ./client
    dockerfile: Dockerfile
  restart: always
  volumes:
    - './client:/app'
  ports:
    - '1000:3000'
  environment:
    - NODE_ENV=development
    - CHOKIDAR_USEPOLLING=true
weatherdb:
  container_name: weather-db
  image: mongo
  restart: always
  ports:
    - '2002:27017'
  volumes:
    - ./weather_service/weather_db:/data/db
  networks:
    - backend
weather-service:
  container_name: weather-service
  build: ./weather_service
  restart: always
  ports:
    - "1002:3000"
  depends_on:
    - weatherdb
  links:
    - elasticsearch
  networks:
    - backend
newsdb:
  container_name: news-db
  image: mongo
  restart: always
  ports:
    - '2003:27017'
  volumes:
    - ./news_service/news_db:/data/db
  networks:
    - backend
news-service:
  container_name: news-service
  build: ./news_service
  restart: always
  ports:
    - "1003:3000"
  depends_on:
    - newsdb
  links:
    - elasticsearch
  networks:
    - backend
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
  container_name: elasticsearch
  restart: always
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    network.bind_host: 0.0.0.0
    network.host: 0.0.0.0
    discovery.type: single-node
  volumes:
    - ./elasticsearch/esdata:/usr/share/elasticsearch/data
  networks:
    - backend
It's very simple to just add an elasticsearch section to any docker-compose file and start it; these are all independent Docker containers, and as long as their exposed host ports don't interfere with each other and you have the correct configuration in place, it should work.
Please refer to the Elasticsearch multi-docker installation using a Dockerfile for more info.
NOTE: You have not mentioned what exact issue you are facing; you have only mentioned that all the Docker containers are running fine, so please explain in detail what exactly you are trying to solve.
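As a small illustration of the "correct configuration" part (a sketch only; ELASTICSEARCH_URL is a hypothetical variable name that your service code would need to read): each service that talks to Elasticsearch should address it by its Compose service name on the shared backend network rather than by localhost:

  weather-service:
    build: ./weather_service
    environment:
      # hypothetical variable name; the app reaches Elasticsearch via the
      # service name on the backend network, not via localhost
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
    networks:
      - backend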