MaxScale no Slave State set - docker-compose

We want to use MaxScale and two MariaDB databases with docker-compose.
Our problem is that we cannot get replication of the database working via MaxScale.
Write access via MaxScale works on both databases. Via the command maxctrl list servers inside the maxscale container, we see both servers: the first server has the states Master, Running, while the second server has only the state Running (no Slave state).
My docker-compose.yaml:
version: '3'
services:
  # Application
  app:
    build:
      context: .
      dockerfile: app.dockerfile
    working_dir: /var/www/project
    volumes:
      - ./project:/var/www/project
      - ./php.ini:/usr/local/etc/php/php.ini
    links:
      - database:database
    environment:
      - "DATABASE_HOST=database"
      - "DATABASE_PORT=4006"

  # Web server
  web:
    image: nginx:latest
    volumes:
      - ./vhost.conf:/etc/nginx/conf.d/default.conf
      - ./nginx-logs:/var/log/nginx
      # Inherit from app container
      - ./project:/var/www/project
      - ./php.ini:/usr/local/etc/php/php.ini
    ports:
      - 0.0.0.0:8021:80
    links:
      - app:app

  # Database
  database:
    image: mariadb:latest
    ports:
      - 0.0.0.0:3306:3306
    volumes:
      - ./database:/var/lib/mysql
      - ./database-config:/etc/mysql/
    command: mysqld --log-bin=mariadb-bin --binlog-format=ROW --server-id=3001 --log-slave-updates
    environment:
      - "MYSQL_ROOT_PASSWORD=secretDummyPassword"
      - "MYSQL_DATABASE=database"
      - "MYSQL_USER=database"
      - "MYSQL_PASSWORD=secretDummyPassword"
      - "skip-networking=0"

  # MaxScale
  maxscale:
    image: mariadb/maxscale:6.2.3
    depends_on:
      - database
    volumes:
      - ./maxscale.cnf:/etc/maxscale.cnf
    ports:
      - 0.0.0.0:4006:4006 # readwrite port
      - 0.0.0.0:4008:4008 # readonly port
      - 0.0.0.0:8989:8989 # REST API port
    links:
      - database:database

volumes:
  app: {}
My maxscale.cnf:
[maxscale]
threads=auto
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2
user=database
password=secretDummyPassword
auto_failover=true
auto_rejoin=true
enforce_read_only_slaves=1
[Read-Write-Service]
type=service
router=readwritesplit
servers=server1,server2
user=database
password=secretDummyPassword
master_failure_mode=fail_on_write
[Read-Write-Listener]
type=listener
service=Read-Write-Service
protocol=MariaDBClient
port=4006
[server1]
type=server
address=195.XXX.123.22
port=3306
protocol=MariaDBBackend
[server2]
type=server
address=142.XXX.186.188
port=3306
protocol=MariaDBBackend

If you haven't configured the replication manually, you can use the following command inside the MaxScale container to set up replication between the servers:
maxctrl call command mariadbmon reset-replication MariaDB-Monitor server1
This causes all other servers configured for the MariaDB-Monitor to start replicating from server1.
Note: this command resets the GTID positions, so it should not be used on a live system. On a live system, use the CHANGE MASTER TO command with the correct GTID coordinates instead. It won't touch the data, but you'll lose the binlog history (it does a RESET MASTER).
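A minimal sketch of that, to be run on the replica (server2); the replication user repl and its password are placeholders rather than something from the original setup, and the GTID value is only an example:

STOP SLAVE;
-- If needed, set the coordinates first (placeholder GTID):
-- SET GLOBAL gtid_slave_pos = '0-3001-42';
CHANGE MASTER TO
  MASTER_HOST='195.XXX.123.22',    -- server1, as in maxscale.cnf
  MASTER_PORT=3306,
  MASTER_USER='repl',              -- placeholder replication user
  MASTER_PASSWORD='replPassword',  -- placeholder
  MASTER_USE_GTID=slave_pos;
START SLAVE;
SHOW SLAVE STATUS\G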
If you want replication to be configured automatically when the container is first started, you can mount a file with SQL commands in it into /docker-entrypoint-initdb.d and MariaDB will execute them during startup, as sketched below. This is probably a better solution for automated systems, and it is quite convenient for a test setup.
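As a sketch of that approach for this setup (the database2 service name, the ./init path, and the repl user are assumptions, not taken from the compose file above), mount an init directory into the replica container:

database2:
  image: mariadb:latest
  volumes:
    - ./init:/docker-entrypoint-initdb.d

and put the replication commands into ./init/replication.sql:

CHANGE MASTER TO
  MASTER_HOST='database',          -- the master's service name on the compose network
  MASTER_USER='repl',              -- placeholder replication user
  MASTER_PASSWORD='replPassword',  -- placeholder
  MASTER_USE_GTID=slave_pos;
START SLAVE;

Note that the entrypoint only runs these scripts when the data directory is empty, i.e. on the very first start of the container.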

Related

Access container from docker-compose using linuxserver/duckdns IP

I was looking for software like No-IP to dynamically update my IP using a free domain from them, like <domain>.zapto.org, but this time for use with docker containers. So I found out about duckdns and tried setting it up.
Well, perhaps I got it wrong, but as far as I understood, I can create a service within my docker-compose services that runs linuxserver/duckdns. When I do that, I suppose I can then access my other services from that same compose using the domain created on duckdns. Is that right?
For instance, I got this docker-compose:
version: "3.9"
services:
  dns_server:
    image: linuxserver/duckdns:version-13f609b7
    restart: always
    environment:
      TOKEN: ${DUCKDNS_TOKEN}
      TZ: ${TZ}
      SUBDOMAINS: ${DUCKDNS_SUBDOMAINS}
    depends_on:
      - server
      - db
      - phpmyadmin

  server:
    # ...
    restart: always
    ports:
      - "7171:7171"
      - "7172:7172"
    # ...
    command: sh -c "/wait && screen -S tfs ./tfs"

  # Database
  db:
    image: bitnami/mariadb:10.8.7-debian-11-r1
    restart: always
    ports:
      - "3306:3306"
    # ...

  # phpmyadmin
  phpmyadmin:
    # ...
    image: bitnami/phpmyadmin:5.2.1-debian-11-r1
    restart: always
    ports:
      - "8080:8080"
      - "8443:8443"
    # ...
That compose gives me all of those containers up and running.
When I try to reach my server service using 127.0.0.1:7171 or localhost:7171, and my phpmyadmin using 127.0.0.1:8080, it works, but it doesn't work when I try <mydomain>.duckdns.org:7171 or <mydomain>.duckdns.org:8080.
What is wrong?
As far as I know, when you define the port as "7171:7171", it may end up bound only to localhost (127.0.0.1), which you can access locally. If you want to allow public access, try binding explicitly to all interfaces:
server:
  ports:
    - "0.0.0.0:7171:7171"
    - "0.0.0.0:7172:7172"
Then you can access the ports via your public IP address or your duckDNS hostname.
FYI: beware of the security risks of exposing these services to the public.
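To check which host interface a published port actually ended up bound to, docker-compose port is handy (the service name server is taken from the compose file above):

docker-compose port server 7171
# prints e.g. 0.0.0.0:7171, meaning the port is bound on all interfaces;
# a result of 127.0.0.1:7171 would mean it is reachable only locally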

Access Postgres database remotely that's hosted on Azure in docker container with webapi

I am new to Azure cloud services, so excuse me if this is a dumb question.
I have a docker-compose file with a .NET Core web API and a Postgres database. I have it running on Azure as a web app and it's working (I can see, when I query the API, that there's data in the database). However, I would like to access the database remotely so that I can inspect the data via pgAdmin or something similar.
I did bind a port to my pgAdmin site in my docker-compose, but that port does not seem to be open. I've read somewhere that only ports 80 and 443 can be exposed from Azure web apps when using multi-image containers. (This docker-compose works 100% locally, and I can access the pgAdmin site and see the database with all its tables.)
So my question is: how do I run my web API with my Postgres database on Azure and still have visibility into my database?
Docker-compose file:
version: '3.8'
services:
  web:
    container_name: 'bootcampapi'
    image: 'myimage'
    build:
      context: .
      dockerfile: backend.dockerfile
    restart: always
    ports:
      - 8080:80
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - bootcampbackend-network

  postgres:
    container_name: 'postgres'
    restart: always
    image: 'postgres:latest'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
      - POSTGRES_DB=database-name
      - PGDATA=database-data
    networks:
      - bootcampbackend-network
    ports:
      - 5432:5432
    volumes:
      - database-data:/var/lib/postgresql/data/

  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - bootcampbackend-network
    volumes:
      - database-other:/var/lib/pgadmin/

networks:
  bootcampbackend-network:
    driver: bridge

volumes:
  database-data:
  database-other:
As you have found, App Service only listens on one port. One way around that is to use a reverse proxy like Nginx to route the traffic to both of your containers.
BTW, build, depends_on and networks are unsupported in multi-container App Service. See the docs.
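A minimal sketch of such an Nginx config, assuming the compose service names above (web, pgadmin); serving pgAdmin under /pgadmin is my choice, not something from the original setup:

# default.conf for an nginx container listening on the one exposed port
server {
    listen 80;

    # everything under / goes to the API container
    location / {
        proxy_pass http://web:80;
        proxy_set_header Host $host;
    }

    # /pgadmin/ goes to the pgAdmin container
    location /pgadmin/ {
        proxy_pass http://pgadmin:80/;
        proxy_set_header Host $host;
        proxy_set_header X-Script-Name /pgadmin;  # lets pgAdmin generate URLs under the subpath
    }
}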

Postgres Database not being Created by Docker-Compose.yml file

This error is ONLY occurring on one of my 4 devices, and I am trying to debug it. This device is a MacBook Pro with an Intel processor.
The database container (db service) spins up but doesn't create the database.
version: "3.7"
services:
  db:
    networks:
      new:
        aliases:
          - database
    restart: always
    container_name: db
    image: postgres:latest
    ports:
      - 5433:5432
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=user
      - POSTGRES_DB=core
      # - PGDATA=/tmp
    volumes:
      - ./pgdata:/var/lib/postgresql/data

  migrate:
    image: migrate/migrate
    depends_on:
      - db
    networks:
      - new
    volumes:
      - ./db/migrations:/migrations
    command: ["-path", "/migrations", "-database", "postgres://user:password@database:5432/core?sslmode=disable", "up"]
    links:
      - db

  web:
    networks:
      - new
    build: .
    ports:
      - "8080:8080"
    volumes:
      - .:/server
    links:
      - db
    depends_on:
      - db
      - redis
    environment:
      PORT: 8080
      CONNECTION_STRING_DEV: db://user:password@db:5433/db
      DSN: "db://user:password@db:5433/core"

  redis:
    networks:
      - new
    image: "redis"
    ports:
      - "6379:6379"

networks:
  new:
The container stops at 2022-01-19 15:37:02.916 UTC [49] LOG: database system is ready to accept connections and never actually executes CREATE DATABASE.
Because the database isn't created, my connected Go API isn't functioning properly. The docker-compose should create the database "core", spin up the redis instance, and then spin up the web service. Afterwards, I typically bring up the migrate container, which runs my database migrations. All of my other devices (macOS, Windows, and Linux) function properly and bring up the database when docker-compose up web is run.
Here is the warning from the postgres Docker image page:
Warning: scripts in /docker-entrypoint-initdb.d are only run if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup. One common problem is that if one of your /docker-entrypoint-initdb.d scripts fails (which will cause the entrypoint script to exit) and your orchestrator restarts the container with the already initialized data directory, it will not continue on with your scripts.
So one likely reason is that the host you are using already has something in ./pgdata.
They also have pretty detailed documentation on how to extend the image or run something on startup; you can actually clean everything up before the first startup.
https://hub.docker.com/_/postgres
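A quick way to check for that, and to reset a disposable dev database (warning: the rm deletes all database data, so only do this when the data is expendable):

ls -A ./pgdata           # non-empty output means initialization (and CREATE DATABASE) was skipped
docker-compose down
rm -rf ./pgdata          # wipe the stale data directory
docker-compose up web    # postgres re-initializes and creates "core" again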

Works on mac, but on windows get ECONNREFUSED, docker-toolbox

This is my docker-compose file that runs when I do docker-compose up -d on Mac. I am now trying this on Windows with docker-toolbox (as Docker Desktop isn't supported on my Windows machine). I run my application on http://localhost:1337 and the application then needs to talk to the services inside these containers. It works totally fine on Mac.
version: '3.4'
services:
  # Add a redis instance to which our app can connect. Quite simple.
  redis-dev:
    image: redis:5.0.5-alpine
    ports:
      - 6379:6379

  # Add a postgres instance as our primary data store
  postgres-dev:
    image: postgres:11.5-alpine
    environment:
      - POSTGRES_DB=the-masjid-app
    ports:
      - 5432:5432
    volumes:
      # Here we specify that docker should keep postgres data,
      # so the next time we start docker-compose,
      # our data is intact.
      - the-masjid-app-pgdata-dev:/var/lib/postgresql/data

  # Add a postgres instance as our test data store
  postgres-test:
    image: postgres:11.5-alpine
    environment:
      - POSTGRES_DB=the-masjid-app
    ports:
      - 5433:5432

# Here we can configure settings for the default network
networks:
  default:

# Here we can configure settings for the postgres data volume where our data is kept.
volumes:
  the-masjid-app-pgdata-dev:
Doing the same thing on Windows gives me:
Error: Redis connection to localhost:6379 failed - connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1141:16)
Any ideas on how to fix?

How to make persistent storage with docker-compose up-down-up?

I have a multi-container application that uses the postgres image in the docker-compose.yml file. The Postgres container has a volume on the host machine for persistent storage.
When I run docker-compose up the first time, all is fine: postgres creates its db files in my host folder.
After that, I need to shut the application down temporarily with docker-compose down whenever I change the code of the web container.
When I run docker-compose up a second time, postgres overwrites all the db files, but I need that data to stay unchanged. How can I solve this issue?
My docker-compose.yml:
version: '2'
services:
  web:
    build: ./web
    command: python3 main.py
    volumes:
      - ./web:/app
    ports:
      - "80:80"
    depends_on:
      - db
      - redis
    links:
      - db:db
      - redis:redis

  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=0000
    volumes:
      - ./pgdb:/var/lib/postgresql/data

  redis:
    image: redis
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
    volumes:
      - ./redisdb:/data
I solved this problem. It probably occurred because I had changed the permissions of the pgdb directory with the host root user. By default I couldn't open pgdb on the host machine because its owner is the postgres user. I could be wrong, but after I stopped changing the permissions, the problem was gone.
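If host-side permissions keep getting in the way, a named volume is a common alternative, since Docker manages its ownership and nothing on the host needs touching. A minimal sketch of the db service rewritten that way (the volume name pgdb-data is a placeholder):

db:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_PASSWORD=0000
  volumes:
    - pgdb-data:/var/lib/postgresql/data   # named volume instead of the ./pgdb bind mount

volumes:
  pgdb-data:

The data then survives docker-compose down and up; only docker-compose down -v (or docker volume rm) deletes it.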