I'm trying to horizontally scale a long-running processing task with Docker Compose.
To simulate that task, I used the following Python code:
import time

from fastapi import FastAPI  # assumed framework; the decorator style matches FastAPI

app = FastAPI()

@app.get("/test")
def test():
    start = time.time()
    print("Start time", time.asctime(time.localtime(start)))
    time.sleep(5)  # simulate a long-running task
    end = time.time()
    print("End time", time.asctime(time.localtime(end)))
    return {"start": start, "end": end}
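The Compose services below start this app with python main.py, so main.py presumably serves the app itself. A minimal sketch of that entry point, assuming uvicorn (this part isn't shown in the original post):

if __name__ == "__main__":
    import uvicorn  # assumed server; matches "command: python main.py" below
    uvicorn.run(app, host="0.0.0.0", port=8001)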
In Docker Compose I created two containers running the same code above, namely back1 and back2:
version: '3.8'
services:
  back1:
    build:
      context: ./backend
      dockerfile: DockerFile
    volumes:
      - ./backend:/usr/code
    command: python main.py
    ports:
      - "8001"
  back2:
    build:
      context: ./backend
      dockerfile: DockerFile
    volumes:
      - ./backend:/usr/code
    command: python main.py
    ports:
      - "8001"
  nginx:
    build:
      context: ./backend/config
      dockerfile: DockerFile
    ports:
      - "80:80"
And this is my nginx conf file:
upstream backend {
    server back1:8001;
    server back2:8001;
    keepalive 32;
}

server {
    listen 80;
    server_name api.localhost;

    location / {
        proxy_pass http://backend;
    }
}
I tried to trigger the API http://api.localhost/test simultaneously. However, it seems the second request only started after the first request completed, even though it was handled by a different container instance.
Is nginx actually blocking? I know we could use queueing (e.g. RabbitMQ), but I thought that spinning up two Docker containers and using nginx as a load balancer would achieve horizontal scaling. Am I wrong?
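For reference, one way to fire the two requests truly in parallel, outside a browser, is a small script like this sketch (my own harness, not part of the original setup; it assumes api.localhost resolves on the host):

import threading
import urllib.request

def hit():
    # each response reports the start/end window of the instance that served it
    with urllib.request.urlopen("http://api.localhost/test") as resp:
        print(resp.read().decode())

threads = [threading.Thread(target=hit) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()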
Related
When deploying my app on Docker Swarm, I have two services: NGINX to serve static files, and an app to compile some static files. To run the static files compilation I'm using an entrypoint in the Compose file.
docker-compose.yml:
version: "3.9"
services:
  nginx:
    image: nginx
    healthcheck:
      test: curl --fail -s http://localhost:80/lib/tether/examples/viewport/index.html || exit 1
      interval: 1m
      timeout: 5s
      retries: 3
    volumes:
      - /www:/usr/share/nginx/html/
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
    ports:
      - "8000:80"
    depends_on:
      - client
  client:
    image: my-client-image:latest
    restart: "no"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
    volumes:
      - /www:/app/www
    entrypoint: /entrypoint.sh
entrypoint.sh:
#!/bin/sh
./node_modules/.bin/gulp compilescss
I tried adding restart: "no" to my service, but the service is restarted on entrypoint completion anyway.
Docker 23.0.0 is now out. As such you have two options:
Stack files now support swarm jobs, i.e. mode: replicated-job. Swarm understands that these run to completion.
The Docker Compose V3 reference makes it clear that restart: applies to Compose, and that deploy.restart_policy.condition: on-failure is the equivalent Swarm statement.
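A minimal sketch of the first option applied to the client service from the question (mode: replicated-job is the documented swarm-jobs syntax; the rest of the stanza is copied from above):

services:
  client:
    image: my-client-image:latest
    entrypoint: /entrypoint.sh
    volumes:
      - /www:/app/www
    deploy:
      mode: replicated-job   # swarm runs this to completion and does not restart it
      replicas: 1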
I am using the following technologies for my project:
Backend / API: NestJS (running on localhost:3001)
Frontend: Next.js (running on localhost:3000)
Database: MongoDB
Reverse proxy server: Nginx
and Docker
When I run docker-compose up -d, the backend runs properly and all the API URLs work fine via localhost:3001.
Also, the frontend site loads properly and all the API data shows. But an error appears in the console and as a popup.
Error in text:
In popup:
Unhandled Runtime Error
Error: Network Error
Call Stack
createError
node_modules/axios/lib/core/createError.js (16:0)
XMLHttpRequest.handleError
/_next/static/development/pages/Index.js (16144:14)
In Console:
Access to XMLHttpRequest at 'http://dev:3001/news?page=1&limit=3' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I call the API from the frontend as http://dev:3001, since I am loading it from a Docker image and the service name is dev.
Here is the docker-compose file:
version: "3.9"
services:
  dev:
    container_name: nest-backend
    image: nest-backend:1.0.0
    build:
      context: ./backend
      target: development
      dockerfile: ./Dockerfile
    expose:
      - 3001
    ports:
      - "3001:3001"
    links:
      - mongo
    volumes:
      - ./backend:/backend/app
      - /backend/app/node_modules
    restart: unless-stopped
    command: npm run start:dev
    networks:
      - app-network
  app:
    image: bjithp-next
    build: frontend
    expose:
      - 3000
    ports:
      - 3000:3000
    depends_on:
      - dev
    volumes:
      - ./frontend:/app
      - /app/node_modules
      - /app/.next
    networks:
      - app-network
  mongo:
    image: mongo
    # environment:
    #   - MONGO_INITDB_ROOT_USERNAME=root
    #   - MONGO_INITDB_ROOT_PASSWORD=1234
    ports:
      - "27037:27017"
    volumes:
      - mongodb_data_container:/data/db
    networks:
      - app-network
  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - "80:80"
    restart: always
    depends_on:
      - app
      - dev
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - app-network
volumes:
  mongodb_data_container:
  mongodb-data:
    name: mongodb-data
networks:
  app-network:
    driver: bridge
I have also used nginx as a reverse proxy. Here is the nginx default.conf file:
# default.conf
upstream dev {
    server dev:3001;
}
upstream client {
    server app:3000;
}

server {
    listen 80;
    server_name localhost:3000;

    access_log /path/to/access/log/access.log;
    error_log /path/to/error/log/error.log;

    location / {
        proxy_pass http://app;
    }
    location ~ /dev/(?<section>.*) {
        rewrite ^/dev/(.*)$ /$1 break;
        proxy_pass http://dev;
    }
}
In the frontend I have also used a .env file, which is this:
DEFAULT_PORT=3000
BASE_API_URL=http://dev:3001
NODE_ENV=development
HOST=http://localhost
PRODUCTION_IMAGE_PORT=443
BASE_IMAGE_URL=http://localhost:3001
HIRE_US_PAGE=http://103.197.206.56:3000
BLOG_SHARE_URL_HOST=http://bjitgroup.com
Please suggest a solution. My site is working fine apart from that annoying network-error problem.
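As a hedged aside (my assumption, not a confirmed fix): the browser runs on the host and cannot resolve the Compose service name dev, so pointing the frontend at the nginx /dev/ route from default.conf instead would look like:

# hypothetical change: route browser API calls through nginx instead of the dev hostname
BASE_API_URL=http://localhost/dev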
We want to use MaxScale and two MariaDB databases with docker-compose.
Our problem is that we cannot get replication of the database working via MaxScale.
Write permissions are available via MaxScale on both databases. Via the command maxctrl list servers in the MaxScale container, we see both servers. The first server has the states Master, Running and the second server only has the state Running.
My docker-compose.yaml:
version: '3'
services:
  # Application
  app:
    build:
      context: .
      dockerfile: app.dockerfile
    working_dir: /var/www/project
    volumes:
      - ./project:/var/www/project
      - ./php.ini:/usr/local/etc/php/php.ini
    links:
      - database:database
    environment:
      - "DATABASE_HOST=database"
      - "DATABASE_PORT=4006"
  # Web server
  web:
    image: nginx:latest
    volumes:
      - ./vhost.conf:/etc/nginx/conf.d/default.conf
      - ./nginx-logs:/var/log/nginx
      # Inherit from app container
      - ./project:/var/www/project
      - ./php.ini:/usr/local/etc/php/php.ini
    ports:
      - 0.0.0.0:8021:80
    links:
      - app:app
  # Database
  database:
    image: mariadb:latest
    ports:
      - 0.0.0.0:3306:3306
    volumes:
      - ./database:/var/lib/mysql
      - ./database-config:/etc/mysql/
    command: mysqld --log-bin=mariadb-bin --binlog-format=ROW --server-id=3001 --log-slave-updates
    environment:
      - "MYSQL_ROOT_PASSWORD=secretDummyPassword"
      - "MYSQL_DATABASE=database"
      - "MYSQL_USER=database"
      - "MYSQL_PASSWORD=secretDummyPassword"
      - "skip-networking=0"
  # MaxScale
  maxscale:
    image: mariadb/maxscale:6.2.3
    depends_on:
      - database
    volumes:
      - ./maxscale.cnf:/etc/maxscale.cnf
    ports:
      - 0.0.0.0:4006:4006 # readwrite port
      - 0.0.0.0:4008:4008 # readonly port
      - 0.0.0.0:8989:8989 # REST API port
    links:
      - database:database
volumes:
  app: {}
My maxscale.cnf:
[maxscale]
threads=auto
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2
user=database
password=secretDummyPassword
auto_failover=true
auto_rejoin=true
enforce_read_only_slaves=1
[Read-Write-Service]
type=service
router=readwritesplit
servers=server1,server2
user=database
password=secretDummyPassword
master_failure_mode=fail_on_write
[Read-Write-Listener]
type=listener
service=Read-Write-Service
protocol=MariaDBClient
port=4006
[server1]
type=server
address=195.XXX.123.22
port=3306
protocol=MariaDBBackend
[server2]
type=server
address=142.XXX.186.188
port=3306
protocol=MariaDBBackend
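One hedged observation: the compose file publishes 4008 as a readonly port, but the maxscale.cnf above defines no listener for it. A sketch of the missing read-only pieces (the readconnroute settings are my assumption):

# sketch: read-only routing for the published 4008 port
[Read-Only-Service]
type=service
router=readconnroute
router_options=slave
servers=server1,server2
user=database
password=secretDummyPassword

[Read-Only-Listener]
type=listener
service=Read-Only-Service
protocol=MariaDBClient
port=4008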
If you haven't configured the replication manually, you can use the following command inside the Maxscale container to set up replication between the servers:
maxctrl call command mariadbmon reset-replication MariaDB-Monitor server1
This causes all other servers configured for the MariaDB-Monitor to start replicating from server1.
Note: this command resets the GTID positions, so it should not be used on a live system. It won't touch the data, but you'll lose the binlog history (it does a RESET MASTER). If you are on a live system, use the CHANGE MASTER TO command with the correct GTID coordinates instead.
If you want the replication to be configured automatically when the container is first started, you can mount a file with SQL commands in it at /docker-entrypoint-initdb.d and MariaDB will execute them during startup. This is probably a better solution for automated systems and it is quite convenient for a test setup.
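A minimal sketch of such an init file for this setup (the repl user and GTID mode are assumptions; the address is server1 from maxscale.cnf):

-- hypothetical /docker-entrypoint-initdb.d/replication.sql on the second server
CHANGE MASTER TO
  MASTER_HOST='195.XXX.123.22',
  MASTER_PORT=3306,
  MASTER_USER='repl',
  MASTER_PASSWORD='secretDummyPassword',
  MASTER_USE_GTID=slave_pos;
START SLAVE;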
So I currently can use docker-compose up test, which only runs my database and my testing scripts. I want to be able to use, say, docker-compose up app or something like that which runs everything besides testing. That way I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible, I'd appreciate some links to projects that already do that, and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
YAML:
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command: npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which only runs postgres and the test container. Though I'd like to be able to just run my app without having to run my testing container.
Edit
Thanks to @ideam for the link.
I was able to create an additional YAML file for just testing.
For those that don't want to look it up: simply create a new YAML file like so
docker-compose.dev.yml
and replace dev with whatever you like, except override, which docker-compose up picks up automatically unless otherwise specified.
To run the new file simply call
docker-compose -f docker-compose.dev.yml up
The -f flag selects a specific file to run. You can also combine multiple files to set up different environments, as in the sketch below.
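For this project, a stripped-down docker-compose.dev.yml might contain just the test chain (a sketch based on the services above):

# hypothetical docker-compose.dev.yml: only the test service and its database
version: '3'
services:
  postgres:
    build: './database-creation'
    restart: 'always'
  test:
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command: npm run mocha
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres

Multiple -f flags can also be combined, e.g. docker-compose -f docker-compose.yml -f docker-compose.dev.yml up.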
Appreciate the help
docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also name multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does "start the service and its dependencies" mean?
Usually your production services (containers) are attached to each other via the depends_on chain, therefore you can start only the last container of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained in the depends_on option, while the testing container sits below the frontend. So when you hit docker-compose up frontend, Docker will run fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made just for this! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up # start only your app services
docker-compose --profile test up # start app and test services
docker-compose run test # run test service
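The profile can also be selected via the COMPOSE_PROFILES environment variable instead of the flag:

COMPOSE_PROFILES=test docker-compose up # same as --profile test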
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations have a look at https://docs.docker.com/compose/extends/#example-use-case which explains the combination of multiple configuration files for reuse of configs for different use cases (test, production, etc.)
I have a 'docker-compose.yml' file like below (only volumes, environment and network are skipped). I would like to add a new port to the 'logstash' service without restarting all 3 services. I did 'docker-compose build logstash --no-cache' but it didn't add the port.
docker#ubuntu-elastic:~/docker-elk$ cat docker-compose.yml
version: '2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    build:
      context: logstash/
    ports:
      - "11514:11514/udp"
      - "8514:8514/udp"
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
This will do the trick:
docker-compose up -d logstash
If you do not change the other sections, this should also only update logstash:
docker-compose up -d
To make sure that only logstash gets updated, even if the other sections were updated too, use the first command.
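Concretely, after adding the new mapping under the logstash section (9600 below is just a placeholder port), the first command recreates only that one container:

  logstash:
    build:
      context: logstash/
    ports:
      - "11514:11514/udp"
      - "8514:8514/udp"
      - "9600:9600" # hypothetical new port

docker-compose up -d logstash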