I'm trying to run my first container on Docker Swarm and have the following issue:
on the swarm node the service keeps looping - it shows as starting for 1, 2, 3 seconds and then goes back to initialising, over and over.
I don't know whether it fails because of my docker-compose file - maybe I wrote the yml wrong.
version: '3.7'
services:
  app1:
    image: debian:latest
    # command: sh -c 'apt update'
    # command: sh -c 'apt install ssh -y'
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '2'
          memory: 2G
networks:
  net1:
    driver: overlay
The plain debian image just starts an interactive shell, which exits as soon as nothing is attached to it, so swarm restarts the task over and over.
You need to do something like this to keep a plain Linux OS image alive as a service:
services:
  my-service:
    image: debian
    command: tail -f /dev/null
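Once the service stays up, you can deploy the stack and check that the task keeps running; this assumes the file above is saved as docker-compose.yml and uses app as an example stack name:
# Deploy the stack (stack name "app" is an arbitrary example)
docker stack deploy -c docker-compose.yml app
# The task should now stay in the Running state instead of restarting
docker service ps app_app1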
I'm trying to start Locust using docker-compose on a MacBook M1:
the issue:
it starts, but there are no workers
expected behaviour:
there should be one worker
logs:
no error logs
code to reproduce it:
version: '3.3'
services:
  master_locust:
    image: locustio/locust:master
    ports:
      - "8089:8089"
    volumes:
      - ./backend:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master
    depends_on:
      - worker_locust
  worker_locust:
    image: locustio/locust:master
    volumes:
      - ./backend:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host=master_locust
commands:
docker-compose up master_locust
docker-compose up --scale worker_locust=4 master_locust
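For reference, one way to check whether the workers ever register with the master is to follow the logs of both services (service names taken from the compose file above):
# Follow the logs of the master and the workers to see whether they connect
docker-compose logs -f master_locust worker_locust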
I am using docker-compose on a 2 GB DigitalOcean server to deploy my app, but I noticed that the postgresql container was using all the RAM available to it!
This is not normal and I wanted to know how to fix this problem.
So I went into the logs of the container (docker logs postgres) and found this:
(screenshot: postgresql container logs)
I didn't expect any logs after 'database is ready to accept connections'. The logs look as if packages were not installed in the container, but I am using the official image, so I think it should work...
To help you help me:
my docker-compose file:
version: "3"
services:
monapp:
image: registry.gitlab.com/touretchar/workhouse-api-bdd/master:latest
container_name: monapp
depends_on:
- postgres
ports:
- "3000:3000"
command: "npm run builded-test"
restart: always
deploy:
resources:
limits:
cpus: 0.25
memory: 500M
reservations:
memory: 150M
postgres:
image: postgres:13.1
container_name: postgres
environment:
- POSTGRES_HOST_AUTH_METHOD=trust
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
volumes:
- postgres_datas:/var/lib/postgresql/data/
- postgres_dumps:/home/dumps/test
ports:
- "5432:5432"
restart: always
deploy:
resources:
limits:
cpus: 0.25
memory: 500M
reservations:
memory: 150M
volumes:
postgres_datas:
driver: local
driver_opts:
type: none
device: $PWD/util/databases/pgDatas
o: bind
postgres_dumps:
driver: local
driver_opts:
type: none
device: $PWD/util/databases/test
o: bind
and here is the output of docker stats:
(screenshot: docker stats output)
If you have any ideas, thanks in advance! :)
I finally found a solution: it was because my container was compromised!
Indeed, my postgres container had port 5432 open to the internet, so anyone could connect to it using the DigitalOcean droplet IP and the port, and I think someone was abusing my container and using all the RAM/CPU allowed to it!
I am sure about this because, to correct the problem, I blocked access to the container from outside my droplet by adding an iptables firewall rule (you should add the rule in the DOCKER-USER chain), and since I added the rule, the RAM consumption of the container is back to normal and I no longer see the weird logs I posted in my question!
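As a rough sketch of such a rule (not the exact one used here), traffic to the published Postgres port could be dropped unless it comes from a trusted address; 203.0.113.10 is only a placeholder:
# Drop connections to port 5432 unless they come from the trusted address (placeholder IP)
iptables -I DOCKER-USER -p tcp --dport 5432 ! -s 203.0.113.10 -j DROP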
Conclusion: be careful about the security of your Docker containers when they are exposed to the web!
Thanks, I hope this helps someone :)
There are two services in my docker-compose.yml. A mail service which uses MailHog and a MongoDB for storage.
The problem is that the MongoDB service needs to be up and running before MailHog. Otherwise, MailHog will do a fallback and use its in-memory storage.
A simple depends_on is not sufficient because the MongoDB service takes some time to start.
I'm aware of scripts like wait-for-it etc., but they all require modifying the Dockerfile, whereas in my case I'm using the unmodified Docker image of MailHog.
Is there any "built-in" mechanism or workaround how I can delay the mail service until MongoDB is ready?
mail:
  image: mailhog/mailhog:v1.0.0
  deploy:
    restart_policy:
      condition: on-failure
      delay: 10s
      max_attempts: 3
      window: 60s
mail-db:
  image: mongo:4.2.6
  environment:
    MONGO_INITDB_DATABASE: mailhog
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: root
  ports:
    - 27017
  deploy:
    restart_policy:
      condition: on-failure
    resources:
      limits:
        cpus: "0.5"
        memory: 500M
One way is to supply your own entrypoint script, which you can add to the container with volumes. In the script, wait for a successful connection to MongoDB and then exec the original entrypoint.
Stack:
volumes:
  - /path/to/entrypoint.sh:/tmp/entrypoint.sh
entrypoint: /bin/bash
command: /tmp/entrypoint.sh
entrypoint.sh:
#!/bin/bash
# Wait for MongoDB to accept connections (assumes nc is available in the image)
until nc -z mail-db 27017; do sleep 1; done
exec /path/to/original/entrypoint
No, there is not. However, there is no need to modify the original Dockerfile. You can extend it with jwilder/dockerize, a tool developed for this specific purpose (amongst others).
FROM mailhog/mailhog:v1.0.0
# If required, you can change MONGOURL via docker -e [...]
ENV MONGOURL mail-db:27017
# The dockerize version used. You can set a different version with
# docker build --build-arg DOCKERIZE_VERSION=[...]
ARG DOCKERIZE_VERSION=v0.6.1
# Change to root to be able to install dockerize
USER root
# 1: Ensure the image is up to date, while we are at it
RUN apk update && apk upgrade \
# 2: Install curl and its dependencies as the virtual package ".deps"
&& apk add --virtual .deps curl \
# 3: Get dockerize
&& curl -L -O https://github.com/jwilder/dockerize/releases/download/${DOCKERIZE_VERSION}/dockerize-linux-amd64-${DOCKERIZE_VERSION}.tar.gz \
# 4: Unpack it and put it to the appropriate location as per FHS
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-${DOCKERIZE_VERSION}.tar.gz \
# 5: Remove the tarball
&& rm dockerize-linux-amd64-${DOCKERIZE_VERSION}.tar.gz \
# 6: Cleanup
&& rm -rf /var/cache/apk/* \
# 7: Remove the virtual package ".deps"
&& apk del .deps
# Switch back to the user mailhog is supposed to run under
USER mailhog
# Run dockerize, which will start mailhog as soon as it was able to connect to $MONGOURL
ENTRYPOINT ["/bin/sh","-c","/usr/local/bin/dockerize -wait tcp://$MONGOURL MailHog"]
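As the comment in the Dockerfile suggests, the MongoDB address can be overridden at run time; a standalone test might look like this (the host name is a placeholder, the image tag is the one from the compose file below):
# Override the MongoDB address at run time (my-mongo-host is a placeholder)
docker run -e MONGOURL=my-mongo-host:27017 robertstauch/mailhog:v1.0.0-dockerized-v0.6.1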
Tested with the following docker-compose.yaml (the deploy parts are obviously ignored by docker-compose):
version: "3"
services:
mail:
image: robertstauch/mailhog:v1.0.0-dockerized-v0.6.1
build: .
ports:
- "8025:8025"
- "1025:1025"
deploy:
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 60s
mail-db:
image: mongo:4.2.6
environment:
MONGO_INITDB_DATABASE: mailhog
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: root
deploy:
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.5"
memory: 500M
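For completeness, this is roughly how the extended image would be built and the stack deployed on a swarm manager; the stack name mailstack is only an example:
# Build the extended MailHog image from the Dockerfile above
docker-compose build
# Deploy the stack (stack name "mailstack" is an arbitrary example)
docker stack deploy -c docker-compose.yaml mailstack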
I am using the docker-compose up command to spin up a few containers on an AWS RHEL 7.6 AMI instance. I observe that whichever containers have a volume mounted exit with status Exited (1) immediately after starting, while the remaining containers stay up. I tried using tty: true and stdin_open: true, but it didn't help. Surprisingly, the same set-up works fine on another instance, which is basically what I am trying to replicate on this new one.
The stopped containers are the Fabric v1.2 peers, CAs and orderer.
docker-compose.yml file in the root folder where I run the docker-compose up command:
version: '2.1'
networks:
  gcsbc:
    name: gcsbc
services:
  ca.org1.example.com:
    extends:
      file: fabric/docker-compose.yml
      service: ca.org1.example.com
fabric/docker-compose.yml
version: '2.1'
networks:
  gcsbc:
services:
  ca.org1.example.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    ports:
      - '7054:7054'
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./artifacts/channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerorg1
    networks:
      - gcsbc
    hostname: ca.org1.example.com
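For reference, a quick way to see why such a container stops is to list all containers and read the logs of the stopped one; the container name below is the container_name from the compose file above:
# List all containers, including the ones that have exited
docker ps -a
# Show the logs (and hence the exit reason) of the stopped CA container
docker logs ca_peerorg1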
I'm having problems getting data volume containers running with docker-compose v3. As a test I've tried to connect two simple images like this:
version: '3'
services:
  assets:
    image: cpgonzal/docker-data-volume
    container_name: data_container
    command: /bin/true
    volumes:
      - assets_volume:/tmp
  web:
    image: python:3
    volumes:
      - assets_volume:/tmp
    depends_on:
      - assets
volumes:
  assets_volume:
I would expect the python:3 container to be able to see /tmp of data_container. Unfortunately,
docker-compose up
fails with
data_container exited with code 0
desktop_web_1 exited with code 0
What am I doing wrong?
Both of your containers exited because there is no command to keep them running.
Use these three options, stdin_open, tty, and command, to keep them running.
Here's an example:
version: '3'
services:
  node:
    image: node:8
    stdin_open: true
    tty: true
    command: sh
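If the same idea is applied to the compose file from the question (for example by giving the web service a long-running command), the shared volume can then be inspected from the running container:
# Start the stack in the background
docker-compose up -d
# List the shared volume's contents inside the running web container
docker-compose exec web ls -la /tmp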