How to execute multiple commands from docker-compose.yml file - docker-compose

I am using the YAML configuration below and am trying to create a folder and change the permissions of a directory. The container creates the directory, but the second command, chmod 777 /opt/ldap, never takes effect.
ldap:
  image: osixia/openldap:1.3.0
  env_file:
    - env/ldap.env
  networks:
    appworks_net:
      aliases:
        - docker_container.local
  restart: "no"
  # Variants of `command` I have tried:
  command: bash -c "mkdir -p /opt/ldap && chmod 777 /opt/ldap"                        # creates only the folder
  command: ["/bin/bash", "-c", "mkdir -p /opt/ldap && chmod 777 /opt/ldap"]           # container fails to start
  command: ["-c", "chmod 777 /opt/ldap"]                                              # does not work
  command: chmod 777 /opt/ldap                                                        # does not work
  command: ["-c", "mkdir -p /opt/appworks/ldap", "chmod 777 /opt/appworks/ldap"]      # only creates the folder
  command:
    - -c
    - |
      mkdir -p /opt/ldap
      chmod 777 /opt/ldap
  # (the multi-line variant above also creates only the folder)
  healthcheck:
    test: ["CMD-SHELL", '/usr/bin/ldapsearch -h ldaps://$$HOSTNAME -p $$PORT -w $$LDAP_ADMIN_PASSWORD -D "cn=admin,dc=trialorg,dc=local" -b "dc=trialorg,dc=local" | grep "dn:"']
    interval: 5s
    timeout: 5s
    retries: 10
Please help if there are any working ideas. I don't want to build a new image with a custom Dockerfile or entrypoint; I just want those two commands to run after the container starts.

What about

command:
  - -c
  - |
    mkdir -p /opt/ldap;
    chmod 777 /opt/ldap;

or

command:
  - -c
  - |
    mkdir -p /opt/ldap && chmod 777 /opt/ldap;

Also, are you sure that you have UNIX line endings in your file?
Another possible solution: Using Docker-Compose, how to execute multiple commands
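One more variation worth trying (a sketch only; the startup path in the last line is an assumption about the osixia base image and should be checked with docker inspect): make the shell explicit via entrypoint, run the setup commands, then hand off to the image's normal startup program so slapd still starts.

entrypoint: ["/bin/bash", "-c"]
command:
  - |
    mkdir -p /opt/ldap
    chmod 777 /opt/ldap
    # assumed default startup program of osixia images; verify with:
    #   docker inspect --format '{{json .Config.Entrypoint}}' osixia/openldap:1.3.0
    exec /container/tool/run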

Related

How can I start Tortoise-ORM in a celery docker container?

I have an application that uses PostgreSQL and Celery. Each component runs in its own container. In the Celery container I am already connected to the Postgres database, but I don't know how to configure Tortoise-ORM to start in the Celery container, since I have a task in which I want to interact with the database using Tortoise.
This is my docker-compose file:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_web
    # '/start' is the shell script used to run the service
    command: /start
    # this volume is used to map the files and folders on the host to the container,
    # so if we change code on the host, code in the docker container will also be changed
    volumes:
      - .:/app
    ports:
      - 8010:8000
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
  db:
    image: postgres:14-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=fastapi_celery
      - POSTGRES_USER=fastapi_celery
      - POSTGRES_PASSWORD=fastapi_celery
  redis:
    image: redis:7-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
This is my Dockerfile:
FROM python:3.10-slim-buster
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update \
# dependencies for building Python packages
&& apt-get install -y build-essential \
# psycopg2 dependencies
&& apt-get install -y libpq-dev \
# Additional dependencies
&& apt-get install -y telnet netcat \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# Requirements are installed here to ensure they will be cached.
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./compose/local/fastapi/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/fastapi/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/local/fastapi/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/fastapi/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/fastapi/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
The task:
@shared_task()
def task_send_welcome_email(user_pk):
    from project.users.models import User
    user = User.filter(id=user_pk).first()
    logger.info(f'send email to {user.email} {user.id}')

User permission is denied with chown using Dockerfile on docker-compose

I'm trying to mount a volume in docker-compose, but it seems my user does not have permission to write to the volume. :/
My Dockerfile is:
FROM openjdk:8u181-jdk-slim
ENV HOME /app
ENV CONFIG_PATH $HOME/config
ENV DATA_PATH $HOME/data
ENV LOG_PATH $HOME/log
RUN addgroup --gid 1001 myuser \
&& adduser --uid 1001 --gid 1001 --home $HOME --shell /bin/bash \
--gecos "" --no-create-home --disabled-password myuser \
&& mkdir -p $CONFIG_PATH $DATA_PATH $LOG_PATH \
&& chown -R myuser:myuser $HOME \
&& chmod -R g=u $HOME \
&& chmod +x $HOME/*
RUN apt-get update \
&& apt-get install -y curl \
&& apt-get clean
VOLUME $CONFIG_PATH $DATA_PATH $LOG_PATH
USER myuser:myuser
EXPOSE 7777
EXPOSE 8080
HEALTHCHECK --interval=1m --timeout=10s --start-period=2m \
CMD curl -f http://localhost:7777/health || exit 1
COPY --chown=myuser my-service-*.jar $HOME/my-service.jar
ENTRYPOINT ["/bin/bash", "-c", "java $JAVA_OPTS -jar $HOME/my-service.jar $0 $#"]
my docker-compose file is:
volumes:
  my-service_stream:
my-service:
  image: my-service-image
  networks:
    - internal
  env_file:
    - config/common.env
  volumes:
    - my-service_stream:/app/data/state
I am not able to run as myuser and mount the volume for this user :/ myuser does not have permission to write to that volume.
I have tried adding a user to my docker-compose file as
user: "1001:1001"
but nothing changed.
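One pattern that can help here (a sketch, not from the original post; the volume-init service name is made up for illustration): a named volume mounted at a path the image never created typically ends up owned by root, so a one-shot service running as root can chown it before the application service starts.

services:
  volume-init:
    image: my-service-image
    user: "0:0"                      # root, only for this one-off ownership fix
    entrypoint: ["/bin/bash", "-c", "chown -R 1001:1001 /app/data/state"]
    volumes:
      - my-service_stream:/app/data/state
  my-service:
    image: my-service-image
    user: "1001:1001"
    depends_on:
      - volume-init
    volumes:
      - my-service_stream:/app/data/state

On recent Compose versions, depends_on with condition: service_completed_successfully guarantees the chown finishes before my-service starts. Alternatively, creating and chowning /app/data/state (not just /app/data) in the Dockerfile before the VOLUME declaration should let Docker copy that ownership into the named volume on first use.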

Docker: run script : ERROR: Unable to lock database: Permission denied

Hello, I'm trying to run a script that just starts yarn dev after Postgres is accepting connections:
until psql -c '\l'; do
  echo >&2 "$(date +%Y%m%dt%H%M%S) Postgres is unavailable - sleeping"
  sleep 1
done
echo >&2 "$(date +%Y%m%dt%H%M%S) Postgres is up - executing command"
exec "$@"
Dockerfile:
#building code
FROM node:lts-alpine
RUN mkdir -p /home/node/api && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
RUN apk add --no-cache openssl
COPY wait-pg.sh ./
RUN chmod +x /wait-pg.sh
ENTRYPOINT ["/wait-pg.sh"]
EXPOSE 4000
CMD ["yarn", "dev"]
docker-compose.yml:
version: '3.7'
services:
  db-pg:
    image: postgres:12
    container_name: db-pg
    ports:
      - '${DB_PORT}:5432'
    environment:
      ALLOW_EMPTY_PASSWORD: 'no'
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASS}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - ci-postgres-data:/data
  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - db-pg
    command: ['./wait-pg.sh', 'yarn', 'dev']
    logging:
      driver: 'json-file'
      options:
        max-size: '10m'
        max-file: '5'
volumes:
  ci-postgres-data:
and I get this error:
---> Running in c5add5098b70
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
ERROR: Service 'ci-api' failed to build: The command '/bin/sh -c apk add --no-cache openssl' returned a non-zero code: 99
You are getting the error because the node user you switch to does not have permission to run apk (package installation needs root).
Move the USER instruction to after the commands that need root, something like this:
#building code
FROM node:lts-alpine
RUN mkdir -p /home/node/api && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
RUN yarn
COPY --chown=node:node . .
RUN apk add --no-cache openssl
COPY wait-pg.sh .
RUN chmod +x ./wait-pg.sh
USER node
ENTRYPOINT ["./wait-pg.sh"]
EXPOSE 4000
CMD ["yarn", "dev"]

Waiting for a Docker container to be ready

I have the following docker-compose.yml:
version: '2'
services:
  server:
    build: .
    command: ["./setup/wait-for-postgres.sh", "tide_server::5432", "cd /app", "npm install", "npm run start"]
    ports:
      - 3030:3030
    links:
      - database
    depends_on:
      - database
  database:
    image: postgres
    environment:
      - "POSTGRES_USER=postgres"
      - "POSTGRES_PASSWORD=postgres"
      - "POSTGRES_DB=tide_server"
    ports:
      - 5432:5432
I tried following this tutorial and using the following shell script to determine when postgres is ready.
#!/bin/bash
# wait-for-postgres.sh
set -e
host="$1"
shift
cmd="$@"
until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
>&2 echo "Postgres is up - executing command"
exec $cmd
My node Dockerfile is minimal but I have added it for reference:
FROM node:latest
ADD . /app
WORKDIR /app
EXPOSE 3030
Now when I try to run docker-compose up I get the following (after the postgres container is ready):
server_1 | Postgres is unavailable - sleeping
server_1 | ./setup/wait-for-postgres.sh: line 10: psql: command not found
server_1 | Postgres is unavailable - sleeping
server_1 | ./setup/wait-for-postgres.sh: line 10: psql: command not found
server_1 | Postgres is unavailable - sleeping
server_1 | ./setup/wait-for-postgres.sh: line 10: psql: command not found
server_1 | Postgres is unavailable - sleeping
server_1 | ./setup/wait-for-postgres.sh: line 10: psql: command not found
Now I am not sure if this is a linking issue or something wrong with my script, but I have tried every variation I can think of and have had no luck getting this up.
This will successfully wait for Postgres to start. (Specifically line 6)
services:
  practice_docker:
    image: dockerhubusername/practice_docker
    ports:
      - 80:3000
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:password@db:5432/practicedocker
      - PORT=3000
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=practicedocker
When you declare command in docker-compose.yml, the command is executed inside the container of the service it is declared on. Catch my drift?
Your ./setup/wait-for-postgres.sh is therefore being executed in the server container, which has no Postgres client (psql) installed, so the check can never succeed.
You could, however, run your script in the postgres container instead. But if you define command in the database section, it will override the default CMD of postgres:latest, which is just CMD ["postgres"].
This means you have to slightly rewrite your script:
#!/bin/bash
# wait-for-postgres.sh
set -e
host="$1"
shift
cmd="$@"
postgres &   # start the server in the background so the wait loop below can run
until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
>&2 echo "Postgres is up - executing command"
exec $cmd
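If rewriting the database startup is undesirable, another commonly used approach (a sketch; it assumes Compose file format 2.1 or a recent Compose release, so that healthcheck and depends_on conditions are supported, and uses pg_isready, which ships in the official postgres image) is to let Compose itself wait for the database:

services:
  server:
    build: .
    depends_on:
      database:
        condition: service_healthy   # start only once the healthcheck passes
  database:
    image: postgres
    environment:
      - "POSTGRES_USER=postgres"
      - "POSTGRES_PASSWORD=postgres"
      - "POSTGRES_DB=tide_server"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d tide_server"]
      interval: 5s
      timeout: 5s
      retries: 10

Note that this only confirms Postgres accepts connections; the application may still want its own retry logic for its first queries.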

Symfony app in Docker doesn't work

I created an app in Symfony with MongoDB and packaged it in a Docker image.
The MongoDB image works fine and logs: 2017-04-19T12:47:33.936+0000 I NETWORK [initandlisten] waiting for connections on port 27017
But the app image doesn't work; I receive the message:
stdin: is not a tty
hello
when I run: docker run docker_web_server:latest
I use this docker-compose file:
web_server:
  build: web_server/
  ports:
    - 5000:5000
  links:
    - mongo
  tty: true
  environment:
    SYMFONY__MONGO_ADDRESS: mongo
    SYMFONY__MONGO_PORT: 27017
mongo:
  image: mongo:3.0
  container_name: mongo
  command: mongod --smallfiles
  expose:
    - 27017
And the Dockerfile for the app is:
FROM ubuntu:14.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
git \
curl \
php5-cli \
php5-json \
php5-intl
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
ADD entrypoint.sh /entrypoint.sh
ADD ./code /var/www
WORKDIR /var/www
#RUN chmod +x /entrypoint.sh
ENTRYPOINT [ "/bin/bash", "/entrypoint.sh" ]
entrypoint.sh
#!/bin/bash
rm -rf /var/www/app/cache/*
exec php -S 0.0.0.0:8000 # This will run a web-server on port 8000
What is the problem?
Am I calling the server from the Docker image incorrectly?
I expected to see the message:
[OK] Server running on http://127.0.0.1:8000
You should remove CMD ['echo', 'hello'] from your Dockerfile, as it is being passed as a parameter to your ENTRYPOINT.
You should also add tty: true to your web_server service definition.
I'm hoping entrypoint.sh runs php -S 0.0.0.0:8000 at the end; please post it for further advice. If you do use php -S inside it, prefix it with exec so it takes over as the main process.
Edit since new information added:
I'd modify the entrypoint.sh to:
#!/bin/bash
rm -rf /var/www/app/cache/*
exec php -S 0.0.0.0:8000 # This will run a web-server on port 8000 as the main process
I'd get rid of the symfony_environment.sh and instead add the following to your web_server service:
environment:
  SYMFONY__MONGO_ADDRESS: mongo
  SYMFONY__MONGO_PORT: 27017
As a side note, I contribute to a project called boilr that generates boilerplates like this. I've even created a template for php/docker-compose projects; it would be worth checking out. I always keep it up to date with best practices.
https://github.com/rawkode/boilr-docker-compose-php