Passing environment variables to a command in docker-compose

I would like to pass environment variables to this command inside my docker-compose.yml. How can I do that? I suspect the problem is in the quoting. How can I solve it?
web:
  build: .
  env_file:
    - .env.dev
  command: >
    sh -c "sleep 1 && python manage.py collectstatic --no-input --clear && \
    python manage.py migrate --noinput && \
    python manage.py createsuperuserwithpassword --username \
    ${DJANGO_SUPERUSER_USERNAME} --password admin --email admin@example.org \
    --preserve && gunicorn job_filter_app.wsgi:application --bind 0.0.0.0:8000"

This works:
build: .
env_file:
  - .env.dev
entrypoint: ["/bin/sh","-c"]
command:
  - |
    python manage.py collectstatic --no-input --clear
    python manage.py migrate --noinput
    python manage.py createsuperuserwithpassword --username
    $${DJANGO_SUPERUSER_USERNAME} --password
    $${DJANGO_SUPERUSER_PASSWORD} --email $${DJANGO_SUPERUSER_EMAIL} ...etc
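
Why this works: Compose performs ${VAR} interpolation itself, using the shell environment (or a top-level .env file) on the machine running docker-compose, not the service's env_file, whose variables only exist inside the container. An unset variable is replaced with an empty string and a warning. Writing $$ escapes the dollar sign so a literal $ reaches the container's shell, which then expands the variable from .env.dev at runtime. As a minimal sketch in the original one-line style (assuming DJANGO_SUPERUSER_USERNAME is defined in .env.dev):

web:
  build: .
  env_file:
    - .env.dev
  command: sh -c "python manage.py createsuperuserwithpassword --username $${DJANGO_SUPERUSER_USERNAME} --password admin --email admin@example.org --preserve"

Running docker-compose config shows the file after interpolation, which is a quick way to check what Compose will actually pass to the container.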

Related

How can I start Tortoise-ORM in a celery docker container?

I have an application that uses a PostgreSQL database and Celery. Each component runs in a different container. In the Celery container I am already connected to the Postgres database, but I don't know how to configure tortoise-orm to start in the Celery container, since I have a task in which I want to interact with the database using Tortoise.
This is my docker compose:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_web
    # '/start' is the shell script used to run the service
    command: /start
    # this volume is used to map the files and folders on the host to the container
    # so if we change code on the host, code in the docker container will also be changed
    volumes:
      - .:/app
    ports:
      - 8010:8000
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
  db:
    image: postgres:14-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=fastapi_celery
      - POSTGRES_USER=fastapi_celery
      - POSTGRES_PASSWORD=fastapi_celery
  redis:
    image: redis:7-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
This is my Dockerfile:
FROM python:3.10-slim-buster
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update \
    # dependencies for building Python packages
    && apt-get install -y build-essential \
    # psycopg2 dependencies
    && apt-get install -y libpq-dev \
    # Additional dependencies
    && apt-get install -y telnet netcat \
    # cleaning up unused files
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
    && rm -rf /var/lib/apt/lists/*
# Requirements are installed here to ensure they will be cached.
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./compose/local/fastapi/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/fastapi/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/local/fastapi/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/fastapi/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/fastapi/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
The task:
@shared_task()
def task_send_welcome_email(user_pk):
    from project.users.models import User
    user = User.filter(id=user_pk).first()
    logger.info(f'send email to {user.email} {user.id}')

Celery integration in CircleCI

I want to know how Celery can be set up in CircleCI.
Below is my initial configuration of config.yml file.
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.7
      - image: circleci/postgres:9.6.2-alpine
        environment:
          POSTGRES_USER: $POSTGRES_USER
          POSTGRES_PASSWORD: $POSTGRES_PASSWORD
          POSTGRES_DB: $POSTGRES_DB
    steps:
      - checkout
      - run:
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt
      - run:
          name: Running tests
          command: |
            . venv/bin/activate
            cd app/
            python3 manage.py test
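
A Celery worker is just another process, so in CircleCI it can be started as a background step before the tests run, as long as a broker container is available. A minimal sketch under those assumptions (Redis as broker, celery installed via requirements.txt; the name after -A is a placeholder for your project's Celery app):

docker:
  - image: circleci/python:3.7
  - image: circleci/postgres:9.6.2-alpine
  - image: redis:5-alpine
steps:
  - run:
      name: Start Celery worker
      background: true
      command: |
        . venv/bin/activate
        cd app/
        celery -A app worker -l info

A step marked background: true keeps running while the later steps (such as the test run) execute.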

pg_dump from Celery container differs from pg_dump in other containers

I can't understand where the pg_dump version is coming from. I forced postgresql-client-13 to be installed everywhere.
/usr/bin/pg_dump --version
Celery Beat and Celery:
pg_dump (PostgreSQL) 11.12 (Debian 11.12-0+deb10u1)
Other containers (web & postgres) and the local machine:
pg_dump (PostgreSQL) 13.4 (Debian 13.4-1.pgdg100+1)
Here is my Dockerfile:
FROM python:3
# the testdriven tutorial uses a user other than root, but that seemed to fail here
# create directory for the app user
RUN mkdir -p /home/app
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
ENV PYTHONUNBUFFERED 1
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN apt-get update -qq && apt-get install -y \
    postgresql-client-13 \
    binutils \
    libproj-dev \
    gdal-bin
RUN apt-get update \
    && apt-get install -yyq netcat
# install psycopg2 dependencies
# install dependencies
RUN pip3 install --no-cache-dir --upgrade pip && pip install --no-cache-dir -U pip wheel setuptools
COPY ./requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# copy entrypoint-prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
and here is my docker-compose:
version: '3.7'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
    expose:
      - 8000
    env_file:
      - ./app/.env.prod
    depends_on:
      - db
  db:
    image: postgis/postgis:13-master
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./app/.env.prod.db
  redis:
    image: redis:6
  celery:
    build: ./app
    command: celery -A core worker -l info
    volumes:
      - ./app/:/usr/src/app/
    env_file:
      - ./app/.env.prod
    depends_on:
      - redis
  celery-beat:
    build: ./app
    command: celery -A core beat -l info
    volumes:
      - ./app/:/usr/src/app/
    env_file:
      - ./app/.env.prod
    depends_on:
      - redis
  nginx-proxy:
    image: nginxproxy/nginx-proxy:latest
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - ./app/.env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy
volumes:
  postgres_data:
  static_volume:
  certs:
  html:
  vhost:
I really need Celery to have the same pg_dump version.
Can you guys provide some input?
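
One detail in the compose file that can explain this: web is built with dockerfile: Dockerfile.prod, but celery and celery-beat use build: ./app, which falls back to ./app/Dockerfile. If that default Dockerfile starts from an image whose distribution ships the version 11 client (Debian buster does) and lacks the pgdg repository setup shown above, those two containers get pg_dump 11. A hedged sketch of the fix, assuming the Dockerfile shown above is ./app/Dockerfile.prod, is to point every service at the same build:

  celery:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: celery -A core worker -l info
  celery-beat:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: celery -A core beat -l info

Rebuilding with docker-compose build --no-cache celery celery-beat avoids reusing the old image layers.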

Docker run script: ERROR: Unable to lock database: Permission denied

Hello, I'm trying to run a script that starts my yarn dev only after Postgres is up:
until psql -c '\l'; do
  echo >&2 "$(date +%Y%m%dt%H%M%S) Postgres is unavailable - sleeping"
  sleep 1
done
echo >&2 "$(date +%Y%m%dt%H%M%S) Postgres is up - executing command"
exec "$@"
Dockerfile:
#building code
FROM node:lts-alpine
RUN mkdir -p /home/node/api && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
RUN apk add --no-cache openssl
COPY wait-pg.sh ./
RUN chmod +x /wait-pg.sh
ENTRYPOINT ["/wait-pg.sh"]
EXPOSE 4000
CMD ["yarn", "dev"]
docker compose:
version: '3.7'
services:
  db-pg:
    image: postgres:12
    container_name: db-pg
    ports:
      - '${DB_PORT}:5432'
    environment:
      ALLOW_EMPTY_PASSWORD: 'no'
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASS}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - ci-postgres-data:/data
  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - db-pg
    command: ['./wait-pg.sh', 'yarn', 'dev']
    logging:
      driver: 'json-file'
      options:
        max-size: '10m'
        max-file: '5'
volumes:
  ci-postgres-data:
and I get this error:
---> Running in c5add5098b70
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
ERROR: Service 'ci-api' failed to build: The command '/bin/sh -c apk add --no-cache openssl' returned a non-zero code: 99
You are getting the error because the node user you are trying to use does not have permissions to run the command.
Move the user definition to after the commands, something like this:
#building code
FROM node:lts-alpine
RUN mkdir -p /home/node/api && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
RUN yarn
COPY --chown=node:node . .
RUN apk add --no-cache openssl
COPY wait-pg.sh .
RUN chmod +x ./wait-pg.sh
USER node
ENTRYPOINT ["./wait-pg.sh"]
EXPOSE 4000
CMD ["yarn", "dev"]

Looking for a docker image to containerize my Spark Scala streaming jobs

I am new to Spark (3.0.0-preview) and Scala (SBT). I have written a Spark streaming job that I can run successfully from my IDE on my local machine.
Now I am looking for a way to dockerize the code so that I can run it with the docker-compose setup that builds the Spark cluster.
My docker-compose:
version: "3.3"
services:
spark-master:
image: rd/spark:latest
container_name: spark-master
hostname: spark-master
ports:
- "8080:8080"
- "7077:7077"
networks:
- spark-network
environment:
- "SPARK_LOCAL_IP=spark-master"
- "SPARK_MASTER_PORT=7077"
- "SPARK_MASTER_WEBUI_PORT=8080"
command: "/start-master.sh"
spark-worker:
image: rd/spark:latest
depends_on:
- spark-master
ports:
- 8080
networks:
- spark-network
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "SPARK_WORKER_WEBUI_PORT=8080"
command: "/start-worker.sh"
networks:
spark-network:
driver: bridge
ipam:
driver: default
Dockerfile:
FROM openjdk:8-alpine
RUN apk --update add wget tar bash
RUN wget http://apache.mirror.anlx.net/spark/spark-3.0.0-preview2/spark-3.0.0-preview2-bin-hadoop2.7.tgz
RUN tar -xzf spark-3.0.0-preview2-bin-hadoop2.7.tgz && \
    mv spark-3.0.0-preview2-bin-hadoop2.7 /spark && \
    rm spark-3.0.0-preview2-bin-hadoop2.7.tgz
COPY start-master.sh /start-master.sh
COPY start-worker.sh /start-worker.sh
This seems like a simple request, but I am having a hard time finding good documentation on it.
Got it working with the following:
project-structure:
project
src
build.sbt
Dockerfile
Dockerfile-app
docker-compose.yml
docker-compose.yml
version: "3.3"
services:
spark-scala-env:
image: app/spark-scala-env:latest
build:
context: .
dockerfile: Dockerfile
app-spark-scala:
image: app/app-spark-scala:latest
build:
context: .
dockerfile: Dockerfile-app
spark-master:
image: app/app-spark-scala:latest
container_name: spark-master
hostname: localhost
ports:
- "8080:8080"
- "7077:7077"
networks:
- spark-network
environment:
- "SPARK_LOCAL_IP=spark-master"
- "SPARK_MASTER_PORT=7077"
- "SPARK_MASTER_WEBUI_PORT=8080"
command: ["sh", "-c", "/spark/bin/spark-class org.apache.spark.deploy.master.Master --ip $${SPARK_LOCAL_IP} --port $${SPARK_MASTER_PORT} --webui-port $${SPARK_MASTER_WEBUI_PORT}"]
spark-worker:
image: app/app-spark-scala:latest
hostname: localhost
depends_on:
- spark-master
ports:
- 8080
networks:
- spark-network
#network_mode: host
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "SPARK_WORKER_WEBUI_PORT=8080"
- "SPARK_WORKER_CORES=2"
command: ["sh", "-c", "/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port $${SPARK_WORKER_WEBUI_PORT} $${SPARK_MASTER}"]
app-submit-job:
image: app/app-spark-scala:latest
ports:
- "4040:4040"
environment:
- "SPARK_APPLICATION_MAIN_CLASS=com.app.spark.TestAssembly"
- "SPARK_MASTER=spark://spark-master:7077"
- "APP_PACKAGES=org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.0-preview2"
- "APP_JAR_LOC=/app/target/scala-2.12/app_spark_scala-assembly-0.2.jar"
hostname: localhost
networks:
- spark-network
volumes:
- ./appdata:/appdata
command: ["sh", "-c", "/spark/bin/spark-submit --packages $${APP_PACKAGES} --class $${SPARK_APPLICATION_MAIN_CLASS} --master $${SPARK_MASTER} $${APP_JAR_LOC}"]
networks:
spark-network:
driver: bridge
ipam:
driver: default
Dockerfile
FROM openjdk:8-alpine
ARG SPARK_VERSION
ARG HADOOP_VERSION
ARG SCALA_VERSION
ARG SBT_VERSION
ENV SPARK_VERSION=${SPARK_VERSION:-3.0.0-preview2}
ENV HADOOP_VERSION=${HADOOP_VERSION:-2.7}
ENV SCALA_VERSION ${SCALA_VERSION:-2.12.8}
ENV SBT_VERSION ${SBT_VERSION:-1.3.4}
RUN apk --update add wget tar bash
RUN \
    echo "$SPARK_VERSION $HADOOP_VERSION" && \
    echo http://apache.mirror.anlx.net/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz && \
    wget http://apache.mirror.anlx.net/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz
RUN tar -xzf spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz && \
    mv spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION} /spark && \
    rm spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz
RUN \
    echo "$SCALA_VERSION $SBT_VERSION" && \
    mkdir -p /usr/lib/jvm/java-1.8-openjdk/jre && \
    touch /usr/lib/jvm/java-1.8-openjdk/jre/release && \
    apk add -U --no-cache bash curl && \
    curl -fsL http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz | tar xfz - -C /usr/local && \
    ln -s /usr/local/scala-$SCALA_VERSION/bin/* /usr/local/bin/ && \
    scala -version && \
    scalac -version
RUN \
    curl -fsL https://github.com/sbt/sbt/releases/download/v$SBT_VERSION/sbt-$SBT_VERSION.tgz | tar xfz - -C /usr/local && \
    $(mv /usr/local/sbt-launcher-packaging-$SBT_VERSION /usr/local/sbt || true) \
    ln -s /usr/local/sbt/bin/* /usr/local/bin/ && \
    sbt sbt-version || sbt sbtVersion || true
Dockerfile-app
FROM app/spark-scala-env:latest
WORKDIR /app
# Pre-install base libraries
ADD build.sbt /app/
ADD project/plugins.sbt /app/project/
ADD src/. /app/src/
RUN sbt update
RUN sbt clean assembly
Commands:
Build the docker image with the Spark and Scala env, i.e. docker-compose build spark-scala-env
Build the image with the app jar, i.e. docker-compose build app-spark-scala
Bring up the Spark env containers, i.e. docker-compose up -d --scale spark-worker=2 spark-worker
Submit the job via docker-compose up -d app-submit-job