How can I start Tortoise-ORM in a Celery Docker container?

I have an application that uses PostgreSQL and Celery. Each component runs in its own container. From the Celery container I can already reach the Postgres database, but I don't know how to configure Tortoise-ORM to start in the Celery container, since I have a task in which I want to interact with the database using Tortoise.
This is my docker compose:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_web
    # '/start' is the shell script used to run the service
    command: /start
    # this volume is used to map the files and folders on the host to the container,
    # so if we change code on the host, code in the docker container will also be changed
    volumes:
      - .:/app
    ports:
      - 8010:8000
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
  db:
    image: postgres:14-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=fastapi_celery
      - POSTGRES_USER=fastapi_celery
      - POSTGRES_PASSWORD=fastapi_celery
  redis:
    image: redis:7-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
This is my Dockerfile:
FROM python:3.10-slim-buster
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update \
# dependencies for building Python packages
&& apt-get install -y build-essential \
# psycopg2 dependencies
&& apt-get install -y libpq-dev \
# Additional dependencies
&& apt-get install -y telnet netcat \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# Requirements are installed here to ensure they will be cached.
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./compose/local/fastapi/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/fastapi/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/local/fastapi/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/fastapi/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/fastapi/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
The task:
@shared_task()
def task_send_welcome_email(user_pk):
    from project.users.models import User
    user = User.filter(id=user_pk).first()
    logger.info(f'send email to {user.email} {user.id}')
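No answer is included above, so here is a minimal sketch of one common approach: initialise Tortoise inside the task and let tortoise.run_async drive the coroutine and close the connections when it finishes. The db_url (built from the POSTGRES_* values and the db service name in the Compose file above) and the models module path are assumptions:

import logging

from celery import shared_task
from tortoise import Tortoise, run_async

logger = logging.getLogger(__name__)

# assumed from the docker-compose environment above
DB_URL = "postgres://fastapi_celery:fastapi_celery@db:5432/fastapi_celery"

async def _send_welcome_email(user_pk):
    # register the models module so Tortoise knows about User
    await Tortoise.init(db_url=DB_URL, modules={"models": ["project.users.models"]})
    from project.users.models import User
    user = await User.filter(id=user_pk).first()
    logger.info(f'send email to {user.email} {user.id}')

@shared_task()
def task_send_welcome_email(user_pk):
    # Celery tasks run synchronously; run_async executes the coroutine
    # and closes the database connections afterwards
    run_async(_send_welcome_email(user_pk))

Initialising per task is simple but pays a connect/disconnect on every call; once this works, hooking Tortoise.init into Celery's worker_process_init signal is the usual optimisation.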

Related

pg_dump from Celery container differs from pg_dump in other containers

I can't understand where the pg_dump version is coming from.
I forced postgresql-client-13 to be installed everywhere.
/usr/bin/pg_dump --version
Celery Beat and Celery:
pg_dump (PostgreSQL) 11.12 (Debian 11.12-0+deb10u1)
Other containers (web & postgres) and my local machine:
pg_dump (PostgreSQL) 13.4 (Debian 13.4-1.pgdg100+1)
Here is my Dockerfile
FROM python:3
# the testdriven tutorial uses a non-root user, but that seemed to fail here
# create directory for the app user
RUN mkdir -p /home/app
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
ENV PYTHONUNBUFFERED 1
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN apt-get update -qq && apt-get install -y \
postgresql-client-13 \
binutils \
libproj-dev \
gdal-bin
RUN apt-get update \
&& apt-get install -yyq netcat
# upgrade Python packaging tools (pip, wheel, setuptools)
RUN pip3 install --no-cache-dir --upgrade pip wheel setuptools
COPY ./requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# copy entrypoint-prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
and here is my docker-compose
version: '3.7'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
    expose:
      - 8000
    env_file:
      - ./app/.env.prod
    depends_on:
      - db
  db:
    image: postgis/postgis:13-master
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./app/.env.prod.db
  redis:
    image: redis:6
  celery:
    build: ./app
    command: celery -A core worker -l info
    volumes:
      - ./app/:/usr/src/app/
    env_file:
      - ./app/.env.prod
    depends_on:
      - redis
  celery-beat:
    build: ./app
    command: celery -A core beat -l info
    volumes:
      - ./app/:/usr/src/app/
    env_file:
      - ./app/.env.prod
    depends_on:
      - redis
  nginx-proxy:
    image: nginxproxy/nginx-proxy:latest
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - ./app/.env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy
volumes:
  postgres_data:
  static_volume:
  certs:
  html:
  vhost:
I really need Celery to have the same pg_dump version.
Can you provide some input?
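One detail in the Compose file above stands out (an observation, not a verified fix): web builds with dockerfile: Dockerfile.prod, i.e. the Dockerfile shown above that pins postgresql-client-13, while celery and celery-beat use build: ./app, which picks up the default ./app/Dockerfile. If that default Dockerfile is based on a Debian buster image without the pgdg repository, it gets postgresql-client-11, which matches the 11.12 (deb10u1) version reported. Pointing the Celery services at the same Dockerfile should align the versions:

# hypothetical change to docker-compose: build celery from the same Dockerfile as web
celery:
  build:
    context: ./app
    dockerfile: Dockerfile.prod
  command: celery -A core worker -l info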

Installing and using pg_cron extension on Postgres running inside of Docker container

I tried installing pg_cron on Postgres running inside a Docker container, but I am getting this error: could not access file "pg_cron": No such file or directory. Any ideas on how to resolve this?
Based on https://stackoverflow.com/a/51797554, I tried the following:
docker-compose.yml
version: '3.7'
services:
  pg:
    container_name: pg-container
    image: postgres:11.5
    environment:
      POSTGRES_DB: "pgdb"
      POSTGRES_USER: "pguser"
      POSTGRES_PASSWORD: "pgpass"
    volumes:
      - ./:/docker-entrypoint-initdb.d
      - pgstorage
    ports:
      - "5432:5432"
volumes:
  pgstorage:
002-setup.sh
#!/bin/sh
# Remove last line "shared_preload_libraries='citus'"
sed -i '$ d' ${PGDATA}/postgresql.conf
cat <<EOT >> ${PGDATA}/postgresql.conf
shared_preload_libraries='pg_cron'
cron.database_name='${POSTGRES_DB:-postgres}'
EOT
# Required to load pg_cron
pg_ctl restart
003-main.sql
CREATE EXTENSION pg_cron;
From what I can see, you are not installing pg_cron anywhere. Since it is not packaged with the default Postgres Docker image, you will have to take care of that yourself,
for example by extending the image and using a build entry in your docker-compose.yml.
# Dockerfile relative to docker-compose.yml
FROM postgres:11.5
RUN apt-get update && apt-get -y install git build-essential postgresql-server-dev-11
RUN git clone https://github.com/citusdata/pg_cron.git
RUN cd pg_cron && make && make install
version: '3.7'
services:
  pg:
    container_name: pg-container
    build: .
    environment:
      POSTGRES_DB: "pgdb"
      POSTGRES_USER: "pguser"
      POSTGRES_PASSWORD: "pgpass"
    volumes:
      - ./:/docker-entrypoint-initdb.d
    ports:
      - "5432:5432"
This worked for me - it probably needs some more optimization.
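Once the extension is created, a quick smoke test is to schedule a job and inspect pg_cron's job table. The job name and schedule below are just examples; note that pg_cron builds older than 1.3 only support the two-argument cron.schedule(schedule, command) form:

-- schedule a nightly VACUUM at 03:00
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM');
-- list the scheduled jobs
SELECT * FROM cron.job;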
The proposed solution didn't work with a newly created container for me. So, I did it like this:
Docker file
FROM postgres:11.5
RUN apt-get update && apt-get -y install git build-essential postgresql-server-dev-11
RUN git clone https://github.com/citusdata/pg_cron.git
RUN cd pg_cron && make && make install
RUN cd / && \
rm -rf /pg_cron && \
apt-get remove -y git build-essential postgresql-server-dev-11 && \
apt-get autoremove --purge -y && \
apt-get clean && \
apt-get purge
COPY init-db /docker-entrypoint-initdb.d
init-db/pg-cron.sh
#!/usr/bin/env bash
# use same db as the one from env
dbname="$POSTGRES_DB"
# create custom config
customconf=/var/lib/postgresql/data/custom-conf.conf
echo "" > $customconf
echo "shared_preload_libraries = 'pg_cron'" >> $customconf
echo "cron.database_name = '$dbname'" >> $customconf
chown postgres $customconf
chgrp postgres $customconf
# include custom config from main config
conf=/var/lib/postgresql/data/postgresql.conf
found=$(grep "include = '$customconf'" $conf)
if [ -z "$found" ]; then
echo "include = '$customconf'" >> $conf
fi
Also, you can place other init files into the init-db directory.
Docker compose file
version: '3.7'
services:
  postgres:
    container_name: your-container
    build: .
    environment:
      POSTGRES_DB: "your_db"
      POSTGRES_USER: "your_user"
      POSTGRES_PASSWORD: "your_user"
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
volumes:
  pgdata:
    driver: local
For those who are looking for a ready image, please try the following:
docker pull ramazanpolat/postgres_cron:11

Mongodb error on docker

I have a Dockerfile and a docker-compose file.
When I try to run docker-compose, my Node app throws an error connecting to MongoDB at 127.0.0.1:27017.
Does someone know how to fix it?
Edit 1
dockerfile
FROM node
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 \
&& echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list \
&& apt-get update \
&& apt-get install -y mongodb-org
RUN useradd --user-group --create-home --shell /bin/false app &&\
npm install --global npm
ENV HOME=/home/app
COPY package.json $HOME/library/
RUN chown -R app:app $HOME/*
USER root
WORKDIR $HOME/library
RUN npm install --silent --progress=false
COPY . $HOME/library
RUN chown -R app:app $HOME/*
RUN npm install --build-from-source bcrypt
CMD ["npm", "start"]
docker-compose
version: '2'
services:
  db:
    image: mongo
    command: "mongod"
    ports:
      - "27018:27018"
  library:
    build:
      context: .
      dockerfile: Dockerfile
    command: node_modules/.bin/nodemon --exec npm start
    environment:
      NODE_ENV: development
    ports:
      - 3000:3000
    volumes:
      - .:/home/app/library
      - /home/app/library/node_modules
    links:
      - db
Edit 2
(The output of docker ps -a and the logs of the two containers were attached as screenshots, not reproduced here.)
Edit 3
mongoose.connect(config.database);
That's how I am connecting to the database in server.js.
Your Docker Compose file maps the MongoDB port as 27018=>27018, but the error from your Node application says it is trying to connect to 127.0.0.1:27017.
Because of how Docker's networking works (which is all kinds of awesome), you should change the port to 27017, not 27018, in your Compose file and update the MongoDB connection string in your Node app to db:27017.
You shouldn't use 127.0.0.1 within Docker: inside a container it refers to the container itself, so it only works if whatever you are connecting to runs in the same container.
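Concretely (a sketch; the database name library is a placeholder), the db service would expose mongod's actual port:

db:
  image: mongo
  command: "mongod"
  ports:
    - "27017:27017"

and server.js would connect via the service name:

// connect through the Compose service name instead of 127.0.0.1
mongoose.connect('mongodb://db:27017/library');

Strictly speaking, the ports mapping only matters for reaching MongoDB from the host; container-to-container traffic over the link works even without it.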

Symfony app in Docker doesn't work

I created an app in Symfony with MongoDB and packaged it in a Docker image.
The image for MongoDB works fine, with the message: 2017-04-19T12:47:33.936+0000 I NETWORK [initandlisten] waiting for connections on port 27017
But the image for the app doesn't work; when I run docker run docker_web_server:latest I receive the message:
stdin: is not a tty
hello
I use this docker-compose file:
web_server:
  build: web_server/
  ports:
    - 5000:5000
  links:
    - mongo
  tty: true
  environment:
    SYMFONY__MONGO_ADDRESS: mongo
    SYMFONY__MONGO_PORT: 27017
mongo:
  image: mongo:3.0
  container_name: mongo
  command: mongod --smallfiles
  expose:
    - 27017
And the Dockerfile for the app is:
FROM ubuntu:14.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
git \
curl \
php5-cli \
php5-json \
php5-intl
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
ADD entrypoint.sh /entrypoint.sh
ADD ./code /var/www
WORKDIR /var/www
#RUN chmod +x /entrypoint.sh
ENTRYPOINT [ "/bin/bash", "/entrypoint.sh" ]
entrypoint.sh
#!/bin/bash
rm -rf /var/www/app/cache/*
exec php -S 0.0.0.0:8000 # This will run a web-server on port 8000
What is the problem?
Am I starting the server from the Docker image incorrectly?
I expected to see the message:
[OK] Server running on http://127.0.0.1:8000
You should remove CMD ['echo', 'hello'] from your Dockerfile, as this is being passed as a parameter to your ENTRYPOINT.
You should also add tty: true to your service definition, web_server.
I'm hoping entrypoint.sh runs php -S 0.0.0.0:8000 at end. Please post that for further advice. If you do use php -S inside of it, prefix it with exec to get it to take over as the main process.
Edit since new information added:
I'd modify the entrypoint.sh to:
#!/bin/bash
rm -rf /var/www/app/cache/*
exec php -S 0.0.0.0:8000 # run the web server on port 8000; exec lets it take over as the main process
I'd get rid of the symfony_environment.sh and instead add the following to your web_server service:
environment:
SYMFONY__MONGO_ADDRESS: mongo
SYMFONY__MONGO_PORT: 27017
As a side note, I contribute to a project called boilr that generates boilerplates like this. I've even created a template for php/docker-compose projects; it would be worth checking out. I always keep it up to date with best practices.
https://github.com/rawkode/boilr-docker-compose-php

PostgreSQL PGDATA from host in Docker-System

I want to run a webapp with docker-compose. I have a CentOS 7 host running PostgreSQL and the Docker engine. My docker-compose setup includes a PostgreSQL image and should run with the PGDATA from the host system. But every time I run docker-compose I get the error:
initdb: directory "/var/lib/docker-postgresql" exists but is not empty
The docker-compose part for the postgresql-database looks like:
db:
  build: ./postgres/
  container_name: ps01
  volumes:
    - ./postgres:/tmp
    - /var/lib/pgsql/9.4:/var/lib/docker-postgresql
  expose:
    - "5432"
  environment:
    PGDATA: /var/lib/docker-postgresql
I mount the host's /var/lib/pgsql/9.4 to /var/lib/docker-postgresql inside the Postgres container and point the PGDATA environment variable at that path.
The Dockerfile in ./postgres/ looks like:
FROM postgres:latest
ENV POSTGIS_MAJOR 2.3
ENV POSTGIS_VERSION 2.3.1+dfsg-1.pgdg80+1
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR=$POSTGIS_VERSION \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR-scripts=$POSTGIS_VERSION \
postgis=$POSTGIS_VERSION \
&& rm -rf /var/lib/apt/lists/*
What should I do to share my Postgres-Data from the host?
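No answer is included for this one, but two hedged suggestions follow from the error text. First, initdb only runs when creating a brand-new cluster, so it refuses a non-empty directory; second, even past that, a cluster initialised by PostgreSQL 9.4 cannot be opened by postgres:latest, because the on-disk format changes between major versions. Assuming the goal is to reuse the existing 9.4 data, the image should match that major version (FROM postgres:9.4 instead of FROM postgres:latest), and the mount should probably point at the actual data directory, which on CentOS 7 is usually the data/ subdirectory:

db:
  build: ./postgres/
  container_name: ps01
  volumes:
    - /var/lib/pgsql/9.4/data:/var/lib/docker-postgresql   # mount the data/ subdirectory (assumption)
  expose:
    - "5432"
  environment:
    PGDATA: /var/lib/docker-postgresql

Alternatively, start from an empty named volume and load the data with pg_dump/pg_restore.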