Celery integration in CircleCI - celery

I want to know how Celery is integrated into a CircleCI build.
Below is the initial configuration in my config.yml file:
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.7
      - image: circleci/postgres:9.6.2-alpine
        environment:
          POSTGRES_USER: $POSTGRES_USER
          POSTGRES_PASSWORD: $POSTGRES_PASSWORD
          POSTGRES_DB: $POSTGRES_DB
    steps:
      - checkout
      - run:
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt
      - run:
          name: Running tests
          command: |
            . venv/bin/activate
            cd app/
            python3 manage.py test
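CircleCI has no special Celery support; a worker is just another process started during the build. A minimal sketch of one way to wire it up, assuming a Redis broker and a Celery app in a module named `app` (both of those names are assumptions, not taken from the config above):

```yaml
# Hedged sketch: add a broker container next to the Python image and
# start a worker as a background step before the tests run.
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.7
      - image: circleci/redis:5  # broker, reachable on localhost:6379
    steps:
      - checkout
      - run:
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt
      - run:
          name: Start Celery worker
          command: |
            . venv/bin/activate
            celery -A app worker -l info   # "app" is a placeholder module name
          background: true   # keep the worker running while later steps execute
```

With `background: true` the worker stays up while subsequent steps (such as the test run) execute against it.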

Related

What is a proper way of installing dependencies in Airflow?

I am trying to install git inside the Airflow scheduler with apt-get install -y git (see docker-compose.yml below) and I get sudo: no tty present and no askpass program specified.
Is installing this package in "command" even a good approach here?
docker-compose.yml
services:
  postgres:
    ...
  init:
    ...
  webserver:
    ...
  scheduler:
    image: *airflow_image
    restart: always
    depends_on:
      - postgres
    volumes:
      - ./dags:/opt/airflow/dags
    entrypoint: ["/bin/sh"]
    command: ["-c",
              "apt-get install -y git \
               && pip install -r /opt/airflow/tmp/requirements.txt \
               && airflow scheduler"]

volumes:
  logs:
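Installing packages in `command` runs on every container start and requires root (the `sudo: no tty present` error suggests the entrypoint runs as a non-root user), and `apt-get install` without a prior `apt-get update` often fails anyway. A common alternative is to bake the dependencies into the image once. A sketch, assuming an extendable Airflow base image (the tag and paths here are placeholders; substitute whatever `*airflow_image` points at):

```dockerfile
# Hypothetical sketch: extend the Airflow base image instead of
# installing packages at container startup.
FROM apache/airflow:2.3.0

USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*
USER airflow

COPY requirements.txt /opt/airflow/tmp/requirements.txt
RUN pip install --no-cache-dir -r /opt/airflow/tmp/requirements.txt
```

The compose service can then drop the custom `entrypoint`/`command` entirely and just run `airflow scheduler`.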

pg_dump from Celery container differs from pg_dump in other containers

I can't understand where the pg_dump version is coming from.
I forced postgresql-client-13 to be installed everywhere.
/usr/bin/pg_dump --version
Celery Beat and Celery:
pg_dump (PostgreSQL) 11.12 (Debian 11.12-0+deb10u1)
Other containers (web, postgres, and local machine):
pg_dump (PostgreSQL) 13.4 (Debian 13.4-1.pgdg100+1)
Here is my Dockerfile:
FROM python:3
# the testdriven tutorial uses a non-root user, but that seemed to fail here

# create directories for the app user
RUN mkdir -p /home/app
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME

ENV PYTHONUNBUFFERED=1

RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN apt-get update -qq && apt-get install -y \
    postgresql-client-13 \
    binutils \
    libproj-dev \
    gdal-bin
RUN apt-get update \
    && apt-get install -yq netcat

# install dependencies
RUN pip3 install --no-cache-dir --upgrade pip && pip install --no-cache-dir -U pip wheel setuptools
COPY ./requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# copy entrypoint.prod.sh
COPY ./entrypoint.prod.sh $APP_HOME

# copy project
COPY . $APP_HOME

# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
and here is my docker-compose.yml:
version: '3.7'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
    expose:
      - 8000
    env_file:
      - ./app/.env.prod
    depends_on:
      - db
  db:
    image: postgis/postgis:13-master
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./app/.env.prod.db
  redis:
    image: redis:6
  celery:
    build: ./app
    command: celery -A core worker -l info
    volumes:
      - ./app/:/usr/src/app/
    env_file:
      - ./app/.env.prod
    depends_on:
      - redis
  celery-beat:
    build: ./app
    command: celery -A core beat -l info
    volumes:
      - ./app/:/usr/src/app/
    env_file:
      - ./app/.env.prod
    depends_on:
      - redis
  nginx-proxy:
    image: nginxproxy/nginx-proxy:latest
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - ./app/.env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy

volumes:
  postgres_data:
  static_volume:
  certs:
  html:
  vhost:
I really need Celery to have the same pg_dump version.
Can anyone provide some input?
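One likely cause, visible in the compose file itself: `web` builds with `context: ./app` and `dockerfile: Dockerfile.prod`, but `celery` and `celery-beat` use the shorthand `build: ./app`, which defaults to a file named `Dockerfile`. If that other Dockerfile never adds the PGDG apt repository, those images keep Debian buster's stock pg_dump 11. A sketch of the fix, pointing all three services at the same Dockerfile (this is a diagnosis from the pasted config, not a confirmed root cause):

```yaml
# Hedged sketch: build the Celery services from the same Dockerfile as web,
# so every image installs postgresql-client-13 from the PGDG repo.
  celery:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: celery -A core worker -l info
  celery-beat:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: celery -A core beat -l info
```

After changing this, rebuild with `docker-compose build --no-cache celery celery-beat` so the stale images aren't reused.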

curl request to docker-compose port hangs in travis-ci

Our Travis builds have started failing and I can't figure out why. Our app runs in docker-compose, and then we run Cypress against it. This used to work perfectly. Now the host port for the web server is simply unresponsive. I've removed Cypress and am just trying to run curl http://localhost:3001, and it just hangs. Here's the travis.yml. Any suggestions would be highly appreciated. I have spent several hours fiddling with the Docker versions, distros, localhost vs. 127.0.0.1, etc., to no avail. All of this works fine locally on my workstation.
language: node_js
node_js:
  - "12.19.0"
env:
  - DOCKER_COMPOSE_VERSION=1.25.4
services:
  - docker
sudo: required
# Supposedly this is needed for Cypress to work in Ubuntu 16
# https://github.com/cypress-io/cypress-example-kitchensink/blob/master/basic/.travis.yml
addons:
  apt:
    packages:
      - libgconf-2-4
before_install:
  # upgrade docker compose https://docs.travis-ci.com/user/docker/#using-docker-compose
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
  # upgrade docker itself https://docs.travis-ci.com/user/docker/#installing-a-newer-docker-version
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - sudo apt-get update
  - sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
  # Put the .env file in place
  - cp .env.template .env
install:
  # Install node modules (for jest and wait-on) and start up the docker containers
  - cd next
  - npm ci
  - cd ..
  - cd e2e
  - npm ci
  - cd ..
script:
  - docker --version
  - docker-compose --version
  - docker-compose up --build -d
  # Run unit tests
  # - cd next
  # - npm run test
  # Run e2e tests
  # - cd ../e2e
  # - npx cypress verify
  # - CYPRESS_FAIL_FAST=true npx wait-on http://localhost:3001 --timeout 100000 && npx cypress run --config video=false,pageLoadTimeout=100000,screenshotOnRunFailure=false
  - sleep 30
  - curl http://127.0.0.1:3001 --max-time 30
  - docker-compose logs db
  - docker-compose logs express
  - docker-compose logs next
post_script:
  - docker-compose down
The logs look like this:
The command "docker-compose up --build -d" exited with 0.
30.01s$ sleep 30
The command "sleep 30" exited with 0.
93.02s$ curl http://127.0.0.1:3001 --max-time 30
curl: (28) Operation timed out after 30001 milliseconds with 0 bytes received
The command "curl http://127.0.0.1:3001 --max-time 30" exited with 28.
The docker-compose logs show nothing suspicious. It's as if the network weren't set up correctly and Docker were unaware of any requests.
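Independent of the root cause, a fixed `sleep 30` can mask slow startups; polling the port until it actually accepts connections makes the failure mode clearer (timeout vs. refused). A small helper one could drop into the repo and call before `curl` (the name `wait_for_port` and its use here are my own, not from the build):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll until host:port accepts a TCP connection, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError while the service is still down
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(0.5)
    raise TimeoutError(f"{host}:{port} not reachable within {timeout}s")
```

In the Travis `script` section this could replace `sleep 30` with something like `python3 -c "from wait_port import wait_for_port; wait_for_port('127.0.0.1', 3001)"`, so a dead container fails fast with a timeout error instead of a hung curl.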
Here is the docker-compose.yml in case it's useful:
version: '3.7'
services:
  db:
    image: mg-postgres
    build: ./postgres
    ports:
      - '5433:5432'
    environment:
      POSTGRES_HOST_AUTH_METHOD: 'trust'
  adminer:
    image: adminer
    depends_on:
      - db
    ports:
      - '8080:8080'
  express:
    image: mg-server
    build: ./express
    restart: always
    depends_on:
      - db
    env_file:
      - .env
    environment:
      DEBUG: 'express:*'
    volumes:
      - type: bind
        source: ./express
        target: /app
      - /app/node_modules
    ports:
      - '3000:3000'
  next:
    image: mg-next
    build: ./next
    depends_on:
      - db
      - express
    env_file:
      - .env
    volumes:
      - type: bind
        source: ./next
        target: /app
      - /app/node_modules
    ports:
      - '3001:3001'
    command: ['npm', 'run', 'dev']

CircleCI with Postgres - cannot connect

In my Circle build, I'm trying to use a Postgres container for testing. The test DB will be created automatically, but Python's not finding the DB. Here's my config.yml:
version: 2.1

orbs:
  python: circleci/python@0.2.1

jobs:
  build:
    executor: python/default
    docker:
      - image: circleci/python:3.8.0
        environment:
          - ENV: CIRCLE
      - image: circleci/postgres:9.6
        environment:
          POSTGRES_USER: circleci
          POSTGRES_DB: circle_test
          POSTGRES_HOST_AUTH_METHOD: trust
    steps:
      - checkout
      - python/load-cache
      - run:
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip3 install -r requirements.txt
      - python/save-cache
      - run:
          name: Running tests
          command: |
            . venv/bin/activate
            python3 ./api/manage.py test
      - store_artifacts:
          path: test-reports/
          destination: python_app

workflows:
  main:
    jobs:
      - build
It seems like everything's fine until the tests start:
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
There's no such folder:
ls -a /var/run/
. .. exim4 lock utmp
It looks like psycopg2 defaults to using the UNIX socket, so you'll need to specify the db host as localhost in your connect() call.
Also, as an aside: it looks like you're specifying an additional but unneeded Python image. The job's executor is already set to python/default, so the circleci/python:3.8.0 image doesn't appear to be doing anything, based on the config you've pasted in.
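To make the "specify the host" point concrete: psycopg2 falls back to the Unix socket only when no host is given, so deriving explicit keyword arguments from a DATABASE_URL forces TCP. A sketch (the helper name `pg_params` and the example URL are illustrative, not from the question):

```python
from urllib.parse import urlparse


def pg_params(database_url: str) -> dict:
    """Split a postgresql:// URL into keyword args for psycopg2.connect()."""
    parts = urlparse(database_url)
    return {
        "host": parts.hostname or "localhost",  # explicit host => TCP, not the Unix socket
        "port": parts.port or 5432,
        "user": parts.username,
        "dbname": parts.path.lstrip("/"),
    }


params = pg_params("postgresql://circleci@localhost/circle_test")
# psycopg2.connect(**params) would then connect over TCP to localhost:5432
```

Many drivers and frameworks accept the URL directly, but when one doesn't, splitting it like this keeps the CI config and the application pointing at the same database.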
Some changes to the PGHOST and DATABASE_URL config solved it:
jobs:
  build:
    executor: python/default
    docker:
      - image: circleci/python:3.8.0
        environment:
          ENV: CIRCLE
          DATABASE_URL: postgresql://circleci@localhost/circle_test
      - image: circleci/postgres:9.6
        environment:
          PGHOST: localhost
          PGUSER: circleci
          POSTGRES_USER: circleci
          POSTGRES_DB: circle_test
          POSTGRES_HOST_AUTH_METHOD: trust
    steps:
      - checkout
      - python/load-cache
      - run:
          name: Wait for db to run
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip3 install -r requirements.txt
      - python/save-cache
      - run:
          name: Running tests
          command: |
            . venv/bin/activate
            cat /etc/hosts
            python3 ./django/gbookapi/manage.py test gbook.tests
      - store_test_results:
          path: test-reports.xml
      - store_artifacts:
          path: test-reports/
          destination: python_app

workflows:
  main:
    jobs:
      - build

CircleCI Swift with Postgres connection issues

I am working in my repo to build a test app for Swift with CircleCI and Postgres, but when it comes to testing I can't grasp how to connect the two images during the testing phase.
I am running
circleci local execute --job build
which should build both the Swift and Postgres images. I give them both the same env variables I give the application. However, I get this error when trying to run it. In my experience, when setting up the two Docker containers with Compose, this error appeared when my API could not reach the db container over the network.
Test Case 'AppTests.RemoveUserTest' started at 2019-04-09 19:46:15.380
Fatal error: 'try!' expression unexpectedly raised an error: NIO.ChannelError.connectFailed(NIO.NIOConnectionError(host: "db", port: 5432, dnsAError: Optional(NIO.SocketAddressError.unknown(host: "db", port: 5432)), dnsAAAAError: Optional(NIO.SocketAddressError.unknown(host: "db", port: 5432)), connectionErrors: [])): file /home/buildnode/jenkins/workspace/oss-swift-4.2-package-linux-ubuntu-16_04/swift/stdlib/public/core/ErrorType.swift, line 184
I know it says it failed because of a try statement, but that try statement is failing because it's requesting actions from Postgres, which is not there. Any ideas?
My current config.yml for CircleCI:
version: 2
jobs:
  build:
    docker:
      - image: swift:4.2
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
          DB_HOSTNAME: db
          PORT: 5432
      - image: postgres:11.2-alpine
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
    steps:
      - checkout
      - run: apt-get update -qq
      - run: apt-get install -yq libssl-dev pkg-config wget
      - run: apt-get install -y postgresql-client || true
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
          environment:
            DOCKERIZE_VERSION: v0.3.0
      - run:
          name: Wait for db
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Compile code
          command: swift build
      - run:
          name: Run unit tests
          command: swift test
  release:
    docker:
      - image: swift:4.2
    steps:
      - checkout
      - run:
          name: Compile code with optimizations
          command: swift build -c release
  push-to-docker-hub:
    docker:
      - image: docker:latest
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --update --no-cache curl jq python py-pip
      - run:
          name: Build Docker Image
          command: |
            docker build -t api .
            docker tag api <>/repo:latest
            docker tag api <>/repo:$CIRCLE_SHA1
            docker login -u $DOCKER_USER -p $DOCKER_PASS
            docker push <>/repo:latest
            docker push <>/repo:$CIRCLE_SHA1
      # - persist_to_workspace:
      #     root: ./
      #     paths:
      #       - k8s-*.yml

workflows:
  version: 2
  tests:
    jobs:
      - build
      - push-to-docker-hub:
          requires:
            - build
          context: dockerhub
          filters:
            branches:
              only: master
      #- linux-release
You're setting the hostname for the database to db, but not defining that anywhere. You need to name your Postgres container to match the DB_HOSTNAME environment variable, as is done here: https://github.com/vapor/postgresql/blob/master/circle.yml#L8
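If you'd rather keep `DB_HOSTNAME: db` than switch to `localhost`, CircleCI's docker executor lets a secondary container be given a `name:` key, which makes it reachable under that hostname. A sketch of the relevant fragment (worth verifying against your CircleCI version, and note that `circleci local execute` may not behave identically to cloud builds):

```yaml
# Hedged sketch: name the Postgres container "db" so the app's
# DB_HOSTNAME env variable resolves to it.
jobs:
  build:
    docker:
      - image: swift:4.2
        environment:
          DB_HOSTNAME: db
      - image: postgres:11.2-alpine
        name: db   # container is now addressable as "db" instead of localhost
        environment:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
```

The alternative is to leave the config as-is and set `DB_HOSTNAME: localhost`, since secondary containers share the primary container's network by default.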