Hello, I'm trying to run a script that starts my yarn dev only after Postgres is reachable:
until psql -c '\l'; do
  echo >&2 "$(date +%Y%m%dt%H%M%S) Postgres is unavailable - sleeping"
  sleep 1
done
echo >&2 "$(date +%Y%m%dt%H%M%S) Postgres is up - executing command"
exec "$@"
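For reference, psql with no flags only tries the local Unix socket as the current OS user (unless PG* environment variables are set), so a variant that targets the db-pg service explicitly might look like this. It is only a sketch: it assumes the postgres client is installed in the image (e.g. apk add --no-cache postgresql-client) and that DB_USER, DB_PASS and DB_NAME are exported inside the container:

until PGPASSWORD="$DB_PASS" psql -h db-pg -U "$DB_USER" -d "$DB_NAME" -c '\l' >/dev/null 2>&1; do
  echo >&2 "Postgres is unavailable - sleeping"
  sleep 1
done
echo >&2 "Postgres is up - executing command"
exec "$@"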
Dockerfile:
#building code
FROM node:lts-alpine
RUN mkdir -p /home/node/api && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
RUN apk add --no-cache openssl
COPY wait-pg.sh ./
RUN chmod +x /wait-pg.sh
ENTRYPOINT ["/wait-pg.sh"]
EXPOSE 4000
CMD ["yarn", "dev"]
docker-compose.yml:
version: '3.7'
services:
  db-pg:
    image: postgres:12
    container_name: db-pg
    ports:
      - '${DB_PORT}:5432'
    environment:
      ALLOW_EMPTY_PASSWORD: 'no'
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASS}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - ci-postgres-data:/data
  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - db-pg
    command: ['./wait-pg.sh', 'yarn', 'dev']
    logging:
      driver: 'json-file'
      options:
        max-size: '10m'
        max-file: '5'
volumes:
  ci-postgres-data:
and I get this error:

---> Running in c5add5098b70
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
ERROR: Service 'ci-api' failed to build: The command '/bin/sh -c apk add --no-cache openssl' returned a non-zero code: 99
You are getting the error because the node user you switch to does not have permission to run apk; package installation needs root.
Move the USER instruction to after the commands that need root, something like this:
#building code
FROM node:lts-alpine
RUN mkdir -p /home/node/api && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
RUN yarn
COPY --chown=node:node . .
RUN apk add --no-cache openssl
COPY wait-pg.sh .
RUN chmod +x ./wait-pg.sh
USER node
ENTRYPOINT ["./wait-pg.sh"]
EXPOSE 4000
CMD ["yarn", "dev"]
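To pick up the reordered instructions, rebuild the image rather than reusing the cached layers (a sketch; the service name comes from the compose file above):

docker-compose build --no-cache ci-api
docker-compose up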
Related
I have an application that uses a PostgreSQL database and Celery. Each component runs in a different container, and from the Celery container I can already reach the Postgres database. However, I don't know how to configure tortoise-orm so that it is initialised inside the Celery container, because I have a task in which I want to interact with the database through Tortoise.
This is my docker compose:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_web
    # '/start' is the shell script used to run the service
    command: /start
    # this volume is used to map the files and folders on the host to the container
    # so if we change code on the host, code in the docker container will also be changed
    volumes:
      - .:/app
    ports:
      - 8010:8000
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
  db:
    image: postgres:14-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=fastapi_celery
      - POSTGRES_USER=fastapi_celery
      - POSTGRES_PASSWORD=fastapi_celery
  redis:
    image: redis:7-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
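One thing worth noting in passing: the file as pasted references the named volume postgres_data, but no top-level volumes: block appears. If that block is genuinely missing (rather than just cut off in the paste), Compose will refuse to start the db service; the usual declaration is simply:

volumes:
  postgres_data: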
This is my Dockerfile:
FROM python:3.10-slim-buster
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update \
# dependencies for building Python packages
&& apt-get install -y build-essential \
# psycopg2 dependencies
&& apt-get install -y libpq-dev \
# Additional dependencies
&& apt-get install -y telnet netcat \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# Requirements are installed here to ensure they will be cached.
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./compose/local/fastapi/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/fastapi/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/local/fastapi/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/fastapi/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/fastapi/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
The task:
@shared_task()
def task_send_welcome_email(user_pk):
    from project.users.models import User
    user = User.filter(id=user_pk).first()
    logger.info(f'send email to {user.email} {user.id}')
I can't understand where this pg_dump version is coming from. I forced postgresql-client-13 to be installed everywhere.

Output of /usr/bin/pg_dump --version:

Celery Beat and Celery containers:
pg_dump (PostgreSQL) 11.12 (Debian 11.12-0+deb10u1)

Other containers (web & postgres) and my local machine:
pg_dump (PostgreSQL) 13.4 (Debian 13.4-1.pgdg100+1)
Here is my Dockerfile
FROM python:3
# The testdriven tutorial uses a user other than root, but that seemed to fail here.
# create directory for the app user
RUN mkdir -p /home/app
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
ENV PYTHONUNBUFFERED 1
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN apt-get update -qq && apt-get install -y \
postgresql-client-13 \
binutils \
libproj-dev \
gdal-bin
RUN apt-get update \
&& apt-get install -yyq netcat
# install psycopg2 and other build dependencies
RUN pip3 install --no-cache-dir --upgrade pip && pip install --no-cache-dir -U pip wheel setuptools
COPY ./requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# copy entrypoint.prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
and here is my docker-compose
version: '3.7'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
    expose:
      - 8000
    env_file:
      - ./app/.env.prod
    depends_on:
      - db
  db:
    image: postgis/postgis:13-master
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./app/.env.prod.db
  redis:
    image: redis:6
  celery:
    build: ./app
    command: celery -A core worker -l info
    volumes:
      - ./app/:/usr/src/app/
    env_file:
      - ./app/.env.prod
    depends_on:
      - redis
  celery-beat:
    build: ./app
    command: celery -A core beat -l info
    volumes:
      - ./app/:/usr/src/app/
    env_file:
      - ./app/.env.prod
    depends_on:
      - redis
  nginx-proxy:
    image: nginxproxy/nginx-proxy:latest
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - ./app/.env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy
volumes:
  postgres_data:
  static_volume:
  certs:
  html:
  vhost:
I really need the Celery containers to end up with the same pg_dump version.
Can you provide some input?
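A couple of checks that may help narrow down where the older pg_dump comes from (a sketch only; note that in the compose file above web builds Dockerfile.prod, while celery and celery-beat use build: ./app, i.e. whatever ./app/Dockerfile contains):

docker-compose config | grep -A 3 build
docker-compose exec web pg_dump --version
docker-compose exec celery pg_dump --version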
Our Travis builds have started failing and I can't figure out why. Our app runs in docker-compose, and then we run Cypress against it. This used to work perfectly; now the host port for the web server is simply unresponsive. I've removed Cypress and am just trying to run curl http://localhost:3001, and it just hangs. Here's the .travis.yml. Any suggestions would be highly appreciated. I have tried fiddling for several hours with the Docker versions, distros, localhost vs 127.0.0.1, etc., to no avail. All of this works fine locally on my workstation.
language: node_js
node_js:
  - "12.19.0"
env:
  - DOCKER_COMPOSE_VERSION=1.25.4
services:
  - docker
sudo: required
# Supposedly this is needed for Cypress to work in Ubuntu 16
# https://github.com/cypress-io/cypress-example-kitchensink/blob/master/basic/.travis.yml
addons:
  apt:
    packages:
      - libgconf-2-4
before_install:
  # upgrade docker compose https://docs.travis-ci.com/user/docker/#using-docker-compose
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
  # upgrade docker itself https://docs.travis-ci.com/user/docker/#installing-a-newer-docker-version
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - sudo apt-get update
  - sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
  # Put the .env file in place
  - cp .env.template .env
install:
  # Install node modules (for jest and wait-on) and start up the docker containers
  - cd next
  - npm ci
  - cd ..
  - cd e2e
  - npm ci
  - cd ..
script:
  - docker --version
  - docker-compose --version
  - docker-compose up --build -d
  # Run unit tests
  # - cd next
  # - npm run test
  # Run e2e tests
  # - cd ../e2e
  # - npx cypress verify
  # - CYPRESS_FAIL_FAST=true npx wait-on http://localhost:3001 --timeout 100000 && npx cypress run --config video=false,pageLoadTimeout=100000,screenshotOnRunFailure=false
  - sleep 30
  - curl http://127.0.0.1:3001 --max-time 30
  - docker-compose logs db
  - docker-compose logs express
  - docker-compose logs next
post_script:
  - docker-compose down
The logs look like this:
The command "docker-compose up --build -d" exited with 0.
30.01s$ sleep 30
The command "sleep 30" exited with 0.
93.02s$ curl http://127.0.0.1:3001 --max-time 30
curl: (28) Operation timed out after 30001 milliseconds with 0 bytes received
The command "curl http://127.0.0.1:3001 --max-time 30" exited with 28.
The docker-compose logs show nothing suspicious. It's as if the network wasn't set up correctly and Docker never sees the requests.
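A couple of diagnostics that might separate "port not published on the host" from "app not listening" (a sketch; it assumes wget exists inside the next image, which is true for BusyBox/alpine-based images but not for every base):

docker-compose ps
docker ps --format '{{.Names}}\t{{.Ports}}'
docker-compose exec next wget -qO- http://localhost:3001 | head -c 200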
Here is the docker-compose.yml in case it's useful:
version: '3.7'
services:
  db:
    image: mg-postgres
    build: ./postgres
    ports:
      - '5433:5432'
    environment:
      POSTGRES_HOST_AUTH_METHOD: 'trust'
  adminer:
    image: adminer
    depends_on:
      - db
    ports:
      - '8080:8080'
  express:
    image: mg-server
    build: ./express
    restart: always
    depends_on:
      - db
    env_file:
      - .env
    environment:
      DEBUG: express:*
    volumes:
      - type: bind
        source: ./express
        target: /app
      - /app/node_modules
    ports:
      - '3000:3000'
  next:
    image: mg-next
    build: ./next
    depends_on:
      - db
      - express
    env_file:
      - .env
    volumes:
      - type: bind
        source: ./next
        target: /app
      - /app/node_modules
    ports:
      - '3001:3001'
    command: ['npm', 'run', 'dev']
I am trying to deploy a KeystoneJS app on a server, and before that I want to test it locally.
I started MongoDB locally, then used the Dockerfile from the documentation (https://www.keystonejs.com/guides/deployment), but I can't run it. I get the error below:
✖ Connecting to database
Error: Server selection timed out after 30000 ms
    at /home/node/node_modules/@keystonejs/utils/dist/utils.cjs.prod.js:54:26
    at async executeDefaultServer (/home/node/node_modules/@keystonejs/keystone/bin/utils.js:109:3) {
  errors: {
    MongooseAdapter: MongoTimeoutError: Server selection timed out after 30000 ms
        at Timeout._onTimeout (/home/node/node_modules/mongodb/lib/core/sdam/server_selection.js:308:9)
        at listOnTimeout (internal/timers.js:531:17)
        at processTimers (internal/timers.js:475:7) {
      name: 'MongoTimeoutError',
      reason: [MongoNetworkError],
      [Symbol(mongoErrorContextSymbol)]: {}
    }
  }
}
error Command failed with exit code 1.
From my searching, it seems the container doesn't know about my local mongodb://localhost:27017 (inside a container, localhost refers to the container itself, not to the host).
Then I decided to use docker-compose:
Here is the docker-compose.yml
version: '3'
services:
  app:
    container_name: my-admin
    restart: always
    build: .
    volumes:
      - .:/mycode
    environment:
      - MONGO_URI=mongodb://mongo:27017
    ports:
      - "80:3030"
    links:
      - mongo
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - "27017:27017"
When I run docker-compose up, I get the same error.
I also tried this:
const keystone = new Keystone({
  name: PROJECT_NAME,
  adapter: new Adapter({ mongoUri: "mongodb://mongo:27017/myapp" }),
  onConnect: initialiseData,
});
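A quick way to check whether the app container can see the mongo service at all (a sketch, assuming the compose file above; ping is available via BusyBox in alpine-based node images):

docker-compose exec app ping -c 1 mongo
docker-compose exec app env | grep MONGO_URI
docker-compose logs mongo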
Any help? Thanks!
EDIT
Here is the Dockerfile
# https://docs.docker.com/samples/library/node/
ARG NODE_VERSION=12.10.0
# https://github.com/Yelp/dumb-init/releases
ARG DUMB_INIT_VERSION=1.2.2
# Build container
FROM node:${NODE_VERSION}-alpine AS build
ARG DUMB_INIT_VERSION
WORKDIR /home/node
RUN apk add --no-cache build-base python2 yarn && \
wget -O dumb-init -q https://github.com/Yelp/dumb-init/releases/download/v${DUMB_INIT_VERSION}/dumb-init_${DUMB_INIT_VERSION}_amd64 && \
chmod +x dumb-init
ADD . /home/node
RUN yarn install && yarn build && yarn cache clean
# Runtime container
FROM node:${NODE_VERSION}-alpine
WORKDIR /home/node
COPY --from=build /home/node /home/node
EXPOSE 3000
CMD ["./dumb-init", "yarn", "start"]
I'm trying to dockerise my Symfony 4 app, which runs with PostgreSQL.
But when I run:
$ sudo docker-compose build
I get this error:
In AbstractPostgreSQLDriver.php line 73:
An exception occurred in driver: SQLSTATE[08006] [7] could not connect to server : Connection refused :
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
Here's my docker-compose.yml file :
version: '3.7'
services:
  db:
    image: ${POSTGRES_IMAGE}
    restart: always
    environment:
      POSTGRES_DB: ${DATABASE_NAME}
      POSTGRES_USER: ${DATABASE_USER}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
  php:
    build:
      context: .
      dockerfile: ./docker/php/Dockerfile
    restart: on-failure
    user: 1000:1000
    volumes:
      - ./docker/php/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
      - .:/var/www/symfony
    working_dir: /var/www/symfony
    depends_on:
      - db
Content of my .env file
DATABASE_NAME=db
DATABASE_HOST=127.0.0.1
DATABASE_PORT=5432
DATABASE_USER=postgres
DATABASE_PASSWORD=root
## Docker images (name and version)
PHP_IMAGE=php:7.3-fpm
POSTGRES_IMAGE=postgres
Also FYI, here is the output of sudo docker-compose config:

debian@debian:~/dev/symfony$ sudo docker-compose config
services:
  db:
    environment:
      POSTGRES_DB: mrd
      POSTGRES_PASSWORD: root
      POSTGRES_USER: postgres
    image: postgres
    restart: always
  php:
    build:
      context: /home/debian/dev/symfony
      dockerfile: ./docker/php/Dockerfile
    depends_on:
      - db
    restart: on-failure
    user: 1000:1000
    volumes:
      - /home/debian/dev/symfony:/var/www/symfony:rw
    working_dir: /var/www/symfony
version: '3.7'
My Dockerfile:
FROM php:7.3-fpm
WORKDIR /var/www/symfony
RUN apt-get update
# Install Postgres PDO
RUN apt-get install -y libpq-dev \
&& apt-get install -y zip \
&& docker-php-ext-install pgsql pdo_pgsql \
&& apt-get install -y git
RUN pecl install apcu
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php -r "if (hash_file('SHA384', 'composer-setup.php') === 'a5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
&& php composer-setup.php --filename=composer \
&& php -r "unlink('composer-setup.php');" \
&& mv composer /usr/local/bin/composer
RUN ls -larth
COPY . /var/www/symfony
RUN PATH=$PATH:/var/www/symfony/vendor/bin:bin
RUN pwd \
&& ls \
&& composer install --no-interaction --no-ansi --optimize-autoloader\
&& php bin/console doctrine:database:create \
&& php bin/console doctrine:schema:update --no-interaction \
&& php bin/console doctrine:fixtures:load --no-interaction
Does anybody have a clue why this happens, and how to solve it?
I've searched a lot and couldn't find anything that worked. I thought depends_on: db would do the trick, but no.
Try running:

docker-compose ps

and note the container name of the db service. Then set DATABASE_HOST to that container name in your .env:

DATABASE_NAME=db
DATABASE_HOST=## paste Container name
DATABASE_PORT=5432
DATABASE_USER=postgres
DATABASE_PASSWORD=root
## Docker images (name and version)
PHP_IMAGE=php:7.3-fpm
POSTGRES_IMAGE=postgres
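As a quick sanity check (assuming the compose file from the question, where the database service is called db), you can confirm the name resolves from inside the php container; on the default compose network both the service name and the container name shown by docker-compose ps should work:

sudo docker-compose exec php getent hosts db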
The pastes in your question indicate a number of things to be addressed in your docker-compose.yml file:
You are missing an env_file: entry for your .env file
You are missing an image: name for the php service (a sketch addressing these first two points follows this list)
You specify . as your build context (which should be a directory that indicates where your Dockerfile is located)
You specify a path as your Dockerfile (which should be a filename, not a directory) -- I guess this might all work, but it's a bit confusing to read
Indentation for -db and - /home/debian/dev/symfony:/var/www/symfony:rw is off -- this might not be a problem, but it's still difficult to read
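For reference, a sketch of the php service with the first two points addressed (the image name and the env_file path are assumptions; adjust them to your project):

services:
  php:
    image: symfony-php            # assumed name, pick your own
    build:
      context: .
      dockerfile: ./docker/php/Dockerfile
    env_file:
      - .env                      # assumed location of the .env shown in the question
    restart: on-failure
    user: 1000:1000
    volumes:
      - ./docker/php/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
      - .:/var/www/symfony
    working_dir: /var/www/symfony
    depends_on:
      - db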
From your report that the build fails with an error about not being able to connect to a database -- could you update your question and share your Dockerfile for Symfony? I suspect that you need to remove a reference to a database connection.
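If that suspicion is right, the usual approach is to stop talking to the database during docker-compose build and defer the console commands to container start, when the db service is actually reachable. A sketch only, not the poster's confirmed setup:

# Dockerfile: build artefacts only, no database access
RUN composer install --no-interaction --no-ansi --optimize-autoloader --no-scripts

# entrypoint script, run at container start
php bin/console doctrine:database:create --if-not-exists
php bin/console doctrine:schema:update --no-interaction
php bin/console doctrine:fixtures:load --no-interaction
php-fpm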