Add WP-CLI container to LEMP stack - docker-compose

Docker version 18.09.1, build 4c52b90
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
I have a working LEMP stack with the following docker-compose file.
version: "3.7"
services:
php:
build:
context: './config/php/'
networks:
- dev
volumes:
- /WORK/:/var/www/html/
working_dir: /var/www/html/
container_name: php
nginx:
image: nginx:1.14-alpine
depends_on:
- php
- mysql
networks:
- dev
ports:
- "80:80"
volumes:
- ./config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
- /WORK/:/var/www/html/
container_name: nginx
mysql:
image: mysql:5.7
restart: always
ports:
- "3306:3306"
volumes:
- /MYSQL/:/var/lib/mysql
networks:
- dev
environment:
MYSQL_ROOT_PASSWORD: "db_root"
MYSQL_DATABASE: "test_db"
MYSQL_USER: "test_db_user"
MYSQL_PASSWORD: "test_db_pass"
container_name: mysql
wpcli:
image: wordpress:cli
depends_on:
- mysql
environment:
WORDPRESS_DB_HOST: mysql
WORDPRESS_DB_NAME: test_db
WORDPRESS_DB_USER: test_db_user
WORDPRESS_DB_PASSWORD: test_db_pass
networks:
- dev
command: '--path=`/var/www/html/WP-Site/`'
volumes:
- /WORK/:/var/www/html/
- /MYSQL/:/var/lib/mysql
working_dir: /var/www/html/
container_name: wpcli
networks:
dev:
The Dockerfile for the PHP image contains this.
FROM php:7.2-fpm-alpine

# docker-entrypoint.sh dependencies
RUN apk add --no-cache \
    # in theory, docker-entrypoint.sh is POSIX-compliant, but priority is a working, consistent image
    bash \
    # BusyBox sed is not sufficient for some of our sed expressions
    sed

# https://github.com/docker-library/wordpress/blob/master/php7.3/fpm-alpine/Dockerfile
# install the PHP extensions we need
RUN set -ex; \
    \
    apk add --no-cache --virtual .build-deps \
        libjpeg-turbo-dev \
        libpng-dev \
        libzip-dev \
    ; \
    \
    docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; \
    docker-php-ext-install gd mysqli opcache zip; \
    \
    runDeps="$( \
        scanelf --needed --nobanner --format '%n#p' --recursive /usr/local/lib/php/extensions \
            | tr ',' '\n' \
            | sort -u \
            | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' \
    )"; \
    apk add --virtual .wordpress-phpexts-rundeps $runDeps; \
    apk del .build-deps
The NGINX conf is this.
upstream php {
    server php:9000;
}

server {
    listen 80;
    server_name localhost;

    ## Your only path reference.
    root /var/www/html;

    ## This should be in your http block and if it is, it's not needed here.
    index index.php index.html index.htm;

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location / {
        # https://nginxlibrary.com/enable-directory-listing/
        autoindex on;
        # This is cool because no php is touched for static content.
        # include the "?$args" part so non-default permalinks doesn't break when using query string
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass php;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
All services except WP-CLI work. My folder structure is such that under the main WORK folder I have subfolders, each containing a project.
Such projects can be WordPress, but they can also use another CMS or just be small static sites.
Under http://localhost/ I see an overview of the WORK folder with all its subfolders. Clicking on any project folder, e.g. WP-Site, leads me to e.g. http://localhost/WP-Site/. There I have a working WordPress install that connects to the PHP and MYSQL services just fine.
But when I try to use WP-CLI, it either tells me that it cannot find a working WordPress install, or, when I pass the command in the service and give it the path of one of my WordPress projects, it just exits without an error message.
Do I really need a WordPress image to be able to use the WP-CLI image? In the WP-CLI service I also set the same environment variables as for the MYSQL service, so that WP-CLI knows where to connect.
If there is any way to run a WP-CLI container and have it available under the main WORK folder, for all the projects I might create, that would be great.
I thought of installing everything into one image, but that defeats the purpose of Docker: keeping things separate and exchangeable. So I would like to build a WP-CLI container that can indeed connect to my PHP and MYSQL containers.
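For reference, this is roughly how I would expect to invoke it against one of the projects (a sketch; wp core version stands in for any WP-CLI command):
# run an arbitrary WP-CLI command against the WP-Site project
docker-compose run --rm wpcli wp core version --path=/var/www/html/WP-Site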

Related

Keycloak URLs setup

I want to run Keycloak and play with it, so I run a container in Docker with the quay.io/keycloak/keycloak:20.0.1 image.
version: '3.8'
networks:
  default-dev-network:
    external: true
services:
  keycloak:
    container_name: keycloak
    image: quay.io/keycloak/keycloak:20.0.1
    environment:
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgresdb:5432/keycloak
      KC_DB_USERNAME: postgres
      KC_DB_PASSWORD: pass
      KC_DB_SCHEMA: public
      KC_HOSTNAME: localhost
      KC_HTTPS_PORT: 8443
      KC_HTTPS_PROTOCOLS: TLSv1.3,TLSv1.2
      KC_HTTP_ENABLED: "true"
      KC_HTTP_PORT: 8080
      KC_METRICS_ENABLED: "true"
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: password
    ports:
      - 18080:8080
      - 8443:8443
    command: start-dev
    networks:
      - default-dev-network
Then I created a realm test-realm and a client test-client. Now I want to request a bearer token for it, so I run
curl \
  -d 'client_id=test-client' \
  -d 'client_secret=xajewuZlBHL75rpiPttHday8t34aOnYa' \
  -d 'grant_type=client_credentials' \
  'http://localhost:18080/auth/realms/test-realm/protocol/openid-connect/token'
and I get
{"error":"RESTEASY003210: Could not find resource for full path: http://localhost:18080/auth/realms/test-realm/protocol/openid-connect/token"}
I'm reading the documentation at https://www.keycloak.org, but there are so many details there that I'm afraid it will take weeks to figure everything out. Maybe there's a shorter guide?
New versions of Keycloak (after the rewrite in Quarkus) removed the /auth context path.
You can either remove it from the URL or set the property KC_HTTP_RELATIVE_PATH=/auth.
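With the container above (host port 18080), the corrected request just drops the /auth prefix:
# same request as before, without the removed /auth context path
curl \
  -d 'client_id=test-client' \
  -d 'client_secret=xajewuZlBHL75rpiPttHday8t34aOnYa' \
  -d 'grant_type=client_credentials' \
  'http://localhost:18080/realms/test-realm/protocol/openid-connect/token'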

What is the path for application.properties (or similar file) in docker container?

I am dockerizing a Spring Boot application (with PostgreSQL). I want to overwrite the application.properties in the docker container with my own application.properties.
My docker-compose.yml file looks like this:
version: '2'
services:
  API:
    image: 'api-docker.jar'
    ports:
      - "8080:8080"
    depends_on:
      - PostgreSQL
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://PostgreSQL:5432/postgres
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  PostgreSQL:
    image: postgres
    volumes:
      - C:/path/to/my/application.properties:/path/of/application.properties/in/container
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
I am doing this to overwrite the application.properties in the container with my own application.properties file, so that the data gets stored on localhost.
I tried the path /opt/application.properties but it didn't work.
You have two solutions:
1) First solution
Create application.properties entries that reference environment variables:
mycustomproperties1: ${MY_CUSTOM_ENV1}
mycustomproperties2: ${MY_CUSTOM_ENV2}
I advise you to create different application.properties files (application-test, application-prod, etc.).
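The variables can then be supplied when the container starts, for example (a sketch; the image name is taken from the compose file above, the values are placeholders):
# pass the values the properties refer to as container environment variables
docker run -e MY_CUSTOM_ENV1=value1 -e MY_CUSTOM_ENV2=value2 -p 8080:8080 api-docker.jar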
2) Second solution
Create a Dockerfile:
FROM debian:buster
RUN apt-get update --fix-missing && apt-get dist-upgrade -y
RUN apt install wget -y
RUN apt install apt-transport-https ca-certificates wget dirmngr gnupg software-properties-common -y
RUN wget -qO - https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public | apt-key add -
RUN add-apt-repository --yes https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/
RUN apt update
RUN apt install adoptopenjdk-8-hotspot -y
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","-Dspring.config.location="file:///config/application.properties","/app.jar"]
or add an env variable in docker-compose:
SPRING_CONFIG_LOCATION=file:///config/application.properties
modify docker-compose:
version: '2'
services:
  API:
    image: 'api-docker.jar'
    ports:
      - "8080:8080"
    depends_on:
      - PostgreSQL
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://PostgreSQL:5432/postgres
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
      - SPRING_CONFIG_LOCATION=file:///config/application.properties
    volumes:
      - C:/path/to/my/application.properties:/config/application.properties
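You can then verify that the file landed where Spring expects it (a sketch; API is the service name from the compose file above):
# confirm the mounted properties file is visible inside the container
docker-compose exec API ls -l /config/application.properties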
In case anybody comes across the same problem, here is the solution.
I am trying to use my localhost database instead of the in-memory database (which stores the data in the container). This is my docker-compose.yml configuration:
version: '2'
services:
  API:
    image: 'api-docker.jar' #(your jar file name)
    volumes:
      - path/to/new/application.properties:/config/env
    ports:
      - "8080:8080"
You need to provide a new application.properties file that contains the configuration for storing the data in your local database (it could be a copy of your actual application.properties). This file needs to overwrite the config file in the container, and the path to that is /config/env (which is mentioned in the yml file).

Connecting to a MongoCryptD instance in docker environment with Mongoose

I've searched over the web but couldn't find my answer anywhere.
I'm trying to run an API web service using the NestJS framework.
I'm running docker-compose that spins up the API server, a MongoDB instance, and a mongocryptd instance to allow Client-Side Field Level Encryption on my app.
I'm able to connect to the MongoDB instance, but not to the mongocryptd instance.
Docker-Compose file:
version: "3.7"
services:
api:
build:
context: .
dockerfile: Dockerfile
labels:
env: dev
args:
APP: appname
APP_PORT: 3000
ports:
- "3000:3000"
command: ["sh", "-c", "npm run start:app:dev"]
volumes:
- .:/app
mongodb:
build:
context: .
dockerfile: docker/MongoEP-Dockerfile
labels:
env: dev
args:
MONGO_PACKAGE: mongodb-enterprise
MONGO_REPO: repo.mongodb.com
image: mongo-enterprise:4.2.5
command: ["--auth"]
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: usr
MONGO_INITDB_ROOT_PASSWORD: pwd
ports:
- "27017:27017"
volumes: ["/private/var/services/mongodb:/data/db"]
mongocryptd:
build:
context: .
dockerfile: docker/MongoEP-Dockerfile
labels:
env: dev
args:
MONGO_PACKAGE: mongodb-enterprise
MONGO_REPO: repo.mongodb.com
image: mongo-enterprise:4.2.5
entrypoint: mongocryptd
restart: always
ports:
- "27020:27020"
volumes: ["/private/var/services/mongodb:/data/db"]
The Dockerfile used is mongo's official Dockerfile, supplied with args to build an enterprise version of the image that includes the enterprise features.
When trying to connect to the database from the app, I'm running:
MongooseModule.forRoot(`mongodb://usr:pwd@mongodb:27017`, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useFindAndModify: false,
  retryAttempts: 2,
  autoEncryption: {
    keyVaultNamespace,
    kmsProviders,
    extraOptions: {
      mongocryptdURI: `mongodb://mongocryptd:27020`,
      mongocryptdBypassSpawn: true
    }
  } as any
})
** This is the NestJS way of supplying the configs. It's similar to mongoose: the first argument is the URI and the second is the settings object.
Without the autoEncryption options, I'm able to connect without any problems. That means that my database address is correct.
With the autoEncryption options, I'm getting MongooseServerSelectionError: connect ECONNREFUSED 172.25.0.4:27020 (the mongocryptd address). That means the IP is correct (DNS resolved), but the connection is refused. As I showed before, the port (27020) is published by the docker-compose file, and I even tried to add an EXPOSE step in the build itself.
BUT when I map the network of the containers to host (network_mode: "host"), the application is able to connect without any problems (changing the connection DNS to localhost:27017 and 27020, of course). So that must mean it's a docker-related problem.
Additional things I've tried, and a recap of what I tried:
Attach a volume to replace /etc/mongod.conf.orig with the following network configurations:
net:
  port: 27017
  bindIp: 0.0.0.0
  bindIpAll: true
Instead of attaching a volume, replacing it ^ at the build step before launching the mongo service.
I also tried changing the bindIp to the specific application IP that was given by the docker network.
All types of connection strings with & without user credentials, auth source, and default database.
Port 27020 is published in docker-compose & exposed on docker file.
I ran out of ideas. Any help is appreciated! :)
EDIT:
After more debugging, I can see that mongod is running with --bind_ip_all by default so changing the conf file shouldn't have an effect.
I also tried running mongocryptd with mongod's docker-entrypoint.sh entrypoint instead of overriding it.
Verify mongocryptd is running (ps awwxu, etc.)
Verify you can connect to it from bash on the same container where it is running using mongo.
Verify you can connect to it from host system using mongo.
Check mongocryptd logs (it's basically a mongod with some extra functionality).
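For example, the first two checks can be run from the host with docker-compose (a sketch; mongocryptd is the service name from the compose file above, and ps assumes procps is present in the image):
# is mongocryptd running inside its container?
docker-compose exec mongocryptd ps awwxu
# can it be reached from inside the same container?
docker-compose exec mongocryptd mongo --port 27020 --eval 'db.runCommand({ ping: 1 })'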
I had similar issues with mongocryptd, but I was working with a PHP node instead of npm. I tried different solutions but didn't manage to succeed. I even tried a docker-compose.yaml similar to Asaf Kfir's, with the mongodb-enterprise-cryptd lib pre-installed, but had the same issue. Keep in mind that in the PHP node, via the Dockerfile, I was already able to install libmongocrypt-dev and mongodb-enterprise-cryptd. (I will leave the PHP Dockerfile below.)
I managed to link those three containers under the same IP address, tested with ncat, and I was able to reach them. But when I tried to run tests from the PHP node, it started throwing:
MongoDB\Driver\Exception\BulkWriteException: Bulk write failed due to previous MongoDB\Driver\Exception\RuntimeException: key vault error: Invalid reply to find command.
and I had this issue for two weeks. I basically didn't know how to resolve it.
P.S. remember these words: The automatic feature of field level encryption is only available in MongoDB Enterprise 4.2 or later
At that time my docker-compose.yaml file looked like this:
version: '3'
services:
  #PHP Service
  php:
    image: local-base-php
    container_name: app
    restart: unless-stopped
    tty: true
    ports:
      - "27017:27017"
      - "27020:27020"
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www/projects
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
  #MongoDB Service
  mongodb:
    image: local-mongo-db
    container_name: mongodb
    restart: unless-stopped
    tty: true
    environment:
      MONGO_INITDB_DATABASE: test
      MONGO_INITDB_USERNAME: root
      MONGO_INITDB_PASSWORD: rootpassword
    network_mode: service:php
    volumes: ["/tmp/mongodb:/data/db"]
  #MongoDB Service
  mongocryptd:
    image: local-mongocryptd
    container_name: mongocryptd
    entrypoint: mongocryptd
    restart: unless-stopped
    tty: true
    network_mode: service:php
    volumes: ["/tmp/mongodb:/data/db"]
volumes:
  dbdata:
    driver: local
Mongo images were built from here: https://docs.mongodb.com/manual/tutorial/install-mongodb-enterprise-with-docker/
I managed to solve this issue like this:
Instead of building the mongo-enterprise image, I accidentally built with the official mongo image mongo:4.2, and everything worked well. I don't know why mongo says that enterprise is needed for encryption, because for me the mongo-enterprise encryption didn't work; the original mongo:4.2 image worked perfectly.
working docker-compose.yaml:
version: '3'
services:
  #PHP Service
  php:
    image: local-base-php
    container_name: app
    restart: always
    tty: true
    ports:
      - "27017:27017"
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www/projects
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
  #MongoDB Service
  mongodb:
    image: mongo:4.2
    container_name: mongodb
    restart: always
    tty: true
    environment:
      MONGO_INITDB_DATABASE: test
      MONGO_INITDB_USERNAME: root
      MONGO_INITDB_PASSWORD: rootpassword
    network_mode: service:php
php node Dockerfile:
FROM php:7.4
RUN apt-get update && apt-get install -y zip unzip libzip-dev git mercurial zlib1g-dev libicu-dev libcurl4-gnutls-dev libssl-dev libssh2-1-dev libgmp-dev libpng-dev uuid-dev
RUN cd /tmp && git clone https://github.com/php/pecl-networking-ssh2 && cd /tmp/pecl-networking-ssh2 \
&& phpize && ./configure && make && make install \
&& echo "extension=ssh2.so" > /usr/local/etc/php/conf.d/ext-ssh2.ini \
&& rm -rf /tmp/ssh2
RUN docker-php-ext-configure gmp
RUN docker-php-ext-install zip json pdo pdo_mysql curl opcache bcmath sockets gmp gd
RUN docker-php-ext-install -j$(nproc) intl
RUN pecl install uuid pcov redis mongodb
RUN docker-php-ext-enable uuid pcov redis mongodb
RUN curl -sS https://get.symfony.com/cli/installer | bash -s -- --install-dir /usr/local/bin
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN curl -L -sS "https://github.com/splitsh/lite/releases/download/v1.0.1/lite_linux_amd64.tar.gz" | tar xvz -C /usr/local/bin
RUN apt-get update
RUN apt-get install -y curl gpg wget
RUN sh -c 'curl -s https://www.mongodb.org/static/pgp/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg'
RUN echo "deb https://libmongocrypt.s3.amazonaws.com/apt/ubuntu bionic/libmongocrypt/1.0 universe" | tee /etc/apt/sources.list.d/libmongocrypt.list
RUN wget -qO - mongodb.org/static/pgp/server-4.2.asc | apt-key add -
RUN echo "deb http://repo.mongodb.com/apt/debian stretch/mongodb-enterprise/4.2 main" | tee /etc/apt/sources.list.d/mongodb-enterprise.list
RUN apt-get update
RUN apt-get install -y libmongocrypt-dev && apt-get install --no-install-recommends -y mongodb-enterprise-cryptd
I hope I helped. Cheers!
UPDATE:
My tests are running without the libmongocrypt-dev lib, so I guess you only need mongodb-enterprise-cryptd.
I decided to use socat to forward traffic.
Dockerfile:
# Build stage
FROM ubuntu:focal
ENV ENTRY_FILE=docker-entrypoint.sh
ENV MONGODB_PATH=/usr/src/mongodb
ENV ENTRY_POINT=$MONGODB_PATH/$ENTRY_FILE
RUN apt-get update && apt-get install -y sudo
RUN sudo apt-get install -y curl telnet vim socat libcurl4 libgssapi-krb5-2 libldap-2.4-2 libwrap0 libsasl2-2 libsasl2-modules libsasl2-modules-gssapi-mit snmp openssl liblzma5
RUN curl -k -o mongodb.tgz "https://downloads.mongodb.com/linux/mongodb-linux-$(arch)-enterprise-ubuntu2004-5.0.10.tgz"
RUN tar -xf mongodb.tgz --strip-components=1
RUN sudo ln -s $MONGODB_PATH/bin/* /usr/local/bin/
RUN sudo mkdir -p /data/db
RUN sudo mkdir -p /data/log
RUN sudo chown `whoami` /data/db
RUN sudo chown `whoami` /data/log
COPY ./$ENTRY_FILE $ENTRY_POINT
RUN chmod +x $ENTRY_POINT
ENTRYPOINT $ENTRY_POINT
docker-entrypoint.sh:
#!/bin/sh
socat -d -d TCP-LISTEN:27017,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17017 &
socat -d -d TCP-LISTEN:27018,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17018 &
socat -d -d TCP-LISTEN:27019,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17019 &
socat -d -d TCP-LISTEN:27020,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17020 &
./bin/mongod --port 17017 --dbpath /data/db --logpath /data/log/mongod.log &
./bin/mongocryptd --port 17020 --logpath /data/log/mongocryptd.log
$(hostname -I | awk '{print $1}') is the remote IP (my docker host IP, 172.2.x.x); you can change it to another container's IP.
docker-compose.yml:
version: "3.8"
networks:
ABC:
external: false
name: ABC
services:
mongo-database:
container_name: MongoDB
privileged: true
build:
context: .docker/db
dockerfile: Dockerfile
volumes:
- .mongo/db:/data/db
- .mongo/log:/data/log
ports:
- 27017-27020:27017-27020
networks:
- ABC
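Once the stack is up, each socat relay can be checked from the host (a sketch; nc comes from the netcat package):
# the container publishes 27017-27020; socat forwards them to the internal ports
nc -vz localhost 27017
nc -vz localhost 27020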

Docker - PostgreSQL could not connect to server: Connection refused 127.0.0.1:5432

Trying to dockerise my Symfony 4 app, running with PostgreSQL.
But when I run:
$ sudo docker-compose build
I get this error:
In AbstractPostgreSQLDriver.php line 73:
  An exception occurred in driver: SQLSTATE[08006] [7] could not connect to server: Connection refused
  Is the server running on host "127.0.0.1" and accepting
  TCP/IP connections on port 5432?
Here's my docker-compose.yml file :
version: '3.7'
services:
  db:
    image: ${POSTGRES_IMAGE}
    restart: always
    environment:
      POSTGRES_DB: ${DATABASE_NAME}
      POSTGRES_USER: ${DATABASE_USER}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
  php:
    build:
      context: .
      dockerfile: ./docker/php/Dockerfile
    restart: on-failure
    user: 1000:1000
    volumes:
      - ./docker/php/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
      - .:/var/www/symfony
    working_dir: /var/www/symfony
    depends_on:
      - db
Content of my .env file
DATABASE_NAME=db
DATABASE_HOST=127.0.0.1
DATABASE_PORT=5432
DATABASE_USER=postgres
DATABASE_PASSWORD=root
## Docker images (name and version)
PHP_IMAGE=php:7.3-fpm
POSTGRES_IMAGE=postgres
Also FYI:
debian@debian:~/dev/symfony$ sudo docker-compose config
services:
  db:
    environment:
      POSTGRES_DB: mrd
      POSTGRES_PASSWORD: root
      POSTGRES_USER: postgres
    image: postgres
    restart: always
  php:
    build:
      context: /home/debian/dev/symfony
      dockerfile: ./docker/php/Dockerfile
    depends_on:
      - db
    restart: on-failure
    user: 1000:1000
    volumes:
      - /home/debian/dev/symfony:/var/www/symfony:rw
    working_dir: /var/www/symfony
version: '3.7'
My Dockerfile:
FROM php:7.3-fpm
WORKDIR /var/www/symfony
RUN apt-get update
# Install Postgres PDO
RUN apt-get install -y libpq-dev \
&& apt-get install -y zip \
&& docker-php-ext-install pgsql pdo_pgsql \
&& apt-get install -y git
RUN pecl install apcu
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php -r "if (hash_file('SHA384', 'composer-setup.php') === 'a5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
&& php composer-setup.php --filename=composer \
&& php -r "unlink('composer-setup.php');" \
&& mv composer /usr/local/bin/composer
RUN ls -larth
COPY . /var/www/symfony
RUN PATH=$PATH:/var/www/symfony/vendor/bin:bin
RUN pwd \
    && ls \
    && composer install --no-interaction --no-ansi --optimize-autoloader \
    && php bin/console doctrine:database:create \
    && php bin/console doctrine:schema:update --no-interaction \
    && php bin/console doctrine:fixtures:load --no-interaction
Does anybody have a clue why this happens, and how to solve it?
I searched a lot and couldn't find anything that worked. I thought depends_on: db would do the trick, but no.
Try to run:
docker-compose ps
to see the container name, then set DATABASE_HOST to that container name:
DATABASE_NAME=db
DATABASE_HOST=## paste Container name
DATABASE_PORT=5432
DATABASE_USER=postgres
DATABASE_PASSWORD=root
## Docker images (name and version)
PHP_IMAGE=php:7.3-fpm
POSTGRES_IMAGE=postgres
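Alternatively, since compose services can reach each other by service name on the default network, pointing DATABASE_HOST at the db service from the compose file above should also work (a sketch of the .env change):
# service name from docker-compose.yml, resolvable inside the php container
DATABASE_HOST=db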
The pastes in your question indicate a number of things to be addressed in your docker-compose.yml file:
- You are missing an env_file: entry for your .env file
- You are missing an image for the php service
- You specify . as your build context (which should be a directory that indicates where your Dockerfile is located)
- You specify a path as your Dockerfile (which should be a filename, not a directory) -- I guess this might all work, but it's a bit confusing to read
- Indentation for - db and - /home/debian/dev/symfony:/var/www/symfony:rw is off -- this might not be a problem, but it's still difficult to read
As for the build failing with an error about being unable to connect to a database: could you update your question and share your Dockerfile for symfony? I suspect that you need to remove a reference to a database connection, as sketched below.
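For instance (a sketch, assuming the doctrine commands are removed from the image build), the database setup could run once the containers are up instead:
# start the database, then run the console commands against it
docker-compose up -d db
docker-compose run --rm php php bin/console doctrine:database:create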

Nginx routing with Docker Rails 5 Postgres app

When I tried to Dockerize Rails app into container and run Nginx on host I got problem with routing from outside in.
I can't access /public in rails app container. Instead I can see /var/www/app/public at host.
How can I route from Nginx to Docker Rails container?
nginx.conf:
upstream puma_app {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    client_max_body_size 4G;
    keepalive_timeout 10;

    error_page 500 502 504 /500.html;
    error_page 503 @503;

    server_name localhost app;
    root /var/www/app/public;
    try_files $uri/index.html $uri @puma_app;

    location @puma_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://puma_app;
        # limit_req zone=one;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
    }

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    location = /50x.html {
        root html;
    }

    location = /404.html {
        root html;
    }

    location @503 {
        error_page 405 = /system/maintenance.html;
        if (-f $document_root/system/maintenance.html) {
            rewrite ^(.*)$ /system/maintenance.html break;
        }
        rewrite ^(.*)$ /503.html break;
    }

    if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
        return 405;
    }

    if (-f $document_root/system/maintenance.html) {
        return 503;
    }

    location ~ \.(php|html)$ {
        return 405;
    }
}
docker-compose.yml:
version: '2'
services:
  app:
    build: .
    command: bundle exec puma -C config/puma.rb
    volumes:
      - 'app:/var/www/app'
      - 'public:/var/www/app/public'
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    env_file:
      - '.env'
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_USER: 'postgres_user'
    ports:
      - '5432:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
volumes:
  postgres:
  app:
  public:
Dockerfile
# Base image:
FROM ruby:2.4
# Install dependencies
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
# Set an environment variable where the Rails app is installed to inside of Docker image:
ENV RAILS_ROOT /var/www/app
RUN mkdir -p $RAILS_ROOT
ENV RAILS_ENV production
ENV RACK_ENV production
# Set working directory, where the commands will be ran:
WORKDIR $RAILS_ROOT
# Gems:
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN gem install bundler
RUN bundle install
COPY config/puma.rb config/puma.rb
# Copy the main application.
COPY . .
RUN bundle exec rake RAILS_ENV=production assets:precompile
VOLUME ["$RAILS_ROOT/public"]
EXPOSE 3000
# The default command that gets ran will be to start the Puma server.
CMD bundle exec puma -C config/puma.rb
I think you are trying to access /public from the host inside the container at /var/www/app/public.
You need to mount the host directory inside the container: you can use -v "/public:/var/www/app/public" while running the container.
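Expanded into a full command, that might look like this (a sketch; rails-app is a hypothetical image name):
# mount the host's /public over the container's public directory
docker run -v "/public:/var/www/app/public" -p 3000:3000 rails-app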
There are some issues with your Dockerfile. I'm not sure how you want to set up your docker image, but it seems like you are trying to use a public directory in a docker volume; I would suggest storing the compiled assets in the docker image itself. This way, you can be sure that the assets always travel together with the image.
Your current Dockerfile should run assets:precompile before the COPY . .; meaning the assets should be compiled into the public directory first, before copying it into the docker image.
Anyhow, you should try running a really simple docker app first before using it on a more complex project setup; here's a blog post that might help you (disclaimer: I wrote that post).
In docker, every container has its own IP address; containers are not local to each other. So you can't use the 127.0.0.1 IP in the Nginx container as the IP of the Rails container. Fortunately, docker containers can be linked together using their service names, so you must change your upstream to:
upstream puma_app {
    server app:3000;
}
(note that the server directive in an upstream block takes a host and port, not a URL with a scheme)
You should also add an Nginx container to your docker-compose file (this supposes your nginx conf files are in the config/nginx/conf.d dir):
version: '2'
services:
  app:
    build: .
    command: bundle exec puma -C config/puma.rb
    volumes:
      - app:/var/www/app
      - public:/var/www/app/public
      - nginx-confs:/var/www/app/config/nginx/conf.d
    ports:
      - 3000:3000
    depends_on:
      - postgres
    env_file: .env
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=postgres_user
    ports:
      - 5432:5432
    volumes:
      - postgres:/var/lib/postgresql/data
  nginx:
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    volumes:
      - nginx-confs:/etc/nginx/conf.d
volumes:
  postgres:
  app:
  public:
  nginx-confs:
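After docker-compose up, requests go through the nginx container, which resolves the app service by name; a quick check from the host (a sketch):
# expect headers from the Rails app via nginx on port 80
curl -I http://localhost/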