Nginx routing with Docker Rails 5 Postgres app - postgresql

When I tried to Dockerize my Rails app into a container and run Nginx on the host, I ran into a problem with routing from the outside in.
I can't reach /public inside the Rails app container; instead Nginx serves /var/www/app/public on the host.
How can I route from Nginx to the Docker Rails container?
nginx.conf:
upstream puma_app {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    client_max_body_size 4G;
    keepalive_timeout 10;

    error_page 500 502 504 /500.html;
    error_page 503 @503;

    server_name localhost app;
    root /var/www/app/public;
    try_files $uri/index.html $uri @puma_app;

    location @puma_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://puma_app;
        # limit_req zone=one;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
    }

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    location = /50x.html {
        root html;
    }

    location = /404.html {
        root html;
    }

    location @503 {
        error_page 405 = /system/maintenance.html;
        if (-f $document_root/system/maintenance.html) {
            rewrite ^(.*)$ /system/maintenance.html break;
        }
        rewrite ^(.*)$ /503.html break;
    }

    if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
        return 405;
    }

    if (-f $document_root/system/maintenance.html) {
        return 503;
    }

    location ~ \.(php|html)$ {
        return 405;
    }
}
docker-compose.yml:
version: '2'
services:
  app:
    build: .
    command: bundle exec puma -C config/puma.rb
    volumes:
      - 'app:/var/www/app'
      - 'public:/var/www/app/public'
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    env_file:
      - '.env'
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_USER: 'postgres_user'
    ports:
      - '5432:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
volumes:
  postgres:
  app:
  public:
Dockerfile
# Base image:
FROM ruby:2.4
# Install dependencies
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
# Set an environment variable where the Rails app is installed to inside of Docker image:
ENV RAILS_ROOT /var/www/app
RUN mkdir -p $RAILS_ROOT
ENV RAILS_ENV production
ENV RACK_ENV production
# Set the working directory, where the commands will be run:
WORKDIR $RAILS_ROOT
# Gems:
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN gem install bundler
RUN bundle install
COPY config/puma.rb config/puma.rb
# Copy the main application.
COPY . .
RUN bundle exec rake RAILS_ENV=production assets:precompile
VOLUME ["$RAILS_ROOT/public"]
EXPOSE 3000
# The default command that gets run will be to start the Puma server.
CMD bundle exec puma -C config/puma.rb

I think you are trying to access, from the host, the /public directory that lives inside the container at /var/www/app/public.
You need to mount a host directory inside the container. You can use -v "/public:/var/www/app/public" when running the container.
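For example, a minimal sketch with docker run (the image tag my-rails-app is a placeholder; adjust the host path to wherever your public files actually live):
# mount a host directory over the container's /var/www/app/public so the
# host Nginx and the Rails container see the same files
docker run -d \
  -p 3000:3000 \
  -v "/public:/var/www/app/public" \
  my-rails-app \
  bundle exec puma -C config/puma.rb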

There are some issues with your Dockerfile. I'm not sure how you want to set up your Docker image, but it seems like you are trying to keep the public directory in a Docker volume; I would suggest storing the compiled assets in the Docker image itself. That way you can be sure the assets always travel with the image.
Your current Dockerfile should run assets:precompile before the COPY . . step, meaning the assets should be compiled into the public directory first and then copied into the Docker image.
Anyhow, you should try running a really simple Docker app before using it on a more complex project setup; here's a blog post that might help you (disclaimer: I wrote that post).
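As a rough sketch of that idea (assuming the assets are precompiled on the build machine before the image is built; this is one interpretation of the suggestion, not the asker's actual setup), the tail of the Dockerfile could look like:
# ... same base image, gem installation and config copying as in the question ...

# public/ already contains the compiled assets (bundle exec rake assets:precompile
# was run before the build), so COPY bakes them straight into the image.
COPY . .

# No VOLUME for public/ -- the compiled assets live inside the image itself.
EXPOSE 3000
CMD bundle exec puma -C config/puma.rb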

In Docker, every container has its own IP address, and containers are not local to each other. So you can't use the 127.0.0.1 address in the Nginx container as the IP of the Rails container. Fortunately, Docker containers can reach each other by their service names. So you must change your upstream to the following (note that the server directive inside an upstream block takes host:port, without the http:// scheme):
upstream puma_app {
    server app:3000;
}
Also, you should add an Nginx container to your docker-compose file (assuming your Nginx conf files are in the config/nginx/conf.d directory):
version: '2'
services:
  app:
    build: .
    command: bundle exec puma -C config/puma.rb
    volumes:
      - app:/var/www/app
      - public:/var/www/app/public
      - nginx-confs:/var/www/app/config/nginx/conf.d
    ports:
      - 3000:3000
    depends_on:
      - postgres
    env_file: .env
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=postgres_user
    ports:
      - 5432:5432
    volumes:
      - postgres:/var/lib/postgresql/data
  nginx:
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    volumes:
      - nginx-confs:/etc/nginx/conf.d
volumes:
  postgres:
  app:
  public:
  nginx-confs:
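For reference, a minimal sketch of what the proxied server block in config/nginx/conf.d could then look like, reusing the directives from the question (the server_name and root are assumptions):
upstream puma_app {
    # "app" is the compose service name; Docker's embedded DNS resolves it
    server app:3000;
}

server {
    listen 80;
    server_name localhost;
    root /var/www/app/public;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://puma_app;
    }
}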

Related

Docker compose connect back and frontend

Problem: connecting my backend and frontend together using Docker Compose (NestJS and Next.js). I need them to run as a single cluster on AWS, and locally it doesn't work the same way either...
Everything worked when they ran in separate docker compose setups (creating the backend on AWS and using my frontend locally against the created endpoints), but together... I have no idea how to solve it. I have tried multiple solutions found on the internet.
Connecting using the Docker host on the front end:
const fetcher = (url: string) => fetch(url).then((res) => res.json())

useSWR('http://host.docker.internal:3000/grandetabela', fetcher, {
  onSuccess: (data, key, config) => {
    console.log(data)
  }
})
This results in the error GET http://host.docker.internal:3000/grandetabela net::ERR_NAME_NOT_RESOLVED, or, if I try localhost, it turns into a CORS issue.
I also tried inside a Next.js API route, where I don't get the CORS issue:
//
try {
  const data = await axios.get('http://host.docker.internal:3000/grandetabela')
    .then((resp: any) => {
      return resp
    })
  res.status(200).json(data)
} catch (error) {
  console.error(error)
  res.status(502).json({ error: 'error on sever request' })
}
If I try to use localhost instead, it causes another problem (AxiosError: Request failed), while calling some other API from the internet returns a response normally.
To give you an idea of what I tried, look at my docker compose... I have tried using the IPs; I can ping inside Docker, but I don't know how to reach host:3000, for example, to query my endpoints.
version: '3.1'
services:
  db:
    image: postgres
    # restart: always
    container_name: 'pgsql'
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.20.1
  adminer:
    image: adminer
    # restart: always
    ports:
      - "8080:8080"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.70.1
  node-ytalo-backend:
    image: ytalojacs/nestjsbasic_1-0
    ports:
      - "3000:3000"
    command: >
      sh -c "npm run build \
      npm run start:prod"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
      POSTGRES_HOST: db
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.50.1
  prophet:
    image: ytalojacs/prophetforecast-1_0
    ports:
      - "3001:3001"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.100.1
  front-end:
    depends_on:
      - node-ytalo-backend
    image: ytalojacs/frontendjsprophet
    environment:
      PORT: 3010
    command: >
      sh -c "npm run build \
      npm run start"
    ports:
      - "3010:3010"
    links:
      - "node-ytalo-backend:myback.org"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.128.1
# networks:
#   mynetwork:
#     ipam:
#       config:
#         - subnet: 172.20.0.0/16
When I use host.docker.internal with curl inside the container (docker exec bash), everything works as intended too. I can get a response from my backend...
Is there something I missed? .env?
You have a similar issue to a few others I have forwarded the same SO answer to.
I quote it here:
I am no expert on MERN (we mainly run Angular & .Net), but I have to warn you of one thing. We had an issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot the basic thing about web applications.
Applications run in your browser, whereas if you deploy an application stack somewhere else, the REST of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that contains a WEB application is there to only serve your browser (client) files and then the JS and the logic are executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service and have your FE call that proxy (set up a domain and routing), for instance Nginx, Traefik, ...; that proxy can then reference your backend by its service name, since it lives in the same environment as the API (see the sketch after these options).
Expose the HTTP port directly from the container; then your FE can call remoteServerIP:exposedPort and you will connect directly to the container's interface. (NOTE: I do not recommend this way for real use, only for testing direct connectivity without any proxy.)
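To make option 1 concrete on the front-end side, a minimal sketch of how the browser code would call the proxy instead of host.docker.internal (the /api prefix and the proxy routing are assumptions, not part of the original setup):
import useSWR from 'swr';

// The browser calls a relative URL on the same host that served the page;
// the reverse proxy (nginx, Traefik, ...) forwards /api/* to the backend
// service by its compose service name, e.g. node-ytalo-backend:3000.
const fetcher = (url: string) => fetch(url).then((res) => res.json());

export function useGrandeTabela() {
  // '/api/grandetabela' is a placeholder route behind the assumed proxy mapping
  return useSWR('/api/grandetabela', fetcher);
}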
UPDATE 2022-10-05
Added the nginx config from our utility server, showing how the nginx running inside a container forwards requests to the other containers on the same network.
Nginx config:
server_tokens off;
# ----------------------------------------------------------------------------------------------------
upstream local-docker-verdaccio {
    server verdaccio:4873; # verdaccio is the docker compose service name and 4873 is the port the container listens on internally
}
# ----------------------------------------------------------------------------------------------------
# ----------------------------------------------------------------------------------------------------
# si.company.verdaccio
server {
    listen 443 http2 ssl;
    server_name verdaccio.company.org;
    # ----------------------------------------------------------------------------------------------------
    add_header Strict-Transport-Security "max-age=31536000" always;

    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    ssl_certificate /etc/tls/si.company.verdaccio-chain.crt;
    ssl_certificate_key /etc/tls/si.company.verdaccio-unencrypted.key;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_protocols TLSv1.2 TLSv1.3;
    # ----------------------------------------------------------------------------------------------------
    location / {
        proxy_pass http://local-docker-verdaccio/;
        proxy_redirect off;
    }
}

server {
    listen 80;
    server_name verdaccio.company.org;
    return 301 https://verdaccio.company.org$request_uri;
}
# ----------------------------------------------------------------------------------------------------
And the corresponding docker-compose.yml file.
version: "3.7"
services:
proxy:
container_name: proxy
image: nginx:alpine
ports:
- "443:443"
restart: always
volumes:
- 5fb31181-8e07-4304-9276-9da8c3a581c9:/etc/nginx/conf.d:ro
- /etc/tls/:/etc/tls:ro
verdaccio:
container_name: verdaccio
depends_on:
- proxy
expose:
- "4873"
image: verdaccio/verdaccio:4
restart: always
volumes:
- d820f373-d868-40ec-bb6b-08a99efddc06:/verdaccio
- 542b4ca1-aefe-43a8-8fb3-804b46049bab:/verdaccio/conf
- ab018ca9-38b8-4dad-bbe5-bd8c41edff77:/verdaccio/storage
volumes:
542b4ca1-aefe-43a8-8fb3-804b46049bab:
external: true
5fb31181-8e07-4304-9276-9da8c3a581c9:
external: true
ab018ca9-38b8-4dad-bbe5-bd8c41edff77:
external: true
d820f373-d868-40ec-bb6b-08a99efddc06:
external: true

Connect multiple containers, nginx and php through fastcgi

I have three containers, mysql, phpfpm and nginx.
When I try to open localhost:8080 I get this error:
[error] 11#11: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.27.0.1, server: _,.
request: "GET / HTTP/1.1", upstream: "fastcgi://172.27.0.3:9000", host: "localhost:8080"
Can anybody help me with this? What am I missing?
Here is my docker-compose.yaml:
version: '3'
services:
  php:
    build: php
    container_name: phpfpm
    expose:
      - '9002'
      - '9000'
    depends_on:
      - db
    volumes:
      - /srv/www/mito/app:/var/www/html
      - ./dockerlive/logs:/var/log
    command: /bin/bash -c "rm -rf /var/run/php && mkdir /var/run/php && rm -rf /run/php && mkdir /run/php && /usr/sbin/php-fpm7.4 -F -R"
  nginx:
    build: nginx
    container_name: webserver
    ports:
      - '8080:80'
    depends_on:
      - php
      - db
    volumes:
      - /srv/www/mito/app:/var/www/html
      - ./dockerlive/logs:/var/log/nginx
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80
    command: /bin/bash -c "nginx -g 'daemon off;'"
  db:
    build: database
    container_name: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: mito
and the nginx config:
server {
    listen 80;
    listen [::]:80;
    server_name _;
    root /var/www/html;
    index index.html index.php;

    #error_log /var/log/nginx/mito.localhost-error.log;
    #access_log /var/log/nginx/mito.localhost-acces.log;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_pass phpfpm:9000;
    }
}
I found a useful idea somewhere and implemented it.
I added an entry to both the nginx and the php volumes:
- php-fpm-socket:/var/run/php
and I also defined it in the main volumes section, without any parameters.
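In compose terms that looks roughly like this (a sketch based on the file above; only the volume-related lines are shown). With the shared socket directory, nginx could then point fastcgi_pass at the commented-out unix:/var/run/php/php7.4-fpm.sock instead of the TCP address, assuming the php-fpm pool is configured to listen on that socket:
services:
  php:
    volumes:
      - /srv/www/mito/app:/var/www/html
      - ./dockerlive/logs:/var/log
      - php-fpm-socket:/var/run/php
  nginx:
    volumes:
      - /srv/www/mito/app:/var/www/html
      - ./dockerlive/logs:/var/log/nginx
      - php-fpm-socket:/var/run/php

volumes:
  php-fpm-socket: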

Add WP-CLI container to LEMP stack

Docker version 18.09.1, build 4c52b90
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
I have a working LEMP stack with following docker-compose file.
version: "3.7"
services:
php:
build:
context: './config/php/'
networks:
- dev
volumes:
- /WORK/:/var/www/html/
working_dir: /var/www/html/
container_name: php
nginx:
image: nginx:1.14-alpine
depends_on:
- php
- mysql
networks:
- dev
ports:
- "80:80"
volumes:
- ./config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
- /WORK/:/var/www/html/
container_name: nginx
mysql:
image: mysql:5.7
restart: always
ports:
- "3306:3306"
volumes:
- /MYSQL/:/var/lib/mysql
networks:
- dev
environment:
MYSQL_ROOT_PASSWORD: "db_root"
MYSQL_DATABASE: "test_db"
MYSQL_USER: "test_db_user"
MYSQL_PASSWORD: "test_db_pass"
container_name: mysql
wpcli:
image: wordpress:cli
depends_on:
- mysql
environment:
WORDPRESS_DB_HOST: mysql
WORDPRESS_DB_NAME: test_db
WORDPRESS_DB_USER: test_db_user
WORDPRESS_DB_PASSWORD: test_db_pass
networks:
- dev
command: '--path=`/var/www/html/WP-Site/`'
volumes:
- /WORK/:/var/www/html/
- /MYSQL/:/var/lib/mysql
working_dir: /var/www/html/
container_name: wpcli
networks:
dev:
The Dockerfile for the PHP image contains this.
FROM php:7.2-fpm-alpine
# docker-entrypoint.sh dependencies
RUN apk add --no-cache \
# in theory, docker-entrypoint.sh is POSIX-compliant, but priority is a working, consistent image
bash \
# BusyBox sed is not sufficient for some of our sed expressions
sed
# https://github.com/docker-library/wordpress/blob/master/php7.3/fpm-alpine/Dockerfile
# install the PHP extensions we need
RUN set -ex; \
\
apk add --no-cache --virtual .build-deps \
libjpeg-turbo-dev \
libpng-dev \
libzip-dev \
; \
\
docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; \
docker-php-ext-install gd mysqli opcache zip; \
\
runDeps="$( \
scanelf --needed --nobanner --format '%n#p' --recursive /usr/local/lib/php/extensions \
| tr ',' '\n' \
| sort -u \
| awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' \
)"; \
apk add --virtual .wordpress-phpexts-rundeps $runDeps; \
apk del .build-deps
The NGINX conf is this.
upstream php {
    server php:9000;
}

server {
    listen 80;
    server_name localhost;

    ## Your only path reference.
    root /var/www/html;

    ## This should be in your http block and if it is, it's not needed here.
    index index.php index.html index.htm;

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location / {
        # https://nginxlibrary.com/enable-directory-listing/
        autoindex on;
        # This is cool because no php is touched for static content.
        # include the "?$args" part so non-default permalinks doesn't break when using query string
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass php;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
All services except WP-CLI work. My folder structure is so that under the main WORK folder I have subfolders, each with a project in them.
Such projects can be WordPress but can also be using another CMS or just be small static sites.
Under http://localhost/ I see an overview of the WORK folder with all the subfolders. Clicking on any project folder, e.g. WP-Site, will lead me to e.g. http://localhost/WP-Site/. There I have a working WordPress install that connects to the PHP and MYSQL services just fine.
But when I try to use WP-CLI, it either tells me that it cannot find a working WordPress install, or, when I pass the command in the service and give it the path of one of the WordPress projects, it just exits without an error message.
Do I really need a WordPress image to be able to use the WP-CLI image? In the WP-CLI service I also set the same environment variables as for the MYSQL service, so that WP-CLI knows where to connect to.
If there is any way to run a WP-CLI container and have that available under the main WORK folder, so for all projects I might make, well that would be great.
I thought of installing everything into one image but then this defeats the purpose of Docker, to be able to have things separate and exchangeable. So I like to be able to make a WP-CLI container that indeed is able to connect to my PHP and MYSQL containers.

nginx angular 5 reverse-proxy mongo image

I have been able to get the reverse proxy working for my Angular 5 project with the files below. I am very new to Angular and nginx. Before I dockerized the client, nginx, etc., I just installed everything under one path.
So I just ran one npm install and worked with npm start, ng build --prod and ng serve.
I am just a bit confused: with Angular 5 I thought I was separating the client from the server, knowing that Angular runs most things client side. However, right now it looks like my app.js is still being called from within the same 'client' container.
Am I supposed to separate and containerize the Express server, and what are the benefits of doing this?
I am also going to run the mongo image in a container. Am I correct in linking the client container to mongo?
nginx default.conf
server {
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://client:4200/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
docker-compose.yml
version: '2'
services:
  # Build the container using the client Dockerfile
  client:
    build: ./
    # This line maps the contents of the client folder into the container.
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
    depends_on:
      - mongo
  mongo:
    image: mongo
    container_name: "mongodb"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
    command: mongod --smallfiles --logpath=/dev/null # --quiet
  # Build the container using the nginx Dockerfile
  nginx:
    build: ./nginx
    # Map Nginx port 80 to the local machine's port 80
    ports:
      - "80:80"
    # Link the client container so that Nginx will have access to it
    links:
      - client
Dockerfile
# Create a new image from the base nodejs 7 image.
FROM node:8.1.4-alpine as builder
# Create the target directory in the image
RUN mkdir -p /usr/src/app
# Set the created directory as the working directory
WORKDIR /usr/src/app
# Copy the package.json inside the working directory
COPY package.json /usr/src/app
# Install required dependencies
RUN npm install
# Copy the client application source files. You can use .dockerignore
# to exclude files. It works just as .gitignore does.
COPY . /usr/src/app
# Open port 4200. This is the port that our development server uses
EXPOSE 3000
# Start the application. This is the same as running ng serve.
CMD ["npm", "start"]
Even though you are running your client (angular) and server (node) in the same container, they are still "separate". They are physically located & served on the same server, but run separately. Your api layer runs on node and your angular application runs on the client.
What you have is valid. I have pretty much the same setup. I have 2 containers. A node container that runs express to serve my api layer and my angular application. Then I have the mongo container as the db.
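As an illustration of that layout, a minimal sketch of a single Express app serving both the API and the built Angular bundle (the port, file paths, and route names are assumptions, not the answerer's actual code):
import express from 'express';
import path from 'path';

const app = express();

// API layer
app.get('/api/health', (_req, res) => res.json({ ok: true }));

// static Angular bundle (output of ng build --prod)
app.use(express.static(path.join(__dirname, 'dist')));

// let the Angular router handle everything else
app.get('*', (_req, res) => res.sendFile(path.join(__dirname, 'dist', 'index.html')));

app.listen(3000);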

Failing to execute nginx proxy_pass directive for a Dancer2 app inside a Docker container

I have tried to orchestrate a Dancer2 app, which runs on Starman, using docker-compose. I'm failing to integrate nginx; it crashes with a 502 Bad Gateway error.
Which inside my server logs looks like this:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.22.0.1,
My docker-compose file looks like this:
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    links:
      - pearlbee
    volumes_from:
      - pearlbee
  pearlbee:
    build: pearlbee
    command: carton exec starman bin/app.psgi
    ports:
      - "5000:5000"
    environment:
      - MYSQL_PASSWORD=secret
    depends_on:
      - mysql
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_USER=root
My nginx.conf file looks like this:
user root nogroup;
worker_processes auto;

events { worker_connections 512; }

http {
    include /etc/nginx/sites-enabled/*;

    upstream pb {
        # this is the localhost that starts starman
        #server 127.0.0.1:5000;
        # the name of the docker-compose service that creates the app
        server pearlbee;
        # both return the same error message
    }

    server {
        listen *:80;
        #root /usr/share/nginx/html/;
        #index index.html 500.html favico.ico;

        location / {
            proxy_pass http://pb;
        }
    }
}
You're right to use the service name as the upstream server for Nginx, but you need to specify the port:
upstream pb {
    server pearlbee:5000;
}
Within the Docker network - which Compose creates for you - services can access each other by name. Also, you don't need to publish ports for other containers to use, unless you also want to access them externally. The Nginx container will be able to access port 5000 on your app container, you don't need to publish it to the host.
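In compose terms, that means the pearlbee service does not need a ports: mapping at all for Nginx to reach it; a sketch of just that service (the rest of the file stays as in the question):
pearlbee:
  build: pearlbee
  command: carton exec starman bin/app.psgi
  expose:
    - "5000"   # documentation only; nginx reaches it as pearlbee:5000 over the compose network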