Connect nginx on the host with Unicorn inside a Docker container via unix sockets

Starting to dockerize my Rails application, I am facing the following problem:
My idea was to have each web application with its app server (Unicorn) and dependencies running in its own Docker container, and the databases also running in separate containers, using docker-compose to set it all up.
Outside the containers, Nginx routes traffic to the specific container via unix sockets, depending on the domain. (I didn't want nginx in a container, to reduce complexity and to avoid having to maintain multiple nginx instances running in multiple containers for multiple webapps.)
Before Docker, my app server and nginx were connected via unix sockets. After dockerizing, this no longer works; only connecting them via TCP ports works now, which I would like to avoid.
Is there any way to connect Nginx on the host via unix sockets with the app server inside a container? If not, what is best practice here?
My approach was to use a shared volume as the location for the socket file, but nginx can't access the socket created by Unicorn:
Socket created by unicorn:
srwxrwxrwx 1 root root 0 Nov 14 14:53 unicorn.sock=
Nginx error:
*2 connect() to unix:/ruby-webapps/myapp/shared/sockets/unicorn.sock failed (13: Permission denied) while connecting to upstream
Nginx sites-available/myapp:
upstream myapp {
    # Path to Unicorn SOCK file, as defined previously
    server unix:/ruby-webapps/myapp/shared/sockets/unicorn.sock fail_timeout=0;
}
server {
    listen 80 default_server;
    ...
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name myapp.de www.myapp.de;
    root /ruby-webapps/myapp;
    try_files $uri/index.html $uri @MyApp;
    location @MyApp {
        proxy_pass http://myapp;
        #proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
    }
}
docker-compose.yml:
version: '2'
services:
  postgresmyapp:
    image: postgres
    env_file: .env
  myapp:
    build: .
    env_file: .env
    command: supervisord -c /myapp/unicorn_supervisord.conf
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    links:
      - postgresmyapp
config/unicorn.rb:
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
working_directory app_dir
rails_env = ENV['RAILS_ENV'] || 'production'
# Set unicorn options
worker_processes 2
preload_app true
timeout 30
# Set up socket location
listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64
#listen(3000, backlog: 64)
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"
pid "#{shared_dir}/pids/unicorn.pid"

Related

Docker compose: connect backend and frontend

Problem: connecting my backend and frontend together using docker-compose (NestJS and Next.js). I need them to run as a single cluster on AWS; locally it doesn't work the same way either...
Everything worked with separate docker-compose setups (creating the backend on AWS and using my frontend locally against the created endpoints), but together... I have no idea how to solve it. I have tried multiple solutions found on the internet.
Connecting using the Docker host from the frontend:
const fetcher = (url: string) => fetch(url).then((res) => res.json())
useSWR('http://host.docker.internal:3000/grandetabela', fetcher, {
  onSuccess: (data, key, config) => {
    console.log(data)
  }
})
This results in the error GET http://host.docker.internal:3000/grandetabela net::ERR_NAME_NOT_RESOLVED, or, if I try localhost, it turns into a CORS issue.
The same happens inside the API in Next.js, but there I don't get the CORS issue:
try {
  const data = await axios.get('http://host.docker.internal:3000/grandetabela')
    .then((resp: any) => {
      return resp
    })
  res.status(200).json(data)
} catch (error) {
  console.error(error)
  res.status(502).json({ error: 'error on server request' })
}
If I try localhost instead, it causes another problem, AxiosError: Request failed; and if I use some other API from the internet, I get a response normally.
To give an idea of what I have tried, look at my docker-compose file... I have tried using the IPs... I can ping inside Docker, but I don't know how to reach host:3000, for example, to query my endpoints.
version: '3.1'
services:
  db:
    image: postgres
    # restart: always
    container_name: 'pgsql'
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.20.1
  adminer:
    image: adminer
    # restart: always
    ports:
      - "8080:8080"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.70.1
  node-ytalo-backend:
    image: ytalojacs/nestjsbasic_1-0
    ports:
      - "3000:3000"
    command: >
      sh -c "npm run build && npm run start:prod"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
      POSTGRES_HOST: db
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.50.1
  prophet:
    image: ytalojacs/prophetforecast-1_0
    ports:
      - "3001:3001"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.100.1
  front-end:
    depends_on:
      - node-ytalo-backend
    image: ytalojacs/frontendjsprophet
    environment:
      PORT: 3010
    command: >
      sh -c "npm run build && npm run start"
    ports:
      - "3010:3010"
    links:
      - "node-ytalo-backend:myback.org"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.128.1

# networks:
#   mynetwork:
#     ipam:
#       config:
#         - subnet: 172.20.0.0/16
When I use host.docker.internal with curl inside the container (docker exec ... bash), everything works as intended too. I can get a response from my backend...
Is there something I missed? .env?
You have an issue similar to a few others I have forwarded the same SO answer to.
I quote it here:
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you of one thing. We had an issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot the basic thing about web applications.
Applications run in your browser, whereas if you deploy an application stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that contains a web application is there only to serve your browser (client) the files; the JS and the logic are then executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service and have your FE call that proxy (set up a domain and routing), for instance Nginx, Traefik, ...; that proxy can then reference your backend by service name, since it lives in the same environment as the API.
Expose the HTTP port directly from the container; your FE can then call remoteServerIP:exposedPort and connect directly to the container's interface. (NOTE: I do not recommend this for real use, only for testing direct connectivity without any proxy; a quick check is sketched below.)
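Whichever option you pick, it helps to test each hop with curl first. A rough sketch against the question's compose file (service and port names taken from it; this assumes curl is available in the images):
# from the docker host: the published port answers on localhost
curl http://localhost:3000/grandetabela

# from another container on the same compose network: use the service name
docker-compose exec front-end curl http://node-ytalo-backend:3000/grandetabela

# from a browser on another machine, only the docker host's address works;
# host.docker.internal resolves inside containers (Docker Desktop), never in a browser
curl http://<docker-host-ip>:3000/grandetabela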
UPDATE 2022-10-05
Added the nginx config from the utility server, showing how to get request calls from the nginx running inside the container to the other containers on the same network.
Nginx config:
server_tokens off;
# ----------------------------------------------------------------------------------------------------
upstream local-docker-verdaccio {
    # verdaccio is the docker compose service name; 4873 is the port
    # on which the container listens internally
    server verdaccio:4873;
}
# ----------------------------------------------------------------------------------------------------
# si.company.verdaccio
server {
    listen 443 http2 ssl;
    server_name verdaccio.company.org;
    # ----------------------------------------------------------------------------------------------------
    add_header Strict-Transport-Security "max-age=31536000" always;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    ssl_certificate /etc/tls/si.company.verdaccio-chain.crt;
    ssl_certificate_key /etc/tls/si.company.verdaccio-unencrypted.key;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_protocols TLSv1.2 TLSv1.3;
    # ----------------------------------------------------------------------------------------------------
    location / {
        proxy_pass http://local-docker-verdaccio/;
        proxy_redirect off;
    }
}
server {
    listen 80;
    server_name verdaccio.company.org;
    return 301 https://verdaccio.company.org$request_uri;
}
# ----------------------------------------------------------------------------------------------------
And the corresponding docker-compose.yml file:
version: "3.7"
services:
proxy:
container_name: proxy
image: nginx:alpine
ports:
- "443:443"
restart: always
volumes:
- 5fb31181-8e07-4304-9276-9da8c3a581c9:/etc/nginx/conf.d:ro
- /etc/tls/:/etc/tls:ro
verdaccio:
container_name: verdaccio
depends_on:
- proxy
expose:
- "4873"
image: verdaccio/verdaccio:4
restart: always
volumes:
- d820f373-d868-40ec-bb6b-08a99efddc06:/verdaccio
- 542b4ca1-aefe-43a8-8fb3-804b46049bab:/verdaccio/conf
- ab018ca9-38b8-4dad-bbe5-bd8c41edff77:/verdaccio/storage
volumes:
542b4ca1-aefe-43a8-8fb3-804b46049bab:
external: true
5fb31181-8e07-4304-9276-9da8c3a581c9:
external: true
ab018ca9-38b8-4dad-bbe5-bd8c41edff77:
external: true
d820f373-d868-40ec-bb6b-08a99efddc06:
external: true
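A quick sanity check that the proxy container really reaches verdaccio by service name, using the busybox tools shipped in nginx:alpine (a sketch):
# resolve the service name from inside the proxy container
docker-compose exec proxy nslookup verdaccio
# fetch the verdaccio UI across the internal network
docker-compose exec proxy wget -qO- http://verdaccio:4873 | head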

Could not connect to Postgres from Symfony 4 + Docker

I ran into a strange problem. I've created a docker-compose file to build php + nginx + postgres services:
version: '2'
services:
  db:
    image: orchardup/postgresql
    ports:
      - "5433:5432"
    environment:
      LC_ALL: C.UTF-8
      POSTGRESQL_USER: postgres
      POSTGRESQL_DB: db
      POSTGRESQL_PASS: postgres
  php:
    build: .docker/php-fpm
    ports:
      - "9002:9002"
    volumes:
      - .:/var/www/symfony:cached
      - ./var/log/symfony:/var/www/symfony/var/log:cached
    links:
      - db
  nginx:
    build: .docker/nginx
    ports:
      - "8001:80"
    links:
      - php
    volumes_from:
      - php
    volumes:
      - ./var/log/nginx/:/var/log/nginx:cached
After that I created the DB schema by running bin/console doctrine:schema:update --force. The tables and migrations were created just fine, so the DB connection seems OK. I checked this by connecting to the db from my machine through psql with the credentials from .env; the tables are there.
But when I go to the web page and try to authorize, I get an error telling me the connection is not OK:
Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5433?
I checked that in both cases I have the dev environment - from the web page and from the console. I tried ports 5433 and 5432 with no success. I tried everything I could find for 3 hours.
This is the output from the postgres container:
# netstat -tlpn | grep 5432
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 12/postgres
tcp6 0 0 :::5432 :::* LISTEN 12/postgres
# grep listen /etc/postgresql/9.3/main/postgresql.conf
listen_addresses = '*' # what IP address(es) to listen on;
The only way for containers to talk to each other is through IPs. By linking multiple containers together through --link (or links in docker-compose), docker creates a secure tunnel between those two containers so that we don't need to expose any ports externally.
If you try to connect to your database from your local environment through a database client, you will be able to connect to it from 127.0.0.1:5433 as the port is exposed to your host through the docker-compose file. This is the reason why your schema update command succeeded.
Docker exposes connectivity information for the source container to the recipient container in two ways:
Environment variables,
Updating the /etc/hosts file.
Ref: https://docs.docker.com/network/links/#communication-across-links
In order to connect to your database (which is running in the db container) from the php container, you will need to get the host of your db container through the environment variable DB_PORT_5432_TCP_ADDR (I might be wrong on this, but type env in your php container's terminal to verify; you can get a shell there with docker exec).
Alternatively, you can use the second method: just db as the hostname instead of 127.0.0.1, since docker updated the /etc/hosts file in the php container to map your linked container's name to its IP. In this case, the value mapped to the hostname db is the same as the value stored in the environment variable DB_PORT_5432_TCP_ADDR.
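Concretely for Symfony 4, that second method means pointing DATABASE_URL at the service name and the container-internal port. A sketch of the .env entry, with the credentials taken from the compose file above:
# use the linked service name "db" and the internal port 5432
# (5433 is only the port published to the host)
DATABASE_URL=postgresql://postgres:postgres@db:5432/db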

nginx angular 5 reverse-proxy mongo image

I've been able to get the reverse proxy working for my Angular 5 project with the files below. I am very new to Angular and nginx. Before I dockerized the client and nginx, I just installed everything under one path.
So I just ran one npm install and worked with npm start, ng build --prod and ng serve.
I am just a bit confused about Angular 2/5: I thought I was separating the client from the server, knowing that Angular runs most things client side. However, right now it looks like my app.js is still being called from within the same 'client' container.
Am I supposed to separate and containerize the express server, and what are the benefits of doing this?
I am also going to run the mongo image in a container. Am I correct in linking the client container to mongo?
nginx default.conf
server {
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://client:4200/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
docker-compose.yml
version: '2'
services:
  # Build the container using the client Dockerfile
  client:
    build: ./
    # This line maps the contents of the client folder into the container.
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
    depends_on:
      - mongo
  mongo:
    image: mongo
    container_name: "mongodb"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
    command: mongod --smallfiles --logpath=/dev/null # --quiet
  # Build the container using the nginx Dockerfile
  nginx:
    build: ./nginx
    # Map Nginx port 80 to the local machine's port 80
    ports:
      - "80:80"
    # Link the client container so that Nginx will have access to it
    links:
      - client
Dockerfile
# Create a new image from the base nodejs image.
FROM node:8.1.4-alpine as builder
# Create the target directory in the image
RUN mkdir -p /usr/src/app
# Set the created directory as the working directory
WORKDIR /usr/src/app
# Copy the package.json into the working directory
COPY package.json /usr/src/app
# Install required dependencies
RUN npm install
# Copy the client application source files. You can use .dockerignore
# to exclude files. Works just as .gitignore does.
COPY . /usr/src/app
# Open the port that our development server uses
EXPOSE 3000
# Start the application. This is the same as running ng serve.
CMD ["npm", "start"]
Even though you are running your client (Angular) and server (Node) in the same container, they are still "separate": they are located and served on the same server, but run separately. Your API layer runs on Node and your Angular application runs in the client (the browser).
What you have is valid. I have pretty much the same setup: two containers, a Node container that runs Express to serve my API layer and my Angular application, and the mongo container as the db.
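The same service-name rule applies to the mongo link: inside the client container, the database is reachable as mongo, not localhost. A sketch of what the express side would use (the database name is a placeholder, and the entry point path is the app.js mentioned in the question):
# inside the client container the mongo service resolves by name
export MONGO_URL="mongodb://mongo:27017/mydb"   # "mydb" is a placeholder
node app.js                                     # the express entry point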

Failing to execute nginx proxy_pass directive for a Dancer2 app inside a Docker container

I have tried to orchestrate a Dancer2 app which runs on Starman using docker-compose. I'm failing to integrate nginx; it crashes with a 502 Bad Gateway error.
Inside my server the error looks like this:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.22.0.1,
My docker-compose file looks like this:
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    links:
      - pearlbee
    volumes_from:
      - pearlbee
  pearlbee:
    build: pearlbee
    command: carton exec starman bin/app.psgi
    ports:
      - "5000:5000"
    environment:
      - MYSQL_PASSWORD=secret
    depends_on:
      - mysql
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_USER=root
My nginx.conf file looks like this:
user root nogroup;
worker_processes auto;
events { worker_connections 512; }
http {
    include /etc/nginx/sites-enabled/*;
    upstream pb {
        # the localhost address that starman binds to:
        #server 127.0.0.1:5000;
        # the name of the docker-compose service that creates the app:
        server pearlbee;
        # both return the same error message
    }
    server {
        listen *:80;
        #root /usr/share/nginx/html/;
        #index index.html 500.html favico.ico;
        location / {
            proxy_pass http://pb;
        }
    }
}
You're right to use the service name as the upstream server for Nginx, but you need to specify the port:
upstream pb {
    server pearlbee:5000;
}
Within the Docker network - which Compose creates for you - services can access each other by name. Also, you don't need to publish ports for other containers to use them, only if you want to access them externally. The Nginx container can access port 5000 on your app container without it being published to the host.
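If in doubt, name resolution and connectivity can be verified from inside the containers (a sketch; getent ships in the Debian-based nginx image, and the netstat check assumes net-tools exists in the app image):
# show the IP the compose network assigned to the app service
docker-compose exec web getent hosts pearlbee
# confirm starman is actually listening on port 5000
docker-compose exec pearlbee netstat -tlpn | grep 5000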

rolling deployment for docker containers behind load balancer

I have a problem with rolling deployments of docker containers behind a load balancer.
Here are the contents of my docker-compose.yml file:
nginx:
  image: nginx_image
  links:
    - node1:node1
    - node2:node2
    - node3:node3
  ports:
    - "80:80"
node1:
  image: nodeapi_image
  ports:
    - "8001"
node2:
  image: nodeapi_image
  ports:
    - "8001"
node3:
  image: nodeapi_image
  ports:
    - "8001"
and here is my nginx.conf:
worker_processes 4;
events { worker_connections 1024; }
http {
    upstream node-app {
        least_conn;
        server node1:8001 weight=10 max_fails=3 fail_timeout=30s;
        server node2:8001 weight=10 max_fails=3 fail_timeout=30s;
        server node3:8001 weight=10 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 80;
        listen 443 ssl;
        # ssl on;
        ssl_certificate /etc/nginx/ssl/imago.io.chain.crt;
        ssl_certificate_key /etc/nginx/ssl/imago.io.key;
        location / {
            proxy_pass http://node-app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
If I have a newly built image I want to deploy, I have to stop a node container, remove it and recreate it with the new image. The problem here is that the new container gets a new IP and the nginx container doesn't know about that new IP; once I have recreated all 3 containers behind the load balancer, the app won't serve any more, because the IPs in the nginx container's /etc/hosts and environment variables are no longer up to date.
I could SSH into each container, update its code by pulling from the git repo and restart the process, but that seems just wrong to me. What is the right way to do this?
There is an easier way to achieve this; take the following docker-compose.yml file as an example:
lb:
  image: tutum/haproxy
  links:
    - app
  ports:
    - "80:80"
app:
  image: tutum/hello-world
This docker-compose file describes two services:
lb: a load balancer which uses the tutum/haproxy image
app: a sample webapp listening on port 80
If you start those services naïvely with docker-compose up -d, you will end up with only 2 containers (the load balancer and the web app).
But if you run docker-compose scale app=3 and then run docker-compose up -d again, you will end up with 4 containers: the load balancer plus 3 load-balanced web apps.
The key player here is the tutum/haproxy docker image, which is able to discover the different containers it is linked to.
A similar solution is to use Jason Wilder's nginx-proxy image, which has the advantage of discovering new nodes live, so you won't have to restart the lb service:
lb:
  image: jwilder/nginx-proxy
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
  ports:
    - "80:80"
app:
  image: tutum/hello-world
  environment:
    VIRTUAL_HOST: www.mysite.com
The VIRTUAL_HOST environment variable must be set to the domain name that resolves to the IP address of your docker host.
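Before touching DNS, the routing can be checked by sending the Host header by hand from the docker host (a sketch):
# nginx-proxy routes on the Host header, so this should reach an app container
curl -H "Host: www.mysite.com" http://localhost/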
Another option is to use Traefik:
lb:
  image: traefik
  command: --docker
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
app:
  image: tutum/hello-world
  labels:
    traefik.frontend.rule: Host:www.mysite.com
The traefik.frontend.rule label must define a Traefik rule set to the domain name that resolves to the IP address of your docker host.
Traefik also offers different load balancing strategies and circuit breakers.
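With either of these discovery-based proxies, the rolling deployment itself becomes a matter of recreating the app containers and letting the proxy re-discover them; a rough sketch, assuming the lb/app compose file above:
# pull the new image, then recreate only the app containers;
# the proxy notices each replacement via the docker socket,
# so it never needs to be restarted
docker-compose pull app
docker-compose up -d --no-deps app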