Problem: I need to connect my backend and frontend together using Docker Compose (NestJS and Next.js) and run them on a single cluster at AWS. Locally it doesn't work the same way either.
Everything worked when they ran in separate Compose files (the backend deployed at AWS and the frontend running locally against the published endpoints), but together I have no idea how to solve it. I have tried multiple solutions found on the internet.
Connecting using the Docker host from the front end:
const fetcher = (url: string) => fetch(url).then((res) => res.json())

useSWR('http://host.docker.internal:3000/grandetabela', fetcher, {
  onSuccess: (data, key, config) => {
    console.log(data)
  },
})
This results in the error GET http://host.docker.internal:3000/grandetabela net::ERR_NAME_NOT_RESOLVED, and if I try localhost instead I run into a CORS issue.
I also tried it inside a Next.js API route, where I don't get the CORS issue:
// Next.js API route handler
try {
  const resp = await axios.get('http://host.docker.internal:3000/grandetabela')
  // return only the payload; serializing the whole Axios response fails
  res.status(200).json(resp.data)
} catch (error) {
  console.error(error)
  res.status(502).json({ error: 'error on server request' })
}
If I use localhost as the host, it causes another problem, an AxiosError: Request failed; yet if I call some other API on the internet, I get a response normally.
To give an idea of what I have tried, here is my docker-compose file. I have also tried fixed IPs: I can ping between containers, but I still can't reach host:3000, for example, to hit my endpoints.
version: '3.1'

services:
  db:
    image: postgres
    # restart: always
    container_name: 'pgsql'
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.20.1

  adminer:
    image: adminer
    # restart: always
    ports:
      - "8080:8080"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.70.1

  node-ytalo-backend:
    image: ytalojacs/nestjsbasic_1-0
    ports:
      - "3000:3000"
    command: >
      sh -c "npm run build && npm run start:prod"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
      POSTGRES_HOST: db
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.50.1

  prophet:
    image: ytalojacs/prophetforecast-1_0
    ports:
      - "3001:3001"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.100.1

  front-end:
    depends_on:
      - node-ytalo-backend
    image: ytalojacs/frontendjsprophet
    environment:
      PORT: 3010
    command: >
      sh -c "npm run build && npm run start"
    ports:
      - "3010:3010"
    links:
      - "node-ytalo-backend:myback.org"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.128.1

# networks:
#   mynetwork:
#     ipam:
#       config:
#         - subnet: 172.20.0.0/16
When I use host.docker.internal with curl inside the container (docker exec ... bash), everything works as intended there too; I can get responses from my backend.
Is there something I missed? A .env file?
You have an issue similar to a few others I have forwarded the same SO answer to, so I quote it here:
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you of one thing. We had the same issue when setting this up: it worked locally in containers but not on our deployment servers, because we had forgotten a basic fact about web applications.
The application runs in your browser, whereas when you deploy a stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that contains a web application only serves files to your browser (the client); the JS and the logic are then executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service, and have your FE call that proxy (set up a domain and routing), for instance Nginx, Traefik, ... The proxy can then reference your backend by service name, since it lives in the same environment as the API. (See the front-end sketch right after these options.)
Expose the HTTP port directly from the container; then your FE can call remoteServerIP:exposedPort and you will connect directly to the container's interface. (NOTE: I do not recommend this way for real use, only for testing direct connectivity without any proxy.)
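A minimal front-end sketch of the first option, assuming a hypothetical public domain api.example.com that routes to the proxy (the browser can resolve a public domain, but it can never resolve a Compose service name or host.docker.internal):

import useSWR from 'swr'

const fetcher = (url: string) => fetch(url).then((res) => res.json())

export function useGrandeTabela() {
  // api.example.com is a placeholder: any domain/IP that reaches the
  // proxy from the user's browser works; the proxy then forwards the
  // request to the backend service over the Compose network.
  return useSWR('https://api.example.com/grandetabela', fetcher)
}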
UPDATE 2022-10-05
Added the nginx config from the utility server, showing how the nginx running inside a container proxies requests to the other containers on the same network.
Nginx config:
server_tokens off;

# ----------------------------------------------------------------------------------------------------
upstream local-docker-verdaccio {
    # "verdaccio" is the Compose service name; 4873 is the port the
    # container listens on internally
    server verdaccio:4873;
}

# ----------------------------------------------------------------------------------------------------
# si.company.verdaccio
server {
    listen 443 http2 ssl;
    server_name verdaccio.company.org;

    add_header Strict-Transport-Security "max-age=31536000" always;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    ssl_certificate /etc/tls/si.company.verdaccio-chain.crt;
    ssl_certificate_key /etc/tls/si.company.verdaccio-unencrypted.key;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://local-docker-verdaccio/;
        proxy_redirect off;
    }
}

server {
    listen 80;
    server_name verdaccio.company.org;
    return 301 https://verdaccio.company.org$request_uri;
}
And the corresponding docker-compose.yml file:
version: "3.7"
services:
proxy:
container_name: proxy
image: nginx:alpine
ports:
- "443:443"
restart: always
volumes:
- 5fb31181-8e07-4304-9276-9da8c3a581c9:/etc/nginx/conf.d:ro
- /etc/tls/:/etc/tls:ro
verdaccio:
container_name: verdaccio
depends_on:
- proxy
expose:
- "4873"
image: verdaccio/verdaccio:4
restart: always
volumes:
- d820f373-d868-40ec-bb6b-08a99efddc06:/verdaccio
- 542b4ca1-aefe-43a8-8fb3-804b46049bab:/verdaccio/conf
- ab018ca9-38b8-4dad-bbe5-bd8c41edff77:/verdaccio/storage
volumes:
542b4ca1-aefe-43a8-8fb3-804b46049bab:
external: true
5fb31181-8e07-4304-9276-9da8c3a581c9:
external: true
ab018ca9-38b8-4dad-bbe5-bd8c41edff77:
external: true
d820f373-d868-40ec-bb6b-08a99efddc06:
external: true
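Client machines then talk only to nginx on 443, never to the container directly; for example (assuming verdaccio.company.org resolves to the Docker host):

npm config set registry https://verdaccio.company.org/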
I am using the following techs for my project:
Backend / API: NestJS (running on localhost:3001)
Frontend: Next.js (running on localhost:3000)
Database: MongoDB
Reverse proxy server: Nginx
and Docker.
When I run docker-compose up -d, the backend starts properly and all the API URLs work fine via localhost:3001.
In the frontend the site loads properly and all the API data shows up, but an error appears in the console and as a popup, as captured below.
Error in text:
In popup:
Unhandled Runtime Error
Error: Network Error
Call Stack
createError
node_modules/axios/lib/core/createError.js (16:0)
XMLHttpRequest.handleError
/_next/static/development/pages/Index.js (16144:14)
In Console:
Access to XMLHttpRequest at 'http://dev:3001/news?page=1&limit=3' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I call the API from the frontend as http://dev:3001 because it is loaded from a Docker image and the service name is dev.
Here is the docker-compose file:
version: "3.9"
services:
dev:
container_name: nest-backend
image: nest-backend:1.0.0
build:
context: ./backend
target: development
dockerfile: ./Dockerfile
expose:
- 3001
ports:
- "3001:3001"
links:
- mongo
volumes:
- ./backend:/backend/app
- /backend/app/node_modules
restart: unless-stopped
command: npm run start:dev
networks:
- app-network
app:
image: bjithp-next
build: frontend
expose:
- 3000
ports:
- 3000:3000
depends_on:
- dev
volumes:
- ./frontend:/app
- /app/node_modules
- /app/.next
networks:
- app-network
mongo:
image: mongo
# environment:
# - MONGO_INITDB_ROOT_USERNAME=root
# - MONGO_INITDB_ROOT_PASSWORD=1234
ports:
- "27037:27017"
volumes:
- mongodb_data_container:/data/db
networks:
- app-network
nginx:
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- "80:80"
restart: always
depends_on:
- app
- dev
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
networks:
- app-network
volumes:
mongodb_data_container:
mongodb-data:
name: mongodb-data
networks:
app-network:
driver: bridge
I have also used nginx as a reverse proxy. Here is the nginx default.conf file:

# nginx default.conf
upstream dev {
    server dev:3001;
}

upstream client {
    server app:3000;
}

server {
    listen 80;
    server_name localhost:3000;

    access_log /path/to/access/log/access.log;
    error_log /path/to/error/log/error.log;

    location / {
        proxy_pass http://app;
    }

    location ~ /dev/(?<section>.*) {
        rewrite ^/dev/(.*)$ /$1 break;
        proxy_pass http://dev;
    }
}
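As I understand the rewrite rule above, a request from the browser would go through the nginx port published on the host, for example (hypothetical request):

# nginx listens on port 80 of the host and strips the /dev prefix, so
# inside the Docker network this becomes http://dev:3001/news?page=1&limit=3
curl 'http://localhost/dev/news?page=1&limit=3'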
And in the frontend I have used a .env file, which is this:
DEFAULT_PORT=3000
BASE_API_URL=http://dev:3001
NODE_ENV=development
HOST=http://localhost
PRODUCTION_IMAGE_PORT=443
BASE_IMAGE_URL=http://localhost:3001
HIRE_US_PAGE=http://103.197.206.56:3000
BLOG_SHARE_URL_HOST=http://bjitgroup.com
Please suggest a solution. The site otherwise works fine, apart from that annoying network error.
I have a Docker setup with multiple PostgreSQL databases:
version: "3.7"
services:
nginx:
image: nginx:1.18.0-alpine
ports:
- 8028:80
volumes:
- ./nginx/localhost/conf.d:/etc/nginx/conf.d
depends_on:
- webapp
networks:
- postgresql_network
postgresql:
image: "postgres:13-alpine"
restart: always
volumes:
- type: bind
source: ../DO_NOT_DELETE_POSTGRESQL_DATA
target: /var/lib/postgresql/data
environment:
POSTGRES_DB: db1
POSTGRES_USER: simha
POSTGRES_PASSWORD: krishna
PGDATA: "/var/lib/postgresql/data/pgdata"
networks:
- postgresql_network
postgresql2:
image: "postgres:13-alpine"
restart: always
volumes:
- type: bind
source: ../DO_NOT_DELETE_POSTGRESQL_DATA2
target: /var/lib/postgresql/data
environment:
POSTGRES_DB: db2
POSTGRES_USER: simha
POSTGRES_PASSWORD: krishna
PGDATA: "/var/lib/postgresql/data/pgdata"
networks:
- postgresql_network
networks:
postgresql_network:
driver: bridge
Now I could expose a different port for each of the PostgreSQL instances and access them separately from the host (like localhost:5432 and localhost:5445).
But instead of that I want to handle them using different domain names:
pg1.docker.db on port 8028 (the nginx port) and pg2.docker.db on port 8028 (the nginx port), with nginx reverse-proxying them internally to the postgresql and postgresql2 containers,
and in the host /etc/hosts I will have:
127.0.0.1 pg1.docker.db
127.0.0.1 pg2.docker.db
I am not sure, but something like this. Can someone tell me whether I am doing this right?
## WEBSERVER
upstream webapp {
    server webapp:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://webapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}

## POSTGRESQL DBS
upstream postgres {
    server postgresql:5432;
}

upstream postgres2 {
    server postgresql2:5432;
}

server {
    listen 80 so_keepalive=on;
    server_name db1.domain.my;
    proxy_pass postgres;
}

server {
    listen 80 so_keepalive=on;
    server_name db2.domain.my;
    proxy_pass postgres2;
}
I have seen people using both stream and http blocks. Will the above also work, or should I do something else?
Redirecting http requests based on domain name is pretty easy.
You can use something like this:
http {
    server {
        listen 80;
        server_name pg1.docker.db;
        location / {
            ...
        }
    }
    server {
        listen 80;
        server_name pg2.docker.db;
        location / {
            ...
        }
    }
}
But the problem here is that nginx's http block only forwards HTTP requests to the upstream servers, so unless you have a web-based interface in front of your containers, it's not going to work properly for the PostgreSQL wire protocol.
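If you really want to reach the databases through nginx, a minimal sketch with nginx's stream module (my assumption, not part of the original setup) would proxy raw TCP. Since the PostgreSQL protocol carries no Host header, there is nothing to route on at a shared port, so each database gets its own listen port rather than a domain name:

stream {
    # Raw TCP proxying: no server_name/Host-based routing is possible
    # here, so the two databases are distinguished by port, not domain.
    server {
        listen 5432;
        proxy_pass postgresql:5432;
    }
    server {
        listen 5445;
        proxy_pass postgresql2:5432;
    }
}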
So I set up the Caddy server using Docker and configured it to proxy to another Docker image that is running Node.js. However, when I hit the URL it routes to caddyserver.com instead of my Node.js app. Is there something that I am missing?
myserver {
    proxy / 172.17.0.5:8000 {
        header_upstream Host {host}
        header_upstream X-Real-IP {remote}
        header_upstream X-Forwarded-For {remote}
        header_upstream X-Forwarded-Proto {scheme}
    }
}
My docker-compose looks like the following:
web:
  image: ca9a385372b0
  env_file: .env
  environment:
    AUTO_ALIASES: /src/docker/aliases/auto
    TERM: xterm-256color
    VIRTUAL_HOST: myserver
    VIRTUAL_PORT: 8000
  volumes:
    - .:/src
  ports:
    - "8000:8000"
  container_name: web
  links:
    - caddy

caddy:
  image: blackglory/caddy-proxy:0.2.1
  env_file: .env
  environment:
    CADDY_OPTIONS: "-email someaddress@myserver.com"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
  ports:
    - "80:80"
    - "443:443"
I added the following to my /etc/hosts as well:
127.0.0.1 myserver
However, when I go to http://myserver it redirects me to caddyserver.com.
My .env file looks like this:
WEB_VIRTUAL_HOST=myserver
WEB_VIRTUAL_PORT=8000
I have tried to orchestrate a Dancer2 app, which runs on Starman, using docker-compose. I'm failing to integrate nginx: it crashes with a 502 Bad Gateway error.
Inside my server the error looks like this:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.22.0.1,
My docker-compose file looks like this:
version: '2'

services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    links:
      - pearlbee
    volumes_from:
      - pearlbee

  pearlbee:
    build: pearlbee
    command: carton exec starman bin/app.psgi
    ports:
      - "5000:5000"
    environment:
      - MYSQL_PASSWORD=secret
    depends_on:
      - mysql

  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_USER=root
My nginx.conf file looks like this:
user root nogroup;
worker_processes auto;

events { worker_connections 512; }

http {
    include /etc/nginx/sites-enabled/*;

    upstream pb {
        # this is the localhost that starts starman
        #server 127.0.0.1:5000;
        # the name of the docker-compose service that creates the app
        server pearlbee;
        # both return the same error message
    }

    server {
        listen *:80;
        #root /usr/share/nginx/html/;
        #index index.html 500.html favico.ico;

        location / {
            proxy_pass http://pb;
        }
    }
}
You're right to use the service name as the upstream server for Nginx, but you need to specify the port:
upstream pb {
    server pearlbee:5000;
}
Within the Docker network, which Compose creates for you, services can access each other by name. Also, you don't need to publish ports for other containers to use them, only if you want to access them externally. The Nginx container can reach port 5000 on your app container without you publishing it to the host.
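As a sketch of that last point (assuming the same compose file as above), the pearlbee service could drop the host port mapping entirely:

pearlbee:
  build: pearlbee
  command: carton exec starman bin/app.psgi
  environment:
    - MYSQL_PASSWORD=secret
  depends_on:
    - mysql
  # no "ports:" needed: nginx reaches pearlbee:5000 over the Compose
  # network; "expose" is optional documentation of the internal port
  expose:
    - "5000"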
I have a problem with rolling deployments of docker containers behind a load balancer.
Here are my docker-compose.yml file contents:
nginx:
  image: nginx_image
  links:
    - node1:node1
    - node2:node2
    - node3:node3
  ports:
    - "80:80"

node1:
  image: nodeapi_image
  ports:
    - "8001"

node2:
  image: nodeapi_image
  ports:
    - "8001"

node3:
  image: nodeapi_image
  ports:
    - "8001"
and here is my nginx.conf:
worker_processes 4;

events { worker_connections 1024; }

http {
    upstream node-app {
        least_conn;
        server node1:8001 weight=10 max_fails=3 fail_timeout=30s;
        server node2:8001 weight=10 max_fails=3 fail_timeout=30s;
        server node3:8001 weight=10 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        listen 443 ssl;
        # ssl on;
        ssl_certificate /etc/nginx/ssl/imago.io.chain.crt;
        ssl_certificate_key /etc/nginx/ssl/imago.io.key;

        location / {
            proxy_pass http://node-app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
If I have a newly built image to deploy, I have to stop a node container, remove it, and recreate it with the new image. The problem is that the new container gets a new IP, and the nginx container doesn't know about it; once I have recreated all three containers behind the load balancer, the app stops serving entirely, because all the IPs in the nginx machine's /etc/hosts and environment variables are out of date.
I could SSH into each container, update its code by pulling from the git repo, and restart the process, but that seems just wrong to me. What is the right way to do this?
There is an easier way to achieve this. Take the following docker-compose.yml file as an example:
lb:
  image: tutum/haproxy
  links:
    - app
  ports:
    - "80:80"

app:
  image: tutum/hello-world
This docker compose file describes two services :
lb: a load balancer which uses the tutum/haproxy image
app: a sample webapp listening on port 80
If you start those services naïvely with docker-compose up -d, you will end up with only 2 containers (the load balancer and the web app).
But if you run docker-compose scale app=3 and then run docker-compose up -d again, you will end up with 4 load-balanced containers.
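Concretely, that is (the same commands described above):

docker-compose up -d          # starts lb and one app container
docker-compose scale app=3    # adds two more app containers
docker-compose up -d          # recreates lb so it links to all three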
The key player here is the tutum/haproxy docker image which is able to discover the different containers it is linked to.
A similar solution is to use Jason Wilder's nginx-proxy image, which has the advantage of discovering new nodes live, so you won't have to restart the lb service:
lb:
  image: jwilder/nginx-proxy
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
  ports:
    - "80:80"

app:
  image: tutum/hello-world
  environment:
    VIRTUAL_HOST: www.mysite.com
The VIRTUAL_HOST environment variable must be set to the domain name that resolves to the IP address of your docker host.
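With this setup, adding capacity is a single command; a sketch (nginx-proxy watches the Docker socket and re-registers containers as they come and go):

docker-compose scale app=5    # new containers are picked up live, no lb restart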
Another option is to use Traefik:
lb:
  image: traefik
  command: --docker
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock

app:
  image: tutum/hello-world
  labels:
    traefik.frontend.rule: Host:www.mysite.com
The traefik.frontend.rule label must define a Traefik rule set to the domain name that resolves to the IP address of your docker host.
Traefik also offers different load balancing strategies and circuit breakers.
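Tying this back to the rolling-deployment question: with any of these discovery-based proxies in front, replacing containers no longer requires IP bookkeeping. A rough sketch, assuming the classic docker-compose CLI:

docker-compose pull app              # fetch the newly built image
docker-compose up -d --no-deps app   # recreate the app containers with it
# the proxy re-registers the new containers automatically; nothing in
# /etc/hosts or nginx upstreams has to be updated by hand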