I set up a Caddy server using Docker and configured it to proxy to another Docker container running Node.js. However, when I hit the URL, it routes to caddyserver.com instead of my Node.js app. Is there something I am missing?
myserver {
    proxy / 172.17.0.5:8000 {
        header_upstream Host {host}
        header_upstream X-Real-IP {remote}
        header_upstream X-Forwarded-For {remote}
        header_upstream X-Forwarded-Proto {scheme}
    }
}
My docker-compose.yml looks like the following:
web:
  image: ca9a385372b0
  env_file: .env
  environment:
    AUTO_ALIASES: /src/docker/aliases/auto
    TERM: xterm-256color
    VIRTUAL_HOST: myserver
    VIRTUAL_PORT: 8000
  volumes:
    - .:/src
  ports:
    - "8000:8000"
  container_name: web
  links:
    - caddy

caddy:
  image: blackglory/caddy-proxy:0.2.1
  env_file: .env
  environment:
    CADDY_OPTIONS: "-email someaddress@myserver.com"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
  ports:
    - "80:80"
    - "443:443"
I also added the following to my /etc/hosts:
127.0.0.1 myserver
However, when I go to http://myserver, it redirects me to caddyserver.com.
My .env file looks like this:
WEB_VIRTUAL_HOST=myserver
WEB_VIRTUAL_PORT=8000
I'm trying to configure Nextcloud on my DigitalOcean server (Debian 11), using Nginx Proxy Manager and Nextcloud under Docker.
I changed the root directory of the Docker setup (since I was out of disk space, I added a volume and mounted it at /var/lib/docker/volumes/volume_nyc1_01).
I created a new folder called nextcloud, and inside it created the docker-compose.yml below:
version: "3"
volumes:
  nextcloud-data:
  nextcloud-db:
  npm-data:
  npm-ssl:
  npm-db:
networks:
  frontend:
    # add this if the network already exists!
    # external: true
  backend:
services:
  nextcloud-app:
    image: nextcloud
    restart: always
    volumes:
      - nextcloud-data:/var/lib/docker/volumes/volume_nyc1_01/var/www/html
    environment:
      - MYSQL_PASSWORD=raspberrypi
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=nextcloud-db
    networks:
      - frontend
      - backend
  nextcloud-db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - nextcloud-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=raspberrypi
      - MYSQL_PASSWORD=raspberrypi
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    networks:
      - backend
  npm-app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    environment:
      - DB_MYSQL_HOST=npm-db
      - DB_MYSQL_PORT=3306
      - DB_MYSQL_USER=npm
      - DB_MYSQL_PASSWORD=raspberrypi
      - DB_MYSQL_NAME=npm
    volumes:
      - npm-data:/var/lib/docker/volumes/volume_nyc1_01/data
      - npm-ssl:/var/lib/docker/volumes/volume_nyc1_01/etc/letsencrypt
    networks:
      - frontend
      - backend
  npm-db:
    image: jc21/mariadb-aria:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=raspberrypi
      - MYSQL_DATABASE=npm
      - MYSQL_USER=npm
      - MYSQL_PASSWORD=raspberrypi
    volumes:
      - npm-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
    networks:
      - backend
As you can see, I created each folder under volume_nyc1_01 with mkdir.
Finally, I started the stack from /var/lib/docker/volumes/volume_nyc1_01/nextcloud using docker-compose up -d.
Once logged in at server-ip:81, I created the proxy host with the domain name mydomain.com and the forward hostname/IP nextcloud-app, port 80, and saved it.
When I visit the domain name, it just doesn't show anything. The same happens when I try to set up SSL.
I know I'm missing something, but I've searched a lot and couldn't find anything. I really appreciate any help or suggestions.
Problem: connecting my backend and frontend together using Docker Compose (NestJS and Next.js). I need them to run as a single cluster on AWS, and locally it doesn't work the same way either...
Everything worked with separate Docker Compose files (a backend created on AWS, with my frontend running locally against the created endpoints), but together... I have no idea how to solve it. I have tried multiple solutions found on the internet.
Connecting using the Docker host on the front end:
const fetcher = (url: string) => fetch(url).then((res) => res.json());

useSWR('http://host.docker.internal:3000/grandetabela', fetcher, {
  onSuccess: (data, key, config) => {
    console.log(data);
  },
});
This results in the error GET http://host.docker.internal:3000/grandetabela net::ERR_NAME_NOT_RESOLVED, or, if I try localhost, it turns into a CORS issue.
I also tried inside an API route in Next.js, where I don't get the CORS issue:
try {
  const data = await axios.get('http://host.docker.internal:3000/grandetabela')
    .then((resp: any) => {
      return resp;
    });
  res.status(200).json(data);
} catch (error) {
  console.error(error);
  res.status(502).json({ error: 'error on server request' });
}
If I try to use localhost instead, it causes another problem, AxiosError: Request failed, whereas if I call some other API from the internet I get a response normally.
To give an idea of what I've tried, have a look at my docker-compose file... I've tried using the IPs... I can ping inside Docker, but I don't know how to reach host:3000, for example, to query my endpoints.
version: '3.1'
services:
  db:
    image: postgres
    # restart: always
    container_name: 'pgsql'
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.20.1
  adminer:
    image: adminer
    # restart: always
    ports:
      - "8080:8080"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.70.1
  node-ytalo-backend:
    image: ytalojacs/nestjsbasic_1-0
    ports:
      - "3000:3000"
    command: >
      sh -c "npm run build &&
      npm run start:prod"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
      POSTGRES_HOST: db
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.50.1
  prophet:
    image: ytalojacs/prophetforecast-1_0
    ports:
      - "3001:3001"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.100.1
  front-end:
    depends_on:
      - node-ytalo-backend
    image: ytalojacs/frontendjsprophet
    environment:
      PORT: 3010
    command: >
      sh -c "npm run build &&
      npm run start"
    ports:
      - "3010:3010"
    links:
      - "node-ytalo-backend:myback.org"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.128.1

# networks:
#   mynetwork:
#     ipam:
#       config:
#         - subnet: 172.20.0.0/16
When I use host.docker.internal with curl inside the container (docker exec ... bash), everything works as intended too; I can get a response from my backend...
Is there something I missed? The .env?
You have a similar (or the same) issue as the few others I forwarded the same SO answer to.
But I'll quote it here:
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you about one thing. We had an issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot a basic thing about web applications.
Applications run in your browser, whereas if you deploy an application stack somewhere else, the rest of the services (APIs, DB, and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that holds a web application is only there to serve your browser the (client) files; the JS and the logic are then executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service and have your FE call that proxy (set up a domain and routing), for instance Nginx, Traefik, ...; that proxy can then reference your backend by service name, since it lives in the same environment as the API (a minimal sketch follows below these options).
Expose the HTTP port directly from the container; your FE can then call remoteServerIP:exposedPort and connect directly to the container's interface. (NOTE: I do not recommend this for real use, only for testing direct connectivity without any proxy.)
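To make option 1 concrete at the application level, here is a minimal sketch of a server-side proxy route, assuming a Next.js front end like the one in the question above; the route path and the service name node-ytalo-backend are borrowed from that compose file, and the file name is hypothetical:

// pages/api/grandetabela.ts — hypothetical Next.js API route acting as a
// small server-side proxy. It runs inside the front-end container, so it
// can resolve the compose service name "node-ytalo-backend" via Docker's
// internal DNS; the browser only ever calls /api/grandetabela on its own
// origin, so there is no CORS and no Docker-internal name in the client.
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  try {
    const upstream = await fetch('http://node-ytalo-backend:3000/grandetabela');
    res.status(upstream.status).json(await upstream.json());
  } catch (err) {
    console.error(err);
    res.status(502).json({ error: 'error on server request' });
  }
}

The browser-side useSWR call then targets /api/grandetabela instead of any host.docker.internal or localhost URL.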
UPDATE 2022-10-05
Added the nginx config from our utility server, showing how to route requests from the nginx running inside a container to the other containers on the same network.
Nginx config:
server_tokens off;

upstream local-docker-verdaccio {
    # verdaccio is the compose service name; 4873 is the port the container listens on internally
    server verdaccio:4873;
}

# si.company.verdaccio
server {
    listen 443 http2 ssl;
    server_name verdaccio.company.org;

    add_header Strict-Transport-Security "max-age=31536000" always;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    ssl_certificate /etc/tls/si.company.verdaccio-chain.crt;
    ssl_certificate_key /etc/tls/si.company.verdaccio-unencrypted.key;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://local-docker-verdaccio/;
        proxy_redirect off;
    }
}

server {
    listen 80;
    server_name verdaccio.company.org;
    return 301 https://verdaccio.company.org$request_uri;
}
And the corresponding docker-compose.yml file:
version: "3.7"
services:
  proxy:
    container_name: proxy
    image: nginx:alpine
    ports:
      - "443:443"
    restart: always
    volumes:
      - 5fb31181-8e07-4304-9276-9da8c3a581c9:/etc/nginx/conf.d:ro
      - /etc/tls/:/etc/tls:ro
  verdaccio:
    container_name: verdaccio
    depends_on:
      - proxy
    expose:
      - "4873"
    image: verdaccio/verdaccio:4
    restart: always
    volumes:
      - d820f373-d868-40ec-bb6b-08a99efddc06:/verdaccio
      - 542b4ca1-aefe-43a8-8fb3-804b46049bab:/verdaccio/conf
      - ab018ca9-38b8-4dad-bbe5-bd8c41edff77:/verdaccio/storage
volumes:
  542b4ca1-aefe-43a8-8fb3-804b46049bab:
    external: true
  5fb31181-8e07-4304-9276-9da8c3a581c9:
    external: true
  ab018ca9-38b8-4dad-bbe5-bd8c41edff77:
    external: true
  d820f373-d868-40ec-bb6b-08a99efddc06:
    external: true
I am using the following techs for my project:
Backend / API: NestJS (running on localhost:3001)
Frontend: Next.js (running on localhost:3000)
Database: MongoDB
Reverse proxy server: Nginx
and Docker
When I run docker-compose up -d, the backend runs properly and all the API URLs work fine via localhost:3001.
In the frontend, the site loads properly and all the API data shows. But an error appears in the console and as a popup.
Error in text:

In the popup:

Unhandled Runtime Error
Error: Network Error

Call Stack
createError
node_modules/axios/lib/core/createError.js (16:0)
XMLHttpRequest.handleError
/_next/static/development/pages/Index.js (16144:14)

In the console:

Access to XMLHttpRequest at 'http://dev:3001/news?page=1&limit=3' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I call the API from the frontend as http://dev:3001, since it is running from a Docker image and the service name is dev.
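Note that a browser cannot resolve the compose service name dev; only other containers on the same Docker network can. And if the frontend keeps calling the API cross-origin, the API also has to send CORS headers. A minimal sketch of enabling them, assuming a standard NestJS bootstrap (the file and module names are framework defaults, not taken from this project):

// main.ts — hedged sketch: enable CORS for the origin the browser actually
// reports (http://localhost:3000); "http://dev:3001" is only resolvable
// between containers, never from the host browser.
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.enableCors({ origin: 'http://localhost:3000' });
  await app.listen(3001);
}
bootstrap();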
Here is the docker-compose file:
version: "3.9"
services:
  dev:
    container_name: nest-backend
    image: nest-backend:1.0.0
    build:
      context: ./backend
      target: development
      dockerfile: ./Dockerfile
    expose:
      - 3001
    ports:
      - "3001:3001"
    links:
      - mongo
    volumes:
      - ./backend:/backend/app
      - /backend/app/node_modules
    restart: unless-stopped
    command: npm run start:dev
    networks:
      - app-network
  app:
    image: bjithp-next
    build: frontend
    expose:
      - 3000
    ports:
      - 3000:3000
    depends_on:
      - dev
    volumes:
      - ./frontend:/app
      - /app/node_modules
      - /app/.next
    networks:
      - app-network
  mongo:
    image: mongo
    # environment:
    #   - MONGO_INITDB_ROOT_USERNAME=root
    #   - MONGO_INITDB_ROOT_PASSWORD=1234
    ports:
      - "27037:27017"
    volumes:
      - mongodb_data_container:/data/db
    networks:
      - app-network
  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - "80:80"
    restart: always
    depends_on:
      - app
      - dev
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - app-network
volumes:
  mongodb_data_container:
  mongodb-data:
    name: mongodb-data
networks:
  app-network:
    driver: bridge
I have also used nginx as a reverse proxy. Here is the nginx default.conf file:
# nginx.conf
upstream dev {
    server dev:3001;
}

upstream client {
    server app:3000;
}

server {
    listen 80;
    server_name localhost:3000;

    access_log /path/to/access/log/access.log;
    error_log /path/to/error/log/error.log;

    location / {
        proxy_pass http://app;
    }

    location ~ /dev/(?<section>.*) {
        rewrite ^/dev/(.*)$ /$1 break;
        proxy_pass http://dev;
    }
}
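Given that config, one way to avoid both the name-resolution and the CORS problem is to let the browser call the backend through nginx itself; a sketch, assuming the default.conf above is mounted as shown (nginx listens on host port 80, and the /dev/ location strips the prefix before proxying to dev:3001):

// Hypothetical browser-side call going through the nginx proxy instead of
// the Docker-internal service name; /dev/news is rewritten to /news by the
// location block above and proxied to the dev upstream.
import axios from 'axios';

export async function fetchNews() {
  const resp = await axios.get('http://localhost/dev/news', {
    params: { page: 1, limit: 3 },
  });
  return resp.data;
}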
In the frontend I have also used a .env file, which is this:
DEFAULT_PORT=3000
BASE_API_URL=http://dev:3001
NODE_ENV=development
HOST=http://localhost
PRODUCTION_IMAGE_PORT=443
BASE_IMAGE_URL=http://localhost:3001
HIRE_US_PAGE=http://103.197.206.56:3000
BLOG_SHARE_URL_HOST=http://bjitgroup.com
Please suggest a solution. My site otherwise works fine, apart from that annoying network error.
My docker-compose file looks like this:
version: '2'
services:
  explore:
    image: explore
    build:
      context: ./Explore
      dockerfile: VsDockerfile
    environment:
      - "ElasticUrl=http://localhost:9200"
      - "RabbitMq/Host=localhost"
      - "RabbitMq/Username=guest"
      - "RabbitMq/Password=guest"
    networks:
      - localnet
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
    container_name: elasticsearch
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9200:9200
    volumes:
      - ./esdata:/usr/share/elasticsearch/data
    networks:
      - localnet
  rabbit:
    image: rabbitmq:3.6.7-management
    hostname: rabbit
    ports:
      - 15672:15672
      - 5672:5672
    networks:
      - localnet
networks:
  localnet:
    external:
      name: localnet
If I type http://localhost:15672 in the browser, I get the RabbitMQ interface, but if I try to connect from my Explore project like this:
public SqlToRabbitProcessor(SqlToRabbitRepository sqlToRabbitRepository)
{
    _sqlToRabbitRepository = sqlToRabbitRepository;

    var factory = new ConnectionFactory
    {
        HostName = Environment.GetEnvironmentVariable("RabbitMq/Host"),
        UserName = Environment.GetEnvironmentVariable("RabbitMq/Username"),
        Password = Environment.GetEnvironmentVariable("RabbitMq/Password")
    };

    var rabbit = factory.CreateConnection();
    channel = rabbit.CreateModel();
}
then it breaks on the line
var rabbit = factory.CreateConnection();
with an error saying:
ExtendedSocketException: Connection refused 127.0.0.1:5672
System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
ConnectFailureException: Connection failed
RabbitMQ.Client.EndpointResolverExtensions.SelectOne(IEndpointResolver resolver, Func selector)
BrokerUnreachableException: None of the specified endpoints were reachable
RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, string clientProvidedName)
As my comment under the question suggested, it's because the "localhost" referenced in the web application part is the container's own localhost, not the Docker host.
You just need to change
- "ElasticUrl=http://localhost:9200"
- "RabbitMq/Host=localhost"
to
- "ElasticUrl=http://elasticsearch:9200"
- "RabbitMq/Host=rabbit"
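The same principle applies in any client language. For instance, a Node/TypeScript sketch using the amqplib package (hypothetical here, since the project above is C#), connecting by the compose service name:

// connect.ts — connect to RabbitMQ by its compose service/host name
// "rabbit" rather than localhost; the name resolves through Docker's
// embedded DNS on the shared "localnet" network.
import amqp from 'amqplib';

async function main() {
  const connection = await amqp.connect('amqp://guest:guest@rabbit:5672');
  const channel = await connection.createChannel();
  console.log('connected to RabbitMQ via service name');
  await channel.close();
  await connection.close();
}

main().catch(console.error);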
I had the same issue with docker-compose.
I solved it with hostname:
rabbit:
  hostname: rabbit
  command: sh -c "rabbitmq-plugins enable rabbitmq_management; rabbitmq-server"
  image: rabbitmq
  environment:
    RABBITMQ_DEFAULT_USER: admin
    RABBITMQ_DEFAULT_PASS: admin
  ports:
    - 5672:5672
    - 15672:15672
Follow instructions on this post.
Just to benefit people who stumble upon this question: the --link feature is now considered legacy and is a prime candidate for deprecation by Docker.
The easiest way is to use
depends_on:
In order to do this, it's recommended to first create a network like so:
docker network create <network_name>
Then use docker-compose up to spawn services that bind to each other. Look at the example below, where I've bound my Spring Boot app to RabbitMQ. You can clone my repo from here.
version: "3.1"
services:
  rabbitmq-container:
    image: rabbitmq:3.5.3-management
    hostname: rabbitmq-container
    ports:
      - 5673:5673
      - 5672:5672
      - 15672:15672
    networks:
      - resolute
  resolute-container:
    build: .
    ports:
      - 8080:8080
    environment:
      - spring_rabbitmq_host=rabbitmq-container
      - spring_rabbitmq_port=5672
      - spring_rabbitmq_username=guest
      - spring_rabbitmq_password=guest
      - resolute_rabbitmq_publishQueueName=resolute-run-request
      - resolute_rabbitmq_exchange=resolute
    depends_on:
      - rabbitmq-container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - resolute
networks:
  resolute:
    external:
      name: resolute
See how I've created a network called resolute and attached both apps to that same network. I've also given my rabbitmq-container an explicit hostname, because Compose prefixes generated container names with the project name, which makes binding to services by container name unreliable.
I pulled https://github.com/tomav/docker-mailserver to set up a mail server.
I would like to add Let's Encrypt support, so I also pulled https://hub.docker.com/r/certbot/certbot/~/dockerfile/
I made a Docker Compose file with these two containers:
version: '2'
services:
  nginx:
    image: pixelfordinner/nginx
    container_name: pixelcloud-nginx_proxy-nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./volumes/conf.d:/etc/nginx/conf.d:ro"
      - "./volumes/vhost.d:/etc/nginx/vhost.d:ro"
      - "./volumes/certs:/etc/nginx/certs:ro"
      - "/usr/share/nginx/html"
  nginx-proxy:
    image: jwilder/docker-gen
    container_name: nginx-proxy
    depends_on:
      - nginx
    volumes_from:
      - nginx
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "./data/templates:/etc/docker-gen/templates:ro"
      - "./volumes/conf.d:/etc/nginx/conf.d:rw"
    entrypoint: /usr/local/bin/docker-gen -notify-sighup pixelcloud-nginx_proxy-nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
  letsencrypt-nginx-proxy:
    restart: always
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: ssl
    depends_on:
      - nginx
      - nginx-proxy
    volumes_from:
      - nginx
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./volumes/vhost.d:/etc/nginx/vhost.d:rw"
      - "./volumes/certs:/etc/nginx/certs:rw"
    environment:
      - "NGINX_DOCKER_GEN_CONTAINER=nginx-proxy"
  mail:
    image: tvial/docker-mailserver:2.1
    hostname: mail
    domainname: example.com
    container_name: mail
    ports:
      - "25:25"
      - "143:143"
      - "587:587"
      - "993:993"
    volumes:
      - maildata:/var/mail
      - mailstate:/var/mail-state
      - ./config/:/tmp/docker-mailserver/
      - "$PWD/etc/:/etc/letsencrypt/"
      - "$PWD/log/:/var/log/letsencrypt/"
    environment:
      - ENABLE_SPAMASSASSIN=1
      - ENABLE_CLAMAV=1
      - ENABLE_FAIL2BAN=1
      - ENABLE_POSTGREY=1
      - ONE_DIR=1
      - DMS_DEBUG=0
      - SSL_TYPE=letsencrypt
    cap_add:
      - NET_ADMIN
  certbot:
    image: certbot/certbot
    container_name: certbot
    command: certbot certonly --standalone -d mail.example.com
    ports:
      - "8083:80"
      - "4432:443"
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
      - /var/lib/letsencrypt:/var/lib/letsencrypt
But certbot does not create any certificate.
There is a conflict between the nginx and certbot containers over port 443.
If I use port 443 for certbot, my domain is not reachable, so certbot's domain verification fails.
If I use 443 for nginx, certbot does not work.
I don't know what to do...
Let's Encrypt (certbot) requires an existing domain that is accessible via port 80 in order to actually do anything. You need to create a real domain like dev.existingdomain.com and use it.
https://typo3worx.eu/2016/11/lets-encrypt-on-localhost/
For a local environment you mostly use self-signed certs ...