Issue with location proxy_pass setting "connect() failed (111: Connection refused) while connecting to upstream" - docker-compose

I have nginx and metabase in docker. When I try to access "https://MYDOMAIN/metabase" I get the error - connect() failed (111: Connection refused) while connecting to upstream, client: 60.243.254.30, server: MYDOMAIN, request: "GET /metabase/ HTTP/2.0", upstream: "http://127.0.0.1:3000/metabase/", host: "MYDOMAIN".
This works - "curl http://localhost:3000/metabase". This also works - "curl http://127.0.0.1:3000/metabase".
nginx is running fine - I am able to access other sites that are running.
What am I doing wrong in the configuration?
docker-compose.yml
-------------------
webserver:
  image: nginx:1.20.0
  restart: "no"
  volumes:
    - ./public:/var/www/html
    - ./conf.d:/etc/nginx/conf.d
    - ./sites-available:/etc/nginx/sites-available
    - ./sites-enabled:/etc/nginx/sites-enabled
    - ./ssl:/etc/nginx/ssl
  ports:
    - '80:80'
    - '443:443'
metabase:
  image: metabase/metabase:latest
  container_name: metabase
  restart: "no"
  volumes:
    - metabase-data:/LOCATION/metabase
  ports:
    - '3000:3000'
  environment:
    MB_SITE_URL: http://localhost:3000/metabase
    MB_DB_TYPE: postgres
    MB_DB_DBNAME: metabase
    MB_DB_PORT: 5432
    MB_DB_USER: xxxxx
    MB_DB_PASS: yyyyyy
nginx conf.d/default.conf
--------------------------
upstream metabase {
    server 127.0.0.1:3000;
}
server {
    ...
    location /metabase/ {
        proxy_pass http://metabase;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        break;
    }
}
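The 111 error here is usually a networking problem rather than an nginx syntax one: inside the nginx container, 127.0.0.1 refers to the nginx container itself, not to the host where the curl commands succeed, so nothing is listening on port 3000 at that address. A minimal sketch of a fix, assuming both services belong to the same docker-compose project and therefore share its default network (the upstream name metabase_backend is just illustrative):

upstream metabase_backend {
    # the compose service name resolves via Docker's embedded DNS;
    # use the container port (3000), not a host-published port
    server metabase:3000;
}

With this, `proxy_pass http://metabase_backend;` in the location block reaches the other container directly. When Metabase is served under a subpath, MB_SITE_URL should likely also be set to the public URL (e.g. https://MYDOMAIN/metabase) rather than localhost.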

Related

Can ingress rewrite 405 to the origin url and change the http-errors 405 to 200?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /page/user/(.*)
        pathType: Prefix
        backend:
          serviceName: front-user
          servicePort: 80
      - path: /page/manager/(.*)
        pathType: Prefix
        backend:
          serviceName: front-admin
          servicePort: 80
Nginx can allow visiting an HTML page with a POST request, but I want to know how to achieve the same thing with an ingress.
server {
    listen 80;
    # ...
    error_page 405 =200 @405;
    location @405 {
        root /srv/http;
        proxy_method GET;
        proxy_pass http://static_backend;
    }
}
This is an example where nginx allows visiting an HTML page via a POST request by changing the 405 to 200 and rewriting the method to GET.
You can use the server-snippet annotation to achieve this.
Also, I rewrote your ingress from the extensions/v1beta1 apiVersion to networking.k8s.io/v1, because starting with Kubernetes v1.22 the previous apiVersion is removed:
$ kubectl apply -f ingress-snippit.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Ingress-snippet-v1.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/server-snippet: | # adds this block to the server
      error_page 405 =200 @405;
      location @405 {
        root /srv/http;
        proxy_method GET;
        proxy_pass http://static_backend; # tested with IP since I don't have this upstream
      }
spec:
  rules:
  - http:
      paths:
      - path: /page/user/(.*)
        pathType: Prefix
        backend:
          service:
            name: front-user
            port:
              number: 80
      - path: /page/manager/(.*)
        pathType: Prefix
        backend:
          service:
            name: front-admin
            port:
              number: 80
Applying the manifest above and verifying /etc/nginx/nginx.conf in the ingress-nginx-controller pod:
$ kubectl exec -it ingress-nginx-controller-xxxxxxxxx-yyyy -n ingress-nginx -- cat /etc/nginx/nginx.conf | less
...
## start server _
server {
    server_name _ ;
    listen 80 default_server reuseport backlog=4096 ;
    listen 443 default_server reuseport backlog=4096 ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    # Custom code snippet configured for host _
    error_page 405 =200 @405;
    location @405 {
        root /srv/http;
        proxy_method GET;
        proxy_pass http://127.0.0.1; # IP for testing purposes
    }
    location ~* "^/page/manager/(.*)" {
        set $namespace "default";
        set $ingress_name "frontend-ingress";
        set $service_name "front-admin";
        set $service_port "80";
        set $location_path "/page/manager/(.*)";
        set $global_rate_limit_exceeding n;
...

No response for GRPC server using Kubernetes Ingress Nginx

I am trying to deploy a gRPC-based engine behind a Kubernetes Ingress-Nginx ingress, version 0.34.1. I have already verified that the setup works fine with a regular REST API, but I have had no luck receiving any traffic from the gRPC backend when connecting on port 50051. The gRPC backend itself contains a container listening on port 50051, with the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-one
spec:
  selector:
    matchLabels:
      app: microservice-one
  template:
    metadata:
      labels:
        app: microservice-one
    spec:
      containers:
      - name: microservice
        image: azurecr.io/microservice:v1
        ports:
        - containerPort: 50051
        resources:
          requests:
            memory: "5G"
            cpu: 250m
          limits:
            cpu: 1000m
---
apiVersion: v1
kind: Service
metadata:
  name: microservice-one
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 50051
  selector:
    app: microservice-one
  type: LoadBalancer
while the yaml file for my ingress applies the following configuration:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service1
  namespace: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  tls:
  - hosts:
    - [HOSTNAME]
    secretName: aks-ingress-tls
  rules:
  - host: [HOSTNAME]
  - http:
      paths:
      - backend:
          serviceName: microservice-one
          servicePort: 50051
        path: /(.*)
However, upon testing and looking at the raw generated nginx configuration (irrelevant parts omitted below), I realized that the nginx server is only listening on ports 443 and 80, as is standard for an nginx config. I have read that the ingress only allows one port for HTTPS, so I tried multiple different annotations (e.g. for the load balancer) that were said to bypass the limit, but none of them worked. Could anyone please advise on other possible solutions to this problem?
server {
    server_name [HOSTNAME] ;
    listen 80 ;
    listen 443 ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location / {
        set $namespace "";
        set $ingress_name "";
        set $service_name "";
        set $service_port "";
        set $location_path "/";
        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                ssl_redirect = true,
                force_no_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }
        port_in_redirect off;
        set $balancer_ewma_score -1;
        set $proxy_upstream_name "upstream-default-backend";
        set $proxy_host $proxy_upstream_name;
        set $pass_access_scheme $scheme;
        set $pass_server_port $server_port;
        set $best_http_host $http_host;
        set $pass_port $pass_server_port;
        set $proxy_alternative_upstream_name "";
        client_max_body_size 1m;
        grpc_set_header Upgrade $http_upgrade;
        grpc_set_header Connection $connection_upgrade;
        grpc_set_header X-Request-ID $req_id;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $remote_addr;
        grpc_set_header X-Forwarded-Proto $pass_access_scheme;
        grpc_set_header X-Forwarded-Host $best_http_host;
        grpc_set_header X-Forwarded-Port $pass_port;
        grpc_set_header X-Scheme $pass_access_scheme;
        grpc_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        grpc_set_header Proxy "";
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering off;
        proxy_buffer_size 4k;
        proxy_buffers 4 4k;
        proxy_max_temp_file_size 1024m;
        proxy_request_buffering on;
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0;
        proxy_next_upstream_tries 3;
        proxy_pass http://upstream_balancer;
        proxy_redirect off;
    }
}
## end server [HOSTNAME]
## start server _
server {
    server_name _ ;
    listen 80 default_server reuseport backlog=511 ;
    listen 443 default_server reuseport backlog=511 ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location /(.*) {
        set $namespace "myingress";
        set $ingress_name "service1";
        set $service_name "";
        set $service_port "";
        set $location_path "/(.*)";
        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                ssl_redirect = true,
                force_no_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }
        port_in_redirect off;
        set $balancer_ewma_score -1;
        set $proxy_upstream_name "myingress-microservice-one-50051";
        set $proxy_host $proxy_upstream_name;
        set $pass_access_scheme $scheme;
        set $pass_server_port $server_port;
        set $best_http_host $http_host;
        set $pass_port $pass_server_port;
        set $proxy_alternative_upstream_name "";
        grpc_set_header Upgrade $http_upgrade;
        grpc_set_header Connection $connection_upgrade;
        grpc_set_header X-Request-ID $req_id;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $remote_addr;
        grpc_set_header X-Forwarded-Proto $pass_access_scheme;
        grpc_set_header X-Forwarded-Host $best_http_host;
        grpc_set_header X-Forwarded-Port $pass_port;
        grpc_set_header X-Scheme $pass_access_scheme;
        grpc_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        grpc_set_header Proxy "";
        grpc_pass grpc://upstream_balancer;
        proxy_redirect off;
    }
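Not part of the original question, but one commonly cited workaround for the 80/443-only listeners is ingress-nginx's TCP services ConfigMap, which forwards a raw TCP port straight to a Service. A hedged sketch, assuming the controller was deployed in the ingress-nginx namespace, started with the --tcp-services-configmap=ingress-nginx/tcp-services flag, and that the controller's own Service also publishes port 50051 (service name and namespace taken from the question):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port
  "50051": "ingress/microservice-one:50051"

Alternatively, gRPC clients can simply connect on port 443, since the generated config above already does grpc_pass on the HTTP/2 TLS listener; the backend-protocol: "GRPC" annotation is designed for the TLS port.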

SERVICE UNAVAILABLE - No raft leader when trying to create channel in Hyperledger fabric setup in Kubernetes

Start_orderer.sh file:
#edit *values.yaml file to be used with helm chart and deploy orderer through it
consensus_type=etcdraft
#change below instantiated variable for changing configuration of persistent volume sizes
persistence_status=true
persistent_volume_size=2Gi
while getopts "i:o:O:d:" c
do
case $c in
i) network_id=$OPTARG ;;
o) number=$OPTARG ;;
O) org_name=$OPTARG ;;
d) domain=$OPTARG ;;
esac
done
network_path=/etc/zeeve/fabric/${network_id}
source status.sh
cp ../yaml-files/orderer.yaml $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
sed -i "s/persistence_status/$persistence_status/; s/persistent_volume_size/$persistent_volume_size/; s/consensus_type/$consensus_type/; s/number/$number/g; s/org_name/${org_name}/; s/domain/$domain/; " $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
helm install orderer-${number}${org_name} --namespace blockchain-${org_name} -f $network_path/yaml-files/orderer-${number}${org_name}_values.yaml `pwd`/../helm-charts/hlf-ord
cmd_success $? orderer-${number}${org_name}
#update state of deployed component, used for pod level operations like start, stop, restart etc
update_statusfile helm orderer_${number}${org_name} orderer-${number}${org_name}
update_statusfile persistence orderer_${number}${org_name} $persistence_status
Configtx.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
Organizations:
  - &Orginvestor
    Name: investor
    ID: investorMSP
    MSPDir: ./crypto-config/investor/msp
    AnchorPeers:
      - Host: peer1.investor.demointainabs.emulya.com
        Port: 443
  - &Orgtrustee
    Name: trustee
    ID: trusteeMSP
    MSPDir: ./crypto-config/trustee/msp
    AnchorPeers:
      - Host: peer1.trustee.demointainabs.emulya.com
        Port: 443
  - &Orgwhlender
    Name: whlender
    ID: whlenderMSP
    MSPDir: ./crypto-config/whlender/msp
    AnchorPeers:
      - Host: peer1.whlender.demointainabs.emulya.com
        Port: 443
  - &Orgservicer
    Name: servicer
    ID: servicerMSP
    MSPDir: ./crypto-config/servicer/msp
    AnchorPeers:
      - Host: peer1.servicer.demointainabs.emulya.com
        Port: 443
  - &Orgissuer
    Name: issuer
    ID: issuerMSP
    MSPDir: ./crypto-config/issuer/msp
    AnchorPeers:
      - Host: peer1.issuer.demointainabs.emulya.com
        Port: 443
  - &Orgoriginator
    Name: originator
    ID: originatorMSP
    MSPDir: ./crypto-config/originator/msp
    AnchorPeers:
      - Host: peer1.originator.demointainabs.emulya.com
        Port: 443
  - &Orginvestor
    Name: investor
    ID: investorMSP
    MSPDir: ./crypto-config/investor/msp
    AnchorPeers:
      - Host: peer1.investor.intainabs.emulya.com
        Port: 443
  - &Orgtrustee
    Name: trustee
    ID: trusteeMSP
    MSPDir: ./crypto-config/trustee/msp
    AnchorPeers:
      - Host: peer1.trustee.intainabs.emulya.com
        Port: 443
  - &Orgwhlender
    Name: whlender
    ID: whlenderMSP
    MSPDir: ./crypto-config/whlender/msp
    AnchorPeers:
      - Host: peer1.whlender.intainabs.emulya.com
        Port: 443
  - &Orgservicer
    Name: servicer
    ID: servicerMSP
    MSPDir: ./crypto-config/servicer/msp
    AnchorPeers:
      - Host: peer1.servicer.intainabs.emulya.com
        Port: 443
  - &Orgissuer
    Name: issuer
    ID: issuerMSP
    MSPDir: ./crypto-config/issuer/msp
    AnchorPeers:
      - Host: peer1.issuer.intainabs.emulya.com
        Port: 443
  - &Orgoriginator
    Name: originator
    ID: originatorMSP
    MSPDir: ./crypto-config/originator/msp
    AnchorPeers:
      - Host: peer1.originator.intainabs.emulya.com
        Port: 443
Orderer: &OrdererDefaults
  OrdererType: etcdraft
  Addresses:
    - orderer1.originator.demointainabs.emulya.com:443
    - orderer2.trustee.demointainabs.emulya.com:443
    - orderer2.issuer.demointainabs.emulya.com:443
    - orderer1.trustee.demointainabs.emulya.com:443
    - orderer1.issuer.demointainabs.emulya.com:443
    - orderer1.originator.intainabs.emulya.com:443
    - orderer2.trustee.intainabs.emulya.com:443
    - orderer2.issuer.intainabs.emulya.com:443
    - orderer1.trustee.intainabs.emulya.com:443
    - orderer1.issuer.intainabs.emulya.com:443
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Kafka:
    Brokers:
      - kafka-hlf.blockchain-kz.svc.cluster.local:9092
  EtcdRaft:
    Consenters:
      - Host: orderer1.originator.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
        ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
      - Host: orderer2.trustee.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
      - Host: orderer2.issuer.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
      - Host: orderer1.trustee.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
      - Host: orderer1.issuer.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
      - Host: orderer1.originator.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
        ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
      - Host: orderer2.trustee.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
      - Host: orderer2.issuer.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
      - Host: orderer1.trustee.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
      - Host: orderer1.issuer.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
  Organizations:
Application: &ApplicationDefaults
  Organizations:
Profiles:
  BaseGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *Orgoriginator
        - *Orgtrustee
        - *Orgissuer
        - *Orgoriginator
        - *Orgtrustee
        - *Orgissuer
    Consortiums:
      MyConsortium:
        Organizations:
          - *Orginvestor
          - *Orgtrustee
          - *Orgwhlender
          - *Orgservicer
          - *Orgissuer
          - *Orgoriginator
          - *Orginvestor
          - *Orgtrustee
          - *Orgwhlender
          - *Orgservicer
          - *Orgissuer
          - *Orgoriginator
  BaseChannel:
    Consortium: MyConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Orgoriginator
        - *Orgissuer
        - *Orgservicer
        - *Orgwhlender
        - *Orgtrustee
        - *Orginvestor
        - *Orgoriginator
        - *Orgissuer
        - *Orgservicer
        - *Orgwhlender
        - *Orgtrustee
        - *Orginvestor
I am currently setting up a Hyperledger Fabric network in Kubernetes. My network includes 6 organizations and 5 orderer nodes, and the orderers follow the Raft consensus. I have done the following:
Setup CA and TLS CA servers
Setup ingress controller
Generated crypto material for peers and orderers
Generated channel artifacts
Started peers and orderers
The next step is to create the channel on the orderer for each org and join the peers in each org to the channel. I am unable to create the channel; when requesting channel creation, I get the following error:
SERVICE UNAVAILABLE - No raft leader.
How can I fix this issue? Can anyone please guide me on this? Thanks in advance.

NGINX reverse proxy not working to other docker container

I have a setup with two docker containers in a docker compose file. Now I want to use nginx's proxy_pass feature to proxy connections from nginx to the other container.
docker compose
version: '3.4'
services:
  reverse_proxy:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./error.log:/var/log/nginx/error.log
    ports:
      - "8081:8081"
  apigateway.api:
    image: ${REGISTRY:-configurator}/apigateway.api:${TAG:-latest}
    build:
      context: .
      dockerfile: src/Services/ApiGateway/ApiGateway.Api/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://0.0.0.0:80
    ports:
      - "58732:80"
nginx conf
worker_processes 1;
events {
    multi_accept on;
    worker_connections 65535;
}
http {
    charset utf-8;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    log_not_found off;
    types_hash_max_size 2048;
    client_max_body_size 16M;
    # MIME
    include mime.types;
    default_type application/octet-stream;
    upstream apigateway {
        server apigateway.api:58732;
    }
    server {
        listen 8081;
        # reverse proxy
        location /configurator-api-gw/ {
            proxy_pass http://apigateway/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
When I now access http://localhost:8081/configurator-api-gw/swagger, I get the following error in error.log. I have also tried different approaches and several other examples, but I cannot work out why this is not working.
2019/03/08 06:30:29 [error] 7#7: *6 connect() failed (111: Connection
refused) while connecting to upstream, client: 172.28.0.1, server: ,
request: "GET /configurator-api-gw/swagger HTTP/1.1", upstream:
"http://172.28.0.5:58732/swagger", host: "localhost:8081"
I have solved the problem. The problem is with "server apigateway.api:58732;": port 80 must be used here, because nginx reaches the container over the internal docker network, where the app listens on port 80. The 58732:80 mapping only applies on the host.
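A minimal sketch of the corrected upstream, assuming both services share the compose project's default network:

upstream apigateway {
    # use the container-internal port (80), not the host-published 58732
    server apigateway.api:80;
}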

Why can't my docker setup process a PHP application?

I have the following docker-compose file :
version: '2'
services:
  phpfpm:
    tty: true # Enables debugging capabilities when attached to this container.
    image: 'bitnami/php-fpm:5.6'
    labels:
      kompose.service.type: nodeport
    ports:
      - 9000:9000
    volumes:
      - /usr/share/nginx/html:/app
    networks:
      - app-tier
  nginx:
    image: 'bitnami/nginx:latest'
    depends_on:
      - phpfpm
    networks:
      - app-tier
    links:
      - phpfpm
    ports:
      - '80:8080'
      - '443:8443'
    volumes:
      - ./my_vhost.conf:/bitnami/nginx/conf/vhosts/vhost.conf
networks:
  app-tier:
    driver: bridge
and here's the contents of my_vhost.conf file :
server {
    listen 0.0.0.0:8080;
    root /app;
    index index.php index.html index.htm;
    location / {
        try_files $uri $uri/ =404;
    }
    location /evodms {
        root /app/evodms;
        try_files $uri $uri /evodms/index.php?$args;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass phpfpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location ~ /\.ht {
        deny all;
    }
}
I have my applications within the /usr/share/nginx/html folder. I have tried the following links:
http://localhost => works, shows the nginx homepage
http://localhost/page.html => just an ordinary HTML file, works
http://localhost/phpinfo.php => shows the PHP information, works
http://localhost/evodms => shows the following error messages from my docker logs:
nginx_1 | nginx: [warn] conflicting server name "" on 0.0.0.0:8080,
ignored
nginx_1 | 2017/12/08 15:13:29 [warn] 24#0: conflicting
server name "" on 0.0.0.0:8080, ignored
nginx_1 | 2017/12/08 15:15:04 [error] 25#0: *1 FastCGI sent in
stderr: "PHP message: PHP Warning:
include(/usr/share/nginx/html/evodms/lib/LibCakePhp20unit/lib/Cake/Core/CakePlugin.php):
failed to open stream: No such file or directory in
/app/evodms/lib/LibCakePhp20unit/lib/Cake/Core/App.php on line 505
nginx_1 | PHP message: PHP Warning: include(): Failed opening
'/usr/share/nginx/html/evodms/lib/LibCakePhp20unit/lib/Cake/Core/CakePlugin.php'
for inclusion (include_path='.:/opt/bitnami/php/lib/php') in
/app/evodms/lib/LibCakePhp20unit/lib/Cake/Core/App.php on line 505
nginx_1 | PHP message: PHP Fatal error: Class 'CakePlugin' not
found in /app/evodms/app/Config/bootstrap.php on line 66" while
reading response header from upstream, client: 172.26.0.1, server: ,
request: "GET /evodms/ HTTP/1.1", upstream:
"fastcgi://172.26.0.2:9000", host: "localhost"
nginx_1 | 172.26.0.1 - - [08/Dec/2017:15:15:04 +0000] "GET /evodms/
HTTP/1.1" 500 5 "-" "Mozilla/5.0 (X11; Linux x86_64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100
Safari/537.36"
Any clue on what's going on with the last link?
I think you should try the webdevops images, which already include php-fpm and a web server.
version: '3'
services:
  app:
    image: webdevops/php-nginx:7.2
    ports:
      - 9080:80
    volumes:
      - ./:/app
    environment:
      - PHP_DEBUGGER=xdebug
      - PHP_DISPLAY_ERRORS=1
      - WEB_DOCUMENT_ROOT=/app/public