How to test the connection between PostgreSQL, Matrix Synapse and an Nginx server on CentOS 8 - postgresql

I am having a problem connecting the Nginx server, PostgreSQL and Matrix Synapse.
PostgreSQL
It is running (see the systemctl status below).
synapse1 is the database and roshyara is the user, both of which I have already created in PostgreSQL.
The pg_hba.conf file is as follows:
# TYPE  DATABASE  USER  ADDRESS         METHOD
local   all       all                   md5

# The same using local loopback TCP/IP connections.
#
# TYPE  DATABASE  USER  ADDRESS         METHOD
host    all       all   127.0.0.1/32    md5
host    all       all   0.0.0.0/0       md5
host    all       all   ::1/128         md5
# IPv4 local connections:
host    all       all   127.0.0.1/32    md5
host    all       all   172.19.0.0/16   md5
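To rule out the PostgreSQL side on its own, the connection can be tested with the same credentials Synapse uses (a minimal sketch; user, password, host and database are the ones from homeserver.yaml below):

# connect over TCP the same way Synapse does (host 127.0.0.1, port 5432)
psql -h 127.0.0.1 -p 5432 -U roshyara -d synapse1
# a password prompt followed by a synapse1=> prompt means PostgreSQL,
# the role and pg_hba.conf are fine, and the problem is elsewhere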
The Synapse homeserver.yaml file is as follows:
# Configuration file for Synapse.
#
# This is a YAML file: see [1] for a quick introduction. Note in particular
# that *indentation is important*: all the elements of a list or dictionary
# should have the same indentation.
#
# [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
#
# For more information on how to configure Synapse, including a complete accounting of
# each option, go to docs/usage/configuration/config_documentation.md or
# https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html

#server_name: "192.168.11.88"
server_name: 192.168.11.88
#
pid_file: /root/synapse1/homeserver.pid
#web_client: True
#soft_file_limit: 0
#
#type: http
#tls: true
#x_forwarded: true

#user_directory:
enabled: true

database:
  name: psycopg2
  args:
    user: roshyara
    password: 12345678
    database: synapse1
    host: 127.0.0.1
    port: 5432
    cp_min: 5
    cp_max: 10
    #database: /root/synapse1/homeserver.db
    # seconds of inactivity after which TCP should send a keepalive message to the server
    keepalives_idle: 10

    # the number of seconds after which a TCP keepalive message that is not
    # acknowledged by the server should be retransmitted
    #keepalives_interval: 10

    # the number of TCP keepalives that can be lost before the client's connection
    # to the server is considered dead
    # keepalives_count: 3

log_config: "/root/synapse1/192.168.11.88.log.config"
media_store_path: /root/synapse/media_store
#registration_shared_secret: ";6NfAHoYP#xt3vQpi-o^4-8rJDeBnujn*rLdk-R7h6:,&~rjm."
report_stats: true
macaroon_secret_key: "D=:YD_lc_^;QhiKhj.iGV&#AEW3rmcna6rAq9O~.2=b6^lwyr6"
form_secret: "r,:c#PA6PEwk3B9e7d=AKjUD--Iw#X+zB4R_C^4aB.zWGZt+K1"
signing_key_path: "/root/synapse/matrix.ginmbh.de.signing.key"
trusted_key_servers:
  - server_name: "matrix.org"
Synapse is also running.
The Nginx server is also running.
The Nginx settings are as follows:
/etc/nginx/nginx.conf
#user
user nginx;
worker_processes auto;
# include config file

#include /etc/nginx/conf.d/*.conf;
#
#load_module modules/ngx_postgres_module.so;

#
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;


events {
    worker_connections 1024;
}


http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/matrix.conf file
#
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # For the federation port
    listen 8448 ssl http2 default_server;
    listen [::]:8448 ssl http2 default_server;

    server_name 192.168.11.88;
    #ssl on;
    ssl_certificate /etc/letsencrypt/live/matrix.ginmbh.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/matrix.ginmbh.de/privkey.pem;

    #location ~ ^(/_matrix|/_synapse/static) {
    location / {
        # note: do not add a path (even a single /) after the port in `proxy_pass`,
        # otherwise nginx will canonicalise the URI and cause signature verification
        # errors.
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;

        # Nginx by default only allows file uploads up to 1M in size
        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
        client_max_body_size 50M;

        # Synapse responses may be chunked, which is an HTTP/1.1 feature.
        proxy_http_version 1.1;
    }
}
TCP connections:
(env) [root@matrix-clon synapse1]# netstat -tunpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN  822/sshd
tcp        0      0 0.0.0.0:5432     0.0.0.0:*        LISTEN  2459/postmaster
tcp        0      0 0.0.0.0:443      0.0.0.0:*        LISTEN  1105/nginx: master
tcp        0      0 0.0.0.0:8448     0.0.0.0:*        LISTEN  1105/nginx: master
tcp6       0      0 :::22            :::*             LISTEN  822/sshd
tcp6       0      0 :::443           :::*             LISTEN  1105/nginx: master
tcp6       0      0 :::8448          :::*             LISTEN  1105/nginx: master
tcp6       0      0 :::9090          :::*             LISTEN  1/systemd
(env) [root@matrix-clon synapse1]#
(env) [root@matrix-clon synapse1]# ps aux | grep nginx
root      1105  0.0  0.0  44768   920 ?      Ss   11:52   0:00 nginx: master process /usr/sbin/nginx
nginx     1106  0.0  0.1  77860  7688 ?      S    11:52   0:02 nginx: worker process
nginx     1107  0.0  0.1  77468  5212 ?      S    11:52   0:00 nginx: worker process
root      1202  0.0  0.0   7352   908 pts/1  S+   11:52   0:00 tail -f /var/log/nginx/error.log
root      2615  0.0  0.0  12136  1152 pts/0  S+   12:35   0:00 grep --color=auto nginx
The ports are also open:
(env) [root@matrix-clon synapse1]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cockpit dhcpv6-client http https ssh
  ports: 8448/tcp 5432/tcp
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
(env) [root@matrix-clon synapse1]#
However, nginx is showing the following errors. What can I do now, and how can I test which connection is causing the problem?
2023/02/12 12:08:38 [error] 1106#0: *249 connect() failed (111: Connection refused) while connecting to upstream, client: ::1, server: 192.168.11.88, request: "GET /_synapse/admin/v1/register HTTP/1.1", upstream: "http://[::1]:8008/_synapse/admin/v1/register", host: "localhost:8448"
2023/02/12 12:08:38 [warn] 1106#0: *249 upstream server temporarily disabled while connecting to upstream, client: ::1, server: 192.168.11.88, request: "GET /_synapse/admin/v1/register HTTP/1.1", upstream: "http://[::1]:8008/_synapse/admin/v1/register", host: "localhost:8448"
2023/02/12 12:08:38 [error] 1106#0: *249 connect() failed (111: Connection refused) while connecting to upstream, client: ::1, server: 192.168.11.88, request: "GET /_synapse/admin/v1/register HTTP/1.1", upstream: "http://127.0.0.1:8008/_synapse/admin/v1/register", host: "localhost:8448"
2023/02/12 12:08:38 [warn] 1106#0: *249 upstream server temporarily disabled while connecting to upstream, client: ::1, server: 192.168.11.88, request: "GET /_synapse/admin/v1/register HTTP/1.1", upstream: "http://127.0.0.1:8008/_synapse/admin/v1/register", host: "localhost:8448"
2023/02/12 12:11:52 [error] 1106#0: *294 connect() failed (111: Connection refused) while connecting to upstream, client: 10.176.8.89, server: 192.168.11.88, request: "GET /_matrix/static/ HTTP/2.0", upstream: "http://127.0.0.1:8008/_matrix/static/", host: "192.168.11.88"
2023/02/12 12:11:52 [warn] 1106#0: *294 upstream server temporarily disabled while connecting to upstream, client: 10.176.8.89, server: 192.168.11.88, request: "GET /_matrix/static/ HTTP/2.0", upstream: "http://127.0.0.1:8008/_matrix/static/", host: "192.168.11.88"
2023/02/12 12:11:52 [error] 1106#0: *294 connect() failed (111: Connection refused) while connecting to upstream, client: 10.176.8.89, server: 192.168.11.88, request: "GET /_matrix/static/ HTTP/2.0", upstream: "http://[::1]:8008/_matrix/static/", host: "192.168.11.88"
2023/02/12 12:11:52 [warn] 1106#0: *294 upstream server temporarily disabled while connecting to upstream, client: 10.176.8.89, server: 192.168.11.88, request: "GET /_matrix/static/ HTTP/2.0", upstream: "http://[::1]:8008/_matrix/static/", host: "192.168.11.88"
What I did so far:
installed nginx
installed PostgreSQL
installed Matrix Synapse
created homeserver.yaml
Now the nginx server is showing that the upstream server is not available.
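To narrow down which hop is failing, each link can be tested separately from the server itself (a rough sketch; /_matrix/client/versions is used only because it is a convenient unauthenticated endpoint):

# 1. is anything listening on 8008? (nothing is, in the netstat output above)
ss -tnlp | grep 8008

# 2. talk to Synapse directly, bypassing nginx
curl -v http://127.0.0.1:8008/_matrix/client/versions

# 3. talk to Synapse through nginx (-k because the certificate is for another name)
curl -kv https://127.0.0.1:8448/_matrix/client/versions

# 4. if step 2 fails, check the Synapse log (its path is defined inside
#    /root/synapse1/192.168.11.88.log.config)
grep -i filename /root/synapse1/192.168.11.88.log.config

If step 2 already fails with connection refused, nginx is not the culprit: Synapse itself is not listening on port 8008, which matches the error log above.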

Related

K8s Liveness Probe keeps failing, but cURL from the Pod is working

I am having a strange issue with the Liveness Probe constantly failing, but connecting into the Pod and checking the endpoint with cURL looks good.
Here is the output of the CURL command.
curl -v localhost:7000/health
* Expire in 0 ms for 6 (transfer 0x5595637270f0)
...
* Expire in 0 ms for 1 (transfer 0x5595637270f0)
* Trying 127.0.0.1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x5595637270f0)
* Expire in 200 ms for 4 (transfer 0x5595637270f0)
* Connected to localhost (127.0.0.1) port 7000 (#0)
> GET /health HTTP/1.1
> Host: localhost:7000
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: gunicorn
< Date: Mon, 13 Feb 2023 19:41:44 GMT
< Connection: close
< Content-Type: text/html; charset=utf-8
< Content-Length: 24
<
* Closing connection 0
Now here is the section of the YAML that has the probe for the Pod:
containers:
  - name: flask-container
    image: path
    imagePullPolicy: Always
    volumeMounts:
      - name: cert-and-key
        mountPath: /etc/certs
        readOnly: true
    ports:
      - containerPort: 7000
    livenessProbe:
      httpGet:
        path: /health
        port: 7000
        scheme: HTTP
      initialDelaySeconds: 20
      periodSeconds: 20
imagePullSecrets:
  - name: pullsecret
For some reason the Liveness probe keeps failing after creating the Pod:
Liveness probe failed: Get "http://10.224.0.130:7000/health": dial tcp 10.224.0.130:7000: connect: connection refused
Thanks in advance for any pointers!
Fixed. The issue was with the Docker image exposing the container on localhost or 127.0.0.1 instead of correctly exposing on 0.0.0.0.
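For reference, with gunicorn (the Server header in the cURL output above) the difference is just the bind address; a minimal sketch, assuming the app module is called app:app:

# binds only to loopback: reachable from inside the container (hence cURL works),
# but not on the pod IP that the kubelet probes
gunicorn --bind 127.0.0.1:7000 app:app

# binds to all interfaces: reachable on the pod IP, so the liveness probe succeeds
gunicorn --bind 0.0.0.0:7000 app:app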

ConfigMap being mounted as a folder instead of a file when trying to mount 2 files into a k8s pod

I have a problem when trying to mount 2 files into a pod.
Here's the volumes part of the manifest file:
# Source: squid/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: squid-dev
  labels:
    app: squid
    chart: squid-0.4.1
    release: squid-dev
    heritage: Helm
spec:
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  serviceName: squid-dev
  selector:
    matchLabels:
      app: squid
      chart: squid-0.4.1
      release: squid-dev
      heritage: Helm
  volumeClaimTemplates:
    - metadata:
        name: squid-dev-cache
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi
  template:
    metadata:
      annotations:
        checksum: checksum
        checksum/config: e51a4d6e552f890604aaa4c47c522653c25cad7ffec5680f67bbaadba6d3c3b2
        checksum/secret: secret
      labels:
        app: squid
        chart: squid-0.4.1
        release: squid-dev
        heritage: Helm
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - "squid"
      containers:
        - name: squid
          image: "honestica/squid:4-ff434982-c47b-47c3-b705-b2adb2730978"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: squid-dev-config
              mountPath: /etc/squid/squid.conf
              subPath: squid.conf
            - name: squid-dev-config
              mountPath: /etc/squid/squid.conf.backup
              subPath: squid.conf.backup
            - name: squid-dev-cache
              mountPath: /var/cache/squid
          ports:
            - name: port3128
              containerPort: 3128
              protocol: TCP
            - name: port8080
              containerPort: 8080
              protocol: TCP
          readinessProbe:
            tcpSocket:
              port: 3128
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            {}
      volumes:
        - name: squid-dev-config
          configMap:
            name: squid-dev
And this is the manifest of the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: squid-dev-config
  labels:
    app: squid
    chart: squid-0.4.1
    release: squid-dev
    heritage: Helm
data:
  squid.conf: |
    acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
    acl localnet src 10.0.0.0/8            # RFC 1918 local private network (LAN)
    acl localnet src 100.64.0.0/10         # RFC 6598 shared address space (CGN)
    acl localnet src 169.254.0.0/16        # RFC 3927 link-local (directly plugged) machines
    acl localnet src 172.16.0.0/12         # RFC 1918 local private network (LAN)
    acl localnet src 192.168.0.0/16        # RFC 1918 local private network (LAN)
    acl localnet src fc00::/7              # RFC 4193 local private network range
    acl localnet src fe80::/10             # RFC 4291 link-local (directly plugged) machines
    acl SSL_ports port 443 8443 8448 8248 8280
    acl Safe_ports port 80                 # http
    acl Safe_ports port 21                 # ftp
    acl Safe_ports port 443                # https
    acl Safe_ports port 70                 # gopher
    acl Safe_ports port 210                # wais
    #acl Safe_ports port 1025–9999         # unregistered ports
    acl Safe_ports port 280                # http-mgmt
    acl Safe_ports port 488                # gss-http
    acl Safe_ports port 591                # filemaker
    acl Safe_ports port 777                # multiling http
    acl CONNECT method CONNECT
    ...
  squid.conf.backup: |
    acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
    acl localnet src 10.0.0.0/8            # RFC 1918 local private network (LAN)
    acl localnet src 100.64.0.0/10         # RFC 6598 shared address space (CGN)
    acl localnet src 169.254.0.0/16        # RFC 3927 link-local (directly plugged) machines
    acl localnet src 172.16.0.0/12         # RFC 1918 local private network (LAN)
    acl localnet src 192.168.0.0/16        # RFC 1918 local private network (LAN)
    acl localnet src fc00::/7              # RFC 4193 local private network range
    acl localnet src fe80::/10             # RFC 4291 link-local (directly plugged) machines
    acl SSL_ports port 443 8443 8448 8248 8280
    acl Safe_ports port 80                 # http
    acl Safe_ports port 21                 # ftp
    acl Safe_ports port 443                # https
    acl Safe_ports port 70                 # gopher
    acl Safe_ports port 210                # wais
    #acl Safe_ports port 1025–9999         # unregistered ports
    acl Safe_ports port 280                # http-mgmt
    acl Safe_ports port 488                # gss-http
    acl Safe_ports port 591                # filemaker
    acl Safe_ports port 777                # multiling http
    acl CONNECT method CONNECT
    ...
After using Helm to install, I exec into the pod and list the folder /etc/squid; the result is below:
/ # ls -la /etc/squid/
total 388
drwxr-xr-x 1 root root 31 Mar 25 19:09 .
drwxr-xr-x 1 root root 19 Mar 25 19:09 ..
-rw-r--r-- 1 root root 692 Oct 30 23:43 cachemgr.conf
-rw-r--r-- 1 root root 692 Oct 30 23:43 cachemgr.conf.default
-rw-r--r-- 1 root root 1800 Oct 30 23:43 errorpage.css
-rw-r--r-- 1 root root 1800 Oct 30 23:43 errorpage.css.default
-rw-r--r-- 1 root root 12077 Oct 30 23:43 mime.conf
-rw-r--r-- 1 root root 12077 Oct 30 23:43 mime.conf.default
-rw-r--r-- 1 root root 3598 Mar 25 19:09 squid.conf
drwxrwxrwx 2 root root 6 Mar 25 19:09 squid.conf.backup
-rw-r--r-- 1 root root 2526 Oct 30 23:43 squid.conf.default
-rw-r--r-- 1 root root 344566 Oct 30 23:43 squid.conf.documented
Why is squid.conf a file while squid.conf.backup is a folder? I have changed the name of squid.conf.backup to other names, but it still creates a folder instead of a file; and if I choose a name that matches an existing file in this folder, e.g. cachemgr.conf, I get:
Warning Failed 3s (x3 over 15s) kubelet Error: failed to start container "squid": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/411dc966-ed7d-494c-b9b7-4abfe1639f00/volume-subpaths/squid-dev-config/squid/0" to rootfs at "/etc/squid/cachemgr.conf" caused: mount through procfd: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Only squid.conf can be mounted as a file; anything else will be mounted as a folder.
How can I fix this? Can anyone explain this behavior, please?
I have searched on Google, and the fluent-bit Helm chart can mount 2 files into pods using only 1 ConfigMap: https://github.com/fluent/helm-charts/tree/main/charts/fluent-bit
Kubectl version:
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Helm version:
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}
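One thing that may be worth checking (an assumption on my part, since the volume above references a ConfigMap named squid-dev while the ConfigMap shown is named squid-dev-config): whether the ConfigMap that is actually mounted contains a squid.conf.backup key at all, as a subPath that points at a missing key can end up mounted as an empty directory. A quick way to inspect it:

# list the ConfigMaps that exist in the namespace
kubectl get configmaps

# show the data keys of the ConfigMap referenced by the volume
kubectl get configmap squid-dev -o yaml

# compare with the one defined above
kubectl get configmap squid-dev-config -o yaml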

Kubernetes: using container as proxy

I have the following pod setup:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-test
  namespace: test
spec:
  containers:
    - name: container-a
      image: <Image>
      imagePullPolicy: Always
      ports:
        - name: http-port
          containerPort: 8083
    - name: container-proxy
      image: <Image>
      ports:
        - name: server
          containerPort: 7487
          protocol: TCP
    - name: container-b
      image: <Image>
I exec into container-b and execute the following curl request:
curl --proxy localhost:7487 -X POST http://localhost:8083/
For some reason, http://localhost:8083/ is being called directly and the proxy is ignored. Can someone explain why this can happen?
Environment
I replicated the scenario on kubeadm and GCP GKE kubernetes clusters to see if there is any difference - no, they behave the same, so I assume AWS EKS should behave the same too.
I created a pod with 3 containers within:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-pod
spec:
  containers:
    - image: ubuntu   # client where connection will go from
      name: ubuntu
      command: ['bash', '-c', 'while true ; do sleep 60; done']
    - name: proxy-container   # proxy - that's obvious
      image: ubuntu
      command: ['bash', '-c', 'while true ; do sleep 60; done']
    - name: server   # regular nginx server which listens to port 80
      image: nginx
For this test stand I installed the squid proxy on proxy-container (what squid is and how to install it). By default it listens on port 3128.
curl was installed on ubuntu, the client container (plus the net-tools package as a bonus; it has netstat).
Tests
Note!
I used 127.0.0.1 instead of localhost because squid has some name-resolution issues; I didn't find an easy/fast solution.
curl is used with the -v flag for verbosity.
We have proxy on 3128 and nginx on 80 within the pod:
# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3128 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
curl directly:
# curl 127.0.0.1 -vI
* Trying 127.0.0.1:80... # connection goes directly to port 80 which is expected
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
curl via proxy:
# curl --proxy 127.0.0.1:3128 127.0.0.1:80 -vI
* Trying 127.0.0.1:3128... # connecting to proxy!
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connected to proxy
> HEAD http://127.0.0.1:80/ HTTP/1.1 # going further to nginx on `80`
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
squid logs:
# cat /var/log/squid/access.log
1635161756.048 1 127.0.0.1 TCP_MISS/200 958 GET http://127.0.0.1/ - HIER_DIRECT/127.0.0.1 text/html
1635163617.361 0 127.0.0.1 TCP_MEM_HIT/200 352 HEAD http://127.0.0.1/ - HIER_NONE/- text/html
NO_PROXY
The NO_PROXY environment variable might be set up; however, by default it's empty.
I added it manually:
# export NO_PROXY=127.0.0.1
# printenv | grep -i proxy
NO_PROXY=127.0.0.1
Now curl request via proxy will look like:
# curl --proxy 127.0.0.1:3128 127.0.0.1 -vI
* Uses proxy env variable NO_PROXY == '127.0.0.1' # curl detects NO_PROXY envvar
* Trying 127.0.0.1:80... # and ignores the proxy, connection goes directly
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
It's possible to override the NO_PROXY envvar while executing the curl command with the --noproxy flag.
--noproxy no-proxy-list
Comma-separated list of hosts which do not use a proxy, if one is specified. The only wildcard is a single *
character, which matches all hosts, and effectively disables the
proxy. Each name in this list is matched as either a domain which
contains the hostname, or the hostname itself. For example, local.com
would match local.com, local.com:80, and www.local.com, but not
www.notlocal.com. (Added in 7.19.4).
Example:
# curl --proxy 127.0.0.1:3128 --noproxy "" 127.0.0.1 -vI
* Trying 127.0.0.1:3128... # connecting to proxy as it was supposed to
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connection to proxy is established
> HEAD http://127.0.0.1/ HTTP/1.1 # connection to nginx on port 80
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
This proves that the proxy works with localhost.
Another option is that something is incorrectly configured in the proxy used in the question. You can create this pod, install squid and curl into both containers, and try it yourself.
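If you want to reproduce this test stand, a rough sketch of the steps (the manifest file name and the package installation commands are assumptions for stock ubuntu/nginx images):

kubectl apply -f proxy-pod.yaml

# install squid in the proxy container and start it
kubectl exec -it proxy-pod -c proxy-container -- bash -c 'apt-get update && apt-get install -y squid && service squid start'

# install curl and net-tools in the client container
kubectl exec -it proxy-pod -c ubuntu -- bash -c 'apt-get update && apt-get install -y curl net-tools'

# run the curl test from the client container
kubectl exec -it proxy-pod -c ubuntu -- curl --proxy 127.0.0.1:3128 127.0.0.1:80 -vI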

Docker-compose nginx with letsencrypt -> ln: failed to create symbolic link - Not supported

Setup: Docker on OpenSuse-Server on local Intel-NUC
Here is my docker-compose.yml:
version: '3.5'

services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    dns:
      - 192.168.178.15
    ports:
      - 443:443
      - 80:80
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:ro
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    dns:
      - 192.168.178.15 # needed to access the letsencrypt API
    volumes:
      - ./proxy/acme:/etc/acme.sh
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:rw
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_PROXY_CONTAINER: "nextcloud-proxy"
      DEFAULT_EMAIL: "mymail@pm.me"
    restart: unless-stopped

  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    dns:
      - 192.168.178.15
    volumes:
      - db-data2:/var/lib/mysql:rw
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=Tstrong
      - MYSQL_PASSWORD=Tstrong
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped

  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    dns:
      - 192.168.178.15
      # - 8.8.8.8
    # ports:
    #   - 10000:80 -> makes the app available without nginx and ssl
    volumes:
      - nextcloud-stage:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=mydomain.chickenkiller.com
      - LETSENCRYPT_HOST=mydomain.chickenkiller.com
      - LETSENCRYPT_EMAIL=mymail@pm.me
      - "ServerName=nextcloud"
    restart: unless-stopped

volumes:
  nextcloud-stage:
  db-data2:

networks:
  nextcloud_network:
    # external:
    driver: bridge
    name: nginx-proxy
And then it throws this warning/info and SSL does not work.
The application would only be available over port 80 if I opened the port on this container - which is clearly wrong.
So is this warning actually a problem, or am I missing something else?
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.dhparam.pem': Not supported
I need to specify the DNS so the letsencrypt container is able to communicate with the API, so I pointed the Docker DNS to my local router 192.168.178.15. Do I need this setting also for the other services? Or is that the problem that breaks the symbolic link?
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Your cert is in /etc/acme.sh/mymail@pm.me/mydomain.chickenkiller.com/mydomain.chickenkiller.com.cer
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Your cert key is in /etc/acme.sh/mymail@pm.me/mydomain.chickenkiller.com/mydomain.chickenkiller.com.key
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] The intermediate CA cert is in /etc/acme.sh/mymail@pm.me/mydomain.chickenkiller.com/ca.cer
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] And the full chain certs is there: /etc/acme.sh/mymail@pm.me/mydomain.chickenkiller.com/fullchain.cer
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Installing cert to:/etc/nginx/certs/mydomain.chickenkiller.com/cert.pem
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Installing CA to:/etc/nginx/certs/mydomain.chickenkiller.com/chain.pem
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Installing key to:/etc/nginx/certs/mydomain.chickenkiller.com/key.pem
nextcloud-letsencrypt | [Wed Dec 23 10:47:20 CET 2020] Installing full chain to:/etc/nginx/certs/mydomain.chickenkiller.com/fullchain.pem
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.crt': Not supported
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.key': Not supported
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.dhparam.pem': Not supported
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.chain.pem': Not supported
nextcloud-letsencrypt | Reloading nginx proxy (ac49344ba0acb6026615358abf5568dc6a1df173a308a936b615fa00e413f767)...
nextcloud-letsencrypt | 2020/12/23 09:47:20 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
nextcloud-letsencrypt | 2020/12/23 09:47:20 [notice] 115#115: signal process started
nextcloud-letsencrypt | Sleep for 3600s
Nginx throws Error 503 when accessing the application from WAN using the IP (and not the DynDNS)
So local port forwarding should also be correct, right? Ports 80 and 443 are forwarded from the router to the NUC.
Using DynDNS to access the application from WAN leads to an SSL error (HSTS).
So I think it is just the connection (symbolic link) from the certificate folder to the application?
Let me know if I can provide more information/logs
Cheers
UPDATE:
Here is the NGINX config from /proxy/conf.d/default.conf
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    ''      $scheme;
}

# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    ''      $server_port;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    ''      close;
}

# Apply fix for very long server names
server_names_hash_bucket_size 128;

# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;

# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https   on;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';

access_log off;

ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;

resolver 127.0.0.11;

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;

# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";

server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}

server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 443 ssl http2;
    access_log /var/log/nginx/access.log vhost;
    return 503;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/default.crt;
    ssl_certificate_key /etc/nginx/certs/default.key;
}

# mydomain.chickenkiller.com
upstream mydomain.chickenkiller.com {
    ## Can be connected with "nginx-proxy" network
    # nextcloud-app
    server 172.23.0.5:80;
}

server {
    server_name mydomain.chickenkiller.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    include /etc/nginx/vhost.d/default;
    location / {
        proxy_pass http://mydomain.chickenkiller.com;
    }
}

server {
    server_name mydomain.chickenkiller.com;
    listen 443 ssl http2;
    access_log /var/log/nginx/access.log vhost;
    return 500;
    ssl_certificate /etc/nginx/certs/default.crt;
    ssl_certificate_key /etc/nginx/certs/default.key;
}
I had this exact issue. For me, the volume the certificates were being saved to was a mounted file share in Azure, and those don't support symlinks out of the box.
See: https://learn.microsoft.com/en-us/azure/storage/files/storage-troubleshoot-linux-file-connection-problems.
I am using autofs, but adding ",mfsymlinks" to the end of the "fstype" part worked fine once restarted.
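For anyone mounting the share via fstab/CIFS rather than autofs, the equivalent change is adding mfsymlinks to the mount options; a sketch with placeholder storage account, share and credentials file:

# /etc/fstab - Azure Files share with symlink emulation (mfsymlinks) enabled
//mystorageaccount.file.core.windows.net/myshare  /mnt/certs  cifs  nofail,credentials=/etc/smbcredentials/mystorageaccount.cred,serverino,mfsymlinks  0  0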

configure Squid3 proxy server on Ubuntu with caching and logging

I have an Ubuntu 11.10 machine with Squid3 installed.
When I configure squid with http_access allow all, everything works fine.
My current configuration, mostly default, is as follows:
2012/09/10 13:19:57| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
2012/09/10 13:19:57| Processing: acl manager proto cache_object
2012/09/10 13:19:57| Processing: acl localhost src 127.0.0.1/32 ::1
2012/09/10 13:19:57| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
2012/09/10 13:19:57| Processing: acl SSL_ports port 443
2012/09/10 13:19:57| Processing: acl Safe_ports port 80 # http
2012/09/10 13:19:57| Processing: acl Safe_ports port 21 # ftp
2012/09/10 13:19:57| Processing: acl Safe_ports port 443 # https
2012/09/10 13:19:57| Processing: acl Safe_ports port 70 # gopher
2012/09/10 13:19:57| Processing: acl Safe_ports port 210 # wais
2012/09/10 13:19:57| Processing: acl Safe_ports port 1025-65535 # unregistered ports
2012/09/10 13:19:57| Processing: acl Safe_ports port 280 # http-mgmt
2012/09/10 13:19:57| Processing: acl Safe_ports port 488 # gss-http
2012/09/10 13:19:57| Processing: acl Safe_ports port 591 # filemaker
2012/09/10 13:19:57| Processing: acl Safe_ports port 777 # multiling http
2012/09/10 13:19:57| Processing: acl CONNECT method CONNECT
2012/09/10 13:19:57| Processing: http_access allow manager localhost
2012/09/10 13:19:57| Processing: http_access deny manager
2012/09/10 13:19:57| Processing: http_access deny !Safe_ports
2012/09/10 13:19:57| Processing: http_access deny CONNECT !SSL_ports
2012/09/10 13:19:57| Processing: http_access allow localhost
2012/09/10 13:19:57| Processing: http_access deny all
2012/09/10 13:19:57| Processing: http_port 3128
2012/09/10 13:19:57| Processing: coredump_dir /var/spool/squid3
2012/09/10 13:19:57| Processing: refresh_pattern ^ftp: 1440 20% 10080
2012/09/10 13:19:57| Processing: refresh_pattern ^gopher: 1440 0% 1440
2012/09/10 13:19:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
2012/09/10 13:19:57| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
2012/09/10 13:19:57| Processing: refresh_pattern . 0 20% 4320
2012/09/10 13:19:57| Processing: http_access allow all
2012/09/10 13:19:57| Processing: cache_mem 512 MB
2012/09/10 13:19:57| Processing: logformat squid3 %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru
2012/09/10 13:19:57| Processing: access_log /home/panshul/squidCache/log/access.log squid3
The problem starts when I enable the following line:
access_log /home/panshul/squidCache/log/access.log
I start to get a "proxy server is refusing connections" error in the browser.
On commenting out the above line in my config, things go back to normal.
The second problem starts when I add the following line to my config:
cache_dir ufs /home/panshul/squidCache/cache 100 16 256
The squid server fails to start.
Any suggestions on what I am missing in the config? Please help!
Try adding "squid" to the end of your access_log directive.
access_log /home/panshul/squidCache/log/access.log squid
Solution here possibly... Ubuntu Forum - Squid config solution
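For the two failures themselves, the usual suspects are file permissions and an uninitialised cache directory, since squid3 on Ubuntu runs as the proxy user; a sketch of what I would check (paths taken from the config above, the ownership fix is an assumption):

# make the log and cache locations writable by the user squid runs as
sudo mkdir -p /home/panshul/squidCache/log /home/panshul/squidCache/cache
sudo chown -R proxy:proxy /home/panshul/squidCache

# create the cache directory structure declared by cache_dir, then restart
sudo squid3 -z
sudo service squid3 restart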