Configure Squid3 proxy server on Ubuntu 11.10 with caching and logging

I have an Ubuntu 11.10 machine with Squid3 installed.
When I configure Squid with http_access allow all, everything works fine.
My current configuration (mostly default) is as follows:
2012/09/10 13:19:57| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
2012/09/10 13:19:57| Processing: acl manager proto cache_object
2012/09/10 13:19:57| Processing: acl localhost src 127.0.0.1/32 ::1
2012/09/10 13:19:57| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
2012/09/10 13:19:57| Processing: acl SSL_ports port 443
2012/09/10 13:19:57| Processing: acl Safe_ports port 80 # http
2012/09/10 13:19:57| Processing: acl Safe_ports port 21 # ftp
2012/09/10 13:19:57| Processing: acl Safe_ports port 443 # https
2012/09/10 13:19:57| Processing: acl Safe_ports port 70 # gopher
2012/09/10 13:19:57| Processing: acl Safe_ports port 210 # wais
2012/09/10 13:19:57| Processing: acl Safe_ports port 1025-65535 # unregistered ports
2012/09/10 13:19:57| Processing: acl Safe_ports port 280 # http-mgmt
2012/09/10 13:19:57| Processing: acl Safe_ports port 488 # gss-http
2012/09/10 13:19:57| Processing: acl Safe_ports port 591 # filemaker
2012/09/10 13:19:57| Processing: acl Safe_ports port 777 # multiling http
2012/09/10 13:19:57| Processing: acl CONNECT method CONNECT
2012/09/10 13:19:57| Processing: http_access allow manager localhost
2012/09/10 13:19:57| Processing: http_access deny manager
2012/09/10 13:19:57| Processing: http_access deny !Safe_ports
2012/09/10 13:19:57| Processing: http_access deny CONNECT !SSL_ports
2012/09/10 13:19:57| Processing: http_access allow localhost
2012/09/10 13:19:57| Processing: http_access deny all
2012/09/10 13:19:57| Processing: http_port 3128
2012/09/10 13:19:57| Processing: coredump_dir /var/spool/squid3
2012/09/10 13:19:57| Processing: refresh_pattern ^ftp: 1440 20% 10080
2012/09/10 13:19:57| Processing: refresh_pattern ^gopher: 1440 0% 1440
2012/09/10 13:19:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
2012/09/10 13:19:57| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
2012/09/10 13:19:57| Processing: refresh_pattern . 0 20% 4320
2012/09/10 13:19:57| Processing: http_access allow all
2012/09/10 13:19:57| Processing: cache_mem 512 MB
2012/09/10 13:19:57| Processing: logformat squid3 %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru
2012/09/10 13:19:57| Processing: access_log /home/panshul/squidCache/log/access.log squid3
The problem starts when I enable the following line:
access_log /home/panshul/squidCache/log/access.log
I start to get a "proxy server is refusing connections" error in the browser.
On commenting out the above line in my config, things go back to normal.
The second problem starts when I add the following line to my config:
cache_dir ufs /home/panshul/squidCache/cache 100 16 256
The Squid server then fails to start.
Any suggestions on what I am missing in the config? Please help!

Try adding "squid" to the end of your access_log directive.
access_log /home/panshul/squidCache/log/access.log squid
A possible solution is here: Ubuntu Forum - Squid config solution
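For the second problem (cache_dir), a common cause is that Squid cannot write to the cache and log directories, or that the cache directory structure has never been initialised. A minimal sketch of how to check this, assuming the paths from the question and that the Ubuntu squid3 package runs as its default "proxy" user:

# Make sure the log and cache directories exist and are writable by Squid's user.
sudo mkdir -p /home/panshul/squidCache/log /home/panshul/squidCache/cache
sudo chown -R proxy:proxy /home/panshul/squidCache

# Validate the config, then build the cache_dir swap directories before starting.
sudo squid3 -k parse
sudo squid3 -z

sudo service squid3 restart

If squid3 still refuses to start, /var/log/squid3/cache.log usually shows the exact reason.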

Related

How to test the connection between PostgreSQL, Matrix Synapse and the Nginx server on CentOS 8

I am having a problem connecting the nginx server, PostgreSQL and Matrix Synapse.
PostgreSQL
It is running (see the systemctl status below).
synapse1 is the database and roshyara is the user, which I have already added in PostgreSQL.
The pg_hba.conf file is as follows:
# TYPE  DATABASE  USER  ADDRESS        METHOD
local   all       all                  md5

# The same using local loopback TCP/IP connections.
#
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   127.0.0.1/32   md5
host    all       all   0.0.0.0/0      md5
host    all       all   ::1/128        md5
# IPv4 local connections:
host    all       all   127.0.0.1/32   md5
host    all       all   172.19.0.0/16  md5
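One way to test the PostgreSQL leg on its own might be to connect with psql using exactly the credentials from homeserver.yaml (user, password, database, host and port below are taken from the files in this question):

# Connect over TCP the same way Synapse does; a successful login and a
# version string mean pg_hba.conf and the credentials are fine.
psql -h 127.0.0.1 -p 5432 -U roshyara -d synapse1 -c 'SELECT version();'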
The Synapse homeserver.yaml file is as follows:
# Configuration file for Synapse.
#
# This is a YAML file: see [1] for a quick introduction. Note in particular
# that *indentation is important*: all the elements of a list or dictionary
# should have the same indentation.
#
# [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
#
# For more information on how to configure Synapse, including a complete accounting of
# each option, go to docs/usage/configuration/config_documentation.md or
# https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html

#server_name: "192.168.11.88"
server_name: 192.168.11.88
#
pid_file: /root/synapse1/homeserver.pid
#web_client: True
#soft_file_limit: 0
#
#type: http
#tls: true
#x_forwarded: true

#user_directory:
enabled: true

database:
  name: psycopg2
  args:
    user: roshyara
    password: 12345678
    database: synapse1
    host: 127.0.0.1
    port: 5432
    cp_min: 5
    cp_max: 10
    #database: /root/synapse1/homeserver.db
    # seconds of inactivity after which TCP should send a keepalive message to the server
    keepalives_idle: 10

    # the number of seconds after which a TCP keepalive message that is not
    # acknowledged by the server should be retransmitted
    #keepalives_interval: 10

    # the number of TCP keepalives that can be lost before the client's connection
    # to the server is considered dead
    # keepalives_count: 3

log_config: "/root/synapse1/192.168.11.88.log.config"
media_store_path: /root/synapse/media_store
#registration_shared_secret: ";6NfAHoYP#xt3vQpi-o^4-8rJDeBnujn*rLdk-R7h6:,&~rjm."
report_stats: true
macaroon_secret_key: "D=:YD_lc_^;QhiKhj.iGV&#AEW3rmcna6rAq9O~.2=b6^lwyr6"
form_secret: "r,:c#PA6PEwk3B9e7d=AKjUD--Iw#X+zB4R_C^4aB.zWGZt+K1"
signing_key_path: "/root/synapse/matrix.ginmbh.de.signing.key"
trusted_key_servers:
  - server_name: "matrix.org"
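The pasted homeserver.yaml has #type: http, #tls: true and #x_forwarded: true commented out but no active listeners: block, so it is not obvious from the file that Synapse is actually told to listen on port 8008. For reference, a typical listeners section that makes Synapse serve plain HTTP on 127.0.0.1:8008 behind a reverse proxy looks roughly like this (an illustrative sketch, not necessarily the exact settings this install needs):

listeners:
  # Plain-HTTP listener for nginx to proxy to; TLS is terminated by nginx.
  - port: 8008
    type: http
    tls: false
    x_forwarded: true
    bind_addresses: ['127.0.0.1', '::1']
    resources:
      - names: [client, federation]
        compress: false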
Synapse is also running.
The Nginx server is also running.
The nginx settings are as follows:
/etc/nginx/nginx.conf
#user
user nginx;
worker_processes auto;
# include config file

#include /etc/nginx/conf.d/*.conf;
#
#load_module modules/ngx_postgres_module.so;

#
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;


events {
    worker_connections 1024;
}


http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/matrix.conf file
#
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # For the federation port
    listen 8448 ssl http2 default_server;
    listen [::]:8448 ssl http2 default_server;

    server_name 192.168.11.88;
    #ssl on;
    ssl_certificate /etc/letsencrypt/live/matrix.ginmbh.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/matrix.ginmbh.de/privkey.pem;

    #location ~ ^(/_matrix|/_synapse/static) {
    location / {
        # note: do not add a path (even a single /) after the port in `proxy_pass`,
        # otherwise nginx will canonicalise the URI and cause signature verification
        # errors.
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;

        # Nginx by default only allows file uploads up to 1M in size
        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
        client_max_body_size 50M;

        # Synapse responses may be chunked, which is an HTTP/1.1 feature.
        proxy_http_version 1.1;
    }
}
TCP connections:
(env) [root@matrix-clon synapse1]# netstat -tunpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 822/sshd
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 2459/postmaster
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1105/nginx: master
tcp 0 0 0.0.0.0:8448 0.0.0.0:* LISTEN 1105/nginx: master
tcp6 0 0 :::22 :::* LISTEN 822/sshd
tcp6 0 0 :::443 :::* LISTEN 1105/nginx: master
tcp6 0 0 :::8448 :::* LISTEN 1105/nginx: master
tcp6 0 0 :::9090 :::* LISTEN 1/systemd
(env) [root@matrix-clon synapse1]#
(env) [root@matrix-clon synapse1]# ps aux | grep nginx
root 1105 0.0 0.0 44768 920 ? Ss 11:52 0:00 nginx: master process /usr/sbin/nginx
nginx 1106 0.0 0.1 77860 7688 ? S 11:52 0:02 nginx: worker process
nginx 1107 0.0 0.1 77468 5212 ? S 11:52 0:00 nginx: worker process
root 1202 0.0 0.0 7352 908 pts/1 S+ 11:52 0:00 tail -f /var/log/nginx/error.log
root 2615 0.0 0.0 12136 1152 pts/0 S+ 12:35 0:00 grep --color=auto nginx
The ports are also open:
(env) [root@matrix-clon synapse1]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: cockpit dhcpv6-client http https ssh
ports: 8448/tcp 5432/tcp
protocols:
forward: no
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
(env) [root@matrix-clon synapse1]#
However, nginx is showing the following errors. What can I do now, and how can I test which connection is causing the problem?
2023/02/12 12:08:38 [error] 1106#0: *249 connect() failed (111: Connection refused) while connecting to upstream, client: ::1, server: 192.168.11.88, request: "GET /_synapse/admin/v1/register HTTP/1.1", upstream: "http://[::1]:8008/_synapse/admin/v1/register", host: "localhost:8448"
2023/02/12 12:08:38 [warn] 1106#0: *249 upstream server temporarily disabled while connecting to upstream, client: ::1, server: 192.168.11.88, request: "GET /_synapse/admin/v1/register HTTP/1.1", upstream: "http://[::1]:8008/_synapse/admin/v1/register", host: "localhost:8448"
2023/02/12 12:08:38 [error] 1106#0: *249 connect() failed (111: Connection refused) while connecting to upstream, client: ::1, server: 192.168.11.88, request: "GET /_synapse/admin/v1/register HTTP/1.1", upstream: "http://127.0.0.1:8008/_synapse/admin/v1/register", host: "localhost:8448"
2023/02/12 12:08:38 [warn] 1106#0: *249 upstream server temporarily disabled while connecting to upstream, client: ::1, server: 192.168.11.88, request: "GET /_synapse/admin/v1/register HTTP/1.1", upstream: "http://127.0.0.1:8008/_synapse/admin/v1/register", host: "localhost:8448"
2023/02/12 12:11:52 [error] 1106#0: *294 connect() failed (111: Connection refused) while connecting to upstream, client: 10.176.8.89, server: 192.168.11.88, request: "GET /_matrix/static/ HTTP/2.0", upstream: "http://127.0.0.1:8008/_matrix/static/", host: "192.168.11.88"
2023/02/12 12:11:52 [warn] 1106#0: *294 upstream server temporarily disabled while connecting to upstream, client: 10.176.8.89, server: 192.168.11.88, request: "GET /_matrix/static/ HTTP/2.0", upstream: "http://127.0.0.1:8008/_matrix/static/", host: "192.168.11.88"
2023/02/12 12:11:52 [error] 1106#0: *294 connect() failed (111: Connection refused) while connecting to upstream, client: 10.176.8.89, server: 192.168.11.88, request: "GET /_matrix/static/ HTTP/2.0", upstream: "http://[::1]:8008/_matrix/static/", host: "192.168.11.88"
2023/02/12 12:11:52 [warn] 1106#0: *294 upstream server temporarily disabled while connecting to upstream, client: 10.176.8.89, server: 192.168.11.88, request: "GET /_matrix/static/ HTTP/2.0", upstream: "http://[::1]:8008/_matrix/static/", host: "192.168.11.88"
In short: I installed nginx, PostgreSQL and Matrix Synapse, created homeserver.yaml, and now nginx is reporting that the upstream server is not available.
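The nginx errors say connections to 127.0.0.1:8008 and [::1]:8008 are refused, and the netstat output above shows no listener on 8008 at all, which points at Synapse rather than nginx or PostgreSQL. A quick way to test each hop separately might be (ports taken from the configs above):

# 1. Is anything listening on the port nginx proxies to?
ss -tnlp | grep 8008

# 2. Ask Synapse directly, bypassing nginx; a healthy server returns a JSON
#    list of supported spec versions.
curl -v http://127.0.0.1:8008/_matrix/client/versions

# 3. Check Synapse's own log for startup errors (the file referenced by
#    log_config, or journalctl -u matrix-synapse if it runs under systemd;
#    the unit name may differ on your install).

If step 2 also fails, the problem is on the Synapse side (for example a missing listeners block or a database error in its log), not in nginx.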

ConfigMap being mounted as a folder instead of a file when trying to mount 2 files into a k8s pod

I have a problem when trying to mount 2 files into a pod.
Here's the StatefulSet manifest, including the volumes part:
# Source: squid/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: squid-dev
  labels:
    app: squid
    chart: squid-0.4.1
    release: squid-dev
    heritage: Helm
spec:
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  serviceName: squid-dev
  selector:
    matchLabels:
      app: squid
      chart: squid-0.4.1
      release: squid-dev
      heritage: Helm
  volumeClaimTemplates:
    - metadata:
        name: squid-dev-cache
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi
  template:
    metadata:
      annotations:
        checksum: checksum
        checksum/config: e51a4d6e552f890604aaa4c47c522653c25cad7ffec5680f67bbaadba6d3c3b2
        checksum/secret: secret
      labels:
        app: squid
        chart: squid-0.4.1
        release: squid-dev
        heritage: Helm
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - "squid"
      containers:
        - name: squid
          image: "honestica/squid:4-ff434982-c47b-47c3-b705-b2adb2730978"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: squid-dev-config
              mountPath: /etc/squid/squid.conf
              subPath: squid.conf
            - name: squid-dev-config
              mountPath: /etc/squid/squid.conf.backup
              subPath: squid.conf.backup
            - name: squid-dev-cache
              mountPath: /var/cache/squid
          ports:
            - name: port3128
              containerPort: 3128
              protocol: TCP
            - name: port8080
              containerPort: 8080
              protocol: TCP
          readinessProbe:
            tcpSocket:
              port: 3128
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            {}
      volumes:
        - name: squid-dev-config
          configMap:
            name: squid-dev
And this is the manifest of the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: squid-dev-config
  labels:
    app: squid
    chart: squid-0.4.1
    release: squid-dev
    heritage: Helm
data:
  squid.conf: |
    acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
    acl localnet src 10.0.0.0/8            # RFC 1918 local private network (LAN)
    acl localnet src 100.64.0.0/10         # RFC 6598 shared address space (CGN)
    acl localnet src 169.254.0.0/16        # RFC 3927 link-local (directly plugged) machines
    acl localnet src 172.16.0.0/12         # RFC 1918 local private network (LAN)
    acl localnet src 192.168.0.0/16        # RFC 1918 local private network (LAN)
    acl localnet src fc00::/7              # RFC 4193 local private network range
    acl localnet src fe80::/10             # RFC 4291 link-local (directly plugged) machines
    acl SSL_ports port 443 8443 8448 8248 8280
    acl Safe_ports port 80          # http
    acl Safe_ports port 21          # ftp
    acl Safe_ports port 443         # https
    acl Safe_ports port 70          # gopher
    acl Safe_ports port 210         # wais
    #acl Safe_ports port 1025–9999  # unregistered ports
    acl Safe_ports port 280         # http-mgmt
    acl Safe_ports port 488         # gss-http
    acl Safe_ports port 591         # filemaker
    acl Safe_ports port 777         # multiling http
    acl CONNECT method CONNECT
    ...
  squid.conf.backup: |
    acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
    acl localnet src 10.0.0.0/8            # RFC 1918 local private network (LAN)
    acl localnet src 100.64.0.0/10         # RFC 6598 shared address space (CGN)
    acl localnet src 169.254.0.0/16        # RFC 3927 link-local (directly plugged) machines
    acl localnet src 172.16.0.0/12         # RFC 1918 local private network (LAN)
    acl localnet src 192.168.0.0/16        # RFC 1918 local private network (LAN)
    acl localnet src fc00::/7              # RFC 4193 local private network range
    acl localnet src fe80::/10             # RFC 4291 link-local (directly plugged) machines
    acl SSL_ports port 443 8443 8448 8248 8280
    acl Safe_ports port 80          # http
    acl Safe_ports port 21          # ftp
    acl Safe_ports port 443         # https
    acl Safe_ports port 70          # gopher
    acl Safe_ports port 210         # wais
    #acl Safe_ports port 1025–9999  # unregistered ports
    acl Safe_ports port 280         # http-mgmt
    acl Safe_ports port 488         # gss-http
    acl Safe_ports port 591         # filemaker
    acl Safe_ports port 777         # multiling http
    acl CONNECT method CONNECT
    ...
After installing with Helm, I exec into a pod and list the folder /etc/squid; the result is below:
/ # ls -la /etc/squid/
total 388
drwxr-xr-x 1 root root 31 Mar 25 19:09 .
drwxr-xr-x 1 root root 19 Mar 25 19:09 ..
-rw-r--r-- 1 root root 692 Oct 30 23:43 cachemgr.conf
-rw-r--r-- 1 root root 692 Oct 30 23:43 cachemgr.conf.default
-rw-r--r-- 1 root root 1800 Oct 30 23:43 errorpage.css
-rw-r--r-- 1 root root 1800 Oct 30 23:43 errorpage.css.default
-rw-r--r-- 1 root root 12077 Oct 30 23:43 mime.conf
-rw-r--r-- 1 root root 12077 Oct 30 23:43 mime.conf.default
-rw-r--r-- 1 root root 3598 Mar 25 19:09 squid.conf
drwxrwxrwx 2 root root 6 Mar 25 19:09 squid.conf.backup
-rw-r--r-- 1 root root 2526 Oct 30 23:43 squid.conf.default
-rw-r--r-- 1 root root 344566 Oct 30 23:43 squid.conf.documented
Why is squid.conf mounted as a file while squid.conf.backup is mounted as a folder? I have changed the name squid.conf.backup to other names, but it still creates a folder instead of a file, and if I choose a name that matches an existing file in this folder (e.g. cachemgr.conf), I get:
Warning Failed 3s (x3 over 15s) kubelet Error: failed to start container "squid": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/411dc966-ed7d-494c-b9b7-4abfe1639f00/volume-subpaths/squid-dev-config/squid/0" to rootfs at "/etc/squid/cachemgr.conf" caused: mount through procfd: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Only squid.conf can be mounted as a file; anything else gets mounted as a folder.
How can I fix this? Can anyone explain this behavior, please?
I have searched on Google, and the fluent-bit Helm chart can mount 2 files into pods using only 1 ConfigMap: https://github.com/fluent/helm-charts/tree/main/charts/fluent-bit
Kubectl version:
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Helm version:
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}
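One thing that stands out, and may or may not be the cause, is that the StatefulSet's volume references configMap: name: squid-dev while the ConfigMap manifest above is named squid-dev-config. When a subPath points at a key that does not exist in the ConfigMap that is actually mounted, kubelet typically creates an empty directory at that path, which would explain a folder appearing instead of a file. A few commands to check what is really deployed (names taken from the manifests above; the chart path in the last command is a placeholder):

# Which ConfigMap does the pod template actually reference?
kubectl get statefulset squid-dev -o jsonpath='{.spec.template.spec.volumes}'

# Do both keys (squid.conf and squid.conf.backup) exist in that ConfigMap?
kubectl get configmap squid-dev -o yaml
kubectl get configmap squid-dev-config -o yaml

# Inspect the rendered chart before installing, to see which ConfigMap name Helm produces
# (replace ./squid with your chart path).
helm template squid-dev ./squid | grep -A3 'configMap:'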

Unable to connect to mongod, remotely or locally, on its IP address

My mongod server runs on IP 67.219.110.71 and the default port 27017.
Below is the command used to start mongod:
mongod --dbpath /data/db --fork --logpath /dev/null
After logging in to the Linux server 67.219.110.71, I'm able to telnet successfully like below:
telnet localhost 27017 ----> SUCCESS
However, when I telnet using the IP address, it does not connect; from the same host 67.219.110.71 or from a remote host, both fail:
telnet 67.219.110.71 27017 ----> FAILS
Note:
I have restarted the mongod service several times and after every configuration change.
Port 27010 was opened on the firewall using the firewall-cmd command.
I'm able to connect on the ssh port: telnet 67.219.110.71 22 ----> SUCCESS
Below is my mongod configuration file /etc/mongod.conf:
# mongod.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,67.219.110.71,0.0.0.0,::  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  # bindIpAll: true
  # bindIp: 0.0.0.0

security:
  authorization: "enabled"
Can you please suggest?
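One possible explanation worth ruling out: the startup command shown (mongod --dbpath /data/db --fork --logpath /dev/null) does not pass --config, so the net.bindIp setting in /etc/mongod.conf may never be applied and mongod falls back to listening on 127.0.0.1 only. The question also mentions both that command and restarting the mongod service, so it is worth confirming which instance actually owns port 27017, and that the firewall rule is for 27017 rather than 27010. A quick check, as a sketch:

# Which address is mongod actually bound to? 127.0.0.1:27017 means the config was not applied.
ss -tlnp | grep 27017

# Start mongod with the config file so the bindIp setting takes effect...
mongod --config /etc/mongod.conf

# ...or bind explicitly on the command line (MongoDB 3.6+):
# mongod --dbpath /data/db --fork --logpath /dev/null --bind_ip_all

# Make sure the firewall rule is for 27017, not 27010.
firewall-cmd --permanent --add-port=27017/tcp && firewall-cmd --reload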

filebeat cannot assign requested address

I am trying to read syslog information with Filebeat. I have Filebeat installed in Docker. I get this error message:
ERROR [syslog] syslog/input.go:150 Error starting the servererrorlisten tcp 192.168.1.142:514: bind: cannot assign requested address
Here is the config file filebeat.yml:
filebeat.inputs:
  - type: syslog
    format: rfc5424
    protocol.tcp:
      host: "192.168.1.142:514"

#========================== Elasticsearch output ===============================
output.elasticsearch:
  hosts: ["${ELASTICSEARCH_HOST}:9200"]
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}

#============================== Dashboards =====================================
setup.dashboards:
  enabled: true

#============================== Kibana =========================================
setup.kibana:
  host: "${KIBANA_HOST}:5601"
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}

#================================== General ===================================
name: test_pc_ecs_log
tags: ["syslog"]
Here is /etc/rsyslog.conf
# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
I have checked the connection with netstat and telnet, and both succeed:
netstat -4altunp | grep 514
tcp 0 0 0.0.0.0:514 0.0.0.0:* LISTEN 1332/rsyslogd
udp 0 0 0.0.0.0:514 0.0.0.0:* 1332/rsyslogd
I am following the config example from the syslog input docs.
I would like to ask if anyone has set up Filebeat for syslog reading.
Thanks
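Since Filebeat runs inside a Docker container, 192.168.1.142 (presumably the host's address) does not exist on any interface inside the container, which is exactly what "cannot assign requested address" means; on top of that, rsyslogd on the host is already bound to port 514. One way around this, as a sketch, is to bind to all interfaces inside the container on an unused port (5514 here is only an example) and publish it:

filebeat.inputs:
  - type: syslog
    format: rfc5424
    protocol.tcp:
      # Bind to every interface inside the container instead of the host IP.
      host: "0.0.0.0:5514"

Then run the container with the port published, e.g. docker run -p 5514:5514/tcp ... (or use host networking, in which case a port other than 514 is still needed because rsyslogd already owns it).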

Mongoose connection error: MongoError: failed to connect to server

I was running Mongo 3.4 on CentOS with authorization enabled. I needed to upgrade it to Mongo 3.6. I upgraded it, and now I'm not able to connect to it remotely by any means, neither with the shell nor with the Node server itself.
Here is the Mongoose connection:
const uri = 'mongodb://admin:12345@host:27017/db?authSource=admin';
mongoose.connect(uri);
Here is mongod.conf
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.

security:
  authorization: enabled

#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options
#auditLog:
#snmp:
Probably the upgrade was not successful, and the restart of the mongod service failed.
View the logs in /var/log/mongodb/mongod.log and check for any inconsistency in the mongod.conf.
Check if the service is up and if it is listening on port 27017.
service mongod status
netstat -tl | grep 27017 # or using the ss command
ss -tl | grep 27017
From the official documentation:
Starting in MongoDB 3.6, mongod and mongos instances bind to localhost by default. Remote clients cannot connect to an instance bound only to localhost. To override and bind to other ip addresses, use the net.bindIp configuration file setting or the --bind_ip command-line option to specify a list of ip addresses.
Try the following setting to enable the service to listen on all the interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
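After changing bindIp and restarting mongod, it may help to verify the listener and then test the remote login explicitly (the server IP below is a placeholder; user, password and authSource are taken from the connection string in the question):

# Expect 0.0.0.0:27017 here rather than 127.0.0.1:27017.
ss -tlnp | grep 27017

# Test the same credentials the Node app uses, from the remote machine.
mongo --host <server-ip> --port 27017 -u admin -p 12345 --authenticationDatabase admin

Binding to 0.0.0.0 exposes mongod on every interface, so it should be combined with the authorization: enabled setting already present and firewall rules that limit who can reach port 27017.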