MongoDB: [Errno 111] Connection refused - mongodb

So I have this application that continuously pushes data to my database.
It has been working fine for over a year now. But in the past few days I keep getting this error:
File "/usr/lib/python2.7/site-packages/veristats_backend-2.0.0_0-py2.7.egg/drivertest/utils.py", line 141, in lock
db.find_one_and_update(key, {'$set': {'locked': False}})
File "/usr/lib64/python2.7/site-packages/pymongo/collection.py", line 2454, in find_one_and_update
sort, upsert, return_document, **kwargs)
File "/usr/lib64/python2.7/site-packages/pymongo/collection.py", line 2222, in __find_and_modify
with self._socket_for_writes() as sock_info:
File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/lib64/python2.7/site-packages/pymongo/mongo_client.py", line 868, in _get_socket
server = self._get_topology().select_server(selector)
File "/usr/lib64/python2.7/site-packages/pymongo/topology.py", line 214, in select_server
address))
File "/usr/lib64/python2.7/site-packages/pymongo/topology.py", line 189, in select_servers
self._error_message(selector))
ServerSelectionTimeoutError: l-veristats-01:27017: [Errno 111] Connection refused
My application runs for a minute or so and then crashes like this.
EDIT
When this happens I get:
# systemctl status mongodb
Unit mongodb.service could not be found.
I also see this line in the log:
2019-08-07T09:15:14.475+0300 I COMMAND [conn137619] warning: log line attempted (2200kB) over max size (10kB), printing beginning and end ...
Could it be that this is what causes the mongod service to stop?
EDIT2
I also see that the OOM killer is what is actually killing the service:
# grep -i 'killed process' /var/log/messages
Aug 5 22:33:12 l-veristats-01 kernel: Killed process 18849 (mongod) total-vm:4871984kB, anon-rss:2287364kB, file-rss:0kB, shmem-rss:0kB
Aug 5 22:33:12 l-veristats-01 kernel: Killed process 18862 (mongod) total-vm:4871984kB, anon-rss:2313000kB, file-rss:0kB, shmem-rss:0kB
Aug 5 22:33:12 l-veristats-01 kernel: Killed process 18896 (mongod) total-vm:4871984kB, anon-rss:2312572kB, file-rss:0kB, shmem-rss:0kB
Aug 5 22:33:12 l-veristats-01 kernel: Killed process 19084 (mongod) total-vm:4871984kB, anon-rss:2312600kB, file-rss:0kB, shmem-rss:0kB
Aug 6 09:12:29 l-veristats-01 kernel: Killed process 1586 (mongod) total-vm:4052764kB, anon-rss:1544224kB, file-rss:0kB, shmem-rss:0kB
Aug 6 09:31:13 l-veristats-01 kernel: Killed process 31266 (mongod) total-vm:3517068kB, anon-rss:1589668kB, file-rss:0kB, shmem-rss:0kB
Aug 6 09:31:13 l-veristats-01 kernel: Killed process 31277 (mongod) total-vm:3517068kB, anon-rss:1597916kB, file-rss:0kB, shmem-rss:0kB
Aug 6 09:34:04 l-veristats-01 kernel: Killed process 31613 (veristats_drive) total-vm:3179060kB, anon-rss:1397368kB, file-rss:132kB, shmem-rss:8kB
Aug 6 09:46:13 l-veristats-01 kernel: Killed process 2781 (mongod) total-vm:3265204kB, anon-rss:676164kB, file-rss:0kB, shmem-rss:0kB
Aug 6 11:14:45 l-veristats-01 kernel: Killed process 1616 (mongod) total-vm:3566624kB, anon-rss:1608752kB, file-rss:0kB, shmem-rss:0kB
Aug 6 11:50:55 l-veristats-01 kernel: Killed process 2004 (mongod) total-vm:3220196kB, anon-rss:433892kB, file-rss:0kB, shmem-rss:0kB
Aug 6 13:31:18 l-veristats-01 kernel: Killed process 6914 (mongod) total-vm:2952284kB, anon-rss:509968kB, file-rss:0kB, shmem-rss:0kB
Aug 6 13:39:21 l-veristats-01 kernel: Killed process 7894 (veristats_drive) total-vm:2543372kB, anon-rss:1404448kB, file-rss:0kB, shmem-rss:0kB
Aug 6 13:39:33 l-veristats-01 kernel: Killed process 7895 (veristats_drive) total-vm:2550052kB, anon-rss:1277732kB, file-rss:0kB, shmem-rss:0kB
Aug 6 13:39:43 l-veristats-01 kernel: Killed process 7899 (veristats_drive) total-vm:2853180kB, anon-rss:1854872kB, file-rss:0kB, shmem-rss:0kB
Aug 6 13:41:15 l-veristats-01 kernel: Killed process 7579 (mongod) total-vm:3361372kB, anon-rss:713976kB, file-rss:0kB, shmem-rss:0kB
Aug 6 15:31:59 l-veristats-01 kernel: Killed process 5304 (veristats_drive) total-vm:2645104kB, anon-rss:1579280kB, file-rss:0kB, shmem-rss:0kB
Aug 6 15:32:43 l-veristats-01 kernel: Killed process 5309 (veristats_drive) total-vm:4463076kB, anon-rss:2628364kB, file-rss:0kB, shmem-rss:0kB
Aug 6 15:36:04 l-veristats-01 kernel: Killed process 1656 (mongod) total-vm:3106644kB, anon-rss:996140kB, file-rss:0kB, shmem-rss:0kB
Aug 6 15:36:20 l-veristats-01 kernel: Killed process 5682 (veristats_drive) total-vm:2127152kB, anon-rss:1161560kB, file-rss:0kB, shmem-rss:0kB
Aug 6 17:36:59 l-veristats-01 kernel: Killed process 1911 (mongod) total-vm:3687888kB, anon-rss:907436kB, file-rss:0kB, shmem-rss:0kB
Aug 7 09:15:17 l-veristats-01 kernel: Killed process 9349 (mongod) total-vm:4044552kB, anon-rss:1413928kB, file-rss:0kB, shmem-rss:0kB
Aug 7 09:30:55 l-veristats-01 kernel: Killed process 17966 (mongod) total-vm:3254812kB, anon-rss:1058928kB, file-rss:0kB, shmem-rss:0kB
Aug 7 09:35:15 l-veristats-01 kernel: Killed process 18993 (mongod) total-vm:2405540kB, anon-rss:361824kB, file-rss:0kB, shmem-rss:0kB

Have you changed your MongoDB version recently? Try updating your pymongo driver.
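If it helps, here is a rough sketch of the checks implied above; the unit name mongod is an assumption (the question's systemctl status mongodb reported no such unit), so adjust to whatever unit your install actually uses:
pip install --upgrade pymongo                              # update the driver, as suggested
systemctl status mongod                                    # assumed unit name; see whether the server itself is up
journalctl -u mongod --since today                         # look for crashes and restarts
grep -i 'killed process' /var/log/messages | grep mongod   # confirm the OOM kills shown above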

Related

failing k8s mongodb pod after some time - DBPathInUse: Unable to create/open the lock file

I'm running a Kubernetes cluster (bare metal) with a MongoDB replica set (version 4, as my server cannot handle newer versions; 2 replicas), which initially works, but from time to time (sometimes after 24 hours, sometimes after 10 days) one or more MongoDB pods fail.
Warning BackOff 2m9s (x43454 over 6d13h) kubelet Back-off restarting failed container
The relevant part of the logs seems to be:
DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /bitnami/mongodb/data/db directory
But I did not change anything, and initially it works. The second pod is currently running as well (but it will likely fail in the next few days).
I'm using Longhorn for storage (before that I tried NFS), and I installed MongoDB using the Bitnami Helm chart with these values:
image:
  registry: docker.io
  repository: bitnami/mongodb
  digest: "sha256:916202d7af766dd88c2fff63bf711162c9d708ac7a3ffccd2aa812e3f03ae209" # tag: 4.4.15
  pullPolicy: IfNotPresent
architecture: replicaset
replicaCount: 2
updateStrategy:
  type: RollingUpdate
containerPorts:
  mongodb: 27017
auth:
  enabled: true
  rootUser: root
  rootPassword: "password"
  usernames: ["user"]
  passwords: ["userpass"]
  databases: ["db"]
service:
  portName: mongodb
  ports:
    mongodb: 27017
persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 8Gi
volumePermissions:
  enabled: true
livenessProbe:
  enabled: false
readinessProbe:
  enabled: false
Logs:
mongodb 21:25:05.55 INFO ==> Advertised Hostname: mongodb-1.mongodb-headless.mongodb.svc.cluster.local
mongodb 21:25:05.55 INFO ==> Advertised Port: 27017
mongodb 21:25:05.56 INFO ==> Pod name doesn't match initial primary pod name, configuring node as a secondary
mongodb 21:25:05.59
mongodb 21:25:05.59 Welcome to the Bitnami mongodb container
mongodb 21:25:05.60 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 21:25:05.60 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 21:25:05.60
mongodb 21:25:05.60 INFO ==> ** Starting MongoDB setup **
mongodb 21:25:05.64 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 21:25:05.78 INFO ==> Initializing MongoDB...
mongodb 21:25:05.82 INFO ==> Deploying MongoDB with persisted data...
mongodb 21:25:05.83 INFO ==> Writing keyfile for replica set authentication...
mongodb 21:25:05.88 INFO ==> ** MongoDB setup finished! **
mongodb 21:25:05.92 INFO ==> ** Starting MongoDB **
{"t":{"$date":"2022-10-29T21:25:05.961+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"main","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2022-10-29T21:25:05.963+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-10-29T21:25:05.968+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-10-29T21:25:05.968+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-10-29T21:25:05.969+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-10-29T21:25:06.011+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/bitnami/mongodb/data/db","architecture":"64-bit","host":"mongodb-1"}}
{"t":{"$date":"2022-10-29T21:25:06.011+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.15","gitVersion":"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9","openSSLVersion":"OpenSSL 1.1.1n 15 Mar 2022","modules":[],"allocator":"tcmalloc","environment":{"distmod":"debian10","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2022-10-29T21:25:06.012+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"","version":"Kernel 5.15.0-48-generic"}}}
{"t":{"$date":"2022-10-29T21:25:06.012+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/opt/bitnami/mongodb/conf/mongodb.conf","net":{"bindIp":"*","ipv6":false,"port":27017,"unixDomainSocket":{"enabled":true,"pathPrefix":"/opt/bitnami/mongodb/tmp"}},"processManagement":{"fork":false,"pidFilePath":"/opt/bitnami/mongodb/tmp/mongodb.pid"},"replication":{"enableMajorityReadConcern":true,"replSetName":"rs0"},"security":{"authorization":"disabled","keyFile":"/opt/bitnami/mongodb/conf/keyfile"},"setParameter":{"enableLocalhostAuthBypass":"true"},"storage":{"dbPath":"/bitnami/mongodb/data/db","directoryPerDB":false,"journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"logRotate":"reopen","path":"/opt/bitnami/mongodb/logs/mongodb.log","quiet":false,"verbosity":0}}}}
{"t":{"$date":"2022-10-29T21:25:06.013+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /bitnami/mongodb/data/db directory"}}
{"t":{"$date":"2022-10-29T21:25:06.013+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"REPL", "id":4784907, "ctx":"initandlisten","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2022-10-29T21:25:06.015+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2022-10-29T21:25:06.015+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
Update
I checked the syslog, and before this kubelet entry:
Nov 14 23:07:17 k8s-worker2 kubelet[752]: E1114 23:07:17.749057 752 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mongodb\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mongodb pod=mongodb-2_mongodb(314f2776-ced4-4ba3-b90b-f927dc079770)\"" pod="mongodb/mongodb-2" podUID=314f2776-ced4-4ba3-b90b-f927dc079770
I find these logs:
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341806] sd 2:0:0:1: [sda] tag#42 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=11s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341866] sd 2:0:0:1: [sda] tag#42 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341891] sd 2:0:0:1: [sda] tag#42 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341899] sd 2:0:0:1: [sda] tag#42 CDB: Write(10) 2a 00 00 85 1f b8 00 00 40 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341912] blk_update_request: critical medium error, dev sda, sector 8724408 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.352012] Aborting journal on device sda-8.
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.354980] EXT4-fs error (device sda) in ext4_reserve_inode_write:5726: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.355103] sd 2:0:0:1: [sda] tag#40 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357056] sd 2:0:0:1: [sda] tag#40 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357061] sd 2:0:0:1: [sda] tag#40 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357066] sd 2:0:0:1: [sda] tag#40 CDB: Write(10) 2a 00 00 44 14 88 00 00 10 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357068] blk_update_request: critical medium error, dev sda, sector 4461704 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357088] EXT4-fs error (device sda): ext4_dirty_inode:5922: inode #131080: comm mongod: mark_inode_dirty error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.359566] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131081 starting block 557715)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.361432] EXT4-fs error (device sda) in ext4_dirty_inode:5923: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.362792] Buffer I/O error on device sda, logical block 557713
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.364010] Buffer I/O error on device sda, logical block 557714
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365222] sd 2:0:0:1: [sda] tag#43 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=8s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365228] sd 2:0:0:1: [sda] tag#43 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365230] sd 2:0:0:1: [sda] tag#43 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365233] sd 2:0:0:1: [sda] tag#43 CDB: Write(10) 2a 00 00 44 28 38 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365234] blk_update_request: critical medium error, dev sda, sector 4466744 op 0x1:(WRITE) flags 0x0 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.367434] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131083 starting block 558344)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.367442] Buffer I/O error on device sda, logical block 558343
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368593] sd 2:0:0:1: [sda] tag#41 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368597] sd 2:0:0:1: [sda] tag#41 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368599] sd 2:0:0:1: [sda] tag#41 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368602] sd 2:0:0:1: [sda] tag#41 CDB: Write(10) 2a 00 00 44 90 70 00 00 10 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368604] blk_update_request: critical medium error, dev sda, sector 4493424 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370907] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131081 starting block 561680)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370946] sd 2:0:0:1: [sda] tag#39 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370949] sd 2:0:0:1: [sda] tag#39 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370952] sd 2:0:0:1: [sda] tag#39 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370949] EXT4-fs error (device sda): ext4_journal_check_start:83: comm kworker/u4:0: Detected aborted journal
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370954] sd 2:0:0:1: [sda] tag#39 CDB: Write(10) 2a 00 00 10 41 98 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.372081] blk_update_request: critical medium error, dev sda, sector 1065368 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.374353] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131080 starting block 133172)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.374396] Buffer I/O error on device sda, logical block 133171
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.388492] EXT4-fs error (device sda) in __ext4_new_inode:1136: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.390763] EXT4-fs error (device sda) in ext4_create:2786: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.391732] sd 2:0:0:1: [sda] tag#46 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392941] sd 2:0:0:1: [sda] tag#46 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392944] sd 2:0:0:1: [sda] tag#46 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392948] sd 2:0:0:1: [sda] tag#46 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392950] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.395562] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396945] sd 2:0:0:1: [sda] tag#45 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396953] sd 2:0:0:1: [sda] tag#45 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396955] sd 2:0:0:1: [sda] tag#45 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396958] sd 2:0:0:1: [sda] tag#45 CDB: Write(10) 2a 08 00 84 00 00 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396959] blk_update_request: critical medium error, dev sda, sector 8650752 op 0x1:(WRITE) flags 0x20800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396930] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.399771] Buffer I/O error on dev sda, logical block 1081344, lost sync page write
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.403897] JBD2: Error -5 detected when updating journal superblock for sda-8.
Nov 14 23:07:01 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.w3zzIL.mount: Deactivated successfully.
Nov 14 23:07:05 k8s-worker2 kubelet[752]: E1114 23:07:05.415798 752 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 46.38.252.230 46.38.225.230 2a03:4000:0:1::e1e6"
Nov 14 23:07:06 k8s-worker2 kubelet[752]: E1114 23:07:06.412219 752 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 46.38.252.230 46.38.225.230 2a03:4000:0:1::e1e6"
Nov 14 23:07:06 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.nK23K3.mount: Deactivated successfully.
Nov 14 23:07:11 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.L5TkRU.mount: Deactivated successfully.
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411831] sd 2:0:0:1: [sda] tag#44 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411888] sd 2:0:0:1: [sda] tag#44 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411898] sd 2:0:0:1: [sda] tag#44 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411952] sd 2:0:0:1: [sda] tag#44 CDB: Write(10) 2a 00 00 44 28 40 00 00 50 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411965] blk_update_request: critical medium error, dev sda, sector 4466752 op 0x1:(WRITE) flags 0x0 phys_seg 10 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.419273] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131083 starting block 558354)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430398] sd 2:0:0:1: [sda] tag#47 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430407] sd 2:0:0:1: [sda] tag#47 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430409] sd 2:0:0:1: [sda] tag#47 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430412] sd 2:0:0:1: [sda] tag#47 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430415] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.433686] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.436088] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444291] sd 2:0:0:1: [sda] tag#32 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=14s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444300] sd 2:0:0:1: [sda] tag#32 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444304] sd 2:0:0:1: [sda] tag#32 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444308] sd 2:0:0:1: [sda] tag#32 CDB: Write(10) 2a 00 00 41 01 18 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444313] blk_update_request: critical medium error, dev sda, sector 4260120 op 0x1:(WRITE) flags 0x3000 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.449491] Buffer I/O error on dev sda, logical block 532515, lost async page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453591] sd 2:0:0:1: [sda] tag#33 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453600] sd 2:0:0:1: [sda] tag#33 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453603] sd 2:0:0:1: [sda] tag#33 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453607] sd 2:0:0:1: [sda] tag#33 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453610] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.459072] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.461189] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.464347] EXT4-fs (sda): Remounting filesystem read-only
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.466527] EXT4-fs (sda): failed to convert unwritten extents to written extents -- potential data loss! (inode 131081, error -30)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.470833] Buffer I/O error on device sda, logical block 561678
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.473548] Buffer I/O error on device sda, logical block 561679
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.477384] EXT4-fs (sda): failed to convert unwritten extents to written extents -- potential data loss! (inode 131083, error -30)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.482014] Buffer I/O error on device sda, logical block 558344
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.484881] Buffer I/O error on device sda, logical block 558345
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.487224] Buffer I/O error on device sda, logical block 558346
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.488837] Buffer I/O error on device sda, logical block 558347
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.490543] Buffer I/O error on device sda, logical block 558348
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.492061] Buffer I/O error on device sda, logical block 558349
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.493494] Buffer I/O error on device sda, logical block 558350
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.494931] Buffer I/O error on device sda, logical block 558351
Not sure if this is really related to the problem.
Generally when you see this error message:
"error":"DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system)
It most probably means that your MongoDB pod did not shut down gracefully and had no time to remove the mongod.lock file, so when the pod was re-created on another k8s node the "new" mongod process could not start, because it found the previous mongod.lock file.
The easiest way to resolve the current availability issue is to scale up and immediately add one more replica set member, so that the new member can initial-sync from the remaining good member:
helm upgrade mongodb bitnami/mongodb \
--set architecture=replicaset \
--set auth.replicaSetKey=myreplicasetkey \
--set auth.rootPassword=myrootpassword \
--set replicaCount=3
and elect a primary again.
You can check whether the MongoDB replica set has elected a PRIMARY from the mongo shell inside the pod with the command:
rs.status()
For the affected pod you can do as follows:
You can plan a maintenance window and scale down (scaling down a StatefulSet is not expected to automatically delete the PVC/PV, but it is good to make a backup just in case).
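For example, one possible way to scale down; the StatefulSet name is an assumption based on the release name above (the namespace "mongodb" comes from the kubelet log), so verify it first:
kubectl get statefulsets -n mongodb                        # confirm the actual StatefulSet name
kubectl scale statefulset mongodb -n mongodb --replicas=0  # scale the replica set pods down for the maintenance window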
After you scale down, you can start a custom helper pod that mounts the affected PV so you can remove the mongod.lock file. A temporary pod for that:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: mongo-pvc-helper
spec:
  securityContext:
    runAsUser: 0
  containers:
  - command:
    - sh
    - -c
    - while true ; do echo alive ; sleep 10 ; done
    image: busybox
    imagePullPolicy: Always
    name: mongo-pvc-helper
    resources: {}
    securityContext:
      capabilities:
        drop:
        - ALL
    volumeMounts:
    - mountPath: /mongodata
      name: mongodata
  volumes:
  - name: mongodata
    persistentVolumeClaim:
      claimName: <your_faulty_pod_pvc_name>
EOF
After you start the pod you can do:
kubectl exec mongo-pvc-helper -it sh
$ chown -R 0:0 /mongodata
$ rm /mongodata/mongod.lock
$ exit
Or you can completely wipe the entire PV (if you prefer to safely initial-sync this member from scratch):
rm -rf /mongodata/*
And terminate the pod so you can finish the process:
kubectl delete pod mongo-pvc-helper
And scale up again:
helm upgrade mongodb bitnami/mongodb \
--set architecture=replicaset \
--set auth.replicaSetKey=myreplicasetkey \
--set auth.rootPassword=myrootpassword \
--set replicaCount=2
Btw, it is good to have at least 3 data-bearing members in the replica set for better redundancy, so that if a single member goes down an election can still keep a PRIMARY up and running...
How to troubleshoot this further:
Ensure your pods have terminationGracePeriod set (at least 10-20 seconds) so there is some time for the mongod process to flush data to storage and remove the mongod.lock file.
Depending on the pod memory limits/requests, you can set a safer value for storage.wiredTiger.engineConfig.cacheSizeGB (if not set, WiredTiger allocates roughly 50% of memory).
Check the kubelet logs on the node where the pod was killed; there may be more details on why it was killed.
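As a sketch of the first two points; the value names terminationGracePeriodSeconds and extraFlags are assumptions about the Bitnami chart, so check its values.yaml before relying on them:
# write a small values patch and apply it on top of the existing release values
cat > mongodb-tuning.yaml <<EOF
terminationGracePeriodSeconds: 30
extraFlags:
  - "--wiredTigerCacheSizeGB=0.5"
EOF
helm upgrade mongodb bitnami/mongodb --reuse-values -f mongodb-tuning.yaml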
I think @R2D2's extensive answer makes some good points about how to recover from the situation. I very much agree with their recommendation to use 3 data-bearing nodes, which aligns with fault tolerance considerations. With the additional logs you were able to add, I am arriving at the same conclusion: your storage subsystem is the problem here, and it is the actual cause of your MongoDB failing.
In your initial query the following log line was specifically highlighted:
DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /bitnami/mongodb/data/db directory
Specifically: (Read-only file system). Now in the new logs you have provided the host itself is reporting:
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.459072] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.461189] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.464347] EXT4-fs (sda): Remounting filesystem read-only
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.466527] EXT4-fs (sda): failed to convert unwritten extents to written extents -- potential data loss! (inode 131081, error -30)
Specifically: Remounting filesystem read-only. If mongod is using any of these mount points for its operation, then we would expect the system to stop functioning properly once it can no longer write to them. The database process itself may terminate, which is something the storage node watchdog could be configured to do (in subsequent versions).
In any case, the issues with the storage look quite serious; they include text like this: failed to convert unwritten extents to written extents -- potential data loss! It seems imperative that you look into this further and resolve any issues as soon as possible.
Relatedly, you mentioned:
I'm using longhorn (before I tried nfs) for the storage
The logs also suggest EXT4-fs is at play here. I think all of these have been known to have issues or otherwise be suboptimal for usage with MongoDB. From their documentation:
With the WiredTiger storage engine, using XFS is strongly recommended for data bearing nodes to avoid performance issues that may occur when using EXT4 with WiredTiger.
From elsewhere on the same page (emphasis added):
With the WiredTiger storage engine, WiredTiger objects may be stored on remote file systems if the remote file system conforms to ISO/IEC 9945-1:1996 (POSIX.1). Because remote file systems are often slower than local file systems, using a remote file system for storage may degrade performance.
I don't have any personal experience with Longhorn, but you can see an example here where instability with that storage system caused the same DBPathInUse error that you observed. There are other reports of people having nothing but problems with storage constantly detaching itself.
In short - instability in the storage subsystem is both causing the mongod process/pod to fail and preventing it from recovering. The problem is compounded by the fact that you only have 2 members in the replica set, which provides no fault tolerance. Once you lose one member, the other cannot operate as a PRIMARY since there is no majority. Increasing the replica set to 3 members will at least provide fault tolerance of 1 node. The storage issues are a separate problem that should be pursued further via another question focused more on how that component is configured in your environment.
Some time ago I had something like that. It is always a sad experience.
As noted in the answer by @R2D2: when you see (Read-only file system) in your logs, it can mean many things, none of them good. For instance, when Linux boots, the file system starts out read-only and is switched to read-write once everything is OK. That is not your case - it is just an example.
Note that the file system was marked read-only due to I/O errors. It looks like the hard drive is failing. Check the system on which Kubernetes is running - fsck for Linux - as described here.
When the drive is fixed, restart Kubernetes. Some data will be lost, so count on Mongo complaining about data integrity... Nothing more than mongod --repair comes to mind. It may also be that the lock file has to be deleted before the repair, but mongod should complain about that - something like "there is another instance" or "I can't set lock - file exists".
Besides that, use SMART monitoring, as also mentioned later on that page.
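A rough sketch of those checks, assuming the affected device is /dev/sda as in the kernel logs above; fsck must only be run while the filesystem is unmounted/detached, and the repair is run inside the MongoDB container with mongod stopped:
sudo smartctl -a /dev/sda                            # SMART health of the drive
sudo fsck -f /dev/sda                                # check/repair the ext4 filesystem (unmounted!)
mongod --repair --dbpath /bitnami/mongodb/data/db    # afterwards, repair the MongoDB data files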
Newer, faster, bigger drives are also more fragile. That is the price.
If you have a backup... Yes, I know - I mentioned my own case - since then I keep backups... Good luck!

Unable to start Ceph RGW service

Context:
I'm trying to setup a new ceph-17.2.5 cluster
The initial setup consists of 1 mgr, 1 mon, and 1 osd, all installed on the same box using cephadm.
So far I am able to access http://192.168.3.50:8080/api/auth and get a token in the response.
But http://192.168.3.50:8080/api/rgw/bucket returns:
{
  "status": "500 Internal Server Error",
  "detail": "The server encountered an unexpected condition which prevented it from fulfilling the request.",
  "request_id": "f08247f9-d9ea-4f88-99b7-d07d5141d5ea"
}
This seems to be because the dashboard shows the rgw service is not running. I have tried to set up the rgw service using the commands below:
sudo cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml
ceph orch apply -i /var/lib/ceph/radosgw/radosgw.yml
radosgw.yml file contents:
service_type: rgw
service_id: default
placement:
  hosts:
    - <machine-hostname>
  count_per_host: 1
spec:
  rgw_realm: default
  rgw_zone: default
  rgw_frontend_port: 8123
networks:
  - 192.168.3.0/24
After running the above, the Ceph dashboard does show one rgw service, but the Object Gateway page still reports an error for the daemon, and the RGW API still returns the same error.
Looking at the rgw logs, I see rgw keeps trying to restart in a loop and eventually stops:
Nov 17 05:31:16 <hostname> systemd[1]: Started Ceph rgw.default.<hostname>.rerude for 76ec21dc-4f10-11ed-8fda-0050568dc50c.
Nov 17 05:31:17 <hostname> bash[2537663]: debug 2022-11-17T05:31:17.012+0000 7fc0de7ed5c0 0 deferred set uid:gid to 167:167 (ceph:ceph)
Nov 17 05:31:17 <hostname> bash[2537663]: debug 2022-11-17T05:31:17.012+0000 7fc0de7ed5c0 0 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable), process radosgw, pid 7
Nov 17 05:31:17 <hostname> bash[2537663]: debug 2022-11-17T05:31:17.012+0000 7fc0de7ed5c0 0 framework: beast
Nov 17 05:31:17 <hostname> bash[2537663]: debug 2022-11-17T05:31:17.012+0000 7fc0de7ed5c0 0 framework conf key: endpoint, val: 192.168.3.59:8123
Nov 17 05:31:17 <hostname> bash[2537663]: debug 2022-11-17T05:31:17.012+0000 7fc0de7ed5c0 1 radosgw_Main not setting numa affinity
Nov 17 05:31:17 <hostname> bash[2537663]: debug 2022-11-17T05:31:17.015+0000 7fc0de7ed5c0 1 rgw_d3n: rgw_d3n_l1_local_datacache_enabled=0
Nov 17 05:31:17 <hostname> bash[2537663]: debug 2022-11-17T05:31:17.015+0000 7fc0de7ed5c0 1 D3N datacache enabled: 0
Nov 17 05:36:17 <hostname> bash[2537663]: debug 2022-11-17T05:36:17.013+0000 7fc0dd36f700 -1 Initialization timeout, failed to initialize
Nov 17 05:36:17 <hostname> systemd[1]: ceph-76ec21dc-4f10-11ed-8fda-0050568dc50c#rgw.default.<hostname>.rerude.service: Main process exited, code=exited, status=1/FAILURE
Nov 17 05:36:17 <hostname> systemd[1]: ceph-76ec21dc-4f10-11ed-8fda-0050568dc50c#rgw.default.<hostname>.rerude.service: Failed with result 'exit-code'.
Nov 17 05:36:27 <hostname> systemd[1]: ceph-76ec21dc-4f10-11ed-8fda-0050568dc50c#rgw.default.<hostname>.rerude.service: Service RestartSec=10s expired, scheduling restart.
Nov 17 05:36:27 <hostname> systemd[1]: ceph-76ec21dc-4f10-11ed-8fda-0050568dc50c#rgw.default.<hostname>.rerude.service: Scheduled restart job, restart counter is at 4.
Nov 17 05:36:27 <hostname> systemd[1]: Stopped Ceph rgw.default.<hostname>.rerude for 76ec21dc-4f10-11ed-8fda-0050568dc50c.
Nov 17 05:36:27 <hostname> systemd[1]: Started Ceph rgw.default.<hostname>.rerude for 76ec21dc-4f10-11ed-8fda-0050568dc50c.
Nov 17 05:36:27 <hostname> bash[2538424]: debug 2022-11-17T05:36:27.465+0000 7f2589bf95c0 0 deferred set uid:gid to 167:167 (ceph:ceph)
Nov 17 05:36:27 <hostname> bash[2538424]: debug 2022-11-17T05:36:27.465+0000 7f2589bf95c0 0 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable), process radosgw, pid 6
Nov 17 05:36:27 <hostname> bash[2538424]: debug 2022-11-17T05:36:27.465+0000 7f2589bf95c0 0 framework: beast
Nov 17 05:36:27 <hostname> bash[2538424]: debug 2022-11-17T05:36:27.465+0000 7f2589bf95c0 0 framework conf key: endpoint, val: 192.168.3.50:8123
Nov 17 05:36:27 <hostname> bash[2538424]: debug 2022-11-17T05:36:27.465+0000 7f2589bf95c0 1 radosgw_Main not setting numa affinity
Nov 17 05:36:27 <hostname> bash[2538424]: debug 2022-11-17T05:36:27.468+0000 7f2589bf95c0 1 rgw_d3n: rgw_d3n_l1_local_datacache_enabled=0
Nov 17 05:36:27 <hostname> bash[2538424]: debug 2022-11-17T05:36:27.468+0000 7f2589bf95c0 1 D3N datacache enabled: 0
Nov 17 05:41:27 <hostname> bash[2538424]: debug 2022-11-17T05:41:27.466+0000 7f258877b700 -1 Initialization timeout, failed to initialize
> sudo ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 8 GiB 8.0 GiB 9.7 MiB 9.7 MiB 0.12
TOTAL 8 GiB 8.0 GiB 9.7 MiB 9.7 MiB 0.12
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.rgw.root 1 32 0 B 0 0 B 0 2.5 GiB
.mgr 2 1 449 KiB 2 452 KiB 0 7.6 GiB
I am not sure what I should look for to get the RGW APIs to work in a minimal setup.
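For what it's worth, a few checks that might help narrow this down; this is only a diagnostic sketch, and the realm/zone names simply mirror the "default" values in the spec above:
sudo ceph orch ls                 # is the rgw service listed and scheduled?
sudo ceph orch ps                 # is the rgw daemon running, or stuck in error?
sudo ceph health detail           # any warnings that could block initialization?
sudo radosgw-admin realm list     # does the "default" realm referenced in the spec exist?
sudo radosgw-admin zone list      # does the "default" zone exist?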

Why am I getting the following errors?

Could someone help me with this error, please?
Output for command: systemctl start postgresql-13.service
Job for postgresql-13.service failed because the control process exited with error code.
See "systemctl status postgresql-13.service" and "journalctl -xeu postgresql-13.service" for details.
Output for command: systemctl status postgresql-13.service
× postgresql-13.service - PostgreSQL 13 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-13.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2022-08-23 12:17:50 CDT; 1min 47s ago
Docs: https://www.postgresql.org/docs/13/static/
Process: 1079 ExecStartPre=/usr/pgsql-13/bin/postgresql-13-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Process: 1110 ExecStart=/usr/pgsql-13/bin/postmaster -D ${PGDATA} (code=exited, status=1/FAILURE)
Main PID: 1110 (code=exited, status=1/FAILURE)
CPU: 40ms
Aug 23 12:17:47 fedora systemd[1]: Starting PostgreSQL 13 database server...
Aug 23 12:17:50 fedora postmaster[1110]: 2022-08-23 12:17:50.144 CDT [1110] LOG: redirecting log output to logging collector process
Aug 23 12:17:50 fedora postmaster[1110]: 2022-08-23 12:17:50.144 CDT [1110] HINT: Future log output will appear in directory "log".
Aug 23 12:17:50 fedora systemd[1]: postgresql-13.service: Main process exited, code=exited, status=1/FAILURE
Aug 23 12:17:50 fedora systemd[1]: postgresql-13.service: Killing process 1132 (postmaster) with signal SIGKILL.
Aug 23 12:17:50 fedora systemd[1]: postgresql-13.service: Killing process 1132 (postmaster) with signal SIGKILL.
Aug 23 12:17:50 fedora systemd[1]: postgresql-13.service: Failed with result 'exit-code'.
Aug 23 12:17:50 fedora systemd[1]: postgresql-13.service: Unit process 1132 (postmaster) remains running after unit stopped.
Aug 23 12:17:50 fedora systemd[1]: Failed to start PostgreSQL 13 database server.
I already uninstalled and reinstalled PostgreSQL, but nothing works. I also tried installing postgresql-14, but I get the same error.
I have to install PostgreSQL to work along with Ruby on Rails.
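Since the HINT above says log output is redirected to the "log" directory, the actual startup error should be in PostgreSQL's own log. A sketch, assuming the default PGDG data directory /var/lib/pgsql/13/data (verify ${PGDATA} in the unit file):
sudo ls -lt /var/lib/pgsql/13/data/log/                            # newest log file first
sudo tail -n 100 /var/lib/pgsql/13/data/log/postgresql-Tue.log     # file name is a guess; read the newest one listed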

Apache Zookeeper: Unable to access data directory

OS: RHEL 8.2
I am trying to create a systemd service for ZooKeeper. It fails to access the dataDir.
Here is my config for ZooKeeper:
dataDir=/opt/zookeeper
maxClientCnxns=20
tickTime=2000
dataDir=/var/zookeeper/
initLimit=20
syncLimit=10
server.0=master:2888:3888
clientPort=2181
admin.serverPort=8082
Permission of /opt/zookeeper is set to 777.
[user1@server1 opt]$ ls -lart
total 0
dr-xr-xr-x. 17 root root 244 Jul 3 10:56 ..
drwxr-xr-x 3 root root 27 Jul 10 10:29 rh
drw-r--r-- 2 user2 user2 6 Jul 17 08:48 hsluw_data
drw-r--r-- 2 user2 user2 6 Jul 17 08:58 hsluw_config
drwxr-xr-x. 6 root root 71 Jul 17 08:58 .
drwxrwxrwx 3 user2 user2 23 Jul 17 09:40 zookeeper
If I run the command,
./bin/zookeeper-server-start.sh config/zookeeper.properties
it gives me an error message: Unable to access datadir
[2020-07-30 10:25:50,767] ERROR Invalid configuration, only one server specified (ignoring) (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-07-30 10:25:50,767] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-07-30 10:25:50,769] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-07-30 10:25:50,769] ERROR Unable to access datadir, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Cannot write to data directory /var/zookeeper/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:132)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:124)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
Unable to access datadir, exiting abnormally
However, running the above command with sudo works:
sudo ./bin/zookeeper-server-start.sh config/zookeeper.properties
Now I have created a service in /etc/systemd/system/zookeeper.service, written this way:
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=user2
ExecStart=/home/user2/kafka/bin/zookeeper-server-start.sh /home/user2/kafka/config/zookeeper.properties
ExecStop=/home/user2/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
The SELinux status is disabled.
user2@server1$ sestatus
SELinux status: disabled
Now if I do the following
sudo systemctl daemon-reload
sudo systemctl start zookeeper
sudo systemctl enable zookeeper
I am getting the same Unable to access datadir error, like the following:
[user2@server1 /]$ systemctl status zookeeper
● zookeeper.service
Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2020-07-30 10:13:19 CEST; 24s ago
Main PID: 12911 (code=exited, status=3)
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: org.apache.zookeeper.server.persistence.FileTxnSnapLog$Data>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.persistence.FileTxnS>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.quorum.QuorumPeerMai>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.quorum.QuorumPeerMai>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: Unable to access datadir, exiting abnormally
Jul 30 10:13:19 server1.localdomain systemd[1]: zookeeper.service: Main process exited, code=exited, status=3/NOTIMPLEMENTED
Jul 30 10:13:19 server1.localdomain systemd[1]: zookeeper.service: Failed with result 'exit-code'.
What am I missing here?
In the configuration file, the dataDir key is set twice, and the later line
dataDir=/var/zookeeper/
overrides the earlier dataDir=/opt/zookeeper. Removing that second line (so the writable /opt/zookeeper directory is used) solves the issue.

systemd: recover from systemctl hang

I have a custom systemd ".service" file, like this one:
[Unit]
Description=MyStuff
After=syslog.target
After=network.target
After=local-fs.target
After=remote-fs.target
[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/local/MyStuff/start
ExecReload=/bin/kill $MAINPID && /usr/bin/cat /var/run/MyStuff.pid | /usr/bin/xargs --no-run-if-empty /bin/kill
ExecStop=/bin/kill $MAINPID && /usr/bin/cat /var/run/MyStuff.pid | /usr/bin/xargs --no-run-if-empty /bin/kill
ExecStartPost=/usr/local/MyStuff/after_start.sh
PIDFile=/var/run/MyStuff.pid
#RemainAfterExit=yes
TimeoutSec=300
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
It worked fine until a few minutes ago. A systemctl stop MyStuff hung this way:
[root@Live-1 ffmpeg]# systemctl status MyStuff
● MyStuff.service - MyStuff
Loaded: loaded (/usr/lib/systemd/system/MyStuff.service; enabled; vendor preset: disabled)
Active: deactivating (final-sigkill) (Result: timeout) since Fri 2017-07-28 11:12:23 -03; 23min ago
Main PID: 25149
CGroup: /system.slice/MyStuff.service
jul 28 11:12:23 Live-1 kill[10382]: kill: cannot find process "/usr/bin/cat"
jul 28 11:12:23 Live-1 kill[10382]: kill: cannot find process "/var/run/MyStuff.pid"
jul 28 11:12:23 Live-1 kill[10382]: kill: cannot find process "|"
jul 28 11:12:23 Live-1 kill[10382]: kill: cannot find process "/usr/bin/xargs"
jul 28 11:12:23 Live-1 kill[10382]: kill: cannot find process "--no-run-if-empty"
jul 28 11:12:23 Live-1 kill[10382]: kill: cannot find process "/bin/kill"
jul 28 11:12:23 Live-1 systemd[1]: MyStuff.service: control process exited, code=exited status=1
jul 28 11:17:24 Live-1 systemd[1]: MyStuff.service stop-sigterm timed out. Killing.
jul 28 11:22:24 Live-1 systemd[1]: MyStuff.service still around after SIGKILL. Ignoring.
jul 28 11:31:22 Live-1 systemd[1]: MyStuff.service stop-final-sigterm timed out. Killing.
I know there are errors reported there. That is not the problem; I'm not asking about those errors. The problem is: systemd/systemctl hangs in that state, and I'm unable to perform start/stop/restart actions because all of them stay waiting forever.
What I need to know is a way around that state. This time I just rebooted the server, because it was a development environment, but I can't deal with it that way in production.
So: when systemd/systemctl hangs this way, what can I do in order to recover my service from this state without rebooting the server?
Thanks in advance.
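For reference, a few commands that are sometimes tried before resorting to a reboot when a unit is wedged like this; this is only a sketch, and none of them is guaranteed to clear the state:
systemctl kill -s SIGKILL MyStuff.service   # ask systemd to SIGKILL whatever is left in the unit's cgroup
systemctl reset-failed MyStuff.service      # clear the remembered failed state so start/stop can be retried
systemctl daemon-reexec                     # re-execute the systemd manager (state is preserved), as a last resort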