MongoDB service is not running

mongod.service - High-performance, schema-free document-oriented database
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2018-10-19 11:54:22 BST; 1min 8s ago
Docs: https://docs.mongodb.org/manual
Process: 28567 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=2)
Process: 28565 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 28559 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 28557 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Oct 19 11:54:22 d203.tld systemd[1]: Starting High-performance, schema-free document-oriented database...
Oct 19 11:54:22 d203.tld mongod[28567]: Unrecognized option: security
Oct 19 11:54:22 d203.tld mongod[28567]: try '/usr/bin/mongod --help' for more information
Oct 19 11:54:22 d203.tld systemd[1]: mongod.service: control process exited, code=exited status=2
Oct 19 11:54:22 d203.tld systemd[1]: Failed to start High-performance, schema-free document-oriented database.
Oct 19 11:54:22 d203.tld systemd[1]: Unit mongod.service entered failed state.
Oct 19 11:54:22 d203.tld systemd[1]: mongod.service failed.
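The "Unrecognized option: security" message often points at a syntax or indentation problem in the security section of /etc/mongod.conf (for example, tabs instead of spaces). For reference, the YAML form mongod expects looks like this; the authorization setting is shown only as an example:
security:
  authorization: enabled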

Related

Ceph OSD (authenticate timed out) after node restart

A couple of our nodes restarted unexpectedly, and since then the OSDs on those nodes will no longer authenticate with the MONs.
I have tested that the nodes still have access to all the MON nodes by using nc to check that the ports are open (roughly as shown below).
We cannot find anything in the MON logs about authentication errors.
At the moment 50% of the cluster is down because 2 of 4 nodes are offline.
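The port check was done roughly like this (<mon-ip> stands for each MON address; 3300 and 6789 are the default msgr2/msgr1 MON ports):
nc -zv <mon-ip> 3300
nc -zv <mon-ip> 6789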
Feb 06 21:04:07 ceph1 systemd[1]: Starting Ceph osd.7 for d5126e5a-882e-11ec-954e-90e2baec3d2c...
Feb 06 21:04:08 ceph1 podman[520029]: 2023-02-06 21:04:08.056452052 +0100 CET m=+0.123533698 container create 0b396efc0543af48d593d1e4c72ed74d>
Feb 06 21:04:08 ceph1 podman[520029]: 2023-02-06 21:04:08.334525479 +0100 CET m=+0.401607145 container init 0b396efc0543af48d593d1e4c72ed74d30>
Feb 06 21:04:08 ceph1 podman[520029]: 2023-02-06 21:04:08.346028585 +0100 CET m=+0.413110241 container start 0b396efc0543af48d593d1e4c72ed74d3>
Feb 06 21:04:08 ceph1 podman[520029]: 2023-02-06 21:04:08.346109677 +0100 CET m=+0.413191333 container attach 0b396efc0543af48d593d1e4c72ed74d>
Feb 06 21:04:08 ceph1 bash[520029]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
Feb 06 21:04:08 ceph1 bash[520029]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-03539866-06e2-4>
Feb 06 21:04:08 ceph1 bash[520029]: Running command: /usr/bin/ln -snf /dev/ceph-03539866-06e2-4ba6-8809-6a491becb4fe/osd-block-1dd63d2a-9803-4>
Feb 06 21:04:08 ceph1 bash[520029]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-7/block
Feb 06 21:04:08 ceph1 bash[520029]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 06 21:04:08 ceph1 bash[520029]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
Feb 06 21:04:08 ceph1 bash[520029]: --> ceph-volume lvm activate successful for osd ID: 7
Feb 06 21:04:08 ceph1 podman[520029]: 2023-02-06 21:04:08.635416784 +0100 CET m=+0.702498460 container died 0b396efc0543af48d593d1e4c72ed74d30>
Feb 06 21:04:09 ceph1 podman[520029]: 2023-02-06 21:04:09.036165374 +0100 CET m=+1.103247040 container remove 0b396efc0543af48d593d1e4c72ed74d>
Feb 06 21:04:09 ceph1 podman[520260]: 2023-02-06 21:04:09.299438115 +0100 CET m=+0.070335845 container create d25c3024614dfb0a01c70bd56cf0758e>
Feb 06 21:04:09 ceph1 podman[520260]: 2023-02-06 21:04:09.384256486 +0100 CET m=+0.155154236 container init d25c3024614dfb0a01c70bd56cf0758ef1>
Feb 06 21:04:09 ceph1 podman[520260]: 2023-02-06 21:04:09.393054076 +0100 CET m=+0.163951816 container start d25c3024614dfb0a01c70bd56cf0758ef>
Feb 06 21:04:09 ceph1 bash[520260]: d25c3024614dfb0a01c70bd56cf0758ef16aa67f511ee4add8a85586c67beb0b
Feb 06 21:04:09 ceph1 systemd[1]: Started Ceph osd.7 for d5126e5a-882e-11ec-954e-90e2baec3d2c.
Feb 06 21:09:09 ceph1 conmon[520298]: debug 2023-02-06T20:09:09.394+0000 7f6c10705080 0 monclient(hunting): authenticate timed out after 300
Feb 06 21:14:09 ceph1 conmon[520298]: debug 2023-02-06T20:14:09.395+0000 7f6c10705080 0 monclient(hunting): authenticate timed out after 300
Feb 06 21:19:09 ceph1 conmon[520298]: debug 2023-02-06T20:19:09.397+0000 7f6c10705080 0 monclient(hunting): authenticate timed out after 300
Feb 06 21:24:09 ceph1 conmon[520298]: debug 2023-02-06T20:24:09.398+0000 7f6c10705080 0 monclient(hunting): authenticate timed out after 300
Feb 06 21:29:09 ceph1 conmon[520298]: debug 2023-02-06T20:29:09.399+0000 7f6c10705080 0 monclient(hunting): authenticate timed out after 300
We have restarted the OSD nodes and this did not resolve the issue.
Confirmed that the nodes have access to all MON servers.
I have looked in /var/run/ceph and the admin sockets are not there.
Here is the output as it starts the OSD:
[2023-02-07 10:38:58,167][ceph_volume.main][INFO ] Running command: ceph-volume lvm list --format json
[2023-02-07 10:38:58,168][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-02-07 10:38:58,213][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-03539866-06e2-4ba6-8809-6a491becb4fe/osd-block-1dd63d2a-9803-452c-a102-3b826e6ef448,ceph.block_uuid=VjbtJW-iiCA-PMvC-TCnV-9xgJ-a8UU-IDo0Pv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d5126e5a-882e-11ec-954e-90e2baec3d2c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=1dd63d2a-9803-452c-a102-3b826e6ef448,ceph.osd_id=7,ceph.osdspec_affinity=all-available-devices,ceph.type=block,ceph.vdo=0";"/dev/ceph-03539866-06e2-4ba6-8809-6a491becb4fe/osd-block-1dd63d2a-9803-452c-a102-3b826e6ef448";"osd-block-1dd63d2a-9803-452c-a102-3b826e6ef448";"ceph-03539866-06e2-4ba6-8809-6a491becb4fe";"VjbtJW-iiCA-PMvC-TCnV-9xgJ-a8UU-IDo0Pv";"16000896466944
[2023-02-07 10:38:58,213][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-1ce58676-9409-4e19-ac66-f63b5025dfb0/osd-block-9949a437-7e8a-489b-ba10-ded82c775c43,ceph.block_uuid=KLNJDx-J1iC-V5GJ-0nw3-YuEA-Q41D-HNIXv8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d5126e5a-882e-11ec-954e-90e2baec3d2c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=9949a437-7e8a-489b-ba10-ded82c775c43,ceph.osd_id=3,ceph.osdspec_affinity=all-available-devices,ceph.type=block,ceph.vdo=0";"/dev/ceph-1ce58676-9409-4e19-ac66-f63b5025dfb0/osd-block-9949a437-7e8a-489b-ba10-ded82c775c43";"osd-block-9949a437-7e8a-489b-ba10-ded82c775c43";"ceph-1ce58676-9409-4e19-ac66-f63b5025dfb0";"KLNJDx-J1iC-V5GJ-0nw3-YuEA-Q41D-HNIXv8";"16000896466944
[2023-02-07 10:38:58,213][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-7053d77a-5d1c-450b-a932-d1590411ea2b/osd-block-29ac0ada-d23c-45c1-ae5d-c8aba5a60195,ceph.block_uuid=NTTkze-YV08-lOir-SJ6W-39un-oUc7-ZvOBra,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d5126e5a-882e-11ec-954e-90e2baec3d2c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=29ac0ada-d23c-45c1-ae5d-c8aba5a60195,ceph.osd_id=14,ceph.osdspec_affinity=all-available-devices,ceph.type=block,ceph.vdo=0";"/dev/ceph-7053d77a-5d1c-450b-a932-d1590411ea2b/osd-block-29ac0ada-d23c-45c1-ae5d-c8aba5a60195";"osd-block-29ac0ada-d23c-45c1-ae5d-c8aba5a60195";"ceph-7053d77a-5d1c-450b-a932-d1590411ea2b";"NTTkze-YV08-lOir-SJ6W-39un-oUc7-ZvOBra";"16000896466944
[2023-02-07 10:38:58,213][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-e0a1e940-dec3-4369-a533-1e88bea5fa5e/osd-block-2d002c14-7751-4037-a070-7538e1264d88,ceph.block_uuid=1Gts1p-KwPO-LnIb-XlP2-zCGQ-92fb-Kvv53H,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d5126e5a-882e-11ec-954e-90e2baec3d2c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=2d002c14-7751-4037-a070-7538e1264d88,ceph.osd_id=11,ceph.osdspec_affinity=all-available-devices,ceph.type=block,ceph.vdo=0";"/dev/ceph-e0a1e940-dec3-4369-a533-1e88bea5fa5e/osd-block-2d002c14-7751-4037-a070-7538e1264d88";"osd-block-2d002c14-7751-4037-a070-7538e1264d88";"ceph-e0a1e940-dec3-4369-a533-1e88bea5fa5e";"1Gts1p-KwPO-LnIb-XlP2-zCGQ-92fb-Kvv53H";"16000896466944
[2023-02-07 10:38:58,214][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -S lv_uuid=VjbtJW-iiCA-PMvC-TCnV-9xgJ-a8UU-IDo0Pv -o pv_name,pv_tags,pv_uuid,vg_name,lv_uuid
[2023-02-07 10:38:58,269][ceph_volume.process][INFO ] stdout /dev/sdb";"";"a6T0sC-DeMp-by25-wUjP-wL3R-u6d1-nPXfji";"ceph-03539866-06e2-4ba6-8809-6a491becb4fe";"VjbtJW-iiCA-PMvC-TCnV-9xgJ-a8UU-IDo0Pv
[2023-02-07 10:38:58,269][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -S lv_uuid=KLNJDx-J1iC-V5GJ-0nw3-YuEA-Q41D-HNIXv8 -o pv_name,pv_tags,pv_uuid,vg_name,lv_uuid
[2023-02-07 10:38:58,333][ceph_volume.process][INFO ] stdout /dev/sda";"";"63b0j0-o1S7-FHqG-lwOk-0OYj-I9pH-g58TzB";"ceph-1ce58676-9409-4e19-ac66-f63b5025dfb0";"KLNJDx-J1iC-V5GJ-0nw3-YuEA-Q41D-HNIXv8
[2023-02-07 10:38:58,333][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -S lv_uuid=NTTkze-YV08-lOir-SJ6W-39un-oUc7-ZvOBra -o pv_name,pv_tags,pv_uuid,vg_name,lv_uuid
[2023-02-07 10:38:58,397][ceph_volume.process][INFO ] stdout /dev/sde";"";"qDEqYa-cgXd-Tc2h-64wQ-zT63-vIBZ-ZfGGO0";"ceph-7053d77a-5d1c-450b-a932-d1590411ea2b";"NTTkze-YV08-lOir-SJ6W-39un-oUc7-ZvOBra
[2023-02-07 10:38:58,398][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --separator=";" -S lv_uuid=1Gts1p-KwPO-LnIb-XlP2-zCGQ-92fb-Kvv53H -o pv_name,pv_tags,pv_uuid,vg_name,lv_uuid
[2023-02-07 10:38:58,457][ceph_volume.process][INFO ] stdout /dev/sdd";"";"aqhedj-aUlM-0cl4-P98k-XZRL-1mPG-0OgKLV";"ceph-e0a1e940-dec3-4369-a533-1e88bea5fa5e";"1Gts1p-KwPO-LnIb-XlP2-zCGQ-92fb-Kvv53H
Output of ceph config dump:
WHO MASK LEVEL OPTION VALUE RO
global advanced cluster_network 10.125.0.0/24 *
global basic container_image quay.io/ceph/ceph@sha256:a39107f8d3daab4d756eabd6ee1630d1bc7f31eaa76fff41a77fa32d0b903061 *
mon advanced auth_allow_insecure_global_id_reclaim false
mon advanced public_network 10.123.0.0/24 *
mgr advanced mgr/cephadm/container_init True *
mgr advanced mgr/cephadm/migration_current 3 *
mgr advanced mgr/dashboard/ALERTMANAGER_API_HOST http://10.123.0.21:9093 *
mgr advanced mgr/dashboard/GRAFANA_API_SSL_VERIFY false *
mgr advanced mgr/dashboard/GRAFANA_API_URL https://10.123.0.21:3000 *
mgr advanced mgr/dashboard/PROMETHEUS_API_HOST http://10.123.0.21:9095 *
mgr advanced mgr/dashboard/ssl_server_port 8443 *
mgr advanced mgr/orchestrator/orchestrator cephadm
mgr advanced mgr/pg_autoscaler/autoscale_profile scale-up
mds advanced mds_max_caps_per_client 65536
mds.cephfs basic mds_join_fs cephfs
####
ceph status
  cluster:
    id:     d5126e5a-882e-11ec-954e-90e2baec3d2c
    health: HEALTH_WARN
            8 failed cephadm daemon(s)
            2 stray daemon(s) not managed by cephadm
            nodown,noout flag(s) set
            4 osds down
            1 host (4 osds) down
            Degraded data redundancy: 195662646/392133183 objects degraded (49.897%), 160 pgs degraded, 160 pgs undersized
            6 pgs not deep-scrubbed in time
            1 daemons have recently crashed

  services:
    mon: 3 daemons, quorum ceph5,ceph7,ceph6 (age 2d)
    mgr: ceph2.tofizp(active, since 9M), standbys: ceph1.vnkagp
    mds: 3/3 daemons up
    osd: 19 osds: 15 up (since 11h), 19 in (since 11h); 151 remapped pgs
         flags nodown,noout

  data:
    volumes: 1/1 healthy
    pools:   6 pools, 257 pgs
    objects: 102.97M objects, 67 TiB
    usage:   69 TiB used, 107 TiB / 176 TiB avail
    pgs:     195662646/392133183 objects degraded (49.897%)
             2620377/392133183 objects misplaced (0.668%)
             150 active+undersized+degraded+remapped+backfill_wait
             97  active+clean
             9   active+undersized+degraded
             1   active+undersized+degraded+remapped+backfilling

  io:
    client:   170 B/s rd, 0 op/s rd, 0 op/s wr
    recovery: 9.7 MiB/s, 9 objects/s

Mongod Active: Failed (code=exited, status=217/USER)

I am using Ubuntu 16.04 on a VPS.
It has been migrated to another company's VPS and is in use.
nginx works fine and my Node application works fine.
When I check the MongoDB service with systemctl status,
the following error is shown and the service is Active: failed.
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2022-03-18 08:57:04 UTC; 3s ago
Docs: https://docs.mongodb.org/manual
Process: 3558 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=217/USER)
Main PID: 3558 (code=exited, status=217/USER)
Mar 18 08:57:04 user systemd[1]: Started MongoDB Database Server.
Mar 18 08:57:04 user systemd[1]: mongod.service: Main process exited, code=exited, status=217/USER
Mar 18 08:57:04 user systemd[1]: mongod.service: Unit entered failed state.
Mar 18 08:57:04 user systemd[1]: mongod.service: Failed with result 'exit-code'.
Below is my mongod.conf:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:
On my system:
mongod.conf path: /etc/mongod.conf
MongoDB dbPath: /var/lib/mongodb
I'd appreciate it if you could let me know what I need to check.
The problem is probably with the account mongod is trying to launch as, defined in your systemd mongod.service file; exit status 217/USER means systemd could not set up the requested user. Check that the user and group defined in the [Service] block exist and are valid.
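A minimal way to check that, assuming the stock unit path from the status output above and the mongodb user the Ubuntu package normally creates:
systemctl cat mongod                              # show the unit that is actually loaded and its [Service] block
grep -E '^(User|Group)=' /lib/systemd/system/mongod.service
id mongodb                                        # or whatever User= is set to
getent passwd mongodb && getent group mongodb
# If the user is missing, recreating it the way the Debian/Ubuntu package does
# (an assumption; adjust to your setup) and restarting should get the service going:
sudo adduser --system --group --no-create-home mongodb
sudo systemctl daemon-reload && sudo systemctl restart mongod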

Service mongod does not start on CentOS 8

I tried to install MongoDB on CentOS 8, but when I run the command
systemctl status mongod.service
I get this error:
● mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sat 2020-08-01 14:26:53 CEST; 11min ago
Docs: https://docs.mongodb.org/manual
Process: 1875 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=14)
Process: 1873 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 1871 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 1869 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Aug 01 14:26:53 db.localhost systemd[1]: Starting MongoDB Database Server...
Aug 01 14:26:53 db.localhost mongod[1875]: about to fork child process, waiting until server is ready for>
Aug 01 14:26:53 db.localhost mongod[1875]: forked process: 1877
Aug 01 14:26:53 db.localhost mongod[1875]: ERROR: child process failed, exited with 14
Aug 01 14:26:53 db.localhost mongod[1875]: To see additional information in this output, start without th>
Aug 01 14:26:53 db.localhost systemd[1]: mongod.service: Control process exited, code=exited status=14
Aug 01 14:26:53 db.localhost systemd[1]: mongod.service: Failed with result 'exit-code'.
Aug 01 14:26:53 db.localhost systemd[1]: Failed to start MongoDB Database Server.
I checked the permissions of the folders /var/lib/mongo and /var/log/mongodb:
#/var/lib/
drwxr-xr-x. 4 mongod mongod 4096 Aug 1 14:44 mongo
#/var/log/
drwxr-xr-x. 2 mongod mongod 50 Aug 1 14:14 mongodb
In some other posts, people suggested trying this command:
sudo chown -R mongodb:mongodb /var/lib/mongodb/
but I don't have the user mongodb, and I get the error: invalid user: 'mongodb:mongodb'!
In my /etc/passwd file the only user for mongo is this:
mongod:x:994:992:mongod:/var/lib/mongo:/bin/false
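If the CentOS package uses the mongod user and /var/lib/mongo instead (which matches my unit file and config further below), I assume the equivalent command would be:
sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb
but I am not sure that is the actual fix.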
What's the problem?
Why can't I run mongod.service?
Thanks for your support.
Added more information:
systemctl cat mongod
[root@db tmp]# systemctl cat mongod
# /usr/lib/systemd/system/mongod.service
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod.conf"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongod.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings
[Install]
WantedBy=multi-user.target
mongod.conf
[root@db tmp]# cat /etc/mongod.conf
# ..
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options
#auditLog:
#snmp:
/usr/bin/mongod -f /etc/mongod.conf
[root@db mongodb]# /usr/bin/mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1959
child process started successfully, parent exiting
For some reason, mongod wants the folder /data/db and ignores the file mongod.conf.
After I created that folder, if I run the mongod command (with sudo), the program starts "correctly".
But if I reboot the system, the service fails on boot and continues to have problems.
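One thing I have not ruled out (just my assumption): running mongod manually as root may have left root-owned files behind that block the mongod user when the service starts at boot. Something like this should show and reset that:
ls -l /tmp/mongodb-27017.sock              # default unix socket mongod creates
ls -ld /var/lib/mongo /var/log/mongodb
sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb
sudo rm -f /tmp/mongodb-27017.sock
sudo systemctl restart mongod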

Failure to start mongod.service on Ubuntu

I try to start the mongod service and get an error:
* mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2019-04-12 12:55:29 MSK; 9s ago
Docs: https://docs.mongodb.org/manual
Process: 15162 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)
Main PID: 15162 (code=exited, status=1/FAILURE)
Apr 12 12:55:29 mx systemd[1]: Started MongoDB Database Server.
Apr 12 12:55:29 mx systemd[1]: mongod.service: Main process exited, code=exited, status=1/FAILURE
Apr 12 12:55:29 mx systemd[1]: mongod.service: Failed with result 'exit-code'.
If I start mongod manually, everything works fine.
mongod.service:
[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target
[Service]
User=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf
[Install]
WantedBy=multi-user.target
mongod.conf:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
I've set permissions for /var/lib/mongodb and /var/log/mongodb/mongod.log
chown mongodb:mongodb /var/lib/mongodb -R
chown mongodb:mongodb /var/log/mongodb/mongod.log -R
What's wrong? What should I do? Any ideas?
I've solved my problem. The solution is very simple, but it gave me a lot of headache:
free disk space. I did not have enough free disk space on my server. That's it!
I hope this will be helpful for somebody...
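For reference, a quick way to confirm this (the paths are the defaults from the config above):
df -h /var/lib/mongodb /var/log/mongodb    # free space on the data and log filesystems
du -sh /var/lib/mongodb /var/log/mongodb   # how much mongod itself is using
tail -n 50 /var/log/mongodb/mongod.log     # mongod normally complains about "No space left on device" here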

MongoDB failed to start on CentOS 7

I followed this link [1] to install MongoDB under CentOS 7. The database started normally, but then, I don't know what happened, it did not want to start again, giving me this error:
[root@localhost ~]# systemctl start mongod
Job for mongod.service failed. See 'systemctl status mongod.service' and 'journalctl -xn' for details.
[root@localhost ~]# systemctl status mongod.service -l
mongod.service - SYSV: Mongo is a scalable, document-oriented database.
Loaded: loaded (/etc/rc.d/init.d/mongod)
Active: failed (Result: exit-code) since mer. 2015-08-05 17:13:12 CEST; 24s ago
Process: 2872 ExecStart=/etc/rc.d/init.d/mongod start (code=exited, status=1/FAILURE)
Aug 05 17:13:12 localhost systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
Aug 05 17:13:12 localhost runuser[2878]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
Aug 05 17:13:12 localhost mongod[2872]: Starting mongod: [FAILED]
Aug 05 17:13:12 localhost systemd[1]: mongod.service: control process exited, code=exited status=1
Aug 05 17:13:12 localhost systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
Aug 05 17:13:12 localhost systemd[1]: Unit mongod.service entered failed state.
I configured SELinux to "enforcing" and I enabled access to port 27017 using:
semanage port -a -t mongod_port_t -p tcp 27017
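To double-check the SELinux side, something like this should work (assuming the standard policy and audit tools are installed; the data/log paths below assume the default RPM layout):
semanage port -l | grep mongod_port_t      # should now list tcp 27017
ausearch -m avc -ts recent | grep mongod   # recent SELinux denials involving mongod, if any
ls -Z /var/lib/mongo /var/log/mongodb      # SELinux contexts on the data and log directories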
I can start the database using this command:
[root@localhost ~]# sudo -u root /usr/bin/mongod --quiet --config /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3058
child process started successfully, parent exiting
But I still can't start it as a service :(
Any idea what I have missed?
Thanks in advance for your help!
[1] http://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat/