I just installed Sphinx (sphinx-2.2.11-1.rhel7.x86_64) on CentOS 7.3.
The installation succeeded and I indexed the database; the first start of Sphinx works. But whenever I run service searchd stop or service searchd restart, the searchd.pid file is deleted and never recreated, so Sphinx cannot start again and fails with this error:
[root@ns510209 log]# service searchd start
Redirecting to /bin/systemctl start searchd.service
Job for searchd.service failed because a configured resource limit was exceeded. See "systemctl status searchd.service" and "journalctl -xe" for details.
Any suggestions on how I can fix this issue? I've been trying for a few days to find a way, but still no luck.
I ran into the same issue. The root cause is that searchd cannot write the binlog because of incorrect metadata left in the following folder:
# ls -al /var/lib/sphinx/
total 23580
drwxr-xr-x 2 sphinx sphinx 4096 Jul 9 16:52 .
drwxr-xr-x 33 root root 4096 Mar 12 14:18 ..
-rw------- 1 sphinx sphinx 8 Jul 9 16:47 binlog.001
-rw------- 1 sphinx sphinx 8 Jul 9 16:52 binlog.002
-rw------- 1 sphinx sphinx 0 Jul 9 16:52 binlog.lock
-rw------- 1 sphinx sphinx 12 Jul 9 16:52 binlog.meta
-rw------- 1 sphinx sphinx 0 Jun 21 18:53 doc.old.spl
-rw-r--r-- 1 sphinx sphinx 0 Jul 9 16:52 doc.spa
Move all files, except for doc.* (or whatever prefix your index files use), out of this folder, then start the service; a sketch of the cleanup and restart follows.
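Assuming the stale state lives in /var/lib/sphinx as in the listing above (match this to the binlog_path setting in your sphinx.conf) and using /root/sphinx-binlog-backup as an arbitrary place to park the old files:
# mkdir -p /root/sphinx-binlog-backup
# mv /var/lib/sphinx/binlog.* /root/sphinx-binlog-backup/
The doc.* index files stay in place; searchd recreates fresh binlog files on the next start.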
# systemctl start searchd
# systemctl status searchd
● searchd.service - SphinxSearch Search Engine
Loaded: loaded (/usr/lib/systemd/system/searchd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 16:52:43 MSK; 6min ago
Process: 1690 ExecStart=/usr/bin/searchd --config /etc/sphinx/sphinx.conf (code=exited, status=0/SUCCESS)
Process: 1687 ExecStartPre=/bin/chown sphinx.sphinx /var/run/sphinx (code=exited, status=0/SUCCESS)
Process: 1684 ExecStartPre=/bin/mkdir -p /var/run/sphinx (code=exited, status=0/SUCCESS)
Main PID: 1693 (searchd)
CGroup: /system.slice/searchd.service
├─1692 /usr/bin/searchd --config /etc/sphinx/sphinx.conf
└─1693 /usr/bin/searchd --config /etc/sphinx/sphinx.conf
I'm trying to start a service that runs gunicorn as the backend server for Flask, but it is not working. Running nginx as the frontend server for React works fine.
Server:
Virtualization: vmware
Operating System: Red Hat Enterprise Linux 8.4 (Ootpa)
CPE OS Name: cpe:/o:redhat:enterprise_linux:8.4:GA
Kernel: Linux 4.18.0-305.3.1.el8_4.x86_64
Architecture: x86-64
Service file in /etc/systemd/system/myservice.service:
[Unit]
Description="Description"
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/home/project/app/api
ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app
Restart=always
[Install]
WantedBy=multi-user.target
/app/api:
-rwxr-xr-x. 1 root root 2018 Jun 9 20:06 api.py
drwxrwxr-x+ 5 root root 100 Jun 7 10:11 venv
Error message:
● myservice.service - "Description"
Loaded: loaded (/etc/systemd/system/myservice.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2021-06-10 19:01:01 CEST; 5s ago
Process: 18307 ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app (code=exited, status=203/EXEC)
Main PID: 18307 (code=exited, status=203/EXEC)
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Service RestartSec=100ms expired, scheduling restart.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Scheduled restart job, restart counter is at 5.
Jun 10 19:01:01 xxxx systemd[1]: Stopped "Description".
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Start request repeated too quickly.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Failed with result 'exit-code'.
Jun 10 19:01:01 xxxx systemd[1]: Failed to start "Description".
Tried, not working:
Adding Environment="PATH=/home/project/app/api/venv/bin" under [Service]
$ systemctl reset-failed myservice.service
$ systemctl daemon-reload
Rebooting, of course.
Tried, working:
Running (as root) /home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app while in the /app/api directory
Does anyone know how to fix this problem?
Typically enough, I figured it out shortly after posting this issue.
SELinux was messing with the permissions for files and directories, so for anyone experiencing the same issue, make sure to test the following changes (as root):
$ setsebool -P httpd_can_network_connect on
$ chcon -Rt httpd_sys_content_t /path/to/your/Flask/dir
In my case: $ chcon -Rt httpd_sys_content_t /home/project/app/api
While this is NOT a permanent fix, it's worth a try. Check out the SELinux docs for more permanent solutions.
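If the chcon change turns out to be the fix, a sketch of making it persistent (assuming the same httpd_sys_content_t type as above; semanage comes from the policycoreutils-python-utils package on RHEL 8):
$ semanage fcontext -a -t httpd_sys_content_t "/home/project/app/api(/.*)?"
$ restorecon -Rv /home/project/app/api
Unlike chcon, labels registered with semanage fcontext survive a filesystem relabel and are reapplied by restorecon.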
OS: RHEL 8.2
I am trying to create a systemd service for ZooKeeper, but it fails to access the dataDir.
Here is my ZooKeeper config:
dataDir=/opt/zookeeper
maxClientCnxns=20
tickTime=2000
dataDir=/var/zookeeper/
initLimit=20
syncLimit=10
server.0=master:2888:3888
clientPort=2181
admin.serverPort=8082
Permission of /opt/zookeeper is set to 777.
[user1@server1 opt]$ ls -lart
total 0
dr-xr-xr-x. 17 root root 244 Jul 3 10:56 ..
drwxr-xr-x 3 root root 27 Jul 10 10:29 rh
drw-r--r-- 2 user2 user2 6 Jul 17 08:48 hsluw_data
drw-r--r-- 2 user2 user2 6 Jul 17 08:58 hsluw_config
drwxr-xr-x. 6 root root 71 Jul 17 08:58 .
drwxrwxrwx 3 user2 user2 23 Jul 17 09:40 zookeeper
If I run the command,
./bin/zookeeper-server-start.sh config/zookeeper.properties
it gives me an error message: Unable to access datadir
[2020-07-30 10:25:50,767] ERROR Invalid configuration, only one server specified (ignoring) (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-07-30 10:25:50,767] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-07-30 10:25:50,769] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-07-30 10:25:50,769] ERROR Unable to access datadir, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Cannot write to data directory /var/zookeeper/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:132)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:124)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
Unable to access datadir, exiting abnormally
However, running the above command with sudo works:
sudo ./bin/zookeeper-server-start.sh config/zookeeper.properties
Now I have created a service in /etc/systemd/system/zookeeper.service, written like this:
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=user2
ExecStart=/home/user2/kafka/bin/zookeeper-server-start.sh /home/user2/kafka/config/zookeeper.properties
ExecStop=/home/user2/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
The SELinux status is disabled.
user2@server1$ sestatus
SELinux status: disabled
Now if I do the following
sudo systemctl daemon-reload
sudo systemctl start zookeeper
sudo systemctl enable zookeeper
I get the same "Unable to access datadir" error, as shown below:
[user2@server1 /]$ systemctl status zookeeper
● zookeeper.service
Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2020-07-30 10:13:19 CEST; 24s ago
Main PID: 12911 (code=exited, status=3)
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: org.apache.zookeeper.server.persistence.FileTxnSnapLog$Data>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.persistence.FileTxnS>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.quorum.QuorumPeerMai>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.quorum.QuorumPeerMai>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: Unable to access datadir, exiting abnormally
Jul 30 10:13:19 server1.localdomain systemd[1]: zookeeper.service: Main process exited, code=exited, status=3/NOTIMPLEMENTED
Jul 30 10:13:19 server1.localdomain systemd[1]: zookeeper.service: Failed with result 'exit-code'.
What am I missing here?
In the configuration file, the dataDir key is set twice: first to /opt/zookeeper and then to
dataDir=/var/zookeeper/
The second value wins, so ZooKeeper tries to write to /var/zookeeper/, which user2 cannot do. Removing that second line solves the issue.
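A sketch of the corrected config/zookeeper.properties, assuming /opt/zookeeper (the world-writable directory shown above) is the intended data directory:
dataDir=/opt/zookeeper
maxClientCnxns=20
tickTime=2000
initLimit=20
syncLimit=10
server.0=master:2888:3888
clientPort=2181
admin.serverPort=8082
With a single dataDir entry pointing at a directory user2 can write to, the zookeeper.service unit above should start cleanly.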
One day, my PostgreSQL server stopped working. I checked the log; it had been shut down somehow.
root@ip_address:/# tail /var/log/postgresql/postgresql-10-main.log
2020-02-19 06:47:49.215 CET [23497] LOG: received smart shutdown request
2020-02-19 06:47:49.477 CET [23497] LOG: worker process: logical replication launcher (PID 23512) exited with exit code 1
2020-02-19 06:47:49.482 CET [23507] LOG: shutting down
2020-02-19 06:47:49.546 CET [23497] LOG: database system is shut down
When I run,
root@ip_address:/# psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
It complained that the socket file was missing, so I checked whether PostgreSQL was running:
root@ip_address:/# systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Sun 2020-03-08 16:19:24 CET; 26min ago
Process: 30136 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 30136 (code=exited, status=0/SUCCESS)
Mar 08 16:19:24 vps584959 systemd[1]: Starting PostgreSQL RDBMS...
Mar 08 16:19:24 vps584959 systemd[1]: Started PostgreSQL RDBMS.
It was running. But when I checked the PostgreSQL cluster:
root@ip_address:/# pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
10 main 5432 down postgres /var/lib/postgresql/10/main /var/log/postgresql/postgresql-10-main.log
It was down, so I tried:
root@ip_address:/# pg_ctlcluster 10 main start
Error: Config owner (deploy:1003) and data owner (postgres:114) do not match, and config owner is not root
I wasn't able to make it work, so then I tried:
sudo chown -R deploy:postgres /var/lib/postgresql/10/ && sudo chmod -R u=rwX,go= /var/lib/postgresql/10/
Then I tried again:
root@ip_address:/# pg_ctlcluster 10 main start
Job for postgresql@10-main.service failed because the service did not take the steps required by its unit configuration.
See "systemctl status postgresql@10-main.service" and "journalctl -xe" for details.
root@ip_address:/# systemctl status postgresql@10-main.service
● postgresql@10-main.service - PostgreSQL Cluster 10-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; indirect; vendor preset: enabled)
Active: failed (Result: protocol) since Sun 2020-03-08 16:59:53 CET; 2min 52s ago
Process: 31635 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 10-main start (code=exited, status=1/FAILURE)
Main PID: 23497 (code=exited, status=0/SUCCESS)
Mar 08 16:59:53 vps584959 systemd[1]: Starting PostgreSQL Cluster 10-main...
Mar 08 16:59:53 vps584959 postgresql@10-main[31635]: Error: /usr/lib/postgresql/10/bin/pg_ctl /usr/lib/postgresql/10/bin/pg_ctl start -D /var/lib/postgresql/10/main -l /var/log/postgre
Mar 08 16:59:53 vps584959 systemd[1]: postgresql@10-main.service: Can't open PID file /var/run/postgresql/10-main.pid (yet?) after start: No such file or directory
Mar 08 16:59:53 vps584959 systemd[1]: postgresql@10-main.service: Failed with result 'protocol'.
Mar 08 16:59:53 vps584959 systemd[1]: Failed to start PostgreSQL Cluster 10-main.
I don't know what else to do. Has anybody had the same problem?
More info:
root@ip_address:/var/run/postgresql# ls -al
total 0
drwxrwsr-x 3 postgres postgres 60 Feb 19 06:47 .
drwxr-xr-x 28 root root 1060 Mar 8 13:58 ..
drwxr-s--- 2 postgres postgres 40 Feb 19 06:47 10-main.pg_stat_tmp
pg_ctlcluster 10 main start
Error: Config owner (deploy:1003) and data owner (postgres:114) do not match, and config owner is not root
That's pretty clear, isn't it?
The Ubuntu PostgreSQL wrapper (pg_ctlcluster) requires postgresql.conf and pg_hba.conf to be owned either by the same user that owns the data directory (postgres) or by root; otherwise it refuses to proceed.
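A minimal recovery sketch, assuming the standard Ubuntu layout (config in /etc/postgresql/10/main, data in /var/lib/postgresql/10/main) and that everything should belong to postgres again after the earlier chown to deploy:
chown -R postgres:postgres /etc/postgresql/10/main
chown -R postgres:postgres /var/lib/postgresql/10/main
chmod 700 /var/lib/postgresql/10/main
pg_ctlcluster 10 main start
PostgreSQL 10 also insists that the data directory itself is mode 0700, which is why the chmod is included.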
When I try to add the Magento 2 varnish.vcl file by creating a symbolic link, the Varnish service stops working with a permission-denied error, while with the default Varnish configuration file Varnish works fine.
My stack is Ubuntu 16.04 with Varnish 4.1.
ls -al
drwxr-xr-x 2 root root 4096 Mar 21 13:14 .
drwxr-xr-x 96 root root 4096 Mar 21 12:56 ..
lrwxrwxrwx 1 root root 44 Mar 21 13:14 default.vcl -> /var/www/bazaar/varnish.vcl
-rw-r--r-- 1 root root 1225 Aug 22 2017 default.vcl_bak
-rw-r--r-- 1 root root 37 Mar 21 12:56 secret
Here is the status of the Varnish service:
● varnish.service - Varnish HTTP accelerator
Loaded: loaded (/lib/systemd/system/varnish.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/varnish.service.d
└─customexec.conf
Active: failed (Result: exit-code) since Wed 2018-03-21 13:59:08 UTC; 2s ago
Docs: https://www.varnish-cache.org/docs/4.1/
man:varnishd
Process: 3093 ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m (code=exited, status=2)
Main PID: 3093 (code=exited, status=2)
Mar 21 13:59:08 bazaar systemd[1]: Stopped Varnish HTTP accelerator.
Mar 21 13:59:08 bazaar systemd[1]: Started Varnish HTTP accelerator.
Mar 21 13:59:08 bazaar varnishd[3093]: Error: Cannot read -f file (/etc/varnish/default.vcl): Permission denied
Mar 21 13:59:08 bazaar systemd[1]: varnish.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 21 13:59:08 bazaar systemd[1]: varnish.service: Unit entered failed state.
Mar 21 13:59:08 bazaar systemd[1]: varnish.service: Failed with result 'exit-code'.
My current user for nginx is bazaar, and the permissions for varnish.vcl are as follows:
-rw-r--r-- 1 bazaar bazaar 7226 Mar 21 13:24 varnish.vcl
Any hint or help will be highly appreciated.
Thanks.
It is likely that the vcache user does not have permission to traverse the parent directories (/var/www/bazaar) of the symlink target.
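A quick way to check, and two possible fixes, sketched on the assumption that the VCL really lives at /var/www/bazaar/varnish.vcl as the symlink above suggests:
sudo -u vcache cat /var/www/bazaar/varnish.vcl    # does the vcache user see the file at all?
sudo chmod o+x /var/www /var/www/bazaar           # option 1: let others traverse the parent directories
sudo cp /var/www/bazaar/varnish.vcl /etc/varnish/default.vcl    # option 2: copy instead of symlinking
If the cat fails with "Permission denied", it is the directory permissions (or a MAC policy such as AppArmor) rather than the file itself that blocks varnishd.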
We are setting up a MongoDB server for the production environment on an Amazon EC2 instance, but we are not able to start the service. I followed this documentation for the setup. Here are the steps I took to set up the server:
Added the following to /etc/yum.repos.d/mongodb-org-3.0.repo:
[mongodb-org-3.0]
name=MongoDB Repository
baseurl=http://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1
And installed MongoDB 3.0.2 using sudo yum install -y mongodb-org-3.0.2
Created mount-point directories for the data, journal & log partitions:
sudo mkdir /mongo
sudo mkdir /mongo/data
sudo mkdir /mongo/log
sudo mkdir /mongo/journal
Created filesystems on the three separate partitions:
sudo mkfs.ext4 /dev/xvdb
sudo mkfs.ext4 /dev/xvdc
sudo mkfs.ext4 /dev/xvdd
Created fstab entries so the mounts persist across reboots:
echo '/dev/xvdb /mongo/data ext4 defaults,auto,noatime,noexec 0 0
/dev/xvdc /mongo/journal ext4 defaults,auto,noatime,noexec 0 0
/dev/xvdd /mongo/log ext4 defaults,auto,noatime,noexec 0 0' | sudo tee -a /etc/fstab
And mounted the partitions:
sudo mount /mongo/data
sudo mount /mongo/journal
sudo mount /mongo/log
Set the permissions and created a symlink:
sudo chown mongod:mongod /mongo/data /mongo/journal /mongo/log
sudo ln -s /mongo/journal /mongo/data/journal
Configured ulimit & read-ahead settings as given in the documentation linked above. Verified permissions and partitions:
[deployer@prod-mongo ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 1.3G 6.8G 16% /
devtmpfs 3.6G 0 3.6G 0% /dev
tmpfs 3.5G 0 3.5G 0% /dev/shm
tmpfs 3.5G 57M 3.4G 2% /run
tmpfs 3.5G 0 3.5G 0% /sys/fs/cgroup
/dev/xvdc 7.8G 36M 7.3G 1% /mongo/journal
/dev/xvdb 150G 51M 149G 1% /mongo/data
/dev/xvdd 3.9G 16M 3.6G 1% /mongo/log
Permissions:
[deployer@prod-mongo ~]$ ll /
total 32
lrwxrwxrwx. 1 root root 7 Sep 29 2014 bin -> usr/bin
dr-xr-xr-x. 4 root root 4096 Sep 29 2014 boot
drwxr-xr-x. 17 root root 2860 May 11 12:11 dev
lrwxrwxrwx. 1 root root 7 Sep 29 2014 lib -> usr/lib
lrwxrwxrwx. 1 root root 9 Sep 29 2014 lib64 -> usr/lib64
drwxr-xr-x. 2 root root 6 Jun 10 2014 mnt
drwxr-xr-x. 5 mongod mongod 41 May 11 05:06 mongo
drwxr-xr-x. 21 root root 660 May 11 12:47 run
lrwxrwxrwx. 1 root root 8 Sep 29 2014 sbin -> usr/sbin
Inside /mongo
[deployer@prod-mongo ~]$ ll /mongo/
total 12
drwxr-xr-x. 3 mongod mongod 4096 May 11 07:33 data
drwxr-xr-x. 3 mongod mongod 4096 May 11 07:31 journal
drwxr-xr-x. 3 mongod mongod 4096 May 11 08:58 log
After changing the configurations inside /etc/mongodb.conf
logpath=/mongo/log/mongod.log
dbpath=/mongo/data
and when I run sudo service mongod start, I get this error:
Starting mongod (via systemctl): Job for mongod.service failed. See 'systemctl status mongod.service' and 'journalctl -xn' for details.
[FAILED]
Further logging:
[deployer@prod-mongo ~]$ sudo systemctl status mongod.service
mongod.service - SYSV: Mongo is a scalable, document-oriented database.
Loaded: loaded (/etc/rc.d/init.d/mongod)
Active: failed (Result: exit-code) since Tue 2015-05-12 04:42:10 UTC; 42s ago
Process: 22881 ExecStart=/etc/rc.d/init.d/mongod start (code=exited, status=1/FAILURE)
May 11 04:42:10 ip-xx-xx-xx-xx.local runuser[22887]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
May 11 04:42:10 ip-xx-xx-xx-xx.localdomain runuser[22887]: pam_unix(runuser:session): session closed for user mongod
May 11 04:42:10 ip-xx-xx-xx-xx.local mongod[22881]: Starting mongod: [FAILED]
May 11 04:42:10 ip-xx-xx-xx-xx.local systemd[1]: mongod.service: control process exited, code=exited status=1
May 11 04:42:10 ip-xx-xx-xx-xx.local systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
May 11 04:42:10 ip-xx-xx-xx-xx.local systemd[1]: Unit mongod.service entered failed state.
I've followed various articles, blog posts, and StackExchange answers, but didn't find a solution. Am I missing something?
Update: If I run mongod directly, e.g. sudo mongod --logpath ~/mongod.log --dbpath ~/mongodata, then the server starts properly.
We also tried changing the path of the PID file to another directory; that didn't help either.
I'm guessing you're running a flavour of Linux that uses SELinux (RHEL or CentOS 7, perhaps?).
If so, the issue is that the SELinux policy does not allow confined daemons (like the mongod service) to access your /mongo/ directory.
From Wikipedia:
SELinux can potentially control which activities a system allows each user, process and daemon, with very precise specifications. However, it is mostly used to confine daemons like database engines or web servers that have more clearly defined data access and activity rights. This limits potential harm from a confined daemon that becomes compromised. Ordinary user-processes often run in the unconfined domain, not restricted by SELinux but still restricted by the classic Linux access rights.
To check whether this is the issue, try this at the shell:
sudo setenforce 0
This temporarily switches SELinux to permissive mode and should allow the service to run.
For a more permanent solution, see https://wiki.centos.org/HowTos/SELinux
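For this specific case, a sketch of a more permanent approach (assuming the RHEL/CentOS 7 targeted policy ships the mongod_var_lib_t and mongod_log_t types; verify with seinfo or the MongoDB SELinux notes before relying on them) is to record labels for the custom /mongo layout and re-enable enforcing mode:
sudo semanage fcontext -a -t mongod_var_lib_t "/mongo/data(/.*)?"
sudo semanage fcontext -a -t mongod_log_t "/mongo/log(/.*)?"
sudo restorecon -R -v /mongo
sudo setenforce 1
Unlike setenforce 0, this keeps SELinux enforcing while letting mongod reach the non-default paths.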
I ran into this problem and found a solution that worked for me.
In short, MongoDB 3.2 runs as the user 'mongod', while older versions used 'mongodb'. Some of the files and directories were still owned by 'mongodb' (the old user). Once I chowned them to the 'mongod' user, I was able to use systemctl to control the mongod process.
More specifically, it was the "/var/log/mongodb/*" files that had the wrong ownership.
root@<HOST>:# ls -alh /var/log/mongodb
total 664K
drwxr-xr-x 2 mongod mongod 4.0K Oct 27 12:08 .
drwxr-xr-x. 22 root root 4.0K Oct 27 11:51 ..
-rw-r--r-- 1 mongodb mongodb 3.8K Oct 27 11:48 mongod.log
-rw-r--r-- 1 mongodb mongodb 19K Apr 14 2016 mongod.log.2016-04-14T18-29-34
-rw-r--r-- 1 mongodb mongodb 2.8K Apr 14 2016 mongod.log.2016-04-14T18-30-13
-rw-r--r-- 1 mongodb mongodb 12K Apr 14 2016 mongod.log.2016-04-14T22-27-27
-rw-r--r-- 1 mongodb mongodb 11K Apr 14 2016 mongod.log.2016-04-14T22-29-12
-rw-r--r-- 1 mongodb mongodb 5.6K Apr 18 2016 mongod.log-20160418.gz
-rw-r--r-- 1 mongodb mongodb 0 Apr 18 2016 mongod.log.2016-09-09T17-33-48
-rw-r--r-- 1 mongodb mongodb 3.6K Sep 9 11:34 mongod.log.2016-09-09T17-34-52
-rw-r--r-- 1 mongodb mongodb 23K Sep 9 11:49 mongod.log.2016-09-09T17-49-49
-rw-r--r-- 1 mongodb mongodb 5.0K Sep 9 11:55 mongod.log.2016-09-09T17-55-15
-rw-r--r-- 1 mongodb mongodb 5.0K Sep 9 12:02 mongod.log.2016-09-09T18-02-26
-rw-r--r-- 1 mongodb mongodb 5.0K Sep 9 12:13 mongod.log.2016-09-09T18-13-17
-rw-r--r-- 1 mongodb mongodb 5.0K Sep 9 12:25 mongod.log.2016-09-09T18-25-01
-rw-r--r-- 1 mongodb mongodb 5.2K Sep 9 12:47 mongod.log.2016-09-09T18-47-54
-rw-r--r-- 1 mongodb mongodb 5.0K Sep 9 12:52 mongod.log.2016-09-09T18-52-16
-rw-r--r-- 1 mongodb mongodb 5.0K Sep 9 12:54 mongod.log.2016-09-09T18-54-49
-rw-r--r-- 1 mongodb mongodb 5.0K Sep 9 13:01 mongod.log.2016-09-09T19-01-22
-rw-r--r-- 1 mongodb mongodb 3.0K Sep 9 13:03 mongod.log.2016-09-09T19-03-21
-rw-r--r-- 1 mongodb mongodb 215K Sep 9 14:25 mongod.log.2016-09-09T20-25-59
-rw-r--r-- 1 mongodb mongodb 281K Sep 10 03:42 mongod.log-20160910
-rw-r--r-- 1 mongodb mongodb 0 Sep 10 03:42 mongod.log.2016-10-27T17-42-42
-rw-r----- 1 mongod mongod 0 Sep 29 22:03 mongod.log.rpmnew
Notice the owner of the directory is 'mongod' (the new user) while the log files are all owned by 'mongodb' (the old user).
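A sketch of the fix, assuming /var/log/mongodb is the only leftover from the old 'mongodb' user:
# chown -R mongod:mongod /var/log/mongodb
# systemctl restart mongod
If the data directory (for example /var/lib/mongo) is also still owned by the old mongodb user, chown it the same way before restarting.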
In case anyone else encounters the same issue with MongoDB startup, here is the ticket with the discussion: https://jira.mongodb.org/browse/SERVER-18439. It was scheduled to be fixed in 3.1.