Failed at step CHDIR spawning /opt/Informer5/informer5.sh: No such file or directory - service

I have an application called "Informer." I am trying to register it as a service and I'm not sure where I have gone wrong.
Here is informer.service:
[Unit]
Description=Informer Docker
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
User=root
WorkingDirectory=/opt/Informer5 <===Modify for the appropriate directory
ExecStart=/opt/Informer5/informer5.sh start <===Modify for the appropriate directory
ExecStop=/opt/Informer5/informer5.sh stop <===Modify for the appropriate directory
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
This file is in my /etc/systemd/system folder. I have enabled the service via
sudo systemctl enable informer
When I execute
sudo systemctl start informer
The response I get is
Job for informer.service failed because the control process exited with error code. See "systemctl status informer.service" and "journalctl -xe" for details.
So, running systemctl status informer.service, I see the following:
$ sudo systemctl status informer.service
● informer.service - Informer Docker
Loaded: loaded (/etc/systemd/system/informer.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2018-09-18 08:49:07 EDT; 4min 2s ago
Process: 5780 ExecStart=/opt/Informer5/informer5.sh start <===Modify for the appropriate directory (code=exited, status=200/CHDIR)
Main PID: 5780 (code=exited, status=200/CHDIR)
Sep 18 08:49:07 informer5 systemd[1]: Starting Informer Docker...
Sep 18 08:49:07 informer5 systemd[1]: informer.service: Main process exited, code=exited, status=200/CHDIR
Sep 18 08:49:07 informer5 systemd[1]: Failed to start Informer Docker.
Sep 18 08:49:07 informer5 systemd[1]: informer.service: Unit entered failed state.
Sep 18 08:49:07 informer5 systemd[1]: informer.service: Failed with result 'exit-code'.
Running $ sudo journalctl -xe, I get:
$ sudo journalctl -xe
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit informer.service has failed.
--
-- The result is failed.
Sep 18 08:06:36 informer5 systemd[1]: informer.service: Unit entered failed state.
Sep 18 08:06:36 informer5 systemd[1]: informer.service: Failed with result 'exit-code'.
Sep 18 08:06:36 informer5 sudo[5690]: pam_unix(sudo:session): session closed for user root
Sep 18 08:12:40 informer5 sudo[5712]: n_connor : TTY=pts/0 ; PWD=/opt/Informer5 ; USER=root ; COMMAND=/bin/cat /etc/systemd/system/informer.service
Sep 18 08:12:40 informer5 sudo[5712]: pam_unix(sudo:session): session opened for user root by n_connor(uid=0)
Sep 18 08:12:40 informer5 sudo[5712]: pam_unix(sudo:session): session closed for user root
Sep 18 08:15:01 informer5 CRON[5716]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 18 08:15:01 informer5 CRON[5717]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Sep 18 08:15:01 informer5 CRON[5716]: pam_unix(cron:session): session closed for user root
Sep 18 08:17:01 informer5 CRON[5721]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 18 08:17:01 informer5 CRON[5722]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Sep 18 08:17:01 informer5 CRON[5721]: pam_unix(cron:session): session closed for user root
Sep 18 08:25:01 informer5 CRON[5732]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 18 08:25:01 informer5 CRON[5733]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Sep 18 08:25:01 informer5 CRON[5732]: pam_unix(cron:session): session closed for user root
Sep 18 08:35:01 informer5 CRON[5745]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 18 08:35:01 informer5 CRON[5746]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Sep 18 08:35:01 informer5 CRON[5745]: pam_unix(cron:session): session closed for user root
Sep 18 08:45:01 informer5 CRON[5758]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 18 08:45:01 informer5 CRON[5759]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Sep 18 08:45:01 informer5 CRON[5758]: pam_unix(cron:session): session closed for user root
Sep 18 08:47:43 informer5 sudo[5774]: n_connor : TTY=pts/0 ; PWD=/opt/Informer5 ; USER=root ; COMMAND=/bin/cat /etc/systemd/system/informer.service
Sep 18 08:47:43 informer5 sudo[5774]: pam_unix(sudo:session): session opened for user root by n_connor(uid=0)
Sep 18 08:47:43 informer5 sudo[5774]: pam_unix(sudo:session): session closed for user root
Sep 18 08:49:07 informer5 sudo[5777]: n_connor : TTY=pts/0 ; PWD=/opt/Informer5 ; USER=root ; COMMAND=/bin/systemctl start informer
Sep 18 08:49:07 informer5 sudo[5777]: pam_unix(sudo:session): session opened for user root by n_connor(uid=0)
Sep 18 08:49:07 informer5 systemd[5780]: informer.service: Failed at step CHDIR spawning /opt/Informer5/informer5.sh: No such file or directory
-- Subject: Process /opt/Informer5/informer5.sh could not be executed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- The process /opt/Informer5/informer5.sh could not be executed and failed.
--
-- The error number returned by this process is 2.
Sep 18 08:49:07 informer5 systemd[1]: Starting Informer Docker...
-- Subject: Unit informer.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit informer.service has begun starting up.
Sep 18 08:49:07 informer5 systemd[1]: informer.service: Main process exited, code=exited, status=200/CHDIR
Sep 18 08:49:07 informer5 systemd[1]: Failed to start Informer Docker.
-- Subject: Unit informer.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit informer.service has failed.
--
-- The result is failed.
Sep 18 08:49:07 informer5 systemd[1]: informer.service: Unit entered failed state.
Sep 18 08:49:07 informer5 systemd[1]: informer.service: Failed with result 'exit-code'.
Sep 18 08:49:07 informer5 sudo[5777]: pam_unix(sudo:session): session closed for user root
Sep 18 08:53:09 informer5 sudo[5787]: n_connor : TTY=pts/0 ; PWD=/opt/Informer5 ; USER=root ; COMMAND=/bin/systemctl status informer.service
Sep 18 08:53:09 informer5 sudo[5787]: pam_unix(sudo:session): session opened for user root by n_connor(uid=0)
Sep 18 08:53:09 informer5 systemd[1]: Configuration file /etc/systemd/system/informer.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Sep 18 08:53:09 informer5 sudo[5787]: pam_unix(sudo:session): session closed for user root
Sep 18 08:54:01 informer5 sudo[5791]: n_connor : TTY=pts/0 ; PWD=/opt/Informer5 ; USER=root ; COMMAND=/bin/journalctl -xe
Sep 18 08:54:01 informer5 sudo[5791]: pam_unix(sudo:session): session opened for user root by n_connor(uid=0)
I think the salient error here is informer.service: Failed at step CHDIR spawning /opt/Informer5/informer5.sh: No such file or directory.
I have checked that the file does exist, and I can manually start the service by running that file as root. I have a working directory set in the service file. I have no idea where this error is coming from. I am using Ubuntu 16.04 and have enabled root login over SSH. Any ideas?

I added #!/bin/bash at the top of the shell script and it worked.
e.g.
nano server.sh
#!/bin/bash
echo "Serving Web App!"
serve -s build -p 4004
chmod +x server.sh
nano /etc/systemd/system/web.service
[Unit]
Description=Web App
After=network.target
[Service]
WorkingDirectory=/var/www/html/web
User=root
ExecStart=/var/www/html/web/server.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Things to note
Verify that the WorkingDirectory exists, e.g. cd /var/www/html/web; if it does not exist, create it, e.g. mkdir -p /var/www/html/web. A quick sanity check is sketched below.
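A sanity-check sketch before retrying (the paths assume the example unit above):
# Confirm the script exists and is executable
test -x /var/www/html/web/server.sh && echo "script OK"
# Reload unit files so systemd picks up any edits, then retry
sudo systemctl daemon-reload
sudo systemctl restart web.service
systemctl status web.service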

Related

Apache Zookeeper: Unable to access data directory

OS: RHEL 8.2
I am trying to create a systemd service for ZooKeeper. It fails to access the dataDir.
Here is my config for zookeeper,
dataDir=/opt/zookeeper
maxClientCnxns=20
tickTime=2000
dataDir=/var/zookeeper/
initLimit=20
syncLimit=10
server.0=master:2888:3888
clientPort=2181
admin.serverPort=8082
Permissions on /opt/zookeeper are set to 777.
[user1@server1 opt]$ ls -lart
total 0
dr-xr-xr-x. 17 root root 244 Jul 3 10:56 ..
drwxr-xr-x 3 root root 27 Jul 10 10:29 rh
drw-r--r-- 2 user2 user2 6 Jul 17 08:48 hsluw_data
drw-r--r-- 2 user2 user2 6 Jul 17 08:58 hsluw_config
drwxr-xr-x. 6 root root 71 Jul 17 08:58 .
drwxrwxrwx 3 user2 user2 23 Jul 17 09:40 zookeeper
If I run the command,
./bin/zookeeper-server-start.sh config/zookeeper.properties
it gives me an error message: Unable to access datadir
[2020-07-30 10:25:50,767] ERROR Invalid configuration, only one server specified (ignoring) (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-07-30 10:25:50,767] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-07-30 10:25:50,769] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-07-30 10:25:50,769] ERROR Unable to access datadir, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Cannot write to data directory /var/zookeeper/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:132)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:124)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
Unable to access datadir, exiting abnormally
However, sudoing the above command works,
sudo ./bin/zookeeper-server-start.sh config/zookeeper.properties
Now I have created a service in /etc/systemd/system/zookeeper.service, written this way:
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=user2
ExecStart=/home/user2/kafka/bin/zookeeper-server-start.sh /home/user2/kafka/config/zookeeper.properties
ExecStop=/home/user2/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
The SELinux status is disabled.
user2@server1$ sestatus
SELinux status: disabled
Now if I do the following
sudo systemctl daemon-reload
sudo systemctl start zookeeper
sudo systemctl enable zookeeper
I am getting the same Unable to access datadir error, like the following:
[user2@server1 /]$ systemctl status zookeeper
● zookeeper.service
Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2020-07-30 10:13:19 CEST; 24s ago
Main PID: 12911 (code=exited, status=3)
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: org.apache.zookeeper.server.persistence.FileTxnSnapLog$Data>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.persistence.FileTxnS>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.quorum.QuorumPeerMai>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.quorum.QuorumPeerMai>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: Unable to access datadir, exiting abnormally
Jul 30 10:13:19 server1.localdomain systemd[1]: zookeeper.service: Main process exited, code=exited, status=3/NOTIMPLEMENTED
Jul 30 10:13:19 server1.localdomain systemd[1]: zookeeper.service: Failed with result 'exit-code'.
What am I missing here?
In the configuration file, the dataDir key appears twice: first as dataDir=/opt/zookeeper and then as
dataDir=/var/zookeeper/
The second occurrence overrides the first, so ZooKeeper tries to write to /var/zookeeper/, which user2 cannot write to. Removing that duplicate line solves the issue.
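For reference, a sketch of the cleaned-up zookeeper.properties, keeping only the first dataDir from the config above:
dataDir=/opt/zookeeper
maxClientCnxns=20
tickTime=2000
initLimit=20
syncLimit=10
server.0=master:2888:3888
clientPort=2181
admin.serverPort=8082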

Adding TLS to Mongo causes it to crash

I have a 3-member MongoDB replication architecture up and running. When I add TLS to the /etc/mongod.conf file, mongod crashes right away and writes nothing to the Mongo log. I put the PEM file containing all the certs and the key in /etc/ssl/mongo.pem, with the key at the bottom of the file, and did a chmod 600 on the PEM file. I am adding TLS to the primary first, then stopping and starting mongod. My mongod TLS config:
net:
  port: 27017
  bindIpAll: true
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongo.pem
security:
  keyFile: /opt/mongod/keyfile
The error I get when starting:
[ec2-user@ip-10-0-16-140 log]$ sudo service mongod start
Starting mongod (via systemctl): Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details.
[FAILED]
The return from the status call:
[ec2-user@ip-10-0-16-140 ~]$ systemctl status mongod.service
● mongod.service - SYSV: Mongo is a scalable, document-oriented database.
Loaded: loaded (/etc/rc.d/init.d/mongod; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2020-02-28 00:43:51 UTC; 17s ago
Docs: man:systemd-sysv-generator(8)
Process: 18327 ExecStop=/etc/rc.d/init.d/mongod stop (code=exited, status=0/SUCCESS)
Process: 18548 ExecStart=/etc/rc.d/init.d/mongod start (code=exited, status=1/FAILURE)
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal runuser[18559]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal runuser[18559]: pam_unix(runuser:session): session closed for user mongod
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal mongod[18548]: Starting mongod: [FAILED]
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: mongod.service: control process exited, code=exited status=1
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: Unit mongod.service entered failed state.
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: mongod.service failed.
[ec2-user@ip-10-0-16-140 ~]$ journalctl -xe
Feb 28 00:42:13 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18523]: pam_unix(sudo:session): session closed for user root
Feb 28 00:42:27 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18525]: ec2-user : TTY=pts/0 ; PWD=/home/ec2-user ; USER=root ; COMMAND=/bin/vi /etc/mongo.pem
Feb 28 00:42:27 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18525]: pam_unix(sudo:session): session opened for user root by ec2-user(uid=0)
Feb 28 00:42:31 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18525]: pam_unix(sudo:session): session closed for user root
Feb 28 00:42:38 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18527]: ec2-user : TTY=pts/0 ; PWD=/home/ec2-user ; USER=root ; COMMAND=/bin/vi /etc/ssl/mongo.pem
Feb 28 00:42:38 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18527]: pam_unix(sudo:session): session opened for user root by ec2-user(uid=0)
Feb 28 00:43:38 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18527]: pam_unix(sudo:session): session closed for user root
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18529]: ec2-user : TTY=pts/0 ; PWD=/home/ec2-user ; USER=root ; COMMAND=/sbin/service mongod start
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18529]: pam_unix(sudo:session): session opened for user root by ec2-user(uid=0)
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
-- Subject: Unit mongod.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has begun starting up.
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal runuser[18559]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal runuser[18559]: pam_unix(runuser:session): session closed for user mongod
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal mongod[18548]: Starting mongod: [FAILED]
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: mongod.service: control process exited, code=exited status=1
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has failed.
--
-- The result is failed.
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: Unit mongod.service entered failed state.
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal systemd[1]: mongod.service failed.
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal sudo[18529]: pam_unix(sudo:session): session closed for user root
Feb 28 00:43:51 ip-10-0-16-140.us-gov-east-1.compute.internal dhclient[2603]: XMT: Solicit on eth0, interval 113300ms.
It may be an issue with your mongodb.pem file. For testing purposes, you can create a self-signed certificate and key like this:
openssl req -newkey rsa:2048 -new -x509 -days 365 -nodes -out mongodb-cert.crt -keyout mongodb-cert.key
cat mongodb-cert.key mongodb-cert.crt > mongodb.pem
and then set the permissions on the PEM file:
chmod 600 mongodb.pem
Consider the following configuration file for a mongod instance (the two net sections are merged into one, since a YAML mapping may not repeat a key):
net:
  port: 27017
  bindIp: 0.0.0.0
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem
systemLog:
  destination: file
  path: "/var/log/mongodb/mongod.log"
  logAppend: true
storage:
  dbPath: "/var/lib/mongodb"
processManagement:
  fork: true
Note: binding to 0.0.0.0 is not best practice, but it is a good place to start.
Also, you may find logs in /var/log/mongodb/mongod.log, the default path.
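If mongod still fails silently, it is worth confirming that the PEM file is well formed and readable by the service account. A diagnostic sketch (it assumes the service runs as the mongod user):
# Check that the PEM contains a valid certificate and a valid key
openssl x509 -in /etc/ssl/mongodb.pem -noout -subject -dates
openssl rsa -in /etc/ssl/mongodb.pem -check -noout
# chmod 600 makes the file owner-only, so the owner must be the service user
sudo chown mongod:mongod /etc/ssl/mongodb.pem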

Celery daemonization: celery.service: Failed at step USER spawning /home/mike/movingcollage/movingcollageenv/bin/celery: No such process

When I do journalctl -f after systemctl start celery.service I get
Mar 21 19:14:21 ubuntu-2gb-nyc3-01 systemd[1]: Reloading.
Mar 21 19:14:21 ubuntu-2gb-nyc3-01 systemd[1]: Started ACPI event daemon.
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[21431]: celery.service: Failed at step USER spawning /home/mike/movingcollage/movingcollageenv/bin/celery: No such process
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: Starting celery service...
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: celery.service: Control process exited, code=exited status=217
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: Failed to start celery service.
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: celery.service: Unit entered failed state.
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: celery.service: Failed with result 'exit-code'.
This is my celery.service configuration:
[Unit]
Description=celery service
After=network.target
[Service]
#PIDFile=/run/celery/pid
Type=forking
User=celery
Group=celery
#RuntimeDirectory=celery
WorkingDirectory=/home/mike/movingcollage
ExecStart=/home/mike/movingcollage/movingcollageenv/bin/celery multi start 3 -A movingcollage "-c 5 -Q celery -l INFO"
ExecReload=/home/mike/movingcollage/movingcollageenv/bin/celery multi restart 3
ExecStop=/home/mike/movingcollage/movingcollageenv/bin/celery multi stopwait 3
[Install]
WantedBy=multi-user.target
Does anyone know what is wrong? Thanks in advance.
For celery multi I think it is better to use Type=oneshot. Celery can start many worker processes, and each will have its own PID.
I start my celery like this:
celery multi start 2 \
    -A my_app_name \
    --uid=1001 --gid=1001 \
    -f /var/log/celery/celery.log \
    --loglevel="INFO" \
    --pidfile:1=/run/celery1.pid \
    --pidfile:2=/run/celery2.pid
Of course in your case uid, gid and all paths will be different.
You need to change:
User=celery
Group=celery
to a user and group that actually exist on your system; the Failed at step USER error (status=217) means systemd could not resolve the celery user. In my case:
User=ubuntu
Group=ubuntu
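Alternatively, if you would rather keep a dedicated celery account, a sketch of creating one (the account name is taken from the unit file above):
# Create a system user and matching group named "celery" with no login shell
sudo useradd --system --user-group --shell /usr/sbin/nologin celery
# Reload unit files and retry
sudo systemctl daemon-reload
sudo systemctl restart celery.service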

varnish 4.1 default.vcl permissions denied

When I try to add the Magento 2 varnish.vcl file by creating a symbolic link, the Varnish service stops working with a permission-denied error, whereas with the default Varnish configuration file Varnish works smoothly.
My stack is Ubuntu 16.04 with Varnish 4.1.
ls -al
drwxr-xr-x 2 root root 4096 Mar 21 13:14 .
drwxr-xr-x 96 root root 4096 Mar 21 12:56 ..
lrwxrwxrwx 1 root root 44 Mar 21 13:14 default.vcl -> /var/www/bazaar/varnish.vcl
-rw-r--r-- 1 root root 1225 Aug 22 2017 default.vcl_bak
-rw-r--r-- 1 root root 37 Mar 21 12:56 secret
Here is the status for the varnish service:
● varnish.service - Varnish HTTP accelerator
Loaded: loaded (/lib/systemd/system/varnish.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/varnish.service.d
└─customexec.conf
Active: failed (Result: exit-code) since Wed 2018-03-21 13:59:08 UTC; 2s ago
Docs: https://www.varnish-cache.org/docs/4.1/
man:varnishd
Process: 3093 ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m (code=exited, status=2)
Main PID: 3093 (code=exited, status=2)
Mar 21 13:59:08 bazaar systemd[1]: Stopped Varnish HTTP accelerator.
Mar 21 13:59:08 bazaar systemd[1]: Started Varnish HTTP accelerator.
Mar 21 13:59:08 bazaar varnishd[3093]: Error: Cannot read -f file (/etc/varnish/default.vcl): Permission denied
Mar 21 13:59:08 bazaar systemd[1]: varnish.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 21 13:59:08 bazaar systemd[1]: varnish.service: Unit entered failed state.
Mar 21 13:59:08 bazaar systemd[1]: varnish.service: Failed with result 'exit-code'.
My current user for nginx is bazaar, and the permissions for varnish.vcl are as follows:
-rw-r--r-- 1 bazaar bazaar 7226 Mar 21 13:24 varnish.vcl
Any hint or help will be highly appreciated.
Thanks.
It is likely that the vcache user does not have read and execute access on the parent directories of /var/www/bazaar/varnish.vcl.
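A quick way to verify this is to walk the path and inspect each component; a diagnostic sketch (the chmod line assumes world-execute on the parent directories is acceptable in your setup):
# List the permissions of every component along the resolved path
namei -l /etc/varnish/default.vcl
# Grant directory traversal on the parents so vcache can reach the file
sudo chmod o+x /var/www /var/www/bazaar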

MongoDB service not starting

I recently installed MongoDB on Amazon Linux and I am able to start mongod using the service command.
sudo service mongod start
The above works as expected.
Today I installed MongoDB on CentOS 7, following the instructions on the MongoDB site.
Now when I start the service using the same command as above, it fails to start.
I have done the following checks and they look correct, so I am not sure what is going on here:
the path to the data folder, i.e. /data/db, is owned by user mongod:mongod
the /etc/mongod.conf has dbpath set to /data/db
the user in /etc/init.d/mongod script is set as mongod:mongod
Journal entry looks like this:
[centos@ip-172-31-16-240 init.d]$ sudo journalctl -xn
-- Logs begin at Thu 2015-03-26 11:45:57 UTC, end at Thu 2015-03-26 12:33:34 UTC. --
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal mongod[1645]: ******>>>> mongod user is mongod
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal runuser[1654]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal runuser[1654]: pam_unix(runuser:session): session closed for user mongod
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal mongod[1645]: Starting mongod: [FAILED]
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal systemd[1]: mongod.service: control process exited, code=exited status=1
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has failed.
--
-- The result is failed.
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal systemd[1]: Unit mongod.service entered failed state.
Mar 26 12:26:49 ip-172-31-16-240.ap-southeast-1.compute.internal sudo[1660]: centos : TTY=pts/0 ; PWD=/etc/rc.d/init.d ; USER=root ; COMMAND=/bin/journalctl -xn
Mar 26 12:28:00 ip-172-31-16-240.ap-southeast-1.compute.internal sudo[1664]: centos : TTY=pts/1 ; PWD=/home/centos ; USER=root ; COMMAND=/bin/less /var/log/mongodb/mongod.log
Mar 26 12:33:34 ip-172-31-16-240.ap-southeast-1.compute.internal sudo[1668]: centos : TTY=pts/0 ; PWD=/etc/rc.d/init.d ; USER=root ; COMMAND=/bin/journalctl -xn
[centos@ip-172-31-16-240 init.d]$
However, if I start using sudo mongod, the mongod process starts up.
Any ideas why the service command is not working?
Just in case anyone encounters this problem, this is how I fixed it.
After all, it was permission related: the SELinux security context, with SELinux set to enforcing by default.
So, after you attempt to start the mongod service and it fails, run this command; it should show you the reason if anything is permission related:
sudo ausearch -m avc -ts today | audit2allow
You would see something like the below for mongod-related audits:
allow mongod_t default_t:file getattr;
To fix the above error, you do the following:
sudo chcon -Rv --type=mongod_var_lib_t /data
Note: /data/db is where my mongod data files are located.
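Note that chcon changes can be lost on a filesystem relabel; to make the context persistent, a sketch using semanage (it assumes the policycoreutils Python utilities are installed):
# Register a persistent file-context rule for the custom data directory
sudo semanage fcontext -a -t mongod_var_lib_t "/data(/.*)?"
# Re-apply the contexts defined in policy to the existing files
sudo restorecon -Rv /data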