Kafka services aren't coming up - apache-kafka

When I start the Kafka service on my RHEL machine, it fails and prints the following error. Nothing is printed in the logs either.
[root@node01 java]# systemctl start kafka
Job for kafka.service failed because a configured resource limit was exceeded. See "systemctl status kafka.service" and "journalctl -xe" for details.
I have cross-verified these Kafka configuration files against those on another machine with a similar setup, and everything looks fine. I have also checked a few online resources, but nothing turned out to be helpful.
Any thoughts?
The output of journalctl -xe is as follows:
-- Unit kafka.service has begun starting up.
May 28 15:30:09 hostm01 runuser[30740]: pam_unix(runuser:session): session opened for user ossadm by (uid=0)
May 28 15:30:09 hostm01 runuser[30740]: pam_unix(runuser:session): session closed for user ossadm
May 28 15:30:09 hostm01 kafka[30733]: Starting kafka ... [ OK ]
May 28 15:30:11 hostm01 kafka[30733]: [ OK ]
May 28 15:30:11 hostm01 systemd[1]: Started Apache Kafka.
-- Subject: Unit kafka.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kafka.service has finished starting up.
--
-- The start-up result is done.
May 28 15:30:11 hostm01 systemd[1]: kafka.service: main process exited, code=exited, status=1/FAILURE
May 28 15:30:12 hostm01 runuser[31178]: pam_unix(runuser:session): session opened for user ossadm by (uid=0)
May 28 15:30:12 hostm01 kafka[31171]: Stopping kafka ... STOPPED
May 28 15:30:12 hostm01 runuser[31178]: pam_unix(runuser:session): session closed for user ossadm
May 28 15:30:12 hostm01 kafka[31171]: [17B blob data]
May 28 15:30:12 hostm01 systemd[1]: Unit kafka.service entered failed state.
May 28 15:30:12 hostm01 systemd[1]: kafka.service failed.

Related

Ubuntu service stops randomly with "Main Process exited, status 143/n/a"

My apps are deployed as Debian packages and started via systemd services. The apps are crashing randomly and I am unable to find the reason for the crash.
I have 4 applications running (built using Java and Scala), out of which two are getting killed (named op and common). All are started using systemd services.
The error in syslog is:
Jul 22 11:45:44 misqa mosquitto[2930]: Socket error on client 005056b76983-Common, disconnecting
Jul 22 11:45:44 misqa systemd[1]: commonmod.service: Main process exited, code=exited, status=143/n/a
Jul 22 11:45:44 misqa systemd[1]: commonmod.service: Unit entered failed state
Jul 22 11:45:44 misqa systemd[1]: commonmod.service: Failed with result 'exit-code'
Jul 22 11:45:44 misqa systemd[1]: opmod.service: Main process exited, code=exited, status=143/n/a
Jul 22 11:45:44 misqa systemd[1]: opmod.service: Unit entered failed state
Jul 22 11:45:44 misqa systemd[1]: opmod.service: Failed with result 'exit-code'
But I am not getting any errors in the application log files for either op or common.
When I read more, I understood that the crash is caused by a SIGTERM, but I am unable to find out what is sending it. None of these applications has an exec command for killall.
Is there any way to identify which process is killing my applications?
My systemd service is like this:
[Unit]
Description=common Module
After=common-api
Requires=common-api
[Service]
TimeoutStartSec=0
ExecStart=/usr/bin/common-api
[Install]
WantedBy=multi-user.target
Basically, Java programs often don't send back the exit status systemd expects when shutting down in response to SIGTERM: the JVM exits with status 143 (128 + 15, the SIGTERM signal number), which systemd does not treat as success by default.
You should be able to suppress this by adding the exit code into the systemd service file as a "success" exit status:
[Service]
SuccessExitStatus=143
This solution was successfully applied here (Server Fault) and here (Stack Overflow), both with Java apps.
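Applied to the unit file above, the change would look roughly like this (run systemctl daemon-reload afterwards and restart the service):
[Unit]
Description=common Module
After=common-api
Requires=common-api
[Service]
TimeoutStartSec=0
ExecStart=/usr/bin/common-api
# 143 = 128 + 15 (SIGTERM), so a clean SIGTERM shutdown is counted as success
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target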

fail2ban fails to start on Ubuntu 16.04

I have used this tutorial to install fail2ban for my Ubuntu 16.04 server.
After going through it, I tried to start fail2ban with: /etc/init.d/fail2ban start
Here was the response:
[....] Starting fail2ban (via systemctl): fail2ban.serviceJob for fail2ban.service failed because the control process exited with error code. See "systemctl status fail2ban.service" and "journalctl -xe" for details.
failed!
When I then run: systemctl status fail2ban.service
I get this:
> fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Tue 2018-05-15 14:01:38 UTC; 1min 40s ago
Docs: man:fail2ban(1)
Process: 4468 ExecStart=/usr/bin/fail2ban-client -x start (code=exited, status=255)
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Control process exited, code=exited status=255
May 15 14:01:38 tastycoders-prod1 systemd[1]: Failed to start Fail2Ban Service.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Unit entered failed state.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Failed with result 'exit-code'.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Service hold-off time over, scheduling restart.
May 15 14:01:38 tastycoders-prod1 systemd[1]: Stopped Fail2Ban Service.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Start request repeated too quickly.
May 15 14:01:38 tastycoders-prod1 systemd[1]: Failed to start Fail2Ban Service.
Some tutorials at DigitalOcean contain errors. Check your /etc/fail2ban/jail.local and try to keep it as simple as you can, i.e. keep only the options you actually want to change in it.
Otherwise, if you have copied jail.conf to jail.local (as the DigitalOcean guide suggests), delete or comment out the [pam-generic] section in jail.local if you do not use it.
Go to line 146 of /etc/fail2ban/jail.local
# [pam-generic]
# enabled = false
# pam-generic filter can be customized to monitor specific subset of 'tty's
# filter = pam-generic
# port actually must be irrelevant but lets leave it all for some possible uses
# port = all
# banaction = iptables-allports
# port = anyport
# logpath = /var/log/auth.log
# maxretry = 6
More details are here: https://github.com/fail2ban/fail2ban/issues/1396
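As a rough illustration of keeping jail.local simple, a file that only overrides what you actually need could look like this (the sshd jail and the values below are just an example, not taken from the DigitalOcean guide):
[DEFAULT]
# ban for one hour after five failed attempts
bantime = 3600
maxretry = 5
[sshd]
enabled = true
Then restart the service with sudo systemctl restart fail2ban.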

Celery daemonization: celery.service: Failed at step USER spawning /home/mike/movingcollage/movingcollageenv/bin/celery: No such process

When I run journalctl -f after systemctl start celery.service I get:
Mar 21 19:14:21 ubuntu-2gb-nyc3-01 systemd[1]: Reloading.
Mar 21 19:14:21 ubuntu-2gb-nyc3-01 systemd[1]: Started ACPI event daemon.
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[21431]: celery.service: Failed at step USER spawning /home/mike/movingcollage/movingcollageenv/bin/celery: No such process
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: Starting celery service...
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: celery.service: Control process exited, code=exited status=217
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: Failed to start celery service.
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: celery.service: Unit entered failed state.
Mar 21 19:14:25 ubuntu-2gb-nyc3-01 systemd[1]: celery.service: Failed with result 'exit-code'.
This is my celery.service configuration:
[Unit]
Description=celery service
After=network.target
[Service]
#PIDFile=/run/celery/pid
Type=forking
User=celery
Group=celery
#RuntimeDirectory=celery
WorkingDirectory=/home/mike/movingcollage
ExecStart=/home/mike/movingcollage/movingcollageenv/bin/celery multi start 3 -A movingcollage "-c 5 -Q celery -l INFO"
ExecReload=/home/mike/movingcollage/movingcollageenv/bin/celery multi restart 3
ExecStop=/home/mike/movingcollage/movingcollageenv/bin/celery multi stopwait 3
[Install]
WantedBy=multi-user.target
Does anyone know what is wrong? Thanks in advance
For celery multi I think it is better to use Type=oneshot. Celery can start many worker processes, and each will have its own PID.
I start my celery like this:
celery multi start 2 \
-A my_app_name \
--uid=1001 --gid=1001 \
-f /var/log/celery/celery.log \
--loglevel="INFO" \
--pidfile:1=/run/celery1.pid \
--pidfile:2=/run/celery2.pid
Of course in your case uid, gid and all paths will be different.
You need to change:
User=celery
Group=celery
to your user and group, in my case:
User=ubuntu
Group=ubuntu
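Putting the two suggestions together, the unit from the question could be sketched roughly like this (RemainAfterExit=yes is my addition so the unit stays active after celery multi detaches, and ubuntu is only my example account; use whichever user actually exists on your box and owns the working directory):
[Unit]
Description=celery service
After=network.target
[Service]
Type=oneshot
# keep the unit "active" after celery multi returns, since the workers keep running in the background
RemainAfterExit=yes
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/mike/movingcollage
ExecStart=/home/mike/movingcollage/movingcollageenv/bin/celery multi start 3 -A movingcollage "-c 5 -Q celery -l INFO"
ExecReload=/home/mike/movingcollage/movingcollageenv/bin/celery multi restart 3
ExecStop=/home/mike/movingcollage/movingcollageenv/bin/celery multi stopwait 3
[Install]
WantedBy=multi-user.target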

Mongodb fails to start -> presents weird error logs

When I moved my environment from my local machine (Mac) to my server (Ubuntu), I unzipped my directory and ran npm install on the server with no errors or warnings, but my database was failing, so I decided to reinstall it based on this tutorial (after an apt-get remove mongo* first):
https://www.digitalocean.com/community/tutorials/how-to-install-mongodb-on-ubuntu-16-04
but then I get:
Job for mongodb.service failed because the control process exited with error code. See "systemctl status mongodb.service" and "journalctl -xe" for details.
Does anyone know what any of this means?
-- Unit mongodb.service has begun starting up.
Jun 20 03:54:18 ip-172-31-16-163 mongodb[25271]: * Starting database mongodb
Jun 20 03:54:19 ip-172-31-16-163 mongodb[25271]: ...fail!
Jun 20 03:54:19 ip-172-31-16-163 systemd[1]: mongodb.service: Control process exited, code=exited status=1
Jun 20 03:54:19 ip-172-31-16-163 sudo[25268]: pam_unix(sudo:session): session closed for user root
Jun 20 03:54:19 ip-172-31-16-163 systemd[1]: Failed to start LSB: An object/document-oriented database.
-- Subject: Unit mongodb.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongodb.service has failed.
--
-- The result is failed.
Jun 20 03:54:19 ip-172-31-16-163 systemd[1]: mongodb.service: Unit entered failed state.
Jun 20 03:54:19 ip-172-31-16-163 systemd[1]: mongodb.service: Failed with result 'exit-code'.
Looks familiar. Check the ownership of the files: the files in dbPath, the mongod lock file, the keyfile...
Basically, all the files that are listed in your /etc/mongod.conf.
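For example, something along these lines will check and fix the ownership (the paths and the mongodb user are the usual Ubuntu package defaults; substitute whatever your config actually lists):
# find the configured data directory and log path
grep -E 'dbPath|dbpath|path' /etc/mongod.conf
# check who owns them, including any leftover lock file
ls -ld /var/lib/mongodb /var/log/mongodb
ls -l /var/lib/mongodb/mongod.lock 2>/dev/null
# hand them back to the service user if they ended up owned by root
sudo chown -R mongodb:mongodb /var/lib/mongodb /var/log/mongodb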
Run the following command; it works for me:
sudo apt-get install --reinstall mongodb

Unable to set up a replica set in MongoDB 3.2

I am trying to set up a demo replica set from a standalone MongoDB instance in MongoDB 3.2, with the following options in mongod.conf:
#replication:
oplogSizeMB: 10240
replSetName: "rs0"
But when I try to start mongod, it throws this error:
Starting mongod (via systemctl): Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details. [FAILED]
The output of journalctl -xe says:
Jul 02 07:53:56 smartJN3-LTest-Blr1 systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
-- Subject: Unit mongod.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has begun starting up.
Jul 02 07:53:56 smartJN3-LTest-Blr1 runuser[29989]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
Jul 02 07:53:56 smartJN3-LTest-Blr1 runuser[29989]: pam_unix(runuser:session): session closed for user mongod
Jul 02 07:53:56 smartJN3-LTest-Blr1 mongod[29982]: Starting mongod: [FAILED]
Jul 02 07:53:56 smartJN3-LTest-Blr1 systemd[1]: mongod.service: control process exited, code=exited status=1
Jul 02 07:53:56 smartJN3-LTest-Blr1 systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has failed.
--
-- The result is failed.
Jul 02 07:53:56 smartJN3-LTest-Blr1 systemd[1]: Unit mongod.service entered failed state.
Jul 02 07:53:56 smartJN3-LTest-Blr1 systemd[1]: mongod.service failed.
Jul 02 07:53:56 smartJN3-LTest-Blr1 polkitd[2089]: Unregistered Authentication Agent for unix-process:29977:8668980 (system bus name :1.114, object path /org
This is on CentOS 7.2. Any help would be appreciated. Thanks.
Fixed it myself. Stupid me, I didn't uncomment the replication section header, which is why the settings were never picked up.
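For reference, the corrected block in mongod.conf, with the replication: key itself uncommented, reads:
replication:
  oplogSizeMB: 10240
  replSetName: "rs0"
After restarting mongod, the replica set still needs to be initiated once from the mongo shell with rs.initiate().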