fail2ban fails to start on Ubuntu 16.04

I used this tutorial to install fail2ban on my Ubuntu 16.04 server.
After going through it, I tried to start the service with: /etc/init.d/fail2ban start
Here was the response:
[....] Starting fail2ban (via systemctl): fail2ban.serviceJob for fail2ban.service failed because the control process exited with error code. See "systemctl status fail2ban.service" and "journalctl -xe" for details.
failed!
When I then run: systemctl status fail2ban.service
I get this:
● fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Tue 2018-05-15 14:01:38 UTC; 1min 40s ago
Docs: man:fail2ban(1)
Process: 4468 ExecStart=/usr/bin/fail2ban-client -x start (code=exited, status=255)
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Control process exited, code=exited status=255
May 15 14:01:38 tastycoders-prod1 systemd[1]: Failed to start Fail2Ban Service.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Unit entered failed state.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Failed with result 'exit-code'.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Service hold-off time over, scheduling restart.
May 15 14:01:38 tastycoders-prod1 systemd[1]: Stopped Fail2Ban Service.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Start request repeated too quickly.
May 15 14:01:38 tastycoders-prod1 systemd[1]: Failed to start Fail2Ban Service.

Some tutorials at DigitalOcean contain errors. Check your /etc/fail2ban/jail.local. Try to keep it as simple as you can, i.e. keep only the options you want to change in it.
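For example, a minimal /etc/fail2ban/jail.local could look like the sketch below, containing only the jails and options you actually want to override; the values shown here are purely illustrative, not recommendations:
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
port = ssh
logpath = /var/log/auth.log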
Otherwise, if you have copied jail.conf to jail.local (as the DigitalOcean guide suggests), then delete or comment out the pam section in jail.local if you do not use it.
Go to line 146 of /etc/fail2ban/jail.local and comment the section out like this:
# [pam-generic]
# enabled = false
# pam-generic filter can be customized to monitor specific subset of 'tty's
# filter = pam-generic
# port actually must be irrelevant but lets leave it all for some possible uses
# port = all
# banaction = iptables-allports
# port = anyport
# logpath = /var/log/auth.log
# maxretry = 6
More details are here: https://github.com/fail2ban/fail2ban/issues/1396
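If fail2ban still refuses to start after cleaning up jail.local, it usually helps to run by hand the same command the unit file uses, since it prints the underlying configuration error to the terminal; fail2ban-client -d dumps the parsed configuration without starting the server:
sudo fail2ban-client -x start
sudo fail2ban-client -d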

Related

Fatal error when starting Orion Context Broker

My Orion Context Broker does not start, and when I enter the command
/etc/init.d/contextBroker start I get this message:
[root@context-broker ~]# /etc/init.d/contextBroker start
Starting contextBroker (via systemctl): Job for contextBroker.service failed because the control process exited with error code. See "systemctl status contextBroker.service" and "journalctl -xe" for details.
[FAILED]
The systemctl status contextBroker.service command gives this message:
[root@context-broker ~]# systemctl status contextBroker.service
● contextBroker.service - LSB: run contextBroker
Loaded: loaded (/etc/rc.d/init.d/contextBroker; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2019-05-24 11:38:50 UTC; 1min 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 9782 ExecStart=/etc/rc.d/init.d/contextBroker start (code=exited, status=1/FAILURE)
May 24 11:38:47 context-broker.novalocal systemd[1]: Starting LSB: run contextBroker...
May 24 11:38:48 context-broker.novalocal contextBroker[9782]: contextBroker is stopped
May 24 11:38:48 context-broker.novalocal contextBroker[9782]: Starting...
May 24 11:38:48 context-broker.novalocal su[9788]: (to orion) root on none
May 24 11:38:50 context-broker.novalocal contextBroker[9782]: Starting contextBroker... cat: /var/run/contextBroker/contextBroker.pid...irectory
May 24 11:38:50 context-broker.novalocal systemd[1]: contextBroker.service: control process exited, code=exited status=1
May 24 11:38:50 context-broker.novalocal contextBroker[9782]: pidfile not found[FAILED]
May 24 11:38:50 context-broker.novalocal systemd[1]: Failed to start LSB: run contextBroker.
May 24 11:38:50 context-broker.novalocal systemd[1]: Unit contextBroker.service entered failed state.
May 24 11:38:50 context-broker.novalocal systemd[1]: contextBroker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
Also, the /tmp/contextBroker.log file looks like this:
time=2019-05-24T11:41:12.971Z | lvl=FATAL | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=rest.cpp[1753]:restStart | msg=Fatal Error (error starting REST interface)
I checked whether MongoDB is running, and it is running correctly.
UPDATE
With some searching I realised I had to kill the PID of the process, and after I did that the service successfully starts according to the messages, but I find it doesn't actually work. When I ask for the status I get the following:
[root@context-broker centos]# /etc/init.d/contextBroker status
● contextBroker.service - LSB: run contextBroker
Loaded: loaded (/etc/rc.d/init.d/contextBroker; bad; vendor preset: disabled)
Active: active (exited) since Sun 2019-05-26 18:34:49 UTC; 4min 56s ago
Docs: man:systemd-sysv-generator(8)
Process: 16295 ExecStop=/etc/rc.d/init.d/contextBroker stop (code=exited, status=0/SUCCESS)
Process: 16319 ExecStart=/etc/rc.d/init.d/contextBroker start (code=exited, status=0/SUCCESS)
May 26 18:34:47 context-broker.novalocal systemd[1]: Starting LSB: run contextBroker...
May 26 18:34:47 context-broker.novalocal contextBroker[16319]: contextBroker is stopped
May 26 18:34:47 context-broker.novalocal contextBroker[16319]: Starting...
May 26 18:34:47 context-broker.novalocal su[16325]: (to orion) root on none
May 26 18:34:49 context-broker.novalocal systemd[1]: Started LSB: run contextBroker.
May 26 18:34:49 context-broker.novalocal contextBroker[16319]: Starting contextBroker... [ OK ]
The log file has the same message as previously.
With some searching again, I believe the cause is that the service doesn't have a daemon(??). If that is the case, how do I add one?
Normally, when you get the "error starting REST interface" message, it's because there is already a broker running, which means the port is already taken. Make sure there is no broker already running.
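One way to confirm that is to look for a leftover broker process and for whatever is holding the listener port (1026 is Orion's default; adjust if you changed it):
ps aux | grep contextBroker
ss -tlnp | grep 1026    # or: netstat -tlnp | grep 1026
If an old process is still holding the port, kill it and then start the service again.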

firebird3.0.service failed because the control process exited with error code. How do I start Firebird?

I'm working with a Firebird 3.0 database. Suddenly my database stopped working, and when I checked the server status with
$ /etc/init.d/firebird3.0 status
I see the server is stopped:
● firebird3.0.service - Firebird Database Server ( SuperServer )
Loaded: loaded (/lib/systemd/system/firebird3.0.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-05-16 19:01:13 IST; 29s ago
Process: 9628 ExecStart=/usr/sbin/fbguard -pidfile /run/firebird3.0/default.pid -daemon -forever (code=exited, status=252)
May 16 19:00:58 ADMIN-I-61 systemd[1]: Starting Firebird Database Server ( SuperServer )...
May 16 19:01:13 ADMIN-I-61 systemd[1]: firebird3.0.service: Control process exited, code=exited status=252
May 16 19:01:13 ADMIN-I-61 systemd[1]: Failed to start Firebird Database Server ( SuperServer ).
May 16 19:01:13 ADMIN-I-61 systemd[1]: firebird3.0.service: Unit entered failed state.
May 16 19:01:13 ADMIN-I-61 systemd[1]: firebird3.0.service: Failed with result 'exit-code'.
When I try the following commands to start the server:
/etc/init.d/firebird3.0 start
/etc/init.d/firebird3.0 restart
it returns:
[....] Starting firebird3.0 (via systemctl): firebird3.0.serviceJob for firebird3.0.service failed because the control process exited with error code. See "systemctl status firebird3.0.service" and "journalctl -xe" for details.
failed!
Today's firebird.log file looks like this:
ADMIN-I-61 Thu May 16 11:06:37 2019
/opt/firebird/bin/fbguard: guardian starting /opt/firebird/bin/firebird
ADMIN-I-61 Thu May 16 11:07:26 2019
INET/inet_error: bind errno = 98
ADMIN-I-61 Thu May 16 11:07:27 2019
startup:INET_connect:
Unable to complete network request to host "ADMIN-I-61".
Error while listening for an incoming connection.
Address already in use
ADMIN-I-61 Thu May 16 11:07:27 2019
/opt/firebird/bin/fbguard: /opt/firebird/bin/firebird terminated due to startup error (2)
ADMIN-I-61 Thu May 16 11:07:27 2019
/opt/firebird/bin/fbguard: /opt/firebird/bin/firebird terminated due to startup error (2)
ADMIN-I-61 Thu May 16 12:22:35 2019
/opt/firebird/bin/fbguard: guardian starting /opt/firebird/bin/firebird
I have checked the ports.
Please help!
When Firebird is installed from the deb package, the following line in /etc/firebird/3.0/firebird.conf is uncommented:
RemoteBindAddress = localhost
Comment out this line:
#RemoteBindAddress = localhost
Default:
RemoteBindAddress =
After making the change, you must restart the firebird service.
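Since the log also shows "Address already in use" (bind errno = 98), it is worth checking whether an old fbguard/firebird process is still holding the port before restarting (3050 is Firebird's default port; adjust if yours differs):
ss -tlnp | grep 3050
ps aux | grep -E 'fbguard|firebird'
sudo systemctl restart firebird3.0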

Should systemctl show output when start/stop fails?

I've googled every variation of this question I can think of, but I'm just getting questions about failed services, not about how systemctl treats them. I have a service that I've been running as an init.d script. We're using systemctl now, fine. I created a service file that's a lightly modified version of the file automatically generated by systemd-sysv-generator. For ExecStart and ExecStop, it calls a bash script that returns 0 if the start/stop was successful, and non-zero if it was not.
I understand that there's no output from "systemctl start/stop" if it was successful. But I also don't get any output if either of the calls failed. The return code of the systemctl start/stop command is always 0 even if the return code of the source script is not. It's very clear it did fail because it shows as failed when I run the status command.
Is that expected behavior? Should it not give any indication that something failed unless you run a separate status command? And if that's not how it should behave, how can I make it indicate that a failure occurred?
Service file below.
[Unit]
SourcePath=/my/service/script.sh
[Service]
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
ExecStart=/my/service/script.sh start
ExecStop=/my/service/script.sh stop
Working fine here on CentOS 7. Maybe double-check that script.sh is really returning non-zero?
$pwd
/etc/systemd/system
$cat me.service
[Unit]
[Service]
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
ExecStart=/etc/systemd/system/die.sh
$cat die.sh
#!/bin/bash
echo "dying"
exit 1
$sudo systemctl start me
Job for me.service failed because the control process exited with error code. See "systemctl status me.service" and "journalctl -xe" for details.
$sudo systemctl status me.service
● me.service
Loaded: loaded (/etc/systemd/system/me.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2019-09-12 17:55:02 GMT; 8s ago
Process: 19758 ExecStart=/etc/systemd/system/die.sh (code=exited, status=1/FAILURE)
Sep 12 17:55:02 dpsdev-wkr01 systemd[1]: Starting me.service...
Sep 12 17:55:02 dpsdev-wkr01 die.sh[19758]: dying
Sep 12 17:55:02 dpsdev-wkr01 systemd[1]: me.service: control process exited, code=exited status=1
Sep 12 17:55:02 dpsdev-wkr01 systemd[1]: Failed to start me.service.
Sep 12 17:55:02 dpsdev-wkr01 systemd[1]: Unit me.service entered failed state.
Sep 12 17:55:02 dpsdev-wkr01 systemd[1]: me.service failed.
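Note that systemctl start itself also exits non-zero when the start job fails, so you can check its return code directly instead of running a separate status command; the exact code may vary, but something like this is expected:
$ sudo systemctl start me; echo $?
Job for me.service failed because the control process exited with error code. See "systemctl status me.service" and "journalctl -xe" for details.
1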

Filebeat Service will not start on RHEL 7

I have a problem with my Filebeat installation.
When I try to start it with "service filebeat start", it says "Starting Filebeat". After "service filebeat status" I get 4 PIDs (up to here everything looks "normal"):
[root@(Server) run]# service filebeat status
Filebeat is running with pid: 30650 30657 30658 30659
But after checking the PID, we see that it is not running:
[root@(Server) run]# ps -ef | grep 30650
root 30665 31360 0 16:27 pts/0 00:00:00 grep --color=auto 30650
Trying to start it with systemctl doesn't help:
[root@(Server) run]# systemctl start filebeat
Job for filebeat.service failed because a configured resource limit was exceeded. See "systemctl status filebeat.service" and "journalctl -xe" for details.
Status says:
[root@Server run]# systemctl status filebeat
● filebeat.service - LSB: start and stop filebeat
Loaded: loaded (/etc/rc.d/init.d/filebeat; bad; vendor preset: disabled)
Active: failed (Result: resources) since Tue 2017-09-26 16:30:33 CEST; 1min 41s ago
Docs: man:systemd-sysv-generator(8)
Process: 32118 ExecStart=/etc/rc.d/init.d/filebeat start (code=exited, status=0/SUCCESS)
Sep 26 16:30:33 Server... systemd[1]: Starting LSB: start and stop filebeat...
Sep 26 16:30:33 Server... filebeat[32118]: Starting Filebeat
Sep 26 16:30:33 Server... su[32119]: (to user) root on none
Sep 26 16:30:33 Server... systemd[1]: PID file /var/run/filebeat.pid not readable (yet?) after start.
Sep 26 16:30:33 Server... systemd[1]: Failed to start LSB: start and stop filebeat.
Sep 26 16:30:33 Server... systemd[1]: Unit filebeat.service entered failed state.
Sep 26 16:30:33 Server... systemd[1]: filebeat.service failed.
Does somebody have any idea?
Regards
The problem was ownership permissions. I had installed Filebeat not as root, but the "data" directory had root user and group ownership. After changing that, it runs and starts automatically after boot.
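For reference, the fix amounts to giving the data directory to the user that actually runs Filebeat; the path and owner below are only an example and depend on how and where Filebeat was installed:
# adjust the path and the owner to match your installation
sudo chown -R filebeat:filebeat /path/to/filebeat/data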
Regards

Job for kube-apiserver.service failed because the control process exited with error code

At the beginning I want to point out that I am fairly new to Linux systems, and totally new to Kubernetes, so my question may be trivial.
As stated in the title, I have a problem setting up a Kubernetes cluster. I am working on Atomic Host Version 7.1707 (2017-07-31 16:12:06).
I am following this guide:
http://www.projectatomic.io/docs/gettingstarted/
In addition to that, I followed this:
http://www.projectatomic.io/docs/kubernetes/
To be precise, I ran this command:
rpm-ostree install kubernetes-master --reboot
Everything was going fine until this point:
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
The problem is with:
systemctl start etcd kube-apiserver
as it gives me back this response:
Job for kube-apiserver.service failed because the control process
exited with error code. See "systemctl status kube-apiserver.service"
and "journalctl -xe" for details.
systemctl status kube-apiserver.service
gives me back:
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2017-08-25 14:29:56 CEST; 2s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 17876 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=255)
Main PID: 17876 (code=exited, status=255)
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=255/n/a
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 25 14:29:56 master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
I have no clue where to start and I will be more than thankful for any advice.
It turned out to be a typo in /etc/kubernetes/config. I misunderstood the "# Comma separated list of nodes in the etcd cluster".
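For reference, the etcd setting in the packaged Kubernetes configuration typically looks like the line below; the file location and address are an assumed single-node example, so adjust them to your own cluster:
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"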
I don't know how to close the thread or anything.