fail2ban not working or not banning IP after failed logins - fail2ban

I have tried fail2ban on my servers at least 10 times, and most of the time it does not ban the IP.
In my jail.local:
[ssh]
enabled = true
port = ssh,some_port_number
filter = sshd
logpath = /var/log/auth.log
maxretry = 2
bantime = 180
On my server I install and configure fail2ban using the following in my shell script:
sudo apt-get -y install fail2ban
sudo cp custom_jail.local /etc/fail2ban/jail.local
sudo service fail2ban restart
I also set RepeatedMsgReduction to off in rsyslog.conf and then ran service rsyslog restart.
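For reference, a minimal sketch of that rsyslog change, assuming the legacy directive syntax:
# in /etc/rsyslog.conf
$RepeatedMsgReduction off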
After failed SSH logins (beyond the maxretry limit) I am still able to log in; it does not ban my IP.
auth.log
Jun 20 21:17:29 localhost sshd[4705]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=ip user=username
Jun 20 21:17:32 localhost sshd[4705]: Failed password for username from ip port 36472 ssh2
Jun 20 21:17:36 localhost sshd[4705]: Failed password for username from ip port 36472 ssh2
Jun 20 21:17:41 localhost sshd[4705]: Failed password for username from ip port 36472 ssh2
Jun 20 21:17:41 localhost sshd[4705]: Connection closed by ip [preauth]
fail2ban.log
2015-06-20 21:15:07,186 fail2ban.jail : INFO Jail 'ssh' stopped
2015-06-20 21:15:07,209 fail2ban.jail : INFO Jail 'ssh-ddos' stopped
2015-06-20 21:15:07,210 fail2ban.server : INFO Exiting Fail2ban
2015-06-20 21:15:07,790 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.11
2015-06-20 21:15:07,791 fail2ban.jail : INFO Creating new jail 'ssh'
2015-06-20 21:15:07,821 fail2ban.jail : INFO Jail 'ssh' uses pyinotify
2015-06-20 21:15:07,846 fail2ban.jail : INFO Initiated 'pyinotify' backend
2015-06-20 21:15:07,848 fail2ban.filter : INFO Added logfile = /var/log/auth.log
2015-06-20 21:15:07,849 fail2ban.filter : INFO Set maxRetry = 2
2015-06-20 21:15:07,850 fail2ban.filter : INFO Set findtime = 600
2015-06-20 21:15:07,850 fail2ban.actions: INFO Set banTime = 180
2015-06-20 21:15:07,884 fail2ban.jail : INFO Creating new jail 'ssh-ddos'
2015-06-20 21:15:07,884 fail2ban.jail : INFO Jail 'ssh-ddos' uses pyinotify
2015-06-20 21:15:07,891 fail2ban.jail : INFO Initiated 'pyinotify' backend
2015-06-20 21:15:07,893 fail2ban.filter : INFO Added logfile = /var/log/auth.log
2015-06-20 21:15:07,894 fail2ban.filter : INFO Set maxRetry = 2
2015-06-20 21:15:07,894 fail2ban.filter : INFO Set findtime = 600
2015-06-20 21:15:07,895 fail2ban.actions: INFO Set banTime = 180
2015-06-20 21:15:07,901 fail2ban.jail : INFO Jail 'ssh' started
2015-06-20 21:15:07,907 fail2ban.jail : INFO Jail 'ssh-ddos' started

I finally found out why fail2ban was not banning the IP.
Previously, after editing jail.local, I restarted fail2ban.
Now I first stop fail2ban and then start it, and this works for me.
I am using Ubuntu 14.04.
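A minimal sketch of that sequence on Ubuntu 14.04, plus a check that the jail picked up the new settings (fail2ban-client ships with the fail2ban package; the jail name 'ssh' matches the jail.local above):
sudo service fail2ban stop
sudo service fail2ban start
# verify the jail is active and watching auth.log
sudo fail2ban-client status ssh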

Related

Fail2ban log filled with Warning for DNS

I have just updated my jail.local file. Then I restarted fail2ban, and the log file is filling up with warnings about a DNS lookup of localhost. I removed some jails from the jail.local file, and I think the problem is the MySQL jail. The problem persists even if I stop the MySQL service.
Any suggestions?
Thank you.
2021-01-27 22:06:48,288 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,288 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,289 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,290 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,290 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,291 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
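The warning typically means that fail2ban matched a log line containing the hostname localhost rather than an IP and resolved it via DNS. A hedged sketch of one common way to silence it, assuming the MySQL jail is the one at fault (the jail name and option placement are assumptions; usedns is a standard jail option):
[mysqld-auth]
enabled = true
usedns  = no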

RethinkDB: /usr/bin/rethinkdb: Permission denied on startup

I'm having a problem with an init.d script on my Raspberry Pi 4 (4 GB) with Raspbian 10.
I've followed the guide on the official docs and compiled RethinkDB without any problem.
Then I configured it as described in the Deployment docs:
Created conf file in /etc/rethinkdb/instances.d/<conf_name>.conf;
Copied the init.d script: sudo cp /home/pi/rethinkdb-2.4.1/packaging/assets/init/rethinkdb /etc/init.d/rethinkdb
Added default runlevels: sudo update-rc.d rethinkdb defaults
I can start the server with the command rethinkdb --config-file /etc/rethinkdb/instances.d/instance1.config and it runs without problems:
pi@homeserverpi:~ $ rethinkdb --config-file /etc/rethinkdb/instances.d/instance1.conf
WARNING: ignoring --server-name because this server already has a name.
Running rethinkdb 2.4.1 (CLANG 7.0.1 (tags/RELEASE_701/final))...
Running on Linux 5.4.72-v7l+ armv7l
Loading data from directory /home/pi/rethinkdb_data
Listening for intracluster connections on port 29015
Listening for client driver connections on port 28015
Listening for administrative HTTP connections on port 8182
Listening on cluster addresses: 127.0.0.1, 192.168.1.3, ::1, fe80::38b8:6928:e4fd:1a9c%3
Listening on driver addresses: 127.0.0.1, 192.168.1.3, ::1, fe80::38b8:6928:e4fd:1a9c%3
Listening on http addresses: 127.0.0.1, 192.168.1.3, ::1, fe80::38b8:6928:e4fd:1a9c%3
Server ready, "homeserverpi_9x0" 00eb027b-181c-4a15-a170-8ba8299f4f3f
But when I try to start the service, it gives me this:
sudo /etc/init.d/rethinkdb start
rethinkdb: instance1: Starting instance. (logging to '/var/lib/rethinkdb/instance1/data/log_file')
/etc/init.d/rethinkdb: 224: /etc/init.d/rethinkdb: /usr/bin/rethinkdb: Permission denied
Permissions
pi@homeserverpi:~ $ ls -alh /etc/init.d/rethinkdb
-rwxr-xr-x 1 root root 7.5K Nov 30 00:20 /etc/init.d/rethinkdb
pi@homeserverpi:~ $ ls -alh /usr/bin/rethinkdb/
total 40K
drwxr-xr-x 2 root root 4.0K Nov 29 23:06 .
drwxr-xr-x 3 root root 36K Nov 29 23:06 ..
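For reference, a hedged diagnostic sketch comparing what the init script calls with the binary that works from the shell (paths are assumptions based on the listing above):
# what is actually installed at the path the init script executes
file /usr/bin/rethinkdb
# path of the rethinkdb binary that starts fine from the shell
which rethinkdb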
Can someone please help me on this?
Thank you

Is there a way to use ident authentication with pghero, or another workaround?

I installed pghero according to the GitHub docs (on CentOS 7), but I see nothing in the web browser (no connection error displayed, the page is just blank), and when the service is started curl does get a response. Looking at the logs I see...
[root@airflowetl ~]# pghero logs
==> /var/log/pghero/production.log <==
Started GET "/" for 127.0.0.1 at 2020-01-28 16:21:47 -1000
Processing by PgHero::HomeController#index as */*
Completed 500 Internal Server Error in 55ms
PG::ConnectionBad (FATAL: password authentication failed for user "airflow"):
...
...
...
Started GET "/" for 127.0.0.1 at 2020-01-28 23:51:28 -1000
Processing by PgHero::HomeController#index as */*
Completed 500 Internal Server Error in 11ms
PG::ConnectionBad (FATAL: Ident authentication failed for user "airflow"):
...
...
...
Jan 28 22:59:10 airflowetl.co.local systemd[1]: pghero-web-1.service holdoff time over, scheduling restart.
Jan 28 22:59:10 airflowetl.co.local systemd[1]: Stopped pghero-web-1.service.
Jan 28 22:59:10 airflowetl.co.local systemd[1]: start request repeated too quickly for pghero-web-1.service
Jan 28 22:59:10 airflowetl.co.local systemd[1]: Failed to start pghero-web-1.service.
Jan 28 22:59:10 airflowetl.co.local systemd[1]: Unit pghero-web-1.service entered failed state.
Jan 28 22:59:10 airflowetl.co.local systemd[1]: pghero-web-1.service failed.
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Stopping pghero-web.service...
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Stopped pghero-web.service.
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Started pghero-web.service.
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Started pghero-web-1.service.
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] Puma starting in cluster mode...
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Version 4.3.0 (ruby 2.6.3-p62), codename: Mysterious Trave
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Min threads: 1, max threads: 16
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Environment: production
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Process workers: 3
Jan 28 23:09:37 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Preloading application
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] * Listening on tcp://0.0.0.0:3001
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] ! WARNING: Detected 1 Thread(s) started in app boot:
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] ! #<Thread:0x0000561740ea27e0@/opt/pghero/vendor/bundle/ruby
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] Use Ctrl-C to stop
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] - Worker 0 (pid: 12213) booted, phase: 0
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] - Worker 1 (pid: 12215) booted, phase: 0
Jan 28 23:09:38 airflowetl.co.local pghero-web-1.service[12134]: [12134] - Worker 2 (pid: 12219) booted, phase: 0
...
and I can see the
500 Internal Server Error in 55ms
error. Checking the service status, I see...
[root@airflowetl ~]# service pghero status
Redirecting to /bin/systemctl status pghero.service
● pghero.service
Loaded: loaded (/etc/systemd/system/pghero.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-01-28 23:09:36 HST; 4s ago
Main PID: 12132 (sleep)
CGroup: /system.slice/pghero.service
└─12132 /bin/sleep infinity
Jan 28 23:09:36 airflowetl.co.local systemd[1]: Started pghero.service.
[root@airflowetl ~]# netstat -tulnp | grep 3001
tcp 0 0 0.0.0.0:3001 0.0.0.0:* LISTEN 12134/puma 4.3.0 (t
[root@airflowetl ~]# curl -v http://localhost:3001/
* About to connect() to localhost port 3001 (#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3001 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:3001
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< Content-Type: text/html; charset=UTF-8
< X-Request-Id: 2bad5f50-438e-4cb3-8e79-41c84eb75c2c
< X-Runtime: 0.017069
< Content-Length: 0
<
* Connection #0 to host localhost left intact
I have no experience with PostgreSQL or DB admin stuff, but it appears that the error is due to the fact that I use ident authentication (and it appears pghero wants to use a password):
[root@airflowetl ~]# cat /var/lib/pgsql/data/pg_hba.conf
# PostgreSQL Client Authentication Configuration File
# ===================================================
...
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
#host all all 127.0.0.1/32 ident
#host all all 0.0.0.0/0 trust
host all all 0.0.0.0/0 md5
#host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres peer
#host replication postgres 127.0.0.1/32 ident
#host replication postgres ::1/128 ident
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
listen_addresses = '*' # for apache-airflow connection
I did this following an article on setting up PostgreSQL as the backend for the Airflow orchestration tool.
I have tried multiple URLs:
sudo pghero config:set DATABASE_URL=postgresql://airflow:xxxx@localhost:5432/airflow
sudo pghero config:set DATABASE_URL=postgresql+psycopg2://airflow:xxxx@localhost:5432/airflow
but get the same results.
I am not sure how to move forward at this point. Does anyone with more experience with pghero or PostgreSQL know what could be done here?
I have no experience with PostgreSQL or DB admin stuff, but it appears that the error is due to the fact that I use ident authentication (and it appears pghero wants to use a password)
It isn't about what pghero wants; it is PostgreSQL that is demanding password authentication.
host all all 0.0.0.0/0 md5
host all all ::1/128 ident
You are using md5 (i.e. password) on all IPv4 connections (including "localhost"), and using ident on only the IPv6 connection from ::1, which is the IPv6 way of spelling "localhost". pghero is coming in over IPv4, not IPv6, so it is getting commanded to use a password.
You can change the "md5" to "ident" for the 0.0.0.0/0 line (but you probably shouldn't as "ident" is not very secure from outside hosts), or add a line before that one to indicate 127.0.0.1/32 specifically should use ident. Or change your pghero config to try to connect over IPv6 rather than IPv4.
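A hedged sketch of the second option: add an ident rule for IPv4 localhost above the catch-all md5 rule (the file path and reload command assume the stock CentOS 7 postgresql package):
# /var/lib/pgsql/data/pg_hba.conf -- order matters, the first matching line wins
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   127.0.0.1/32   ident
host    all       all   0.0.0.0/0      md5
Then reload PostgreSQL so the change takes effect:
sudo systemctl reload postgresql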
Your new log file entry shows that it is trying ident and failing at that too. I don't understand why you are getting both, but they are 7 hours apart, so maybe you changed pg_hba.conf in between. PostgreSQL will write a more complete report about why the ident authentication failed to the PostgreSQL server's log file. (It doesn't send the complete report to the unauthenticated client, because that would reveal sensitive information.) Find the PostgreSQL server's log file.
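A hedged sketch for locating that server log on a stock CentOS 7 install (the exact log path is an assumption; ask the server itself):
# ask PostgreSQL where it keeps its data and logs
sudo -u postgres psql -c "SHOW data_directory;"
sudo -u postgres psql -c "SHOW log_directory;"   # usually relative to data_directory, e.g. pg_log
# then inspect the most recent log file, for example:
sudo tail -n 100 /var/lib/pgsql/data/pg_log/postgresql-*.log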

Warning: Authentication failure. Retrying

I tried to spin up a CentOS 7 VM. Below are my settings.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "zabbix1" do |zabbix1|
    zabbix1.vm.box = "centos/7"
    zabbix1.vm.hostname = "zabbix1"
    zabbix1.ssh.insert_key = false
    zabbix1.vm.network :private_network, ip: "10.11.12.55"
    zabbix1.ssh.private_key_path = "~/.ssh/id_rsa"
    zabbix1.ssh.forward_agent = true
  end
end
Result
vagrant reload
==> zabbix1: Attempting graceful shutdown of VM...
zabbix1: Guest communication could not be established! This is usually because
zabbix1: SSH is not running, the authentication information was changed,
zabbix1: or some other networking issue. Vagrant will force halt, if
zabbix1: capable.
==> zabbix1: Forcing shutdown of VM...
==> zabbix1: Checking if box 'centos/7' is up to date...
==> zabbix1: Clearing any previously set forwarded ports...
==> zabbix1: Fixed port collision for 22 => 2222. Now on port 2204.
==> zabbix1: Clearing any previously set network interfaces...
==> zabbix1: Preparing network interfaces based on configuration...
zabbix1: Adapter 1: nat
zabbix1: Adapter 2: hostonly
==> zabbix1: Forwarding ports...
zabbix1: 22 (guest) => 2204 (host) (adapter 1)
==> zabbix1: Booting VM...
==> zabbix1: Waiting for machine to boot. This may take a few minutes...
zabbix1: SSH address: 127.0.0.1:2204
zabbix1: SSH username: vagrant
zabbix1: SSH auth method: private key
zabbix1: Warning: Remote connection disconnect. Retrying...
zabbix1: Warning: Remote connection disconnect. Retrying...
zabbix1: Warning: Remote connection disconnect. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
vagrant ssh-config
Host zabbix1
HostName 127.0.0.1
User vagrant
Port 2204
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /Users/bheng/.ssh/id_rsa
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
What did I do wrong? What did I miss?
I had the same issue with the same box, and the way I fixed it was to log into the VM from VirtualBox (vagrant/vagrant as username/password) and change the permissions of .ssh/authorized_keys:
chmod 0600 .ssh/authorized_keys
Do that after you run vagrant up (while the error is repeating) and the VM is up; vagrant up will then complete successfully and you will be able to SSH into the VM with vagrant ssh.
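A slightly fuller sketch of that fix from the VirtualBox console, assuming the box's default vagrant/vagrant credentials (the directory mode and chown lines are extra precautions, not part of the original answer):
# logged in as the vagrant user inside the VM
chmod 0700 ~/.ssh
chmod 0600 ~/.ssh/authorized_keys
chown -R vagrant:vagrant ~/.ssh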
Private networks can be configured manually or with the VirtualBox built-in DHCP server. The following works for me:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "zabbix1" do |zabbix1|
    zabbix1.vm.box = "centos/7"
    zabbix1.vm.hostname = "zabbix1"
    zabbix1.ssh.insert_key = false
    zabbix1.vm.network :private_network, type: "dhcp"
  end
end
Then you have to run vagrant destroy and vagrant up.
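A hedged one-liner for that (the -f flag only skips the confirmation prompt):
vagrant destroy -f && vagrant up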

HAProxy not running stats socket

I installed haproxy from the AUR on Arch Linux and modified the config file a bit:
global
    maxconn 20000
    log 127.0.0.1 local0
    user haproxy
    stats socket /run/haproxy/haproxy.sock mode 660 level admin
    stats timeout 30s
    chroot /usr/share/haproxy
    pidfile /run/haproxy.pid
    daemon

defaults
    mode http
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics

frontend www-http
    bind 127.0.0.1:80
    default_backend www-backend

backend www-backend
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 30s
    timeout queue 30s
    server app1 127.0.0.1:5001 check
    server app2 127.0.0.1:5002 check
I have made sure that the directory /run/haproxy exists and has permissions for the user haproxy to write to it:
ツ ls -al /run/haproxy
total 0
drwxr-xr-x 2 haproxy root 40 May 13 21:37 .
drwxr-xr-x 27 root root 720 May 13 22:00 ..
When I launch haproxy using systemctl start haproxy.service, it loads fine. I can even go to the /stats page and view stats; however, socat reports the following error:
ツ sudo socat unix-connect:/run/haproxy/haproxy.sock stdio
2016/05/13 22:04:11 socat[24202] E connect(5, AF=1 "/run/haproxy/haproxy.sock", 27): No such file or directory
I am at my wits' end and unable to understand what is happening. This is what I get from journalctl -xe:
May 13 21:56:31 rohanarch.local systemd[1]: Starting HAProxy Load Balancer...
May 13 21:56:31 rohanarch.local systemd[1]: Started HAProxy Load Balancer.
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: haproxy-systemd-wrapper: executing /usr/bin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: [WARNING] 133/215631 (20456) : config : missing timeouts for frontend 'www-http'.
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | While not properly invalid, you will certainly encounter various problems
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | with such a configuration. To fix this, please ensure that all following
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
Basically, there are no errors or warnings, not even an indication about the stats socket. Others who have faced a problem with the stats socket could not get haproxy started at all; in my case it starts up fine, but the socket just isn't being created.
You need to create the directory manually. Ensure /run/haproxy exists; if it doesn't, first create it with:
sudo mkdir /run/haproxy
This should resolve your issue.
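On Arch, /run is a tmpfs, so a directory created by hand disappears at reboot. A hedged sketch of making it persistent with systemd-tmpfiles (the file name, mode, and ownership below are assumptions):
# /etc/tmpfiles.d/haproxy.conf
d /run/haproxy 0755 haproxy haproxy -
Apply it immediately without rebooting:
sudo systemd-tmpfiles --create /etc/tmpfiles.d/haproxy.conf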
Try making SELinux permissive with the command below and restart the HAProxy service.
SELinux command:
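The original answer omitted the actual command; presumably something like this was meant (an assumption on my part; setenforce 0 switches SELinux to permissive mode until the next reboot):
sudo setenforce 0   # assumption: the intended 'selinux command'; permissive until reboot
sudo systemctl restart haproxy.service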