I have just updated my jail.local file. Then I restarted fail2ban, and the log file is filling up with warnings about a DNS lookup of localhost. I removed some jails from jail.local, and I think the problem is the MySQL jail. The problem persists even if I stop the MySQL service.
Any suggestions?
Thank you.
2021-01-27 22:06:48,288 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,288 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,289 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,290 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,290 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
2021-01-27 22:06:48,291 fail2ban.filter [13594]: WARNING Determined IP using DNS Lookup: localhost = ['127.0.0.1']
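This warning usually means fail2ban is resolving a hostname (here localhost) via DNS, either because a jail option such as ignoreip contains a hostname or because a filter matched "localhost" as the host in a log line; the usedns option controls this behaviour. A rough diagnostic sketch, assuming the standard Debian/Ubuntu paths and a jail name placeholder:
# Find where "localhost" enters the configuration and how usedns is set:
sudo grep -rn "localhost" /etc/fail2ban/jail.local /etc/fail2ban/jail.d/ 2>/dev/null
sudo grep -rn "usedns" /etc/fail2ban/jail.conf /etc/fail2ban/jail.local /etc/fail2ban/jail.d/ 2>/dev/null
# See which jails are active and inspect the suspected one (replace <jail-name>):
sudo fail2ban-client status
sudo fail2ban-client status <jail-name>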
Avahi/mDNS is running by default on recent versions of Raspbian. Great. Very convenient to just ssh pi@mypi.local.
I am doing development on a Mac and operating a local network of headless Raspberry Pis. Up until now, I was able to use mDNS to access the Pis, and the Pis used mDNS to connect to each other.
Today, I shifted the RPis to a private local network by setting them up on a wireless router unconnected to the internet. Once I join the private network, I am still able to access them via mDNS:
% ssh pi@scheduler.local
Linux scheduler 5.10.63-v7l+ #1459 SMP Wed Oct 6 16:41:57 BST 2021 armv7l
Last login: Mon Aug 1 09:07:43 2022
pi@scheduler:~ $
and
wes@macbook % ssh pi@crossing.local
Linux crossing 5.10.17-v7l+ #1414 SMP Fri Apr 30 13:20:47 BST 2021 armv7l
Last login: Mon Aug 1 09:07:46 2022
pi@crossing:~ $
But when they try to access each other, I get some results I don't understand:
pi@scheduler:~ $ ping crossing.local
PING crossing.local (10.0.0.1) 56(84) bytes of data.
From 192.168.0.1 (192.168.0.1) icmp_seq=1 Destination Net Unreachable
From 192.168.0.1 (192.168.0.1) icmp_seq=2 Destination Net Unreachable
From 192.168.0.1 (192.168.0.1) icmp_seq=3 Destination Net Unreachable
From 192.168.0.1 (192.168.0.1) icmp_seq=4 Destination Net Unreachable
Here's what Avahi reports:
pi@scheduler:~ $ service avahi-daemon status
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-08-01 09:07:37 PDT; 41min ago
Main PID: 388 (avahi-daemon)
Status: "avahi-daemon 0.7 starting up."
Tasks: 2 (limit: 1438)
CGroup: /system.slice/avahi-daemon.service
├─388 avahi-daemon: running [scheduler.local]
└─414 avahi-daemon: chroot helper
Aug 01 09:08:08 scheduler avahi-daemon[388]: Leaving mDNS multicast group on interface wlan0.IPv4 with address 169.
Aug 01 09:08:08 scheduler avahi-daemon[388]: Joining mDNS multicast group on interface wlan0.IPv4 with address 192.
Aug 01 09:48:29 scheduler avahi-daemon[388]: Files changed, reloading.
Aug 01 09:48:29 scheduler avahi-daemon[388]: No service file found in /etc/avahi/services.
Here are my /etc/hosts and /etc/hostname files:
pi@scheduler:~ $ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 scheduler
pi@scheduler:~ $ cat /etc/hostname
scheduler
What does avahi say about it? Let's see:
pi@brs-scheduler:~ $ avahi-resolve --name brs-crossing.local -4
brs-crossing.local 192.168.0.214
pi@brs-scheduler:~ $ ifconfig | grep "inet 192"
inet 192.168.0.109 netmask 255.255.255.0 broadcast 192.168.0.255
pi@brs-scheduler:~ $ ping brs-crossing.local
PING brs-crossing.local (10.0.0.1) 56(84) bytes of data.
From 192.168.0.1 (192.168.0.1) icmp_seq=1 Destination Net Unreachable
So for some reason, on this private network, mDNS is resolving correctly, but ping and ssh don't resolve properly?
What am I missing?
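For reference, ping and ssh resolve names through the glibc NSS stack (the hosts line in nsswitch.conf), not through avahi-resolve directly, so comparing the two can show where the bogus 10.0.0.1 answer comes from. A diagnostic sketch:
# ping/ssh follow the "hosts:" order from nsswitch.conf (mdns4_minimal, dns, ...):
grep ^hosts: /etc/nsswitch.conf
# getent goes through the same NSS path that ping and ssh use:
getent hosts crossing.local
# For comparison, avahi-resolve queries mDNS directly:
avahi-resolve --name crossing.local -4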
Unsurprisingly, since the Pis worked fine on the local net and stopped working on a private net with a new router, it had to do with the configuration of the new router, not mDNS.
mDNS was working fine:
pi@scheduler:~ $ avahi-resolve --name crossing.local -4
crossing.local 192.168.0.214
The new router on the private net had two operating modes, "router" and "access point." In "router" mode, the router was pushing a DNS nameserver IP to clients, which was somehow hosing ping, ssh, and other services, despite mDNS working fine.
pi@scheduler:~ $ cat /etc/resolv.conf
# Generated by resolvconf
nameserver 192.168.0.1
Once the router was placed in "access point" mode, and DHCP was turned on manually, everything worked.
Obscure problem. Obscure solution.
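If switching the router mode had not been an option, a similar effect could probably be had on the Pi side by overriding the nameserver that DHCP pushes. A sketch assuming the Raspbian default dhcpcd setup (the nameserver address below is just an example):
# Append to /etc/dhcpcd.conf so the pushed nameserver is replaced:
echo "static domain_name_servers=1.1.1.1" | sudo tee -a /etc/dhcpcd.conf
# (alternatively, "nohook resolv.conf" keeps dhcpcd from touching /etc/resolv.conf)
sudo systemctl restart dhcpcd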
Some information. Hostnames:
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.49.41 ceph-gw-one
172.16.49.42 ceph-gw-two
shell: ceph orch host add 172.16.49.42
Error EINVAL: New host 172.16.49.42 (172.16.49.42) failed check: ['INFO:cephadm:podman|docker (/bin/docker) is present', 'INFO:cephadm:systemctl is present', 'INFO:cephadm:lvcreate is present', 'INFO:cephadm:Unit chronyd.service is enabled and running', 'INFO:cephadm:Hostname "172.16.49.42" matches what is expected.', 'ERROR: hostname "ceph-gw-two" does not match expected hostname "172.16.49.42"']
shell: orch host add ceph-gw-two
Error EINVAL: Failed to connect to ceph-gw-two (ceph-gw-two).
Check that the host is reachable and accepts connections using the cephadm SSH key
you may want to run:
ceph cephadm get-ssh-config > ssh_config
ceph config-key get mgr/cephadm/ssh_identity_key > key
ssh -F ssh_config -i key root@ceph-gw-two
I have checked that, whether by IP or by hostname, SSH login succeeds.
I read the cephadm source scripts:
out, err, code = self._run_cephadm(spec.hostname, cephadmNoImage, 'check-host',
                                   ['--expect-hostname', spec.hostname],
                                   addr=spec.addr,
                                   error_ok=True, no_fsid=True)
if code:
    raise OrchestratorError('New host %s (%s) failed check: %s' % (
        spec.hostname, spec.addr, err))
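In other words, cephadm runs check-host on the target and expects the target's hostname to match whatever was passed as the host name to orch host add. The check can be reproduced by hand on the target node (a sketch based on the arguments above):
# Run on ceph-gw-two; should report that the hostname matches what is expected:
cephadm check-host --expect-hostname ceph-gw-two
# Passing the IP instead reproduces the mismatch error shown earlier:
cephadm check-host --expect-hostname 172.16.49.42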
So I changed the command to:
ceph orch host add ceph-gw-two 172.16.49.42;
Done, it works well.
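That matches the general form ceph orch host add <hostname> [addr], where the first argument must be the host's actual hostname and the optional second argument is the address used to reach it. The result can be checked against the orchestrator's host list:
# List hosts known to the orchestrator and the address registered for each:
ceph orch host ls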
Using the latest version of OpenShift Origin, v3.10.0, I run the following command on a CentOS VM:
oc cluster up --public-hostname=192.168.56.15 --http-proxy=http://proxy.ip:port --https-proxy=https://proxy.ip:port --no-proxy=[192.168.56.0/24,172.0.0.0/8,192.168.56.15,192.168.56.15,localhost]
As a result I get:
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.10 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.10 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.10 ...
I1003 10:58:00.643521 3446 flags.go:30] Running "create-kubelet-flags"
I1003 10:58:01.314805 3446 run_kubelet.go:48] Running "start-kubelet"
I1003 10:58:01.549316 3446 run_self_hosted.go:172] Waiting for the kube-apiserver to be ready ...
E1003 11:03:01.559324 3446 run_self_hosted.go:542] API server error: Get https://127.0.0.1:8443/healthz?timeout=32s: dial tcp 127.0.0.1:8443: getsockopt: connection refused ()
Error: timed out waiting for the condition
And while following the Docker logs I notice the following error:
E1003 github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: getsockopt: connection refused
This is normal behavior, since netstat shows only one port open:
tcp6 0 0 :::10250 :::* LISTEN 3894/hyperkube
PS:
As you can see, I use a proxy.
I tried local name resolution, using DNS names instead of IP addresses, and since I don't have a DNS server I used /etc/hosts; same problem.
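Since the kube-apiserver never comes up, the usual next step is to look at the control-plane containers themselves and at the proxy settings visible on the host. A diagnostic sketch (container names vary between releases, so the placeholder below is hypothetical):
# List origin-related containers, including ones that already exited:
docker ps -a --filter "name=origin"
# Inspect the most recently failed one (replace <container> with the real name/ID):
docker logs <container> 2>&1 | tail -n 50
# Verify that the proxy variables on the host exempt the local API endpoints:
env | grep -i proxy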
I was able to enable IPv6 on MongoDB.
The /etc/mongod.conf file has net.ipv6 set to true.
I can see that mongod is listening on IPv6:
# netstat -anp | grep 27017
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 17967/mongod
tcp6 0 0 :::27017 :::* LISTEN 17967/mongod
unix 2 [ ACC ] STREAM LISTENING 19750206 17967/mongod /tmp/mongodb-27017.sock
#
ping6 to the IPv6 address is fine.
[root@tesla05 log]# ping6 -I eno33554952 tesla05-2-ipv6.ulticom.com
PING tesla05-2-ipv6.ulticom.com(tesla05) from fe80::250:56ff:feb4:7c43 eno33554952: 56 data bytes
64 bytes from tesla05: icmp_seq=1 ttl=64 time=0.101 ms
64 bytes from tesla05: icmp_seq=2 ttl=64 time=0.093 ms
64 bytes from tesla05: icmp_seq=3 ttl=64 time=0.091 ms
However, the mongo shell doesn't seem to understand the IPv6 address.
[root@tesla05 log]# mongo --ipv6 [fe80::250:56ff:feb4:7c43]:27017/admin
MongoDB shell version: 3.2.4
connecting to: [fe80::250:56ff:feb4:7c43]:27017/admin
2016-10-25T12:04:50.401-0400 W NETWORK [thread1] Failed to connect to fe80::250:56ff:feb4:7c43:27017, reason: errno:22 Invalid argument
2016-10-25T12:04:50.402-0400 E QUERY [thread1] Error: couldn't connect to server [fe80::250:56ff:feb4:7c43]:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:226:14
@(connect):1:6
exception: connect failed
[root@tesla05 log]# mongo --ipv6 tesla05-2-ipv6.ulticom.com:27017/admin
MongoDB shell version: 3.2.4
connecting to: tesla05-2-ipv6.ulticom.com:27017/admin
2016-10-25T12:15:17.861-0400 W NETWORK [thread1] Failed to connect to fe80::250:56ff:feb4:7c43:27017, reason: errno:22 Invalid argument
2016-10-25T12:15:17.861-0400 E QUERY [thread1] Error: couldn't connect to server tesla05-2-ipv6.ulticom.com:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:226:14
@(connect):1:6
exception: connect failed
You are trying to use a link-local IPv6 address. These are not valid without a scope, but you haven't provided one. Thus you get the error Invalid argument. For this reason, putting a link-local address in the DNS makes no sense, because the address is only valid on a particular LAN, and the scope may be different for every host on that LAN.
To use the address, append the scope to it, e.g. fe80::250:56ff:feb4:7c43%eno33554952
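A quick sketch of the scoped form with tools that accept it directly (the zone ID must name the interface on the connecting host; whether a given mongo shell version passes the zone ID through is a separate question, so this only illustrates the address syntax):
# The %zone suffix tells the kernel which interface the link-local address lives on:
ping6 fe80::250:56ff:feb4:7c43%eno33554952
ssh root@fe80::250:56ff:feb4:7c43%eno33554952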
I have tried fail2ban on my servers at least 10 times, and most of the time it does not ban the IP.
In my jail.local
[ssh]
enabled = true
port = ssh,some_port_number
filter = sshd
logpath = /var/log/auth.log
maxretry = 2
bantime = 180
On my server I install fail2ban and configure it using this in my shell script:
sudo apt-get -y install fail2ban
sudo cp custom_jail.local /etc/fail2ban/jail.local
sudo service fail2ban restart
I also set RepeatedMsgReduction off in rsyslog.conf and ran service rsyslog restart.
After failing SSH logins past the maxretry limit, I am still able to log in; fail2ban does not ban my IP.
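For what it's worth, the jail state and any banned addresses can be inspected with fail2ban-client (a diagnostic sketch; the jail name ssh matches the configuration above):
# Overall server status and the list of configured jails:
sudo fail2ban-client status
# Per-jail view: matched log lines ("Currently failed") and banned IPs:
sudo fail2ban-client status ssh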
auth.log
Jun 20 21:17:29 localhost sshd[4705]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=ip user=username
Jun 20 21:17:32 localhost sshd[4705]: Failed password for username from ip port 36472 ssh2
Jun 20 21:17:36 localhost sshd[4705]: Failed password for username from ip port 36472 ssh2
Jun 20 21:17:41 localhost sshd[4705]: Failed password for username from ip port 36472 ssh2
Jun 20 21:17:41 localhost sshd[4705]: Connection closed by ip [preauth]
fail2ban.log
2015-06-20 21:15:07,186 fail2ban.jail : INFO Jail 'ssh' stopped
2015-06-20 21:15:07,209 fail2ban.jail : INFO Jail 'ssh-ddos' stopped
2015-06-20 21:15:07,210 fail2ban.server : INFO Exiting Fail2ban
2015-06-20 21:15:07,790 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.11
2015-06-20 21:15:07,791 fail2ban.jail : INFO Creating new jail 'ssh'
2015-06-20 21:15:07,821 fail2ban.jail : INFO Jail 'ssh' uses pyinotify
2015-06-20 21:15:07,846 fail2ban.jail : INFO Initiated 'pyinotify' backend
2015-06-20 21:15:07,848 fail2ban.filter : INFO Added logfile = /var/log/auth.log
2015-06-20 21:15:07,849 fail2ban.filter : INFO Set maxRetry = 2
2015-06-20 21:15:07,850 fail2ban.filter : INFO Set findtime = 600
2015-06-20 21:15:07,850 fail2ban.actions: INFO Set banTime = 180
2015-06-20 21:15:07,884 fail2ban.jail : INFO Creating new jail 'ssh-ddos'
2015-06-20 21:15:07,884 fail2ban.jail : INFO Jail 'ssh-ddos' uses pyinotify
2015-06-20 21:15:07,891 fail2ban.jail : INFO Initiated 'pyinotify' backend
2015-06-20 21:15:07,893 fail2ban.filter : INFO Added logfile = /var/log/auth.log
2015-06-20 21:15:07,894 fail2ban.filter : INFO Set maxRetry = 2
2015-06-20 21:15:07,894 fail2ban.filter : INFO Set findtime = 600
2015-06-20 21:15:07,895 fail2ban.actions: INFO Set banTime = 180
2015-06-20 21:15:07,901 fail2ban.jail : INFO Jail 'ssh' started
2015-06-20 21:15:07,907 fail2ban.jail : INFO Jail 'ssh-ddos' started
I finally figured out why fail2ban was not banning the IP.
Previously, after editing jail.local, I restarted fail2ban.
Now I first stop fail2ban and then start it again, and this works for me.
I am using Ubuntu 14.04.
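A sketch of that sequence, plus a way to confirm that a ban actually takes effect (the iptables chain name fail2ban-ssh is the 0.8.x default for the ssh jail with the multiport action, so it may differ on other setups):
sudo service fail2ban stop
sudo service fail2ban start
# After the next round of failed logins, the offending IP should show up here:
sudo fail2ban-client status ssh
sudo iptables -L fail2ban-ssh -n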