Unable to match sendmail "Connection rate limit exceeded" with fail2ban - fail2ban

I can't find what is preventing fail2ban from matching these lines:
Apr 19 20:17:12 localhost sm-mta[201892]: ruleset=check_relay, arg1=[12.345.7.789], arg2=12.345.7.789, relay=host.hostname.com [12.345.7.789] (may be forged), reject=421 4.3.2 Connection rate limit exceeded.
Apr 19 20:17:53 localhost sm-mta[201902]: 13JIHpTD201902: [12.345.7.789] did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA-v4
Here is the associated fail2ban configuration:
[Definition]
_daemon = (?:(sm-(mta|acceptingconnections)|sendmail))
__prefix_line = %(known/__prefix_line)s(?:\w{14,20}: )?
prefregex = ^<F-MLFID>%(__prefix_line)s</F-MLFID><F-CONTENT>.+</F-CONTENT>$
cmnfailre = ^ruleset=check_relay, arg1=(?P<dom>\S+), arg2=(?:IPv6:<IP6>|<IP4>), relay=((?P=dom) )?\[(\d+\.){3}\d+\](?: \(may be forged\))?, reject=421 4\.3\.2 (Connection rate limit exceeded\.|Too many open connections\.)$
^(?:\S+ )?\[(?:IPv6:<IP6>|<IP4>)\](?: \(may be forged\))? did not issue (?:[A-Z]{4}[/ ]?)+during connection to (?:TLS)?M(?:TA|S[PA])(?:-\w+)?$
I am testing with fail2ban-regex test-mail.log /etc/fail2ban/filter.d/sendmail-reject.conf
Resulting in:
Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [5] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 5 lines, 0 ignored, 0 matched, 5 missed
[processed in 0.00 sec]
Any idea?
Thanks!

The second message (did not issue MAIL/EXPN/VRFY/ETRN) can be matched if you set mode = aggressive for the sendmail-reject jail (after this fix, e.g. in v0.10.6 and 0.11.2).
There was indeed no rule matching the first message (rate limit exceeded) exactly, due to the different handling of the arguments, but...
I fixed this now in f0214b3 on github.
Until it is released, you can extend it yourself, either in the filter (copy & paste from the github filter) or directly in the jail:
[sendmail-reject]
enabled = true
mode = aggressive
failregex = %(known/failregex)s
^ruleset=check_relay(?:, arg\d+=\S*)*, relay=(\S+ )?\[?<ADDR>\]?(?: \(may be forged\))?, reject=421 4\.3\.2 (Connection rate limit exceeded\.|Too many open connections\.)$
^(?:\S+ )?\[<ADDR>\](?: \(may be forged\))? did not issue \S+ during connection
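Outside fail2ban, the new rate-limit pattern can be sanity-checked with plain Python re. Here <ADDR> is expanded by hand to a simple IPv4 group, which is only an approximation of fail2ban's internal <ADDR> tag, and the syslog/sendmail prefix is assumed to be stripped already (as prefregex would do):

```python
import re

# Hypothetical stand-in for fail2ban's <ADDR> tag: a bare IPv4 group.
ADDR = r"(?:\d{1,3}\.){3}\d{1,3}"

failregex = (
    r"^ruleset=check_relay(?:, arg\d+=\S*)*, relay=(\S+ )?\[?" + ADDR +
    r"\]?(?: \(may be forged\))?, reject=421 4\.3\.2 "
    r"(Connection rate limit exceeded\.|Too many open connections\.)$"
)

# The sample log line from the question, with the "Apr 19 ... sm-mta[...]:"
# prefix already removed.
line = ("ruleset=check_relay, arg1=[12.345.7.789], arg2=12.345.7.789, "
        "relay=host.hostname.com [12.345.7.789] (may be forged), "
        "reject=421 4.3.2 Connection rate limit exceeded.")

print(bool(re.search(failregex, line)))  # → True
```

This only shows that the pattern shape is right; fail2ban-regex remains the authoritative test, since it also exercises prefregex and the date template.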

Related

CentOS EPEL fail2ban not processing systemd journal for tomcat

I've installed fail2ban 0.10.5-2.el7 from EPEL on CentOS 7.8. I'm trying to get it to work with systemd for processing a Tomcat log (also systemd).
In jail.local I added:
[guacamole]
enabled = true
port = http,https
backend = systemd
In filter.d/guacamole.conf:
[Definition]
failregex = Authentication attempt from <HOST> for user "[^"]*" failed\.$
ignoreregex =
journalmatch = _SYSTEMD_UNIT=tomcat.service + _COMM=java
If I run journalctl -u tomcat.service I see all the log lines. The ones I am interested in look like this:
May 18 13:58:26 myhost catalina.sh[42065]: 13:58:26.485 [http-nio-8080-exec-6] WARN o.a.g.r.auth.AuthenticationService - Authentication attempt from 1.2.3.4 for user "test" failed.
If I redirect journalctl -u tomcat.service to a log file, and process it with fail2ban-regex then it works exactly the way I want it to work, finding all the lines it needs.
% fail2ban-regex /tmp/j9 /etc/fail2ban/filter.d/guacamole.conf
Running tests
=============
Use failregex filter file : guacamole, basedir: /etc/fail2ban
Use log file : /tmp/j9
Use encoding : UTF-8
Results
=======
Failregex: 47 total
|- #) [# of hits] regular expression
| 1) [47] Authentication attempt from <HOST> for user "[^"]*" failed\.$
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [1] ExYear(?P<_sep>[-/.])Month(?P=_sep)Day(?:T| ?)24hour:Minute:Second(?:[.,]Microseconds)?(?:\s*Zone offset)?
| [570] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 571 lines, 0 ignored, 47 matched, 524 missed
[processed in 0.12 sec]
However, if fail2ban reads the journal directly then it does not work:
fail2ban-regex systemd-journal /etc/fail2ban/filter.d/guacamole.conf
It comes back right away, and processes 0 lines!
Running tests
=============
Use failregex filter file : guacamole, basedir: /etc/fail2ban
Use systemd journal
Use encoding : UTF-8
Use journal match : _SYSTEMD_UNIT=tomcat.service + _COMM=java
Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Lines: 0 lines, 0 ignored, 0 matched, 0 missed
[processed in 0.00 sec]
I've tried to remove _COMM=java. It doesn't make a difference.
If I leave out the journal match line altogether, it at least processes all the lines from the journal, but does not find any matches (even though, as I mentioned, it processes a dump of the log file fine):
Running tests
=============
Use failregex filter file : guacamole, basedir: /etc/fail2ban
Use systemd journal
Use encoding : UTF-8
Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Lines: 202271 lines, 0 ignored, 0 matched, 202271 missed
[processed in 34.54 sec]
Missed line(s): too many to print. Use --print-all-missed to print all 202271 lines
Either this is a bug, or I'm missing a small detail.
Thanks for any help you can provide.
To make sure the filter definition is properly initialised, it would be good to include the common definition. Your filter definition (/etc/fail2ban/filter.d/guacamole.conf) would therefore look like:
[INCLUDES]
before = common.conf
[Definition]
journalmatch = _SYSTEMD_UNIT='tomcat.service'
failregex = Authentication attempt from <HOST> for user "[^"]*" failed\.$
ignoreregex =
A small note: given that your issue only occurs with systemd but not with flat files, could you try the same pattern without the $ at the end? Maybe there is an issue with the end of line when the message is written to the journal.
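That hypothesis is easy to check with plain Python re, using (\S+) as a hypothetical stand-in for fail2ban's <HOST> tag: a single trailing character after "failed." is enough to defeat the $ anchor, while the unanchored pattern still matches.

```python
import re

pattern = r'Authentication attempt from (\S+) for user "[^"]*" failed\.$'
clean = 'Authentication attempt from 1.2.3.4 for user "test" failed.'
trailing = clean + ' '   # hypothetical trailing whitespace from the journal

print(bool(re.search(pattern, clean)))                 # → True
print(bool(re.search(pattern, trailing)))              # → False: '$' misses
print(bool(re.search(pattern.rstrip('$'), trailing)))  # → True without the anchor
```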
In your jail definition (/etc/fail2ban/jail.d/guacamole.conf), remember to define the ban time/find time/retries if they haven't already been defined in the default configuration:
[guacamole]
enabled = true
port = http,https
maxretry = 3
findtime = 1h
bantime = 1d
# "backend" specifies the backend used to get files modification.
# systemd: uses systemd python library to access the systemd journal.
# Specifying "logpath" is not valid for this backend.
# See "journalmatch" in the jails associated filter config
backend = systemd
Remember to restart the fail2ban service after doing such changes.

compile postgresql from source

I need to make changes to the mdwrite function in the /src/backend/storage/smgr/md.c file (part of the code shown below, because I can't attach a screenshot):
seekpos = (off_t) BLCKSZ * (blocknum % ((BlockNumber) RELSEG_SIZE));
Assert(seekpos < (off_t) BLCKSZ * RELSEG_SIZE);
buffer[0] = 'A';    /* added */
nbytes = FileWrite(v->mdfd_vfd, buffer, BLCKSZ, seekpos, WAIT_EVENT_DATA_FILE_WRITE);
buffer[0] = 'B';    /* added */
TRACE_POSTGRESQL_SMGR_MD_WRITE_DONE(forknum, blocknum,
                                    reln->smgr_rnode.node.spcNode,
                                    reln->smgr_rnode.node.dbNode,
                                    reln->smgr_rnode.node.relNode,
                                    reln->smgr_rnode.backend,
                                    nbytes,
                                    BLCKSZ);
compilation and installation was successful, but when I configure:
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
It gives me "writing block 0 of relation global/1136" on the Ubuntu console. How should I work with the source code?
What was the point of the change? It seems to be designed to cause havoc, which is what it did.
The full message should be something like this:
LOG: request to flush past end of generated WAL; request 41/28, currpos 0/1523128
CONTEXT: writing block 0 of relation global/1213
FATAL: xlog flush request 41/28 is not satisfied --- flushed only to 0/1523128
CONTEXT: writing block 0 of relation global/1213
So you corrupted the LSN in the page header of the buffer to be written, which then caused a request of a WAL flush which is impossible to perform.
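For context: the first field of a PostgreSQL page header (PageHeaderData) is the 8-byte page LSN, pd_lsn, so overwriting buffer[0] rewrites the low byte of that LSN. A minimal sketch of the effect in Python, using a fabricated page image and assuming a little-endian build; note that 'A' is 0x41, which is plausibly where the "41/..." in the bogus flush request comes from:

```python
import struct

BLCKSZ = 8192

# Fabricated page image: pd_lsn is two 32-bit words (xlogid, then xrecoff),
# stored little-endian on x86. Pretend the real LSN was 0/1523128.
page = bytearray(BLCKSZ)
struct.pack_into("<II", page, 0, 0x0, 0x1523128)

page[0] = ord('A')  # the question's buffer[0] = 'A', applied to the page

xlogid, xrecoff = struct.unpack_from("<II", page, 0)
print(f"{xlogid:X}/{xrecoff:X}")  # → 41/1523128 ('A' == 0x41)
```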

Reversing a hash to find something which works, but hashcat seems to have issues

I saw some unfamiliar code on a project I was working on.
I saw a function which said:
var salt = 1514691869198;
var result = hex_hmac_sha1(salt, hmac_sha1(password));
// result is: 462435F34EAD6BB7C70751D90984DADD90EED9A4
I was having some issues with hashcat though. It seems to be getting killed early because of a driver or something.
It seems that option -m160 would be the one I would want to use since 160 = HMAC-SHA1 (key = $salt) in the man page for it.
The sha1.js file I was looking at, which gave me the code above, showed the salt as the key, which makes me think the 160 mode is the most relevant.
Obviously this is a nested SHA. I am aware that reversing a hash would not return the actual password, but I figured I could run a wordlist and attempt to find a string whose hash matches this one.
That being said, I am having trouble building the hashcat command, and in particular I was not sure how to put the hash into it. I was thinking it would be along the lines of:
hashcat -m160 462435F34EAD6BB7C70751D90984DADD90EED9A4: 1514691869198 mywordlist.txt
but it seems to fail for me with the following:
* Device #1: Not a native Intel OpenCL runtime. Expect massive speed loss.
You can use --force to override, but do not report related errors.
No devices found/left.
Started: Sat Dec 30 22:52:33 2017
Stopped: Sat Dec 30 22:52:33 2017
and if I used --force it would say:
hashcat (pull/1273/head) starting...
OpenCL Platform #1: The pocl project
====================================
* Device #1: pthread-Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz, 2656/2656 MB allocatable, 1MCU
Hashes: 1 digests; 1 unique digests, 1 unique salts
Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes, 5/13 rotates
Rules: 1
Applicable optimizers:
* Zero-Byte
* Not-Iterated
* Single-Hash
* Single-Salt
Watchdog: Hardware monitoring interface not found on your system.
Watchdog: Temperature abort trigger disabled.
Watchdog: Temperature retain trigger disabled.
* Device #1: build_opts '-I /usr/share/hashcat/OpenCL -D VENDOR_ID=64 -D CUDA_ARCH=0 -D VECT_SIZE=1 -D DEVICE_TYPE=2 -D DGST_R0=3 -D DGST_R1=4 -D DGST_R2=2 -D DGST_R3=1 -D DGST_ELEM=5 -D KERN_TYPE=160 -D _unroll -cl-std=CL1.2'
* Device #1: Kernel m00160_a0.0bbec6e5.kernel not found in cache! Building may take a while...
Kernel library file /usr/share/pocl/kernel-i686-pc-linux-gnu.bc doesn't exist.
Try reading How to use hashcat on CPU only
Relevant parts:
Download latest OpenCL Drivers and Runtimes for CPU:
https://software.intel.com/en-us/articles/opencl-drivers#latest_CPU_runtime
Latest release (16.1.1) – at time of writing
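Independently of the OpenCL problem, what -m 160 computes can be sketched in plain Python: hashcat expects hash:salt on one line with no space (e.g. 462435F34EAD6BB7C70751D90984DADD90EED9A4:1514691869198) and tries HMAC-SHA1 with the salt as the key. Whether the inner hmac_sha1(password) step from the JavaScript also has to be reproduced depends on that library's semantics, so treat this as an approximation:

```python
import hmac
import hashlib

salt = b"1514691869198"
target = "462435f34ead6bb7c70751d90984dadd90eed9a4"  # hash from the question

def mode_160(candidate: bytes) -> str:
    """HMAC-SHA1 with key = salt, which is what hashcat -m 160 tests."""
    return hmac.new(salt, candidate, hashlib.sha1).hexdigest()

# Toy wordlist attack; a real run would stream mywordlist.txt instead.
for word in [b"password", b"letmein", b"hunter2"]:
    if hmac.compare_digest(mode_160(word), target):
        print("found:", word)
```

If the JavaScript really hashes the password first, each candidate would have to be pre-processed the same way before the HMAC step, which plain -m 160 does not do.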

sqlcmd not showing RESTORE database stats

The following command in a cmd window
sqlcmd -S. -Usa -Ppass -dmaster -Q "RESTORE DATABASE [MYDATABASE] FROM DISK = 'D:\SQL Server\MYDATABASE.BAK' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10"
displays the following progress output:
10 percent processed.
20 percent processed.
30 percent processed.
40 percent processed.
50 percent processed.
60 percent processed.
70 percent processed.
80 percent processed.
90 percent processed.
100 percent processed.
Processed 32320 pages for database 'MYDATABASE', file 'MYDATABASE' on file 1.
Processed 7 pages for database 'MYDATABASE', file 'MYDATABASE_log' on file 1.
But it turns out that the progress is shown only after the entire restore has finished, which makes the stats useless during the process.
Any advice?
Here is the version of sqlcmd tool:
Microsoft (R) SQL Server Command Line Tool
Version 12.0.2000.8 NT
Copyright (c) 2014 Microsoft. All rights reserved.
Update Dec-2016:
Just including the comment from the Microsoft Connect link shared in the comments:
SQLCMD was rewritten in SQL 2012 to use ODBC. Here is a small
regression error that appears to have sneaked in.
It's the same effect reported when calling RAISERROR('Hello', 0, 1) WITH NOWAIT along a script.
I believe you can look in the SQL logs to see the progress as it happens.
You can also query percent_complete in sys.dm_exec_requests:
use start to open a separate window and issue SELECT percent_complete FROM sys.dm_exec_requests WHERE percent_complete > 0

Missing error in $@ for Perl Net::FTP

After figuring out (via SO, of course) that the error for a failed $ftp = Net::FTP->new() call is in $@, while subsequent errors can be obtained by $ftp->message(), I'm striking a small problem.
My code is basically:
while (1) {
    # Wait for cycle start, then get file list into @filelist.
    foreach $file (@filelist) {
        my $ftp = Net::FTP->new ($host);
        if (! $ftp) {
            logError ("Could not connect to host [$host]: $@");
            return;
        }
        # More FTP stuff below with $ftp->message() error checking.
        $ftp->quit();
    }
}
Aside: yes, I know I can probably do this in one FTP session, but there are good reasons for leaving it in separate sessions at the moment.
Now this is being called in a loop, once per file, all going to the same host, but I'm getting a slightly different behaviour on the first attempt in most cycles. The script is a long-running one, with each cycle starting on the hour and half-hour, so it's not some issue with the first-ever attempt after program start, since it happens on cycles other than the first as well.
Now I know that these connections should fail, simply because the machines I'm trying to access are not available on my development network.
The trouble is that the errors coming out in the log file are:
E 2012-02-05 18:00:13 Could not connect to host [example.com]:
E 2012-02-05 18:00:13 Could not connect to host [example.com]:
Net::FTP: connect: Connection refused
E 2012-02-05 18:00:14 Could not connect to host [example.com]:
Net::FTP: connect: Connection refused
As you can see, the $@ variable seems not to be populated for the first file of the cycle. I've edited this question slightly since I've just noticed the latest cycle had all three lines with the error message. Going back over the logs with the command:
grep refused logfile | awk '{print substr($3,1,5)}' | uniq -c
to get the dates and counts, turns up the following statistics:
3 11:00
3 11:30
3 12:00
3 12:30
3 13:00
3 13:30
2 14:00
3 14:30
3 15:00
3 15:30
3 16:00
2 16:30
2 17:00
2 17:30
2 18:00
2 18:30
2 19:00
3 19:30
indicating that some have the correct count of error messages but not all.
I'm wondering if anyone knows why this may be the case.
Try upgrading. http://cpansearch.perl.org/src/GBARR/libnet-1.22_01/Changes says:
libnet 1.22_01 -- Mon May 31 09:40:25 CDT 2010
* Set $@ when ->new returns undef
If you're using a version of libnet prior to 1.22_01, there was a small bug in the new function with regard to responses that didn't start with a code.
For example, FTP.pm 2.77 which is from libnet 1.21 has the following snippet:
unless ($ftp->response() == CMD_OK) {
    $ftp->close();
    $@ = $ftp->message;
    undef $ftp;
}
With FTP.pm 2.77_2 from libnet 1.22_01, this is changed to:
unless ($ftp->response() == CMD_OK) {
    $ftp->close();
    # keep $@ if no message. Happens, when response did not start with a code.
    $@ = $ftp->message || $@;
    undef $ftp;
}
Is there anything going on between the ->new call and the printing of $@? Anything that runs in between can overwrite the value of $@, so if necessary, store the value for later use:
my $ftp = Net::FTP->new ($host);
my $potential_error = $@;
$whatever_that->can_call(eval => 'inside');
if (! $ftp) {
    logError ("Could not connect to host [$host]: $potential_error");
}