Set up Fail2ban for a specific URL - fail2ban

For shits and giggles I created a small honeypot PHP script. If it is called from a webpage, I want to simply put the caller's IP address in jail.
I created a filter that looks like this:
filename: apache-specific-url.conf
[INCLUDES]
before = apache-common.conf
[Definition]
failregex = ^<HOST> -.*"(GET|POST).*\/sshlogin.php\/.*$
ignoreregex =
I've also put the following into my jail.local
[apache-specific-url]
enabled = true
port = http,https
filter = apache-specific-url
logpath = %(apache_access_log)s
bantime = 48h
maxretry = 1
Fail2ban shows that my jail is running. However, if I access it via domain.com/sshlogin.php or IPaddress/sshlogin.php... the IP never gets banned.
Is my regex the problem?
Is the filter the problem?
Is it that my mother didn't love me as a child?
Any help appreciated.
Tail of the log
111.111.111.111 - - [13/Jan/2021:15:05:16 -0500] "GET /sshlogin.php HTTP/1.1" 200 3548 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.2 Safari/605.1.15"
111.111.111.111 - - [13/Jan/2021:15:05:19 -0500] "GET /sshlogin.php HTTP/1.1" 200 3548 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.2 Safari/605.1.15"
111.111.111.111 - - [13/Jan/2021:15:05:20 -0500] "GET /sshlogin.php HTTP/1.1" 200 3548 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.2 Safari/605.1.15"
111.111.111.111 - - [13/Jan/2021:15:05:25 -0500] "GET /sshlogin.php HTTP/1.1" 200 3548 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.2 Safari/605.1.15"

The regex in your comment above definitely won't produce any hits, because it is missing the most important part, <HOST>, and it also anchors the end of the line immediately after the sshlogin.php part. The regex in your post is wrong only because you've included a redundant slash after the sshlogin.php part; otherwise it would match. However, you'd also need to set a custom date pattern for that specific log, so use the following:
[INCLUDES]
before = apache-common.conf
[Definition]
failregex = ^<HOST> - - \[[^\]]*\] "(GET|POST) /sshlogin\.php
ignoreregex =
datepattern = %%d/%%b/%%Y:%%H:%%M:%%S \-%%f
I changed the failregex to make it more specific and to avoid unnecessary quantifiers, which might get you into trouble.
Be sure to restart fail2ban after the changes.
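Before relying on the live jail, you can test the filter against the actual log with fail2ban-regex (the paths below assume a Debian-style Apache layout and a filter saved under /etc/fail2ban/filter.d; adjust them for your system):
fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/apache-specific-url.conf
The output reports how many log lines matched the failregex, which tells you whether the filter or the jail configuration is the problem.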
(I'm sure your mother loved you as a child btw.)

Related

How to download a large file using the CurlWget extension

I have added the CurlWget extension to my browser and tried to download data using a Jupyter notebook, as below:
!wget --header="Host: storage.googleapis.com" --header="User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 OPR/66.0.3515.44" --header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" --header="Accept-Language: en-US,en;q=0.9" --header="Referer: https://www.kaggle.com/" "https://storage.googleapis.com/kagglesdsdata/competitions/4117/46665/train.7z?GoogleAccessId=web-data#kaggle-161607.iam.gserviceaccount.com&Expires=1580049706&Signature=kowWRCMZZkqsrEqcwFtNJd4nwGgpE9DLbAcJ2b%2BvaGw1Wie82k3K03bhmHpqnhIKPsloHQJRq%2FHpBxv4kSeINAymvKvJXcpffjMqx%2Baujazoqxbl0aAQUhBs27OTKTqSp5Hzfhpz%2FKd%2Fx6SuYUCxy7x%2BAFOjlzQ8se59vJPwEmRNr4%2BSeOepC%2F%2BWJYzgLIcXDFy%2BUjjH1SrnBdAgRiMEa8pPD%2FZxmRma4ggWIWskLEVyuq4oAyVnaXK%2F39GsCo5lr199KqsPsO7BYJxs2hGv%2FlY6n4PirdQpw68dsSrLvfnSbpQckVVRtqjb9uLWDsQqarWfec1INAmHwaa%2B2Db2yQ%3D%3D&response-content-disposition=attachment%3B+filename%3Dtrain.7z" -O "train.7z" -c
But I am getting the error below:
'wget' is not recognized as an internal or external command, operable program or batch file.
I have installed wget using the command below:
pip install wget
This is probably not a satisfying answer, but the answer is: don't do this. CurlWget was flagged as malware and removed from the Chrome Web Store:
https://chrome.google.com/webstore/detail/curlwget/jmocjfidanebdlinpbcdkcmgdifblncg/support?hl=pt-BR&authuser=2

Where can I set the useragent in Strawberry Perl's config?

We have a proxy server here and all internet traffic goes through it. The command cpan package fails with the following error:
LWP failed with code[403] message[Browserblocked]
I think only specific browsers are let through the proxy server, so I need to set the user agent for cpan. Where can I set it? I don't see anything similar in o conf.
Rewriting the code in site\lib\LWP\UserAgent.pm from:
sub _agent { "libwww-perl/$VERSION" }
to, say:
sub _agent { 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:64.0) Gecko/20100101 Firefox/64.0' }
solves the problem, but is this really the official solution?
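For what it's worth, and not as an official cpan setting: when driving LWP::UserAgent from your own code, the module documents an agent method for overriding the default string, so the override can live in your script rather than in a patched UserAgent.pm. A minimal sketch, with the agent string and URL purely illustrative:
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
$ua->agent('Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:64.0) Gecko/20100101 Firefox/64.0');  # spoof a browser UA
$ua->env_proxy;   # pick up http_proxy/https_proxy from the environment

my $response = $ua->get('http://www.cpan.org/');
print $response->status_line, "\n";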

Search after match in dynamic lines [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 4 years ago.
Dears,
I have a file in the following format:
Success|Filter passed|[invalid field]|[invalid field]|Id-350a875b087965e58cbe1f4a
Accept: text/plain, text/plain, application/json, application/*+json, */*, */*
Host: api2.tim.com.br
User-Agent: Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
Via: 1.1
X-Forwarded-For: 144.22.98.123
X-Forwarded-Host:
X-Forwarded-Server:
Success|Success in calling policy shortcut|[invalid field]|[invalid field]|[invalid field]|Id-350a875b087965e58cbe1f4a|Call 'Set Request Message'|GET
Accept: text/plain, text/plain, application/json, application/*+json, */*, */*
Host: api2.tim.com.br
User-Agent: Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
Via: 1.1 api2.tim.com.br
X-Forwarded-For: 144.22.98.123
X-Forwarded-Host:
X-Forwarded-Server:
Content-Type: text/xml; charset="UTF-8"
I need to search for a line that begins with the string "Success" (i.e. matches ^Success) and display all the lines until another line matching ^Success appears.
Here is an example of what I need to display:
Success|Filter passed|[invalid field]|[invalid field]|Id-350a875b087965e58cbe1f4a
Accept: text/plain, text/plain, application/json, application/*+json, */*, */*
Host: api2.tim.com.br
User-Agent: Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
Via: 1.1
X-Forwarded-For: 144.22.98.123
X-Forwarded-Host:
X-Forwarded-Server:
What happens is that the number of lines following the match is very dynamic, and in the same file there may be several such matches; I would need to display all of them when the file is processed.
Could you guys help me?
Perl has a "paragraph mode". You change the input record separator, $/ to read chunks of "multiline" text. This splits up your data on the double newline:
use v5.10;

$/ = "\n\n";      # split the input into chunks separated by a blank line

while ( <> ) {    # read from the files named on the command line (or STDIN)
    chomp;
    say "==========\n$_\n----------\n";
}
Start your program with that and try to do whatever else you're trying to do. In your next question you'll have the small demonstration program you need to get better help.
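If the records in your actual file are not separated by blank lines (the sample above has none), a hedged alternative is to collect lines by hand and start a new block whenever a line matches ^Success; the script name extract_blocks.pl is just a placeholder:
use strict;
use warnings;

my @block;
while ( my $line = <> ) {
    if ( $line =~ /^Success/ and @block ) {
        print @block, "\n";        # print the previous block before starting a new one
        @block = ();
    }
    push @block, $line;
}
print @block, "\n" if @block;      # print the last block
Run it as perl extract_blocks.pl yourfile and every Success-delimited block is printed, separated by a blank line.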

How can I, using Perl, filter out IP addresses from a log file or any other file?

I am trying to find out how I can use Perl to scan through a specific file, find all the IP addresses, and mask them by converting the numbers in the IP address to x's. For example: 194.66.82.11 becomes xxx.xx.xx.11 after the code runs, instead of being removed completely. This is on Unix.
You've stated in the comments that you're working with a log file:
192.168.72.177 - - [22/Dec/2002:23:32:19 -0400] "GET /search.php HTTP/1.1" 400 1997 www.yahoo.com "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; ...)" "-" –
I therefore suggest that you just edit the first field of the log file to achieve your result.
Using a Perl one-liner:
perl -lane '$F[0] =~ s/\d(?=.*\.)/x/g; print "@F"' file.log
Outputs:
xxx.xxx.xx.177 - - [22/Dec/2002:23:32:19 -0400] "GET /search.php HTTP/1.1" 400 1997 www.yahoo.com "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; ...)" "-" –
To match 4 numbers (with a maximum length of 3) separated with a period you can use the following regex:
(?:[0-9]{1,3}\.){3}([0-9]{1,3})
You can access the last capturing group (the first group is non-capturing) with \1 (or $1 in Perl) when replacing, e.g. xxx.xxx.xxx.\1. Note that the result will not have the same number of x's as the original IP had digits; if that's a problem you'd have to tweak the regex.
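A minimal sketch combining the two suggestions, masking every IPv4-looking token anywhere in the line rather than just the first field; the names mask_ips.pl and access.log are placeholders:
use strict;
use warnings;

while ( my $line = <> ) {
    # keep only the last octet; replace the first three with xxx
    $line =~ s/(?:[0-9]{1,3}\.){3}([0-9]{1,3})/xxx.xxx.xxx.$1/g;
    print $line;
}
Run it as perl mask_ips.pl access.log > masked.log.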

Why doesn't rrdtool generate any PNG output in my Perl CGI program?

I'm trying to output an image from RRD Tool using Perl. I've posted the relevant part of the CGI script below:
sub graph
{
    my $rrd_path = $co->param('rrd_path');
    my $RRD_DIR  = "../data/";
    # generate a PNG from the RRD
    my $png_filename = "-";    # a '-' as the filename sends the PNG to stdout
    my $rrd = "$RRD_DIR/$rrd_path";
    my $png = `rrdtool graph $png_filename -a PNG -r -l 0 --base 1024 --start -151200 --vertical-label 'bits per second' --width 500 --height 200 DEF:bytesInPerSec=$rrd:bytesInPerSec:AVERAGE DEF:bytesOutPerSec=$rrd:bytesOutPerSec:AVERAGE CDEF:sbytesInPerSec=bytesInPerSec,8,* CDEF:sbytesOutPerSec=bytesOutPerSec,8,* AREA:sbytesInPerSec#00cf00:AvgIn LINE1:sbytesOutPerSec#002a97:AvgOut VRULE:1246428000#ff0000:`;
    # print the image header
    use bytes;
    print $co->header(-type => "image/png", -Content_length => length($png));
    binmode STDOUT;
    print $png;
}    # end graph
This works fine on the command line (perl graph.cgi > test.png, with the header commented out, of course) as well as on my Ubuntu 10.04 development machine. However, when I move to the CentOS 5 production server, it doesn't, and the browser receives a Content-Length of 0:
Ubuntu 10.04/Apache:
Request URL:http://noc-student.nmsu.edu/grasshopper/web/graph.cgi
Request Method:GET
Status Code:200 OK
Request Headers
Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Cache-Control:max-age=0
User-Agent:Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.36 Safari/534.7
Response Headers
Connection:Keep-Alive
Content-Type:image/png
Content-length:12319
Date:Fri, 08 Oct 2010 21:40:05 GMT
Keep-Alive:timeout=15, max=97
Server:Apache/2.2.14 (Ubuntu)
And from the CentOS 5/Apache server:
Request URL:http://grasshopper-new.nmsu.edu/grasshopper/branches/michael_dev/web/graph.cgi
Request Method:GET
Status Code:200 OK
Request Headers
Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Cache-Control:max-age=0
User-Agent:Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.36 Safari/534.7
Response Headers
Connection:close
Content-Type:image/png
Content-length:0
Date:Fri, 08 Oct 2010 21:40:32 GMT
Server:Apache/2.2.3 (CentOS)
The use bytes and manual setting of the content length are in there to try to fix the problem, but it's the same without them. Same with setting binmode on STDOUT. The script works fine from the command line on both machines.
See my "How can I troubleshoot my Perl CGI program?" answer. Typically, the difference between running your program on the command line and from the web server is a matter of different environments. In this case I'd expect that either rrdtool is not in the PATH or the web server user can't run it.
The backticks only capture standard output. There is probably some standard error output in the web server error log.
Are you sure your web user has access to your data? Try having the CGI write the PNG to the filesystem, so you can make sure it's generated properly. If it is, the problem is in the transmission (headers, encodings, etc.). If not, it's unrelated to the web server, and probably related to permissions.
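As a debugging aid along those lines (not part of the poster's code), one way to surface rrdtool's stderr and exit status while running under the web server is a sketch like this, where $cmd stands in for the full rrdtool command built above:
my $cmd = "rrdtool graph - -a PNG ...";   # stand-in for the full command built above
my $png = `$cmd 2>&1`;                    # temporarily merge stderr into the captured output
if ( $? != 0 ) {
    warn "rrdtool exited with status ", $? >> 8, ": $png";   # ends up in Apache's error log
}
Once the underlying error is visible in the error log, drop the 2>&1 so the PNG data isn't polluted by diagnostics.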