ModSecurity whitelist, multiple conditions - OWASP

I've set up mod_security on my server with the OWASP predefined ModSecurity rules.
However, I'm getting a lot of false positives, so I've started to set up whitelist rules.
I have a false positive on this URL:
http://example.com/fr/share/?u=http%3A%2F%2Fwww.example.com%2Fen%2Ffiles%2Fimgs%2F%3Fpage%3D100%2
with "Multiple URL Encoding Detected","OWASP_CRS/PROTOCOL_VIOLATION/EVASION"
due to the rule:
SecRule ARGS "\%((?!$|\W)|[0-9a-fA-F]{2}|u[0-9a-fA-F]{4})" "phase:2,rev:'2',ver:'OWASP_CRS/2.2.9',maturity:'6',accuracy:'8',t:none,block,msg:'Multiple URL Encoding Detected',id:'1',tag:'OWASP_CRS/PROTOCOL_VIOLATION/EVASION',severity:'4',setvar:'tx.msg=%{rule.msg}',setvar:tx.anomaly_score=+%{tx.warning_anomaly_score},setvar:tx.%{rule.id}-OWASP_CRS/PROTOCOL_VIOLATION/EVASION-%{matched_var_name}=%{matched_var}"
So the main idea for me is to create a rule that still does the check, except for the parameter "u" on URLs starting with /fr/share/?.
I have some hints with:
SecRule ARGS|!ARGS:u ... but how can I combine that with a condition where REQUEST_URI does not match "/fr/share?.*"?

So there are several options here.
You could rewrite the rule, and use chaining, to test for multiple conditions (note I've stripped off some of the rule actions for formatting reasons):
SecRule ARGS "\%((?!$|\W)|[0-9a-fA-F]{2}|u[0-9a-fA-F]{4})" \
"phase:2,rev:'2',ver:'OWASP_CRS/2.2.9',maturity:'6',accuracy:'8', \
t:none,block,msg:'Multiple URL Encoding Detected',id:'1',chain"
SecRule REQUEST_URI "!@beginsWith /fr/share/" "t:none"
The "chain" action means the rule on the next line must also match before the actions are taken, so in this case the rule only fires when the REQUEST_URI does not begin with /fr/share/.
However, this means you have your own copy of this rule, which makes upgrading to future versions of the Core Rule Set more difficult. It's much preferred to leave the original rule in place (which I've looked up and is actually rule id 950109, rather than the rule id 1 you've given, so I presume rule 1 is your copy).
So, to leave the original rule in place but stop it false alerting, you have a few options, detailed below in increasing complexity:
You could disable the whole rule:
SecRuleRemoveById 950109
This should be specified AFTER the rule is defined.
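For example, if the CRS rules are pulled in with an Include directive, a minimal placement sketch (the include path here is illustrative, not taken from your config) would be:
# The removal directive must come after the CRS rules it removes have been loaded
Include /etc/modsecurity/owasp-crs/*.conf
SecRuleRemoveById 950109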
Obviously that's a bit extreme if it's only giving a false positive for one particular URL and parameter combination, and it means you lose the protection that rule gives you for any other URL or parameter.
You could disable that rule for just that 'u' parameter:
SecRuleUpdateTargetById 950109 !ARGS:'u'
This must be specified AFTER the rule it updates is defined, just like SecRuleRemoveById, since it modifies a rule that has already been loaded.
But this will disable the rule for ALL 'u' parameters, and you only want to disable it for this particular call, so it's slightly better but still not what you are looking for.
Therefore the best way is to use the ctl action, on a rule which matches the URL, to alter the original rule for that parameter:
SecRule REQUEST_URI "@beginsWith /fr/share/" \
"t:none,id:1,nolog,pass,ctl:ruleRemoveTargetById=950109;ARGS:u"
An almost identical exclusion to the one you are asking for, for rule 981260, is documented here:
https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#ctl
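For completeness, a rough sketch of how that exclusion is usually deployed: the rule carrying the ctl action has to execute before rule 950109 does, so it is typically given an early phase and placed before the CRS rules are included. The id, phase and include path below are illustrative assumptions, not taken from your configuration:
# Runtime exclusion: remove ARGS:u from rule 950109 only for /fr/share/ requests
SecRule REQUEST_URI "@beginsWith /fr/share/" \
    "phase:1,t:none,id:1000001,nolog,pass,ctl:ruleRemoveTargetById=950109;ARGS:u"
# CRS rules (including 950109) loaded afterwards
Include /etc/modsecurity/owasp-crs/*.conf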

Related

How can I use an Azure Front Door Rules Engine match condition to only match requests to the root of a site?

I'm trying to set up a set of rules on my Azure Front door to redirect all requests to the root of a site to a set of language based subfolders based on the location match of the incoming request.
Doing the Geo-location part is fairly straightforward, but I'm not having much success limiting the requests to only the root of the site - or at least when I try to do so, my rules don't appear to match and I don't get the redirect I'm expecting.
I've tried setting the following conditions:
IF "Request Path" EQUAL "/"
AND IF "Remote address" "Geo Match" "Switzerland, CH"
THEN "Routing Configuration" "Redirect" "307"
Host: Preserve;
Destination Path: Replace: "/de-ch/"
However I don't appear to be getting the redirect when requesting the root of the site from a browser based in Switzerland.
I can't find any actual examples of using the Rules Engine with either Path or URL matching, so I'm wondering if I should be using "Request URL" (and therefore I'll need to put the scheme and host in there, which is less than ideal as the ruleset may be working with multiple front-end hosts), or should what I'm doing work?
The "Request Path" match condition appears to match on the path after the initial /, for example given a request for:
https://www.example.com/folder/page.html
The following values are used in the match conditions:
Request Path: folder/page.html
Request URL: https://www.example.com/folder/page.html
Request File Extension: html
Request Filename: page.html
I therefore had to use the Request URL condition and limit my rules to the specific domain in the request to ensure that we were only matching the root requests.
I have not tried specifying an operator of Not Any yet, although that could also be a solution (we needed more than 25 rules, which is a further limitation, so we ended up using a different solution).
Zhaph said they had not tried the Not Any operator at the time of writing.
I've just used it and I can confirm Not Any works for matching just the root of the domain/subdomain. Definitely takes the hassle out of creating multiple match conditions on Request URL.

ModSecurity: Ignore Array ARGS

I want an exclusion rule for a request to be evaluated at runtime. The body of the request is an array, e.g.
["somestring", "someRandomString",....]
This is the rule I have written:
SecRule REQUEST_URI "@beginsWith /my/url" \
              "phase:2,nolog,pass,id:10000,ctl:ruleRemoveTargetById=942100;ARGS"
However, the array ARGS are not excluded. I have not found anything about this in the online docs. Help would be appreciated.
Make sure this is defined before rule 942100.
ctl actions must be specified before the rules they alter, unlike SecRuleUpdateTargetById which must be specified after, confusingly enough.
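As a sketch of that ordering (the include path is an illustrative assumption), the exclusion rule from the question simply needs to be loaded before the CRS file that defines rule 942100:
# Runtime exclusion, loaded first so it can alter rule 942100 for this URL
SecRule REQUEST_URI "@beginsWith /my/url" \
    "phase:2,nolog,pass,id:10000,ctl:ruleRemoveTargetById=942100;ARGS"
# CRS rules, including 942100, loaded afterwards
Include /etc/modsecurity/owasp-crs/rules/*.conf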

Application Request Routing on local machine

I installed ARR on my local machine and set up a server farm with a single server in it (localhost). I added two redirect routing rules. However, it doesn't do the redirect. My Default Web Site has an additional binding like this one: localhost.mycompany.com. I tried putting that in the server farm and it still didn't work. The redirect rules look like this:
Uses wildcards in the pattern
inbound pattern: */path2/*/*/*/method*
Redirect URL: /path1/path2/api/item/method
EDIT: When I use the Test Pattern screen and enter one of the URLs against my rule, it parses it successfully.
I also tried putting the full hostname (e.g. http://localhost.mycompany.com/...) in the redirect rule, as well as using the alias localServerFarm (which is the name of the server farm). Nothing worked.
The module is "working" in some respect, because when I had a broken rule it certainly told me about it when I tried to load any URL on localhost. Once I fixed the rule I no longer got the error message, but it still doesn't do any redirection.
This was just a matter of getting the redirect rule correct. In the rules list there is a column named Input and its setting is URL Path. So the only input to the pattern match is the path part of the URL, not including the / at the beginning. All I had to do was change the */ at the beginning of my pattern to just *, e.g. */path2/*/*/*/method* changed to *path2/*/*/*/method*.
I don't know if there's any other setting for the Input field (it isn't settable in the rule definition screen), but for anyone creating rules, remember that only the path without a leading / is used for evaluating the pattern match. One note: if you're matching from the beginning of the path, as I am, you don't need the * at the beginning of the pattern. However, if you go into the test pattern screen and paste a full URL into the Input data, it will not just grab the path part of that URL and feed it to the pattern match; it will use the entire string, so it will require a * at the beginning of your pattern to work.
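For reference, a rough sketch of what the corrected rule looks like when written out as a URL Rewrite rule (the rule name here is illustrative; rules created through the ARR/IIS UI end up in applicationHost.config or web.config in a similar shape):
<rewrite>
  <rules>
    <!-- Wildcard pattern has no leading "/" because the Input is the URL path
         without its leading slash -->
    <rule name="RouteToPath1" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*path2/*/*/*/method*" />
      <action type="Redirect" url="/path1/path2/api/item/method" />
    </rule>
  </rules>
</rewrite>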

Allowing certain URLs and denying the rest with robots.txt

I need to allow only some particular directories and deny the rest. It is my understanding that you should allow first, then disallow the rest. Is what I have set up below right?
Allow: /word-lists/words-that-start-with/letter/z/
Allow: /word-lists/words-that-end-with/letter/z/
Disallow: /word-lists/words-that-start-with/letter/
Disallow: /word-lists/words-that-end-with/letter/
Your snippet looks OK, just don't forget to add a User-Agent at the top.
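For example, with a wildcard group that applies to all crawlers, the complete file would look like this:
User-agent: *
Allow: /word-lists/words-that-start-with/letter/z/
Allow: /word-lists/words-that-end-with/letter/z/
Disallow: /word-lists/words-that-start-with/letter/
Disallow: /word-lists/words-that-end-with/letter/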
The order of the allow/disallow keywords doesn't matter currently, but it's up to the client to make the correct choice. See the Order of precedence for group-member records section in our robots.txt documentation.
[...] for allow and disallow directives, the most specific rule based on the length of the [path] entry will trump the less specific (shorter) rule.
The original RFC does state that clients should evaluate rules in the order they're found; however, I don't recall any crawler that actually does that. Instead, they play it safe and follow the most restrictive rule.
To evaluate if access to a URL is allowed, a robot must attempt to match the paths in Allow and Disallow lines against the URL, in the order they occur in the record. The first match found is used. If no match is found, the default assumption is that the URL is allowed.
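As a concrete illustration with the rules above: for a URL such as /word-lists/words-that-start-with/letter/z/foo, the matching Allow path is longer (more specific) than the matching Disallow path, so a length-based matcher such as Googlebot's allows the fetch, while for /word-lists/words-that-start-with/letter/a/foo only the Disallow rule matches and the URL is blocked.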

Snort rule to verify content of an http request doesn't work

I am trying to verify the contents of the http response to find the content "abbb" in it. So my rule was:
alert tcp MY_SERVER HTTP_PORTS -> any any (msg:"The page accessed has content abbb"; to_client; established; content:"abb"; sid:XXXXX; rev:x;)
Unfortunately this rule doesn't seem to work. Can anyone tell me if there is some issue with my rule?
For starters, you need to fix the to_client part of the rule, as this is not valid syntax. You will need to change it to:
flow:to_client,established;
You can find more on flow here.
If you are just looking for the content "abbb" sent from your server to the client, then you just need a simple content match like you have. I recommend using the fast pattern matcher here to improve the efficiency of the rule, so your content match would look something like:
content:"abbb"; fast_pattern:only;
Putting this together, your rule might look something like:
alert tcp MY_SERVER HTTP_PORTS -> any any (msg:"The page accessed has content abbb"; \
    flow:to_client,established; content:"abbb"; fast_pattern:only; sid:XXXXX; rev:x;)
If this still isn't triggering, then there is probably something else going on. Since you are just looking for this in the content, you need to check your inspection depth in the http preprocessor. There is a server_flow_depth and a client_flow_depth; try setting these to 0 (unlimited) and see if your rule triggers after that. For example, if you had a client_flow_depth of 300 and the content "abbb" didn't come until after 500 bytes, then the rule is never going to trigger, because Snort isn't configured to inspect that far into the payload.
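For reference, a minimal sketch of those settings in snort.conf (the ports and remaining options are illustrative; 0 means unlimited inspection depth):
preprocessor http_inspect_server: server default \
    profile all ports { 80 8080 } \
    server_flow_depth 0 \
    client_flow_depth 0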
If you have adaptive profiling enabled, then you need to add the metadata service for http, otherwise the rule won't match http traffic. This would look something like:
metadata:service http;
If you don't use adaptive profiling then it will use the ports in the rule header.
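Putting the adaptive-profiling case together, the full rule would look something like this (sid and rev placeholders kept from your rule):
alert tcp MY_SERVER HTTP_PORTS -> any any (msg:"The page accessed has content abbb"; \
    flow:to_client,established; content:"abbb"; fast_pattern:only; \
    metadata:service http; sid:XXXXX; rev:x;)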