Matching packet content in a specific order with Suricata?

I'm attempting to create a Suricata rule that will match a packet if and only if all content is found and in a specific order.
The problem with my current rule is that it will match even if the packet content is test2 test1.
Is there a way to achieve this functionality without using pcre?
alert tcp $HOME_NET any -> $EXTERNAL_NET [80,443] (msg:"Test Rule"; flow:established,to_server; content:"test1"; fast_pattern; content:"test2"; distance:0; classtype:web-application-activity; sid:5182976; rev:2;)

I figured out that the method I was using to test the Suricata signatures was duplicating the tested data at some point, causing the signature to always fire.
To answer my own question, content order can be enforced by adding a distance modifier after each content match that follows the first.
As seen in:
content:"one"; content:"two"; distance:0; content:"three"; distance:0; . . .
As far as I can tell, the fast_pattern keyword can be omitted.
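For example, extending that pattern into a complete rule (an untested sketch; the msg, sid, and the test3 content are placeholders, the rest is taken from the rule in the question):
alert tcp $HOME_NET any -> $EXTERNAL_NET [80,443] (msg:"Ordered content test"; flow:established,to_server; content:"test1"; content:"test2"; distance:0; content:"test3"; distance:0; classtype:web-application-activity; sid:5182977; rev:1;)
With distance:0 on each subsequent content, every match must begin at or after the end of the previous match, so the contents can only match in the listed order.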

How do I reduce the number of 301 redirect entries using wildcards and variables in Squarespace?

I recently renamed all of the URLs that make up my blog... and have written redirects for almost every page... using wildcards where I can... keeping in mind... all that I know is the * wildcard at this time...
Here is an example of what I have...
/season-1/2017/1/1/snl-s01e01-host-george-carlin -> /season-1/snl-s01e01-george-carlin 301
I want to write a catch-all that will redirect all 38 seasons of reviews with one redirect entry... but I can't figure out how to get rid of just the word "host" between s01e01- and -george-carlin... and was thinking it would work something like this...
/season-*/*/*/*/snl-s*e*-host-*-* -> /season-*/snl-s*e*[code to remove the word "host"]-*-* 301
Is that even close to being correct? Do I need that many *s?
Thanks in advance for any help...
Unfortunately, you won't be able to reduce the number of individual redirect entries using the redirect features that Squarespace has to offer, namely the wildcard (*) and a single variable ([name]). Multiple variables would be needed, but only [name] is supported.
The closest you can get is:
/season-1/*/*/*/snl-s01e01-host-[name] -> /season-1/snl-s01e01-[name] 301
But, if I'm understanding things, while the above redirect appears more general, it would still need to be copy/pasted for each post individually. So although it demonstrates the best that could be achieved, it is not a technical improvement.
Therefore, there are only two alternatives:
Create a Google Sheet (or other spreadsheet) where the old URLs are copy/pasted in column one, a formula using arrayformula and regular expressions to parse the old URL and generate the new URL is added in column two, and in column three a formula is written to join the two cells with -> and 301. With that done, you could click, drag and highlight all cells in column 3, copy, and paste them in the "URL Shortcuts" text area in Squarespace.
It can be quite time consuming to figure out, write and test the correct formula, but it does avoid having to manually type out every redirect. Whether it is less time/effort in total depends on the number of redirects and one's proficiency with writing spreadsheet formulas. It could be that using the redirect code above would simplify the formula that'd need to be written in the spreadsheet, which may save some time.
Another alternative would be to remove your redirects and instead handle the redirect via JavaScript added to the 404/Page-not-found page. Because it sounds like you already have all of the redirects in place but are simply trying to reduce the overall number, I wouldn't recommend changing to a JavaScript-based approach. There are other drawbacks to using JavaScript, in any case.
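To make the first alternative a little more concrete, here is a rough, untested sketch of the kind of column-two formula that could generate the new URLs, assuming the old URLs are in column A starting at row 2 and follow the /season-N/YYYY/M/D/snl-sXXeYY-host-... pattern from the question (Google Sheets syntax):
=ARRAYFORMULA(IF(A2:A="", "", REGEXREPLACE(A2:A, "^(/season-\d+)/\d+/\d+/\d+/(snl-s\d+e\d+)-host-(.*)$", "$1/$2-$3")))
Column three would then simply concatenate the old URL, ->, the new URL, and 301 into one redirect line per row.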

What if there exists no matched rule in a Lex program because of REJECT?

I'm currently reading the documentation on Lex written by Lesk and Schmidt, and I'm confused by the REJECT action.
Consider the two rules
a[bc]+ { ... ; REJECT;}
a[cd]+ { ... ; REJECT;}
Input:
ab
Only the first rule matches. Here is what the paper says about REJECT:
The action REJECT means "go do the next alternative." It causes whatever rule was second choice after the current rule to be executed.
However, there is no second choice here, so will there be an error?
There are really very few use cases for REJECT; I don't think I've ever seen an instance of it in use other than in examples.
Anyway, unless you specify %option nodefault (or the -s command-line flag), flex will add a default fallback action to your ruleset, equivalent to
.|\n ECHO;
In your case, that pattern will match after the REJECT.
However, it is possible to override the default action; for example, you could add the rule:
.|\n REJECT;
In that case, flex really will not have an alternative after the two REJECTs, and it will print an error message on stderr ("flex scanner jammed") and then call exit.
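As a concrete illustration of that default-rule behaviour, here is a minimal flex specification (the printf messages are placeholders added for illustration):
%option noyywrap
%%
a[bc]+  { printf("rule 1\n"); REJECT; }
a[cd]+  { printf("rule 2\n"); REJECT; }
.|\n    { ECHO; /* an explicit copy of the default fallback rule */ }
%%
int main(void) { return yylex(); }
On the input ab, rule 1 fires and prints its message, the REJECT falls through to the fallback rule, and the characters a and b are simply echoed. Replace that ECHO with REJECT and you get the "flex scanner jammed" error instead.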

How to filter by both text and property in Chrome DevTools' network panel?

I want to filter Chrome DevTools' network panel by the method property and by text in the URL. For example, I want to search for the text chromequestion in the URL while showing only HTTP GET requests (ignoring PUT, POST, DELETE, etc.).
I am able to filter by text or by method on its own.
I am not able to combine the two to search by both text and method.
I read the documentation at https://developers.google.com/web/tools/chrome-devtools/network-performance/reference#filters and I am able to filter by multiple properties (e.g., domain:*.com method:GET). However, I am unable to filter by text and a property together (e.g., method:GET chromequestion).
Unfortunately, it's not possible to do this currently. I played around in DevTools originally, but couldn't find a way. I later had a look into how the filtering was implemented, and can confirm there's a limitation preventing you from mixing the pre-defined filters and text filters.
Implementation details
This is a bit long, but I thought it might be interesting for some to see how it's implemented. I will probably look into improving the implementation, either myself or by logging an issue, because the current behaviour is limited.
There's a _parseFilterQuery function that parses the input field and categorises the entries into two arrays. The first is called filters, and it holds the pre-defined filtering options, such as method:GET. The second is an array of text filters, split up by spaces. The parser distinguishes the two fairly naively, by checking for the occurrence of : and for a leading - (for negation).
Scenario 1
You only input a pre-defined filter, or multiple filters. For each filter, the specific filter function, which looks at the relevant property of the request object, is pushed to a network module filters array (this._filters). Later on, each filter function is called on each request and returns true for a match, otherwise false; this determines whether the request is shown. ALL filters must return true for the row to show.
Scenario 2
This is the interesting one, where you input both a pre-defined filter and a bit of text. This covers the Stack Overflow question. The _parseFilterQuery function looks at the text filters first, before the pre-defined ones. In Scenario 1, the text array was empty, so it was skipped.
We pass each text word to _createTextFilter, and push each of the resulting filters to the network module filters array. However, the implementation of this is questionable. The only time the actual word passed in is used is to check whether it's a negation filter for a bit of text. If the first character is -, it means the user doesn't want to see any request whose name contains the following word. For example, -icon means don't show any request with that word in its name/path. If there is no negation, it simply returns the WHOLE input text as a regular expression, NOT just the word passed in. In my case, it returns /method:GET icon/i.
The pre-defined filters are looked at next. In this case, method:GET is pushed.
Finally, it loops over the requests, calling each filter on them. However, since the first filter is /method:GET icon/i, it makes ALL other filters redundant, because it will NEVER pass: the text filters only apply to the name and path, so a text filter containing method:GET can never match.
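A simplified sketch of the parsing behaviour described above (illustrative code only, not the actual DevTools source; the function shape and property names are assumptions):
function parseFilterQuery(query) {
  const filters = [];
  const words = query.split(/\s+/).filter(Boolean);
  // Words without ":" are treated as plain text filters.
  const textWords = words.filter(w => !w.includes(':'));
  if (textWords.length) {
    // The quirk described above: the WHOLE query becomes the regex,
    // so "method:GET icon" turns into /method:GET icon/i.
    const negated = textWords[0].startsWith('-');
    const regex = new RegExp(query.replace(/^-/, ''), 'i');
    filters.push(request => negated ? !regex.test(request.name) : regex.test(request.name));
  }
  // Words with ":" become pre-defined filters such as method:GET.
  for (const word of words) {
    const [key, value] = word.split(':');
    if (key === 'method' && value) {
      filters.push(request => request.method === value);
    }
  }
  // A row is shown only if every filter passes, so the bogus regex
  // filter above guarantees no row will ever match.
  return request => filters.every(f => f(request));
}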

Treeline.io sanitize inputs

I have just started investigating the treeline.io beta, and I could not find any way in the existing machinepacks to do the job (sanitizing user inputs). I'm wondering if I can do it in any way, ideally within Treeline.
Treeline automatically does type-checking on all incoming request parameters. If you create a route POST /foo with parameter age and give it 123 as an example, it will automatically display an error message if you try to post to /foo with age set to abc, because it's not a number.
As far as more complex validation, you can certainly do it in Treeline--just add more machines to the beginning of your route. The if machine works well for simple tasks; for example, to ensure that age is < 150, you can use if and set the left-hand value to the age parameter, the right-hand value to 150, and the comparison to "<". For more custom validations you can create your own machine using the built-in editor and add pass and fail exits like the if machine has!
The schema-inspector machinepack allows you to sanitize and validate the inputs in Treeline: machinepack-schemainspector
Here is a screenshot of how I'm using it in my Treeline project:
The content of the Sanitize element:
The content of the Validate element (using the Sanitize output):
For the next parts, I'm always using the Sanitize output (email trimmed and in lowercase for this example).
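For reference, outside of Treeline the underlying schema-inspector library can be used directly in Node.js to get the same trim-and-lowercase behaviour. This is a rough sketch: the schemas and sample input are made up for illustration, and the machinepack's exact inputs may differ.
var inspector = require('schema-inspector');
// Sanitization schema: trim whitespace and lowercase the email.
var sanitizationSchema = {
  type: 'object',
  properties: {
    email: { type: 'string', rules: ['trim', 'lower'] }
  }
};
// Validation schema: check that the sanitized value looks like an email.
var validationSchema = {
  type: 'object',
  properties: {
    email: { type: 'string', pattern: 'email' }
  }
};
var input = { email: '  Foo@Example.COM ' };
inspector.sanitize(sanitizationSchema, input);    // input.email becomes 'foo@example.com'
var result = inspector.validate(validationSchema, input);
if (!result.valid) {
  console.log(result.format());                   // human-readable validation errors
}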

Fiddler Autoresponder: Regex replacement not working

I have a regex rule and an action that returns a file from a local cache. The rule captures what I want it to, but the problem is $2 in the action is not handled, so Fiddler tries to return D:\path\$2 (and fails). What could be wrong?
Rule:
regex:(?insx).*(host1.com|host2.com)/folder1/folder2/(.*)\?rev=.*
Action:
D:\path\$2
Any help would be appreciated.
P.S. I'm using Fiddler v2.4.8.0
After losing a fair amount of hair over this, I got it working by naming the group in the replacement, like this:
Rule:
regex:(?insx).*(host1.com|host2.com)/folder1/folder2/(?'mygroup'.*)\?rev=.*
Action:
D:\path\${mygroup}
When you're using group replacements like this, it's important to put ^ at the front of the Rule expression and $ at the end.
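For what it's worth, the inline n option in (?insx) enables .NET's explicit-capture mode, which makes unnamed groups like (host1.com|host2.com) non-capturing; that is most likely why $2 was never populated and why a named group works. Applying the anchoring advice above, the final rule would look something like this (untested sketch, same hosts and path as in the question):
Rule:
regex:(?insx)^.*(host1.com|host2.com)/folder1/folder2/(?'mygroup'.*)\?rev=.*$
Action:
D:\path\${mygroup}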