Snort rule to verify content of an HTTP response doesn't work

I am trying to inspect the contents of an HTTP response to find the content "abbb" in it. So my rule was:
alert tcp $MY_SERVER $HTTP_PORTS -> any any (msg:"The page accessed has content abbb"; to_client; established; content:"abbb"; sid:XXXXX; rev:x;)
Unfortunately this rule does not seem to work. Can anyone please tell me if there is some issue with my rule?

For starters, you need to fix the to_client part of the rule, as this is not valid syntax. You will need to change it to:
flow:to_client,established;
You can find more on the flow option in the Snort manual.
If you are just looking for the content "abbb" sent from your server to the client then you just need a simple content match like you have. I recommend using the fast pattern matcher here to improve the efficiency of the rule. So your content match would look something like:
content:"abbb"; fast_pattern:only;
Putting this together, your rule might look something like:
alert tcp $MY_SERVER $HTTP_PORTS -> any any (msg:"The page accessed has content abbb"; \
    flow:to_client,established; content:"abbb"; fast_pattern:only; sid:XXXXX; rev:x;)
If this still isn't triggering, then there is probably something else going on. Since you are looking for this content in the response body, you need to check your inspection depth in the HTTP preprocessor. There is a server_flow_depth and a client_flow_depth; for response data it is server_flow_depth that applies. Try setting these to 0 (unlimited) and see if your rule triggers afterwards. For example, if you had a server_flow_depth of 300 and the content "abbb" didn't appear until after 500 bytes, the rule would never trigger because Snort isn't configured to inspect that far into the payload.
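For reference, a minimal sketch of the relevant snort.conf lines (the ports and profile are placeholders; adjust them to your setup):

preprocessor http_inspect_server: server default \
    profile all ports { 80 8080 } \
    server_flow_depth 0 \
    client_flow_depth 0

Setting both depths to 0 tells the preprocessor to inspect the entire stream in each direction.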
If you have adaptive profiling enabled, then you need to add the metadata service for HTTP, otherwise the rule won't match HTTP traffic. This would look something like:
metadata:service http;
If you don't use adaptive profiling, Snort will use the ports in the rule header instead.
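Putting all of the above together, including the metadata option, the rule might end up as (sid and rev left as the question's placeholders):

alert tcp $MY_SERVER $HTTP_PORTS -> any any (msg:"The page accessed has content abbb"; \
    flow:to_client,established; content:"abbb"; fast_pattern:only; \
    metadata:service http; sid:XXXXX; rev:x;)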

Istio - Dynamic request routing based on header-values

For our QA environment we need to configure a special kind of routing for incoming (Ingress) as well as outgoing (Egress) requests. For outgoing requests, the rule should evaluate a header value with a regex, capture a value from the header, and build from that value the URL the request should be redirected to. The value in the header changes dynamically, so the redirect URL cannot be hardcoded.
For example, if an outgoing request goes to services-master.anydomain.com but there's a header forwarded-for-feature with the value verbu-1234, the request should be redirected to services-verbu-1234.anydomain.com.
For incoming requests it's a similar condition: if the origin points to webapp-verbu-1234.anydomain.com but the request goes to services-master.anydomain.com, the regex should extract verbu-1234 from the origin domain and replace master in the URL with the extracted value.
I know that it's possible to use a regex to match header values, but I'm not sure whether it's possible to use captured values from a match to influence the target URL; at least I couldn't find that in the documentation.
I don't think this is possible.
But if your QA system knows the features available, and you need to do this in Istio, you might try creating a VirtualService for each feature; multiple VirtualServices for the same host are merged by Istio.
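As a rough sketch of that per-feature approach (the hosts and the header name are taken from the question; this assumes your tooling can generate one such resource per known feature branch):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: services-verbu-1234
spec:
  hosts:
  - services-master.anydomain.com
  http:
  - match:
    - headers:
        forwarded-for-feature:
          exact: verbu-1234
    route:
    - destination:
        host: services-verbu-1234.anydomain.com

A CI job could template one such VirtualService per feature branch; requests without a matching header fall through to whatever other (merged) rules exist for the host.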

CalDAV protocol synchronization and behavior of different clients

I am currently trying to implement a "simple" read-only CalDAV interface for a system, but the synchronization protocol and the CalDAV clients are giving me some headaches.
The main test client I use is the macOS Calendar (Sierra).
The initial handshake (DAV principal, calendar lookup) and the initial load of data are working. I get some REPORT:calendar-query requests.
The issue is the incremental sync after initial load. There are two approaches:
Via the WebDAV-Sync extension (REPORT:sync-collection and the sync-token property)
My main issue here is that provisioning the sync-token from the server is not trivial in my system. Changes and new data are not an issue, but physical deletions (not yet logged in the user context) and changes in the scope of group and/or role assignments are. Maybe I need to consider invalidating the sync-token in complex cases and letting the client set itself up again without sync-collection?
A nasty workaround could be to retain the calendar item IDs sent to the client, check on each request for their existence, and respond if necessary with a "not found" for each deleted or out-of-scope calendar item. But this would mean storing client state on the server, which doesn't sound right and might be error-prone.
Via basic protocol synchronization (responding to REPORT:calendar-query and PROPFIND (Depth: 1) requests; no WebDAV-Sync active)
This is also already working in principle for new and changed data. But the macOS Calendar doesn't remove items which are not part of the collection response (PROPFIND with Depth: 1). According to the protocol, the client should determine the deleted items and remove them, but it doesn't do so in my case. Any ideas here?
For my system it would currently be ideal to use this approach, though the performance might not be ideal.
With the iOS Calendar I face another issue:
The initial handshake is working insofar as the requests are coming in over the network and being answered.
But then a MKCALENDAR request comes in (instead of a calendar-query or PROPFIND for items), which I answer with 403, as I also don't advertise it in the Allow header of the OPTIONS response. The request looks like this:
MKCALENDAR /services/cal/_userid/220EDB4A-F00C-41C9-B78F-10781BBA77E4/ HTTP/1.1
Host: 127.0.0.1:8003
Content-Type: text/xml
User-Agent: iOS/10.0.1 (14A403) dataaccessd/1.0
<?xml version="1.0" encoding="UTF-8"?>
<B:mkcalendar xmlns:B="urn:ietf:params:xml:ns:caldav">
  <A:set xmlns:A="DAV:">
    <A:prop>
      <B:calendar-free-busy-set>
        <NO/>
      </B:calendar-free-busy-set>
      <D:calendar-order xmlns:D="http://apple.com/ns/ical/">1</D:calendar-order>
      <A:displayname>Kalender</A:displayname>
      <B:calendar-timezone>BEGIN:VCALENDAR
 ...deleted....
</B:calendar-timezone>
      <B:supported-calendar-component-set>
        <B:comp name="VEVENT"/>
      </B:supported-calendar-component-set>
    </A:prop>
  </A:set>
</B:mkcalendar>
Nothing happens afterwards.
Is anyone experiencing this as well? Why does the iOS Calendar try a MKCALENDAR even though I provide a calendar collection as the resource type?
With Thunderbird Lightning:
The initial handshake with the calendar collection is working.
A PROPFIND and multiget request for items is answered with iCal items.
But they are not displayed, and in the error log I receive:
Warnung: CalDAV: Get failed: CalDAV: Error: got status 200 fetching calendar data for Debug Proxy, null
(translated from German; error code: 0x80004005) Warning: Error while reading data for calendar: Debug Proxy. However, this error is probably negligible, so the program will try to continue. Error code: 0x80004005. Description: CalDAV: Error: got status 200 fetching calendar data for Debug Proxy, null
(translated from German; error code: READ_FAILED) Warning: Error while reading data for calendar: Debug Proxy. However, this error is probably negligible, so the program will try to continue. Error code: READ_FAILED. Description:
http channel Listener OnDataAvailable contract violation
A similar response is working in the macOS Calendar, though – could it be some encoding issue?
Any hints are highly appreciated!
This is indeed a pretty broad question. But let me try to address some stuff:
Via the WebDAV-Sync extension (REPORT:sync-collection and the sync-token property): my main issue here is that provisioning the sync-token from the server is not trivial in my system
Even if it is hard for you, you should really try to come up with something here. Even if this means storing some extra info on the server. Sync-collection is way more efficient.
(Idea: Maybe you can at least set a flag when something actually got deleted and only then expire the sync-token?)
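If you do expire a token, RFC 6578 gives you a clean way to signal it: answer a sync-collection REPORT that carries the stale token with a DAV:valid-sync-token precondition failure, which should make a well-behaved client fall back to a full reload. Roughly (a sketch, not taken from the question's server):

HTTP/1.1 403 Forbidden
Content-Type: application/xml; charset=utf-8

<?xml version="1.0" encoding="utf-8"?>
<D:error xmlns:D="DAV:">
  <D:valid-sync-token/>
</D:error>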
Via basic protocol synchronization (responding to REPORT:calendar-query and PROPFIND (Depth: 1))
Which one, calendar-range-query or PROPFIND? Completely different things ...
this is also working already in principle for new and changed data. But the macOS Calendar doesn't remove items which are not part of the collection response (PROPFIND with Depth: 1).
If we are talking about a calendar-range-query, the client cannot proactively delete items since it doesn't know whether they just left the range (vs being deleted).
With PROPFIND it should do this. If you have proof it doesn't, maybe create another question with all the relevant details.
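For context, the enumeration request usually looks roughly like the one below (the path is a placeholder); the client is expected to diff the returned hrefs against its local cache and delete anything that no longer appears:

PROPFIND /services/cal/_userid/calendar/ HTTP/1.1
Host: 127.0.0.1:8003
Depth: 1
Content-Type: application/xml

<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:">
  <D:prop>
    <D:getetag/>
    <D:resourcetype/>
  </D:prop>
</D:propfind>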
With the iOS Calendar I face another issue: ... a MKCALENDAR request is coming ...
This probably means that it can't find the default scheduling calendar: no calendar at all, or none with a proper component-type property. Or the same for todos (the Reminders app, same account). What is the payload of the MKCALENDAR?
Hard to diagnose without details; if you can't figure it out, ask a specific question on this with all the relevant details included (e.g. the XML you send in response to the home query).
Thunderbird Lightning
Can't say much about this, probably depends a lot on the version and what extensions you are using. AFAIK many people use the ScalableOGo Thunderbird extensions to get proper Cal/CardDAV with Thunderbird.
For Thunderbird/Lightning you may want to turn on calendar.debug.log and calendar.debug.log.verbose in the advanced config editor and restart. You can find it in Options > Advanced > General > Config Editor. This will get you more detailed http requests and information about what failed. You can also hook up the remote debugger and look at the network monitor, or set breakpoints in the code.
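For reference, those two switches are ordinary boolean preferences; assuming the names are unchanged in your version, they end up in prefs.js as:

user_pref("calendar.debug.log", true);
user_pref("calendar.debug.log.verbose", true);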
With Thunderbird/Lightning, please note that we are using a mix of previous and current versions of the webdav-sync draft. I can't say much from the error message as is, given it is very general, but it does look like there is something unexpected in the results.
Maybe it makes sense to compare the handshake between an existing server (like sabre/dav) and the client, then see where the difference between your communication and theirs is.
Also, you may be interested in the CalDAVTester from Apple, which checks server interoperability. Note however that it does contain various apple specific tests. The folks at CalConnect are working together with Apple to make it more generally usable and to split out the Apple-specific tests. Given your server is read-only, don't expect everything to work, but you can hunt for fixing specific tests.

Ensure Completeness of HTTP Messages

I am currently working on an application that is supposed to get a web page and extract information from its content.
As I learned from my research (or so it seems to me, at least), there is no ideal way to determine the end of an HTTP message.
Generally, I found two different ways to do so:
Set the O_NONBLOCK flag on the socket and fetch data with recv() in a while loop. Assume that the message is complete and break as soon as recv() reports that there are no more bytes in the stream.
Rely on the HTTP Content-Length header and determine the end of the message with it.
Neither way seems completely safe to me. Solution (1) could break out of the recv loop before the message is complete. Solution (2), on the other hand, requires the Content-Length header to be set correctly.
What's the best way to proceed in this case? Can I always rely on the Content-Length header to be set?
Let me start here:
Can I always rely on the Content-Length header to be set?
No, you can't. Content-Length is an optional header. However, HTTP messages absolutely must feature a way to determine their body length if they are to be RFC-compliant (cf. RFC 7230, sec. 3.3.3). That being said, be ready to parse chunked encoding whenever a content length isn't specified.
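For illustration, a chunked-encoded response marks the end of its body with a zero-length chunk; sizes are hexadecimal, and every size line and chunk is terminated by CRLF:

HTTP/1.1 200 OK
Transfer-Encoding: chunked

4
Wiki
5
pedia
0

The body here decodes to "Wikipedia".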
As for your original problem: ensuring the completeness of a message is actually something that should be TCP's job. But as there are such complicated things as message pipelining around, it is best to check for two things in practice:
Have all reads from the network buffer been successful?
Is the number of received bytes identical to the predicted message length?
Oh, and as Martin James noted, non-blocking probably isn't the best idea here.
The end of an HTTP response is defined:
By the final (empty) chunk, in case Transfer-Encoding: chunked is used.
By reaching the given length, if a Content-Length header is given and no chunked transfer encoding is used.
By the end of the TCP connection, if neither chunked transfer encoding is used nor Content-Length is given.
In the first two cases you have a well-defined end, so you can verify that the data was fully received. Only in the last case (end of TCP connection) do you not know whether the connection was closed before all the data was sent. But usually you get either case 1 or case 2.
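To make the three cases concrete, here is a minimal C sketch of the decision, assuming the response head has already been read into a buffer, with deliberately simplified header parsing (note that chunked transfer encoding takes precedence over Content-Length, which the flag-then-decide order below respects):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>  /* strncasecmp (POSIX) */

enum framing { CHUNKED, CONTENT_LENGTH, READ_TO_CLOSE };

/* headers: the response head, one field per line, up to the blank line. */
static enum framing body_framing(const char *headers, long *out_len)
{
    int chunked = 0, have_len = 0;
    long len = 0;
    char line[512];

    for (const char *p = headers; p && *p; ) {
        const char *eol = strchr(p, '\n');
        size_t n = eol ? (size_t)(eol - p) : strlen(p);
        if (n >= sizeof line) n = sizeof line - 1;
        memcpy(line, p, n);
        line[n] = '\0';

        if (strncasecmp(line, "Transfer-Encoding:", 18) == 0 && strstr(line, "chunked"))
            chunked = 1;                        /* case 1 */
        else if (strncasecmp(line, "Content-Length:", 15) == 0) {
            len = strtol(line + 15, NULL, 10);  /* case 2 */
            have_len = 1;
        }
        p = eol ? eol + 1 : NULL;
    }

    if (chunked) return CHUNKED;                /* chunked wins over Content-Length */
    if (have_len) { *out_len = len; return CONTENT_LENGTH; }
    return READ_TO_CLOSE;                       /* case 3: read until EOF */
}

int main(void)
{
    long len = 0;
    const char *head = "HTTP/1.1 200 OK\nContent-Type: text/html\nContent-Length: 1270\n";

    switch (body_framing(head, &len)) {
    case CHUNKED:        puts("read chunks until the 0-length chunk");      break;
    case CONTENT_LENGTH: printf("read exactly %ld body bytes\n", len);      break;
    case READ_TO_CLOSE:  puts("read until the peer closes the connection"); break;
    }
    return 0;
}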
To make your life easier, you might want to provide a
Connection: close
header when making the HTTP request; the web server will then close the connection after giving you the full page requested, and you will not have to deal with chunks.
This is only a viable option if you are interested in this single page only and will not request additional resources (script files, images, etc.); in the latter case, it would be a very inefficient solution for both your app and the server.
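For illustration, such a request might look like this (host and path are placeholders):

GET /page.html HTTP/1.1
Host: www.example.com
Connection: close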

How to redirect to a default hostname

I want something like:
"http://www.anyhostname.com" ==> "http://192.168.0.1"
i.e. I want to redirect any request other than to "192.168.0.1" to "http://192.168.0.1".
I am using Lighttpd as my webserver and dnsmasq as my DNS server.
I have to wonder if you're doing transparent proxying -- if so, there may be better mechanisms to accomplish what you want to do than literally doing what you outlined as your goal.
But if you want to keep going down this route, I think you can use lighttpd's mod_evhost facility to easily use a default site configuration:
General Example:
server.document-root = "/home/user/sites/default/site"
evhost.path-pattern = "/home/user/sites/%0/site/"
If example.org is requested, and /home/user/sites/example.org/site/ is found, that path becomes the docroot.
If example.net is requested but no directory named /home/user/sites/example.net/site/ exists, then the docroot remains /home/user/sites/default/site.
If you have specific hostnames that you want to handle, you can add them to /etc/hosts and your dnsmasq will serve them. This would work if you had a few hundred hosts/domains that you wanted to handle, but if you wanted to handle everything, then dnsmasq may not be the right tool.
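For a handful of hosts, these are ordinary /etc/hosts entries that dnsmasq picks up automatically (the hostnames below are examples):

192.168.0.1   www.anyhostname.com
192.168.0.1   portal.example.lan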
I know that PowerDNS's PipeBackend can be used to give the same answer regardless of the DNS question; this way, you could intercept some or all requests and handle them specially, answering 192.168.0.1 for every request, for some requests, or whatever you can program.
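A rough sketch of such a pipe backend in C, written against ABI version 1 from memory (the exact handshake and field order should be checked against the PowerDNS PipeBackend documentation before relying on this):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[1024];

    /* PowerDNS opens with "HELO\t1"; the backend must acknowledge. */
    if (!fgets(line, sizeof line, stdin))
        return 1;
    printf("OK\tcatch-all backend\n");
    fflush(stdout);

    /* Queries arrive as: Q \t qname \t qclass \t qtype \t id \t remote-ip */
    while (fgets(line, sizeof line, stdin)) {
        char qname[256], qclass[16], qtype[16], id[16];
        if (sscanf(line, "Q\t%255s %15s %15s %15s", qname, qclass, qtype, id) == 4
            && (strcmp(qtype, "A") == 0 || strcmp(qtype, "ANY") == 0)) {
            /* Answer 192.168.0.1 for every name asked about. */
            printf("DATA\t%s\t%s\tA\t3600\t%s\t192.168.0.1\n", qname, qclass, id);
        }
        printf("END\n");
        fflush(stdout);
    }
    return 0;
}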
Okay, I solved the problem. Posting the solution back here in the hope that it helps somebody in the future...
I solved it by adding the following to my lighttpd.conf file:
$HTTP["host"] !~ "mydesiredhostname\.com" {
url.redirect = (
"" => "http://192.168.0.1/"
)
}
Thank you everybody for your time. Cheers!

How to "respond with" HTTP files using Fiddler's AutoResponder?

Is there a way to have Fiddler use an HTTP-delivered file as the response when using the AutoResponder?
I have an AutoResponder rule set up similar to this:
If URI matches...
http://www.liveserver.com/scripts/javascript_file.js
then respond with...
http://internal-dev-server.ext/scripts/javascript_file.js
so that I can QA a different JavaScript before publishing it live.
But responses via HTTP always return a 404 error. Specifically:
Fiddler - The file C:\Users\me\Documents\Fiddler2\Captures\Responses\http://internal-dev-server.ext/scripts/javascript_file.js was not found.
I get the same thing even if I start the request and response with "EXACT:"
This is supported in v2.3.2.5, currently in alpha form, at https://www.fiddler2.com/dl/fiddler2alphasetup.exe
You can either use the HTTP/HTTPS URL directly, or you can use syntax like:
*redir:http://othersite.com/whatever
The advantage of the REDIR syntax is that the server will return a 307 response so that the new outbound request bears the cookies and other information for "othersite.com" instead of "originalsite.com". This may or may not be desirable.
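Applied to the rule in the question, the match and response fields would then look something like:

EXACT:http://www.liveserver.com/scripts/javascript_file.js
*redir:http://internal-dev-server.ext/scripts/javascript_file.js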
Alternatively, the HOSTS option on the Tools menu is one way to achieve this; another is to type urlreplace partialurlstring1 newpartialurlstring2 in the QuickExec box under the session list. Or write a rule inside Rules > Customize Rules. See www.fiddler2.com/fiddler/dev/scriptsamples.asp for examples.