I'm about to build an automatic intrusion detection system (IDS) behind my FritzBox router in my home LAN.
I'm using a Raspberry Pi with Raspbian Jessie, but any distro would be OK.
After some searching and tryouts I found ntop (ntopng to be honest, but I guess my question applies to any version).
ntop can capture network traffic on its own, but that's not what I want, because I want to get all the traffic without putting the Pi between the devices or letting it act as a gateway (for performance reasons). Fortunately my FritzBox OS has a function to simulate a mirror port: you can download a .pcap which is continuously written in real time. I do it with a script from this link.
The problem is that I can't pipe the wget download to ntop the way I can with e.g. tshark.
I'm looking for:
wget -O - http://fritz.box/never_ending.pcap | ntopng -f -
While this works fine:
wget -O - http://fritz.box/never_ending.pcap | tshark -i -
Suggestions for other analysis software are OK (if pretty enough ;) ), but I want to use the FritzBox pcap thing...
Thanks for saving another day of mine :)
Edit:
So I've come up with these approaches:
Make chunks of pcaps and run a script to analyse one pcap after another. Problem: ntop does not merge the results, and I could get a storage problem if traffic runs hot.
Pipe wget to tshark and overwrite one pcap every time, then analyse it with ntop (see the sketch after this list). Problem: again, the storage.
Pipe wget to tshark, cut some information out, and store it in a database. Problem: which info should I store, and what program likes databases more than pcaps?
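For illustration, here is a rough sketch of the second approach with bounded storage (the file name and the 60-second window are my own placeholders, and it does not solve the merging problem). Treat it as pseudocode for the flow rather than a drop-in script:

#!/bin/sh
# Keep only one chunk on disk at any time.
while true; do
    # Stop the download after 60 s so the chunk stays small; the final
    # packet may be truncated, which pcap readers usually tolerate.
    timeout 60 wget -q -O /tmp/chunk.pcap http://fritz.box/never_ending.pcap
    ntopng -i /tmp/chunk.pcap   # ntopng can read a pcap file via -i, but it
                                # keeps running afterwards to serve its web UI
    rm -f /tmp/chunk.pcap
done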
The -i option in tshark specifies an interface, whereas the -f option in ntop specifies a name for the dump file.
I didn't even know there was a -f option in ntopng!?
Does this solve your problem?
I'm using RawCap to capture packets sent from my dev machine (one app) to itself (another app). I can get it to write the captures to a file like so:
RawCap.exe 192.168.125.50 piratedPackets.pcap
...and then open the *.pcap file in Wireshark to examine the details. This worked when I called my REST method from Postman, but when using Fiddler's Composer tab to attempt the same, the *.pcap file ends up being empty. I think this may be because my way of shutting down RawCap was rather raw itself - I simply closed the command prompt. Typing "exit" does nothing while it's busy capturing.
How can I make like a modern-day Mansel Alcantra if the captured packets sink to the bottom of the ocean before I can plunder the booty? How can I gracefully shut RawCap down so that it (hopefully) saves its contents to the log (*.pcap) file?
RawCap is gracefully closed by hitting Ctrl + C. Doing so will flush all packets from memory to disk.
You can also tell RawCap to only capture a certain number of packets (using the -c argument) or end sniffing after a certain number of seconds (using the -s argument).
Here's one example using -s to sniff for 60 seconds:
RawCap.exe -s 60 192.168.125.50 piratedPackets.pcap
Finally, if none of the above methods works for you, you might want to use the -f switch. With -f, all captured packets are flushed to disk immediately. However, this has a performance impact, so you run a greater risk of missing or dropping packets when sniffing with -f.
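For completeness, a capture with immediate flushing would look like this:

RawCap.exe -f 192.168.125.50 piratedPackets.pcap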
You can run RawCap.exe --help to show the available command line arguments. They are also documented here:
http://www.netresec.com/?page=RawCap
I am on an embedded platform (mipsel architecture, Linux 2.6 kernel) where I need to monitor IPC between two closed-source processes (router firmware) in order to react to a certain event (dynamic IP change because of DSL reconnect). What I found out so far via strace is that whenever the IP changes, the DSL daemon writes a special message into a UNIX domain socket bound to a specific file name. The message is consumed by another daemon.
Now here is my requirement: I want to monitor the data flow through that specific UNIX domain socket and trigger an event (call a shell script) if a certain message is detected. I tried to monitor the file name with inotify, but it does not work on socket files.

I know I could run strace all the time, filtering its output and reacting to changes in the filtered log file, but that would be too heavy a solution because strace really slows down the system. I also know I could just poll for the IP address change via cron, but I want a watchdog, not a polling solution. And I am interested in finding out whether there is a tool which can specifically monitor UNIX domain sockets and react to specific messages flowing through in a predefined direction. I imagine something similar to inotifywait, i.e. the tool should wait for a certain event, then exit, so I can react to the event and loop back into starting the tool again, waiting for the next event of the same type.
Is there any existing Linux tool capable of doing that? Or is there some simple C code for a stand-alone binary which I could compile on my platform (uClibc, not glibc)? I am not a C expert, but capable of running a makefile. Using a binary from the shell is no problem, I know enough about shell programming.
It has been a while since I was dealing with this topic, and I did not actually get around to testing what an acquaintance of mine, Denys Vlasenko, maintainer of BusyBox, proposed as a solution several months ago. Since I just checked my account here on Stack Overflow and saw the question again, let me share his insights with you. Maybe it will be helpful for somebody:
One relatively easy hack I can propose is to do the following:
I assume that you have a running server app which opened a Unix domain listening socket (say, /tmp/some.socket), and client programs connect to it and talk to the server.
rename /tmp/some.socket -> /tmp/some.socket1
create a new socket /tmp/some.socket
listen on it for new client connections
for every such connection, open another connection to /tmp/some.socket1, i.e. to the original server process
pump data (client<->server) over the resulting pairs of sockets (the code to do so is very similar to what a telnetd server does) until EOF from either side.
While you are pumping data, it's easy to look at it, to save it, and even to modify it if you need to.
The downside is that this sniffer program needs to be restarted every time the original server program is restarted.
This is similar to what Celada also answered. Thanks to him as well! Denys's answer was a bit more concrete, though.
I asked back:
This sounds hacky, yes, because of the restart necessity, but feasible. Me not being a C programmer, I keep wondering though if you know a command line tool which could do the pass-through and logging or event-based triggering work for me. I have one guy from our project in mind who could hack a little C binary for that, but I am unsure if he likes to do it. If there is something pre-fab, I would prefer it. Can it even be done with a (combination of) BusyBox applet(s), maybe?
Denys answered again:
You need to build busybox with CONFIG_FEATURE_UNIX_LOCAL=y.
Run the following as intercepting server:
busybox tcpsvd -vvvE local:/tmp/socket 0 ./script.sh
where script.sh is a simple passthrough connection to the "original server":
#!/bin/sh
busybox nc -o /tmp/hexdump.$$ local:/tmp/socket1 0
As an example, I added hex logging to file (-o FILE option).
Test it by running an emulated "original server":
busybox tcpsvd -vvvE local:/tmp/socket1 0 sh -c 'echo PID:$$'
and by connecting to "intercepting server":
echo Hello world | busybox nc local:/tmp/socket 0
You should see a "PID:19094" message and have a new /tmp/hexdump.19093 file with the dumped data. Both tcpsvd processes should print some log output too (they are run with -vvv verbosity).
If you need more complex processing, replace the nc invocation in script.sh with a custom program.
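As an aside (my addition, not part of Denys's suggestion): if socat happens to be available on the platform, the same interposition trick works without a script; -v logs everything passing through to stderr:

mv /tmp/some.socket /tmp/some.socket1
socat -v UNIX-LISTEN:/tmp/some.socket,fork UNIX-CONNECT:/tmp/some.socket1 2>>/tmp/sniff.log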
I don't think there is anything that will let you cleanly sniff UNIX socket traffic. Here are some options:
Arrange for the sender process to connect to a different socket where you are listening. Also connect to the original socket as a client. On receipt of data, watch for the content you care about and pass everything along to the original socket.
Monitor the system for IP address changes yourself using a netlink socket (RTM_NEWADDR, RTM_NEWLINK, etc...).
Run ip monitor as an external process and take action when it writes messages about added and removed IP addresses on its standard output (a minimal loop is sketched below).
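A minimal watchdog loop for that last option might look like this (my sketch; it assumes a full iproute2 ip rather than a stripped-down applet, and the handler script name is hypothetical):

ip monitor address | while read -r line; do
    case "$line" in
        *" inet "*) /usr/local/bin/on_ip_change.sh ;;  # fires on IPv4 address add/delete
    esac
done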
I am trying to scrape a website using wget. Here is my command:
wget -t 3 -N -k -r -x
The -N means "don't download the file if the server version is older than the local version". But this isn't working: the same files get downloaded over and over again when I restart the scraping operation, even though the files haven't changed.
Many of the downloaded pages report:
Last-modified header missing -- time-stamps turned off.
I've tried scraping several web sites but all tried so far give this problem.
Is this a situation controlled by the remote server? Are they choosing not to send those timestamp headers? If so, there may not be much I can do about it?
I am aware of the -nc (no clobber) option, but that will prevent an existing file from being overwritten even if the server file is newer, resulting in stale local data accumulating.
Thanks
Drew
The wget -N switch does work, but a lot of web servers don't send the Last-Modified header for various reasons. For example, dynamic pages (PHP or any CMS, etc.) have to actively implement the functionality (figure out when the content was last modified, and send the header). Some do, while some don't.
There really isn't another reliable way to check if a file has been changed, either.
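If you want to check whether a given server sends the header at all, wget can show the response headers without downloading the body (wget writes them to stderr, hence the redirect; the URL is just an example):

wget --server-response --spider http://example.com/somepage 2>&1 | grep -i last-modified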
Is there a way to get recorded real network traffic to a web server, e.g. from web server logs (Apache), and replay this traffic to either profile the web application (in Perl) under real load, or benchmark and compare the speed of different implementations before choosing one or the other?
If it matters, the webapp is written in Perl and runs under plain CGI, FastCGI, mod_perl (via ModPerl::Registry), and PSGI (via Plack::App::WrapCGI).
Crossposted to Pro Webmasters
Similar questions on Server Fault:
How can I replay Apache access logs back at my servers to do real world load testing?
A quick scan on Google yielded an interesting blog entry with subsequent useful comments at http://www.igvita.com/2008/09/30/load-testing-with-log-replay/. A commenter also mentioned Tsung by Process-One, which allows recording sessions in real time, with the obvious note that you should be able to replay them afterwards. That doesn't help so much with existing Apache access logs, though.
Been there lately. I figured that if I dumped TCP traffic with tcpdump, I could rewrite the destination of the packets and then replay it to the new app servers. So I started out with something like this:
tcpdump -i eth1 -s 0 -w - dst port 80 | \
tcprewrite --mtu-trunc --infile=- --outfile=- \
--dstipmap=<source_ip>:<destination_ip> | \
tcpslice -w - - | tcpreplay --intf1=eth1 -
It did not work for various reasons, so I started digging some more and found Gor: a small Go project by Leonid Bugaev from Granify, written for exactly what we wanted to accomplish here.
This is how we ended up using Gor: http://devblog.springest.com/testing-big-infrastructure-changes-at-springest/
We have a Chef cookbook for it as well: https://github.com/Springest/gor-chef
Hope this helps.
The short answer was given on the other side.
The longer answer is that you can't: you will be missing request headers and POST bodies.
Here's a simple Perl way to record real HTTP traffic and play it back:
http://patrick.net/sprocket/rwt.html
If only GET requests are needed and there is no session-tracking implemented via query parameters, then this is possible.
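As a rough illustration of that GET-only case (my sketch; it assumes the Apache "combined" log format and a hypothetical test host):

# Field 6 of a combined-format log line is the quoted method, field 7 the path.
awk '$6 == "\"GET" { print $7 }' access.log | while read -r path; do
    curl -s -o /dev/null "http://test.example.com$path"
done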
One question: do you want to do it this way because (1) you want to emulate real-world distribution of traffic among your pages or (2) there are too many pages to even consider building any sort of test scripts?
I'm using wget to connect to a secure site like this:
wget -nc -i inputFile
where inputFile consists of URLs like this:
https://clientWebsite.com/TheirPageName.asp?orderValue=1.00&merchantID=36&programmeID=92&ref=foo&Ofaz=0
This page returns a small gif file. For some reason, this is taking around 2.5 minutes. When I paste the same URL into a browser, I get back a response within seconds.
Does anyone have any idea what could be causing this?
The version of wget, by the way, is "GNU Wget 1.9+cvs-stable (Red Hat modified)"
I know this is a year old but this exact problem plagued us for days.
Turns out it was our DNS server, but I got around it by disabling IPv6 on my box.
You can test it out prior to making the system change by adding "--inet4-only" to the end of the command (w/o quotes).
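For example:

wget --inet4-only -nc -i inputFile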
Try forging your user agent:
-U "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1"
Disable certificate checking (slow):
--no-check-certificate
Debug what's happening by enabling verbosity:
-v
Eliminate the need for DNS lookups by hard-coding their IP address in /etc/hosts:
123.122.121.120 foo.bar.com
Have you tried profiling the requests using strace/dtrace/truss (depending on your platform)?
There are a wide variety of issues that could be causing this. What version of OpenSSL is being used by wget? There could be an issue there. What OS is this running on? (Full information would be useful there.)
There could also be some form of download slowdown enforced by the site, based on the user agent string wget sends, to reduce the effects of spiders.
Is wget performing full certificate validation? Have you tried using --no-check-certificate?
Is the certificate on the client site valid? You may want to specify --no-check-certificate if it is a self-signed certificate.
HTTPS (SSL/TLS) Options for wget
One effective solution is to delete the https:// prefix, so that wget falls back to plain HTTP and skips the TLS handshake. This sped up my download by around 100 times.
For instance, suppose you want to download via:
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
You can use the following command instead to speed things up.
wget data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2