Most effective way to scan for a particular web server HTTP banner? - perl

So basically I'm trying to find web servers that run a particular version, for example Apache 2.2.4. What's the best way of doing this?
Scan an IP range from blah blah to blah blah for hosts with port 80 open and a web server enabled, then make a script that loads those IPs and checks whether they return the server banner I want?
Or is there a faster alternative?
Basically I'm trying to make a script like ShodanHQ.
I'm trying to get a large number of web servers running a certain version. Can anybody point me in a direction? Thanks, I hope I was clear.

For doing Internet-wide surveys like Shodan or Scans.io, you need very-high-bandwidth access, legal approval (or at least a blind eye turned) from your ISP, and likely an asynchronous scanner like ZMap or masscan. Nmap is a decent alternative with the --min-rate argument. Anything using the default TCP stack on your OS (e.g. curl, netcat, or Perl solutions) will not be able to keep up with the high packet volume and number of targets required.
If, however, you want to scan a smaller network (say a /16 with 65K addresses), then Nmap is up to the job, requires less setup than the asynchronous scanners (since they require firewall settings to prevent the native TCP stack from responding to returned probes), and is widely available. You could get reasonable performance with this command:
sudo nmap -v -T5 -PS80 -p80 -sS --script http-server-header -oA scan-results-%D%T 10.10.0.0/16
This breaks down to:
-v - Verbose output.
-T5 - Fastest timing template. This may be too much for some networks; try -T4 if you suspect lost results.
-PS80 - Only consider hosts that respond on port 80 (open or closed).
-p80 - Scan port 80 on alive hosts.
-sS - Use Nmap's half-open SYN scan, which has the best timing performance.
--script http-server-header - This script grabs the Server header from a basic GET request. Alternatively, you could use http-headers to get all headers, or -sV --version-light to do basic version detection from probe responses.
-oA scan-results-%D%T - Output three formats into separate timestamped files. You can process the results with one of the many tools that import Nmap XML output, as in the sketch below.
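As one example of that last point, here is a minimal Perl sketch that pulls matching hosts out of the XML output. It assumes the http-server-header script was used as above; the file name scan-results.xml and the Apache/2.2.4 pattern are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use XML::Twig;

# Banner pattern to hunt for; adjust to the version you care about.
my $wanted = qr{Apache/2\.2\.4};

XML::Twig->new(
    twig_handlers => {
        host => sub {
            my ($twig, $host) = @_;
            my $addr = $host->first_child('address')->att('addr');
            # The http-server-header script stores the banner in the
            # "output" attribute of a <script> element under the port.
            for my $script ($host->descendants('script')) {
                next unless $script->att('id') eq 'http-server-header';
                my $banner = $script->att('output') // '';
                print "$addr $banner\n" if $banner =~ $wanted;
            }
            $twig->purge;    # keep memory use flat on large scans
        },
    },
)->parsefile('scan-results.xml');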

You could use curl and sed:
curl -sI 'http://192.0.2.1' | sed -n 's/^Server:[[:blank:]]*//p'
Call it from perl with:
perl -e '$server = `curl -sI "http://192.0.2.1" | sed -n "s/^Server:[[:blank:]]*//p"`; print $server'
The -I option makes curl issue an HTTP HEAD request and print only the headers; -s suppresses the progress output. (Note the inner quotes are doubled so they don't terminate the single-quoted perl -e argument.)
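If you would rather stay inside Perl instead of shelling out to curl and sed, a minimal sketch with HTTP::Tiny (in core since Perl 5.14) does the same thing; 192.0.2.1 is again a placeholder:

#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Tiny;

# Send the HEAD request directly from Perl instead of via curl.
my $res = HTTP::Tiny->new(timeout => 5)->head('http://192.0.2.1/');

# HTTP::Tiny lower-cases header names in its response hash.
print $res->{headers}{server} // 'no Server header', "\n";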

Related

What does the -P0 option do when using nmap?

I'm trying to understand the basics of nmap and its functionality, and I am using Wireshark to check the network flow. I have a question regarding the following option.
What is the difference between the following commands? Is it recommended to use the -P0 option or not?
nmap -p113 scanme.nmap.org
nmap -p113 -P0 scanme.nmap.org
I have been trying to find out what the -P0 option does, but I can't find it in any nmap options cheat sheet.
From the nmap manual we learn:
In previous versions of Nmap, -Pn was -P0 and -PN.
Therefore, -P0 is now -Pn.
Now what is -Pn?
This option skips the Nmap discovery stage altogether. Normally, Nmap uses this stage to determine active machines for heavier scanning. By default, Nmap only performs heavy probing such as port scans, version detection, or OS detection against hosts that are found to be up. Disabling host discovery with -Pn causes Nmap to attempt the requested scanning functions against every target IP address specified. [...]

Perform Denial of Service attack

I'm learning networking and Internet security, and I'm trying to perform a denial-of-service attack on a VM (IP address 192.168.100.1) that acts as a gateway.
Following some tutorials, I'm using hping3 to perform this, with hping3 -S --flood -V -p 80 192.168.100.1 as the command.
Still, I'm able to ping the gateway from another host.
I've tried adding another attacker and opening more terminals, still with no success; the one thing I have obtained is an increase in round-trip time (about 90 ms).
Are the attackers too few to perform this?
DoS may be illegal in many countries; I write this just for educational purposes.
Yes, you will need more attacker instances. It is highly unlikely that an attacker with a single machine has a big enough Internet connection to generate enough traffic on its own. One way to generate that much traffic is through a botnet.
You may refer to the following link as a first step:
https://blog.cloudflare.com/65gbps-ddos-no-problem/

After running "opkg install tcpdump" successfully on a TP-Link router flashed with OpenWrt, the tcpdump command doesn't work

I am doing a wireless experiment that uses a TP-Link WR1043ND router flashed with OpenWrt. Because I need to capture packets passing through the router, I need to install tcpdump.
I used the command "opkg install tcpdump" to install it, and the terminal showed the installation was successful.
But when I entered the "tcpdump" command, I got a failure prompt showing:
-ash: tcpdump: not found
So, to check whether tcpdump was installed, I entered the following:
opkg list | grep tcpdump
The filtered result shows:
openvswitch-ovs-tcpdump - 2.8.1-1 - Dump traffic from an Open vSwitch port using tcpdump
openvswitch-ovs-tcpundump - 2.8.1-1 - Convert ``tcpdump -xx`` output to hex strings
pcapsipdump - 0.2-1 - pcapsipdump is a tool for dumping SIP sessions (+RTP traffic, if available) to disk in a fashion similar to "tcpdump -w" (format is exactly the same), but one file per sip session (even if there is thousands of concurrect SIP sessions).
tcpdump - 4.9.2-1 - Network monitoring and data acquisition tool
tcpdump-mini - 4.9.2-1 - Network monitoring and data acquisition tool (minimal version)
tcpreplay - 4.2.5-1 - tcpreplay is a tool for replaying network traffic from files saved with tcpdump or other tools which write pcap(3) files.
Obviously, the installation was successful.
I really hope somebody can help me with this problem, thanks!

Curl TCP keepalives on Mac

I have multiple servers set up behind a load balancer that distributes requests to them by TCP connection. In other words, if I issue many requests in the browser, all of them will be sent to one of the servers behind the load balancer for as long as the TCP connection stays open.
However, when I issue requests via curl, the TCP connections do not seem to be reused, and the load balancer sends each request to a new server (round-robin algorithm).
QUESTIONS:
Is it possible to enable TCP keepalives with curl? If so, how?
Should I use something from libcurl, like http://curl.haxx.se/libcurl/c/persistant.html? If so, how should I do it?
Is it related to the fact that I use mac? http://sourceforge.net/p/curl/bugs/1214/
Thanks.
What I have tried:
for i in {1..100}; do curl --keepalive --keepalive-time 50 -s -D - http:URL -o /dev/null; done
While the loop runs, I run the following and see that a new port is used every time:
lsof -i -n -P | grep curl
This is not possible the way you envision. Since you are creating a new curl process for each request, each request opens a new TCP connection, which is closed when that process exits. So even if curl itself used TCP keep-alive, it would not matter, because the keep-alive would only be active until the process is done. curl already tries to re-use the same connection for multiple requests, as long as those requests are made inside the same process (for example across redirects, or when several URLs are given to one invocation).
What you need instead is a way to make all the requests from within a single process, so that they can share one TCP connection. With the curl command line tool, that means passing all the URLs to a single invocation instead of looping in the shell; alternatively, use a tool or library that handles multiple URLs within the same process.
Is it possible to enable TCP keepalives with curl? If so, how?
Yes, it is possible (you are already doing it with --keepalive-time), but it will not help with your problem.
Should I use something from libcurl, like http://curl.haxx.se/libcurl/c/persistant.html? If so, how should I do it?
Yes, this could help, because it lets you make multiple requests from within the same process. Bindings are available for many programming languages. You could also use the native, comfortable HTTP handling of a scripting language like Python, Perl, or Ruby, as sketched below.
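For example, here is a minimal Perl sketch using LWP::UserAgent (one possible choice; any HTTP library with persistent connections would do). The URL is a placeholder for your load balancer:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

# keep_alive => 1 enables a connection cache, so requests to the same
# host re-use one TCP connection instead of opening a new one each time.
my $ua = LWP::UserAgent->new(keep_alive => 1);

# http://192.0.2.1/ stands in for the load balancer's address.
for my $i (1 .. 100) {
    my $res = $ua->get('http://192.0.2.1/');
    print "$i: ", $res->code, "\n";
}

If you watch this loop with lsof, you should see the same local port for every request, provided the server side also allows persistent connections.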
Is it related to the fact that I use a Mac? http://sourceforge.net/p/curl/bugs/1214/
No; the problem cannot be solved with TCP keep-alive at all, on any platform.

Proxy for command line utilities in Win XP

How do I get command line utilities like ping to use the default proxy in Windows XP?
proxycfg -u sets the proxy to the default (IE) proxy all right, but it doesn't seem to be working.
Update: I am behind a proxy and would like a way to check whether a site is up or not, hence trying to use ping! I would also like a way to telnet (without using PuTTY) to a specific site and port to check connectivity.
A proxy is usually used for web (HTTP) traffic; ping uses ICMP, which is a completely separate protocol. What, exactly, are you trying to do?
So, standard ping doesn't go via an HTTP proxy, as everyone's already mentioned. What you probably want is to tunnel your TCP connections (e.g., HTTP, telnet, ssh) via your HTTP proxy using the CONNECT method. For instance, using netcat (telnet will also work, but netcat's better) you'll do the following:
$ nc yourproxy 3128
CONNECT yourtelnetserver:23 HTTP/1.0
then press enter twice.
There are also tools that can do this for you, including a few lines of Perl (see the sketch below). Keep in mind that some HTTP proxies are configured to allow CONNECT connections only to certain destinations, for example to port 443 only (for TLS/SSL/HTTPS).
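If you want to script the same handshake, here is a minimal Perl sketch of the CONNECT tunnel; the proxy name, port 3128, and the target are the same placeholders as in the netcat example:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Connect to the HTTP proxy itself (placeholders from the example above).
my $proxy = IO::Socket::INET->new(
    PeerAddr => 'yourproxy',
    PeerPort => 3128,
    Proto    => 'tcp',
) or die "cannot reach proxy: $!";

# Ask the proxy to open a raw TCP tunnel to the target's telnet port.
print $proxy "CONNECT yourtelnetserver:23 HTTP/1.0\r\n\r\n";

# The proxy replies with an HTTP status line; 200 means the tunnel is up.
my $status = <$proxy>;
die "proxy refused the tunnel: $status"
    unless defined $status && $status =~ m{^HTTP/1\.\d 200};
1 while defined($_ = <$proxy>) and /\S/;    # skip remaining header lines

# From here on, reading and writing $proxy talks directly to port 23.
print while <$proxy>;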
Ping doesn't use TCP - it uses ICMP, so using a proxy doesn't really make sense.
Do you have another command line utility in mind?
Your best bet will probably be a command line browser for Windows.
You can try lynx, which is nearly a full browser, or you can go with something simpler and use wget. I would recommend wget myself.
Both programs have a way of configuring a proxy, and the documentation should be the same for the Linux and Windows versions.
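In the same spirit, here is a minimal Perl sketch that checks whether a site is up from behind an HTTP proxy, using LWP instead of wget; the proxy address and port are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new(timeout => 10);

# Placeholder proxy; with no explicit setting, $ua->env_proxy would pick
# up the http_proxy environment variable instead.
$ua->proxy('http', 'http://yourproxy:3128');

# A HEAD request is enough to tell whether the site answers.
my $res = $ua->head('http://www.example.com/');
print $res->is_success
    ? "site is up\n"
    : 'request failed: ' . $res->status_line . "\n";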