How can I implement server-side rate limiting for a Perl web service?

I have a Perl-based CGI/FastCGI web service and want to rate-limit clients by IP address, to stop aggressive clients from causing too much work.
I have looked around for existing code and found Algorithm::TokenBucket on CPAN, but that is aimed at the client side; it has no persistence and no per-user configuration, so it is not really useful for server-side rate limiting.
I am looking for suggestions for something that already exists; otherwise I'll need to roll my own, based on some simple persistence such as a tie to DB_File keyed per IP address, plus a batch job that does the token management.
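For concreteness, the roll-your-own fallback I have in mind would be something like the following untested sketch; the database path and the per-minute limit are placeholders, and a real version would need flock() around the update since multiple FastCGI workers share the file:
#!/usr/bin/perl
# Untested sketch: one counter per IP per minute-window, persisted in a
# DB_File hash. A batch job can prune stale keys / manage the tokens.
use strict;
use warnings;
use DB_File;
use Fcntl qw(O_CREAT O_RDWR);

tie my %hits, 'DB_File', '/var/tmp/ratelimit.db', O_CREAT | O_RDWR, 0644
    or die "tie: $!";

my $MAX_PER_MIN = 60;                               # illustrative limit
my $key = "$ENV{REMOTE_ADDR}:" . int(time / 60);    # per-IP, per-minute

if (++$hits{$key} > $MAX_PER_MIN) {
    print "Status: 429 Too Many Requests\r\n\r\n";  # CGI-style rejection
    untie %hits;
    exit;
}
untie %hits;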

I've used Cache::FastMmap for rate limiting by tracking hits per IP address. It's a cache, so data will expire over time, but if you set the size and expire time right, this shouldn't be an issue.
The IP address is the hash key and the hash value is an array of timestamps. I have a second data structure (also backed by Cache::FastMmap) which is a hash of banned IP addresses, updated from the data in the first structure.
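Something like this minimal sketch, where the share files, window, and thresholds are illustrative and should be tuned along with the cache size:
#!/usr/bin/perl
# Sketch of the two-structure approach: per-IP timestamp lists plus a
# banned-IP map, both backed by Cache::FastMmap.
use strict;
use warnings;
use Cache::FastMmap;

my $hits = Cache::FastMmap->new(
    share_file  => '/tmp/rl-hits.cache',    # illustrative path
    expire_time => '10m',
);
my $banned = Cache::FastMmap->new(
    share_file  => '/tmp/rl-banned.cache',  # illustrative path
    expire_time => '1h',                    # ban length = expiry time
);

my $MAX_HITS = 100;   # max requests per window (illustrative)
my $WINDOW   = 60;    # sliding window in seconds (illustrative)

sub allow_request {
    my ($ip) = @_;
    return 0 if $banned->get($ip);

    # Atomically record this hit and drop timestamps outside the window.
    my $recent = $hits->get_and_set($ip, sub {
        my ($key, $stamps) = @_;
        my $now  = time;
        my @keep = grep { $_ > $now - $WINDOW } @{ $stamps || [] };
        push @keep, $now;
        return \@keep;
    });

    if (@$recent > $MAX_HITS) {
        $banned->set($ip, 1);   # banned until the cache entry expires
        return 0;
    }
    return 1;
}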

I know it's not what you asked, but have you considered handling this elsewhere in the stack, where it's already been done for you? Clearly I don't know your deployment stack, but if it's Apache you could use mod_evasive. Alternatively, if you're on Linux, you could let iptables do its job using something like:
# Allow only 12 connections per IP
/sbin/iptables -A INPUT -p tcp --dport 80 -m connlimit --connlimit-above 12 -j REJECT --reject-with tcp-reset
Certainly more complicated rules are possible.

Related

Perform Denial of Service attack

I'm learning networking and internet security, and I'm trying to perform a denial-of-service attack on a VM (IP address: 192.168.100.1) which acts as a gateway.
Following some tutorials, I'm using hping3 to perform this, with hping3 -S --flood -V -p 80 192.168.100.1 as the command.
I'm still able to ping the gateway from another host.
I've tried adding another attacker and opening more terminals, still with no success; the one thing I have achieved is an increase in round-trip time (about 90 ms).
Are there too few attackers to perform this?
DoS attacks may be illegal in many countries. I am doing this purely for educational purposes.
Yes, you will need more attacker instances. It is highly unlikely that a single attacking machine has a big enough Internet connection to generate enough traffic on its own. One way to generate that much traffic is through a botnet.
You may refer to the following link as the 1st step:
https://blog.cloudflare.com/65gbps-ddos-no-problem/

Localtunnel is not setting up the requested subdomain from the command 'lt --port 4000 --subdomain xyz'

I have been trying to set the subdomain in localtunnel, but it keeps giving me different subdomains.
Port number is 4000 and the service is running.
The command I used:
lt --port 4000 --subdomain xyz (I have changed the subdomain name for security reasons).
Where am I going wrong?
I know this is a very late answer, but I am writing it to help other searchers who reach this page without finding a valid answer.
The command I used: lt --port 4000 --subdomain xyz (I have changed the subdomain name for security reasons).
First of all, the command itself is fine, but before localtunnel can assign you a subdomain, that subdomain must be available.
You may be thinking that you requested a private, very unique subdomain name that should surely be available. That may be so, but remember that localtunnel keeps a record of the subdomains users request and builds its own pool from them for its random subdomain-assignment feature.
This means that after one, two, or even more (non-consecutive) attempts, your subdomain may have been assigned to someone else, and for that period you obviously cannot use it; however, once the subdomain is freed again, you will be assigned the one you requested.
I'm not familiar enough with localtunnel to tell you what's wrong there, but I can tell you how to accomplish the same goal using Telebit:
(P.S. Did you figure this out? If so, I'd love to hear how you did it, and I'm sure others would too.)
Install
curl https://get.telebit.io | bash
You can also install via npm... but that isn't the preferred install method at this time. There may be some caveats.
The random domain you get is attached to your account (hence the need for email) and it's encrypted end-to-end with Greenlock via Let's Encrypt.
Configure
./telebit http 4000 xyz
The general format is
./telebit <protocol> <port> [subdomain]
It's not just HTTPS; you can use it to tunnel anything over TLS/SSL (plain TCP, SSH, OpenVPN, etc.).
Custom domains are not yet a generally available feature, but they're on the horizon.

Block facebook.com using OpenWrt router

I am using an OpenWrt router. I need to block one or more URLs (not IPs) for a specific time. For example, I want to block facebook.com so that clients of this router can't access the website. Firewall rules should offer a way to do that, but I don't know how.
Here is one way to block by domain name rather than by IP address.
The main reason why you need such a complicated method is that a domain name (e.g. facebook.com) may resolve to a different IP address at any given time. So we need to keep a list of resolved IP addresses and add iptables rules based on that list.
First, you should enable logging in dnsmasq config:
uci set dhcp.@dnsmasq[0].logqueries=1
uci commit dhcp
/etc/init.d/dnsmasq restart
This will give you log entries like:
daemon.info dnsmasq[2066]: reply facebook.com is 31.13.72.36
Now you just have to keep parsing syslog and add corresponding iptables rules, like this (note that you will most likely want a more versatile script, and ipset for better performance; the FORWARD chain is used because the clients' traffic is forwarded through the router rather than generated by it):
logread -f | awk '/facebook.com is .*/{print $11}' | while read IP; do iptables -I FORWARD -d "$IP" -j DROP; done
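In the spirit of that "more versatile script" caveat, here is a hedged Perl sketch of the same loop that at least avoids inserting duplicate rules (log format as above; a production version should still use ipset):
#!/usr/bin/perl
# Sketch only: follow logread, pull addresses out of dnsmasq
# "reply ... is <ip>" lines, and add one DROP rule per new address.
use strict;
use warnings;

my %seen;   # addresses already blocked in this run
open my $log, '-|', 'logread', '-f' or die "logread: $!";
while (my $line = <$log>) {
    next unless $line =~ /reply \S*facebook\.com is (\d+(?:\.\d+){3})/;
    my $ip = $1;
    next if $seen{$ip}++;
    system('iptables', '-I', 'FORWARD', '-d', $ip, '-j', 'DROP');
}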

HAProxy health check port

I'm trying to think through the advantages and disadvantages of HAProxy health checks happening on a different port from regular traffic.
If a server becomes overloaded, health checks on a different port may keep marking the server as up even when it is overloaded. I think this is a good thing, because taking servers offline may make an overload problem worse, but I want to confirm that this makes sense. I can't seem to find any good docs on the tradeoffs, though, and was wondering if someone has a good analysis of them.
The port keyword is often used together with addr to send health checks somewhere other than directly to the service you are checking. One example might be enabling option httpchk to monitor a non-HTTP service. What you then do is run an HTTP-compatible service that, when queried, can execute complex health checks against the service you are actually testing.
The above is often done with agent-check nowadays, but some people prefer to use an HTTP interface.
This also has nothing to do with server load; the only idea is to send health checks to some other service, not the one directly monitored, that is more capable of testing the actual service (possibly using more complex logic) and returning a result. As an example, instead of having a MySQL backend tested just for authentication by option mysql-check, it could be tested by a PHP script that, for example, checks whether a backup is running and, if so, returns a 5xx HTTP error. The configuration could be something like:
backend mysql
    mode tcp
    option httpchk GET /mysql-status.php
    server mysqlserver 10.0.0.1:3306 check port 80
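To make the idea concrete (and since this page started out about Perl), the status endpoint behind check port 80 could be a small CGI script along these lines; the DSN, credentials, and the backup-lock check are all placeholders:
#!/usr/bin/perl
# Sketch of a /mysql-status style health endpoint: 200 when the backend
# looks usable, 503 otherwise, so "check port 80" marks it up or down.
use strict;
use warnings;
use DBI;

my $healthy = eval {
    my $dbh = DBI->connect('dbi:mysql:host=127.0.0.1', 'monitor', 'secret',
                           { RaiseError => 1, PrintError => 0 });
    $dbh->selectrow_array('SELECT 1');      # can we actually query?
    ! -e '/var/run/mysql-backup.lock';      # illustrative backup check
};

if ($healthy) {
    print "Status: 200 OK\r\nContent-Type: text/plain\r\n\r\nUP\n";
} else {
    print "Status: 503 Service Unavailable\r\nContent-Type: text/plain\r\n\r\nDOWN\n";
}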

Get Azure public IP address from deployed app

I'm implementing PASV mode in an FTP server, and I send the client the IP address and port of the data endpoint. This is stupid, because the IP is actually the one the client is already connecting to, so there are two options:
1. How could I get the public IP address of a given instance? Not the VIP, but the public one.
2. How could I get the original target IP address that the user used from a Socket object, considering the routers and load balancers in the middle? :P
An answer to either of these questions would do, although there is another way that could work... could I get the public IP address by doing a DNS lookup of myapp.cloudapp.net?
A fourth option would be to use the Azure Management API library... but that's too much trouble :P
Cheers.
Not sure if you ever figured this out, but here's my take on it. The individual role instances are all behind the Windows Azure load balancer and have no idea what the original, outward-facing IP address is. Also, there's no Management API call that returns the IP address - Get Deployment returns the URL but not the IP address. I think the only option is going to be a DNS lookup.
Having said that: I don't think you can host a passive FTP server in your role instance (at least not elegantly). You may open up to 25 input endpoints on your role (up from 5 - see my recent blog post about this update), but there's manual work involved in the configuration. I don't know if your FTP application lets you limit your port range to such a small number of ports. Also:
You'd have to define each port as its own input endpoint (this is the manual-labor part I mentioned) - input endpoints don't allow a port range to be specified, unlike internal endpoints.
You'd have to specify the port number that's used internally, and the port numbers would need to be sequential.
One last thing on FTP: you should be able to host an SFTP server with no trouble, since all traffic comes through one port.
The hack that I'm contemplating right now is to retrieve http://www.icanhazip.com/. It isn't elegant and is subject to the availability of that service, but it gets the job done. A better solution would be appreciated!
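For what it's worth, both the DNS-lookup idea and this hack are only a few lines of code; here's a Perl sketch, with the hostname and URL as placeholders:
#!/usr/bin/perl
# Sketch of both fallbacks: resolve the deployment's public name, and
# ask an external echo service which address it sees.
use strict;
use warnings;
use Socket qw(inet_ntoa);
use LWP::UserAgent;

# Option 1: DNS lookup of the *.cloudapp.net name.
my $packed = gethostbyname('myapp.cloudapp.net');
print 'DNS says: ', inet_ntoa($packed), "\n" if $packed;

# Option 2: external echo service (subject to its availability).
my $res = LWP::UserAgent->new(timeout => 5)->get('http://www.icanhazip.com/');
print 'icanhazip says: ', $res->decoded_content if $res->is_success;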