iptables / cherrypy: redirection changes request mid-processing

Sorry for the vague title, but my issue is a bit complicated to explain.
I have written a "captive portal" for a WLAN access point in cherrypy. It is just a server that blocks MAC addresses from accessing the internet before they have registered at a certain page. For this purpose, I wrote some iptables rules that redirect all HTTP traffic to me:
sudo iptables -t mangle -N internet
sudo iptables -t mangle -A PREROUTING -i $DEV_IN -p tcp -m tcp --dport 80 -j internet
sudo iptables -t mangle -A internet -j MARK --set-mark 99
sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp -m mark --mark 99 -m tcp --dport 80 -j DNAT --to-destination 10.0.0.1
(the specifics of this setup are not really important for my question, just note that an "internet" chain is created which redirects HTTP to port 80 on the access point)
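While testing, the chains can be inspected with packet counters to confirm that traffic is actually being marked and DNATed (just an inspection aid, not part of the setup):
sudo iptables -t mangle -L internet -v -n
sudo iptables -t nat -L PREROUTING -v -n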
At port 80 on the AP, a cherrypy server serves a static landing page with a "register" button that issues a POST request to http://10.0.0.1/agree. To process this request, I have created a method like this:
@cherrypy.expose
def agree(self, **kwargs):
    # retrieve the client's MAC address by checking the ARP table
    ip = cherrypy.request.remote.ip
    mac = os.popen("arp -a " + str(ip) + " | awk '{ print $4 }'").read()
    mac = mac.rstrip('\r\n')
    # add an iptables rule to whitelist the client; rmtrack removes previous connection-tracking state
    os.popen("sudo iptables -t mangle -I internet 1 -m mac --mac-source %s -j RETURN" % mac)
    os.popen("sudo rmtrack %s" % ip)
    return open('welcome.html')
So this method retrieves the client's MAC address from the ARP table, then adds an iptables exception so that this specific MAC is no longer redirected to the portal by the "internet" chain.
Now when I test this setup, something interesting happens. Adding the exception in iptables works - i.e. the client can now access web pages without getting redirected to me. The problem is that the initial request never completes against my server, i.e. the page welcome.html is never opened - instead, right after the iptables and rmtrack calls are executed, the client tries to open the "agree" path on the page they requested before the redirect to my portal.
For example, if they hit "google.com" in the address bar, then got sent to my portal and agreed, they would now try to open http://google.com/agree. As a result, they get an error after a while. It appears that the iptables or the rmtrack call changes the request to go for the original destination while it is still being processed at my server, which doesn't make any sense to me. Consequently, it doesn't matter which static page I return or which redirects I make after those terminal commands have been issued - the return value of my function isn't used by the client.
How could I fix this problem? Every piece of useful information is appreciated.

Today I managed to solve my problem, so I'm gonna put the solution here, although I kinda doubt that a lot of people will run into the same problem.
Basically, all that was needed was an absolute-URL redirect somewhere during the request processing on the captive portal server. The portal hijacks the client's connection, but not the URL in the browser, so the client is left believing it is still talking to its original destination server. In my case, the form on the index page where you agreed to my T&C was calling action /agree, which the browser therefore resolved against that original destination (e.g. google.com/agree).
Using the absolute form http://10.0.0.1/agree instead, the client follows the correct redirect after the iptables call.
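In form markup, the difference looks roughly like this (a sketch; I'm assuming a plain HTML form on the landing page):

<!-- relative action: resolves against whatever origin the browser believes it is on -->
<form action="/agree" method="post">

<!-- absolute action: always reaches the portal, even once the whitelist rule is in place -->
<form action="http://10.0.0.1/agree" method="post">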

Related

iptables rules deleted after reboot on Kubernetes nodes

After manually adding some iptables rules and rebooting the machine, all of the rules are gone (no matter the type of rule).
ex.
$ iptables -A FUGA-INPUT -p tcp --dport 23 -j DROP
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
DROP tcp -- anywhere anywhere tcp dpt:telnet
After the reboot:
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
If I am not mistaken, kube-proxy running on every node dynamically modifies iptables. If that is correct, how can I add rules that are permanent, but still let kubernetes/kube-proxy do its magic without deleting all the INPUT, FORWARD and OUTPUT rules that both Kubernetes and the Weave network plugin dynamically generate?
Running iptables on any system is not a persistent action and is forgotten on reboot; a k8s node is no exception. I doubt that k8s erases the iptables rules when it starts, so you could try this:
create your rules (do this starting from empty iptables, with iptables -A commands, as you need them)
run iptables-save >/etc/my-iptables-rules (note: you could also create the rules file manually).
create a system service script that runs on boot (or use /etc/rc.local) and add iptables-restore -n </etc/my-iptables-rules to it, as in the sketch below. This loads your rules on reboot. Note that if you use rc.local, your iptables-restore command may well run after k8s starts; check that your iptables -A commands are not sensitive to being loaded after those of k8s, and if needed replace the -A commands in the file with -I (to place your rules first in the chains).
(Be aware that some OS installations include a boot-time service that loads iptables as well; some firewall packages install such a service. If you have one on your server, the best approach is to add your rules to that firewall's config rather than write and load your own.)
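A minimal sketch of the boot script from the last step, assuming the rules file from step 2 (/etc/my-iptables-rules is just the name chosen above, not a standard path):
#!/bin/sh
# /etc/rc.local - executed once at the end of boot on many distros
# -n (--noflush) appends without flushing existing tables, so the rules
# that kube-proxy and the CNI plugin generate are left intact
iptables-restore -n </etc/my-iptables-rules
exit 0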

installing MailHog on Linux virtual box to capture outgoing emails

I wanted to ease development by installing MailHog in my CentOS Linux development environment in VirtualBox. The PHP mail() function doesn't report any issues (that is, it returns TRUE), but the outgoing mails do not appear in MailHog. How should I set it up correctly?
Follow these steps:
Download the appropriate MailHog version from https://github.com/mailhog/MailHog/releases. I use MailHog_linux_amd64 in this example, but you may need a different version. I also assume you store the files in your home directory; if not, adjust the paths accordingly.
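For example, something like this (a sketch; the latest/download URL pattern is a GitHub convention, pin a specific release tag if you prefer):
cd ~
wget https://github.com/mailhog/MailHog/releases/latest/download/MailHog_linux_amd64
chmod +x MailHog_linux_amd64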
If your VM uses IP filtering, you should allow communication through port 8025 by adding a line to the iptables config and restarting the service:
vim /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8025 -j ACCEPT
service iptables restart
Launch MailHog with the following command:
./MailHog_linux_amd64 -hostname=mylocal.vbox:8025
where mylocal.vbox is the domain name under which the host sees the VM. Now you should see some lines detailing which IP addresses and ports MailHog uses.
Download mhsendmail from here: https://github.com/mailhog/mhsendmail/releases.
Change it to be executable (adjust the path of the file accordingly):
chmod 777 /home/you/mhsendmail_linux_amd64
Change your php.ini to use mhsendmail instead of sendmail:
vim /etc/php.ini
sendmail_path = "/home/you/mhsendmail_linux_amd64"
service httpd restart
View the MailHog web interface from your host computer (use the host name we used above): http://mylocal.vbox:8025/. The webmail interface of MailHog should appear.
Test mail sending from the command line of the VM with this one-liner:
php -r "\$from = \$to = 'your.emailaddress@gmail.com'; \$x = mail(\$to, 'subject'.time(), 'Hello World', 'From: '. \$from); var_dump(\$x);"
It should display true, and the web interface of MailHog should show the new email.
Have fun, send as many emails via the mail() function of php as you want.
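If you'd rather check from the command line, MailHog also exposes an HTTP API; a quick sanity check could look like this (the endpoint is from MailHog's API v2, host and port as configured above):
curl http://mylocal.vbox:8025/api/v2/messages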
Some more ideas:
If you want to override the default IP address and port settings then you should use the following syntax:
./MailHog_linux_amd64 -ui-bind-addr=192.168.56.104:8026 -api-bind-addr=192.168.56.104:8026 -hostname=mylocal.vbox:8026 -smtp-bind-addr=192.168.56.104:8025
In this case you will have to escape the settings in php.ini this way:
sendmail_path = "/home/you/mhsendmail_linux_amd64 --smtp-addr=""192.168.56.104:8025"""

PostgreSQL still able to make connection after updating UFW

I am trying to test how an application handles network instability. The client application makes connections and runs queries on a database server. To simulate network instability, I am adding ufw rules to deny outgoing traffic while the client application is connected to the database server. I start up the application and it is able to run queries against the database. I then update the UFW rules; the following are the top two rules:
[ 1] 5432/udp DENY OUT Anywhere (out)
[ 2] 5432/tcp DENY OUT Anywhere (out)
After the ufw rules have been updated, the client is still able to make calls to the database server. However, if I reboot the client application, it is then unable to make a connection to the database server.
Does anyone know why this is occurring? Is there a better way to do what I am trying to do? Any help would be much appreciated.
More Details: The client application is using postgresql-9.4-1207.jdbc4 to connect to the database. The database is running postgresql 9.4.5.
UFW comes with some default configuration options in place. On my Ubuntu server they are located in /etc/ufw. In the before.rules file there are two rules ...
#-A ufw-before-input -m state --state RELATED,ESTABLISHED -j ACCEPT
#-A ufw-before-output -m state --state RELATED,ESTABLISHED -j ACCEPT
... that accept packets belonging to established connections. Since these rules are read in before the user-specified rules, they take precedence. I commented out these two lines in the configuration file, and my issue was resolved.
However, the comment above these two lines reads "# quickly process packets for which we already have a connection". I'm not sure what kind of performance impact removing them has; I am not particularly concerned about that in my case, but it might be a concern for someone else.
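An alternative that leaves before.rules untouched is to delete the existing conntrack entries, so the new DENY rules also apply to connections that are already established; a hedged sketch using the conntrack tool (from the conntrack-tools package, which may need to be installed first):
# drop tracked connections whose original destination port is 5432
sudo conntrack -D -p tcp --orig-port-dst 5432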

MongoDB cannot remote access

I'm new to Linux servers. I installed MongoDB on CentOS 6.3, and I run the MongoDB server with this command:
mongod -config /etc/mongodb.conf &
And I'm sure that I have set bind_ip to listen on all IPs:
# mongodb.conf
# Where to store the data.
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
rest = true
bind_ip = 0.0.0.0
port = 27017
But I cannot access MongoDB remotely. My server IP is 192.168.2.24, and when I run mongo on my local PC to access this MongoDB, it shows me this error:
Error: couldn't connect to server 192.168.2.24:27017 (192.168.2.24), connection attempt failed at src/mongo/shell/mongo.js:148
exception: connect failed
But I can access MongoDB on the server where it is installed, using this command:
mongo --host 192.168.2.24
So I think MongoDB itself is set up for remote access, but maybe something is wrong with the Linux server, maybe the firewall? I used this command to check whether the port is open for remote access:
iptables -L -n | grep 27017
Nothing is returned, so I added the port to iptables using these commands:
iptables -A INPUT -p tcp --dport 27017 -j ACCEPT
iptables -A OUTPUT -p tcp --source-port 27017 -m state --state ESTABLISHED -j ACCEPT
and saved the iptables rules and restarted the service:
iptables-save | sudo tee /etc/sysconfig/iptables
service iptables restart
I can see that port 27017 has been added to the iptables list, but it still doesn't work at all. I think the port may not actually have been opened. What should I do? I'm new to Linux servers; by the way, my server PC is offline, so I can't use yum. Please give me a detailed solution. Thanks so much.
It seems like the firewall is not configured correctly.
Disclaimer: Fiddling with firewall settings has security implications. DO NOT USE THE FOLLOWING PROCEDURE ON PRODUCTION SYSTEMS UNLESS YOU KNOW WHAT YOU ARE DOING!!! If in the slightest doubt, get back to a sysadmin or DBA.
The problem
Put simply, a firewall limits the access to services like MongoDB running on the protected machine by unauthorized parties.
CentOS only allows access to ssh by default. We need to configure the firewall so that you can access the MongoDB service.
The solution
We will install a small tool provided by CentOS < 7 (version 7 provides different means), which simplifies the use of iptables, which in turn configures netfilter, the framework of the Linux kernel allowing manipulation of network packets – thus providing firewall functionality (amongst other cool things).
Then, we will use said tool to configure the firewall functionality so that MongoDB is accessible from everywhere. I can't give you a more secure configuration, since I do not know your network setup. Again, use this procedure on production systems at your own risk. You have been warned!
Installation of system-config-firewall-tui
First, you have to log into your CentOS box as root, which allows installation and deinstallation of packages and change system-wide configurations.
Then, you need to issue (the dollar sign denotes the shell prompt)
$ yum -y install system-config-firewall-tui
Configuration of the firewall
Next, you need to start the tool we just installed
$ system-config-firewall-tui
which will bring up a small command-line GUI.
Do not simply disable the firewall!
Press Tab or →| respectively, until the "Customize" button is highlighted, then press ↵. In the next screen, highlight "Forward" and press ↵. You should now be in a screen called "Other Ports",
in which you highlight "Add" and press ↵. This brings you to a screen called "Port and Protocol", where you enter 27017 as the port and tcp as the protocol.
The configuration explained: MongoDB uses TCP for communicating with the clients and it listens on port 27017 by default for a standalone instance. Note that you might need to change the port according to the referenced list in case you do not run a standalone instance or replica set.
The next step is to highlight "OK" and press ↵, which will seemingly clear the inputs. However, the configuration we just made is saved. So we press "Cancel" and return to the "Other Ports" screen, which should now list the new 27017:tcp entry.
Now, we press "Close" and return to the main screen of "system-config-firewall-tui". Here, we press "Ok" and the tool asks you if you really want to apply those the changes you just made. Take the time to really think about that. ;)
Pressing "Yes" will now modify the firewall rules executed by the Linux kernel.
We can verify that by issuing
$ iptables -L -n | grep 27017
which should result in the output below:
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:27017
Now you should be able to connect to your MongoDB server.
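Side note: the question mentions the box is offline, in which case yum will not work there. The same effect can be achieved with iptables directly; a hedged sketch for CentOS 6 (the key point being -I rather than -A, since the default INPUT chain ends with a REJECT rule, and rules appended after it are never reached):
iptables -I INPUT -p tcp --dport 27017 -m state --state NEW -j ACCEPT
service iptables save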

how to add the text record in the bonjour dns-sd

I am able to register a service using Bonjour dns-sd on my Linux PC:
$ dns-sd -P SMARTCAM _ftp._tcp. . 80 AIR 14.99.8.77
But I am unable to add a TXT record to the registration. Can somebody tell me how to add the text record?
How about:
$ dns-sd -P SMARTCAM _ftp._tcp. . 80 air.local 14.99.8.77 "u=test" "path=/pub"
I'm just not sure about the .local part of the name, compared to the apparently non-local IP address. What are you trying to do, exactly? I'd normally expect to see this registering a local IP address, e.g.:
$ dns-sd -P SMARTCAM _ftp._tcp. . 80 air.local 10.1.1.58 "u=test" "path=/pub"
If you want to register a sub-type, for example, a printer, then you add the sub-type name after the main type name, comma-separated (thanks to this post for showing how to do it):
$dns-sd -P "Test Print" _http._tcp,_printer . 8080 air.local 10.1.1.58 "path=whatever"