iptables rules deleted after reboot on Kubernetes nodes - kubernetes

After manually adding some iptables rules and rebooting the machine, all of the rules are gone (no matter the type of rule).
For example:
$ iptables -A FUGA-INPUT -p tcp --dport 23 -j DROP
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
DROP tcp -- anywhere anywhere tcp dpt:telnet
After the reboot:
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
If I am not mistaken, kube-proxy running on every node dynamically modifies iptables. If that is correct, how can I add rules that are permanent, but still let kubernetes/kube-proxy do its magic without deleting all the INPUT, FORWARD and OUTPUT rules that both Kubernetes and the Weave network plugin dynamically generate?

Running iptables on any system is not a persistent action and is forgotten on reboot; a k8s node is no exception. I doubt that k8s will erase existing iptables rules when it starts, so you could try this:
create your rules (do this starting from an empty iptables, with iptables -A commands, as you need them)
run iptables-save > /etc/my-iptables-rules (NOTE: you could also create the rules file manually)
create a system service script that runs on boot (or use /etc/rc.local) and add iptables-restore -n < /etc/my-iptables-rules to it; this loads your rules on reboot (a sketch is shown below). Note that if you use rc.local, your iptables-restore command may well run after k8s starts; check that your iptables -A commands are not sensitive to being loaded after those of k8s, and if needed replace the -A commands in the file with -I (to place your rules first in the chains).
(be aware that some OS installations include a boot-time service that loads iptables as well; some firewall packages install such a service. If you have one on your server, the best approach is to add your rules to that firewall's config rather than writing and loading your own custom file.)
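A minimal sketch of the rc.local variant described above (assuming a simple setup where /etc/rc.local runs at the end of boot; adjust to a proper init/systemd service if your distribution prefers that):

# save the rules you added manually (run once, after creating them)
iptables-save > /etc/my-iptables-rules

# /etc/rc.local (must be executable)
#!/bin/sh
# -n (--noflush) keeps the chains that kubernetes/kube-proxy and Weave
# have already populated and only adds the saved rules on top
iptables-restore -n < /etc/my-iptables-rules
exit 0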

Related

When NAT is set up with iptables on the KVM Host machine, routing to a VM that is set to start automatically when the Host starts is not possible

Issue:
Both the Host machine and the VM are built with CentOS 6.10.
Traffic between the external machine and the VM (ExternalMachine⇔VM) is routed using the NAT function of the Host's iptables.
The problem: after restarting the Host machine or powering it on, iptables has started ("service iptables status" confirms this),
but it is not possible to route to the VM that was started automatically.
When this happens, restarting iptables ("service iptables restart") makes all routing work again.
Both iptables and the VM are running, and the iptables settings are as expected.
I have no idea why it is not possible to route to the VM.
I would be grateful if you could tell me what the problem is.
---------AutostartSetting/StopSetting------------
# vi /etc/sysconfig/libvirt-guests
START_DELAY=30
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=180
# virsh autostart <VM NAME>
-----OS-------
cat /etc/redhat-release
CentOS release 6.10 (Final)
----kvm----
qemu-kvm-0.12.1.2-2.506.el6_10.5.x86_64
additional info:
---------------
# virsh net-edit default
<network>
  <name>default</name>
  <uuid>1d4f2476-0da2-45d5-b97f-xxxxxxxxxxx</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='off' delay='0'/>
  <mac address='XX:XX:XX:XX:XX:XX'/>
  <ip address='1.2.3.4' netmask='255.255.255.0'>
  </ip>
</network>
-----------------
After checking, the startup order of the Host daemons is as follows:
1.iptables
2.network
3.qemu-ga
4.libvirtd
5.libvirt-guest
libvirtd depends on network, and network depends on iptables.
The chkconfig order could not be changed.
In this case, should I have an iptables restart script run at the end of the chkconfig order, or have anacron restart iptables? Or do you have another way to achieve it?
How is the libvirt/qemu network configured? If it is tap networking (or macvtap, which is the same for this matter), then the actual tap device (visible in ip addr output) only exists while the VM is paused or running. iptables rules reference interfaces, so if the interface did not exist when iptables (re)started, something needs to re-add the rule(s) when the VM is created. A simple iptables restart would do, too.
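One way to automate that (a sketch, assuming your libvirt version supports hook scripts; /etc/libvirt/hooks/qemu and its arguments follow the standard libvirt hook convention and are not taken from this setup) is a qemu hook that restarts iptables once a guest has started:

#!/bin/sh
# /etc/libvirt/hooks/qemu -- libvirtd calls this as: <guest name> <operation> <sub-operation> ...
guest="$1"
operation="$2"

if [ "$operation" = "started" ]; then
    # the VM's tap interface exists now, so re-apply the saved NAT rules
    service iptables restart
fi

Make the script executable and restart libvirtd once so the hook is picked up.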

PostgreSQL still able to make connection after updating UFW

I am trying to test how an application handles network instability. The client application makes connections and runs queries on a database server. To simulate network instability, I am trying to make ufw rules to deny traffic going out while the client application makes a connection to the database server. I start up the application and it is able to run queries on the database. I then update the UFW rules. The following two rules are the top 2 rules.
[ 1] 5432/udp DENY OUT Anywhere (out)
[ 2] 5432/tcp DENY OUT Anywhere (out)
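(For reference, rules like the two above can be created with commands along these lines; a sketch, not necessarily the exact commands that were used:)
sudo ufw insert 1 deny out 5432/udp
sudo ufw insert 2 deny out 5432/tcp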
After the ufw rules have been updated, the client is still able to make calls to the database server. However, if I restart the client application, it is then unable to make a connection to the database server.
Does anyone know why this is occurring? Is there a better way to do what I am trying to do? Any help would be much appreciated.
More Details: The client application is using postgresql-9.4-1207.jdbc4 to connect to the database. The database is running postgresql 9.4.5.
UFW comes with some default configuration options in place. On my Ubuntu server they are located in /etc/ufw. In the before.rules file there are two rules ...
#-A ufw-before-input -m state --state RELATED,ESTABLISHED -j ACCEPT
#-A ufw-before-output -m state --state RELATED,ESTABLISHED -j ACCEPT
... that allow established connections through. Since these rules are read in before the user-specified rules, they take precedence. I commented out these two lines in the configuration file and my issue was resolved.
However, the comment above these two lines reads "# quickly process packets for which we already have a connection". I am not sure what kind of performance impact removing them has, but I am not particularly concerned about that in my case. It might be a concern for someone else, though.
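After editing /etc/ufw/before.rules, something like the following reloads the ruleset and lets you confirm the change (a sketch; ufw-before-output is the chain ufw normally generates from that file):
sudo ufw reload
# the commented-out ESTABLISHED/RELATED rule should no longer appear here
sudo iptables -L ufw-before-output -n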

Disable iptables permanently in CentOS

I used the following commands
service iptables save
service iptables stop
chkconfig iptables off
But after some time, when I run the command service iptables status,
it shows me a list of rules.
How to disable iptables permanently?
You can try this:
iptables -F
This flushes all rules.
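Putting that together with the commands from the question, the whole sequence looks roughly like this (a sketch; iptables -X is an extra step that also deletes user-defined chains):
iptables -F                # flush all rules from the running ruleset
iptables -X                # also delete user-defined chains (optional extra)
service iptables save      # persist the now-empty ruleset
service iptables stop
chkconfig iptables off     # keep the service from starting at boot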

MongoDB cannot remote access

I'm new to Linux servers. I installed mongodb on CentOS 6.3, and I run the mongodb server with this command:
mongod -config /etc/mongodb.conf &
And I'm sure that I have set bind_ip to listen on all IPs:
# mongodb.conf
# Where to store the data.
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
rest = true
bind_ip = 0.0.0.0
port = 27017
But I cannot access mongodb remotely. My server IP is 192.168.2.24, and when I run mongo on my local PC to access this mongodb, it shows me this error:
Error: couldn't connect to server 192.168.2.24:27017 (192.168.2.24), connection attempt failed at src/mongo/shell/mongo.js:148
exception: connect failed
But I can access this mongodb on the server where mongodb is installed, using this command:
mongo --host 192.168.2.24
So I think mongo itself may be set up for remote access, but maybe something is wrong with the Linux server, maybe the firewall? So I use this command to check whether the port is open for remote access:
iptables -L -n | grep 27017
Nothing is returned, so I add the port to iptables using these commands:
iptables -A INPUT -p tcp --dport 27017 -j ACCEPT
iptables -A OUTPUT -p tcp --source-port 27017 -m state --state ESTABLISHED -j ACCEPT
and save the iptables & restart it:
iptables-save | sudo tee /etc/sysconfig/iptables
service iptables restart
I can see that port 27017 has been added to the iptables list, but it still does not work at all. I think opening port 27017 may not have succeeded. What should I do? I'm new to Linux servers; by the way, my server PC is offline, so it can't use "yum" commands. Please give me a detailed solution. Thanks so much.
It seems like the firewall is not configured correctly.
Disclaimer: Fiddling with firewall settings has security implications. DO NOT USE THE FOLLOWING PROCEDURE ON PRODUCTION SYSTEMS UNLESS YOU KNOW WHAT YOU ARE DOING!!! If in the slightest doubt, get back to a sysadmin or DBA.
The problem
Put simply, a firewall limits the access to services like MongoDB running on the protected machine by unauthorized parties.
CentOS only allows access to ssh by default. We need to configure the firewall so that you can access the MongoDB service.
The solution
We will install a small tool provided by CentOS < 7 (version 7 provides different means), which simplifies the use of iptables, which in turn configures netfilter, the framework of the Linux kernel allowing manipulation of network packets – thus providing firewall functionality (amongst other cool things).
Then, we will use said tool to configure the firewall functionality so that MongoDB is accessible from everywhere. I can't give you a more secure configuration, since I do not know your network setup. Again, use this procedure on production systems at your own risk. You have been warned!
Installation of system-config-firewall-tui
First, you have to log into your CentOS box as root, which allows installation and deinstallation of packages and change system-wide configurations.
Then, you need to issue (the dollar sign denotes the shell prompt)
$ yum -y install system-config-firewall-tui
The result should look something like this
Configuration of the firewall
Next, you need to start the tool we just installed
$ system-config-firewall-tui
which will create a small command line GUI:
Do not simply disable the firewall!
Press Tab or →| respectively, until the "Customize" button is highlighted. Now press ↵. On the next screen, highlight "Forward" and press ↵. You should now be in a screen called "Other Ports",
in which you highlight "Add" and press ↵. This brings you to a screen called "Port and Protocol", which you fill in as shown below
The configuration explained: MongoDB uses TCP for communicating with the clients and it listens on port 27017 by default for a standalone instance. Note that you might need to change the port according to the referenced list in case you do not run a standalone instance or replica set.
The next step is to highlight "OK" and press ↵, which will seemingly clear the inputs. However, the configuration we just made is saved. So we will press "Cancel" and return to the "Other Ports" screen, which should now look like this:
Now, we press "Close" and return to the main screen of "system-config-firewall-tui". Here, we press "Ok" and the tool asks you if you really want to apply the changes you just made. Take the time to really think about that. ;)
Pressing "Yes" will now modify the firewall rules executed by the Linux kernel.
We can verify that by issuing
$ iptables -L -n | grep 27017
which should result in the output below:
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:27017
Now you should be able to connect to your MongoDB server.
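Since the question mentions the box is offline and yum cannot be used, the rule the tool ends up adding (see the verification output above) can also be added by hand with something like this (a sketch; note -I rather than -A, so the rule lands before any final REJECT rule in the default CentOS ruleset):
iptables -I INPUT -p tcp --dport 27017 -m state --state NEW -j ACCEPT
service iptables save    # writes the rule to /etc/sysconfig/iptables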

iptables / cherrypy redirection changes request mid-processing

Sorry for the vague title, but my issue is a bit complicated to explain.
I have written a "captive portal" for a WLAN access point in cherrypy, which is just a server that blocks MAC addresses from accessing the internet before they have registered at a certain page. For this purpose, I wrote some iptables rules that redirect all HTTP traffic to my server:
sudo iptables -t mangle -N internet
sudo iptables -t mangle -A PREROUTING -i $DEV_IN -p tcp -m tcp --dport 80 -j internet
sudo iptables -t mangle -A internet -j MARK --set-mark 99
sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp -m mark --mark 99 -m tcp --dport 80 -j DNAT --to-destination 10.0.0.1
(the specifics of this setup are not really important for my question, just note that an "internet" chain is created which redirects HTTP to port 80 on the access point)
At port 80 on the AP, a cherrypy server serves a static landing page with a "register" button that issues a POST request to http://10.0.0.1/agree . To process this request, I have created a method like this:
@cherrypy.expose
def agree(self, **kwargs):
    # retrieve MAC address of client by checking ARP table
    ip = cherrypy.request.remote.ip
    mac = str(os.popen("arp -a " + str(ip) + " | awk '{ print $4 }' ").read())
    mac = mac.rstrip('\r\n')
    # add an iptables rule to whitelist the client, rmtrack to remove previous connection information
    os.popen("sudo iptables -I internet 1 -t mangle -m mac --mac-source %s -j RETURN" % mac)
    os.popen("sudo rmtrack %s" % ip)
    return open('welcome.html')
So this method retrieves the client's MAC address from the arp table, then adds an iptables exception to remove that specific MAC from the "internet" chain that redirects traffic to the portal.
Now when I test this setup, something interesting happens. Adding the exception in iptables works - i.e. the client can now access web pages without getting redirected to me. The problem is that the initial request doesn't come through to my server, i.e. the page welcome.html is never opened - instead, right after the iptables and rmtrack calls are executed, the client tries to open the "agree" path on the page they requested before the redirect to my portal.
For example, if they hit "google.com" in the address bar, then got sent to my portal and agreed, they would now try to open http://google.com/agree . As a result, they get an error after a while. It appears that the iptables or the rmtrack call changes the request to go for the original destination while it is still being processed at my server, which doesn't make any sense to me. Consequently, it doesn't matter which static page I return or which redirects I make after those terminal commands have been issued - the return value of my function isn't used by the client.
How could I fix this problem? Every piece of useful information is appreciated.
Today I managed to solve my problem, so I'm gonna put the solution here although I kinda doubt that there's a lot of people running into the same problem.
Basically, all that was needed was an absolute URL (including the host) somewhere during the request processing on the captive portal server. For example, in my case, the form on the index page where you agreed to my T&C was calling the action /agree. This meant that the client was left believing he was accessing those paths on his original destination server (e.g. google.com/agree).
Using the absolute form http://10.0.0.1/agree instead, the client follows the correct redirect after the iptables call.