Disable iptables permanently in CentOS

I used the following commands
service iptables save
service iptables stop
chkconfig iptables off
But after some time, when I run the command service iptables status,
it shows me a list of rules.
How to disable iptables permanently?

You can try this:
iptables -F
This flushes all rules.
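For reference, a minimal sketch of the whole sequence (assuming CentOS 6 style init scripts; on CentOS 7 and later the firewall is firewalld and is managed with systemctl instead):
iptables -F                  # flush all rules from every chain
iptables -X                  # delete any user-defined chains
service iptables save        # persist the now-empty rule set
service iptables stop        # stop the firewall for the current boot
chkconfig iptables off       # keep it from starting on the next boot
Note the order: if you run service iptables save before flushing, the saved file still contains the old rules, and they will reappear whenever the iptables service starts again.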

Related

Kubernetes Service pinging not working from time to time: "Temporary failure in name resolution"

I have two separate clusters (Application and DB) in the same namespace. Statefulset for DB cluster and Deployment for Application cluster. For internal communication I have configured a Headless Service. When I ping from a pod in application cluster to the service it works (Works the other way round too - DB pod to service works). But sometimes, for example if I continuously execute ping command for like 3 times, the third time it gives an error - "ping: : Temporary failure in name resolution". Why is this happening?
As far as I know this is usually a name resolution error and shows that your DNS server cannot resolve the domain names into their respective IP addresses. This can present a grave challenge, as you will not be able to update, upgrade, or even install any software packages on your Linux system. Here I have listed a few common reasons:
1. Missing or Wrongly Configured resolv.conf File
The /etc/resolv.conf file is the resolver configuration file in Linux systems. It contains the DNS entries that help your Linux system to resolve domain names into IP addresses.
If this file is missing, or it is present but you are still getting the name resolution error, create it and append the Google public DNS server as nameserver 8.8.8.8.
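For example, a quick way to do that (a sketch; substitute your own DNS server for 8.8.8.8 if you have one):
$ echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
$ cat /etc/resolv.conf   # confirm the nameserver entry is present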
Save the changes and restart the systemd-resolved service as shown.
$ sudo systemctl restart systemd-resolved.service
It’s also prudent to check the status of the resolver and ensure that it is active and running as expected:
$ sudo systemctl status systemd-resolved.service
2. Due to Firewall Restrictions
If the first solution did not work for you, firewall restrictions could be preventing you from successfully performing DNS queries. Check your firewall and confirm that port 53 (used for DNS – Domain Name Resolution) and port 43 are open. If the ports are blocked, open them as follows:
For UFW firewall (Ubuntu / Debian and Mint)
To open ports 53 & 43 on the UFW firewall, run the commands below:
$ sudo ufw allow 53/tcp
$ sudo ufw allow 43/tcp
$ sudo ufw reload
For firewalld (RHEL / CentOS / Fedora)
For Redhat based systems such as CentOS, invoke the commands below:
$ sudo firewall-cmd --add-port=53/tcp --permanent
$ sudo firewall-cmd --add-port=43/tcp --permanent
$ sudo firewall-cmd --reload
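You can then verify that the ports are open and that resolution works end to end, for example (assuming dig is installed):
$ sudo firewall-cmd --list-ports          # should now include 53/tcp and 43/tcp
$ dig @8.8.8.8 kubernetes.io +short       # quick DNS sanity check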
I hope that you now have an idea about the 'temporary failure in name resolution' error. I also found a similar GitHub issue that may help:
https://github.com/kubernetes/kubernetes/issues/6667

When NAT is set up with iptables on the KVM host machine, routing to a VM that is set to start automatically when the host starts is not possible

Issue:
Both the host machine and the VM are built with CentOS 6.10.
Traffic between the external machine and the VM is routed using the NAT function of the host's iptables.
The problem: after restarting the host machine or powering it on, iptables is running (service iptables status confirms this), but we cannot route to the VM that was started automatically.
After this happens, restarting iptables (service iptables restart) makes all routing work again.
Both iptables and the VM are running, and the iptables settings are as expected.
I have no idea why it is not possible to route to the VM.
I would be grateful if you could tell me what the problem is.
---------AutostartSetting/StopSetting------------
# vi /etc/sysconfig/libvirt-guests
START_DELAY=30
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=180
# virsh autostart <VM NAME>
-----OS-------
cat /etc/redhat-release
CentOS release 6.10 (Final)
----kvm----
qemu-kvm-0.12.1.2-2.506.el6_10.5.x86_64
additional info:
---------------
#virsh net-edit default
<network>
<name>default</name>
<uuid>1d4f2476-0da2-45d5-b97f-xxxxxxxxxxx</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='off' delay='0' />
<mac address='XX:XX:XX:XX:XX:XX'/>
<ip address='1.2.3.4' netmask='255.255.255.0'>
</ip>
</network>
-----------------
After checking, the startup order of the host daemons is as follows:
1.iptables
2.network
3.qemu-ga
4.libvirtd
5.libvirt-guest
libvirtd depends on network, and network depends on iptables.
The chkconfig order could not be changed.
In this case, should I have an iptables restart script run at the end of the chkconfig order, or have anacron restart iptables? Or do you have any other way to achieve this?
How is the libvirt/qemu network configured? If it is tap networking (or macvtap, same thing for this matter), then the actual tap device (visible in ip addr output) only exists while the VM is paused or running. iptables rules reference interfaces, so if the interface did not exist when iptables (re)started, something needs to re-add the rule(s) when the VM is created. A simple iptables restart would do, too.
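One way to automate that re-add (not from the answer above, just a sketch using libvirt's hook mechanism; check that your libvirt version honors /etc/libvirt/hooks/qemu) is to restart iptables whenever a guest has started:
#!/bin/sh
# /etc/libvirt/hooks/qemu  (make it executable with chmod +x)
# $1 = guest name, $2 = operation (prepare/start/started/stopped/release)
if [ "$2" = "started" ]; then
    # the guest's tap interface exists now, so reloading the saved rules works
    service iptables restart
fi
After creating the hook, restart libvirtd once so it picks the script up.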

Why doesn't CoreDNS resolve any domain name when firewalld is enabled? It works when firewalld is stopped

I have enabled all the required ports. When I enable the firewalld service, CoreDNS stops resolving any domain name, tested with the command $ kubectl exec -ti busybox -- nslookup kubernetes.default
This seems to be a known case, which you can find on GitHub: Fresh deploy with CoreDNS not resolving any dns lookup #1056.
There seem to be a few solutions, each pointing to a different underlying problem.
One being:
sudo systemctl stop firewalld
Please remember this is not recommended.
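If you want to keep firewalld running instead, a commonly suggested sketch (not from this answer; 10.244.0.0/16 and 10.96.0.0/12 are only example pod and service CIDRs, adjust them to your cluster) is to enable masquerading and trust the cluster networks:
$ sudo firewall-cmd --add-masquerade --permanent
$ sudo firewall-cmd --zone=trusted --add-source=10.244.0.0/16 --permanent   # pod CIDR (example)
$ sudo firewall-cmd --zone=trusted --add-source=10.96.0.0/12 --permanent    # service CIDR (example)
$ sudo firewall-cmd --reload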
Another solution might be:
Setting the FORWARD chain policy to ACCEPT: iptables -P FORWARD ACCEPT.
Also check whether the CoreDNS deployment has enough resources, as resource pressure might be causing restarts.
You need to provide more details regarding your cluster so we can pinpoint the issue.
This problem may also be caused by packet forwarding between interfaces being disabled. There are two options:
First, for the current session only (also good for testing):
$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
For a more permanent solution:
$ sudo vim /etc/sysctl.conf
# add: net.ipv4.ip_forward=1
$ sudo /sbin/sysctl -p

kubectl : Unable to connect to the server : dial tcp 192.168.214.136:6443: connect: no route to host

I recently installed Kubernetes on VMware and also configured a few pods; while configuring them, it automatically used the IP of the VMware VM. I was able to access the application at that time, but I recently rebooted the VM and the machine that hosts the VM. During this, the IP of the VM changed, I guess, and now I get the error below when using the command kubectl get pod -n <namespaceName>:
userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
kubectl cluster-info as well as other related commands gives same output.
In the VMware Workstation settings, we are using the network adapter option that shares the host's IP address (NAT). We are not sure if this has any impact.
We also tried to add the entries below in /etc/hosts, but it did not work.
127.0.0.1 localhost
192.168.214.136 localhost
127.0.1.1 ubuntu
I expect to get the pods running again so that I can access the application. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround to get the pods back to a running state.
If you use minikube, sometimes all you need is to restart minikube.
Run:
minikube start
I encountered the same issue - the problem was that the master node didn't expose port 6443 outside.
Below are the steps I took to fix it.
1 ) Check IP of api-server.
This can be verified via the .kube/config file (under server field) or with: kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system.
2 ) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.
3 ) If port 6443 is open, you should get something related to the certificate, like:
curl: (60) SSL certificate problem: unable to get local issuer certificate
4 ) If port 6443 is not open:
4.A ) SSH into master node.
4.B ) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed).
4.C ) Run sudo firewall-cmd --reload.
4.D ) Run sudo firewall-cmd --list-all and you should see port 6443 is updated:
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: dhcpv6-client ssh
ports: 6443/tcp <---- Here
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
The common practice is to copy the config file to the home directory:
sudo cp /etc/kubernetes/admin.conf ~/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
Also, make sure that the api-server address is valid:
server: https://<master-node-ip>:6443
If not, you can manually edit it using any text editor.
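Alternatively, you can let kubectl rewrite the entry for you (a sketch; "kubernetes" is the usual kubeadm default cluster name, check yours with the first command):
$ kubectl config get-clusters
$ kubectl config set-cluster kubernetes --server=https://<master-node-ip>:6443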
You need to export the admin.conf file as the kubeconfig before running kubectl commands. You can set this as an environment variable:
export KUBECONFIG=<path>/admin.conf
After this you should be able to run kubectl commands. I am assuming that your K8s cluster setup is otherwise correct.
Last night I had the exact same error installing Kubernetes using this puppet module: https://forge.puppet.com/puppetlabs/kubernetes
It turns out that an incorrect iptables setting on the master blocks all non-local requests towards the API.
The way I solved it (brute-force solution) was to:
completely remove all installed k8s-related software (also all config files, etcd data, docker images, mounted tmpfs filesystems, ...)
wipe the iptables completely: https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
reinstall
This is what solved the problem in my case.
There is probably a much nicer and cleaner way to do this (i.e. simply change the iptables rules to allow access).
If you are getting the error below, also check the token validity.
Unable to connect to the server: dial tcp 192.168.93.10:6443: connect: no route to host
Check your token validity with the command kubeadm token list. If your token has expired, you have to reset the cluster using kubeadm reset and then initialize it again using kubeadm init --token-ttl 0.
Then check the token status again using kubeadm token list. Note that the TTL value will now be <forever> and the Expires value will be <never>.
example:-
[root@master1 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
nh48tb.d79ysdsaj8bchms9 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
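Put together, the reset flow looks roughly like this (a sketch; remember that kubeadm reset is destructive and wipes the cluster state on the node, so worker nodes will have to re-join afterwards):
$ sudo kubeadm token list              # check whether the token has expired
$ sudo kubeadm reset                   # tears down the control plane on this node
$ sudo kubeadm init --token-ttl 0      # re-initialize with a never-expiring token
$ sudo kubeadm token list              # TTL should now show <forever>, EXPIRES <never>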
On Ubuntu 22.04 LTS (screenshot omitted): select the docker-desktop context and run your command again, e.g. kubectl apply -f <myimage.yaml>
Run the minikube start command.
The reason is that your minikube cluster (with the docker driver) stopped when you shut down the system.
To all those who are trying to learn and experiment with Kubernetes using Ubuntu on Oracle VM (VirtualBox):
The IP address is assigned to the guest OS/VM based on the network adapter selection. Depending on that selection, you need to configure the settings either in the Oracle VM network section or in your router settings.
See this link for the most common Oracle VM network adapter options:
https://www.nakivo.com/blog/virtualbox-network-setting-guide/
I was using the bridged adapter, which puts the VM and the host OS on the same network. So my router was randomly assigning an IP to my VM after every restart, my cluster stopped working, and I got the same exact error message posted in the question.
> k get pods -A
> Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
> systemctl status kubelet
> ........
> ........ "Error getting node" err="node \"node\" not found"
The cluster started working again after reserving a static IP address for my VM in the router settings. (If you are using a NAT adapter, you should configure this in the VM network settings.)
When reserving the IP address for your VM, make sure to assign the same old IP address that was used when configuring the kubelet.

iptables rules deleted after reboot on Kubernetes nodes

After manually adding some iptables rules and rebooting the machine, all of the rules are gone (no matter the type of rule).
ex.
$ iptables -A FUGA-INPUT -p tcp --dport 23 -j DROP
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
DROP tcp -- anywhere anywhere tcp dpt:telnet
After the reboot:
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
If I am not mistaken, kube-proxy running on every node dynamically modifies iptables. If that is correct, how can I add rules that are permanent but still let kubernetes/kube-proxy do its magic, without deleting the INPUT, FORWARD and OUTPUT rules that both Kubernetes and the Weave network plugin dynamically generate?
Adding iptables rules on any system is not a persistent action; they are forgotten on reboot, and a k8s node is no exception. I doubt that k8s erases the iptables rules when it starts, so you could try this:
create your rules (do this starting with empty iptables, with iptables -A commands, as you need them)
run iptables-save >/etc/my-iptables-rules (NOTE you could create a rules file manually, too).
create a system service script that runs on boot (or use /etc/rc.local) and add iptables-restore -n </etc/my-iptables-rules to it. This will load your rules on reboot. Note that if you use rc.local, your iptables-restore command may well run after k8s starts; check that your iptables -A commands are not sensitive to being loaded after those of k8s, and if needed replace the -A commands in the file with -I (to place your commands first in the tables).
(be aware that some OS installations might include a boot-time service that loads iptables as well; there are some firewall packages that install such a service - if you have one on your server, the best approach is to add your rules to that firewall's config, not write and load your own custom config).
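As a sketch of the third step on a systemd-based node (the unit name and the rules path are made up for the example):
# /etc/systemd/system/my-iptables-restore.service
[Unit]
Description=Restore custom iptables rules at boot
# run before networking, and therefore before kube-proxy/weave add their rules
Before=network-pre.target
Wants=network-pre.target

[Service]
Type=oneshot
# -n (--noflush) appends the saved rules without flushing whatever is already there
ExecStart=/sbin/iptables-restore -n /etc/my-iptables-rules
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
Enable it once with:
$ sudo systemctl daemon-reload
$ sudo systemctl enable my-iptables-restore.service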