I have been running fail2ban on my server for a while without any problems, but I have recently seen fail2ban restart randomly. After looking through my logs I noticed that the restarts occur whenever someone gets banned.
So every time someone gets banned, the fail2ban server restarts.
Here are a couple of logs.
2012-03-23 00:58:39,025 fail2ban.actions: WARNING [dovecot] Ban xx.xx.xx.xx (ip removed)
What I have been able to find out is that fail2ban bans the IP and then restarts, and the only error I can find is:
modprobe: FATAL: Module ip_tables not found.
Iptables is present and running, and I can issue any commands without iptables failing or fail2ban crashing.
Please let me know if you have any ideas, and whether you would need the fail2ban config files.
System Information:
CentOS 5.8
fail2ban 0.8.2
iptables 1.3.5
You have to enable the ip_tables module in your kernel.
Run modprobe ip_tables
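If you want to confirm whether the module is actually loaded, and have it loaded at boot, something along these lines should work on CentOS 5 (the /etc/rc.modules hook is the stock RHEL/CentOS mechanism, but verify the path on your system):

# Check whether ip_tables is already loaded
lsmod | grep ip_tables

# Load it manually
modprobe ip_tables

# One way to load it at every boot on RHEL/CentOS 5 (rc.sysinit runs this file if it is executable)
echo "modprobe ip_tables" >> /etc/rc.modules
chmod +x /etc/rc.modules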
Good afternoon!
Setting up my first k8s cluster :)
I set up a virtual machine on VMware, set up a control plane, connected a worker node, and configured kubectl on the master node and on a laptop (VMware is installed on it). I am seeing the following problem: periodically, every 2 to 5 minutes, the api-server stops responding, and running any kubectl command (for example, "kubectl get nodes") produces the error: "The connection to the server 192.168.131.133:6443 was refused - did you specify the right host or port?" A few minutes pass and everything is restored; in response to "kubectl get nodes", the system shows the nodes. A few more minutes, and the same error again. The error appears at the same time on the master node and on the laptop.
This is what it looks like (for about 10 minutes):
At the same time, if you execute commands on the master node
$ sudo systemctl stop kubelet
$ sudo systemctl start kubelet
everything is immediately restored. And after a few minutes, the same error appears again.
I would be grateful if you could help interpret these logs and tell me how to fix this problem.
kubectl logs at the time of the error (20:42:55):
I could imagine that the process on 192.168.131.133 is restarting, which would lead to a connection refused while it is no longer listening on the API port.
You should start by investigating whether you can see any hardware issues.
Either CPU usage is climbing and causing a restart, or there is a memory leak.
You can check the running processes with:
ps -ef
Use the
top
command to see CPU consumption.
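For example, a rough sketch (nothing Kubernetes-specific) to see what is eating CPU or memory around the moment the API server drops out:

# One-shot snapshot of the heaviest CPU consumers
top -b -n 1 | head -n 20

# Processes sorted by memory usage; look for kube-apiserver and etcd near the top
ps -eo pid,ppid,%cpu,%mem,cmd --sort=-%mem | head -n 15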
There should be some logs and events in k8s available as well.
It does not seem to be a connectivity issue, since you are getting a clear failure back.
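If you want to dig into those logs and events, a hedged sketch (crictl assumes a containerd-based node; use docker ps / docker logs instead if your runtime is Docker):

# Cluster events, most recent last (run while the API server is responding)
kubectl get events -A --sort-by=.lastTimestamp

# The control-plane components run as static pods; check whether kube-apiserver keeps restarting
crictl ps -a | grep kube-apiserver

# Read its last logs, using the container id printed by the previous command
crictl logs <container-id>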
I have no clue how it got installed or why it got installed. I do have Docker installed, but I don't have Kubernetes installed via Docker. I was assuming it was due to Killer Intelligence, but I have no idea how to verify or check that.
This isn't a Kubernetes process. It is a result of installing Docker and how it configures itself to let you run Kubernetes locally. The string "kubernetes" is coming from your hosts file, even though you might not have turned on the feature to use k8s.
If you open C:\Windows\System32\drivers\etc\hosts, you'll see a line that associates "kubernetes" with 127.0.0.1, your localhost.
So, while netstat may show "kubernetes" as the destination address, it's a little misleading because anything going to 127.0.0.1 will show up as "kubernetes".
netstat -ab will show the executable associated with each connection if you want to verify.
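For instance, two quick checks from a Command Prompt (run it as Administrator so netstat -b can resolve the owning executables):

REM Show the hosts-file line that maps "kubernetes" to 127.0.0.1
findstr /i "kubernetes" C:\Windows\System32\drivers\etc\hosts

REM -a lists all connections, -b shows the executable that owns each one
netstat -ab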
I have a Kubernetes cluster and everything was working fine, but after 8 days, when I run kubectl get pods, it shows:
The connection to the server <host>:6443 was refused - did you specify the right host or port?
I have one master and one worker.
I run them in my lab without any cloud.
systemctl status kubelet
shows **node not found**
My /etc/hosts has been checked and it is correct.
I have limited hardware. I ran these commands to solve the issue:
sudo -i
swapoff -a
exit
strace -eopenat kubectl version
Most likely the servers were rebooted; I had a similar problem.
Check the kubelet logs on the master server and take action.
If you can share the kubelet logs, we will be able to offer you further help.
A reboot itself should not be a problem - but if you did not disable swap permanently, a reboot will enable swap again and the API server will not launch. That could be the first thing to check.
Second - check free disk space: the API server will not respond if the disk is full (it will raise a disk-pressure event and try to evict pods).
If that does not help, please add the logs from kubelet (systemctl and journalctl).
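A quick sketch of the swap and disk checks mentioned above (verify /etc/fstab by hand after the sed, since fstab layouts vary):

# Is swap active again after the reboot? This should print nothing when swap is off
swapon --show
free -h

# Disable swap now and keep it disabled across reboots by commenting out the swap line
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab

# Check free disk space; a full disk triggers disk-pressure evictions
df -h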
verify /var/log/messages to get further information about the error
or
systemctl status kubelet
or
Alternatively, journalctl will also show the details.
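For example, the standard systemd commands for that would be:

# Last 200 kubelet log lines without the pager, then follow new ones live
journalctl -u kubelet -n 200 --no-pager
journalctl -u kubelet -f

# On distributions that still write to syslog files
tail -n 100 /var/log/messages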
I have downloaded and installed Kubernetes, VirtualBox and Minikube. Later, I started Minikube on the VM. When I try running the kubectl version command from my terminal I receive the error message below. Could anybody tell me what the reason behind this error is? I have looked everywhere but couldn't find the right resolution for this problem. I am new to this and just taking baby steps. Any help would be appreciated. Thank you.
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
Could anybody tell me what the reason behind this error is?
It is because your kubectl configuration file, housed at $HOME/.kube/config, points at that IP address and port but there is nothing listening on that IP and port.
Using the minikube status command will tell you what it thinks is going on, and minikube ssh will place you inside the virtual machine so you can look around for yourself, which can be helpful for getting the Docker logs that explain why nothing is listening on the port you were expecting.
A good place to start is to run minikube ip and see if it matches the IP address kubectl is expecting (as seen in the error message). If not, update your kubeconfig accordingly.
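A small sketch of that comparison (the jsonpath query simply prints the API server URL from your current kubeconfig context):

# What minikube thinks is running
minikube status

# IP of the minikube VM
minikube ip

# IP and port kubectl is actually targeting; compare with the output above
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'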
minikube ssh
and then
journalctl -u kubelet
The above should provide you with additional information about why the server is refusing connections.
This answer might also be helpful: How to diagnose Kubernetes not responding on API
If you are running behind a proxy, make sure to export the NO_PROXY environment variable or permanently set it in your /etc/environment file.
export NO_PROXY=192.168.99.0/24,127.0.0.1,...
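To make it permanent, one option is appending it to /etc/environment; the host list below is only an example built from the subnet in the error above, so adjust it to your setup:

# Takes effect for new login sessions; /etc/environment uses plain KEY=value lines
echo 'NO_PROXY=192.168.99.0/24,127.0.0.1,localhost' | sudo tee -a /etc/environment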
I have a Weave network plugin.
Inside my folder /etc/cni/net.d there is a 10-weave.conf:
{
  "name": "weave",
  "type": "weave-net",
  "hairpinMode": true
}
My Weave pods are running and the DNS pod is also running.
But when I want to run a pod, like a simple nginx which will pull an nginx image, the pod gets stuck at ContainerCreating, and describing the pod gives me the error: failed create pod sandbox.
When I run journalctl -u kubelet I get this error:
cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Is my network plugin not configured correctly?
I used this command to configure my Weave network:
kubectl apply -f https://git.io/weave-kube-1.6
When this didn't work, I also tried this command:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
I even tried flannel and that gives me the same error.
The system I am setting Kubernetes up on is a Raspberry Pi.
I am trying to build a Raspberry Pi cluster with 3 nodes and 1 master with Kubernetes.
Does anyone have any ideas on this?
Thank you all for responding to my question. I have solved my problem now. For anyone who comes across my question in the future, the solution was as follows.
I cloned my Raspberry Pi images because I wanted a basicConfig.img for when I needed to add a new node to my cluster or when one goes down.
Weave (the network plugin I used) got confused because the OS on every node and on the master had the same machine-id. When I deleted the machine-id and created a new one (and rebooted the nodes), my error was fixed. The commands to do this were:
sudo rm /etc/machine-id
sudo rm /var/lib/dbus/machine-id
sudo dbus-uuidgen --ensure=/etc/machine-id
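To confirm the fix, you can check that every node now reports a different id (the hostnames below are placeholders for your Pis):

# Run on each Pi; every node should print a unique value
cat /etc/machine-id

# Or from one machine, assuming ssh access to nodes with these example names
for h in pi-master pi-node1 pi-node2 pi-node3; do ssh "$h" cat /etc/machine-id; done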
Once again my patience was tested, because my Kubernetes setup was normal and my Raspberry Pi OS was normal. I found this with the help of someone in the Kubernetes community, which again shows how important and great our IT community is. To the people of the future who come to this question: I hope this solution fixes your error and reduces the amount of time you spend searching for a stupid small thing.
Looking at the pertinent code in Kubernetes and in CNI, the specific error you see seems to indicate that it cannot find any files ending in .json, .conf or .conflist in the directory given.
This makes me think it could be something like the conf file not being present on all the hosts, so I would verify that as a first step.
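A quick way to check that on every host (the file name matches the 10-weave.conf from the question):

# List whatever CNI config files exist on this node; only .conf, .conflist and .json are picked up
ls -l /etc/cni/net.d/

# Confirm the Weave config is present and readable
cat /etc/cni/net.d/10-weave.conf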