Can I monitor iptables with a monitoring system - CentOS

I have a CentOS server and I installed iptables on it as a firewall.
I want to connect iptables to a monitoring system like PRTG, SolarWinds, or OpManager.
Is this possible?

Yes.
I don't know about the other monitoring systems, but you can do that with SolarWinds Server & Application Monitor by creating a custom Linux Script component. You can find an example in the Thwack forum.
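As a purely illustrative sketch of the kind of script such a component could run (the exact Statistic/Message output convention and the chains you care about should be checked against the SAM docs and your own ruleset):
#!/bin/bash
# Sum the packet counters of all DROP rules in the INPUT chain.
# Requires root (or CAP_NET_ADMIN) to read iptables counters.
dropped=$(iptables -L INPUT -v -n -x | awk '$3 == "DROP" {sum += $1} END {print sum + 0}')
echo "Statistic.DroppedPackets: ${dropped}"
echo "Message.DroppedPackets: ${dropped} packets matched DROP rules in the INPUT chain"
exit 0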

Related

Kubernetes on AWS EKS: Is there a way to configure Service's load balancer algorithm?

I am new to Kubernetes and am currently deploying an application on AWS EKS.
I want to configure a Service of my K8s cluster deployed on AWS EKS.
Here is a description of my issue. I ran an experiment: I spun up 2 Pods running the same web application and exposed them using a Service of type LoadBalancer. I then took the external IP of that Service and found that the requests I sent were not distributed evenly across the Pods behind it. To be more precise, I sent 3 requests and all three were processed by the same Pod.
Therefore, I want to configure the load balancer algorithm to be round robin or least_connection to resolve this issue.
I asked a similar question before and was advised to try the IPVS mode of kube-proxy, but I did not get detailed instructions on how to apply that configuration, and I did not find any useful material online. If IPVS mode is a feasible solution to this issue, please provide some detailed instructions.
Thanks!
Your expectation of a load balancer is correct: it should distribute the incoming requests. But since you are performing the requests over a WebSocket, they are all being handled by the same Pod.
A WebSocket uses a persistent connection between a client and a server, which means the connection is reused rather than a new one being established for every request (which is costly). So you are not getting the load balancing you wanted.
Use something that uses non-persistent connections to check the load balancing feature:
$ curl -H "Connection: close" http://address:port/
I had the exact same issue, and while using the -H "Connection: close" header when testing externally load-balanced connections helps, I still wanted inter-service communication to benefit from IPVS with rr or sed.
To summarize, you will need to set up the following dependencies on the nodes. I would suggest adding these to your cloud config.
#!/bin/bash
# Install the IPVS userspace tooling and load the kernel modules kube-proxy needs for IPVS mode.
sudo yum install -y ipvsadm
sudo ipvsadm -l
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack_ipv4
Once that is done, you will need to edit your kube-proxy-config ConfigMap in the kube-system namespace so that it has mode: ipvs and scheduler: <desired lb algo>.
Lastly, you will need to update the container command for the kube-proxy DaemonSet with the appropriate flags: --proxy-mode=ipvs and --ipvs-scheduler=<desired lb algo>. A rough sketch of these steps follows the list below.
Following are the available lb algos for IPVS:
rr: round-robin
lc: least connection
dh: destination hashing
sh: source hashing
sed: shortest expected delay
nq: never queue
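As a hedged sketch only (the exact ConfigMap and DaemonSet names can differ between EKS versions, so verify them on your cluster first), the edits can be applied with kubectl like this:
# Set mode: ipvs and scheduler: rr (or lc, sed, ...) in the kube-proxy configuration.
kubectl -n kube-system edit configmap kube-proxy-config
# Add --proxy-mode=ipvs and --ipvs-scheduler=rr to the kube-proxy container args.
kubectl -n kube-system edit daemonset kube-proxy
# Restart the kube-proxy pods so they pick up the new mode.
kubectl -n kube-system rollout restart daemonset kube-proxy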
Source: https://medium.com/@selfieblue/how-to-enable-ipvs-mode-on-aws-eks-7159ec676965

Network Policy in Kubernetes under the hood

I have a network policy created and implemented as per https://github.com/ahmetb/kubernetes-network-policy-recipes, and it's working fine. However, I would like to understand how exactly this gets implemented on the back end. How does a network policy allow or deny traffic: by modifying iptables? Which Kubernetes components are involved in implementing this?
"It depends". It's up to whatever controller actually does the setup, which is usually (but not always) part of your CNI plugin.
The most common implementation is Calico's Felix daemon, which supports several backends, but iptables is a common one. Other plugins use eBPF network programs or other firewall subsystems to similar effect.
Network Policy is implemented by network plugins (Calico, for example), most commonly by setting up Linux iptables/Netfilter rules on the Kubernetes nodes.
From the docs here
In the Calico approach, IP packets to or from a workload are routed and firewalled by the Linux routing table and iptables infrastructure on the workload’s host. For a workload that is sending packets, Calico ensures that the host is always returned as the next hop MAC address regardless of whatever routing the workload itself might configure. For packets addressed to a workload, the last IP hop is that from the destination workload’s host to the workload itself
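To make the iptables mechanism concrete, here is a purely hypothetical sketch of the kind of rules a plugin could install for one pod. The chain name and IPs are made up, and real Calico chains use generated names (cali-...) and are considerably more involved:
# Create a per-pod ingress policy chain, allow the whitelisted peers, drop everything else,
# and send forwarded traffic destined for the pod's IP through that chain.
iptables -N POD_INGRESS_POLICY
iptables -A POD_INGRESS_POLICY -s 10.244.2.0/24 -p tcp --dport 8080 -j ACCEPT
iptables -A POD_INGRESS_POLICY -j DROP
iptables -A FORWARD -d 10.244.1.5/32 -j POD_INGRESS_POLICY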

Configure Zabbix monitoring tool on kubernetes cluster in GCP

I am trying to configure the Zabbix monitoring tool on top of a Kubernetes cluster in Google Cloud Platform.
I followed the KB and the Zabbix server was configured successfully. I have also configured a Zabbix agent using this link.
Now I would like to know how my pods running on the cluster can be added to this Zabbix server. Seeking your help.
Thanks in advance.
The Dockbix Docker images used here already have Zabbix preconfigured with auto-registration, which should discover your nodes/containers (containers != pods). I guess you didn't configure security groups or DNS names properly.

Can I set up a Kubernetes cluster using kubeadm on Ubuntu machines inside an office LAN

I was looking at this URL.
It says-"If you already have a way to configure hosting resources, use kubeadm to easily bring up a cluster with a single command per machine."
What do you mean by "If you already have a way to configure hosting resources"?
If I have a few Ubuntu machines within my office LAN can I setup Kubernetes cluster on them using kubeadm?
It just means that you already have a way of installing an OS on these machines, booting them, assigning them IPs on your LAN, and so on. If you can SSH into your nodes-to-be, you are ready!
Follow the guide carefully and you will have a demo cluster in no time. A rough sketch of the commands involved is below.
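A minimal sketch, assuming a recent kubeadm on each Ubuntu machine; versions, the pod CIDR, and the choice of network add-on will vary:
# On the machine that will become the control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Install a pod network add-on (e.g. Flannel or Calico) following its own instructions.
# On each worker machine, run the join command that kubeadm init prints, e.g.:
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>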

What does userspace mode mean in kube-proxy's proxy mode?

kube-proxy has an option called --proxy-mode, and according to the help message, this option can be userspace or iptables (see below).
# kube-proxy -h
Usage of kube-proxy:
...
--proxy-mode="": Which proxy mode to use: 'userspace' (older, stable) or 'iptables' (experimental). If blank, look at the Node object on the Kubernetes API and respect the 'net.experimental.kubernetes.io/proxy-mode' annotation if provided. Otherwise use the best-available proxy (currently userspace, but may change in future versions). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
...
I can't figure out what userspace mode means here.
Can anyone tell me what the working principle is when kube-proxy runs in userspace mode?
Userspace and iptables refer to what actually handles the connection forwarding. In both cases, local iptables rules are installed to intercept outbound TCP connections that have a destination IP address associated with a service.
In the userspace mode, the iptables rule forwards to a local port where a go binary (kube-proxy) is listening for connections. The binary (running in userspace) terminates the connection, establishes a new connection to a backend for the service, and then forwards requests to the backend and responses back to the local process. An advantage of the userspace mode is that because the connections are created from an application, if the connection is refused, the application can retry to a different backend.
In iptables mode, the iptables rules are installed to directly forward packets that are destined for a service to a backend for the service. This is more efficient than moving the packets from the kernel to kube-proxy and then back to the kernel so it results in higher throughput and better tail latency. The main downside is that it is more difficult to debug, because instead of a local binary that writes a log to /var/log/kube-proxy you have to inspect logs from the kernel processing iptables rules.
In both cases there will be a kube-proxy binary running on your machine. In userspace mode it inserts itself as the proxy; in iptables mode it configures iptables rather than proxying connections itself. The same binary works in both modes, and the behavior is switched via a flag or by setting an annotation on the node in the apiserver.
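A conceptual sketch of the difference, with made-up service and pod addresses; the real chains kube-proxy installs (KUBE-SERVICES, KUBE-SEP-*, etc.) are considerably more elaborate:
# userspace mode: redirect traffic for the service IP to a local port where kube-proxy listens,
# and let the kube-proxy process open its own connection to a backend pod.
iptables -t nat -A PREROUTING -d 10.0.0.10/32 -p tcp --dport 80 -j REDIRECT --to-ports 36000
# iptables mode: DNAT traffic for the service IP directly to a backend pod, entirely in the kernel.
iptables -t nat -A PREROUTING -d 10.0.0.10/32 -p tcp --dport 80 -j DNAT --to-destination 10.244.1.5:8080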