Envoy sidecar proxy - Kubernetes

I am trying to understand Istio and Envoy behavior and how the proxy works.
Let's assume I created an application that keeps sending requests to the Google Search API. When I deploy it in my k8s cluster with Istio and Envoy as a sidecar container, it is said that all requests are routed via the proxy/sidecar container.
My question is: both the application and the proxy/sidecar run in the same pod and share the same IP. For the app to send its requests to the sidecar, wouldn't it have to be modified to send them to localhost (i.e., to the proxy server port) so the proxy can forward them to Google? How are the outbound requests of one container routed to the other, and where is this configuration maintained?
Can someone who understands this well please explain?

The istio-init init container is used to set up the iptables rules so that inbound/outbound traffic will go through the sidecar proxy. An init container is different from an app container in the following ways:
It runs before an app container is started, and it always runs to completion.
If there are multiple init containers, each must complete successfully before the next one is started.
So you can see how this type of container is perfect for a set-up or initialization job that does not need to be part of the actual application container. In this case, istio-init does just that and sets up the iptables rules.
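As a rough sketch of what that setup looks like: in the Istio versions of this era, istio-init ran a wrapper script around iptables (the blog post linked below walks through it). The flag values here are illustrative for an app listening on port 80, not copied from a live cluster:
istio-iptables.sh -p 15001 -u 1337 -m REDIRECT -i '*' -x "" -b 80 -d ""
Here -p is the Envoy port all traffic is redirected to, -u is the UID the proxy runs as (so its own traffic is not re-intercepted), -i/-x are the outbound CIDR ranges to include/exclude, and -b/-d are the inbound ports to include/exclude.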
istio-proxy: this is the actual sidecar proxy (based on Envoy).
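You can confirm that both containers were injected into the pod; a minimal check, with the pod name as a placeholder:
kubectl get pod <pod-name> -o jsonpath='{.spec.initContainers[*].name}{"\n"}{.spec.containers[*].name}'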
Get into the application pod and look at the configured iptables rules. I am going to show an example using nsenter. Alternatively, you can enter the container in privileged mode to see the same information. For folks without access to the nodes, using kubectl exec to get into the sidecar and running iptables is more practical (a sketch follows after the output below).
$ docker inspect b8de099d3510 --format '{{ .State.Pid }}'
4125
$ nsenter -t 4125 -n iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N ISTIO_INBOUND
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-N ISTIO_REDIRECT
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp -m tcp --dport 80 -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -j ISTIO_REDIRECT
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
The output above clearly shows that all incoming traffic to port 80, the port the application is listening on, is now REDIRECTed to port 15001, the port the istio-proxy (Envoy) sidecar is listening on. The same holds true for the outgoing traffic.
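For the exec-based alternative mentioned above, something like this should show the same table from inside the sidecar. Whether sudo and the iptables binary are available in the istio-proxy image depends on the Istio version, so treat this as a sketch:
kubectl exec -it <pod-name> -c istio-proxy -- sudo iptables -t nat -S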
Update: in place of istio-init, there is now the option of using the new Istio CNI plugin, which removes the need for the init container and its associated privileges. The istio-cni plugin sets up the pod's networking to fulfill this requirement in place of the istio-init approach used by currently injected pods.
https://istio.io/blog/2019/data-plane-setup/#traffic-flow-from-application-container-to-sidecar-proxy
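If I remember the Helm-based installs of that era correctly, the CNI plugin was enabled with a chart value roughly like the following; the flag name has changed across Istio versions, so treat this as illustrative:
helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set istio_cni.enabled=true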

Related

Is there a way to change the local bound port using iptables?

Sorry, I'm a noob with iptables.
I have a VPN app which binds to local port 1080 and connects to destination port 1194 (OpenVPN). The app does not support privileged port binding (which needs root, which I have). I want the app to bind to local port 25 instead. I have browsed Google, and the answer seems to be iptables. I have seen many posts, many of which say the SNAT target is the one I should use.
I have tried this command:
iptables -I POSTROUTING -o wlan0 -t nat -p tcp --destination 195.123.216.159 -m tcp --dport 1194 -j SNAT --to-source 192.168.43.239:25
And these:
iptables -I FORWARD -p tcp -d 192.168.43.239 -m tcp --dport 25 -j ACCEPT
iptables -I FORWARD -p tcp -s 192.168.43.239 -m tcp --sport 25 -j ACCEPT
iptables -I OUTPUT -o wlan0 -p tcp -m tcp --sport 25 -j ACCEPT
iptables -I INPUT -i wlan0 -p tcp -m tcp --dport 25 -j ACCEPT
What I want is for the netstat output to look something like this:
tcp 0 0 192.168.43.239:25 195.123.216.159:1194 ESTABLISHED
But instead, after running all of the commands above, the netstat output is this:
tcp 0 0 192.168.43.239:1080 195.123.216.159:5000 ESTABLISHED
Is it impossible to change the binding port using iptables? Please help me understand the networking concepts involved.
Turns out iptables was just doing its job correctly. Translated packets are simply not shown by netstat, which lists sockets rather than the post-NAT view of connections. I was lost and completely failed to realize that iptables doesn't alter IPv6 traffic, which is what the app was using. And the FORWARD rules were not necessary, since the chain policy was to accept the packets anyway.
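As a side note: to see what NAT actually did to a connection, the conntrack table is the place to look instead of netstat. A minimal sketch, assuming the conntrack userspace tool is installed (the port is the OpenVPN port from the question):
# show original and translated tuples for connections to port 1194
conntrack -L -p tcp --orig-port-dst 1194
# and the IPv6 table, which the app was actually using
conntrack -L -f ipv6 -p tcp --orig-port-dst 1194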

Kubernetes UDP Service Connection Problem

I am trying to resolve a cluster-internal DNS name from a node in the cluster.
Example: dig @10.96.0.10 kubernetes.local (10.96.0.10 being the service IP of the DNS)
I am expecting to get the IP of the service (10.96.0.1 in this case); however, the connection times out.
This problem only happens when I connect from a host in the cluster to a service via UDP while the pods of the service are not hosted on the node I am connecting from.
If I try to connect from a pod running on the same node, it works as expected.
If I try to connect to the pods directly instead of the service, it works as expected.
If I try to connect to the service via TCP instead of UDP, it works as expected.
If I try to connect to the service when the pods are running on the same node I am connecting from, it works as expected.
I am running Kubernetes v1.17 (Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}) with the flannel pod network, running on Debian Buster.
So far I have looked at the iptables rules; the service and pod rules seem correct.
The relevant sections of iptables-save:
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SBQ7D3CPOXKXY6NJ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-CDWMPIYNA34YYC2O
-A KUBE-SEP-CDWMPIYNA34YYC2O -s 10.244.1.218/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CDWMPIYNA34YYC2O -p udp -m udp -j DNAT --to-destination 10.244.1.218:53
-A KUBE-SEP-SBQ7D3CPOXKXY6NJ -s 10.244.1.217/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-SBQ7D3CPOXKXY6NJ -p udp -m udp -j DNAT --to-destination 10.244.1.217:53
I also ran sudo tcpdump -i flannel.1 udp on the sender and the receiver and found out that the packets get sent but not received.
When I address the pods directly, for example via dig @10.244.1.218 kubernetes.local, the packet gets sent and received properly.
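There is no accepted answer here, but since the packets leave the sender and never arrive on the receiver's flannel.1, one thing worth ruling out in a flannel/VXLAN setup like this is broken UDP checksums caused by checksum offload on the VXLAN device. The commands below are a diagnostic sketch, not a confirmed fix for this cluster:
# -vv makes tcpdump verify and print checksums; look for "bad udp cksum"
tcpdump -i flannel.1 -vv udp port 53
# commonly cited workaround: disable tx checksum offload on the vxlan device
ethtool -K flannel.1 tx-checksum-ip-generic off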

iptables: add DNAT rules to forward requests on an IP:port to a container port

I have a Kubernetes cluster which has 2 interfaces:
eth0: 10.10.10.100 (internal)
eth1: 20.20.20.100 (External)
There are a few pods running in the cluster with flannel networking.
POD1: 172.16.54.4 (nginx service)
I want to access 20.20.20.100:80 from another host connected to the above k8s cluster, so that I can reach the nginx pod.
I have enabled IP forwarding and also added a DNAT rule as follows:
iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.16.54.4:80
After this, when I try to curl 20.20.20.100, I get:
Failed to connect to 10.10.65.161 port 80: Connection refused
How do I get this working?
You can try:
iptables -t nat -A PREROUTING -p tcp -d 20.20.20.100 --dport 80 -j DNAT --to-destination 172.16.54.4:80
But I don't recommend managing the iptables rules yourself; they are painful to maintain.
You can use hostPort in k8s instead. You may need kubenet as the network plugin, since the cni plugin does not support hostPort.
Why not use the NodePort service type? I think it is a better way to access a service via the host IP. Please try iptables -nvL -t nat and show the details.
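A minimal sketch of the NodePort suggestion, assuming an existing deployment named nginx (the name is a placeholder):
# expose the deployment on a port auto-allocated from the NodePort range
kubectl expose deployment nginx --type=NodePort --port=80
# look up the allocated node port, then curl 20.20.20.100:<node-port>
kubectl get svc nginx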

How to allow incoming connection on a particular port from specific IP

I am running MongoDB in a Docker container with port 27017 exposed to the host to allow remote incoming connections. I want to block incoming connections on this port except from one particular IP. I tried iptables, but it is not working, maybe because of the Docker service, for which iptables commands need to be adapted.
However, I used the following commands:
myserver>iptables -I INPUT -p tcp -s 10.10.4.232 --dport 27017 -j ACCEPT
myserver>iptables -I INPUT -p tcp -s 0.0.0.0/0 --dport 27017 -j DROP
myserver>service iptables save
Then I tried the following to check:
mylocal>telnet myserver 27017
It connects, so the iptables rules are not taking effect.
How do I do it?
I am using CentOS 6.8 and running mongodb 10 in a Docker container.
First, allow the source IP that you wish to permit to connect:
iptables -A INPUT -p tcp --dport 27017 -s 10.10.4.232 -j ACCEPT
Then DROP all the rest:
iptables -A INPUT -p tcp --dport 27017 -j DROP
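One caveat, since this is a Docker setup: traffic to a published container port is DNATed by Docker and traverses the FORWARD chain, not INPUT, so the rules above may never match it. On Docker versions that provide the DOCKER-USER chain (17.06 and later, which this CentOS 6.8 host may predate), the commonly recommended placement is a sketch like:
# drop traffic to the published mongo port from everyone except the allowed IP
iptables -I DOCKER-USER -p tcp --dport 27017 ! -s 10.10.4.232 -j DROP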

iptables: redirect local connections

I used
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 8085
to redirect all HTTP requests to a JBoss server on port 8085. This works fine if the packets come from outside.
If I try to open it from the same machine, it doesn't work; telnet gives connection refused.
How do I redirect local connections?
Working on CentOS, kernel 2.6.18 x64.
Locally generated packets do not come in on eth0, so the PREROUTING rule never sees them.
You have to do this:
iptables -t nat -A OUTPUT --src 0/0 --dst 127.0.0.1 -p tcp --dport 80 -j REDIRECT --to-ports 8085
and:
To redirect locally generated packets, you must have the kernel option CONFIG_IP_NF_NAT_LOCAL set to Y.
from: http://wiki.debian.org/Firewalls-local-port-redirection
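To check whether that option is set for the running kernel, something like this should work, assuming the distro installs the kernel config under /boot (most do):
grep CONFIG_IP_NF_NAT_LOCAL /boot/config-$(uname -r)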
Also, to allow forwarding, run:
sysctl -w net.ipv4.ip_forward=1
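To verify that the OUTPUT rule is actually matching, the packet counters are the quickest check; a minimal sketch:
# the pkts counter on the REDIRECT rule should increase after the curl below
iptables -t nat -L OUTPUT -n -v
curl -v http://127.0.0.1/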