I am trying to connect to an external SCTP server from a pod inside a K8s cluster. The external SCTP server only allows connections from a configured IP address AND a specific source port.
From what I can understand, K8s performs SNAT during connection establishment, replacing the source IP with the K8s node's IP address and the source port with a random port. So the SCTP server sees the random source port and rejects the connection.
The K8s cluster uses the Calico plugin, and I have tried the "disable NAT for target CIDR range" option explained here by installing an IPPool. But it didn't work: tcpdump on the server shows that the source port is still random, and I'm not sure whether the IPPool is picked up at all.
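For reference, the kind of pool I mean looks roughly like this and is applied with calicoctl; the CIDR is just a placeholder for the SCTP server's range:

    # Hypothetical IPPool covering the external SCTP server's range. With
    # "disabled: true" Calico never allocates pod IPs from it, but since the
    # destination now falls inside a Calico IPPool, the natOutgoing SNAT of the
    # pod pool should be skipped for traffic going there.
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: no-nat-to-sctp-server
    spec:
      cidr: 203.0.113.0/24   # placeholder: the external SCTP server's network
      disabled: true
      natOutgoing: false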
So my question is: is there a way to preserve the source port? Am I on the right track trying to disable NAT, i.e. will it work considering the pod IPs are internal?
Note: I am not sure if the problem/solution is related to it, but kube-proxy is in iptables mode.
Note: There is actually an identical question here, and the accepted answer suggesting hostNetwork: true works for me as well. But I can't use hostNetwork, so I wanted to post this as a new question. Also, the "Calico way" of disabling NAT towards specific targets seemed promising, and I'm hoping the Calico folks can help.
Thanks
Related
Problem statement:
With the Istio sidecar injected, I cannot access services running in third-party containers inside my pods when those services do not listen on one of the few allowed ports.
Facts:
I am running on a network with firewalled connections, so only a handful of ports can be used to communicate across the nodes; I cannot change that.
I have some third-party containers running within pods that listen on ports outside the allowed handful.
Without Istio, I can do an iptables REDIRECT in an initContainer and just use any port I want (a sketch follows at the end of this question).
With the Istio sidecar, the Envoy catch-all iptables rules forward the ORIGINAL_DST to Envoy with the original port, so Envoy always tries to connect to a port nobody is listening on. I can see Envoy receiving the connection and trying to reach the pod on the port I faked (the one allowed on the network), not the one the service is actually listening on.
I am trying to avoid using a socat-like solution that runs another process copying from one port to another.
I can use any kind of iptables rules and/or Istio resources, EnvoyFilters, etc.
My istio setup is the standard sidecar setup with nothing particular to it.
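A sketch of the non-Istio workaround mentioned above; the image name and the port numbers (8443 as the firewall-allowed port, 9999 as the port the third-party container really listens on) are placeholders:

    # initContainer that redirects the allowed port to the real listening port inside
    # the pod's network namespace. Requires NET_ADMIN; the image just needs iptables.
    initContainers:
    - name: port-redirect
      image: my-registry/iptables-tools:latest   # placeholder image
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]
      command: ["sh", "-c"]
      args:
      - iptables -t nat -A PREROUTING -p tcp --dport 8443 -j REDIRECT --to-ports 9999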
I have network policies created and implemented as per https://github.com/ahmetb/kubernetes-network-policy-recipes, and they are working fine. However, I would like to understand how exactly this is implemented in the back end. How does a network policy allow or deny traffic, by modifying iptables? Which Kubernetes components are involved in implementing this?
"It depends". It's up to whatever controller actually does the setup, which is usually (but not always) part of your CNI plugin.
The most common implementation is Calico's Felix daemon, which supports several backends, but iptables is a common one. Other plugins use eBPF network programs or other firewall subsystems to similar effect.
Network Policy is implemented by network plugins (Calico, for example), most commonly by setting up Linux iptables/Netfilter rules on the Kubernetes nodes.
From the docs here
In the Calico approach, IP packets to or from a workload are routed and firewalled by the Linux routing table and iptables infrastructure on the workload’s host. For a workload that is sending packets, Calico ensures that the host is always returned as the next hop MAC address regardless of whatever routing the workload itself might configure. For packets addressed to a workload, the last IP hop is that from the destination workload’s host to the workload itself
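If you want to see this concretely on a node running Calico's iptables dataplane, a quick (non-authoritative) way to peek at the generated rules is:

    # Felix programs per-workload and per-policy chains into the filter table with a
    # "cali-" prefix (cali-tw-* = to workload, cali-fw-* = from workload,
    # cali-pi-*/cali-po-* = policy ingress/egress).
    sudo iptables-save -t filter | grep -E '^:cali-|-A cali-' | head -n 20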
External firewall logs show blocked connection from < node IP >:< big port >.
The current cluster uses Calico networking.
How do I detect which pod is trying to connect?
This would usually be pretty hard to work out; you would have to check the NAT (conntrack) table on the node where the packets exited to the public internet.
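If you do want to try, here is a sketch of what that check could look like on the suspect node, assuming the conntrack CLI is installed (<big port> and <external IP> stand for the values from your firewall log):

    # Find conntrack entries whose SNATed reply-direction port matches the port from
    # the firewall log; the original tuple on the same line shows the pod IP that
    # opened the connection.
    sudo conntrack -L 2>/dev/null | grep 'dport=<big port>'
    # Or narrow down by the external destination first:
    sudo conntrack -L --orig-dst <external IP>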
We are running a SaaS service that we are looking to migrate to Kubernetes, preferably at one of the hyperscalers. One specific issue I have not yet found a clean solution for is the need for egress IP address selection from within the application.
We deal with a large number of upstream providers that have access control and rate limiting based on source IP address. Also, a portion of our customers use their own accounts with some of the upstream providers. To access the upstream providers in the context of their account, we need to control the source IP used for the connection from within the application.
We currently run our services in a DMZ behind a load balancer, so direct network interface selection is already impossible. We use some iptables rules on our load balancers/gateways to do address selection based on mapped port numbers, roughly as sketched below (e.g. egress connections to port 1081 are mapped to source address B and target port 80, port 1082 to source address C and port 80).
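For illustration, one way such port-keyed rules can look; this is only a sketch, not our exact rules, and the marks and source addresses are made up:

    # Tag connections by the "selector" port they arrive on, rewrite the destination
    # port to the real upstream port, then pick the egress source address per tag.
    iptables -t mangle -A PREROUTING  -p tcp --dport 1081 -j CONNMARK --set-mark 0x1081
    iptables -t mangle -A PREROUTING  -p tcp --dport 1082 -j CONNMARK --set-mark 0x1082
    iptables -t nat    -A PREROUTING  -p tcp --dport 1081 -j DNAT --to-destination :80
    iptables -t nat    -A PREROUTING  -p tcp --dport 1082 -j DNAT --to-destination :80
    iptables -t nat    -A POSTROUTING -m connmark --mark 0x1081 -j SNAT --to-source 198.51.100.11
    iptables -t nat    -A POSTROUTING -m connmark --mark 0x1082 -j SNAT --to-source 198.51.100.12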
This however is quite a fragile setup that also does not map nicely when trying to migrate to more standardized *aaS offerings.
Looking for suggestions for a better setup.
One of the things that could help you solve this is an Istio Egress Gateway, so I suggest you look into it.
Otherwise, it still depends on the particular platform and the way you deploy your cluster. For example, on AWS you can make sure your egress traffic always leaves from a predefined, known set of IPs by forwarding it through instances with Elastic IPs assigned (be it regular EC2 instances or AWS NAT Gateways). Even with the egress gateway above, you need some way to pin a fixed IP, so an AWS Elastic IP (or equivalent) is a must.
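As a non-authoritative sketch of the egress-gateway part, routing a single external host through the default istio-egressgateway could look roughly like this (the host name is a placeholder):

    # Sketch: send in-mesh traffic for one external host to the egress gateway,
    # then forward it from the gateway to the external service.
    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: upstream-provider
    spec:
      hosts:
      - api.upstream-provider.example
      ports:
      - number: 80
        name: http
        protocol: HTTP
      resolution: DNS
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: upstream-egress-gateway
    spec:
      selector:
        istio: egressgateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - api.upstream-provider.example
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: upstream-via-egress
    spec:
      hosts:
      - api.upstream-provider.example
      gateways:
      - mesh
      - upstream-egress-gateway
      http:
      - match:
        - gateways:
          - mesh
          port: 80
        route:
        - destination:
            host: istio-egressgateway.istio-system.svc.cluster.local
            port:
              number: 80
      - match:
        - gateways:
          - upstream-egress-gateway
          port: 80
        route:
        - destination:
            host: api.upstream-provider.example
            port:
              number: 80

The first VirtualService rule sends in-mesh traffic for that host to the egress gateway; the second forwards it from the gateway to the external service, so all such traffic leaves the cluster from the gateway's node(s), where you can pin the IP.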
I have to get the real client IP from the request in my business logic. Right now I get 10.2.100.1 every time in my test environment. Is there any way to do this?
This is the same question as GCE + K8S - Accessing referral IP address and How to read client IP addresses from HTTP requests behind Kubernetes services?.
The answer, copied from them, is that this isn't yet possible in the released versions of Kubernetes.
Services go through kube-proxy, which answers the client connection and proxies through to the backend (your web server). The address you'd see would be the IP of whichever kube-proxy the connection went through.
Work is being actively done on a solution that uses iptables as the proxy, which will cause your server to see the real client IP.
Try getting the IP of the Service that is associated with those pods.
One very roundabout way right now is to set up an HTTP liveness probe and watch the IP it originates from. Just be sure to also respond to it appropriately or it'll assume your pod is down.
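A rough sketch of that workaround, with placeholder path and port; the handler behind it has to return a success status, and it can log the peer address of the kubelet's probe requests:

    # Standard HTTP liveness probe; whatever serves /healthz should log the source
    # address of incoming probe requests in addition to answering 200.
    livenessProbe:
      httpGet:
        path: /healthz   # placeholder path
        port: 8080       # placeholder port
      initialDelaySeconds: 5
      periodSeconds: 10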