Using a static IP for both ingress/egress in Kubernetes

I have a program which I'm trying to run in a Kubernetes cluster.
The program is a server that speaks a non-standard UDP-based protocol.
The protocol mostly consists of short request/reply pairs, similar to DNS.
One major difference from DNS is that both the "server" and the "clients" can send requests, i.e. the communication can be initiated by either party.
The clients are embedded devices configured with the server's IP address.
The clients send their requests to this IP.
They also check that incoming messages originate from this IP, discarding messages from other IPs.
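For illustration, a minimal sketch of this client-side behaviour (the IP address, port, and payload are made-up placeholders):

import socket

SERVER_IP = "203.0.113.10"   # placeholder for the server's configured static IP
SERVER_PORT = 5000           # placeholder protocol port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", SERVER_PORT))
sock.sendto(b"request", (SERVER_IP, SERVER_PORT))   # client-initiated request

while True:
    data, (src_ip, _port) = sock.recvfrom(2048)
    if src_ip != SERVER_IP:
        continue   # discard messages that don't originate from the configured IP
    print("message from server:", data)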
My question is how I can use Kubernetes to set up the server such that:
1. The server accepts incoming UDP messages on a specific IP.
2. Real client source IPs are seen by the server.
3. Any replies (or other messages) the server sends have that same IP as their source (so that the clients will accept them).
One thing I have tried that doesn't work is to set up a Service with type: LoadBalancer and externalTrafficPolicy: Local (the latter to preserve source IPs for requirement 2).
This setup fulfills requirements 1 and 2 above, but since outbound messages don't pass through the load balancer, their source IP is that of whatever node the pod containing the server is running on.
I'm running Kubernetes on Google Cloud Platform (GKE).

Please verify the solution described in the Kubernetes documentation under "Source IP for Services with Type=LoadBalancer":
- expose the deployment with --type=LoadBalancer
- set service.spec.externalTrafficPolicy to Local by patching the Service with '{"spec":{"externalTrafficPolicy":"Local"}}'
Using the "echoserver" image from that example, the echoed request shows my real public address as the source.
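If you'd rather apply the same change from code, a rough equivalent of that patch using the official Python client could look like this (the Service name and namespace are placeholders):

from kubernetes import client, config

config.load_kube_config()   # or load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

# Equivalent of: kubectl patch svc my-udp-service -p '{"spec":{"externalTrafficPolicy":"Local"}}'
# "my-udp-service" and "default" are placeholder names.
v1.patch_namespaced_service(
    name="my-udp-service",
    namespace="default",
    body={"spec": {"externalTrafficPolicy": "Local"}},
)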

Related

Kubernetes load balance HTTP/1.1 requests

As we know, HTTP/1.1 uses persistent (long-lived) connections by default. A Kubernetes Service, for example one of type ClusterIP, is an L4 load balancer.
Suppose I have a Service running a web server, backed by 3 pods. Will HTTP/1.1 requests be distributed across the 3 pods?
Could anybody help clarify this?
This webpage addresses your question perfectly: https://learnk8s.io/kubernetes-long-lived-connections
In the spirit of StackOverflow, let me summarize the webpage here:
TLDR: Kubernetes doesn't load balance long-lived connections, and some Pods might receive more requests than others.
Kubernetes Services do not really exist: there is no process listening on the IP address and port of a Service.
The Service IP address is only a placeholder that iptables rules translate into the IP address of one of the destination Pods, using cleverly crafted randomization.
Any connection from a client (whether inside or outside the cluster) is established directly with a Pod, so an HTTP/1.1 persistent connection is kept open between the client and one specific Pod until either side closes it.
Thus, all requests sent over a single persistent connection are routed to the one Pod the iptables rules selected when the connection was established, and are not load-balanced across the other Pods.
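As a rough illustration (the Service URL is a placeholder, and this assumes the Python requests library): reusing one persistent connection pins every request to a single Pod, while opening a fresh connection per request lets the iptables rules pick a backend each time.

import requests

SERVICE_URL = "http://my-service.default.svc.cluster.local/"   # placeholder Service URL

# One persistent HTTP/1.1 connection: every request goes to the same Pod,
# because the TCP connection was translated to one backend when it was opened.
with requests.Session() as session:
    for _ in range(5):
        print(session.get(SERVICE_URL).text)

# A new TCP connection per request: each connection is load-balanced independently,
# so requests can land on different Pods.
for _ in range(5):
    print(requests.get(SERVICE_URL, headers={"Connection": "close"}).text)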
Additional info:
Per RFC 2616, section 8.1.3 (https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.3), any proxy server between client and server must maintain HTTP/1.1 persistent connections separately: one from the client to itself and one from itself to the server.

Is it possible for me to get a log which shows the source IP of requests hitting a NodePort in my Kubernetes cluster?

I have a container with an exposed port in a pod. When I check the log in the containerized app, the source of the requests is always 192.168.189.0 which is a cluster IP. I need to be able to see the original source IP of the request. Is there any way to do this?
I tried setting externalTrafficPolicy: Local on the Service (instead of Cluster), but it still doesn't work. Please help.
When an application or service needs to know the client source IP address, you need to understand the topology of the network in front of it: how the different layers of load balancers or proxies deliver traffic to your service.
Depending on the cloud provider or load balancer in front of your application, the original source IP address is usually carried in a request header. The header to look for is X-Forwarded-For; depending on the proxy or load balancer you use, you sometimes need to enable this header explicitly to receive the correct IP address.
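As a minimal sketch (standard library only; the port is arbitrary), a handler that logs both the direct peer address and the X-Forwarded-For header, if the proxy or load balancer in front of it sets one:

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        peer_ip = self.client_address[0]                          # direct TCP peer (often a node/proxy IP)
        forwarded = self.headers.get("X-Forwarded-For", "unset")  # original client IP, if the proxy sets it
        print(f"peer={peer_ip} x-forwarded-for={forwarded}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()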

Assign external IP to Kubernetes pod

Context:
- We're working on an integration with one of our clients.
- In order to get access to their systems, we need to establish a VPN connection.
- For security reasons, we need to bind this VPN connection to a static IP on our side (basically, a layer-4 security check enforced by a Juniper router; we use OpenSwan to connect to it).
- To do that, we must connect from that IP; that is, we need to establish socket connections whose source IP, from the router's perspective, is that static IP (and, of course, traffic needs to route back to our pod successfully).
- The client's side has very limited resources ops-wise, so this security hoop is the only way to connect to their systems.
Meanwhile, our current system runs on (AWS) Kubernetes, which:
- is made of transient pods and transient nodes with shifting IPs;
- can assign an ExternalIP to a Service (which, in turn, can route it to a pod); however, by default that makes no guarantees about the source IP of traffic initiated by that pod.
For this reason, we set up an external box and assigned an Elastic IP to it as the binding for the VPN, exposing endpoints and calling our Kubernetes Services. This introduces a single point of failure: if that box goes down, so does our integration.
Question: in what ways can this be made HA within the Kubernetes world, given the constraints in the first list above?

Proxy outgoing traffic of Kubernetes cluster through a static IP

I am trying to build a service that needs to be connected to a socket over the internet without downtime. The service will be reading and publishing info to a message queue, messages should be published only once and in the order received.
For this reason I thought of deploying it on Kubernetes, where I can automatically have a replacement in case the process fails; i.e. only one process (pod) should be running at any time, not multiple pods publishing the same messages to the queue.
These requests need to be routed through a proxy with a static IP, otherwise I cannot connect to the socket. I understand this may not be a standard use case, since a proxy is normally used as a reverse proxy together with load balancers such as Nginx.
How is it possible to build this kind of forward proxy in Kubernetes?
I will be deploying this on Google Container Engine.
Assuming you're happy to use Terraform, you can use this:
https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway
However, there's one caveat: it may affect inbound traffic to other clusters in that same region/zone.
Is a LoadBalancer what you need? Kubernetes can create an external load balancer for a Service; see the documentation on that topic.

How to get a public IP for a Kubernetes pod?

The question:
I have a VOIP application running in a Kubernetes pod. I need to set in the code the public IP of the host machine on which the pod is running. How can I get this IP via an API call or an environment variable in Kubernetes? (I'm using Google Container Engine if that's relevant.) Thanks a lot!
Why do I need this?
The application is a SIP-based VOIP application. When a SIP request comes in and does a handshake, the application needs to send a SIP invite request back out that contains the public IP and port which the remote server must use to set up a RTP connection for the audio part of the call.
Please note:
I'm aware of Kubernetes Services and the general best practice of exposing them via a load balancer. However, I would like to use hostNetwork=true and expose ports on the host so the remote application can send RTP audio packets (via UDP) directly. This Kubernetes issue (https://github.com/kubernetes/kubernetes/issues/23864) contains a discussion of various people running SIP-based VOIP applications on Kubernetes, and the general consensus is to use hostNetwork=true (primarily for performance and due to limitations of load balancing UDP, I believe).
You can query the API server for information about the Node running your pod, such as its addresses. Since you are using hostNetwork=true, the $HOSTNAME environment variable already identifies the node.
There is an example below that was tested on GCE.
The code needs to be run in a pod. You need to install some python dependencies first (in the pod):
pip install kubernetes
There is more information available at:
https://github.com/kubernetes-incubator/client-python
import os
from kubernetes import client, config

# Use the pod's in-cluster service account credentials.
config.load_incluster_config()
v1 = client.CoreV1Api()

# With hostNetwork=true, $HOSTNAME is the node name; print its external (public) IP.
for address in v1.read_node(os.environ['HOSTNAME']).status.addresses:
    if address.type == 'ExternalIP':
        print(address.address)
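Note that with load_incluster_config() the pod's service account will typically need RBAC permission to get Node objects, otherwise the read_node call will be rejected.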