Kubernetes: Bind physical network interface

Is it possible to bind a Kubernetes pod or container to a physical network interface of the host such that all traffic arriving on the interface is sent to the pod (not just HTTP traffic on a specific port)?
Specifically, I want to start a VPN client as a pod on Kubernetes and bind it to a network interface. All traffic arriving on the interface should go via the pod through the VPN.
I found something about Network Plugins in K8s, but that seems to be something else.

If I understood your question correctly, you can do it. I am exposing an AMQP application simply by exposing its AMQP port via a NodePort service. What you need to do is define a new service that is almost identical to the one you use for internal interaction, but with LoadBalancer or NodePort as the service type.
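As a minimal sketch of such a NodePort service (the amqp-app selector, the node port value, and the standard AMQP port 5672 are assumptions, not from the original answer):

apiVersion: v1
kind: Service
metadata:
  name: amqp-nodeport
spec:
  type: NodePort           # reachable on every node's IP at the node port
  selector:
    app: amqp-app          # hypothetical label of the AMQP pods
  ports:
  - port: 5672             # service port inside the cluster
    targetPort: 5672       # container port
    nodePort: 30672        # optional; must fall in the default 30000-32767 range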

I think this could solve your problem: https://github.com/intel/multus-cni. It allows you to have multiple network interfaces inside a pod.
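As a rough sketch of how Multus is typically used (the macvlan-conf name, the eth1 master interface, the subnet, and the image are assumptions): you define a NetworkAttachmentDefinition describing the extra interface, then reference it from a pod annotation.

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: vpn-client
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf   # attach the extra interface
spec:
  containers:
  - name: vpn
    image: example/vpn-client:latest            # hypothetical image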

Related

Connecting to many Kubernetes services from a local machine

From my local machine I would like to be able to port forward to many services in a cluster.
For example, I have services named serviceA-type1, serviceA-type2, serviceA-type3, etc. None of these services are accessible externally, but they can be accessed using the kubectl port-forward command. However, there are so many services that port forwarding to each one is infeasible.
Is it possible to create some kind of proxy service in Kubernetes that would allow me to connect to any of the serviceA-typeN services by specifying them in a URL? I would like to be able to port-forward to the proxy service from my local machine, and it would then forward the requests to the serviceA-typeN services.
So for example, if I have set up a port forward on 8080 to this proxy, then the URL to access the serviceA-type1 service might look like:
http://localhost:8080/serviceA-type1/path/to/endpoint?a=1
I could maybe create a small application that would do this but does kubernetes provide this functionality already?
The kubectl proxy command provides this functionality.
Read more here: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
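As a sketch (assuming the services live in the default namespace), you run the proxy and then address each service through the apiserver proxy URL scheme:

kubectl proxy --port=8080
# then, in a browser or with curl:
http://localhost:8080/api/v1/namespaces/default/services/serviceA-type1/proxy/path/to/endpoint?a=1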
A good option is to use Ingress to achieve this.
Read more about what Ingress is.
Main concepts are:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In Kubernetes we have 4 types of Services, and the default service type is ClusterIP, which means the service is only reachable within the cluster. Ingress exposes your service outside the cluster, so Ingress acts as the entry point into your cluster.
If you plan to move to the cloud with Ingress, it will be compatible with cloud services, which will eventually save time and make it easier to migrate from a local environment.
To start with ingress you need to install an Ingress controller first.
There are different ingress controllers which you can use.
You can start with the most common one, ingress-nginx, which is supported by the Kubernetes community.
If you're using minikube, it can be enabled as an addon - see here.
Once you have installed an Ingress controller in your cluster, you need to create a rule for it to work. Simple fanout is an example with two services and path-based routing to them.
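A minimal sketch of such a fanout rule, reusing the hypothetical serviceA-typeN names from the question (the host name is an assumption):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: servicea-fanout
spec:
  rules:
  - host: servicea.example.com        # hypothetical host
    http:
      paths:
      - path: /serviceA-type1         # one path per backing service
        pathType: Prefix
        backend:
          service:
            name: serviceA-type1
            port:
              number: 80
      - path: /serviceA-type2
        pathType: Prefix
        backend:
          service:
            name: serviceA-type2
            port:
              number: 80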

Can somebody explain why I have to use an external (MetalLB, HAProxy, etc.) load balancer with a bare-metal Kubernetes cluster?

For instance, I have a bare-metal cluster with 3 nodes, each with some instance exposing port 105. In order to expose it on an external IP address, I can define a service of type NodePort with "externalIPs", and it seems to work well. The documentation says to use a load balancer, but I didn't quite get why I have to use it, and I'm worried about making a mistake.
Can somebody explain why I have to use an external (MetalLB, HAProxy, etc.) load balancer with a bare-metal Kubernetes cluster?
You don't have to use it; it's up to you to choose whether you would like to use NodePort or LoadBalancer.
Let's start with the difference between NodePort and LoadBalancer.
NodePort is the most primitive way to get external traffic directly to your service. As the name implies, it opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
LoadBalancer service is the standard way to expose a service to the internet. It gives you a single IP address that will forward all traffic to your service.
You can find more about that in the Kubernetes documentation.
As for the question you've asked in the comment ("But NodePort with the externalIPs option is doing exactly the same. I see only one tiny difference: the IP should be owned by one of the cluster machines. So where is the profit of using a LoadBalancer?"), let me answer that more precisely.
Here are the advantages and disadvantages of ExternalIP:
The advantages of using ExternalIP are:
You have full control of the IP that you use. You can use an IP that belongs to your ASN instead of a cloud provider's ASN.
The disadvantages of using ExternalIP are:
A simple ExternalIP setup is NOT highly available. That means if the node dies, the service is no longer reachable and you'll need to manually remediate the issue.
There is some manual work that needs to be done to manage the IPs. The IPs are not dynamically provisioned for you, so they require manual human intervention.
Summarizing the pros and cons of both, we can conclude that ExternalIP is not made for a production environment: it's not highly available, and if the node dies, the service will no longer be reachable and you will have to fix that manually.
With a LoadBalancer, if a node dies, the service will be recreated automatically on another node. It's dynamically provisioned, and there is no need to configure it manually as with ExternalIP.
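For reference, the NodePort-with-externalIPs setup from the question might look roughly like this (port 105 comes from the question; the 10.0.0.5 address and the app label are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app            # hypothetical label
  ports:
  - port: 105
    targetPort: 105
  externalIPs:
  - 10.0.0.5               # must be an IP owned by one of the cluster machines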

Is there some way to handle SIP, RTP, DIAMETER, M3UA traffic in Kubernetes?

From a quick read of the Kubernetes docs, I noticed that kube-proxy behaves as a Layer-4 proxy and perhaps works well for TCP/IP traffic (such as typical HTTP traffic).
However, there are other protocols like SIP (which can run over TCP or UDP), RTP (which runs over UDP), and core telecom network signaling protocols like DIAMETER (over TCP or SCTP) or M3UA (over SCTP). Is there a way to handle such traffic in an application running on a Kubernetes minion (node)?
In my reading, I came across the notion of the Ingress API of Kubernetes, but I understood that it is a way to extend the capabilities of the proxy. Is that correct?
Also, is it true that there is currently no known implementation (open-source or closed-source) of the Ingress API that allows a Kubernetes cluster to handle the above listed types of traffic?
Finally, other than usage of the Ingress API, is there no way to deal with the above listed traffic, even if it has performance limitations?
Also, is it true that there is currently no known implementation (open-source or closed-source) of the Ingress API that allows a Kubernetes cluster to handle the above listed types of traffic?
Probably, and this IBM study on IBM Voice Gateway, "Setting up high availability", illustrates the point
(here with SIP (Session Initiation Protocol) servers, like OpenSIPS):
Kubernetes deployments
In Kubernetes terminology, a single voice gateway instance equates to a single pod, which contains both a SIP Orchestrator container and a Media Relay container.
The voice gateway pods are installed into a Kubernetes cluster that is fronted by an external SIP load balancer.
Through Kubernetes, a voice gateway pod can be scheduled to run on a cluster of VMs. The framework also monitors pods and can be configured to automatically restart a voice gateway pod if a failure is detected.
Note: Because auto-scaling and auto-discovery of new pods by a SIP load balancer in Kubernetes are not currently supported, an external SIP load balancer is required.
And, to illustrate Kubernetes limitations:
Running IBM Voice Gateway in a Kubernetes environment requires special considerations beyond the deployment of a typical HTTP-based application because of the protocols that the voice gateway uses.
The voice gateway relies on the SIP protocol for call signaling and the RTP protocol for media, which both require affinity to a specific voice gateway instance. To avoid breaking session affinity, the Kubernetes ingress router must be bypassed for these protocols.
To work around the limitations of the ingress router, the voice gateway containers must be configured in host network mode.
In host network mode, when a port is opened in either of the voice gateway containers, those identical ports are also opened and mapped on the base virtual machine or node.
This configuration also eliminates the need to define media port ranges in the kubectl configuration file, which is not currently supported by Kubernetes. Deploying only one pod per node in host network mode ensures that the SIP and media ports are opened on the host VM and are visible to the SIP load balancer.
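A minimal sketch of the host network mode the guide describes (the pod name, the image names, and the standard SIP port 5060 are illustrative assumptions, not from the guide):

apiVersion: v1
kind: Pod
metadata:
  name: voice-gateway
spec:
  hostNetwork: true          # container ports are opened on the node itself
  containers:
  - name: sip-orchestrator
    image: example/sip-orchestrator:latest   # hypothetical image
    ports:
    - containerPort: 5060    # standard SIP signaling port
  - name: media-relay
    image: example/media-relay:latest        # hypothetical image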
That network configuration put in place for Kubernetes is best illustrated in this answer, which describes the elements involved in pod/node-communication:
It is possible to handle TCP and UDP traffic from clients to your service, but it depends slightly on where you run Kubernetes.
Solutions
A solution which works everywhere
It is possible to use Ingress for both TCP and UDP protocols, not only HTTP. Some Ingress implementations support proxying these types of traffic.
Here is an example of that kind of configuration for the Nginx Ingress controller, for TCP:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/example-go:8080" # here is a "$namespace/$service_name:$port"
And UDP:
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
data:
  53: "kube-system/kube-dns:53" # here is a "$namespace/$service_name:$port"
So, actually, you can run your application which needs plain UDP and TCP connections, with some limitations (you need to manage load balancing somehow if you have more than one pod, etc.).
But if you already have an application which handles this without Kubernetes, I don't think you will have any problems with it after migrating to Kubernetes.
A small example of a traffic flow
For SIP UDP traffic, for example, you can prepare a configuration like this:
Client -> Nginx Ingress (UDP) -> OpenSIPS Load balancer (UDP) -> Sip Servers (UDP).
So the client will send packets to the Ingress, which will forward them to OpenSIPS, which will manage the state of your SIP cluster and send client packets to the proper SIP server.
A solution only for Clouds
Also, if you run it in a cloud, you can use ServiceType LoadBalancer for your Service and get TCP and UDP traffic to your application directly through an external load balancer provided by the cloud platform.
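A minimal sketch of such a LoadBalancer Service for SIP over UDP (the selector and the standard SIP port 5060 are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: sip-loadbalancer
spec:
  type: LoadBalancer       # the cloud platform provisions the external load balancer
  selector:
    app: sip-server        # hypothetical label
  ports:
  - name: sip
    protocol: UDP
    port: 5060
    targetPort: 5060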
About SCTP
Unfortunately, no, SCTP is not supported yet, but you can track the progress here.
With regard to SCTP support in k8s: it has recently been merged into k8s as an alpha feature. SCTP is supported as a new protocol type in Service, NetworkPolicy and Pod definitions. See the PR here: https://github.com/kubernetes/kubernetes/pull/64973
Some restrictions exist:
the handling of multihomed SCTP associations was not in the scope of the PR. The support of multihomed SCTP associations for cases where NAT is used is a much broader topic, which also affects the current SCTP kernel modules that handle NAT for the protocol. See an example here: https://datatracker.ietf.org/doc/html/draft-ietf-tsvwg-natsupp-12
From the k8s perspective, one would also need a CNI plugin that supports the assignment of multiple IP addresses (preferably on multiple interfaces) to pods, so the pod can establish a multihomed SCTP association. One would also need an enhanced Service/Endpoint/DNS controller to handle those multiple IP addresses in the right way.
the support of SCTP as protocol for type=LoadBalancer Services is up to the load balancer implementation, which is not a k8s issue
in order to use SCTP in NetworkPolicy one needs a CNI plugin that supports SCTP in NetworkPolicies
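With the alpha feature enabled, SCTP is declared like any other protocol in a Service definition; a minimal sketch (the m3ua-app label and the standard M3UA port 2905 are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: m3ua-service
spec:
  selector:
    app: m3ua-app          # hypothetical label
  ports:
  - protocol: SCTP         # requires the alpha SCTP feature gate to be enabled
    port: 2905
    targetPort: 2905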

Share same IP for multiple pods

Is it possible to expose pod applications on a single IP, each on a different port? For example, this:
microservices-cart LoadBalancer 10.15.251.89 35.195.135.146 80:30721/TCP
microservices-comments LoadBalancer 10.15.249.230 35.187.190.124 80:32082/TCP
microservices-profile LoadBalancer 10.15.244.188 35.195.255.183 80:31032/TCP
would look like this:
microservices-cart LoadBalancer 10.15.251.89 35.195.135.146 80:30721/TCP
microservices-comments LoadBalancer 10.15.249.230 35.195.135.146 81:32082/TCP
microservices-profile LoadBalancer 10.15.244.188 35.195.135.146 82:31032/TCP
Reusing the same external IP is usually accomplished by using ingress resources.
See https://kubernetes.io/docs/concepts/services-networking/ingress/
But you'll have to route with paths instead of ports.
One possible solution is to combine NodePort and a reverse proxy. NodePort exposes pods on different ports on all nodes. The reverse proxy serves as the entrance and redirects traffic to the nodes.
One way or another you'll have to consolidate onto the same pod.
You can create a deployment that proxies each of the ports to the appropriate service. There are plenty of ways to create a TCP proxy: via nginx, via a Node package, or via a Go package maintained by Google; whatever you're most comfortable with.
First of all, if you're building a microservices app, you need an API gateway. It can have an external IP address and communicate with other pods using internal services. One possible way is using nginx. You can watch a guide about API gateways here.

Hitting an endpoint of a HeadlessService

We wanted pod names to be resolved to IPs to configure the seed nodes in an Akka cluster. This was happening by using the concept of a headless service and stateful sets in Kubernetes. But how do I expose a headless service externally to hit an endpoint from outside?
It is hard to expose a headless Kubernetes service to the outside, since this would require some complex TCP proxies. The reason for this is that the headless service is only a DNS record with an IP for each pod. But these IPs are only reachable from within the cluster.
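For reference, a headless service is just a Service with clusterIP set to None; a minimal sketch (the akka-seed name and the default Akka remoting port 2552 are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: akka-seed
spec:
  clusterIP: None          # headless: DNS returns the pod IPs directly
  selector:
    app: akka              # hypothetical label
  ports:
  - name: remoting
    port: 2552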
One solution is to expose this via node ports, which means the ports are opened on the host itself. Unfortunately, this makes service discovery harder, because you don't know which host has a pod scheduled on it.
You can set up node ports via:
the services: https://kubernetes.io/docs/user-guide/services/#type-nodeport
or directly in the Pod by defining spec.containers[].ports[].hostPort, as in the sketch below
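A minimal hostPort sketch (the image is hypothetical; 2552 is Akka's default remoting port):

apiVersion: v1
kind: Pod
metadata:
  name: akka-seed-0
spec:
  containers:
  - name: akka
    image: example/akka-node:latest   # hypothetical image
    ports:
    - containerPort: 2552
      hostPort: 2552                  # opens port 2552 on the node itself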
Another alternative is to use a LoadBalancer, if your cloud provider supports it. Unfortunately, you cannot address each instance individually, since they share the same IP. This might not be suitable for your application.