How to map a range of ports for NGINX Controller? - docker-compose

I'm following the documentation on exposing UDP/TCP services via NGINX Controller. All examples there show how to map a single port. Is there a way to map a range of ports, similar to docker-compose's yaml? e.g.
ports:
- "1000-2000:1000-2000"
I need this because the service I want to deploy has an API that allocates a port to each client on request, assuming a free port is available within a fixed range (1000-2000). The clients then need to contact the service over the allocated port.
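For reference, the single-port mapping from the documentation I'm following looks roughly like the ConfigMap below (I'm assuming the community NGINX Ingress Controller here; the namespace, service name and ports are just placeholders), and I can't see a way to give it a range instead of repeating one entry per port:

apiVersion: v1
kind: ConfigMap
metadata:
  # name/namespace expected by the controller's --tcp-services-configmap flag
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service name>:<service port>"
  "1000": "default/port-allocator:1000"
  "1001": "default/port-allocator:1001"
  # ...seemingly one entry per port up to "2000"; a range doesn't appear to be accepted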

Related

What is the advantage of allowing port and targetPort to be different in Kubernetes services?

Today I started learning about Kubernetes because I have to use it in a project. When I came to the Service object, I started to learn what the difference is between all the different types of ports that can be specified. I think I now understand it.
Specifically, the port (spec.ports.port) is the port from which the service can be reached inside the cluster, and targetPort (spec.ports.targetPort) is the port that an application in a container is listening to.
So, if the service will always redirect the traffic to the targetPort, why is it allowed to specify them separately? In which situations would it be necessary?
The biggest use is with LoadBalancer services where you want to expose something on (usually) 80 or 443, but don't want the process to run as root, so it listens on 8080 or something similar internally. This lets you map things smoothly.
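A minimal sketch of that mapping (the name, labels and type here are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80          # what clients and the external load balancer connect to
    targetPort: 8080  # what the non-root process inside the pod listens on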

Best way to go between private on-premises network and kubernetes

I have set up an on-premises Kubernetes cluster, and I want to ensure that my services that are not in Kubernetes, but live on a separate class B network, are able to consume the services that have migrated to Kubernetes. There are a number of ways of doing this by all accounts, and I'm looking for the simplest one.
Ingress + controller seems to be the one favoured - and it's interesting because of the virtual hosts and HAProxy implementation. But where I'm getting confused is how to set up the Kubernetes service:
We don't have a great deal of choice: ClusterIP won't be sufficient to expose it to the outside, which leaves NodePort or LoadBalancer. LoadBalancer seems to be a simpler, cut-down way of switching between network zones, and although there are on-prem implementations (MetalLB), it seems far more geared towards cloud solutions.
But if I stick with NodePort, then my entry into the network is going to be on a non-standard port number, and I would prefer it to be a standard port; particularly if I'm running a percentage of traffic for that service outside Kubernetes and the rest inside it (for testing purposes, I'd like to monitor the traffic over a period of time before I bite the bullet and move 100% of traffic for the given microservice to Kubernetes). In that case it would be better if those services were available on the same port (almost always 80, because they're standard REST microservices). More than that, if I have to re-create the service for whatever reason, I'm pretty sure the port will change, and then all traffic will be unable to enter the Kubernetes cluster - and that's a frightening proposition.
What are the suggested ways of handling communication between existing on-prem and Kubernetes cluster (also on prem, different IP/subnet)?
Is there any way to get traffic coming in without changing the network parameters (the class B ranges the respective networks are on), and without being forced to use NodePort?
The NodePort service type may be fine for staging or dev environments, but I recommend going with a LoadBalancer type service fronted by an ingress controller (the NGINX ingress controller is one option). Its advantages over the other service types are:
You can use a standard port (rather than the random NodePort generated by Kubernetes).
Your service is load balanced (load balancing is taken care of by the ingress controller).
The port is fixed (it will not change unless you modify something in the ingress object).
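As a rough sketch of what that ingress object could look like (the hostname and service name are assumptions), the entry point stays on the controller's standard ports 80/443 and remains stable even if the backing service is re-created:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-microservice
spec:
  rules:
  - host: my-microservice.internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-microservice   # existing ClusterIP service for the app
            port:
              number: 80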

Multiple services for same app:port in kubernetes

I am experimenting with a service discovery scheme on Kubernetes. I have 20+ GRPC services that can be grouped and deployed as applications on Kubernetes. Each application serves several of these services with a common GRPC server. There is a service to publish this GRPC port, and I have labels on those services that identify which GRPC servers are running there.
For instance, I have APP1 application serving GRPC services a,b,c. There is a service in front of APP1 connected to the port 8000, with labels a,b,c. So when a component in the cluster needs to connect to service, say, "b", it looks up services that have the label "b", and connects to port 8000 of one of those. This way, I can group the GRPC services in different ways, deploy them, and they all find each other.
I started thinking about an alternative approach. Instead of having one service with labels for each app, I want to have multiple services (one for each GRPC service) for the same app:port with different names. So in this new scheme APP1 would have three services, a, b, and c, all connected to the same app:port. The clients would simply look up the name "b" to find the GRPC server "b".
The question is: do you see any potential problems with having multiple services with different names that are connected to the same port of the same app, exposing the same port? That is, addresses a:8000, b:8000, c:8000 all pointing to APP1:8000.
To be honest, I don't see any problem as long as your application can identify internally whether the client is trying to talk to a:8000, b:8000, or c:8000. Essentially, you end up with just a single port, 8000, in the container. This would be analogous to having different HTTP endpoints per service, something like https://myendpoint:8000/a, https://myendpoint:8000/b, and https://myendpoint:8000/c.
Note that 8000 would be the port in the container; Kubernetes will use a random port on the node (the NodePort) to forward the traffic to 8000 in the container.
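To make the alternative concrete, here is a sketch of two of those Services (the pod label app: app1 is an assumption); both select APP1's pods and point at the same container port, they just have different DNS names:

apiVersion: v1
kind: Service
metadata:
  name: a            # clients that need GRPC service "a" look this name up
spec:
  selector:
    app: app1        # assumed label on APP1's pods
  ports:
  - port: 8000
    targetPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: b            # same pods, same port, different name
spec:
  selector:
    app: app1
  ports:
  - port: 8000
    targetPort: 8000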

Load Balancing Applications in Kubernetes Bare-Metal

I've been looking at setting up an Ingress controller for a bare-metal Kubernetes cluster, but Ingress controllers seem to only work well for HTTP services reachable via port 80 or 443. If you need to expose a TCP or UDP service on an arbitrary port, it seems possible with the Nginx or HAProxy Ingress controllers, but your whole cluster ends up sharing a single port range. Please let me know if I've misunderstood this.
If you need to expose and load balance TCP or UDP services on arbitrary ports, how would you do it? I was thinking of using ClusterIP so that services get their own VIP and can use any ports they want, but the question then becomes: how do you route traffic to those VIPs and give them friendly DNS names? Is there a solution for this already, or do you have to build one yourself? Using NodePort, or any solution that means namespaces have to share a single port range, isn't really scalable or desirable - especially if Bob in namespace 1 absolutely needs his service to be reachable on port 8000, but Linda in namespace 2 is already using that port.
Any clarification, potential solutions, or help in general will be much appreciated.
This GitHub issue is an interesting read, and there are some clever workarounds, like starting with HTTPS and then using ALPN to switch to a custom protocol: https://github.com/kubernetes/kubernetes/issues/23291 - but of course your clients then need to know how to do that.
But if the protocols of these TCP and UDP services sharing a port are different and have no way to interoperate, then the ingress controller needs to be able to allocate the equivalent of a distinct routable IP address per exposed service - from the cloud provider, or from whatever proprietary infrastructure handles that.
I have not looked closely, but my sense is that the packaged ingress controllers from nginx and haproxy are not going to have that automation; it would have to be built in coordination with the available infrastructure automation.
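On bare metal, the closest approximation I know of is a LoadBalancer Service per exposed service, with something like MetalLB allocating the addresses, so each namespace gets its own IP and can use whatever ports it likes. A hypothetical sketch (all names are made up):

apiVersion: v1
kind: Service
metadata:
  name: bobs-service
  namespace: namespace-1
spec:
  type: LoadBalancer   # MetalLB (or similar) assigns a dedicated external IP
  selector:
    app: bobs-app
  ports:
  - port: 8000         # Bob gets 8000 on his own IP...
    targetPort: 8000
    protocol: TCP
# ...and Linda can expose port 8000 on a separate LoadBalancer Service in
# namespace-2 without any conflict, since she gets a different IP.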

Share same IP for multiple pods

Is it possible to expose applications (pods) on a single external IP, each on a different port? For example, so that this:
microservices-cart LoadBalancer 10.15.251.89 35.195.135.146 80:30721/TCP
microservices-comments LoadBalancer 10.15.249.230 35.187.190.124 80:32082/TCP
microservices-profile LoadBalancer 10.15.244.188 35.195.255.183 80:31032/TCP
would look like
microservices-cart LoadBalancer 10.15.251.89 35.195.135.146 80:30721/TCP
microservices-comments LoadBalancer 10.15.249.230 35.195.135.146 81:32082/TCP
microservices-profile LoadBalancer 10.15.244.188 35.195.135.146 82:31032/TCP
Reusing the same external IP is usually accomplished by using ingress resources.
See https://kubernetes.io/docs/concepts/services-networking/ingress/
But you'll have to route with paths instead of ports.
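A sketch of that path-based routing for the three services above (the paths are assumptions; the service names and port 80 come from the listing), all sharing the ingress controller's single external IP:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
  - http:
      paths:
      - path: /cart
        pathType: Prefix
        backend:
          service:
            name: microservices-cart
            port:
              number: 80
      - path: /comments
        pathType: Prefix
        backend:
          service:
            name: microservices-comments
            port:
              number: 80
      - path: /profile
        pathType: Prefix
        backend:
          service:
            name: microservices-profile
            port:
              number: 80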
One possible solution is to combine NodePort and a reverse proxy. NodePort exposes the pods on different ports on all nodes, and the reverse proxy serves as the single entry point and redirects traffic to the nodes.
One way or another you'll have to consolidate onto the same pod.
You can create a deployment that proxies each of the ports to the appropriate service. There are plenty of ways to create a TCP proxy - nginx, a Node.js package, a Go package maintained by Google - whatever you're most comfortable with.
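On the external side, that proxy deployment could sit behind a single multi-port LoadBalancer Service, which is what gives you one IP with ports 80/81/82. Everything below is a hypothetical sketch; the proxy itself still has to forward each listener to the right internal service:

apiVersion: v1
kind: Service
metadata:
  name: microservices-frontend
spec:
  type: LoadBalancer
  selector:
    app: tcp-proxy     # pods of the hypothetical proxy deployment
  ports:
  - name: cart
    port: 80
    targetPort: 8080   # proxy listener forwarding to microservices-cart
  - name: comments
    port: 81
    targetPort: 8081   # forwarding to microservices-comments
  - name: profile
    port: 82
    targetPort: 8082   # forwarding to microservices-profile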
First of all, if you're building a microservices app, you need an API gateway. It can have an external IP address and communicate with the other pods using internal services. One possible way is using nginx. You can watch a guide about API gateways here.