How to make an Ingress Controller send traffic to outside IP? - kubernetes

Is it possible to make an Ingress Controller, or anything else (preferably something that already exists, without having to code a service per se), send traffic to an external IP?
Why: I have an application that will interact with my k8s cluster from the outside. I already know that I can use an Ingress Controller to let it connect to the cluster, but what if the other applications inside the cluster need to reach this external application? Is there a way to do this?

It depends on the controller, but most will work with an ExternalName type Service to proxy to an arbitrary IP even if that's outside the cluster.
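As a minimal sketch (the names external-app and app.example.com, and the IP 203.0.113.10, are placeholders, not from the question): an ExternalName Service maps to a DNS name, which an Ingress rule can use as a backend; for a bare IP, the usual alternative is a selector-less Service with a manually created Endpoints object.

```yaml
# Hypothetical ExternalName Service; an Ingress rule can point at it like any
# other Service, and a controller that supports it will proxy to the external host.
apiVersion: v1
kind: Service
metadata:
  name: external-app              # placeholder name
spec:
  type: ExternalName
  externalName: app.example.com   # DNS name of the external application (assumed)
  ports:
    - port: 80
---
# Alternative for a raw IP: a selector-less Service plus matching Endpoints.
apiVersion: v1
kind: Service
metadata:
  name: external-app-ip
spec:
  ports:
    - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-app-ip           # must match the Service name above
subsets:
  - addresses:
      - ip: 203.0.113.10          # the external IP (placeholder)
    ports:
      - port: 80
```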

Related

What do I need to provide to make calls from my k8s cluster?

I have a Kubernetes cluster with my application running inside it, and a host machine that my application needs to access.
All the infrastructure is located inside the VPN network.
How can I set up egress to let my application send requests from the cluster to this host machine? (Are Kubernetes Network Policies an appropriate way to handle this and actually solve the problem?)
(Sorry if this is too obvious a question; I haven't found any solution for this yet that works.)
I'm not sure if I get your question right, but by default no network connectivity is blocked by Kubernetes. I assume you haven't set up any NetworkPolicies, which means all Ingress and Egress communication is open and nothing will block access, at least from the K8s perspective.
However, if you have only deployed your application but haven't exposed it yet (with an Ingress or a Service of type LoadBalancer), you will not be able to reach your application from outside the cluster. If you're running on-prem you will need to install MetalLB or some other solution that lets you create Services of type LoadBalancer. The same goes for Ingress, as the ingress controller will need that kind of access in the first place.
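To make the default-open point concrete: you would only need something like the following if you later introduce NetworkPolicies that restrict egress. This is a hypothetical sketch; the name, namespace, label, and the 10.0.0.5/32 host address are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-vpn-host   # placeholder name
  namespace: my-app                # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: my-app                  # placeholder label selecting your application pods
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.5/32      # the VPN host machine (assumed address)
```

Keep in mind that once a policy selects a pod for Egress, everything not explicitly allowed (including DNS) is denied, so such a rule usually needs a companion rule for cluster DNS.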

kubernetes load balancer same ip address different ports

I have 3 different services, and their service type is LoadBalancer. Each one has a different external IP. However, I want to use one IP address for all of them, with different ports, as the external IP. Is that possible?
The only way I am aware of to have the same IP across different services is to use an Ingress. But depending on the controller implementation, you may only be able to use ports 80/443.
You can implement it using an Ingress:
Deploy your applications using the Deployment workload type.
Expose your Deployments using Services.
Install an ingress controller (you can use the NGINX ingress controller).
Create your Ingress resource to route each request to a particular Service based on a particular context, for example a URL path (a sketch follows below).
Here is the reference from the Kubernetes documentation, which elaborates on it clearly: https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout
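As a hedged illustration of that last step, here is a minimal fanout-style Ingress along the lines of the linked documentation; the host, paths, Service names, and ports are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-example
spec:
  ingressClassName: nginx          # assumes the NGINX ingress controller is installed
  rules:
    - host: apps.example.com       # placeholder host
      http:
        paths:
          - path: /service1        # requests under /service1 go to service1
            pathType: Prefix
            backend:
              service:
                name: service1     # placeholder Service name
                port:
                  number: 8080
          - path: /service2        # requests under /service2 go to service2
            pathType: Prefix
            backend:
              service:
                name: service2     # placeholder Service name
                port:
                  number: 8080
```

All the backends then share the single IP of the ingress controller's Service, differentiated by path (or by host) rather than by port.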

Service-to-Service Communication in Kubernetes

I have deployed my Kubernetes cluster on EKS. I have ingress-nginx exposed via a load balancer to route traffic to different services. In ingress-nginx, the first request goes to an auth service for authentication, and if it is a valid request I allow it to move forward.
Let's say the request is in Service 1 and now, from there, it wants to communicate with Service 2. I would like my request to go directly to the ingress, not via the load balancer, and then from the ingress to Service 2.
Is it possible to do so?
Will it help improve performance, since I bypassed the load balancer?
As the request is not moving through the load balancer, load balancing won't take place; is that a serious concern?
1/ Is it possible: short answer, no.
There are edge cases, but they would require someone to create another Ingress object exposing Service 2 in the first place. Then you could trick the Ingress into routing you to some service that might not otherwise be reachable (if the DNS doesn't exist, some VIP was not yet exposed, ...).
There's no real issue with external clients bypassing the ELB, as long as they cannot reach all the ports on your nodes, just the ones bound by your ingress controller.
2/ Bypassing the load balancer: it won't change much in terms of performance.
If we're talking about a TCP load balancer, removing it would help track real client IPs, though. Figuring out how to switch to an HTTP load balancer may be a better fix -- though not always easy.
3/ Removing the load balancer: if you have several nodes hosting replicas of your ingress controller, you would still be able to do some kind of DNS-based load balancing. Though for sure, it's not the same as having a real LB.
In AWS, you could find a middle ground by setting up health-check-based Route 53 records: create one for each node hosting an ingress controller, create another regrouping all healthy ingress nodes, then change your existing ingress FQDN records so they all point to your new Route 53 name. You would be able to do TCP/HTTP checks against EC2 instance IPs, which is usually good enough. But again: DNS load balancing can suffer from outdated browser caches, some ISPs not refreshing zones, ... an LB is the real thing.
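Purely as a sketch of that middle ground (resource names, the IP, and the zone are placeholders, and weighted routing is just one of the Route 53 policies that accept health checks), one health-checked record per ingress node could look roughly like this in CloudFormation:

```yaml
Resources:
  IngressNode1HealthCheck:
    Type: AWS::Route53::HealthCheck
    Properties:
      HealthCheckConfig:
        Type: TCP                      # TCP check against the node's ingress port
        IPAddress: 203.0.113.11        # placeholder EC2 instance IP
        Port: 443
  IngressNode1Record:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.     # placeholder zone
      Name: ingress.example.com.       # the shared name your ingress FQDNs would point at
      Type: A
      TTL: "60"
      SetIdentifier: ingress-node-1    # one record per ingress node
      Weight: 1                        # equal weights spread traffic across healthy nodes
      HealthCheckId: !Ref IngressNode1HealthCheck
      ResourceRecords:
        - 203.0.113.11
```

You would repeat the health-check/record pair for each ingress node, then point your existing application FQDNs at ingress.example.com.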

How to expose multiple services on the same port in Kubernetes using OpenStack

I have a Kubernetes cluster on a private cloud based on the OpenStack. My service is required to be exposed on a specific port. I am able to do this using NodePort. However, if I try to create another service similar to the first one, I am not able to expose it since I have to use the same port and it is already occupied by the first one.
I've noticed that I can use LoadBalancer in public clouds for this, but I assume this is not possible in OpenStack?
I also tried to use a Kubernetes Ingress Controller, but it did not work. However, I am not sure if I went about it the correct way.
Is there any other way, besides LoadBalancer or Ingress, to do this? (My first assumption was that if I dedicate my pods to specific nodes, then I should be able to expose each of the services on the same port on different nodes, but this approach also did not work.)
Please let me know if you have any thoughts on this.
You have to set up the OpenStack Cloud Provider: basically, this Deployment will watch for LoadBalancer Services and will provide an {internal,external} IP address you can use to interact with your application, even at L4 and not only (sic) L7 as with many Ingress Controller resources.
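Once the cloud provider is in place, a plain LoadBalancer Service is enough for OpenStack to hand you an address; a minimal sketch (the name, selector, and port are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb                  # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app                    # placeholder label matching your pods
  ports:
    - name: tcp-app
      port: 9000                   # the specific port you need to expose (assumed)
      targetPort: 9000
```

Because each Service gets its own load-balancer address, two Services can both expose port 9000 without colliding, unlike with NodePort.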
If you want to expose everything on only one port, then to the best of my knowledge the only answer is an ingress controller. The two most famous ones are NGINX and Traefik. I agree that setting up an ingress controller can be difficult, and I have had problems with them before, but you have to solve the issues one by one.
Another thing you can do is build your own ingress controller. What I mean is to use a reverse proxy such as NGINX, configure it to reroute the traffic based on your topology, and then expose this reverse proxy so all the traffic goes through it. But this should be done only if you need something very customized; see the sketch below.
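A rough sketch of that custom approach, assuming NGINX as the proxy and purely hypothetical upstream Service names: the nginx.conf lives in a ConfigMap that you would mount into an NGINX Deployment and expose via a single NodePort or LoadBalancer Service.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-proxy-config        # placeholder name
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 8080;               # the single port you expose externally
        location /app1/ {
          # placeholder in-cluster Service names below
          proxy_pass http://app1.default.svc.cluster.local:80/;
        }
        location /app2/ {
          proxy_pass http://app2.default.svc.cluster.local:80/;
        }
      }
    }
```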

Exposing multiple services (or service instances) via IPVS load balancer on Kubernetes

I have an app that I want to run on Kubernetes (it currently runs on AWS ECS). The app has two TCP ports, neither of which is HTTP. One port, say APORT, is common across all the app instances (replicas) and should be load balanced. The other, let's call it BPORT, is specific to this particular instance of the app, i.e. pod/container specific.
Now here is my problem: the app registers its BPORT with an external controller, and the controller should be able to reach the app via that port. I can use a NodePort Service to expose BPORT on an external IP. From my pod, I will obtain the value of that NodePort and register it with the external controller.
However, a Service only assigns a single NodePort across all replicas, so if I want multiple replicas, I have to run multiple Services.
Running multiple Services presents a problem on the APORT side, as this port should be load balanced, ideally sitting behind IPVS, and as far as I understand IPVS does not allow load balancing across multiple Services.
Another wrinkle: ideally I would like to be able to add more replicas and scale this whole thing without service interruptions/restarts.
Any ideas? Thanks!
Kubernetes doesn't really handle this, or at least it doesn't get involved. You can use a Service object for APORT like normal, but for BPORT you wouldn't use a Service at all; things would have to use the pod IP directly, just as if these were servers rather than containers.
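As a sketch of the BPORT side (the name, image, and port numbers are placeholders): the downward API can hand each replica its own pod IP so it can register BPORT with the external controller, assuming pod IPs are routable from wherever that controller sits (true with some CNIs, e.g. VPC-native setups; otherwise hostPort/hostNetwork is the usual workaround).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:latest   # placeholder image
          ports:
            - containerPort: 7000  # APORT (assumed value), load balanced via a normal Service
            - containerPort: 9000  # BPORT (assumed value), reached directly on the pod IP
          env:
            - name: POD_IP         # the app reads this and registers <POD_IP>:9000 externally
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
```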