How to expose a single REST API endpoint outside the cluster in Kubernetes?

I am new to Kubernetes and, if I am not wrong, a service can be exposed inside the cluster using ClusterIP, and to the outside world using the NodePort or LoadBalancer types. But my requirement is this: I have a single container that serves a few REST APIs. I want one API (the basic health-check API) to be exposed to the outside, and the rest of the APIs to be available only within the cluster (accessible from the other nodes). How can I achieve this?

You can keep your service as ClusterIP and use an Ingress.
With an Ingress and an ingress controller, you can expose just the desired path outside the cluster.
For the Ingress, you can install the NGINX ingress controller and create an Ingress resource.
Read more about the NGINX ingress controller setup and the Ingress resource setup.
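For example, here is a minimal sketch of such an Ingress, assuming the NGINX ingress controller is installed, the ClusterIP service is named my-api on port 8080, and the health-check endpoint is /healthz (all names here are placeholders for your own):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api-health            # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /healthz           # only this path is published outside the cluster
        pathType: Exact
        backend:
          service:
            name: my-api         # your ClusterIP service
            port:
              number: 8080

Because no other paths are referenced in any Ingress rule, the remaining APIs stay reachable only through the ClusterIP service inside the cluster.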

Related

Exposing a Service from a bare-metal (kubeadm) Kubernetes cluster to the outside world

Exposing a Service from a bare-metal (kubeadm) Kubernetes cluster to the outside world: I am trying to access my NGINX service from outside the cluster and see the NGINX output in a web browser.
For that, I have created a deployment and a service for NGINX as shown below.
From my search, I found the following options to expose a service to the outside world:
MetalLB
Ingress NGINX
Some Helm resources
I would like to learn about all three of these, or any other approaches, in a way that helps me learn new things.
GOAL
Expose a Service from a bare-metal (kubeadm) Kubernetes cluster to the outside world.
How can I give my service its own public IP that is reachable from outside the cluster?
You need to set up MetalLB to get an external IP address for LoadBalancer-type services. It will assign a local-network IP address to the service.
Then you can configure port mapping on your router so that incoming traffic on ports 80 and 443 is forwarded to that external service IP address.
I have done a similar setup; you can check it out in detail here:
https://developerdiary.me/lets-build-low-budget-aws-at-home/
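As a rough sketch of the MetalLB side, assuming MetalLB v0.13+ with its CRD-based configuration and a free address range on your LAN (both of which you must adjust):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool                  # hypothetical name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250    # a free range on your local network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2                    # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - home-pool

Any Service of type LoadBalancer will then get an address from this pool, and that is the address you forward ports 80/443 to on your router.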
You need to deploy an ingress controller in your cluster so that it gives you an entry point where your applications can be accessed. Traditionally, in a cloud-native environment, it would automatically provision a LoadBalancer for you that reads the rules you define inside your Ingress objects and routes each request to the appropriate service.
One of the most commonly used ingress controllers is the NGINX Ingress Controller. There are multiple ways to deploy it (manifests, Helm, operators). For bare-metal clusters there are several additional considerations, which you can read about here.
MetalLB is still in beta, so it is your choice whether you want to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can expose it as a NodePort Service, which will be accessible on every node in the cluster, as sketched below. You can then point your DNS at that NodePort Service so that the ingress rules are evaluated.
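A hedged sketch of exposing the controller as a NodePort Service (labels and ports are placeholders and depend on how the controller was installed):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-nodeport
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx       # must match your controller pods
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080                             # reachable on every node at this port
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443

You can then point a DNS record at one or more node IPs and reach the ingress rules on ports 30080/30443, or put an external proxy in front to map 80/443 to them.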

Kubernetes: which comes first, a LoadBalancer or an Ingress?

The question is straightforward, but I have not been able to figure out which steps a request follows when it reaches a Kubernetes cluster.
What handles a received request first? The Ingress controller, the LoadBalancer, the ClusterIP...?
So, I know there are several ways to make pods externally accessible:
Creating a NodePort service.
Creating a LoadBalancer service.
Creating an Ingress rule.
Some questions here related to best practices and what is mandatory:
Is an Ingress in front of a ClusterIP Service mandatory?
1.1 Could or shouldn't I create an Ingress in front of a NodePort or a LoadBalancer service?
Are Ingress Controllers LoadBalancer Services? I mean, are Traefik and other Ingress Controllers all deployed as LoadBalancer services?
My confusion arises from several texts I have found:
One image suggests the LoadBalancer is placed in front of the Ingress Controller.
Another image suggests the Ingress is in front of a LoadBalancer.
The questions above arise from an attempt to expose a MongoDB replica set externally.
I've created a LoadBalancer for each node. Is this correct?
I'd like to create a domain using my Ingress Controller for those LoadBalancers. Is this possible?
Is there any point in creating an Ingress in front of a headless service?
Is an Ingress in front of a ClusterIP Service mandatory?
If you want the service accessible externally, then you will need an externally accessible service. This can be a LoadBalancer service or an Ingress. A ClusterIP service is not accessible outside the cluster.
Could or shouldn't I create an Ingress in front of a NodePort or a LoadBalancer service?
You can create an Ingress in front of a NodePort or a LoadBalancer service, but there is no point in putting an Ingress in front of a LoadBalancer unless you want two different endpoints for accessing the same service (the LoadBalancer will get one IP and the Ingress Controller's own LoadBalancer will get another). However, using an Ingress gives you additional functionality, such as SSL certificates, which the standard LoadBalancer service resource does not (normally) provide.
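As an illustration of the TLS point, a sketch of an Ingress terminating SSL in front of a plain ClusterIP Service (host, secret, and service names below are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-tls    # a kubernetes.io/tls Secret holding the certificate and key
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web              # plain ClusterIP service
            port:
              number: 80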
Are Ingress Controllers LoadBalancer Services? I mean, are Traefik and other Ingress Controllers all deployed as LoadBalancer services?
Correct. An Ingress controller opens an endpoint for traffic into the cluster, and then uses the ingress resources you create in the cluster to determine how and where to route the traffic.
The endpoint is publicly accessible (unless you configure it as an internal load balancer, in which case only machines within your corporate network will be able to access it).
The controller will normally update the Ingress resource in your cluster, so you will see the IP of the load balancer belonging to the Ingress.
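For example, assuming an Ingress named web-ingress and the NGINX ingress controller installed in the ingress-nginx namespace, you can read that address back with:

kubectl get ingress web-ingress    # the ADDRESS column shows the load balancer IP
kubectl get svc -n ingress-nginx   # the controller's own Service of type LoadBalancer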

Azure Kubernetes Service: using an NGINX ingress controller with an Ocelot-based API gateway

I am planning to deploy to an AKS cluster and use an NGINX ingress controller, so that my micro-services will be internal to the cluster and the NGINX ingress controller will be the entry point to the micro-services.
One of my micro-services acts as an API gateway using the Ocelot library, and it implements the BFF pattern. So my ingress controller will have only one rule which will route requests made to the path "/(.*)" to the API gateway micro-service.
My question is - is this the conventional way to use an ingress controller and an API gateway micro-service? Somehow it feels redundant, although I could think that both have different responsibilities.
I don't think you need an ingress controller in this case. We use an API gateway, Ambassador, and we simply have a public IP assigned to its Kubernetes Service.
If you don't expect other pods to expose themselves using Ingress objects, and all traffic will come in through your API gateway, I would simply drop the ingress controller and use a Service of type LoadBalancer for your API gateway pods.
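A minimal sketch of that approach, assuming the Ocelot gateway pods carry the label app: api-gateway and listen on port 80 (both are hypothetical and must match your deployment):

apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  type: LoadBalancer
  selector:
    app: api-gateway      # must match your gateway pods
  ports:
  - port: 80
    targetPort: 80

On AKS this provisions an Azure load balancer with a public IP in front of the gateway pods, with no Ingress resources involved.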

Why do we need a load balancer to expose kubernetes services using ingress?

For a sample microservice-based architecture deployed on Google Kubernetes Engine, I need help validating my understanding:
We know Services are supposed to load-balance traffic across the pods of a ReplicaSet.
When we create an NGINX ingress controller and Ingress definitions to route to each Service, a load balancer is also set up automatically.
I had read somewhere that creating an NGINX ingress controller means an NGINX controller (Deployment) and a LoadBalancer-type Service get created behind the scenes. I am not sure if this is true.
It seems load balancing is being done by Services, and URL-based routing is being done by the ingress controller.
So why do we need a load balancer? It is not meant to load-balance across multiple instances; it will just forward all the traffic to the NGINX reverse proxy, which will route requests based on URL.
Please correct me if my understanding is wrong.
A Service of type LoadBalancer and an Ingress are both ways to reach your application externally, although they work in different ways.
Service:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector (see below for why you might want a Service without a selector).
There are several types of Service, and one of them is the LoadBalancer type, which lets you expose your application externally by assigning an external IP to the Service. A new external IP is assigned for each LoadBalancer Service.
The load balancing will be handled by kube-proxy.
Ingress:
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
When you set up an ingress (e.g. nginx-ingress), a Service of type LoadBalancer is created for the ingress-controller pods, a load balancer is automatically created in your cloud provider, and a public IP is assigned to the nginx-ingress Service.
This load balancer / public IP is used for incoming connections to all your services, and nginx-ingress is responsible for handling those incoming connections.
For example:
Suppose you have 10 Services of type LoadBalancer: this results in 10 new public IPs being created, and you need to use the corresponding IP for each service you want to reach.
But if you use an Ingress, only one IP is created, and the Ingress is responsible for routing each incoming connection to the correct service based on the path/URL you defined in the Ingress configuration (see the sketch after this list). With an Ingress you can:
Use regex in the path to select the service to route to;
Use SSL/TLS;
Inject custom headers;
Redirect requests to a default service if one of the services fails (default backend);
Create whitelists based on IPs;
Etc...
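A hedged sketch combining a few of these features, using annotations I believe the NGINX ingress controller supports (hosts, secrets, and services are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"                      # regex paths
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"   # IP whitelist
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - demo.example.com
    secretName: demo-tls
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-service
            port:
              number: 80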
An important note about load balancing with Ingress:
GCE/AWS load balancers do not provide weights for their target pools. This was not an issue with the old LB kube-proxy rules which would correctly balance across all endpoints.
With the new functionality, the external traffic is not equally load balanced across pods, but rather equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability for specifying the weight per node, they balance equally across all target nodes, disregarding the number of pods on each node).
An ingress controller's pods (NGINX, for example) need to be exposed outside the Kubernetes cluster as the entry point for all north-south traffic coming into the cluster. One way to do that is via a LoadBalancer. You could use NodePort as well, but it is not recommended for production; alternatively, you could deploy the ingress controller directly on the host network of a host with a public IP. Having a load balancer also gives you the ability to spread traffic across multiple replicas of the ingress controller pods.
When you use an ingress controller, traffic comes from the load balancer to the ingress controller and then goes to the backend pod IPs based on the rules defined in the Ingress resource. This bypasses the Kubernetes Service and the layer-4 load balancing done by kube-proxy: internally, the ingress controller discovers all the pod IPs from the Service's endpoints and routes traffic directly to the pods.
It seems load balancing is being done by Services, and URL-based routing is being done by the ingress controller.
Services do balance the traffic between pods, but by default (the ClusterIP type) they are not accessible from outside the cluster in Google Kubernetes Engine. You can create Services of type LoadBalancer, but each one gets its own IP address (a Network Load Balancer), so this can get expensive. Also, if you have one application made up of several services, it is much better to use Ingress objects, which provide a single entry point. When you create an Ingress object, the Ingress controller (e.g. the nginx one) creates a Google Cloud HTTP(S) load balancer. An Ingress object, in turn, can be associated with one or more Service objects.
Then you can get the assigned load balancer IP from ingress object:
kubectl get ingress ingress-name --output yaml
As a result, your application pods become accessible from outside the Kubernetes cluster:
LoadBalancerIP/url1 -> service1 -> pods
LoadBalancerIP/url2 -> service2 -> pods
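A sketch of the Ingress behind that diagram, assuming two hypothetical services service1 and service2 listening on port 80:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /url1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /url2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80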

How to make a Kubernetes "LoadBalancer" service point to an Ingress controller?

I've been using Kubernetes' LoadBalancer type of service for incoming traffic on AWS. However, it is hard to terminate SSL at the service level, hence the idea of using an Ingress.
However, a LoadBalancer service allows us to make as many rolling changes as we like to our deployments without having to reconfigure our DNS. With an Ingress you can only use NodePort, and while we would like to use an Ingress, re-mapping DNS to a new node whenever a pod is scheduled onto another node is a problem.
Is there a way to point Kubernetes at an Ingress controller, or to use a Service of type LoadBalancer together with an Ingress controller, to terminate SSL?
We do not want to put our SSL certificates in a container, which is the reason for all this trouble.
Is there a way to point Kubernetes at an Ingress controller, or to use a Service of type LoadBalancer together with an Ingress controller, to terminate SSL?
You can simply deploy the on-metal ingress controllers (nginx, haproxy, traefik...) as a Pod/DaemonSet/ReplicationController in your cluster and front them with a Service of type=LoadBalancer. You can find these controllers in various places, for example: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx, https://libraries.io/go/github.com%2Ftimelinelabs%2Fromulus, https://github.com/containous/traefik/blob/fa25c8ef221d89719bd0c491b66bbf54e3d40438/docs/toml.md#kubernetes-ingress-backend
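For instance, a sketch of that fronting Service, assuming the controller pods are labelled app: nginx-ingress-controller and listen on ports 80/443 (the label and ports depend on how you deployed the controller):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller    # must match the controller pods/daemonset
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443

On AWS this provisions an ELB in front of the controller, and TLS can then be terminated by the controller from a Kubernetes TLS Secret referenced in your Ingress rules, so the certificates never have to be baked into your application containers.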