Why is nginx-ingress-controller config different in different clusters? - kubernetes

I use pathType: ImplementationSpecific for many routes in an ingress.
The final nginx-ingress-controller configs for two clusters:
location ~* /some/route/(?!one|two|three).{1,} # one cluster
location /some/route/(?!one|two|three).{1,} # other cluster
The second one is wrong because it is a regex route but ~* is missing.
The nginx-ingress-controller versions match in both environments.
The use-regex annotation is NOT used in any of the environments.
From the docs I read that ImplementationSpecific depends on the ingress class and I am not sure what that means.
I didn't find any configuration that could explain this behaviour and difference between the configs.
Why is nginx-ingress-controller config different in different clusters?

The generated nginx-ingress-controller configuration depends on how the controller is deployed and configured in each cluster.
When running the NGINX Ingress Controller, you have the following options with regard to which configuration resources it handles:
Cluster-wide Ingress Controller (default). The Ingress Controller handles configuration resources created in any namespace of the cluster. As NGINX is a high-performance load balancer capable of serving many applications at the same time, this option is used by default in our installation manifests and Helm chart.
Single-namespace Ingress Controller. You can configure the Ingress Controller to handle configuration resources only from a particular namespace, which is controlled through the -watch-namespace command-line argument. This can be useful if you want to use different NGINX Ingress Controllers for different applications, both in terms of isolation and/or operation.
Ingress Controller for Specific Ingress Class. This option works in conjunction with either of the options above. You can further customize which configuration resources are handled by the Ingress Controller by configuring the class of the Ingress Controller and using that class in your configuration resources. See the section Configuring Ingress Class.
For more information refer to this document.
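As a sketch of "Configuring Ingress Class": the controller is started with a class (for example via the --ingress-class flag on ingress-nginx, or an IngressClass object), and each Ingress selects that class either through spec.ingressClassName or the older kubernetes.io/ingress.class annotation. The class name nginx-internal and the backend service below are assumptions for illustration only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Older controller versions select resources via this annotation
    # instead of spec.ingressClassName.
    kubernetes.io/ingress.class: "nginx-internal"
spec:
  ingressClassName: nginx-internal  # must match the class the controller watches
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80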
Some use cases for this might be:
An Ingress Controller that is behind an internal ELB for traffic between services within the VPC (or a group of peered VPCs)
An Ingress Controller behind an ELB that already terminates SSL
An Ingress Controller with different functionality or performance
Most NGINX configuration options have NGINX-wide defaults. They can
also be overridden on a per-Ingress resource level.
For more information refer to this document.
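Coming back to the original question: pathType: ImplementationSpecific delegates path matching to the controller, and with the kubernetes/ingress-nginx controller a path is normally only rendered as a regex location (with the ~* modifier) when regex handling is enabled, for example through the use-regex or rewrite-target annotations. A minimal sketch under that assumption; host, path and service name are illustrative only:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: regex-route
  annotations:
    # Ask ingress-nginx to treat the paths of this Ingress as regular expressions.
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: "/some/route/(?!one|two|three).+"
            pathType: ImplementationSpecific
            backend:
              service:
                name: some-service
                port:
                  number: 80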

Related

Kubernetes with route fanout - Basic understanding of Service setup

I have questions about my basic understanding of the setup of my k8s cluster.
I have a K8s cluster running on Hetzner Cloud and allocated a "physical" LoadBalancer (which can be controlled via annotations on a Service).
I use nginx (or traefik) as my ingress controller.
Please correct me if I am wrong:
I create the service Loadbalancer with the annotations in the same namespace of my ingress-controller right?
Then I create an ingress with label kubernetes.io/ingress-controller=nginx in my default namespace with the settings to point to my services in the default namespace (one for frontend, one for backend)
Is this the correct way to set this up?
1. No. The Ingress Controller and your workload don't have to be in the same namespace. In fact, you will usually have the Ingress Controller running in a separate namespace from your workload.
2. Yes. Generally speaking, your Ingress rules (meaning your Ingress object, your Ingress YAML) and your Service must be in the same namespace, so an Ingress can't cross namespace boundaries.
Note: There is a way to have an Ingress object send traffic to a Service in a different namespace.
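One way to do that (a sketch, not the only option; every name here is hypothetical) is to put an ExternalName Service in the Ingress's namespace that resolves to the Service in the other namespace:
apiVersion: v1
kind: Service
metadata:
  name: backend-proxy        # lives in the same namespace as the Ingress
  namespace: default
spec:
  type: ExternalName
  # Resolves to the Service "backend" in the "backend-ns" namespace.
  externalName: backend.backend-ns.svc.cluster.local
The Ingress can then use backend-proxy as its backend service name.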
I create the service Loadbalancer with the annotations in the same
namespace of my ingress-controller right?
No. Ideally your ingress controller runs in a different namespace, one in which your workload is not running.
You should keep only the Nginx controller's Service as type LoadBalancer; the other Services of your workload should be ClusterIP, as sketched below.
That way all your traffic enters the cluster through a single point. Your flow will be something like
DNS > LB > Ingress > Service > Pods > Container
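A sketch of that layout for a frontend workload (all names are illustrative): only the ingress controller's Service is of type LoadBalancer, while the workload Services stay ClusterIP and are reached through the Ingress.
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: default
spec:
  type: ClusterIP            # only reachable inside the cluster; exposed via the Ingress
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080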
Then I create an ingress with label
kubernetes.io/ingress-controller=nginx in my default namespace with
the settings to point to my services in the default namespace (one for
frontend, one for backend)
You mentioned a label; ideally it should be an annotation, kubernetes.io/ingress.class: nginx (newer Kubernetes versions use the spec.ingressClassName field instead).
Yes, that's correct. You can create different Ingress resources with different annotations and rules, as per your requirements, for the different services that you want to expose publicly.
Keep your workload in the default namespace; for the controller you can use a different namespace such as ingress-controller. Likewise, if you later need to set up monitoring tools, you can create a dedicated namespace and use it for monitoring only.
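A minimal sketch of such an Ingress in the default namespace (host, service names and ports are assumptions for illustration):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80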

Is it possible to have multiple ingress resources with a single GKE ingress controller

In GKE Ingress documentation
it states that:
When you create an Ingress object, the GKE Ingress controller creates a Google Cloud HTTP(S) Load Balancer and configures it according to the information in the Ingress and its associated Services.
To me it seems that I cannot have multiple ingress resources with a single GCP ingress controller. Instead, GKE creates a new ingress controller for every ingress resource.
Is this really so, or is it possible to have multiple ingress resources with a single ingress controller in GKE?
I would like to have one GCP LoadBalancer as ingress controller with static IP and DNS configured, and then have multiple applications running in cluster, each application registering its own ingress resource with application specific host and/or path specifications.
Please note that I'm very new to GKE, GCP and Kubernetes in general, so it might be that I have misunderstood something.
I think the question you're actually asking is slightly different than what you have written. You want to know if multiple Ingress resources can be linked to a single GCP Load Balancer, not GKE Ingress controller. Based on the concept of a controller, there is only one GKE Ingress controller in a cluster, which is responsible for fulfilling multiple resources and provisioning multiple load balancers.
So, to answer the question directly (because I've been searching for a straight answer for a long time!):
Combining multiple Ingress resources into a single Google Cloud load
balancer is not supported.
Source: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
Sad.
However, using the nginx-ingress controller is one way to at least minimize the number of external (GCP) load balancers provisioned (it only provisions a single TCP load balancer), but since the load balancer is for TCP traffic, it cannot terminate SSL, or apply Firewall rules for you (Cloud Armor cannot be used, for instance).
The only way I know of to have a single HTTPS load-balancer in GCP terminate SSL and route traffic to multiple services in GKE is to combine the ingresses into a single resource with all paths and certificates defined in one place.
(If anybody figures out a way to do it with multiple separate ingress resources, I'd love to hear it!)
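For illustration, such a single combined resource might look roughly like the sketch below (hosts, TLS secrets, service names and the static IP name are all assumptions); on GKE the default Ingress class turns this one resource into one HTTP(S) load balancer:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: combined-ingress
  annotations:
    # Assumes a global static IP reserved in advance under this name.
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  tls:
    - hosts: [app1.example.com]
      secretName: app1-tls
    - hosts: [app2.example.com]
      secretName: app2-tls
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 80
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80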
Yes, it is possible to have a single ingress controller for multiple ingress resources.
You can create multiple ingress resources as per your path requirements, and all of them will be managed by a single ingress controller.
There are also multiple ingress controller options available; you can use Nginx, for example, which will create one LB and manage the paths.
Inside Kubernetes, if you create a Service with type LoadBalancer, it will create a new LB resource in GCP, so make sure your microservices' Services are of type ClusterIP and all your traffic goes into the K8s cluster via the ingress path.
When you set up the ingress controller, it will create one Service with type LoadBalancer; you can use that IP in your DNS records to forward the subdomains and paths to the K8s cluster.
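For example, assuming the controller was installed with the common defaults (namespace ingress-nginx, Service ingress-nginx-controller; both depend on how you installed it), the external IP for your DNS records can be read with:
kubectl get service ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'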

Complex Ingress Nginx Config(Nginx Ingress maintained by Kubernetes)

We have a microservices architecture. We are planning to move this to a Kubernetes cluster with Docker as the container runtime (on premise, no cloud).
Now I am able to figure out everything but one thing is not clear.
Basically we have around 10 aggregators which we have exposed via Nginx, so we are planning to use NGINX Ingress (the project maintained by Kubernetes).
My doubt is that we currently have a complex Nginx config: different log files for different domains, custom header generation, Nginx caching with purging logic backed by Persistent Volumes, etc. Currently, we have 5-6 config files for Nginx.
Is all of this possible via Ingress? From what I have read, we can't directly provide an Nginx conf; we have to provide all config via the ingress only? Also, is it possible to break the ingress config into multiple files?
If yes, can someone provide some reference?
Remember that you have to have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect. In your case you need to deploy an Ingress controller such as ingress-nginx.
You can have multiple ingress rules for the same hostname with different paths. You can spread the Ingress configuration for a common host across multiple Ingress resources using Mergeable Ingress resources. Such resources can belong to the same or different namespaces. This enables easier management when using a large number of paths. See the Mergeable Ingress Resources example on the NGINX Ingress Controller GitHub.
As an alternative to Mergeable Ingress resources, you can use VirtualServer and VirtualServerRoute resources for cross-namespace configuration. See the Cross-Namespace Configuration example on the NGINX Ingress Controller GitHub.
Take a look: cross-namespace-configuration/, ingress-controller-configmap.
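As a rough sketch of the ConfigMap approach (assuming the kubernetes/ingress-nginx controller; the ConfigMap name and namespace must match whatever the controller's --configmap flag points at, so the ones below are only common defaults), global NGINX behaviour such as log formats and timeouts is set through ConfigMap keys:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Custom access-log format applied to the generated nginx.conf.
  log-format-upstream: '$remote_addr - $host [$time_local] "$request" $status'
  # Global proxy read timeout in seconds.
  proxy-read-timeout: "120"
Per-Ingress tweaks, for example adding custom headers, are usually done through annotations such as nginx.ingress.kubernetes.io/configuration-snippet rather than by editing nginx.conf files directly.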

Can you have multiple ingresses that use the same LoadBalancer?

I don't know if I missed something, but I can't seem to find any posts/docs related to my question. Maybe I misunderstand the Ingress type in Kubernetes, but is it possible to define multiple ingresses that use the same LoadBalancer? Having to start one LoadBalancer for every ingress is costly.
One of the benefits of using ingress is that it helps avoid creating an external LoadBalancer for each LoadBalancer-type Service. On many cloud providers, some ingress controllers will create the corresponding external load balancer resource for each ingress resource. But with the Nginx Ingress controller you need only one LoadBalancer, to expose the Nginx Ingress controller itself. Then you can create multiple ingress resources with multiple backends, and all the backends are served by the same external load balancer.
From the docs of Nginx Ingress
In this section you can find a common usage scenario where a single
load balancer powered by ingress-nginx will route traffic to 2
different HTTP backend services based on the host name
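For example, two completely separate Ingress resources like the sketch below (hosts and service names are assumed) are both picked up by the same controller and therefore share its single external load balancer:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog
                port:
                  number: 80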

what is an ingress controller and how do I create it?

Good morning guys, so I took down a staging environment for a product on GCP and ran the deployment scripts again; the backend and frontend services have been set up. I have an ingress resource and a load balancer up, however the service is not running. A look at the production app revealed there was something like an nginx-ingress-controller. I really don't understand all of this and how it was created. Can someone help me understand, because I have not seen anything online that makes it clear for me. Am I missing something?
loadBalancer: https://gist.github.com/davidshare/5a571e56febe7dacd580282b373f3095
Ingress Resource: https://gist.github.com/davidshare/d0f53912bc7da8310ec3d64f1c8a44f1
Ingress allows access to your Kubernetes services from outside the Kubernetes cluster. There are other Kubernetes (aka K8s) resources you can alternatively use to expose them, such as NodePort or LoadBalancer Services.
Ingress is a resource independent of your Service; you can specify routing rules declaratively, so each URL with some context path can be mapped to a different service.
This makes it decoupled and isolated from the services you want to expose.
For Ingress to work, the cluster needs an Ingress Controller.
Like a Deployment resource in K8s, an Ingress can be created simply by
kubectl create -f ingress.yaml
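where a minimal ingress.yaml could look like this (host and service name are placeholders for illustration):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80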
First, you have to implement an Ingress Controller in order for an Ingress resource to have any effect, as described in @Shubhu's answer. The Ingress controller, acting as an edge router, applies a specific logical structure with the aim of routing external traffic to your Kubernetes cluster's underlying services via the basic pattern-matching routing rules defined in the Ingress resource.
If you select the Nginx Ingress Controller, it might be useful to follow its installation guide, which covers specific prerequisites depending on the cloud provider environment. To simplify the Nginx Ingress Controller installation procedure, it is also possible to use the Helm package manager and install the appropriate stable/nginx-ingress Helm chart.
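For example (chart locations have changed over time and the old stable/nginx-ingress chart is deprecated, so the repository and chart name below assume the current kubernetes/ingress-nginx chart):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace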