I have a question about migrating from the Nginx controller to the ALB. During the migration, will k8s create a new ingress controller and switch services over to the new ingress smoothly, or will it delete the old one and only then create the new ingress? I ask because we want to change the ingress class and would like to minimize downtime. Sorry for the newbie question, but I didn't find an answer in the docs.
First, when transitioning from one piece of infrastructure to another, it's best to pre-build the new infrastructure ahead of the transition so it is ready when you switch over.
In this specific example, you can set up the two IngressClasses to exist in parallel, and create the new ALB ingress with a different domain name.
At the moment of transition, change the DNS alias record (directly or using annotations) to point at the new ALB ingress, then delete the old Nginx ingress.
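A minimal sketch of what the parallel setup could look like (the hostnames, the temporary test domain and the my-app Service are hypothetical, and the alb class assumes the AWS Load Balancer Controller is installed):

```yaml
# Existing ingress, still served by the Nginx controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-nginx
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
---
# New ingress, served by the AWS Load Balancer Controller in parallel
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: app-new.example.com   # temporary hostname for testing before the DNS switch
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```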
In general, I recommend managing the ALB not as an Ingress from K8s, but as an AWS resource in Terraform/CloudFormation or similar, and using TargetGroupBindings to connect the ALB to the application through its K8s Services.
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/targetgroupbinding/targetgroupbinding/
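For illustration, a minimal TargetGroupBinding sketch, assuming the ALB and its target group already exist outside the cluster (the ARN, names and ports below are placeholders):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-app-tgb
spec:
  serviceRef:
    name: my-app        # existing Service in the same namespace
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-app/0123456789abcdef
  targetType: ip        # register Pod IPs directly; 'instance' targets the node ports instead
```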
Related
I am learning ingress and ingress controllers. So I understand that I have to do the following tasks:
deploy the ingress controller
create a service account
expose the ingress controller service via NodePort
create Ingress resources to attach services
Now my question is: why do we need a service account? What role should I attach to that service account, and how do I use it?
What you are asking is very generic and may change a lot depending on your setup (microk8s, minikube, bare-metal and so on); there are a lot of considerations to make.
The Nginx Ingress Controller installation guide for example can help you see how much things change between different environments.
It is also a good idea to simply use the installation resources provided in such guides instead of creating your own, simply because the guide's resources are more complete and basically ready to use.
With this said, the reason for the ServiceAccount is that the Ingress Controller Pod needs to be able to access the Kubernetes API. Specifically, it needs to watch resources such as Ingresses (obviously), Services, Pods, Endpoints and more.
Imagine that the user (you) creates (or updates, or deletes) an Ingress resource: the Ingress Controller needs to notice it, parse it, understand what is declared and configure itself to serve the required Services at the configured domains and so on. Similarly, if something changes in the cluster, it may change how the controller needs to serve things.
For example, if you take a look at the Bare-Metal raw YAML definitions of the Nginx Ingress Controller and search for Role, you will notice what it needs and also how it is attached to the other resources.
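As a trimmed-down sketch of what you will find there (the real manifests are more complete; names and namespace here are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
---
# Read access to the resources the controller needs to watch
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "secrets", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "ingressclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update"]
---
# Attach the ClusterRole to the ServiceAccount the controller Pod runs as
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
```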
Lastly, serving the Ingress Controller from a NodePort Service may not be the most resilient way to do it. It's okay for tests and such, but usually what you want is to have the Ingress Controller Pod served at a load-balanced IP address, so that it is resilient to a single node of your cluster going down.
The Nginx Controller Bare-Metal considerations explains it very well.
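For comparison, a rough sketch of exposing the controller through a LoadBalancer Service instead of a NodePort (the selector label is hypothetical and, on bare metal, something like MetalLB is needed to actually assign the address):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # must match the controller Pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```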
We have a multi-tenant application and for each tenant we provision a separate container image.
Likewise we create a subdomain for each tenant which will be redirected to its own container.
There might be a scenario where thousands of tenants exist, and the set is dynamic.
So it has become necessary for us to consider the limitations of ingress controllers for Kubernetes in general before choosing, especially nginx-ingress.
Is there any maximum limit on the number of Ingress resources, or on the number of rules inside an Ingress? Or will there be performance or scaling issues when too many Ingress resources are created?
Is it better to add a new rule (for each subdomain) to the same Ingress resource, or to create a separate Ingress resource for each subdomain?
AFAIK, there are no limits like that; you will either run out of resources or find a choke point first. This article compares the resource consumption of several load balancers.
As for nginx-ingress, there are a few features hidden behind the paid NGINX Plus version, as listed here.
If you wish to have dynamic configuration and scalability, you should try an Envoy-based ingress like Ambassador or Istio.
Envoy offers dynamic configuration updates which will not interrupt existing connections. More info here.
Check out this article, which compares most of the popular Kubernetes ingress controllers.
This article shows a great example of pushing an HAProxy and Nginx combination to its limits.
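Regarding one Ingress with many rules versus one Ingress per subdomain: both end up merged into the controller's configuration, but separate per-tenant Ingress objects are usually easier to create and delete dynamically. A sketch of the per-tenant variant (all names and the domain are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant-42
spec:
  ingressClassName: nginx
  rules:
    - host: tenant-42.example.com        # one subdomain per tenant
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant-42           # the tenant's own Service/container
                port:
                  number: 80
```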
Hope it helps.
I'm trying to figure out which tools from the GKE stack I should apply to my use case, which is dynamic deployment of a stateful application with dynamic HTTP endpoints.
Stateful in my case means that I don't want replicas or load balancing (because the app doesn't scale horizontally at all). I understand, though, that in k8s/GKE nomenclature I'm still going to be using a 'load balancer', even though it will act as a reverse proxy and not actually balance any load.
The use case is as follows. I have some web app where I can request for a 'new instance' and in return I get a dynamically generated url (e.g. http://random-uuid-1.acme.io). This domain should point to a newly spawned, single instance of a container (Pod) hosting some web application. Again, if I request another 'new instance', I'll get a http://random-uuid-2.acme.io which will point to another (separate), newly spawned instance of the same app.
So far I have figured out the following setup. Every time I request a 'new instance' I do the following:
create a new Pod with dynamic name app-${uuid} that exposes HTTP port
create a new Service with NodePort that "exposes" the Pod's HTTP port to the cluster
create or update (if it exists) the Ingress by adding a new http rule where I specify that domain X should point at NodePort X
The Ingress mentioned above uses a LoadBalancer as its controller, which is an automated process in GKE.
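A sketch of what one such per-instance set looks like in this setup (the uuid, image and domain are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-random-uuid-1
  labels:
    app: app-random-uuid-1
spec:
  containers:
    - name: web
      image: acme/web-app:latest      # the single-instance web application
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app-random-uuid-1
spec:
  type: NodePort                      # GKE Ingress backends need NodePort (or NEG) Services
  selector:
    app: app-random-uuid-1
  ports:
    - port: 80
      targetPort: 8080
---
# Shared Ingress; one host rule is appended per instance
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dynamic-apps
spec:
  rules:
    - host: random-uuid-1.acme.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-random-uuid-1
                port:
                  number: 80
```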
A few issues that I've already encountered which you might be able to help me out with:
While the Pod and NodePort Service are separate resources per app, the Ingress is shared. I am thus not able to just create/delete a resource; I'm also forced to keep track of what has been added to the Ingress so I can later append to or delete from its YAML, which is definitely not the way I'd like to do it (i.e. editing YAMLs). Instead I'd probably want something like an Ingress that monitors a specific namespace and creates rules automatically based on Pod labels. Say I have 3 pods with labels app-1, app-2 and app-3; I want the Ingress to automatically monitor all Pods in my namespace and create rules based on their labels (i.e. app-1.acme.io -> reverse proxy to Pod app-1).
Updating the Ingress with a new HTTP rule takes around a minute before traffic reaches the Pod; until then I keep getting 404s even though both the Ingress and the LoadBalancer look 'ready'. I can't figure out what I should watch/wait for to get a clear signal that the Ingress Controller is ready to accept traffic for the newly spawned app.
What would be good practice for managing such a cluster, where you can't strictly define Pod/Service manifests because you are creating them dynamically (with different names, endpoints or rules)? You surely don't want to create and maintain a bunch of YAMLs for every application you spawn. I would imagine something similar to Consul templates, but for k8s?
I participated in a similar project and our decision was to use a Kubernetes client library to spawn instances. The instances were managed by a simple web application, which took some customisation parameters, saved them into its database, and then created an instance. Because of the database, there was no problem with keeping track of what had been created so far. By querying the database we were able to tell whether such a deployment already existed, and to update or delete any associated resources.
Each instance consisted of:
a deployment (single or multi-replica, depending on the instance);
a ClusterIP service (no reason to reserve a machine port with NodePort);
an ingress object for shared ingress controller;
and some shared configMaps.
And we also used external DNS and cert-manager, one to manage DNS records and the other to issue SSL certificates for the ingress. With this setup it took about 10 minutes to deploy a new instance. The pod and ingress controller were ready in seconds, but we had to wait for the certificate, and its readiness depended on whether the issuer's DNS had picked up our new record. This problem might be avoided by using a wildcard domain, but we had to use many different domains, so it wasn't an option in our case.
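A rough sketch of how a per-instance ingress can be wired to those two components (the issuer name, hosts and annotation are illustrative and depend on your cert-manager/external-dns setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: instance-42
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # cert-manager issues the certificate for spec.tls
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - instance-42.example.com
      secretName: instance-42-tls                      # created by cert-manager once the challenge succeeds
  rules:
    - host: instance-42.example.com                    # external-dns creates the DNS record from this host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: instance-42
                port:
                  number: 80
```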
Other than that, you might consider writing a Helm chart and making use of the helm list command to find existing instances and manage them. Though this is a rather 'manual' solution. If you want this functionality to be part of your application, better to use a client library for Kubernetes.
I have a Kubernetes cluster on a private cloud based on OpenStack. My service is required to be exposed on a specific port. I am able to do this using a NodePort. However, if I try to create another service similar to the first one, I am not able to expose it, since I have to use the same port and it is already occupied by the first one.
I've noticed that I can use LoadBalancer in public clouds for this, but I assume this is not possible in OpenStack?
I also tried to use the Kubernetes Ingress Controller, but it did not work. However, I am not sure if I went about it the correct way.
Is there any way other than a LoadBalancer or Ingress to do this? (My first assumption was that if I dedicated my pods to specific nodes, then I should be able to expose each of the services on the same port on different nodes, but this approach did not work either.)
Please let me know if you have any thoughts on this.
You have to set up the OpenStack Cloud Provider: basically, this Deployment will watch for LoadBalancer Services and will provide an {internal,external} IP address you can use to interact with your application, even at L4 and not only at L7 as with many Ingress Controller resources.
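Once that cloud provider is running, a plain LoadBalancer Service is enough; a minimal sketch with hypothetical names (each Service gets its own load balancer address, so the port clash from the question goes away):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer        # the OpenStack cloud provider provisions the load balancer and its IP
  selector:
    app: my-app
  ports:
    - port: 443             # the same port can be reused by other Services, each on its own address
      targetPort: 8443
```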
If you want to expose only one port, then the only answer to the best of my knowledge is an ingress controller. The two most famous ones are Nginx and Traefik. I agree that setting up an ingress controller can be difficult, and I have had problems with them before, but you have to solve them one by one.
Another thing you can do is build your own ingress controller. What I mean is to use a reverse proxy such as Nginx, configure it to reroute traffic based on your topology, and then expose only this reverse proxy so all the traffic goes through it. But this should be done only if you need something very customized.
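A rough sketch of that do-it-yourself approach (all names, hosts and upstream Services are hypothetical; you would still need to expose this Deployment on your chosen port, e.g. with a NodePort):

```yaml
# Hypothetical nginx configuration routing by Host header to in-cluster Services
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-proxy-conf
data:
  default.conf: |
    server {
      listen 80;
      server_name app-a.example.com;
      location / { proxy_pass http://app-a.default.svc.cluster.local:80; }
    }
    server {
      listen 80;
      server_name app-b.example.com;
      location / { proxy_pass http://app-b.default.svc.cluster.local:80; }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-proxy
  template:
    metadata:
      labels:
        app: custom-proxy
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          volumeMounts:
            - name: conf
              mountPath: /etc/nginx/conf.d   # nginx loads *.conf from this directory
      volumes:
        - name: conf
          configMap:
            name: custom-proxy-conf
```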
I have been using the GCLB Ingress Controller to forward outside traffic to my in-cluster services, and this has been working great so far.
But is there a way that, based on a route/path match, traffic could be forwarded to a resource outside of the cluster? From the documentation I can't find anything, and I don't think it can be achieved using the GCLB Ingress Controller; but I haven't yet tried the NGINX Ingress Controller.
Is this a behavior that can be achieved using either of these 2 controllers? I would prefer the native gcloud one, the GCLB, but the other one works too.
Hope this can help you: kubernetes external service
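For illustration, a sketch of one common way to do this with the NGINX Ingress Controller: an ExternalName Service used as an Ingress backend (the native GCLB Ingress does not support this; the external hostname, annotation and names are illustrative):

```yaml
# Service with no selector that points outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: external-backend
spec:
  type: ExternalName
  externalName: api.external-partner.example.com
  ports:
    - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mixed-routing
  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost: api.external-partner.example.com  # so the external host receives the expected Host header
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /external            # this path is proxied to the out-of-cluster resource
            pathType: Prefix
            backend:
              service:
                name: external-backend
                port:
                  number: 80
```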