I had been using a Docker-based setup with an nginx reverse proxy forwarding to Dockerized microservices for some time. Now I am evaluating a switch to a Kubernetes-based approach with the Traefik Ingress Controller.
The Ingress Controller provides all the functionality required for this, except for one thing: it doesn't support caching.
The Microservices aren't very performant when it comes to serving static resources, and I would prefer to reduce the load so they can concentrate on their actual purpose, handling dynamic REST requests.
Is there any way to add caching support to a Traefik-based Ingress? As there are many small services, I'd prefer not to spin up a dedicated caching Pod per microservice if possible. A configuration-based approach would also be appreciated, if possible (maybe using a custom Operator?).
Caching functionality is still on the wish list in the Traefik project.
As a workaround, please check this scenario where NGINX is put in front to do the caching. I don't see any reason the same idea couldn't be applied in front of the Traefik Ingress Controller.
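To make the idea concrete, the NGINX instance in front could carry its cache settings in a ConfigMap like the sketch below. This is only an illustration, not a tested setup: the upstream address `traefik.kube-system.svc.cluster.local`, the cache sizes, and the TTL are all assumptions to adapt to your cluster.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-cache-config
data:
  nginx.conf: |
    events {}
    http {
      # 100 MB key zone in memory, up to 1 GB of cached bodies on disk
      proxy_cache_path /var/cache/nginx keys_zone=static:100m max_size=1g;

      server {
        listen 80;

        location / {
          proxy_cache static;
          # cache successful responses for 10 minutes
          proxy_cache_valid 200 10m;
          proxy_set_header Host $host;
          # everything is forwarded to the Traefik ingress service (assumed name)
          proxy_pass http://traefik.kube-system.svc.cluster.local;
        }
      }
    }
```

Mount this into a small NGINX Deployment and point your external load balancer at it instead of at Traefik directly.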
This is an enterprise feature. You have to buy Traefik Enterprise to get caching functionality.
Came across this, and although we are still testing it, caching has apparently finally been implemented directly in Traefik, including selective caching per path, which was our main concern. I'm unsure of the limitations and performance, although I've read that memory allocated per router is currently the only available storage:
https://github.com/traefik/traefik/issues/878
We have a multi-tenant application, and for each tenant we provision a separate container image.
Likewise, we create a subdomain for each tenant, which is redirected to its own container.
There might be scenarios where thousands of tenants exist, and the set of tenants is dynamic.
So before choosing, it has become necessary for us to consider the limitations of Kubernetes ingress controllers in general, and of nginx-ingress especially.
Is there any maximum limit on the number of Ingress resources, or on the number of rules inside an Ingress? Will there be performance or scaling issues when too many Ingress resources are created?
Is it better to add a new rule (one per subdomain) to the same Ingress resource, or to create a separate Ingress resource for each subdomain?
AFAIK, there are no limits like that; you will either run out of resources or find a choke point first. This article compares the resource consumption of several load balancers.
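On the one-rule-versus-many-resources question: since tenants are provisioned dynamically, a separate Ingress per tenant is often easier to create and delete independently, and the controller merges all Ingress resources into one configuration anyway. A minimal per-tenant manifest might look like this sketch (tenant name, host, Service, and port are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  # one small Ingress per tenant, created when the tenant is provisioned
  name: tenant-acme
spec:
  rules:
  - host: acme.example.com          # the tenant's subdomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tenant-acme       # the tenant's dedicated Service
            port:
              number: 8080
```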
As for nginx-ingress, there are a few features hidden behind the paid NGINX Plus version, as listed here.
If you wish to have dynamic configuration and scalability, you should try an Envoy-based ingress like Ambassador or Istio.
Envoy offers dynamic configuration updates which do not interrupt existing connections. More info here.
Check out this article, which compares most of the popular Kubernetes ingress controllers.
This article shows a great example of pushing an HAProxy and NGINX combination to its limits.
Hope it helps.
I've dockerized a legacy desktop app. This app does resource-intensive graphical rendering from a command line interface.
I'd like to offer this rendering as a service in a "compute farm", and I wondered if Kubernetes could be used for this purpose.
If so, how in Kubernetes would I ensure that each pod only serves one request at a time (this app is resource-intensive and likely not thread-safe)? Should I write a single-threaded wrapper/invoker app in the container and thus serialize requests? Would K8s then be smart enough to route subsequent requests to idle pods rather than letting them pile up on an overloaded pod?
Interesting question.
The inbuilt default Service object, along with kube-proxy, does route requests to different pods, but only does so in a round-robin fashion, which does not fit your use case.
Your use case would require changes to the kube-proxy setup during cluster setup. This approach is tedious and requires you to run your own cluster (it is not supported by managed cloud services), as described here.
Your best bet would be to set up a service mesh like Istio, which provides this feature with little configuration, along with a lot of other useful functionality.
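For example, Istio can cap in-flight work per connection through a DestinationRule connection pool. A sketch, assuming the renderer is exposed through a Service named `renderer` in the `default` namespace:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: renderer
spec:
  host: renderer.default.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1            # limit concurrent connections to the service
      http:
        maxRequestsPerConnection: 1  # one request per connection, then reconnect
        http1MaxPendingRequests: 1   # queue at most one request; overflow is rejected
```

Requests beyond these limits are rejected with a 503 by Envoy's circuit breaker, so the caller can retry rather than pile work onto a busy pod.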
See if this helps.
So, I'm dabbling with setting up a Kubernetes cloud on Azure AKS. I get that an Nginx ingress controller routes requests to services in the same namespace, and I understand that the path can be rewritten. I'm now trying to set up something which is borderline dodgy, but would potentially allow more actual work to be done on each of my nodes.
The scenario is as follows: a part of my cloud is going to be static HTML content. I can add a web server in a pod and mount an Azure file share with the static files in it, and that works great with the NGINX ingress controller. I have also noticed that I can create a static-website Azure blob, which basically serves my web content cheaply via an ugly Microsoft URL.
What I'm wondering is whether I can configure the NGINX ingress controller to proxy content from a full external website URL instead of a service name from my own Kubernetes namespace, and therefore host my static content in my URL space without burning node memory and CPU cycles.
As I said, maybe a bit dodgy, but also a seemingly simple approach to static content. If anybody knows for certain that this is possible, I'd be keen to learn how. I'd also be very interested in learning that this is absolutely not possible - thanks.
What you want to do is proxy from the load balancer (created by NGINX Ingress) to the Azure blob?
If my understanding is correct, you may be able to proxy using a Service of type ExternalName.
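Roughly, as the articles linked below describe, you combine an ExternalName Service with an Ingress that rewrites the upstream Host header. A sketch where the blob endpoint and all names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: static-site
spec:
  type: ExternalName
  # the Azure static-website endpoint (placeholder)
  externalName: mysite.z6.web.core.windows.net
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: static-site
  annotations:
    # speak HTTPS to the external upstream
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # present the blob endpoint as the Host header so Azure accepts the request
    nginx.ingress.kubernetes.io/upstream-vhost: mysite.z6.web.core.windows.net
spec:
  ingressClassName: nginx
  rules:
  - host: www.example.com           # your own domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: static-site
            port:
              number: 443
```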
https://www.elvinefendi.com/2018/08/08/ingress-nginx-proxypass-to-external-upstream.html
https://github.com/kubernetes/ingress-nginx/issues/4280
I'm new to the k8s world and am using OpenShift 4.2.18. I want to deploy microservices on it. What I need is one common IP, with each microservice accessible under its own virtual path.
Like this,
https://my-common-ip/microservice1/
https://my-common-ip/microservice2/
https://my-common-ip/microservice3/
Service and deployment are OK. However, I'm confused by the other terms. Should I use a Route or an Ingress? Should I use a VirtualService like in this link? I've also heard about HAProxy and Istio. What's the best way of doing this? I would appreciate it if you could provide information about these terms.
Thanks in advance, Best Regards
Route and Ingress are intended to achieve the same end. Originally Kubernetes had no such concept, and so in OpenShift the concept of a Route was developed, along with the bits for providing a load-balancing proxy etc. In time it was seen as useful to have something like this in Kubernetes, so, using Route from OpenShift as a starting point for what could be done, Ingress was developed for Kubernetes. The Ingress version went for a more generic, rules-based system, so how you specify them looks different, but the intent is to be able to do effectively the same thing. If you intend to deploy your application on multiple Kubernetes distributions at the same time, then Ingress might be a good option.
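For the path layout in the question, a single Ingress with one path rule per microservice works on OpenShift (which translates Ingress resources into Routes) as well as on other distributions. A minimal sketch, assuming each microservice has a Service listening on port 8080 (shown in the current `networking.k8s.io/v1` API; older clusters use `v1beta1` with a slightly different backend syntax):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
  - http:
      paths:
      - path: /microservice1
        pathType: Prefix
        backend:
          service:
            name: microservice1     # assumed Service name
            port:
              number: 8080
      - path: /microservice2
        pathType: Prefix
        backend:
          service:
            name: microservice2
            port:
              number: 8080
```

The OpenShift-native equivalent is one Route per microservice with its `path` field set.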
VirtualService and Istio are part of a service mesh, which is not necessary for external access to an app. A service mesh brings complexity; unless your use case really needs the capabilities it offers, there is no reason to use one.
In this YouTube video, Brendan Burns talks about having a load balancer between each app layer. This makes good sense - and when he says load balancer, he is talking about a Service, right?
The real question is: having a service between each layer makes sense, but what about when you have a web application? Would you still need a reverse proxy like nginx as an HTTP load balancer on top of the Kubernetes services? I can see the need to direct the URL to prevent cross-domain issues, but not for balancing, since that would be handled by the Kubernetes Service, right?
Would you then have pods of nginx redirecting to other services (internal Kubernetes load balancers/services)?
Just saw this. Again, any comments are welcome.
Thanks
Yes, there are definitely use cases for which you might want a reverse proxy in front of the Kubernetes services. Experimental support for this is being added in Kubernetes version 1.1.
You can check out the design proposal here and an implementation using HAProxy here.
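That experimental support is what became the Ingress resource: an HTTP-aware (L7) routing layer in front of the plain L4 Services. As a rough sketch of the idea in today's API (hostname and Service name are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: app.example.com           # HTTP-level routing on the hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend      # ordinary ClusterIP Service doing L4 balancing
            port:
              number: 80
```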