From this YouTube video, Brendan Burns talks about having a load balancer between each app layer. This makes good sense, and when he says load balancer, he is talking about a Service, right?
The real question is: having a Service between each layer makes sense, but what about when you have a web application? Would you still need a reverse proxy like nginx as an HTTP load balancer on top of the Kubernetes Services? I can see the need to direct the URL to prevent cross-domain issues, but not for balancing, since that would be handled by the Kubernetes Service, right?
Then would you have pods of nginx redirecting to other Services (internal Kubernetes load balancers/Services)?
Just saw this. Again, any comments are welcome.
Thanks
Yes, there are definitely use cases for which you might want a reverse proxy in front of the Kubernetes Services. Experimental support for this is being added in Kubernetes 1.1.
You can check out the design proposal here and an implementation using haproxy here.
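To make the per-layer idea concrete, here is a minimal sketch of such a Service sitting in front of a backend tier; the names and ports are illustrative assumptions, not anything from the talk. Frontend pods simply connect to the Service by name, and kube-proxy spreads the connections across the backend pods:

```yaml
# Minimal sketch (assumed names): a ClusterIP Service acting as the
# "load balancer between layers" for a backend tier.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # matches the backend tier's pod labels (assumption)
  ports:
    - port: 80          # stable virtual port the frontend tier connects to
      targetPort: 8080  # port the backend containers actually listen on
```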
I'm new to Kubernetes and trying to point all requests for a domain to another local service.
Both applications are running in the same cluster, in different namespaces.
Example domains:
a.domain.com, hosting the first app
b.domain.com, hosting the second app
When I do a curl request from the first app to the second app (b.domain.com), it travels over the internet to the second app.
Usually what I would do is point b.domain.com to localhost in /etc/hosts.
What do we do in this case in Kubernetes?
I was looking into Network Policies, but I'm not sure if that is the correct approach.
Also, as I understand it, we could just call service-name.namespace:port from the first app, but I would like to keep the full URL.
Let me know if you need more details to help me solve this.
The way to do it is by using the Kubernetes Gateway API. It is true that you could deploy your own implementation, since this is an open-source project, but there are already plenty of solutions built on it, and it would be much easier to learn to use one of those instead.
For what you want, Istio would fit your needs. If your cluster is hosted in a cloud environment, you can take a look at Anthos Service Mesh, Google's managed version of Istio.
Finally, take a look at the blog post Welcome to the service mesh era: traffic management between services is one element of the service mesh paradigm, among others like monitoring, logging, etc.
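As a rough sketch of how this could look with the Gateway API: an HTTPRoute can attach the public hostname to a shared gateway and send matching requests straight to the second app's Service, so the in-cluster hop keeps the full URL. The gateway name, namespaces, and Service name below are placeholder assumptions:

```yaml
# Hypothetical sketch: route b.domain.com at a shared gateway to the
# second app's in-cluster Service. All names are assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-b-route
  namespace: app-b               # namespace of the second app (assumption)
spec:
  parentRefs:
    - name: shared-gateway       # an existing Gateway resource (assumption)
      namespace: gateway-system  # the Gateway must allow cross-namespace routes
  hostnames:
    - b.domain.com
  rules:
    - backendRefs:
        - name: app-b-service    # the second app's Service (assumption)
          port: 80
```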
I am using an NGINX Ingress Controller in a Kubernetes cluster. I've got an application within the cluster, which was available over the internet. Now I'm using the Ingress Controller to access the application, with the intent of showing some custom errors.
When I access the application (which is not written by myself, so I can't change things there), it sees the IP address of the nginx-ingress-controller pod. The logs of the nginx-ingress-controller pod show that the remote address is a different one.
I've already tried things like use-proxy-protocol; with that I would be able to use $remote_addr and get the right IP. But as I mentioned, I am not able to change my application, so I have to "trick" the ingress controller into using $remote_addr as its own source address.
How can I configure the ingress so the application sees the request coming from the remote IP and not from the nginx-ingress-controller pod's IP? Is there a way to do this?
Edit: I'm using a bare-metal Kubernetes installation with Kubernetes v1.19.2 and the nginx chart ingress-nginx-3.29.0.
This is not achievable with a Layer 7 ingress controller alone.
If the ingress preserved the source IP, the response would go directly from the app pod to the client, so the client would get a response from an IP:port different from the one it connected to. Or even worse, the client's NAT would drop the response completely because it doesn't match any existing connection.
You can take a look at this similar question on Stack Overflow with an accepted answer:
Since an ingress is an above-layer-4 proxy, there is no way to preserve the source IP at the layer 3 IP protocol. The best you can do, and I think the NGINX Ingress already does this by default, is to put the "X-Forwarded-For" header on any forwarded HTTP request.
Your app is supposed to read the X-Forwarded-For header for logging.
You can try the workaround described in this article; it could help you preserve the client IP.
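One widely used approach on bare metal (my assumption is that the linked article describes something along these lines) is setting externalTrafficPolicy: Local on the ingress controller's Service. Traffic then only reaches nodes that run a controller pod, but kube-proxy no longer SNATs it, so the controller sees the real client address. A hedged sketch, with names depending on your chart values:

```yaml
# Sketch only: preserve the client source IP for the ingress controller.
# Service name, namespace, and labels are assumptions from typical
# ingress-nginx chart defaults; adjust to your installation.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort                 # or LoadBalancer, depending on your setup
  externalTrafficPolicy: Local   # skip SNAT so the source IP survives
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```

The trade-off is that nodes without a controller pod drop the traffic, so this is usually combined with scheduling a controller on every ingress node.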
I also recommend this very good article about load balancing and proxying. You will also learn a bit about load balancing on L7:
L7 load balancing and the OSI model
As I said above in the section on L4 load balancing, using the OSI model for describing load balancing features is problematic. The reason is that L7, at least as described by the OSI model, itself encompasses multiple discrete layers of load balancing abstraction. e.g., for HTTP traffic consider the following sublayers:
Optional Transport Layer Security (TLS). Note that networking people argue about which OSI layer TLS falls into. For the sake of this discussion we will consider TLS L7.
Physical HTTP protocol (HTTP/1 or HTTP/2).
Logical HTTP protocol (headers, body data, and trailers).
Messaging protocol (gRPC, REST, etc.).
I'm new to the k8s world and am using OpenShift 4.2.18. I want to deploy microservices on it. What I need is one common IP, with each microservice accessible via a virtual path.
Like this,
https://my-common-ip/microservice1/
https://my-common-ip/microservice2/
https://my-common-ip/microservice3/
Service and Deployment are OK. However, I'm confused by the other terms. Should I use a Route or an Ingress? Should I use a VirtualService as in this link? I've also heard about HAProxy and Istio. What's the best way of doing this? I would appreciate it if you could provide some information about these terms.
Thanks in advance, Best Regards
Route and Ingress are intended to achieve the same end. Originally Kubernetes had no such concept, so in OpenShift the concept of a Route was developed, along with the bits for providing a load-balancing proxy etc. In time it was seen as useful to have something like this in Kubernetes, so using Route from OpenShift as a starting point for what could be done, Ingress was developed for Kubernetes. The Ingress version went for a more generic rules-based system, so the way you specify them looks different, but the intent is to be able to do effectively the same thing. If you intend to deploy your application on multiple Kubernetes distributions at the same time, Ingress might be a good option.
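For the path-based fan-out you describe, an Ingress along these lines could work; it's a sketch using the current networking.k8s.io/v1 API, with Service names and ports as placeholders (an older cluster like OpenShift 4.2 may need an earlier API version):

```yaml
# Sketch: one entry point, each microservice behind a virtual path.
# Service names and ports are placeholder assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
    - http:
        paths:
          - path: /microservice1
            pathType: Prefix
            backend:
              service:
                name: microservice1
                port:
                  number: 8080
          - path: /microservice2
            pathType: Prefix
            backend:
              service:
                name: microservice2
                port:
                  number: 8080
```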
VirtualService and Istio are part of a service mesh, which is not necessary for external access to an app. A service mesh brings complexity; unless the capabilities it offers are really needed for your use case, there is no reason to use one.
I was using a Docker-based setup with an nginx reverse proxy forwarding to Dockerized Microservices for some time. Right now I am evaluating a switch to a Kubernetes-based approach and the Traefik Ingress Controller.
The Ingress Controller provides all functionality required for this, except for one: It doesn't support caching.
The microservices aren't very performant when it comes to serving static resources, and I would prefer to reduce the load so they can concentrate on their actual purpose: handling dynamic REST requests.
Is there any way to add caching support to a Traefik-based Ingress? As there are many, albeit small, services, I'd prefer not to spin up a dedicated Pod per microservice if possible. Additionally, a configuration-based approach would be appreciated (maybe using a custom Operator?).
Caching functionality is still on the wish list of the Traefik project.
As a workaround, please check this scenario where NGINX is put in front to do the caching. I don't see any reason the same idea couldn't be applied in front of the Traefik Ingress Controller.
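As a rough illustration of that workaround, a small NGINX deployment in front of Traefik could carry its config in a ConfigMap like the one below. The cache sizes, the ConfigMap name, and the Traefik service address are illustrative assumptions:

```yaml
# Sketch: nginx.conf for a caching proxy in front of Traefik.
# Names, sizes, and the upstream address are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-cache-config
data:
  nginx.conf: |
    events {}
    http {
      # 100 MB on-disk cache; entries expire after 10 minutes unused
      proxy_cache_path /var/cache/nginx keys_zone=static:10m max_size=100m inactive=10m;
      server {
        listen 80;
        location / {
          proxy_cache static;
          proxy_cache_valid 200 10m;  # cache successful responses for 10 min
          # assumed in-cluster address of the Traefik service:
          proxy_pass http://traefik.kube-system.svc.cluster.local:80;
        }
      }
    }
```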
This is an enterprise feature. You have to buy Traefik Enterprise to get caching functionality.
Came across this, and although we are still testing it, apparently caching has finally been implemented directly in Traefik, including selectively per path, which was our main concern. Unsure of the limitations/performance, although I've read that only memory allocated per router is currently available as storage:
https://github.com/traefik/traefik/issues/878
I have a cluster on AWS installed via kops. Now I need to expose a WebSocket service (with security enabled, i.e. wss://) to the outside world. There are different ingress controllers: nginx, Traefik, ELBs, ALBs. Which one is suggested, given that it should:
be easy to deploy and configure
support http://, https://, ws://, and wss://
In my opinion, this question is opinion-based and too broad. Please try to avoid such questions, as there is no single best solution.
That said, I was able to find plenty of resources about nginx and WebSockets. I do not have production experience configuring this, but I think you might find them helpful.
NGINX is a popular choice for an Ingress Controller for a variety of features:
Websocket, which allows you to load balance WebSocket applications.
SSL Services, which allows you to load balance HTTPS applications.
Rewrites, which allows you to rewrite the URI of a request before sending it to the application.
Session Persistence (NGINX Plus only), which guarantees that all the requests from the same client are always passed to the same backend container.
Support for JWTs (NGINX Plus only), which allows NGINX Plus to authenticate requests by validating JSON Web Tokens (JWTs).
The most important part with nginx is the annotation that specifies which services are WebSocket services. There is some more information about usage and configuration, as well as a useful tutorial about configuring the nginx ingress; although it is about GKE, it might still be useful.
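For a rough idea of what this looks like with ingress-nginx, here is a hedged sketch. ingress-nginx proxies WebSocket upgrades out of the box; the main thing to tune is the read/send timeout so long-lived connections aren't dropped. The host, TLS secret, and Service name are placeholder assumptions:

```yaml
# Sketch: exposing a WebSocket service (wss:// via TLS) through
# ingress-nginx. Host, secret, and Service names are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ws-ingress
  annotations:
    # keep idle WebSocket connections open for up to an hour
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - ws.example.com
      secretName: ws-example-tls   # existing TLS secret (assumption)
  rules:
    - host: ws.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: websocket-svc   # placeholder Service name
                port:
                  number: 8080
```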