I have one master service and multiple slave services. The master service continuously polls a topic using a subscriber from Google Pub/Sub. The slave services are REST APIs. Once the master service receives a message, it delegates the message to a slave service. Currently I'm using a ClusterIP service in Kubernetes. Some of my requests are long-running and some are pretty short.
I have observed that sometimes a short-running request that arrives while a long-running request is in process has to wait until the long-running request finishes, even though many pods are available and not serving any traffic. I think this is due to round-robin load balancing. I have been trying to find a solution and have looked into approaches like setting up an external HTTP load balancer with ingress and an internal HTTP load balancer, but I'm really confused about the difference between these two and which one applies to my use case. Can you suggest which approach would solve it?
TL;DR
Assuming you want 20% of the traffic to go to service x and the remaining 80% to service y: create 2 ingress files, one for each of the 2 targets, with the same host name. The only difference is that one of them will carry the following ingress annotations (docs):
nginx.ingress.kubernetes.io/canary: "true" # tell the controller not to create a new vhost
nginx.ingress.kubernetes.io/canary-weight: "20" # route 20% of the traffic from the existing vhost here
WHY & HOW TO
Weighted routing is a bit beyond what a ClusterIP can do. As you said yourself, it's time for a new player to enter the game: an ingress controller.
This is a k8s abstraction for a load balancer: a powerful server sitting in front of your app, routing traffic between the ClusterIPs.
Install an ingress controller on your GCP cluster.
Once you have it installed and running, use its canary feature to perform weighted routing. This is done using the following annotations:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-svc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  rules:
  - host: echo.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
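The manifest above is the canary Ingress. For completeness, here is a minimal sketch of the companion primary Ingress, identical except for the canary annotations; the http-svc-stable name and backend service are illustrative assumptions:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-svc-stable
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: echo.com
    http:
      paths:
      - backend:
          serviceName: http-svc-stable   # receives the remaining 80% of the traffic
          servicePort: 80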
Here is the full guide.
External vs internal load balancing
(This is the relevant definition from the Google Cloud docs, but the concept is similar among other cloud providers.)

GCP's load balancers can be divided into external and internal load balancers. External load balancers distribute traffic coming from the internet to your GCP network. Internal load balancers distribute traffic within your GCP network.
https://cloud.google.com/load-balancing/docs/load-balancing-overview
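To make the distinction concrete for your use case (master-to-slave traffic never leaves your network), here is a minimal sketch of an internal LoadBalancer Service on GKE. The annotation key depends on the GKE version (older clusters use cloud.google.com/load-balancer-type: "Internal" instead), and the name, labels, and ports are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: slave-api-internal          # illustrative name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer                # provisions an internal L4 load balancer
  selector:
    app: slave-api                  # illustrative label
  ports:
  - port: 80
    targetPort: 8080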
Related
We have a number of RESTful services within our system.
Some run within the Kubernetes cluster.
Others are on legacy infrastructure and are hosted on VMs.
Many of our RESTful services make synchronous calls to each other (so not asynchronously via message queues).
We also have a number of UIs (fat clients or web apps) that make use of these services.
We might define a simple k8s manifest file like this, containing a Pod, a Service, and an Ingress:
apiVersion: v1
kind: Pod
metadata:
  name: ordermanager
  labels:
    app: ordermanager
spec:
  containers:
  - name: ordermanager
    image: "gitlab-prem.com:5050/image-repo/orderManager:orderManager_1.10.22"
---
apiVersion: v1
kind: Service
metadata:
  name: ordermanager-service
spec:
  type: NodePort
  selector:
    app: ordermanager
  ports:
  - protocol: TCP
    port: 50588
    targetPort: 50588
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ordermanager-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: ordermanager-service
            port:
              number: 50588
I am really not sure of the best way for RESTful services on the cluster to talk to each other.
It seems like there is only one good route for callers outside the cluster: use the URL built by the ingress rule.
Within the cluster there seem to be two options.
The following table might illustrate it further:
Caller | Receiver | Example URL | Notes
--- | --- | --- | ---
UI | On cluster | http://clusterip/orders | The UI would use the cluster IP and the ingress rule to reach the order manager
Service off cluster | On cluster | http://clusterip/orders | Just like the UI
Service on cluster | On cluster | http://clusterip/orders | Could use the ingress rule, like the approaches above
Service on cluster | On cluster | http://ordermanager-service:50588/ | Could use the service name and port directly
I write "clusterip" a few times above, but in real life we put something on top so there is a friendly name, like http://mycluster/orders.
So when the caller and receiver are both on the cluster, is it either:
Use the ingress rule, which is also used by services and apps outside the cluster
Use the NodePort service name, which is used in the ingress rule
Or perhaps something else!
One benefit of using the NodePort service name is that you do not have to change your base URL.
The ingress rule appends an extra element to the route (in the above case, orders).
When I move a RESTful service from legacy infrastructure to the k8s cluster, this would increase the complexity.
It depends on whether you want requests to be routed through your ingress controller or not.
Requests sent to the full URL configured in your Ingress resource will be processed by your ingress controller. The controller itself (NGINX in this case) will proxy the request to the Service. The request will then be routed to a Pod.
Sending the request directly to the Service's URL simply skips your ingress controller. The request is directly routed to a Pod.
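To illustrate with the order manager example from the question, a hypothetical in-cluster caller could be configured with either URL style. Both addresses below are illustrative and assume the manifests from the question live in the default namespace and a stock ingress-nginx installation:
apiVersion: v1
kind: Pod
metadata:
  name: caller                      # hypothetical client pod
spec:
  containers:
  - name: caller
    image: curlimages/curl:latest
    command: ["sleep", "infinity"]
    env:
    # Via the ingress controller (requests pass through NGINX):
    - name: ORDERS_URL_VIA_INGRESS
      value: "http://ingress-nginx-controller.ingress-nginx/orders"
    # Directly to the Service, skipping the ingress controller:
    - name: ORDERS_URL_DIRECT
      value: "http://ordermanager-service.default.svc.cluster.local:50588/"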
The trade-offs between the two options depend on your setup.
Sending requests through your ingress controller will increase request latency and resource consumption. If your ingress controller does nothing other than route requests, I would recommend sending requests directly to the Service.
However, if you use your ingress controller for other purposes, like authentication, monitoring, logging, or tracing, then you may prefer that the controller process internal requests.
For example, on some of my clusters I use the NGINX ingress controller to measure request latency and track HTTP response statuses. I route requests between apps running in the same cluster through the ingress controller in order to have that information available. I pay the cost of increased latency and resource usage in order to have improved observability.
Whether the trade-offs are worth it in your case depends on you. If your ingress controller does nothing more than basic routing, then my recommendation is to skip it entirely. If it does more, then you need to weigh the pros and cons of routing requests through it.
I have deployed a StatefulSet in AKS. My goal is to load balance traffic to my StatefulSet.
From my understanding, I can define a LoadBalancer Service that can route traffic based on selectors, something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
However, I don't necessarily want to go down the LoadBalancer route; I would prefer Ingress to do this work for me. My question is: can any of the ingress controllers support routing rules that do path-based routing to endpoints based on selectors, instead of routing to another Service?
Update
To elaborate more on the scenario: each pod in my StatefulSet is a stateless node doing data processing of an HTTP feed. I want my ingress service to be able to load balance traffic across these StatefulSet pods (honoring keep-alives etc.); however, given the nature of StatefulSets in k8s, they are currently exposed through a headless service. I am not sure if a headless service can load balance traffic to my StatefulSet pods.
Update 2
A quick search reveals that a headless service does not load balance:
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
As far as I know, it's not possible to do selector-based routing with an ingress.
Selector-based routing is mostly used during a blue-green or canary deployment, and you can only achieve it by using a service mesh. You can use any service mesh, like Istio or App Mesh, to do selector-based routing, as in the sketch below.
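For example, a minimal sketch of selector-based (subset) weighted routing with Istio; the myapp host and the version labels are illustrative assumptions:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp                # the k8s Service name
  subsets:
  - name: v1
    labels:                  # pod labels select each subset
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 80             # 80% of traffic to pods labeled version: v1
    - destination:
        host: myapp
        subset: v2
      weight: 20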
I have deployed a statefulset in AKS - My goal is to load balance traffic to my statefulset.
If your goal is just to load balance traffic, you can use the ingress controller, though I'm still not sure about the scenario you are trying to explain.
By default, a Kubernetes Service also load balances traffic across the pods.
The flow will be something like: DNS > ingress > ingress controller > Kubernetes Service (load balancing here) > any of the StatefulSet pods. For instance, a plain Service in front of the StatefulSet pods could look like the sketch below.
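A minimal sketch of a regular (non-headless) Service selecting the StatefulSet pods; the app: nginx label is taken from your example and must match the StatefulSet's pod template labels:
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal       # illustrative name
spec:
  # no "clusterIP: None" here, so a virtual IP is allocated and
  # kube-proxy spreads traffic across the selected pods
  selector:
    app: nginx
  ports:
  - name: web
    port: 80
    targetPort: 80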
+1 to Harsh Manvar's answer, but let me also add my 3 cents.

My question is can any of the ingress controller support routing rules which can do Path based routing to endpoints based on selectors? Instead of routing to another service.

To the best of my knowledge, the answer to your question is no, it can't, and this doesn't even depend on a particular ingress controller implementation. Note that various ingress controllers, no matter how different they may be in implementation, must conform to the general specification of the Ingress resource described in the official Kubernetes documentation. You don't have different kinds of Ingresses depending on which controller is used.
Ingress and Service work on different layers of abstraction. While a Service exposes a set of pods using a selector, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp    # <-- selects pods directly
the path-based routing performed by an Ingress is always done between Services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test    # <-- routes to a Service, not to pods
            port:
              number: 80
I am not sure if a headless service can load balance traffic to my statefulsets?
The first answer is "no". Why?
A k8s Service is implemented by kube-proxy. Kube-proxy itself can work in two modes:
iptables (also known as netfilter)
ipvs (also known as LVS/Linux Virtual Server)
In iptables mode, load balancing is a NAT iptables rule: from the ClusterIP address to the list of Endpoints.
In ipvs mode, load balancing is a VIP (LVS Virtual IP) with the Endpoints as upstreams.
So, when you create a k8s Service with clusterIP set to None, you are saying exactly:
"I need this service WITHOUT load balancing."
Setting clusterIP to None causes kube-proxy NOT to create the NAT rule in iptables mode or the VIP in ipvs mode. There will be nothing to load balance traffic across the pods selected by this particular Service's selector.
The second answer is "it could be". Why?
You are free to create a headless Service with the desired pod selector. A DNS query for this Service will return the list of DNS A records for the selected pods. You can then use this data to implement load balancing YOUR way.
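A minimal sketch of such a headless Service, assuming the StatefulSet pods carry the illustrative label app: nginx:
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None            # headless: no VIP, no kube-proxy load balancing
  selector:
    app: nginx
  ports:
  - name: web
    port: 80
A DNS lookup of nginx-headless.<namespace>.svc.cluster.local then returns one A record per ready pod, and your client can balance across them however it likes.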
I have a working Nexus 3 pod, reachable on port 30080 (with NodePort): http://nexus.mydomain:30080/ works perfectly from all hosts (from the cluster or outside).
Now I'm trying to make it accessible on port 80 (for obvious reasons).
Following the docs, I've implemented it like this (trivial):
[...]
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus-ns
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nexus.mydomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: nexus-service
          servicePort: 80
Applying it works without errors. But when I try to reach http://nexus.mydomain, I get:
Service Unavailable
No logs are shown (the webapp is not hit).
What did I miss?
K3s Lightweight Kubernetes
K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. In order to achieve this, they removed a lot of extra drivers that didn't need to be part of the core and are easily replaced with add-ons.
As I mentioned in the comments, K3s uses the Traefik ingress controller by default.
Traefik is an open-source Edge Router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.
This information can be found in K3s Rancher Documentation.
Traefik is deployed by default when starting the server... To prevent k3s from using or overwriting the modified version, deploy k3s with --no-deploy traefik and store the modified copy in the k3s/server/manifests directory. For more information, refer to the official Traefik for Helm Configuration Parameters.
To disable it, start each server with the --disable traefik option.
If you want to deploy Nginx Ingress controller, you can check guide How to use NGINX ingress controller in K3s.
As you are using an NGINX-specific annotation (nginx.ingress.kubernetes.io/rewrite-target: /$1), you have to use the NGINX ingress controller.
If you run more than one ingress controller, you will need to force the use of the NGINX one with an annotation:
annotations:
  kubernetes.io/ingress.class: "nginx"
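For instance, a sketch of the Ingress from the question pinned to the NGINX controller. Note that with ingress-nginx, a rewrite-target of /$1 refers to a capture group, so the path is written as /(.*) and the pathType as ImplementationSpecific here; adjust as needed:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus-ns
  annotations:
    kubernetes.io/ingress.class: "nginx"             # pin to the NGINX controller
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nexus.mydomain
    http:
      paths:
      - path: /(.*)                                  # capture group referenced by $1
        pathType: ImplementationSpecific
        backend:
          serviceName: nexus-service
          servicePort: 80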
If the information above doesn't help, please provide more details, like your Deployment and Service.
I do not think you can expose it on port 80 or 443 via a NodePort service; at least, it is not recommended.
In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.
-- Bare-metal considerations - NGINX Ingress Controller
* Emphasis added by me.
While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require.

This practice is therefore discouraged. See the other approaches proposed in this page for alternatives.
-- Bare-metal considerations - NGINX Ingress Controller
I did a similar setup a couple of months ago. I installed a MetalLB load balancer and then exposed the service. Depending on your provider (e.g., GKE), a load balancer can even be spun up automatically, so possibly you don't even have to deal with MetalLB, although MetalLB is not hard to set up and works great. A sketch follows below.
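For reference, a minimal sketch of a MetalLB layer-2 configuration, assuming MetalLB 0.13+ (CRD-based configuration); the pool name and address range are illustrative:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # illustrative range on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
With this in place, a Service of type LoadBalancer (for example, the ingress controller's Service) gets an IP from the pool and becomes reachable on ports 80/443.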
I want to create a load balancer for 4 HTTP server pods.
I have one MySQL pod too.
Everything works fine: I have created a LoadBalancer service for HTTP, and another service for MySQL.
I have read that I should create an Ingress too, but I do not understand what an Ingress is for, because everything works with Services.
What is the value-add of an Ingress?
Thanks
Since you have a single service serving HTTP, your current solution using the LoadBalancer service type works fine. Imagine you have multiple HTTP-based services that you want to make externally available on different routes. You would have to create a LoadBalancer service for each of them, and by default you would get a different IP address for each. Instead, you can use an Ingress, which sits in front of these services and does the routing.
Example ingress manifest:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /cart
        backend:
          serviceName: cart
          servicePort: 80
      - path: /payment
        backend:
          serviceName: payment
          servicePort: 80
Here you have two different HTTP services exposed by an Ingress on a single IP address. You don't need a LoadBalancer per service when using an Ingress.
A Service of type LoadBalancer relies on a third-party load balancer and IP provisioning mechanism somewhere that deals with getting Layer 3 (IP) traffic from outside to the Nodes on some high-numbered NodePort.
An Ingress relies on a third-party ingress controller to accept Layer 3 traffic, open it up to Layer 7 (e.g., terminate TLS), and do protocol-specific routing (e.g., by HTTP FQDN/path) to some other Service (probably of type ClusterIP) inside the cluster.
If all your services should be explicitly exposed without any further filtering or other options, a LoadBalancer and no Ingress might be the right choice... but LoadBalancers don't do much on their own: they just expose the Service to the outside world, with very little in the way of traffic shaping, A/B testing, etc.
However, if you want to put multiple services behind a single IP/VIP/certificate, or you want to direct traffic in some weird ways (like based on Header:, client type, percentage weighting, etc.), you'd probably want an Ingress, which itself would be exposed by a LoadBalancer Service, as sketched below.
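To make that last point concrete, here is a minimal sketch of how an ingress controller is itself typically exposed; the name and selector assume a stock ingress-nginx installation and may differ in yours:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer            # one external IP/VIP for every Ingress rule behind it
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443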
We have a Kubernetes setup hosted on premises and are trying to allow clients outside of K8s to connect to services hosted in the K8s cluster.
In order to make this work using HAProxy (which runs outside K8s), we have the following HAProxy backend configuration:
backend vault-backend
    ...
    ...
    server k8s-worker-1 worker1:32200 check
    server k8s-worker-2 worker2:32200 check
    server k8s-worker-3 worker3:32200 check
Now, this solution works, but the worker names and the corresponding nodePorts are hard-coded in this config, which obviously is inconvenient as and when more workers are added (or removed/changed).
We came across the HAProxy Ingress Controller (https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/), which sounds promising, but (we feel) it effectively adds another HAProxy layer to the mix... and thus another point of failure.
Is there a better solution to implement this requirement?
Now, this solution works, but the worker names and the corresponding nodePorts are hard-coded in this config, which obviously is inconvenient as and when more workers are added (or removed/changed).
You can explicitly configure the NodePort for your Kubernetes Service so it doesn't pick a random port and you always use the same port on your external HAProxy:
apiVersion: v1
kind: Service
metadata:
  name: <my-nodeport-service>
  labels:
    <my-label-key>: <my-label-value>
spec:
  selector:
    <my-selector-key>: <my-selector-value>
  type: NodePort
  ports:
  - port: <service-port>
    nodePort: 32200
We came across the HAProxy Ingress Controller (https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/) which sounds promising, but (we feel) effectively adds another HAProxy layer to the mix..and thus, adds another failure point.
You could run the HAProxy ingress inside the cluster and remove the HAProxy outside the cluster, but this really depends on what type of service you are running. The Kubernetes Ingress is a Layer 7 resource, for example. DR here would be handled by having multiple replicas of your HAProxy ingress controller.