What happens when a DestinationRule is pushed in Istio - Kubernetes

I recently started looking into Istio and got confused by the DestinationRule configuration.
Say I have a service A that has 10 pods running behind it. I pushed a DestinationRule that has two subsets with the labels version=v1 and version=v2.
So I'm wondering what happens to the 10 pods under the hood. Will they be divided into two subsets automatically, or just remain unlabeled? Or will the subsets only be valid when the pods themselves are labeled with version=v1 and version=v2?
Thanks a lot!

The general purpose of the DestinationRule resource is to specify how network traffic should reach the underlying Pods in your Kubernetes cluster.
The subsets parameter in Istio defines labels that identify version-specific instances.
The example Istio DestinationRule configuration below demonstrates how it works and can potentially reproduce your case:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
The label version: v1 indicates that only Pods in Kubernetes marked with that same label will receive network traffic routed to the v1 subset; the same approach applies to the label version: v2.
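For instance, a Deployment whose Pod template carries the version: v1 label would place its Pods into the v1 subset (a minimal sketch; the names, replica count and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1   # matches the v1 subset in the DestinationRule above
    spec:
      containers:
      - name: reviews
        image: example/reviews:v1   # placeholder image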
There are several other resources available in Istio that expand its network-management functionality, as described in the official documentation.

A DestinationRule simply "defines" subsets of the underlying pods. Any pod that has the labels specified in a subset is considered part of that subset and can then be routed to in a VirtualService. If your pods don't have labels that correspond to any of the defined subsets, then they will not receive traffic that is routed to a particular subset. For example, if you set a rule in a VirtualService to send 100% of the traffic to subset v1, and you have no pods with the corresponding version=v1 label, then none of your pods will receive the traffic and client calls will fail. Note that you don't have to route traffic to subsets; you can also set rules to simply route to any pod implementing the service. Subsets are used to distribute traffic when you have pods implementing more than one version of a service running at the same time.
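For illustration, a minimal sketch of a VirtualService that sends 100% of the traffic to the v1 subset defined by such a DestinationRule (the host and subset names reuse the reviews example above):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 100   # all traffic goes to pods labeled version=v1; v2 pods receive none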

Related

Within a k8s cluster, should I always call the Ingress rule or the NodePort service name?

I have a number of restful services within our system.
Some are ours, within the Kubernetes cluster.
Others are on legacy infrastructure and are hosted on VMs.
Many of our restful services make synchronous calls to each other (so not asynchronously using message queues).
We also have a number of UIs (fat clients or web apps) that make use of these services.
We might define a simple k8s manifest file like this, containing a Pod, a Service and an Ingress:
apiVersion: v1
kind: Pod
metadata:
  name: "orderManager"
  labels:
    app: "orderManager"   # label matched by the Service selector below
spec:
  containers:
  - name: "orderManager"
    image: "gitlab-prem.com:5050/image-repo/orderManager:orderManager_1.10.22"
---
apiVersion: v1
kind: Service
metadata:
  name: "orderManager-service"
spec:
  type: NodePort
  selector:
    app: "orderManager"
  ports:
  - protocol: TCP
    port: 50588
    targetPort: 50588
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orderManager-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: "orderManager-service"
            port:
              number: 50588
I am really not sure what the best way is for restful services on the cluster to talk to each other.
It seems like there is only one good route for callers outside the cluster, which is to use the URL built by the ingress rule.
There are two options within the cluster.
This might illustrate it further with an example
Caller | Receiver | Example URL | Notes
------ | -------- | ----------- | -----
UI | On cluster | http://clusterip/orders | The UI would use the cluster IP and the ingress rule to reach the order manager
Service off cluster | On cluster | http://clusterip/orders | Just like the UI
On cluster | On cluster | http://clusterip/orders | Could use the ingress rule, like the approaches above
On cluster | On cluster | http://orderManager-service:50588/ | Could use the service name and port directly
I write cluster IP a few times above, but in real life we put something on top so there is a friendly name like http://mycluster/orders.
So when caller and receiver are both on the cluster, is it either:
Use the ingress rule which is also used by services and apps outside the cluster
Use the nodeport service name which is used in the ingress rule
Or perhaps something else!
One benefit of using the NodePort service name is that you do not have to change your base URL.
The ingress rule appends an extra element to the route (in the above case, orders).
When I move a restful service from legacy infrastructure to the k8s cluster, it will increase the complexity.
It depends on whether you want requests to be routed through your ingress controller or not.
Requests sent to the full URL configured in your Ingress resource will be processed by your ingress controller. The controller itself — NGINX in this case — will proxy the request to the Service. The request will then be routed to a Pod.
Sending the request directly to the Service’s URL simply skips your ingress controller. The request is directly routed to a Pod.
The trade offs between the two options depend on your setup.
Sending requests through your ingress controller will increase request latency and resource consumption. If your ingress controller does nothing other than route requests, I would recommend sending requests directly to the Service.
However, if you use your ingress controller for other purposes, like authentication, monitoring, logging, or tracing, then you may prefer that the controller process internal requests.
For example, on some of my clusters I use the NGINX ingress controller to measure request latency and track HTTP response statuses. I route requests between apps running in the same cluster through the ingress controller in order to have that information available. I pay the cost of increased latency and resource usage in order to have improved observability.
Whether the trade offs are worth it in your case depends on you. If your ingress controller does nothing more than basic routing, then my recommendation is to skip it entirely. If it does more, then you need to weigh the pros and cons of routing requests through it.
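To make the "send requests directly to the Service" option concrete, an in-cluster caller can address the Service by its cluster DNS name instead of the ingress URL. A minimal sketch, reusing the names from the question and assuming the default namespace plus a hypothetical ConfigMap consumed by the caller:
apiVersion: v1
kind: ConfigMap
metadata:
  name: caller-config                # hypothetical config for an in-cluster caller
data:
  # Direct Service URL, bypassing the ingress controller entirely
  ORDER_MANAGER_URL: "http://orderManager-service.default.svc.cluster.local:50588/"
  # URL via the ingress controller (path added by the Ingress rule)
  ORDER_MANAGER_INGRESS_URL: "http://mycluster/orders"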

Kubernetes ingress controller strict round-robin

I'm trying to set up an ingress controller in Kubernetes that will give me strict alternation between two (or more) pods running in the same service.
My testing setup is a single Kubernetes node, with a deployment of two nginx pods.
The deployment is then exposed with a NodePort service.
I've then deployed an ingress controller (I've tried both the Kubernetes Nginx Ingress Controller and the Nginx Kubernetes Ingress Controller, separately) and created an ingress rule for the NodePort service.
I edited index.html on each of the nginx pods, so that one shows "SERVER A" and the other "SERVER B", and ran a script that then curls the NodePort service 100 times. It greps "SERVER x" each time, appends it to an output file, and then tallies the number of each at the end.
As expected, curling the NodePort service itself (which uses kube-proxy), I got completely random results-- anything from 50:50 to 80:20 splits between the pods.
Curling the ingress controller, I consistently get something between 50:50 and 49:51 splits, which is great-- the default round-robin distribution is working well.
However, looking at the results, I can see that I've curled the same server up to 4 times in a row, but I need to enforce a strict alternation A-B-A-B. I've spent quite a while researching this and trying out different options, but I can't find a setting that will do this. Does anyone have any advice, please?
I'd prefer to stick with one of the ingress controllers I've tried, but I'm open to trying a different one, if it will do what I need.
Nginx's default behaviour is already close to strict round-robin. You can use it for most of your tests on the Nginx ingress, with different config tweaks if required, as sketched below.
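For example, a sketch of such a tweak on the community ingress-nginx controller (the ConfigMap name and namespace assume a default install, and the effect of the worker count on strictness is an assumption to verify against your controller version, since round-robin state is typically kept per NGINX worker process):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # ConfigMap read by the ingress-nginx controller
  namespace: ingress-nginx
data:
  load-balance: "round_robin"      # explicit round-robin (this is also the default)
  worker-processes: "1"            # a single worker keeps the round-robin state in one place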
There are also other options, like using the Istio service mesh.
You can load balance the traffic as you require just by changing the config:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:
  - name: testversion
    labels:
      version: v3
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
Read more at : https://istio.io/latest/docs/reference/config/networking/destination-rule/
& https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings
However, I would suggest going with a service mesh only when there is a large cluster; when implementing 2-3 services it is better to use the Nginx ingress, and haproxy-ingress is also a good option.

Istio Virtual Service Relationship to Normal Kubernetes Service

I am watching a Pluralsight video on the Istio service mesh. One part of the presentation says this:
The VirtualService uses the Kubernetes service to find the IP addresses of all the pods. The VirtualService doesn't route any traffic through the [Kubernetes] service, but it just uses it to get the list of endpoints where the traffic could go.
And it shows a graphic (illustrating the pod discovery, not traffic routing).
I am a bit confused by this because I don't know how an Istio VirtualService knows which Kubernetes Service to look at. I don't see any reference in the example Istio VirtualService yaml files to a Kubernetes Service.
I have theorized that the DestinationRules could have enough labels on them to get down to just the needed pods, but the examples only use the labels v1 and v2. It seems unlikely that a version alone will give only the needed pods. (Many different Services could be on v1 or v2.)
How does an Istio VirtualService know which Kubernetes Service to associate to?
or said another way,
How does an Istio VirtualService know how to find the correct pods from all the pods in the cluster?
When creating a VirtualService, you define which service to route to in the route.destination section:
port: the port the service is running on
host: the name of the service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test
spec:
  hosts:
  - "example.com"
  gateways:
  - test-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: app-service
So:
app pod(s) -> (managed by) app-service -> the test VirtualService
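For completeness, the test-gateway referenced in the VirtualService above might look roughly like this (a minimal sketch; the selector and hosts are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
spec:
  selector:
    istio: ingressgateway   # route through Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"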
Arfat's answer is correct.
I want to add the following part from the docs about the host, which should make things even more clear.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService
[...] Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
So when you write host: app-service and the VirtualService is in the default namespace, the host is interpreted as app-service.default.svc.cluster.local, which is the FQDN of the kubernetes service. If the app-service is in another namespace, say dev, you need to set the host as host: app-service.dev.svc.cluster.local.
The same goes for a DestinationRule, where the FQDN of a Kubernetes service is defined as the host as well.
https://istio.io/latest/docs/reference/config/networking/destination-rule/#DestinationRule
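As a sketch, a DestinationRule for the hypothetical app-service living in a dev namespace would therefore spell out the FQDN as its host:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: app-service
  namespace: dev
spec:
  host: app-service.dev.svc.cluster.local   # FQDN of the Kubernetes service
  subsets:
  - name: v1
    labels:
      version: v1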
A VirtualService and a DestinationRule are configured for a host. The VirtualService defines where the traffic should go (e.g. the host, weights for different versions, ...) and the DestinationRule defines how the traffic should be handled (e.g. the load balancing algorithm, and how the versions are defined).
So traffic is not routed like this:
Gateway -> VirtualService -> DestinationRule -> Service -> Pod, but rather like this:
Gateway -> Service, taking into account the config from the VirtualService and DestinationRule.

K8S Ingress: How to limit requests in flight per pod

I am porting an application to run within k8s. I have run into an issue with ingress. I am trying to find a way to limit the number of REST API requests in flight at any given time to each backend pod managed by a deployment.
See the image below that shows the architecture.
Ingress is being managed by nginx-ingress. For a given set of URL paths, the ingress forwards the request to a service that targets a deployment of REST API backend processes. The deployment is also managed by an HPA based upon CPU load.
What I want to do is find a way to queue up ingress requests such that there are never more than X requests in flight to any pod running our API backend process. (ex. only allow 50 requests in flight at once per pod)
Does anyone know how to put a request limit in place like this?
As a bonus question, the next thing I would need to do is have the HPA monitor the request queuing and automatically scale up/down the deployment to match the number of pods to the number of requests currently being processed / queued. For example if each pod can handle 100 requests in flight at once and we currently have load levels of 1000 requests to handle, then autoscale to 10 pods.
If it is useful, I am also planning to have linkerd in place for this cluster. Perhaps it has a capability that could help.
Autoscaling on network requests requires custom metrics. Given that you are using the NGINX ingress controller, you can first install Prometheus and the Prometheus adapter to export the metrics from the NGINX ingress controller. By default, the NGINX ingress controller already exposes a Prometheus endpoint.
The relation graph will be like this.
NGINX ingress <- Prometheus <- Prometheus Adaptor <- custom metrics api service <- HPA controller
The arrows indicate the direction of the API calls. So, in total, you will have three extra components in your cluster.
Once you have set up the custom metric server, you can scale your app based on the metrics from NGINX ingress. The HPA will look like this.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: srv-deployment-custom-hpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: srv-deployment
  minReplicas: 1
  maxReplicas: 100
  metrics:
  - type: Pods
    pods:
      metricName: nginx_srv_server_requests_per_second
      targetAverageValue: 100
I won't go through the actual implementation here because it will include a lot of environment specific configuration.
Once you have set that up, you will see that the HPA object shows the metrics it is pulling from the adapter.
For rate limiting at the Service object level, you will need a more powerful service mesh. Linkerd2 is designed to be lightweight, so it does not ship with rate-limiting functionality. You can refer to this issue under linkerd2: the maintainers declined to implement rate limiting at the service level and suggest doing it at the Ingress level instead.
AFAIK, Istio and some other advanced service meshes provide a rate-limiting function. In case you haven't deployed linkerd as your service mesh yet, you may try Istio instead.
For Istio, you can refer to this document to see how to do the rate limiting. But I need to let you know that Istio with NGINX ingress may cause you trouble: Istio ships with its own ingress controller, so you will need to do extra work to make them work together.
To conclude, if you can use the HPA with custom metrics based on the number of requests, that will be the quickest solution to your traffic-control issue. Only if you still have a really hard time with traffic control will you then need to consider Service-level rate limiting.
Nginx ingress allows rate limiting via annotations. You may want to have a look at the limit-rps one:
nginx.ingress.kubernetes.io/limit-rps: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, limit-req-status-code (default: 503) is returned.
On top of that, NGINX will queue your requests with the leaky bucket algorithm, so incoming requests will be buffered in a FIFO (first-in-first-out) queue and then consumed at the limited rate. The burst value in this case defines the size of the queue, which allows requests to exceed the base limit. When this queue becomes full, the next requests will be rejected.
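As an illustration, a sketch of these annotations applied to an Ingress (the resource names, path and numbers are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "50"              # 50 requests per second per client IP
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"  # queue up to 5 * 50 = 250 requests before rejecting
spec:
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-backend
            port:
              number: 8080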
For more detailed reading about traffic limiting and shaping:
Nginx rate limiting in a nutshell
Rate limiting nginx
Perhaps you should consider implementing Kubernetes Service APIs
Based on the latest Kubernetes docs, we can do HPA based on custom metrics.
Doc reference: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
Adding the code below:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metric:
        name: requests-per-second
      describedObject:
        apiVersion: networking.k8s.io/v1beta1
        kind: Ingress
        name: main-route
      target:
        type: Value
        value: 10k
What I would suggest is having an Ingress resource specifically for this service (load balancing in round robin) and then, if you can, doing autoscaling based on the number of requests (you expect) * the number of minimum replicas. This should give an optimal HPA.
PS: I will test it out myself and comment.

How to allow/deny http requests from other namespaces of the same cluster?

In a cluster with 2 namespaces (ns1 and ns2), I deploy the same app (deployment) and expose it with a service.
I thought separate namespaces would prevent executing curl http://deployment.ns1 from a pod in ns2, but apparently it's possible.
So my question is, how to allow/deny such cross namespaces operations? For example:
pods in ns1 should accept requests from any namespace
pods (or service?) in ns2 should deny all requests from other namespaces
Good that you are working with namespace isolation.
Deploy a new NetworkPolicy in your ns1 that allows all ingress. You can look up the documentation to define a network ingress policy that allows all inbound traffic.
Likewise for ns2, you can create a new NetworkPolicy and deploy the config in ns2 to deny all ingress from other namespaces. Again, the docs will come to the rescue to help you with the YAML construct.
It may look something like this:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns1
  name: web-allow-all-namespaces
spec:
  podSelector:
    matchLabels:
      app: app_name_ns1
  ingress:
  - from:
    - namespaceSelector: {}
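For ns2, a sketch of a policy that blocks ingress from other namespaces while still allowing traffic from Pods within ns2 itself (an empty podSelector selects all Pods in the namespace, and a from clause containing only a podSelector is restricted to the policy's own namespace):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns2
  name: deny-from-other-namespaces
spec:
  podSelector: {}        # applies to every Pod in ns2
  ingress:
  - from:
    - podSelector: {}    # only allow traffic originating from Pods in ns2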
It may not be the answer you want, but I can provide some helpful feature information to implement your requirements.
AFAIK, Kubernetes can define a network policy to limit network access.
Refer to Declare Network Policy for more details on NetworkPolicy.
Default policies
Setting a Default NetworkPolicy for New Projects (in the case of OpenShift).