Why isn't Istio's circuit breaking working? - kubernetes

With Istio 1.4.6,
I configured Kubernetes resources such as a Service and a Deployment.
I also configured a Gateway, a VirtualService, and a DestinationRule to implement circuit breaking.
The composition diagram is as follows. (The number of Pod replicas is two, and I operate only one version of the app.)
I wrote a VirtualService and a DestinationRule to use the circuit breaker.
VirtualService
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-virtual-service
spec:
  gateways:
  - reviews-istio-gateway
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews-service
        port:
          number: 80
DestinationRules
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination-rule
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    outlierDetection:
      baseEjectionTime: 1m
      consecutiveErrors: 1
      interval: 1s
      maxEjectionPercent: 100
Here, I expect that once errors occur in reviews-app, the erroring Pods will be ejected from the load-balancing pool for one minute.
Therefore, I expected circuit breaking to work as described above.
However, contrary to expectations, the circuit breaker did not trip, and error logs kept being recorded in reviews-app.
Why isn't the circuit breaker working?

I guess the problem is not with circuit breaking itself, but with how the VirtualService and DestinationRule are used.
For example, when a VirtualService is attached to a Gateway, its host should probably be the public hostname that the Gateway serves, e.g. acme.io.
The host of the DestinationRule should probably be the name of the Kubernetes Service (here, reviews-service).
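A hedged sketch of what that might look like, reusing the names from the question and assuming reviews.example.com is the (hypothetical) public host exposed by the Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-virtual-service
spec:
  gateways:
  - reviews-istio-gateway
  hosts:
  - reviews.example.com        # hypothetical public host served by the Gateway
  http:
  - route:
    - destination:
        host: reviews-service  # the Kubernetes Service name
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination-rule
spec:
  host: reviews-service        # must match the route destination above
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 1m
      maxEjectionPercent: 100
With both resources pointing at the same Service host, outlier detection has a cluster to act on and ejection should become visible in the Envoy stats.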

Related

Can Kubernetes Service control traffic percentage to a given pod?

Is it possible to control the percentage of traffic going to a particular pod with a Kubernetes Service, without controlling the number of underlying pods? By default, kube-proxy chooses a backend via a round-robin algorithm.
Yes, it is possible with the extra configuration of a service mesh.
If you want to do it with a plain Service alone, it is hard to split traffic by percentage, since the default behavior is round-robin.
For example, if you are using the Istio service mesh,
you can route the traffic based on weight:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
Here a subset is essentially a label selector: you run the Deployments with different labels, and Istio routes traffic between them based on weight, as sketched below.
See the example.
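The subsets referenced above have to be defined in a DestinationRule that maps them to Pod labels; a minimal sketch, assuming the Deployments carry a version label:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1   # selects Pods labeled version=v1
  - name: v3
    labels:
      version: v3   # selects Pods labeled version=v3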

Route changes in Istio apply too slowly and make the deployment fail

I am working on a DevOps solution and trying to automate blue-green deployment on Kubernetes. However, we are facing the issue that Istio applies the route rules too slowly: when we remove VirtualServices, the change takes a long time to become effective. We tried waiting 60s for the rules to update before destroying the old pods, but we have no idea whether 60s is enough for the route change to finish, and we will have downtime if it takes longer than 60s to take effect. I would like some advice on how to check that the route (to the green one only) has been updated properly, and on how to make Istio apply changes faster. Thanks.
Here is the YAML file used to apply the VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  namespace: xxx-d
  name: xxx-virtualservice
  labels:
    microservice: xxx-new
spec:
  hosts:
  - xxx.com
  gateways:
  - mesh
  - http-gateway.istio-system.svc.cluster.local
  - https-gateway.istio-system.svc.cluster.local
  http:
  - headers:
      request:
        set:
          x-forwarded-port: '443'
          x-forwarded-proto: https
    route:
    - destination:
        host: xxx-service.svc.cluster.local
        port:
          number: 8080
    retries:
      attempts: 3
      retryOn: gateway-error,connect-failure,refused-stream
    timeout: 3s

Ingress in Kubernetes

I was doing some research about ingress and it seems I have to create a new ingress resource for each namespace. Is that correct?
I just created two separate Ingress resources in different namespaces in my GKE cluster, and they seem to use the same LB (which is great for cost), but I would think clashes are then possible (when using the same path). I just tried it: the first one I created still works on the path, while the newer one on the same path simply does not.
Can someone explain to me the correct setup for Ingress?
The way Kubernetes works, an ingress controller won't pass traffic to a Service that is in a different namespace from the Ingress resource. So, if you create an Ingress resource in the default namespace, all your Services must be in the default namespace as well.
This is something that won't change. EVER. There was a feature request years ago, and the Kubernetes team announced that it's not going to happen: it would introduce a security hole if the ingress controller were able to cross namespace boundaries.
Now, what we do in these situations is actually pretty neat. You will have to do the following:
Say you have two Services in the namespaces you need, e.g. service1.foo and service2.bar.
Create two headless Services without selectors and two Endpoints objects pointing to the IP addresses of service1.foo and service2.bar, in the same namespace as the Ingress resource. A headless Service without selectors forces kube-dns (or CoreDNS) to look for either an ExternalName-type Service or an Endpoints object. The only requirement here is that your headless Service and the Endpoints object must have the same name.
Create your ingress resource pointing to the headless services.
It should look like this (for 1 service):
Say the IP address of service1.foo is 10.10.10.10. Your headless service and the Endpoint object would be:
apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: bait-svc
subsets:
- addresses:
  - ip: 10.10.10.10
  ports:
  - port: 80
    protocol: TCP
and Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: ssl-certs
  rules:
  - host: site1.training.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bait-svc
          servicePort: 80
So, the Ingress points to the bait-svc, and bait-svc points to service1.foo. And you will do this for each service.
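As mentioned above, an ExternalName-type Service can work as an alternative to the headless Service + Endpoints pair; a hedged sketch, reusing the bait-svc name and assuming service1 lives in the foo namespace:
apiVersion: v1
kind: Service
metadata:
  name: bait-svc
spec:
  type: ExternalName
  externalName: service1.foo.svc.cluster.local  # in-cluster DNS name of the target Service
  ports:
  - name: http
    port: 80
This avoids hard-coding the target Service's cluster IP, at the cost of relying on DNS resolution in the ingress controller.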
UPDATE
Thinking about it now, this might not work with the GKE Ingress controller, since on GKE you need a NodePort-type Service for the HTTP load balancer to reach the Service. As you can see, in my example I'm using the nginx Ingress controller.
Whether it works or not, I would recommend using some other Ingress controller. It's not that the GKE IC is bad: it is quite robust, but you almost always end up hitting some limitation. Other ICs are more flexible.
The behavior of conflicting Ingress routes is undefined and implementation dependent. In most cases it’s just last writer wins.

Does Istio allow to configure a maximum response timeout for a circuit breaker to open? How?

I'm checking the documentation for the DestinationRule, where there are several examples of a circuit breaking configuration, e.g:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-app
spec:
  host: bookinfoappsvc.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        connectTimeout: 30ms
  ...
The connectionPool.tcp element offers a connectTimeout. However what I need to configure is a maximum response timeout. Imagine I want to open the circuit if the service takes longer than 5 seconds to answer. Is it possible to configure this in Istio? How?
Take a look at Tasks --> Traffic Management --> Setting Request Timeouts:
A timeout for http requests can be specified using the timeout field
of the route rule. By default, the timeout is 15 seconds [...]
So, you must set the http.timeout in the VirtualService configuration.
Take a look at this example from the Virtual Service / Destination official docs:
The following VirtualService sets a timeout of 5s for all calls to
productpage.prod.svc.cluster.local service in Kubernetes.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-productpage-rule
  namespace: istio-system
spec:
  hosts:
  - productpage.prod.svc.cluster.local # ignores rule namespace
  http:
  - timeout: 5s
    route:
    - destination:
        host: productpage.prod.svc.cluster.local
http.timeout: Timeout for HTTP requests.
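To tie this back to the circuit-breaker part of the question: the VirtualService timeout turns a slow upstream response into a gateway error (HTTP 504), which outlier detection in a DestinationRule can then count toward ejection. A hedged sketch under that assumption (the resource name is hypothetical):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-productpage-outlier   # hypothetical name
spec:
  host: productpage.prod.svc.cluster.local
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 1       # eject a host after one 5xx (timeouts surface as 504s)
      interval: 1s
      baseEjectionTime: 1m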

Default Load Balancing in Kubernetes

I've recently started working with Kubernetes clusters. The flow of network calls for a given Kubernetes service in our cluster is something like the following:
External Non-K8S Load Balancer -> Ingress Controller -> Ingress Resource -> Service -> Pod
For a given service, there are two replicas. By looking at the logs of the containers in the replicas, I can see that calls are being routed to different pods. As far as I can see, we haven't explicitly set up any load-balancing policies anywhere for our services in Kubernetes.
I've got a few questions:
1) Is there a default load-balancing policy for K8S? I've read about kube-proxy and random routing. It definitely doesn't appear to be round-robin.
2) Is there an obvious way to specify load balancing rules in the Ingress resources themselves? On a per-service basis?
Looking at one of our Ingress resources, I can see that the 'loadBalancer' property is empty:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/rewrite-target":"/","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"example-service-ingress","namespace":"member"},"spec":{"rules":[{"host":"example-service.x.x.x.example.com","http":{"paths":[{"backend":{"serviceName":"example-service-service","servicePort":8080},"path":""}]}}]}}
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: "2019-02-13T17:49:29Z"
  generation: 1
  name: example-service-ingress
  namespace: x
  resourceVersion: "59178"
  selfLink: /apis/extensions/v1beta1/namespaces/x/ingresses/example-service-ingress
  uid: b61decda-2fb7-11e9-935b-02e6ca1a54ae
spec:
  rules:
  - host: example-service.x.x.x.example.com
    http:
      paths:
      - backend:
          serviceName: example-service-service
          servicePort: 8080
status:
  loadBalancer:
    ingress:
    - {}
I should specify - we're using an on-prem Kubernetes cluster, rather than on the cloud.
Cheers!
The "internal load balancing" between Pods of a Service has already been covered in this question from a few days ago.
Ingress isn't really doing anything special (unless you've been hacking in the NGINX config it uses) - it will use the same Service rules as in the linked question.
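For completeness, about the only load-balancing knob a plain Service exposes is session affinity; a hedged sketch, reusing example-service-service from the Ingress above with a hypothetical Pod label:
apiVersion: v1
kind: Service
metadata:
  name: example-service-service
spec:
  selector:
    app: example-service        # hypothetical Pod label
  ports:
  - port: 8080
    targetPort: 8080
  sessionAffinity: ClientIP     # pin each client IP to the same Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800     # affinity window (the default is 3 hours)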
If you want or need fine-grained control over how traffic is routed to the pods within a service, you can extend Kubernetes' features: I recommend looking into the traffic-management features of Istio, one of which is the ability to dynamically control how much traffic different pods in a service receive.
I see two options that can be used with k8s:
Use Istio's traffic management and create a DestinationRule. It currently supports three load-balancing modes:
Round robin
Random
Weighted least request
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
...
spec:
  ...
  subsets:
  - name: test
    ...
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
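A hedged variant showing that the policy can also be set for the whole host and overridden per subset; test-service and the labels are hypothetical:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destination-rule   # hypothetical name
spec:
  host: test-service            # hypothetical Kubernetes Service
  trafficPolicy:
    loadBalancer:
      simple: RANDOM            # host-wide default: pick a healthy endpoint at random
  subsets:
  - name: test
    labels:
      version: test             # hypothetical Pod label
    trafficPolicy:
      loadBalancer:
        simple: LEAST_CONN      # least-request balancing for this subset only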
Use lb_type in the Envoy proxy with Ambassador on Kubernetes. More info about Ambassador is at https://www.getambassador.io.