I'm very new to Istio and not a Kubernetes expert, though I have used the latter. I respectfully ask for your understanding and a bit more detail than you might normally include.
For simplicity, say I have two services, both Java/Spring Boot. Service A listens to requests from the outside world; Service B listens to requests from Service A. Service B is scalable and at points might return 503. I want Service A to retry calls to Service B in a configurable, non-programmatic way. Here's a blog post that I tried to follow, which I think is very similar:
https://samirbehara.com/2019/06/05/retry-design-pattern-with-istio/
Two questions:
It may seem obvious, but if I wanted to define a virtual retriable service, do I add it to the existing application.yml file for the project, or is there some other file where the networking.istio.io/v1alpha3 resource goes?
Would I define the retry configuration in the yaml/repo for Service A or Service B? I can think of reasons for architecting Istio either way.
Thanks,
Woodsman
If the scalable service is returning 503, it makes sense to add a VirtualService for serviceB, just like the blog example, and have serviceA connect to virtualServiceB, which will do the retries to serviceB.
Now, for this to work (from within the cluster):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceB
spec:
  hosts:
  - serviceB
  http:
  - route:
    - destination:
        host: serviceB
    retries:
      attempts: 3
      perTryTimeout: 2s
These lines:
hosts:
- serviceB
will tell the default Istio gateway (mesh) to route all the traffic not straight to serviceB, but through virtualServiceB first, which will then route to serviceB. That gives you the retries from virtualServiceB to serviceB.
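On the first question: this does not go into Spring's application.yml. A VirtualService is a standalone Kubernetes manifest that you save in its own file (for example serviceb-virtualservice.yaml, name assumed) and apply with kubectl apply -f. Also, since serviceB specifically returns 503, it may be worth stating explicitly which failures get retried via the retryOn field. A minimal sketch under those assumptions:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceB
spec:
  hosts:
  - serviceB
  http:
  - route:
    - destination:
        host: serviceB
    retries:
      attempts: 3
      perTryTimeout: 2s
      # retry on any 5xx response (covers 503) and on connection failures
      retryOn: 5xx,connect-failure

As for the second question: the rule names the destination (serviceB), so it arguably belongs with Service B's yaml/repo, even though it changes the behavior of calls made by Service A.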
Hope this helps
I have a number of RESTful services within our system.
Some are within the Kubernetes cluster.
Others are on legacy infrastructure and are hosted on VMs.
Many of our RESTful services make synchronous calls to each other (so not asynchronous via message queues).
We also have a number of UIs (fat clients or web apps) that make use of these services.
We might define a simple k8s manifest file like this, containing a Pod, a Service, and an Ingress:
apiVersion: v1
kind: Pod
metadata:
  # Kubernetes object names must be lowercase (RFC 1123), so "orderManager" is invalid
  name: order-manager
  labels:
    # this label is needed so the Service selector below actually matches the Pod
    app: order-manager
spec:
  containers:
  - name: order-manager
    image: "gitlab-prem.com:5050/image-repo/orderManager:orderManager_1.10.22"
---
apiVersion: v1
kind: Service
metadata:
  name: order-manager-service
spec:
  type: NodePort
  selector:
    app: order-manager
  ports:
  - protocol: TCP
    port: 50588
    targetPort: 50588
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-manager-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: order-manager-service
            port:
              number: 50588
I am really not sure what the best way is for RESTful services on the cluster to talk to each other.
It seems like there is only one good route for callers outside the cluster, which is to use the URL built by the ingress rule.
There are two options within the cluster. This table might illustrate it further:
| Caller | Receiver | Example URL | Notes |
| --- | --- | --- | --- |
| UI | On Cluster | http://clusterip/orders | The UI would use the cluster IP and the ingress rule to reach the order manager |
| Service off cluster | On Cluster | http://clusterip/orders | Just like the UI |
| On Cluster | On Cluster | http://clusterip/orders | Could use the ingress rule like the above approach |
| On Cluster | On Cluster | http://order-manager-service:50588/ | Could use the service name and port directly |
I write cluster ip a few times above, but in real life we put something on top so there is a friendly name like http://mycluster/orders.
So when caller and receiver are both on cluster, is it either:
Use the ingress rule, which is also used by services and apps outside the cluster
Use the NodePort service name, which is used in the ingress rule
Or perhaps something else!
One benefit of using the NodePort service name is that you do not have to change your base URL.
The ingress rule appends an extra element to the route (in the above case, orders).
So when I move a RESTful service from legacy to the k8s cluster, it will increase the complexity.
It depends on whether you want requests to be routed through your ingress controller or not.
Requests sent to the full URL configured in your Ingress resource will be processed by your ingress controller. The controller itself (NGINX in this case) will proxy the request to the Service. The request will then be routed to a Pod.
Sending the request directly to the Service’s URL simply skips your ingress controller. The request is directly routed to a Pod.
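Calling the Service directly just means pointing the client at the Service's cluster DNS name instead of the ingress URL. A minimal sketch of what that could look like in the calling app's Spring Boot application.yml; the ordermanager.base-url property is hypothetical, not a standard Spring setting:

# application.yml of the calling service (property name assumed)
ordermanager:
  # <service>.<namespace>.svc.cluster.local:<port> resolves via cluster DNS
  base-url: http://order-manager-service.default.svc.cluster.local:50588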
The trade-offs between the two options depend on your setup.
Sending requests through your ingress controller will increase request latency and resource consumption. If your ingress controller does nothing other than route requests, I would recommend sending requests directly to the Service.
However, if you use your ingress controller for other purposes, like authentication, monitoring, logging, or tracing, then you may prefer that the controller process internal requests.
For example, on some of my clusters I use the NGINX ingress controller to measure request latency and track HTTP response statuses. I route requests between apps running in the same cluster through the ingress controller in order to have that information available. I pay the cost of increased latency and resource usage in order to have improved observability.
Whether the trade-offs are worth it in your case depends on you. If your ingress controller does nothing more than basic routing, then my recommendation is to skip it entirely. If it does more, then you need to weigh the pros and cons of routing requests through it.
I'm using Istio and I know that I can define weights in a VirtualService and divert traffic to different services.
My question is: how do I amplify some of the traffic and direct the amplified traffic to a validation service? This amplified traffic would not go back to the original source but would terminate within the cluster. In other words, it does not bother the user.
I'm not even sure if there is an ecosystem, feature, or application that provides this kind of mechanism, and I don't know what it is called, so I'm having trouble finding it.
Thanks.
OP found a solution by themselves, in the comments, hence the CW.
In this scenario, the best solution is to use Istio Mirroring - also called shadowing.
When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority headers appended with -shadow. For example, cluster-1 becomes cluster-1-shadow.
Also, it is important to note that these requests are mirrored as “fire and forget”, which means that the responses are discarded.
To create a mirroring rule, you have to create a VirtualService with the mirror and mirrorPercentage fields.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
    mirrorPercentage:
      value: 100.0
This route rule sends 100% of the traffic to v1. The last stanza specifies that you want to mirror (i.e., also send) 100% of the same traffic to the httpbin:v2 service.
[source]
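For the subset references above to resolve, v1 and v2 have to be defined in a DestinationRule. A minimal sketch, assuming the httpbin pods carry the usual version labels:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  # each subset selects pods by label (version labels assumed)
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2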
I am watching a Pluralsight video on the Istio service mesh. One part of the presentation says this:
The VirtualService uses the Kubernetes service to find the IP addresses of all the pods. The VirtualService doesn't route any traffic through the [Kubernetes] service, but it just uses it to get the list of endpoints where the traffic could go.
And it shows a graphic illustrating the pod discovery (not the traffic routing).
I am a bit confused by this because I don't know how an Istio VirtualService knows which Kubernetes Service to look at. I don't see any reference in the example Istio VirtualService yaml files to a Kubernetes Service.
I have theorized that the DestinationRules could have enough labels on them to get down to just the needed pods, but the examples only use the labels v1 and v2. It seems unlikely that a version label alone would yield only the needed pods. (Many different Services could be on v1 or v2.)
How does an Istio VirtualService know which Kubernetes Service to associate to?
or said another way,
How does an Istio VirtualService know how to find the correct pods from all the pods in the cluster?
When creating a VirtualService, you define which service to find in the route.destination section:
port: the port the service is running on
host: the name of the service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test
spec:
  hosts:
  - "example.com"
  gateways:
  - test-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: app-service
So the chain is:
app-pod(s) -> (managed by) app-service -> test VirtualService
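In other words, host: app-service refers to an ordinary Kubernetes Service, and that Service's label selector is what narrows things down to the right pods. A minimal sketch of such a Service (name, labels, and ports assumed):

apiVersion: v1
kind: Service
metadata:
  name: app-service   # the name the VirtualService's host field refers to
spec:
  selector:
    app: app          # selects the pods; Istio uses these endpoints
  ports:
  - port: 80
    targetPort: 8080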
Arfat's answer is correct.
I want to add the following part from the docs about the host field, which should make things even clearer.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService
[...] Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
So when you write host: app-service and the VirtualService is in the default namespace, the host is interpreted as app-service.default.svc.cluster.local, which is the FQDN of the kubernetes service. If the app-service is in another namespace, say dev, you need to set the host as host: app-service.dev.svc.cluster.local.
Same goes for DestinationRule, where the FQDN of a kubernetes service is defined as host, as well.
https://istio.io/latest/docs/reference/config/networking/destination-rule/#DestinationRule
VirtualService and DestinationRule are configured for a host. The VirtualService defines where the traffic should go (e.g. host, weights for different versions, ...) and the DestinationRule defines how the traffic should be handled (e.g. the load balancing algorithm, and how the versions are defined).
So traffic is not routed like this:
Gateway -> VirtualService -> DestinationRule -> Service -> Pod
but like this:
Gateway -> Service, considering the config from the VirtualService and DestinationRule.
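To make that concrete, a DestinationRule showing both aspects (a load balancing policy and version subsets) might look like this; the host and labels are assumed:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: app-service
spec:
  host: app-service.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN  # how traffic is handled
  subsets:
  - name: v1               # how a version is defined: by pod labels
    labels:
      version: v1
  - name: v2
    labels:
      version: v2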
We have a cluster with Istio, but there is one condition I can't figure out how to fulfill.
We need one of the services to have certain restrictions within the mesh as well, and to talk to one external endpoint. Through the Sidecar object, I should be able to set the internal restrictions, but I don't know how to restrict it to one external endpoint.
I can set the external endpoint in the Sidecar object as well, but I have to create a ServiceEntry anyway, in which case all the services can talk to that external endpoint.
It seems that what I need is to set a ServiceEntry for one specific service, but this is not possible. Is there any other way to achieve this?
I asked this question on GitHub, to the Istio team, and the only way to achieve this is to put the service in a different namespace and make the ServiceEntry apply only to the workloads in that namespace through the exportTo parameter.
The ServiceEntry would look like this:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-demo
spec:
  exportTo:
  - "."  # with ".", the ServiceEntry only applies to workloads in the same namespace
  hosts:
  - www.google.com
  location: MESH_EXTERNAL
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS
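To also get the internal restrictions mentioned in the question, a Sidecar resource in that same namespace can limit egress to the namespace itself (plus istio-system, so control-plane traffic keeps working). A minimal sketch, with the namespace name assumed:

apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: demo      # the namespace holding the restricted service (assumed)
spec:
  egress:
  - hosts:
    - "./*"            # only services and ServiceEntries in this namespace
    - "istio-system/*" # keep traffic to the control plane working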
I have a couple of services, svc A and svc B, with the request flow as follows:
svc A --> svc B
I have injected the sidecar into svc B and then added the routing rules via a VirtualService object:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: b
  namespace: default
spec:
  hosts:
  - b.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: b.default.svc.cluster.local
    fault:
      abort:
        percentage:
          value: 100
        httpStatus: 403
These rules are only applied when svc A has an Istio sidecar proxy, which makes me wonder whether we need the Istio proxy on the client side as well. I was expecting that only the service for which I added the rules would need the sidecar; I can't think of any technical requirement for a proxy anywhere other than alongside svc B.
Yes, Service A needs a sidecar. It's confusing I admit, but the way to think of the VirtualService resource is "where do I find the backends I want to talk to and what service should they appear to provide me?" A's sidecar is its helper which does things on its behalf like load-balancing, and in your case fault injection (Service B is reliable; it's Service A that wants it to seem unreliable).
The comments that A and B both need sidecars in order to communicate at all aren't correct (unless you want mTLS), but if you want the mesh to provide additional services to A, then A needs a sidecar.
Yes, you should inject the sidecar proxy into service A as well; only then can the two services communicate with each other through proxies.
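If you use automatic injection, one way to get the proxy into svc A's pods is to label its namespace and then restart the workloads. A minimal sketch, assuming svc A runs in the default namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled  # newly created pods in this namespace get a sidecar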
First go ahead and run:
gcloud container clusters describe [Your-Cluster-Name] | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
This will give you two IP ranges. Add these to your deployment YAML as shown below, replacing the IP ranges with yours.
apiVersion: v1
kind: Pod
metadata:
  name: [Your-Pod-Name]
  annotations:
    sidecar.istio.io/inject: "true"
    traffic.sidecar.istio.io/includeOutboundIPRanges: 10.32.0.0/14,10.35.240.0/20
This limits the sidecar's outbound traffic interception to the cluster's own IP ranges, so traffic to anything else bypasses the proxy and your services can reach the internet.