How to replicate some traffic on Kubernetes and divert it to the Service for investigation

I'm using istio and I know that I can define weights in Virtual Service and divert traffic to different services.
My question is: how do I replicate (amplify) some of the traffic and direct the replicated traffic to a validation service? The replicated traffic must not go back to the original caller but stay within the cluster; in other words, it must not bother the user.
I'm not even sure whether there is an ecosystem, feature, or application that provides this kind of mechanism, or what such a mechanism is called, so I'm having trouble finding it.
Thanks.

OP found a solution by themselves in the comments, hence the CW.
In this scenario, the best solution is to use Istio Mirroring - also called shadowing.
When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority headers appended with -shadow. For example, cluster-1 becomes cluster-1-shadow.
Also, it is important to note that these requests are mirrored as “fire and forget”, which means that the responses are discarded.
To create a mirroring rule, you create a VirtualService with the mirror and mirrorPercentage fields.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
    mirrorPercentage:
      value: 100.0
This route rule sends 100% of the traffic to v1. The last stanza specifies that you want to mirror (i.e., also send) 100% of the same traffic to the httpbin:v2 service.
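Note that the v1 and v2 subsets referenced above must also be defined in a DestinationRule, otherwise the routes will not resolve. A minimal sketch, assuming the pods carry version: v1 and version: v2 labels (the convention used in the Istio httpbin sample):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1   # routes to pods labeled version=v1
  - name: v2
    labels:
      version: v2   # routes to pods labeled version=v2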
[source]

Related

Where to put Istio network retry

I'm very new to Istio and not a Kubernetes expert, though I have used the latter. I respectfully ask for your understanding and a bit more detail than you might normally include.
For simplicity, say I have two services, both Java/SpringBoot. Service A listens to requests from the outside world, Service B listens to requests from Service A. Service B is scalable, and at points might return 503. I wish to have service A retry calls to service B in a configurable non-programmatic way. Here's a blog/link that I tried to follow that I think is very similar.
https://samirbehara.com/2019/06/05/retry-design-pattern-with-istio/
Two questions:
It may seem obvious, but if I wanted to define a virtual retriable service, do I add it to the existing application.yml file for the project, or is there some other file where the networking.istio.io/v1alpha3 resource goes?
Would I define the retry configuration in the yaml/repo for Service A or Service B? I can think of reasons for architecting Istio either way.
Thanks,
Woodsman
If the scalable service is returning 503, it makes sense to add a VirtualService for serviceB just like in the blog example, and make serviceA connect to virtualServiceB, which will do the retries to serviceB.
Now, for this to work (from within the cluster):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceB
spec:
  hosts:
  - serviceB
  http:
  - route:
    - destination:
        host: serviceB
    retries:
      attempts: 3
      perTryTimeout: 2s
These lines:
  hosts:
  - serviceB
will tell the default Istio gateway (mesh) to route all traffic addressed to serviceB through virtualServiceB first, which then routes it to serviceB. The retries then happen between virtualServiceB and serviceB.
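Since Service B specifically returns 503s, you can also be explicit about which failures trigger a retry via the retryOn field of the retry policy. A sketch of how the http section above would look: 5xx covers 503 responses, while reset and connect-failure cover connection-level errors.
  http:
  - route:
    - destination:
        host: serviceB
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,reset,connect-failure  # retry on 5xx responses and connection failures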
Hope this helps

Within a k8s cluster, should I always call the ingress rule or the NodePort service name?

I have a number of restful services within our system
Some run within the kubernetes cluster
Others are on legacy infrastructure and are hosted on VMs
Many of our restful services make synchronous calls to each other (so not asynchronously using message queues)
We also have a number of UI's (fat clients or web apps) that make use of these services
We might define a simple k8s manifest file like this, containing a Pod, a Service, and an Ingress:
apiVersion: v1
kind: Pod
metadata:
  name: order-manager   # Kubernetes object names must be lowercase RFC 1123 names
  labels:
    app: order-manager  # label needed so the Service selector below matches this Pod
spec:
  containers:
  - name: order-manager
    image: "gitlab-prem.com:5050/image-repo/orderManager:orderManager_1.10.22"
---
apiVersion: v1
kind: Service
metadata:
  name: order-manager-service
spec:
  type: NodePort
  selector:
    app: order-manager
  ports:
  - protocol: TCP
    port: 50588
    targetPort: 50588
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-manager-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: order-manager-service
            port:
              number: 50588
I am really not sure what the best way is for restful services on the cluster to talk to each other.
It seems like there is only one good route for callers outside the cluster, which is to use the URL built by the ingress rule.
There are two options within the cluster.
This might illustrate it further with an example
| Caller | Receiver | Example URL | Notes |
| --- | --- | --- | --- |
| UI | On cluster | http://clusterip/orders | The UI uses the cluster IP and the ingress rule to reach the order manager |
| Service off cluster | On cluster | http://clusterip/orders | Just like the UI |
| On cluster | On cluster | http://clusterip/orders | Could use the ingress rule, like the above approach |
| On cluster | On cluster | http://order-manager-service:50588/ | Could use the service name and port directly |
I write "cluster ip" a few times above, but in real life we put something on top so there is a friendly name, like http://mycluster/orders.
So when caller and receiver are both on the cluster, is it either:
Use the ingress rule, which is also used by services and apps outside the cluster
Use the NodePort service name, which is used in the ingress rule
Or perhaps something else!
One benefit of using the NodePort service name is that you do not have to change your base URL; the ingress rule adds an extra element to the path (in the above case, orders).
When I move a restful service from legacy infrastructure to the k8s cluster, that extra path element will increase the complexity.
It depends on whether you want requests to be routed through your ingress controller or not.
Requests sent to the full URL configured in your Ingress resource will be processed by your ingress controller. The controller itself — NGINX in this case — will proxy the request to the Service. The request will then be routed to a Pod.
Sending the request directly to the Service’s URL simply skips your ingress controller. The request is directly routed to a Pod.
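Concretely, for the order manager example above, the two addressing styles from inside the cluster would look something like this (the default namespace and the ingress address placeholder are assumptions):
# through the ingress controller, using the path from the Ingress rule
http://<ingress-address>/orders
# directly to the Service via cluster DNS, skipping the ingress controller
http://order-manager-service.default.svc.cluster.local:50588/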
The trade offs between the two options depend on your setup.
Sending requests through your ingress controller will increase request latency and resource consumption. If your ingress controller does nothing other than route requests, I would recommend sending requests directly to the Service.
However, if you use your ingress controller for other purposes, like authentication, monitoring, logging, or tracing, then you may prefer that the controller process internal requests.
For example, on some of my clusters I use the NGINX ingress controller to measure request latency and track HTTP response statuses. I route requests between apps running in the same cluster through the ingress controller in order to have that information available. I pay the cost of increased latency and resource usage in order to have improved observability.
Whether the trade offs are worth it in your case depends on you. If your ingress controller does nothing more than basic routing, then my recommendation is to skip it entirely. If it does more, then you need to weigh the pros and cons of routing requests through it.

Can Kubernetes Service control traffic percentage to a given pod?

Is it possible to control the percentage of traffic going to a particular pod with a Kubernetes Service, without controlling the number of underlying pods? By default, kube-proxy chooses a backend via a round-robin algorithm.
Yes, it is possible with the extra configuration of a service mesh.
If you want to do it using a plain Service, it's hard to split traffic by percentage, as the default behavior is round-robin.
For example, if you are using the Istio service mesh, you can route the traffic based on weight:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
A subset is essentially a label selector: you run deployments whose pods carry different labels, and then route between them by weight using Istio.
See the example.
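The subsets referenced above must be defined in a DestinationRule that maps each subset name to pod labels. A minimal sketch, assuming the deployments label their pods version: v1 and version: v3 (the convention used in the Istio bookinfo sample):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1   # routes to pods labeled version=v1
  - name: v3
    labels:
      version: v3   # routes to pods labeled version=v3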

How to use variables in Istio VirtualService?

I'm currently working on a case where we need to dynamically create services and provide access to them via URI subpaths of the main gateway.
I'm planning to use virtual services for traffic routing for them. Virtual Service for a particular service should look like:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: subpaths-routes
spec:
  hosts:
  - mainservice.prod.svc.cluster.local
  http:
  - name: "subpath-redirection"
    match:
    - uri:
        prefix: "/bservices/svc-2345-6789"
    route:
    - destination:
        host: svc-2345-6789.prod.svc.cluster.local
But there may be a huge number of such services (like thousands). All follow the same pattern of routing.
I would like to know if Istio has a mechanism to specify VirtualService with variables/parameters like the following:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: subpaths-routes
spec:
  hosts:
  - mainservice.prod.svc.cluster.local
  http:
  - name: "subpath-redirection"
    match:
    - uri:
        prefix: "/bservices/{{ variable }}"
    route:
    - destination:
        host: {{ variable }}.prod.svc.cluster.local
In Nginx, one can do a similar thing by specifying something like this:
location ~ /service/(?<variable>[0-9a-zA-Z\_\-]+)/ {
    proxy_pass http://$variable:8080;
}
Is there a way in Istio to accomplish that?
And if there is not, how would thousands of VSs impact the performance of request processing? Is it expensive to keep them in terms of CPU and RAM consumed?
Thank you in advance!
How to use variables in Istio VirtualService?
As far as I know, there is no option in Istio to specify a variable in both the prefix and the host; if it were only the prefix, you could try a regex match instead of a prefix match.
If you would like to automate it in some way, i.e. create a variable and put it in both the prefix and the host, then you could try to do it with helm.
There are a few examples of virtual services in helm:
https://github.com/streamsets/helm-charts/blob/master/incubating/control-hub/templates/istio-gateway-virtualservice.yaml
https://github.com/salesforce/helm-starter-istio/blob/master/templates/virtualService.yaml
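A minimal sketch of what such a helm template could look like, generating one VirtualService per entry of a hypothetical serviceNames list in values.yaml (the file layout and value name are assumptions):
# templates/virtualservices.yaml (hypothetical layout)
{{- range .Values.serviceNames }}
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ . }}-route
spec:
  hosts:
  - mainservice.prod.svc.cluster.local
  http:
  - name: "subpath-redirection"
    match:
    - uri:
        prefix: "/bservices/{{ . }}"
    route:
    - destination:
        host: {{ . }}.prod.svc.cluster.local
{{- end }}
with values.yaml providing something like serviceNames: [svc-2345-6789, svc-3456-7890].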
how would thousands of VSs impact the performance of request processing?
There is a GitHub issue about that; as @lanceliuu mentioned there:
When we create ~1k virtualservices in a single cluster, the ingress gateway is picking up new virtualservice slowly.
So that might be one of the issues with thousands of Virtual Services.
Is It expensive to keep them in terms of CPU and RAM being consumed?
I would say it would require testing. In the GitHub issue above they mention that there is no mem/cpu pressure on the Istio components, but I can't say how expensive it is.
In theory you could create one big virtual service instead of thousands, but as mentioned in the documentation you should rather split large virtual services into multiple resources.
Additional resources:
https://engineering.hellofresh.com/everything-we-learned-running-istio-in-production-part-2-ff4c26844bfb
https://istio.io/latest/docs/ops/deployment/performance-and-scalability/
https://perf.dashboard.istio.io/

Can a `ServiceEntry` be applied to only 1 service?

We have a cluster with Istio, but there is this one condition, I can't find how to fulfill.
We need one of the services to have certain restrictions within the mesh as well, and to talk to one external endpoint. Through the Sidecar object, I should be able to set the restrictions internally, but I don't know how to restrict it to one external endpoint.
I can set the external endpoint in the Sidecar object as well, but I have to create a ServiceEntry anyway, in which case all the services can talk to that external endpoint.
It seems that what I need is to set a ServiceEntry for one specific service, but this is not possible. Is there any other way to achieve this?
I asked this question on GitHub to the Istio team, and the only way to achieve this is to put the service in a different namespace and make the ServiceEntry apply only to the workloads in that namespace through the exportTo parameter.
The ServiceEntry would look like this:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-demo
spec:
  exportTo:
  - "." # with ".", the ServiceEntry applies only to workloads in the same namespace
  hosts:
  - www.google.com
  location: MESH_EXTERNAL
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS
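To also enforce the internal restrictions mentioned in the question, a Sidecar resource in that same namespace can limit egress for those workloads. A minimal sketch (the namespace name is an assumption; "./*" covers the se-demo ServiceEntry above because it is exported to the same namespace):
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: restricted-ns   # hypothetical namespace holding the restricted service
spec:
  egress:
  - hosts:
    - "./*"            # anything in this namespace, including the se-demo ServiceEntry
    - "istio-system/*" # keep access to the control plane namespace
Note that to actually block calls to unlisted external endpoints, the mesh's outboundTrafficPolicy has to be set to REGISTRY_ONLY.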