How to use variables in Istio VirtualService?

I'm currently working on a case where we need to dynamically create services and provide access to them via URI subpaths of the main gateway.
I'm planning to use virtual services for their traffic routing. The VirtualService for a particular service should look like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: subpaths-routes
spec:
  hosts:
  - mainservice.prod.svc.cluster.local
  http:
  - name: "subpath-redirection"
    match:
    - uri:
        prefix: "/bservices/svc-2345-6789"
    route:
    - destination:
        host: svc-2345-6789.prod.svc.cluster.local
But there may be a huge number of such services (thousands), and all follow the same routing pattern.
I would like to know if Istio has a mechanism to specify a VirtualService with variables/parameters, like the following:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: subpaths-routes
spec:
  hosts:
  - mainservice.prod.svc.cluster.local
  http:
  - name: "subpath-redirection"
    match:
    - uri:
        prefix: "/bservices/{{ variable }}"
    route:
    - destination:
        host: {{ variable }}.prod.svc.cluster.local
In Nginx, one can do a similar thing by specifying something like this:
location ~ /service/(?<variable>[0-9a-zA-Z_-]+)/ {
    proxy_pass http://$variable:8080;
}
Is there a way in Istio to accomplish that?
And if there is not, how would thousands of VSs impact the performance of request processing? Is it expensive to keep them in terms of CPU and RAM being consumed?
Thank you in advance!

How to use variables in Istio VirtualService?
As far as I know, there is no such option in Istio to specify a variable in both the prefix and the host. If it were only the prefix, you could try a regex match instead of a prefix match.
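For illustration only, a minimal sketch of the regex approach, assuming the generated services follow the svc-<digits>-<digits> naming from the question; it also shows the limitation: the match can be generalized, but the destination host cannot reference the matched value.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: subpaths-routes
spec:
  hosts:
  - mainservice.prod.svc.cluster.local
  http:
  - name: "subpath-redirection"
    match:
    - uri:
        # one regex (RE2 syntax) covering every generated service id
        regex: "/bservices/svc-[0-9]+-[0-9]+(/.*)?"
    route:
    - destination:
        # the matched id cannot be reused here; the destination
        # still has to be a single, fixed host
        host: svc-2345-6789.prod.svc.cluster.local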
If you would like to automate it in some way, i.e. define a variable and use it in both the prefix and the host, then you could try to do it with Helm.
There are a few examples of virtual services in Helm charts, and a sketch after the links below:
https://github.com/streamsets/helm-charts/blob/master/incubating/control-hub/templates/istio-gateway-virtualservice.yaml
https://github.com/salesforce/helm-starter-istio/blob/master/templates/virtualService.yaml
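As a rough sketch of the Helm approach, a template could loop over a hypothetical services list in values.yaml (all names below are made up) and stamp out one VirtualService per entry:

# values.yaml (hypothetical)
# services:
#   - svc-2345-6789
#   - svc-3456-7890

# templates/virtualservices.yaml
{{- range .Values.services }}
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ . }}-route
spec:
  hosts:
  - mainservice.prod.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/bservices/{{ . }}"
    route:
    - destination:
        host: {{ . }}.prod.svc.cluster.local
{{- end }}

Running helm upgrade with an updated list then adds or removes the corresponding VirtualServices.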
how would thousands of VSs impact the performance of request processing?
There is a GitHub issue about that; as @lanceliuu mentioned there:
When we create ~1k virtualservices in a single cluster, the ingress gateway is picking up new virtualservice slowly.
So that might be one of the issues with thousands of Virtual Services.
Is it expensive to keep them in terms of CPU and RAM being consumed?
I would say that would require testing. In the GitHub issue above they mention that there is no mem/CPU pressure on the Istio components, but I can't say how expensive it is.
In theory you could create one big virtual service instead of thousands, but, as mentioned in the documentation, you should rather split large virtual services into multiple resources.
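For comparison, the combined variant would simply stack one http block per backend; a sketch reusing the naming from the question:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: all-subpaths-routes
spec:
  hosts:
  - mainservice.prod.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/bservices/svc-2345-6789"
    route:
    - destination:
        host: svc-2345-6789.prod.svc.cluster.local
  - match:
    - uri:
        prefix: "/bservices/svc-3456-7890"
    route:
    - destination:
        host: svc-3456-7890.prod.svc.cluster.local
  # ...one http block per service, which is exactly what the
  # documentation advises splitting up once it grows large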
Additional resources:
https://engineering.hellofresh.com/everything-we-learned-running-istio-in-production-part-2-ff4c26844bfb
https://istio.io/latest/docs/ops/deployment/performance-and-scalability/
https://perf.dashboard.istio.io/

Related

How to replicate some traffic on Kubernetes and divert it to the Service for investigation

I'm using Istio and I know that I can define weights in a VirtualService to divert traffic to different services.
My question is: how to amplify some of the traffic and direct the amplified traffic to the validation service? This amplified traffic will not go back to the original source but will be closed within the cluster. In other words, it does not bother the user.
I'm not even sure if there is a feature or tool that provides this kind of mechanism, or what it is called, so I'm having trouble finding it.
Thanks.
OP found a solution by themselves, in the comments, hence the CW.
In this scenario, the best solution is to use Istio Mirroring - also called shadowing.
When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority headers appended with -shadow. For example, cluster-1 becomes cluster-1-shadow.
Also, it is important to note that these requests are mirrored as “fire and forget”, which means that the responses are discarded.
To create a mirroring rule, you have to create a VirtualService with the mirror and mirrorPercentage fields.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
    mirrorPercentage:
      value: 100.0
This route rule sends 100% of the traffic to v1. The last stanza specifies that you want to mirror (i.e., also send) 100% of the same traffic to the httpbin:v2 service.
[source]

Can Kubernetes Service control traffic percentage to a given pod?

Is it possible to control the percentage of traffic going to a particular pod with a Kubernetes Service, without changing the number of underlying pods? By default, kube-proxy chooses a backend via a round-robin algorithm.
Yes, it's possible with the extra configuration of a service mesh.
If you are looking to do it with a plain Service, it's hard to do based on percentages, as the default behavior is round-robin.
For example, if you are using the Istio service mesh,
you can route the traffic based on weight:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
Here a subset is essentially a label selector: you run deployments with different labels and do the routing between them based on weight using Istio.
See the example.
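Note that for the weights to take effect, the v1 and v3 subsets have to be declared in a DestinationRule; a minimal sketch, assuming the reviews pods carry a version label (as in the Istio bookinfo example):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1   # selects pods labeled version=v1
  - name: v3
    labels:
      version: v3   # selects pods labeled version=v3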

Can a `ServiceEntry` be applied to only 1 service?

We have a cluster with Istio, but there is one condition I can't figure out how to fulfill.
We need one of the services to have certain restrictions within the mesh as well, and to be able to talk to one external endpoint. Through a Sidecar object, I should be able to set the restrictions internally, but I don't know how to restrict it to one external endpoint.
I can set the external endpoint in the Sidecar object as well, but I have to create a ServiceEntry anyway, in which case all the services can talk to that external endpoint.
It seems that what I need is a ServiceEntry that applies to only one specific service, but this is not possible. Is there any other way to achieve this?
I asked this question on GitHub to the Istio team, and the only way to achieve this is to put the service in a different namespace and make the ServiceEntry apply only to the workloads in that namespace through the exportTo parameter.
The ServiceEntry would look like this:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-demo
spec:
  exportTo:
  - . # with ".", we are telling the ServiceEntry to apply only to the workloads in the same namespace
  hosts:
  - www.google.com
  location: MESH_EXTERNAL
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS

Can Ingress route requests based on ip?

I have been doing well with K8s Ingress so far, but I have a question.
Can Ingress route requests based on IP?
I already know that Ingress routes based on hosts like a.com, b.com to different services, and on URI paths like /a-service/, /b-service/.
However, I'm curious whether Ingress can route by IP. I'd like requests from my office (a certain IP) to be routed to a specific service for tests.
Does it make sense? Any idea for that?
If this is just for testing, I would just whitelist the IP. You can read the docs about nginx ingress annotations:
You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.
Example yaml might look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whitelist
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/24"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders
          servicePort: 80
It also looks like you can do that in Istio (I have not tried it) with the ServiceRole and ServiceRoleBinding kinds, which specify detailed access control requirements. For this you would use the source.ip property. It's explained in Constraints and Properties.
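A rough sketch of what that could look like with the (since-removed) rbac.istio.io/v1alpha1 API; the service name and CIDR below are placeholders, so treat this as an assumption rather than a tested config:

apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: test-access
  namespace: default
spec:
  rules:
  - services: ["myservice.default.svc.cluster.local"]  # hypothetical target service
    methods: ["GET"]
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-office-ip
  namespace: default
spec:
  subjects:
  - properties:
      source.ip: "10.20.0.0/16"  # placeholder office CIDR
  roleRef:
    kind: ServiceRole
    name: test-access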
This is not part of the main Ingress abstraction, as you noted; however, many ingress controllers offer extra features through annotations or secondary CRDs, so in theory it could be added that way. I don't think any do routing like this, though, so in practical terms it's probably not available off the shelf.
As coderanger stated in his answer, Ingress does not have this by default.
I'm not sure IP-based routing is the way to proceed, because how will you hit the actual deployments/services from the office IPs when needed?
I think you can perform routing based on both IP and a header. For example, you can pass a header redirect-to-test: true; if you set it to false, you can still access the production services.
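With ingress-nginx, that header check could be sketched using the canary annotations, which route to a second Ingress only when a given header matches; the host and service names below are placeholders:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-route
  annotations:
    kubernetes.io/ingress.class: nginx
    # mark this Ingress as a canary of the main Ingress for the same host/path
    nginx.ingress.kubernetes.io/canary: "true"
    # only requests carrying redirect-to-test: true go to this backend
    nginx.ingress.kubernetes.io/canary-by-header: "redirect-to-test"
    nginx.ingress.kubernetes.io/canary-by-header-value: "true"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders-test  # hypothetical test service
          servicePort: 80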

ingress-nginx - create one ingress per host? Or combine many hosts into one ingress and reload?

I'm building a service where users can build web apps - these apps will be hosted under a virtual DNS name *.laska.io
For example, if Tom and Jerry both built an app, they'd have it hosted under:
tom.laska.io
jerry.laska.io
Now, suppose I have 1000 users. Should I create one big ingress that looks like this?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: tom.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  - host: jerry.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
...and so forth
I'm worried about downtime - if I have an app with websockets, for example. Also, the file will become huge with 1000 users. Will reloading the ingress be fast enough? And how should I reload it?
A second option in my mind is to simply create one ingress for every web app. My worry about that is, can ingress-nginx handle many ingresses? Or is this an anti-pattern?
Which one is better?
You can create one ingress resource for each web app. If you search the official public charts repo, you'll see that many of the charts define an ingress resource within them. It's normal for each app to define its own ingress resource.
It's worth being clear that an ingress resource is just a definition of a routing rule. (It doesn't add an extra ingress controller or any other extra runtime component.) So it makes a lot of sense for an app to define its own routing rule.
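For instance, a minimal per-app Ingress for one of the users might look like this (reusing the names from the question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tom-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: tom.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80

Each app then owns just its own rule, so adding user 1001 means creating one small new resource rather than editing a giant one.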
The example you've given has all the ingress routing in one resource definition. This approach is easy to grasp and makes a lot of sense when you've got several related applications as then you might want to see their routing configuration together. You can see this also in the fanout ingress example in the kubernetes docs.
I can't see any performance concerns with defining the rules separately in distinct resources. The ingress controller will listen for new rules and update its configuration only when a new rule is created so there shouldn't be a problem with reading the resources. And I'd expect the combined vs separated options to result in the same routing rules being set in the background in nginx.
The most common pattern in the official charts repo is that the chart for each app defines its ingress resource and also exposes it through the values.yaml so that users can choose to enable or customise it as they wish. You can then compose multiple charts together and configure the rules for each in the relevant section of the values.yaml. (Here's an example I've worked on that does this with wildcard dns.) Or you can deploy each app separately under its own helm release.
An ingress resource is just a rule. It's better to split your ingress resources into different files and just reapply the resources that need to change.
I never noticed any downtime when applying a resource.