Which is invoked first Virtual Service or Destinationrule? - kubernetes

I'm confused about VirtualService and DestinationRule: which one is applied first?
Let's say I have the configs below,
DestinationRule:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: movies
  namespace: aio
spec:
  host: movies
  subsets:
  - labels:
      version: v1
    name: version-v1
  - labels:
      version: v2
    name: version-v2
---
VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: movies
  namespace: aio
spec:
  hosts:
  - movies
  http:
  - route:
    - destination:
        host: movies
        subset: version-v1
      weight: 10
    - destination:
        host: movies
        subset: version-v2
      weight: 90
---
I read somewhere that:
A VirtualService defines a set of traffic routing rules to apply when a host is addressed.
A DestinationRule defines policies that apply to traffic intended for a service after routing has occurred.
Does this mean DestinationRules are applied after VirtualServices?
I have a small diagram, is my understanding correct?

Yes.
According to the Istio documentation about DestinationRule:
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred.
And for VirtualService:
A VirtualService defines a set of traffic routing rules to apply when a host is addressed.
There is also a YouTube video, Life of a Packet through Istio, which explains in detail the order of the processing steps applied to a packet going through the Istio mesh.
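To make the order concrete: the VirtualService above first selects a subset (10% to version-v1, 90% to version-v2), and only then are the DestinationRule policies for the selected subset applied. A sketch (the subset-level loadBalancer setting is illustrative, not part of the question):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: movies
  namespace: aio
spec:
  host: movies
  subsets:
  - name: version-v1
    labels:
      version: v1
    trafficPolicy:        # only applied after a route has selected version-v1
      loadBalancer:
        simple: LEAST_CONN
  - name: version-v2
    labels:
      version: v2
```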

Related

Istio Virtual Service is not working very well

I find that the Rewrite feature of my VirtualService is not working. Here are my VirtualService and DestinationRule YAML files:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: leads-http
  namespace: seldon
spec:
  gateways:
  - istio-system/seldon-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /seldon/seldon/leads/
    rewrite:
      uri: /
    route:
    - destination:
        host: leads-leads
        port:
          number: 8000
        subset: leads
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: leads-leads
  namespace: seldon
spec:
  host: leads-leads
  subsets:
  - labels:
      version: leads
    name: leads
  trafficPolicy:
    connectionPool:
      http:
        idleTimeout: 60s
When I send an http request:
curl --location --request POST 'http://localhost/seldon/seldon/leads/v2/models/leads-lgb/versions/v0.1.0/infer'
I find that the istio-proxy service prints 404 not found in the logs:
"POST /seldon/seldon/leads/v2/models/leads-lgb/versions/v0.1.0/infer HTTP/1.1" 404
even though I expect:
POST /v2/models/leads-lgb/versions/v0.1.0/infer HTTP/1.1
I am not sure what's happening. Does anyone have any idea? Thanks!
I think your issue is an incorrectly configured DestinationRule or the service naming convention.
DestinationRule:
These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.
Version specific policies can be specified by defining a named subset and overriding the settings specified at the service level.
Note: Policies specified for subsets will not take effect until a route rule explicitly sends traffic to this subset.
DestinationRule-Subset:
It seems to me that name should go first in the structure; at least I haven't seen examples done otherwise.
So in your case the correct (at least I hope) DestinationRule is:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: leads-leads
  namespace: seldon
spec:
  host: leads-leads
  subsets:
  - name: leads
    labels:
      version: leads
However, if that doesn't help, I encourage you to check this self-resolved question:
Don't you have the same situation with a named service port? I mean, as per Explicit protocol selection, you should give the service port a name of the form
name: <protocol>[-<suffix>]
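A sketch of what that naming can look like for the leads-leads service (the selector label is an assumption; the port number comes from the VirtualService above, which routes to port 8000):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: leads-leads
  namespace: seldon
spec:
  selector:
    app: leads        # assumed pod label
  ports:
  - name: http        # the "http" prefix tells Istio to treat this port as HTTP
    port: 8000
```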

Split traffic between 2 ClusterIP k8s services using a virtual service

I have two Pods running in Kubernetes exposed by ClusterIP services, let's say nginx-1 and nginx-2. I want to create a virtual service nginx-split, which routes 75% of the traffic to nginx-1 and 25% of the traffic to nginx-2. What I understood from the documentation is that I should create a VirtualService definition file:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - nginx-split
  http:
  - route:
    - destination:
        host: nginx-1
      weight: 75
    - destination:
        host: nginx-2
      weight: 25
The VirtualService definition alone is not enough; maybe I should also create a ServiceEntry. The problem is that I don't know how to define a ServiceEntry for nginx-split, since it is just virtual and should not resolve to a (single) IP address.
TRAFFIC SPLITTING: how it works
The traffic splitting is handled by two Istio Objects:
VirtualService - defines a set of traffic routing rules to apply when a host is addressed.
DestinationRule - defines policies that apply to traffic intended for a service after routing has occurred.
We create a VirtualService that lists the different variants with their weights:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-a
spec:
  hosts:
  - service-a
  http:
  - route:
    - destination:
        host: service-a
        subset: v1
      weight: 80
    - destination:
        host: service-a
        subset: v2
      weight: 20
Then the DestinationRule is responsible for defining the destination subsets of the traffic and the traffic policy:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: property-business-service
spec:
  host: property-business-service
  subsets:
  - name: v1
    labels:
      version: "1.0"
  - name: v2
    labels:
      version: "1.1"
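For the two-Service setup in the question (nginx-1 and nginx-2), subsets are not strictly required; the remaining piece is making the virtual host nginx-split resolvable inside the mesh. One option (a sketch, not from the question) is a plain ClusterIP Service named nginx-split, which gives the host a DNS entry and a cluster IP for the sidecar to intercept; once the VirtualService matches the host, its weighted routes take over:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-split
spec:
  selector:
    app: nginx-1   # placeholder backend; the VirtualService overrides the routing
  ports:
  - name: http
    port: 80
```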

Is there a way to proxy calls to an ExternalName service thanks to an Istio VirtualService?

In a project I'm currently working on, I'd like to create a DNS alias for a Kubernetes service located in another namespace. To do so, I created an ExternalName service such as the following:
kind: Service
apiVersion: v1
metadata:
  name: connector
  namespace: test
spec:
  type: ExternalName
  externalName: gateway.eventing.svc.cluster.local
So far, so good. When I request the connector DNS name, I successfully hit the external name, i.e. gateway.eventing.svc.cluster.local.
Now, I would like to add headers to all HTTP requests sent to the connector ExternalName service, so I created an Istio VirtualService to do so:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: connector
  namespace: test
spec:
  hosts:
  - connector
  - connector.test.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: connector
        port:
          number: 80
    # headers config omitted for brevity
The problem is that the VirtualService is never applied: it seems it does not intercept requests made to the connector DNS name or to its fully qualified name, i.e. connector.test.svc.cluster.local.
After reading the documentation, I figured that this happens because the Istio VirtualService checks the service registry, and the ExternalName service is not part of it; it's just some kind of DNS alias.
I therefore attempted to create an Istio ServiceEntry such as the following:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: connector
  namespace: test
spec:
  hosts:
  - connector
  endpoints:
  - address: gateway.eventing.svc.cluster.local
  ports:
  - number: 80
    name: http
    protocol: HTTP
  location: MESH_INTERNAL
  resolution: DNS
It works: in Kiali I can see that requests to connector now hit the connector ServiceEntry instead of the PassthroughCluster, which is, to my understanding, what should be happening.
However, my connector VirtualService is still not called. Why is that? Is there a way to make it happen?
If not, what can I do to alias, in a given namespace (i.e. test), a service located in another one (i.e. eventing), and proxy HTTP requests through an Istio VirtualService?
Thanks in advance for your help!
EDIT:
Sidecar injection is enabled namespace-wide (i.e. test)
So, it turns out that all that was missing to make it work was to both specify and name the port on the ExternalName service.
Here's the updated yaml:
kind: Service
apiVersion: v1
metadata:
  name: connector
  namespace: test
spec:
  type: ExternalName
  externalName: gateway.eventing.svc.cluster.local
  ports:
  - name: http
    port: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: connector
  namespace: test
spec:
  hosts:
  - connector
  - connector.test.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: connector
        port:
          number: 80
    # headers config omitted for brevity
Naming the port is absolutely required, as it lets Istio know which application protocol to use, as defined by the VirtualService.
There is no need to add a ServiceEntry; it works with the BYON host specified in the VirtualService.
Note that the answer supplied by @Christoph Raab works as well, but is unfortunately too verbose to be marked as my preferred answer.
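For reference, a sketch of what a request-header stanza on such a route can look like (the header name and value here are made up for illustration); a VirtualService route can add request headers via headers.request.add:

```yaml
http:
- match:
  - uri:
      prefix: /
  route:
  - destination:
      host: connector
      port:
        number: 80
  headers:
    request:
      add:
        x-custom-header: some-value   # illustrative name and value
```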
Update
I didn't see that the ports list was missing, and I'm not sure how you could apply the YAML, because the list should be required.
Anyway, I'll leave my answer; maybe it will help someone else in the future.
Original Post (slightly modified)
The docs are not clear, but I think the header manipulation could be done by the receiving sidecar. As far as I understand your setup, the resource behind the ServiceEntry does not have a sidecar, so if that were true, the manipulation wouldn't work.
In order to add custom headers you can use an EnvoyFilter of type Lua that is applied to the sender's sidecar and manipulates the traffic on the fly.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-custom-header-filter
  namespace: test
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
            subFilter:
              name: envoy.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
          inlineCode: |
            function envoy_on_request(request_handle)
              request_handle:logInfo("adding custom headers...")
              request_handle:headers():add("X-User-Header", "worked")
            end
This filter is applied on the outbound path by every sidecar in the namespace test to requests to the service entry connector, and adds a custom header before any other action is taken.

Istio traffic routing based on custom headers

I'm trying to implement some sort of traffic routing using Istio in a Kubernetes cluster.
The situation is the following:
(customer service) => (preference service) => (recommendation service), where the recommendation service has two versions: v1 and v2.
I want to use a custom header, for example X-Svc-Env, in an Istio VirtualService, and to select through this header the version of the recommendation service that I want to hit.
The configuration for the VirtualService is the following one:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: online
spec:
  hosts:
  - recommendation
  http:
  - match:
    - headers:
        x-svc-env:
          regex: v2
    route:
    - destination:
        host: recommendation
        subset: version-v2
  - route:
    - destination:
        host: recommendation
        subset: version-v1
Also, the DestinationRule that it's being used is the following one:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
  namespace: online
spec:
  host: recommendation
  subsets:
  - labels:
      version: v1
    name: version-v1
  - labels:
      version: v2
    name: version-v2
Well... this is not working because, somehow, my custom header is not propagated through the Envoy proxy (I suppose).
I should mention that if I use a well-known HTTP header, e.g. baggage-user-agent (the User-Agent baggage header from the OpenTracing specification), everything works just fine.
Tx!

Granular policy over Istio egress traffic

I have a Kubernetes cluster with Istio installed. I have two pods, for example sleep1 and sleep2 (containers with curl installed). I want to configure Istio to permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com.
So, I created ServiceEntry:
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: google
spec:
  hosts:
  - www.google.com
  - google.com
  ports:
  - name: http-port
    protocol: HTTP
    number: 80
  resolution: DNS
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http-port
      protocol: HTTP
    hosts:
    - "*"
and two VirtualServices (mesh -> egress, egress -> google):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mesh-to-egress
spec:
  hosts:
  - www.google.com
  - google.com
  gateways:
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: egress-to-google-int
spec:
  hosts:
  - www.google.com
  - google.com
  gateways:
  - istio-egressgateway
  http:
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: google.com
        port:
          number: 80
      weight: 100
As a result, I can curl Google from both pods.
And the question again: can I permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com? I know this is possible with a Kubernetes NetworkPolicy and with black/white lists (https://istio.io/docs/tasks/policy-enforcement/denial-and-list/), but both methods forbid (or permit) traffic only to specific IPs. Or maybe I missed something?
You can create different service accounts for sleep1 and sleep2, then create an RBAC policy to limit access to the istio-egressgateway service, so sleep2 will not be able to send any egress traffic through the egress gateway. This should be combined with forbidding any egress traffic from the cluster that does not originate from the egress gateway. See https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations.
If you want to allow sleep2 to access other services, but not www.google.com, you can use Mixer rules and handlers; see this blog post. It shows how to allow a certain URL path to a specific service account.
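A sketch of the service-account approach with the old rbac.istio.io API (the names, the namespace, and the sleep1 service account are assumptions; this API was later replaced by AuthorizationPolicy):

```yaml
# Assumed: sleep1 runs under service account "sleep1" in namespace "default";
# only this binding grants access to the egress gateway, so sleep2 is denied.
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: egressgateway-access
  namespace: istio-system
spec:
  rules:
  - services: ["istio-egressgateway.istio-system.svc.cluster.local"]
    methods: ["*"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: egressgateway-access-sleep1
  namespace: istio-system
spec:
  subjects:
  - user: "cluster.local/ns/default/sa/sleep1"
  roleRef:
    kind: ServiceRole
    name: "egressgateway-access"
```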
I think you're probably on the right track with the denial option.
It is also not limited to IPs, as shown by the attribute-based examples for Simple Denial and Attribute-based Denial.
So, for example, we can write a simple denial rule for sleep2 -> www.google.com:
apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
  name: denySleep2Google
spec:
  compiledAdapter: denier
  params:
    status:
      code: 7
      message: Not allowed
---
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: denySleep2GoogleRequest
spec:
  compiledTemplate: checknothing
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: denySleep2
spec:
  match: destination.service.host == "www.google.com" && source.labels["app"]=="sleep2"
  actions:
  - handler: denySleep2Google
    instances: [ denySleep2GoogleRequest ]
Please check and see if this helps.
Also, the match field in the rule entry is based on the Istio expression language over attributes. Some vocabulary can be found in this doc.