How can istio mesh-virtual-service manage traffic from ingress-virtual-service? - kubernetes

I am defining canary routes in a mesh virtual service and wondering whether I can make them apply to ingress traffic (handled by an ingress virtual service) as well. I tried something like the following, but it does not work: all traffic from the ingress goes to the non-canary version.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app
  namespace: test-ns
spec:
  gateways:
    - mesh
  hosts:
    - test-deployment-app.test-ns.svc.cluster.local
  http:
    - name: canary
      match:
        - headers:
            x-canary:
              exact: "true"
        - port: 8080
      headers:
        response:
          set:
            x-canary: "true"
      route:
        - destination:
            host: test-deployment-app-canary.test-ns.svc.cluster.local
            port:
              number: 8080
          weight: 100
    - name: stable
      route:
        - destination:
            host: test-deployment-app.test-ns.svc.cluster.local
            port:
              number: 8080
          weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app-internal
  namespace: test-ns
spec:
  gateways:
    - istio-system/default-gateway
  hosts:
    - myapp.dev.bla
  http:
    - name: default
      route:
        - destination:
            host: test-deployment-app.test-ns.svc.cluster.local
            port:
              number: 8080
          weight: 100
So I am expecting the x-canary: true response header when I call myapp.dev.bla, but I don't see it.

Well, the answer is only partially in the link you included. I think the essential thing to realize when working with Istio is what the Istio service mesh actually is. The service mesh consists of every pod with an Istio envoy-proxy sidecar plus all the gateways (a gateway is a standalone envoy-proxy). They all know about each other through istiod, so they can cooperate.
Any pod without an Istio sidecar (including ingress pods or, for example, kube-system pods) in your k8s cluster knows nothing about Istio or the service mesh. If such a pod wants to send traffic into the service mesh (so that traffic-management rules like yours are applied), it must send it through an Istio gateway. A Gateway is an object that creates a standard Deployment + Service; the pods in that Deployment run a standalone envoy-proxy container.
The Gateway object is a concept very similar to a k8s Ingress, but it doesn't necessarily have to listen on a NodePort; you can also use it as an 'internal' gateway. A gateway serves as the entry point into your service mesh, for either external or internal traffic.
If you're using e.g. Nginx as the ingress solution, you must reconfigure the Ingress rule to send traffic to one of the gateways instead of to the target service, most likely to your mesh gateway. That gateway is nothing more than a k8s Service in the istio-gateway or istio-system namespace (see the sketch below).
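As a rough sketch of that reconfiguration, assuming the gateway's Service is the default istio-ingressgateway in istio-system (an Ingress backend must live in the same namespace as the Ingress, so the rule is placed there; names and ports may differ in your installation):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-to-istio   # hypothetical name
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: myapp.dev.bla
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: istio-ingressgateway   # the gateway's k8s Service
                port:
                  number: 80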
Alternatively, you can configure the Istio gateway as the 'new' ingress. I'm not sure whether a default Istio gateway listens on a NodePort, so you need to check (again in the istio-gateway or istio-system namespace). You can also create a new Gateway just for your application and attach the VirtualService to that new gateway as well.
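Once ingress traffic enters through an Istio gateway, one way to keep a single set of canary rules is to bind the same VirtualService to both the mesh and that gateway, and to list both the internal and the external hostname. A minimal sketch, assuming the existing istio-system/default-gateway remains the entry point and its Gateway resource accepts the myapp.dev.bla host:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app
  namespace: test-ns
spec:
  gateways:
    - mesh                           # sidecar-to-sidecar traffic
    - istio-system/default-gateway   # traffic entering through the Istio gateway
  hosts:
    - test-deployment-app.test-ns.svc.cluster.local   # internal hostname
    - myapp.dev.bla                                   # external hostname
  http:
    - name: canary
      match:
        - headers:
            x-canary:
              exact: "true"
      headers:
        response:
          set:
            x-canary: "true"
      route:
        - destination:
            host: test-deployment-app-canary.test-ns.svc.cluster.local
            port:
              number: 8080
    - name: stable
      route:
        - destination:
            host: test-deployment-app.test-ns.svc.cluster.local
            port:
              number: 8080
If you go this way, the separate test-deployment-app-internal VirtualService bound only to the gateway becomes redundant.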

Related

Use istio ServiceEntry resource to send traffic to internal kubernetes FQDN over external connection

CONTEXT:
I'm in the middle of planning a migration of Kubernetes services from one cluster to another. The clusters are in separate GCP projects but need to be able to communicate with each other until all apps are moved across. The projects have VPC peering enabled to allow internal traffic to an internal load balancer (tested and confirmed that's fine).
We run Anthos service mesh (v1.12) in GKE clusters.
PROBLEM:
I need to find a way to do the following:
PodA needs to be migrated, and references a hostname in its ENV which is simply 'serviceA'
Running in the same cluster, this resolves fine, as the pod resolves 'serviceA' to 'serviceA.default.svc.cluster.local' (the internal Kubernetes FQDN).
However, when I run PodA on the new cluster, I need serviceA's hostname to resolve back to the internal load balancer on the other cluster, and not within its local cluster (and namespace), seeing as serviceA is still running on the old cluster.
I'm using an istio ServiceEntry resource to try and achieve this, as follows:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
    - serviceA.default.svc.cluster.local
  location: MESH_EXTERNAL
  ports:
    - number: 50051
      name: grpc
      protocol: GRPC
  resolution: STATIC
  endpoints:
    - address: 'XX.XX.XX.XX' # IP Redacted
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: resources
  namespace: default
spec:
  hosts:
    - 'serviceA.default.svc.cluster.local'
  gateways:
    - mesh
  http:
    - timeout: 5s
      route:
        - destination:
            host: serviceA.default.svc.cluster.local
This doesn't appear to work and I'm getting Error: 14 UNAVAILABLE: upstream request timeout errors on PodA running in the new cluster.
I can confirm that running telnet to the hostname from another pod on the mesh appears to work (i.e. I don't get connection timeout or connection refused).
Is there a limitation on what you can use in the hosts field of a ServiceEntry? Does it have to be a .com or .org address?
The only way I've got this to work properly is to use a hostAlias in PodA to add a hosts-file entry for the hostname, but I really want to avoid doing this, as it means making the same change in lots of files; I would rather use Istio's ServiceEntry to achieve this.
Any ideas/comments appreciated, thanks.
Fortunately I came across someone with a similar (but not identical) issue, and the answer in this stackoverflow post gave me the outline of what kubernetes (and istio) resources I needed to create.
I was heading in the right direction, just needed to really understand how istio uses Virtual Services and Service Entries.
The end result was this:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  namespace: default
spec:
  type: ExternalName
  externalName: serviceA.example.com
  ports:
    - name: grpc
      protocol: TCP
      port: 50051
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
    - serviceA.example.com
  location: MESH_EXTERNAL
  ports:
    - number: 50051
      name: grpc
      protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
    - serviceA.default.svc.cluster.local
  http:
    - timeout: 5s
      route:
        - destination:
            host: serviceA.default.svc.cluster.local
      rewrite:
        authority: serviceA.example.com
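If it helps with debugging, the configuration that the sidecar actually received can be inspected with istioctl; a sketch, with the pod name and namespace as placeholders:
# Is there an Envoy cluster for the external host defined by the ServiceEntry?
istioctl proxy-config clusters <podA-name> -n default | grep serviceA.example.com
# Which routes reference the internal hostname?
istioctl proxy-config routes <podA-name> -n default | grep serviceA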

GKE Ingress configuration for HTTPS-enabled Applications leads to failed_to_connect_to_backend

I have serious problems with the configuration of Ingress on a Google Kubernetes Engine cluster for an application which expects traffic over TLS. I have configured a FrontendConfig, a BackendConfig and defined the proper annotations in the Service and Ingress YAML structures.
The Google Cloud Console reports that the backend is healthy, but if I connect to the given address it returns 502, and a failed_to_connect_to_backend error appears in the Ingress logs.
These are my configurations:
FrontendConfig.yaml:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontendconfig
  namespace: my-namespace
spec:
  redirectToHttps:
    enabled: false
  sslPolicy: my-ssl-policy
BackendConfig.yaml:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 5
    healthyThreshold: 3
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /health
    # The containerPort of the application in Deployment.yaml (also used for liveness and readiness probes)
    port: 8001
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
    # Frontend Configuration Name
    networking.gke.io/v1beta1.FrontendConfig: "my-frontendconfig"
    # Static IP Address Rule Name (gcloud compute addresses create epa2-ingress --global)
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
spec:
  tls:
    - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    # Specify the type of traffic accepted
    cloud.google.com/app-protocols: '{"service-port":"HTTPS"}'
    # Specify the BackendConfig to be used for the exposed ports
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    # Enables the Cloud Native Load Balancer
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
    - protocol: TCP
      name: service-port
      port: 443
      targetPort: app-port # this port expects TLS traffic, no http plain connections
The Deployment.yaml is omitted for brevity, but it defines a liveness and readiness Probe on another port, the one defined in the BackendConfig.yaml.
The interesting thing is, if I also expose this healthcheck port through the Service.yaml (mapped to port 80), point the default backend to port 80, and simply define a rule with path /* leading to port 443, everything seems to work just fine. But I don't want to expose the healthcheck port outside my cluster, since I also have some diagnostics information there.
Question: How can I be sure that when I connect to the Ingress endpoint with https://MY_INGRESS_IP/, the traffic is routed as-is to the HTTPS port of the service/application, without getting the 502 error? Where am I failing to configure the Ingress?
There are a few elements to your question; I'll try to answer them here.
I don't want to expose the healthcheck port outside my cluster
The health check endpoint is technically not exposed outside the cluster; it is exposed inside the Google backbone so that the Google load balancers (configured via Ingress) can reach it. You can verify this by running curl against https://INGRESS_IP/healthz: it will not work.
The traffic is routed exactly as it is to the HTTPS port of the service/application
The reason why 443 in your Service definition doesn't work but 80 does is that when you expose the Service on port 443, the load balancer will fail to connect to a backend that doesn't present a proper certificate; your backend must also be configured to present a certificate to the load balancer so the traffic between them can be encrypted. The secretName configured on the Ingress is the certificate used by clients to connect to the load balancer. The Google HTTP load balancer terminates the SSL connection and initiates a new connection to the backend using whatever port you specify in the Ingress. If that port is 443 but the backend is not configured with SSL certificates, that connection will fail.
Overall, you don't need to encrypt traffic between the load balancers and the backends; it's doable but not needed, as Google encrypts that traffic at the network level anyway.
Actually, I solved it by attaching a managed certificate to the Ingress. It "magically" worked without any other change, using a Service of type ClusterIP.
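For reference, a minimal sketch of that setup: a GKE ManagedCertificate resource plus the Ingress annotation that references it. The certificate name and domain below are placeholders, and the domain must point at the Ingress IP before the certificate can be provisioned:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
  namespace: my-namespace
spec:
  domains:
    - myapp.example.com   # placeholder domain
---
# referenced from the existing Ingress via an extra annotation:
# metadata:
#   annotations:
#     networking.gke.io/managed-certificates: "my-managed-cert"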

Nginx ingress sends private IP for X-Real-IP to services

I have created a Nginx Ingress and Service with the following code:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: ClusterIP
  selector:
    name: my-app
  ports:
    - port: 8000
      targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    name: myingress
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: my-service
                port:
                  number: 8000
Nginx ingress installed with:
helm install ingress-nginx ingress-nginx/ingress-nginx.
I have also enabled proxy protocol for the ELB, but in the nginx logs I don't see the real client IP in the X-Forwarded-For and X-Real-IP headers. These are the final headers I see in my app logs:
X-Forwarded-For:[192.168.21.145] X-Forwarded-Port:[80] X-Forwarded-Proto:[http] X-Forwarded-Scheme:[http] X-Real-Ip:[192.168.21.145] X-Request-Id:[1bc14871ebc2bfbd9b2b6f31] X-Scheme:[http]
How do I get the real client IP instead of the ingress pod IP? Also, is there a way to know which headers the ELB is sending to the ingress?
One solution is to use externalTrafficPolicy: Local (see documentation).
In fact, according to the kubernetes documentation:
Due to the implementation of this feature, the source IP seen in the target container is not the original source IP of the client.
...
service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.
If you want to follow this route, update your nginx ingress controller Service and add the externalTrafficPolicy field:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  ...
  externalTrafficPolicy: Local
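Since the controller in the question was installed with Helm, the same change can also be expressed as chart values instead of editing the Service by hand. A sketch, assuming the ingress-nginx chart's controller.service.externalTrafficPolicy value:
# values.yaml
controller:
  service:
    externalTrafficPolicy: Local
Then apply it with: helm upgrade ingress-nginx ingress-nginx/ingress-nginx -f values.yaml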
A possible alternative could be to use the Proxy Protocol (see documentation).
The proxy protocol should be enabled in the ConfigMap for the ingress-controller as well as on the ELB.
For L4, use use-proxy-protocol; for L7, use use-forwarded-headers.
# configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
Just expanding on @strongjz's answer.
By default, the load balancer created in AWS for a Service of type LoadBalancer is a Classic Load Balancer operating on Layer 4, i.e., proxying at the TCP level.
For this scenario, the best way to preserve the real client IP is to use the Proxy Protocol, because it is capable of carrying the original source address at the TCP level.
To do this, you should enable the Proxy Protocol both on the Load Balancer and on Nginx-ingress.
Those values should do it for a Helm installation of nginx-ingress:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
The service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" annotation will tell the aws-load-balancer-controller to create your LoadBalancer with Proxy Protocol enabled. I'm not sure what happens if you add it to a pre-existing Ingress-nginx, but it should work too.
The use-proxy-protocol and real-ip-header are options passed to Nginx, to also enable Proxy Protocol there.
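To check that those options actually reached nginx, the rendered configuration can be inspected inside the controller pod. A hedged sketch, assuming the default deployment name created by the ingress-nginx Helm release from the question:
kubectl exec deploy/ingress-nginx-controller -- grep proxy_protocol /etc/nginx/nginx.conf
If proxy protocol is active, you should see listen directives with proxy_protocol and a real_ip_header proxy_protocol line.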
Reference:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#proxy-protocol-v2
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol

What is the difference between Istio VirtualService and Kubernetes Service?

As I understand it, an Istio VirtualService is a kind of abstraction that adds an interface on top of the actual implementation, such as a Service in Kubernetes or something similar in Consul.
When using Kubernetes as the underlying platform for Istio, is there any difference between an Istio VirtualService and a Kubernetes Service, or are they the same?
Kubernetes service
A Kubernetes Service manages a pod's networking. It specifies whether your pods are exposed internally (ClusterIP), externally (NodePort or LoadBalancer), or as a CNAME of another DNS entry (ExternalName).
As an example this foo-service will expose the pods with label app: foo. Any requests sent to the node on port 30007 will be forwarded to the pod on port 80.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: NodePort
  selector:
    app: foo
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007
Istio virtualservice
An Istio VirtualService sits one level above a Kubernetes Service. It can be used to apply traffic routing, fault injection, retries, and many other configurations to services.
As an example, this foo-retry-virtualservice will retry failed requests to foo 3 times, with a 2s timeout per attempt.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-retry-virtualservice
spec:
  hosts:
    - foo
  http:
    - route:
        - destination:
            host: foo
      retries:
        attempts: 3
        perTryTimeout: 2s
Another example: this foo-delay-virtualservice will apply a 5s delay to 0.1% of requests to foo.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-delay-virtualservice
spec:
  hosts:
    - foo
  http:
    - fault:
        delay:
          percentage:
            value: 0.1
          fixedDelay: 5s
      route:
        - destination:
            host: foo
Ref
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
https://istio.io/latest/docs/reference/config/networking/virtual-service/
https://istio.io/latest/docs/concepts/traffic-management/#virtual-services
Istio's VirtualServices provide, like every Istio extension, additional features such as external traffic routing/management (pod-to-external communication, external HTTPS communication, routing, URL rewriting, and so on).
Take a look at this doc for more details: https://istio.io/docs/reference/config/networking/virtual-service
Both can be useful, as you still need "classic" Services to manage ingress traffic or service-to-service communication.
Virtual Service:
It defines a set of traffic routing rules to apply to a Kubernetes service, or to a subset of a service, based on matching criteria. It is somewhat similar to the Kubernetes Ingress object, and it plays a key role in making Istio's traffic management flexible and powerful.
Kubernetes Service:
It is a logical set of pods, defined as an abstraction on top of those pods, that provides a single DNS name and IP.
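For completeness, a minimal sketch of the plain Kubernetes Service that the foo VirtualServices above could route to; the selector label and port numbers are assumptions, not something from the original examples:
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: ClusterIP
  selector:
    app: foo
  ports:
    - port: 80
      targetPort: 8080   # assumed container port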

Istio Egresses with Kubernetes Services

(Using Istio 0.5.1, kubectl 1.9.1/1.9.0 for client/server, minikube 0.25.0)
I'm trying to get Istio EgressRules to work with Kubernetes Services, but having some trouble.
I tried to set up EgressRules 3 ways:
1. An ExternalName service which points to another domain (like www.google.com)
2. A Service with no selector, but an associated Endpoints object (for services that have an IP address but no DNS name)
3. (for comparison) No Kubernetes service, just an EgressRule
I figured I could use the FQDN of the kubernetes service as the HTTP-based EgressRule destination service (like ext-service.default.svc.cluster.local), and this is what I attempted for both an ExternalName service as well as a Service with no selectors but an associated Endpoints object.
For the former, I created the following yaml file:
kind: Service
apiVersion: v1
metadata:
  name: ext-service
spec:
  type: ExternalName
  externalName: www.google.com
---
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: ext-egress-rule
spec:
  destination:
    service: ext-service.default.svc.cluster.local
  ports:
    - port: 443
      protocol: https
For the latter, I created this yaml file (I just pinged google and grabbed the IP address):
kind: Endpoints
apiVersion: v1
metadata:
  name: ext-service
subsets:
  - addresses:
      - ip: 216.58.198.78
    ports:
      - port: 443
---
kind: Service
apiVersion: v1
metadata:
  name: ext-service
spec:
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
---
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: ext-service-egress-rule
spec:
  destination:
    service: ext-service.default.svc.cluster.local
  ports:
    - port: 443
      protocol: https
In both cases, in the application code, I access:
http://ext-service.default.svc.cluster.local:443
My assumption is that the traffic will flow like:
[[ app -> envoy proxy -> (tls origination) -> kubernetes service ]] -> external service
where [[ ... ]] is the boundary of the service mesh (and also the Kubernetes cluster)
Results:
The ExternalName Service almost worked as expected, but it brought me to Google's 404 page (and sometimes the response just seemed empty; I'm not sure how to reproduce one or the other specifically).
The Service with the Endpoint object did not work, instead printing this message (when making the request via Golang, but I don't think that matters):
Get http://ext-service.default.svc.cluster.local:443: EOF
This also sometimes gives an empty response.
I'd like to use Kubernetes services (even though it's for external traffic) for a few reasons:
You can't use an IP address for the EgressRule's destination service. From Egress Rules configuration: "The destination of an egress rule ... can be either a fully qualified or wildcard domain name".
For external services that don't have a domain name (some on-prem legacy/monolith service without a DNS name), I'd like the application to be able to access them not by IP address but by a kube-dns (or Istio-related similar) name.
(related to previous) I like the additional layer of abstraction that a Kubernetes service provides, so I can change the underlying destination without changing the EgressRule (unless I'm mistaken and this isn't the right way to architect this). Is the EgressRule meant to replace Kubernetes services for external traffic entirely and without creating additional Kubernetes services?
Using https:// in the app code isn't an option because then the request would have to disable TLS verification, since the kube-dns name doesn't match any name on the certificate. It also wouldn't be observable.
If I use the following EgressRule (without any Kubernetes Services), accessing Google via http://www.google.com:443 works fine, getting the exact html representation that I expect:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-egress-rule
spec:
  destination:
    service: www.google.com
  ports:
    - port: 443
      protocol: https
I saw there's a TCP EgressRule, but I would rather not have to specify rules for each block of IPs. From TCP Egress: "In TCP egress rules as opposed to HTTP-based egress rules, the destinations are specified by IPs or by blocks of IPs in CIDR notation.".
Also, I would still like the HTTP-based observability that comes from L7 instead of L4, so I'd prefer an HTTP-based egress. (With TCP Egresses, "The HTTPS traffic originated by the application will be treated by Istio as opaque TCP").
Any help getting a Kubernetes service as "destination service" of an EgressRule (or help understanding why this isn't necessary if that's the case) is appreciated. Thanks!
The solution is:
1. Define a Kubernetes ExternalName service that points to www.google.com
2. Do not define any EgressRules
3. Create a RouteRule to set the Host header.
In your case, define an ExternalName service with the port and the protocol:
kind: Service
apiVersion: v1
metadata:
  name: ext-service
spec:
  type: ExternalName
  externalName: www.google.com
  ports:
    - port: 80
      # important to set protocol name
      name: http
---
Define an HTTP Rewrite Route Rule to set the Host header:
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: externalgoogle-rewrite-rule
  #namespace: default
spec:
  destination:
    name: ext-service
  rewrite:
    authority: www.google.com
---
Then access it with curl, for example: curl ext-service
Without the RouteRule, the request arrives at google.com with the Host header set to ext-service. The web server does not know where to forward such a request, since google.com does not have such a virtual host. This is what you experienced:
it brought me to Google's 404 page