Istio Egresses with Kubernetes Services

(Using Istio 0.5.1, kubectl 1.9.1/1.9.0 for client/server, minikube 0.25.0)
I'm trying to get Istio EgressRules to work with Kubernetes Services, but having some trouble.
I tried to set up EgressRules 3 ways:
1. An ExternalName service which points to another domain (like www.google.com)
2. A Service with no selector, but an associated Endpoints object (for services that have an IP address but no DNS name)
3. (for comparison) No Kubernetes service, just an EgressRule
I figured I could use the FQDN of the kubernetes service as the HTTP-based EgressRule destination service (like ext-service.default.svc.cluster.local), and this is what I attempted for both an ExternalName service as well as a Service with no selectors but an associated Endpoints object.
For the former, I created the following yaml file:
kind: Service
apiVersion: v1
metadata:
  name: ext-service
spec:
  type: ExternalName
  externalName: www.google.com
---
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: ext-egress-rule
spec:
  destination:
    service: ext-service.default.svc.cluster.local
  ports:
    - port: 443
      protocol: https
For the latter, I created this yaml file (I just pinged google and grabbed the IP address):
kind: Endpoints
apiVersion: v1
metadata:
  name: ext-service
subsets:
  - addresses:
      - ip: 216.58.198.78
    ports:
      - port: 443
---
kind: Service
apiVersion: v1
metadata:
  name: ext-service
spec:
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
---
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: ext-service-egress-rule
spec:
  destination:
    service: ext-service.default.svc.cluster.local
  ports:
    - port: 443
      protocol: https
In both cases, in the application code, I access:
http://ext-service.default.svc.cluster.local:443
My assumption is that the traffic will flow like:
[[ app -> envoy proxy -> (tls origination) -> kubernetes service ]] -> external service
where [[ ... ]] is the boundary of the service mesh (and also the Kubernetes cluster)
Results:
The ExternalName Service almost worked as expected, but it brought me to Google's 404 page (and sometimes the response just seemed empty, not sure how to replicate one or the other specifically)
The Service with the Endpoint object did not work, instead printing this message (when making the request via Golang, but I don't think that matters):
Get http://ext-service.default.svc.cluster.local:443: EOF
This also sometimes gives an empty response.
I'd like to use Kubernetes services (even though it's for external traffic) for a few reasons:
1. You can't use an IP address for the EgressRule's destination service. From Egress Rules configuration: "The destination of an egress rule ... can be either a fully qualified or wildcard domain name".
2. For external services that don't have a domain name (some on-prem legacy/monolith service without a DNS name), I'd like the application to be able to access them not by IP address but by a kube-dns (or Istio-related similar) name.
3. (related to the previous) I like the additional layer of abstraction that a Kubernetes service provides, so I can change the underlying destination without changing the EgressRule (unless I'm mistaken and this isn't the right way to architect this). Is the EgressRule meant to replace Kubernetes services for external traffic entirely, without creating additional Kubernetes services?
4. Using https:// in the app code isn't an option because then the request would have to disable TLS verification, since the kube-dns name doesn't match any name on the certificate. It also wouldn't be observable.
If I use the following EgressRule (without any Kubernetes Services), accessing Google via http://www.google.com:443 works fine, getting the exact html representation that I expect:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-egress-rule
spec:
  destination:
    service: www.google.com
  ports:
    - port: 443
      protocol: https
I saw there's a TCP EgressRule, but I would rather not have to specify rules for each block of IPs. From TCP Egress: "In TCP egress rules as opposed to HTTP-based egress rules, the destinations are specified by IPs or by blocks of IPs in CIDR notation.".
Also, I would still like the HTTP-based observability that comes from L7 instead of L4, so I'd prefer an HTTP-based egress. (With TCP Egresses, "The HTTPS traffic originated by the application will be treated by Istio as opaque TCP").
Any help getting a Kubernetes service as "destination service" of an EgressRule (or help understanding why this isn't necessary if that's the case) is appreciated. Thanks!

The solution is:
1. Define a Kubernetes ExternalName service to point to www.google.com
2. Do not define any EgressRules
3. Create a RouteRule to set the Host header.
In your case, define an ExternalName service with the port and the protocol:
kind: Service
apiVersion: v1
metadata:
  name: ext-service
spec:
  type: ExternalName
  externalName: www.google.com
  ports:
    - port: 80
      # important to set protocol name
      name: http
---
Define an HTTP Rewrite Route Rule to set the Host header:
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: externalgoogle-rewrite-rule
  #namespace: default
spec:
  destination:
    name: ext-service
  rewrite:
    authority: www.google.com
---
Then access it with curl, for example: curl ext-service
Without the Route Rule, the request will arrive at google.com with the Host header set to ext-service. The web server does not know where to forward such a request, since google.com does not have such a virtual host. This is what you experienced:
it brought me to Google's 404 page
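To see the Host-header effect directly, here is a minimal sketch (assuming the ext-service Service and the rewrite RouteRule above are applied in the default namespace, and curl is run from a pod inside the mesh):

# Without the rewrite rule, www.google.com receives "Host: ext-service" and has no
# matching virtual host, so it answers with its 404 page; this reproduces that:
curl -v -H "Host: ext-service" http://www.google.com/

# With the rewrite rule applied, a plain request to the service name is forwarded
# with the authority rewritten to www.google.com, so the expected page comes back:
curl -v http://ext-service/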

Related

Use istio ServiceEntry resource to send traffic to internal kubernetes FQDN over external connection

CONTEXT:
I'm in the middle of planning a migration of Kubernetes services from one cluster to another. The clusters are in separate GCP projects but need to be able to communicate with each other until all apps are moved across. The projects have VPC peering enabled to allow internal traffic to an internal load balancer (tested and confirmed that's fine).
We run Anthos service mesh (v1.12) in GKE clusters.
PROBLEM:
I need to find a way to do the following:
PodA needs to be migrated, and references a hostname in its ENV which is simply 'serviceA'.
Running in the same cluster this resolves fine, as the pod resolves 'serviceA' to 'serviceA.default.svc.cluster.local' (the internal Kubernetes FQDN).
However, when I run PodA on the new cluster, I need serviceA's hostname to actually resolve back to the internal load balancer on the other cluster, and not on its local cluster (and namespace), since serviceA is still running on the old cluster.
I'm using an istio ServiceEntry resource to try and achieve this, as follows:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
    - serviceA.default.svc.cluster.local
  location: MESH_EXTERNAL
  ports:
    - number: 50051
      name: grpc
      protocol: GRPC
  resolution: STATIC
  endpoints:
    - address: 'XX.XX.XX.XX' # IP Redacted
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: resources
  namespace: default
spec:
  hosts:
    - 'serviceA.default.svc.cluster.local'
  gateways:
    - mesh
  http:
    - timeout: 5s
      route:
        - destination:
            host: serviceA.default.svc.cluster.local
This doesn't appear to work and I'm getting Error: 14 UNAVAILABLE: upstream request timeout errors on PodA running in the new cluster.
I can confirm that running telnet to the hostname from another pod on the mesh appears to work (i.e. I don't get a connection timeout or connection refused).
Is there a limitation on what you can use in the hosts on a serviceentry? Does it have to be a .com or .org address?
The only way I've got this to work properly is to use a hostAlias in PodA to add a hosts-file entry for the hostname, but I really want to avoid doing this as it means making the same change in lots of files; I would rather use Istio's ServiceEntry to achieve this.
Any ideas/comments appreciated, thanks.
Fortunately I came across someone with a similar (but not identical) issue, and the answer in this stackoverflow post gave me the outline of what kubernetes (and istio) resources I needed to create.
I was heading in the right direction, just needed to really understand how istio uses Virtual Services and Service Entries.
The end result was this:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  namespace: default
spec:
  type: ExternalName
  externalName: serviceA.example.com
  ports:
    - name: grpc
      protocol: TCP
      port: 50051
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
    - serviceA.example.com
  location: MESH_EXTERNAL
  ports:
    - number: 50051
      name: grpc
      protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
    - serviceA.default.svc.cluster.local
  http:
    - timeout: 5s
      route:
        - destination:
            host: serviceA.default.svc.cluster.local
      rewrite:
        authority: serviceA.example.com

GKE Ingress configuration for HTTPS-enabled Applications leads to failed_to_connect_to_backend

I have serious problems with the configuration of Ingress on a Google Kubernetes Engine cluster for an application which expects traffic over TLS. I have configured a FrontendConfig, a BackendConfig and defined the proper annotations in the Service and Ingress YAML structures.
The Google Cloud Console reports that the backend is healthy, but if I connect to the given address it returns 502, and a failed_to_connect_to_backend error appears in the Ingress logs.
These are my configurations:
FrontendConfig.yaml:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontendconfig
  namespace: my-namespace
spec:
  redirectToHttps:
    enabled: false
  sslPolicy: my-ssl-policy
BackendConfig.yaml:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 5
    healthyThreshold: 3
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /health
    # The containerPort of the application in Deployment.yaml (also for liveness and readiness Probes)
    port: 8001
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
    # Frontend Configuration Name
    networking.gke.io/v1beta1.FrontendConfig: "my-frontendconfig"
    # Static IP Address Rule Name (gcloud compute addresses create epa2-ingress --global)
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
spec:
  tls:
    - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    # Specify the type of traffic accepted
    cloud.google.com/app-protocols: '{"service-port":"HTTPS"}'
    # Specify the BackendConfig to be used for the exposed ports
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    # Enables the Cloud Native Load Balancer
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
    - protocol: TCP
      name: service-port
      port: 443
      targetPort: app-port # this port expects TLS traffic, no plain http connections
The Deployment.yaml is omitted for brevity, but it defines a liveness and readiness Probe on another port, the one defined in the BackendConfig.yaml.
The interesting thing is: if I also expose this healthcheck port through the Service.yaml (mapped to port 80), point the default backend to port 80, and simply define a rule with a path /* leading to port 443, everything seems to work just fine. But I don't want to expose the healthcheck port outside my cluster, since I also have some diagnostics information there.
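For reference, that work-around looks roughly like this (a sketch only; it assumes the healthcheck/diagnostics port is additionally exposed as port 80 of my-service, which is exactly what I want to avoid):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80   # plain-HTTP healthcheck/diagnostics port
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: my-service
                port:
                  number: 443   # the TLS application port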
Question: How can I be sure that if I connect to the Ingress with https://MY_INGRESS_IP/, the traffic is routed exactly as it is to the HTTPS port of the service/application, without getting the 502 error? Where did I fail to configure the Ingress?
There are a few elements to your question; I'll try to answer them here.
I don't want to expose the healthcheck port outside my cluster
The HealthCheck endpoint is technically not exposed outside the cluster; it's exposed inside Google's backbone so that the Google LoadBalancers (configured via Ingress) can reach it. You can try that by doing a curl against https://INGRESS_IP/healthz; this will not work.
The traffic is routed exactly as it is to the HTTPS port of the service/application
The reason why 443 in your Service definition doesn't work but 80 does is that when you expose the Service on port 443, the LoadBalancer will fail to connect to a backend without a proper certificate: your backend must also be configured to present a certificate to the LoadBalancer to encrypt traffic. The secretName configured at the Ingress is the certificate used by clients to connect to the LoadBalancer. The Google HTTP LoadBalancer terminates the SSL connection and initiates a new connection to the backend using whatever port you specify in the Ingress. If that port is 443 but the backend is not configured with SSL certificates, that connection will fail.
Overall you don't need to encrypt traffic between LoadBalancers and backends; it's doable but not needed, as Google encrypts that traffic at the network level anyway.
Actually, I solved it by setting a managed certificate connected to the Ingress. It "magically" worked without any other change, using a Service of type ClusterIP.
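For completeness, a Google-managed certificate is a separate GKE resource that gets attached to the Ingress through an annotation; a minimal sketch (the name and domain below are placeholders):

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
  namespace: my-namespace
spec:
  domains:
    - my-app.example.com
---
# Then reference it from the Ingress metadata:
#   annotations:
#     networking.gke.io/managed-certificates: "my-managed-cert"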

How can istio mesh-virtual-service manage traffic from ingress-virtual-service?

I am defining canary routes in a mesh-virtual-service and wondering whether I can make them applicable for ingress traffic (with an ingress-virtual-service) as well. I tried something like the below, but it does not work (all traffic from ingress is going to the non-canary version):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app
  namespace: test-ns
spec:
  gateways:
    - mesh
  hosts:
    - test-deployment-app.test-ns.svc.cluster.local
  http:
    - name: canary
      match:
        - headers:
            x-canary:
              exact: "true"
        - port: 8080
      headers:
        response:
          set:
            x-canary: "true"
      route:
        - destination:
            host: test-deployment-app-canary.test-ns.svc.cluster.local
            port:
              number: 8080
          weight: 100
    - name: stable
      route:
        - destination:
            host: test-deployment-app.test-ns.svc.cluster.local
            port:
              number: 8080
          weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app-internal
  namespace: test-ns
spec:
  gateways:
    - istio-system/default-gateway
  hosts:
    - myapp.dev.bla
  http:
    - name: default
      route:
        - destination:
            host: test-deployment-app.test-ns.svc.cluster.local
            port:
              number: 8080
          weight: 100
So I am expecting an x-canary: true response header when I call myapp.dev.bla, but I don't see it.
Well, the answer is only partially inside the link you included. I think the essential thing to realize when working with Istio is what the Istio service mesh actually is. The service mesh is every pod with an Istio envoy-proxy sidecar, plus all the gateways (a gateway is a standalone envoy-proxy). They all know about each other because of istiod, so they can cooperate.
Any pod without an Istio sidecar (including ingress pods or e.g. kube-system pods) in your k8s cluster doesn't know anything about Istio or the service mesh. If such a pod wants to send traffic into the service mesh (to apply traffic-management rules like yours), it must send it through an Istio Gateway. A Gateway is an object that creates a standard Deployment + Service; the pods in that Deployment run a standalone envoy-proxy container.
The Gateway object is a very similar concept to a k8s Ingress, but it doesn't necessarily have to listen on a nodePort. You can also use it as an 'internal' gateway. A gateway serves as an entry point into your service mesh, for either external or internal traffic.
If you're using e.g. Nginx as the Ingress solution, you must reconfigure the Ingress rule to send traffic to one of the gateways instead of the target service, most likely to your mesh gateway. That gateway is nothing more than a k8s Service inside the istio-gateway or istio-system namespace.
Alternatively you can configure an Istio Gateway as the 'new' Ingress. As I'm not sure whether a default Istio Gateway listens on a nodePort, you need to check it (again in the istio-gateway or istio-system namespace). Or you can create a new Gateway just for your application and apply the VirtualService to the new gateway as well, as sketched below.
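A minimal sketch of that last option, assuming the stock istio ingressgateway (selector istio: ingressgateway) is installed and myapp.dev.bla is the external host (both are assumptions here):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-deployment-app-gateway
  namespace: test-ns
spec:
  selector:
    istio: ingressgateway   # binds to the default Istio ingress gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - myapp.dev.bla
---
# ...and the canary VirtualService then lists this gateway next to `mesh`, e.g.:
#   spec:
#     gateways:
#       - mesh
#       - test-deployment-app-gateway
#     hosts:
#       - test-deployment-app.test-ns.svc.cluster.local
#       - myapp.dev.bla

That way the same canary routing rules apply to traffic entering through the gateway as well as to in-mesh traffic.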

Use https protocol for endpoints in Kubernetes Services

I tried creating a Kubernetes Endpoints-backed Service to invoke a resource hosted outside the cluster via static IPs over the HTTPS protocol.
Below is the Service/Endpoints configuration:
kind: Service
apiVersion: v1
metadata:
  name: serviceRequest
spec:
  ports:
    - port: 8081
      targetPort: 8094
---
kind: Endpoints
apiVersion: v1
metadata:
  name: serviceRequest
subsets:
  - addresses:
      - ip: XX.XX.XX.XX // **external IP which is accessible as https://XX.XX.XX.XX:8094**
    ports:
      - port: 8094
But the above configuration gives a 400 Bad Request with the message "This combination of host and port requires TLS."
The same works for HTTP, but not for the HTTPS-exposed IP. Could someone please guide me on how to achieve this?
Update 1: This is how the flow is configured:
Ingress -> Service -> Endpoints
This is the error message you get when calling an HTTPS endpoint with plain HTTP. Are you sure that whoever is calling your service is calling it with https:// at the beginning?
A Kubernetes Service is no more than a set of forwarding rules in iptables (most often), and it knows nothing about TLS.
If you want to enforce an HTTPS redirect, you can use an ingress controller for this. All major ingress controllers have this capability.
For example, check nginx-ingress:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect.
Basically, all you need is to add this annotation to your ingress rule.
nginx.ingress.kubernetes.io/ssl-redirect: "true"
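In context that could look roughly like this (a sketch; the host is a placeholder, and serviceRequest/8081 are taken from the question above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: servicerequest-ingress
  annotations:
    # redirect plain-HTTP clients to HTTPS at the ingress controller
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: serviceRequest
                port:
                  number: 8081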
Easy peasy: just add port 443 to the Service, which will make the request TLS/https:
kind: Service
apiVersion: v1
metadata:
  name: serviceRequest
spec:
  ports:
    - port: 443 # <-- this is the way
      targetPort: 8094
---
kind: Endpoints
apiVersion: v1
metadata:
  name: serviceRequest
subsets:
  - addresses:
      - ip: XX.XX.XX.XX # **external IP which is accessible as https://XX.XX.XX.XX:8094**
    ports:
      - port: 8094
So you can reach your serviceRequest from your containers at the https://serviceRequest URL.
Also keep in mind that in YAML the # character is the comment sign, not //.

What is the difference between Istio VirtualService and Kubernetes Service?

As I understand it, an Istio VirtualService is a kind of abstract thing which tries to add an interface on top of the actual implementation, like a Service in Kubernetes or something similar in Consul.
When using Kubernetes as the underlying platform for Istio, is there any difference between an Istio VirtualService and a Kubernetes Service, or are they the same?
Kubernetes service
A Kubernetes Service manages a pod's networking. It specifies whether your pods are exposed internally (ClusterIP), externally (NodePort or LoadBalancer), or as a CNAME of another DNS entry (ExternalName).
As an example, this foo-service will expose the pods with label app: foo. Any request sent to the node on port 30007 will be forwarded to the pod on port 80.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: NodePort
  selector:
    app: foo
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007
Istio virtualservice
An Istio VirtualService is one level higher than a Kubernetes Service. It can be used to apply traffic routing, fault injection, retries and many other configurations to services.
As an example, this foo-retry-virtualservice will retry failed requests to foo 3 times, with a 2s timeout for each attempt.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-retry-virtualservice
spec:
  hosts:
    - foo
  http:
    - route:
        - destination:
            host: foo
      retries:
        attempts: 3
        perTryTimeout: 2s
Another example: this foo-delay-virtualservice will apply a 5s delay to 0.1% of requests to foo.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-delay-virtualservice
spec:
  hosts:
    - foo
  http:
    - fault:
        delay:
          percentage:
            value: 0.1
          fixedDelay: 5s
      route:
        - destination:
            host: foo
Ref
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
https://istio.io/latest/docs/reference/config/networking/virtual-service/
https://istio.io/latest/docs/concepts/traffic-management/#virtual-services
Istio's VirtualServices provide, like all of Istio's extensions, some additional features such as external traffic routing/management (pod-to-external communication, external HTTPS communication, routing, URL rewriting, ...).
Take a look at this doc for more details: https://istio.io/docs/reference/config/networking/virtual-service
They can both be useful, as you still need "classic" Services to manage ingress traffic or service-to-service communication.
Virtual Service:
It defines a set of traffic routing rules to apply to a Kubernetes service, or to a subset of a service, based on matching criteria. It is somewhat similar to the Kubernetes Ingress object, and it plays a key role in making Istio's traffic management flexible and powerful.
Kubernetes Service:
It is a logical set of pods, defined as an abstraction on top of the pods, which provides a single DNS name and IP.