I have a deployment with Istio injected into it that needs access to the Google Maps Distance Matrix API. If I run istioctl kube-inject with --includeIPRanges 10.0.0.0/8, it seems to work. If I remove this flag and instead apply an egress rule, it won't work:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-egress-rule
  namespace: microservices
spec:
  destination:
    service: "maps.googleapis.com"
  ports:
    - port: 443
      protocol: https
    - port: 80
      protocol: http
Both the deployment and the egress rule are in the same namespace (microservices).
Any idea where my mistake is?
From what I see by running curl maps.googleapis.com, it redirects to https://developers.google.com/maps/.
Two issues here:
You have to specify an additional EgressRule for developers.google.com.
Currently you have to access HTTPS external sites by issuing HTTP requests to port 443, like curl http://developers.google.com/maps:443. The Istio proxy will open an HTTPS connection to developers.google.com for you. Unfortunately, there is currently no other way to do it, except for using --includeIPRanges.
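For the first point, a sketch of what the additional rule could look like, in the same format and EgressRule API you are already using (the rule name is made up):
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-developers-egress-rule
  namespace: microservices
spec:
  destination:
    service: "developers.google.com"
  ports:
    - port: 443
      protocol: https
    - port: 80
      protocol: http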
I'm currently trying to wrap my head around what the typical application flow looks like for a Kubernetes application in combination with Istio.
For my app, I have an ASP.NET application hosted within a Kubernetes cluster, and I added Istio on top. Here are my Gateway & VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: appgateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
      tls:
        httpsRedirect: true
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
        privateKey: /etc/istio/ingressgateway-certs/tls.key
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
    - "*"
  gateways:
    - appgateway
  tls:
    - match:
        - port: 443
          sniHosts:
            - "*"
      route:
        - destination:
            host: frontendservice.default.svc.cluster.local
            port:
              number: 443
This is what I came up with after reading through the Istio documentation.
Note that my frontendservice is a very basic ClusterIP service routing to an ASP.NET application, which also exposes the standard ports 80/443.
I have a few questions now:
Is this the proper approach to securing my application? In essence, I want to redirect incoming traffic on port 80 straight to HTTPS-enabled port 443 right at the edge. However, when I try this, there is no redirect happening on port 80 at all.
Also, the tls route on my VirtualService does not work. There's just no traffic ending up on my pod.
I'm also wondering: is it even necessary to manually add HTTPS to my internal applications, or is this something where Istio's internal CA functionality comes in?
I imagined it working like this:
A request comes in. If it's on port 80, send a redirect to the client so that it retries over HTTPS. If it's on port 443, allow the request.
The VirtualService provides the instructions for what should happen with requests on port 443 and forwards them to the service.
The service then forwards the request to my app's port 443.
Thanks in advance - I'm just learning Istio, and I'm a bit baffled why my seemingly proper setup does not work here.
Your Gateway terminates TLS connections, but your VirtualService is configured to accept unterminated TLS connections with TLSRoute.
Compare the example without TLS termination with the example that terminates TLS. Most probably, the "default" setup would be to terminate the TLS connection and configure the VirtualService with an HTTPRoute.
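A minimal sketch of that terminated-TLS variant, reusing the names from the manifests above (the assumption here is that the backing service also listens on plain HTTP port 80):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
    - "*"
  gateways:
    - appgateway
  http:
    - route:
        - destination:
            host: frontendservice.default.svc.cluster.local
            port:
              number: 80   # plain HTTP port; the gateway has already terminated TLS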
We are also using a similar setup.
SSL is terminated on the ingress gateway, but we use mTLS mode via the Gateway CR.
Services listen on non-SSL ports, but the sidecars use mTLS between them, so any container without a sidecar cannot talk to a service.
The VirtualService routes to the non-SSL port of the service.
The Sidecar CR intercepts traffic going to and from the non-SSL port of the service.
PeerAuthentication sets mTLS between the sidecars.
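For reference, a minimal sketch of the PeerAuthentication part of such a setup (the namespace is a placeholder, not taken from the posts above):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace    # placeholder namespace
spec:
  mtls:
    mode: STRICT             # sidecar-to-sidecar traffic must use mTLS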
I am using HostSNI(`*`) to map TCP services like MySQL/PostgreSQL in Traefik 2.2.1 on a Kubernetes v1.18 cluster, because I am on my local machine and do not have a valid certificate. This is the config:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-ingress-tcp-route
  namespace: middleware
spec:
  entryPoints:
    - mysql
  routes:
    - match: HostSNI(`*`)
      services:
        - name: report-mysqlha
          port: 3306
This config works fine on my local machine, but I still want to know the side effects of using the HostSNI(`*`) mapping strategy. What is the disadvantage of using HostSNI(`*`) instead of a domain name? Is it possible to use a fake domain name on my local machine?
For those needing an example of TCP with TLS passthrough and SNI routing:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: test-https
  namespace: mynamespace
spec:
  entryPoints:
    - websecure          # maps to port 443 by default
  routes:
    - match: HostSNI(`my.domain.com`)
      services:
        - name: myservice
          port: 443
  tls:
    passthrough: true
As of the latest Traefik docs (2.4 at this time):
If both HTTP routers and TCP routers listen to the same entry points, the TCP routers will apply before the HTTP routers
It is important to note that the Server Name Indication is an extension of the TLS protocol. Hence, only TLS routers will be able to specify a domain name with that rule. However, non-TLS routers will have to explicitly use that rule with * (every domain) to state that every non-TLS request will be handled by the router.
Therefore, to answer your questions:
Using HostSNI(`*`) is the only reasonable way to use an ingressRouteTCP without tls -- since you're explicitly asking for a TCP router and TCP doesn't speak TLS.
I've had mixed success with IngressRouteTCP and HostSNI(`some.fqdn.here`) with a tls: section, but it does appear to be a supported configuration as per the second quote above.
One possible "disadvantage" to this (air quotes because it's subjective) is: this configuration means that any traffic that reaches your entrypoint (i.e. mysql) will be routed via this IngressRouteTCP.
Consider: if for some reason you had another ingressRoute with the same entrypoint, the IngressRouteTCP would take precedence, as per the first quote above.
Consider: if, for example, you wanted to route multiple different MySQL services via the same entrypoint: mysql, you wouldn't be able to based on this configuration (see the sketch below).
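To make that last point concrete, here is a purely hypothetical sketch (service names and hostnames are made up). Distinguishing two MySQL services on the same entrypoint would require distinct HostSNI matches, which only works if the clients open TLS connections that actually carry SNI; with HostSNI(`*`) and no TLS there is effectively room for only one TCP router per entrypoint.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-a
  namespace: middleware
spec:
  entryPoints:
    - mysql
  routes:
    - match: HostSNI(`mysql-a.example.local`)   # made-up hostname
      services:
        - name: mysql-a
          port: 3306
  tls:
    passthrough: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-b
  namespace: middleware
spec:
  entryPoints:
    - mysql
  routes:
    - match: HostSNI(`mysql-b.example.local`)   # made-up hostname
      services:
        - name: mysql-b
          port: 3306
  tls:
    passthrough: true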
The ingress gateway is located behind an AWS ELB (classic) using a NodePort, and I want to route TCP traffic in a VirtualService based on the client IP.
Of course, Proxy Protocol is enabled on the ELB.
When I use HTTP, it works. The configuration is below.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app-vservice
  namespace: test
spec:
  hosts:
    - "app-service"
  http:
    - match:
        - headers:
            x-forwarded-for:
              exact: 123.123.123.123
      route:
        - destination:
            host: app-service
            subset: v2
    - route:
        - destination:
            host: app-service
            subset: v1
But I can't find a headers field for TCP routes in the official documents.
Is it impossible?
Thank you.
According to the docs, yes: there is no field to match headers in a TCPRoute in Istio. To answer your question, any header manipulation should be done using Envoy filters, because Istio is built on Envoy, which supports that and also reduces the complexity.
It can be achieved using Envoy and Lua filters, as stated in the Istio docs; please follow the Envoy docs.
Check out the Istio discussion for headers in VirtualService, the implementation of the same using Lua, and a blog showing an example of how to implement filters on Envoy.
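For the HTTP case, a rough sketch of what such an EnvoyFilter with an inline Lua filter could look like (the namespace, filter name, and the x-client-ip header are assumptions, not taken from your setup). Note this only helps for HTTP traffic, since a plain TCP route has no headers to inspect:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: copy-client-ip-header
  namespace: istio-system            # assumption: the ingress gateway runs here
spec:
  workloadSelector:
    labels:
      istio: ingressgateway          # attach to the ingress gateway proxy
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.http_connection_manager
              subFilter:
                name: envoy.filters.http.router
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.lua
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
            inlineCode: |
              function envoy_on_request(request_handle)
                -- copy x-forwarded-for into a custom header that a
                -- VirtualService header match could then key on
                local xff = request_handle:headers():get("x-forwarded-for")
                if xff ~= nil then
                  request_handle:headers():add("x-client-ip", xff)
                end
              end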
I'm trying to build a reverse proxy using an Istio VirtualService.
Is it possible to forward requests in a VirtualService (like nginx's proxy_pass)?
As a result, I want:
http://myservice.com/about/* -> forward the request to a CDN (an external service outside the k8s system - AWS S3, etc.)
http://myservice.com/* -> my-service-web (an internal service inside the Istio mesh)
I defined a ServiceEntry, but it just redirects; it does not forward the request.
Here are my serviceentry.yaml and virtualservice.yaml:
serviceentry.yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: my-service-proxy
  namespace: my-service
spec:
  hosts:
    - CDN_URL
  location: MESH_EXTERNAL
  ports:
    - number: 80
      name: http
      protocol: HTTP
    - number: 443
      name: https
      protocol: TLS
  resolution: DNS
virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
  namespace: my-service
spec:
  hosts:
    - myservice.com
  gateways:
    - myservice
  http:
    - match:
        - uri:
            prefix: /about
      rewrite:
        authority: CDN_URL
        uri: /
      route:
        - destination:
            host: CDN_URL
    - route:
        - destination:
            host: my-service-web.svc.cluster.local
            port:
              number: 80
Can a VirtualService act like an nginx ingress?
Based on this Istio discussion, where user #palic asked the same question:
Shouldn’t it be possible to let ISTIO do the reverse proxy
thing, so that no one needs a webserver (httpd/nginx/
lighthttpd/…) to do the reverse proxy job?
And the answer provided by #Daniel_Watrous
The job of the Istio control plane is to configure a fleet of reverse proxies. The purpose of the webserver is to serve content, not reverse proxy. The reverse proxy technology at the heart of Istio is Envoy, and Envoy can be used as a replacement for HAProxy, nginx, Apache, F5, or any other component that is being used as a reverse proxy.
Is it possible to forward requests in a virtual service?
Based on that, I would say it's not possible in a VirtualService; it's just a rewrite (redirect), which I assume is working for you.
When I need the functionality of a reverse proxy, do I have to use the nginx ingress controller (or something else) instead of the Istio ingress gateway?
If we're talking about a reverse proxy, then yes, you need to use a technology other than Istio itself.
As far as I'm concerned, you could use an nginx pod configured as a reverse proxy to the external service, and it would be the host for your virtual service.
So it would look like in below example.
EXAMPLE
ingress gateway -> Virtual Service -> nginx pod (reverse proxy configured on nginx)
Service entry -> accessibility of URLs outside of the cluster
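To illustrate that layout, here is a rough sketch of the nginx piece (CDN_URL is the placeholder from your manifests; the ConfigMap name is made up, and you would still need a Deployment mounting this config plus a Service that the /about route of your VirtualService points at):
apiVersion: v1
kind: ConfigMap
metadata:
  name: cdn-reverse-proxy-conf
  namespace: my-service
data:
  default.conf: |
    server {
      listen 80;
      location / {
        # forward (not redirect) the request to the external CDN
        proxy_pass https://CDN_URL;
        proxy_set_header Host CDN_URL;
        proxy_ssl_server_name on;   # send SNI when talking to the CDN
      }
    }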
Let me know if you have any more questions.
Following the tutorial for Kubernetes (in my case on GKE), https://docs.traefik.io/v2.0/user-guides/crd-acme/, I am stuck on how to assign the global static IP (GKE wants a forwarding rule). Am I missing something (e.g. adding another ingress)? I understand that annotations are not possible in the IngressRoute, so how would I assign the globally reserved IP?
The answer to question 3 in this Q&A online meetup (https://gist.github.com/dduportal/13874113cf5fa1d0901655e3367c31e5) mentions that a "classic ingress" is also possible with version 2.x. Does this mean I can set up Traefik as in 1.x (like this: https://docs.traefik.io/user-guide/kubernetes/), use the 2.x configuration, and have no need for the CRDs?
You do it the same way as with every other ingress controller.
Good step-by-step instructions on how to assign a static IP address to an Ingress are given on the nginx-ingress website.
Follow the section called 'Promote ephemeral to static IP'.
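On GKE, that step usually amounts to promoting the LoadBalancer's current ephemeral address to a reserved static one, roughly like this (the name, region and IP are placeholders):
gcloud compute addresses create traefik-static-ip \
    --region europe-west1 \
    --addresses <current_ephemeral_ip>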
If you follow Traefik 2.0's exemplary manifest files for Kubernetes, then once you patch your Traefik K8S Service (with kubectl patch ...), you can verify whether the IngressRoutes took effect with the following command:
curl -i http://<static-ip-address>:8000/notls -H 'Host: your.domain.com'
Update
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 8000
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 4443
  selector:
    app: traefik
  type: LoadBalancer
and patch it with:
kubectl patch svc traefik -p '{"spec": {"loadBalancerIP": "<your_static_ip>"}}'
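After patching, the Service should eventually report the reserved address as its external IP, which you can check with:
kubectl get svc traefik -o wide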