How do I properly secure an application with HTTPS when using Istio?

I'm currently trying to wrap my head around what the typical application flow looks like for a Kubernetes application in combination with Istio.
For my app, I have an ASP.NET application hosted within a Kubernetes cluster, and I added Istio on top. Here are my Gateway and VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: appgateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  tls:
  - match:
    - port: 443
      sniHosts:
      - "*"
    route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 443
This is what I came up with after reading through the Istio documentation.
Note that my frontendservice is a very basic ClusterIP service routing to an ASP.NET application, which also exposes the standard ports 80 and 443.
I have a few questions now:
Is this the proper approach to securing my application? In essence, I want to redirect incoming traffic on port 80 straight to HTTPS-enabled port 443 right at the edge. However, when I try this, there is no redirect happening on port 80 at all.
Also, the tls route on my VirtualService does not work; no traffic ends up on my pod at all.
I'm also wondering: is it even necessary to manually add HTTPS to my internal applications, or is this where Istio's internal CA functionality comes in?
I imagined it working like this:
A request comes in. If it arrives on port 80, send a redirect to the client so that it retries over HTTPS. If it arrives on port 443, allow the request.
The VirtualService provides the instructions for what should happen with requests on port 443 and forwards them to the service.
The service then forwards the request to my app's port 443.
Thanks in advance - I'm just learning Istio, and I'm a bit baffled as to why my seemingly proper setup does not work here.

Your Gateway terminates TLS connections, but your VirtualService is configured to accept unterminated TLS connections with a TLSRoute.
Compare the example without TLS termination and the example that terminates TLS. Most probably, the "default" setup is to terminate the TLS connection at the gateway and configure the VirtualService with an HTTPRoute.
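For illustration, here is a minimal sketch of your VirtualService rewritten with an HTTPRoute. It reuses the names from your question; the destination port 80 is an assumption that the backend serves plain HTTP once the gateway has terminated TLS:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  http:
  - route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 80
With httpsRedirect: true on the port 80 server and TLS terminated at the gateway, the hop inside the mesh can stay on plain HTTP, or on mTLS if you enable it (see the next answer).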

We are also using a similar setup.
SSL is terminated on the ingress gateway, but we use mTLS mode via the Gateway CR.
Services listen on non-SSL ports, but the sidecars use mTLS between each other, so any container without a sidecar cannot talk to a service.
The VirtualService routes to the non-SSL port of the service.
A Sidecar CR intercepts traffic going to and from the non-SSL port of the service.
PeerAuthentication sets mTLS between the sidecars, as sketched below.
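As a rough sketch of that last point (the resource name and namespace are illustrative, not from our actual setup), a mesh-wide PeerAuthentication enforcing mTLS between sidecars looks roughly like this:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT   # plaintext traffic from pods without a sidecar is rejected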

Related

Real Client IP for TCP services - Nginx Ingress Controller

We have HTTP and TCP services behind the Nginx Ingress Controller. The HTTP services are configured through an Ingress object where, when we get a request, a snippet generates a header (Client-Id) and passes it to the service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-pre
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host ~ ^(?<client>[^\..]+)\.(?<app>pre|presaas).(host1|host2).com$) {
        more_set_input_headers 'Client-Id: $client';
      }
spec:
  tls:
  - hosts:
    - "*.pre.host1.com"
    secretName: pre-host1
  - hosts:
    - "*.presaas.host2.com"
    secretName: presaas-host2
  rules:
  - host: "*.pre.host1.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80
  - host: "*.presaas.host2.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80
The TCP service is configured to connect directly, through a ConfigMap. These services connect through a TCP socket.
apiVersion: v1
data:
  "12345": pre/service-back:12345
kind: ConfigMap
metadata:
  name: tcp-service
  namespace: ingress-nginx
All this config works fine. The TCP clients connect fine through a TCP socket, and the users connect fine through HTTP. The problem is that the TCP clients, when they establish the connection, get the source IP address (their own IP; in Nginx terms, $remote_addr) and report it back to an admin endpoint, where it is shown in a dashboard. So there is a dashboard with all the connected TCP clients and their IP addresses. What happens now is that all the IP addresses, instead of being the clients' addresses, are the address of the Ingress Controller pod.
I set use-proxy-protocol: "true", and it seems to resolve the issue for the TCP connections, as in the logs I can see different external IP addresses connecting, but now the HTTP services do not work, including the dashboard itself. These are the logs:
while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:80
2022/04/04 09:00:13 [error] 35#35: *5273 broken header: "��d�hԓ�:�����ӝp��E�L_"�����4�<����0�,�(�$��
����kjih9876�w�s��������" while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:443
I know the broken-header logs are from the HTTP services: if I telnet to the HTTP port I get the broken header, and if I telnet to the TCP port I get clean logs with what I expect.
I hope the issue is clear. What I need is a way to configure the Nginx Ingress Controller to serve both HTTP and TCP services. I don't know if the use-proxy-protocol: "true" parameter can be configured for only one service; it seems to be a global parameter.
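For reference, this is where the parameter lives - in the controller-wide ConfigMap - which is why it seems to apply globally (a sketch; the ConfigMap name and namespace depend on how the controller was installed):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on the installation
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"   # controller-wide: affects every HTTP and TCP listener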
For now, the solution we are thinking of is to set up a separate Network Load Balancer (this is running in an AWS EKS cluster) just for the TCP services, and leave HTTP behind the Ingress Controller.
To solve this issue, go to the NLB target groups and enable proxy protocol version 2 in the Attributes tab: Network LB >> Listeners >> TCP80/TCP443 >> select Target Group >> Attributes tab >> Enable Proxy Protocol v2.
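If the NLB is managed from Kubernetes by the AWS Load Balancer Controller, the same attribute can be set declaratively via a Service annotation instead of the console. This is a sketch under that assumption; the Service name, namespace, and selector are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative; match your controller's Service
  namespace: ingress-nginx
  annotations:
    # Assumes the AWS Load Balancer Controller manages this NLB
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443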

Istio - load balance mesh internal HTTP2 traffic to non-standard port

I want to load-balance mesh-internal HTTP2 traffic coming to my ClusterIP Service per request across all its available replicas, using Istio. The first iteration is intended to work between two deployments within a single namespace, but I can't quite get there. I need to load-balance on a non-standard port, and I'm using the standard port as a control group.
I was able to configure Istio so that requests from one long-lived connection to the service FQDN on standard port 80 are round-robined correctly, but a long-lived connection to a non-standard port such as 13080 will not round-robin; instead, a single pod gets all the requests (the behaviour looks like the Kubernetes "iptables random" approach used by a Service, which only balances per connection, not per request).
Here's my most successful VirtualService definition yet:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs
  namespace: example
spec:
  gateways:
  - mesh
  hosts:
  - "*.example.com"
  http:
  - match:
    - authority:
        regex: "(.*.)?pods.example.com(:80)?"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 80
  - match:
    - authority:
        regex: "(.*.)?pods.example.com:13080"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 13080
Ports are defined in the Service like this:
- name: http2
  port: 80
  protocol: TCP
  targetPort: 80
- name: http2-nonstd
  port: 13080
  protocol: TCP
  targetPort: 13080
Using Istio 1.6.2. What am I missing?
EDIT: The original question had a typo in the VirtualService definition's authority match for port 13080 - it used exact instead of regex. Nothing changed after fixing it, however. This supports the hypothesis that, for some reason, Istio ignores the non-standard port.
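One thing worth ruling out (a sketch, not a confirmed fix): pinning the load-balancing policy explicitly with a DestinationRule, so that round robin is applied to every port of the service. The resource name is made up; the host reuses the one from the question:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: pods-round-robin
  namespace: example
spec:
  host: pods.example.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN   # explicit per-request policy for all ports of the host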

What is the disadvantage of using HostSNI(`*`) in Traefik TCP route mapping?

I am currently using HostSNI(`*`) to map TCP services like MySQL and PostgreSQL in Traefik 2.2.1 on a Kubernetes v1.18 cluster, because I am working on my local machine and do not have a valid certificate. This is the config:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-ingress-tcp-route
  namespace: middleware
spec:
  entryPoints:
  - mysql
  routes:
  - match: HostSNI(`*`)
    services:
    - name: report-mysqlha
      port: 3306
This config works fine on my local machine, but I still want to know the side effects of the HostSNI(`*`) mapping strategy. What is the disadvantage of using HostSNI(`*`) instead of a domain name? Is it possible to use a fake domain name on my local machine?
For those needing an example of TCP with TLS passthrough and SNI routing:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: test-https
  namespace: mynamespace
spec:
  entryPoints:
  - websecure   # maps to port 443 by default
  routes:
  - match: HostSNI(`my.domain.com`)
    services:
    - name: myservice
      port: 443
  tls:
    passthrough: true
As of the latest Traefik docs (2.4 at this time):
If both HTTP routers and TCP routers listen to the same entry points, the TCP routers will apply before the HTTP routers
It is important to note that the Server Name Indication is an extension of the TLS protocol. Hence, only TLS routers will be able to specify a domain name with that rule. However, non-TLS routers will have to explicitly use that rule with * (every domain) to state that every non-TLS request will be handled by the router.
Therefore, to answer your questions:
Using HostSNI(`*`) is the only reasonable way to use an IngressRouteTCP without tls, since you're explicitly asking for a TCP router and plain TCP doesn't speak TLS.
I've had mixed success with IngressRouteTCP and HostSNI(`some.fqdn.here`) with a tls: section, but it does appear to be a supported configuration as per the second quote above.
One possible "disadvantage" (air quotes because it's subjective): this configuration means that any traffic that reaches your entrypoint (i.e. mysql) will be routed via this IngressRouteTCP.
Consider: if for some reason you had another ingressRoute with the same entrypoint, the IngressRouteTCP would take precedence, as per the first quote above.
Consider: if, for example, you wanted to route multiple different MySQL services via the same entrypoint mysql, you wouldn't be able to with this configuration; see the sketch below.
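To illustrate that last point: with TLS in play, SNI can disambiguate multiple backends on one entrypoint, which HostSNI(`*`) cannot. A hypothetical sketch (the hostnames and service names are made up):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-sni-routing
  namespace: middleware
spec:
  entryPoints:
  - mysql
  routes:
  - match: HostSNI(`mysql-a.example.com`)   # routed by the TLS SNI field
    services:
    - name: mysql-a
      port: 3306
  - match: HostSNI(`mysql-b.example.com`)
    services:
    - name: mysql-b
      port: 3306
  tls:
    passthrough: true   # TLS stays end-to-end; Traefik only inspects the SNI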

How do I forward requests to a public service like a CDN using an Istio VirtualService?

I'm trying to set up a reverse proxy using an Istio VirtualService.
Is it possible to forward requests in a VirtualService (like nginx's proxy_pass)?
The desired result:
http://myservice.com/about/* -> forward the request to a CDN (an external service outside the Kubernetes system - AWS S3, etc.)
http://myservice.com/* -> my-service-web (an internal service inside the Istio mesh)
I defined a ServiceEntry, but it just redirects instead of forwarding the request.
Here are my serviceentry.yaml and virtualservice.yaml.
serviceentry.yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: my-service-proxy
  namespace: my-service
spec:
  hosts:
  - CDN_URL
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
  namespace: my-service
spec:
  hosts:
  - myservice.com
  gateways:
  - myservice
  http:
  - match:
    - uri:
        prefix: /about
    rewrite:
      authority: CDN_URL
      uri: /
    route:
    - destination:
        host: CDN_URL
  - route:
    - destination:
        host: my-service-web.my-service.svc.cluster.local
        port:
          number: 80
Can a VirtualService act like an nginx ingress?
Based on that Istio discussion, user #palic asked the same question:
Shouldn't it be possible to let Istio do the reverse proxy thing, so that no one needs a webserver (httpd/nginx/lighttpd/...) to do the reverse proxy job?
And the answer provided by #Daniel_Watrous:
The job of the Istio control plane is to configure a fleet of reverse proxies. The purpose of the webserver is to serve content, not reverse proxy. The reverse proxy technology at the heart of Istio is Envoy, and Envoy can be used as a replacement for HAProxy, nginx, Apache, F5, or any other component that is being used as a reverse proxy.
Is it possible to forward requests in a VirtualService?
Based on that, I would say it's not possible in a VirtualService; it can only rewrite (redirect), which I assume is what is working for you.
When I need the function of a reverse proxy, do I have to use an nginx ingress controller (or something else) instead of the Istio ingress gateway?
If we are talking about reverse proxying to an external service, then yes, you need to use a technology other than Istio itself.
As far as I'm concerned, you could use an nginx pod configured as a reverse proxy to the external service, and that pod would be the host for your VirtualService.
So it would look like the example below.
EXAMPLE
ingress gateway -> VirtualService -> nginx pod (reverse proxy configured in nginx)
ServiceEntry -> provides accessibility of URLs outside of the cluster
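A rough sketch of what the nginx pod's reverse-proxy config could look like (CDN_URL stands in for the real CDN host, as in the question; the ConfigMap name is made up):
apiVersion: v1
kind: ConfigMap
metadata:
  name: cdn-proxy-conf
  namespace: my-service
data:
  default.conf: |
    server {
      listen 80;
      location / {
        # forward (rather than redirect) every request to the CDN
        proxy_pass https://CDN_URL;
        proxy_set_header Host CDN_URL;
      }
    }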
Let me know if you have any more questions.

Egress Istio rule won't work

I have a deployment with Istio injected that needs access to the Google Maps Distance Matrix API. If I run istioctl kube-inject with --includeIPRanges 10.0.0.0/8, it seems to work. If I remove this flag and instead apply an egress rule, it won't work:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-egress-rule
  namespace: microservices
spec:
  destination:
    service: "maps.googleapis.com"
  ports:
  - port: 443
    protocol: https
  - port: 80
    protocol: http
Both the deployment and the EgressRule are in the same namespace (microservices).
Any idea where my mistake is?
From what I see by running curl maps.googleapis.com, it redirects to https://developers.google.com/maps/.
Two issues here:
You have to specify an additional EgressRule for developers.google.com (see the sketch below).
Currently, you have to access external HTTPS sites by issuing HTTP requests to port 443, like curl http://developers.google.com/maps:443; the Istio proxy will open an HTTPS connection to developers.google.com for you. Unfortunately, there is currently no other way to do it, except for using --includeIPRanges.
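A sketch of the additional EgressRule from the first point, mirroring the rule in the question (same deprecated config.istio.io API, since that is what this Istio version uses):
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-developers-egress-rule
  namespace: microservices
spec:
  destination:
    service: "developers.google.com"
  ports:
  - port: 443
    protocol: https
  - port: 80
    protocol: http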