Istio - load balance mesh internal HTTP2 traffic to non-standard port - kubernetes

I want to load balance mesh-internal HTTP/2 traffic per request across all available replicas behind my ClusterIP Service, using Istio. The first iteration is intended to work between two deployments within a single namespace, but I can't quite get there. I need to load balance on a non-standard port; I'm using the standard port as a control group.
I was able to configure Istio so that requests from one long-lived connection to the service FQDN on standard port 80 are round-robined correctly, but a long-lived connection to a non-standard port such as 13080 is not round-robined; instead a single pod gets all the requests (the behaviour looks like the K8s "iptables random" approach used by Services, which only balances per connection, not per request).
Here's my most successful VirtualService definition yet:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs
  namespace: example
spec:
  gateways:
  - mesh
  hosts:
  - "*.example.com"
  http:
  - match:
    - authority:
        regex: "(.*.)?pods.example.com(:80)?"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 80
  - match:
    - authority:
        regex: "(.*.)?pods.example.com:13080"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 13080
Ports are defined in the Service like this:
- name: http2
  port: 80
  protocol: TCP
  targetPort: 80
- name: http2-nonstd
  port: 13080
  protocol: TCP
  targetPort: 13080
Using Istio 1.6.2. What am I missing?
EDIT: The original question had a typo in the VirtualService definition's authority match for port 13080 - it used exact instead of regex. Fixing it changed nothing, however. This supports the hypothesis that for some reason Istio ignores the non-standard port.
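For what it's worth, one way to sanity-check what the sidecar actually received for the non-standard port is to dump the client pod's Envoy configuration with istioctl; this is only a diagnostic sketch, and the pod name is a placeholder:
# Dump the client sidecar's outbound listener for port 13080 (pod name is a placeholder)
istioctl proxy-config listeners <client-pod>.example --port 13080
# Dump the HTTP route configuration named after the port, if one exists
istioctl proxy-config routes <client-pod>.example --name 13080 -o json
If the 13080 listener shows a tcp_proxy filter instead of an HTTP connection manager, the sidecar is treating that port as opaque TCP, which would explain per-connection rather than per-request balancing.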

Related

AKS pod with two services within OSM

We have an application which exposes two ports (for the API and for a WebSocket). The application is deployed in an OSM-enabled namespace. We're using nginx-ingress for external access. Currently there are:
two services connected to this pod (one for the API and one for the WebSocket)
#api-svc
Type: ClusterIP
IP: [some-ip]
Port: http 80/TCP
TargetPort: 18610/TCP
Endpoints: [some-ip]:18610
-------
#websocket-svc
Type: ClusterIP
IP: [some-ip]
Port: ws 80/TCP
TargetPort: 18622/TCP
Endpoints: [some-ip]:18622
one ingress rule which routes traffic based on path:
paths:
- path: /api
  pathType: ImplementationSpecific
  backend:
    service:
      name: api-svc
      port:
        number: 80
- path: /swiftsockjs
  pathType: ImplementationSpecific
  backend:
    service:
      name: websocket-svc
      port:
        number: 80
one IngressBackend to allow the traffic in OSM:
Spec:
  Backends:
    Name: api-svc
    Port:
      Number: 18610
      Protocol: http
    Name: websocket-svc
    Port:
      Number: 18622
      Protocol: http
  Sources:
    Kind: Service
    Name: ingress-nginx-controller
    Namespace: ingress
The problem we are facing is that traffic is routed only to one targetPort at a time (i.e. only to 18610 or only to 18622), regardless of the URL path. In the ingress controller logs it's visible that traffic is routed correctly (/api to 18610 and /swiftsockjs to 18622). The problem is visible in the envoy sidecar logs: both requests go to the same upstream_cluster (it should differ by port). This can be seen at line 15 of the comparison.
The strangest part is that the behavior changes randomly when the service or IngressBackend is redeployed, so one time all requests are forwarded to 18610 and another time to 18622.
We have tried to use a multi-port service, but according to this OSM PR it's not supported (the results were exactly the same anyway).
Does anyone have any idea how to fix this? I've read almost the whole OSM documentation and the MS Docs regarding the OSM add-on but haven't found an answer to this problem (or a similar example with a multi-port pod in OSM).
According to Azure support, such a solution is not possible within OSM. Quote:
A restart of the process or the pod sometimes results in the IP:PORT change but also traffic will be consistently forwarded to that IP:PORT.
This appears to be due to the behavior of the proxy. As per OSM github document. It is a 1:1 relationship between the proxy and the endpoint.
It is also a 1:1 relationship between the proxy and the service.
In other words, the proxy will not be able to handle a pod serving multiple services.
The suggestion from MS was to split the application logic into separate deployments (pods) so that each serves one port at a time.
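For illustration, a rough sketch of that split (the names, labels and image are hypothetical, not taken from the question) gives each port its own Deployment, so every OSM sidecar fronts exactly one service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.example.com/myapp:latest   # hypothetical image
        ports:
        - containerPort: 18610
---
apiVersion: v1
kind: Service
metadata:
  name: api-svc
spec:
  selector:
    app: api
  ports:
  - name: http
    port: 80
    targetPort: 18610
The WebSocket part would be a second Deployment/Service pair exposing only 18622, so each pod (and its proxy) serves exactly one service, matching the 1:1 constraint quoted above.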

Real Client IP for TCP services - Nginx Ingress Controller

We have HTTP and TCP services behind the Nginx Ingress Controller. The HTTP services are configured through an Ingress object; when we get a request, a configuration snippet generates a header (Client-Id) and passes it to the service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-pre
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host ~ ^(?<client>[^\..]+)\.(?<app>pre|presaas).(host1|host2).com$) {
        more_set_input_headers 'Client-Id: $client';
      }
spec:
  tls:
  - hosts:
    - "*.pre.host1.com"
    secretName: pre-host1
  - hosts:
    - "*.presaas.host2.com"
    secretName: presaas-host2
  rules:
  - host: "*.pre.host1.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80
  - host: "*.presaas.host2.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80
The TCP service is configured for direct connections, and that is done through a ConfigMap. These services connect through a TCP socket.
apiVersion: v1
data:
  "12345": pre/service-back:12345
kind: ConfigMap
metadata:
  name: tcp-service
  namespace: ingress-nginx
All of this config works fine. The TCP clients connect fine through a TCP socket and the users connect fine through HTTP. The problem is that when the TCP clients establish the connection, they get the source IP address (their own IP, or in Nginx, $remote_addr) and report it back to an admin endpoint, where it is shown on a dashboard. So there is a dashboard with all the connected TCP clients and their IP addresses. What happens now is that all the IP addresses, instead of being the clients' own, are the Ingress Controller pod's IP.
I set use-proxy-protocol: "true", and it seems to resolve the issue for the TCP connections, as in the logs I can see different external IP addresses connecting, but now the HTTP services do not work, including the dashboard itself. These are the logs:
while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:80
2022/04/04 09:00:13 [error] 35#35: *5273 broken header: "��d�hԓ�:�����ӝp��E�L_"�����4�<����0�,�(�$��
����kjih9876�w�s��������" while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:443
I know the broken header logs are from the HTTP services: if I telnet to the HTTP port I get the broken header, and if I telnet to the TCP port I get clean logs with what I expect.
I hope the issue is clear. What I need is a way to configure the Nginx Ingress Controller to serve both HTTP and TCP services. I don't know if I can configure the use-proxy-protocol: "true" parameter for only one service; it seems to be a global parameter.
For now the solution we are thinking of is to set up a new Network Load Balancer (this is running in an AWS EKS cluster) just for the TCP service, and leave the HTTP services behind the Ingress Controller.
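For illustration, a minimal sketch of that dedicated NLB, assuming the in-tree AWS cloud provider annotation (the selector and port are placeholders and must match the actual backend pods):
apiVersion: v1
kind: Service
metadata:
  name: service-back-nlb
  namespace: pre
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # helps preserve the client source IP
  selector:
    app: service-back            # placeholder selector
  ports:
  - name: tcp-socket
    port: 12345
    targetPort: 12345
    protocol: TCP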
To solve this issue, go to the NLB target groups and enable proxy protocol version 2 in the Attributes tab: Network LB >> Listeners >> TCP80/TCP443 >> select Target Group >> Attributes tab >> Enable Proxy Protocol v2.
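The same attribute can also be set from the AWS CLI; the target group ARN below is a placeholder:
aws elbv2 modify-target-group-attributes \
  --target-group-arn <target-group-arn> \
  --attributes Key=proxy_protocol_v2.enabled,Value=true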

How do I properly HTTPS secure an application when using Istio?

I'm currently trying to wrap my head around what the typical application flow looks like for a Kubernetes application in combination with Istio.
For my app I have an ASP.NET application hosted in a Kubernetes cluster, and I added Istio on top. Here is my Gateway & VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: appgateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  tls:
  - match:
    - port: 443
      sniHosts:
      - "*"
    route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 443
This is what I came up with after reading through the Istio documentation.
Note that my frontendservice is a very basic ClusterIP service routing to an ASP.NET application which also exposes the standard ports 80 / 443.
I have a few questions now:
Is this the proper approach to securing my application? In essence I want to redirect incoming traffic on port 80 straight to HTTPS-enabled port 443 right at the edge. However, when I try this, there is no redirect happening on port 80 at all.
Also, the tls route on my VirtualService does not work; there's just no traffic ending up on my pod.
I'm also wondering whether it is even necessary to manually add HTTPS to my internal applications, or is this something where Istio's internal CA functionality comes in?
I have imagined it to work like this:
A request comes in. If it's on port 80, send the client a redirect so it makes an HTTPS request instead. If it's on port 443, allow the request.
The VirtualService provides the instructions for what should happen with requests on port 443 and forwards them to the service.
The service then forwards the request to my app's port 443.
Thanks in advance - I'm just learning Istio, and I'm a bit baffled why my seemingly proper setup does not work here.
Your Gateway terminates TLS connections, but your VirtualService is configured to accept unterminated TLS connections with TLSRoute.
Compare the example without TLS termination and the example which terminates TLS. Most probably, the "default" setup would be to terminate the TLS connection and configure the VirtualService with an HTTPRoute.
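A minimal sketch of that terminated-TLS variant, reusing the hosts and service from the question (the HTTPRoute itself is an assumption, not the original config):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  http:
  - route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 80
Since the Gateway has already terminated TLS, the VirtualService matches plain HTTP and can route to the service's plain-HTTP port, while the httpsRedirect on the port-80 server handles the redirect to HTTPS.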
We are also using a similar setup.
SSL is terminated on the ingress gateway, but we use mTLS mode via the Gateway CR.
Services listen on non-SSL ports, but the sidecars use mTLS between them, so any container without a sidecar cannot talk to the service.
The VirtualService routes to the non-SSL port of the service.
The Sidecar CR intercepts traffic going to and from the non-SSL port of the service.
PeerAuthentication sets mTLS between the sidecars.
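For that last point, a minimal PeerAuthentication sketch, assuming a namespace-wide STRICT policy (the namespace name is a placeholder):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace   # placeholder namespace
spec:
  mtls:
    mode: STRICT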

kubernetes expose services with Traefik 2.x as ingress with CRD

What do I have working
I have a Kubernetes cluster as follows:
Single control plane (but I plan to extend to 3 control planes for HA)
2 worker nodes
On this cluster I deployed (following this doc from Traefik: https://docs.traefik.io/user-guides/crd-acme/):
A deployment that creates two pods:
traefik itself, which will be in charge of routing, with exposed ports 80 and 8080
whoami: a simple HTTP server that responds to HTTP requests
two services:
traefik service:
whoami service:
One Traefik IngressRoute:
What do I want
I have multiple services running in the cluster and I want to expose them to the outside using Ingress.
More precisely, I want to use the new Traefik 2.x CRD ingress methods.
My ultimate goal is to use the new Traefik 2.x CRDs to expose resources on ports 80, 443 and 8080 using IngressRoute custom resource definitions.
What's the problem
If I understand correctly, classic Ingress controllers allow exposing any ports we want to the outside world (including 80, 8080 and 443).
But the new Traefik CRD ingress approach on its own does not expose anything at all.
One solution is to define the Traefik service as a LoadBalancer-typed service and then expose some ports. But then you are forced to use the 30000-32767 port range (same as NodePort), and I don't want to add a reverse proxy in front of the reverse proxy just to be able to expose ports 80 and 443...
I've also seen in the doc for the new ingress CRD (https://docs.traefik.io/user-guides/crd-acme/) that:
kubectl port-forward --address 0.0.0.0 service/traefik 8000:8000 8080:8080 443:4443 -n default
is required, and I understand that now: you need to map the host port to the service port.
But mapping the ports that way feels clunky and counter-intuitive. I don't want to have part of the service description in a YAML file and at the same time have to remember that I need to map ports with kubectl.
I'm pretty sure there is a neat and simple solution to this problem, but I can't see how to keep things simple. Do you have any experience in Kubernetes with the new Traefik 2.x CRD config?
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: web
    port: 80
    targetPort: 8000
  - protocol: TCP
    name: admin
    port: 8080
    targetPort: 8080
  - protocol: TCP
    name: websecure
    port: 443
    targetPort: 4443
  selector:
    app: traefik
Have you tried to use targetPort, where every request coming in on port 80 is redirected to 8000? Also note that when you use port-forward, you always need to target the service instead of the pod.
You can try to use the LoadBalancer service type to expose the Traefik service on ports 80, 443 and 8080. I've tested the YAML from the link you provided in GKE, and it works.
You need to change the ports on the 'traefik' service and set 'LoadBalancer' as the service type:
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: web
    port: 80           <== Port to receive HTTP connections
  - protocol: TCP
    name: admin
    port: 8080         <== Administration port
  - protocol: TCP
    name: websecure
    port: 443          <== Port to receive HTTPS connections
  selector:
    app: traefik
  type: LoadBalancer   <== Define the type load balancer
Kubernetes will create a LoadBalancer for your service and you can access your application using ports 80 and 443.
$ curl https://35.111.XXX.XX/tls -k
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /tls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
$ curl http://35.111.XXX.XX/notls
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /notls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
Well, after some time I've decided to put HAProxy in front of the Kubernetes cluster. It seems to be the only solution at the moment.
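For illustration, a minimal haproxy.cfg for that kind of setup might look like the following; the node IPs and NodePort are placeholders and depend on how the Traefik service is exposed:
frontend http_in
    bind *:80
    mode tcp
    default_backend traefik_http
backend traefik_http
    mode tcp
    server worker1 10.0.0.11:30080 check   # placeholder node IP / NodePort
    server worker2 10.0.0.12:30080 check   # placeholder node IP / NodePort
A second frontend/backend pair bound to *:443 would do the same for HTTPS, passing TLS through to Traefik untouched.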

Egress Istio rule won't work

I have a deployment with Istio injected into it that needs access to the Google Maps Distance Matrix API. If I run istioctl kube-inject with --includeIPRanges 10.0.0.0/8, it seems to work. If I remove this flag and instead apply an egress rule, it won't work:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-egress-rule
  namespace: microservices
spec:
  destination:
    service: "maps.googleapis.com"
  ports:
  - port: 443
    protocol: https
  - port: 80
    protocol: http
Both the deployment and the EgressRule are in the same namespace (microservices).
Any idea where my mistake is?
From what I see by running curl maps.googleapis.com, it redirects to https://developers.google.com/maps/.
Two issues here:
You have to specify an additional EgressRule for developers.google.com.
Currently you have to access external HTTPS sites by issuing HTTP requests to port 443, like curl http://developers.google.com:443/maps. The Istio proxy will open an HTTPS connection to developers.google.com for you. Unfortunately, there is currently no other way to do it, except for using --includeIPRanges.
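A sketch of that additional rule, mirroring the one in the question (only the destination service changes; whether port 80 is needed depends on the redirect behaviour):
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: google-developers-egress-rule
  namespace: microservices
spec:
  destination:
    service: "developers.google.com"
  ports:
  - port: 443
    protocol: https
  - port: 80
    protocol: http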