Istio Custom Ingress Gateway Works on Port 80 Only

I want to create my own ingress gateway with Istio. Here's my intention:
traffic on 4000 > my-gateway > my-virtualservice > web service (listening on 4000)
I've deployed the following YAML:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 4000
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: web
        port:
          number: 4000
This doesn't work, but changing the gateway port number: 4000 to number: 80 does work.
Presumably because the istio-ingressgateway is open on 80.
Which leads me to believe that this chain is actually:
traffic on 4000 > my-gateway > my-virtualservice > istio-ingressgateway > web service
I assume I can fix this by opening 4000 on the istio-ingressgateway but doesn't that defeat the point of creating a custom gateway?
I thought the whole point of creating my-gateway was to avoid using the istio-ingressgateway?
Help me understand! :D

Traffic Flow: Client -> LoadBalancer (Ingress Gateway Service) -> Ingress Gateway Envoy -> Sidecar Envoy for your application -> Your application.
The ingress gateway is an Envoy proxy deployed at the edge of the Kubernetes cluster. All incoming requests (HTTP, TCP) to the services inside the cluster arrive at the ingress gateway. The Gateway and VirtualService kinds are what let you configure the Envoy proxy of the ingress gateway.
Creating a Gateway object does not really deploy a new gateway; it just configures the same Envoy proxy running as the ingress gateway.
Here is a good reference
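To make the 4000 flow work, the port also has to be exposed on the istio-ingressgateway Kubernetes Service, since that Service is what the load balancer forwards traffic to; this does not defeat the purpose of my-gateway, because the Gateway resource was never a separate deployment in the first place. A minimal sketch, assuming the default istio-ingressgateway Service in istio-system (the port name http-custom is hypothetical):

# Append port 4000 to the ingress gateway Service's port list.
# targetPort 4000 assumes the gateway Envoy listens on 4000,
# which it will once the Gateway above is applied.
kubectl -n istio-system patch svc istio-ingressgateway --type=json \
  -p='[{"op":"add","path":"/spec/ports/-","value":{"name":"http-custom","port":4000,"targetPort":4000,"protocol":"TCP"}}]'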

Related

Accessing Jaeger /tracing from k8s cluster returns index.html and 503 Service Unavailable

I have a Kubernetes cluster which runs Istio as a service mesh, with load balancing provided by MetalLB. I have 4 Istio addons (Prometheus, Kiali, Grafana, and Jaeger) running on the cluster in the istio namespace, but running Firefox on the virtual machine is relatively slow and I also don't want to rely on the istioctl dashboard command to access my monitoring tools.
I've successfully been able to access Kiali and Grafana by tunneling in with PuTTY and utilizing the Istio ingressgateway with Gateway/VirtualService resources similar to those found in the Istio documentation here - https://istio.io/latest/docs/tasks/observability/gateways/. The istio-ingressgateway pod is listening on 10.10.1.10 and my PuTTY tunnel is directed to 10.10.1.10:80 with a source port of 90. Everything is done over HTTP for testing at this time.
I've listed my specific configuration below -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tracing-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http-tracing
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tracing-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - tracing-gateway
  http:
  - route:
    - destination:
        host: tracing
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tracing
  namespace: istio-system
spec:
  host: tracing
  trafficPolicy:
    tls:
      mode: DISABLE
---
Whenever I attempt to access Jaeger by hitting /tracing, however, I always receive a 503 Service Unavailable error. I know that the application is functional, though, because if I run the istioctl dashboard jaeger command I can access it through the VM's Firefox browser. I'm wondering what I need to configure within Jaeger to allow me to access it.
Initially, when working with Jaeger, I attempted to use a Gateway/VirtualService configuration identical to what worked for Grafana and Kiali, just replacing names/ports/prefixes, as shown below -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        host: grafana
        port:
          number: 3000
When running this for Jaeger I only ever received HTTP 503 responses. After trying different combinations of ports I used the YAML definition from the Istio page linked above, changing only the hosts line since I don't have a domain and everything is IP-based.
At this point, when I navigate to /tracing using my PuTTY tunnel, it returns a blank page which, if inspected, is Jaeger's index.html. Inspecting the page shows that it attempts to redirect to jaeger_tracing but returns the net::ERR_ABORTED 503 (Service Unavailable) error shown in the screenshot (/tracing_error_image).
A workaround was found by running the kubectl port-forward command on port 16686, though the same can be done with the istioctl dashboard jaeger command.
I ran one of them in the background and tunneled to localhost with my PuTTY instance using the URL /jaeger_tracing, which is defined in my Jaeger manifest. From there I could hit Jaeger from my local instance without the sluggish VM performance.
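A minimal sketch of that workaround, assuming the Jaeger query UI listens on 16686 and the deployment in istio-system is named jaeger (check the actual name with kubectl -n istio-system get deploy):

# forward local 16686 to the Jaeger query port inside the cluster
kubectl -n istio-system port-forward deployment/jaeger 16686:16686 &
# then tunnel/browse to http://localhost:16686/jaeger_tracing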

K8s service LB to external services w/ nginx-ingress controller

Is it possible to configure the k8s nginx-ingress controller as an LB so that a K8s Service actively connects to an external backend hosted on external hosts/ports (where one will be enabled at a time, connecting back to the cluster service)?
Similar to an Envoy proxy? This is on vanilla K8s, on-prem.
So rather than balancing load from
client -> cluster -> service,
I am looking for
service -> nginx-ingress -> external-backend.
Define a Kubernetes Service with no selector. Then you need to define an Endpoints object, where you put the external IP and port. Normally you do not define Endpoints for Services, but because this Service will not have a selector, you will need to provide an Endpoints object with the same name as the Service.
Then you point the Ingress at the Service.
Here's an example that exposes an Ingress on the cluster and sends the traffic to 192.168.88.1 on TCP 8081.
apiVersion: v1
kind: Service
metadata:
  name: router
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8081
---
apiVersion: v1
kind: Endpoints
metadata:
  name: router
  namespace: default
subsets:
- addresses:
  - ip: 192.168.88.1
  - ip: 192.168.88.2 # As per question below
  ports:
  - port: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: my-router.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: router
          servicePort: 80
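To sanity-check this wiring, you can confirm the selector-less Service picked up the manual Endpoints and then hit the Ingress with the right Host header (the controller address below is a placeholder for whatever your nginx-ingress Service exposes):

# should list 192.168.88.1:8081 and 192.168.88.2:8081 as endpoints
kubectl -n default get endpoints router
# request routed through the Ingress rule to the external backends
curl -H "Host: my-router.domain.com" http://<ingress-controller-address>/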
While defining the Ingress, use the nginx.ingress.kubernetes.io/configuration-snippet annotation. Also enable the proxy protocol using use-proxy-protocol: "true".
Using this annotation you can add additional configuration to the NGINX location block.
Please take a look: ingress-nginx-issue, advanced-configuration-with-annotations.
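As a sketch of those two settings (the proxy_set_header line is only an illustrative directive, and the ConfigMap name/namespace depend on how the nginx-ingress controller was installed):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # extra directives appended to the generated NGINX location block
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Proto $scheme;
spec:
  rules:
  - host: my-router.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: router
          servicePort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration # name depends on your controller installation
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"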

Istio Ingress Gateway with TLS termination returning 503 service unavailable

We want to route https traffic to an https endpoint using the Istio Ingress Gateway.
We terminate the TLS traffic at the Ingress Gateway, but our backend service uses https as well.
I have the following manifests:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: datalake-dsodis-istio-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "gw-hdfs-spark.dsodis.domain"
    - "spark-history.dsodis.domain"
    port:
      name: https-wildcard
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gw-spark-history-istio-vs
spec:
  gateways:
  - default/datalake-dsodis-istio-gateway
  hosts:
  - "spark-history.dsodis.domain"
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: gateway-svc-clusterip.our_application_namespace.svc.cluster.local
        port:
          number: 8443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-spark-history
spec:
  host: gateway-svc-clusterip.our_application_namespace.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: SIMPLE
The problem is most likely that we are sending TLS-terminated, i.e. plain HTTP, traffic to the HTTPS backend, which is why we get 503 Service Unavailable when accessing the service through Istio.
The command used to access it is:
curl -vvvv -H "Host: spark-history.dsodis.domain" --resolve "spark-history.dsodis.domain:31390:IP" https://spark-history.dsodis.domain:31390/gateway/default/sparkhistory -k
My question is, how can I tell Istio to route traffic to the backend service using https?
Thanks in advance.
Best regards,
rforberger
As RonnyForberger mentioned in his comment, this can be achieved by creating a DestinationRule that upgrades traffic to the destination service to a TLS connection.
So in this scenario:
The HTTPS request gets TLS-terminated at the Gateway, becoming HTTP.
The HTTP request is then upgraded back to TLS by the DestinationRule, becoming HTTPS.
The HTTPS request reaches the HTTPS backend.

Cannot allow external traffic through ISTIO

I am trying to set up Istio, and I need to whitelist a few ports to allow non-mTLS traffic from the outside world, coming in through a specific port, to a few pods running in a local k8s cluster.
I am unable to find a successful way of doing it.
I tried a ServiceEntry, a Policy, and a DestinationRule, and didn't succeed.
Help is highly appreciated.
version.BuildInfo{Version:"1.1.2", GitRevision:"2b1331886076df103179e3da5dc9077fed59c989", User:"root", Host:"35adf5bb-5570-11e9-b00d-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.1"}
Service Entry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-traffic
  namespace: cloud-infra
spec:
  hosts:
  - "*.cluster.local"
  ports:
  - number: 50506
    name: grpc-xxx
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
You need to add a DestinationRule and a Policy:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: destinationrule-test
spec:
  host: service-name
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
    - port:
        number: 8080
      tls:
        mode: DISABLE
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: policy-test
spec:
  targets:
  - name: service-name
    ports:
    - number: 8080
  peers:
This has been tested with Istio 1.0, but it will probably work with Istio 1.1. It is heavily inspired by the documentation: https://istio.io/help/ops/setup/app-health-check/
From your question, I understood that you want to control your ingress traffic and allow some ports of the services running in your mesh/cluster to be reached from outside, but your configuration is for egress traffic.
In order to control and allow ports to your services from outside, you can follow these steps (a minimal sketch of steps 1 and 2 is shown after step 4).
1. Make sure that containerPort is included in your deployment/pod configuration. For more info
2. You have to have a Service pointing to your backends/pods. For more info about Kubernetes Services.
3. Then, in your Istio-enabled cluster, you have to create a Gateway similar to the configuration below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: your-service-gateway
  namespace: foo-namespace # Use same namespace with backend service
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: HTTP
      protocol: HTTP
    hosts:
    - "*"
4. Then configure routing to your service for traffic entering via this gateway by creating a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: your-service
  namespace: foo-namespace # Use same namespace with backend service
spec:
  hosts:
  - "*"
  gateways:
  - your-service-gateway # define gateway name
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 3000 # Backend service port
        host: your-service # Backend service name
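For steps 1 and 2, a minimal sketch of the backend Deployment and Service (the image and labels are hypothetical; the essential parts are containerPort: 3000 and the Service selector matching the pod labels):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  namespace: foo-namespace # Use same namespace with backend service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-service
  template:
    metadata:
      labels:
        app: your-service
    spec:
      containers:
      - name: your-service
        image: registry.example.com/your-service:latest # hypothetical image
        ports:
        - containerPort: 3000 # step 1: must match the VirtualService destination port
---
apiVersion: v1
kind: Service
metadata:
  name: your-service # Backend service name used by the VirtualService
  namespace: foo-namespace
spec:
  selector:
    app: your-service # step 2: points the Service at the pods above
  ports:
  - name: http
    port: 3000
    targetPort: 3000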
Hope it helps.

TCP Ingress with Istio 0.8 and v1alpha3 Gateway

I am attempting to open a TCP connection into an Istio service mesh using the v1alpha3 routing. I can successfully open a connection with the external load balancer. That traffic is making it into the default IngressGateway as expected; I have verified this with tcpdump on the IngressGateway pod.
Unfortunately, the traffic is never forwarded into the service mesh; it seems to die in the IngressGateway.
The following is an example of my configuration:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: echo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 31400
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo-gateway
spec:
  hosts:
  - "*"
  gateways:
  - echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: echo.default.svc.cluster.local
        port:
          number: 6060
I have verified that the IngressGateway can reach the Service via netcat on the specified port. Running tcpdump on the Service pod alongside its Envoy sidecar indicates that there is never a communication attempt with the pod or proxy.
I've read over the documentation several times and I'm at a loss as to how to proceed. This line from the documentation is suspicious to me:
While Istio will configure the proxy to listen on these ports, it is the responsibility of the user to ensure that external traffic to these ports are allowed into the mesh.
Any thoughts?
You should give the Gateway port a name such as
port:
  name: not_http
  number: 80
  protocol: HTTP
(When I tried to create your Gateway without a port name in Istio 1.0, it was rejected.) Using "not_http" helps to remind us that this is a TCP gateway and won't have access to all of the Istio configuration features.
The VirtualService looks correct. Make sure that you have only one VirtualService for the host "*" (use istioctl get all --all-namespaces).
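Applied to the Gateway from the question, a sketch with the named port would look like this (only the name line is new):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: echo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 31400
      name: not_http # any name will do; the field just has to be present
      protocol: TCP
    hosts:
    - "*"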