How to configure an incoming port in the router for the Istio ingress gateway - kubernetes

I am trying to configure Istio ingress using an OpenShift route. As I understand it, the request path is as follows:
request -> route -> ingress service -> gateway -> virtual service -> app service -> app
So I applied the following config:
Route.yml:
kind: Route
...
spec:
  host: my-app.com
  to:
    kind: Service
    name: ingress-service
    weight: 100
  port:
    targetPort: http
...
ingress-service.yml:
kind: Service
metadata:
  name: ingress-service
  ...
spec:
  ports:
  - name: status-port
    protocol: TCP
    port: 15020
    targetPort: 15020
  - name: http
    protocol: TCP
    port: 9080
    targetPort: 9080
  selector:
    app: ingressgateway
    istio: ingressgateway
  type: ClusterIP
ingress-gateway.yml:
kind: Gateway
metadata:
  name: ingress-gw
  ...
spec:
  servers:
  - hosts:
    - my-app.com
    port:
      name: http
      number: 9080
      protocol: HTTP
  selector:
    istio: ingressgateway
ingress-virtual-service.yml
kind: VirtualService
...
spec:
  hosts:
  - my-app.com
  gateways:
  - ingress-gw
  http:
  - route:
    - destination:
        host: my-app
        port:
          number: 9080   # destination ports must be given as port.number
  exportTo:
  - .
I did not set up port 9080 in the Deployment for the ingressgateway pod, and it works, but only if I send the request to http://my-app.com:80.
Where did I go wrong, and how can I make the app accessible only at http://my-app.com:9080?

The externally exposed port numbers depend on the listening ports of the Router (HAProxy) pod on OpenShift. If you want port 9080 instead of 80, you have to change the ports the Router (HAProxy) pod listens on, or map the port numbers on the load balancer in front of it.
The access flow is as follows:
LB (80, 443)
-> Router pod (80, 443)
-> Ingress gateway pod
-> (through Gateway and VirtualService) backend pod
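If changing the Router pod's listening ports is not an option, a workaround (my own suggestion, not part of the flow above; the Service name is illustrative) is to bypass the Router entirely and expose the gateway pods on 9080 through an additional Service of type LoadBalancer or NodePort:
apiVersion: v1
kind: Service
metadata:
  name: ingress-service-external   # hypothetical name
spec:
  type: LoadBalancer   # or NodePort if no external load balancer is available
  ports:
  - name: http
    port: 9080         # the port clients connect to
    targetPort: 9080   # the Gateway server's port on the ingressgateway pod
    protocol: TCP
  selector:
    istio: ingressgateway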

Related

Istio using IPv6 instead of IPv4

I am using Kubernetes with Minikube on a Windows 10 Home machine to "host" a gRPC service. I am working on getting Istio running in the cluster, but I keep hitting the same issue and cannot figure out why: once everything is up and running, the Istio gateway uses IPv6, seemingly for no reason at all. IPv6 is even disabled on my machine (via regedit) and on the network adapters. My other services are accessible over IPv4. Below are my steps for installing my environment:
minikube start
kubectl create namespace abc
kubectl apply -f service.yml -n abc
kubectl apply -f gateway.yml
istioctl install --set profile=default -y
kubectl label namespace abc istio-injection=enabled
Nothing is accessible over the network at this point, until I run the following in its own terminal:
minikube tunnel
Now I can access the gRPC service directly over IPv4 at 127.0.0.1:5000. However, the gateway is inaccessible at 127.0.0.1:443 and is instead only reachable at [::1]:443.
Here is the service.yml:
apiVersion: v1
kind: Service
metadata:
  name: account-grpc
spec:
  ports:
  - name: grpc
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    service: account
    ipc: grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: account
    ipc: grpc
  name: account-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      service: account
      ipc: grpc
  template:
    metadata:
      labels:
        service: account
        ipc: grpc
    spec:
      containers:
      - image: account-grpc
        name: account-grpc
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
Here is the gateway.yml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /account
    route:
    - destination:
        host: account-grpc
        port:
          number: 5000
And here are the results of kubectl get service istio-ingressgateway -n istio-system -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: ...
  creationTimestamp: "2021-08-27T01:21:21Z"
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.11.1
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
  resourceVersion: "4379"
  uid: b4db0e2f-0f45-4814-b187-287acb28d0c6
spec:
  clusterIP: 10.97.4.216
  clusterIPs:
  - 10.97.4.216
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    nodePort: 32329
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 31913
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 32382
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 127.0.0.1
Changing the port number to port 80 resolved my issue. The problem was that my gRPC service was not using HTTPS, so serving it through the gateway's HTTPS port (443) failed. I will return if I have trouble once I change the service to use HTTPS.
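For reference, here is a minimal sketch of the corrected gateway.yml, with only the server port changed from 443 to 80 (everything else as in the question):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80       # matches the ingressgateway Service's http2 port (targetPort 8080)
      name: grpc
      protocol: GRPC   # plain-text gRPC; move back to 443 once the service uses TLS
    hosts:
    - "*"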

How to open a custom port in Kubernetes

I deployed RabbitMQ on the cluster, and so far it is running well on port 15672: http://test.website.com/
However, I need to open some other ports (25672, 15672, 15674). I have defined them in YAML like this:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    name: rabbitmq
  ports:
  - port: 80
    name: http
    targetPort: 15672
    protocol: TCP
  - port: 443
    name: https
    targetPort: 15672
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  selector:
    matchLabels:
      app: rabbitmq
  strategy:
    type: RollingUpdate
  template:
    metadata:
      name: rabbitmq
      labels:
        app: rabbitmq    # required so the Pods match spec.selector.matchLabels
        name: rabbitmq   # required so the Pods match the Service's selector
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:latest
        ports:
        - containerPort: 15672
          name: http
          protocol: TCP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq
spec:
  hosts:
  - "test.website.com"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: rabbitmq
How do I set this up in the YAML file to open the other ports?
Assuming the Istio Gateway is serving TCP network connections, you can combine both external ports in one Gateway configuration.
Here is an example:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: port1
      protocol: TCP
    hosts:
    - example.myhost.com
  - port:
      number: 443
      name: port2
      protocol: TCP
    hosts:
    - example.myhost.com
The hosts field identifies the list of target addresses exposed by this Gateway.
To route the traffic on to the underlying Pods, specify a VirtualService with a matching set of ports:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq-virtual-service
spec:
  hosts:
  - example.myhost.com
  gateways:
  - gateway
  tcp:
  - match:
    - port: 80
    route:
    - destination:
        host: app.example.svc.cluster.local
        port:
          number: 15672
  - match:
    - port: 443
    route:
    - destination:
        host: app.example.svc.cluster.local
        port:
          number: 15674
The VirtualService above defines the rules to route network traffic arriving on the Gateway's ports 80 and 443 to the backend service ports 15672 and 15674, respectively; substitute your own host (test.website.com) and the rabbitmq Service for the placeholder names.
You can adjust these files to your needs to open other ports.
Take a look: virtualservice-for-a-service-which-exposes-multiple-ports.
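Note that a destination can only reference a Service port that actually exists, and the rabbitmq Service in the question only exposes 80 and 443. Here is a sketch of the Service extended with the other ports from the question (the port names are illustrative; 15674 is RabbitMQ's Web-STOMP port and 25672 its inter-node port):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    name: rabbitmq
  ports:
  - port: 80
    name: http
    targetPort: 15672
    protocol: TCP
  - port: 443
    name: https
    targetPort: 15672
    protocol: TCP
  - port: 15674
    name: web-stomp    # illustrative name
    targetPort: 15674
    protocol: TCP
  - port: 25672
    name: clustering   # illustrative name
    targetPort: 25672
    protocol: TCP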

Serving MQTT over WebSocket in a Kubernetes GCP environment

I'm currently serving MQTT messages over WebSocket to JS clients. I use RabbitMQ to write messages to a queue from a Java backend and have them routed to the client/frontend apps.
I deployed everything on a Kubernetes cluster on Google Cloud Platform, and everything works just fine as long as I expose the RabbitMQ pod directly to the internet with a Kubernetes LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq
spec:
  type: LoadBalancer
  ports:
  - name: http-manager
    nodePort: 30019
    port: 80
    protocol: TCP
    targetPort: 15672
  - name: mqtt-broker
    nodePort: 31571
    port: 1883
    protocol: TCP
    targetPort: 1883
  - name: ws-service
    nodePort: 32048
    port: 15675
    protocol: TCP
    targetPort: 15675
  selector:
    app: rabbitmq
When I try to replace the Kubernetes LoadBalancer with a NodePort Service and expose it through an Ingress and a GCP load balancer, the health probe fails and never recovers:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq-internal
spec:
  ports:
  - name: ws-port
    port: 15675
    protocol: TCP
    targetPort: 15675
  - name: manager-port
    port: 15672
    protocol: TCP
    targetPort: 15672
  selector:
    app: rabbitmq
  sessionAffinity: None
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basictest
  namespace: default
spec:
  rules:
  - host: mqtt-host.dom.cloud
    http:
      paths:
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15675
        path: /ws/*
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15672
        path: /*
The probe is HTTP, so I tried to assign a custom TCP probe, and even to trick GCP by switching to a probe that points to another HTTP port on the same pod, with no success.
I need to use the GCP load balancer to have a unified frontend to which I can assign an SSL certificate for both the HTTPS and WSS protocols.
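One thing that may be worth trying on GKE (an assumption on my part, not something from the question) is attaching a BackendConfig to the NodePort Service, so that the GCP load balancer health-checks a port and path on which RabbitMQ actually answers 200 instead of the default / on the serving port. The names and the request path below are illustrative:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: rabbitmq-hc    # hypothetical name
spec:
  healthCheck:
    type: HTTP
    port: 15672        # probe the management port...
    requestPath: /     # ...on a path that returns 200; adjust for your setup
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq-internal
  annotations:
    # tell the GCE ingress controller to use the BackendConfig above
    cloud.google.com/backend-config: '{"default": "rabbitmq-hc"}'
spec:
  type: NodePort
  selector:
    app: rabbitmq
  ports:
  - name: ws-port
    port: 15675
    protocol: TCP
    targetPort: 15675
  - name: manager-port
    port: 15672
    protocol: TCP
    targetPort: 15672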

How to configure Istio's virtualservice for a service which exposes multiple ports?

I have a container which exposes multiple ports, so the Kubernetes Service configured for the deployment looks like the following:
kind: Service
apiVersion: v1
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    name: myapp
  ports:
  - protocol: TCP
    port: 5555
    targetPort: 5555
  - protocol: TCP
    port: 5556
    targetPort: 5556
I use Istio to manage routing and to expose this service via the Istio ingress gateway.
We have one gateway for port 80; do we have to create two different gateways for the same host, with two different VirtualServices?
I want to configure routing so that example.myhost.com's port 80 goes to the service's port 5556, and some other port, say 8088, goes to port 5555.
Is that possible with one VirtualService?
Assuming the Istio Gateway is serving TCP network connections, you can combine the two external ports, 80 and 8088, in one Gateway configuration:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: port1
      protocol: TCP
    hosts:
    - example.myhost.com
  - port:
      number: 8088
      name: port2
      protocol: TCP
    hosts:
    - example.myhost.com
The hosts field identifies the list of target addresses exposed by this Gateway.
To route the traffic on to the underlying Pods, you can specify a VirtualService with a matching set of ports:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-virtual-service
spec:
  hosts:
  - example.myhost.com
  gateways:
  - myapp-gateway
  tcp:
  - match:
    - port: 80
    route:
    - destination:
        host: myapp.prod.svc.cluster.local
        port:
          number: 5556
  - match:
    - port: 8088
    route:
    - destination:
        host: myapp.prod.svc.cluster.local
        port:
          number: 5555
The VirtualService above defines the rules to route network traffic arriving on ports 80 and 8088 for example.myhost.com to the myapp service ports 5556 and 5555, respectively.
I encourage you to read more about Istio's TCPRoute capabilities and how to apply them.
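One caveat: traffic can only arrive on port 8088 if the istio-ingressgateway Service itself exposes that port. A sketch of one way to add it, assuming an istioctl/IstioOperator installation (overriding the ports list replaces the defaults, so keep any default entries you still need):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          - name: http2       # keep the defaults you still need...
            port: 80
            targetPort: 8080
          - name: tcp-port2   # ...and add the new TCP port
            port: 8088
            targetPort: 8088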

Open an external port into Istio - problem only on docker-for-mac

Update: this problem occurs only on docker-for-mac.
I have been chasing this for some time now: how do you open an external port into Istio?
Note that all of this works on port 80; why not on port 8080?
Using Helm, I changed the gateways values in values.yaml:
- port: 80
  targetPort: 80
  name: http2
  # nodePort: 31380
- port: 8080
  targetPort: 8080
  name: http2-testport
  # nodePort: 31480
I have created an Istio Gateway:
# Istio - Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http-80
      protocol: HTTP
    hosts:
    - "my-service.default.svc.cluster.local"
  - port:
      number: 8080
      name: http-8080
      protocol: HTTP
    hosts:
    - "my-service.default.svc.cluster.local"
Port 8080 is open, according to kubectl get svc -n istio-system:
istio-ingressgateway LoadBalancer 10.106.146.89 localhost 80:31342/TCP,443:31390/TCP,31400:31400/TCP,15011:31735/TCP,8060:32568/TCP,8080:32164/TCP,853:30443/TCP,15030:
You have to define a VirtualService to specify where (to which microservice) the ingress traffic must be directed; see https://istio.io/docs/tasks/traffic-management/ingress/#configuring-ingress-using-an-istio-gateway.
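For example, a minimal VirtualService sketch for the Gateway above (the destination service name and port are assumptions, matching the host used in the Gateway):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - "my-service.default.svc.cluster.local"
  gateways:
  - helloworld-gateway
  http:
  - route:
    - destination:
        host: my-service.default.svc.cluster.local
        port:
          number: 80   # assumed Service port; adjust to your Service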
Also try sending the Host header with your request, e.g. curl -H "Host: my-service.default.svc.cluster.local".
See https://github.com/istio/istio.github.io/pull/2181.