How does the Gateway port definition work? - kubernetes

I have an example istio cluster on AKS with the default ingress gateway. Everything works as expected; I'm just trying to understand how. The Gateway is defined like so:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: some-config-namespace
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - uk.bookinfo.com
    - eu.bookinfo.com
    tls:
      httpsRedirect: true # sends 301 redirect for http requests
  - port:
      number: 443
      name: https-443
      protocol: HTTPS
    hosts:
    - uk.bookinfo.com
    - eu.bookinfo.com
    tls:
      mode: SIMPLE # enables HTTPS on this port
      serverCertificate: /etc/certs/servercert.pem
      privateKey: /etc/certs/privatekey.pem
Reaching the site on https://uk.bookinfo.com works fine. However, when I look at the LB and the Service that points to the ingressgateway pods, I see this:
LB-IP:443 -> CLUSTER-IP:443 -> istio-ingressgateway:8443
kind: Service
spec:
  ports:
  - name: http2
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30804
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443
    nodePort: 31843
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  clusterIP: 10.2.138.74
  type: LoadBalancer
Since the targetPort for the istio-ingressgateway pods is 8443, how does the Gateway definition work when it defines the port number as 443?

As mentioned here
port: The port of this service
targetPort: The target port on the pod(s) to forward traffic to
As far as I know, targetPort: 8443 points to the envoy sidecar, so if I understand correctly, envoy listens on 8080 for http and 8443 for https.
There is an example in envoy documentation.
So it goes like this:
LB-IP:443 -> CLUSTER-IP:443 -> istio-ingressgateway:443 -> envoy-sidecar:8443
LB-IP:80 -> CLUSTER-IP:80 -> istio-ingressgateway:80 -> envoy-sidecar:8080
For example, for http, if you check your ingress-gateway pod with netstat without any gateway configured, there isn't anything listening on port 8080:
kubectl exec -ti istio-ingressgateway-86f88b6f6-r8mjt -n istio-system -c istio-proxy -- /bin/bash
istio-proxy@istio-ingressgateway-86f88b6f6-r8mjt:/$ netstat -lnt | grep 8080
Let's now create an http gateway with the yaml below.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-gw
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
And check with netstat again:
kubectl exec -ti istio-ingressgateway-86f88b6f6-r8mjt -n istio-system -c istio-proxy -- /bin/bash
istio-proxy@istio-ingressgateway-86f88b6f6-r8mjt:/$ netstat -lnt | grep 8080
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
As you can see, we have configured the gateway on port 80, but inside the ingressgateway pod we can see that it's listening on port 8080.
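If istioctl is available, the same listener can also be confirmed without exec'ing into the pod, for example (the --port filter is an istioctl flag; the exact output columns vary between Istio versions, but the listener should show up bound to 0.0.0.0:8080):
istioctl proxy-config listeners istio-ingressgateway-86f88b6f6-r8mjt -n istio-system --port 8080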

I think this mechanism of istio is a bit confusing.
In my understanding, the port defined by the Gateway should be the port the pod actually listens on, and the Service should then decide which port is exposed.
Although the current mechanism is convenient for users, it introduces some confusion and uncertainty.
If istio wants to keep this mechanism, I suggest improving it: when creating a Gateway, associate it with the ingress Service instead of the pod, and define the port explicitly as the Service port. If the port does not exist on the Service, either update the Service to open it or refuse to create the Gateway.

Related

Using the external IP service of the load balancer for different pod Kubernetes

I have deployed RabbitMQ in Kubernetes using a service with the load balancer type. When creating a service, an external IP is created. Could you please tell me if I can bind another deployment to this IP with other ports? Thanks.
It is possible; you just have to create a service with multiple ports, for example:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: any other port
    port: <port-number>
    targetPort: <target-port>
  selector:
    app: app
  type: LoadBalancer
And the output will be similar to this:
$ kubectl get svc
service-name LoadBalancer <Internal-IP> <External-IP> 80:30870/TCP,443:32602/TCP,<other-port>:32388/TCP

Unable to open Istio ingress-gateway for gRPC

This question is about my inability to connect a gRPC client to a gRPC service hosted in Kubernetes (AWS EKS), with an Istio ingress gateway.
On the kubernetes side: I have a container with a Go process listening on port 8081 for gRPC. The port is exposed at the container level. I define a kubernetes service and expose 8081. I define an istio gateway which selects istio: ingressgateway and opens port 8081 for gRPC. Finally I define an istio virtualservice with a route for anything on port 8081.
On the client side: I have a Go client which can send gRPC requests to the service.
It works fine when I kubectl port-forward -n mynamespace service/myservice 8081:8081 and call my client via client -url localhost:8081.
When I close the port forward and call with client -url [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081, my client fails to connect. (That url is the output of kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' with :8081 appended.)
Logs:
I looked at the istio-system/istio-ingressgateway service logs. I do not see an attempted connection.
I do see the bookinfo connections I made earlier when going over the istio bookinfo tutorial. That tutorial worked, I was able to open a browser and see the bookinfo product page, and the ingressgateway logs show "GET /productpage HTTP/1.1" 200. So the Istio ingress-gateway works, it's just that I don't know how to configure it for a new gRPC endpoint.
Istio's Ingress-Gateway
kubectl describe service -n istio-system istio-ingressgateway
outputs the following, which I suspect is the problem: port 8081 is not listed despite my efforts to open it. I'm puzzled by how many ports are opened by default; I didn't open them (comments on how to close the ports I don't use would be welcome, but they aren't the reason for this SO question).
Name: istio-ingressgateway
Namespace: istio-system
Labels: [redacted]
Annotations: [redacted]
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: [redacted]
LoadBalancer Ingress: [redacted]
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31125/TCP
Endpoints: 192.168.101.136:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 30717/TCP
Endpoints: 192.168.101.136:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 31317/TCP
Endpoints: 192.168.101.136:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 31102/TCP
Endpoints: 192.168.101.136:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30206/TCP
Endpoints: 192.168.101.136:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So I think I did not properly open port 8081 for GRPC. What other logs or tests can I run to help identify where this is coming from?
Here is the relevant yaml:
Kubernetes Istio virtualservice: whose intent is to route anything on port 8081 to myservice
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*"
  gateways:
  - myservice
  http:
  - match:
    - port: 8081
    route:
    - destination:
        host: myservice
Kubernetes Istio gateway: whose intent is to open port 8081 for GRPC
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - name: myservice-plaintext
    port:
      number: 8081
      name: grpc-svc-plaintext
      protocol: GRPC
    hosts:
    - "*"
Kubernetes service: showing port 8081 is exposed at the service level, which I confirmed with the port-forward test mentioned earlier
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    name: grpc-svc-plaintext
Kubernetes deployment: showing port 8081 is exposed at the container level, which I confirmed with the port-forward test mentioned earlier
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: [redacted]
        ports:
        - containerPort: 8081
Re checking DNS works on the client:
getent hosts [redacted]-[redacted].us-west-2.elb.amazonaws.com
outputs 3 IPs; I'm assuming that's good.
[IP_1 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_2 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_3 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
Checking Istio Ingressgateway's routes:
istioctl proxy-status istio-ingressgateway-[pod name]
istioctl proxy-config routes istio-ingressgateway-[pod name]
returns
Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 23 Sep 2020 13:59:41)
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
http.8081 * /* myservice.mynamespace
* /healthz/ready*
* /stats/prometheus*
Port 8081 is routed to myservice.mynamespace, seems good to me.
UPDATE 1:
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case.
The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
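For reference, a rough sketch of what that working port-80 variant looks like, keeping the same names as above (the exact match block and the explicit destination port are my paraphrase, not copied config):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - name: myservice-plaintext
    port:
      number: 80            # a port the default ingressgateway Service actually exposes
      name: grpc-svc-plaintext
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*"
  gateways:
  - myservice
  http:
  - match:
    - port: 80              # match the gateway port, not the backend port
    route:
    - destination:
        host: myservice
        port:
          number: 8081      # backend gRPC port on the Kubernetes service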
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case. The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
I'm not sure how exactly you tried to add a custom port to the ingress gateway, but it's possible.
As far as I checked here, it can be done in 3 ways; here are the options, with links to examples provided by @A_Suh, @Ryota and @peppered (a rough IstioOperator sketch follows below).
Kubectl edit
Helm
Istio Operator
Additional resources:
How to create custom istio ingress gateway controller?
How to configure ingress gateway in istio?
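As a rough sketch of the Istio Operator option (field names follow the IstioOperator API; the extra port name and numbers are illustrative, and depending on the Istio version you may want to repeat the default ports alongside the new one):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # keep the defaults you rely on ...
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https
          # ... and add the custom gRPC port
          - port: 8081
            targetPort: 8081
            name: grpc-custom
It would then be applied with something like istioctl install -f <file>.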
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I see you have already created a new question here, so let's just move there.
I have added the port to the ingress gateway successfully, but I still couldn't get the client connected to the server. For me too, port-forwarding works, but when I try to connect through the ingress I get the error below. Here the istio ingressgateway is on GKE, so it's using the global HTTPS load balancer.
Jun 14, 2021 8:28:08 PM com.manning.mss.ch12.grpc.sample01.InventoryClient updateInventory
WARNING: RPC failed: Status{code=INTERNAL, description=http2 exception, cause=io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: First received frame was not SETTINGS. Hex dump for first 5 bytes: 485454502f
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:85)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.verifyFirstFrameIsSettings(Http2ConnectionHandler.java:350)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:251)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:450)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)

Failing to establish mqtt connection to VerneMQ cluster in k8s behind Istio proxy

I'm setting up an on-prem k8s cluster. For tests I use a single-node cluster on a VM, set up with kubeadm.
My requirements include running an MQTT cluster (VerneMQ) in k8s with external access via Ingress (istio).
Without deploying ingress, I can connect (mosquitto_sub) via NodePort or LoadBalancer service.
Istio was installed using istioctl install --set profile=demo
The problem
I am trying to access the VerneMQ broker from outside the cluster. Ingress (Istio Gateway) seems like the perfect solution in this case, but I can't establish a TCP connection to the broker (neither via the ingress IP, nor directly via the svc/vernemq IP).
So, how do I establish this TCP connection from an external client through the Istio ingress?
What I tried
I've created two namespaces:
exposed-with-istio – with istio proxy injection
exposed-with-loadbalancer - without istio proxy
Within the exposed-with-loadbalancer namespace I deployed vernemq with a LoadBalancer Service. It works; this is how I know VerneMQ can be accessed (with mosquitto_sub -h <host> -p 1883 -t hello, where host is the ClusterIP or ExternalIP of svc/vernemq). The dashboard is accessible at host:8888/status, and 'Clients online' increments on the dashboard.
Within exposed-with-istio I deployed vernemq with a ClusterIP Service, Istio's Gateway and a VirtualService.
Immediately after istio-proxy injection, mosquitto_sub can't subscribe through the svc/vernemq IP, nor through the istio ingress (gateway) IP. The command just hangs forever, constantly retrying.
Meanwhile, the vernemq dashboard endpoint is accessible through both the service IP and the istio gateway.
I guess istio proxy must be configured for mqtt to work.
Here is istio-ingressgateway service:
kubectl describe svc/istio-ingressgateway -n istio-system
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=installed-state
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.7.0
release=istio
Annotations: Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: 10.100.213.45
LoadBalancer Ingress: 192.168.100.240
Port: status-port 15021/TCP
TargetPort: 15021/TCP
Port: http2 80/TCP
TargetPort: 8080/TCP
Port: https 443/TCP
TargetPort: 8443/TCP
Port: tcp 31400/TCP
TargetPort: 31400/TCP
Port: tls 15443/TCP
TargetPort: 15443/TCP
Session Affinity: None
External Traffic Policy: Cluster
...
Here are debug logs from istio-proxy
kubectl logs svc/vernemq -n test istio-proxy
2020-08-24T07:57:52.294477Z debug envoy filter original_dst: New connection accepted
2020-08-24T07:57:52.294516Z debug envoy filter tls inspector: new connection accepted
2020-08-24T07:57:52.294532Z debug envoy filter http inspector: new connection accepted
2020-08-24T07:57:52.294580Z debug envoy filter [C5645] new tcp proxy session
2020-08-24T07:57:52.294614Z debug envoy filter [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294638Z debug envoy pool creating a new connection
2020-08-24T07:57:52.294671Z debug envoy pool [C5646] connecting
2020-08-24T07:57:52.294684Z debug envoy connection [C5646] connecting to 127.0.0.1:1883
2020-08-24T07:57:52.294725Z debug envoy connection [C5646] connection in progress
2020-08-24T07:57:52.294746Z debug envoy pool queueing request due to no available connections
2020-08-24T07:57:52.294750Z debug envoy conn_handler [C5645] new connection
2020-08-24T07:57:52.294768Z debug envoy connection [C5646] delayed connection error: 111
2020-08-24T07:57:52.294772Z debug envoy connection [C5646] closing socket: 0
2020-08-24T07:57:52.294783Z debug envoy pool [C5646] client disconnected
2020-08-24T07:57:52.294790Z debug envoy filter [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294794Z debug envoy connection [C5645] closing data_to_write=0 type=1
2020-08-24T07:57:52.294796Z debug envoy connection [C5645] closing socket: 1
2020-08-24T07:57:52.294864Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=12
2020-08-24T07:57:52.294882Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=16
2020-08-24T07:57:52.294885Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=20
2020-08-24T07:57:52.294887Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=24
2020-08-24T07:57:52.294891Z debug envoy conn_handler [C5645] adding to cleanup list
2020-08-24T07:57:52.294949Z debug envoy pool [C5646] connection destroyed
These are logs from istio-ingressgateway. IP 10.244.243.205 belongs to the VerneMQ pod, not the service (that's probably intended).
2020-08-24T08:48:31.536593Z debug envoy filter [C13236] new tcp proxy session
2020-08-24T08:48:31.536702Z debug envoy filter [C13236] Creating connection to cluster outbound|1883||vernemq.test.svc.cluster.local
2020-08-24T08:48:31.536728Z debug envoy pool creating a new connection
2020-08-24T08:48:31.536778Z debug envoy pool [C13237] connecting
2020-08-24T08:48:31.536784Z debug envoy connection [C13237] connecting to 10.244.243.205:1883
2020-08-24T08:48:31.537074Z debug envoy connection [C13237] connection in progress
2020-08-24T08:48:31.537116Z debug envoy pool queueing request due to no available connections
2020-08-24T08:48:31.537138Z debug envoy conn_handler [C13236] new connection
2020-08-24T08:48:31.537181Z debug envoy connection [C13237] connected
2020-08-24T08:48:31.537204Z debug envoy pool [C13237] assigning connection
2020-08-24T08:48:31.537221Z debug envoy filter TCP:onUpstreamEvent(), requestedServerName:
2020-08-24T08:48:31.537880Z debug envoy misc Unknown error code 104 details Connection reset by peer
2020-08-24T08:48:31.537907Z debug envoy connection [C13237] remote close
2020-08-24T08:48:31.537913Z debug envoy connection [C13237] closing socket: 0
2020-08-24T08:48:31.537938Z debug envoy pool [C13237] client disconnected
2020-08-24T08:48:31.537953Z debug envoy connection [C13236] closing data_to_write=0 type=0
2020-08-24T08:48:31.537958Z debug envoy connection [C13236] closing socket: 1
2020-08-24T08:48:31.538156Z debug envoy conn_handler [C13236] adding to cleanup list
2020-08-24T08:48:31.538191Z debug envoy pool [C13237] connection destroyed
My configurations
vernemq-istio-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-istio
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
  namespace: exposed-with-istio
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["endpoints", "deployments", "replicasets", "pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
subjects:
- kind: ServiceAccount
  name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-istio
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: ClusterIP
  ports:
  - port: 4369
    name: empd
  - port: 44053
    name: vmq
  - port: 8888
    name: http-dashboard
  - port: 1883
    name: tcp-mqtt
    targetPort: 1883
  - port: 9001
    name: tcp-mqtt-ws
    targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vernemq
  namespace: exposed-with-istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
      - name: vernemq
        image: vernemq/vernemq
        ports:
        - containerPort: 1883
          name: tcp-mqtt
          protocol: TCP
        - containerPort: 8080
          name: tcp-mqtt-ws
        - containerPort: 8888
          name: http-dashboard
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100-9109 # shortened
        env:
        - name: DOCKER_VERNEMQ_ACCEPT_EULA
          value: "yes"
        - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
          value: "on"
        - name: DOCKER_VERNEMQ_listener__tcp__allowed_protocol_versions
          value: "3,4,5"
        - name: DOCKER_VERNEMQ_allow_register_during_netsplit
          value: "on"
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
          value: "1"
vernemq-loadbalancer-service.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-loadbalancer
---
... the rest is the same except for namespace and service type ...
istio.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: vernemq-destination
  namespace: exposed-with-istio
spec:
  host: vernemq.exposed-with-istio.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vernemq-gateway
  namespace: exposed-with-istio
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vernemq-virtualservice
  namespace: exposed-with-istio
spec:
  hosts:
  - "*"
  gateways:
  - vernemq-gateway
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 8888
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 1883
Does the Kiali screenshot imply that the ingressgateway only forwards HTTP traffic to the service and drops all TCP?
UPD
Following the suggestion, here's the output:
But your envoy logs reveal a problem: envoy misc Unknown error code 104 details Connection reset by peer and envoy pool [C5648] client disconnected.
istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio
first with | grep 8888 and | grep 1883
0.0.0.0 8888 App: HTTP Route: 8888
0.0.0.0 8888 ALL PassthroughCluster
10.107.205.214 1883 ALL Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
... Cluster: outbound|853||istiod.istio-system.svc.cluster.local
10.107.205.214 1883 ALL Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.108.218.134 3000 App: HTTP Route: grafana.istio-system.svc.cluster.local:3000
10.108.218.134 3000 ALL Cluster: outbound|3000||grafana.istio-system.svc.cluster.local
10.107.205.214 4369 App: HTTP Route: vernemq.exposed-with-istio.svc.cluster.local:4369
10.107.205.214 4369 ALL Cluster: outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 8888 App: HTTP Route: 8888
0.0.0.0 8888 ALL PassthroughCluster
10.107.205.214 9001 ALL Cluster: outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 9090 App: HTTP Route: 9090
0.0.0.0 9090 ALL PassthroughCluster
10.96.0.10 9153 App: HTTP Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0 9411 App: HTTP ...
0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 App: TCP TLS Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls; App: TCP TLS Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls; App: TCP TLS Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 App: TCP TLS Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15010 App: HTTP Route: 15010
0.0.0.0 15010 ALL PassthroughCluster
10.106.166.154 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0 15014 App: HTTP Route: 15014
0.0.0.0 15014 ALL PassthroughCluster
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
10.100.213.45 15021 App: HTTP Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.100.213.45 15021 ALL Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
10.100.213.45 15443 ALL Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.105.193.108 15443 ALL Cluster: outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
0.0.0.0 20001 App: HTTP Route: 20001
0.0.0.0 20001 ALL PassthroughCluster
10.100.213.45 31400 ALL Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.107.205.214 44053 App: HTTP Route: vernemq.exposed-with-istio.svc.cluster.local:44053
10.107.205.214 44053 ALL Cluster: outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
Furthermore please run: istioctl proxy-config endpoints and istioctl proxy-config routes.
istioctl proxy-config endpoints vernemq-c945876f-tvvz7.exposed-with-istio
| grep 1883
10.244.243.206:1883 HEALTHY OK outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883 HEALTHY OK inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.101.200.113:9411 HEALTHY OK zipkin
10.106.166.154:15012 HEALTHY OK xds-grpc
10.211.55.14:6443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.244.243.193:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.193:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.195:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.195:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.197:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.244.243.197:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.244.243.197:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.244.243.197:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.244.243.197:15053 HEALTHY OK outbound|853||istiod.istio-system.svc.cluster.local
10.244.243.198:8080 HEALTHY OK outbound|80||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:8443 HEALTHY OK outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:15443 HEALTHY OK outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.199:8080 HEALTHY OK outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:8443 HEALTHY OK outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15021 HEALTHY OK outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15443 HEALTHY OK outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:31400 HEALTHY OK outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.201:3000 HEALTHY OK outbound|3000||grafana.istio-system.svc.cluster.local
10.244.243.202:9411 HEALTHY OK outbound|9411||zipkin.istio-system.svc.cluster.local
10.244.243.202:16686 HEALTHY OK outbound|80||tracing.istio-system.svc.cluster.local
10.244.243.203:9090 HEALTHY OK outbound|9090||kiali.istio-system.svc.cluster.local
10.244.243.203:20001 HEALTHY OK outbound|20001||kiali.istio-system.svc.cluster.local
10.244.243.204:9090 HEALTHY OK outbound|9090||prometheus.istio-system.svc.cluster.local
10.244.243.206:1883 HEALTHY OK outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:4369 HEALTHY OK outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:8888 HEALTHY OK outbound|8888||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:9001 HEALTHY OK outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:44053 HEALTHY OK outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883 HEALTHY OK inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:4369 HEALTHY OK inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:8888 HEALTHY OK inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:9001 HEALTHY OK inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
127.0.0.1:44053 HEALTHY OK inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
unix://./etc/istio/proxy/SDS HEALTHY OK sds-grpc
istioctl proxy-config routes vernemq-c945876f-tvvz7.exposed-with-istio
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
istio-ingressgateway.istio-system.svc.cluster.local:15021 istio-ingressgateway.istio-system /*
istiod.istio-system.svc.cluster.local:853 istiod.istio-system /*
20001 kiali.istio-system /*
15010 istiod.istio-system /*
15014 istiod.istio-system /*
vernemq.exposed-with-istio.svc.cluster.local:4369 vernemq /*
vernemq.exposed-with-istio.svc.cluster.local:44053 vernemq /*
kube-dns.kube-system.svc.cluster.local:9153 kube-dns.kube-system /*
8888 vernemq /*
80 istio-egressgateway.istio-system /*
80 istio-ingressgateway.istio-system /*
80 tracing.istio-system /*
grafana.istio-system.svc.cluster.local:3000 grafana.istio-system /*
9411 zipkin.istio-system /*
9090 kiali.istio-system /*
9090 prometheus.istio-system /*
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local * /*
* /stats/prometheus*
InboundPassthroughClusterIpv4 * /*
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local * /*
InboundPassthroughClusterIpv4 * /*
* /healthz/ready*
First I would recommend enabling envoy debug logging for the pod:
kubectl exec -it <pod-name> -c istio-proxy -- curl -X POST http://localhost:15000/logging?level=trace
Now follow the istio sidecar logs with:
kubectl logs <pod-name> -c istio-proxy -f
Update
Since your envoy proxy logs a problem on both sides, the traffic reaches the proxies but the connection cannot be established.
Regarding port 15006: in istio all traffic is routed through the envoy proxy (istio-sidecar). For that, istio maps every port onto 15006 for inbound traffic (everything coming into the sidecar from somewhere) and onto 15001 for outbound traffic (from the sidecar to somewhere).
More on that here: https://istio.io/latest/docs/ops/diagnostic-tools/proxy-cmd/#deep-dive-into-envoy-configuration
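For example, the inbound capture listener can be inspected on its own with something like the following (the --port filter is an istioctl flag; output formatting differs per version):
istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio --port 15006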
The config from istioctl proxy-config listeners <pod-name> looks good so far. Let's try to find the error.
Istio is sometimes very strict with its configuration requirements. To rule that out, could you please first adjust your service to type: ClusterIP and add a targetPort for the mqtt port:
- port: 1883
  name: tcp-mqtt
  targetPort: 1883
Furthermore, please run istioctl proxy-config endpoints <pod-name> and istioctl proxy-config routes <pod-name>.
I had the same problem when using VerneMQ over an istio gateway. The problem was that the VerneMQ process resets the TCP connection if the listener.tcp.default contains the default value of 127.0.0.1:1883. I fixed it by using DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT with "0.0.0.0:1883".
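In the Deployment from the question, that would be one more env entry along these lines (assuming the stock vernemq/vernemq image picks up this variable, as described above):
        - name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
          value: "0.0.0.0:1883"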
Based on your configuration I would say you would have to use ServiceEntry to enable communication between pods in the mesh and pods outside the mesh.
ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.
You use a service entry to add an entry to the service registry that Istio maintains internally. After you add the service entry, the Envoy proxies can send traffic to the service as if it were a service in your mesh. Configuring service entries allows you to manage traffic for services running outside of the mesh.
For more information and more examples, visit the istio documentation about service entries here and here.
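A minimal ServiceEntry sketch for such an external service (the hostname, port and resolution below are placeholders, not values taken from the question):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-broker
  namespace: exposed-with-istio
spec:
  hosts:
  - broker.example.com        # hypothetical external hostname
  location: MESH_EXTERNAL
  ports:
  - number: 1883
    name: tcp-mqtt
    protocol: TCP
  resolution: DNS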
Let me know if you have any more questions.
Here's my Gateway, VirtualService and Service config that work without issues
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mqtt-domain-tld-gw
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - mqtt.domain.tld
  - port:
      number: 15443
      name: tls
      protocol: TLS
    hosts:
    - mqtt.domain.tld
    tls:
      mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mqtt-domain-tld-vs
spec:
  hosts:
  - mqtt.domain.tld
  gateways:
  - mqtt-domain-tld-gw
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: mqtt
        port:
          number: 1883
  tls:
  - match:
    - port: 15443
      sniHosts:
      - mqtt.domain.tld
    route:
    - destination:
        host: mqtt
        port:
          number: 8883
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mqtt
  name: mqtt
spec:
  ports:
  - name: tcp-mqtt
    port: 1883
    protocol: TCP
    targetPort: 1883
    appProtocol: tcp
  - name: tls-mqtt
    port: 8883
    protocol: TCP
    targetPort: 8883
    appProtocol: tls
  selector:
    app: mqtt
  type: LoadBalancer

Istio mTLS working just between some services even though tls-check prints STATUS OK for everyone

I am trying to enable mTLS in a mesh that I already have working with istio's sidecars.
The problem I have is that connections only work up to a certain point, and then they fail.
This is how the services are set up right now with my failing implementation of mTLS (simplified):
Istio IngressGateway -> NGINX pod -> API Gateway -> Service A -> [ Database ] -> Service B
First thing to note is that I was using an NGINX pod as a load balancer to proxy_pass my requests to my API Gateway or my frontend page. I tried keeping that without the istio IngressGateway, but I wasn't able to make it work. Then I tried to use the Istio IngressGateway and connect directly to the API Gateway with a VirtualService, but that also fails for me. So I'm leaving it like this for the moment, because it was the only way my requests reached the API Gateway successfully.
Another thing to note is that Service A first connects to a Database outside the mesh and then makes a request to Service B which is inside the mesh and with mTLS enabled.
NGINX, API Gateway, Service A and Service B are within the mesh with mTLS enabled and "istioctl authn tls-check" shows that status is OK.
NGINX and API Gateway are in a namespace called "gateway", Database is in "auth" and Service A and Service B are in another one called "api".
Istio IngressGateway is in namespace "istio-system" right now.
So the problem is that everything works if I set STRICT mode on the gateway namespace and PERMISSIVE on api, but once I set STRICT on api, I see the request getting into Service A, and it then fails to send the request to Service B, returning a 500.
This is the output when it fails that I can see in the istio-proxy container in the Service A pod:
api/serviceA[istio-proxy]: [2019-09-02T12:59:55.366Z] "- - -" 0 - "-" "-" 1939 0 2 - "-" "-" "-" "-" "10.20.208.248:4567" outbound|4567||database.auth.svc.cluster.local 10.20.128.44:35366 10.20.208.248:4567
10.20.128.44:35364 -
api/serviceA[istio-proxy]: [2019-09-02T12:59:55.326Z] "POST /api/my-call HTTP/1.1" 500 - "-" "-" 74 90 60 24 "10.90.0.22, 127.0.0.1, 127.0.0.1" "PostmanRuntime/7.15.0" "14d93a85-192d-4aa7-aa45-1501a71d4924" "serviceA.api.svc.cluster.local:9090" "127.0.0.1:9090" inbound|9090|http-serviceA|serviceA.api.svc.cluster.local - 10.20.128.44:9090 127.0.0.1:0 outbound_.9090_._.serviceA.api.svc.cluster.local
No messages in ServiceB though.
Currently, I do not have a global MeshPolicy, and I am setting Policy and DestinationRule per namespace
Policy:
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "default"
namespace: gateway
spec:
peers:
- mtls:
mode: STRICT
---
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "default"
namespace: auth
spec:
peers:
- mtls:
mode: STRICT
---
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "default"
namespace: api
spec:
peers:
- mtls:
mode: STRICT
DestinationRule:
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "mutual-gateway"
namespace: "gateway"
spec:
host: "*.gateway.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "mutual-api"
namespace: "api"
spec:
host: "*.api.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "mutual-auth"
namespace: "auth"
spec:
host: "*.auth.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
Then I have some DestinationRules to disable mTLS for the Database (I have some other services in the same namespace that I do want mTLS enabled for) and for the Kubernetes API:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: "myDatabase"
  namespace: "auth"
spec:
  host: "database.auth.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: "k8s-api-server"
  namespace: default
spec:
  host: "kubernetes.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
Then I have my IngressGateway like so:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my-api.example.com
    tls:
      httpsRedirect: true # sends 301 redirect for http requests
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - my-api.example.com
And lastly, my VirtualServices:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ingress-nginx
  namespace: gateway
spec:
  hosts:
  - my-api.example.com
  gateways:
  - ingress-gateway.istio-system
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: ingress.gateway.svc.cluster.local # this is NGINX pod
    corsPolicy:
      allowOrigin:
      - my-api.example.com
      allowMethods:
      - POST
      - GET
      - DELETE
      - PATCH
      - OPTIONS
      allowCredentials: true
      allowHeaders:
      - "*"
      maxAge: "24h"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api-gateway
  namespace: gateway
spec:
  hosts:
  - my-api.example.com
  - api-gateway.gateway.svc.cluster.local
  gateways:
  - mesh
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: api-gateway.gateway.svc.cluster.local
    corsPolicy:
      allowOrigin:
      - my-api.example.com
      allowMethods:
      - POST
      - GET
      - DELETE
      - PATCH
      - OPTIONS
      allowCredentials: true
      allowHeaders:
      - "*"
      maxAge: "24h"
One thing that I don't understand is why I have to create a VirtualService for my API Gateway and why I have to use "mesh" in the gateways block. If I remove this block, my request doesn't reach the API Gateway; if I keep it, it works and my requests even get to the next service (Service A), but not to the one after that.
Thanks for the help. I am really stuck with this.
Dump of listeners of ServiceA:
ADDRESS PORT TYPE
10.20.128.44 9090 HTTP
10.20.253.21 443 TCP
10.20.255.77 80 TCP
10.20.240.26 443 TCP
0.0.0.0 7199 TCP
10.20.213.65 15011 TCP
0.0.0.0 7000 TCP
10.20.192.1 443 TCP
0.0.0.0 4568 TCP
0.0.0.0 4444 TCP
10.20.255.245 3306 TCP
0.0.0.0 7001 TCP
0.0.0.0 9160 TCP
10.20.218.226 443 TCP
10.20.239.14 42422 TCP
10.20.192.10 53 TCP
0.0.0.0 4567 TCP
10.20.225.206 443 TCP
10.20.225.166 443 TCP
10.20.207.244 5473 TCP
10.20.202.47 44134 TCP
10.20.227.251 3306 TCP
0.0.0.0 9042 TCP
10.20.207.141 3306 TCP
0.0.0.0 15014 TCP
0.0.0.0 9090 TCP
0.0.0.0 9091 TCP
0.0.0.0 9901 TCP
0.0.0.0 15010 TCP
0.0.0.0 15004 TCP
0.0.0.0 8060 TCP
0.0.0.0 8080 TCP
0.0.0.0 20001 TCP
0.0.0.0 80 TCP
0.0.0.0 10589 TCP
10.20.128.44 15020 TCP
0.0.0.0 15001 TCP
0.0.0.0 9000 TCP
10.20.219.237 9090 TCP
10.20.233.60 80 TCP
10.20.200.156 9100 TCP
10.20.204.239 9093 TCP
0.0.0.0 10055 TCP
0.0.0.0 10054 TCP
0.0.0.0 10251 TCP
0.0.0.0 10252 TCP
0.0.0.0 9093 TCP
0.0.0.0 6783 TCP
0.0.0.0 10250 TCP
10.20.217.136 443 TCP
0.0.0.0 15090 HTTP
Dump clusters in json format: https://pastebin.com/73zmAPWg
Dump listeners in json format: https://pastebin.com/Pk7ddPJ2
Curl command from serviceA container to serviceB:
/opt/app # curl -X POST -v "http://serviceB.api.svc.cluster.local:4567/session/xxxxxxxx=?parameters=hi"
* Trying 10.20.228.217...
* TCP_NODELAY set
* Connected to serviceB.api.svc.cluster.local (10.20.228.217) port 4567 (#0)
> POST /session/xxxxxxxx=?parameters=hi HTTP/1.1
> Host: serviceB.api.svc.cluster.local:4567
> User-Agent: curl/7.61.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host serviceB.api.svc.cluster.local left intact
curl: (52) Empty reply from server
If I disable mTLS, the request gets from serviceA to serviceB with curl.
General tips for debugging Istio service mesh:
Check the requirements for services and pods.
Try a similar task to what you are trying to perform from the list of Istio tasks. See if that task works and find the differences with your task.
Follow the instructions on Istio common problems page.

Open an external port into Istio - problem only on docker-for-mac

Update: This problem is only on docker-for-mac
I have been chasing this for some time now: how do you open an external port into Istio?
Note: all this works on port 80, so why not on port 8080?
Using helm, I have changed the values in values.yaml under gateways:
- port: 80
  targetPort: 80
  name: http2
  # nodePort: 31380
- port: 8080
  targetPort: 8080
  name: http2-testport
  # nodePort: 31480
I have created an Istio Gateway:
# Istio - Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http-80
      protocol: HTTP
    hosts:
    - "my-service.default.svc.cluster.local"
  - port:
      number: 8080
      name: http-8080
      protocol: HTTP
    hosts:
    - "my-service.default.svc.cluster.local"
Port 8080 is open: kubectl get svc -n istio-system
istio-ingressgateway LoadBalancer 10.106.146.89 localhost 80:31342/TCP,443:31390/TCP,31400:31400/TCP,15011:31735/TCP,8060:32568/TCP,8080:32164/TCP,853:30443/TCP,15030:
You have to define a VirtualService to specify where (to which microservice) the ingress traffic must be directed, see https://istio.io/docs/tasks/traffic-management/ingress/#configuring-ingress-using-an-istio-gateway.
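A minimal VirtualService sketch bound to the gateway above might look like this (the destination host and port are placeholders for whatever service actually backs my-service):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "my-service.default.svc.cluster.local"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - port: 8080
    route:
    - destination:
        host: my-service.default.svc.cluster.local
        port:
          number: 8080      # hypothetical backend port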
Also try to send the Host header with your request, e.g. with curl -H Host:my-service.default.svc.cluster.local.
See https://github.com/istio/istio.github.io/pull/2181.