Can't connect to external Kafka Service from Istio Mesh - kubernetes

I can't connect to an "external" (outside the mesh) Kafka service from inside the mesh. Inside the Istio mesh I have a Spring Boot app, which should connect to a platform Kafka service.
The Kafka service is reachable via a DNS name (I verified this). To access the service I created a ServiceEntry with the following configuration:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: kafka
  namespace: test
spec:
  hosts:
  - kafka-service.foo.baa
  ports:
  - number: 37000
    name: tls-6
    protocol: tls
  - number: 36200
    name: tls-5
    protocol: tls
  - number: 36201
    name: tls-4
    protocol: tls
  - number: 36202
    name: tls-3
    protocol: tls
  - number: 36203
    name: tls-2
    protocol: tls
  - number: 36204
    name: tls-1
    protocol: tls
  resolution: DNS
  location: MESH_EXTERNAL
Sometimes I get a connection, but sometimes not, and I don't know what the issue is or why it only works intermittently.
These are some of the error messages:
org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
Removing server kafka-service.foo.baa:37000 from bootstrap.servers as DNS resolution failed for kafka-service.foo.baa
I also tried the annotation traffic.sidecar.istio.io/excludeOutboundPorts: 37000,36200,36201,36202,36203,36204,36205 to bypass the sidecar for this traffic, but that is not working either.
The bypass only works if I use the IP address instead of the DNS name as the bootstrap server.
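For reference, this is roughly how I apply the annotation; it sits on the pod template of the workload, not on the Deployment metadata (the Deployment name and image below are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-app          # placeholder name
  namespace: test
spec:
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
      annotations:
        # Kafka ports that should bypass the Envoy sidecar for outbound traffic
        traffic.sidecar.istio.io/excludeOutboundPorts: "37000,36200,36201,36202,36203,36204,36205"
    spec:
      containers:
      - name: app
        image: spring-boot-app:latest   # placeholder image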
Please kindly help.

Related

How to connect to IBM MQ deployed to OpenShift?

I have a container with IBM MQ (Docker image ibmcom/mq/9.2.2.0-r1) exposing two ports (9443 - admin, 1414 - application).
All required setup in OpenShift is done (Pod, Service, Routes).
There are two routes, one for each port.
https://route-admin.my.domain
https://route-app.my.domain
pointing to the ports accordingly (external ports are default http=80, https=443).
Admin console is accessible through the first route, hence, MQ is up and running.
I tried to connect as a client (JMS 2.0, com.ibm.mq.allclient:9.2.2.0) using the standard approach:
var fctFactory = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
var conFactory = fctFactory.createConnectionFactory();
// ... other props
conFactory.setObjectProperty(WMQConstants.WMQ_HOST_NAME, "route-app.my.domain");
conFactory.setObjectProperty(WMQConstants.WMQ_PORT, 443);
and failed to connect. I also tried to redefine the route as HTTP and use port 80, again without success.
If it helps, let's assume we use the latest version of MQ Explorer as a client.
Each time the same connection error appears:
...
Caused by: com.ibm.mq.MQException: JMSCMQ0001:
IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2009' ('MQRC_CONNECTION_BROKEN').
...
Caused by: com.ibm.mq.jmqi.JmqiException:
CC=2;RC=2009;AMQ9204: Connection to host 'route-app.my.domain(443)' rejected.
[1=com.ibm.mq.jmqi.JmqiException[CC=2;RC=2009;AMQ9208:
Error on receive from host 'route-app.my.domain/10.227.248.2:443 (route-app.my.domain)'.
[1=-1,2=ffffffff,3=route-app.my.domain/10.227.248.2:443 (route-app.my.domain),4=TCP]],
3=route-app.my.domain(443),5=RemoteConnection.receiveTSH]
...
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2009;AMQ9208:
Error on receive from host 'route-app.my.domain/10.227.248.2:443
Maybe this article could give some hints about error code 2009, but I'm still not sure what exactly causes the connection errors on the OpenShift side.
Previously, I always connected to IBM MQ by specifying a port value explicitly, but this is a slightly different situation.
How to connect to IBM MQ in an OpenShift cluster through TCP?
The configuration in OpenShift is as follows:
kind: Pod
apiVersion: v1
metadata:
  name: ibm-mq
  labels:
    app: ibm-mq
spec:
  containers:
  - name: ibm-mq
    image: 'nexus-ci/docker-lib/ibm_mq:latest'
    resources:
      limits:
        cpu: '1'
        memory: 600Mi
      requests:
        cpu: '1'
        memory: 600Mi
    ports:
    - containerPort: 1414
      protocol: TCP
    - containerPort: 9443
      protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: ibm-mq
spec:
  ports:
  - name: admin
    protocol: TCP
    port: 9443
    targetPort: 9443
  - name: application
    protocol: TCP
    port: 1414
    targetPort: 1414
  selector:
    app: ibm-mq
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: ibm-mq-admin
spec:
  host: ibm-mq-admin.my-domain.com
  to:
    kind: Service
    name: ibm-mq
    weight: 100
  port:
    targetPort: admin
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: None
  wildcardPolicy: None
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: ibm-mq-app
spec:
  host: ibm-mq-app.my-domain.com
  to:
    kind: Service
    name: ibm-mq
    weight: 100
  port:
    targetPort: application
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: None
  wildcardPolicy: None
---
UPDATE: I ended up creating and deploying to OpenShift a small web application that receives HTTP requests and interacts with MQ via JMS (put/get text messages), e.g.:
POST /queue/{queueName}/send + <body>;
GET /queue/{queueName}/receive.
It interacts with MQ inside the OpenShift cluster using TCP, and accepts external HTTP connections as a regular web application.
Other solutions seem to take too much effort, but I accepted one of them as it is theoretically correct and straightforward.
I'm not sure I fully understand your setup, but "Routes" only route HTTP traffic (on ports 80 or 443 only), not plain TCP traffic.
If you want to access your MQ server from outside the cluster, there are a few solutions; one is to create a Service of type NodePort.
Doc: https://docs.openshift.com/container-platform/4.7/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.html
Your Service is not a NodePort Service. In your case, it should be something like
kind: Service
apiVersion: v1
metadata:
  name: ibm-mq
spec:
  type: NodePort
  ports:
  - port: 1414
    targetPort: 1414
    nodePort: 30001
  selector:
    app: ibm-mq
Then access it from outside the cluster with anyname.<cluster domain>:30001, and delete the now-unnecessary corresponding Route. As said before, the doc I pointed you to explains that Routes only route HTTP traffic on ports 80 or 443.
Doc: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
The following Java system property will be read by IBM MQ classes for JMS at 9.2.1 and higher to tell it to set the SNI header to the hostname of the remote system when initiating a TLS connection:
com.ibm.mq.cfg.SSL.OutboundSNI=HOSTNAME
To set this programmatically just use the System.setProperty method for example:
System.setProperty("com.ibm.mq.cfg.SSL.OutboundSNI","HOSTNAME");
NOTE: the string HOSTNAME is literal and not meant to be replaced by an actual hostname.
If you cannot move to a com.ibm.mq.allclient.jar from 9.2.1 or later, then on 9.2.0.0 and later you could instead use com.ibm.mq.cfg.SSL.AllowOutboundSNI=NO, but this is deprecated in 9.2.1 and later.

Cannot connect to the external ip of the k8s service

I have the following service:
apiVersion: v1
kind: Service
metadata:
  name: hedgehog
  labels:
    run: hedgehog
spec:
  ports:
  - port: 3000
    protocol: TCP
    name: restful
  - port: 8982
    protocol: TCP
    name: websocket
  selector:
    run: hedgehog
  externalIPs:
  - 1.2.4.120
In which I have specified an externalIP.
I'm also seeing this IP under EXTERNAL-IP when running kubectl get services.
However, when I do curl http://1.2.4.120:3000 I get a timeout, even though the app should respond: the jar running inside the container in the deployment does respond to localhost:3000 requests when run locally.
If you look at the type of your Service, it is probably ClusterIP; try changing the type to LoadBalancer, for example:
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  clusterIP: 172.30.163.110
  externalIPs:
  - 192.168.132.253
  externalTrafficPolicy: Cluster
  ports:
  - name: highport
    nodePort: 31903
    port: 30102
    protocol: TCP
    targetPort: 30102
  selector:
    app: web
  sessionAffinity: None
  type: LoadBalancer
Something like this, where type: LoadBalancer is set.
First of all, you have to understand that you cannot place an arbitrary address in the externalIPs field. Those addresses are not managed by Kubernetes; they are the responsibility of the cluster administrator (or you). External IP addresses specified with externalIPs are different from the external IP address assigned to a Service of type LoadBalancer by a cloud provider.
I checked the address that you mentioned in the question and it does not look like it belongs to you. That's why I suspect you placed a random one there.
The same address appears in this article about ExternalIP. As you can see there, the addresses in that case are the IP addresses of the nodes that Kubernetes runs on.
This is one potential issue in your case.
Another one is to verify whether your application is listening on localhost or on 0.0.0.0. If it is really localhost, then that is another potential problem; you can change where the server process listens by listening on 0.0.0.0, which means "listen on all interfaces".
Lastly, please verify that the selector and ports of your Service are correct and that there is at least one endpoint backing the Service.
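For illustration, here is a minimal sketch of a Deployment whose pod template would back the Service above; the name and image are placeholders, and the important parts are the run: hedgehog label matching the Service selector and the container listening on the published ports:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hedgehog
spec:
  selector:
    matchLabels:
      run: hedgehog
  template:
    metadata:
      labels:
        run: hedgehog             # must match the Service selector
    spec:
      containers:
      - name: hedgehog
        image: hedgehog:latest    # placeholder image
        ports:
        - containerPort: 3000     # restful
        - containerPort: 8982     # websocket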

How do I properly HTTPS secure an application when using Istio?

I'm currently trying to wrap my head around what the typical application flow looks like for a Kubernetes application in combination with Istio.
For my app I have an ASP.NET application hosted within a Kubernetes cluster, and I added Istio on top. Here is my Gateway & VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: appgateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  tls:
  - match:
    - port: 443
      sniHosts:
      - "*"
    route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 443
This is what I came up with after reading through the Istio documentation.
Note that my frontendservice is a very basic ClusterIP service routing to an ASP.NET application which also exposes the standard ports 80 and 443.
I have a few questions now:
Is this the proper approach to securing my application? In essence I want to redirect incoming traffic on port 80 straight to HTTPS-enabled port 443 right at the edge. However, when I try this, there is no redirect happening on port 80 at all.
Also, the tls route on my VirtualService does not work; no traffic ends up on my pod.
I'm also wondering: is it even necessary to manually add HTTPS to my internal applications, or is this where Istio's internal CA functionality comes in?
I imagined it working like this:
A request comes in. If it's on port 80, send a redirect to the client so that it makes an HTTPS request instead. If it's on port 443, allow the request.
The VirtualService provides the instructions for what should happen to requests on port 443 and forwards them to the service.
The service then forwards the request to my app's port 443.
Thanks in advance - I'm just learning Istio, and I'm a bit baffled why my seemingly proper setup does not work here.
Your Gateway terminates TLS connections, but your VirtualService is configured to accept unterminated TLS connections with TLSRoute.
Compare the example without TLS termination and the example which terminates TLS. Most probably, the "default" setup would be to terminate the TLS connection and configure the VirtualService with an HTTPRoute.
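For example, here is a minimal sketch of the TLS-terminating variant, assuming the frontend Service also serves plain HTTP on port 80 (otherwise adjust the destination port):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appvservice
spec:
  hosts:
  - "*"
  gateways:
  - appgateway
  http:
  - route:
    - destination:
        host: frontendservice.default.svc.cluster.local
        port:
          number: 80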
We are also using a similar setup:
TLS is terminated at the ingress gateway, but we use mTLS mode via the Gateway CR.
Services listen on non-TLS ports, but the sidecars use mTLS between each other, so any container without a sidecar cannot talk to a service.
The VirtualService routes to the non-TLS port of the service.
A Sidecar CR intercepts traffic going to and from the non-TLS port of the service.
A PeerAuthentication resource enforces mTLS between the sidecars, as sketched below.
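A minimal sketch of such a PeerAuthentication, assuming it is applied per namespace (the namespace name is a placeholder):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace   # placeholder namespace
spec:
  mtls:
    mode: STRICT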

Istio - load balance mesh internal HTTP2 traffic to non-standard port

I want to load balance mesh-internal HTTP/2 traffic coming to my ClusterIP Service per request across all its available replicas, using Istio; the first iteration is intended to work between two Deployments within a single namespace, but I can't quite get there. I need to load balance on a non-standard port; I'm using the standard port as a control group.
I was able to configure Istio so that requests sent over one long-lived connection to the service FQDN on the standard port 80 are round-robined correctly, but a long-lived connection to a non-standard port such as 13080 will not round-robin; instead a single pod gets all the requests (the behaviour looks like the K8s "iptables random" approach used by Services, which only balances per connection, not per request).
Here's my most successful VirtualService definition yet:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs
  namespace: example
spec:
  gateways:
  - mesh
  hosts:
  - "*.example.com"
  http:
  - match:
    - authority:
        regex: "(.*.)?pods.example.com(:80)?"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 80
  - match:
    - authority:
        regex: "(.*.)?pods.example.com:13080"
    route:
    - destination:
        host: pods.example.svc.cluster.local
        port:
          number: 13080
Ports are defined in the Service like this:
- name: http2
  port: 80
  protocol: TCP
  targetPort: 80
- name: http2-nonstd
  port: 13080
  protocol: TCP
  targetPort: 13080
Using Istio 1.6.2. What am I missing?
EDIT: The original question had a typo in the VirtualService definition's authority match for port 13080: it used exact instead of regex. Nothing changed after fixing it, however, which supports the hypothesis that for some reason Istio ignores the non-standard port.

How to forward requests to a public service like a CDN using an Istio VirtualService?

I'm trying to set up a reverse proxy using an Istio VirtualService.
Is it possible to forward a request in a VirtualService (like nginx's proxy_pass)?
The desired result:
http://myservice.com/about/* -> forward the request to a CDN (an external service outside the k8s cluster - AWS S3, etc.)
http://myservice.com/* -> my-service-web (an internal service included in the Istio mesh)
I defined a ServiceEntry, but it only "redirects"; it does not forward the request.
Here are my serviceentry.yaml and virtualservice.yaml:
serviceentry.yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: my-service-proxy
  namespace: my-service
spec:
  hosts:
  - CDN_URL
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
  namespace: my-service
spec:
  hosts:
  - myservice.com
  gateways:
  - myservice
  http:
  - match:
    - uri:
        prefix: /about
    rewrite:
      authority: CDN_URL
      uri: /
    route:
    - destination:
        host: CDN_URL
  - route:
    - destination:
        host: my-service-web.svc.cluster.local
        port:
          number: 80
Can a VirtualService act like nginx-ingress?
Based on that Istio discuss thread, user #palic asked the same question there:
Shouldn’t it be possible to let ISTIO do the reverse proxy
thing, so that no one needs a webserver (httpd/nginx/
lighthttpd/…) to do the reverse proxy job?
And the answer provided by #Daniel_Watrous:
The job of the Istio control plane is to configure a fleet of reverse proxies. The purpose of the webserver is to serve content, not reverse proxy. The reverse proxy technology at the heart of Istio is Envoy, and Envoy can be used as a replacement for HAProxy, nginx, Apache, F5, or any other component that is being used as a reverse proxy.
Is it possible to forward a request in a VirtualService?
Based on that, I would say it's not possible in a VirtualService; it only rewrites (redirects), which I assume is what you are seeing.
When I need reverse-proxy functionality, do I have to use an nginx ingress controller (or something else) instead of the Istio ingress gateway?
If we are talking about a reverse proxy, then yes, you need to use a technology other than Istio itself.
As far as I'm concerned, you could use an nginx pod configured as a reverse proxy to the external service, and it would be the host (destination) of your VirtualService.
So it would look like the example below.
EXAMPLE
ingress gateway -> VirtualService -> nginx pod (reverse proxy configured in nginx)
ServiceEntry -> provides access to URLs outside of the cluster
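For illustration, a minimal sketch of how the nginx pod could be configured, using a ConfigMap mounted at /etc/nginx/conf.d (the ConfigMap name is a placeholder, and CDN_URL stands for your real CDN host as in the question):
apiVersion: v1
kind: ConfigMap
metadata:
  name: cdn-reverse-proxy-conf   # placeholder name
  namespace: my-service
data:
  default.conf: |
    server {
      listen 80;
      location / {
        # forward (not redirect) every request to the external CDN
        proxy_pass https://CDN_URL/;
        proxy_set_header Host CDN_URL;
      }
    }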
Let me know if you have any more questions.