Serving MQTT over WebSocket in a Kubernetes GCP environment

I'm currently serving MQTT messages over WebSocket to JavaScript clients. I use RabbitMQ to publish messages to a queue from a Java backend and have them routed to the clients/frontend apps.
I deployed everything on a Kubernetes cluster on Google Cloud Platform, and everything works just fine as long as I expose the RabbitMQ pod to the internet directly with a Kubernetes LoadBalancer service.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq
spec:
  type: LoadBalancer
  ports:
  - name: http-manager
    nodePort: 30019
    port: 80
    protocol: TCP
    targetPort: 15672
  - name: mqtt-broker
    nodePort: 31571
    port: 1883
    protocol: TCP
    targetPort: 1883
  - name: ws-service
    nodePort: 32048
    port: 15675
    protocol: TCP
    targetPort: 15675
  selector:
    app: rabbitmq
I tried to replace the Kubernetes LoadBalancer with a NodePort service and expose it through an Ingress and a GCP load balancer, but the health probe fails and never recovers.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq-internal
spec:
  ports:
  - name: ws-port
    port: 15675
    protocol: TCP
    targetPort: 15675
  - name: manager-port
    port: 15672
    protocol: TCP
    targetPort: 15672
  selector:
    app: rabbitmq
  sessionAffinity: None
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basictest
  namespace: default
spec:
  rules:
  - host: mqtt-host.dom.cloud
    http:
      paths:
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15675
        path: /ws/*
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15672
        path: /*
The probe is HTTP, so I tried to assign a custom TCP probe and even to trick GCP by switching to a probe that points to another HTTP port on the same pod, with no success.
I need to use the GCP load balancer to have a unified frontend on which I can attach an SSL certificate for both HTTPS and WSS.
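On GKE, one supported way to point the Google load balancer's health check somewhere other than the serving port is a BackendConfig attached to the NodePort service through an annotation. The sketch below is an assumption, not the original setup: the BackendConfig name is made up, and it probes the RabbitMQ management port (15672), which typically answers GET / with a 200, instead of the Web MQTT port; the cloud.google.com/v1 API also requires a reasonably recent GKE version.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: rabbitmq-ws-hc              # hypothetical name
spec:
  healthCheck:
    type: HTTP
    port: 15672                     # assumption: health-check the management UI instead of 15675
    requestPath: /
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-internal
  annotations:
    # attach the BackendConfig to the ws-port backend used by the Ingress
    cloud.google.com/backend-config: '{"ports": {"ws-port": "rabbitmq-ws-hc"}}'
spec:
  type: NodePort
  ...                               # ports and selector as in the service above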

Related

Cluster IP service isn't working as expected

I created a NodePort service for the httpd pods and a ClusterIP service for the tomcat pods; they're in the same namespace behind an nginx LB. There is a weird issue with the app when the httpd and tomcat services are not the same type. When I change both to ClusterIP or both to NodePort, everything works fine...
Traffic flow is like this:
HTTP and HTTPS traffic -> LB -> Ingress -> Httpd -> Tomcat
HTTPS virtual host custom port traffic -> LB -> Tomcat
TCP traffic -> LB -> Tomcat
Is there anything that can cause issues between httpd and Tomcat? I can telnet to the httpd and tomcat pods from outside, but for some reason the app functionality breaks (some static and JSP pages do get served, though).
httpd-service:
apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd-service
  namespace: test-web-dev
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
  - name: port-80
    port: 80
    protocol: TCP
    targetPort: 80
  - name: port-443
    port: 443
    protocol: TCP
    targetPort: 443
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  externalTrafficPolicy: Local
tomcat-service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
  annotations:
spec:
  selector:
    app: tomcat7 # metadata label of the deployment pod template / pod metadata label
  ports:
  - name: port-8080 # optional when there is just one port
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port-8262
    protocol: TCP
    port: 8262
    targetPort: 8262
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
ingress lb:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "1234": "test-web-dev/httpd:1234"
  "8262": "test-web-dev/tomcat7:8262"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
  - name: port-8262
    port: 8262
    protocol: TCP
    targetPort: 8262
Answering my own question.
NodePort services are required when a service needs to be exposed outside of the cluster, e.g. to the internet.
ClusterIP services are used when services need to communicate internally, e.g. frontend to backend.
In my case, the user needs to connect to both httpd and tomcat (on a specific app port) from outside, so both the tomcat and httpd services have to be of type NodePort. Configuring tomcat as ClusterIP breaks the app, since the tomcat app port isn't reachable from the internet.
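For reference, a minimal sketch of the tomcat service switched to NodePort, with the ports taken from the manifest above (nodePort numbers left for Kubernetes to allocate):
apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
spec:
  type: NodePort   # was ClusterIP (the default), so 8262 was unreachable from outside
  selector:
    app: tomcat7
  ports:
  - name: port-8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port-8262
    protocol: TCP
    port: 8262
    targetPort: 8262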

How to configure the incoming port in the router for the Istio ingress gateway

I'm trying to configure the Istio ingress using an OpenShift route. As I understand it, the request path is as follows:
request -> route -> ingress service -> gateway -> virtual service -> app service -> app
So I apply the following config:
Route.yml:
kind: Route
...
spec:
  host: my-app.com
  to:
    kind: Service
    name: ingress-service
    weight: 100
  port:
    targetPort: http
...
ingress-service.yml:
kind: Service
metadata:
  name: ingress-service
...
spec:
  ports:
  - name: status-port
    protocol: TCP
    port: 15020
    targetPort: 15020
  - name: http
    protocol: TCP
    port: 9080
    targetPort: 9080
  selector:
    app: ingressgateway
    istio: ingressgateway
  type: ClusterIP
ingress-gateway.yml:
kind: Gateway
metadata:
  name: ingress-gw
...
spec:
  servers:
  - hosts:
    - my-app.com
    port:
      name: http
      number: 9080
      protocol: HTTP
  selector:
    istio: ingressgateway
ingress-virtual-service.yml:
kind: VirtualService
...
spec:
  hosts:
  - my-app.com
  gateways:
  - ingress-gw
  http:
  - route:
    - destination:
        host: my-app
        port: 9080
  exportTo:
  - .
I don't set up port 9080 in the deployment for the ingressgateway pod, and it works, but only if I send the request to http://my-app.com:80.
Where did I go wrong, and how do I make the app accessible only at http://my-app.com:9080?
The externally exposed port numbers depend on the listening ports of the Router (HAProxy) pod on OpenShift. If you want port 9080 instead of 80, you have to change the listening port on the Router (HAProxy) pod, or you can handle the port mapping on the LB in front of it to use other port numbers.
The access flow is as follows:
LB (80, 443)
-> Router pod (80, 443)
-> Ingress-Gateway pod
-> (through Gateway and VirtualService) Backend pod
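To illustrate the second option (handling the port at the LB), one possibility on a cloud-hosted cluster is an additional LoadBalancer Service that accepts 9080 externally and forwards it to the router's HTTP listener. This is only a sketch under assumptions: the Service name is made up, and the namespace and selector labels depend on your OpenShift version and must match your actual router pods.
apiVersion: v1
kind: Service
metadata:
  name: router-9080                 # hypothetical name
  namespace: openshift-ingress      # assumption: namespace of the default router (differs per OpenShift version)
spec:
  type: LoadBalancer
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default   # assumption: adjust to your router pods' labels
  ports:
  - name: http-9080
    port: 9080       # port exposed on the external load balancer
    targetPort: 80   # router's plain-HTTP listener
    protocol: TCP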

How to open custom port in Kubernetes

I deploy RabbitMQ on the cluster, and so far it's running well on port 15672: http://test.website.com/
But I need to open some other ports (25672, 15672, 15674). I have defined them in YAML like this:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    name: rabbitmq
  ports:
  - port: 80
    name: http
    targetPort: 15672
    protocol: TCP
  - port: 443
    name: https
    targetPort: 15672
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  selector:
    matchLabels:
      app: rabbitmq
  strategy:
    type: RollingUpdate
  template:
    metadata:
      name: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:latest
        ports:
        - containerPort: 15672
          name: http
          protocol: TCP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq
spec:
  hosts:
  - "test.website.com"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: rabbitmq
How do I set this up in the YAML files to open the other ports?
Assuming that the Istio Gateway is serving TCP network connections, you might be able to combine the two external ports in one Gateway configuration.
Here is an example:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: port1
      protocol: TCP
    hosts:
    - example.myhost.com
  - port:
      number: 443
      name: port2
      protocol: TCP
    hosts:
    - example.myhost.com
The hosts field here is the list of target addresses that are exposed by this Gateway.
To route the traffic on to the backing Pods, specify a VirtualService with a matching set of ports:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq-virtual-service
spec:
  hosts:
  - example.myhost.com
  gateways:
  - gateway
  tcp:
  - match:
    - port: 80
    route:
    - destination:
        host: app.example.svc.cluster.local
        port:
          number: 15672
  - match:
    - port: 443
    route:
    - destination:
        host: app.example.svc.cluster.local
        port:
          number: 15674
The above VirtualService defines the rules that route network traffic arriving on ports 80 and 443 for your host to the rabbitmq service ports 15672 and 15674, respectively.
You can adjust these files to your needs to open some other ports.
Take a look: virtualservice-for-a-service-which-exposes-multiple-ports.
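Note that for the destinations above to resolve, the Service in front of RabbitMQ also has to expose those ports. A minimal sketch adapted from the Service in the question (the port names are illustrative; 15674 is RabbitMQ's Web STOMP listener by default):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    name: rabbitmq
  ports:
  - name: management        # RabbitMQ management UI
    port: 15672
    targetPort: 15672
    protocol: TCP
  - name: web-stomp         # assumption: 15674 is the Web STOMP plugin port
    port: 15674
    targetPort: 15674
    protocol: TCP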

Can't send logs into Graylog on Kubernetes

How to expose node port on ingress?
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
logs-graylog NodePort 10.20.8.187 <none> 80:31300/TCP,12201:31301/UDP,1514:31302/TCP 5d3h
logs-graylog-elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 5d3h
logs-graylog-master ClusterIP None <none> 9000/TCP 5d3h
logs-graylog-slave ClusterIP None <none> 9000/TCP 5d3h
logs-mongodb-replicaset ClusterIP None <none> 27017/TCP 5d3h
This is what my service looks like; it has some node ports.
The Graylog web interface is exposed on port 80.
But I am not able to send logs to that URL. My Graylog web URL is https://logs.example.com; it runs over HTTPS, and cert-manager is set up on the Kubernetes ingress.
I am not able to send GELF UDP logs to the URL. Am I missing something to open the port from the ingress, or is some UDP filtering in the way?
This is my ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logs-graylog-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: graylog
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - logs.example.io
    secretName: graylog
  rules:
  - host: logs.example.io
    http:
      paths:
      - backend:
          serviceName: logs-graylog
          servicePort: 80
      - backend:
          serviceName: logs-graylog
          servicePort: 12201
      - backend:
          serviceName: logs-graylog
          servicePort: 31301
Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: graylog
    chart: graylog-0.1.0
    component: graylog-service
    heritage: Tiller
    name: graylog
    release: logs
  name: logs-graylog
spec:
  clusterIP: 10.20.8.187
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31300
    port: 80
    protocol: TCP
    targetPort: 9000
  - name: udp-input
    nodePort: 31301
    port: 12201
    protocol: UDP
    targetPort: 12201
  - name: tcp-input
    nodePort: 31302
    port: 1514
    protocol: TCP
    targetPort: 1514
  selector:
    graylog: "true"
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
UDP services aren't normally exposed via an ingress controller the way TCP HTTP(S) services are. I'm not sure any ingress controller even supports UDP, and certainly not with three protocols combined in a single Ingress definition.
If the cluster is hosted on a cloud service, most providers support a Service of type LoadBalancer to map external connections into the cluster.
apiVersion: v1
kind: Service
metadata:
  name: logs-direct-graylog
spec:
  selector:
    graylog: "true"
  ports:
  - name: udp-input
    port: 12201
    protocol: UDP
    targetPort: 12201
  - name: tcp-input
    port: 1514
    protocol: TCP
    targetPort: 1514
  type: LoadBalancer
If a Service of type LoadBalancer is not available in your environment, you can use a NodePort service. The nodePorts you have defined will then be available on the external IP of each of your nodes.
A nodePort is not strictly required for the HTTP port, as the nginx ingress controller takes care of that for you elsewhere, in its own service.
apiVersion: v1
kind: Service
metadata:
  name: logs-graylog
spec:
  selector:
    graylog: "true"
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
The ports other than 80 can be removed from your ingress definition.
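With that split, the Ingress only needs the HTTP backend; a minimal sketch based on the Ingress definition above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logs-graylog-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: graylog
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - logs.example.io
    secretName: graylog
  rules:
  - host: logs.example.io
    http:
      paths:
      - backend:
          serviceName: logs-graylog
          servicePort: 80   # only the HTTP backend; UDP/TCP inputs go through the LoadBalancer/NodePort service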

How to expose multiple ports using a LoadBalancer service in Kubernetes

I have created a cluster using the Google Cloud Platform (Container Engine) and deployed a pod using the following YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 1
  template:
    metadata:
      name: pod-name
      labels:
        app: app-label
    spec:
      containers:
      - name: container-name
        image: gcr.io/project-id/image-name
        resources:
          requests:
            cpu: 1
        ports:
        - name: port80
          containerPort: 80
        - name: port443
          containerPort: 443
        - name: port6001
          containerPort: 6001
Then I want to create a service that enables the pod to listen on all these ports. I know that the following YAML file works to create a service that listens on one port:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: app-label
  type: LoadBalancer
However, when I want the pod to listen on multiple ports like this, it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - port: 80
    targetPort: 80
  - port: 443
    targetPort: 443
  - port: 6001
    targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
How can I make my pod listen to multiple ports?
You have two options:
You could have multiple services, one for each port. As you pointed out, each service will end up with a different IP address.
You could have a single service with multiple ports. In this particular case, you must give all ports a name.
In your case, the service becomes:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: something
    port: 6001
    targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
This is necessary so that endpoints can be disambiguated.