Deploying Mosquitto MQTT broker on Minikube, error while configuring an Ingress - minikube

My goal is to install a simple Mosquitto broker on Minikube (without certificates).
My problem is that I cannot add an Ingress.
I am a real beginner in Kubernetes, so I hope this question is not stupid; I have searched for a solution without success.
So I have a VM (VMware) with Ubuntu 20.04 on it.
Minikube 1.26 is running perfectly well on it (with Docker, which is installed on the VM).
I have deployed Mosquitto on Minikube using the following YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mosquitto
  template:
    metadata:
      labels:
        name: mosquitto
    spec:
      containers:
        - name: mosquitto
          image: eclipse-mosquitto:2.0.12
          ports:
            - containerPort: 1883
          volumeMounts:
            - name: mosquitto-config
              mountPath: /mosquitto/config/mosquitto.conf
              subPath: mosquitto.conf
            - name: data
              mountPath: /etc/mosquitto
      volumes:
        - name: mosquitto-config
          configMap:
            name: mosquitto-configmap
        - name: data
          persistentVolumeClaim:
            claimName: mosquitto-data
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mosquitto-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Config Map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-configmap
  namespace: default
data:
  mosquitto.conf: |
    listener 1883
    allow_anonymous true
    protocol mqtt
    persistence true
    persistence_location /mosquitto/data
    log_dest stdout
Service (NodePort type):
apiVersion: v1
kind: Service
metadata:
  name: mosquitto-service
  namespace: default
spec:
  type: NodePort
  selector:
    name: mosquitto
  ports:
    - name: mosquitto
      protocol: TCP
      port: 1883
      targetPort: 1883
I have then installed an ingress-nginx controller to be able to do TCP (an Ingress by itself only allows HTTP and HTTPS):
minikube addons enable ingress
All seems to run well.
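As a sanity check that the controller is actually running (a hedged sketch; recent Minikube versions deploy the addon into the ingress-nginx namespace, while older ones used kube-system):

kubectl get pods -n ingress-nginx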
Now, when I add my Ingress with the following YAML,
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    k8s.io/ingress-nginx: nginx
  name: mosquitto-ingress
spec:
  rules:
    - host: talenddemo.net
      tcp:
        paths:
          - path: /mqtt
            backend:
              serviceName: mosquitto-service
              servicePort: 1883
the computer says "no". The error message is "the server could not find the requested resource".
Question 1: What did I do wrong?
Question 2: Which address should I use to connect an MQTT client to the broker?
Note: the namespace of the ingress controller is different from the mosquitto namespace; could that be the cause?
Many thanks!

You cannot use a normal Ingress to expose native MQTT.
Ingress is used to expose HTTP-based protocols, and native MQTT is not HTTP-based.
Nginx will not help in this case, because you are still trying to set up an HTTP virtual host and HTTP path proxying, which still won't work.
You have 2 choices:
Modify your mosquitto.conf to set the protocol to websockets. MQTT over WebSockets is bootstrapped over HTTP, so it will work. This does mean your clients will also need to use MQTT over WebSockets (see the sketch below).
Enable MetalLB and expose the service as a LoadBalancer on a dedicated IP address. This is a lot more work and will require a bunch of network planning.
You might be able to get nginx to work as a load balancer instead of MetalLB, but you will still need to expose the service as type LoadBalancer.
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
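For the first option, here is a minimal sketch of the modified ConfigMap, assuming the conventional WebSockets port 9001 (any free port works; the Service and Ingress would then need to target it):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-configmap
  namespace: default
data:
  mosquitto.conf: |
    # WebSockets listener, so the broker speaks an HTTP-compatible protocol
    listener 9001
    protocol websockets
    allow_anonymous true
    persistence true
    persistence_location /mosquitto/data
    log_dest stdout

For the raw-TCP route, the linked ingress-nginx page describes a tcp-services ConfigMap; a hedged sketch (the ConfigMap name and namespace depend on how the controller was installed, and the Minikube addon may need extra configuration to pick it up):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port 1883 -> namespace/service:port
  "1883": default/mosquitto-service:1883

An MQTT client would then connect to the node IP on port 1883, not to an HTTP path.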

Related

How can I resolve connection issues between my frontend and backend pods in Minikube?

I am trying to deploy an application to Minikube. However, I am having issues connecting the frontend pod to the backend pod.
Each Deployment has a ClusterIP service and a NodePort service.
I access the frontend via the browser by executing the command minikube service frontend-entrypoint. When the frontend tries to query the backend it requests the URL http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial, but the response status is (failed)net::ERR_EMPTY_RESPONSE.
If I access the frontend via the command line, executing kubectl exec -it react-deployment-xxxxxxxxxx-xxxxx -- sh, and inside it run curl -X GET "http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial", I get what I expect.
So, I understand that NodePorts are used to route external traffic to services inside the cluster by opening a specific port on each node and forwarding traffic from that port to the service, and that ClusterIPs, on the other hand, are used to expose services only within the cluster and are not directly accessible from outside. What I don't understand is why, when reaching the frontend via the browser, it is not able to connect internally to the backend. Once I am using the frontend, I assumed I was inside the cluster...
I tried to expose the cluster using other services such as Ingress or LoadBalancer, but I didn't have success connecting to the frontend, so I rolled back to the NodePort solution.
References:
Kubernetes Guide - Deploying a machine learning app built with Django, React and PostgreSQL using Kubernetes
Exposing External-Facing Services In Kubernetes
Files:
component_backend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend-entrypoint
spec:
  selector:
    component: fastapi
  ports:
    - name: http2
      port: 8000
      targetPort: 8000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: fastapi
  template:
    metadata:
      labels:
        component: fastapi
    spec:
      containers:
        - name: fastapi-container
          image: xxx/yyy:zzz
          ports:
            - containerPort: 8000
          env:
            - name: DB_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: app-variables
                  key: DB_USERNAME
            [...]
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
    - port: 8000
      targetPort: 8000
  externalIPs:
    - <minikube ip>
componente_frontend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend-entrypoint
spec:
  selector:
    component: react
  ports:
    - name: http1
      port: 3000
      targetPort: 3000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react
  template:
    metadata:
      labels:
        app: react
    spec:
      containers:
        - name: react-container
          image: xxx/yyy:zzz
          ports:
            - containerPort: 3000
          env:
            - name: BASELINE_API_URL
              valueFrom:
                configMapKeyRef:
                  name: app-variables
                  key: BASELINE_API_URL
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: react-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: react
  ports:
    - port: 3000
      targetPort: 3000
  externalIPs:
    - <minikube ip>
BASELINE_API_URL is declared with the backend ClusterIP service name (i.e., fastapi-cluster-ip-service).
ingress_service.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: sfs.baseline
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-entrypoint
                port:
                  name: http1
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-entrypoint
                port:
                  name: http2
You have to think about this: if you call the API through your browser or Postman from your host, they don't know anything about the URL inside your pod.
For production, an Ingress configuration is needed, so you can call the API like:
https://api.mydomain.com/api/myroute
When you deploy the frontend, the paths inside your code should be created dynamically.
Inside your code, define the paths with environment variables and use them in your code.
On Kubernetes, define a ConfigMap with the paths and bind it to your container.
So when you call the frontend, it will have the right paths.
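As a hedged sketch of that wiring, reusing the app-variables ConfigMap from the question (the URL value is hypothetical; it should be whatever address the browser can actually reach):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-variables
data:
  # Hypothetical externally reachable backend URL, not the in-cluster ClusterIP name
  BASELINE_API_URL: "http://sfs.baseline/api"

The react-deployment then injects it exactly as in the question, via configMapKeyRef.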
Your frontend on local can be reached with the IP address of your node and the NodePort.
To avoid using the IP address, you can create entries in your local hosts file:
nodesipaddress mydomain.local
nodesipaddress api.mydomain.local
So from your browser you can reach the frontend with mydomain.local:nodeportOfFrontend,
and your frontend code should call the backend with api.mydomain.local:nodeportOfApi.
If you enable Ingress inside your cluster, create an ingress resource in your deployment, and use a service of type LoadBalancer, then you can call the frontend and API without the NodePort.
If you run into issues with that, please post all your Kubernetes YAMLs: Deployments, Services, ConfigMap, and the Ingress if you decide to use it.
UPDATE
Check that you have Ingress enabled on Minikube.
Modify your ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: mydomain.local
      http:
        paths:
          - path: /
I'm not sure if Minikube supports ClusterIP.
Change the type to LoadBalancer in your service:
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
    - port: 8000
      targetPort: 8000
Then verify your service:
kubectl get svc -o wide
kubectl describe services my-service
Look whether the external IP is pending; if so, add externalIPs:
ports:
  - port: 8000
    targetPort: 8000
externalIPs:
  - xxx.xxx.xxx.xxx # your minikube ip
However, I would first try to create a service of type NodePort and then access it with xxx.xxx.xxx.xxx:NodePort.
In the YAMLs you posted I see only the backend.
Note that whether you use Ingress or NodePort, your code must be adapted:
your frontend must call the API by the API ingress domain, by xxx.xxx.xxx.xxx:NodePort, or by mydomain.local/api (a quick check is sketched below).
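As a quick check of that NodePort route on Minikube (a sketch; backend-entrypoint is the NodePort service from the question):

minikube service backend-entrypoint --url

The printed http://xxx.xxx.xxx.xxx:NodePort address is what browser-side frontend code would have to call.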
I think you are not understanding how the frontend part of a website works.
To display the website in your browser, you actually download all the required content from the server/pod where you host the frontend.
Once the frontend is downloaded and displayed in your browser, you then try to hit the backend directly from your browser/computer, using either an IP address or a URL.
When using a URL, your browser will try to resolve the hostname using a DNS server.
So when you read a website from your browser, you are not in the server/pod, and you cannot resolve the URL because that URL is not mapped to any IP address (or not to your server).
That is also why it works when you go inside the pod using kubectl exec: you are inside the network and you are using the internal DNS.
As David said, to make this work you need to call the backend from the frontend using some kind of ingress.
And you will also need to create a DNS entry on your domain (if using a URL).
If not, you can directly use the IP of the pod/service.
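To tie this back to the Ingress already shown in the question: one hedged approach is to let the browser call the backend through the same host that serves the frontend, so no in-cluster name ever reaches the browser. A sketch, assuming the Minikube IP is 192.168.49.2 (check with minikube ip):

# /etc/hosts on the host machine
192.168.49.2    sfs.baseline

With BASELINE_API_URL set to http://sfs.baseline/api, the /api rule of the Ingress forwards the browser's requests to backend-entrypoint.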

Exposing UDP and TCP ports for sftp server using Ingress in GKE

I am trying to set up a multi-cluster deployment in which there are multiple clusters and one ingress load-balances the requests between them.
HTTP services work well with this set-up; the problem here is the SFTP server.
Referring to this answer and this documentation, I am trying to access port 22 of the SFTP service.
The SFTP deployment is being exposed on port 22. Below is the manifest:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: sftp
  labels:
    environment: production
    app: sftp
spec:
  replicas: 1
  minReadySeconds: 10
  template:
    metadata:
      labels:
        environment: production
        app: sftp
      annotations:
        container.apparmor.security.beta.kubernetes.io/sftp: runtime/default
    spec:
      containers:
        - name: sftp
          image: atmoz/sftp:alpine
          imagePullPolicy: Always
          args: ["user:1001:100:upload"]
          ports:
            - containerPort: 22
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
          resources: {}
Here is the simple manifest for sftp-service, using a NodePort service:
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: production
  name: sftp-service
spec:
  type: NodePort
  ports:
    - name: sftp-port
      targetPort: 9000
      port: 9000
      nodePort: 30063
      protocol: TCP
  selector:
    app: sftp
The ConfigMap, created following the above-mentioned documentation and answer, looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sftp-service
data:
  9000: "default/sftp-service:22"
And finally, the Ingress manifest is something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-foo
  annotations:
    kubernetes.io/ingress.global-static-ip-name: static-ip
    kubernetes.io/ingress.class: gce-multi-cluster
spec:
  backend:
    serviceName: http-service-zone-printer
    servicePort: 80
  rules:
    - http:
        paths:
          - path: /sftp
            backend:
              serviceName: sftp-service
              servicePort: 22
template:
  spec:
    containers:
      - name: proxy-port
        args:
          - "--tcp-services-configmap=default/sftp-service"
I feel I have not understood the way to expose TCP/UDP ports for an SFTP server on Kubernetes using Ingress. What am I doing wrong here?
Is there any other way to simply set up SFTP using Ingress and a NodePort service in a multi-cluster deployment?
Here is the official document I am referring to for the set-up.
It looks like this isn't supported with Ingress, which is the reason this issue exists.
A possible solution could be to use a NodePort service for SFTP, as described in this document (a sketch follows below).
Otherwise, you need to run an HTTP server.
You can run an HTTP server that exposes the same files, perhaps with a sidecar container in the same pod.
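A minimal sketch of that NodePort approach, reusing the atmoz/sftp deployment from the question (note that targetPort points at the container's real SSH port, 22, instead of 9000):

apiVersion: v1
kind: Service
metadata:
  name: sftp-service
spec:
  type: NodePort
  selector:
    app: sftp
  ports:
    - name: sftp-port
      port: 22
      targetPort: 22
      nodePort: 30063
      protocol: TCP

A client would then connect with something like sftp -P 30063 user@<node-ip>.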

SFTP server is not accessible when deployed to Kubernetes (GKE)

The SFTP server is not accessible when exposed using a NodePort service and a Kubernetes Ingress. However, if the same deployment is exposed using a Service of type LoadBalancer, it works fine.
Below is the deployment file for SFTP in GKE, using the atmoz/sftp Dockerfile.
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: sftp
  labels:
    environment: production
    app: sftp
spec:
  replicas: 1
  minReadySeconds: 10
  template:
    metadata:
      labels:
        environment: production
        app: sftp
      annotations:
        container.apparmor.security.beta.kubernetes.io/sftp: runtime/default
    spec:
      containers:
        - name: sftp
          image: atmoz/sftp:alpine
          imagePullPolicy: Always
          args: ["user:pass:1001:100:upload"]
          ports:
            - containerPort: 22
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
          resources: {}
If I expose this deployment normally using a Kubernetes Service of type LoadBalancer like below:
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: production
  name: sftp-service
spec:
  type: LoadBalancer
  ports:
    - name: sftp-port
      port: 22
      protocol: TCP
      targetPort: 22
  selector:
    app: sftp
The above Service gets an external IP, which I can simply use in the command sftp xxx.xx.xx.xxx to get access using the pass password.
However, when I try to expose the same deployment using a GKE Ingress, it does not work. Below is the manifest for the Ingress:
# First I create a NodePort service to expose the deployment internally
---
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: production
  name: sftp-service
spec:
  type: NodePort
  ports:
    - name: sftp-port
      port: 22
      protocol: TCP
      targetPort: 22
      nodePort: 30063
  selector:
    app: sftp
# The Ingress has the SFTP service as its default backend
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress-2
spec:
  backend:
    serviceName: sftp-service
    servicePort: 22
  rules:
    - http:
        paths:
          # "http-service-sample" is a service exposing a simple hello-world app deployment
          - path: /sample
            backend:
              serviceName: http-service-sample
              servicePort: 80
After an external IP is assigned to the Ingress (I know it takes a few minutes to set up completely), xxx.xx.xx.xxx/sample starts working, but sftp -P 80 xxx.xx.xx.xxx doesn't.
Below is the error I get from the server:
ssh_exchange_identification: Connection closed by remote host
Connection closed
What am I doing wrong in the above set-up? Why is the LoadBalancer service able to allow access to the SFTP service, while the Ingress fails?
Routing any traffic other than the HTTP/HTTPS protocols through a Kubernetes Ingress is currently not fully supported (see docs).
You can try the workaround described here: Kubernetes: Routing non HTTP Request via Ingress to Container
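That workaround amounts to running the nginx ingress controller (instead of the GCE one) and declaring the raw TCP port in its tcp-services ConfigMap. A hedged sketch; the ConfigMap name and namespace depend on how the controller was installed:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port 22 -> namespace/service:port inside the cluster
  "22": default/sftp-service:22

The controller's own Service must also expose port 22 for this to be reachable from outside.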

Minikube Kubernetes: two pods and service

I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888), but I can't get the client to communicate with the server. In fact, if lucky-word-client communicates with lucky-word-server, the result is the word "Evviva"; otherwise, the word is "Default". When I run minikube service lucky-client in the terminal, the output is Default instead of Evviva. I want the client to communicate with the server through DNS. I read the guide https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ but without success. How can I modify the service or pods to establish the link between client and server?
This is the pod of lucky-word-client:
apiVersion: v1
kind: Pod
metadata:
  name: lucky-client
  namespace: default
spec:
  containers:
    - image: lucky-client-img
      imagePullPolicy: IfNotPresent
      name: lucky-client
This is the pod of lucky-word-server:
apiVersion: v1
kind: Pod
metadata:
  name: lucky-server
  namespace: default
spec:
  containers:
    - image: lucky-server-img
      imagePullPolicy: IfNotPresent
      name: lucky-server
This is the service through which lucky-word-client communicates with lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
    - protocol: TCP
      targetPort: 8080
      port: 80
  type: NodePort
If you want DNS-based service discovery for communicating with the server, follow the steps below:
Enable the kube-dns addon via the minikube addons enable kube-dns command. This will enable service discovery in your Kubernetes cluster.
Make sure the kube-dns addon is enabled using the minikube addons list command.
In your client application code, change the server URL endpoint to the following: http://lucky-server:8888. "lucky-server" is the metadata.name property of your Kubernetes server Service YAML definition.
Alternatively, instead of lucky-server you can use the fully qualified name lucky-server.default.svc.cluster.local in the server URL, since you are deploying your service in the default namespace (a quick resolution check follows below).
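To verify that the name resolves inside the cluster, a hedged check is to run a throwaway pod and look the Service up (this assumes the lucky-server Service from the next answer exists):

kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup lucky-server.default.svc.cluster.local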
You need a Service for your lucky-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
    - protocol: TCP
      targetPort: 8888
      port: 80
  type: NodePort

Kubernetes/GCE Ingress controller fails

I'm new to Kubernetes and I'm trying to do HTTP load balancing on Google Container Engine with TLS (using the included GCE Ingress controller). The error I get is repeatable even when following Google's official tutorial. For readability, I summarize the procedure in config.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
Then:
kubectl create -f config.yaml
export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nginx)
gcloud compute firewall-rules create allow-130-211-0-0-22 --source-ranges 130.211.0.0/22 --allow tcp:$NODE_PORT
curl <ip_of_load_balancer>
(I removed the target tags on the firewall rule so it applies to all instances.)
But I get a 502 Server Error, which according to the docs means it's likely still bootstrapping (but it always stays like this). I can see in the console that the backend is unhealthy.
According to the docs, to avoid this one needs:
a firewall rule (which is done above);
a Service that responds with a 200 (but I tested the nginx image locally, and the service via a general load balancer, and it works fine).
So what is the cause of this error, and how can I debug it further?
I left the cluster overnight and it is now working. It seems it either takes quite some time to bootstrap, or something was fixed on the Google Cloud side.
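For anyone who would rather debug the unhealthy backend than wait: the GCE load balancer's health check expects a 200 response and is derived from the container's readiness probe, so a hedged first step is to add one to the Deployment above (path / works here because nginx returns a 200 on it):

readinessProbe:
  httpGet:
    path: /
    port: 80

placed under the nginx container spec, and then kubectl describe ingress basic-ingress to inspect the backend health reported by the controller.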