Can't connect to Mosquitto Broker on public domain - Kubernetes

I have a Mosquitto broker on my Kubernetes cluster. I can connect to the broker on the private network and it works well.
But when we use a public domain (we use Sophos UTM 9), the client can't connect to the broker.
I'm new to Kubernetes. This is the mosquitto.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
spec:
  selector:
    matchLabels:
      app: mosquitto
  replicas: 1
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
      - name: mosquitto
        image: eclipse-mosquitto:v1.16.10
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 2Gi
        imagePullPolicy: Always
        ports:
        - containerPort: 1883
---
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  externalIPs:
  - xxx.xxx.xxx.xxx
  type: ClusterIP
  ports:
  - name: mqtt
    port: 1883
    targetPort: 1883
    protocol: TCP
  selector:
    app: mosquitto
I use Node.js to connect via the public domain. The Node.js code is:
var mqtt = require('mqtt');
var topic = 'test/topic'; // assumed topic; the original snippet did not define it
var client = mqtt.connect('mqtt://mydomain.com:1883');
client.on('connect', function () {
  client.subscribe(topic);
  console.log("Subscribed topic " + topic);
});
I wonder whether the problem is Kubernetes or Sophos UTM 9. Am I missing anything?
What do I have to do for Mosquitto on Kubernetes to be reachable via the public domain?
I am most grateful.

After testing your yaml file, I've concluded that your configuration is almost correct, with one exception:
The image you are using, eclipse-mosquitto:v1.16.10, does not exist. You can check all available tags here.
So the most probable issue is that your pod is not running. You can check it by running the command below and looking at the STATUS column.
$ kubectl get pods -l=app=mosquitto
NAME                        READY   STATUS    RESTARTS   AGE
mosquitto-c9dc57d59-98l8r   1/1     Running   0          5m53s
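If STATUS is anything other than Running (the broken image tag above would typically show ImagePullBackOff), the pod events usually say why. A quick sketch:

$ kubectl describe pod -l app=mosquitto | tail -n 20   # events are printed at the end
$ kubectl get events --sort-by=.metadata.creationTimestamp | tail -n 10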
Here is the yaml that worked for me. Note: I've removed the externalIPs and resource limits from the service and deployment for test purposes, and replaced the image with eclipse-mosquitto:1.6.10:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
spec:
  selector:
    matchLabels:
      app: mosquitto
  replicas: 1
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
      - name: mosquitto
        image: eclipse-mosquitto:1.6.10
        imagePullPolicy: Always
        ports:
        - containerPort: 1883
---
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  type: ClusterIP
  ports:
  - name: mqtt
    port: 1883
    targetPort: 1883
    protocol: TCP
  selector:
    app: mosquitto
After deploying, I tested it using a dnsutils container (you can find the spec here):
kubectl exec dnsutils -- sh -c 'apk update && apk add mosquitto-clients'
kubectl exec dnsutils -- mosquitto_pub -h mosquitto -t 'test/topic' -m 'upvoteIt'
Check the logs in the mosquitto pod:
kubectl logs mosquitto-xxxxx
1597829622: New client connected from 172.17.0.4 as mosqpub|88-dnsutils (p1, c1, k60).
1597829622: Client mosqpub|88-dnsutils disconnected.
If you want to see the message arrive, open a second terminal before the test and run this command to watch the message being received by the mosquitto server:
$ kubectl exec mosquitto-xxxxx -- mosquitto_sub -v -t 'test/topic'
test/topic upvoteIt
Where mosquitto-xxxxx is the name of your pod.
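On the public-domain part of the question: a ClusterIP service with externalIPs only works if traffic to that IP is actually routed to a cluster node. A common alternative (a sketch with placeholder values, not part of the test above) is to switch the service to NodePort and point the Sophos UTM 9 NAT rule at any node:

$ kubectl patch svc mosquitto -p '{"spec":{"type":"NodePort"}}'
$ kubectl get svc mosquitto   # note the assigned port, e.g. 1883:3xxxx/TCP
# Then forward mydomain.com:1883 to <node-ip>:<assigned-nodeport> in the UTM 9 firewall rule.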

Related

Why does my NodePort service change its port number?

I am trying to install Velero for k8s. During the installation, when installing MinIO, I changed its service type from ClusterIP to NodePort. My pods run successfully and I can also see the NodePort service is up and running.
master-k8s@masterk8s-virtual-machine:~/velero-v1.9.5-linux-amd64$ kubectl get pods -n velero -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE                       NOMINATED NODE   READINESS GATES
minio-8649b94fb5-vk7gv   1/1     Running   0          16m   10.244.1.102   node1k8s-virtual-machine   <none>           <none>

master-k8s@masterk8s-virtual-machine:~/velero-v1.9.5-linux-amd64$ kubectl get svc -n velero
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
minio   NodePort   10.111.72.207   <none>        9000:31481/TCP   53m
When I try to access my service, the port number changes from 31481 to 45717 by itself. Every time I correct the port number and hit enter, it changes back to the new port and I am not able to access my application.
This is the code from my MinIO service file.
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  type: NodePort
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    component: minio
What have I done so far?
I looked at the logs and everything shows success, no errors. I also tried it with a LoadBalancer service. With LoadBalancer the port does not change, but I am still not able to access the application.
I found nothing on Google about this issue.
I also checked the pods and services in all namespaces to see whether these port numbers are already in use. No services use these ports.
What do I want?
Can you please help me find out what causes my application to change its port? Where is the issue and how do I fix it? How can I access the application dashboard?
Update
This is the full yaml file. It may help to find my mistake.
apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: minio
  template:
    metadata:
      labels:
        component: minio
    spec:
      volumes:
      - name: storage
        emptyDir: {}
      - name: config
        emptyDir: {}
      containers:
      - name: minio
        image: minio/minio:latest
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /storage
        - --config-dir=/config
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9002
        volumeMounts:
        - name: storage
          mountPath: "/storage"
        - name: config
          mountPath: "/config"
---
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  # ClusterIP is recommended for production environments.
  # Change to NodePort if needed per documentation,
  # but only if you run Minio in a test/trial environment, for example with Minikube.
  type: NodePort
  ports:
  - port: 9002
    nodePort: 31482
    targetPort: 9002
    protocol: TCP
  selector:
    component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: velero
  name: minio-setup
  labels:
    component: minio
spec:
  template:
    metadata:
      name: minio-setup
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: config
        emptyDir: {}
      containers:
      - name: mc
        image: minio/mc:latest
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
        volumeMounts:
        - name: config
          mountPath: "/config"
Edit 2: Logs of pod
WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-25T00-19-54Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://10.244.1.108:9000 http://127.0.0.1:9000
Console: http://10.244.1.108:33045 http://127.0.0.1:33045
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
Edit 3: Logs of pod
master-k8s@masterk8s-virtual-machine:~/velero-1.9.5$ kubectl logs minio-8649b94fb5-qvzfh -n velero
WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-25T00-19-54Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://10.244.2.131:9000 http://127.0.0.1:9000
Console: http://10.244.2.131:36649 http://127.0.0.1:36649
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
You can set the nodePort number explicitly inside the port config so that it won't be assigned automatically.
Try this Service:
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  type: NodePort
  ports:
  - port: 9000
    nodePort: 31481
    targetPort: 9000
    protocol: TCP
  selector:
    component: minio
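A quick way to verify (a sketch; the filename is hypothetical): apply the service and confirm the nodePort stays pinned across restarts. Note from the pod logs above that the MinIO API listens on 9000, which is why this service targets 9000 rather than the 9002 used earlier.

$ kubectl apply -f minio-service.yaml
$ kubectl get svc minio -n velero   # PORT(S) should consistently show 9000:31481/TCP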

I exposed my pod in Kubernetes but I can't seem to establish a connection with it

I am trying to expose a deployment I made on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
  labels:
    app: debian
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debian
  strategy: {}
  template:
    metadata:
      labels:
        app: debian
    spec:
      containers:
      - image: agracia10/debian_bash:latest
        name: debian
        ports:
        - containerPort: 8006
        resources: {}
      restartPolicy: Always
status: {}
I decided to follow what is written here.
I try to expose the deployment using the following command:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
But when I try to access the service created by the expose command, using the URL provided by
minikube service deployment-test-8497d6f458-xxhgm --url
it throws an error when using Packet Sender to try to connect to the service:
packet sender log
I'm not really sure what the reason for this could be. I think it has something to do with the fact that when I get the services, it says <none> in the external IP field. Also, when I retrieve the node IP using minikube ip it gives an address, but minikube service --url gives the 127.0.0.1 address. In any case, using either one does not work.
It's not working due to a port configuration mismatch.
Your deployment's container is listening on 8006, but you exposed port 8080 and your target port is --target-port=80, so the traffic never reaches the container.
The ideal flow of traffic goes like:
service (NodePort, ClusterIP or any) > Deployment > Pods
Below is an example for the deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app-server-instance
  labels:
    app: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
      - name: blog-app-server
        image: agracia10/debian_bash:latest
        ports:
        - containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
  labels:
    app: blog-app
spec:
  selector:
    app: blog-app
  type: NodePort
  ports:
  - port: 80
    nodePort: 31364
    targetPort: 8006
    protocol: TCP
    name: http
So the things I have changed are the image and the target port.
Once your NodePort service is up and running, you send the request on port 80 (inside the cluster) or 31364 (on the node);
it will redirect the request internally to the target port, which is 8006 on the container.
With this command you exposed your deployment on the wrong target port:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
Ideally it should be 8006.
As far as I know, the simplest way to expose a deployment as a service is to run this command; you don't expose the pod but the deployment:
kubectl expose deployment deployment-test --port 80
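For completeness, a sketch of the expose command with the flags matched to the actual container port (8006), plus the minikube helper to get a reachable URL:

$ kubectl expose deployment deployment-test --type=NodePort --port=80 --target-port=8006
$ minikube service deployment-test --url   # prints a URL such as http://127.0.0.1:4xxxx
# With the docker driver, keep this command running; the tunnel only exists while it runs.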

In Kubernetes services, not able to use ports 6000 and 8081

I have a Kubernetes service (a Python Flask application) exposed publicly on port 6000 using the LoadBalancer type.
When I am using kubectl to send the YAML file to Kubernetes by running the following command:
kubectl apply -f deployment.yaml
I got the status of the service as running. When I navigate to http://localhost:6000, I am not able to see the “Hello from Python!” message.
I tried using port 8081, and that is also not working. But when I use port 8088, it works.
The deployment.yaml file which I am using:
apiVersion: v1
kind: Service
metadata:
  name: hello-python-service
spec:
  selector:
    app: hello-python
  ports:
  - protocol: "TCP"
    port: 6000
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-python
spec:
  selector:
    matchLabels:
      app: hello-python
  replicas: 4
  template:
    metadata:
      labels:
        app: hello-python
    spec:
      containers:
      - name: hello-python
        image: hello-python:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
I am using the following example:
Kubernetes Using Python
Why are some ports like 6000 or 8081 not working, while other ports like 8088 or 9000 are?
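One way to tell whether the Service or something outside the cluster is at fault (a sketch; the names come from the yaml above) is to bypass the LoadBalancer with a port-forward:

$ kubectl port-forward service/hello-python-service 8088:6000
$ curl http://localhost:8088   # if this prints the greeting, the Service itself is fine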

Kubernetes cannot ping another service

DNS resolution looks fine, but I cannot ping my service. What could be the reason?
From another pod in the cluster:
$ ping backend
PING backend.default.svc.cluster.local (10.233.14.157) 56(84) bytes of data.
^C
--- backend.default.svc.cluster.local ping statistics ---
36 packets transmitted, 0 received, 100% packet loss, time 35816ms
EDIT:
The service definition:
apiVersion: v1
kind: Service
metadata:
labels:
app: backend
name: backend
spec:
ports:
- name: api
protocol: TCP
port: 10000
selector:
app: backend
The deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 1
selector:
matchLabels:
run: backend
replicas: 1
template:
metadata:
labels:
run: backend
spec:
containers:
- name: backend
image: nha/backend:latest
imagePullPolicy: Always
ports:
- name: api
containerPort: 10000
I can curl my service from the same container:
kubectl exec -it backend-7f67c8cbd8-mf894 -- /bin/bash
root@backend-7f67c8cbd8-mf894:/# curl localhost:10000/my-endpoint
{"ok": "true"}
It looks like the endpoint on port 10000 does not get exposed though:
kubectl get ep
NAME      ENDPOINTS   AGE
backend   <none>      2h
Ping doesn't work with a service's cluster IP like 10.233.14.157, as it is a virtual IP. You should be able to ping a specific pod, but not a service.
You can't ping a service. You can curl it.
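A minimal check sequence along those lines, using the names from the question:

$ kubectl get endpoints backend   # <none> means the Service selector matches no pods
$ kubectl exec -it backend-7f67c8cbd8-mf894 -- curl backend:10000/my-endpoint
# Note: the Service selects app=backend while the pods are labeled run=backend,
# which would explain the empty endpoints shown in the question.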

Kafka deployment on minikube

Hi, I am new to Kubernetes. I am using a minikube single-node cluster for local development and testing.
Host: Ubuntu 16.04 LTS.
Minikube: VirtualBox running the minikube cluster
My requirement is to deploy Kafka and ZooKeeper on minikube and use them to produce and consume messages.
I followed this link and successfully deployed it on minikube; the details are below:
$ kubectl get services
NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
kafka-service   10.0.0.15    <pending>     9092:30244/TCP               46m
kubernetes      10.0.0.1     <none>        443/TCP                      53m
zoo1            10.0.0.43    <none>        2181/TCP,2888/TCP,3888/TCP   50m
zoo2            10.0.0.226   <none>        2181/TCP,2888/TCP,3888/TCP   50m
zoo3            10.0.0.6     <none>        2181/TCP,2888/TCP,3888/TCP   50m

$ kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
kafka-deployment-3583985961-f2301         1/1     Running   0          48m
zookeeper-deployment-1-1598963595-vgx1l   1/1     Running   0          52m
zookeeper-deployment-2-2038841231-tdsff   1/1     Running   0          52m
zookeeper-deployment-3-2478718867-5vjcj   1/1     Running   0          52m
$ kubectl describe service kafka-service
Name: kafka-service
Namespace: default
Labels: app=kafka
Annotations: <none>
Selector: app=kafka
Type: LoadBalancer
IP: 10.0.0.15
Port: kafka-port 9092/TCP
NodePort: kafka-port 30244/TCP
Endpoints: 172.17.0.7:9092
Session Affinity: None
Events: <none>
And I have set KAFKA_ADVERTISED_HOST_NAME to the minikube IP (192.168.99.100).
Now as the message producer I am using $ cat textfile.log | kafkacat -b $(minikube ip):30244 -t mytopic. It is not publishing the message, and gives the output below:
% Auto-selecting Producer mode (use -P or -C to override)
% Delivery failed for message: Local: Message timed out
Can anyone help with how to publish and consume messages?
I know that this is quite an old post. Were you able to resolve and run Kafka + ZooKeeper within minikube? I was able to run a simple single-cluster Kafka and ZooKeeper deployment successfully using minikube v0.17.1, and to produce and consume messages using the kafkacat producer and consumer respectively. I ran these successfully on Ubuntu and Mac OS X. The deployment and service yamls are below:
zookeeper-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
zookeeper-service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
spec:
  type: NodePort
  ports:
  - name: zookeeper-port
    port: 2181
    nodePort: 30181
    targetPort: 2181
  selector:
    app: zookeeper
kafka-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "192.168.99.100"
        - name: KAFKA_ADVERTISED_PORT
          value: "30092"
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: 192.168.99.100:30181
        - name: KAFKA_CREATE_TOPICS
          value: "test-topic:1:1"
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        name: kafka
        ports:
        - containerPort: 9092
kafka-service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-service
  name: kafka-service
spec:
  type: NodePort
  ports:
  - name: kafka-port
    port: 9092
    nodePort: 30092
    targetPort: 9092
  selector:
    app: kafka
You can test your deployment by installing the kafkacat client and running the following commands in separate terminal windows:
echo "Am I receiving this message?" | kafkacat -P -b 192.168.99.100:30092 -t test-topic
kafkacat -C -b 192.168.99.100:30092 -t test-topic
% Reached end of topic test-topic [0] at offset 0
Am I receiving this message?
I was able to successfully run this on minikube versions v0.17.1 and v0.19.0. If you want to run this on minikube versions v0.21.1 and v0.23.0, please refer to my reply to the post here: Kafka inaccessible once inside Kubernetes/Minikube
Thanks.
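If the producer still times out, it can help to confirm the topic exists inside the broker. A sketch, assuming the wurstmeister/kafka image keeps the Kafka scripts on its PATH, and using the ZooKeeper address from the deployment above:

$ kubectl exec -it <kafka-pod-name> -- kafka-topics.sh --zookeeper 192.168.99.100:30181 --list
# test-topic should be listed; if not, re-check KAFKA_ZOOKEEPER_CONNECT and KAFKA_CREATE_TOPICS.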
I followed the steps shown below.
$ git clone https://github.com/d1egoaz/minikube-kafka-cluster
$ cd minikube-kafka-cluster
$ kubectl apply -f 00-namespace/
$ kubectl apply -f 01-zookeeper/
$ kubectl apply -f 02-kafka/
$ kubectl apply -f 03-yahoo-kafka-manager/
$ kubectl get svc -n kafka-ca1 (get the port of kafka manager like 31445)
$ minikube ip
Put http://minikube-ip:port in the browser to get the UI page of Kafka Manager.
source https://technology.amis.nl/2019/03/24/running-apache-kafka-on-minikube/
You used a service with type = LoadBalancer, which is meant for cloud providers (you can see the service waiting for an external IP address in the pending state, which will never be assigned on minikube). In your case you should try NodePort.
Going off the previous answer and your comment, here is very basic sample code for a Kafka service with type=NodePort.
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
  type: NodePort
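As a usage sketch (the filename is hypothetical): apply it, read back the nodePort Kubernetes assigned, and point kafkacat at it:

$ kubectl apply -f kafka-service.yml
$ kubectl get svc kafka-service   # e.g. 9092:31234/TCP, where 31234 is the assigned nodePort
$ echo "hello" | kafkacat -P -b $(minikube ip):31234 -t test-topic   # substitute your nodePort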