MQTT Mosquitto bridging configuration - raspberry-pi

I am trying to bridge my Mosquitto broker running on a Raspberry Pi to the cloud Mosquitto MQTT broker (test.mosquitto.org:1883).
When I publish from a client connected to the cloud broker, I do not get the message on my client connected to the local broker. However, I do get messages on the client connected to the cloud Mosquitto broker when publishing from the client connected to the local MQTT broker. I don't have any firewalls blocking messages.
My mosquitto.conf file looks like this:
connection bridge-01
address test.mosquitto.org:1883
topic # out 0
topic # in 0
And I also have
listener 1883
allow_anonymous true
in my config
How do I solve this issue? Where am I going wrong?

Global wildcard subscriptions (subscribing to #) are disabled on the test.mosquitto.org broker because they generate too much load.
For example:
$ mosquitto_sub -h test.mosquitto.org -v -t '#'
All subscription requests were denied.
You can change the topic # in 0 line to pull in only the topics you are actually using.
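As a rough sketch (sensors/# and commands/# below are placeholder topics, not values from the question), the bridge section could be narrowed like this:
connection bridge-01
address test.mosquitto.org:1883
topic sensors/# out 0
topic commands/# in 0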
Please also remember that this broker is really only meant for testing; you should not use it as a free cloud relay for anything long term.

Related

Mqtt broker and mqtt bridge on same system

I am working on a project where I have to connect a local MQTT broker (Mosquitto) and a cloud-based MQTT broker via an MQTT bridge. Mosquitto (the local broker) is running on a Raspberry Pi 4, and I also want to run the Mosquitto MQTT bridge on the same Raspberry Pi. So the question is: can I run the local MQTT broker (Mosquitto) and the MQTT bridge simultaneously on a single system, the Raspberry Pi 4? If yes, please tell me how to do it.
You only need to run a single MQTT broker (e.g. mosquitto). This will act as the local broker and can also be configured to bridge out to a remote broker.
The bridge can be configured to
mirror messages out to the remote broker
mirror messages in from the remote broker
or both
depending on what you need. Details of how to configure the bridge can be found in the mosquitto docs here.
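As a minimal sketch (the connection name, remote address, and topics below are placeholders, not values from the question), a bridge section in mosquitto.conf could look like this:
connection cloud-bridge
address broker.example.com:1883
topic sensors/# out 0
topic commands/# in 0
The out line mirrors local messages to the remote broker and the in line mirrors remote messages back to the local broker; keep one or both depending on the direction you need.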
But if you want to run multiple brokers on the same machine, that is also perfectly possible; they will just need to bind to different ports, as only one of them can bind to 1883.

unable to connect to kafka broker (via zookeeper) using Conduktor client

I am able to connect successfully to a local Kafka broker/cluster running locally (dockerized) using Conduktor, but when trying to connect to a Kafka cluster running on a Unix VM, I get the error below.
Error:
"The broker [...] is reachable but Kafka can't connect. Ensure you have access to the advertised listeners of the the brokers and the proper authorization"
Appreciate any assistance.
running locally (dockerized)
When running in Docker, you need to ensure that the ports are accessible from outside your container. To verify this, try telnet <ip> <port> and check whether you are able to connect.
Since the error message says the broker is reachable, I assume you are able to telnet to the broker successfully.
Next, check the broker config property called advertised.listeners. Here you need to specify the IP:port combination, where the IP is the one you will be using in your client program, i.e. Conduktor.
An example of that would be:
advertised.listeners=PLAINTEXT://1.2.3.4:9092
and then restart your broker and reconnect. If you are using SSL, you need to provide some extra configuration; see Configuring Kafka brokers for more.
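As a rough illustration of that extra SSL configuration (the IP, port, keystore paths, and passwords below are placeholders), the broker's server.properties would gain lines along these lines:
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://1.2.3.4:9092,SSL://1.2.3.4:9093
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit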
Try adding the Kafka server to /etc/hosts (Unix-like) or C:\Windows\System32\drivers\etc\hosts (Windows) in the form kafka_server_ip kafka_server_name_in_dns (e.g. 10.10.0.1 kafka).

How to Setup a Public Kafka Broker Using a Dynamic DNS?

I configured a Kafka cluster with 3 brokers, using 3 ZooKeeper instances, one alongside each broker. The figure below presents a graphical representation of my cluster.
A producer and consumer test in the same network using the host 192.168.0.10 worked perfectly via kafka-console-producer and kafka-console-consumer commands.
Based on that context, when I try to produce some data via kafka-console-producer.sh --broker-list DYNAMIC_DNS_ADDR:30192,DYNAMIC_DNS_ADDR:30292,DYNAMIC_DNS_ADDR:30392 --topic twitter_tweets through the Internet, I am getting the following error:
[2018-12-10 09:59:20,772] ERROR Error when sending message to topic twitter_tweets with key: null, value: 16 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for twitter_tweets-1: 1505 ms has passed since batch creation plus linger time
[2018-12-10 09:59:22,273] WARN [Producer clientId=console-producer] Connection to node 1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Broker listeners are configured with the following properties:
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9443
advertised.listeners=PLAINTEXT://192.168.0.241:9092,SSL://192.168.0.241:9443
Obviously, the IP address in the advertised.listeners property is different for each broker. I am using CentOS 6.10 and Kafka 2.0.1 for this setup. A telnet test worked. Another forward to a Kafka REST Proxy port works via the Internet and lists all topics.
See https://rmoff.net/2018/08/02/kafka-listeners-explained/
You need two listeners: one responding to and advertising the internal addresses, and one for the external address.
The key thing is that the listener that your client connects to will return the host address and port of that listener.
At the moment you're mapping your external address onto your internal one, and your external traffic is thus hitting the internal listener.
You need something like this (varying the IP/hostname of the aws_internal_listener as required per broker):
KAFKA_LISTENERS: aws_internal_listener://192.168.0.241:9092,external_listener://192.168.0.241:29092
KAFKA_ADVERTISED_LISTENERS: aws_internal_listener://192.168.0.241:9092,external_listener://DYNAMIC_DNS_ADDR:29092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: aws_internal_listener:PLAINTEXT,external_listener:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: aws_internal_listener
Then your port forwarder for DYNAMIC_DNS_ADDR should redirect connections to 29092 on the AWS node. The key thing is that external connections should not end up at the listener port on the host that matches the internal listener (which advertises an internal 192.168.0 address).
Use kafkacat -L -b DYNAMIC_DNS_ADDR:29092 to debug and validate your config, as described in the article here.
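For instance, comparing the metadata returned by each listener (using the addresses from the question; run the first from inside the LAN and the second over the Internet) quickly shows whether the advertised addresses are right:
$ kafkacat -L -b 192.168.0.241:9092
$ kafkacat -L -b DYNAMIC_DNS_ADDR:29092
The first should report the internal 192.168.0.x address for each broker, while the second should report DYNAMIC_DNS_ADDR with the external ports.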

Can't reach kafka producer with node-red

I have installed Node-RED on a Raspberry Pi 3 to collect data from sensors and then store it in Kafka, but now I have an issue with the Kafka producer node.
I've set up a Kafka server on my laptop that works correctly from the console: if I send messages in the Kafka producer console, I receive them correctly in the consumer console.
Unfortunately, when I try to inject a timestamp into the Kafka producer node in Node-RED on the Raspberry Pi, the server gives no response.
The Node-RED debug page says: "BrokerNotAvailableError: Broker not available"
In the producer node's ZKQuorum field I've typed the IP of the laptop and set the port to 9092, as I saw in the example on the npm site.
I'm sure the topic is correct.
I'm sure ZooKeeper and the Kafka server are both running. Indeed, if at the same time I try to use Kafka from the laptop console, it works fine.
I've also tried to reach the Kafka port with telnet: connections are accepted.
I've already posted the same question on the Node-RED community, without success so far.
Any hint about this issue?
UPDATE:
I've tried to implement a Python function in Node-RED to send a simple message to the Kafka producer, and I obtained a deeper error log in:
/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 808
I opened the file, and at line 808 there is a function with this comment:
Could that be helpful?
You have to set the listeners property in the Kafka server properties to the IP address of your laptop. Also try changing zookeeper.connect to the actual IP, not localhost.
Try this property in etc/kafka/server.properties: listeners=PLAINTEXT://<your ip here>:<kafka port here>. You will have to restart Kafka for this to take effect.
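As a quick way to rule out the Node-RED node itself, a minimal kafka-python check run from the Pi might look like this (the laptop IP 192.168.1.20 and the topic test-topic are placeholders, not values from the question):
from kafka import KafkaProducer

# Connect straight to the broker's advertised listener on the laptop
producer = KafkaProducer(bootstrap_servers="192.168.1.20:9092")
producer.send("test-topic", b"hello from the pi")
producer.flush()  # block until the message is actually delivered
If the KafkaProducer constructor fails with NoBrokersAvailable, the broker's listeners/advertised.listeners settings (or the network) are the problem rather than Node-RED.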

How to fetch messages from Kafka hosted in a VirtualBox VM

I tried to fetch messages with a Java client from the host OS. I configured a bridged network between the host and the guest; Kafka is running, but the Java client gets stuck when it tries to do ConsumerRecords<String, String> records = consumers.poll(100);
My Kafka in the guest OS is listening on localhost:9092.
Could you share your Consumer properties?
There are a couple of properties to validate in your server.properties file: listeners and advertised.listeners. You can check this: https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whycan'tmyconsumers/producersconnecttothebrokers?
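A common fix with a bridged VirtualBox network is to bind to all interfaces and advertise the guest's bridged IP (192.168.1.50 below is a placeholder for whatever LAN IP the guest actually has):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.1.50:9092
If the broker only listens on (and therefore advertises) localhost:9092, it tells the Java client on the host to connect back to localhost, which on the host points at the host itself, so the consumer typically ends up stuck in poll() without data.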