I have the IBM tutorial on using Arduino+Bluemix up and running. I would now like to make a decision in the cloud and let the Arduino subscribe to a topic containing the decision message. For this, I am trying to use the IBM IoT out node in my Node-RED editor. However, I am not sure how to configure this node.
Are there any tutorials that cover this use case (IBM IoT out node + Arduino, MQTT), or documentation on the node's properties?
Thanks.
NK
This is a picture of the IoT output node I'm using on the Coursera IoT course; it works for me sending data to the Sense HAT plugged onto my Pi.
On the client side I'm using Node-RED (easy to set up on a Pi :-) with an IoT input node that does the subscription to the command.
The docs describe how to subscribe to commands HERE, under the heading Subscribing to Commands.
The subscription topic should look like iot-2/cmd/[command_id]/fmt/[format_string], so for my Pi to subscribe to the command in the picture, the topic should be iot-2/cmd/display/fmt/json. Although I haven't tried that myself, it should work.
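If it helps, here is a rough sketch of that same subscription done from a plain Python Paho client instead of a Node-RED input node. The organization ID, device type, device ID and token are placeholders for your own registered device, and the host/client-ID/username values follow the usual Watson IoT conventions, so double-check them against the docs:

```python
import json
import paho.mqtt.client as mqtt

ORG = "myorg"                 # placeholder Watson IoT organization ID
DEVICE_TYPE = "raspberrypi"   # placeholder device type
DEVICE_ID = "pi001"           # placeholder device ID
AUTH_TOKEN = "device-token"   # placeholder device auth token

COMMAND_TOPIC = "iot-2/cmd/display/fmt/json"  # [command_id]=display, [format_string]=json

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection is established
    client.subscribe(COMMAND_TOPIC)

def on_message(client, userdata, msg):
    command = json.loads(msg.payload)
    print("Command received on", msg.topic, ":", command)

client = mqtt.Client(client_id="d:{}:{}:{}".format(ORG, DEVICE_TYPE, DEVICE_ID))
client.username_pw_set("use-token-auth", AUTH_TOKEN)
client.on_connect = on_connect
client.on_message = on_message
client.connect("{}.messaging.internetofthings.ibmcloud.com".format(ORG), 1883)
client.loop_forever()
```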
You need to configure the IBM IoT out node to publish messages to the device ID of your Arduino. The msg.payload will be the message you send to your Arduino. You will also need to set the topic, which contains your deviceId as well. Then you will need to have your Arduino subscribe to that topic to receive the messages.
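For reference, this is roughly what the IBM IoT out node does under the hood: an application client publishing a command to the device. A minimal Python Paho sketch, assuming paho-mqtt, with the organization, API key/token, device type/ID and command name all as placeholders:

```python
import json
import paho.mqtt.client as mqtt

ORG = "myorg"                    # placeholder organization ID
API_KEY = "a-myorg-abcdefghij"   # placeholder application API key
API_TOKEN = "application-token"  # placeholder application auth token
DEVICE_TYPE = "arduino"          # placeholder device type
DEVICE_ID = "arduino001"         # placeholder device ID

# Applications publish commands on iot-2/type/<type>/id/<id>/cmd/<command_id>/fmt/<format>
topic = "iot-2/type/{}/id/{}/cmd/display/fmt/json".format(DEVICE_TYPE, DEVICE_ID)

client = mqtt.Client(client_id="a:{}:myapp".format(ORG))
client.username_pw_set(API_KEY, API_TOKEN)
client.connect("{}.messaging.internetofthings.ibmcloud.com".format(ORG), 1883)
client.publish(topic, json.dumps({"message": "turn LED on"}), qos=0)
client.disconnect()
```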
Quick question. Is there a way to send a message from one microservice using Kafka and receive it on another microservice? I've seen some articles on GitHub, DZone, etc., but everyone is using Docker, which is not supported on my PC (I'm a Windows 10 Home loser :)
and Docker Toolbox is ...
Thanks for the help.
Regards.
"Microservice" != Docker
Yes, you can use Kafka to communicate between any two applications, given there are compatible Kafka clients for them.
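A minimal sketch with the kafka-python library (any Kafka client in your services' languages would work the same way); the broker address and the "orders" topic are placeholders:

```python
from kafka import KafkaProducer, KafkaConsumer

# Service A: produce a message to the "orders" topic
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", b'{"order_id": 42}')
producer.flush()

# Service B (normally a separate process): consume from the same topic
consumer = KafkaConsumer("orders",
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=5000)
for record in consumer:
    print(record.value)
```

No Docker is needed for this; a plain Kafka install on the host (or a remote broker you can reach) is enough.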
I have some IoT devices that occasionally need to be updated, based on configuration done in web or mobile clients. So the devices need the capability to be updated from a configuration.
I have the following architecture, where clients communicate over HTTPS with an API gateway. This gateway is responsible for fetching data from several microservices that interact with Kafka and some databases.
In this context, is it a good idea to create a Kafka consumer on the IoT devices that consumes messages from a Kafka configuration topic?
Based on each new message received on this topic, the IoT device would be responsible for applying the change to its configuration.
Any advice?
Usually, IoT devices have strong CPU/RAM and/or battery restrictions. The most widely used solution for messaging over IoT is MQTT, and https://mosquitto.org/ is currently the most widespread MQTT broker, so I would try to use Mosquitto (https://mosquitto.org/) with the IoT devices and link it with Kafka through the Confluent MQTT Proxy; you have more information at https://www.confluent.io/confluent-mqtt-proxy/
It is also not difficult to create your own "MQTT proxy" in Python (or the language you prefer).
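For example, a homemade bridge can be as small as a Kafka consumer that republishes each record over MQTT; a rough sketch assuming the kafka-python and paho-mqtt libraries, with all host and topic names as placeholders:

```python
import paho.mqtt.client as mqtt
from kafka import KafkaConsumer

KAFKA_BOOTSTRAP = "kafka:9092"           # placeholder broker address
KAFKA_TOPIC = "device-configuration"     # placeholder Kafka configuration topic
MQTT_BROKER = "mosquitto"                # placeholder MQTT broker host
MQTT_TOPIC = "devices/config"            # placeholder MQTT topic the devices subscribe to

mqtt_client = mqtt.Client(client_id="kafka-mqtt-bridge")
mqtt_client.connect(MQTT_BROKER, 1883)
mqtt_client.loop_start()

consumer = KafkaConsumer(KAFKA_TOPIC,
                         bootstrap_servers=KAFKA_BOOTSTRAP,
                         group_id="mqtt-bridge")

# Forward every configuration record from Kafka to the MQTT broker,
# where the constrained IoT devices receive it as a normal MQTT message.
for record in consumer:
    mqtt_client.publish(MQTT_TOPIC, record.value, qos=1)
```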
Kafka does not push. Consumers poll.
You can embed Kafka consumers in IoT devices, yes (assuming you are able to deploy such apps onto them); however, MQTT is often documented as more widely used in those environments, and you could route Kafka events to an MQTT broker through various methods.
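To make the pull model concrete, a device-side consumer would look roughly like this (kafka-python, placeholder names, and apply_configuration is a hypothetical device-local handler):

```python
from kafka import KafkaConsumer

consumer = KafkaConsumer("device-configuration",        # placeholder topic
                         bootstrap_servers="kafka:9092",
                         group_id="device-42")

while True:
    # The consumer polls; nothing is pushed to it. poll() returns {} when no records are available.
    batch = consumer.poll(timeout_ms=1000)
    for records in batch.values():
        for record in records:
            apply_configuration(record.value)  # hypothetical handler that updates the device config
```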
I have Kafka installed on an Ubuntu server, and Node-RED is on my personal laptop. I want to send data from Node-RED to a Kafka topic.
I tried using the Kafka node in Node-RED to connect, but I am getting an error like "Client is not a constructor". I am also a bit confused about the listeners and advertised.listeners configuration. How should I configure the server.properties file for this, and which nodes should I include in the Node-RED flow to achieve it? Please suggest some approaches.
I expect that if I send a message from Node-RED, I should be able to see it on the Kafka topic.
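On the listeners question: the usual pattern for a broker that remote clients (like the laptop here) connect to is to bind on all interfaces and advertise the externally reachable address. A hedged sketch of the relevant server.properties lines on the Ubuntu server, with <server-ip> as a placeholder for the address the laptop can actually reach:

```properties
# Interface/port the broker binds to on the server itself
listeners=PLAINTEXT://0.0.0.0:9092
# Address handed back to clients; remote clients must be able to resolve and reach it
advertised.listeners=PLAINTEXT://<server-ip>:9092
```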
I am trying to build a CDC pipeline using DB2 -> IBM CDC -> Kafka,
and I am trying to figure out the right way to set this up.
I have tried the following:
1. Set up a 3-node Kafka cluster on Linux, on-prem.
2. Installed the IIDR CDC software on Linux, on-prem, using the setup-iidr-11.4.0.1-5085-linux-x86.bin file. The CDC instance is up and running.
The various online documentation suggests installing the IIDR Management Console to configure the source datastore, the CDC server configuration, and the Kafka subscription configuration to build the pipeline.
Currently I do not have the Management Console installed.
A few questions on this:
1. Is there any alternative to the IBM CDC Management Console for setting up the CDC-to-Kafka pipeline?
2. How can I get the IIDR Management Console? And if we install it on our local Windows desktop and try to connect to CDC/Kafka, which are on remote Linux servers, will it work?
3. Is there any other method to set up data ingestion from IIDR CDC to Kafka?
I am fairly new to CDC/IIDR, please help!
I own the development of the IIDR Kafka target for our CDC Replication product.
Management Console is the best way to set up the subscription initially. You can install it on a Windows box.
Technically, I believe you can use our scripting language called CHCCLP to set up a subscription as well, but I recommend using the GUI.
Here are links to our resources on our IIDR (CDC) Kafka Target. Search for the "Kafka" section.
"https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W8d78486eafb9_4a06_a482_7e7962f5ac59/page/IIDR%20Wiki"
An example of setting up a subscription and replicating is this video
https://ibm.box.com/s/ur8jokg6tclsx5fcav5g86a3n57mqtd5
Management Console and Access Server can be obtained from IBM Fix Central.
I have installed MC/Access Server on my VM and on my personal Windows box to use against my Linux VMs. You will need connectivity, of course.
You can definitely follow up with our Support and they'll be able to sort you out. Plus we have docs in our Knowledge Center on MC, starting here: https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.mcadminguide.doc/concepts/overview_of_cdc.html
You'll find our Kafka target is very flexible: it comes with five different formats for writing data into Kafka, and you can choose to capture data in an audit format, or in the Kafka-compaction-compatible form of a key with a null value for deletes.
Additionally, you can use the product to write several records to several different topics, in several formats, from a single insert operation. This is useful if some of your consumer apps want JSON and others Avro binary. You can also use this to put all the data into more secure topics, and write out just some of the data to topics that more people have access to.
We even have customers who encrypt columns in flight when replicating.
Finally, the product's transformations can be parallelized even if you choose to use only one producer to write out data.
One more finally: we additionally provide the option to use a special consumer that restores database ACID semantics for data written into Kafka and spread across topics and partitions. It re-orders the data; we call it the transactionally consistent consumer. It provides operation order, bookmarks for restarting applications, and allows parallelism for performance while still giving ordered, exactly-once, deduplicated consumption of data.
From my talk at the Kafka Summit...
https://www.confluent.io/kafka-summit-sf18/a-solution-for-leveraging-kafka-to-provide-end-to-end-acid-transactions
I am using an Eclipse Paho client to send MQTT messages to a Mosquitto broker. The payload is in JSON format. The broker parses the payload, updates it with some more information, and publishes it to a subscriber. The subscriber in my case is a BDAS/Spark instance.
The client, broker, and Spark instance are running on different boxes.
In this setup I want to integrate my Mosquitto broker with MongoDB. I tried to do it with Node-RED but was not successful.
Could you point me to some suggestions on this?
If Mosquitto is not a hard requirement, you could also use an MQTT broker with a plugin system (like HiveMQ) to do this. You can see an example architecture in this blog post.
It should be pretty trivial to write such a plugin for HiveMQ; you only need to implement the OnPublishCallback (see the documentation).
An example where you can start is this GitHub repository.
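If Mosquitto does stay in the picture, another common route is a small external subscriber process that does the MongoDB insert directly; a rough sketch assuming the paho-mqtt and pymongo libraries, with the topic, database and collection names as placeholders:

```python
import json
import paho.mqtt.client as mqtt
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")
collection = mongo["iot"]["readings"]          # placeholder database/collection

def on_connect(client, userdata, flags, rc):
    client.subscribe("sensors/#")              # placeholder topic filter

def on_message(client, userdata, msg):
    # Store each JSON payload as one MongoDB document, keeping the topic for reference
    document = json.loads(msg.payload)
    document["topic"] = msg.topic
    collection.insert_one(document)

client = mqtt.Client(client_id="mqtt-mongo-bridge")
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)              # placeholder Mosquitto host
client.loop_forever()
```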