I want to activate the Kafka-Spark pipeline for the ThingsBoard platform (Community Edition).
According to the Stack Overflow question "Couldn't able to find plugins in ThingsBoard 2.0.3 Home screen", this can now be done through rule chains, since the Plugins section has been removed. However, I can't work out how to configure Kafka using rule chains, and I haven't found complete documentation for it. So I need help with that.
I figured it out. It can be done easily by following this guide: "https://thingsboard.io/docs/samples/analytics/kafka-streams/"
The catch is that with ThingsBoard CE you can push data into a Kafka topic, but to fetch data back from Kafka you need the ThingsBoard Professional Edition integration.
The alternative to ThingsBoard PE is to write your own REST API script that pushes the insights back into ThingsBoard.
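For illustration, a minimal sketch of such a script, assuming the computed insight should land as device telemetry via ThingsBoard's HTTP device API; the host, access token, and field name are placeholders:

    import requests

    # Placeholders -- substitute your ThingsBoard host and the device's access token.
    TB_HOST = "http://localhost:8080"
    DEVICE_TOKEN = "DEVICE_ACCESS_TOKEN"

    # Push a computed insight back into ThingsBoard as device telemetry,
    # using the HTTP device API: POST /api/v1/{token}/telemetry
    insight = {"avg_temperature": 23.7}  # hypothetical result from the Spark job
    resp = requests.post(f"{TB_HOST}/api/v1/{DEVICE_TOKEN}/telemetry", json=insight)
    resp.raise_for_status()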
I am trying to create a new Process Group from the latest version of one of the Process Groups in my NiFi Registry. I want to do this via the REST API, but I cannot find a REST call that works.
As suggested in one of the online forums, I tried using Chrome developer tools to inspect the REST calls, but with developer tools open, the drag-and-drop feature in the UI stops working! I don't know what I'm doing wrong in the developer tools.
There should be no reason why dev tools interferes with the application. Here is the request shown in dev tools when creating a process group and selecting to import it from the registry.
The content of the request should be the same as for creating a regular process group, except that the version control information is specified:
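The exact request body isn't reproduced here, but here's a hedged sketch of the equivalent call, with every ID a placeholder you'd look up in your own NiFi and NiFi Registry:

    import requests

    # All IDs are placeholders -- look them up in your NiFi and NiFi Registry.
    NIFI = "http://localhost:8080/nifi-api"
    PARENT_PG = "root"  # or the UUID of the parent process group

    payload = {
        "revision": {"version": 0},
        "component": {
            "position": {"x": 100.0, "y": 100.0},
            "versionControlInformation": {
                "registryId": "REGISTRY_CLIENT_ID",
                "bucketId": "BUCKET_ID",
                "flowId": "FLOW_ID",
                "version": 3,  # the flow version to import
            },
        },
    }

    # Same endpoint as creating a regular process group; the extra
    # versionControlInformation makes NiFi import the flow from the registry.
    resp = requests.post(f"{NIFI}/process-groups/{PARENT_PG}/process-groups", json=payload)
    resp.raise_for_status()
    print(resp.json()["id"])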
There's lots of documentation and a kludgy console to set up continuous deployment in Cloud Foundry, but I haven't found any documentation on what the artifacts inside a repository need to be.
I don't want to cut-and-paste flows from the Node-RED editor. If that's the only way, then IBM is not ready for prime time. I'm also aware that almost everything about my flows lives in the Cloudant nodered db.
A Node-RED application is more than the flows, though. What about the _design docs for my dbs?
I need device info and other stuff from the Watson console, Cloudant info and my flows packaged up into something deployable.
Has anyone scripted this?
What I mean by this is that I can clone a Docker project, an npm project, and all sorts of projects that implement a build->test->push mechanism. They employ a configuration script of some sort (e.g. package.json) and contain a bunch of source files for the actual application, test scripts, db scripts, whatever is necessary to deploy the application and its environment onto a host. I see lots of documentation on the toolchain and its features, but I'm not clear on whether it's possible to use it for my hosted Node-RED application, or whether I have to write the scripting myself to offload the flow info from the nodered db and query all my other dbs for their respective _design docs and all the other configuration information required to set up an IoT Node-RED application.
I forgot to mention: the copy/paste method loses information; you get no tab-level metadata. The only way to get all the flow stuff is to pull it from the nodered flow record.
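For reference, a minimal sketch of what pulling it from the flow record could look like, assuming the Bluemix boilerplate's Cloudant layout; the account, credentials, database names, and the doc id convention are all assumptions:

    import requests
    from urllib.parse import quote

    # Account, credentials, and db/doc names are assumptions -- adjust for
    # your Cloudant service instance and application.
    CLOUDANT = "https://ACCOUNT.cloudant.com"
    AUTH = ("API_KEY", "API_PASSWORD")

    # Pull the full flow record, including the tab-level metadata that
    # editor copy/paste loses. The "APP_NAME/flow" doc id convention is
    # an assumption; check your nodered db for the actual id.
    doc_id = quote("APP_NAME/flow", safe="")
    flow = requests.get(f"{CLOUDANT}/nodered/{doc_id}", auth=AUTH).json()

    # Collect the _design docs of an application database the same way.
    params = {"startkey": '"_design/"', "endkey": '"_design0"', "include_docs": "true"}
    rows = requests.get(f"{CLOUDANT}/mydb/_all_docs", auth=AUTH, params=params).json()
    for row in rows.get("rows", []):
        print(row["id"])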
Node-RED will release a new version in a couple of days that will introduce projects, so you'll be able to use GitHub and all the usual tools to handle your app: https://twitter.com/NodeRED/status/956934949784956931 and https://nodered.org/docs/user-guide/projects/
While it doesn't address your short-term needs, I think it's the best long-term solution. Hopefully that helps.
I am trying to connect my Raspberry Pi to Google Cloud IoT solutions using Weave. I have already done this with AWS and IBM Bluemix, but I could not find a way to do the same with Google Cloud. Judging by their documentation, it seems that some of the files have been deprecated or not been updated.
Moreover, they are written in C, and I am not much of a C guy. I used Python with both IBM Bluemix and AWS to connect my Pi to IoT, establish the subscriber, and exchange messages over an MQTT gateway.
Can anyone suggest anything regarding this?
Google Weave getting started
To be more specific, here are the errors I saw in the logs while running the build step below:
make -C examples/host/light
It showed messages in the logs like
could not find lldap
could not find llssh2
even after I installed those libraries on my development machine.
Because of the errors above, the command
./out/host/examples/light/light
cannot be executed, since the location
./out/host/examples/light/light
is never created by the make command. Any suggestions for this?
You might want to try the new Google Cloud IoT Core product instead of Weave - full disclosure, I worked on it. It's currently in public beta and enables the scenarios you're trying to address. You should be able to use MQTT to communicate to/from your device.
There's a high-level overview of the platform on YouTube, as well as a talk focused on industrial applications from Google I/O.
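Since you're already comfortable with Python and MQTT, here's a minimal sketch of talking to the IoT Core MQTT bridge with paho-mqtt; the project, registry, and device names are placeholders, and it assumes the device's RSA key pair is registered with IoT Core:

    import datetime
    import jwt                        # pip install pyjwt
    import paho.mqtt.client as mqtt   # pip install paho-mqtt

    # Project, region, registry, and device ids are placeholders.
    PROJECT, REGION = "my-project", "us-central1"
    REGISTRY, DEVICE = "my-registry", "my-pi"

    # IoT Core authenticates the connection with a JWT signed by the
    # device's private key (the public key is registered with IoT Core).
    now = datetime.datetime.utcnow()
    token = jwt.encode(
        {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": PROJECT},
        open("rsa_private.pem").read(),
        algorithm="RS256",
    )

    # The client id must follow this exact path format; the username is ignored.
    client = mqtt.Client(client_id=(
        f"projects/{PROJECT}/locations/{REGION}"
        f"/registries/{REGISTRY}/devices/{DEVICE}"))
    client.username_pw_set(username="unused", password=token)
    client.tls_set()  # the MQTT bridge requires TLS on port 8883
    client.connect("mqtt.googleapis.com", 8883)

    # Publish telemetry to the device's events topic.
    client.publish(f"/devices/{DEVICE}/events", '{"temp": 21.5}', qos=1)
    client.loop(2)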
I have two questions:
1) I want to use Kafka with a Google Cloud Dataflow pipeline program. Is it possible to read data from Kafka in my pipeline?
2) I created an instance with BigQuery enabled. Now I want to enable Pub/Sub; how can I do that?
(1) As mentioned by Raghu, support for writing to/reading from Kafka was added to Apache Beam in mid-2016 with the KafkaIO package. You can check the package's documentation [1] to see how to use it.
(2) I'm not quite sure what you mean. Can you provide more details?
[1] https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/kafka/KafkaIO.html
Kafka support was added to Dataflow (and Apache Beam) in mid-2016. You can read from and write to Kafka in streaming pipelines. See the JavaDoc for KafkaIO in Apache Beam.
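The JavaDoc covers the Java transform; as a hedged illustration, here's what the equivalent read looks like with the Beam Python SDK's cross-language ReadFromKafka transform (broker and topic are placeholders, and the transform needs a Java runtime for its expansion service):

    import apache_beam as beam
    from apache_beam.io.kafka import ReadFromKafka
    from apache_beam.options.pipeline_options import PipelineOptions

    # Broker address and topic name are placeholders.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as p:
        (p
         | "ReadFromKafka" >> ReadFromKafka(
               consumer_config={"bootstrap.servers": "kafka-broker:9092"},
               topics=["my-topic"])
         | "Values" >> beam.Map(lambda kv: kv[1])  # records arrive as (key, value)
         | "Print" >> beam.Map(print))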
(2) As of April 27, 2015, you can enable the Cloud Pub/Sub API as follows:
Go to your project page on the Developer Console
Click APIs & auth -> APIs
Click More within Google Cloud APIs
Click Cloud Pub/Sub API
Click Enable API
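If you'd rather script it, here's a hedged sketch of enabling the same API through the Service Usage API with the Google API Python client; PROJECT_ID is a placeholder, and application default credentials are assumed:

    from googleapiclient import discovery  # pip install google-api-python-client

    # PROJECT_ID is a placeholder; uses application default credentials.
    serviceusage = discovery.build("serviceusage", "v1")
    op = serviceusage.services().enable(
        name="projects/PROJECT_ID/services/pubsub.googleapis.com").execute()
    print(op)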
Can I create and break cross-cluster replication links in some way other than the web console?
The web console is just a UI over the Couchbase REST API, so you can write your own CLI utility that issues the same HTTP requests, such as "controller/createReplication".
Unfortunately, neither the official tools nor the numerous SDKs support this feature.
Docs about the REST API can be found here:
Managing Cross Data Center Replication (XDCR)
http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-admin-restapi-xdcr.html
Creating replications:
http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-admin-restapi-xdcr-create-repl.html
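For example, a minimal sketch of driving that endpoint from a script; the host, credentials, buckets, and remote cluster reference name are placeholders:

    import requests

    # Host, credentials, buckets, and the remote cluster reference are placeholders.
    ADMIN = "http://cb-node:8091"
    AUTH = ("Administrator", "password")

    # Create an XDCR replication -- the same call the web console issues.
    resp = requests.post(
        f"{ADMIN}/controller/createReplication",
        auth=AUTH,
        data={
            "fromBucket": "default",
            "toCluster": "remote-cluster",
            "toBucket": "default",
            "replicationType": "continuous",
        },
    )
    print(resp.status_code, resp.text)

    # Breaking a link is a DELETE against /controller/cancelXDCR/<replication-id>,
    # where the id comes from the createReplication response / the tasks list.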