I've been following the Gather, visualize, analyze and detect anomalies in IoT data tutorial and, although everything went smoothly up to that point, I'm stuck at the second step of the "Create a data connector to store the historical data" section. In my Watson IoT left menu there is no entry called "Extensions"; the last option is "Configuration". As far as I know, I have re-checked all the steps twice, and I have tried configuring different regions (I'm located in Spain) for both the Watson IoT and Cloudant services (all on the "Lite" plan), but I can't, for the life of me, forward the data received in Watson IoT to Cloudant.
Is there anything that has changed in the Watson IoT platform since the tutorial was written? Do I need to activate anything in my account that allows me to see the "Extensions" option?
Thank you for your support, and if you need more information about my setup, don't hesitate to ask.
Best regards,
Aitor
As mentioned in the solution tutorial, setting up a new connection is a four-step process:
1. Create a service binding that provides Watson IoT Platform with the necessary information to connect to the Cloudant service.
2. Create a connector instance for the service binding.
3. Configure one or more destinations on the connector.
4. Set up one or more forwarding rules for each destination.
You can refer to the IBM Watson IoT Platform - Historical Data Storage Extension APIs swagger UI as mentioned in the tutorial.
You can also access the interactive API docs directly from the Watson IoT Platform service dashboard by selecting the menu bar help icon in the upper right corner of the window and then clicking API > historian connector > View APIs. To store the historical data to Cloudant, you will be passing the IBM Cloudant credentials to create a Watson IoT Platform service binding.
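To make those four steps concrete, here is a rough Node.js sketch of the corresponding REST calls (using the fetch built into Node 18+). Treat every endpoint path and payload field below (s2s/services, historianconnectors, bucketInterval, the "*" selectors, the id fields in the responses) as assumptions to verify against the swagger UI; the organization ID, API key/token, and Cloudant credentials are placeholders for your own values.

```javascript
// Sketch only -- verify paths and payloads in the Historical Data Storage
// Extension API docs before use.
const ORG = 'your6char';                           // Watson IoT organization ID (placeholder)
const BASE = `https://${ORG}.internetofthings.ibmcloud.com/api/v0002`;
const AUTH = 'Basic ' + Buffer.from('a-your6char-apikey:apitoken').toString('base64');

async function post(path, body) {
  const res = await fetch(BASE + path, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: AUTH },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${path} failed: ${res.status} ${await res.text()}`);
  return res.json();
}

(async () => {
  // 1. Service binding that hands the Cloudant credentials to Watson IoT Platform
  const service = await post('/s2s/services', {
    name: 'cloudant-binding',
    type: 'cloudant',
    description: 'Binding to the Lite Cloudant instance',
    credentials: { host: '<cloudant-host>', port: 443, username: '<user>', password: '<pass>' },
  });

  // 2. Connector instance for the service binding (field names are assumptions)
  const connector = await post('/historianconnectors', {
    name: 'cloudant-connector',
    serviceId: service.id,
    type: 'cloudant',
    timezone: 'UTC',
    enabled: true,
  });

  // 3. Destination on the connector (here: daily Cloudant databases)
  await post(`/historianconnectors/${connector.id}/destinations`, {
    name: 'events-by-day',
    type: 'cloudant',
    configuration: { bucketInterval: 'DAY' },
  });

  // 4. Forwarding rule that routes all device events to that destination
  await post(`/historianconnectors/${connector.id}/forwardingrules`, {
    name: 'all-events',
    destinationName: 'events-by-day',
    type: 'event',
    selector: { deviceType: '*', eventId: '*' },
  });
})().catch(console.error);
```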
You can find the Extensions deprecation information in the post here.
Related
I'm doing a small project and I don't know how to connect IBM Watson with a Django backend; even after looking through the docs, I can't find examples, documentation, or tutorials.
Basically, I want to create Jobs (Notebooks running) remotely, but I need to send an ID to each notebook, because when I run a notebook I need to specify which file it is going to process from Cloud Storage ("MY-PROJECT-COS"). The situation shown in the Figure below describes that.
The pipeline that I want to implement is like the Figure below. This problem has stopped the whole project, so I will really appreciate any suggestions, recommendations, and solutions.
You should check the Watson Data APIs, especially the Create a job and Start a run for a job API calls. Use the request body to pass the specific ID; a sketch of such a call follows the API overview below.
You can use a collection of Watson Data REST APIs associated with Watson Studio and Watson Knowledge Catalog to manage data-related assets and connections in analytics projects and catalogs on IBM Cloud Pak for Data.
Catalog data: Use the catalog and asset APIs to create catalogs to administer your assets, associate properties with those assets, and organize the users who use the assets. Assets can be notebooks or connections to files, database sources, or data assets from a connection.
Govern data: Use the governance and workflows APIs to implement data policies and a business glossary that fits your organization, to control user access rights to assets, and to uncover data quality and data lineage.
Add and find data: Use the discovery, search, and connections APIs to add and find data within your projects and catalogs.
You can also access a local version of these API docs on each Cloud Pak for Data installation:
https://{cpd_cluster_host}/data-api/api-explorer
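For the concrete question of passing an ID into a notebook job, a minimal sketch of the Start a run for a job call could look like the following (shown in Node.js with the built-in fetch; from a Django backend you would issue the same POST with, for example, the requests library). The host, the job_run.configuration.env_variables body shape, and the FILE_ID variable name are assumptions to double-check against the Watson Data API reference; inside the notebook you would read the variable (e.g. os.environ['FILE_ID']) and load that object from "MY-PROJECT-COS".

```javascript
// Sketch only -- confirm the endpoint and body fields in the
// "Start a run for a job" section of the Watson Data API docs.
const HOST = 'https://api.dataplatform.cloud.ibm.com';  // public-cloud endpoint (assumption)
const PROJECT_ID = '<project-guid>';
const JOB_ID = '<job-guid>';
const IAM_TOKEN = '<bearer token from IBM Cloud IAM>';

async function startRun(cosFileId) {
  const res = await fetch(`${HOST}/v2/jobs/${JOB_ID}/runs?project_id=${PROJECT_ID}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${IAM_TOKEN}`,
    },
    // The ID travels in the request body as an environment variable for the run.
    body: JSON.stringify({
      job_run: {
        configuration: { env_variables: [`FILE_ID=${cosFileId}`] },
      },
    }),
  });
  if (!res.ok) throw new Error(`Job run failed: ${res.status} ${await res.text()}`);
  return res.json();
}

startRun('sensor-data-2020-01.csv').then(console.log).catch(console.error);
```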
I currently have a Watson chatbot set up and also have a DB2 database with some tables set up.
Could someone please assist me with how to develop code in an IBM Cloud Functions action to connect the chatbot and Db2 services, as well as what the code in the dialog nodes needs to look like to either read from or write to the tables in Db2?
This IBM Cloud solution tutorial shows how to build a Db2-driven chatbot with IBM Watson Assistant and IBM Cloud Functions for the app code. The related GitHub repository has working code for serverless actions that either insert a new record into Db2 on Cloud or retrieve data based on criteria entered within the chat.
The file eventFetchDate.js would search for a specific event record within a given date range. You can use any supported programming language. The most important part is to pack the result into the structure expected by Watson Assistant. The workspace file has the full set of dialog nodes and demonstrates how to interact with the user and Db2.
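For orientation only, here is a hypothetical sketch of what such a serverless action can look like in Node.js with the ibm_db driver; it is not the tutorial's actual code, and the table, columns, and parameter names (EVENTS, BEGINDATE, ENDDATE, startDate, endDate, dsn) are placeholders. The essential point is the one above: return a plain JSON object so the dialog node can map it into context.

```javascript
// Hypothetical Cloud Functions action: fetch events from Db2 for a date range
// and hand them back to Watson Assistant as a JSON object.
const ibmdb = require('ibm_db');

function main(params) {
  // Db2 connection string, e.g. supplied as a default parameter of the action
  // or taken from bound service credentials.
  const connString = params.dsn;

  return new Promise((resolve, reject) => {
    ibmdb.open(connString, (err, conn) => {
      if (err) return reject({ err: err.message });
      const sql =
        'SELECT SHORTNAME, LOCATION, BEGINDATE, ENDDATE FROM EVENTS ' +
        'WHERE BEGINDATE >= ? AND ENDDATE <= ?';
      conn.query(sql, [params.startDate, params.endDate], (qerr, rows) => {
        conn.close(() => {});
        if (qerr) return reject({ err: qerr.message });
        // The dialog node can then iterate over the returned events.
        resolve({ events: rows, count: rows.length });
      });
    });
  });
}

exports.main = main;
```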
A POC in our project uses the IBM Watson Discovery service hosted in the cloud for natural language analysis, but our company wants an on-premises solution for natural language analysis instead of a cloud-based one.
Is it possible to replace the IBM Watson Discovery API completely with IBM Watson Explorer?
I did some research and found that Explorer does not have Node JS APIs.
Also, the IBM Watson Explorer REST API can be used for simpler use cases like searching.
Please help me in this regard, as my knowledge of these two tools is limited.
To answer your question in short: yes, it can be done.
Watson Explorer does have an API interface, and yes, it can be integrated with Node JS as well, although it does not have a native npm package. I personally have implemented a very complex solution for a well-reputed automotive client using WEX as the backend and data-ingestion engine and Node JS on top, acting as the orchestrator and the UI.
You may want to see this post: https://developer.ibm.com/answers/questions/259089/rest-apis-for-wex-search/
> On Linux: {hostname}/vivisimo/cgi-bin/velocity?v.app=api-run
> On Windows: {hostname}/vivisimo/cgi-bin/velocity.exe?v.app=api-run
The api-runner has all the APIs listed, and one can also test them against the search collections (a search collection is the equivalent of a table where the data is ingested; there are a lot of custom configurations that can be applied for advanced use).
So, to use WEX with Node JS, you can make use of the api-runner URLs and directly query the WEX engine.
This is what a sample GET query URL may look like:
var link1 = 'http://' + WEX_IP + ':9080/vivisimo/cgi-bin/velocity?v.function=query-search' +
    '&v.username=' + username + '&v.password=' + password +
    '&v.app=api-rest&v.indent=true' +
    '&sources=' + WEX_col_name +
    '&start=0&num=15&v.app=api-rest' +
    '&query=sortby:sort_severity%20AND%20sortby:Create_Date_desc%20AND%20Create_Date:>=03/30/2018%20AND%20case_flag:1%20AND%20NOT%20case_flag:0';
Hope this helps.
PS: The WEX APIs return data in XML format, so if one is comfortable with XML parsing, one can use that; or, as in my case, use the xml2json package on Node to convert the XML to a JSON object and then parse it to display the required fields on the UI.
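As an illustration of that last point, a small sketch of calling the api-runner URL from Node and converting the XML response with xml2json could look like this; WEX_IP, username, password, and WEX_col_name are the same placeholders as in the URL example above, and the shape of the parsed object depends entirely on your collection.

```javascript
const parser = require('xml2json');

// Same placeholders as in the sample URL above.
const WEX_IP = '<wex-host>';
const username = '<wex-user>';
const password = '<wex-password>';
const WEX_col_name = '<search-collection>';

async function searchWex(queryText) {
  const url = 'http://' + WEX_IP + ':9080/vivisimo/cgi-bin/velocity' +
    '?v.app=api-rest&v.function=query-search' +
    '&v.username=' + encodeURIComponent(username) +
    '&v.password=' + encodeURIComponent(password) +
    '&sources=' + encodeURIComponent(WEX_col_name) +
    '&start=0&num=15' +
    '&query=' + encodeURIComponent(queryText);

  const res = await fetch(url);                         // built-in fetch, Node 18+
  const xml = await res.text();                         // WEX answers in XML
  return parser.toJson(xml, { object: true });          // convert to a JS object
}

// Inspect the result once, then pick out the fields you need for the UI.
searchWex('case_flag:1').then(r => console.log(JSON.stringify(r, null, 2)));
```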
I have sensors connected to a Raspberry Pi and, using Node-RED on the Pi, I have them connected to IBM Watson IoT. I created a board with two cards showing nice gauges. I want to 'share' this with a 'public' URL - does anyone know how to easily do this?
There is not currently a way to share the boards and cards you create within the Watson IoT platform dashboard to a public URL - those are only viewable by users who have access to view your IoT dashboard.
You could possibly create your own application to visualize the data and publish that externally, but there is not a way to open your dashboard configured cards for public viewing.
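A very rough sketch of that approach, assuming an application API key and the usual Watson IoT application MQTT endpoint and event topics (all of which should be verified in the platform documentation), could subscribe to device events with the mqtt package and expose the latest values over plain HTTP:

```javascript
// Sketch: mirror the latest device readings to a publicly hostable endpoint.
// ORG, API key/token and the topic layout are assumptions/placeholders.
const mqtt = require('mqtt');
const http = require('http');

const ORG = 'your6char';
const client = mqtt.connect(`mqtts://${ORG}.messaging.internetofthings.ibmcloud.com:8883`, {
  clientId: `a:${ORG}:public-viewer`,
  username: 'a-your6char-apikey',
  password: '<api token>',
});

const latest = {};   // most recent payload per device

client.on('connect', () => {
  // all JSON events from all device types and IDs
  client.subscribe('iot-2/type/+/id/+/evt/+/fmt/json');
});

client.on('message', (topic, payload) => {
  const deviceId = topic.split('/')[4];   // iot-2/type/{type}/id/{id}/evt/...
  latest[deviceId] = JSON.parse(payload.toString());
});

// Anyone can read the current values here; put your own HTML/gauges on top.
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(latest));
}).listen(8080);
```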
Currently that can't be done, but you can use freeboard for a similar effect.
As answered above, this is currently not possible, but I can highly recommend using Node-RED and the new dashboarding function for this (it also comes as a ready-to-use boilerplate in IBM Bluemix).
Tutorial
Source
I am using Watson Conversation services on Bluemix. We have multiple Conversation workspaces within the service to enable better segmentation of the problem space.
I need to load information on the set of available workspaces within the Conversation service (e.g. name, workspace ID) to allow me to target the appropriate Conversation API endpoint. I've been trying to find a Watson or Bluemix API to allow me to retrieve the information directly but have not had any success.
Does anyone know if it is possible to retrieve this information programmatically and if there are any best practices for doing so?
We don't have an exposed endpoint for this capability at this point. It is something being discussed internally, however.
The API for managing Conversation workspaces is now available. It is possible to list workspaces, to create/update/delete a workspace and to download an entire workspace. The API is supported by the Watson SDKs.
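As a quick illustration, listing workspaces with the Watson Node SDK (watson-developer-cloud) looks roughly like the sketch below; the require path, method name (listWorkspaces), version date, and credentials are assumptions to check against the SDK documentation for the version you have installed.

```javascript
// Sketch: list Conversation workspaces (name and workspace ID) with the Node SDK.
var ConversationV1 = require('watson-developer-cloud/conversation/v1');

var conversation = new ConversationV1({
  username: '<service username>',
  password: '<service password>',
  version_date: '2017-05-26'
});

conversation.listWorkspaces({}, function(err, response) {
  if (err) {
    console.error(err);
    return;
  }
  response.workspaces.forEach(function(ws) {
    console.log(ws.name + ' -> ' + ws.workspace_id);
  });
});
```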
Using the new API, I wrote a small tool for managing Conversation workspaces. The tool shows the API in action. The source is available on GitHub to demonstrate how the API can be of use.