Reading the documentation at https://github.com/watson-developer-cloud/conversation-enhanced#local
it states: "Ensure that you have a Bluemix account. While you can do part of this deployment locally, you must still use Bluemix." If I use a custom dataset, will any of this data be sent to IBM's servers?
Specifically, I am referring to the step circled in red ('Create or import training data'):
I plan to build an app similar to conversation-enhanced (https://github.com/watson-developer-cloud/conversation-enhanced), but I want to ensure that local data will not be sent to IBM.
Yes, you will need to send data to IBM.
If you are planning to use the IBM Bluemix services, then I recommend you read the terms of use, found here, to understand your full agreement:
http://www-03.ibm.com/software/sla/sladb.nsf/pdf/6606-08/$file/i126-6606-08_05-2016_en_US.pdf
Related
I'm working on a small project and I don't know how to connect IBM Watson with a Django backend. I have looked for the docs, but I can't find examples, documentation, or tutorials.
Basically, I want to create jobs (running notebooks) remotely, but I need to send an ID to each notebook, because when I run a notebook I need to specify which file it should process from Cloud Storage ("MY-PROJECT-COS"). The situation shown in the figure below describes that.
The pipeline that I want to implement is shown in the figure below. This problem has stopped the whole project, so I would really appreciate any suggestions, recommendations, or solutions.
You should check the Watson Data APIs, especially the Create a job and Start a run for a job API calls. Use the request body to pass the specific ID.
You can use a collection of Watson Data REST APIs associated with Watson Studio and Watson Knowledge Catalog to manage data-related assets and connections in analytics projects and catalogs on IBM Cloud Pak for Data.

Catalog data: Use the catalog and asset APIs to create catalogs to administer your assets, associate properties with those assets, and organize the users who use the assets. Assets can be notebooks or connections to files, database sources, or data assets from a connection.

Govern data: Use the governance and workflows APIs to implement data policies and a business glossary that fits your organization, to control user access rights to assets, and to uncover data quality and data lineage.

Add and find data: Use the discovery, search, and connections APIs to add and find data within your projects and catalogs.

You can also access a local version of these API docs on each Cloud Pak for Data installation:
https://{cpd_cluster_host}/data-api/api-explorer
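As a minimal sketch of passing the ID through the run request body: the snippet below builds the payload for a "start a run" call, handing the file ID to the notebook as an environment variable. The endpoint path, the `project_id` query parameter, and the `job_run.configuration.env_variables` field are assumptions based on the Watson Data API docs; verify them against your installation's API explorer before use.

```python
import json

def build_run_body(file_id: str) -> dict:
    """Request body for starting a job run; the notebook reads FILE_ID
    at runtime to pick the object in Cloud Storage it should process."""
    return {
        "job_run": {
            "configuration": {
                "env_variables": [f"FILE_ID={file_id}"]
            }
        }
    }

def start_run_request(host: str, project_id: str, job_id: str,
                      token: str, file_id: str):
    """Assemble the URL, headers, and JSON body for the POST that would
    start the run (returned rather than sent, to keep the sketch
    self-contained and offline)."""
    url = f"https://{host}/v2/jobs/{job_id}/runs?project_id={project_id}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(build_run_body(file_id))

url, headers, body = start_run_request(
    "api.dataplatform.cloud.ibm.com", "my-project-id", "my-job-id",
    "dummy-token", "sales-2021.csv")
print(url)
print(body)
```

In a real deployment you would send this with any HTTP client and read `FILE_ID` inside the notebook (e.g. via `os.environ`) to select the file from "MY-PROJECT-COS".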
I've been following the "Gather, visualize, analyze and detect anomalies in IoT data" tutorial and, although I was able to go through most of it flawlessly, I'm stuck at the second step of the "Create a data connector to store the historical data" section. In my Watson IoT left menu, there is no entry called "Extensions"; my last option is "Configuration". As far as I know, I have re-checked all the steps twice, and I have tried configuring different regions (I'm located in Spain) for both the Watson IoT and Cloudant services (all within the "Lite" plan), but I can't, for the life of me, forward the data received in Watson IoT to Cloudant.
Is there anything that has changed in the Watson IoT platform since the tutorial was written? Do I need to activate anything in my account that allows me to see the "Extensions" option?
Thank you for your support and if you need more information about my setup, don't hesitate to ask.
Best regards,
Aitor
As mentioned in the solution tutorial, setting up a new connection is a four-step process:
Create a service binding that provides Watson IoT Platform with the necessary information to connect to the Cloudant service.
Create a connector instance for the service binding.
Configure one or more destinations on the connector.
Set up one or more forwarding rules for each destination.
You can refer to the IBM Watson IoT Platform - Historical Data Storage Extension APIs Swagger UI, as mentioned in the tutorial.
You can also access the interactive API docs directly from the Watson IoT Platform service dashboard by selecting the menu bar help icon in the upper right corner of the window and then clicking API > historian connector > View APIs. To store the historical data to Cloudant, you will be passing the IBM Cloudant credentials to create a Watson IoT Platform service binding.
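To make the first of the four steps concrete, here is a minimal sketch of building the service-binding request that passes the Cloudant credentials to Watson IoT Platform. The path `/api/v0002/s2s/services` and the payload fields are assumptions taken from the historian connector Swagger UI described above; check them there, since the extensions have since been deprecated.

```python
import json

def build_cloudant_binding(name: str, username: str,
                           password: str, host: str) -> dict:
    """Request body for creating a Cloudant service binding
    (step 1 of the four-step connection setup)."""
    return {
        "name": name,
        "type": "cloudant",
        "credentials": {
            "username": username,
            "password": password,
            "host": host,
            "port": 443,
        },
    }

def create_binding_request(iot_org: str, api_key: str,
                           api_token: str, binding: dict):
    """Assemble the URL, basic-auth pair, and JSON body for the POST
    that would create the binding (returned rather than sent, so the
    sketch needs no network access)."""
    url = f"https://{iot_org}.internetofthings.ibmcloud.com/api/v0002/s2s/services"
    auth = (api_key, api_token)  # platform API key / auth token
    return url, auth, json.dumps(binding)

binding = build_cloudant_binding(
    "historian-cloudant", "cloudant-user", "cloudant-pass",
    "myaccount.cloudantnosqldb.appdomain.cloud")
url, auth, body = create_binding_request("myorg", "a-key", "a-token", binding)
print(url)
```

The remaining steps (connector, destinations, forwarding rules) follow the same pattern against their respective endpoints in the same API.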
You can find the extensions deprecation information in the post here.
I need to configure a multi-region Kubernetes deployment. My services use IBM Watson, but Watson does not provide a global instance; instances are region-based. Should I use two different Watson instances for the two regions?
Depending on why you want to deploy a multi-region app and on which type of IBM Watson service you want to integrate, there are different options available. The IBM Cloud solution tutorial on strategies for resilient applications, with its related links, might be a good introduction.
If it is for resiliency, you would need to check what SLAs and deployment model the service in question offers. Depending on the IBM Watson service, the APIs are either stateless or require opening a session, and the application design needs to take that into account.
If it is for performance for a global audience (app users), you might need to look into how to split traffic, cache answers, or deploy services and app instances closer to the user.
Without any details from your side, it is a pretty broad question and hard to answer.
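One common pattern for the regional-instance case is to provision one Watson instance per cluster region and have each deployment resolve the matching endpoint at startup. The sketch below is illustrative only: the hostnames follow the documented regional pattern for Watson Assistant, but you should use the URLs and keys from your own service credentials.

```python
import os

# One Watson instance per cluster region (illustrative endpoints;
# the API key for each is read from a per-region environment variable).
WATSON_BY_REGION = {
    "us-south": {
        "url": "https://api.us-south.assistant.watson.cloud.ibm.com",
        "apikey_env": "WATSON_APIKEY_US_SOUTH",
    },
    "eu-de": {
        "url": "https://api.eu-de.assistant.watson.cloud.ibm.com",
        "apikey_env": "WATSON_APIKEY_EU_DE",
    },
}

def watson_config(region: str) -> dict:
    """Resolve the Watson endpoint for the region this pod runs in,
    falling back to us-south when the region has no local instance."""
    cfg = WATSON_BY_REGION.get(region, WATSON_BY_REGION["us-south"])
    return {"url": cfg["url"], "apikey": os.environ.get(cfg["apikey_env"], "")}

print(watson_config("eu-de")["url"])
```

The region itself would typically be injected into the pod via the deployment manifest (e.g. an environment variable set per cluster), so the same image runs unchanged in both regions.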
The supported transformations in IBM's Bluemix ETL service, Data Connect, are listed here: https://console.ng.bluemix.net/docs/services/dataworks1/using_operations.html#concept_h4k_5tf_xw
I have looked and looked, but with no luck. What if I want to transform some of my data with an operation that is not included here, for example, running custom code on a column to get some specific output?
Data Connect does not currently support refine operations outside of those provided with the service. We are adding new features and functionality weekly, but if you have a specific operation in mind, please let us know.
I will find out for you if we have the ability to execute custom code on our roadmap.
Regards,
Wesley - IBM Bluemix Data Connect Engineering
As Wes mentions above, in the short term we will continue to add new data preparation and transformation capabilities to the service. Currently there is no extensibility that allows you to code new transformations.
In the longer term we are considering allowing users to edit/extend pipelines using languages like Scala and Python. We don't have a defined date for these new capabilities.
Regards,
Hernando Borda
IBM Bluemix Data Connect Product Manager
I am a hardware engineer interested in using Bluemix for an IoT application. Other than C, I do not know any programming language, but I am willing to learn whatever is necessary. My application is as follows:
My sensor nodes upload data to an existing hardware server that can in turn upload the data to an external SQL server. I want to analyze this data on the SQL server on a periodic basis and generate reports that I can publish to a mobile application, or even a web page to begin with.
Questions:
Is it possible to implement the "SQL server --> Data analysis --> Report generation + data visualization --> HTML(?) Publish" flow on Bluemix?
What modern/efficient languages can I learn in order to do this with the least effort?
Is there a standard implementation/example that I can use as reference for the flow described above?
This question actually has little to do with IoT (that just happens to be the source of the data); it focuses on how to process data for analysis, report generation, and publishing. You can do this mostly using services in Bluemix, such that there's little if any code to write, and so the programming language of the runtime may not matter.
First, to store the data, you could use SQL Database or dashDB. The former is "just" a database, whereas the latter includes R and R-Studio for data analysis. Second, for report generation, you can use Embeddable Reporting, which has Cognos (e.g. IBM Cognos Business Intelligence reports) built in.
The way Cloud Foundry in Bluemix works, you'll need to create a runtime in some language, then bind the service instances to it so you can use them. But you may not have any code to write, in which case the language doesn't matter. If you do need to write some code, choose whichever language you think you can learn most easily. Java programmers may prefer Java, but it requires compiling, as does Go. You'll probably have an easier time with Node.js or PHP, which are popular interpreted languages.
A couple of resources for further info:
"Embed rich reports in your applications" shows how to use Embeddable Reporting with dashDB.
"Leverage IBM Cognos on IBM Bluemix using the Embeddable Reporting service" shows how to use Embeddable Reporting with SQL Database.
"Embed Reports and visualize Data in your Bluemix Applications" gives an overview of both approaches.
BTW, Bluemix also has a neat service called Internet of Things, which helps connect your Bluemix app to lots of things all over the Internet. It sounds like you already have this handled for this example, but as you continue to use Bluemix for IoT applications, you might want to look into this service too. The Internet of Things Foundation Starter helps you get started using Node.js, Cloudant, and Node-RED.