I would like a way to build a dialog through classes. For example, I could create a class called Dialog and a class called Node. When instantiating the Dialog class, I could have a method that adds nodes to the dialog. That way I could build a whole dialog programmatically without using the Conversation Tool of the Watson Conversation service, in other words without using the IBM Cloud web front end for dialogs. Is this possible? What are all the possibilities for working with the Watson Conversation service?
Watson Assistant (WA) has a "Workspace" API which allows you to programmatically build/edit your workspace.
You can either add some components directly, or build the whole workspace locally and then push it to WA.
The documentation starts here.
https://www.ibm.com/watson/developercloud/assistant/api/v1/curl.html?curl#workspaces-api
There are also a number of SDKs which will allow you to do this through code, reducing the need to build from scratch.
https://github.com/watson-developer-cloud
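To give an idea of what "building a dialog through classes" could look like against the Workspace API, here is a minimal sketch. The base URL, API version date, and authorization header are placeholders for your own service instance, and `node_to_dict` is an invented helper name for illustration; only `dialog_node`, `conditions`, `parent`, and `output` are field names from the workspace JSON format.

```python
# Illustrative sketch only: URL, version date, and auth are placeholders.
import json
import urllib.request

BASE_URL = "https://gateway.watsonplatform.net/assistant/api"
VERSION = "2018-02-16"  # check the docs for the current API version date

def node_to_dict(dialog_node_id, conditions, text, parent=None):
    """Shape one dialog node the way the workspace JSON expects it."""
    return {
        "dialog_node": dialog_node_id,
        "conditions": conditions,
        "parent": parent,
        "output": {"text": text},
    }

def create_workspace(auth_header, name, intents, dialog_nodes):
    """POST a complete, locally built workspace definition to WA."""
    body = json.dumps({
        "name": name,
        "language": "en",
        "intents": intents,
        "dialog_nodes": dialog_nodes,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/v1/workspaces?version={VERSION}",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the generated workspace_id
```

A `Dialog` class could simply collect `Node` instances, emit them through something like `node_to_dict`, and push everything in one `create_workspace` call.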
I've been following the Gather, visualize, analyze and detect anomalies in IoT data tutorial and, although I was able to work through the earlier parts without problems, I'm stuck at the second step of the "Create a data connector to store the historical data" section. In my Watson IoT left menu there is no entry called "Extensions"; my last option is "Configuration". As far as I can tell, I have re-checked all the steps twice and tried configuring different regions (I'm located in Spain) for both the Watson IoT and Cloudant services (all on the "Lite" plan), but I can't, for the life of me, forward the data received in Watson IoT to Cloudant.
Is there anything that has changed in the Watson IoT platform since the tutorial was written? Do I need to activate anything in my account that allows me to see the "Extensions" option?
Thank you for your support and if you need more information about my setup, don't hesitate to ask.
Best regards,
Aitor
As mentioned in the Solution tutorial,
Setting up a new connection is a four-step process:
Create a service binding that provides Watson IoT Platform with the necessary information to connect to the Cloudant service.
Create a connector instance for the service binding.
Configure one or more destinations on the connector.
Set up one or more forwarding rules for each destination.
You can refer to the IBM Watson IoT Platform - Historical Data Storage Extension APIs Swagger UI, as mentioned in the tutorial.
You can also access the interactive API docs directly from the Watson IoT Platform service dashboard by selecting the menu bar help icon in the upper right corner of the window and then clicking API > historian connector > View APIs. To store the historical data to Cloudant, you will be passing the IBM Cloudant credentials to create a Watson IoT Platform service binding.
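The four steps above map onto four REST calls. As a sketch, the helpers below build the URL and JSON body for each call; the endpoint paths, field names, and the `myorg` organization ID are assumptions based on the Swagger UI and should be verified against your own instance before use.

```python
# Assumed paths and payload fields -- confirm against the Swagger UI for
# your organization; "myorg" and all names below are placeholders.
API_ROOT = "https://myorg.internetofthings.ibmcloud.com/api/v0002"

def binding_request(cloudant_credentials):
    """Step 1: service binding carrying the Cloudant credentials."""
    return (f"{API_ROOT}/s2s/serviceBindings", {
        "name": "cloudant-binding",
        "type": "cloudant",
        "serviceCredentials": cloudant_credentials,
    })

def connector_request(binding_id):
    """Step 2: connector instance for the service binding."""
    return (f"{API_ROOT}/historianconnectors", {
        "name": "cloudant-connector",
        "serviceId": binding_id,
        "type": "cloudant",
        "enabled": True,
    })

def destination_request(connector_id):
    """Step 3: a destination (Cloudant database) on the connector."""
    return (f"{API_ROOT}/historianconnectors/{connector_id}/destinations", {
        "name": "devicedata",
        "type": "cloudant",
    })

def forwarding_rule_request(connector_id):
    """Step 4: a rule forwarding device events to the destination."""
    return (f"{API_ROOT}/historianconnectors/{connector_id}/forwardingrules", {
        "name": "all-events",
        "destinationName": "devicedata",
        "type": "event",
        "selector": {"deviceType": "*", "eventId": "*"},
    })
```

Each pair would then be POSTed with your platform API key and token as basic auth, using the IDs returned by the previous step.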
You can find the extensions deprecation information in the post here
Can I train the Watson Conversation service without using the toolkit? I want the chatbot to be trained through code.
I want to develop a system from which the administrator of a web page can edit or create intents and entities, so that I don't have to be the one making edits whenever something needs to change. IBM Watson Virtual Agent is similar to what I want to create.
You can create your own tooling or integration with Conversation using the Workspace API.
https://www.ibm.com/watson/developercloud/conversation/api/v1/#workspaces
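For the administrator-edits-intents case, a minimal sketch of creating one intent through that API might look like the following. The base URL, version date, and authorization header are placeholders; `intent_payload` is an invented helper, while `intent` and `examples` are field names from the intent JSON format.

```python
# Illustrative sketch only: URL, version date, and auth are placeholders.
import json
import urllib.request

BASE_URL = "https://gateway.watsonplatform.net/conversation/api"
VERSION = "2018-02-16"  # check the docs for the current API version date

def intent_payload(intent_name, example_texts):
    """Shape the body for POST /v1/workspaces/{workspace_id}/intents."""
    return {
        "intent": intent_name,
        "examples": [{"text": t} for t in example_texts],
    }

def create_intent(auth_header, workspace_id, intent_name, example_texts):
    """Create an intent in an existing workspace from code."""
    body = json.dumps(intent_payload(intent_name, example_texts)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/v1/workspaces/{workspace_id}/intents?version={VERSION}",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Your web page's admin form would collect the intent name and example utterances and hand them to something like `create_intent`; updating and deleting intents work through sibling endpoints on the same path.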
I have some experience building chat and voice agents for other platforms, but I’m not using API.AI to understand natural language and parse intents. Do I have to replace my existing solution with API.AI?
Not at all. The advantages of using API.AI in creating a Conversation Action include Natural Language Understanding and grammar expansion, form filling, intent matching, and more.
That said, the Actions on Google platform includes a CLI, client library, and Web Simulator, all of which can be used to develop an Action entirely independent of API.AI. To do this you'll need to build your own Action Package, which describes your Action and expected user grammars, and an endpoint to serve the Assistant's requests and provide responses to your users' queries. The CLI can be used to deploy your Action Package directly to Google, and you can host your endpoint on any hosting service you wish. Google recommends App Engine on Google Cloud Platform.
I found this explanation from the official page most helpful.
API.AI
Use this option for most use cases. Understanding and parsing natural, human language is a very hard task, and API.AI does all that for you. API.AI also wraps the functionality of the Actions SDK into an easy-to-use web IDE that has conveniences such as generating and deploying action packages for you.
It also lets you build conversational experiences once and deploy to many other platforms other than Actions on Google.
ACTIONS SDK
Use this option if you have simple actions that have very short conversations with limited user input variability. These types of actions typically don't require robust language understanding and typically accomplish one quick use case.
In addition, if you already have an NLU that you want to use and just want to receive raw text and pass it to your own NLU, you will also need to use the Actions SDK.
Finally, the Actions SDK doesn't provide modern conveniences of an IDE, so you have to manually create action packages with a text editor and deploy them to your Google Developer project with a command-line utility.
Google is aggressively pushing everybody to API.AI. The only SDK they have (Node.js) no longer supports expected events, for instance. Of course, you don't need to rely on their SDK (you can talk to the API directly), but they may change the API too. So proceed with caution.
I am using Watson Conversation services on Bluemix. We have multiple Conversation workspaces within the service to enable better segmentation of the problem space.
I need to load information on the set of available workspaces within the Conversation service (e.g. name, workspace ID) to allow me to target the appropriate Conversation API endpoint. I've been trying to find a Watson or Bluemix API to allow me to retrieve the information directly but have not had any success.
Does anyone know if it is possible to retrieve this information programmatically and if there are any best practices for doing so?
We don't have an exposed endpoint for this capability at this point. It is something being discussed internally, however.
The API for managing Conversation workspaces is now available. It is possible to list workspaces, to create/update/delete a workspace and to download an entire workspace. The API is supported by the Watson SDKs.
Using the new API, I wrote a small tool for managing Conversation workspaces. The tool shows the API in action. The source is available on GitHub to demonstrate how the API can be of use.
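To illustrate the listing call from the question, a sketch along these lines would retrieve the name and workspace ID of every workspace in the service. The base URL, version date, and authorization header are placeholders; `workspaces`, `name`, and `workspace_id` are field names from the list response.

```python
# Illustrative sketch only: URL, version date, and auth are placeholders.
import json
import urllib.request

def list_workspaces(base_url, version, auth_header):
    """GET /v1/workspaces and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/v1/workspaces?version={version}",
        headers={"Authorization": auth_header},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def workspace_index(list_response):
    """Map workspace name -> workspace_id for targeting API endpoints."""
    return {w["name"]: w["workspace_id"]
            for w in list_response.get("workspaces", [])}
```

With the index in hand, you can pick the right `workspace_id` for each segment of your problem space and direct the message call at it.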
I just want to use a login component that handles the login, sign-up, and password-recovery process. I want to reuse this component in different OpenUI5 applications.
What I cannot understand is how to change from one component to another. Once I authenticate in my login component and a controller validates the user, how can I switch to another OpenUI5 component?
I was trying to understand CrossApplicationNavigation in the AppNavSample but I cannot understand it. https://sapui5.hana.ondemand.com/sdk/test-resources/sap/ushell/demoapps/AppNavSample/localMinimalRenderer.html
Any ideas?
Do you mean SAPUI5 rather than OpenUI5? The control (and SDK) you reference is the Unified shell (ushell), which is a component of SAPUI5 only.
This control is the "Launchpad" for the SAP Fiori apps and is built from metadata of associated user roles from the backend SAP system which is why it is only part of SAPUI5 not OpenUI5.
If you do mean you want to use the Launchpad and are developing on a SAP system, you can find the documentation about "Cross-app navigation" here - http://help.sap.com/saphelp_uiaddon10/helpdata/en/09/4d968eb7c442208303427e82da92c9/content.htm?frameset=/en/09/4d968eb7c442208303427e82da92c9/frameset.htm&current_toc=/en/e4/843b8c3d05411c83f58033bac7f072/plain.htm&node_id=187&show_children=true#jump187.
However, your example - a login form, then the main application - suggests this is actually not what you want to do; instead, you want to navigate between specific views within an app. Check out this tutorial in the OpenUI5 documentation on "Navigation and Routing" - https://openui5.hana.ondemand.com/#docs/guide/1b6dcd39a6a74f528b27ddb22f15af0d.html
Further, your example of login + app actually sounds like it might need to be handled outside of UI5 itself on the specific backend/application/web server that you are developing against. UI5 can provide the appropriate frontend though.