Get Watson Conversation Workspaces - ibm-cloud

I am using Watson Conversation services on Bluemix. We have multiple Conversation workspaces within the service to enable better segmentation of the problem space.
I need to load information on the set of available workspaces within the Conversation service (e.g. name, workspace ID) to allow me to target the appropriate Conversation API endpoint. I've been trying to find a Watson or Bluemix API to allow me to retrieve the information directly but have not had any success.
Does anyone know if it is possible to retrieve this information programmatically and if there are any best practices for doing so?

We don't have an exposed endpoint for this capability at this point. It is something being discussed internally, however.

The API for managing Conversation workspaces is now available. It is possible to list workspaces, to create/update/delete a workspace, and to download an entire workspace. The API is supported by the Watson SDKs.
Using the new API, I wrote a small tool for managing Conversation workspaces that shows the API in action. The source is available on GitHub to demonstrate how the API can be used.
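For example, listing the workspaces with the Python watson-developer-cloud SDK looks roughly like the sketch below; the credential placeholders and the version date are assumptions you should replace with the values from your own service credentials.

```python
from watson_developer_cloud import ConversationV1

# Placeholders below are assumptions: use the credentials of your own
# Conversation service instance and a version date your service supports.
conversation = ConversationV1(
    username='SERVICE_USERNAME',
    password='SERVICE_PASSWORD',
    version='2017-05-26')

# List the workspaces defined in this Conversation service instance
response = conversation.list_workspaces()

for workspace in response['workspaces']:
    print(workspace['name'], workspace['workspace_id'])
```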

Related

How to connect an external Django backend with IBM Watson (Notebook)?

I'm working on a small project and I don't know how to connect IBM Watson with a Django backend; even after looking through the docs, I can't find examples, documentation, or tutorials.
Basically, I want to create jobs (notebook runs) remotely, but I need to send an ID to each notebook, because when I run a notebook I have to specify which file it should process from Cloud Object Storage ("MY-PROJECT-COS"). The figure below describes that situation.
The pipeline that I want to implement is shown in the figure below. This problem has stalled the whole project, and I would really appreciate any suggestions, recommendations, or solutions.
You should check the Watson Data APIs, especially the "Create a job" and "Start a run for a job" API calls. Use the request body to pass the specific ID.
You can use a collection of Watson Data REST APIs associated with Watson Studio and Watson Knowledge Catalog to manage data-related assets and connections in analytics projects and catalogs on IBM Cloud Pak for Data.
Catalog data: Use the catalog and asset APIs to create catalogs to administer your assets, associate properties with those assets, and organize the users who use the assets. Assets can be notebooks or connections to files, database sources, or data assets from a connection.
Govern data: Use the governance and workflows APIs to implement data policies and a business glossary that fits your organization, to control user access rights to assets, and to uncover data quality and data lineage.
Add and find data: Use the discovery, search, and connections APIs to add and find data within your projects and catalogs.
You can also access a local version of these API docs on each Cloud Pak for Data installation:
https://{cpd_cluster_host}/data-api/api-explorer
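As a rough illustration of that pattern, a Django view (or any Python code) could call the "Start a run for a job" endpoint with requests along the lines of the sketch below. The host, the IAM token handling, and in particular the env_variables field used to pass the ID are assumptions; verify the exact payload against the Watson Data API reference.

```python
import requests

# All values below are placeholders/assumptions for illustration.
HOST = "https://api.dataplatform.cloud.ibm.com"   # public IBM Cloud Watson Data API host
IAM_TOKEN = "YOUR_IAM_BEARER_TOKEN"               # obtained from IBM Cloud IAM
PROJECT_ID = "YOUR_PROJECT_ID"                    # Watson Studio project containing the job
JOB_ID = "YOUR_JOB_ID"                            # the job created for the notebook

headers = {
    "Authorization": f"Bearer {IAM_TOKEN}",
    "Content-Type": "application/json",
}

# Start a run for the job; the ID of the Cloud Object Storage file is passed
# in the request body so the notebook can pick it up at runtime.
payload = {
    "job_run": {
        "configuration": {
            "env_variables": ["FILE_ID=my-file-in-MY-PROJECT-COS.csv"]
        }
    }
}

resp = requests.post(
    f"{HOST}/v2/jobs/{JOB_ID}/runs",
    params={"project_id": PROJECT_ID},
    headers=headers,
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```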

Watson IoT: "Extensions" entry is not available in left menu

I've been following the Gather, visualize, analyze and detect anomalies in IoT data tutorial and, although I had been able to go through it flawlessly, I'm now stuck at the second step of the "Create a data connector to store the historical data" section. In my Watson IoT left menu there is no entry called "Extensions"; my last option is "Configuration". As far as I know, I have re-checked all the steps twice, and I have tried configuring different regions (I'm located in Spain) for both the Watson IoT and Cloudant services (all within the "Lite" plan), but I can't, for the life of me, forward the data received in Watson IoT to Cloudant.
Has anything changed in the Watson IoT platform since the tutorial was written? Do I need to activate anything in my account to be able to see the "Extensions" option?
Thank you for your support; if you need more information about my setup, don't hesitate to ask.
Best regards,
Aitor
As mentioned in the Solution tutorial,
Setting up a new connection is a four-step process:
Create a service binding that provides Watson IoT Platform with the necessary information to connect to the Cloudant service.
Create a connector instance for the service binding.
Configure one or more destinations on the connector.
Set up one or more forwarding rules for each destination.
You can refer to the IBM Watson IoT Platform - Historical Data Storage Extension APIs swagger UI, as mentioned in the tutorial.
You can also access the interactive API docs directly from the Watson IoT Platform service dashboard by selecting the menu bar help icon in the upper right corner of the window and then clicking API > historian connector > View APIs. To store the historical data to Cloudant, you will be passing the IBM Cloudant credentials to create a Watson IoT Platform service binding.
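A rough sketch of those four calls against the historian connector REST API, using Python and requests, is shown below. The organization ID, API key/token, Cloudant credentials, and the exact field names are assumptions; check them against the swagger UI and interactive API docs mentioned above before using them.

```python
import requests

ORG_ID = "yourorgid"                                  # assumption: your Watson IoT organization ID
API = f"https://{ORG_ID}.internetofthings.ibmcloud.com/api/v0002"
AUTH = ("a-yourorgid-apikey", "api-token")            # Watson IoT API key and token

# 1. Service binding: give Watson IoT Platform the Cloudant credentials
binding = requests.post(f"{API}/s2s/services", auth=AUTH, json={
    "name": "cloudant-binding",
    "type": "cloudant",
    "description": "Binding to my Cloudant service",
    "credentials": {                                  # values from the Cloudant service credentials
        "host": "xxxx-bluemix.cloudantnosqldb.appdomain.cloud",
        "port": 443,
        "username": "CLOUDANT_USERNAME",
        "password": "CLOUDANT_PASSWORD",
    },
}).json()

# 2. Connector instance for that service binding
connector = requests.post(f"{API}/historianconnectors", auth=AUTH, json={
    "name": "cloudant-connector",
    "serviceId": binding["id"],
    "type": "cloudant",
    "timezone": "UTC",
    "enabled": True,
}).json()

# 3. Destination on the connector (the Cloudant bucket to write to)
requests.post(f"{API}/historianconnectors/{connector['id']}/destinations", auth=AUTH, json={
    "name": "iot-events",
    "type": "cloudant",
    "configuration": {"bucketInterval": "DAY"},
})

# 4. Forwarding rule: send all device events to that destination
requests.post(f"{API}/historianconnectors/{connector['id']}/forwardingrules", auth=AUTH, json={
    "name": "all-events",
    "destinationName": "iot-events",
    "type": "event",
    "selector": {"deviceType": "*", "eventId": "*"},
})
```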
You can find the extensions deprecation information in the post here.

Organizing Microsoft Azure DevOps Projects

I have a question about Azure DevOps (formerly Visual Studio Team Services, or VSTS). I have multiple applications that are set up as separate projects, but we have essentially one team of devs. Some of the older projects are TFS-based, and some use Git.
Ideally I would like to create a board based on the team and 'attach' projects to the board, or something that ends up being roughly equivalent to that.
I can't seem to find anything close to this. Does anyone have any ideas? Or any suggestions?
Thanks for your help!
As I mentioned in my comment, you can use the Azure DevOps REST API.
Representational State Transfer (REST) APIs are service endpoints that support sets of HTTP operations (methods), which provide create, retrieve, update, or delete access to the service's resources.
Most REST APIs are accessible through our client libraries, which can be used to greatly simplify your client code.
Once you have created your own board, you can fill in the details using the REST API response.
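For example, a cross-project WIQL query plus a work-item lookup via the REST API could look roughly like the sketch below (Python with requests). The organization name, PAT, query text, and field list are assumptions for illustration; see the REST API reference for the exact request shapes.

```python
import requests

# Placeholders/assumptions: replace with your organization name and a PAT
# that has work-item read scope.
ORG = "your-organization"
PAT = "your-personal-access-token"
auth = ("", PAT)   # Azure DevOps accepts a PAT via basic auth with an empty username

# WIQL query run at organization level, i.e. across all projects
wiql = {
    "query": (
        "SELECT [System.Id], [System.Title], [System.TeamProject], [System.State] "
        "FROM WorkItems WHERE [System.State] <> 'Closed' "
        "ORDER BY [System.ChangedDate] DESC"
    )
}
resp = requests.post(
    f"https://dev.azure.com/{ORG}/_apis/wit/wiql",
    params={"api-version": "6.0"},
    auth=auth,
    json=wiql,
    timeout=30,
)
resp.raise_for_status()

# The work-items batch endpoint accepts at most 200 IDs per call
ids = [str(item["id"]) for item in resp.json()["workItems"]][:200]

if ids:
    details = requests.get(
        f"https://dev.azure.com/{ORG}/_apis/wit/workitems",
        params={
            "ids": ",".join(ids),
            "fields": "System.Title,System.TeamProject,System.State",
            "api-version": "6.0",
        },
        auth=auth,
        timeout=30,
    )
    details.raise_for_status()
    for wi in details.json()["value"]:
        fields = wi["fields"]
        print(wi["id"], fields["System.TeamProject"], fields["System.State"], fields["System.Title"])
```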

How to make a chatbot with Watson Conversation and Slack?

I want to build a Slack bot that can answer support queries. I've designed the conversation in Watson Conversation, but now I want to deploy it to Slack channels.
Ideally I don't want to have to develop and host an application to broker messages between the two systems.
Is there any platform or solution that I can use?
There are two possible solutions that I can think of.
Watson Conversation offers some basic integration with Slack through an application you can deploy yourself in a container. I believe they have a repo in their github (https://github.com/IBM/slack-watson-bot). You'll have to host this somewhere, though Bluemix offers some basic free hosting for a limited time. There are plenty of tutorials out there for spinning up containers in Bluemix.
An alternative solution (disclosure - this is my company) that wouldn't require hosting or development would be to use Bothaus (https://bothaus.io). Bothaus lets you configure integration between Slack and Watson Conversation without hosting or coding anything.
In Watson Conversation, while in your conversation workspace, click the Deploy icon.
After that, click Deploy on the Slack card, then click "Deploy to Slack App" and follow the steps.
You should not need to code anything; just fill in the related fields with your data.
Just be aware that if the URLs you are asked to enter into Slack contain spaces, replace the spaces with %20 so that Slack will recognise the URL.

Can we configure our Bluemix chatbot Application with Multiple Conversation Workspace?

Can we configure our Bluemix chatbot application with multiple Conversation workspaces? If yes, how can we call a particular Conversation workspace based on the questions the user asks the chatbot?
This can be done by your application. A scenario could be that:
the user input is analyzed by your app, or by sending it to the Natural Language Classifier or Natural Language Understanding service
based on the results of that analysis, your app then sends the input to the specific workspace
calls into a Conversation workspace are stateless, but carry an ID for the individual conversation (chat) and metadata about where you are in the dialog
that info could be used later to jump back to where the user was in the conversation for a given workspace
IMHO, this technique could be used to support multiple spoken languages or to separate different, more complex subjects into individual workspaces. Take a look at the architecture diagram in the documentation for the general idea.
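A minimal routing sketch along those lines is shown below, assuming a 1.x watson-developer-cloud Python SDK (older releases name the message input parameter differently); the credentials, classifier ID, and workspace mapping are purely illustrative.

```python
from watson_developer_cloud import ConversationV1, NaturalLanguageClassifierV1

# Placeholder credentials and IDs: replace with your own service credentials.
conversation = ConversationV1(
    username='CONVERSATION_USERNAME',
    password='CONVERSATION_PASSWORD',
    version='2017-05-26')

classifier = NaturalLanguageClassifierV1(
    username='NLC_USERNAME',
    password='NLC_PASSWORD')

# Map classifier classes to the Conversation workspace that handles them
WORKSPACES = {
    'billing': 'workspace-id-billing',
    'tech_support': 'workspace-id-tech-support',
}

# Keep one context object per workspace so each conversation can be resumed later
contexts = {}

def handle_user_input(text):
    # 1. Classify the input to decide which workspace should handle it
    top_class = classifier.classify('CLASSIFIER_ID', text)['top_class']
    workspace_id = WORKSPACES.get(top_class, WORKSPACES['tech_support'])

    # 2. Send the input to that workspace, passing back its stored context
    response = conversation.message(
        workspace_id=workspace_id,
        input={'text': text},
        context=contexts.get(workspace_id))

    # 3. Remember where we are in this workspace's dialog
    contexts[workspace_id] = response['context']
    return response['output']['text']
```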