I want to attach an image as an IBM Watson Assistant response.
It must be placed in a public repository, but I want to know if it is possible to host it on IBM Cloud Object Storage, because my images are already there.
If that is not possible, how can I send an image as a response in Watson Assistant? I could not find anything in the documentation.
You can define image responses in Watson Assistant. This can be done either through the dialog builder or by using the JSON response editor. When using the dialog builder, there is a form for the image title, description and the URL.
To access an image on IBM Cloud Object Storage from within Watson Assistant and display it, the image needs to be publicly accessible. You can either enable public access on the entire bucket or on individual storage objects. The former can be a security concern; the latter is more work.
The URL for the image would be composed of the public endpoint, the bucket and the image name, e.g., https://s3.eu.cloud-object-storage.appdomain.cloud/your-bucket-name/this-is-the-image.png.
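The URL composition and the JSON editor's image response can be sketched in Python. This is only an illustration: the endpoint, bucket, and file names are the example values from above, and the helper function names are my own; the `generic` / `response_type` structure is the one used in the JSON response editor.

```python
# Sketch: compose a public IBM COS URL and wrap it in a Watson Assistant
# image response. Endpoint, bucket, and object names are placeholders.

def public_image_url(endpoint: str, bucket: str, key: str) -> str:
    """Compose the public URL from endpoint, bucket, and object name."""
    return f"https://{endpoint}/{bucket}/{key}"

def image_response(url: str, title: str, description: str = "") -> dict:
    """An 'image' response as entered in the JSON response editor."""
    return {
        "generic": [
            {
                "response_type": "image",
                "source": url,
                "title": title,
                "description": description,
            }
        ]
    }

url = public_image_url("s3.eu.cloud-object-storage.appdomain.cloud",
                       "your-bucket-name", "this-is-the-image.png")
print(url)
# https://s3.eu.cloud-object-storage.appdomain.cloud/your-bucket-name/this-is-the-image.png
```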
In my test, the image coming from my public IBM COS bucket displayed correctly in the "Try it out" window.
Related
Do you know if making the images "public" in IBM Cloud Object Storage really makes them public, meaning the resources could be found through a web browser or a search engine like Google? Or is it more like "shared via link", since IBM COS provides a link? That link works just fine for the Watson Assistant image responses, but are those images somehow unsafe?
When you make a folder in S3 / IBM Cloud Object Storage (COS) public, its content is accessible to anyone. Because there are tools (and attackers) that scan for host names, IP addresses, and available services, there is a chance that a scanner will find the offered resources (images). Public is public.
I have used images stored in a public COS folder for image responses in chatbots developed with IBM Watson Assistant. If you use the web chat feature and users access the chatbot, they can download the images - the images are "public".
I'm creating a bot using IBM Watson Assistant. I am trying to use a webhook, but I don't know the format of the JSON POST request that is sent to the webhook.
My case study is a shop where users can pre-order. I want to send the order details to my back-end server and give the user a reference number for the pre-order. I found nothing in the documentation about what POST request format IBM Watson Assistant sends and in what format the response should be returned.
I know IBM Watson Assistant does not require a particular response format; it allows developers to shape the response as they want.
IBM Watson Assistant has a documented API. There is the recommended V2 Assistant API, which can be used to create a session and then send messages. The older V1 Assistant API has more functions and reaches deeper into the system. Both APIs can be used to write chatbots.
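The V2 flow is: create a session, then send messages within it. A minimal sketch of the REST endpoints involved; the service URL, assistant ID, and version date below are placeholders you would take from your own service credentials and the current API docs:

```python
# Sketch of the V2 Assistant API request URLs (session-based flow).
# service_url, assistant_id, and the version date are placeholders.

def session_url(service_url: str, assistant_id: str, version: str) -> str:
    """POST here (with authentication) to create a session; the answer
    contains a session_id."""
    return f"{service_url}/v2/assistants/{assistant_id}/sessions?version={version}"

def message_url(service_url: str, assistant_id: str,
                session_id: str, version: str) -> str:
    """POST the user input here as JSON, e.g. {"input": {"text": "Hello"}}."""
    return (f"{service_url}/v2/assistants/{assistant_id}"
            f"/sessions/{session_id}/message?version={version}")

print(message_url("https://api.eu-de.assistant.watson.cloud.ibm.com",
                  "my-assistant-id", "my-session-id", "2021-06-14"))
```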
If you mean a webhook as in the Watson Assistant feature for reaching out from a dialog node to an external service, the process is the following:
in the global configuration, you define the URL and the header
for a dialog node, you enable webhooks, then define the key / value pairs that are sent as payload. They can differ by dialog node.
Typically, the expected result is JSON data because it is the easiest to process.
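For the pre-order scenario, the webhook receives the key/value pairs defined on the dialog node as a JSON object in the POST body and should answer with JSON, which the dialog can then evaluate. A minimal sketch of the server-side handler logic; the field names `order_item`, `quantity`, and `reference` are my own assumptions for this example, not a fixed Watson Assistant format:

```python
import json
import uuid

def handle_preorder(body: str) -> str:
    """Parse the JSON payload sent by the Watson Assistant webhook and
    return a JSON response with a generated pre-order reference number."""
    payload = json.loads(body)
    reference = "PRE-" + uuid.uuid4().hex[:8].upper()
    # In a real backend, you would persist the payload here (database, queue, ...).
    response = {
        "reference": reference,
        "order_item": payload.get("order_item"),
        "quantity": payload.get("quantity"),
    }
    return json.dumps(response)

# The key/value pairs from the dialog node arrive as a JSON body like this:
print(handle_preorder('{"order_item": "coffee beans", "quantity": 2}'))
```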
This IBM Cloud Solution tutorial on building a Slack bot with Watson Assistant uses webhooks to call out to a Db2 database. The code is available in a GitHub repo.
How can I extract a specific user conversation from all chat logs? I noticed that the chat-log JSON response contains a field named conversation_id.
My goal is to obtain, via a Cloud Function, the conversation id of the current conversation. So, when the conversation starts, how can I retrieve the current conversation id?
You can access context variables using either context[variableName] or $variableName. See the documentation on expressions for accessing objects in IBM Watson Assistant.
The conversation_id is in the set of context variables (context.conversation_id). You can access it as part of that structure.
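As a sketch, a server-side action (IBM Cloud Functions, Python runtime) receives its parameters as a dict; if you pass the conversation id from the dialog node (e.g., as `"$conversation_id"`), it shows up there. The parameter name `conversation_id` used below is my choice for this example:

```python
def main(params: dict) -> dict:
    """Entry point of an IBM Cloud Functions action. Watson Assistant
    passes the parameters defined on the dialog node in `params`."""
    conversation_id = params.get("conversation_id", "unknown")
    # Use the id to look up or store this conversation's chat log entries.
    return {"conversation_id": conversation_id}

print(main({"conversation_id": "1d2f5b6a-example"}))
```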
We have integrated an IBM Watson Assistant skill/workspace with a Facebook page using the built-in Watson features, following the integrated approach from the Virtual Assistants tab.
We are able to get responses in Facebook Messenger from the Watson skill/workspace FAQs. Now we want to add a few more questions to the skill/workspace and get the responses from a database.
We know that we can use IBM Cloud Functions to get the DB data and respond with it, but the Cloud Functions action types (web_action and cloud_function, i.e., server) incur a cost, hence we are looking for another approach.
We have our own APIs developed for the DB and want to use those in Watson Assistant dialog node actions. Please let us know how we can add them to actions and get a response from the API without using a client application or Cloud Functions.
Note: we haven't developed any application for this chatbot; we directly integrated the Watson skill/workspace with the Facebook page and are trying to make API calls from the dialog nodes wherever we need them.
IBM Watson Assistant allows you to invoke three different types of actions from a dialog node:
client,
server (cloud_function),
web_action.
Because for cloud_function and web_action the action is hosted as a Cloud Function on IBM Cloud, the computing resources are charged. For the type client, your app handles the API call, and the charges depend on where your app is hosted. Thus, there are always costs.
What you could do is write a wrapper function that is deployed as a web_action or cloud_function. That way, not much computing resource is needed and the charges would be minimal. But again, independent of the action type, there are always costs (maybe not charges) - one way or another...
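Such a wrapper could look like the following sketch (IBM Cloud Functions, Python runtime). The API URL is a placeholder for your own backend, and `build_request` is a helper name I introduce here; the wrapper just forwards the dialog node's parameters and hands the JSON answer back to Watson Assistant:

```python
import json
from urllib import request

API_URL = "https://example.com/api/orders"  # placeholder for your own API endpoint

def build_request(params: dict) -> request.Request:
    """Build the POST request that forwards the dialog node's parameters
    to your own API as JSON."""
    return request.Request(
        API_URL,
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def main(params: dict) -> dict:
    """Entry point when deployed as cloud_function / web_action:
    call your own API and return its JSON answer to Watson Assistant."""
    with request.urlopen(build_request(params)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```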
I have created a chatbot with IBM Watson Assistant, but currently I have hardcoded all values in the dialog.
E.g.: when a user asks "Who created the computer?", the dialog flow answers "XYZ created the computer".
But suppose the user asks about some other person whose value is not hardcoded in the dialogs on IBM Watson Assistant - is there any way I can provide Google search results instead?
You can make programmatic calls from within IBM Watson Assistant dialog nodes. Both server-side actions (IBM Cloud Functions) and client-side calls (handled within the app) are supported. That way you can react to queries like the one described and call a search engine, a database, or something else.
This IBM Cloud solution tutorial on how to build a database-driven Slackbot uses server-side actions to interact with a Db2 database. In your example, instead of calling out to a database to fetch data, you would send a request to Google search.
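As a sketch, such an action is declared in the dialog node's JSON alongside the output. The action name searchWeb, the query parameter, and the result variable below are placeholders of my own; the type could also be server or web_action, depending on where the call is handled:

```json
{
  "output": {
    "generic": []
  },
  "actions": [
    {
      "name": "searchWeb",
      "type": "client",
      "parameters": {
        "query": "<? input.text ?>"
      },
      "result_variable": "context.search_result"
    }
  ]
}
```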
I saw that you tagged the question "facebook-apps". If you use the Botkit middleware to build an integration with Facebook Messenger, then check out this blog post on how to enable actions in the Botkit Middleware for Watson Assistant.