Watson Assistant image responses - ibm-cloud

Do you know if making the images "Public" in IBM Cloud Object Storage really makes them public, in the sense that the resources could be found through a web search engine like Google? Or is it more like "shared via link"? IBM COS provides a link, and that works just fine for Watson Assistant image responses, but are those images unsafe somehow?

When making a folder in S3 / IBM Cloud Object Storage (COS) public, its content is accessible to anyone. Because there are tools (and attackers) that scan for host names, IP addresses and available services, there is a chance that a scanner finds the offered resources (images). Public is public.
I have used images stored in a public COS folder for image responses in chatbots developed with IBM Watson Assistant. If you use the web chat feature and users access the chatbot, they can download the images - the images are "public".
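If only selected images need to be public, one option is to set a public-read ACL on individual objects instead of opening up the whole bucket. A minimal sketch, assuming the ibm-cos-sdk Node.js package and COS service credentials in environment variables (endpoint, bucket and object name are placeholders):

```typescript
import * as COS from 'ibm-cos-sdk';

// Client configuration as documented for the IBM COS SDK; the endpoint
// must match the bucket's region.
const cos = new COS.S3({
  endpoint: 'https://s3.eu.cloud-object-storage.appdomain.cloud',
  apiKeyId: process.env.COS_APIKEY,
  serviceInstanceId: process.env.COS_INSTANCE_CRN,
});

// Make a single object publicly readable instead of exposing the bucket.
cos.putObjectAcl({
  Bucket: 'your-bucket-name',   // placeholder
  Key: 'this-is-the-image.png', // placeholder
  ACL: 'public-read',
}).promise()
  .then(() => console.log('object is now public-read'))
  .catch((err) => console.error(err));
```

Note that anyone who obtains the URL can still fetch such an object, so the same "public is public" caveat applies.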

Related

IBM Watson Assistant: How can I send an image as a chatbot response

I want to attach an image as an IBM Watson Assistant response.
It must be placed in a public repository, but I want to know if it is possible to have it on IBM Cloud Object Storage, because I have my images there.
If that is not possible: how can I send an image as a response in Watson Assistant? I could not find anything in the documentation.
You can define image responses in Watson Assistant, either through the dialog builder or by using the JSON response editor. When using the dialog builder, there is a form for the image title, description and URL.
To access an image on IBM Cloud Object Storage from within Watson Assistant and to display it, the image needs to be publicly accessible. You can enable public access either on the entire bucket or on individual storage objects. The first could be a security concern; the latter is more work.
The URL for the image is composed of the public endpoint, the bucket name and the object name, e.g., https://s3.eu.cloud-object-storage.appdomain.cloud/your-bucket-name/this-is-the-image.png.
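For illustration, here is roughly what such a response looks like when defined in the JSON editor, shown as a TypeScript literal (title and description are optional; the URL is the example from above):

```typescript
// Generic "image" response for a dialog node; the equivalent JSON is
// what you would paste into the node's JSON response editor.
const imageResponse = {
  output: {
    generic: [
      {
        response_type: 'image',
        source:
          'https://s3.eu.cloud-object-storage.appdomain.cloud/your-bucket-name/this-is-the-image.png',
        title: 'My image title',             // optional
        description: 'My image description', // optional
      },
    ],
  },
};
```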
In my test, the image response rendered as expected in the "Try it out" window, with the image coming from my public IBM COS bucket.

IBM Watson Assistant: How to make API calls from dialog

We have integrated an IBM Watson Assistant skill/workspace with a Facebook page, using the integrated approach from the Virtual Assistants tab.
We are able to get responses in Facebook Messenger from the Watson skill/workspace FAQs. Now we want to add a few more questions to the skill/workspace and get the responses from a database.
We know that we can use IBM Cloud Functions to get DB data and respond with it, but the Cloud Functions action types (web_action and cloud_function, i.e., server) incur a cost, hence we are looking for another approach.
We have our own APIs developed for the DB and want to use those in Watson Assistant dialog node actions. Please let us know how we can add them to actions and get a response from the API without using a client application or Cloud Functions.
Note: we haven't developed any application for this chatbot; we directly integrated the Watson skill/workspace with the Facebook page and are trying to make API calls wherever we require them from the dialog nodes.
IBM Watson Assistant allows you to invoke three different types of actions from a dialog node (a sketch of the node JSON follows the list):
client,
server (cloud_function),
web_action.
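For illustration, a dialog node references such an action through an actions array in its JSON; a sketch with placeholder names, shown as a TypeScript literal:

```typescript
// Sketch of the actions array in a dialog node's JSON editor. The action
// name, the parameter mapping and the result variable are placeholders.
const dialogNode = {
  actions: [
    {
      name: '/your-namespace/your-package/fetch-answer', // placeholder
      type: 'web_action', // or 'client', 'server' (cloud_function)
      parameters: { question: '<? input.text ?>' },
      result_variable: 'context.api_result',
    },
  ],
};
```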
Because for cloud_function and web_action the action is hosted as a Cloud Function on IBM Cloud, the computing resources are charged. For the type client, your app handles the API call, and the charges depend on where your app is hosted. Thus, there are always costs.
What you could do is write a wrapper function that is deployed as a web_action or cloud_function. That way, there isn't much computing resource needed and the charges would be minimal. But again, independent of the action type, there are always costs (maybe not charges) - one way or another...
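Such a wrapper could be as small as the following sketch, deployable on IBM Cloud Functions; the API URL and the response fields are assumptions to be replaced with those of your own DB API:

```typescript
// Minimal wrapper action (sketch). Deployed as a web_action or
// cloud_function, it forwards the user's question to your own API and
// hands the answer back to Watson Assistant via the result variable.
export async function main(params: { question?: string }): Promise<object> {
  const url =
    'https://api.example.com/faq?q=' + // placeholder for your own API
    encodeURIComponent(params.question || '');
  // Assumes a Node.js runtime with a global fetch; otherwise bundle an
  // HTTP client with the action.
  const res = await fetch(url);
  if (!res.ok) {
    return { error: `API returned status ${res.status}` };
  }
  const data = await res.json();
  // This object becomes the value of the node's result_variable.
  return { answer: data.answer };
}
```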

How to connect IBM Watson Assistant and IBM Watson Discovery?

I am using IBM Watson Assistant to create a chatbot and have created a Watson Discovery collection in the same project.
I need help understanding how the dialog can take a response from the Discovery collection when an intent, along with entities, is detected in the "Try it" section.
Do we have to define something in the response section, or is there something similar to a text response?
Or is the Discovery response only available in the app we are working on?
There are a couple of options to link up an IBM Watson Assistant chatbot with IBM Watson Discovery.
The first and oldest is to have the application interact with Watson Assistant and, depending on the flow, context and response, send a request to Watson Discovery. Basically, the integration is done in the application layer.
The second option is to use server or client dialog actions in Assistant to directly call into Discovery. See my blog post on a barebones news chatbot and the related code on GitHub for how to implement such an action. My example uses client actions (basically letting the app handle it again), but server actions are similar. This IBM Cloud solution tutorial covers server actions for a database-driven bot.
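A client action boils down to the app spotting the action request in the Assistant response and running the Discovery query itself. A minimal sketch, assuming the ibm-watson Node.js SDK and placeholder IDs:

```typescript
import DiscoveryV1 from 'ibm-watson/discovery/v1';
import { IamAuthenticator } from 'ibm-watson/auth';

const discovery = new DiscoveryV1({
  version: '2019-04-30',
  authenticator: new IamAuthenticator({
    apikey: process.env.DISCOVERY_APIKEY || '',
  }),
});

// Called by the app when the Assistant response requests a client action.
// Environment and collection IDs are placeholders.
async function queryDiscovery(naturalLanguageQuery: string) {
  const { result } = await discovery.query({
    environmentId: 'YOUR_ENVIRONMENT_ID',
    collectionId: 'YOUR_COLLECTION_ID',
    naturalLanguageQuery,
    count: 3,
  });
  // result.results holds the matching documents to feed back into the dialog.
  return result.results;
}
```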
The newest option, currently in beta, is the direct integration of Assistant and Discovery. See "Building a search skill" for an introduction to this direct linkage between IBM Watson Assistant and IBM Watson Discovery.
Options 2 (server action) and 3 work from the "Try it" panel in the tooling; the others do not, because they depend on app-based coordination.
As usual in IT, there are different ways to achieve the goal; the choice is yours... ;-)

Is it possible to output Google search results in IBM Watson Chatbot?

I have created a chatbot with IBM Watson Assistant, but currently I have hardcoded all values in the dialog.
E.g.: when a user asks "Who created the computer?", the dialog flow answers "XYZ created the computer".
But suppose the user asks about some other person whose value is not hardcoded in the dialogs of IBM Watson Assistant. Is there any way I can provide Google search results instead?
You can make programmatic calls from within IBM Watson Assistant dialog nodes. Both server-side actions (IBM Cloud Functions) and client-side calls (within the app) are supported. That way you can react to such queries as described and call a search engine, a database or something else.
This IBM Cloud solution tutorial on how to build a database-driven Slackbot uses server-side actions to interact with a Db2 database. Instead of calling out to a database to fetch data, in your example you would send a request to Google search.
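For a search engine, the server-side action would issue an HTTP request instead of a database query. A sketch using the Google Custom Search JSON API (API key and search engine ID are placeholders, and Google's terms for automated queries apply):

```typescript
// Server-side action (sketch): fetch a few Google search results and
// return them to the dialog. Assumes a runtime with a global fetch.
export async function main(params: { query?: string }): Promise<object> {
  const url =
    'https://www.googleapis.com/customsearch/v1' +
    `?key=${process.env.SEARCH_API_KEY}` + // placeholder credentials
    `&cx=${process.env.SEARCH_ENGINE_ID}` +
    `&q=${encodeURIComponent(params.query || '')}&num=3`;
  const res = await fetch(url);
  const data = await res.json();
  // items[] carries title, snippet and link for each search result.
  const results = (data.items || []).map(
    (item: { title: string; snippet: string }) =>
      `${item.title}: ${item.snippet}`,
  );
  return { results };
}
```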
I saw that you tagged the question "facebook-apps". If you use the Botkit middleware to build an integration with Facebook Messenger, then check out this blog post on how to enable actions in the Botkit Middleware for Watson Assistant.

Can I serve AMP pages with firebase-storage?

Since Firebase Hosting only serves static pages, can I serve [dynamically generated] AMP pages with Firebase Storage?
Now, with Firebase Cloud Functions, you can use an all-Firebase solution:
In response to an HTTP request, you can query the database and dynamically generate an AMP/HTML page that is sent to the browser.
Here is my approach to generating an HTML page: Firebase HTTP Cloud Functions - Read database once
By now I have developed it further to serve an AMP page.
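The core of that approach is an HTTP function that reads from the database and writes out AMP HTML. A minimal sketch, assuming firebase-functions with the Realtime Database and a placeholder path (the required AMP boilerplate styles are omitted for brevity):

```typescript
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// HTTP function: read page content from the Realtime Database and render
// it as an AMP page. The database path and field names are placeholders.
export const amp = functions.https.onRequest(async (req, res) => {
  const snapshot = await admin.database().ref('/pages/home').once('value');
  const page = snapshot.val() || { title: 'Hello', body: 'AMP from Firebase' };

  res.set('Content-Type', 'text/html');
  // Cache on the CDN so the function is not invoked for every request.
  res.set('Cache-Control', 'public, max-age=300, s-maxage=600');
  res.status(200).send(`<!doctype html>
<html amp lang="en">
<head>
  <meta charset="utf-8">
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <title>${page.title}</title>
  <link rel="canonical" href="${req.path}">
  <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
  <!-- required AMP boilerplate styles omitted for brevity -->
</head>
<body>
  <h1>${page.title}</h1>
  <p>${page.body}</p>
</body>
</html>`);
});
```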
If I understand AMP correctly, it can be used entirely statically, so there's no reason an AMP page couldn't be hosted on Firebase Hosting.
If you're doing dynamic rendering, you'll want to use App Engine, Compute Engine, or Kubernetes (or similar tech on different cloud platforms).
You can set up a bucket on Google Cloud Storage as a website. See this article:
https://cloud.google.com/storage/docs/hosting-static-website
You can also use the Firebase Storage bucket for that, because Firebase Storage === Google Cloud Storage.
Regards, Peter