I have already created a function in IBM Cloud Functions, but how would I pass parameters taken from user input into it?
What I'm trying to do is the following.
For example: when a user types "I need product", "Buy product now", or "Show me products", the product input is taken as a parameter and passed into my Cloud Function, which then returns all products that use product as a keyword.
The response text would then get its info from the Cloud Function's return output (which is a JSON array):
(res.body.items[?].name)
Example layout from IBM:
{
  "context": {
    "variable_name": "variable_value"
  },
  "actions": [
    {
      "name": "getProducts",
      "type": "client | server",
      "parameters": {
        "<parameter_name>": "<parameter_value>"
      },
      "result_variable": "<result_variable_name>",
      "credentials": "<reference_to_credentials>"
    }
  ],
  "output": {
    "text": "response text"
  }
}
There is a full tutorial I wrote available in the IBM Cloud docs which features IBM Cloud Functions and a backend database. The code is provided on GitHub in this repository: https://github.com/IBM-Cloud/slack-chatbot-database-watson/.
Here is the relevant part from the workspace file that shows how a parameter could be passed into the function:
{
  "type": "response_condition",
  "title": null,
  "output": {
    "text": {
      "values": []
    }
  },
  "actions": [
    {
      "name": "_/slackdemo/fetchEventByShortname",
      "type": "server",
      "parameters": {
        "eventname": [
          "<? $eventName.substring(1,$eventName.length()-1) ?>"
        ]
      },
      "credentials": "$private.icfcreds",
      "result_variable": "events"
    }
  ],
  "context": {
    "private": {}
  },
Later on, the result is presented, e.g., in this way:
"output": {
"text": {
"values": [
"ok. Here is what I got:\n ```<? $events['result'] ?>```",
"Data:\n ``` <? $events['data'] ?> ```"
],
"selection_policy": "sequential"
},
"deleted": "<? context.remove('eventDateBegin') ?><? context.remove('eventDateEnd') ?> <? context.remove('queryPredicate') ?>"
},
Some fancier formatting can of course be done by iterating over the result; some tricks for that are here. The code also shows how to use a child node to process the result and to clean up context variables.
To obtain the parameter, in your case a product name or type, you would either need to access the input string and find the part after "product", or use the beta feature "contextual entities", which is designed for exactly such cases.
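As a rough sketch of the string-based approach, assuming the keyword follows the word "product" in the user input (the context variable name product is my own choice and not from the tutorial):
{
  "context": {
    "product": "<? input.text.substring(input.text.indexOf('product') + 'product'.length()).trim() ?>"
  }
}
The resulting $product value could then be passed as a parameter value in the actions array, analogous to the eventname parameter in the workspace excerpt above.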
I'm developing an integration that will programmatically create product entries in Salesforce, and part of that process needs to be the addition of product images. I'm using the Connect API and am able to make a GET call to the right folder like this (I've scrambled the IDs and what not for this example):
https://example.salesforce.com/services/data/v52.0/connect/cms/delivery/channels/0591G0000000006/contents/query?folderId=9Pu1M000000fxUMSYI
That returns a payload like this:
{
  "currentPageUrl": "/services/data/v52.0/connect/cms/delivery/channels/0ap1G0000000006/contents/query?page=0&pageSize=250",
  "items": [
    {
      "contentKey": "MCZ2YVCGLNSBETNIG5P5QMIS4KNA",
      "contentNodes": {
        "source": {
          "fileName": "PET Round.jpg",
          "isExternal": false,
          "mediaType": "Image",
          "mimeType": "image/jpeg",
          "nodeType": "MediaSource",
          "referenceId": "05T0R000005MthL",
          "resourceUrl": "/services/data/v52.0/connect/cms/delivery/channels/0ap1G0000000007/media/MCY2YVCGLNSBETNIG5P4QMIS4KNA/content",
          "unauthenticatedUrl": "/cms/delivery/media/MCZ2YVCGLNSBETNIG5P4QMIS4KNA",
          "url": "/cms/delivery/media/MCY2YVCGLNSBETNIG5P4QMIS4KNA"
        },
        "title": {
          "nodeType": "NameField",
          "value": "844333"
        }
      },
      "contentUrlName": "844333",
      "language": "en_US",
      "managedContentId": "20T0R0000008U9qUAE",
      "publishedDate": "2021-08-18T16:20:57.000Z",
      "title": "844333",
      "type": "cms_image",
      "typeLabel": "Image",
      "unauthenticatedUrl": "/cms/delivery/v52.0/0DB1G0000008tfOWAU/contents/20Y0R0000008y9qUAE?oid=00D0R000000OI7GUAW"
    }
  ]
}
I am also able to retrieve images by contentKey with a GET call like this:
https://example.salesforce.com/services/data/v52.0/connect/cms/delivery/channels/0ap1G0000000007/media/MCZ2ZVCGLNSBETMIG5P4QMIS4KNA/content
Does anyone know what the endpoint for adding these images to a product should look like and what parameters, etc., it should have? I'm having trouble finding anything for this specific scenario in the docs, but surely there's a way.
Thanks!
The IBM Watson Assistant API docs for v2 describe output.user_defined as:
"An object containing any custom properties included in the response. This object includes any arbitrary properties defined in the dialog JSON editor as part of the dialog node output."
But they do not say where in the JSON editor to set it up. Is it under output?
{
  "output": {
    "text": {
      "values": [],
      "selection_policy": "sequential"
    },
    "xxx": "aaa"
  },
  "context": {}
}
I can't set it at the root level in the JSON editor, as the editor complains that only output, output.generic, actions, and context are allowed.
Where should I put it in the JSON editor so that it appears in output.user_defined in the response to a /message REST call?
You can move that into the output section as user_defined. Here is what I tried:
"output": {
"text": {
"values": [],
"selection_policy": "sequential"
},
"user_defined": {
"test": "henrik"
}
}
I then used the V2 API of my test tool to verify. Here is the relevant part of how it was reported:
"output": {
"generic": [
{
"text": "Ok, checking the event information.",
"response_type": "text"
},
{
"text": "ok.",
"response_type": "text"
}
],
"debug": {...
},
"intents": [...
],
"user_defined": {
"test": "henrik"
},
"entities": [
{...
See also this section in the IBM Watson Assistant documentation with some more information on the response JSON.
As @data_henrik stated above, extra JSON elements added via the JSON editor to the output section of the response are moved into the user_defined section of the V2 output response.
These "extra" JSON elements do not have to be labelled user_defined. In my own case I have output.extra elements within my dialog responses. In V1 they remain output.extra, but in V2 they become output.user_defined.extra.
As you are just starting, it would be best to keep things consistent and use output.user_defined as your starting point.
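To illustrate the mapping, here is a hypothetical custom element added in the dialog JSON editor (the property name extra and its content are made up for this example):
"output": {
  "text": {
    "values": ["Ok."]
  },
  "extra": {
    "flag": true
  }
}
A V1 /message response keeps this as output.extra, while the V2 API would report it roughly as:
"output": {
  "user_defined": {
    "extra": {
      "flag": true
    }
  }
}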
I need to show an unknown number of buttons in a Watson Assistant dialog node. The data for the buttons comes from an IBM Cloud Function.
If I manually set up a response type "option" answer in my node, the JSON object looks like this:
{
  "output": {
    "generic": [
      {
        "title": "Välj mötestyp",
        "options": [
          {
            "label": "Rådgivning familjerätt 30 min",
            "value": {
              "input": {
                "text": "447472"
              }
            }
          },
          {
            "label": "Rådgivning familjerätt 60 min",
            "value": {
              "input": {
                "text": "448032"
              }
            }
          }
        ],
        "description": "Välj typ av möte du vill boka",
        "response_type": "option",
        "preference": "dropdown"
      }
    ]
  }
}
My cloud function can create this JSON with any number of options. But how can I use this data in Assistant?
The easiest would be to let the cloud function generate the complete JSON and then just output the returned JSON like this:
{
$context.output"
}
...but that's not allowed.
Generated output object from my function:
[{"serviceId":447472,"serviceName":"Rådgivning Familjerätt 30 min"},{"serviceId":448032,"serviceName":"Rådgivning Familjerätt 60 min"}]
Any advice on how to do this?
I don't see a simple way of generating the entire output and options. What you could do is this:
Generate the option labels and values in your Cloud Function.
Pass them into a generic output node that has predefined structures for 1, 2, 3, etc. options, and check, based on the array size of your context variable, which predefined response structure to fill.
I tested the following:
{
  "context": {
    "my": [
      {
        "label": "First option",
        "value": "one"
      },
      {
        "label": "Second",
        "value": "two"
      }
    ]
  },
  "output": {
    "generic": [
      {
        "title": "This is a test",
        "options": [
          {
            "label": "<? $my[0].label ?>",
            "value": {
              "input": {
                "text": "<? $my[0].value ?>"
              }
            }
          },
          {
            "label": "<? $my[1].label ?>",
            "value": {
              "input": {
                "text": "<? $my[1].value ?>"
              }
            }
          }
        ],
        "response_type": "option"
      }
    ]
  }
}
This defines a context variable with the options, analogous to the options structure. The output then accesses the labels and values; modifying them later proves that they are actually used and can be changed.
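If the Cloud Function result lands directly in a result variable, the option entries can also reference it without the intermediate context variable. This is only a sketch: it assumes a result variable named events (as in "result_variable": "events" earlier) that holds the serviceId/serviceName array shown above, which depends on how your function wraps its response:
"options": [
  {
    "label": "<? $events[0].serviceName ?>",
    "value": {
      "input": {
        "text": "<? $events[0].serviceId ?>"
      }
    }
  },
  {
    "label": "<? $events[1].serviceName ?>",
    "value": {
      "input": {
        "text": "<? $events[1].serviceId ?>"
      }
    }
  }
]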
I decided to upgrade my Google Assistant action to use the Dialogflow V2 API, and my webhook returns an object like this:
{
  "fulfillmentText": "Testing",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "fulfillmentMessages text attribute"
        ]
      }
    }
  ],
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            },
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "ssml": "simpleResponse: ssml",
              "displayText": "simpleResponse displayText"
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object, it works just fine and the assistant speaks the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete it and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action dialogFlow console, this is a screenshot of that setting
Here is an screenshoot of my action's integration -> Google Assistant
Thanks Allen, Yes I do have "expectUserResponse": false, I added the suggestion object you recommended but, unfortunately nothing changed, I am still getting this error
Simulator debug tag details
First of all - this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2 - they are two completely different creatures. If your project was using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug output there is probably a block that says:
{
  "name": "MalformedResponse",
  "debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
  "subDebugEntryList": []
}
So something more like this should work:
{
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "This"
          },
          {
            "title": "That"
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
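Since the asker mentions ending the conversation with "expectUserResponse": false, here is a rough sketch of where that flag would sit in the Dialogflow V2 webhook payload (in that case the suggestions shouldn't be required, per the note above); the textToSpeech value here is made up:
{
  "payload": {
    "google": {
      "expectUserResponse": false,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Here is the audio."
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}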
When calling the Magento 2 REST API to get the schema for working with products using:
..rest/all/schema?services=catalogProductRepositoryV1
The response back includes:
...
"paths": {
  "/V1/products": {
    "post": {
      "tags": [
        "catalogProductRepositoryV1"
      ],
      "description": "Create product",
      "operationId": "catalogProductRepositoryV1SavePost",
      "parameters": [
        {
          "name": "$body",
          "in": "body",
          "schema": {
            "required": [
              "product"
            ],
            "properties": {
              "product": {
                "$ref": "#/definitions/catalog-data-product-interface"
              },
              "saveOptions": {
                "type": "boolean"
              }
            },
            "type": "object"
          }
        }
      ],...
How do you go about getting the definition/schema for the "product" object that it expects during a POST call? i.e. the following definition:
"properties": {
"product": {
"$ref": "#/definitions/catalog-data-product-interface"
},
It looks like it's only possible using the Swagger GUI. Essentially, replicate these steps and change the search term to whatever you're searching for:
Go to: http://devdocs.magento.com/swagger/index.html
Ctrl + f >> search for whatever you're after. In the case of the above: catalogProductRepositoryV1
Expand the API interface by clicking it
Choose your REST Method
Under "Parameters" there will be a Model/Model Schema showing you what payload it expects.
Welcome to Swagger! It's great when you know how to use it!
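For reference, a minimal body for the POST /V1/products call might look like the following once the product model is filled in; the SKU, name, price, and attribute set ID are made-up example values (4 is typically the default attribute set, but check your own instance):
{
  "product": {
    "sku": "example-sku",
    "name": "Example Product",
    "attribute_set_id": 4,
    "price": 9.99,
    "status": 1,
    "visibility": 4,
    "type_id": "simple"
  },
  "saveOptions": true
}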