Just like the Suggestion (quick reply button) class in Actions on Google, do we have any class or function we can use in Amazon Lex?
Actions on Google: agent.add(new Suggestion('Your text here')); // provides a quick suggestion to the user
Like Google, Amazon Lex also has quick suggestions, i.e. response cards. Below is a sample JSON object (a slot definition) from AWS Lex that will generate response cards, i.e. quick suggestions, for slots.
{
  "sampleUtterances": [],
  "slotType": "",
  "slotTypeVersion": "4",
  "slotConstraint": "Optional",
  "valueElicitationPrompt": {
    "messages": [
      {
        "contentType": "PlainText",
        "content": "what do you want to schedule"
      }
    ],
    "responseCard": "{\"version\":1,\"contentType\":\"application/vnd.amazonaws.card.generic\",\"genericAttachments\":[{\"buttons\":[{\"text\":\"sales\",\"value\":\"sales consultation\"}]}]}",
    "maxAttempts": 2
  },
  "priority": 11,
  "name": "appointment"
}
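For comparison, if the bot uses a Lambda fulfillment, a similar card can be attached to the dialogAction of the Lambda response. This is only a minimal sketch, assuming a Node.js Lambda and the Lex V1 response format; the intent and slot names here are placeholders, not from the bot above.

// Minimal Lex V1 fulfillment sketch: elicit a slot and attach a response card
// with "quick suggestion" buttons. Intent/slot names are placeholders.
exports.handler = async (event) => {
  return {
    sessionAttributes: event.sessionAttributes,
    dialogAction: {
      type: 'ElicitSlot',
      intentName: 'MakeAppointment',   // placeholder intent name
      slotToElicit: 'appointment',
      slots: event.currentIntent ? event.currentIntent.slots : {},
      message: {
        contentType: 'PlainText',
        content: 'What do you want to schedule?'
      },
      responseCard: {
        version: 1,
        contentType: 'application/vnd.amazonaws.card.generic',
        genericAttachments: [
          {
            title: 'Appointment type',
            buttons: [
              { text: 'sales', value: 'sales consultation' }
            ]
          }
        ]
      }
    }
  };
};

The buttons render as tappable quick suggestions in clients that support response cards, and the tapped value is sent back to Lex as the user's input.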
I have an endpoint that returns a list of questions. Each question has the following properties:
Input field name, e.g. "Select from the following list"
Validation, e.g. IsRequired
controlType, e.g. dropdown, text, image, file
values, e.g. dart, javascript, python
I want to build input forms on the fly on mobile when I call this endpoint. How do I go about this in Flutter? I already have the API.
Think of it like a quiz app but with different controls, e.g. dropdown, text field, checkbox.
Here is an example of the API response:
[
  {
    "question": "what is your name ?",
    "controlType": "textField",
    "values": [],
    "validations": [
      {
        "validation": "IsRequired"
      }
    ]
  },
  {
    "question": "select from the dropdown:",
    "controlType": "dropdown",
    "values": ["one", "two", "three"],
    "validations": [
      {
        "validation": "IsRequired"
      }
    ]
  }
]
Yeah, more code is needed, but you could probably just use BLoC to get the API response, create the widgets there, and then emit them as new state; that way you can show some sort of progress indicator in the meantime. Also, are you using some kind of model to parse the response and make it a bit easier?
We are currently setting up a smart home Action, and we would like to provide roomHint on the first SYNC (not on a request sync), since it's really tedious to set up rooms manually, but it does not work.
We tried naming the rooms in English and also in Italian (it's not really clear from the documentation whether there is a list of room names we can use), but no luck.
So can you please give us a hint on how to use the roomHint field?
Also, in the API docs we found structureHint; does it work? The documentation for the SYNC intent does not mention this field.
Here is our SYNC response with one device and room; we took "office" from the example JSON:
{
  "requestId": "3582198904737125163",
  "payload": {
    "agentUserId": "xyz#qwertyz.com",
    "devices": [
      {
        "id": "deviceID",
        "type": "action.devices.types.LIGHT",
        "traits": [
          "action.devices.traits.OnOff"
        ],
        "name": {
          "name": "Lampadina",
          "defaultNames": [
            "Lampadina_XYZ"
          ],
          "nicknames": [
            "Lampadina"
          ]
        },
        "willReportState": false,
        "customData": {
          "modelType": "DEVICE"
        },
        "roomHint": "office"
      }
    ]
  }
}
Thanks
Unfortunately, I believe the structureHint is only in the HomeGraph API sync response.
It cannot be used in the Sync intent.
If someone can tell me I'm wrong and how to use it, you'd be a hero.
So I have been experimenting with different response types for Dialogflow through Actions on Google: Actions responses and webhook/fulfillment.
So far, I have been able to generate proper responses for types like list, basic card, and suggestion chips successfully. What I need now is a list-based response that lets the user open a link in a browser when touched and does not generate a chat bubble. The browsing carousel fits the criteria: Browsing Carousel.
I have successfully created and simulated the output with 2 sample items. The issue is when the user wants to continue the conversation. As per the guidance section in the help above, for the browsing carousel:
By default, the mic remains closed after a browse carousel is sent. If you want to continue the conversation afterwards, we strongly recommend adding suggestion chips below the carousel.
From this, what I understood is that the user has to invoke the app again by saying "Ok Google, talk to [app]". This doesn't seem very user-friendly, as the user expects to return to the conversation she was having with the agent after she has looked through the links from the carousel. Please note, I have simulated the flow using the Actions simulator on the Actions console.
As soon as I invoke the intent with the browsing carousel, it is shown to me with the sample items. But when I enter/say the next command to continue the conversation, the agent simply returns with:
We're sorry, but something went wrong. Please try again.
And the REQUEST/RESPONSE windows as well as ERRORS/DEBUG are empty. I have logged calls to the webhook, and no call is received.
The question: is there a way to give the user the ability to browse an informative link from a response "list" (not a basic card) and return to the conversation without ending it?
Here is the response for the browsing carousel from the RESPONSE window in Actions > Simulator (note: I've removed non-relevant parts):
{
  "conversationToken": "[token info]",
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "You have the following 2 options:",
                "displayText": "You have the following 2 options:"
              }
            },
            {
              "carouselBrowse": {
                "items": [
                  {
                    "title": "Test 1",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  },
                  {
                    "title": "Test 2",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  }
                ]
              }
            }
          ],
          "suggestions": [
            {
              "title": "Continue"
            },
            {
              "title": "End"
            }
          ]
        }
      }
    }
  ],
  "responseMetadata": {
    "status": {
      "message": "Success (200)"
    },
    "queryMatchInfo": {
      "queryMatched": true
    }
  }
}
For all those who are using the simulator to test the browsing carousel and seeing that it stops responding after the output: please use a device instead to test it. When you use a device to render the output with the carousel response, it doesn't kill the conversation but turns off the mic. This is the intended behavior. You can add suggestion chips to help the user continue the conversation.
Updated: also make sure every webhook intent call returns a proper response with a simple response. As long as the response has the required textual and audio information correctly set up, the simulator will not fail.
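For reference, here is a minimal fulfillment sketch using the actions-on-google Node.js client library (v2) that pairs the browse carousel with a simple response and suggestion chips; the intent name and URLs are placeholders, not from the project above.

// Minimal Dialogflow fulfillment sketch (actions-on-google v2).
// Sends a simple response, a browse carousel, and suggestion chips
// so the user can keep the conversation going after the mic closes.
const {
  dialogflow,
  BrowseCarousel,
  BrowseCarouselItem,
  Image,
  Suggestions,
} = require('actions-on-google');

const app = dialogflow();

app.intent('Show Options', (conv) => {
  // A simple response is required alongside the rich response.
  conv.ask('You have the following 2 options:');
  conv.ask(new BrowseCarousel({
    items: [
      new BrowseCarouselItem({
        title: 'Test 1',
        description: 'Desc 1',
        url: 'https://example.com/option-1',       // placeholder URL
        image: new Image({ url: 'https://example.com/1.png', alt: 'Test 1' }),
      }),
      new BrowseCarouselItem({
        title: 'Test 2',
        description: 'Desc 2',
        url: 'https://example.com/option-2',       // placeholder URL
        image: new Image({ url: 'https://example.com/2.png', alt: 'Test 2' }),
      }),
    ],
  }));
  // Suggestion chips let the user tap to continue the conversation.
  conv.ask(new Suggestions('Continue', 'End'));
});

// Deployed, e.g., as a Cloud Function:
// exports.fulfillment = functions.https.onRequest(app);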
I am trying to build a simple bot using Dialogflow and FB Messenger. I have a requirement to show two buttons to the end user to pick a cake type. I am able to show the options using the custom response below:
{
  "facebook": {
    "attachment": {
      "type": "template",
      "payload": {
        "template_type": "button",
        "text": "What kind of cake would you like?",
        "buttons": [
          {
            "type": "postback",
            "payload": "witheggs",
            "title": "Contain Eggs"
          },
          {
            "type": "postback",
            "payload": "noeggs",
            "title": "Eggless"
          }
        ]
      }
    }
  }
}
Once the user taps one of the two buttons, how do I set it to some variable in Dialogflow and then ask the next set of questions?
I don't know what language you're using, but here's what you're looking for:
The payload part is returned by api.ai as resolvedQuery when the user clicks any button.
For example, if the user clicks on Contain Eggs, api.ai will return witheggs through the resolvedQuery node of the JSON.
Does that help? Feel free to ask if there's anything else.
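As a rough illustration, here is a sketch of a Node.js webhook handling that, assuming the api.ai (Dialogflow v1) webhook format; the route, reply texts, and context name are just placeholders, and witheggs/noeggs come from the button payloads above.

// Rough sketch of a v1 (api.ai) webhook. When a Messenger postback button
// is tapped, its payload arrives in result.resolvedQuery.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  const resolvedQuery = req.body.result && req.body.result.resolvedQuery;

  let reply;
  if (resolvedQuery === 'witheggs') {
    reply = 'Great, a cake with eggs. What size would you like?';
  } else if (resolvedQuery === 'noeggs') {
    reply = 'Eggless it is. What size would you like?';
  } else {
    reply = 'What kind of cake would you like?';
  }

  // v1 webhook response format; contextOut carries the choice into
  // follow-up intents so later questions can read it as a parameter.
  res.json({
    speech: reply,
    displayText: reply,
    contextOut: [
      { name: 'cake_choice', lifespan: 5, parameters: { eggs: resolvedQuery } }
    ]
  });
});

app.listen(3000);

Alternatively, you can create intents whose training phrases are witheggs and noeggs, so the agent matches the postback payloads directly without branching in the webhook.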
I am using the following code to post a message back to Amazon Lex:
....
var objItem = {
  "title": `(£${item.price} pw) ${item.street_name}`,
  "image_url": item.image_url,
  "subtitle": `${item.displayable_address}`,
  "buttons": [
    {
      "type": "web_url",
      "url": `${item.details_url}`,
      "title": "View"
    }, {
      "type": "postback",
      "title": "Book Item",
      "payload": {vid:"CAL00002"}
    }
  ]
}
....
When the button "Book Item" is clicked, the message "payload": {vid:"CAL00002"} is currently sent back to Amazon Lex. It seems that Amazon Lex doesn't understand this message, so I cannot get this object in my AWS Lambda function. Here I am using Amazon Lex as the AI to learn the user's intent, and all business logic is implemented in AWS Lambda. In this situation, how can I post the message back to Lambda? Or is there a way to post a structured message back to Lex?
I'm not sure how you are sending the message back to Lambda, but I'm using the AWS SDK to do this. Whatever information I want Lambda to get, I put in sessionAttributes.
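A minimal sketch of that approach with the AWS SDK for JavaScript and the Lex V1 runtime; the bot name, alias, and user ID are placeholders, and the vid value is the one from the postback above.

// Sketch: send the user's text to Lex and pass structured data (e.g. the
// postback payload) through sessionAttributes, which Lex forwards to the
// Lambda fulfillment in event.sessionAttributes.
const AWS = require('aws-sdk');
const lexruntime = new AWS.LexRuntime({ region: 'us-east-1' });

const params = {
  botName: 'PropertyBot',     // placeholder bot name
  botAlias: 'prod',           // placeholder alias
  userId: 'user-123',         // placeholder user/session id
  inputText: 'Book Item',     // what the user "said" (the button title)
  sessionAttributes: {
    vid: 'CAL00002'           // values must be strings
  }
};

lexruntime.postText(params, (err, data) => {
  if (err) console.error(err);
  else console.log(data.message, data.sessionAttributes);
});

Inside the Lambda function, the value is then available as event.sessionAttributes.vid, and you can write it back into sessionAttributes in your response to keep it for the rest of the session.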