I recently set up the "Get Started" button by sending the following POST request with Postman:
https://graph.facebook.com/v2.6/me/messenger_profile?access_token=<my token>
With the raw text:
{
"get_started": {"payload": "<postback_payload>"}
}
It returns:
{
"result": "success"
}
But nothing shows up in my Messenger (I have already deleted the chat).
If I send the raw data to set up the greeting text instead, it works normally and shows up:
{
"greeting": [
{
"locale":"default",
"text":"Hello!"
}, {
"locale":"en_US",
"text":"Timeless apparel for the masses."
}
]
}
Can somebody help me? Thanks!
So I have been experimenting with different Response types for DialogFlow through Actions: Actions Responses and Webhook/Fulfillment.
And so far, I have been able to generate proper responses for types like List, Basic Card, and Suggestion Chips. What I need now is a list-based response that lets the user open a link in a browser when an item is tapped, without generating a chat bubble. The "Browsing carousel" fits the criteria: Browsing Carousel.
I have successfully created and simulated the output with 2 sample items. The issue is when the user wants to continue the conversation. As per the Guidance section in the documentation linked above:
By default, the mic remains closed after a browse carousel is sent. If you want to continue the conversation afterwards, we strongly recommend adding suggestion chips below the carousel.
From this, what I understood is that the user has to invoke the app again by saying "Ok Google, talk to [app]". This doesn't seem very user-friendly, as the user expects to return to the conversation she was having with the agent after she has looked through the links from the carousel. Please note, I have simulated the flow using the Actions Simulator on the Actions Console page.
As soon as I invoke the intent with the Browsing Carousel, it is shown to me with the sample Items. But when I enter/say the next command to continue the conversation, the agent simply returns with:
We're sorry, but something went wrong. Please try again.
And the REQUEST/RESPONSE window as well as the ERRORS/DEBUG tabs are empty. I have logged calls to the webhook, and no call is received.
The question: Is there a way to give the user the ability to browse an informative link from a response "list" (not a Basic Card) and return to the conversation without ending it?
Here is the response for Browsing Carousel from RESPONSE window in Actions > Simulator (note I've removed non-relevant parts):
{
"conversationToken": "[token info]",
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "You have the following 2 options:",
"displayText": "You have the following 2 options:"
}
},
{
"carouselBrowse": {
"items": [
{
"title": "Test 1",
"description": "Desc 2",
"image": {
"url": "[some url]"
},
"openUrlAction": {
"url": "[some url]"
}
},
{
"title": "Test 2",
"description": "Desc 2",
"image": {
"url": "[some url]"
},
"openUrlAction": {
"url": "[some url]"
}
}
]
}
}
],
"suggestions": [
{
"title": "Continue"
},
{
"title": "End"
}
]
}
}
}
],
"responseMetadata": {
"status": {
"message": "Success (200)"
},
"queryMatchInfo": {
"queryMatched": true
}
}
}
For all those who are using the Simulator to test the Browsing Carousel and seeing that it stops responding after the output: please use a device to test it instead. When you use a device to render the output with the carousel response, it doesn't kill the conversation but turns off the mic. This is the intended behavior. You can add Suggestion chips to help the user continue the conversation.
Updated: Also make sure each webhook intent call has a proper response with a Simple Response. As long as the response has the required textual and audio information correctly set up, the Simulator will not fail.
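For illustration, a minimal fulfillment payload for the follow-up intent (the one matched when the user taps "Continue") might look like the sketch below. The exact wrapper around the google object depends on your Dialogflow version, so treat everything outside richResponse as an assumption:
{
  "google": {
    "expectUserResponse": true,
    "richResponse": {
      "items": [
        {
          "simpleResponse": {
            "textToSpeech": "Welcome back! What would you like to do next?",
            "displayText": "Welcome back! What would you like to do next?"
          }
        }
      ],
      "suggestions": [
        { "title": "More options" },
        { "title": "End" }
      ]
    }
  }
}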
I'm trying to code a Media Response in a Custom Payload with no luck. I'm surely doing something wrong, but I have no idea what :) The Media Response does not show up when testing. (Please note that I'm trying this in an English action.) Here's the JSON code:
{
"platform": "google",
"type": "custom_payload",
"payload": {
"google": {
"richResponse": {
"items": [
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "Exercises",
"description": "ex",
"largeImage": {
"url": "https://firebasestorage.googleapis.com/...",
"accessibilityText": "image..."
},
"contentUrl": "https://firebasestorage.googleapis.com/..."
}
]
}
}
]
}
}
}
}
UPDATE:
I've updated the JSON to something like this, but I get an error: "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: ": Cannot find field."
{
"platform":"google",
"type":"custom_payload",
"payload":{
"google":{
"richResponse":{
"items":[
{
"simpleResponse":{
"textToSpeech":"Hey! Good to see you."
}
},
{
"mediaResponse":{
"mediaType":"AUDIO",
"mediaObjects":[
{
"name":"Exercises",
"description":"ex",
"largeImage":{
"url":"https://firebasestorage...",
"accessibilityText":"..."
},
"contentUrl":"https://firebasestorage.googleapis.com/..."
}
]
}
}
],
"suggestions":[
{
"title":"chips"
}
]
}
}
}
}
And here's the debug information:
{
"audioResponse": "//NExAARWG...",
"conversationToken": "GidzaW11bG...",
"debugInfo": {
"agentToAssistantDebug": {
"agentToAssistantJson": "{\"message\":\"Failed to parse Dialogflow response into AppResponse, exception thrown with message: Empty speech response\",\"apiResponse\":{\"id\":\"cd7204ac-ab80-42aa-9755-6123cbb938a6\",\"timestamp\":\"2018-03-11T09:02:35.827Z\",\"lang\":\"en-us\",\"result\":{},\"status\":{\"code\":200,\"errorType\":\"success\"},\"sessionId\":\"1520758955600\"}}"
},
"assistantToAgentDebug": {
"assistantToAgentJson": "{\"user\":{\"userId\":\"AA9douaa4XGkqtmcU_EDjPy7PQ_9\",\"locale\":\"en-US\",\"lastSeen\":\"2018-03-11T09:02:09Z\"},\"conversation\":{\"conversationId\":\"1520758955600\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"VOICE\",\"query\":\"Talk to Zen Coach\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}",
"curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=0def1bb6be4b4bf2810ec68bf6f37a6a' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6ImFjMmI2M2ZhZWZjZjgzNjJmNGM1MjhlN2M3ODQzMzg3OTM4NzAxNmIifQ.eyJhdWQiOiJ6ZW4tY29hY2giLCJhenAiOiI0OTYwOTIwOTE1NzEtMGNhY3VtczVkZ3F1OWpkM2k0dHZpOGFiOTVydXQ2NnQuYXBwcy5nb29nbGV1c2VyY29udGVudC5jb20iLCJleHAiOjE1MjA3NTkwNzUsImlzcyI6Imh0dHBzOi8vYWNjb3VudHMuZ29vZ2xlLmNvbSIsImp0aSI6IjY4NDc0NThhNTNhZGExODAxZjMwMjAyYjkxZGIyODZhMjk1NzA2YmIiLCJpYXQiOjE1MjA3NTg5NTUsIm5iZiI6MTUyMDc1ODY1NX0.e1cqg96F5L-BvD0yJz3UFgsnX_0TRox0Lu8R9K5NhhXcQVfC7mq1QwCqs2DGrUJGquGdW2GhzBU2lzf4ro2TUeieg4ozak1OmiYAMqtiCH0EodeHy59AXXqzb3a35YuD7CmSDu6qVQRfEp8uaaH2t-Sq9lUchudNOgjucip3ex9Rr2XacHm0qWtV69H1o-Yq5INl5HHR0kNqtEIsxUox961imKvDLN5s--F35yTbAhIWibr6OmaACyzSQW5X7OjrJ2781DSmEdYn73poDbuwMS9E2l9B-QTUHAIpUM5b4WqrFkD6XKALdf2pQFwZlRRhDzRiDKWLA-i1w-mcak0LWw' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"AA9douaa4XGkqtmcU_EDjPy7PQ_9\",\"locale\":\"en-US\",\"lastSeen\":\"2018-03-11T09:02:09Z\"},\"conversation\":{\"conversationId\":\"1520758955600\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"VOICE\",\"query\":\"Talk to Zen Coach\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}'"
},
"sharedDebugInfo": [
{
"name": "ResponseValidation",
"subDebugEntry": [
{
"debugInfo": "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: \": Cannot find field.\".",
"name": "UnparseableJsonResponse"
}
]
}
]
},
"response": "Zen coach isn't responding right now. Try again soon.",
"visualResponse": {
"visualElements": []
}
}
Are you adding "platform":"google" and "type":"custom_payload" in the custom payload? If so, try removing those.
I made the following work with my Voice Metronome application:
{
"google":{
"richResponse":{
"items":[
{
"simpleResponse":{
"textToSpeech":"Hey! Good to see you."
}
},
{
"mediaResponse":{
"mediaType":"AUDIO",
"mediaObjects":[
{
"name":"Exercises",
"description":"ex",
"largeImage":{
"url":"http://res.freestockphotos.biz/pictures/17/17903-balloons-pv.jpg",
"accessibilityText":"..."
},
"contentUrl":"https://freepd.com/Chill/Chill Air.mp3"
}
]
}
}
],
"suggestions":[
{
"title":"chips"
}
]
}
}
}
The issue is that the richResponse property still needs to follow the rules of the RichResponse object. The first item in it must be a SimpleResponse object. (I haven't tested it, but you can probably have it say nothing; either way, it is a good spot for an introduction to your audio.)
The error message Failed to parse Dialogflow response into AppResponse, exception thrown with message: Empty speech response suggests that it might also be looking for a speech parameter on the top-level object in the response, which Dialogflow v1 expects as a duplicate of either the simpleResponse ssml or textToSpeech parameter. I'm not sure why that would appear if you're set to v2, but it sounds like something might be confused there. I would make sure you're using v1 and that you include a speech parameter.
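If you are on v1 and responding from a webhook rather than a console payload, that top-level speech field would presumably sit alongside the data.google payload, roughly like this (a sketch based on the v1 webhook response format, not taken from the answer above):
{
  "speech": "Hey! Good to see you.",
  "displayText": "Hey! Good to see you.",
  "data": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          { "simpleResponse": { "textToSpeech": "Hey! Good to see you." } }
        ]
      }
    }
  }
}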
Also keep in mind that the reviewers will look for suggestion chips about how to move the conversation forward during or after the audio if this isn't a final response.
I have set up a "get started" button on my page:
curl -X POST -H "Content-Type: application/json" -d '{
"get_started":{
"payload":"GET_STARTED_PAYLOAD"
}
}' "https://graph.facebook.com/v2.6/me/messenger_profile?access_token=token"
This works fine and responds with {"result":"success"}
If I check the data:
curl -X GET "https://graph.facebook.com/v2.6/me/messenger_profile?fields=get_started&access_token=token"
I've got a good answer: {"data":[{"get_started":{"payload":"GET_STARTED_PAYLOAD"}}]}
However, when I receive the postback webhook, I get this payload:
{
"object": "page",
"entry": [
{
"id": "id1",
"time": 1501688073860,
"standby": [
{
"recipient": {
"id": "id1"
},
"timestamp": 1501688073860,
"sender": {
"id": "id2"
},
"postback": {
"title": "Get Started"
}
}
]
}
]
}
There is no way I can get the payload I defined (GET_STARTED_PAYLOAD) in the webhook.
On this page the doc says
payload parameter that was defined with the button. This is only visible to the app that sent the original template message.
This message is kind of confusing. Any ideas?
Putting this answer here in case anyone needs it.
The standby property indicates that you're using the handover protocol and that your app doesn't currently have thread control. This causes you not to receive the expected postback event, since the receiving app is not the same as the app that sent the original message.
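If your app should be handling these events and it is configured as the Primary Receiver in the page's Messenger settings, one option is to take thread control back before expecting postbacks. A rough sketch (the PSID and metadata values are placeholders, and the Graph API version and token are the same kind of placeholders as in the question):
curl -X POST -H "Content-Type: application/json" -d '{
  "recipient":{"id":"<PSID>"},
  "metadata":"taking thread control to handle the get started postback"
}' "https://graph.facebook.com/v2.6/me/take_thread_control?access_token=token"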
I am currently writing an integration with Actions on Google using PHP. I have generated an action.json file with my test endpoint as the fulfillment. I use ngrok to expose my local development machine publicly.
Unfortunately, the simulator keeps insisting that the app isn't responding. In the access logs and the ngrok inspector I do see that a request came in and that it was neatly answered with a JSON reply.
In an act of pure desperation I even uploaded a JSON response, taken directly from the Fulfillment documentation page, to a server and set that as the fulfillment URL. The result is the same error.
I do not see a way to get a more detailed error message from Actions on Google, explaining why it does not work.
My action.json:
{
"actions": [
{
"name": "MAIN",
"intent": {
"name": "actions.intent.MAIN"
},
"fulfillment": {
"conversationName": "development4"
}
},
{
"name": "TEXT",
"intent": {
"name": "actions.intent.TEXT"
},
"fulfillment": {
"conversationName": "development4"
}
}
],
"conversations": {
"development4": {
"name": "development4",
"url": "https:\/\/02c085c0.ngrok.io\/actionsongoogle\/process\/development4"
}
}
}
The JSON I respond with:
{
"expectUserResponse": false,
"expectedInputs": [{
"inputPrompt": {
"richInitialPrompt": {
"items": [{
"simpleResponse": {
"textToSpeech": "hello"
}
}]
}
},
"possibleIntents": [{
"intent": ["actions.intent.TEXT"]
}]
}]
}
The Simulator then displays the following output:
{
"response": "my test app isn’t responding right now. Try again soon.\n",
"audioResponse": "//NExAARq...",
"debugInfo": {
"sharedDebugInfo": [
{
"name": "ExecutionResponse",
"debugInfo": "Failed to call your endpoint."
}
]
},
"visualResponse": {}
}
Two things to investigate:
Check how long it is taking for the reply to be sent. Agents need to send a response back within about 5 seconds or the Assistant will time out and give the "not responding" message.
Check what reply code is being issued. If you think you're sending something back, but your server actually crashes before an HTTP 200 response code is sent back with the body, the Assistant never gets the response. I've also seen cases where people think they're sending a response, but no response is actually sent at all.
Testing with curl or wget to your ngrok URL (so it makes a full round trip) might be able to help diagnose both of these.
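For example, a quick round-trip test against the ngrok URL from the question might look like the sketch below; the body is a minimal version of the request the Assistant sends for actions.intent.MAIN (the userId, conversationId, and query here are just placeholders), and -i prints the status line so you can confirm an HTTP 200 actually comes back:
curl -i -X POST -H "Content-Type: application/json" -d '{
  "user": {"userId": "test-user", "locale": "en-US"},
  "conversation": {"conversationId": "1234567890", "type": "NEW"},
  "inputs": [{
    "intent": "actions.intent.MAIN",
    "rawInputs": [{"inputType": "KEYBOARD", "query": "talk to development4"}]
  }]
}' "https://02c085c0.ngrok.io/actionsongoogle/process/development4"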
The issue was in the returned JSON. I set up the Node.js SDK and replied in the same way the PHP code would (a simple hello). The response generated by the Node.js SDK is very different. Apparently the example that I used from the docs earlier is also invalid.
I advise anyone building their own implementation using webhooks to first generate the desired response via the Node.js SDK; the reference in the manual is not always clear (at least to me).
Response by NodeJS SDK:
{
"expectUserResponse": false,
"finalResponse": {
"speechResponse": {
"textToSpeech": "hello"
}
}
}
Implementation in SDK:
var express = require('express')
var bodyParser = require('body-parser')
var app = express()
// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }))
// parse application/json
app.use(bodyParser.json())
app.use(function (req, res) {
let ActionsSdkApp = require('actions-on-google').ActionsSdkApp;
const actionsApp = new ActionsSdkApp({request: req, response: res});
//const inputPrompt = actionsApp.buildInputPrompt(false, 'hello')
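// tell() sends a final response (expectUserResponse: false) and ends the conversation; use ask() instead to keep it open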
actionsApp.tell('hello');
})
app.listen(80);
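For comparison, a response that keeps the conversation open (roughly what ask() produces instead of tell()) would presumably pair expectUserResponse: true with expectedInputs rather than finalResponse. A sketch based on the same conversation webhook format, not generated by the SDK:
{
  "expectUserResponse": true,
  "expectedInputs": [{
    "inputPrompt": {
      "richInitialPrompt": {
        "items": [{
          "simpleResponse": {
            "textToSpeech": "hello, what next?"
          }
        }]
      }
    },
    "possibleIntents": [{
      "intent": "actions.intent.TEXT"
    }]
  }]
}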
Below is the full JSON I'm sending, using Facebook's Generic Template.
I've checked it line-by-line with the official documentation here -> https://developers.facebook.com/docs/messenger-platform/send-api-reference/generic-template
But I'm getting a 400 Bad Request error for the following. What am I missing?
{
"recipient":{
"id":"1347939335300515"
},
"message":{
"attachment":{
"type":"template",
"payload":{
"template_type":"generic",
"elements":[
{
"title":"Welcome to Peters Hats",
"image_url":"https://cdn.pixabay.com/photo/2013/07/13/10/41/hat-157581_960_720.png",
"subtitle":"Weve got the right hat for everyone.",
"default_action":{
"type":"web_url",
"url":"https://cdn.pixabay.com/photo/2013/07/13/10/41/hat-157581_960_720.png",
"messenger_extensions":true,
"webview_height_ratio":"tall",
"fallback_url":"https://cdn.pixabay.com/photo/2013/07/13/10/41/hat-157581_960_720.png"
},
"buttons":[
{
"type":"web_url",
"url":"https://cdn.pixabay.com/photo/2013/07/13/10/41/hat-157581_960_720.png",
"title":"View Website"
},
{
"type":"postback",
"title":"Start Chatting",
"payload":"DEVELOPER_DEFINED_PAYLOAD"
}
]
}
]
}
}
}
}
The default_action is a URL button and is not supposed to be a postback button, as in the JSON posted in the question.
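For reference, a minimal default_action per the generic template docs takes the same fields as a URL button, minus the title. The sketch below reuses the URL from the question; also note that messenger_extensions: true generally requires the link's domain to be whitelisted for the page, which is worth checking when debugging a 400:
"default_action":{
  "type":"web_url",
  "url":"https://cdn.pixabay.com/photo/2013/07/13/10/41/hat-157581_960_720.png",
  "webview_height_ratio":"tall"
}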