https://stackoverflow.com/a/45890852/8413869
From the answer above, I learned that emojis can be passed in a Button's formatted text. But I am looking for emojis the way Google shows them in a bubble response. I tried passing the Unicode code point in a simple response, but no luck.
....."data": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"ssml": "<speak>Hello. How are you ? </speak>",
"displayText": "Hello U+1F600., how are you ? "
}
}
],
"suggestions": [],
"linkOutSuggestion": {}
}
}
}.....
Is there any way to do this?
You can include emoji in simple responses for Google Assistant apps by including the actual emoji in the string instead of the Unicode code point for the emoji. Change your response to the below and it should work:
....."data": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"ssml": "<speak>Hello. How are you ? </speak>",
"displayText": "Hello 😀., how are you ? "
}
}
],
"suggestions": [],
"linkOutSuggestion": {}
}
}
}.....
I made it work with this Unicode escape: "\uD83D\uDE00". I just copied the emoji from https://emojipedia.org/, which gave me this code.
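To tie the two working approaches together, here is a minimal Node.js sketch (plain string semantics, no library involved) showing that a literal emoji and the \uD83D\uDE00 surrogate-pair escape produce the identical string, so either form works in displayText:

// A literal emoji and its UTF-16 surrogate-pair escape are the same string in JavaScript.
const literal = 'Hello 😀, how are you?';
const escaped = 'Hello \uD83D\uDE00, how are you?';

console.log(literal === escaped); // true: they are the same string in memory
// Either value can be used as the displayText of a simpleResponse.
// A bare "U+1F600" label, by contrast, is rendered as plain text.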
I'm hoping to get some answers here; I have been searching and trying almost everything. I even tried customizing my Firebase SDK to fit the localization format, but it doesn't work. The push notification on my iPhone shows REMINDER_TO_BRUSH_YOUR_TEETH instead of the localized text I have in my app.
What I tried is:
"notification": {
"title": "CompanyTitle",
"body": "REMINDER_TO_BRUSH_YOUR_TEETH"
"body_loc_key": "REMINDER_TO_BRUSH_YOUR_TEETH" <----- Also tried to add that here
},
"androidConfig": {
"priority": "high",
"ttl": "120s",
"notification": {
"color": "#FFFF00",
"sound": "default"
}
},
"apnsConfig": {
"payload": {
"aps": {
"badge": 17,
"body_loc_key": "REMINDER_TO_BRUSH_YOUR_TEETH" <----- Added that
}
}
},
"token": "cwCqkaNhrE_GsNzrNXxn7V:APA91bHNLg0F7ho9tvOqosWopX9L28FnZxHNoiwxvWJDekVt9hX2RgNa6W_CNUaRe22J2iKcMc5waGHOiC57fLF1H4c-yqOtN7eTujYGWwdMAoGHJEM828-mIa1sMCrSQbZwL7MZOKJI"
}
My piece of code:
private Message.Builder getPreconfiguredMessageBuilder(PushNotificationRequest request) {
    AndroidConfig androidConfig = getAndroidConfig(request.getTopic());
    ApnsConfig apnsConfig = getApnsConfig(request.getTopic(), request.getBadge(), "REMINDER_TO_BRUSH_YOUR_TEETH");
    Notification notification = Notification
            .builder()
            .setTitle(request.getTitle())
            .setBody(request.getMessage())
            .build();
    return Message.builder()
            .setApnsConfig(apnsConfig)
            .setAndroidConfig(androidConfig)
            .setNotification(notification);
}

private ApnsConfig getApnsConfig(String topic, int badge, String locKey) {
    return ApnsConfig.builder()
            .setAps(Aps.builder()
                    .setBadge(badge)
                    .setLocKey(locKey)
                    .setCategory(topic)
                    .setThreadId(topic)
                    .build())
            .build();
}
Thank you for your help in advance.
This appears to be a syntax issue. Try the following:
"apnsConfig": {
"payload": {
"aps": {
"alert": {
"title": "some title",
"body": "some body"
}
}
}
}
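To extend that: in the APNs payload format, localization keys belong inside the alert dictionary (loc-key, title-loc-key), not directly under aps, which is why the body_loc_key placed under aps in the question is ignored. A hedged sketch of what the corrected apnsConfig block could look like, assuming REMINDER_TO_BRUSH_YOUR_TEETH is defined in the app's Localizable.strings:

"apnsConfig": {
  "payload": {
    "aps": {
      "badge": 17,
      "alert": {
        "loc-key": "REMINDER_TO_BRUSH_YOUR_TEETH"
      }
    }
  }
}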
For the Basic Card issue: we are sending one button, but on the Assistant we are seeing two buttons, one without any text.
Here's a sample request to Google:
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "possibleIntents": [
        {
          "intent": "actions.intent.TEXT"
        }
      ],
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "<speak>The NAV for Franklin India Bluechip Fund as of 24 Jan 2022 is: \nDirect-Growth: 744.3406 \nDirect-Idcw: 48.0136 \nGrowth: 691.6646 \nIdcw: 42.6334 \n\nFor more information on the historical NAV of Franklin India Bluechip Fund, please visit our website at www.franklintempletonindia.com. Is there anything else, I can help you with?</speak>"
              }
            },
            {
              "basicCard": {
                "buttons": [
                  {
                    "title": "Historical NAV",
                    "openUrlAction": {
                      "url": "https://www.franklintempletonindia.com/investor/fund-details/fund-historicalnavs/-4614"
                    }
                  }
                ]
              }
            }
          ]
        }
      }
    }
  ]
}
Here's how it looks on a smartphone, with one extra button with no text.
Can anybody tell why that is so, and what the issue here is?
Check The Screenshot Here
I am trying to figure out how I can embed Google Actions responses, such as the carousel, in a webhook response for Dialogflow.
I am using V2 of the REST protocol, so I fill in ACTIONS_ON_GOOGLE in the source field, and the payload field contains the Google Actions fields as specified (as per How can I integrate the Google Actions responses in a webhook response in Dialogflow?). I am sending the following response:
{
  "fulfillmentText": "This is a carousel.",
  "source": "ACTIONS_ON_GOOGLE",
  "payload": {
    "conversationToken": "",
    "expectUserResponse": true,
    "expectedInputs": [
      {
        "inputPrompt": {
          "initialPrompts": [
            {
              "textToSpeech": "This is a carousel."
            }
          ],
          "noInputPrompts": []
        },
        "possibleIntents": [
          {
            "intent": "actions.intent.OPTION",
            "inputValueData": {
              "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
              "carouselSelect": {
                "items": [
                  {
                    "optionInfo": {
                      "key": "key1",
                      "synonyms": [
                        "Option 1"
                      ]
                    },
                    "title": "Option 1",
                    "description": "Option 2"
                  },
                  {
                    "optionInfo": {
                      "key": "key2",
                      "synonyms": [
                        "Option 2"
                      ]
                    },
                    "title": "Option 2",
                    "description": "Option 2"
                  }
                ]
              }
            }
          }
        ]
      }
    ]
  }
}
When trying this out in the console, no carousel is shown; only the text This is a carousel. is displayed. For your information, I did not include the image field, as it is optional according to the specification, but even with images no carousel is displayed.
It's hard to debug this, as my actions.intent.OPTION intent is not displayed in the possibleIntents[] array in the response tab. I expected this actions.intent.OPTION intent to be merged with the other intents (such as assistant.intent.action.TEXT) generated by the Dialogflow response.
What am I doing wrong here? Am I maybe shooting myself in the foot by using V2 instead of V1 of the Dialogflow REST protocol?
Update after initial feedback by Prisoner:
I tried the following response, but still don't get any carousel:
{
  "fulfillmentText": "Here you go.",
  "source": "ACTIONS_ON_GOOGLE",
  "payload": {
    "expectUserResponse": true,
    "richResponse": {
      "items": [
        {
          "simpleResponse": {
            "textToSpeech": "Here are your results."
          }
        }
      ]
    },
    "systemIntent": {
      "intent": "actions.intent.OPTION",
      "data": {
        "carouselSelect": {
          "items": [
            {
              "optionInfo": {
                "key": "Option1",
                "synonyms": [
                  "Option2"
                ]
              },
              "title": "Option3",
              "description": "Option4"
            },
            {
              "optionInfo": {
                "key": "Option5",
                "synonyms": [
                  "Option6"
                ]
              },
              "title": "Option7",
              "description": "Option8"
            }
          ]
        },
        "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec"
      }
    }
  }
}
I also tried manually creating an intent in Dialogflow which returns a 'hardcoded' carousel (that is, without a fulfillment callback), and this carousel is shown perfectly. So I am sure that the console is correctly configured.
I am also comparing my response with the ones in Google Assistant flow with multiple actions_intent_OPTION handlers, but without success so far.
You haven't shot yourself in the foot - but you have made something that was already a bit complex even more complex. There are two things to look out for:
CORRECTION: The contents of payload must be exactly the same as what used to go under data; other fields have changed format, but this one has not.
You need to make sure the simulator is in Phone mode and not Speaker mode.
Details follow.
Documented Dialogflow Differences
The Actions on Google documentation for the Dialogflow response is a little confusing. Instead of providing the full example, it just says that the response that would be under expectedInputs.possibleIntents should, instead, be under data.google.systemIntent. For V2, that would be payload.google.systemIntent.
The inputPrompt object is also restructured somewhat so you should be sending a richResponse that contains a simpleResponse object.
UPDATE: I've tested this. This is the entire JSON that should be returned. Note that the contents of payload are exactly what the contents of data used to be. The source field is ignored and has nothing to do with the payload, apparently.
See also https://github.com/dialogflow/fulfillment-webhook-json, which contains some examples.
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "This is a carousel"
            }
          }
        ]
      },
      "systemIntent": {
        "intent": "actions.intent.OPTION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
          "carouselSelect": {
            "items": [
              {
                "optionInfo": {
                  "key": "key1",
                  "synonyms": [
                    "Option 1"
                  ]
                },
                "title": "Option 1",
                "description": "Option 2"
              },
              {
                "optionInfo": {
                  "key": "key2",
                  "synonyms": [
                    "Option 2"
                  ]
                },
                "title": "Option 2",
                "description": "Option 2"
              }
            ]
          }
        }
      }
    }
  }
}
Simulator Surface Setting
Make sure that your simulator is set for the "Phone" surface rather than the "Speaker" surface. Options won't display on a Speaker.
In the simulator's surface selector, the phone icon should be selected, not the speaker icon.
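If you would rather not build this JSON by hand, the actions-on-google client library can produce the same systemIntent for you. A minimal sketch, assuming the v2 Node.js client library and its Dialogflow integration (the intent name 'show-carousel' is made up for illustration):

const { dialogflow, Carousel } = require('actions-on-google');

const app = dialogflow();

app.intent('show-carousel', (conv) => {
  // The first item of a rich response must be a simple response.
  conv.ask('This is a carousel.');
  // The library emits the actions.intent.OPTION systemIntent shown above.
  conv.ask(new Carousel({
    items: {
      // Object keys become the optionInfo keys returned on selection.
      key1: { title: 'Option 1', synonyms: ['first option'], description: 'Description 1' },
      key2: { title: 'Option 2', synonyms: ['second option'], description: 'Description 2' },
    },
  }));
});

// Expose the app as the Dialogflow fulfillment webhook, e.g. with Express:
// const express = require('express');
// express().use(express.json(), app).listen(3000);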
I'm trying to code a Media Response in a Custom Payload, with no luck. I'm surely doing something wrong, but I have no idea what :) The Media Response does not show up when testing. (Please note that I'm trying this in an English action.) Here's the JSON code:
{
  "platform": "google",
  "type": "custom_payload",
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "Exercises",
                  "description": "ex",
                  "largeImage": {
                    "url": "https://firebasestorage.googleapis.com/...",
                    "accessibilityText": "image..."
                  },
                  "contentUrl": "https://firebasestorage.googleapis.com/..."
                }
              ]
            }
          }
        ]
      }
    }
  }
}
UPDATE:
I've updated the JSON to something like this, but I get an error: "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: ": Cannot find field.""
{
  "platform": "google",
  "type": "custom_payload",
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Hey! Good to see you."
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "Exercises",
                  "description": "ex",
                  "largeImage": {
                    "url": "https://firebasestorage...",
                    "accessibilityText": "..."
                  },
                  "contentUrl": "https://firebasestorage.googleapis.com/..."
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "chips"
          }
        ]
      }
    }
  }
}
And here's the debug information:
{
"audioResponse": "//NExAARWG...",
"conversationToken": "GidzaW11bG...",
"debugInfo": {
"agentToAssistantDebug": {
"agentToAssistantJson": "{\"message\":\"Failed to parse Dialogflow response into AppResponse, exception thrown with message: Empty speech response\",\"apiResponse\":{\"id\":\"cd7204ac-ab80-42aa-9755-6123cbb938a6\",\"timestamp\":\"2018-03-11T09:02:35.827Z\",\"lang\":\"en-us\",\"result\":{},\"status\":{\"code\":200,\"errorType\":\"success\"},\"sessionId\":\"1520758955600\"}}"
},
"assistantToAgentDebug": {
"assistantToAgentJson": "{\"user\":{\"userId\":\"AA9douaa4XGkqtmcU_EDjPy7PQ_9\",\"locale\":\"en-US\",\"lastSeen\":\"2018-03-11T09:02:09Z\"},\"conversation\":{\"conversationId\":\"1520758955600\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"VOICE\",\"query\":\"Talk to Zen Coach\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}",
"curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=0def1bb6be4b4bf2810ec68bf6f37a6a' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6ImFjMmI2M2ZhZWZjZjgzNjJmNGM1MjhlN2M3ODQzMzg3OTM4NzAxNmIifQ.eyJhdWQiOiJ6ZW4tY29hY2giLCJhenAiOiI0OTYwOTIwOTE1NzEtMGNhY3VtczVkZ3F1OWpkM2k0dHZpOGFiOTVydXQ2NnQuYXBwcy5nb29nbGV1c2VyY29udGVudC5jb20iLCJleHAiOjE1MjA3NTkwNzUsImlzcyI6Imh0dHBzOi8vYWNjb3VudHMuZ29vZ2xlLmNvbSIsImp0aSI6IjY4NDc0NThhNTNhZGExODAxZjMwMjAyYjkxZGIyODZhMjk1NzA2YmIiLCJpYXQiOjE1MjA3NTg5NTUsIm5iZiI6MTUyMDc1ODY1NX0.e1cqg96F5L-BvD0yJz3UFgsnX_0TRox0Lu8R9K5NhhXcQVfC7mq1QwCqs2DGrUJGquGdW2GhzBU2lzf4ro2TUeieg4ozak1OmiYAMqtiCH0EodeHy59AXXqzb3a35YuD7CmSDu6qVQRfEp8uaaH2t-Sq9lUchudNOgjucip3ex9Rr2XacHm0qWtV69H1o-Yq5INl5HHR0kNqtEIsxUox961imKvDLN5s--F35yTbAhIWibr6OmaACyzSQW5X7OjrJ2781DSmEdYn73poDbuwMS9E2l9B-QTUHAIpUM5b4WqrFkD6XKALdf2pQFwZlRRhDzRiDKWLA-i1w-mcak0LWw' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"AA9douaa4XGkqtmcU_EDjPy7PQ_9\",\"locale\":\"en-US\",\"lastSeen\":\"2018-03-11T09:02:09Z\"},\"conversation\":{\"conversationId\":\"1520758955600\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"VOICE\",\"query\":\"Talk to Zen Coach\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}'"
},
"sharedDebugInfo": [
{
"name": "ResponseValidation",
"subDebugEntry": [
{
"debugInfo": "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: \": Cannot find field.\".",
"name": "UnparseableJsonResponse"
}
]
}
]
},
"response": "Zen coach isn't responding right now. Try again soon.",
"visualResponse": {
"visualElements": []
}
}
Are you adding "platform":"google" and "type":"custom_payload" in the custom payload? If so, try removing that.
I made the following work with my Voice Metronome application:
{
  "google": {
    "richResponse": {
      "items": [
        {
          "simpleResponse": {
            "textToSpeech": "Hey! Good to see you."
          }
        },
        {
          "mediaResponse": {
            "mediaType": "AUDIO",
            "mediaObjects": [
              {
                "name": "Exercises",
                "description": "ex",
                "largeImage": {
                  "url": "http://res.freestockphotos.biz/pictures/17/17903-balloons-pv.jpg",
                  "accessibilityText": "..."
                },
                "contentUrl": "https://freepd.com/Chill/Chill Air.mp3"
              }
            ]
          }
        }
      ],
      "suggestions": [
        {
          "title": "chips"
        }
      ]
    }
  }
}
The issue is that the richResponse property still needs to follow the rules of the RichResponse object. The first item in it must be a SimpleResponse object. (I haven't tested it, but you can probably have it say nothing; still, it is a good spot for an introduction to your audio.)
The error message Failed to parse Dialogflow response into AppResponse, exception thrown with message: Empty speech response suggests that it might also be looking for a speech parameter on the top-level object in the response, which Dialogflow v1 expects, duplicating either the simpleResponse ssml or textToSpeech parameters. I'm not sure why that would appear if you're set to v2, but it sounds like something might be confused there. I would make sure you're using v1 and that you have a speech parameter.
Also keep in mind that the reviewers will look for suggestion chips about how to move the conversation forward during or after the audio if this isn't a final response.
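For reference, if you do target the v1 webhook format, the top-level speech and displayText fields sit alongside the data payload. A minimal sketch of that shape (a hedged illustration, not tested against this exact agent):

{
  "speech": "Hey! Good to see you.",
  "displayText": "Hey! Good to see you.",
  "data": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Hey! Good to see you."
            }
          }
        ]
      }
    }
  }
}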
Is it possible to play an audio file or stream using the actions-on-google-nodejs library?
Using SSML you can return an audio clip of up to 120 seconds:
<speak>
  <audio src="https://actions.google.com/sounds/v1/animals/cat_purr_close.ogg">
    <desc>a cat purring</desc>
    PURR (sound didn't load)
  </audio>
</speak>
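Since the question asks about the actions-on-google-nodejs library specifically, here is a minimal sketch of returning that SSML through it (the v2 client library and the intent name 'play-clip' are assumptions for illustration):

const { dialogflow } = require('actions-on-google');

const app = dialogflow();

app.intent('play-clip', (conv) => {
  // A string wrapped in <speak> tags is sent as SSML.
  conv.ask(
    '<speak>' +
      '<audio src="https://actions.google.com/sounds/v1/animals/cat_purr_close.ogg">' +
        '<desc>a cat purring</desc>' +
        "PURR (sound didn't load)" +
      '</audio>' +
    '</speak>'
  );
});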
Edit
If you want to play an mp3 file longer than 120 seconds, you need to use a Media Response:
// MediaObject and Image come from the actions-on-google client library:
const { MediaObject, Image } = require('actions-on-google');

if (!conv.surface.capabilities.has('actions.capability.MEDIA_RESPONSE_AUDIO')) {
  conv.ask('Sorry, this device does not support audio playback.');
  return;
}
conv.ask(new MediaObject({
  name: 'Jazz in Paris',
  url: 'https://storage.googleapis.com/automotive-media/Jazz_In_Paris.mp3',
  description: 'A funky Jazz tune',
  icon: new Image({
    url: 'https://storage.googleapis.com/automotive-media/album_art.jpg',
    alt: 'Ocean view',
  }),
}));
To add one more point to Nick's answer: you can also build a Media Response, which will allow you to play a long audio file (I'm currently playing a 50-minute album in my app).
You can find it in Google's docs here.
A short example in Node.js could be:
const richResponse = app.buildRichResponse()
  .addSimpleResponse("Here's song one.")
  .addMediaResponse(app.buildMediaResponse()
    .addMediaObjects([
      app.buildMediaObject("Song One", "https://....mp3")
        .setDescription("Song One with description and large image.") // Optional
        .setImage("https://....jpg", app.Media.ImageType.LARGE)
        // Optional. Use app.Media.ImageType.ICON if displaying an icon.
    ]))
  .addSuggestions(["other songs"]);
And then you can just do
app.ask(richResponse)
UPDATE:
As per a comment request, here is the JSON response sent by my app for a mediaResponse:
{
"conversationToken": "[\"_actions_on_google\"]",
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Here is my favorite album."
}
},
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": my_name,
"description": my_descr,
"largeImage": {
"url": my_url
},
"contentUrl": my_contentURL
}
]
}
}
],
"suggestions": [
{
"title": my_suggestion
}
]
}
},
"possibleIntents": [
{
"intent": "assistant.intent.action.TEXT"
}
]
}
],
"responseMetadata": {
"status": {
"message": "Success (200)"
},
"queryMatchInfo": {
"queryMatched": true,
"intent": "0a3c14f8-87ca-47e7-a211-4e0a8968e3c5",
"parameterNames": [
my_param_name
]
}
},
"userStorage": "{\"data\":{}}"
}