Google Assistant does not recognize the command for Smart Home Modes Trait - actions-on-google

I implemented the Smart Home Modes trait for my Google Smart Home Action, but when I say "Ok Google, what is the mode on D5s?" to query the mode, or "Ok Google, set D5s to smart" to set it, Google Assistant accepts the command but does not send the intent to my fulfillment service. How should I solve this problem?
Device: Google Nest Mini
action.devices.SYNC response is:
{
  "payload": {
    "agentUserId": "uid-1",
    "devices": [
      {
        "traits": [
          "action.devices.traits.StartStop",
          "action.devices.traits.Modes"
        ],
        "willReportState": true,
        "name": {
          "defaultNames": ["D5s"],
          "name": "D5s",
          "nicknames": ["D5s"]
        },
        "attributes": {
          "availableModes": [
            {
              "settings": [
                {
                  "setting_name": "smart",
                  "setting_values": [{ "setting_synonym": ["smart"], "lang": "en" }]
                },
                {
                  "setting_name": "mop",
                  "setting_values": [{ "setting_synonym": ["mop"], "lang": "en" }]
                },
                {
                  "setting_name": "dock",
                  "setting_values": [{ "setting_synonym": ["dock"], "lang": "en" }]
                },
                {
                  "setting_name": "spot",
                  "setting_values": [{ "setting_synonym": ["spot"], "lang": "en" }]
                }
              ],
              "ordered": false,
              "name": "mode",
              "name_values": [
                { "lang": "en", "name_synonym": ["mode"] },
                { "lang": "en", "name_synonym": ["Clean"] },
                { "lang": "en", "name_synonym": ["Mode"] }
              ]
            }
          ]
        },
        "customData": { "uid": "uid-1" },
        "id": "device-id1",
        "type": "action.devices.types.VACUUM",
        "deviceInfo": {
          "swVersion": "sw1.0.0",
          "model": "D5s Pro",
          "manufacturer": "Smart ",
          "hwVersion": "hw1.0.0"
        }
      }
    ]
  },
  "requestId": "11614522009820639979"
}
The mode control button is already displayed in the Google Home app, and that command works successfully: the fulfillment service does receive the intent request in that case.
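For reference, when the voice command is recognized, the fulfillment would be expected to receive an action.devices.EXECUTE request with a SetModes command roughly like the following (a sketch based on the Modes trait documentation, reusing the device id, customData, and mode names from the SYNC response above; the requestId is a placeholder):
{
  "requestId": "<request id>",
  "inputs": [
    {
      "intent": "action.devices.EXECUTE",
      "payload": {
        "commands": [
          {
            "devices": [
              { "id": "device-id1", "customData": { "uid": "uid-1" } }
            ],
            "execution": [
              {
                "command": "action.devices.commands.SetModes",
                "params": {
                  "updateModeSettings": { "mode": "smart" }
                }
              }
            ]
          }
        ]
      }
    }
  ]
}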

Related

Getting Blank Buttons for Basic Card on Google Actions

For the Basic Card issue: we are sending one button, but on the Assistant we are seeing two buttons, one of them without any text.
Here's a sample request sent to Google:
{
"expectUserResponse": true,
"expectedInputs": [
{
"possibleIntents": [
{
"intent": "actions.intent.TEXT"
}
],
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "<speak>The NAV for Franklin India Bluechip Fund as of 24 Jan 2022 is: \nDirect-Growth: 744.3406 \nDirect-Idcw: 48.0136 \nGrowth: 691.6646 \nIdcw: 42.6334 \n\nFor more information on the historical NAV of Franklin India Bluechip Fund, please visit our website at www.franklintempletonindia.com. Is there anything else, I can help you with?</speak>"
}
},
{
"basicCard": {
"buttons": [
{
"title": "Historical NAV",
"openUrlAction": {
"url": "https://www.franklintempletonindia.com/investor/fund-details/fund-historicalnavs/-4614"
}
}
]
}
}
]
}
}
}
]
}
Here's how it looks on a smartphone, with one extra button that has no text.
Can anybody explain why that is and what the issue is here?

Web view page is forced to close when Google assistant finishes talking

We have a simple response and a basic card with a link-out button (see the code below). If the user clicks the button to open the web view page before Google Assistant finishes reading, the page is forced to close as soon as Google Assistant finishes reading, and the user has to click the button again to reopen it. It looks like the response gets reset when Google Assistant activates listening mode.
{
"expectUserResponse": true,
"expectedInputs": [
{
"possibleIntents": [
{
"intent": "actions.intent.TEXT"
}
],
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"ssml": "<speak>\r\n <audio src=\"…audio file location url…">some long text message\ r\n</speak>",
"displayText": " some long text message”
}
},
{
"basicCard": {
"title": "",
"subtitle": "",
"formattedText": "",
"buttons": [
{
"title": "title",
"openUrlAction": {
"url": "… url…",
"urlTypeHint": 0
}
}
]
}
}
]
}
}
}
],
"isInSandbox": false
}

How to provide a custom message for actions_intent_SIGN_IN?

I want to provide a custom question message for the sign-in intent.
For DATETIME we can provide a custom message:
{
"expectUserResponse": true,
"expectedInputs": [
{
"possibleIntents": [
{
"intent": "actions.intent.DATETIME",
"inputValueData": {
"#type": "type.googleapis.com/google.actions.v2.DateTimeValueSpec",
"dialogSpec": {
"requestDatetimeText": "When would you like to schedule the appointment?",
"requestDateText": "What day was that?",
"requestTimeText": "What time works for you?"
}
}
}
]
}
]
}
For the SIGN_IN intent there is no such option.
{
"expectUserResponse": true,
"expectedInputs": [
{
"possibleIntents": [
{
"intent": "actions.intent.SIGN_IN",
"inputValueData": {
"#type": "type.googleapis.com/google.actions.v2.SignInValueSpec"
}
}
]
}
]
}
You'll need an optContext attribute with the custom text you want used for (part of) the sign-in request. So it might be something like:
{
"expectUserResponse": true,
"expectedInputs": [
{
"possibleIntents": [
{
"intent": "actions.intent.SIGN_IN",
"inputValueData": {
"#type": "type.googleapis.com/google.actions.v2.SignInValueSpec",
"optContext": "In order to know who you are"
}
}
]
}
]
}
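If you're using the actions-on-google Node.js client library rather than building the JSON yourself, the equivalent is the SignIn helper, whose constructor argument is sent as optContext. A minimal sketch (the client id and intent names are placeholders; the second intent handles the actions_intent_SIGN_IN event):
const { dialogflow, SignIn } = require('actions-on-google');
const app = dialogflow({ clientId: 'your-oauth-client-id' }); // placeholder

// Ask the user to sign in; the string below becomes optContext.
app.intent('Ask For Sign In', (conv) => {
  conv.ask(new SignIn('In order to know who you are'));
});

// Handle the result of the sign-in helper.
app.intent('Get Sign In', (conv, params, signin) => {
  if (signin.status === 'OK') {
    conv.ask('Thanks for signing in. What would you like to do next?');
  } else {
    conv.ask('You need to sign in before using this feature.');
  }
});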

Is it possible to play an audio file or stream?

Is it possible to play an audio file or stream using the actions-on-google-nodejs library?
Using SSML, you can return an audio clip of up to 120 seconds.
<speak>
<audio src="https://actions.google.com/sounds/v1/animals/cat_purr_close.ogg">
<desc>a cat purring</desc>
PURR (sound didn't load)
</audio>
</speak>
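With the actions-on-google Node.js library you can pass that SSML string directly to the response. A minimal sketch, assuming the v2 conv-based Dialogflow API (the intent name is a placeholder):
const { dialogflow } = require('actions-on-google');
const app = dialogflow();

app.intent('Play Sound', (conv) => {
  // conv.ask and conv.close accept SSML strings directly.
  conv.close(`<speak>
  <audio src="https://actions.google.com/sounds/v1/animals/cat_purr_close.ogg">
    <desc>a cat purring</desc>
    PURR (sound didn't load)
  </audio>
</speak>`);
});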
Edit:
If you want to play an audio (MP3) file longer than 120 seconds, you need to use a Media Response:
const { MediaObject, Image } = require('actions-on-google');

// Inside your intent handler:
if (!conv.surface.capabilities.has('actions.capability.MEDIA_RESPONSE_AUDIO')) {
  conv.ask('Sorry, this device does not support audio playback.');
  return;
}
conv.ask(new MediaObject({
  name: 'Jazz in Paris',
  url: 'https://storage.googleapis.com/automotive-media/Jazz_In_Paris.mp3',
  description: 'A funky Jazz tune',
  icon: new Image({
    url: 'https://storage.googleapis.com/automotive-media/album_art.jpg',
    alt: 'Ocean view',
  }),
}));
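One related restriction, as far as I recall from the media response docs: if the conversation is expected to continue (i.e. this is not a final response), the media response has to be accompanied by suggestion chips, e.g. continuing the snippet above:
const { Suggestions } = require('actions-on-google');

// Suggestion chips are required when the media response is not the final response.
conv.ask(new Suggestions(['Play another', 'Stop']));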
To add one more point to Nick's answer, you can also build a Media Response, which will allow you to play a long audio file (I'm currently playing a 50-minute album in my app).
You can find it in Google's docs here.
A short example in Node.js could be:
const richResponse = app.buildRichResponse()
.addSimpleResponse("Here's song one.")
.addMediaResponse(app.buildMediaResponse()
.addMediaObjects([
app.buildMediaObject("Song One", "https://....mp3")
.setDescription("Song One with description and large image.") // Optional
.setImage("https://....jpg", app.Media.ImageType.LARGE)
// Optional. Use app.Media.ImageType.ICON if displaying icon.
])
)
.addSuggestions(["other songs"]);
And then you can just do
app.ask(richResponse)
UPDATE:
As per a comment request, here is the JSON response sent by my app for a mediaResponse:
{
"conversationToken": "[\"_actions_on_google\"]",
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Here is my favorite album."
}
},
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": my_name,
"description": my_descr,
"largeImage": {
"url": my_url
},
"contentUrl": my_contentURL
}
]
}
}
],
"suggestions": [
{
"title": my_suggestion
}
]
}
},
"possibleIntents": [
{
"intent": "assistant.intent.action.TEXT"
}
]
}
],
"responseMetadata": {
"status": {
"message": "Success (200)"
},
"queryMatchInfo": {
"queryMatched": true,
"intent": "0a3c14f8-87ca-47e7-a211-4e0a8968e3c5",
"parameterNames": [
my_param_name
]
}
},
"userStorage": "{\"data\":{}}"
}

Error "RECIPIENTS_NOT_PROVIDED" when recipients provided

I'm getting the error "RECIPIENTS_NOT_PROVIDED" when using the code block below; from what I can see in the REST API Explorer, my code is correct. Is there anything I have missed?
I'm sure it's something basic, but I just can't see it ...
{
"documents":[
{
"documentBase64":"<Base64BytesHere>",
"documentId":"1",
"fileExtension":"pdf",
"name":"Doc1.pdf",
"order":"1"
},
{
"documentBase64":"<Base64BytesHere>",
"documentId":"2",
"fileExtension":"pdf",
"name":"Doc2.pdf",
"order":"2"
},
{
"documentBase64":"<Base64BytesHere>",
"documentId":"3",
"fileExtension":"pdf",
"name":"Doc3.pdf",
"order":"3"
},
{
"documentBase64":"<Base64BytesHere>",
"documentId":"4",
"fileExtension":"pdf",
"name":"Doc4.pdf",
"order":"4"
}
],
"emailSubject":"Important announcement from us",
"carbonCopies":[
{
"recipientId":"1",
"email":"test3#test.com",
"name":"Test3"
}
],
"signers":[
{
"recipientId":"2",
"name":"Test1",
"email":"test1#test.com",
"tabs":{
"signHereTabs":[
{
"documentId":"1",
"pageNumber":"1",
"xPosition":"20",
"yPosition":"500"
},
{
"documentId":"2",
"pageNumber":"1",
"xPosition":"20",
"yPosition":"500"
},
{
"documentId":"4",
"pageNumber":"1",
"xPosition":"20",
"yPosition":"500"
}
]
}
},
{
"recipientId":"3",
"name":"Test2",
"email":"test2#test.com",
"tabs":{
"signHereTabs":[
{
"documentId":"1",
"pageNumber":"1",
"xPosition":"20",
"yPosition":"500"
},
{
"documentId":"4",
"pageNumber":"1",
"xPosition":"20",
"yPosition":"500"
}
]
}
}
],
"status":"sent"
}
Move carbonCopies and signers under the "recipients" property:
{
"documents": [
],
"recipients": {
"carbonCopies": [
],
"signers": [
]
},
"emailSubject": "Important announcement from us",
"status": "sent"
}
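Applied to the request above, the envelope definition would be restructured like this (documents and signHereTabs elided; this only illustrates the nesting):
{
  "documents": [ "...same as above..." ],
  "emailSubject": "Important announcement from us",
  "recipients": {
    "carbonCopies": [
      { "recipientId": "1", "email": "test3@test.com", "name": "Test3" }
    ],
    "signers": [
      { "recipientId": "2", "name": "Test1", "email": "test1@test.com", "tabs": { "signHereTabs": [ "..." ] } },
      { "recipientId": "3", "name": "Test2", "email": "test2@test.com", "tabs": { "signHereTabs": [ "..." ] } }
    ]
  },
  "status": "sent"
}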