I am trying to integrate a camera into my smart home with Google Assistant. I followed the CameraStream trait in Actions on Google. I have synced my camera, tried EXECUTE, and responded with a URL (cameraStreamAccessUrl).
Command: "show camera"
google home reply: "Sure, streaming camera" <-- but it is not streaming the video.
Command: "show camera on my phone"
google home reply: "Sorry, i don't know where to play the video. Please tell me the exact name of the screen"
My question is how can i stream it on my phone or on the app?
execution request google json:
{
"inputs": [
{
"intent": "action.devices.EXECUTE",
"payload": {
"commands": [
{
"devices": [
{
"customData": {
"barValue": true,
"bazValue": "lambtwirl",
"fooValue": 74
},
"id": "id"
}
],
"execution": [
{
"command": "action.devices.commands.GetCameraStream",
"params": {
"StreamToChromecast": false,
"SupportedStreamProtocols": [
"hls"
]
}
}
]
}
]
}
}
],
"requestId": "requestId"
}
My JSON response:
{
"requestId": "requestId",
"payload": {
"commands": [
{
"ids": [
"requestId"
],
"status": "SUCCESS",
"states": {
"cameraStreamAccessUrl": "https://url.url"
}
}
]
}
}
If you have a Chromecast in your home, the Google Home will forward the video stream to the Chromecast. I don't know where the stream would go if you don't have a display.
Saying "on my phone" is not a valid target from a Google Home. To get it on your phone, you'd need to use the Assistant on your phone or the Google Home App.
Related
1. We are debugging our smart home camera stream using a Google Nest Hub.
2. We have access to device SYNC and passed the validator. This is our device SYNC response:
{
"payload": {
"agentUserId": "b4ad4e18-ab90-4b0e-bc02-264da5bb6469",
"devices": [{
"traits": ["action.devices.traits.CameraStream"],
"name": {
"defaultNames": ["Imilab"],
"name": "camera1",
"nicknames": ["camera1"]
},
"attributes": {
"cameraStreamNeedAuthToken": false,
"cameraStreamSupportedProtocols": ["hls"],
"cameraStreamNeedDrmEncryption": false
},
"id": "gejiayu2",
"type": "action.devices.types.CAMERA",
"deviceInfo": {
"model": "a1znn6t1et8",
"manufacturer": "Imilab"
}
}]
},
"requestId": "8664974301718985362"
}
3. We provide the HLS address, which plays normally with ffplay or in an HTML page. This is our demo HLS URL: https://cdn.cnbj2.fds.api.mi-img.com/cloud-storage-test/test1.m3u8.
4. But we can't get the Google Nest Hub to play it. I recorded a debug video (see attachment). This is our camera stream response:
{
"payload": {
"commands": [{
"ids": ["gejiayu2"],
"status": "SUCCESS",
"states": {
"cameraStreamReceiverAppId": "",
"cameraStreamAuthToken": "",
"cameraStreamAccessUrl": "https://cdn.cnbj2.fds.api.mi-img.com/cloud-storage-test/test1.m3u8"
}
}]
},
"requestId": "1625829984244045201"
}
I have tested the Google Home and Google Home Mini, and neither is capable of playing HLS streams. Our radio station is in the TuneIn database, which supplies Google devices with radio streams; our Icecast streams work, but our HLS streams do not. So I'm sure your Google Nest Hub has the same inability to play HLS.
We have a simple response and a basic card with a link-out button (see the code below). If the user clicks the link and opens the web view page before Google Assistant finishes reading, the web view page is forced to close once Google Assistant finishes reading, and the user has to click the button again to reopen it. It looks like the response gets reset when Google Assistant activates listening mode.
{
"expectUserResponse": true,
"expectedInputs": [
{
"possibleIntents": [
{
"intent": "actions.intent.TEXT"
}
],
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"ssml": "<speak>\r\n <audio src=\"…audio file location url…">some long text message\ r\n</speak>",
"displayText": " some long text message”
}
},
{
"basicCard": {
"title": "",
"subtitle": "",
"formattedText": "",
"buttons": [
{
"title": "title",
"openUrlAction": {
"url": "… url…",
"urlTypeHint": 0
}
}
]
}
}
]
}
}
}
],
"isInSandbox": false
}
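For completeness, this is roughly how we build the response above; a sketch assuming the actions-on-google v2 client with the Actions SDK, where the intent name, audio URL, and button URL are placeholders:

const {actionssdk, SimpleResponse, BasicCard, Button} = require('actions-on-google');

const app = actionssdk();

app.intent('actions.intent.MAIN', (conv) => {
  // Long spoken message (SSML audio) with display text.
  conv.ask(new SimpleResponse({
    speech: '<speak><audio src="https://example.com/audio.mp3">some long text message</audio></speak>',
    text: 'some long text message',
  }));
  // Basic card with the link-out button that opens the web view.
  conv.ask(new BasicCard({
    title: 'title',
    buttons: new Button({
      title: 'title',
      url: 'https://example.com/webview',
    }),
  }));
});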
Sometimes Google Assistant does not answer me even though I receive a correct response from the fulfillment. This happens only when I use a voice command; with the keyboard it always works fine.
What I receive instead of the response
It's just 'thinking'.
After using conv.close("You've punched-in into demo as Jack"); in the Dialogflow history I can see the following response:
{
"queryText": "Jack",
"fulfillmentMessages": [
{
"text": {
"text": [
"[{\"type\":0,\"speech\":\"\"}]"
]
}
}
],
"webhookPayload": {
"google": {
"userStorage": "{\"data\":{}}",
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "You've punched-in into demo as Jack"
}
}
]
},
"expectUserResponse": false
}
},
"outputContexts": [
...
],
"intent": {
"id": "96f93154-0ae4-4bb4-91c3-c1b796d7cda3",
"displayName": "punch-in"
},
"intentDetectionConfidence": 1,
"languageCode": "en"
}
Has anyone experienced such an issue?
Noticed on Galaxy S7, Android 6.0.1.
actions-on-google v.2.2.0
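For reference, the fulfillment handler behind this is essentially the following (simplified; the user name is hard-coded for the demo):

const {dialogflow} = require('actions-on-google');

const app = dialogflow();

// Closes the conversation with the punch-in confirmation.
app.intent('punch-in', (conv) => {
  conv.close("You've punched-in into demo as Jack");
});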
That mostly happens to me when the internet connection is not good. With voice there is an extra layer of voice-to-text conversion; the same latency issue might be causing the problem in your case.
The Google Assistant team resolved the issues I reported to them, and after that the issue no longer reproduces.
I decided to upgrade my Google Assistant action to use the "Dialogflow V2 API", and my webhook returns an object like this:
{
"fulfillmentText": "Testing",
"fulfillmentMessages": [
{
"text": {
"text": [
"fulfillmentMessages text attribute"
]
}
}
],
"payload": {
"google": {
"richResponse": {
"items": [
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "mediaResponse name",
"description": "mediaResponse description",
"largeImage": {
"url": "https://.../640x480.jpg"
},
"contentUrl": "https://.../20183832714.mp3"
}
]
},
"simpleResponse": {
"textToSpeech": "simpleResponse: testing",
"ssml": "simpleResponse: ssml",
"displayText": "simpleResponse displayText"
}
}
]
}
}
},
"source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object it works just fine and the Assistant speaks the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete it and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action's Dialogflow console; this is a screenshot of that setting.
Here is a screenshot of my action's Integrations -> Google Assistant.
Thanks Allen. Yes, I do have "expectUserResponse": false. I added the suggestions object you recommended, but unfortunately nothing changed; I am still getting this error.
Simulator debug tab details
First of all - this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2 - they are two completely different creatures. If your project was using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug output there is probably a block that says:
{
"name": "MalformedResponse",
"debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
"subDebugEntryList": []
}
So something more like this should work:
{
"payload": {
"google": {
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "simpleResponse: testing",
"displayText": "simpleResponse displayText"
},
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "mediaResponse name",
"description": "mediaResponse description",
"largeImage": {
"url": "https://.../640x480.jpg"
},
"contentUrl": "https://.../20183832714.mp3"
}
]
}
}
]
"suggestions": [
{
"title": "This"
},
{
"title": "That"
}
]
}
}
},
"source": "webhook-play-sample"
}
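If you are building the response with the actions-on-google v2 client library rather than raw JSON, the equivalent is roughly this (a sketch; the intent name and the image/audio URLs are placeholders):

const {dialogflow, MediaObject, Image, Suggestions} = require('actions-on-google');

const app = dialogflow();

app.intent('play-sample', (conv) => {
  // A simple response has to come before the media response.
  conv.ask('simpleResponse: testing');
  conv.ask(new MediaObject({
    name: 'mediaResponse name',
    description: 'mediaResponse description',
    url: 'https://.../20183832714.mp3',
    image: new Image({
      url: 'https://.../640x480.jpg',
      alt: 'mediaResponse image',
    }),
  }));
  // Suggestion chips are required when a media response is used.
  conv.ask(new Suggestions('This', 'That'));
});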
I have the problem that the response and request tabs of the Google Home/Assistant Actions console simulator are not working, at least when I am using a custom action.json.
I am not sure whether everyone using the Actions SDK with a custom action.json has this problem or only some, or whether it happens only because something in my action.json is not configured 100% correctly.
here is the action.json:
{
"actions": [
{
"description": "Default Welcome Intent",
"name": "MAIN",
"fulfillment": {
"conversationName": "testApp"
},
"intent": {
"name": "actions.intent.MAIN",
"trigger": {
"queryPatterns": [
"open special manager",
"open s p m"
]
}
}
}
],
"types": [],
"conversations": {
"testApp": {
"name": "testApp",
"url": "https://572e66a2.ngrok.io/",
"fulfillmentApiVersion": 2,
"in_dialog_intents": [
{
"name": "actions.intent.NO_INPUT"
},
{
"name": "actions.intent.SIGN_IN"
}
]
}
}
}
Here is a picture of the request:
As you can see, the request tab only shows the dummy content even though the chat is working.
The response tab is completely empty, but the messages and the voice work correctly, also on my Google Home.
Does anybody have an idea? I will of course add more debug information if necessary. Could it be trouble with the request or response messages from my server?
But the messages themselves are actually working...
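For reference, the fulfillment behind the testApp conversation is essentially this (simplified sketch; the real handlers do more, but the wiring is the same):

const {actionssdk} = require('actions-on-google');
const express = require('express');
const bodyParser = require('body-parser');

const app = actionssdk();

// Welcome handler for the MAIN intent declared in action.json.
app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Welcome to special manager. What do you want to do?');
});

// Expose the webhook at the URL configured in action.json (the ngrok tunnel).
express().use(bodyParser.json(), app).listen(3000);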