OnOff trait is not working with Raspberry Pi and Google Assistant

I am trying to set up Google Assistant on a Raspberry Pi 3B+ and I am at this step here:
the OnOff trait to switch the LED.
Query: Turn on
INFO:root:Recording audio request.
INFO:root:Transcript of user request: "turn".
INFO:root:Transcript of user request: "turn on".
INFO:root:Transcript of user request: "turn on".
INFO:root:Transcript of user request: "turn on".
INFO:root:End of audio request detected.
INFO:root:Stopping recording.
INFO:root:Transcript of user request: "turn on".
INFO:root:Playing assistant response.
INFO:root:Finished playing assistant response.
Responses
Sorry Power controls is not yet supported
I can not do that yet
I’m sorry, I don’t quite understand
I also found these topics and tried them:
How do you make your project available so you can use the OnOff trait with Google Assistant on Raspberry Pi?
Google Actions return "Sorry, power control is not yet supported" (this is quite old, and now only SERVICE and pushtotalk are available)
I also looked at the troubleshooting guide: https://developers.google.com/assistant/sdk/guides/service/troubleshooting#traits
The troubleshooting guide and the second topic say that you may need to register the device manually with this command:
googlesamples-assistant-devicetool register-device [OPTIONS]
I tried that, but no variation of the command works.
The help output and the Google manual also show different syntax/usage,
and it seems the order of the options matters.
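For reference, a sketch of the ordering the devicetool's --help implies, with the global --project-id option placed before the subcommand (the project, model and device IDs here are placeholders):
googlesamples-assistant-devicetool --project-id my-dev-project register-model \
    --manufacturer "Assistant SDK developer" --product-name "Assistant SDK light" \
    --type LIGHT --trait action.devices.traits.OnOff --model my-led-model
googlesamples-assistant-devicetool --project-id my-dev-project register-device \
    --device my-led-device --model my-led-model --client-type SERVICE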
Thank you very much

Related

How to wake and give a command to a Google Assistant device via API?

I would like to issue a command to a Google device remotely using Google's API.
I have multiple Google devices (a Nest Mini and a Nest Hub), and I would like to trigger a specific one by sending a request and issuing a command from code.
e.g.:
Send a request to the Office's Nest Mini equivalent to "OK Google, <something>"
or
Send a request to the Kitchen's Nest Hub equivalent to "OK Google, start a 30 min timer"
or
Send a request to the Living Room's Nest Hub equivalent to "OK Google, initiate custom action"
After the command is issued programmatically, the device would then behave as if the command had been given in person.
I am new to Google's environment, and I have not been able to find any docs that explain how to do this.

Give Google Assistant device commands programmatically

Is it possible to give Google Assistant commands programmatically? For example, I'd like to be able to send a command as text "turn on the fan" and have GA react as if that was the spoken command. I would also accept sending a JSON request in whatever format needed (with device IDs or whatever the API needs).
My situation is I have a ceiling fan that is controlled by Google Assistant. I want to be able to control it programmatically. For example, some event happens and my code wants to turn the fan on. Is there any way my code can tell GA to turn on the fan?
I tried using the Google Assistant SDK. I can send it text like "what time is it?" and get back text and audio, e.g. "It is 11:00 AM". However, I have a test device called "washer", and if I send the text "is the washer running?" I get back "Sorry, I didn't understand". If I speak the words into my phone, I get back "The washer is running".
Why can't the GA SDK interact with my device? The credentials I give to the GA SDK are the same I use for my SmartHomeApp that defines the "washer" device.
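(For reference, the SDK's text-query sample is typically invoked like this, with placeholder model and device IDs; this seems to be the path described above:)
googlesamples-assistant-textinput --device-model-id my-model-id --device-id my-device-id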
To do this, you can set up a virtual Assistant device and then send commands to it.
Check out Assistant Relay, which is a service that sets up a virtual Assistant device and exposes a REST API so you can send text commands to it, as if they were spoken.
Per the Documentation:
Simply send Assistant Relay any query you would normally send Google Assistant, and Assistant Relay will call the Assistant SDK and execute your command.
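For example, a request to a running relay could look roughly like this (the host, port and configured user name are assumptions; check the Assistant Relay documentation for your setup):
curl -X POST http://relay-host:3000/assistant \
    -H "Content-Type: application/json" \
    -d '{"command": "turn on the fan", "user": "john"}'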
As for the problem you are having with the Google Assistant SDK, I believe what you are trying to achieve is only possible with a device, be it physical or virtual, and not by using the SDK directly.
There are a lot of firewall and security issues in allowing each smart device to connect to the Internet. To alleviate this problem, Google's design methodology uses a fulfillment device as a bridge to connect to the device locally from one of their devices.
You are locally, on your smartphone, hooking into Google Assistant.
The phone is the fulfillment glue for the "washer" device.
According to this page:
Google Home or Google Nest device is required to perform the fulfillment.
Due to the portable nature of cell phones, it does not make sense to allow one to be used as the fulfillment device remotely, hence the local hook.

Does GVA (Google Voice Assistant) support RTC and doorbell-related functionality?

I want to connect our doorbell product to Google Voice Assistant, but I didn't find a Doorbell type in the Google Assistant docs: https://developers.google.com/assistant/smarthome/guides
I wonder whether GVA supports RTC and doorbell-related functionality such as doorbell notifications.
For example:
When someone rings the doorbell, the ringing message is delivered to GVA automatically, and the device with GVA plays "Someone is ringing the doorbell".
The device with GVA can display a live view of the doorbell when the user says "show me the doorbell", and the user can talk with the guest outside via real-time communication (RTC).
This is not currently supported by the platform, but you can subscribe to the doorbell type request on the public tracker to be notified once it is released.
In the meantime, you could implement the CameraStream trait so that users can ask to "Show the front door", though this would need to be proactively requested by the user when they hear the doorbell ring.
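A rough sketch of how such a camera device could be declared in a SYNC response (the id and name are placeholders; check the current CameraStream documentation for the exact attribute set):
{
  "id": "front-doorbell-camera",
  "type": "action.devices.types.CAMERA",
  "traits": ["action.devices.traits.CameraStream"],
  "name": { "name": "Front door" },
  "willReportState": false,
  "attributes": {
    "cameraStreamSupportedProtocols": ["hls", "dash"],
    "cameraStreamNeedAuthToken": true
  }
}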

What could be the reason for a Google Action with an external endpoint (API) working in the Actions Console Simulator, but not as deployed?

I am new to the whole creation process of Google Actions.
I created an Action with the new Actions Console.
It has a fulfillment endpoint pointing to my server (for example: www.mypage.com/api).
For testing purposes it has no authentication, so it generates a public API response.
Said API generates a simple JSON response based on the event handler that is sent.
In short: the Action's onEnter sends a handler to the API, the API queries a SQL database and sends the response back to my Google Action, which then "speaks" the SQL result.
The results with the Actions Console Simulator are:
Testing with the "Smart Display" device: everything works.
Testing with the "Speaker" device (e.g. Google Home): everything works.
Testing with "Phone": main intent is invoked and text is shown but does not get spoken.
So I tested the command "Hey Google, talk to the unicorn app" directly on my smartphone.
Here **everything works fine**, as said smartphone uses the same e-mail as the one on my Actions Console account: it recognizes the main invocation command, and when I ask by voice for the data, the data is received from my server and spoken.
So something is wrong with the "Phone" device in the simulator. Other users have confirmed to me that they often have trouble with the simulator functioning correctly.
I then deployed my Action.
It was reviewed and approved.
A few seconds later I received an auto-email saying that there were too many errors with my app, asking me to check its health status. I did so, and in the Health tab I can see that it has an error, but it does not show me what the error is.
Then I contacted an Actions-on-Google expert.
They pointed me a long way in the right direction but could not dig deeper into the problem, as my connection (the endpoint API) is outside of their servers.
So I ended up with their tip to check the Google Cloud Logging console.
As said Logging console is also new to me, I learned how to query my results, but:
How can I query for the so-called "is_health_check" flag?
I am asking this because the Google expert recommended searching for said flag, but I do not know how to query it.
Sorry for this ultra-long entry, but I am trying to be as transparent as possible, as I have been trying this out for several days now.
Thanks in advance for your time!
The error is simple once you know how Google handles external webhooks. Thanks to the help of two Actions-on-Google experts, I was informed that Google pings your external webhook from time to time.
As soon as they get an error as the result of said ping, the Action is deactivated in the Assistant until a new ping response shows that everything is fine again.
My problem was that after deploying the Action, and while it was under review, I continued to work on the code on my server. While I was coding, the Google server pinged it and received an error code.
My fault, but at least I learned about Google pinging your Action!
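As for the is_health_check part of the question: assuming the Action's conversation logs end up in your project's Cloud Logging, a bare quoted term in the Logs Explorer query box performs a text search across all fields, so a query like the following should surface the health-check entries:
"is_health_check"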

Is it possible to retrieve the configured rooms/locations in the fulfillment service?

I have been experimenting with Google Smart Home, and the protocol flow looks very clear to me. In summary:
action.devices.SYNC - sent by Google Smart Home to the fulfillment service to discover the available devices
action.devices.EXECUTE - sent by Google Smart Home to the fulfillment service to execute a certain action on a device
On the smartphone/tablet, the customer can place a device in a certain location. This allows them to issue commands such as "Turn everything in my office off". Internally, Google Smart Home knows which devices are located in the office and subsequently sends an action.devices.EXECUTE request for each device in the office, as explained above.
I am now wondering about the following: is it possible to retrieve the configured locations/rooms in the fulfillment service also? Is this information exposed and available to retrieve?
It is not possible to receive information about a user's home layout through the Home Graph API. When the user gives a command like "Turn everything in my office off", you may get several OnOff commands in your fulfillment, although you will have no way of knowing the original query.
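To illustrate (the device IDs are made up): the EXECUTE request your fulfillment receives for "Turn everything in my office off" simply lists the affected devices under an OnOff command, with no reference to the room:
{
  "requestId": "...",
  "inputs": [{
    "intent": "action.devices.EXECUTE",
    "payload": {
      "commands": [{
        "devices": [{ "id": "lamp-1" }, { "id": "lamp-2" }],
        "execution": [{
          "command": "action.devices.commands.OnOff",
          "params": { "on": false }
        }]
      }]
    }
  }]
}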