Google Assistant on Raspberry Pi. Can I use voice commands with Home Assistant? - raspberry-pi

How can I use Google Assistant on a Raspberry Pi 3 with Home Assistant (or any other home automation system)? I can't figure out how to use voice commands to manage lights, the TV, the garage door, etc.
Can someone help?
Thank you.

Do you already have Google Assistant configured on a Raspberry Pi, or are you looking for how to do this? Or is the question about creating skills?
If you want to configure the Pi, go to the Google Assistant SDK site: https://developers.google.com/assistant/sdk/prototype/getting-started-pi-python/
If it's the latter, you may want to check out Google Actions:
https://developers.google.com/actions/
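For reference, once the SDK is set up, a minimal hotword loop with the google-assistant-library looks roughly like the sketch below. The credentials path and device model ID are placeholders from your own SDK setup:

    import json

    import google.oauth2.credentials
    from google.assistant.library import Assistant
    from google.assistant.library.event import EventType

    # Placeholder path: the OAuth credentials saved by google-oauthlib-tool
    # during the SDK setup.
    with open('/home/pi/.config/google-oauthlib-tool/credentials.json') as f:
        credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))

    # 'my-device-model-id' is a placeholder for the model you registered.
    with Assistant(credentials, 'my-device-model-id') as assistant:
        for event in assistant.start():
            if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
                print('Listening...')
            elif event.type == EventType.ON_CONVERSATION_TURN_FINISHED:
                print('Done.')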

If you are a bit into IoT, you can use an ESP8266 with Google Assistant to control almost anything in your house: lights, pumps, the garage door, and so on. You only need to replace the LEDs with relays.
You can find the complete story here:
https://lucstechblog.blogspot.nl/2017/05/google-home-and-esp8266.html
Luc

You have to connect your Home Assistant project to Google Assistant. To do this you need the gactions CLI: create a project at https://console.actions.google.com, link your account with the project, and edit your configuration.yaml as follows:

    google_assistant:
      project_id: someproject-2d0b8
      client_id: [long URL safe random string]
      access_token: [a different long URL safe random string]
      agent_user_id: [a string to identify user]
      api_key: [a Homegraph API Key generated for the Google Actions project]
      exposed_domains:
        - switch
        - light
        - group
Finally, open the Google Assistant app on your phone, go to Settings > Home Control, tap + and add the app you created in Google Actions; you can then interact with your devices through the Assistant.
The setup process is documented here:
https://home-assistant.io/components/google_assistant/
It works in my case.
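As a side note on the api_key above: Home Assistant uses it to call the Home Graph requestSync endpoint so that Google refreshes your exposed devices. If you ever need to trigger a resync by hand, something roughly like this should work (an untested sketch; the key and agentUserId are the placeholders from the config above):

    import requests

    # Placeholders: the same values as api_key / agent_user_id above.
    API_KEY = 'your-homegraph-api-key'
    AGENT_USER_ID = 'a-string-to-identify-user'

    resp = requests.post(
        'https://homegraph.googleapis.com/v1/devices:requestSync',
        params={'key': API_KEY},
        json={'agentUserId': AGENT_USER_ID},
    )
    print(resp.status_code, resp.text)  # expect 200 with an empty JSON body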

Related

give Google Assistant device commands programmatically

Is it possible to give Google Assistant commands programmatically? For example, I'd like to be able to send a command as text "turn on the fan" and have GA react as if that was the spoken command. I would also accept sending a JSON request in whatever format needed (with device IDs or whatever the API needs).
My situation is I have a ceiling fan that is controlled by Google Assistant. I want to be able to control it programmatically. For example, some event happens and my code wants to turn the fan on. Is there any way my code can tell GA to turn on the fan?
I tried using the Google Assistant SDK. I can send it text like "what time is it?" and get back text and audio, e.g. "It is 11:00 AM". However, I have a test device called "washer", and if I send the text "is the washer running?" I get back "Sorry, I didn't understand". If I speak the words into my phone, I get back "The washer is running".
Why can't the GA SDK interact with my device? The credentials I give to the GA SDK are the same I use for my SmartHomeApp that defines the "washer" device.
To do this, you can set up a virtual Assistant device and then send commands to it.
Check out Assistant Relay, which is a service that sets up a virtual Assistant device and exposes a REST API so you can send text commands to it, as if they were spoken.
Per the documentation:
"Simply send Assistant Relay any query you would normally send Google Assistant, and Assistant Relay will call the Assistant SDK and execute your command."
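A minimal client sketch, assuming a Relay instance at 192.168.1.10:3000 with a configured user named 'greg' (both placeholders; check the Relay docs for the exact endpoint in your version):

    import requests

    # Host, port, and user are placeholders from your own Relay setup.
    resp = requests.post(
        'http://192.168.1.10:3000/assistant',
        json={'command': 'turn on the fan', 'user': 'greg'},
    )
    print(resp.json())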
As for the problem you are having with the Google Assistant SDK, I believe what you are trying to achieve is only possible with a device, be it physical or virtual, and not by using the SDK directly.
There are a lot of firewall and security issues with allowing each smart device to connect to the Internet directly. To alleviate this problem, Google's design uses a fulfillment device as a bridge to connect to the device locally from one of their own devices.
You are locally, on your smartphone, hooking into Google Assistant.
The phone is the fulfillment glue for the "washer" device.
According to this page:
"Google Home or Google Nest device is required to perform the fulfillment."
Due to the portable nature of cell phones, it does not make sense to allow one to be used as the fulfillment device remotely, hence the local hook.

Smart Home Actions for non-commercial project

I created a free service that lets users control a French set-top box (which provides different services like TV, media playback, Netflix, …).
This set-top box is a third-party product for me: I do not own the hardware, but because the manufacturer provides an API, I have been able to build an end-to-end service that controls the box. The box provider does not publish any service on Google to control the box, and they do not plan to in the future.
I tested everything with my own Google Home account and everything works fine. I would now like to deploy/publish my service to all my users on Google Home. While filling in the publication steps, I am asked to complete the Smart Home Certification form, but the top of the form says: "if your action is non-commercial (personal/hobby project) or you are implementing only the SCENE trait, do not submit the form."
My action is non-commercial (it is a free service) and I maintain it in my personal time (hobby project), so I am not supposed to submit the form. But if I don't submit it, my service cannot be published/deployed?!
Is it possible to publish a Smart Home Action without being a company that sells products or pays a developer to maintain the service?
For your information, I published an Alexa skill for this service a year ago and it works very well. I was waiting for Google to release the Channel trait in French before launching. Right now I have to ask my users to create IFTTT applets to make the service work with Google, which is neither optimal nor pleasant…
I tried to reach the ha-certification Google team but got no answer after two weeks… So maybe someone in the community has already run into the same situation!
Thanks
After sending emails around, I finally got an answer from a Google employee:
"due to our new policies, we are now not launching any partners who are not tied to commercial products"

Is it possible to retrieve the configured rooms/locations in the fulfillment service?

I have been experimenting with Google Smart Home, and the protocol flow seems very clear to me. In summary:
action.devices.SYNC - sent by Google Smart Home to the fulfillment service to discover the available devices
action.devices.EXECUTE - sent by Google Smart Home to the fulfillment service to execute a certain action on a device
On the smartphone/tablet, the customer can place a device in a certain location. This allows them to ask things such as "Turn everything in my office off." Internally, Google Smart Home knows which devices are located in the office and subsequently sends an action.devices.EXECUTE request for each device in the office, as explained above.
I am now wondering: is it possible for the fulfillment service to also retrieve the configured locations/rooms? Is this information exposed and available to retrieve?
It is not possible to receive information about a user's home layout through the Home Graph API. When the user gives a command like "Turn everything in my office off", you may get several OnOff commands in your fulfillment, although you will have no way of knowing the original query.
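To make that concrete, here is a sketch of a SYNC response payload (IDs and names are placeholders). As far as I know, the roomHint field is effectively write-only: you can suggest an initial room for a device, but the room the user finally assigns in the Google Home app is never reported back to your fulfillment:

    # Sketch of a SYNC response body; IDs and names are placeholders.
    sync_response = {
        'requestId': '12345',                 # echo the requestId from the SYNC request
        'payload': {
            'agentUserId': 'user-123',
            'devices': [{
                'id': 'fan-1',
                'type': 'action.devices.types.FAN',
                'traits': ['action.devices.traits.OnOff'],
                'name': {'name': 'Ceiling fan'},
                'willReportState': False,
                'roomHint': 'Office',         # a suggestion only; never sent back to you
            }],
        },
    }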

Get text from voice command with Alexa echo dot

I am trying to create a new skill with the Alexa Skills Kit and node-red. I created a web service and linked Alexa to it. Now when I call the service via a voice command on an Alexa Echo Dot, I don't get the raw text (a transcription of my voice); I only get the intents and slots...
Has anyone tried this? Did I miss something while configuring the Alexa skill?
Amazon does not make the whole transcript available, only the parts that match an intent and the extracted slots.
You used to be able to get this data, but it was deprecated for US users and never made available to users in other regions.
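You can see this in the request your skill receives. The handler below (a sketch; names are placeholders) can read the matched intent name and the slot values, but there is no field anywhere in the payload carrying the raw transcript:

    def handle_alexa_request(event):
        """Extract what Alexa actually sends: the intent name and slot
        values. There is no raw-transcript field in the request payload."""
        request = event['request']
        if request['type'] == 'IntentRequest':
            intent_name = request['intent']['name']
            slots = {name: slot.get('value')
                     for name, slot in request['intent'].get('slots', {}).items()}
            return intent_name, slots
        return None, {}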

Sorry, this action is not available in simulation

My test invocation name is "Mrs Tang", so I enter "Talk to Mrs Tang", but it responds "Sorry, this action is not available in simulation"...
Does anybody know how I can resolve this error?
According to the doc:
"Turn on the Web & App Activity, Device Information, and Voice & Audio Activity permissions on the Activity controls page for your Google account. You need to do this to use the Actions Simulator, which lets you test your actions on the web without a hardware device."
I also had to do what Jeremy Gordon suggested: add a second Google account in the GCP IAM console with a viewer permission on the action, and then log in with this second account in an incognito window for the web simulator to work.
I had a related problem (I could test with my main developer account, but not with my test credentials). I eventually got it working with the non-primary account.
The missing link for me was that when viewing the simulator I was actually signed in to two accounts: my primary Google account (the developer account, shown in the upper right corner of the main frame) and the account I authorized when 'starting' the simulator (its email address shows up in the simulator frame), which was my test account. The test account repeatedly gave me the "Sorry, this action is not supported in simulation" message until I:
1) Added the test account as a Conversation API Viewer & Client in the GCP IAM console
2) Visited the 'create link' (the one that comes up when you click share) in an incognito window, and signed in to the secondary account there, so that I was signed in to only one account in that window.
After that, invocations connected to the app.
Make sure you are logged in to the same account you used to deploy the test action, and that the deployment was done within the past half hour or so. If you have not set all the information in the Actions on Google console, you may need to use the invocation phrase "Talk to my test app".
I think sometimes I run into the same error. I get past it by toggling the Active switch off and on.
I encountered the same problem. You must be logged in via the secondary Google account: log out of the account and log in with the account that is paired with api.ai.
I got this to work by saying "talk to my test app" or typing it into the simulator prompt; that triggered my app to start in the simulator.
I had the same problem. I needed to set the location first (it defaults to Google headquarters) since I am in another region (Germany).
Then go on with "Mit meiner Test-App sprechen" ("Talk to my test app"), or whatever the phrase is in your language!
I did not get this message on my invocation, but on my second input: "Sorry, this action is not available for your app."
It turned out the simulator had left the conversation right after the invocation (and it did mention that in the small print).
This happened because I returned a FinalResponse for the invocation, and a final response is pretty final: it terminates your conversation.
So after a FinalResponse you can only get back into your action/conversation via a new invocation or a deep link. If you want to suggest follow-up questions/inputs, you should return ExpectedInputs instead, as sketched below.
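For illustration, with the raw conversation webhook the difference comes down to the expectUserResponse flag. A rough sketch of the two response shapes (the prompt texts are placeholders):

    # Ends the conversation; the simulator leaves right after this.
    final = {
        'expectUserResponse': False,
        'finalResponse': {'richResponse': {'items': [
            {'simpleResponse': {'textToSpeech': 'Goodbye!'}}]}},
    }

    # Keeps the conversation open and waits for the next user input.
    ongoing = {
        'expectUserResponse': True,
        'expectedInputs': [{
            'inputPrompt': {'richInitialPrompt': {'items': [
                {'simpleResponse': {'textToSpeech': 'What would you like next?'}}]}},
            'possibleIntents': [{'intent': 'actions.intent.TEXT'}],
        }],
    }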
You might need to turn on Web & App Activity to let group members use some Google Assistant features (if you are using an organizational account):
https://support.google.com/assistant/answer/7219584?hl=en
If you are using an organization's Google account, there might be an access issue, so use your own personal Gmail account instead.
Your organization might not have given you access. Use your personal Gmail account and follow the docs; you will be able to create your agent/actions and test them in the simulator as well as on an Android device.
When testing a Google Action, you need to set the location to the country you selected while developing or submitting the action.
By default, US is selected for testing, but if your action targets one particular country only, you need to select that country in the simulator's location field.